Simulation-Based Design Optimization
In keeping with the ICME philosophy, design optimization can be applied to models and materials across the different length scales.
General Design Optimization
The discipline of optimization deals with finding the maxima and minima of functions subject to constraints.
Design variables: A design variable is a specification that is controllable by the designer (e.g., thickness, material, etc.) and is often bounded by maximum and minimum values. Sometimes these bounds can be treated as constraints.
Constraints: A constraint is a condition that must be satisfied for the design to be feasible. Constraints can reflect physical laws, resource limitations, user requirements, or bounds on the validity of the analysis models. They can be handled explicitly by the solution algorithm or incorporated into the objective using Lagrange multipliers.
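As a minimal sketch of folding a constraint into the objective, the quadratic penalty approach below replaces the explicit constraint with a penalized objective (a common practical alternative to Lagrange multipliers). The objective, constraint, and penalty weight are all illustrative choices, not from any specific tool.

```python
# Hypothetical example: minimize (x - 3)^2 subject to x <= 2,
# by adding a quadratic penalty for constraint violation.

def objective(x):
    return (x - 3.0) ** 2          # unconstrained minimum at x = 3

def constraint(x):
    return x - 2.0                 # feasible when x - 2 <= 0, i.e. x <= 2

def penalized(x, mu=100.0):
    violation = max(0.0, constraint(x))
    return objective(x) + mu * violation ** 2

# Coarse scan of the penalized objective: the constrained minimum is near x = 2.
xs = [i / 100.0 for i in range(0, 500)]
best = min(xs, key=penalized)
```

With a large penalty weight, the scan settles just above the constraint boundary; increasing `mu` pushes the solution closer to x = 2.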
Objectives: An objective is a numerical value or function that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization, such as the calculation of a Pareto frontier.
Models: The designer must also choose models that relate the design variables to the constraints and the objectives. These may include finite element analyses, reduced-order metamodels, etc.
Reliability: The probability that a component will perform its required functions under stated conditions for a specified period of time.
Gradient-based methods: Adjoint equation, Newton's method, Steepest descent, Conjugate gradient, and Sequential quadratic programming
Pattern search methods: Hooke-Jeeves pattern search and Nelder-Mead method
Population-based methods: Genetic algorithm, Memetic algorithm, Particle swarm optimization, Ant colony optimization, and Harmony search
Stochastic and other methods: Random search, Grid search, Simulated annealing, Direct search, and IOSO (Indirect Optimization based on Self-Organization)
Convergence of Pareto Frontier
It is relatively simple to determine an optimal solution for single-objective methods (the solution with the lowest error function). However, for multiple objectives, we must evaluate solutions on a "Pareto frontier." A solution lies on the Pareto frontier when no objective can be further improved without one or more of the other objectives suffering as a result. Once a set of solutions has converged to the Pareto frontier, further testing is required in order to determine which candidate (e.g., a candidate force field) is optimal for the problems of interest. Be aware that searches with a limited number of parameters might "cram" a lot of important physics into a few parameters.
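The definition above can be made concrete with a short sketch: given a finite set of candidate designs with two objectives (both minimized), the Pareto frontier is the subset of points not dominated by any other point. The candidate values below are illustrative.

```python
# Sketch: extracting the Pareto frontier from a finite candidate set.
# A point p dominates q if p is no worse in every objective and
# strictly better in at least one (here encoded as q != p).

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

Here (3, 4) is dominated by (2, 3), and (5, 5) is dominated by several points, so neither appears on the frontier.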
These methods are referred to as "zeroth-order methods" because they require only evaluation of the function, f(X), in each iterative step. Some examples of zeroth-order methods are the Bracketing Method and the Golden Section Search Method. Some population-based methods can also be categorized as zeroth-order methods.
The Bracketing method is a zeroth-order method which uses progressively smaller intervals to converge to an optimal solution. The interval is set up such that the x value corresponding to the optimal value of f lies within the interval. The interval is then divided into any number of sub-intervals of any given length. At each dividing point the value of f is calculated. The optimal sub-interval is then chosen as the next interval. This process iterates until the convergence criterion is met.
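The procedure just described can be sketched as follows, assuming a unimodal function on the starting interval; the number of sub-intervals, tolerance, and test function are illustrative.

```python
# Sketch of the bracketing idea: subdivide the interval, evaluate f at
# each dividing point, and keep the sub-intervals adjacent to the best point.

def bracket_minimize(f, lo, hi, n_divisions=10, tol=1e-6):
    while hi - lo > tol:
        step = (hi - lo) / n_divisions
        xs = [lo + i * step for i in range(n_divisions + 1)]
        best = min(xs, key=f)
        # For a unimodal f, the true minimizer lies within one step
        # of the best grid point, so shrink the bracket around it.
        lo = max(lo, best - step)
        hi = min(hi, best + step)
    return 0.5 * (lo + hi)

x_star = bracket_minimize(lambda x: (x - 1.5) ** 2, 0.0, 4.0)
```

Each pass shrinks the bracket by a factor of at least n_divisions/2, so convergence is geometric.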
Golden Section Search
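The heading above has no worked example on this page; the following is a minimal sketch of golden section search for a unimodal function on [a, b]. The interior points are placed using the inverse golden ratio so that one function evaluation can be reused conceptually at each step (this simple version re-evaluates for clarity).

```python
import math

def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)                # left interior point
    d = a + inv_phi * (b - a)                # right interior point
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

x_min = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

The interval shrinks by the constant factor 0.618 per iteration regardless of the function's values, which is what distinguishes this method from general bracketing.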
In addition to evaluation of f(X), first-order methods require the calculation of the gradient vector ∇f(X) in each iterative step. Some examples of first-order methods are the Steepest Descent or Cauchy Method and the Conjugate Gradient Method.
Steepest Descent (Cauchy) Method
The Steepest Descent method uses a search direction of some magnitude along the negative of the gradient. The negative of the gradient gives the direction of maximum decrease, hence "steepest descent." The magnitude of the constant for the search direction can be determined through zeroth-order methods or from direct calculation. The direct calculation is done by setting the derivative equal to zero and solving for the constant. This method is guaranteed to converge to a local minimum, but convergence may be slow because previous iterations are not considered in determining the search direction of subsequent iterations. The rate of convergence can be estimated using the condition number of the Hessian matrix: if the condition number of the Hessian is large, convergence will be slow.
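A minimal sketch of the iteration, assuming a fixed step size and a finite-difference gradient (both illustrative simplifications; the text above notes the step can instead be found by a zeroth-order search or direct calculation):

```python
# Sketch: steepest descent with a fixed step on a simple quadratic.
# The gradient is approximated by central finite differences.

def grad(f, x, h=1e-6):
    return [
        (f(x[:i] + [xi + h] + x[i+1:]) - f(x[:i] + [xi - h] + x[i+1:])) / (2 * h)
        for i, xi in enumerate(x)
    ]

def steepest_descent(f, x0, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]   # move against the gradient
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
x_min = steepest_descent(f, [0.0, 0.0])
```

The mismatch in curvature between the two coordinates (1 vs. 4) is a small instance of the conditioning issue mentioned above: the poorly scaled coordinate dictates how large a stable step can be.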
Conjugate Gradient Method
The Conjugate Gradient Method is similar to the Steepest Descent Method except that it takes into consideration previous iterations when choosing search directions. The conjugate direction is determined by adding the steepest descent direction of the previous iteration, scaled by some value, to the steepest descent direction of the current iteration. The constant used to scale the search direction of the previous iteration can be determined using either the Fletcher-Reeves formula or the Polak-Ribiere formula.
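For the quadratic case, the update described above can be sketched exactly: each new direction is the current residual (the negative gradient) plus the previous direction scaled by the Fletcher-Reeves-style ratio of residual norms. The matrix and right-hand side below are illustrative.

```python
# Sketch: linear conjugate gradient for f(x) = 1/2 x^T A x - b^T x,
# equivalent to solving A x = b for symmetric positive-definite A.

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def conjugate_gradient(A, b, x0, iters=10):
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual = -gradient
    d = list(r)
    for _ in range(iters):
        Ad = matvec(A, d)
        alpha = dot(r, r) / dot(d, Ad)                   # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r_new = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r_new, r_new) / dot(r, r)             # scales previous direction
        d = [rn + beta * di for rn, di in zip(r_new, d)]
        r = r_new
        if dot(r, r) < 1e-20:
            break
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])   # converges in at most 2 steps here
```

On an n-dimensional quadratic, exact arithmetic gives convergence in at most n iterations, which is the payoff of reusing the previous search direction.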
Second-order methods take advantage of the Hessian matrix, the second derivative of the function, to improve the search direction and rate of convergence. Some examples of second-order methods are Newton's Method, the Davidon-Fletcher-Powell (DFP) method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.
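As a one-variable sketch of how the second derivative improves the step, Newton's method scales the gradient by the inverse curvature; the test function and starting point are illustrative.

```python
# Sketch: Newton's method for 1-D minimization, stepping by f'(x) / f''(x).

def newton_minimize(df, d2f, x0, iters=20, tol=1e-10):
    x = x0
    for _ in range(iters):
        step = df(x) / d2f(x)       # curvature-scaled step
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimize f(x) = x^4 - 3x^2 + 2, with f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6.
x_min = newton_minimize(lambda x: 4 * x**3 - 6 * x,
                        lambda x: 12 * x**2 - 6,
                        x0=2.0)
```

From this starting point the iterates converge quadratically to the local minimum at sqrt(3/2); quasi-Newton methods such as DFP and BFGS approximate the same curvature information without forming second derivatives.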
Population-based methods generate a population of points throughout the design space. Some methods then select a range of the best points and generate a new population, continuing until convergence is reached (e.g., the Monte Carlo method). Others generate a population and then "evolve" the points: the weakest of the new population are eliminated and the remainder evolved again until convergence is reached (e.g., the genetic algorithm).
Genetic algorithms are based on the principles of natural selection and natural genetics, meaning reproduction, crossover, and mutation are involved in the search procedure. The design variables are represented as strings of binary numbers which mirror chromosomes in genetics. These strings allow the different binary numbers, or bits, to be adjusted during the reproduction, mutation, and crossover stages. A population of points is used, and the number of initial points is typically two to four times the number of design variables. These points are evaluated to provide a fitness value, and above-average points are selected and added to a new population of points. Points in this new population undergo the second stage in the algorithm, known as crossover. In this stage, information from two "parent" points, or strings, is combined to produce a new "child" point. The mutation operator is optional. It selects points based on a user-defined probability and alters a bit in the point's binary string, thereby maintaining diversity in the population. The process is iterated until convergence is reached. GAs differ from other optimization techniques in that they work with a coding of the parameter set and not the parameters themselves, search a population of points instead of a single point, and use objective function knowledge instead of derivatives or other auxiliary knowledge.
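The stages described above can be sketched as a minimal binary-string GA. The fitness function (maximize the decoded integer), population size, rates, and tournament selection are all illustrative choices, not prescribed values.

```python
import random

random.seed(0)

N_BITS = 8
POP_SIZE = 20

def fitness(bits):
    # Toy objective: maximize the decoded integer value of the bit string.
    return int("".join(map(str, bits)), 2)

def select(pop):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    point = random.randint(1, N_BITS - 1)       # single-point crossover
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.05):
    # Flip each bit with a small user-defined probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(40):                             # generations
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
```

Note that the algorithm only ever evaluates `fitness`; no derivatives are used, matching the characterization of GAs above.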
Structural Scale Optimization
Analytical Model for Axial Crushing of Multi-cell Multi-corner Tubes (Multi-CRUSH)
Topology Optimization of Continuum Structures Using Element Exchange Method
Element Exchange Method for Topology Optimization
Optimization algorithms can be used for model calibration. For example, the DMGfit and MSFfit routines employ optimization algorithms to automatically fit the plasticity-damage model and the fatigue model, respectively. The constants of interest are selected and a Monte Carlo optimization routine is performed to generate candidate constants. A single element simulation then produces the model stress-strain curve. The curve is compared to the input data for fit comparison, and this process is repeated until a satisfactory fit is achieved or a maximum number of iterations is reached. The resulting optimized constants are then output.
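The calibration loop just described can be sketched as a Monte Carlo search. Here a linear toy model stands in for the single-element simulation, and the synthetic "experimental" data and parameter bounds are invented for illustration; the actual DMGfit and MSFfit routines use the real material models.

```python
import random

random.seed(1)

# Synthetic "experimental" stress-strain data generated from known
# constants a = 2.0, b = 0.5 (so we can check the fit recovers them).
strains = [0.0, 0.1, 0.2, 0.3, 0.4]
stresses = [2.0 * e + 0.5 for e in strains]

def model(a, b, strain):
    return a * strain + b          # placeholder for the material model

def error(a, b):
    # Sum-of-squares misfit between model curve and input data.
    return sum((model(a, b, e) - s) ** 2 for e, s in zip(strains, stresses))

best, best_err = None, float("inf")
for _ in range(5000):              # Monte Carlo candidate generation
    a = random.uniform(0.0, 5.0)
    b = random.uniform(0.0, 2.0)
    if error(a, b) < best_err:
        best, best_err = (a, b), error(a, b)

a_fit, b_fit = best
```

In practice the loop also terminates when the fit is "good enough" or an iteration cap is reached, exactly as the text describes.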
The Embedded Atom Method (EAM) and Modified Embedded Atom Method (MEAM) potentials can be optimized based on Electronic Scale calculation results and experimental data.
Multilevel Design Optimization
This is an emerging topic at CAVS. The pages describing the progress are currently available only to members of the research team.
- ↑ Rais-Rohani, Masoud, "Handout #3: Mathematical Programming Methods for Unconstrained Optimization," Design Optimization Class, Mississippi State University, Spring 2012.
- ↑ Rao, S.S., “Genetic Algorithms,” Engineering Optimization: Theory and Practice, John Wiley and Sons, Inc., 2009, pp. 694-702.
- ↑ Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison Wesley Longman, 1989.