

As with every other notion in ICME, design optimization can be applied to the models and materials at each of the different length scales.
=General Design Optimization=

The discipline of optimization deals with finding the maximum and minimum of functions subject to constraints.

==Optimization guidelines==

'''Design variables:''' A design variable is a specification that is controllable by the designer (e.g., thickness, material, etc.). Design variables are often bounded by maximum and minimum values; sometimes these bounds can be treated as constraints.

'''Constraints:''' A constraint is a condition that must be satisfied for the design to be feasible. Constraints can reflect physical laws, resource limitations, user requirements, or bounds on the validity of the analysis models. They can be handled explicitly by the solution algorithm or incorporated into the objective using Lagrange multipliers.

'''Objectives:''' An objective is a numerical value or function that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with a single objective; when using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multiobjective optimization, such as the calculation of a Pareto frontier.

'''Models:''' The designer must also choose models to relate the constraints and the objectives to the design variables. These may include finite element analyses, reduced-order metamodels, etc.

'''Reliability:''' The probability of a component performing its required functions under stated conditions for a specified period of time.
==Optimization Methods==

Gradient-based methods:
* Adjoint equation
* Newton's method
* Steepest descent
* Conjugate gradient
* Sequential quadratic programming

Gradient-free methods:
* Hooke-Jeeves pattern search
* Nelder-Mead method

Population-based methods:
* Genetic algorithm
* Memetic algorithm
* Particle swarm optimization
* Ant colony optimization
* Harmony search

Other methods:
* Random search
* Grid search
* Simulated annealing
* Direct search
* IOSO (Indirect Optimization based on Self-Organization)
==Convergence of Pareto Frontier==

It is relatively simple to determine an optimal solution for single-objective methods (the solution with the lowest value of the error function). For multiple objectives, however, solutions must be evaluated on a "Pareto frontier." A solution lies on the Pareto frontier when any further change to the parameters that improves one objective causes the other objective(s) to suffer as a result. Once a set of solutions has converged to the Pareto frontier, further testing is required to determine which candidate force field is optimal for the problems of interest. Be aware that searches with a limited number of parameters might "cram" a lot of important physics into a few parameters.
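The non-dominated filtering described above can be sketched in a few lines. The candidate designs below are hypothetical illustration values, not output of any real model; both objectives are taken as minimized.

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point (all objectives minimized)."""
    frontier = []
    for p in points:
        # p is dominated if some other point q is at least as good in
        # every objective (and differs from p somewhere)
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# hypothetical (objective 1, objective 2) pairs for candidate designs
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (3.0, 4.0), (4.0, 2.0)]
print(pareto_frontier(candidates))  # the three non-dominated designs remain
```

Note that every point on the frontier trades one objective against the other: improving objective 1 from any frontier point worsens objective 2.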
==Metamodeling==

see [[Metamodeling]] and [http://en.wikipedia.org/wiki/Metamodeling Metamodeling (Wikipedia)]
==Unconstrained Optimization==

===Zeroth-Order Methods===

These methods are referred to as "zeroth-order methods" because they require only evaluation of the function, ''f''('''X'''), in each iterative step. Some examples of zeroth-order methods are the Bracketing Method and the Golden Section Search Method. Some population-based methods could also be categorized as zeroth-order methods <ref name = 'Opt'>Rais-Rohani, Masoud, "Handout #3: Mathematical Programming Methods for Unconstrained Optimization," Design Optimization Class, Mississippi State University, Spring 2012.</ref>.

====Bracketing Method====

The Bracketing Method is a zeroth-order method that uses progressively smaller intervals to converge to an optimal solution. The interval is set up such that the ''x'' value corresponding to the optimal value of ''f'' lies within it. The interval is then divided into a number of subintervals, and the value of ''f'' is calculated at each dividing point. The best subinterval is then chosen as the next interval. This process iterates until the convergence criteria are met <ref name = 'Opt'/>.
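The iterative subdivision just described can be sketched as follows. The test function, number of subintervals, and tolerance are illustrative choices, and the function is assumed to be unimodal on the starting interval.

```python
def bracket_minimize(f, a, b, n_sub=10, tol=1e-6, max_iter=100):
    """Shrink [a, b] around the best grid point until it is narrower than tol."""
    for _ in range(max_iter):
        if b - a < tol:
            break
        # evaluate f at the dividing points of n_sub equal subintervals
        xs = [a + i * (b - a) / n_sub for i in range(n_sub + 1)]
        i_best = min(range(len(xs)), key=lambda i: f(xs[i]))
        # keep the subintervals on either side of the best point
        a, b = xs[max(i_best - 1, 0)], xs[min(i_best + 1, n_sub)]
    return 0.5 * (a + b)

# minimum of (x - 2)^2 + 1 lies at x = 2 inside [0, 5]
x_min = bracket_minimize(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(x_min)  # close to 2.0
```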
====Golden Section Search====
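This section is empty in the source, so the following is only a hedged sketch of the classic golden section search for a unimodal function on [a, b]: the interval shrinks by the inverse golden ratio each iteration, and only one new function evaluation is needed per step.

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            # minimum lies in [a, d]; reuse c as the new d
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            # minimum lies in [c, b]; reuse d as the new c
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```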
===First-Order Methods===

In addition to evaluating ''f''('''X'''), first-order methods require calculation of the gradient vector ∇''f''('''X''') in each iterative step. Some examples of first-order methods are the Steepest Descent (Cauchy) Method and the Conjugate Gradient Method.
 +  
−   +  
−  ====Steepest Descent (Cauchy) Method====
 +  
−   +  
−  The Steepest Descent method uses a search direction of some magnitude in the negative direction of the gradient. The negative of the gradient gives the direction of maximum decrease, hence steepest descent. The magnitude of the constant for the search direction can be determined through zerothorder methods or from direct calculation. The direct calculation is done by setting the derivative equal to zero and solving for the constant. This method is guaranteed to converge to a local minimum, but convergence may be slow as previous iterations are not considered in determining the search direction of subsequent iterations. The rate of convergence can be estimated using the condition number of the Hessian matrix. If the condition number of the Hessian is large convergence will be slow <ref name = 'Opt'>RaisRohani,Masoud “Handout #3: Mathematical Programming Methods for Unconstrained Optimization,” Design Optimization Class, Mississippi State University, Spring 2012.</ref>.
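A minimal sketch of steepest descent with a fixed step size on a quadratic test function; in practice a line search (zeroth-order or direct) would replace the hand-picked step, and all numerical choices here are illustrative.

```python
def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        # stop when the gradient norm is small enough
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        # move in the negative gradient direction (maximum decrease)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 1)^2 + 2*(y + 3)^2, minimized at (1, -3)
grad_f = lambda v: [2.0 * (v[0] - 1.0), 4.0 * (v[1] + 3.0)]
x_opt = steepest_descent(grad_f, [0.0, 0.0])
print(x_opt)  # approaches [1.0, -3.0]
```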
 +  
−   +  
−  ====Conjugate Gradient Method====
 +  
−   +  
−  The Conjugate Gradient Method is similar to the Steepest Descent Method except that it takes into consideration previous iterations when choosing search directions. The conjugate direction is determining by adding the steepest descent direction of the previous iteration, scaled by some value, to the steepest descent direction of the current iteration. The constant used to scale the search direction of the previous iteration can be determined using either the FletcherReeves formula or the PolakRibiere formula <ref name = 'Opt'>RaisRohani,Masoud “Handout #3: Mathematical Programming Methods for Unconstrained Optimization,” Design Optimization Class, Mississippi State University, Spring 2012.</ref>.
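For a quadratic objective f(x) = ½x<sup>T</sup>Ax − b<sup>T</sup>x, whose minimizer solves Ax = b, the method reduces to linear conjugate gradient with an exact step length, and the Fletcher-Reeves-style scaling β = (r<sub>new</sub>·r<sub>new</sub>)/(r·r) reuses the previous direction. The matrix and vector below are illustrative.

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    r = [bi - Avi for bi, Avi in zip(b, matvec(x))]  # residual = -gradient
    d = list(r)                                      # first direction: steepest descent
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs ** 0.5 < tol:
            break
        Ad = matvec(d)
        alpha = rs / sum(di * Adi for di, Adi in zip(d, Ad))  # exact step length
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * Adi for ri, Adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        # previous direction scaled by beta, added to new steepest descent direction
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
sol = conjugate_gradient(A, b, [0.0, 0.0])
print(sol)  # solves A x = b, i.e. [1/11, 7/11]
```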
 +  
−   +  
−  ===SecondOrder Methods===
 +  
−   +  
−  Secondorder methods take advantage of the Hessian matrix, the second derivative, of the the function to improve search direction and rate of convergence. Some examples of secondorder methods are Newton's Method, DavidonFletcherPowell (DFP) method, and the BroydenFletcherGoldfarbShanno (BFGS) method <ref name = 'Opt'>RaisRohani,Masoud “Handout #3: Mathematical Programming Methods for Unconstrained Optimization,” Design Optimization Class, Mississippi State University, Spring 2012.</ref>.
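In one dimension the Hessian is just the second derivative, and Newton's method rescales each gradient step by it: x<sub>k+1</sub> = x<sub>k</sub> − f′(x<sub>k</sub>)/f″(x<sub>k</sub>). The test function below is an illustrative choice.

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        x -= g / d2f(x)  # Newton step: gradient scaled by inverse Hessian
    return x

# f(x) = x^4 - 3x^2 + 2 has a local minimum at x = sqrt(3/2)
df = lambda x: 4.0 * x ** 3 - 6.0 * x
d2f = lambda x: 12.0 * x ** 2 - 6.0
x_star = newton_minimize(df, d2f, 2.0)
print(x_star)  # converges to sqrt(1.5) ~ 1.2247
```

Note the quadratic convergence near the solution: the number of correct digits roughly doubles each iteration, which is the payoff for computing second derivatives.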
 +  
−   +  
−  ===PopulationBased Methods===
 +  
−   +  
−  Population based methods generate a population of points throughout the design space. Some methods then specify a range of the best points and generate a new population, continuing until convergence is reached (MonteCarlo Method). Others generate a population and then "evolve" the points. The weakest of the new population is eliminated and the remainder evolved again until convergence is reached (Genetic Algorithm).
 +  
−   +  
−  ====MonteCarlo Method====
 +  
−   +  
−  see [http://en.wikipedia.org/wiki/MonteCarlo_method MonteCarlo]
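A hedged sketch of the simple population-based Monte Carlo search described above: sample random points, keep the best, shrink the sampling region around it, and repeat. The population size, shrink factor, and test function are illustrative.

```python
import random

def monte_carlo_search(f, lo, hi, n_pop=200, n_gen=40, shrink=0.5, seed=0):
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    width = hi - lo
    center = 0.5 * (lo + hi)
    for _ in range(n_gen):
        # generate a population of random points in the current region
        for _ in range(n_pop):
            x = center + width * (rng.random() - 0.5)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # shrink the search region around the best point found so far
        center, width = best_x, width * shrink
    return best_x, best_f

x, fx = monte_carlo_search(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(x, fx)  # x near 2.0
```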
 +  
−   +  
−  ====Genetic Algorithm====
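This section is empty in the source; the following is only a minimal sketch of the evolve-and-eliminate loop described above, as a real-valued genetic algorithm with mutation only (no crossover). All parameters are illustrative.

```python
import random

def genetic_minimize(f, lo, hi, n_pop=60, n_gen=80, sigma=0.5, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=f)                # best (lowest f) first
        survivors = pop[: n_pop // 2]  # eliminate the weakest half
        # refill the population with mutated copies of the survivors
        children = [x + rng.gauss(0.0, sigma) for x in survivors]
        pop = survivors + children
    return min(pop, key=f)

best = genetic_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(best)  # near 2.0
```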
[[SRCLID|back to the SRCLID home]]

[[Category:Overview]]
 +  
−   +  
−  =Structural Scale Optimization=
 +  
−   +  
−  ===Analytical Model for Axial Crushing of Multicell Multicorner Tubes ([[MultiCRUSH]])===
 +  
−  Contributers: [http://www.cavs.msstate.edu/information.php?eid=317 Ali Najafi] and [http://www.ae.msstate.edu/pages/rohani.php Masoud RaisRohani]
'''Topology Optimization of Continuum Structures Using Element Exchange Method'''

Authors: [http://www.cavs.msstate.edu/information.php?eid=316 Mohammad Rouhi] ([mailto:rouhi@cavs.msstate.edu rouhi@cavs.msstate.edu]) and [http://www.ae.msstate.edu/pages/rohani.php Masoud Rais-Rohani] ([mailto:masoud@ae.msstate.edu masoud@ae.msstate.edu])

[[Media:Mohammad_Rouhi_Thesis_final.pdf|Mohammad Rouhi's MSc Thesis]]

'''Topology Optimization of Continuum Structures Using Element Exchange Method'''

Authors: [http://www.cavs.msstate.edu/information.php?eid=316 Mohammad Rouhi] ([mailto:rouhi@cavs.msstate.edu rouhi@cavs.msstate.edu]) and [http://www.ae.msstate.edu/pages/rohani.php Masoud Rais-Rohani] ([mailto:masoud@ae.msstate.edu masoud@ae.msstate.edu])

http://pdf.aiaa.org/preview/CDReadyMSDM08_1875/PV2008_1707.pdf

'''Element Exchange Method for Topology Optimization'''

Authors: [http://www.cavs.msstate.edu/information.php?eid=316 Mohammad Rouhi] ([mailto:rouhi@cavs.msstate.edu rouhi@cavs.msstate.edu]), [http://www.ae.msstate.edu/pages/rohani.php Masoud Rais-Rohani] ([mailto:masoud@ae.msstate.edu masoud@ae.msstate.edu]) and [http://www.cavs.msstate.edu/information.php?eid=144 Thomas Neil Williams] ([mailto:tnw7@cavs.msstate.edu tnw7@cavs.msstate.edu])

http://springerlink.com/index/m30m6x1x62k252lr.pdf
 +  
−   +  
−  = Macroscale=
 +  
−   +  
−  Optimization algorithms can be used for model calibration. For example, the DMGfit and MSFfit routines employ optimization to fit the plasticitydamage model and the fatigue model, respectively. The constants of interest are selected and a Monte Carlo optimization routine is performed to generate candidate constants. A single element simulation then produces the model stressstrain curve. The curve is compared to the input data for fit comparison, and this process is repeated until a satisfactory fit is achieved or a maximum number of iterations is reached. The resulting optimized constants are then output.
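The calibration loop can be sketched as below: propose candidate constants by Monte Carlo sampling within bounds, evaluate the model curve against the reference data, and keep the best fit. The two-constant "model" and synthetic data are stand-ins for illustration only, not the actual DMGfit/MSFfit routines or their material models.

```python
import math
import random

def model_curve(strains, constants):
    # hypothetical two-constant saturation-hardening form (illustrative only)
    a, b = constants
    return [a * (1.0 - math.exp(-b * s)) for s in strains]

def calibrate(strains, target, bounds, n_iter=5000, tol=1e-6, seed=0):
    rng = random.Random(seed)
    best_c, best_err = None, float("inf")
    for _ in range(n_iter):
        # Monte Carlo candidate constants within the given bounds
        c = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = sum((p - t) ** 2 for p, t in zip(model_curve(strains, c), target))
        if err < best_err:
            best_c, best_err = c, err
        if best_err < tol:  # satisfactory fit reached
            break
    return best_c, best_err

strains = [0.01 * i for i in range(1, 21)]
target = model_curve(strains, (300.0, 15.0))  # synthetic "experimental" data
consts, err = calibrate(strains, target, [(100.0, 500.0), (1.0, 30.0)])
print(consts, err)  # best-fit constants and residual
```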
 +  
−   +  
−  = Mesoscale=
 +  
−   +  
−  = Microscale=
 +  
−   +  
−  = Nanoscale=
 +  
−   +  
−  The Embedded Atom Method (EAM) and Modified Embedded Atom Method (MEAM) potentials can be optimized based upon on Electronics Scale calculation results and experimental data.
=Electronic Scale=

=Multilevel Design Optimization=

This is an emerging topic at CAVS. The [[Multilevel Design Optimization|pages describing the progress]] are currently available only to members of the research team.
 +  
−   +  
−  ====References====
 +  
−  <references/>
 +  