Code: microMegas

Name: microMegas (mM)
Status: released
Release Date: mM v. 1.0 - 09/2009; mMpar v. 1.0 - 01/2010
Authors: Developers Team at CNRS-ONERA, France; mMpar v. 1.0 is an extension of the original mM v. 3.2 with OpenMP additions, by Florina Ciorba
Contact: CNRS-ONERA Developers' Team or Sebastien Groh
License: GNU General Public License (GPL)
Repository: mM v. 1.0 or mMpar
Documentation: mM: README.txt (mM/trunk); README.txt (mM/tags/1.0) or mMpar: README.txt (mMpar/trunk); MSU.CAVS.CMD.2010-R0002.pdf (mMpar/doc)
Known problems: None
Description: MicroMegas is a 3-D discrete dislocation dynamics (DDD) simulation code

To report bugs, problems or to make comments please use the discussion tab above.



Overview of microMegas

MicroMegas (also known as 'mM') is an open source program for DD (Dislocation Dynamics) simulations originally developed at the 'Laboratoire d'Etude des Microstructures', CNRS-ONERA, France. mM is free software distributed under the terms of the GNU General Public License as published by the Free Software Foundation. Discrete dislocation dynamics (DDD) is a numerical tool used to model the plastic behavior of crystalline materials using the elastic theory of dislocations [1]. DDD is the computational counterpart to in situ TEM tests. MicroMegas is a legacy simulation code used to study the plasticity of mono-crystalline metals; it is based on elasticity theory and models dislocation interactions in an elastic continuum. In crystalline materials, plastic deformation may be explained by (i) twinning, (ii) martensitic transformation and/or (iii) dislocation interactions (see Figure 1).

MicroMegas is used at CAVS for modeling dislocation interactions and reactions in an elastic continuum. The code is used in a hierarchical multiscale framework of plasticity to obtain information related to the hardening of the material (see, for example, the multiscale framework presented in this review paper). Details of the discrete dislocation model can be found in the methodology paper and in the references at the bottom of the page.

The discrete dislocation simulation code can be used for HCP, BCC and FCC materials.

Available versions of microMegas

This section includes links to versions of the discrete dislocation dynamics codes. microMegas is commonly used at CAVS to simulate the behavior of dislocations for metals at the microscale.

  • microMegas (download the original microMegas code from the French Aerospace Lab here)
    • mM v.1.0: serial version with Intel Compiler Optimizations (download mM v. 1.0 from the Codes Repository at CAVS here, using "Download GNU tarball"; compile it as 'mm')
    • mM v.1.0: parallel version with Intel Compiler Optimizations and MPI (download mM v. 1.0 from the Codes Repository at CAVS here, using "Download GNU tarball"; compile it as 'mmp')
  • mMpar v.1.0: parallel version of mM v.1.0 using OpenMP threads (download mMpar v. 1.0 from the Codes Repository at CAVS here, using "Download GNU tarball"; compile it as 'mm_omp')

Download and Setup

microMegas can be freely downloaded from the original development site at the French Aerospace Lab. It can also be downloaded from the CAVS Cyberinfrastructure Repository of Codes in two versions:

  • mM ver 1.0 – serial mM [original ver. 3.2] with various Intel Compiler Optimizations, or
  • mMpar ver. 1.0 – parallel version of mM [original ver. 3.2], where the force on each segment is calculated in parallel using OpenMP threads.

Download

microMegas is not available as a system-wide installation on the HPC2 systems. To use microMegas, please choose one of the available versions above and download it to your local computer or workstation. After downloading, untar the tarball by typing:

tar xzf [name_of_the_tarball].tar.gz 

or

tar xf [name_of_the_tarball].tar

Go to the directory resulting from the above operation. Please follow the instructions in the readme files provided in the directory to set up microMegas on your system.

Setup

On this page we describe how to install, configure and run DDD simulations using mM ver. 1.0 (with Intel Compiler optimizations) and mMpar ver. 1.0 (with Intel Compiler optimizations and OpenMP threads). Installation instructions for mM ver. 1.0 and mMpar ver. 1.0 can also be found in the 'readme' files provided in each directory and subdirectory of the code. mM can be run in batch mode, with the data analyzed afterwards using conventional graphical display programs (examples of Gnuplot scripts are provided), or it can be used in interactive mode to simply visualize dislocation activity. Herein, we describe how to run the code in batch mode. For instructions on how to run mM in interactive mode, please refer to the 'readme' files provided with the code.

Before compiling microMegas on any of the HPC-CAVS computing systems, one needs to route to the proper compiler and MPI paths on the system using:

  • swsetup intel - to use the latest Intel Fortran Compiler installed on the system, and
  • swsetup openmpi-intel-64 - to use the latest version of the OpenMPI libraries, compiled with the Intel compiler for 64-bit systems.

The general workflow for running discrete dislocation dynamics simulations using microMegas is illustrated in the figure below:

Workflow for running discrete dislocation dynamics simulations (using microMegas) at CAVS

Input files

The input files are located in the mM/in directory. The following files are needed to run the mM simulation.

The input file with the parameters used for the simulation. For instance, one can select the type of simulation (initial or restart from a previous simulation) via the parameter SIDEJA. One can also select whether cross-slip displacement of dislocations is desired by setting the GLDEV parameter accordingly ('T' for enabled, 'F' for disabled). The total number of simulation steps is set via the NSTEP parameter. Each simulation time step corresponds to 10^-9 seconds of real time; therefore, for a very small simulation use NSTEP=500, while for a long-running simulation set NSTEP to 10^6 or above. Finally, one can also select how often the simulation should save its current state, via the KISAUVE, KISTAT, KKIM and KPREDRAW parameters. For more details, check the file that comes with the code.
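The authoritative layout of this file is the copy shipped in the mM/in directory. Purely as an illustration of the parameters discussed above (the ordering, the 'keyword = value' layout and the value encodings shown here are assumptions, not the actual format), the relevant entries control settings such as:

SIDEJA   = 0      ! initial simulation or restart from a previous run (encoding assumed)
GLDEV    = F      ! T enables cross-slip of dislocations, F disables it
NSTEP    = 500    ! total number of time steps (each step ~ 10^-9 s of real time)
KISAUVE  = 1000   ! save/statistics/trajectory/redraw frequencies, in steps
KISTAT   = 100    ! (the exact meaning of each frequency is documented in the file itself)
KKIM     = 1000   !
KPREDRAW = 100    !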

This is the file containing the material variables. See the existing file for more details.

This is the file containing the initial dislocation configuration (e.g., the active slip systems, the number of segments, the dimensions of the simulation reference volume box, etc). See the bottom of the existing file for more details.

This is the file describing the initial number, type and characteristics of the dislocation segments. See the existing file for more details.

These are the input files needed to run polyphase simulations. See the existing files for more details.

Simulation source files

Micromegas is written in a mix of Fortran 90 and Fortran 95, consists of 18 source modules and contains roughly 25,000 lines of code. The pseudocode of the MAIN module in Micromegas is shown below.

! Module MAIN: simulation time loop
TIME: do step = 1, NSTEP
   ...
   call SOLLI            ! Apply the load
   call DISCRETI         ! Discretize the simulation volume into dislocation lines/segments
   call FORCE            ! Calculate interaction forces:
                         !   calls SIGMA_INT_CP for the short-range interaction forces
                         !   calls SIGMA_INT_LP for the long-range interaction forces
   call DEPPREDIC        ! Predict the displacement of the moving segments
   call UPDATE           ! Search for obstacles, determine and make contact reactions, update segment positions
   call CORRIGER_CONFIG  ! Check the connections between all segments
   ...
enddo TIME

The source files are located in the mM/src/simu/ directory. These 18 modules are briefly described below. For more information, please refer to the actual content of these files.

  • 01constantes.f90 - module containing the declaration of all simulation constants
  • 02bricamat.f90 - module containing a toolbox of useful subroutines for, e.g., dot products, etc.
    • uses 01constantes module
  • 03varbase.f90 – module containing the data structures and variables database (lattice, etc)
    • uses 01constantes module
  • 04varglob.F90 – module containing initializations of all the constants and variables common to all the modules of the main program
    • uses 01constantes and 03varbase modules
  • 05intergra.f90 – module that enables integration with the graphical module (for interactive mode mM simulations)
    • uses 04varglob module
  • 06debug.f90 – module containing the subroutines required for debugging, i.e., subroutine Conf(i) and subroutine verif_reseau
    • uses 01constantes, 02bricamat, 03varbase and 04varglob modules
  • 07init.F90 – module that reads the input files and assigns values to all other variables not initialized in 04varglob
    • uses 02bricamat, 04varglob, 06debug and carto modules
  • 08connec.f90 – module that checks the connectivity between all segments (not CPU intensive)
    • uses 04varglob and 06debug modules
  • 09elasti.F90 – module where the short-range and long-range interaction forces between each pair of segments are calculated
    • uses 02bricamat, 04varglob, 06debug, 08connec and microstructure modules
  • 10dynam.F90 - module where the moving velocity of each segment is calculated
    • uses 01constantes, 04varglob, 06debug and 08connec modules
  • 11topolo.f90 – module containing the procedures used to generate the boundary conditions, to discretize the dislocation lines into segments and to locate the segments before they are eliminated
    • uses 04varglob, 06debug, 08connec and microstructure modules
  • 12contact.f90 – module containing simple displacements and where the interactions between segments are updated in four steps:
  1. check for every possible obstacle,
  2. check for every possible contact reaction (annihilation, junction formation, etc.),
  3. make the reactions, and
  4. update the positions of the segments
    • uses 02bricamat, 04varglob, 06debug, 08connec and microstructure modules
  • 13resul.F90 – module where the results and statistics are calculated
    • uses 02bricamat, 04varglob, 06debug and microstructure modules
  • 14bigsave.F90 – module that saves the simulation state either when the number of selected time steps has elapsed or to be able to restart a computation
    • uses 02bricamat and 04varglob modules
  • 15main.F90 - main module containing the simulation time loops; it calls all other modules either implicitly or explicitly; for more details see Figure 2
    • uses 02bricamat, 04varglob, 06debug, 07init, microstructure, 09elasti, 10dynam, 11topolo, 12contact, 13resul and 14bigsave modules
  • base.f90 – module that reads all the data of the main program, in three groups of files:
    • materiaux – given material physical properties
    • control – given simulation parameters
    • seg3D – regroups the characteristics of the segments given at the beginning of the simulation
    • uses 01constantes, 02bricamat and 04varglob modules
  • carto.f90 – to be written
  • microstructure.F90 – module containing the subroutines used to detect the obstacles, i.e., subroutine barriere_spherique and subroutine barriere_plane; it prints the segments structure
    • uses 02bricamat, 03varbase, 04varglob, 06debug and 08connec modules


Compiling microMegas

Compiling the original microMegas code

Compile the simulation

To compile microMegas, you need to build a makefile dedicated to the machine on which you want to run the simulation, in the 'mM/bin' or 'mMpar/bin' directory. Solutions already exist for many different platforms, so you should be able to write your own without too much effort.

The "config" file is the part of "makefiles" which is the same on all the machines

To create a new machine "makefile", you must add the corresponding ".PHONY" definition at the end of config.

Then, you need to build your own "Make_DEFS" file. The latter must contain all the header definitions needed for your new machine. See the following examples:

  • Make_DEFS.amd -> An AMD Linux platform with gcc and the Intel FORTRAN compiler
  • Make_DEFS.dec -> A DEC Alpha machine with the native C and FORTRAN compilers
  • Make_DEFS.g5 -> An Apple G5 machine with gcc and the IBM FORTRAN compiler
  • Make_DEFS.mac -> An Apple G4 or G3 machine with gcc and the ABSOFT FORTRAN compiler
  • Make_DEFS.mad -> An AMD cluster
  • Make_DEFS.madmax -> A cluster of Xeon machines with gcc and the Intel FORTRAN compiler
  • Make_DEFS.pc -> A simple PC workstation
  • Make_DEFS.sgi -> An SGI Itanium machine with gcc and the Intel(64) FORTRAN compiler
  • etc.
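As an illustration only, a new "Make_DEFS.machine_type" file collects the compiler and flag definitions used by the common part of the makefiles. The variable names below are assumptions made for this sketch, so copy one of the existing Make_DEFS.* files and adapt it rather than starting from this example:

# Make_DEFS.mylinux - hypothetical definitions for a 64-bit Linux workstation with the Intel compiler
# (variable names are illustrative; match them to those used in the existing Make_DEFS.* files)
F90   = ifort          # Fortran 90/95 compiler
CC    = gcc            # C compiler for the auxiliary tools
OPTF  = -O3 -xHost     # optimization flags for production builds
OMPF  = -openmp        # OpenMP flag (only needed for the mm_omp target; Intel 11.x syntax)
LIBS  =                # extra libraries (e.g., X11/OpenGL for the graphical targets)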

Once you have made your "Make_DEFS.machine_type", type:

make -f  config machine_type

For instance, for my machine I simply type "make -f config mac".

At that stage you should have a "makefile" file created in the bin directory.

Compile the version of microMegas of your choice

Depending on the version of microMegas you want to execute, type:

  • make or make all - to compile all the binaries (this does not include the MPI binary)
  • make mm - to compile only the batch version of the simulation
  • make gmm - to compile only the simulation with its graphical interface (interactive mode)
  • make mm_omp - to compile only the batch version for OpenMP parallel threads
  • make mmp - to compile only the batch version for MPI clusters
  • make cam - to compile only the graphical interface (needed to see the simulation film)
  • make base - to compile only the code needed to generate the simulation vectors base
  • make confinit - to compile only the code needed to generate random initial configurations
  • make pavage - to compile only the code needed to generate the database needed for the simulation interfaces
  • make clean - to sweep out all the useless pieces of code
  • make cleanall - to clean up everything

Run the simulation

To run the simulation, simply type:

  • mm > screen & - to run the simulation in batch mode
  • gmm - to run the simulation in interactive mode and with the graphic interface
  • mm_omp - to run the OpenMP-based simulation in batch mode, assuming all the OpenMP-related environment variables are set (see the next subsection for more details).
  • mpirun -np <x> -machinefile ../in/hosts.dd mmp > screen & - to run the MPI batch simulation on <x> processes
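The file passed via -machinefile (../in/hosts.dd above) is a plain-text list of the host names to run on, one per line, optionally with a slot count (OpenMPI hostfile syntax); the host names below are placeholders:

node001
node002 slots=4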

Additional tools

  • cam - The camera code used to view the film of the simulation during and after the calculations
  • confinit - The code used to build initial configurations
  • base - The code you can use to generate, on its own, the base of vectors used in the simulation
  • pavage - The code used to generate the interface files "b_poly" needed to simulate periodic polycrystals

Where and who is who

All the input data are defined in the directory "mM/in". Take a look at the README file in this directory for more information.

All the output data are written in the directory "mM/out". Take a look at the README file in this directory for more information.


Running microMegas

A typical simulation run in MicroMegas requires somewhere between 10^6 and 10^9 time steps to gain insight into the plastic deformation range. Simulations with a smaller number of steps will very likely not capture the plastic range of deformation, the region of interest for materials scientists studying plastic deformation. A simulation run over 10,000 steps using the serial version of MicroMegas requires 68 hours on average and reaches 0.2% plastic deformation on a Nehalem quad-core Xeon W3570 processor with 6GB of triple-channel 133MHz DDR-3 RAM. At that rate (roughly 24.5 seconds per step), 10^6 steps correspond to several months of serial compute time, which is why the parallel versions and long walltimes described below are needed. Simulations of about 10^9 time steps are needed to reach the desired amount of deformation, that is, a total strain above 1% if possible.

To give an idea of the type of simulations that can be conducted with microMegas, we give here the parameters of a representative simulation (as selected in the input files), together with the compilation and execution commands. The parameters of a representative microMegas simulation are:

  • 0.5% plastic deformation
  • 10x10x10 µm^3 simulation box dimensions
  • 10^12 1/m^2 initial dislocation density
  • 10 1/s strain rate in multi-slip conditions

Note: Multi-slip calculations were performed to evaluate and demonstrate the efficiency of the parallel version of microMegas.

  • Material: representative volume elements of Al (FCC crystal structure with Burgers vector of magnitude b = 2.86 Å) of dimensions 9x10x12 µm^3
  • For tension simulations: loading along the [001] direction
  • For compression simulations: loading along the [100] direction
  • strain rate of 20 1/s
  • temperature of 300 K under periodic boundary conditions
  • the time step was taken to be 10^-9 seconds

Note: Screw dislocations were not allowed to cross-slip at any time.

Simple batch execution

Serial microMegas (mm)

To run serial microMegas for production simulations from the command line, add the corresponding software modules (compilers, libraries, etc.) to be loaded in your '.bashrc' file (in your home directory, i.e., /home/<your_username>/). To load the compiler of your choice (here, Intel Fortran), type:

swsetup intel

Then, in the 'mM/bin' directory, to compile only the batch version of the simulation, type:

make -f serial-Makefile clean

make -f serial-Makefile mm

Launch the serial version of the simulation from the same directory. First create a directory for the output:

mkdir ../production_runs

Then, to run the simulation in batch mode while recording the running time and saving the output to separate files, type:

/usr/bin/time -p -o ../production_runs/mm.time ./mm | tee ../production_runs/mm.log

Parallel microMegas (mm_omp - OpenMP version)

To run parallel microMegas for production simulations, add the corresponding software modules (compilers, libraries, visualisers, etc.) to be loaded in your '.bashrc' file. To load the compiler of your choice (here, Intel Fortran), type:

swsetup intel

Then, in the 'mMpar/bin' directory, to compile only the batch version of the simulation, type:

make -f openMP-Makefile clean

make -f openMP-Makefile mm_omp

Before running mm_omp, one needs to configure the target system for executing OpenMP programs. This is done by ensuring that the environment variables used by the chosen compiler and its OpenMP extension are properly set. For a quad-core Linux system running SuSE SLES 10, and the Intel Compiler version 11.1, the following values are recommended.

export OMP_NUM_THREADS=4    # Adjust this value to match the number of cores in the compute node of your choice.
                            # E.g., on talon nodes, this can be set to 12.
export KMP_AFFINITY=verbose,respect,granularity=core,scatter
export KMP_LIBRARY=turnaround
export KMP_SETTINGS=1
export KMP_STACKSIZE=512m
export KMP_VERSION=.TRUE.

For more details on the values and meaning of these environment variables, please consult the Intel Compiler manual and its OpenMP specification. Note that these environment variables are specific to the Intel Compiler and its OpenMP specification, and that they may differ based on the compiler of your choice and the specifics of its own OpenMP extension.

Launch the parallel OpenMP version of the simulation from the same directory. First create a directory for the output:

mkdir ../production_runs

Then, to run the simulation in batch mode while recording the running time and saving the output to separate files, type:

/usr/bin/time -p -o ../production_runs/mm_omp.time ./mm_omp | tee ../production_runs/mm_omp.log

Parallel microMegas (mmp - MPI version)

For the parallel microMegas simulations, also load the MPI libraries, e.g. OpenMPI ver. 1.4.2, by typing:

swsetup openmpi-intel-64

Note: To avoid compilation or execution errors, please make sure that any additional libraries you select, such as MPI, were built with the same compiler you are using. For instance, if you compile the code with the Intel compilers, select the MPI library that was built with the Intel compilers. Not doing so may cause unpredictable errors during the simulation.

Then in the ‘mM/bin’ directory, to compile only the batch version of the simulation, type:

make -f openMPI-Makefile clean

make -f openMPI-Makefile mmp

Launch the parallel MPI version of the simulation from the same directory. First create a directory for the output:

mkdir ../production_runs

Then, to run the simulation in batch mode on <x> processes while recording the running time and saving the output to separate files, type:

/usr/bin/time -p -o ../production_runs/mmp.time mpirun -np <x> ./mmp | tee ../production_runs/mmp.log

PBS batch execution

The serial code (mm), the OpenMP-based code (mm_omp) and the MPI-based code (mmp) can be launched either locally (as described in the subsection above, Simple batch execution) or remotely. For remote execution on high performance compute clusters, a PBS (Portable Batch System) script is needed to submit the execution as a job.

Below are three sample PBS scripts one could use to run any of the three versions of microMegas on the talon.hpc.msstate.edu high-performance cluster at HPC2. Each of these scripts can be copied and pasted into a file, e.g., mm.pbs.talon, mm_omp.pbs.talon or mmp.pbs.talon. To submit a PBS script to the jobs queue on talon, first log on to the talon-login node by typing:

rlogin talon-login

from any HPC2 machine, and then type:

qsub mm.pbs.talon           or             qsub mm_omp.pbs.talon         or             qsub mmp.pbs.talon

Note: microMegas is a long running code. To run long simulations please contact the HPC2 administrators to request access to the 'special' queue.
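After submitting, the job can be monitored and, if necessary, removed with the standard PBS commands (the job ID is the one reported by qsub):

qstat -u <your_username>
qstat -f <job_id>
qdel <job_id>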

PBS script for serial mM execution (mm executable) on talon.hpc.msstate.edu

#!/bin/bash
#PBS -N mm
#PBS -q special@talon
#PBS -l nodes=1:ppn=12
#PBS -l walltime=700:00:00
#PBS -m abe
#PBS -j oe
#PBS -r n
#PBS -V
# Set the stack size to unlimited

ulimit -s unlimited
# Set the core size to zero

ulimit -c 0
# List all resource limits

ulimit -a
echo "I ran on:"
# Print the nodes on which the project will run

cat $PBS_NODEFILE
# Change your execution directory to /data/lustre/<your_username> for fast
# I/O
cd /data/lustre/<your_username>

# Copy all necessary files from your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>) to the 
# execution directory (/data/lustre/<your_username>)
cp -fr /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mm .

# Go to the directory with the ‘mm’ executable
cd /data/lustre/<your_username>/mm/bin

# Run the serial 'mm' code with/without cross-slip activated
# (set GLDEV to T or F in /data/lustre/<your_username>/mm/in/ContCu)

/usr/bin/time -p -o ../serial_tests/[no-]cross-slip/mm.time ./mm | tee ../serial_tests/[no-]cross-slip/mm.log

# Move all files from the execution directory (/data/lustre/<your_username>) 
# back to your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>)

cd /data/lustre/<your_username>/
cp -fr mm/ /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mm_test/1
#

echo "All Done!"

PBS script for openMP-based mMpar execution (mm_omp executable) on talon.hpc.msstate.edu

#!/bin/bash
#PBS -N mm_omp
#PBS -q special@talon
#PBS -l nodes=1:ppn=12
#PBS -l walltime=700:00:00
#PBS -m abe
#PBS -j oe
#PBS -r n
#PBS -V
# Set the stack size to unlimited

ulimit -s unlimited
# Set the core size to zero

ulimit -c 0
# List all resource limits

ulimit -a
echo "I ran on:"
# Print the nodes on which the project will run

cat $PBS_NODEFILE
# Change your execution directory to /data/lustre/<your_username> for fast
# I/O
cd /data/lustre/<your_username>

# Copy all necessary files from your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>) to the 
# execution directory (/data/lustre/<your_username>)
cp -fr /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mm_omp .

# Go to the directory with the ‘mm_omp’ executable
cd /data/lustre/<your_username>/mm_omp/bin
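
# Suggested addition: if the OpenMP thread count is not already exported through your
# shell startup files (picked up via '#PBS -V' above), set it here; 12 matches ppn=12
export OMP_NUM_THREADS=12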

# Run the parallel ‘mm_omp’ code on 12 parallel threads with/without cross-
# slip activated (set GLDEV to be T or F in 
# /data/lustre/<your_username>/mm_omp/in/ContCu)

/usr/bin/time -p -o ../omp_tests/[no-]cross-slip/mm_omp.time ./mm_omp | tee ../omp_tests/[no-]cross-slip/mm_omp.log

# Move all files from the execution directory (/data/lustre/<your_username>) 
# back to your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>)

cd /data/lustre/<your_username>/
cp -fr mm_omp/ /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mm_omp_test/1
#

echo "All Done!"

PBS script for MPI-based mMpar execution (mmp executable) on talon.hpc.msstate.edu

#!/bin/bash
#PBS -N mmp
#PBS -q special@talon
#PBS -l nodes=4:ppn=12
#PBS -l walltime=700:00:00
#PBS -m abe
#PBS -j oe
#PBS -r n
#PBS -V
# Set the stack size to unlimited

ulimit -s unlimited
# Set the core size to zero

ulimit -c 0
# List all resource limits

ulimit -a
echo "I ran on:"
# Print the nodes on which the project will run

cat $PBS_NODEFILE
# Change your execution directory to /data/lustre/<your_username> for fast
# I/O
cd /data/lustre/<your_username>

# Copy all necessary files from your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>) to the 
# execution directory (/data/lustre/<your_username>)
cp -fr /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mmp .

# Go to the directory with the ‘mmp’ executable
cd /data/lustre/<your_username>/mmp/bin

# Run the parallel ‘mmp’ code on 4x12=48 parallel processes with/without 
# cross-slip activated (set GLDEV to be T or F in 
# /data/lustre/<your_username>/mmp/in/ContCu)

/usr/bin/time -p -o ../mmp_tests/[no-]cross-slip/mmp.time mpirun -np 48 -machinefile $PBS_NODEFILE ./mmp | tee ../mmp_tests/[no-]cross-slip/mmp.log

# Move all files from the execution directory (/data/lustre/<your_username>) 
# back to your project directory 
# (/cavs/cmd/data1/users/<your_username>/<your_project_directory>)

cd /data/lustre/<your_username>/
cp -fr mmp/ /cavs/cmd/data1/users/<your_username>/<your_project_directory>/mmp_test/1
#

echo "All Done!"

Output files

The output files are located in the mM/out directory. The most important output files are briefly described below. For more details on the content and meaning of each file, please refer to the files themselves.

  • BVD.CFC - the set of reference vectors used in the simulation for a given crystal
  • bigsave.bin - a binary file containing everything needed to re-start a simulation if it is accidentally stopped
  • film.bin - a binary file where the coordinates of the segments are periodically saved to build up a trajectory file
  • gamma - a file containing the evolution of gamma for all existing slip systems
  • gammap - a file containing the evolution of the instantaneous gamma dot for all the slip systems
  • rau - a file containing the evolution of rho, the dislocation density, for all the slip systems
  • raujonc - a file containing the evolution of the junction density and number for all slip systems
  • resul - a GNU plotting script for plotting various simulation data (run 'gnuplot resul' to see the results)
  • sigeps - an output file containing the stress, strain and other information (an accompanying file to "stat"; see the Gnuplot sketch after this list)
  • stat - a file where most of the global statistics of the simulation are written
  • travapp - a file containing the evolution of the applied mechanical work (presently do not trust those computations)
  • travint - a file containing the evolution of the internal mechanical work (presently do not trust those computations)
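As a quick alternative to the provided resul script, a minimal Gnuplot sketch for the stress-strain data in sigeps could look like the one below; the column numbers are assumptions, so check the header of sigeps (or the resul script) for the actual column order:

# plot_sigeps.gp - minimal stress-strain plot (column order assumed; verify against sigeps/resul)
set xlabel "Strain"
set ylabel "Stress"
plot "sigeps" using 1:2 with lines title "mM stress-strain"

Run it with "gnuplot -persist plot_sigeps.gp" from the mM/out directory.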

References

Please remember to cite the relevant references from the articles below when publishing results obtained with microMegas:

  • F. M. Ciorba, S. Groh and M. F. Horstemeyer. Parallelizing discrete dislocation dynamics simulations on multi-core systems. 10th Int. Conf. on Computational Science, Procedia Computer Science, 1:1, pp. 2129-2137, 2010.
  • S. Groh, E. B. Marin, M. F. Horstemeyer, and H. M. Zbib. Multiscale modeling of the plasticity in an aluminum single crystal. Int. J. of Plasticity, 25, pp. 1456-1473, 2009.
  • S. Groh and H. M. Zbib. Advances in Discrete Dislocations Dynamics and Multiscale Modeling, J. Eng. Mater. Technol. vol. 131:4, 041209 (10 pages), 2009.
  • Multiscale Modeling of Heterogeneous Materials: From Microstructure to Macro-scale Properties. Chapter 2: Discrete Dislocation Dynamics: Principles and Recent Applications (by Marc Fivel). Edited by Oana Cazacu. Wiley. ISBN: 9781848210479, 2008.
  • B. Devincre, V. Pontikis, Y. Brechet, G.R. Canova, M. Condat and L.P. Kubin. Three-dimensional Simulations of Plastic Flow in Crystals. Plenum Press: New York, M. Marechal, B.L. Holian (eds.), 1992, p. 413
  • L.P. Kubin and G. R. Canova. The modelling of dislocation patterns. Scripta Metall., 27, pp. 957-962, 1992.

back to the Material Models home
