Running PBS script with LAMMPS

Latest revision as of 16:20, 16 April 2015

== Abstract ==

This example shows how to run LAMMPS (or any other UNIX executable) on a UNIX cluster that uses the PBS batch scripting language (e.g., the Raptor and Talon clusters at CAVS). It also shows how to run LAMMPS on a cluster that does not use PBS (e.g., the Javelin and Bazooka clusters at CAVS). The example uses the LAMMPS input script from Tutorial 1. Note that these scripts are written for the HPC clusters at Mississippi State University; other clusters and universities may require alterations.

Author(s): Mark A. Tschopp

== LAMMPS Input File ==

== Download an input file ==

This input script was run using the Jan 2010 version of LAMMPS. Changes in some commands may require revision of the input script. Copy the text below and paste it into a text file, 'calc_fcc.in'. Use the 'Paste Special' command with 'Unformatted Text'. Notice that the replicate command in the following script produces a 20 x 20 x 20 simulation cell (32,000 atoms) that will be run on 16 processors. Notice that we get the same cohesive energy as the 4-atom run in Tutorial 1.
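As a quick check on the numbers quoted above: the conventional fcc unit cell contains 4 atoms, so replicating it 20 x 20 x 20 times gives the 32,000-atom cell. In shell arithmetic:

```shell
# Conventional fcc unit cell: 4 atoms; replicated 20x along x, y, and z.
atoms_per_cell=4
nx=20; ny=20; nz=20
echo "$((atoms_per_cell * nx * ny * nz)) atoms"   # prints: 32000 atoms
```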


# Find minimum energy fcc configuration
# Mark Tschopp, 2010

# ---------- Initialize Simulation --------------------- 
clear 
units metal 
dimension 3 
boundary p p p 
atom_style atomic 
atom_modify map array

# ---------- Create Atoms --------------------- 
lattice fcc 4
region box block 0 1 0 1 0 1 units lattice
create_box 1 box

lattice fcc 4 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1
create_atoms 1 box
replicate 20 20 20

# ---------- Define Interatomic Potential --------------------- 
pair_style eam/alloy 
pair_coeff * * Al99.eam.alloy Al
neighbor 2.0 bin 
neigh_modify delay 10 check yes 
 
# ---------- Define Settings --------------------- 
compute eng all pe/atom 
compute eatoms all reduce sum c_eng 

# ---------- Run Minimization --------------------- 
reset_timestep 0 
fix 1 all box/relax iso 0.0 vmax 0.001
thermo 10 
thermo_style custom step pe lx ly lz press pxx pyy pzz c_eatoms 
min_style cg 
minimize 1e-25 1e-25 5000 10000 

variable natoms equal "count(all)" 
variable teng equal "c_eatoms"
variable length equal "lx"
variable ecoh equal "v_teng/v_natoms"

print "Total energy (eV) = ${teng};"
print "Number of atoms = ${natoms};"
print "Lattice constant (Angstroms) = ${length};"
print "Cohesive energy (eV) = ${ecoh};"

print "All done!" 
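The final print commands above leave the key results in the screen output. One way to pull a number back out afterward is a grep/sed one-liner; a minimal sketch, using a hypothetical sample line rather than a real log file (the value shown is illustrative, not a computed result):

```shell
# Fabricate one line of the sort the script's print commands emit
# (illustrative value, not a real LAMMPS result):
printf 'Cohesive energy (eV) = -3.36;\n' > log.sample

# Strip everything but the number:
grep 'Cohesive energy' log.sample | sed 's/.*= *//; s/;.*//'   # prints: -3.36
```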

== PBS batch script ==

Here is an example batch script for Raptor. Copy the text below and paste it into a text file, 'pbs_Raptor_calc_fcc.txt'. Use the 'Paste Special' command with 'Unformatted Text'.


#!/bin/sh 
#PBS -N calc_fcc 
#PBS -q q16p192h@Raptor 
#PBS -l nodes=4:ppn=4 
#PBS -l walltime=192:00:00 
#PBS -mea 
#PBS -r n 
#PBS -V 
cd $PBS_O_WORKDIR 
mpirun -np 16 lmp_exe < calc_fcc.in

Here is an example batch script for Talon. Copy the text below and paste it into a text file, 'pbs_Talon_calc_fcc.txt'. Use the 'Paste Special' command with 'Unformatted Text'.


#!/bin/sh 
#PBS -N calc_fcc
#PBS -q q192p48h@Talon 
#PBS -l nodes=16:ppn=12 
#PBS -l walltime=48:00:00 
#PBS -mbea 
#PBS -r n 
#PBS -V 
cd $PBS_O_WORKDIR 
mpirun -np 192 lmp_exe < calc_fcc.in

Here is an example batch script for running the MATLAB script, "run_MATLAB-script_Raptor.m", on Raptor. Copy the text below and paste it into a text file, 'pbs_Raptor_MATLAB.txt'. Use the 'Paste Special' command with 'Unformatted Text'.


#!/bin/sh 
#PBS -N calc_fcc 
#PBS -q q16p192h@Raptor 
#PBS -l nodes=4:ppn=4 
#PBS -l walltime=192:00:00 
#PBS -mea 
#PBS -r n 
#PBS -V 
cd $PBS_O_WORKDIR 
matlab -nodesktop -nodisplay -nosplash -nojvm -r "run_MATLAB-script_Raptor;"

== Running simulations using a batch script ==

Here are the steps for running on Raptor:

  1. Open a Secure Shell client on your computer.
  2. Quick connect using hostname "raptor-login" with your user name.
  3. Congratulations! You are now logged on to the login node for Raptor. Do not run any simulations on this node - it is meant only for submitting PBS scripts, which the PBS scheduler then assigns to compute nodes.
  4. Type "swsetup pbs" to set up the paths to the PBS scheduler.
  5. Change to the directory that contains your input script and your PBS script, e.g., "cd work_directory", where work_directory is the directory containing your scripts.
  6. Type "qsub pbs_Raptor_calc_fcc.txt" and hit enter. Your job has been submitted and will run when the scheduler can fit it in.

The same steps can be used for Talon. Log in to "talon-login" with your user name, change to the directory with your scripts, type "qsub pbs_Talon_calc_fcc.txt", and hit enter.

A few things about the PBS scheduler:

  • There are numerous websites online that explain the #PBS commands in the PBS file. Use them for questions.
  • The "#PBS -q q192p48h@Talon" line tells the scheduler what queue you will be running on. In this queue, you can request a maximum of 192 processors for 48 hours. A quick trick for checking what queues may be available on a cluster is to type "qstat" and see what other people are using.
  • The "#PBS -l nodes=16:ppn=12" line is specific to the cluster that you are running on. For Talon, there are 12 processors per node, so 16 nodes are needed for 192 processors.
  • The "cd $PBS_O_WORKDIR" line changes the directory to whatever directory you submitted your PBS script from. When the PBS script is submitted, certain variables are stored with it, $PBS_O_WORKDIR being one of them.
  • The PBS scheduler can handle a large number of PBS scripts (if you have many to run) and will prioritize which job runs next.
  • The "qstat" command can be used to display the status of your job.
  • If you know approximately how long your job will take, you can request fewer processors and a shorter walltime (the "#PBS -l walltime=48:00:00" line). Sometimes the scheduler can fit smaller simulations in between the larger simulations that are scheduled to run. For instance, consider a job that requests 192 processors for 48 hours when only 96 are available. If another 96 processors will not become available for 24 hours, the scheduler can fit in a job that only requires 96 processors for 24 hours while it waits for processors for the larger job. The whole goal of the PBS scheduling system is to get as close as possible to 100% usage of its processors.
  • Adjust the number of processors through the "#PBS -l nodes=16:ppn=12" line, not the queue line. For example, if you only require 96 processors, change this line to "#PBS -l nodes=8:ppn=12". Remember to alter your "mpirun -np 96 lmp_exe < calc_fcc.in" line too!
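Since the nodes/ppn request and the mpirun processor count must agree with each other, it can help to derive one from the other. A sketch with illustrative values for a Talon-style cluster (12 processors per node):

```shell
# Illustrative request: 8 nodes at 12 processors per node = 96 MPI ranks.
NODES=8
PPN=12
NP=$((NODES * PPN))

# These two lines must stay consistent with each other:
echo "#PBS -l nodes=${NODES}:ppn=${PPN}"
echo "mpirun -np ${NP} lmp_exe < calc_fcc.in"
```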

== Without PBS Scheduler ==

This is easy. Just use the executable line from the PBS batch script:

  1. Open a Secure Shell client on your computer.
  2. Quick connect using hostname "javelin" with your user name.
  3. Congratulations! You are now logged on to Javelin. Unlike clusters with PBS, where you only ever touch the login node, here you can run your job on multiple processors directly. There is a scheduler that tries to balance the load, though.
  4. Change to the directory that contains your input script, e.g., "cd work_directory", where work_directory is the directory containing your scripts.
  5. Type "lmp_exe < calc_fcc.in" to run on 1 processor or "mpirun -np 12 lmp_exe < calc_fcc.in" to run on 12 processors.

IMPORTANT NOTES:

  1. There are far fewer processors on Javelin, so do not, for example, request 128 processors on a cluster with only 24. It will try to run, but the communication between processors will severely slow down the calculation.
  2. Understand that processors may switch between jobs on the fly to try to balance the load. Therefore, you can run a job on top of someone else's job. If all the processors are being used, you can still run your job, but it will be slowed down by the other jobs running. This is the advantage of running on a cluster with the PBS scheduler - you get sole possession of the processors for running your job.
  3. Clusters like these are often useful for debugging and setting up codes prior to running on the bigger clusters with PBS schedulers, because for a relatively small simulation you do not have to go through a scheduler to execute your code.
  4. How many processors should I use? Type 'top' to see what is running currently. If nothing is running and your simulation is short, feel free to use more processors than you would if many processes were running. Type 'q' to quit the 'top' screen.
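Per note 4 above, it is worth checking how many processors the machine actually has before picking a count for mpirun. A sketch using getconf (the 128-processor request is the illustrative bad example from note 1):

```shell
# How many processors does this machine have?
ncpu=$(getconf _NPROCESSORS_ONLN)
echo "This machine has ${ncpu} processors."

# Cap an oversized request (e.g., 128) at the machine's actual count.
requested=128
np=$(( requested < ncpu ? requested : ncpu ))
echo "Using ${np} processors."
```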

== Go Back ==

*[[MaterialModels:_Nanoscale | Nanoscale]]

[[Category: Script]]
[[Category: Tutorial]]
[[Category: LAMMPS]]
