Pre-compiled executables for Wind-US and/or the tools are not available for the NASA Advanced Supercomputing (NAS) systems, columbia and pleiades. Users must install the build distribution into their local directory. This section describes the procedure for doing so.
Users are reminded that they are responsible for protecting the dissemination of Wind-US, particularly on a shared computing resource like the NAS. It is therefore recommended that users restrict the access permissions on their NAS home and nobackup directories to prevent access by other users. If this is not already the default behavior, the following commands may be used.
chmod go-rwx FileName
umask 077
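As a quick sanity check, the effect of umask 077 can be verified by creating a new file and inspecting its permissions (a throwaway scratch directory is used here so nothing in your home directory is touched):

```shell
# With umask 077, newly created files get mode 600: no group/other access.
cd "$(mktemp -d)"
umask 077
touch newfile
ls -l newfile | cut -c1-10    # prints: -rw-------
```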
Storage space on NAS is split between a rather limited (8 GB) $HOME directory and a larger (200 GB) /nobackup/$USER directory. Most users install Wind-US into their home directory and submit jobs from their nobackup directory. Offline tape storage is available by transferring files to the machine called lou. Please see the NAS website for additional details.
The NAS computing cluster comprises several types of compute nodes, each with a different number and type of processor cores. The user can select which nodes to use by specifying the model name within the PBS script. The default option for most queues is Westmere.
Because each processor type has a different computational efficiency,
NAS charges for their use via a Standard Billing Unit (SBU).
In this scheme, use of faster computing nodes incurs a larger SBU cost.
Each submitted job is given exclusive access to the requested nodes.
The user is charged for each node, even if the job does not
utilize all of the available processors.
To make the most of their allotted time on NAS, users should try to
fully utilize each computing node.
The table below summarizes the available computing resources.
Processor Type | Model Name | SBU/node | CPUs/node | RAM(GB)/node |
---|---|---|---|---|
Westmere | wes | 1.00 | 12 | 22.5 |
Sandy Bridge | san | 1.82 | 16 | 30 |
Ivy Bridge | ivy | 2.52 | 20 | 62 |
Haswell | has | 3.34 | 24 | 122 |
For more information, visit:
windus.build.tar.gz
$HOME/WINDUS/windus.build.tar.gz
cd $HOME/WINDUS
gunzip -c windus.build.tar.gz | tar xvf -

This should extract everything to the new subdirectory:
$HOME/WINDUS/wind-dev
module load comp-intel/2013.5.192
module load mpi-sgi/mpt.2.11r13
setenv CFDROOT "$HOME/WINDUS/wind-dev"
setenv WIND_DEV "$HOME/WINDUS/wind-dev"
source $HOME/WINDUS/wind-dev/bin/cfd.login

Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
module load comp-intel/2013.5.192
module load mpi-sgi/mpt.2.11r13
CFDROOT="$HOME/WINDUS/wind-dev" ; export CFDROOT
WIND_DEV="$HOME/WINDUS/wind-dev" ; export WIND_DEV
. $HOME/WINDUS/wind-dev/bin/cfd.profile

(Note that sh-family assignments must not have spaces around the "=".) This sets up the Intel Fortran/C compilers, loads SGI's MPI message passing toolkit, and causes the contents of the cfd.login (or cfd.profile) file to be executed automatically for each new shell instance. The environment variables CFDROOT, WIND_DEV, SYSTEM, and SYSTEM_CPU, which are required for installing and running Wind-US, will also be set, and the PATH will be modified to include the location of the Wind-US executable directories.
printenv CFDROOT
printenv WIND_DEV
printenv SYSTEM
printenv SYSTEM_CPU
ifort --version
icc --version
which mpiexec

They should have values similar to:
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev
LINUX64-GLIBC2.11
XEON
ifort (IFORT) 13.1.3 20130607
icc (ICC) 13.1.3 20130607
/nasa/sgi/mpt/2.11r13/bin/mpiexec

Note that $HOME will likely be expanded to your full home directory path. If the variables are not set, or are set incorrectly, go back to step 4 and try again.
cd $WIND_DEV

This should put you in the $HOME/WINDUS/wind-dev directory.
If you plan to build both Wind-US and the tools, then follow the instructions below for unpacking the tools distribution before making any changes to the makefiles. The reason for this is that the tools distribution also contains a copy of the makefiles and will overwrite any changes you might make here.
The default values in the configuration files should be sufficient to produce an executable. To be safe, review the contents of the following files or parts of files, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the items noted below.
cp -p source/makefiles/Makefile.include.LINUX64-GLIBC2.4.XEON.opt source/makefiles/Makefile.include.LINUX64-GLIBC2.11.XEON.opt
whereis -b cpp
ABI=      -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -fpic -traceback
FC=       ifort
FCOMP=    $(FC) $(ABI) -pad -ip -DINTEL
FFHOP=    -O3 -xSSE4.2 -axAVX
FFOPT=    -O3 -xSSE4.2 -axAVX
FFLOW=    -O1
FFNOP=    -O0
F90=      ifort $(ABI) -pad -ip -DINTEL
F90FHOP=  -O3 -xSSE4.2 -axAVX
F90FOPT=  -O3 -xSSE4.2 -axAVX
F90FLOW=  -O1
F90FNOP=  -O0
CC=       icc
CCOMP=    $(CC) $(ABI)
ANSI=     -ansi
POSIX=    -D_POSIX_SOURCE
CFOPT=    -O2 -xSSE4.2 -axAVX
CCP=      icpc $(ABI)
CPFOPT=   -O2 -xSSE4.2 -axAVX
LD=       ifort $(ABI) -O3 -ip -pad -xSSE4.2 -axAVX
The compiler flags -xSSE4.2 and -axAVX are specific optimizations for the Westmere, Sandy Bridge, and Ivy Bridge processors that set the baseline code path and alternate optimized code paths respectively. For more information, visit:
MPILIBS= -L/nasa/sgi/mpt/2.11r13/lib -lmpi
The SGI MPI module makes available an mpif90 command that automatically passes the library information to the Intel compiler through the LD_LIBRARY_PATH, LIBRARY_PATH, and F_PATH environment variables. However, all code compiled with this command will include the MPI libraries, even if they are not needed. This has been found to cause problems with some of the tools to be compiled below. Specifying ifort as the compiler and setting the MPILIBS makefile variable ensures that the MPI libraries will only be included in the Wind-US executable.
#PBS -lselect=1:ncpus=4,walltime=2:00:00
cd $PBS_O_WORKDIR
echo "CFDROOT = $CFDROOT" |& tee make.wind.log
echo "WIND_DEV = $WIND_DEV" |& tee -a make.wind.log
echo "SYSTEM = $SYSTEM" |& tee -a make.wind.log
echo "SYSTEM_CPU = $SYSTEM_CPU" |& tee -a make.wind.log
cd $WIND_DEV
make opt |& tee -a make.wind.log

For sh and ksh, use the above but replace "|&" with "2>&1 |".
cd $WIND_DEV
qsub -q devel make.wind.pbs

This may take roughly 30 minutes once your job begins running. You can use the qstat command to check on its status.
$WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin/Wind-USalpha.exe

If the executable was not produced, examine the log file ($WIND_DEV/make.wind.log) for compilation errors.
make install
make install_scripts
make copy_pvm

This will:
wind

The new executable should appear in the list of available versions.
Select the desired version
 0: Exit wind
 1: Wind-US Alpha

See the instructions below for running Wind-US on the NAS.
Unlike the Wind-US build distribution, obtaining the source for all the tools requires several downloads. The smaller tools are all bundled together and may be acquired from the "Downloads" page of the "Tools Makefiles" project. GMAN, CFPOST, and MADCAP are normally downloaded separately from their respective "Downloads" pages. Note that the Project Names for these are "gmanpre", "cfpost_prod", and "Madcap production", respectively. The instructions below only describe how to install the smaller utilities and CFPOST on NAS, since GMAN and MADCAP are more graphical in nature and typically not used over a remote connection.
Each build distribution is designed to be a completely independent package, so that the tools can be built without requiring any additional files from IVMS. [There are some exceptions to this, such as CFPOST, described below.] Thus, one could have separate directory trees for the Wind-US build distribution and each of the tools build distributions. This would lead to a great deal of duplication, however. Therefore, the build distributions are designed to overlay one another. The following instructions assume that all the tools are being built in the same tree.
Note that the build procedure for the tools is somewhat more complicated than the Wind-US build process. Please report problems to the NPARC Alliance User Support Team at nparc-support@arnold.af.mil or (931) 454-7885.
tools.build.tgz
cfpost_prod.build.tar.gz
lua.4.0.1.tar.gz

Due to changes in the API, newer versions of lua will not work with the Wind-US tools!
cgnslib.2.5.5.tar.gz

Newer versions of cgnslib may also work.
$HOME/WINDUS/tools.build.tar.gz $HOME/WINDUS/cfpost_prod.build.tar.gz $HOME/WINDUS/lua.4.0.1.tar.gz $HOME/WINDUS/cgnslib.2.5.5.tar.gz
cd $HOME/WINDUS
gunzip -c tools.build.tar.gz | tar xvf -
gunzip -c cfpost_prod.build.tar.gz | tar xvf -
gunzip -c lua.4.0.1.tar.gz | tar xvf -
gunzip -c cgnslib.2.5.5.tar.gz | tar xvf -

This should extract everything to the following subdirectories:
$HOME/WINDUS/wind-dev/tools-dev/*
$HOME/WINDUS/wind-dev/tools-dev/cfpost_prod
$HOME/WINDUS/lua-4.0.1
$HOME/WINDUS/cgns_2.5
module load comp-intel/2013.5.192
module load mpi-sgi/mpt.2.11r13
setenv CFDROOT "$HOME/WINDUS/wind-dev"
setenv WIND_DEV "$HOME/WINDUS/wind-dev"
setenv TOOLSROOT "$HOME/WINDUS/wind-dev/tools-dev"
source $HOME/WINDUS/wind-dev/bin/cfd.login
source $HOME/WINDUS/wind-dev/tools-dev/bin/tools.login

Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
module load comp-intel/2013.5.192
module load mpi-sgi/mpt.2.11r13
CFDROOT="$HOME/WINDUS/wind-dev" ; export CFDROOT
WIND_DEV="$HOME/WINDUS/wind-dev" ; export WIND_DEV
TOOLSROOT="$HOME/WINDUS/wind-dev/tools-dev" ; export TOOLSROOT
. $HOME/WINDUS/wind-dev/bin/cfd.profile
. $HOME/WINDUS/wind-dev/tools-dev/bin/tools.profile

This causes the contents of the tools.login (or tools.profile) file to be executed automatically at login time, which sets the environment variable TOOLSROOT required for installing and running the Wind-US tools, and modifies PATH to include the location of the proper executables.
printenv CFDROOT
printenv WIND_DEV
printenv TOOLSROOT
printenv SYSTEM
printenv SYSTEM_CPU
ifort --version
icc --version

They should have values similar to:
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev/tools-dev
LINUX64-GLIBC2.11
XEON
ifort (IFORT) 13.1.3 20130607
icc (ICC) 13.1.3 20130607

(Note that $HOME may be expanded to your full home directory path.) If the variables are not set, or are set incorrectly, go back to step 4 and try again.
cd $HOME/WINDUS/lua-4.0.1
make

This should create the following files:
$HOME/WINDUS/lua-4.0.1/bin/lua
$HOME/WINDUS/lua-4.0.1/bin/luac
$HOME/WINDUS/lua-4.0.1/lib/liblua.a
$HOME/WINDUS/lua-4.0.1/lib/liblualib.a
cd $HOME/WINDUS/cgnslib_2.5
./configure --prefix=$HOME/WINDUS/cgnslib_2.5 --with-system=LINUX64 --enable-64bit
make SYSTEM=LINUX64
mkdir include
mkdir lib
make install

This should create the following files:
$HOME/WINDUS/cgnslib_2.5/include/cgnslib.h
$HOME/WINDUS/cgnslib_2.5/include/cgnslib_f.h
$HOME/WINDUS/cgnslib_2.5/include/cgnswin_f.h
$HOME/WINDUS/cgnslib_2.5/lib/libcgns.a
cd $WIND_DEV

This should put you in the $HOME/WINDUS/wind-dev directory.
#TOOLS_STATLIB= -static
#TOOLS_SHARLIB= -shared

The default behavior of the Intel compiler is to use shared libraries.
TOOLS_TCLLIBS=  -L/usr/lib64 -ltcl8.5
TOOLS_LUALIBS=  -L$(HOME)/WINDUS/lua-4.0.1/lib -llua -llualib
TOOLS_CGNSLIBS= -L$(HOME)/WINDUS/cgnslib_2.5/lib -lcgns
TCL_INCLUDE=    $(INCCMD)/usr/include
LUA_INCLUDE=    $(INCCMD)$(HOME)/WINDUS/lua-4.0.1/include
CGNS_INCLUDE=   $(INCCMD)$(HOME)/WINDUS/cgnslib_2.5/include
TOOLS_OSLIBS=  -L/nasa/intel/Compiler/2013.5.192/lib/intel64 -lifcore -lirc -limf -lpthread
TOOLS_CPPLIBS= -L/nasa/intel/Compiler/2013.5.192/lib/intel64 -lifcore -lirc -limf
MADCAP_OSLIBS= -L/nasa/intel/Compiler/2013.5.192/lib/intel64 -lifcore -lcxa -lunwind -lpthread -lstdc++

Depending on which version of the Intel compiler you choose, the library files may be in a different subdirectory below /nasa/intel. These library paths might already be present in the environment variables.
cp -p tools-dev/libmdgl/LINUX64-GLIBC2.7.mkf tools-dev/libmdgl/LINUX64-GLIBC2.11.mkf
ABI=    -mcmodel=medium -pad -pc80 -fp-model strict -fno-alias
CC=     icc
CFLOPT=

Note that CFLOPT is defined to be empty.
On NAS, the front end node has the openmotif-devel-* package installed, which provides a number of header files needed to compile the Wind-US tools. The worker nodes only have the library files installed, so the tools must be compiled on the front end node. Therefore, instead of using a batch script like the one used to compile Wind-US, simply compile the tools from the command line.
Make sure you are in the build directory.
cd $WIND_DEV

Next, csh and tcsh users should do
make all_tools |& tee make_tools.log

while sh and ksh users should do
make all_tools 2>&1 | tee make_tools.log
Tools can also be compiled individually, by doing
make tool_name

where tool_name is the name of the tool. Note that the names to be used for GMAN, CFPOST, and MADCAP are gmanpre, cfpost_prod, and madcapprod, respectively.
After compilation is complete, the following programs should appear in directory $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin
adfedit          cfappend.exe      cfaverage.exe      cfbeta.exe
cfcnvt           cfcombine.exe     cflistnum          cfnav.exe
cfpart.exe       cfpost_prod.exe   cfreorder.exe      cfreset_iter.exe
cfrevert.exe     cfsequence.exe    cfspart            cfsplit.exe
cfsubset.exe     cfunsequence.exe  cfview.exe         chmgr.exe
decompose.exe    delvars           fpro               gpro.exe
gridvel.exe      icees             jormak.exe         lstvars
mpigetnzone      newtmp            npair              parcnl
readcf           resplt.exe        rnparc             rplt3d
scan             terp              thplt.exe          timplt.exe
tmptrn.exe       USintrpltQ.exe    usplit-hybrid.exe  windpar.exe
wnparc           wplt3d            writcf
Depending on the size of the array parameters requested, thplt.exe might not get created with the default memory model. If it was not created, then edit $WIND_DEV/source/makefiles/Makefile.include.$SYSTEM.$SYSTEM_CPU.opt to use the following ABI settings:
ABI= -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -mcmodel medium -traceback

and recompile just that tool. From the build directory, remove the old object file.
cd $WIND_DEV
rm -f OBJECTS/$SYSTEM/$SYSTEM_CPU/thplt.o

Next, csh and tcsh users should do
make thplt |& tee make_thplt.log

while sh and ksh users should do
make thplt 2>&1 | tee make_thplt.log

Check $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin to confirm that thplt.exe was created.
make install_tools

This copies the tools executables to $TOOLSROOT/$SYSTEM/$SYSTEM_CPU/bin.
run.dat
run.cgd
run.mpc
run.lis (if continuing from a previous solution)
run.cfl (if continuing from a previous solution)

The run.mpc file should have the following form:
/ Wind-US parallel processing file for NAS.
/ Currently set to use 2 nodes with 12 processors each
/ and 1 node with 6 processors,
/ for a total of 30 cores.
/
host localhost nproc 12
host localhost nproc 12
host localhost nproc 6

Each type of NAS compute node has a different number of processors. For example, the Westmere nodes have 12 processor cores, so each host entry above has at most nproc 12. The user will need to experiment to determine whether the best performance is obtained when all of the processor cores on a given host are used (12+12+6=30) or when the hosts are most evenly balanced (10+10+10=30). The difference between internode and intranode communication might be mesh dependent.
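If you want to try the evenly balanced arrangement, the host lines can be generated rather than written by hand. The snippet below is an illustrative helper (not part of the Wind-US distribution) that distributes a total process count as evenly as possible across a given number of hosts:

```shell
# Distribute NPROC worker processes as evenly as possible over NNODES hosts,
# emitting one run.mpc host line per node.
NPROC=30
NNODES=3
awk -v n="$NPROC" -v k="$NNODES" 'BEGIN {
    base = int(n / k); rem = n % k
    for (i = 0; i < k; i++)
        printf "host localhost nproc %d\n", base + (i < rem ? 1 : 0)
}'
```

For 30 processes over 3 hosts, this prints three "host localhost nproc 10" lines, which can be pasted into run.mpc in place of the 12+12+6 entries.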
Users might also want to include a checkpoint
command in the above file so that the worker solutions are
sent to the master process at regular intervals.
Please see the User's Manual
for more details on the format and features of the parallel
processing file.
wind -runinplace -cl -usessh -mpmode MPI -mpiver SGI
wind -runinplace -cl -usessh -mpmode PVM
model=wes

The model names for other processor types are listed in the table at the top of this page.
The maximum solver execution time is determined by subtracting the termination processing time from the solver run wall clock time. When the Wind-US run job is submitted, it will create a preNDSTOP file. When the maximum solver execution time has expired, this file will be renamed NDSTOP, forcing Wind-US to begin a graceful shutdown.
Users should make sure that the termination processing time is sufficient to allow Wind-US to complete the termination process. At the end of the *.lis file, there is a summary indicating the time spent during execution and termination.
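The same NDSTOP mechanism can be triggered by hand if you need to stop a run early: renaming the sentinel file in the run directory causes Wind-US to begin a graceful shutdown the next time it checks for it. A minimal illustration of the rename, using a scratch directory in place of a real run directory:

```shell
# In a real run directory, preNDSTOP is created when the job starts.
cd "$(mktemp -d)"
touch preNDSTOP
# Renaming it to NDSTOP signals Wind-US to shut down gracefully.
mv preNDSTOP NDSTOP
ls    # prints: NDSTOP
```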
Users should also make sure that they request adequate time
from the queue in which they submit their jobs. This is
detailed in the next step.
Otherwise, the queue will terminate Wind-US, resulting in a
less than graceful shutdown.
#PBS -l nodes=1234:ppn=2

replace it with
#PBS -l select=2:ncpus=12+1:ncpus=6
#PBS -l walltime=40:00:00
#PBS -m e

This will select 2 nodes with 12 CPUs and 1 node with 6 CPUs, which matches the request in the run.mpc file. The walltime can be specified in hh:mm:ss format or as an integer number of seconds, and the last line will send you an email when your job completes.
Make sure to request at least as much walltime as was
specified in the Wind-US prompts above, because the queuing
system does not terminate jobs as cleanly as Wind-US does.
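Since PBS accepts the walltime either in hh:mm:ss format or as integer seconds, the conversion can be done with a one-liner (shown here for the 40:00:00 request above):

```shell
# Convert an hh:mm:ss walltime to integer seconds.
echo "40:00:00" | awk -F: '{ print $1*3600 + $2*60 + $3 }'    # prints: 144000
```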
cp -p run.job.pl run.job.pl.bak

To resubmit later, you can skip the above steps and simply reuse the run script.
cp -p run.job.pl.bak run.job.pl

Note that if you increase the number of cycles in your *.dat file, you may need to adjust the run time specified in the job file.
qsub -q long run.job.pl

Some other useful commands are:
Command | Action |
---|---|
qstat -q | List all queue names and run limits. |
qstat -a long | List all jobs running in the long queue. |
qstat -u USER | List all jobs running for username USER. |
qdel JOBNAME | Delete job with name JOBNAME. Useful if Wind-US has not yet started. |
Last updated 20 Aug 2015