Pre-compiled executables for Wind-US and/or the tools are not available for the NASA Advanced Supercomputing (NAS) systems: columbia and pleiades. Users must install the build distribution into their local directory. This section describes the procedure for doing this.
Users are reminded that they are responsible for controlling the dissemination of Wind-US, particularly on a shared computing resource like the NAS. It is therefore recommended that users restrict the access permissions on their NAS home and nobackup directories to prevent access by other users. If this is not already the default behavior, one can use the following commands.
To restrict access to an existing file or directory use the command:
chmod go-rwx filename
To restrict access to all future files and directories created during the current session use the command:
umask 077
Depending on which linux shell is being used, the umask command above can be placed in the $HOME/.login or $HOME/.profile login scripts to apply to future sessions.
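As a quick sanity check, one can verify that the umask actually strips group and other permissions from newly created files. A minimal sketch, using a temporary directory so nothing in your home directory is touched:

```shell
# Sketch: confirm that umask 077 results in owner-only permissions on new files
umask 077
d=$(mktemp -d)
touch "$d/newfile"
stat -c %a "$d/newfile"   # prints 600 (owner read/write only)
rm -rf "$d"
```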
Storage space on NAS is split between a rather limited $HOME directory and a larger /nobackup/$USER directory. Most users install Wind-US into their home directory and submit jobs from their nobackup directory. Offline tape storage is available by transferring files to the machine called lou. Please see the NAS website for additional details.
The NAS computing cluster comprises various computing nodes, each containing a different number and type of processor cores. The user must select which nodes to use by specifying the model name within the PBS script.
Because each processor type has a different computational efficiency, NAS charges for their use via a Standard Billing Unit (SBU). In this scheme, use of faster computing nodes incurs a larger SBU cost. Each submitted job is given exclusive access to the requested nodes. The user is charged for using each node, even if the job does not utilize all of the available processors. To make the most of their allotted time on NAS, users should try to fully utilize each computing node. The table below summarizes the available computing resources.
Processor Type | Model Name | SBU/node | CPUs/node | RAM(GB)/node |
---|---|---|---|---|
Sandy Bridge | san | 0.47 | 16 | 32 |
Ivy Bridge | ivy | 0.66 | 20 | 64 |
Haswell | has | 0.80 | 24 | 128 |
Broadwell | bro | 1.00 | 28 | 128 |
Skylake | sky_ele | 1.59 | 32 | 192 |
Cascade Lake | cas_ait | 1.64 | 40 | 192 |
Rome | rom_ait | 4.06 | 128 | 512 |
Note: The amount of memory available to a PBS job is less than the total physical memory because the system kernel can use up to 4 GB of memory in each node.
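The SBU charge for a job is simply nodes × SBU-rate × wall-clock hours. As a hypothetical example, the cost of a 40-hour job on 3 Ivy Bridge nodes (0.66 SBU/node from the table above) can be estimated as:

```shell
# Sketch: estimate the SBU charge for a job (nodes * rate * wall-hours)
nodes=3; hours=40; rate=0.66      # illustrative: 3 Ivy Bridge nodes for 40 hours
awk -v n=$nodes -v h=$hours -v r=$rate 'BEGIN { printf "%.1f SBUs\n", n*r*h }'
# prints: 79.2 SBUs
```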
For more information, visit the NAS website.
Download the Wind-US Build Distribution.
This distribution contains all of the necessary run scripts and source files needed to compile and run Wind-US. Registered users are provided the source bundle and/or instructions for downloading it themselves. You should have a file of the form:
dist.windus.4.111.tgz
Transfer the build bundle to NAS (columbia or pleiades).
Put the build bundle in the (new) directory $HOME/WINDUS where the source will be installed. You should now have:
$HOME/WINDUS/dist.windus.4.111.tgz
Unpack the build bundle on NAS.
cd $HOME/WINDUS
tar xzf dist.windus.4.111.tgz
This should extract everything to the new subdirectory:
$HOME/WINDUS/wind-dev
Update the NAS login scripts.
Users of csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file respectively and make sure the following lines appear:
module load comp-intel/2020.4.304
module load mpi-hpe/mpt.2.26
setenv MPI_NUM_MEMORY_REGIONS 0
setenv WIND_DEV "$HOME/WINDUS/wind-dev"
setenv CFDROOT "$HOME/WINDUS/wind-dev"
source $HOME/WINDUS/wind-dev/bin/cfd.login
Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
module load comp-intel/2020.4.304
module load mpi-hpe/mpt.2.26
MPI_NUM_MEMORY_REGIONS=0 ; export MPI_NUM_MEMORY_REGIONS
WIND_DEV="$HOME/WINDUS/wind-dev" ; export WIND_DEV
CFDROOT="$HOME/WINDUS/wind-dev" ; export CFDROOT
. $HOME/WINDUS/wind-dev/bin/cfd.profile
This sets up the Intel Fortran/C compilers, loads SGI's MPI Message Passing Toolkit, and causes the contents of the cfd.login (or cfd.profile) file to be executed automatically for each new shell instance. The environment variables CFDROOT, WIND_DEV, SYSTEM, and SYSTEM_CPU, which are required for installing and running Wind-US, will also be set, and the PATH will be modified to include the location of the Wind-US executable directories. Setting MPI_NUM_MEMORY_REGIONS to 0 disables the MPI data buffer in order to avoid cache thrashing.
Note that the application directory (CFDROOT) is set to the same location as the build/development directory (WIND_DEV). One could instead use a separate application directory as described in the installation instructions for a general system. However, most NAS users do not need that additional flexibility and want to get the code working as quickly as possible.
Test the NAS login scripts.
At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:
printenv WIND_DEV
printenv CFDROOT
printenv SYSTEM
printenv SYSTEM_CPU
ifort --version
icc --version
which mpiexec
They should have values similar to:
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev
LINUX64-GLIBC2.28
XEON
ifort (IFORT) 19.1.3.304 20200925
icc (ICC) 19.1.3.304 20200925
/nasa/hpe/mpt/2.26_rhel85/bin/mpiexec
Note that $HOME will likely be expanded to your full home directory path. If the variables are not set, or not set correctly, go back to step 4 and try again.
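The checks above can also be collapsed into a short loop that flags any variable that failed to get set (the variable names come from the login scripts described above):

```shell
# Sketch: report any of the required Wind-US environment variables that are unset
for v in WIND_DEV CFDROOT SYSTEM SYSTEM_CPU; do
  val=$(printenv "$v")
  if [ -z "$val" ]; then
    echo "$v is NOT set -- revisit the login scripts"
  else
    echo "$v = $val"
  fi
done
```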
Move into the Wind-US build directory.
cd $WIND_DEV
This should put you in the $HOME/WINDUS/wind-dev directory.
Configure the makefiles.
If you plan to build both Wind-US and the tools, then follow the instructions for unpacking the tools distribution before making any changes to the makefiles. The reason for this is that the tools distribution also contains a copy of the makefiles and will overwrite any changes you might make here.
The default values in the configuration files should be sufficient to produce an executable. To be safe, review the contents of the following files or parts of files, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the items noted below.
When modifying files it is always a good idea to save a copy of the original (i.e., as *.bak), and for new files to create an empty *.bak file. This makes it easy to identify which files you have changed, and you can use a visual difference program to compare those changes against the original.
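With that convention in place, a short loop can list every file you have modified relative to its saved original. A sketch, assuming the *.bak copies live next to the edited files:

```shell
# Sketch: list files that differ from their saved *.bak originals
for bak in $(find . -name '*.bak'); do
  orig=${bak%.bak}
  # An empty .bak marks a file that did not exist in the original distribution
  cmp -s "$bak" "$orig" || echo "modified: $orig"
done
```

You can then run a visual difference program (e.g., diff) on each reported pair.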
Makefile.configure
source/Makefile.user
source/makefiles/Makefile.include.SYSTEM.SYSTEM_CPU.opt
This file contains the compiler optimizations for the
specific system being used.
For NAS, you may download the following file:
Makefile.include.LINUX64-GLIBC2.28.XEON.opt
and copy it into the source/makefiles directory.
Otherwise, you may follow the instructions below to
create your own.
If this makefile does not exist, you will need to create it. Usually the easiest way to do so is to copy and modify one of the existing files. For example:
cp -p source/makefiles/Makefile.include.LINUX64-GLIBC2.4.XEON.opt source/makefiles/Makefile.include.LINUX64-GLIBC2.28.XEON.opt
Check the definition of CPP, which is the full path name for the C pre-processor cpp, to make sure it is correct. You can locate cpp using:
whereis -b cpp
You may need to change the definitions related to the Fortran and C compilers being used (i.e., the variables ABI, FC, F90, LD, etc.). There is coding in the Linux makefiles for various Fortran and C compilers. These lines should be commented/uncommented as needed, but additional changes may also be necessary or desired. At the time of this writing, the following Intel 19.1.3.304 compiler settings were used:
ABI=     -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -fpic -traceback
FC=      ifort
FCOMP=   $(FC) $(ABI) -pad -ip -DINTEL
FFHOP=   -O3 -axCORE-AVX512,CORE-AVX2 -xAVX
FFOPT=   -O3 -axCORE-AVX512,CORE-AVX2 -xAVX
FFLOW=   -O1
FFNOP=   -O0
F90=     ifort $(ABI) -pad -ip -DINTEL
F90FHOP= -O3 -axCORE-AVX512,CORE-AVX2 -xAVX
F90FOPT= -O3 -axCORE-AVX512,CORE-AVX2 -xAVX
F90FLOW= -O1
F90FNOP= -O0
CC=      icc
CCOMP=   $(CC) $(ABI)
ANSI=    -ansi
POSIX=   -D_POSIX_SOURCE
CFOPT=   -O2 -axCORE-AVX512,CORE-AVX2 -xAVX
CCP=     icpc $(ABI)
CPFOPT=  -O2 -axCORE-AVX512,CORE-AVX2 -xAVX
LD=      ifort $(ABI) -O3 -ip -pad -axCORE-AVX512,CORE-AVX2 -xAVX
The compiler flag -ip applies interprocedural optimizations for single-file compilation. A similar -ipo flag is available for multi-file optimizations, but is not used because it takes significantly longer to compile. The -pad flag permits the compiler to adjust the location of variables and arrays in memory to improve performance. The compiler flags -xAVX and -axCORE-AVX512,CORE-AVX2 are processor-specific optimizations that set the baseline code path and alternate optimized code paths, respectively. For more information, see the Intel compiler documentation.
If your preferred compiler doesn't support the Fortran long integer data type, add "-DNOFLONG" (without the quotes) to the definition of WINDDEFS. Errors in the compilation of mem_management_module.f90 about an ambiguous definition for the generic interfaces alloc and dealloc, involving the specific interfaces for allockindlong_1 and allockindint_1, etc., are an indication that "-DNOFLONG" is needed.
If necessary, specify the location of the MPI libraries.
MPILIBS= -L/nasa/hpe/mpt/2.26_rhel85/lib -lmpi
The SGI MPI module makes available an mpif90 command that automatically passes the library information to the Intel compiler through the LD_LIBRARY_PATH, LIBRARY_PATH, and F_PATH environment variables. However, all code compiled with this command will include the MPI libraries, even if they are not needed. This has been found to cause problems with some of the tools to be compiled below. Using ifort directly as the compiler and setting the MPILIBS makefile variable ensures that the MPI libraries will only be included in the Wind-US executable.
Compile the source code.
Create a PBS script ($WIND_DEV/make.wind.pbs)
to compile Wind-US.
For csh and tcsh use the following:
#PBS -lselect=1:ncpus=4:model=san,walltime=2:00:00
cd $PBS_O_WORKDIR
echo "CFDROOT    = $CFDROOT"    |& tee make.wind.log
echo "WIND_DEV   = $WIND_DEV"   |& tee -a make.wind.log
echo "SYSTEM     = $SYSTEM"     |& tee -a make.wind.log
echo "SYSTEM_CPU = $SYSTEM_CPU" |& tee -a make.wind.log
cd $WIND_DEV
make -j 10 opt |& tee -a make.wind.log
For sh and ksh use the above, but replace "|&" with "2>&1 |".
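The two forms are equivalent: csh's "|&" pipes both stdout and stderr, while Bourne-style shells need the explicit "2>&1" redirection before the pipe. A minimal demonstration:

```shell
# Sketch: merge stdout and stderr into one pipe (the sh/ksh form of csh's "|&")
( echo "to stdout"; echo "to stderr" >&2 ) 2>&1 | tee combined.log
# combined.log now contains both lines
```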
Submit the job to the developer queue to compile the code.
cd $WIND_DEV
qsub -q devel make.wind.pbs
This may take roughly 30 minutes once your job begins running. You can use the qstat command to check on the status.
If Wind-US compiled successfully you should now have:
$WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin/Wind-US4.exe
If the executable was not produced, then examine the log file ($WIND_DEV/make.wind.log) for compilation errors.
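A quick way to pull the relevant failures out of a long log (the log name comes from the PBS script above):

```shell
# Sketch: show the first few compiler errors in the build log, with line numbers
grep -inE 'error|undefined reference' make.wind.log | head -20
```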
Install the executables and scripts.
If you have a previous installation of Wind-US that you wish to update (i.e., CFDROOT points to some directory other than the WIND_DEV you are building from), you will need to install the Wind-US and PVM executables. To do that, issue the commands
make install
make install_scripts
make copy_pvm
This will install the Wind-US executable, the run scripts, and the PVM executables into the corresponding CFDROOT directories.
If this is a new installation, it would probably be best to log out and log back in before running Wind-US. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the Wind-US executable.
Run the wind script.
wind
The new executable should appear in the list of available versions.
Select the desired version
  0: Exit wind
  1: Wind-US 4.0
See the instructions below for running Wind-US on the NAS.
The tools distribution contains source code for all of the Wind-US pre- and post-processing utilities. It is designed to be a completely self-contained package, so one could have separate directory trees for the Wind-US build distribution and the tools build distribution. However, this would lead to some duplication of shared routines. To reduce this redundancy, the tools distribution is packaged such that it can overlay the Wind-US distribution. The following instructions assume that all of the tools are being built in the same tree as Wind-US.
Note that the build procedure for the tools is somewhat more complicated than the Wind-US build process. Please report problems via the support web page.
Download the Tools Build Distribution.
This distribution contains all of the necessary run scripts and source files needed to compile and run the Wind-US utilities. Registered users are provided the source bundle and/or instructions for downloading it themselves.
dist.alltools.tgz
Some of the tools require lua and cgnslib header files and/or libraries.
Download lua (4.0.1) from http://www.lua.org/ftp/lua-4.0.1.tar.gz and save as:
lua.4.0.1.tar.gz
Due to changes in the API, newer versions of lua will not work with the Wind-US tools!
Download cgnslib (2.5.5) from https://cgns.github.io/download.html and save as:
cgnslib.2.5.5.tar.gz
Newer versions of cgnslib may also work.
Put the *gz files in the (existing) directory $HOME/WINDUS where the source will be installed. You should now have:
$HOME/WINDUS/dist.alltools.tgz
$HOME/WINDUS/lua.4.0.1.tar.gz
$HOME/WINDUS/cgnslib.2.5.5.tar.gz
Unpack the source code on NAS.
Note that this will overwrite any changes you made to the Wind-US makefiles!
cd $HOME/WINDUS
tar xzf dist.alltools.tgz
gunzip -c lua.4.0.1.tar.gz | tar xvf -
gunzip -c cgnslib.2.5.5.tar.gz | tar xvf -
This should extract everything to the following subdirectories:
$HOME/WINDUS/wind-dev/tools-dev/*
$HOME/WINDUS/lua-4.0.1
$HOME/WINDUS/cgnslib_2.5
Update the NAS login scripts.
Users of csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file respectively and make sure the following lines appear:
module load comp-intel/2020.4.304
module load mpi-hpe/mpt.2.26
setenv MPI_NUM_MEMORY_REGIONS 0
setenv WIND_DEV "$HOME/WINDUS/wind-dev"
setenv CFDROOT "$HOME/WINDUS/wind-dev"
setenv TOOLSROOT "$HOME/WINDUS/wind-dev/tools-dev"
source $HOME/WINDUS/wind-dev/bin/cfd.login
source $HOME/WINDUS/wind-dev/tools-dev/bin/tools.login
Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
module load comp-intel/2020.4.304
module load mpi-hpe/mpt.2.26
MPI_NUM_MEMORY_REGIONS=0 ; export MPI_NUM_MEMORY_REGIONS
CFDROOT="$HOME/WINDUS/wind-dev" ; export CFDROOT
WIND_DEV="$HOME/WINDUS/wind-dev" ; export WIND_DEV
TOOLSROOT="$HOME/WINDUS/wind-dev/tools-dev" ; export TOOLSROOT
. $HOME/WINDUS/wind-dev/bin/cfd.profile
. $HOME/WINDUS/wind-dev/tools-dev/bin/tools.profile
This causes the contents of the tools.login (or tools.profile) file to be executed automatically at login time to set up the environment variable TOOLSROOT that is required for installing and running the Wind-US tools, and to modify PATH to include the location of the proper executable.
Test the NAS login scripts.
At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:
printenv WIND_DEV
printenv CFDROOT
printenv TOOLSROOT
printenv SYSTEM
printenv SYSTEM_CPU
ifort --version
icc --version
They should have values similar to:
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev
$HOME/WINDUS/wind-dev/tools-dev
LINUX64-GLIBC2.28
XEON
ifort (IFORT) 19.1.3.304 20200925
icc (ICC) 19.1.3.304 20200925
(Note that $HOME may be expanded to your full home directory path.)
If the variables are not set, or not set correctly, go back to step 4 and try again.
Compile Lua.
cd $HOME/WINDUS/lua-4.0.1
make
This should create the following files:
$HOME/WINDUS/lua-4.0.1/bin/lua
$HOME/WINDUS/lua-4.0.1/bin/luac
$HOME/WINDUS/lua-4.0.1/lib/liblua.a
$HOME/WINDUS/lua-4.0.1/lib/liblualib.a
Compile the CGNS library.
cd $HOME/WINDUS/cgnslib_2.5
./configure --prefix=$HOME/WINDUS/cgnslib_2.5 --with-system=LINUX64 --enable-64bit
make SYSTEM=LINUX64
mkdir include
mkdir lib
make install
This should create the following files:
$HOME/WINDUS/cgnslib_2.5/include/cgnslib.h
$HOME/WINDUS/cgnslib_2.5/include/cgnslib_f.h
$HOME/WINDUS/cgnslib_2.5/include/cgnswin_f.h
$HOME/WINDUS/cgnslib_2.5/lib/libcgns.a
Now that the environment variables are set properly, move into the Wind-US build directory.
cd $WIND_DEV
This should put you in the $HOME/WINDUS/wind-dev directory.
Configure the makefiles.
Review the contents of the following files or parts of files, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the items noted below.
Makefile.configure
Use the same settings described above for building Wind-US on NAS.
source/makefiles/Makefile.include.SYSTEM.SYSTEM_CPU.opt
Make the modifications listed above in the instructions for building Wind-US on NAS.
Comment out the explicit static and shared library settings:
#TOOLS_STATLIB= -static
#TOOLS_SHARLIB= -shared
The default behavior of the Intel compiler is to use shared libraries.
Provide the proper locations of Tcl, Lua, and CGNS:
TOOLS_TCLLIBS=  -L/usr/lib64 -ltcl8.5
TOOLS_LUALIBS=  -L$(HOME)/WINDUS/lua-4.0.1/lib -llua -llualib
TOOLS_CGNSLIBS= -L$(HOME)/WINDUS/cgnslib_2.5/lib -lcgns
TCL_INCLUDE=    $(INCCMD)/usr/include
LUA_INCLUDE=    $(INCCMD)$(HOME)/WINDUS/lua-4.0.1/include
CGNS_INCLUDE=   $(INCCMD)$(HOME)/WINDUS/cgnslib_2.5/include
Provide the proper locations of the Intel libraries:
TOOLS_OSLIBS=  -L/nasa/intel/Compiler/2020.4.304/lib/intel64 -lifcore -lirc -limf -lpthread
TOOLS_CPPLIBS= -L/nasa/intel/Compiler/2020.4.304/lib/intel64 -lifcore -lirc -limf
MADCAP_OSLIBS= -L/nasa/intel/Compiler/2020.4.304/lib/intel64 -lifcore -lcxa -lunwind -lpthread -lstdc++
Depending on which version of the Intel compiler you choose, the library files may be in a different subdirectory below /nasa/intel. These library paths might already be present in the environment variables.
source/makefiles/pvm_conf/SYSTEM.SYSTEM_CPU.def.opt
Do not bother configuring or creating this file unless you encounter an error during the compilation process. This file is used to override the default settings given in a similarly named pvm/conf/SYSTEM.def file.
tools-dev/Makefile
This file defines the make dependencies and should not need to be changed.
tools-dev/libmdgl/SYSTEM.mkf
This file is used to compile the graphics library needed by GMAN, CFPOST, and MADCAP.
If this makefile does not exist, you will need to create it. Usually the easiest way to do so is to copy and modify one of the existing files.
Use the settings for the Intel compiler, if they are not already the default.
ABI=    -mcmodel=medium -pad -pc80 -fp-model strict -fno-alias
CC=     icc
CFLOPT=
Note that CFLOPT is defined to be empty.
Compile the tools source.
On NAS, the front-end node has the openmotif-devel-* package installed, which makes available a number of header files needed to compile the Wind-US tools. The worker nodes only have the library files installed. This means that the tools must be compiled on the front-end node. So, instead of using a batch script like the one used to compile Wind-US, simply compile the tools from the command line.
Make sure you are in the build directory.
cd $WIND_DEV
Next, csh and tcsh users should do
make all_tools |& tee make_tools.log
while sh and ksh users should do
make all_tools 2>&1 | tee make_tools.log
Tools can also be compiled individually, by doing
make tool_name
where tool_name is the name of the tool. Note that the names to be used for GMAN, CFPOST, and MADCAP are gmanpre, cfpost_prod, and madcapprod, respectively.
After compilation is complete, the following programs should appear in directory $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin
USintrpltQ.exe   cfpost_prod.exe    chmgr.exe       mpigetnzone   thplt.exe
adfedit          cfreorder.exe      decompose.exe   newtmp        timplt.exe
cfappend.exe     cfreset_iter.exe   delvars         npair         tmptrn.exe
cfaverage.exe    cfrevert.exe       fpro            parcnl        usplit-hybrid.exe
cfbeta.exe       cfsequence.exe     gman_pre.exe    readcf        windpar.exe
cfcnvt           cfspart            gpro.exe        resplt.exe    wnparc
cfcombine.exe    cfsplit.exe        gridvel.exe     rnparc        wplt3d
cflistnum        cfsubset.exe       icees           rplt3d        writcf
cfnav.exe        cfunsequence.exe   jormak.exe      scan
cfpart.exe       cfview.exe         lstvars         terp
Depending on the size of the array parameters requested, the thplt.exe executable might not get created with the default memory model. If it was not created, then edit $WIND_DEV/source/makefiles/Makefile.include.$SYSTEM.$SYSTEM_CPU.opt to use the following ABI settings:
ABI= -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -mcmodel medium -traceback
and recompile just that tool. From the build directory, remove the old object file.
cd $WIND_DEV
rm -f OBJECTS/$SYSTEM/$SYSTEM_CPU/thplt.o
Next, csh and tcsh users should do
make thplt |& tee make_thplt.log
while sh and ksh users should do
make thplt 2>&1 | tee make_thplt.log
Check $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin to confirm that thplt.exe was created.
In order for the tool scripts to locate the executables, they must be installed in the proper location. To install the executables, do:
make install_tools
This copies the tools executables to $TOOLSROOT/$SYSTEM/$SYSTEM_CPU/bin.
If this is a new installation, it would probably be best to log out and log back in again before running any of the tools. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the tools executables.
Make a directory containing your Wind-US input files:
run.dat
run.cgd
run.mpc
run.lis (if continuing from a previous solution)
run.cfl (if continuing from a previous solution)
The run.mpc file should have the following form:
/ Wind-US parallel processing file for NAS.
/ Currently set to use 2 nodes with 20 processors each
/ and 1 node with 6 processors
/ for a total of 46 cores.
/
host localhost nproc 20
host localhost nproc 20
host localhost nproc 6
Each different type of NAS compute node has a different number of processors. For example, the Ivy Bridge nodes have 20 processor cores. Therefore, each host entry above has at most nproc 20. The user will need to experiment to determine whether the best performance is obtained when all of the processor cores on a given host are used (20+20+6=46) or when the hosts are most closely balanced (16+16+14=46). The difference between internode and intranode communication might be mesh dependent.
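A near-even host list can be generated rather than hand-computed. The sketch below splits N worker processes as evenly as possible across k hosts (N=46 and k=3 are the example values from above; this even split of 16+15+15 is just one valid layout, and as noted, you will need to experiment to find what performs best):

```shell
# Sketch: split N worker processes as evenly as possible across k hosts
N=46; k=3
base=$((N / k)); extra=$((N % k))
i=1
while [ $i -le $k ]; do
  n=$base
  if [ $i -le $extra ]; then n=$((base + 1)); fi
  echo "host localhost nproc $n"
  i=$((i + 1))
done
```

For N=46, k=3 this prints host lines with nproc 16, 15, and 15.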
Users might also want to include a checkpoint command in the above file so that the worker solutions are sent to the master process at regular intervals. Please see the User's Manual for more details on the format and features of the parallel processing file.
Start the Wind-US script with one of the following commands:
wind -runinplace -cl -usessh -mpmode MPI -mpiver SGI
wind -runinplace -cl -usessh -mpmode PVM
To specify a non-default NAS charge number (e.g., a0101), add the following to the Wind-US command line:
-grpcharge a0101
To obtain a list of the groups you have access to, type:
groups
Follow the prompts for your input, output, mesh, and flow files or specify them on the command line using the proper syntax.
When asked, indicate that you want to run in multi-processor mode.
If prompted, enter the number of zones in the cgd file.
When prompted for the type of queue, choose QSUB_PBS_QUE.
Enter the queue name: i.e., long for the long queue.
Enter the solver run wall clock time.
Enter the termination processing time.
Enter the number of nodes to run on. This should match the number of host entries in your mpc file.
Enter the number of processors per node to use.
Enter the number of MPI processes per node to use. This is typically the same as the number of processors per node. For the default setting of ASSIGNMENT MODE DEDICATED in the .mpc file, each zone should have its own MPI process. One additional MPI process is required for the master process. You must remember to account for this extra process when answering the prompts or your run will fail to initialize MPI.
Enter any optional attributes. For example, to specify that the job should run on Ivy Bridge nodes use the following:
model=ivy
The model names for other processor types are listed in the table at the top of this page.
When prompted to "Press CR to submit job, or another key (except space) and CR to abort," do the latter. This will create a file called run.job.pl.
The maximum solver execution time is determined by subtracting the termination processing time from the solver run wall clock time. When the Wind-US run job is submitted, it will create a preNDSTOP file. When the maximum solver execution time has expired, this file will be renamed NDSTOP, forcing Wind-US to begin a graceful shutdown.
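For example, with a 40-hour walltime and a 30-minute termination allowance (illustrative values, not recommendations), the solver limit works out as:

```shell
# Sketch: maximum solver execution time = walltime - termination processing time
wall=$((40 * 3600))     # 40-hour walltime, in seconds
term=$((30 * 60))       # 30-minute termination allowance, in seconds
echo "solver limit: $((wall - term)) seconds"
# prints: solver limit: 142200 seconds
```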
Users should make sure that the termination processing time is sufficient to allow Wind-US to complete the termination process. At the end of the *.lis file, there is a summary indicating the time spent during execution and termination.
Users should also make sure that they request adequate time from the queue in which they submit their jobs. This is detailed in the next step. Otherwise, the queue will terminate Wind-US, resulting in a less than graceful shutdown.
Edit the run.job.pl file. If you see a line like the following near the top of the file:
#PBS -l nodes=1234:ppn=2
replace it with
#PBS -l select=2:ncpus=20:model=ivy+1:ncpus=6:model=ivy
#PBS -l walltime=40:00:00
#PBS -m e
This will select 2 nodes with 20 cpus and 1 node with 6 cpus, which matches the request in the run.mpc file. The walltime can be adjusted as desired (hh:mm:ss) or as an integer number of seconds, and the last line will send you an email when your job is completed.
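Since PBS accepts walltime either as hh:mm:ss or as an integer number of seconds, converting between the two is occasionally handy:

```shell
# Sketch: convert a PBS hh:mm:ss walltime into integer seconds
wt="40:00:00"
echo "$wt" | awk -F: '{ print $1*3600 + $2*60 + $3 }'   # prints 144000
```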
Make sure to request at least as much walltime as was specified in the Wind-US prompts above, because the queuing system does not terminate jobs as cleanly as Wind-US does.
If you plan on resubmitting the same job again later (i.e., you want to run 10000 cycles now and 10000 cycles later) you can save a copy of the run script.
cp -p run.job.pl run.job.pl.bak
To resubmit later, you can skip the above steps and simply reuse the run script.
cp -p run.job.pl.bak run.job.pl
Note that if you increase the number of cycles in your *.dat file, you may need to adjust the run time specified in the job file. In this case it might be best to answer the Wind-US prompts again to create a new run.job.pl file.
Submit the job to the long queue with the command:
qsub -q long run.job.pl
Some other useful commands are:
Command | Action |
---|---|
node_stats.sh | List the number and type of available cpu nodes. |
qstat -q | List all queue names and run limits. |
qstat -a long | List all jobs running in the long queue. |
qstat -u USER | List all jobs running for username USER. |
qdel JOBNAME | Delete job with name JOBNAME. Useful if Wind-US has not yet started. |
In order to improve I/O performance for large jobs, Wind-US 3.0 (and later) uses a newer ADF library than its predecessors. Grid and solution files used with Wind-US will automatically be upgraded to the new format, and the tools compiled in the above steps will also work with the new file structure. However, if you transfer the grid or solution file(s) back to your local workstation your existing tools may not be able to read them. If you experience this problem, you should upgrade the tools at your local site.
Last updated 19 Jan 2024