
Installing and Running the Build Distributions on the NAS

Pre-compiled executables for Wind-US and the tools are not available for the NASA Advanced Supercomputing (NAS) systems, columbia and pleiades. Users must install the build distribution into their local directory. This section describes the procedure for doing so.

Security Issues on the NAS

Users are reminded that they are responsible for controlling the dissemination of Wind-US, particularly on a shared computing resource like the NAS. It is therefore recommended that users restrict the access permissions on their NAS home and nobackup directories to prevent access by other users. If this is not already the default behavior, one can use the following commands.
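
A minimal sketch of such commands (assuming owner-only access is acceptable at your site; adjust as needed):

```shell
# Remove group and other access from the nobackup area, if it exists on this host
[ -d "/nobackup/$USER" ] && chmod 700 "/nobackup/$USER"
# Restrict the home directory to the owner as well
chmod 700 "$HOME"
```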

Storage Issues on the NAS

Storage space on NAS is split between a rather limited $HOME directory and a larger /nobackup/$USER directory. Most users install Wind-US into their home directory and submit jobs from their nobackup directory. Offline tape storage is available by transferring files to the machine called lou. Please see the NAS website for additional details.

Computational Resources on the NAS

The NAS computing cluster comprises several types of compute nodes, each with a different number and type of processor cores. The user selects which nodes to use by specifying the model name within the PBS script.

Because each processor type has a different computational efficiency, NAS charges for their use via a Standard Billing Unit (SBU). In this scheme, use of faster computing nodes incurs a larger SBU cost. Each submitted job is given exclusive access to the requested nodes, and the user is charged for each node even if the job does not utilize all of its processors. To make the most of their allotted time on NAS, users should try to fully utilize each computing node. The table below summarizes the available computing resources.

NAS Computing Resources
Processor Type   Model Name   SBU/node   CPUs/node   RAM (GB)/node
Sandy Bridge     san            1.82        16            32
Ivy Bridge       ivy            2.52        20            64
Haswell          has            3.34        24           128
Broadwell        bro            4.04        28           128

Note: The amount of memory available to a PBS job is less than the total physical memory because the system kernel can use up to 4 GB of memory in each node.
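
As a worked example of the SBU accounting (assuming, as on NAS, that the tabulated rates are charged per node-hour), a 24-hour job on three Ivy Bridge nodes would cost:

```shell
# 3 nodes x 2.52 SBU/node-hour x 24 hours
awk 'BEGIN { printf "%.2f SBUs\n", 3 * 2.52 * 24 }'
# prints "181.44 SBUs"
```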

For more information, see the NAS website.

Building Wind-US on the NAS

  1. Download the Wind-US Build Distribution.

    This distribution contains all of the necessary run scripts and source files needed to compile and run Wind-US. Registered users are provided the source bundle and/or instructions for downloading it themselves. You should have a file of the form:

       dist.windus.4.111.tgz
    
  2. Transfer the build bundle to NAS (columbia or pleiades).

    Put the build bundle in the (new) directory $HOME/WINDUS where the source will be installed. You should now have:

       $HOME/WINDUS/dist.windus.4.111.tgz
    
  3. Unpack the build bundle on NAS.

       cd $HOME/WINDUS
       tar xzf dist.windus.4.111.tgz
    

    This should extract everything to the new subdirectory:

       $HOME/WINDUS/wind-dev
    
  4. Update the NAS login scripts.

    Users of csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file, respectively, and make sure the following lines appear:

       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.12r26
       setenv MPI_NUM_MEMORY_REGIONS 0
       setenv WIND_DEV  "$HOME/WINDUS/wind-dev"
       setenv CFDROOT   "$HOME/WINDUS/wind-dev"
       source            $HOME/WINDUS/wind-dev/bin/cfd.login
    

    Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:

       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.12r26
       MPI_NUM_MEMORY_REGIONS=0                      ; export MPI_NUM_MEMORY_REGIONS
       WIND_DEV="$HOME/WINDUS/wind-dev"              ; export WIND_DEV
       CFDROOT="$HOME/WINDUS/wind-dev"               ; export CFDROOT
       .            $HOME/WINDUS/wind-dev/bin/cfd.profile
    

    This sets up the Intel Fortran/C compilers, loads SGI's MPI message passing toolkit, and causes the contents of the cfd.login (or cfd.profile) file to be executed automatically for each new shell instance. The environment variables CFDROOT, WIND_DEV, SYSTEM, and SYSTEM_CPU, which are required for installing and running Wind-US, will also be set, and the PATH will be modified to include the location of the Wind-US executable directories. Setting MPI_NUM_MEMORY_REGIONS to 0 disables the MPI data buffer in order to avoid cache thrashing.

    Note that the application directory (CFDROOT) is set to the same location as the build/development directory (WIND_DEV). One could instead use a separate application directory as described in the installation instructions for a general system. However, most NAS users do not need that additional flexibility and want to get the code working as quickly as possible.

  5. Test the NAS login scripts.

    At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:

       printenv WIND_DEV
       printenv CFDROOT
       printenv SYSTEM
       printenv SYSTEM_CPU
       ifort --version
       icc   --version
       which mpiexec
    

    They should have values similar to:

       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev
       LINUX64-GLIBC2.11
       XEON
       ifort (IFORT) 13.1.3 20130607
       icc (ICC) 13.1.3 20130607
       /nasa/sgi/mpt/2.12r26/bin/mpiexec
    

    Note that $HOME will likely be expanded to your full home directory path. If the variables are not set, or not set correctly, go back to step 4 and try again.

  6. Move into the Wind-US build directory.

       cd $WIND_DEV
    

    This should put you in the $HOME/WINDUS/wind-dev directory.

  7. Configure the makefiles.

    If you plan to build both Wind-US and the tools, then follow the instructions for unpacking the tools distribution before making any changes to the makefiles. The reason for this is that the tools distribution also contains a copy of the makefiles and will overwrite any changes you might make here.

    The default values in the configuration files should be sufficient to produce an executable. To be safe, review the contents of the following files or parts of files, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the items noted below.

    When modifying files it is always a good idea to save a copy of the original (i.e., as *.bak), and for new files to create an empty *.bak file. This makes it easy to identify which files you have changed, and you can use a visual difference program to compare those changes against the original.
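
    This habit can be sketched as follows (file names here are hypothetical stand-ins, shown in a scratch directory for illustration):

```shell
cd "$(mktemp -d)"                                # scratch area for illustration
echo 'FFLAGS = -O2' > Makefile.include           # stand-in for a real makefile
cp -p Makefile.include Makefile.include.bak      # save the original before editing
sed -i.tmp 's/-O2/-O0 -g/' Makefile.include      # ...your edits...
diff Makefile.include.bak Makefile.include || true   # review what changed
```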

  8. Compile the source code.

  9. Install the executables and scripts.

    If you have a previous installation of Wind-US that you wish to update (i.e., CFDROOT points to some directory other than the WIND_DEV you are building from), you will need to install the Wind-US and PVM executables. To do that, issue the commands

       make install
       make install_scripts
       make copy_pvm
    

    This will:

  10. If this is a new installation, it would probably be best to log out and log back in before running Wind-US. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the Wind-US executable.

  11. Run the wind script.

       wind
    

    The new executable should appear in the list of available versions.

                      Select the desired version
    
       0: Exit wind
       1: Wind-US 4.0
    

    See the instruction below for running Wind-US on the NAS.

Building the Tools on the NAS

The tools distribution contains source code for all of the Wind-US pre- and post-processing utilities. It is designed to be a completely self-contained package, so one could have separate directory trees for the Wind-US build distribution and the tools build distribution. However, this would lead to some duplication of shared routines. To reduce this redundancy, the tools distribution is packaged such that it can overlay the Wind-US distribution. The following instructions assume that all of the tools are being built in the same tree as Wind-US.

Note that the build procedure for the tools is somewhat more complicated than the Wind-US build process. Please report problems to the NPARC Alliance User Support Team at nparc-support@arnold.af.mil or (931) 454-7885.

  1. Download the Tools Build Distribution.

    This distribution contains all of the necessary run scripts and source files needed to compile and run the Wind-US utilities. Registered users are provided the source bundle and/or instructions for downloading it themselves.

       dist.alltools.tgz
    

    Some of the tools require lua and cgnslib header files and/or libraries.

  2. Transfer the source code to NAS (columbia or pleiades).
  3. Unpack the source code on NAS.

    Note that this will overwrite any changes you made to the Wind-US makefiles!

       cd $HOME/WINDUS
       tar xzf dist.alltools.tgz
       gunzip -c lua.4.0.1.tar.gz | tar xvf -
       gunzip -c cgnslib.2.5.5.tar.gz | tar xvf -
    

    This should extract everything to the following subdirectories:

       $HOME/WINDUS/wind-dev/tools-dev/*
       $HOME/WINDUS/lua-4.0.1
       $HOME/WINDUS/cgnslib_2.5
    
  4. Update the NAS login scripts.

    Users of csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file, respectively, and make sure the following lines appear:

       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.12r26
       setenv MPI_NUM_MEMORY_REGIONS 0
       setenv WIND_DEV  "$HOME/WINDUS/wind-dev"
       setenv CFDROOT   "$HOME/WINDUS/wind-dev"
       setenv TOOLSROOT "$HOME/WINDUS/wind-dev/tools-dev"
       source            $HOME/WINDUS/wind-dev/bin/cfd.login
       source            $HOME/WINDUS/wind-dev/tools-dev/bin/tools.login
    

    Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:

       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.12r26
       MPI_NUM_MEMORY_REGIONS=0 ; export MPI_NUM_MEMORY_REGIONS
       CFDROOT="$HOME/WINDUS/wind-dev"               ; export CFDROOT
       WIND_DEV="$HOME/WINDUS/wind-dev"              ; export WIND_DEV
       TOOLSROOT="$HOME/WINDUS/wind-dev/tools-dev"   ; export TOOLSROOT
       .            $HOME/WINDUS/wind-dev/bin/cfd.profile
       .            $HOME/WINDUS/wind-dev/tools-dev/bin/tools.profile
    

    This causes the contents of the tools.login (or tools.profile) file to be executed automatically at login time. It sets the environment variable TOOLSROOT, which is required for installing and running the Wind-US tools, and modifies PATH to include the location of the tools executables.

  5. Test the NAS login scripts.

    At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:

       printenv WIND_DEV
       printenv CFDROOT
       printenv TOOLSROOT
       printenv SYSTEM
       printenv SYSTEM_CPU
       ifort --version
       icc   --version
    

    They should have values similar to:

       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev/tools-dev
       LINUX64-GLIBC2.11
       XEON
       ifort (IFORT) 13.1.3 20130607
       icc (ICC) 13.1.3 20130607
    

    (Note that $HOME may be expanded to your full home directory path.)

    If the variables are not set, or not set correctly, go back to step 4 and try again.

  6. Compile Lua.

       cd $HOME/WINDUS/lua-4.0.1
       make
    

    This should create the following files:

       $HOME/WINDUS/lua-4.0.1/bin/lua
       $HOME/WINDUS/lua-4.0.1/bin/luac
       $HOME/WINDUS/lua-4.0.1/lib/liblua.a
       $HOME/WINDUS/lua-4.0.1/lib/liblualib.a
    
  7. Compile the CGNS library.

       cd $HOME/WINDUS/cgnslib_2.5
       ./configure --prefix=$HOME/WINDUS/cgnslib_2.5 --with-system=LINUX64 --enable-64bit
       make SYSTEM=LINUX64
       mkdir include
       mkdir lib
       make install
    

    This should create the following files:

       $HOME/WINDUS/cgnslib_2.5/include/cgnslib.h
       $HOME/WINDUS/cgnslib_2.5/include/cgnslib_f.h
       $HOME/WINDUS/cgnslib_2.5/include/cgnswin_f.h
       $HOME/WINDUS/cgnslib_2.5/lib/libcgns.a
    
  8. Now that the environment variables are set properly, move into the Wind-US build directory.

       cd $WIND_DEV
    

    This should put you in the $HOME/WINDUS/wind-dev directory.

  9. Configure the makefiles.

    Review the contents of the following files or parts of files, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the items noted below.

  10. Compile the tools source.

    On NAS, the front-end node has the openmotif-devel-* package installed, which provides a number of header files needed to compile the Wind-US tools. The worker nodes have only the library files installed. This means that the tools must be compiled on the front-end node. So, instead of using a batch script like the one used to compile Wind-US, simply compile the tools from the command line.

    Make sure you are in the build directory.

       cd $WIND_DEV
    

    Next, csh and tcsh users should do

       make all_tools |& tee make_tools.log
    

    while sh and ksh users should do

       make all_tools 2>&1 | tee make_tools.log
    

    Tools can also be compiled individually, by doing

       make tool_name
    

    where tool_name is the name of the tool. Note that the names to be used for GMAN, CFPOST, and MADCAP are gmanpre, cfpost_prod, and madcapprod, respectively.

    After compilation is complete, the following programs should appear in directory $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin

          USintrpltQ.exe  cfpost_prod.exe   chmgr.exe      mpigetnzone  thplt.exe
          adfedit         cfreorder.exe     decompose.exe  newtmp       timplt.exe
          cfappend.exe    cfreset_iter.exe  delvars        npair        tmptrn.exe
          cfaverage.exe   cfrevert.exe      fpro           parcnl       usplit-hybrid.exe
          cfbeta.exe      cfsequence.exe    gman_pre.exe   readcf       windpar.exe
          cfcnvt          cfspart           gpro.exe       resplt.exe   wnparc
          cfcombine.exe   cfsplit.exe       gridvel.exe    rnparc       wplt3d
          cflistnum       cfsubset.exe      icees          rplt3d       writcf
          cfnav.exe       cfunsequence.exe  jormak.exe     scan
          cfpart.exe      cfview.exe        lstvars        terp
    

    Depending on the size of the array parameters requested, the thplt.exe executable might not get created with the default memory model. If it was not created, then edit $WIND_DEV/source/makefiles/Makefile.include.$SYSTEM.$SYSTEM_CPU.opt to use the following ABI settings:

       ABI=       -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -mcmodel medium -traceback
    

    and recompile just that tool. From the build directory, remove the old object file.

       cd $WIND_DEV
       rm -f OBJECTS/$SYSTEM/$SYSTEM_CPU/thplt.o
    

    Next, csh and tcsh users should do

       make thplt |& tee make_thplt.log
    

    while sh and ksh users should do

       make thplt 2>&1 | tee make_thplt.log
    

    Check $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin to confirm that thplt.exe was created.

  11. In order for the tool scripts to locate the executables, they must be installed in the proper location. To install the executables, do:

       make install_tools

    This copies the tools executables to $TOOLSROOT/$SYSTEM/$SYSTEM_CPU/bin.

  12. If this is a new installation, it would probably be best to log out and log back in again before running any of the tools. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the tools executables.

Running Wind-US on the NAS

  1. Make a directory containing your Wind-US input files:

       run.dat
       run.cgd
       run.mpc
       run.lis  (if continuing from a previous solution)
       run.cfl  (if continuing from a previous solution)
    

    The run.mpc file should have the following form:

       / Wind-US parallel processing file for NAS.
       / Currently set to use 2 nodes with 20 processors each
       /                  and 1 node  with  6 processors
       /                  for a total of   46 cores.
       /
       host localhost nproc 20
       host localhost nproc 20
       host localhost nproc 6
    

    Each different type of NAS compute node has a different number of processors. For example, the Ivy Bridge nodes have 20 processor cores. Therefore, each host entry above has at most nproc 20. The user will need to experiment to determine whether the best performance is obtained when all of the processor cores on a given host are used (20+20+6=46) or when the hosts are most closely balanced (16+16+14=46). The difference between internode and intranode communication might be mesh dependent.

    Users might also want to include a checkpoint command in the above file so that the worker solutions are sent to the master process at regular intervals. Please see the User's Manual for more details on the format and features of the parallel processing file.
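
    For instance, the balanced 16+16+14 split mentioned above would be written as (an illustrative variant of the file shown earlier):

```
       / Balanced alternative: 46 cores spread evenly across 3 Ivy Bridge nodes
       host localhost nproc 16
       host localhost nproc 16
       host localhost nproc 14
```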

  2. Start the Wind-US script with one of the following commands:

       wind -runinplace -cl -usessh -mpmode MPI -mpiver SGI
       wind -runinplace -cl -usessh -mpmode PVM
    

    The maximum solver execution time is determined by subtracting the termination processing time from the solver run wall clock time. When the Wind-US run job is submitted, it will create a preNDSTOP file. When the maximum solver execution time has expired, this file will be renamed NDSTOP, forcing Wind-US to begin a graceful shutdown.

    Users should make sure that the termination processing time is sufficient to allow Wind-US to complete the termination process. At the end of the *.lis file, there is a summary indicating the time spent during execution and termination.

    Users should also make sure that they request adequate time from the queue in which they submit their jobs. This is detailed in the next step. Otherwise, the queue will terminate Wind-US, resulting in a less than graceful shutdown.
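
    The same NDSTOP mechanism can be triggered by hand to stop a run early; renaming the file yourself should have the same effect as the automatic rename. A sketch (shown in a scratch directory; in practice you would do this in the job's run directory):

```shell
cd "$(mktemp -d)"        # stand-in for the job's run directory
touch preNDSTOP          # created automatically when the run job starts
mv preNDSTOP NDSTOP      # Wind-US sees NDSTOP and begins a graceful shutdown
ls NDSTOP                # prints "NDSTOP"
```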

  3. Edit the run.job.pl file. If you see a line like the following near the top of the file:

       #PBS -l nodes=1234:ppn=2
    

    replace it with

       #PBS -l select=2:ncpus=20:model=ivy+1:ncpus=6:model=ivy
       #PBS -l walltime=40:00:00
       #PBS -m e
    

    This will select 2 nodes with 20 cpus and 1 node with 6 cpus, which matches the request in the run.mpc file. The walltime can be adjusted as desired (hh:mm:ss) or as an integer number of seconds, and the last line will send you an email when your job is completed.

    Make sure to request at least as much walltime as was specified in the Wind-US prompts above, because the queuing system does not terminate jobs as cleanly as Wind-US does.

  4. If you plan on resubmitting the same job again later (i.e., you want to run 10000 cycles now and 10000 cycles later) you can save a copy of the run script.

       cp -p run.job.pl run.job.pl.bak
    

    To resubmit later, you can skip the above steps and simply reuse the run script.

       cp -p run.job.pl.bak run.job.pl
    

    Note that if you increase the number of cycles in your *.dat file, you may need to adjust the run time specified in the job file. In this case it might be best to answer the Wind-US prompts again to create a new run.job.pl file.

  5. Submit the job to the long queue with the command:

       qsub -q long run.job.pl
    

    Some other useful commands are:

     Command          Action
     node_stats.sh    List the number and type of available cpu nodes.
     qstat -q         List all queue names and run limits.
     qstat -a long    List all jobs running in the long queue.
     qstat -u USER    List all jobs running for username USER.
     qdel JOBNAME     Delete the job named JOBNAME. Useful if Wind-US has not yet started.
  6. In order to improve I/O performance for large jobs, Wind-US 3.0 (and later) uses a newer ADF library than its predecessors. Grid and solution files used with Wind-US will automatically be upgraded to the new format, and the tools compiled in the above steps will also work with the new file structure. However, if you transfer the grid or solution file(s) back to your local workstation your existing tools may not be able to read them. If you experience this problem, you should upgrade the tools at your local site.


Last updated 27 Jun 2017