
Installing and Running the Build Distributions on the NAS

Pre-compiled executables for Wind-US and the tools are not available for the NASA Advanced Supercomputing (NAS) systems columbia and pleiades. Users must install the build distribution in their own directories. This section describes the procedure for doing so.

Security Issues on the NAS

Users are reminded that they are responsible for protecting the dissemination of Wind-US, particularly on a shared computing resource like the NAS. It is therefore recommended that users restrict the access permissions on their NAS home and nobackup directories to prevent access by other users.
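
If this is not already the default behavior, permissions can be tightened with standard chmod commands. A minimal sketch (the paths and permission bits shown are typical, not prescriptive):

   chmod 700 $HOME
   chmod 700 /nobackup/$USER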

Storage Issues on the NAS

Storage space on NAS is split between a rather limited (8 GB) $HOME directory and a larger (200 GB) /nobackup/$USER directory. Most users install Wind-US into their home directory and submit jobs from their nobackup directory. Offline tape storage is available by transferring files to the machine called lou. Please see the NAS website for additional details.
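
For example, a file can be copied to tape storage from a pleiades front end with a command like the following (the target directory is illustrative; the NAS shiftc utility is another option):

   scp run.cfl lou:WINDUS/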

Computational Resources on the NAS

The NAS computing cluster comprises several types of compute nodes, which differ in the number and type of processor cores. The user can select which nodes to use by specifying the model name within the PBS script (an example appears under Running Wind-US on the NAS below). The default option for most queues is Westmere.

Because each processor type has a different computational efficiency, NAS charges for usage in Standard Billing Units (SBUs). In this scheme, use of faster compute nodes incurs a larger SBU cost. Each submitted job is given exclusive access to the requested nodes, and the user is charged for each node even if the job does not use all of its processors. To make the most of their allotted time on the NAS, users should therefore try to fully utilize each compute node. The table below summarizes the available computing resources.

   Processor Type   Model Name   SBU/node   CPUs/node   RAM (GB)/node
   Westmere         wes            1.00        12          22.5
   Sandy Bridge     san            1.82        16          30
   Ivy Bridge       ivy            2.52        20          62
   Haswell          has            3.34        24         122
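
As an illustration of the charging scheme (assuming the SBU rates above are charged per node-hour), a job that holds three Westmere nodes for 40 hours is charged 3 × 1.00 × 40 = 120 SBUs, even if some of the 36 available cores sit idle.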

For more information, visit the NAS website.

Building Wind-US on the NAS

  1. Download the Wind-US Build Distribution from IVMS.
    This distribution contains all of the run scripts and source files needed to compile and run Wind-US.
  2. Transfer the build bundle to NAS (columbia or pleiades).
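    For example, assuming SSH passthrough to the pleiades front ends has been configured as described in the NAS documentation (the pfe host alias and target directory here are illustrative):
       scp windus.build.tar.gz pfe:WINDUS/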
  3. Unpack the build bundle on NAS.
       cd $HOME/WINDUS
       gunzip -c windus.build.tar.gz | tar xvf -
    
    This should extract everything to the new subdirectory:
       $HOME/WINDUS/wind-dev
  4. Update the NAS login scripts.

    Users of the csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file, respectively, and make sure the following lines appear:
       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.11r13
       setenv CFDROOT   "$HOME/WINDUS/wind-dev"
       setenv WIND_DEV  "$HOME/WINDUS/wind-dev"
       source            $HOME/WINDUS/wind-dev/bin/cfd.login
    
    Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.11r13
       CFDROOT="$HOME/WINDUS/wind-dev"            ; export CFDROOT
       WIND_DEV="$HOME/WINDUS/wind-dev"           ; export WIND_DEV
       .            $HOME/WINDUS/wind-dev/bin/cfd.profile
    
    This sets up the Intel Fortran/C compilers, loads SGI's MPI Message Passing Toolkit, and causes the contents of the cfd.login (or cfd.profile) file to be executed automatically for each new shell instance. The environment variables CFDROOT, WIND_DEV, SYSTEM, and SYSTEM_CPU, which are required for installing and running Wind-US, will also be set, and PATH will be modified to include the location of the Wind-US executable directories.

  5. Test the NAS login scripts.

    At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:
       printenv CFDROOT
       printenv WIND_DEV
       printenv SYSTEM
       printenv SYSTEM_CPU
       ifort --version
       icc   --version
       which mpiexec
    
    They should have values similar to:
       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev
       LINUX64-GLIBC2.11
       XEON
       ifort (IFORT) 13.1.3 20130607
       icc (ICC) 13.1.3 20130607
       /nasa/sgi/mpt/2.11r13/bin/mpiexec
    
    Note that $HOME will likely be expanded to your full home directory path. If the variables are not set, or not set correctly, go back to step 4 and try again.

  6. Move into the Wind-US build directory.
       cd $WIND_DEV
    This should put you in the $HOME/WINDUS/wind-dev directory.

  7. Configure the makefiles.

    If you plan to build both Wind-US and the tools, then follow the instructions below for unpacking the tools distribution before making any changes to the makefiles. The reason for this is that the tools distribution also contains a copy of the makefiles and will overwrite any changes you might make here.

    The default values in the configuration files should be sufficient to produce an executable. To be safe, review the contents of the makefiles, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the compiler, library, and ABI settings (e.g., in $WIND_DEV/source/makefiles/Makefile.include.$SYSTEM.$SYSTEM_CPU.opt).


  8. Compile the source code.
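
    The compilation is normally performed through a PBS batch job rather than interactively on the front-end node. A minimal sketch of such a script, assuming csh syntax and a make target named all (check the distribution's makefiles for the actual target name, and choose resources appropriate to your allocation):
       #!/bin/csh
       #PBS -l select=1:ncpus=12
       #PBS -l walltime=2:00:00
       #PBS -j oe
       cd $WIND_DEV
       make all |& tee make_wind.log

    Submit the script with qsub and check make_wind.log for errors when it finishes.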


  9. Install the executables and scripts.

    If you have a previous installation of Wind-US that you wish to update (i.e., CFDROOT points to some directory other than the WIND_DEV you are building from), you will need to install the Wind-US and PVM executables. To do that, issue the commands
       make install
       make install_scripts
       make copy_pvm
    
    This will copy the Wind-US executables, the run scripts, and the PVM executables into the appropriate directories under $CFDROOT.
  10. If this is a new installation, it would probably be best to log out and log back in before running Wind-US. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the Wind-US executable.

  11. Run the wind script.
       wind
    
    The new executable should appear in the list of available versions.
                      Select the desired version
    
       0: Exit wind
       1: Wind-US Alpha
    
    See the instructions below for running Wind-US on the NAS.

Building the Tools on the NAS

Unlike the Wind-US build distribution, obtaining the source for all of the tools requires several downloads. The smaller tools are bundled together and may be acquired from the "Downloads" page of the "Tools Makefiles" project. GMAN, CFPOST, and MADCAP are normally downloaded separately from their respective "Downloads" pages. Note that the project names for these are "gmanpre", "cfpost_prod", and "Madcap production", respectively. The instructions below describe how to install only the smaller utilities and CFPOST on the NAS, since GMAN and MADCAP are graphical in nature and typically not used over a remote connection.

Each build distribution is designed to be a completely independent package, so that the tools can be built without requiring any additional files from IVMS. [There are some exceptions to this, such as CFPOST, described below.] Thus, one could have separate directory trees for the Wind-US build distribution and each of the tools build distributions. This would lead to a great deal of duplication, however. Therefore, the build distributions are designed to overlay one another. The following instructions assume that all the tools are being built in the same tree.

Note that the build procedure for the tools is somewhat more complicated than the Wind-US build process. Please report problems to the NPARC Alliance User Support Team at nparc-support@arnold.af.mil or (931) 454-7885.

  1. Download the source files.

    The Tools Makefiles bundle contains the source code for most of the smaller utilities. The CFPOST source code must be downloaded separately. Note that CFPOST requires the Madcap library files, which are included in the CFPOST build bundle.

    Some of the tools require Lua and cgnslib header files and/or libraries; these are built in steps 6 and 7 below.
  2. Transfer the source code to NAS (columbia or pleiades).
  3. Unpack the source code on NAS.

    Note that this will overwrite any changes you made to the Wind-US makefiles!
       cd $HOME/WINDUS
       gunzip -c tools.build.tar.gz | tar xvf -
       gunzip -c cfpost_prod.build.tar.gz | tar xvf -
       gunzip -c lua.4.0.1.tar.gz | tar xvf -
       gunzip -c cgnslib.2.5.5.tar.gz | tar xvf -
    
    This should extract everything to the following subdirectories:
       $HOME/WINDUS/wind-dev/tools-dev/*
       $HOME/WINDUS/wind-dev/tools-dev/cfpost_prod
       $HOME/WINDUS/lua-4.0.1
       $HOME/WINDUS/cgnslib_2.5
    
  4. Update the NAS login scripts.

    Users of the csh and tcsh shells must edit the $HOME/.cshrc or $HOME/.tcshrc file, respectively, and make sure the following lines appear:
       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.11r13
       setenv CFDROOT   "$HOME/WINDUS/wind-dev"
       setenv WIND_DEV  "$HOME/WINDUS/wind-dev"
       setenv TOOLSROOT "$HOME/WINDUS/wind-dev/tools-dev"
       source            $HOME/WINDUS/wind-dev/bin/cfd.login
       source            $HOME/WINDUS/wind-dev/tools-dev/bin/tools.login
    
    Similarly, bash, ksh, and sh users must edit their $HOME/.profile file and make sure the following lines appear:
       module load comp-intel/2013.5.192
       module load mpi-sgi/mpt.2.11r13
       CFDROOT="$HOME/WINDUS/wind-dev"            ; export CFDROOT
       WIND_DEV="$HOME/WINDUS/wind-dev"           ; export WIND_DEV
       TOOLSROOT="$HOME/WINDUS/wind-dev/tools-dev"; export TOOLSROOT
       .            $HOME/WINDUS/wind-dev/bin/cfd.profile
       .            $HOME/WINDUS/wind-dev/tools-dev/bin/tools.profile
    
    This causes the contents of the tools.login (or tools.profile) file to be executed automatically at login time, setting the TOOLSROOT environment variable required for installing and running the Wind-US tools, and modifying PATH to include the location of the proper executables.

  5. Test the NAS login scripts.

    At this point, log out and log back in. Check to see if the environment variables have been set to appropriate values, by doing:
       printenv CFDROOT
       printenv WIND_DEV
       printenv TOOLSROOT
       printenv SYSTEM
       printenv SYSTEM_CPU
       ifort --version
       icc   --version
    
    They should have values similar to:
       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev
       $HOME/WINDUS/wind-dev/tools-dev
       LINUX64-GLIBC2.11
       XEON
       ifort (IFORT) 13.1.3 20130607
       icc (ICC) 13.1.3 20130607
    
    (Note that $HOME may be expanded to your full home directory path.) If the variables are not set, or not set correctly, go back to step 4 and try again.

  6. Compile Lua.
       cd $HOME/WINDUS/lua-4.0.1
       make
    
    This should create the following files:
       $HOME/WINDUS/lua-4.0.1/bin/lua
       $HOME/WINDUS/lua-4.0.1/bin/luac
       $HOME/WINDUS/lua-4.0.1/lib/liblua.a
       $HOME/WINDUS/lua-4.0.1/lib/liblualib.a
    
  7. Compile the CGNS library.
       cd $HOME/WINDUS/cgnslib_2.5
       ./configure --prefix=$HOME/WINDUS/cgnslib_2.5 --with-system=LINUX64 --enable-64bit
       make SYSTEM=LINUX64
       mkdir include
       mkdir lib
       make install
    
    This should create the following files:
       $HOME/WINDUS/cgnslib_2.5/include/cgnslib.h
       $HOME/WINDUS/cgnslib_2.5/include/cgnslib_f.h
       $HOME/WINDUS/cgnslib_2.5/include/cgnswin_f.h
       $HOME/WINDUS/cgnslib_2.5/lib/libcgns.a
    
  8. Move into the Wind-US build directory.
       cd $WIND_DEV
    This should put you in the $HOME/WINDUS/wind-dev directory.

  9. Configure the makefiles.

    Review the contents of the makefiles, where SYSTEM and SYSTEM_CPU correspond to the SYSTEM and SYSTEM_CPU environment variables, paying special attention to the compiler and library settings (in particular, the locations of the Lua and CGNS files built above).


  10. Compile the tools source.

    On the NAS, the front-end node has the openmotif-devel-* package installed, which provides a number of header files needed to compile the Wind-US tools. The worker nodes have only the library files installed, which means the tools must be compiled on the front-end node. So, instead of using a batch script like the one used to compile Wind-US, simply compile the tools from the command line.

    Make sure you are in the build directory.

       cd $WIND_DEV
    
    Next, csh and tcsh users should do
       make all_tools |& tee make_tools.log
    
    while sh and ksh users should do
       make all_tools 2>&1 | tee make_tools.log
    

    Tools can also be compiled individually, by doing

       make tool_name
    
    where tool_name is the name of the tool. Note that the names to be used for GMAN, CFPOST, and MADCAP are gmanpre, cfpost_prod, and madcapprod, respectively.

    After compilation is complete, the following programs should appear in directory $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin

          USintrpltQ.exe   cfreset_iter.exe  gpro.exe     rplt3d
          adfedit          cfrevert.exe      gridvel.exe  scan
          cfappend.exe     cfsequence.exe    icees        terp
          cfaverage.exe    cfspart           jormak.exe   thplt.exe
          cfbeta.exe       cfsplit.exe       lstvars      timplt.exe
          cfcnvt           cfsubset.exe      mpigetnzone  tmptrn.exe
          cfcombine.exe    cfunsequence.exe  newtmp       usplit-hybrid.exe
          cflistnum        cfview.exe        npair        windpar.exe
          cfnav.exe        chmgr.exe         parcnl       wnparc
          cfpart.exe       decompose.exe     readcf       wplt3d
          cfpost_prod.exe  delvars           resplt.exe   writcf
          cfreorder.exe    fpro              rnparc
    

    Depending on the size of the array parameters requested, thplt.exe might not build with the default memory model. If it was not created, edit $WIND_DEV/source/makefiles/Makefile.include.$SYSTEM.$SYSTEM_CPU.opt to use the following ABI settings:

       ABI=       -Zp8 -pc80 -fp-model strict -fno-alias -heap-arrays -mcmodel medium -traceback
    
    and recompile just that tool. From the build directory, remove the old object file.
       cd $WIND_DEV
       rm -f OBJECTS/$SYSTEM/$SYSTEM_CPU/thplt.o
    
    Next, csh and tcsh users should do
       make thplt |& tee make_thplt.log
    
    while sh and ksh users should do
       make thplt 2>&1 | tee make_thplt.log
    
    Check $WIND_DEV/$SYSTEM/$SYSTEM_CPU/bin to confirm that thplt.exe was created.
  11. In order for the tool scripts to locate the executables, they must be installed in the proper location. To install the executables, do:
       make install_tools
    This copies the tools executables to $TOOLSROOT/$SYSTEM/$SYSTEM_CPU/bin.

  12. If this is a new installation, it would probably be best to log out and log back in again before running any of the tools. This executes the shell start-up scripts, modifying the PATH environment variable to include the newly-created location for the tools executables.

Running Wind-US on the NAS

  1. Make a directory containing your Wind-US input files:
       run.dat
       run.cgd
       run.mpc
       run.lis  (if continuing from a previous solution)
       run.cfl  (if continuing from a previous solution)
    
    The run.mpc file should have the following form:
       / Wind-US parallel processing file for NAS.
       / Currently set to use 2 nodes with 12 processors each
       /                  and 1 node  with  6 processors
       /                  for a total of   30 cores.
       /
       host localhost nproc 12
       host localhost nproc 12
       host localhost nproc 6
    
    Each type of NAS compute node has a different number of processor cores. For example, the Westmere nodes have 12 cores, so each host entry above specifies at most nproc 12. The user will need to experiment to determine whether the best performance is obtained when all of the cores on a given host are used (12+12+6=30) or when the hosts are most evenly balanced (10+10+10=30); the relative cost of internode versus intranode communication may depend on the mesh.
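
    For example, the balanced 10+10+10 alternative would be specified as:
       host localhost nproc 10
       host localhost nproc 10
       host localhost nproc 10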

    Users might also want to include a checkpoint command in the above file so that the worker solutions are sent to the master process at regular intervals. Please see the User's Manual for more details on the format and features of the parallel processing file.

  2. Start the Wind-US script with one of the following commands:
       wind -runinplace -cl -usessh -mpmode MPI -mpiver SGI
       wind -runinplace -cl -usessh -mpmode PVM
    

    When the wind script is started, it prompts for the solver run wall clock time and the termination processing time. The maximum solver execution time is the solver run wall clock time minus the termination processing time; for example, a 40-hour wall clock time with a 1-hour termination processing time allows the solver to run for at most 39 hours. When the Wind-US run job is submitted, it will create a preNDSTOP file. When the maximum solver execution time has expired, this file will be renamed NDSTOP, forcing Wind-US to begin a graceful shutdown.
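
    The same mechanism can be used to stop a run early by hand: creating an NDSTOP file in the run directory causes Wind-US to shut down gracefully at the end of the current cycle (a sketch; see the User's Guide for details):
       touch NDSTOP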

    Users should make sure that the termination processing time is sufficient to allow Wind-US to complete the termination process. At the end of the *.lis file, there is a summary indicating the time spent during execution and termination.

    Users should also make sure that they request adequate walltime in the queue to which they submit their jobs, as detailed in the next step. Otherwise, the queuing system will terminate Wind-US, resulting in a less than graceful shutdown.

  3. Edit the run.job.pl file. If you see a line like the following near the top of the file:
       #PBS -l nodes=1234:ppn=2
    
    replace it with
       #PBS -l select=2:ncpus=12+1:ncpus=6
       #PBS -l walltime=40:00:00
       #PBS -m e
    
    This will select 2 nodes with 12 CPUs and 1 node with 6 CPUs, which matches the request in the run.mpc file. The walltime can be adjusted as desired, either as hh:mm:ss or as an integer number of seconds, and the last line will send you an email when your job completes.
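
    To request a specific processor type, append the model name from the table above to each chunk of the select statement, e.g., for Westmere nodes:
       #PBS -l select=2:ncpus=12:model=wes+1:ncpus=6:model=wes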

    Make sure to request at least as much walltime as was specified in the Wind-US prompts above, because the queuing system does not terminate jobs as cleanly as Wind-US does.

  4. If you plan on resubmitting the same job again later (i.e., you want to run 10000 cycles now and 10000 cycles later) you can save a copy of the run script.
       cp -p run.job.pl run.job.pl.bak
    
    To resubmit later, you can skip the above steps and simply reuse the run script.
       cp -p run.job.pl.bak run.job.pl
    
    Note that if you increase the number of cycles in your *.dat file, you may need to adjust the run time specified in the job file.

  5. Submit the job to the long queue with the command:
       qsub -q long run.job.pl
    
    Some other useful commands are:

       Command          Action
       qstat -q         List all queue names and run limits.
       qstat -a long    List all jobs running in the long queue.
       qstat -u USER    List all jobs running for username USER.
       qdel JOBNAME     Delete the job named JOBNAME. Useful if Wind-US has not yet started.

  6. In order to improve I/O performance for large jobs, Wind-US 3.0 uses a newer ADF library than its predecessors. Grid and solution files used with Wind-US 3.0 will automatically be upgraded to the new format, and the tools compiled in the above steps will also work with the new file structure. However, if you transfer the grid or solution file(s) back to your local workstation, your existing tools may not be able to read them. If you experience this problem, you should upgrade the tools at your local site.


Last updated 20 Aug 2015