====== SPECFEM3D_GLOBE ======

==== Running SPECFEM3D_GLOBE on the Strasbourg HPC cluster with gnu 4.8 and cuda 7.5 ====
Set up the environment as follows:
<code>
module load batch/slurm
module load compilers/cuda-7.5
export CUDA_INC=/usr/local/cuda/cuda-7.5/include
export CUDA_LIB=/usr/local/cuda/cuda-7.5/lib64
export PATH=/rpriv/ipgs/zac/openmpi-1.10.7/bin:$PATH
export LD_LIBRARY_PATH=/rpriv/ipgs/zac/openmpi-1.10.7/lib:$LD_LIBRARY_PATH
</code>
Notice that we use the default GNU compiler of the operating system.
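As a quick sanity check (a sketch only; the exact version strings depend on the node you are logged in on), you can verify that the toolchain matches what the modules and exports above provide:
<code>
gcc --version    # system default GNU compiler (expected 4.8.x)
nvcc --version   # expected CUDA 7.5
which mpirun     # expected /rpriv/ipgs/zac/openmpi-1.10.7/bin/mpirun
</code>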
  
=== Compilation ===
Before compilation, make sure that the required modules are loaded and that the CUDA_LIB and CUDA_INC environment variables are set (see the previous section). Create a run directory containing the subdirectories ''DATABASE_MPI'', ''OUTPUT_FILES'', ''bin'' and ''DATA''.
In the directory ''DATA'', create the ''CMTSOLUTION'', ''Par_file'' and ''STATIONS'' files (cf. the SPECFEM3D_GLOBE documentation).
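For illustration, assuming a run directory called ''run0001'' (the name is arbitrary) and input files prepared elsewhere, the layout can be created and populated like this:
<code>
# create the run directory layout
mkdir -p run0001/DATABASE_MPI run0001/OUTPUT_FILES run0001/bin run0001/DATA

# copy the input files into DATA (source paths are placeholders)
cp /path/to/CMTSOLUTION /path/to/Par_file /path/to/STATIONS run0001/DATA/
</code>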
  
At the end of the Slurm job script, the elapsed time of the job is printed:
<code>
echo CPUtime  : $(squeue -j $SLURM_JOBID -o "%M" -h)   # HH:MM:SS
</code>
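Once the job script is ready, it is submitted and monitored with the usual Slurm commands (the script name ''specfem.slurm'' below is only an example):
<code>
sbatch specfem.slurm      # submit the job
squeue -u $USER           # check the state of your jobs in the queue
scontrol show job JOBID   # detailed information on a given job
</code>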
==== Running multiple SPECFEM3D_GLOBE jobs in parallel ====

Below are instructions for using custom scripts that run several SEM simulations in parallel on the HPC cluster.

=== Preparing the input files ===

First, create an event list ''Events.txt'' with 3 columns (see the example after this list):
  * 1st column: event_id (will also be the name of the run directory)
  * 2nd column: path to the ''CMTSOLUTION'' file for this event
  * 3rd column: path to the ''STATIONS'' file for this event (can be the same for all events)
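For illustration, here is a hypothetical ''Events.txt'' with two events (event IDs and paths are made up; columns are whitespace-separated):
<code>
event001  /b/home/eost/user/events/event001/CMTSOLUTION  /b/home/eost/user/STATIONS
event002  /b/home/eost/user/events/event002/CMTSOLUTION  /b/home/eost/user/STATIONS
</code>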
Then you must set up a ''Par_file'' (be careful to use a version of ''Par_file'' that is compatible with your SEM version).

Finally, you must set up hostfiles named ''nodelistN'', where N=0,...,Np-1 (Np being the number of parallel SEM simulations). These files must specify the host names and the number of slots per node. Here is an example:
<code>
$ cat nodelist0
hpc-n443 slots=8
hpc-n444 slots=8
hpc-n445 slots=8
</code>
(see ''/b/home/eost/zac/jobs/specfem/parallelSEM/nodelist0'')
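Before writing the hostfiles, it may help to check which nodes expose GPUs and whether they are free. One possibility (the exact output and GRES names are cluster-specific) is:
<code>
sinfo -N -o "%N %G %t"   # node name, generic resources (e.g. gpu:...), node state
</code>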

=== Running the simulations in parallel ===

Parallel SEM simulations are handled using 3 scripts:
  * ''parallelSEM.sh'': the main script, which compiles the code and runs the simulations
  * ''run_gpu_nodelist.sh'': the script used to run the mesher and the solver
  * ''sleep.slurm'': a script used to reserve the GPU nodes
All these scripts are available in ''/b/home/eost/zac/jobs/specfem/parallelSEM''.

Before running your job, make sure that the input parameters in ''parallelSEM.sh'' are consistent with the input files prepared above (see ''INPUT PARAMS'' in the main script, and the sketch after this list). Specifically:
  * ''SPECFEMDIR'': path to the SPECFEM3D_GLOBE directory
  * ''Par_file'': path to the ''Par_file'' used in the simulations
  * ''Nparallel'': number of SEM simulations run in parallel (make sure enough GPUs are available)
  * ''event_list'': list of events in the format given above
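As a rough sketch (the values below are placeholders; only the variable names come from the list above), the ''INPUT PARAMS'' block could look like:
<code>
# INPUT PARAMS (example values only)
SPECFEMDIR=/b/home/eost/user/SPECFEM3D_GLOBE   # SPECFEM3D_GLOBE source directory
Par_file=/b/home/eost/user/runs/Par_file       # Par_file used for all simulations
Nparallel=3                                    # number of simultaneous SEM runs
event_list=/b/home/eost/user/runs/Events.txt   # event list in the format above
</code>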

Then run your simulations:
<code>
./parallelSEM.sh
</code>
The script will make sure that the GPU nodes are available before launching SPECFEM3D_GLOBE.
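Since the full sequence of simulations can take a long time, it can be convenient (this is only a suggestion, not part of the original scripts) to launch the script detached from the terminal and keep a log:
<code>
nohup ./parallelSEM.sh > parallelSEM.log 2>&1 &
tail -f parallelSEM.log   # follow the progress
</code>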