====== SPECFEM3D_GLOBE ======

==== Running SPECFEM3D_GLOBE on the Strasbourg HPC cluster with gnu 4.8 and cuda 7.5 ====
<code>
module load batch/slurm
module load compilers/cuda-7.5
export CUDA_INC=/usr/local/cuda/cuda-7.5/include
export CUDA_LIB=/usr/local/cuda/cuda-7.5/lib64
export PATH=/rpriv/ipgs/zac/openmpi-1.10.7/bin:$PATH
export LD_LIBRARY_PATH=/rpriv/ipgs/zac/openmpi-1.10.7/lib:$LD_LIBRARY_PATH
</code>
Notice that we use the default GNU compiler of the operating system:
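A quick way to verify that the toolchain above is actually picked up (these check commands are not part of the original recipe, just a suggested sanity check):
<code>
# should report the system GNU compiler (gcc 4.8.x)
gcc --version

# should report CUDA 7.5
nvcc --version

# should point to the OpenMPI 1.10.7 installation added to PATH above
which mpirun
</code>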

=== Compilation ===
Before compilation, make sure that the required modules are loaded and that the CUDA_LIB and CUDA_INC environment variables are declared (see the previous section). Create a run directory containing the directories ''DATABASE_MPI'', ''OUTPUT_FILES'', ''bin'' and ''DATA''.
In the directory ''DATA'', create the ''CMTSOLUTION'', ''Par_file'' and ''STATIONS'' files (cf. the SPECFEM3D_GLOBE documentation).
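For example, a run directory could be prepared like this (the directory name ''run001'' and the source paths are placeholders):
<code>
mkdir -p run001/DATABASE_MPI run001/OUTPUT_FILES run001/bin run001/DATA
cp /path/to/CMTSOLUTION /path/to/Par_file /path/to/STATIONS run001/DATA/
</code>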
  
  
<code>
# configure
./configure -with-cuda=cuda5

# compiles for a forward simulation
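# (reminder of the usual SPECFEM3D_GLOBE build targets; treat as an
#  assumption and check the manual of your SEM version if they differ)
make clean
make xmeshfem3D
make xspecfem3D
</code>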
<code>
echo CPUtime  : $(squeue -j $SLURM_JOBID -o "%M" -h)   # HH:MM:SS
</code>
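The job script is then submitted through SLURM; a minimal example (the file name ''specfem_gpu.slurm'' is only a placeholder for your own script):
<code>
# submit the job script and note the job id
sbatch specfem_gpu.slurm

# monitor the job in the queue
squeue -u $USER
</code>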

==== Running multiple SPECFEM3D_GLOBE jobs in parallel ====

To launch a batch of SEM simulations:

First, create an event list ''Events.txt'' with 3 columns (see the example below):
  * 1st column: event_id (will also be the name of the run directory)
  * 2nd column: path to the ''CMTSOLUTION'' file for this event
  * 3rd column: path to the ''STATIONS'' file for this event (can be the same for all events)
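A hypothetical ''Events.txt'' could look like this (event ids and paths are invented for illustration):
<code>
event001  /home/user/events/event001/CMTSOLUTION  /home/user/events/STATIONS
event002  /home/user/events/event002/CMTSOLUTION  /home/user/events/STATIONS
</code>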

Then set up a ''Par_file'' as described above (be careful to use a version of ''Par_file'' that is compatible with your SEM version).

The SPECFEM runs are handled by 3 scripts:
  * ''parallelSEM.sh'' is the main script
  * ''run_gpu_nodelist.sh'' is the script used to run the mesher and the solver
  * ''sleep.slurm'' is a script to reserve the GPU nodes
All these scripts are available in ''/b/home/eost/zac/jobs/specfem/parallelSEM''.

Before running it, make sure the input parameters in ''parallelSEM.sh'' are consistent with the input parameters stated above (see ''INPUT PARAMS'' in the main script). Then run:
<code>
./parallelSEM.sh
</code>
The script will make sure that the GPU nodes are available before launching SPECFEM3D_GLOBE.
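
To illustrate the idea (this is //not// the actual ''parallelSEM.sh'', just a sketch of how the event list could be turned into per-event run directories; all paths are placeholders):
<code>
#!/bin/bash
# Sketch: read Events.txt and prepare one run directory per event
while read -r event_id cmt_path sta_path; do
    # skip empty lines and comment lines
    [[ -z "$event_id" || "$event_id" == \#* ]] && continue

    mkdir -p "$event_id"/DATABASE_MPI "$event_id"/OUTPUT_FILES "$event_id"/bin "$event_id"/DATA
    cp "$cmt_path" "$event_id"/DATA/CMTSOLUTION
    cp "$sta_path" "$event_id"/DATA/STATIONS
    cp Par_file    "$event_id"/DATA/Par_file

    # the real workflow then reserves GPU nodes (sleep.slurm) and runs the
    # mesher and solver through run_gpu_nodelist.sh
done < Events.txt
</code>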