
Parallel (MPI/OpenMP) simulation


#1825

Hi everyone!

    I am interested in running simulations in parallel. Chapter 10 of the user guide (The example programs) says that all the examples can be run in parallel, with either MPI or OpenMP. I also saw a post from 2012:

    Quote:
    Quote from Padde86 on August 22, 2012, 12:12: Problem with MPI execution an LBM algorithm

    There it is explained that the CXX flag must be changed in the "Makefile.inc". However, I do not know how to do this, nor how to run a parallel simulation from the terminal.

    I am trying to simulate the cavity2d example in the parallel folder. This is the "Makefile.inc" code:

    Code:
    ###########################################################################
    ###########################################################################
    ## DEFINITIONS TO BE CHANGED

    ROOT := ../../..
    SRC :=
    OUTPUT := cavity2d

    ###########################################################################
    ## definitions

    include $(ROOT)/Makefile.inc

    OBJECTS := $(foreach file, $(SRC) $(OUTPUT), $(PWD)/$(file).o)
    DEPS := $(foreach file, $(SRC) $(OUTPUT), $(PWD)/$(file).d)

    ###########################################################################
    ## all

    all : depend compile updatelib link


    ###########################################################################
    ## dependencies

    depend : $(DEPS)

    $(PWD)/%.d : %.cpp
    	@echo Create dependencies for $<
    	@$(SHELL) -ec '$(CXX) -M $(CXXFLAGS) $(IDIR) $< \
    		| sed -e "s!$*.o!$(PWD)/$*.o!1" > .tmpfile; \
    		cp -f .tmpfile $@;'

    ###########################################################################
    ## compile

    compile : $(OBJECTS)

    $(PWD)/%.o: %.cpp
    	@echo Compile $<
    	$(CXX) $(CXXFLAGS) $(IDIR) -c $< -o $@

    ###########################################################################
    ## clean

    clean : cleanrub cleanobj cleandep

    cleanrub:
    	@echo Clean rubbish files
    	@rm -f *~ core .tmpfile tmp/*.* $(OUTPUT)
    	@rm -f tmp/vtkData/*.* tmp/vtkData/data/*.* tmp/imageData/*.*

    cleanobj:
    	@echo Clean object files
    	@rm -f $(OBJECTS)

    cleandep:
    	@echo Clean dependencies files
    	@rm -f $(DEPS)

    cleanbuild:
    	@echo Clean olb main
    	@cd $(ROOT); \
    	$(MAKE) cleanbuild;

    ###########################################################################
    ## update lib

    updatelib :
    	@cd $(ROOT); \
    	$(MAKE) all;

    ###########################################################################
    ## link

    link: $(OUTPUT)

    $(OUTPUT): $(OBJECTS) $(ROOT)/$(LIBDIR)/lib$(LIB).a
    	@echo Link $@
    	$(CXX) $(foreach file, $(SRC), $(file).o) $@.o $(LDFLAGS) -L$(ROOT)/$(LIBDIR) -l$(LIB) -o $@

    ###########################################################################
    ## include dependencies

    ifneq "$(strip $(wildcard *.d))" ""
     include $(foreach file,$(DEPS),$(file))
    endif

    ###########################################################################
    ###########################################################################

    1. Where must MPI/OpenMP be defined in this code?
    2. Which commands are needed to run this example in MPI or OpenMP mode? How do I proceed in the terminal?

    Best regards,

    Alejandro

    #2325
    robin.trunk
    Keymaster

    Hi Alejandro,

    "Makefile.inc" refers to the file in the OpenLB main folder, since the whole library has to be compiled again for parallel execution. The important parts are:

    CXX := g++
    #CXX := icpc -D__aligned__=ignored
    #CXX := mpiCC
    #CXX := mpic++

    PARALLEL_MODE := OFF
    #PARALLEL_MODE := MPI
    #PARALLEL_MODE := OMP
    #PARALLEL_MODE := HYBRID

    Here you can choose the mode and the compiler by removing the "#" in the corresponding lines. To clean the previously compiled library, use make cleanbuild. To execute the program in parallel with MPI, use mpirun as documented in the manual.

    Best regards
    Robin

    #2326

    Hi Robin,

    Thanks for your answer. I changed the "Makefile.inc" like this:

    Code:
    CXX := g++
    #CXX := icpc -D__aligned__=ignored
    #CXX := mpiCC
    CXX := mpic++

    ###########################################

    PARALLEL_MODE := OFF
    PARALLEL_MODE := MPI
    #PARALLEL_MODE := OMP
    #PARALLEL_MODE := HYBRID

    I tested these changes from the "cavity2d/parallel" directory. As indicated in the user manual (page 29), I executed these commands:

    Code:
    make clean


    Code:
    make cleanbuild


    Code:
    make

    Then I ran the simulation with 3 cores and with 4 cores in order to compare the time taken in each case. I used

    Code:
    mpirun -np x ./cavity2d

    where "x" is the number of cores, in this case 3 or 4. It seems that I get the same problem reported by Padde86 in the topic

    Quote:
    Problem with MPI execution an LBM algorithm

    The 4-core simulation takes more time than the 3-core one. The "timer" results, obtained by varying only the number of cores in the parallel simulation, are:

    4 cores

    Code:
    [LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
    [Timer] step=18816; percent=98; passedTime=151.523; remTime=3.09231; MLUPs=2.4942
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 151.619s
    [Timer] measured time (cpu): 141.128s
    [Timer] average MLUPs : 2.107
    [Timer] average MLUPps: 2.107
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=151.653; remTime=1.01781; MLUPs=2.68606
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 151.738s
    [Timer] measured time (cpu): 142.437s
    [Timer] average MLUPs : 2.106
    [Timer] average MLUPps: 2.106
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=18944; t=14.8; uMax=0.1; avEnergy=0.000347976; avRho=1
    [Timer] step=18944; percent=98.6667; passedTime=152.063; remTime=2.05491; MLUPs=3.94453
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 152.102s
    [Timer] measured time (cpu): 142.303s
    [Timer] average MLUPs : 2.101
    [Timer] average MLUPps: 2.101
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=152.471; remTime=1.0233; MLUPs=5.22071
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 152.866s
    [Timer] measured time (cpu): 140.036s
    [Timer] average MLUPs : 2.090
    [Timer] average MLUPps: 2.090
    [Timer] ---------------------------------------------

    3 cores

    Code:
    [LatticeStatistics] step=18560; t=14.5; uMax=0.1; avEnergy=0.000344785; avRho=1
    [Timer] step=18560; percent=96.6667; passedTime=109.805; remTime=3.78638; MLUPs=3.22735
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 110.165s
    [Timer] measured time (cpu): 108.535s
    [Timer] average MLUPs : 2.900
    [Timer] average MLUPps: 2.900
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=18688; t=14.6; uMax=0.1; avEnergy=0.000345858; avRho=1
    [Timer] step=18688; percent=97.3333; passedTime=110.375; remTime=3.02397; MLUPs=3.73693
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=110.421; remTime=0.741081; MLUPs=3.24208
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 110.860s
    [Timer] measured time (cpu): 109.294s
    [Timer] average MLUPs : 2.882
    [Timer] average MLUPps: 2.882
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
    [Timer] step=18816; percent=98; passedTime=110.844; remTime=2.26212; MLUPs=4.55138
    [LatticeStatistics] step=18944; t=14.8; uMax=0.1; avEnergy=0.000347976; avRho=1
    [Timer] step=18944; percent=98.6667; passedTime=111.24; remTime=1.50324; MLUPs=5.37891
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=111.627; remTime=0.749174; MLUPs=5.504
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 112.14s
    [Timer] measured time (cpu): 110.124s
    [Timer] average MLUPs : 2.852
    [Timer] average MLUPps: 2.852
    [Timer] ---------------------------------------------

    If I understand correctly, the point of a parallel simulation is to divide the load among the different cores so as to get the result in less time (I know this is a very simplified view).

    1. So why is the measured time with 3 cores less than with 4 cores?

    2. Am I running a sequential simulation?

    3. How can I be sure that the simulation is running in parallel, and not the same case x times sequentially?

    The machine that I am using has these characteristics:

    Code:
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                4
    On-line CPU(s) list:   0-3
    Thread(s) per core:    2
    Core(s) per socket:    1
    Socket(s):             2
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            15
    Model:                 4
    Stepping:              10
    CPU MHz:               3200.172
    BogoMIPS:              6400.71
    L1d cache:             16K
    L2 cache:              2048K
    NUMA node0 CPU(s):     0-3

    Best regards,

    Alejandro

    #2327
    mathias
    Keymaster

    Dear Alejandro,

    From the output it is clear that you are running the sequential code, not the parallel one. That is why the performance is poor. After "make clean", "make cleanbuild" and "make", was the example really recompiled? Please check the settings in the Makefile.inc again, remove the executable by hand and try "make clean", "make cleanbuild" and "make" again.

    Best
    Mathias
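    One quick way to check whether the executable was actually linked against MPI (a hedged sketch, assuming a dynamically linked build; ldd simply lists the shared libraries an executable depends on):

    Code:
    # the Open MPI libraries should show up here if cavity2d was built with mpic++ and PARALLEL_MODE := MPI
    ldd ./cavity2d | grep -i mpi

    If nothing MPI-related appears, the binary is still the sequential one.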

    #2328

    Hello Mathias,

    I checked the "Makefile.inc" and the definitions are:

    Code:
    ## DEFINITIONS TO BE CHANGED

    CXX := g++
    #CXX := icpc -D__aligned__=ignored
    #CXX := mpiCC
    CXX := mpic++

    OPTIM := -O3 -Wall
    DEBUG := -g -DOLB_DEBUG

    CXXFLAGS := $(OPTIM)
    #CXXFLAGS := $(DEBUG)

    # to enable std::shared_ptr in functor arithmetik
    # works in gcc 4.3 and later, source https://gcc.gnu.org/projects/cxx0x.html
    CXXFLAGS += -std=c++0x
    # works in gcc 4.7 and later (recommended)
    #CXXFLAGS += -std=c++11

    #CXXFLAGS += -fdiagnostics-color=auto
    #CXXFLAGS += -std=gnu++14

    ARPRG := ar
    #ARPRG := xiar # mandatory for intel compiler

    LDFLAGS :=


    PARALLEL_MODE := OFF
    PARALLEL_MODE := MPI
    #PARALLEL_MODE := OMP
    #PARALLEL_MODE := HYBRID

    MPIFLAGS :=
    OMPFLAGS := -fopenmp

    BUILDTYPE := precompiled
    #BUILDTYPE := generic

    I removed the executables from the cavity2d parallel and cylinder2d directories by hand. Then I ran "make clean", "make cleanbuild" and "make". The result is the same as before:

    Cavity2d (parallel), 4 cores

    Code:
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 152.160s
    [Timer] measured time (cpu): 133.202s
    [Timer] average MLUPs : 2.100
    [Timer] average MLUPps: 2.100
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=18304; t=14.3; uMax=0.1; avEnergy=0.000342504; avRho=1
    /*****************************************************************************/
    [LatticeStatistics] step=18688; t=14.6; uMax=0.1; avEnergy=0.000345858; avRho=1
    [Timer] step=18688; percent=97.3333; passedTime=154.379; remTime=4.22956; MLUPs=3.46913
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 154.859s
    [Timer] measured time (cpu): 137.808s
    [Timer] average MLUPs : 2.063
    [Timer] average MLUPps: 2.063
    [Timer] ---------------------------------------------
    [LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
    /******************************************************************************/
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=155.8; remTime=1.04564; MLUPs=4.76521
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 156.165s
    [Timer] measured time (cpu): 135.256s
    [Timer] average MLUPs : 2.046
    [Timer] average MLUPps: 2.046
    [Timer] ---------------------------------------------
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 156.214s
    [Timer] measured time (cpu): 136.896s
    [Timer] average MLUPs : 2.045
    [Timer] average MLUPps: 2.045
    [Timer] ---------------------------------------------

    Cavity2d (parallel), 3 cores

    Code:
    [LatticeStatistics] step=18176; t=14.2; uMax=0.1; avEnergy=0.000341447; avRho=1
    [Timer] step=18176; percent=94.6667; passedTime=119.872; remTime=6.75335; MLUPs=2.72733
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 120.176s
    [Timer] measured time (cpu): 113.472s
    [Timer] average MLUPs : 2.659
    [Timer] average MLUPps: 2.659
    [Timer] ---------------------------------------------
    /**************************************************************************************/
    [LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
    [Timer] step=19072; percent=99.3333; passedTime=124.519; remTime=0.835698; MLUPs=3.49762
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 124.710s
    [Timer] measured time (cpu): 118.398s
    [Timer] average MLUPs : 2.562
    [Timer] average MLUPps: 2.562
    [Timer] ---------------------------------------------
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 124.978s
    [Timer] measured time (cpu): 118.115s
    [Timer] average MLUPs : 2.557
    [Timer] average MLUPps: 2.557
    [Timer] ---------------------------------------------

    Cylinder2d, 4 cores

    Code:
    [Timer] step=30800; percent=96.25; passedTime=470.8; remTime=18.3429; MLUPs=2.89811
    [LatticeStatistics] step=30800; t=15.4; uMax=0.0408193; avEnergy=0.000245803; avRho=1.00062
    [getResults] pressure1=0.134373; pressure2=0.0159914; pressureDrop=0.118382; drag=5.63413; lift=0.0122223
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 473.156s
    [Timer] measured time (cpu): 362.941s
    [Timer] average MLUPs : 2.489
    [Timer] average MLUPps: 2.489
    [Timer] ---------------------------------------------
    /******************************************************************************************************/
    [Timer] step=31400; percent=98.125; passedTime=478.282; remTime=9.13915; MLUPs=3.07357
    [LatticeStatistics] step=31400; t=15.7; uMax=0.0408199; avEnergy=0.000245815; avRho=1.00059
    [getResults] pressure1=0.132571; pressure2=0.0142374; pressureDrop=0.118333; drag=5.632; lift=0.0122383
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 479.666s
    [Timer] measured time (cpu): 364.392s
    [Timer] average MLUPs : 2.455
    [Timer] average MLUPps: 2.455
    [Timer] ---------------------------------------------
    /******************************************************************************************************/
    [Timer] step=31800; percent=99.375; passedTime=482.199; remTime=3.0327; MLUPs=3.8062
    [LatticeStatistics] step=31800; t=15.9; uMax=0.040816; avEnergy=0.000245588; avRho=1.00057
    [getResults] pressure1=0.13189; pressure2=0.0135893; pressureDrop=0.118301; drag=5.63043; lift=0.0122372
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 483.783s
    [Timer] measured time (cpu): 363.782s
    [Timer] average MLUPs : 2.435
    [Timer] average MLUPps: 2.435
    [Timer] ---------------------------------------------
    [Timer]
    [Timer] ----------------Summary:Timer----------------
    [Timer] measured time (rt) : 483.808s
    [Timer] measured time (cpu): 364.824s
    [Timer] average MLUPs : 2.434
    [Timer] average MLUPps: 2.434
    [Timer] ---------------------------------------------

    I also checked the MPI version installed on the machine:

    Code:
    Package: Open MPI buildd@allspice Distribution
    Open MPI: 1.6.5
    Open MPI SVN revision: r28673
    Open MPI release date: Jun 26, 2013
    Open RTE: 1.6.5
    Open RTE SVN revision: r28673
    Open RTE release date: Jun 26, 2013
    OPAL: 1.6.5
    OPAL SVN revision: r28673
    OPAL release date: Jun 26, 2013
    MPI API: 2.1
    Ident string: 1.6.5
    Prefix: /usr
    Configured architecture: x86_64-pc-linux-gnu
    Configure host: allspice
    Configured by: buildd
    Configured on: Sat Dec 28 23:38:31 UTC 2013
    Configure host: allspice
    Built by: buildd
    Built on: Sat Dec 28 23:41:47 UTC 2013
    Built host: allspice
    C bindings: yes
    C++ bindings: yes
    Fortran77 bindings: yes (all)
    Fortran90 bindings: yes
    Fortran90 bindings size: small
    C compiler: gcc
    C compiler absolute: /usr/bin/gcc
    C compiler family name: GNU
    C compiler version: 4.8.2
    C++ compiler: g++
    C++ compiler absolute: /usr/bin/g++
    Fortran77 compiler: gfortran
    Fortran77 compiler abs: /usr/bin/gfortran
    Fortran90 compiler: gfortran
    Fortran90 compiler abs: /usr/bin/gfortran
    C profiling: yes
    C++ profiling: yes
    Fortran77 profiling: yes
    Fortran90 profiling: yes
    C++ exceptions: no
    Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
    Sparse Groups: no
    Internal debug support: no
    MPI interface warnings: no
    MPI parameter check: runtime
    Memory profiling support: no
    Memory debugging support: no
    libltdl support: yes
    Heterogeneous support: yes
    mpirun default --prefix: no
    MPI I/O support: yes
    MPI_WTIME support: gettimeofday
    Symbol vis. support: yes
    Host topology support: yes
    MPI extensions: affinity example
    FT Checkpoint support: yes (checkpoint thread: yes)
    VampirTrace support: no
    MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
    MPI_MAX_OBJECT_NAME: 64
    MPI_MAX_INFO_KEY: 36
    MPI_MAX_INFO_VAL: 256
    MPI_MAX_PORT_NAME: 1024
    MPI_MAX_DATAREP_STRING: 128
    MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.5)
    MCA memory: linux (MCA v2.0, API v2.0, Component v1.6.5)
    MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.5)
    MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.6.5)
    MCA carto: file (MCA v2.0, API v2.0, Component v1.6.5)
    MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.6.5)
    MCA shmem: posix (MCA v2.0, API v2.0, Component v1.6.5)
    MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.6.5)
    MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.5)
    MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.5)
    MCA timer: linux (MCA v2.0, API v2.0, Component v1.6.5)
    MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.5)
    MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.5)
    MCA sysinfo: linux (MCA v2.0, API v2.0, Component v1.6.5)
    MCA hwloc: external (MCA v2.0, API v2.0, Component v1.6.5)
    MCA crs: none (MCA v2.0, API v2.0, Component v1.6.5)
    MCA dpm: orte (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.6.5)
    MCA allocator: basic (MCA v2.0, API v2.0, Component v1.6.5)
    MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: basic (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: inter (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: self (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: sm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: sync (MCA v2.0, API v2.0, Component v1.6.5)
    MCA coll: tuned (MCA v2.0, API v2.0, Component v1.6.5)
    MCA io: romio (MCA v2.0, API v2.0, Component v1.6.5)
    MCA mpool: fake (MCA v2.0, API v2.0, Component v1.6.5)
    MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.6.5)
    MCA mpool: sm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pml: bfo (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pml: crcpw (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pml: csum (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.6.5)
    MCA pml: v (MCA v2.0, API v2.0, Component v1.6.5)
    MCA bml: r2 (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rcache: vma (MCA v2.0, API v2.0, Component v1.6.5)
    MCA btl: ofud (MCA v2.0, API v2.0, Component v1.6.5)
    MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5)
    MCA btl: self (MCA v2.0, API v2.0, Component v1.6.5)
    MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.5)
    MCA topo: unity (MCA v2.0, API v2.0, Component v1.6.5)
    MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.6.5)
    MCA osc: rdma (MCA v2.0, API v2.0, Component v1.6.5)
    MCA crcp: bkmrk (MCA v2.0, API v2.0, Component v1.6.5)
    MCA iof: hnp (MCA v2.0, API v2.0, Component v1.6.5)
    MCA iof: orted (MCA v2.0, API v2.0, Component v1.6.5)
    MCA iof: tool (MCA v2.0, API v2.0, Component v1.6.5)
    MCA oob: tcp (MCA v2.0, API v2.0, Component v1.6.5)
    MCA odls: default (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ras: cm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ras: slurm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ras: tm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rmaps: topo (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA rml: oob (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: binomial (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: cm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: direct (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: linear (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: radix (MCA v2.0, API v2.0, Component v1.6.5)
    MCA routed: slave (MCA v2.0, API v2.0, Component v1.6.5)
    MCA plm: rsh (MCA v2.0, API v2.0, Component v1.6.5)
    MCA plm: slurm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA plm: tm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA snapc: full (MCA v2.0, API v2.0, Component v1.6.5)
    MCA filem: rsh (MCA v2.0, API v2.0, Component v1.6.5)
    MCA errmgr: default (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: env (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: hnp (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: singleton (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: slave (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: slurm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: slurmd (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: tm (MCA v2.0, API v2.0, Component v1.6.5)
    MCA ess: tool (MCA v2.0, API v2.0, Component v1.6.5)
    MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.6.5)
    MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.6.5)
    MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6.5)
    MCA notifier: command (MCA v2.0, API v1.0, Component v1.6.5)
    MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6.5)

    1. Do you have any idea what might be preventing the simulation from running in parallel?

    Best regards,

    Alejandro

    #2331
    robin.trunk
    Keymaster

    Hi Alejandro,

    That's strange. What operating system are you using? Ubuntu? Windows with Cygwin?

    Best regards
    Robin

    #2332

    Hi Robin,

    The operating system that I am using is Ubuntu:

    Code:
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 14.04.2 LTS
    Release:        14.04
    Codename:       trusty

    Best regards,

    Alejandro

    #2333
    mathias
    Keymaster

    Can you send us the compilation protocol? Can you also try to run a simple MPI program like "hello world"?

    Best
    Mathias
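    For example, a minimal MPI hello world could look like the following sketch (generic MPI test code, not part of the OpenLB examples; compile it with mpic++ hello.cpp -o hello and run it with mpirun -np 4 ./hello):

    Code:
    // hello.cpp - minimal MPI test program (sketch)
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);                // start the MPI runtime

      int rank = 0;
      int size = 1;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // id of this process
      MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

      std::cout << "Hello from rank " << rank << " of " << size << std::endl;

      MPI_Finalize();                        // shut down the MPI runtime
      return 0;
    }

    If this prints one line per rank with different rank numbers, MPI itself is working.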

    #2334

    Hi Mathias,

    I understand some basic programming, but I am still new to Linux, OpenLB and C++. What exactly is a "compilation protocol"?

    I searched for a simple example of running "hello world" with MPI. I found this website: http://people.sc.fsu.edu/~jburkardt/cpp_src/hello/hello.html. I saved the hello.cpp and hello.sh files. Then I tried to compile via the ".sh" script, but I got:

    Code:
    aclarobarr@urgc-hu21:~/Documents/olb-1.0r0/Test_MPI$ ./hello.sh
    hello.cpp:9:18: fatal error: mpi.h: No such file or directory
     # include "mpi.h"
                      ^
    compilation terminated.
    Errors compiling hello.cpp

    I searched for the "mpi.h" file on the computer but did not find anything, even though an MPI version is already installed; I checked it with

    Code:
    ompi_info

    Do you know what this problem means?

    1. How can I check that I am able to run an MPI program?
    2. Do you have any simple hello-world-like code that you could send me in order to test MPI?

    Best regards,

    Alejandro

    #2335
    mathias
    Keymaster

    How did you install MPI? On Ubuntu you can install it with

    sudo apt-get install openmpi-bin openmpi-doc libopenmpi-dev

    Best
    Mathias

    #2336

    Hi Mathias,

    I do not have administrator rights on the machine where I am working with OpenLB. I think MPI was installed that way, but I cannot confirm it until Monday; today is a holiday here.

    Best,
    Alejandro

    #2339

    Hello everyone,

    Finally, I can simulate in parallel. The "libopenmpi-dev" package gave some problems; I found the solution here: https://forum.ubuntu-fr.org/viewtopic.php?id=1915001.

    The "Makefile.inc" also contains these options:

    Code:
    #PARALLEL_MODE := OMP
    #PARALLEL_MODE := HYBRID

    1. Could anyone give me some good references to start learning about these parallel options? (I have a civil engineering background.)

    2. What is the command line for the OMP and HYBRID parallel modes?

    Best regards,

    Alejandro

    #2341
    mathias
    Keymaster

    In OpenLB you can use distributed-memory (MPI) and shared-memory (OMP) parallelism, as well as a combination of both (HYBRID). For details you can look at:

    Krause, M.J. (2010): Fluid Flow Simulation and Optimisation with Lattice Boltzmann Methods on High Performance Computers: Application to the Human Respiratory System, Dissertation, Karlsruhe Institute of Technology (KIT), http://digbib.ubka.uni-karlsruhe.de/volltexte/1000019768

    For practical use, the MPI version that you can now run is fine, since it works well on both kinds of platform!

    Best
    Mathias
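    As a hedged sketch of how the other modes are typically launched (assuming the library and example have been rebuilt with the corresponding PARALLEL_MODE; OMP_NUM_THREADS is the standard OpenMP environment variable, and -x is Open MPI's option for forwarding an environment variable to the MPI processes):

    Code:
    # OMP mode: pure shared-memory run, thread count set via OpenMP
    export OMP_NUM_THREADS=4
    ./cavity2d

    # HYBRID mode: MPI processes that each spawn OpenMP threads
    export OMP_NUM_THREADS=2
    mpirun -np 2 -x OMP_NUM_THREADS ./cavity2d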

    #2342

    Hi Mathias,

    Thank you for the reference, I will check it.

    Best,
    Alejandro
