
Alejandro_Claro

Forum Replies Created

  • in reply to: LES-Body Force implementation Problem #2357

Hi Albert,

I compiled with the GENERIC flag (in the Makefile.inc). It works well.

Thanks.

Best,
Alejandro

    in reply to: Doubts – Prepare Geometry – Cylinder3d #2343

Hello everyone,

I have a new question about how the geometry is "voxelized". I ran the cylinder3d example with the default values and looked at the coordinates of each material number. When the user-defined function "prepareGeometry" finishes assigning all the material numbers, the result of this process is printed with the function:

    Code:
    superGeometry.print();

What is printed is:

    Code:
[prepareGeometry] Prepare Geometry …
[SuperGeometry3D] cleaned 0 outer boundary voxel(s)
[SuperGeometryStatistics3D] updated
[SuperGeometryStatistics3D] updated
[SuperGeometryStatistics3D] updated
[SuperGeometry3D] cleaned 0 outer boundary voxel(s)
[SuperGeometry3D] the model is correct!
[CuboidGeometry3D] —Cuboid Stucture Statistics—
[CuboidGeometry3D] Number of Cuboids: 7
[CuboidGeometry3D] Delta (min): 0.01
[CuboidGeometry3D] (max): 0.01
[CuboidGeometry3D] Ratio (min): 0.837209
[CuboidGeometry3D] (max): 1.19444
[CuboidGeometry3D] Nodes (min): 66564
[CuboidGeometry3D] (max): 66564
[CuboidGeometry3D] ——————————–
[SuperGeometryStatistics3D] updated
[SuperGeometryStatistics3D] materialNumber=0; count=1849; minPhysR=(0.4625,0.1675,-0.0025); maxPhysR=(0.5325,0.2375,0.4175)
[SuperGeometryStatistics3D] materialNumber=1; count=417011; minPhysR=(0.0025,0.0075,0.0075); maxPhysR=(2.4925,0.4075,0.4075)
[SuperGeometryStatistics3D] materialNumber=2; count=42250; minPhysR=(-0.0075,-0.0025,-0.0025); maxPhysR=(2.5025,0.4175,0.4175)
[SuperGeometryStatistics3D] materialNumber=3; count=1681; minPhysR=(-0.0075,0.0075,0.0075); maxPhysR=(-0.0075,0.4075,0.4075)
[SuperGeometryStatistics3D] materialNumber=4; count=1681; minPhysR=(2.5025,0.0075,0.0075); maxPhysR=(2.5025,0.4075,0.4075)
[SuperGeometryStatistics3D] materialNumber=5; count=1476; minPhysR=(0.4525,0.1575,0.0075); maxPhysR=(0.5425,0.2475,0.4075)
[prepareGeometry] Prepare Geometry … OK

The example uses a rectangular channel of 2500x410x410 mm (x-y-z coordinates) with its origin at (0;0;0). The material numbers are:

1: Fluid
2: Lateral, bottom and top walls
3: Inlet
4: Outlet
5: Cylinder wall

If we are just interested in the fluid flow, it can be seen that the minimum and maximum coordinates (x-y-z) for material number 1 are:

    Code:
minPhysR=(0.0025,0.0075,0.0075);
maxPhysR=(2.4925,0.4075,0.4075)
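As a side note, the same bounding boxes can presumably be read programmatically instead of from the printed statistics. The snippet below is only a sketch written against the cylinder3d example; it assumes that the statistics object returned by superGeometry.getStatistics() offers getMinPhysR()/getMaxPhysR() per material number, as the [SuperGeometryStatistics3D] output above suggests, so please check the class reference before relying on it.

Code:
// Hypothetical helper for the cylinder3d example: print the physical
// bounding box of one material number. getStatistics(), getMinPhysR() and
// getMaxPhysR() are assumed from the statistics output above.
template <typename T>
void printMaterialBounds( olb::SuperGeometry3D<T>& superGeometry, int material ) {
  std::vector<T> minR = superGeometry.getStatistics().getMinPhysR( material );
  std::vector<T> maxR = superGeometry.getStatistics().getMaxPhysR( material );
  std::cout << "material " << material
            << ": min=(" << minR[0] << "," << minR[1] << "," << minR[2] << ")"
            << " max=("  << maxR[0] << "," << maxR[1] << "," << maxR[2] << ")"
            << std::endl;
}
// possible usage at the end of prepareGeometry():
// printMaterialBounds( superGeometry, 1 );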

Since the code uses the bounce-back approach for the walls (excluding the cylinder wall), I would expect the minimum and maximum coordinates to be:

    Code:
minPhysR=(0.005,0.005,0.005);
maxPhysR=(2.495,0.405,0.405)

Could someone explain to me why this is happening?

Best regards,
Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2342

Hi Mathias,

Thank you for the reference, I will check it.

Best,
Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2339

Hello everyone,

Finally, I can simulate in parallel. The "libopenmpi-dev" package gave some problems. I found the solution to this problem here: https://forum.ubuntu-fr.org/viewtopic.php?id=1915001.

The "Makefile.inc" contains these options:

    Code:
#PARALLEL_MODE := OMP
#PARALLEL_MODE := HYBRID

1. Could anyone give me some good references to start learning about these parallel options? (I have a civil engineering background)

2. What is the command line for the OMP and HYBRID parallel modes?

Best regards,

Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2336

Hi Mathias,

I do not have administrator rights on the machine where I am working with OpenLB. I think that MPI was installed like that, but I cannot confirm this until Monday. Today is some kind of holiday here.

Best,
Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2334

Hi Mathias,

I understand some basic things in programming, but I am still new to Linux, OpenLB and C++. What exactly is a "compilation protocol"?

I searched for a simple example of running "hello world" with MPI. I found this website: http://people.sc.fsu.edu/~jburkardt/cpp_src/hello/hello.html. I saved the hello.cpp and hello.sh files. Then, I tried to compile with the ".sh" script, but I got:

    Code:
aclarobarr@urgc-hu21:~/Documents/olb-1.0r0/Test_MPI$ ./hello.sh
hello.cpp:9:18: fatal error: mpi.h: No such file or directory
 # include "mpi.h"
 ^
compilation terminated.
Errors compiling hello.cpp

I tried to search for the "mpi.h" file on the computer, but I did not find anything, although I already have an MPI version installed; I checked this with

    Code:
    ompi_info

Do you know what this problem means?

1. Do you know how I can check that I am able to run an MPI program?
2. Do you have any simple hello-world-like code that you could send me in order to test MPI?

Best regards,

Alejandro
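Since a minimal test program is asked for above, below is a short MPI "hello world" in C++ that only uses the standard MPI C API, so it should compile with the mpic++ wrapper once the MPI development headers are available. The fatal error about a missing mpi.h usually means that either the development package (e.g. libopenmpi-dev) is not installed or plain g++ was used instead of the mpic++/mpicc wrapper. This is only a sketch, not an official OpenLB example.

Code:
// hello_mpi.cpp -- minimal MPI test program (sketch, not part of OpenLB).
// Compile:  mpic++ hello_mpi.cpp -o hello_mpi
// Run:      mpirun -np 4 ./hello_mpi
#include <mpi.h>
#include <iostream>

int main( int argc, char** argv ) {
  MPI_Init( &argc, &argv );

  int rank = 0;
  int size = 0;
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );  // id of this process
  MPI_Comm_size( MPI_COMM_WORLD, &size );  // total number of processes

  std::cout << "Hello from rank " << rank << " of " << size << std::endl;

  MPI_Finalize();
  return 0;
}

With four processes you should see four different ranks (0 to 3); seeing "rank 0 of 1" four times would mean the processes are not actually coupled by MPI.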

    in reply to: Parallel (MPI/OpenMP) simulation #2332

Hi Robin,

The operating system that I am using is Ubuntu.

    Code:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.2 LTS
Release: 14.04
Codename: trusty

Best regards,

Alejandro

    in reply to: Drop Raising in a Vertical Channel #2330

Hello Ivan,

I am new to LBM and OpenLB; however, the example multiComponentXD (X stands for the dimension) may be a better start. You can start the simulation with the drop already inside the heavier fluid. If you are just interested in checking how the drop rises, I think you should set the boundaries as no-slip boundary conditions. The initial shape of the drop could be made "by hand" using IndicatorCuboid2D/IndicatorCircle2D (section 4.2, Indicator functions, in the user guide).

Let us know how you proceed 😉

Best regards,

Alejandro
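As an illustration of the "by hand" idea, here is a standalone sketch in plain C++ (no OpenLB calls): for every lattice node you test whether its physical position lies inside a circle and set the initial values of the two components accordingly. IndicatorCircle2D from section 4.2 of the user guide essentially performs this inside/outside test for you; the names dropCenterX/Y and dropRadius below are made up for the example.

Code:
// Sketch: geometric test for a circular drop region (plain C++).
#include <cmath>

bool insideDrop( double physX, double physY,
                 double dropCenterX, double dropCenterY, double dropRadius ) {
  const double dx = physX - dropCenterX;
  const double dy = physY - dropCenterY;
  // inside or on the circle boundary
  return dx * dx + dy * dy <= dropRadius * dropRadius;
}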

    in reply to: Doubts – Prepare Geometry – Cylinder3d #2329

Hi Robin,

Thank you for your response.

Best regards,

Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2328

Hello Mathias,

I checked the "Makefile.inc" and the definitions are:

    Code:
## DEFINITIONS TO BE CHANGED

CXX := g++
#CXX := icpc -D__aligned__=ignored
#CXX := mpiCC
CXX := mpic++

OPTIM := -O3 -Wall
DEBUG := -g -DOLB_DEBUG

CXXFLAGS := $(OPTIM)
#CXXFLAGS := $(DEBUG)

# to enable std::shared_ptr in functor arithmetik
# works in gcc 4.3 and later, source https://gcc.gnu.org/projects/cxx0x.html
CXXFLAGS += -std=c++0x
# works in gcc 4.7 and later (recommended)
#CXXFLAGS += -std=c++11

#CXXFLAGS += -fdiagnostics-color=auto
#CXXFLAGS += -std=gnu++14

ARPRG := ar
#ARPRG := xiar # mandatory for intel compiler

LDFLAGS :=


PARALLEL_MODE := OFF
PARALLEL_MODE := MPI
#PARALLEL_MODE := OMP
#PARALLEL_MODE := HYBRID

MPIFLAGS :=
OMPFLAGS := -fopenmp

BUILDTYPE := precompiled
#BUILDTYPE := generic

I removed the executables from the Cavity2d-parallel and Cylinder2d directories by hand. Then I ran "make clean", "make cleanbuild" and "make". The compilation works the same way as before:

Cavity2d-Parallel, 4 cores

    Code:
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 152.160s
[Timer] measured time (cpu): 133.202s
[Timer] average MLUPs : 2.100
[Timer] average MLUPps: 2.100
[Timer] ———————————————
[LatticeStatistics] step=18304; t=14.3; uMax=0.1; avEnergy=0.000342504; avRho=1
/*****************************************************************************/
[LatticeStatistics] step=18688; t=14.6; uMax=0.1; avEnergy=0.000345858; avRho=1
[Timer] step=18688; percent=97.3333; passedTime=154.379; remTime=4.22956; MLUPs=3.46913
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 154.859s
[Timer] measured time (cpu): 137.808s
[Timer] average MLUPs : 2.063
[Timer] average MLUPps: 2.063
[Timer] ———————————————
[LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
/******************************************************************************/
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=155.8; remTime=1.04564; MLUPs=4.76521
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 156.165s
[Timer] measured time (cpu): 135.256s
[Timer] average MLUPs : 2.046
[Timer] average MLUPps: 2.046
[Timer] ———————————————
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 156.214s
[Timer] measured time (cpu): 136.896s
[Timer] average MLUPs : 2.045
[Timer] average MLUPps: 2.045
[Timer] ———————————————

Cavity2d-Parallel, 3 cores

    Code:
[LatticeStatistics] step=18176; t=14.2; uMax=0.1; avEnergy=0.000341447; avRho=1
[Timer] step=18176; percent=94.6667; passedTime=119.872; remTime=6.75335; MLUPs=2.72733
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 120.176s
[Timer] measured time (cpu): 113.472s
[Timer] average MLUPs : 2.659
[Timer] average MLUPps: 2.659
[Timer] ———————————————
/**************************************************************************************/
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=124.519; remTime=0.835698; MLUPs=3.49762
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 124.710s
[Timer] measured time (cpu): 118.398s
[Timer] average MLUPs : 2.562
[Timer] average MLUPps: 2.562
[Timer] ———————————————
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 124.978s
[Timer] measured time (cpu): 118.115s
[Timer] average MLUPs : 2.557
[Timer] average MLUPps: 2.557
[Timer] ———————————————

Cylinder2d, 4 cores

    Code:
[Timer] step=30800; percent=96.25; passedTime=470.8; remTime=18.3429; MLUPs=2.89811
[LatticeStatistics] step=30800; t=15.4; uMax=0.0408193; avEnergy=0.000245803; avRho=1.00062
[getResults] pressure1=0.134373; pressure2=0.0159914; pressureDrop=0.118382; drag=5.63413; lift=0.0122223
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 473.156s
[Timer] measured time (cpu): 362.941s
[Timer] average MLUPs : 2.489
[Timer] average MLUPps: 2.489
[Timer] ———————————————
/******************************************************************************************************/
[Timer] step=31400; percent=98.125; passedTime=478.282; remTime=9.13915; MLUPs=3.07357
[LatticeStatistics] step=31400; t=15.7; uMax=0.0408199; avEnergy=0.000245815; avRho=1.00059
[getResults] pressure1=0.132571; pressure2=0.0142374; pressureDrop=0.118333; drag=5.632; lift=0.0122383
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 479.666s
[Timer] measured time (cpu): 364.392s
[Timer] average MLUPs : 2.455
[Timer] average MLUPps: 2.455
[Timer] ———————————————
/******************************************************************************************************/
[Timer] step=31800; percent=99.375; passedTime=482.199; remTime=3.0327; MLUPs=3.8062
[LatticeStatistics] step=31800; t=15.9; uMax=0.040816; avEnergy=0.000245588; avRho=1.00057
[getResults] pressure1=0.13189; pressure2=0.0135893; pressureDrop=0.118301; drag=5.63043; lift=0.0122372
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 483.783s
[Timer] measured time (cpu): 363.782s
[Timer] average MLUPs : 2.435
[Timer] average MLUPps: 2.435
[Timer] ———————————————
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 483.808s
[Timer] measured time (cpu): 364.824s
[Timer] average MLUPs : 2.434
[Timer] average MLUPps: 2.434
[Timer] ———————————————

I also checked the MPI version installed on the machine:

    Code:
Package: Open MPI buildd@allspice Distribution
Open MPI: 1.6.5
Open MPI SVN revision: r28673
Open MPI release date: Jun 26, 2013
Open RTE: 1.6.5
Open RTE SVN revision: r28673
Open RTE release date: Jun 26, 2013
OPAL: 1.6.5
OPAL SVN revision: r28673
OPAL release date: Jun 26, 2013
MPI API: 2.1
Ident string: 1.6.5
Prefix: /usr
Configured architecture: x86_64-pc-linux-gnu
Configure host: allspice
Configured by: buildd
Configured on: Sat Dec 28 23:38:31 UTC 2013
Configure host: allspice
Built by: buildd
Built on: Sat Dec 28 23:41:47 UTC 2013
Built host: allspice
C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: yes
Fortran90 bindings size: small
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C compiler family name: GNU
C compiler version: 4.8.2
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: gfortran
Fortran77 compiler abs: /usr/bin/gfortran
Fortran90 compiler: gfortran
Fortran90 compiler abs: /usr/bin/gfortran
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
C++ exceptions: no
Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
Sparse Groups: no
Internal debug support: no
MPI interface warnings: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
Heterogeneous support: yes
mpirun default --prefix: no
MPI I/O support: yes
MPI_WTIME support: gettimeofday
Symbol vis. support: yes
Host topology support: yes
MPI extensions: affinity example
FT Checkpoint support: yes (checkpoint thread: yes)
VampirTrace support: no
MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.5)
MCA memory: linux (MCA v2.0, API v2.0, Component v1.6.5)
MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.5)
MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.6.5)
MCA carto: file (MCA v2.0, API v2.0, Component v1.6.5)
MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.6.5)
MCA shmem: posix (MCA v2.0, API v2.0, Component v1.6.5)
MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.6.5)
MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.5)
MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.5)
MCA timer: linux (MCA v2.0, API v2.0, Component v1.6.5)
MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.5)
MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.5)
MCA sysinfo: linux (MCA v2.0, API v2.0, Component v1.6.5)
MCA hwloc: external (MCA v2.0, API v2.0, Component v1.6.5)
MCA crs: none (MCA v2.0, API v2.0, Component v1.6.5)
MCA dpm: orte (MCA v2.0, API v2.0, Component v1.6.5)
MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.6.5)
MCA allocator: basic (MCA v2.0, API v2.0, Component v1.6.5)
MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: basic (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: inter (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: self (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: sm (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: sync (MCA v2.0, API v2.0, Component v1.6.5)
MCA coll: tuned (MCA v2.0, API v2.0, Component v1.6.5)
MCA io: romio (MCA v2.0, API v2.0, Component v1.6.5)
MCA mpool: fake (MCA v2.0, API v2.0, Component v1.6.5)
MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.6.5)
MCA mpool: sm (MCA v2.0, API v2.0, Component v1.6.5)
MCA pml: bfo (MCA v2.0, API v2.0, Component v1.6.5)
MCA pml: crcpw (MCA v2.0, API v2.0, Component v1.6.5)
MCA pml: csum (MCA v2.0, API v2.0, Component v1.6.5)
MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.6.5)
MCA pml: v (MCA v2.0, API v2.0, Component v1.6.5)
MCA bml: r2 (MCA v2.0, API v2.0, Component v1.6.5)
MCA rcache: vma (MCA v2.0, API v2.0, Component v1.6.5)
MCA btl: ofud (MCA v2.0, API v2.0, Component v1.6.5)
MCA btl: openib (MCA v2.0, API v2.0, Component v1.6.5)
MCA btl: self (MCA v2.0, API v2.0, Component v1.6.5)
MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.5)
MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.5)
MCA topo: unity (MCA v2.0, API v2.0, Component v1.6.5)
MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.6.5)
MCA osc: rdma (MCA v2.0, API v2.0, Component v1.6.5)
MCA crcp: bkmrk (MCA v2.0, API v2.0, Component v1.6.5)
MCA iof: hnp (MCA v2.0, API v2.0, Component v1.6.5)
MCA iof: orted (MCA v2.0, API v2.0, Component v1.6.5)
MCA iof: tool (MCA v2.0, API v2.0, Component v1.6.5)
MCA oob: tcp (MCA v2.0, API v2.0, Component v1.6.5)
MCA odls: default (MCA v2.0, API v2.0, Component v1.6.5)
MCA ras: cm (MCA v2.0, API v2.0, Component v1.6.5)
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.6.5)
MCA ras: slurm (MCA v2.0, API v2.0, Component v1.6.5)
MCA ras: tm (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.6.5)
MCA rmaps: topo (MCA v2.0, API v2.0, Component v1.6.5)
MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.6.5)
MCA rml: oob (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: binomial (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: cm (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: direct (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: linear (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: radix (MCA v2.0, API v2.0, Component v1.6.5)
MCA routed: slave (MCA v2.0, API v2.0, Component v1.6.5)
MCA plm: rsh (MCA v2.0, API v2.0, Component v1.6.5)
MCA plm: slurm (MCA v2.0, API v2.0, Component v1.6.5)
MCA plm: tm (MCA v2.0, API v2.0, Component v1.6.5)
MCA snapc: full (MCA v2.0, API v2.0, Component v1.6.5)
MCA filem: rsh (MCA v2.0, API v2.0, Component v1.6.5)
MCA errmgr: default (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: env (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: hnp (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: singleton (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: slave (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: slurm (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: slurmd (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: tm (MCA v2.0, API v2.0, Component v1.6.5)
MCA ess: tool (MCA v2.0, API v2.0, Component v1.6.5)
MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.6.5)
MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.6.5)
MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6.5)
MCA notifier: command (MCA v2.0, API v1.0, Component v1.6.5)
MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6.5)

1. Do you have any idea of possible problems that are preventing me from running the simulation in parallel?

Best regards,

Alejandro

    in reply to: Parallel (MPI/OpenMP) simulation #2326

Hi Robin,

Thanks for your answers. I changed the "Makefile.inc" like this:

    Code:
CXX := g++
#CXX := icpc -D__aligned__=ignored
#CXX := mpiCC
CXX := mpic++

###########################################

PARALLEL_MODE := OFF
PARALLEL_MODE := MPI
#PARALLEL_MODE := OMP
#PARALLEL_MODE := HYBRID

I tested these changes from the "cavity2d/parallel" directory. As indicated in the user manual (page 29), I executed these commands:

    Code:
    make clean


    Code:
    make cleanbuild


    Code:
    make

Then, I ran the simulation with 3 and 4 cores in order to compare the time taken in each case. I used

    Code:
    mpirun -np x ./cavity2d

The "x" corresponds to the number of cores, in this case 3 and 4. It seems that I get the same problem indicated by Padde86 in the topic

    Quote:
    Problem with MPI execution an LBM algorithm

The 4-core simulation takes more time than the 3-core one. The "timer" results obtained by varying only the number of cores in the parallel simulation are:

4 cores

    Code:
[LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
[Timer] step=18816; percent=98; passedTime=151.523; remTime=3.09231; MLUPs=2.4942
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 151.619s
[Timer] measured time (cpu): 141.128s
[Timer] average MLUPs : 2.107
[Timer] average MLUPps: 2.107
[Timer] ———————————————
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=151.653; remTime=1.01781; MLUPs=2.68606
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 151.738s
[Timer] measured time (cpu): 142.437s
[Timer] average MLUPs : 2.106
[Timer] average MLUPps: 2.106
[Timer] ———————————————
[LatticeStatistics] step=18944; t=14.8; uMax=0.1; avEnergy=0.000347976; avRho=1
[Timer] step=18944; percent=98.6667; passedTime=152.063; remTime=2.05491; MLUPs=3.94453
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 152.102s
[Timer] measured time (cpu): 142.303s
[Timer] average MLUPs : 2.101
[Timer] average MLUPps: 2.101
[Timer] ———————————————
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=152.471; remTime=1.0233; MLUPs=5.22071
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 152.866s
[Timer] measured time (cpu): 140.036s
[Timer] average MLUPs : 2.090
[Timer] average MLUPps: 2.090
[Timer] ———————————————

3 cores

    Code:
[LatticeStatistics] step=18560; t=14.5; uMax=0.1; avEnergy=0.000344785; avRho=1
[Timer] step=18560; percent=96.6667; passedTime=109.805; remTime=3.78638; MLUPs=3.22735
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 110.165s
[Timer] measured time (cpu): 108.535s
[Timer] average MLUPs : 2.900
[Timer] average MLUPps: 2.900
[Timer] ———————————————
[LatticeStatistics] step=18688; t=14.6; uMax=0.1; avEnergy=0.000345858; avRho=1
[Timer] step=18688; percent=97.3333; passedTime=110.375; remTime=3.02397; MLUPs=3.73693
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=110.421; remTime=0.741081; MLUPs=3.24208
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 110.860s
[Timer] measured time (cpu): 109.294s
[Timer] average MLUPs : 2.882
[Timer] average MLUPps: 2.882
[Timer] ———————————————
[LatticeStatistics] step=18816; t=14.7; uMax=0.1; avEnergy=0.000346835; avRho=1
[Timer] step=18816; percent=98; passedTime=110.844; remTime=2.26212; MLUPs=4.55138
[LatticeStatistics] step=18944; t=14.8; uMax=0.1; avEnergy=0.000347976; avRho=1
[Timer] step=18944; percent=98.6667; passedTime=111.24; remTime=1.50324; MLUPs=5.37891
[LatticeStatistics] step=19072; t=14.9; uMax=0.1; avEnergy=0.000349094; avRho=1
[Timer] step=19072; percent=99.3333; passedTime=111.627; remTime=0.749174; MLUPs=5.504
[Timer]
[Timer] —————-Summary:Timer—————-
[Timer] measured time (rt) : 112.14s
[Timer] measured time (cpu): 110.124s
[Timer] average MLUPs : 2.852
[Timer] average MLUPps: 2.852
[Timer] ———————————————

If I understand correctly, the point of a parallel simulation is to divide the load among the different cores in order to get a result in less time (I know this is a very simplified principle).

1. So, why is the measured time with the 3-core option less than with the 4-core option?

2. Am I running a sequential simulation?

3. How could I be sure that the simulation is running in parallel and not the same case x times sequentially? (See the sketch after the machine specs below.)

The machine that I am using has these characteristics:

    Code:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 15
Model: 4
Stepping: 10
CPU MHz: 3200.172
BogoMIPS: 6400.71
L1d cache: 16K
L2 cache: 2048K
NUMA node0 CPU(s): 0-3
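For question 3 above, a small standalone check (plain MPI again, not OpenLB-specific) can make the difference visible: each process reports its rank once. When launched with "mpirun -np 4 ./rank_check", a correctly coupled MPI run shows four distinct ranks; four independent, uncoupled copies of the same program would each report "rank 0 of 1", which is exactly the "same case x times sequentially" situation. This is only a sketch.

Code:
// rank_check.cpp -- sketch to verify that mpirun really distributes ranks.
// Compile:  mpic++ rank_check.cpp -o rank_check
// Run:      mpirun -np 4 ./rank_check
#include <mpi.h>
#include <iostream>

int main( int argc, char** argv ) {
  MPI_Init( &argc, &argv );

  int rank = 0, size = 0;
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );  // id of this process
  MPI_Comm_size( MPI_COMM_WORLD, &size );  // total number of processes

  char name[MPI_MAX_PROCESSOR_NAME];
  int nameLength = 0;
  MPI_Get_processor_name( name, &nameLength );  // host this rank runs on

  std::cout << "rank " << rank << " of " << size
            << " on host " << name << std::endl;

  MPI_Finalize();
  return 0;
}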

Best regards,

Alejandro

    in reply to: Doubts – Prepare Geometry – Cylinder3d #2323

Hi Robin.Trunk,

Thank you for your answers. Since the "stlReader" and "extendedDomain" instances are both of type IndicatorF3D, the addZeroVelocityBoundary function could use either one. I know that the prepareLattice function takes an STLreader object, but it could be changed to an IndicatorLayer3D object, right? (See the usage sketch after the signature below.)

The addZeroVelocityBoundary function is declared as:

    Code:
void olb::sOffLatticeBoundaryCondition3D<T, Lattice>::addZeroVelocityBoundary(
  SuperGeometry3D< T > & superGeometry,
  int material,
  IndicatorF3D< T > & indicator,
  std::list< int > bulkMaterials = std::list<int>(1,1)
)
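For illustration, a hypothetical usage sketch following the cylinder3d example: both objects derive from IndicatorF3D, so either one type-checks as the indicator argument of the signature quoted above. The constructor calls for stlReader and extendedDomain are written from memory of that example and should be checked against the actual cylinder3d.cpp; whether the off-lattice boundary behaves correctly with the extended indicator is a separate (physical) question.

Code:
// Sketch based on the cylinder3d example (the names offBc, converter and
// superGeometry are assumed from that example; verify the constructor
// arguments there before use).
STLreader<T> stlReader( "cylinder3d.stl", converter.getLatticeL() );
IndicatorLayer3D<T> extendedDomain( stlReader, converter.getLatticeL() );

// both calls match the addZeroVelocityBoundary signature above:
offBc.addZeroVelocityBoundary( superGeometry, 5, stlReader );
// offBc.addZeroVelocityBoundary( superGeometry, 5, extendedDomain );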

Best regards,

Alejandro
