
jflorezgi

Forum Replies Created

  • jflorezgi
    Participant

    Thank you Adrian, I’m going to check this out a bit and if I have any questions I’ll ask you.

    Best

    jflorezgi
    Participant

Hi, sorry for the insistence. I would like to know if there is any way in which the ensembles added to SuperLatticeTimeAveragedF3D can be saved and loaded, as in the case of the checkpoints.

    Best

    Jonathan

    jflorezgi
    Participant

Thank you Adrian, now I am accessing my data through BlockReduction3D2D on any of the planes that I need, regardless of the number of blocks.
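For reference, the pattern I follow is the one from the OpenLB examples. This is only a sketch; the velocity functor, the resolution value (600) and the exact constructor overload are illustrative and may differ between OpenLB versions:

// quantity to sample on the plane (physical velocity as an example) and its scalar norm
SuperLatticePhysVelocity3D<T, NSDESCRIPTOR> velocity( sLatticeNS, converter );
SuperEuklidNorm3D<T, NSDESCRIPTOR> normVel( velocity );

// reduce the 3D functor onto a 2D plane with normal (0,0,1);
// the reduction gathers the values from all blocks / MPI ranks,
// so the result does not depend on the domain decomposition
BlockReduction3D2D<T> planeReduction( normVel, {0, 0, 1}, 600, BlockDataSyncMode::ReduceOnly );

// write the reduced plane, e.g. as an image
heatmap::write( planeReduction, iT );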

    Best wishes.

    Jonathan

    jflorezgi
    Participant

Thanks for your response. I understand what you're saying, so I have a question.

    I want to make a function that reviews all the cells in a horizontal plane at a certain height and, depending on the value of a specific functor in each cell, counts the number of cells that have the same value of the functor.

When I run the program with mpirun -np 1 ./… and look at the number of cells it is checking, it gives me, for example:

nx = sLatticeNS.getBlock(iC).getNx() = 323 and ny = sLatticeNS.getBlock(iC).getNy() = 415, and the number of cells that the program checks is the product of these two values, as I expected. But if I run the program with mpirun -np 2 ./…, the values are nx = sLatticeNS.getBlock(iC).getNx() = 323 and ny = sLatticeNS.getBlock(iC).getNy() = 208, and the number of cells checked is again the product of these two values. So, depending on the number of blocks, the function only evaluates 1/(number of blocks) of the cells it should.

The code of the function is:

void computeAirAngleFraction(SuperGeometry<T,3>& superGeometry,
                             UnitConverter<T, NSDESCRIPTOR>& converter,
                             SuperLattice<T,NSDESCRIPTOR>& sLatticeNS,
                             SuperLatticeAirAngleClass3D<T, NSDESCRIPTOR>& airAngleClass,
                             std::ofstream& fileAngleFr, T breathHeight)
{
  OstreamManager clout( std::cout, "computeAirAngleFraction" );
  clout << "Computing the Airflow Angle Fraction ..." << std::endl;

  int iZ = converter.getLatticeLength(breathHeight);

  AnalyticalFfromSuperF3D<T> intpolateAirAngleClass( airAngleClass, true );

  int material = 0;
  // Write the macroscopic variables through the horizontal plane at breathHeight
  T numerical[5] { };
  T position[3] { };

  for (int iC = 0; iC < sLatticeNS.getLoadBalancer().size(); iC++) {
    for (int iY = 0; iY < sLatticeNS.getBlock(iC).getNy(); ++iY) {
      for (int iX = 0; iX < sLatticeNS.getBlock(iC).getNx(); ++iX) {

        material = superGeometry.getBlockGeometry(iC).getMaterial(iX,iY,iZ);
        if (material == 1) {
          position[0] = (T)iX * converter.getPhysDeltaX();
          position[1] = (T)iY * converter.getPhysDeltaX();
          position[2] = breathHeight /*(T)iZ * converter.getPhysDeltaX()*/;

          intpolateAirAngleClass(numerical, position);
          //clout << numerical[0] << " " << position[0] << " " << position[1] << " " << position[2] << std::endl;

          if (numerical[0] == 0.) {
            numerical[2]++;
            numerical[4]++;
          }
          else if (numerical[0] == 1.) {
            numerical[3]++;
            numerical[4]++;
          }
          else {
            numerical[1]++;
            numerical[4]++;
          }
        }
      }
    }
  }

  clout << "breathHeight = " << breathHeight << " " << numerical[1] << " " << numerical[2] << " "
        << numerical[3] << " " << numerical[4] << " " << numerical[1] / numerical[4] << " "
        << numerical[2] / numerical[4] << " " << numerical[3] / numerical[4] << std::endl;

  fileAngleFr << "breathHeight = " << breathHeight << " " << numerical[1] << " " << numerical[2] << " "
              << numerical[3] << " " << numerical[4] << " " << numerical[1] / numerical[4] << " "
              << numerical[2] / numerical[4] << " " << numerical[3] / numerical[4] << std::endl;
}

I would appreciate it if you could review the function and give me some idea about my question.

    jflorezgi
    Participant

    Hi Fedor, yes now I understand,

    thank you

    Jonathan

    jflorezgi
    Participant

    Thank you Adrian

    in reply to: Issues to run code examples with Nvidia A100 GPU #6799
    jflorezgi
    Participant

I'm checking a little more and I need to update some packages, so for now I can't confirm the compilation problems with the graphics card. Sorry, I'll write again when I have solved that, if the problem persists.

    in reply to: Example issues on Cluster #6524
    jflorezgi
    Participant

Hi Mathias, I tested the bstep2d and rayleighBenard2d examples, and both load the saved checkpoint files correctly, so I rewrote my code using the second example as a template and now it is working properly. I think the problem was related to the way I defined the lastCPTime variable, but I'm not sure.
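Just to illustrate what I mean by the definition of lastCPTime (this is only a sketch with an invented file name, not the exact code from my case or from the examples): the idea is to persist the iteration of the last written checkpoint next to the checkpoint files and to read it back before resuming.

#include <fstream>

// read the iteration of the last written checkpoint, if any
// ("lastCPTime.dat" is an illustrative file name)
std::size_t lastCPTime = 0;
{
  std::ifstream cpIn( "lastCPTime.dat" );
  if ( cpIn ) {
    cpIn >> lastCPTime;   // the resumed loop starts from this iteration
  }
}

// ...and in getResults, right after the lattice save:
// std::ofstream cpOut( "lastCPTime.dat" );
// cpOut << iT;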

    Thanks for your help.

    in reply to: Example issues on Cluster #6505
    jflorezgi
    Participant

Hi Mathias, I have been doing some tests to load the saved files, but so far I haven't been able to load them properly, whether running in serial or in parallel (MPI). The output shows an error like the following:

[prepareGeometry] Prepare Geometry … OK
[prepareLattice] defining dynamics
[prepareLattice] Prepare Lattice … OK
[LAPTOP-H5JAJF81:01813] *** Process received signal ***
[LAPTOP-H5JAJF81:01813] Signal: Segmentation fault (11)
[LAPTOP-H5JAJF81:01813] Signal code: Address not mapped (1)
[LAPTOP-H5JAJF81:01813] Failing at address: 0x1a8
[LAPTOP-H5JAJF81:01813] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x46210)[0x7fba20e16210]
[LAPTOP-H5JAJF81:01813] [ 1] ./prueba2D(+0x5b150)[0x7fba21525150]
[LAPTOP-H5JAJF81:01813] [ 2] ./prueba2D(+0x70dec)[0x7fba2153adec]
[LAPTOP-H5JAJF81:01813] [ 3] ./prueba2D(+0x2e773)[0x7fba214f8773]
[LAPTOP-H5JAJF81:01813] [ 4] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fba20df70b3]
[LAPTOP-H5JAJF81:01813] [ 5] ./prueba2D(+0x2ebfe)[0x7fba214f8bfe]
[LAPTOP-H5JAJF81:01813] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
[LAPTOP-H5JAJF81:01812] *** Process received signal ***
[LAPTOP-H5JAJF81:01812] Signal: Segmentation fault (11)
[LAPTOP-H5JAJF81:01812] Signal code: Address not mapped (1)
[LAPTOP-H5JAJF81:01812] Failing at address: 0x1a8
[LAPTOP-H5JAJF81:01812] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x46210)[0x7f87fe5a6210]
[LAPTOP-H5JAJF81:01812] [ 1] ./prueba2D(+0x5b150)[0x7f87fecaf150]
[LAPTOP-H5JAJF81:01812] [ 2] ./prueba2D(+0x70dec)[0x7f87fecc4dec]
[LAPTOP-H5JAJF81:01812] [ 3] ./prueba2D(+0x2e773)[0x7f87fec82773]
[LAPTOP-H5JAJF81:01812] [ 4] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7f87fe5870b3]
[LAPTOP-H5JAJF81:01812] [ 5] ./prueba2D(+0x2ebfe)[0x7f87fec82bfe]
[LAPTOP-H5JAJF81:01812] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 0 on node LAPTOP-H5JAJF81 exited on signal 11 (Segmentation fault).

I don't know if I'm calling the load function properly, so I'll leave you part of the code. I would appreciate it if you could check this part:

NSlattice.addLatticeCoupling(coupling, ADlattice);

prepareLattice(converter, NSlattice, ADlattice, superGeometry);

/// === 4th Step: Main Loop with Timer ===
std::size_t iT = 0;
util::Timer<T> timer(converter.getLatticeTime(maxPhysT), superGeometry.getStatistics().getNvoxel());
util::ValueTracer<T> converge(converter.getLatticeTime(0.01), epsilon);

// checks whether there is already data of the fluid from an earlier calculation
if ( !(NSlattice.load("NSprueba2DCoupled")) ) {
  // if there is no data available, it is generated
  timer.start();

  for ( ; iT < converter.getLatticeTime(maxPhysT); ++iT) {

    if (converge.hasConverged()) {
      clout << "Simulation converged." << std::endl;
      getResults(converter, NSlattice, ADlattice, iT, superGeometry, timer, file, converge.hasConverged());

      clout << "Time " << iT << "." << std::endl;

      break;
    }

    /// === 5th Step: Definition of Initial and Boundary Conditions ===
    setBoundaryValues(converter, NSlattice, ADlattice, iT, superGeometry);

    /// === 6th Step: Collide and Stream Execution ===
    ADlattice.collideAndStream();
    NSlattice.collideAndStream();
    NSlattice.executeCoupling();

    /// === 7th Step: Computation and Output of the Results ===
    getResults(converter, NSlattice, ADlattice, iT, superGeometry, timer, file, converge.hasConverged());
    converge.takeValue(ADlattice.getStatistics().getAverageEnergy(), true);
  }
  timer.stop();
  timer.printSummary();
}
// if there exists already data of the fluid from an earlier calculation, this is used
else {
  NSlattice.load("NSprueba2DCoupled");
  ADlattice.load("ADprueba2DCoupled");
  NSlattice.postLoad();
  ADlattice.postLoad();

  iT = lastCPTime;
  timer.update(iT);

  for ( ; iT < converter.getLatticeTime(maxPhysT); ++iT) {

    if (converge.hasConverged()) {
      clout << "Simulation converged." << std::endl;
      getResults(converter, NSlattice, ADlattice, iT, superGeometry, timer, file, converge.hasConverged());

      clout << "Time " << iT << "." << std::endl;

      break;
    }

    /// === 5th Step: Definition of Initial and Boundary Conditions ===
    setBoundaryValues(converter, NSlattice, ADlattice, iT, superGeometry);

    /// === 6th Step: Collide and Stream Execution ===
    ADlattice.collideAndStream();
    NSlattice.collideAndStream();
    NSlattice.executeCoupling();

    /// === 7th Step: Computation and Output of the Results ===
    getResults(converter, NSlattice, ADlattice, iT, superGeometry, timer, file, converge.hasConverged());
    converge.takeValue(ADlattice.getStatistics().getAverageEnergy(), true);
  }
  timer.stop();
  timer.printSummary();
}

    in reply to: Example issues on Cluster #6414
    jflorezgi
    Participant

Yes, I have been doing some tests and I noticed that if I run the simulation with more than four threads, the segmentation fault appears after loading the data from the saved files. It doesn't matter whether I'm running on the cluster or on my computer, but the loading call works with four or fewer threads. I don't know how to deal with this problem; if you have any ideas, I'd appreciate them.

    Thanks in advance

    in reply to: Example issues on Cluster #6412
    jflorezgi
    Participant

Hi Adrian, thank you for your reply. I'm working first on a simulation intended to mimic a violent expiratory event resembling a mild cough, so for now I have two coupled lattices (NSLattice and ADLattice) to evolve the flow and temperature fields. I'm using a D3Q19 velocity set for NSLattice and a D3Q7 velocity set for ADLattice; SmagorinskyForceMRTDynamics and SmagorinskyMRTDynamics are used for their respective dynamics, and I'm using SmagorinskyBoussinesqCouplingGenerator3D to couple the two lattices, computing the buoyancy force (Boussinesq approximation) in NSLattice and passing the convective velocity to ADLattice.

    The code to load the checkpoint just after the prepareLattice function call is:

// === 4th Step: Main Loop with Timer ===
std::size_t iT = 0;
Timer<T> timer( converter.getLatticeTime( maxPhysT ), superGeometry.getStatistics().getNvoxel() );

// checks whether there is already data of the fluid from an earlier calculation
if ( !(NSLattice.load("NSChallenge2022Coupled.checkpoint")) ) {

  // if there is no data available, it is generated
  timer.start();

  for ( ; iT <= converter.getLatticeTime( maxPhysT ); ++iT ) {

    // === 5bth Step: Definition of Initial and Boundary Conditions ===
    setBoundaryValues( converter, NSLattice, ADLattice, superGeometry, iT );

    // === 6th Step: Collide and Stream Execution ===
    NSLattice.collideAndStream();
    ADLattice.collideAndStream();

    NSLattice.executeCoupling();

    // === 7th Step: Computation and Output of the Results ===
    getResults( NSLattice, ADLattice, cuboidGeometry, converter, iT, superGeometry, timer, file );
  }

  timer.stop();
  timer.printSummary();

  delete bulkDynamics;
  delete TbulkDynamics;
}

// if there exists already data of the fluid from an earlier calculation, this is used
else {
  NSLattice.load("NSChallenge2022Coupled.checkpoint");
  ADLattice.load("ADChallenge2022Coupled.checkpoint");
  NSLattice.postLoad();
  ADLattice.postLoad();

  iT = lastCPTime;
  timer.update(iT);

  for ( ; iT <= converter.getLatticeTime( maxPhysT ); ++iT ) {

    // === 5bth Step: Definition of Initial and Boundary Conditions ===
    setBoundaryValues( converter, NSLattice, ADLattice, superGeometry, iT );

    // === 6th Step: Collide and Stream Execution ===
    NSLattice.collideAndStream();
    ADLattice.collideAndStream();
    NSLattice.executeCoupling();

    // === 7th Step: Computation and Output of the Results ===
    getResults( NSLattice, ADLattice, cuboidGeometry, converter, iT, superGeometry, timer, file );
  }

  timer.stop();
  timer.printSummary();

  delete bulkDynamics;
  delete TbulkDynamics;
}

The checkpoint save is called in the getResults function. I'm glad to know that the next release is coming soon. I'm trying to build the outflow boundary condition of M. Junk and Z. Yang (2008) for open boundaries, but I have some questions; I think it is better to open a new thread on that topic. Finally, I want to know whether GPU parallelization will be possible in the next release.
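Roughly, the save call inside getResults looks like this (checkpointPhysT is just a placeholder for the interval I use; save() is the counterpart of the load() shown above):

// write a checkpoint every checkpointPhysT seconds of physical time
if ( iT % converter.getLatticeTime( checkpointPhysT ) == 0 && iT > 0 ) {
  NSLattice.save( "NSChallenge2022Coupled.checkpoint" );
  ADLattice.save( "ADChallenge2022Coupled.checkpoint" );
}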

    Thanks for your help.

    in reply to: Example issues on Cluster #6409
    jflorezgi
    Participant

Hi Adrian, I'm working on thermal indoor applications with the OpenLB library, but I have issues with MPI runs on a cluster. On my personal computer I have no problems loading the checkpoint files, even when running in parallel mode, but on the cluster it generates the following error:

    [prepareGeometry] Prepare Geometry … OK
    [prepareLattice] Prepare Lattice …
    [prepareLattice] Prepare Lattice … OK
    [theclimatebox-ubuntu5:08595] *** Process received signal ***
    [theclimatebox-ubuntu5:08595] Signal: Segmentation fault (11)
    [theclimatebox-ubuntu5:08595] Signal code: (128)
    [theclimatebox-ubuntu5:08595] Failing at address: (nil)
    [theclimatebox-ubuntu5:08595] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x430c0)[0x7fe8d349c0c0]
    [theclimatebox-ubuntu5:08595] [ 1] ./challenge2022-3DTurb(+0x7d612)[0x561067232612]
    [theclimatebox-ubuntu5:08595] [ 2] ./challenge2022-3DTurb(+0x86bc6)[0x56106723bbc6]
    [theclimatebox-ubuntu5:08595] [ 3] ./challenge2022-3DTurb(+0x2b05d)[0x5610671e005d]
    [theclimatebox-ubuntu5:08595] [ 4] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fe8d347d0b3]
    [theclimatebox-ubuntu5:08595] [ 5] ./challenge2022-3DTurb(+0x2b58e)[0x5610671e058e]
    [theclimatebox-ubuntu5:08595] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
    mpirun noticed that process rank 24 with PID 0 on node theclimatebox-ubuntu5 exited on signal 11 (Segmentation fault).

As you say, I think it is necessary to include a patch as soon as possible. In my particular case I am working on a server that restarts every 24 hours, so this functionality is vital for my work.

Thank you for your attention; I will be waiting for your answer.
