
Yuji

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 29 total)
  Yuji
    Participant

    Thank you for your comments.
    I have understood the concept of the STL reader ;) although I cannot understand the code in detail yet.

    Regarding your questions:
    A) Yes, I changed the characteristic length from 0.1 to 100.
    B) No, I did not change the STL file. I only replaced the arguments, from STLreader<T> stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 0.001 ); to STLreader<T> stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 1, 2, true );
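    For reference, here is the same change written out as a small sketch with my understanding of what each argument means (voxel size, stlSize, method, verbose); the variable names are only illustrative and the setup (T, converter) is the usual one from the cylinder3d example:

        // Original call: voxel size from the unit converter, STL scaling factor 0.001
        STLreader<T> stlReaderOriginal( "cylinder3d.stl",
                                        converter.getConversionFactorLength(), // voxel size
                                        0.001 );                               // stlSize

        // Modified call: stlSize = 1, method = 2, verbose output enabled
        STLreader<T> stlReaderModified( "cylinder3d.stl",
                                        converter.getConversionFactorLength(),
                                        1, 2, true );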

    Thank you for your support.

    Yuji
    Participant

    Thank you for your reply.

    How did you mark the interior of the STL file in the material geometry?
    >> I marked it the same way as in the example/cylinder3d .cpp file, since that is the example I am using.

    Just to be sure: Did you take into consideration that the Bouzidi boundary distances are computed in lattice units while the distance you see in Paraview is in physical units?
    >> Yes I did.
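    Just so we are talking about the same conversion, this is the relation I have in mind, reusing the converter's getConversionFactorLength() from the example (the variable names are only illustrative):

        // Bouzidi distances are fractions of a lattice spacing (lattice units).
        // To compare them with distances measured in Paraview (physical units),
        // multiply by the physical length of one lattice cell:
        T latticeDistance  = 0.3;  // e.g. a Bouzidi distance in lattice units
        T physicalDistance = latticeDistance * converter.getConversionFactorLength();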

    in reply to: Communication between BlockLattices #7859
    Yuji
    Participant

    Sorry, I have an additional question.
    4. Within a specific process (SuperLattice), can two BlockLattices in that process be split between OpenMP and CUDA for the intra-block calculation?

    in reply to: Communication between BlockLattices #7858
    Yuji
    Participant

    Thank you for your kind reply.
    Sorry, I could not understand what you meant by "However, this is an implementation detail that may change in the future and has no impact on application / model code." What is the "application / model code"?

    I also have three other questions.

    1. In CPU-only runs, I reckon the intra-process communication between BlockLattices works by referring to the memory of the overlapping cells in order to synchronize the BlockLattices. Is that right, and where is this code located in release 1.6?

    2. Is the inter-process communication between BlockLattices done via MPI on every platform in release 1.6?

    3. Does the number of MPI processes match the number of SuperLattices?
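    Regarding question 3, this is roughly how I would check it myself; I am assuming calls like singleton::mpi().getSize() and cuboidGeometry.getNc() exist as used in the examples, so please correct me if the names are wrong:

        // Compare the number of MPI processes with the number of cuboids,
        // since (in my understanding) each cuboid holds one BlockLattice
        // and the cuboids are distributed over the MPI processes.
        int numProcesses = singleton::mpi().getSize();  // MPI ranks in use
        int numCuboids   = cuboidGeometry.getNc();      // blocks of the SuperLattice
        std::cout << "MPI processes: " << numProcesses
                  << ", cuboids/BlockLattices: " << numCuboids << std::endl;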

    in reply to: Check point using GPU #7850
    Yuji
    Participant

    I was able to implement it. Thank you for your comments.

    in reply to: Check point using GPU #7846
    Yuji
    Participant

    Dear Adrian,

    Thank you so much. I am going to implement it.

    in reply to: Check point using GPU #7843
    Yuji
    Participant

    Dear Adrian,

    Thank you for your response.
    To use SuperGeometry.save() and SuperGeometry.load(), as well as SuperLattice.save() and SuperLattice.load(), do I need to add #include "serializer.h" in superGeometry.h?
    Sorry, I do not understand exactly how save and load work for the SuperLattice.
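    To make my question concrete, this is the checkpoint pattern I am trying to get working; it assumes both SuperLattice and SuperGeometry offer save()/load() through the serializer, and the file names are only placeholders:

        // --- writing a checkpoint ---
        sLattice.save( "checkpoint_lattice.dat" );
        sGeometry.save( "checkpoint_geometry.dat" );

        // --- restarting: rebuild the objects as in the original run, then ---
        sLattice.load( "checkpoint_lattice.dat" );
        sGeometry.load( "checkpoint_geometry.dat" );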

    in reply to: Check point using GPU #7837
    Yuji
    Participant

    Thank you. I was able to use "save" and "load".
    When I use "save" and "load", I currently have to re-create the SuperGeometry.
    Can the SuperGeometry also be saved?

    in reply to: Multi GPUs Calculation #7685
    Yuji
    Participant

    Dear Adrian,

    Thank you for your very kind reply. I am gradually getting to understand it.

    I would like to run an LBM calculation with 2.5*10^11 grid points (I know this means very large memory consumption).
    I reckon "gpu_hybrid_mixed.mk" is useful for my case, because when compiled with gpu_hybrid_mixed.mk both the CPU and the GPU are used for the LBM calculation.
    Right now I have two GPUs and my CPU has 8 cores.
    In my understanding, the command "mpirun -np 2 ./cavity3d" uses 2 CPU cores and 1 GPU (GPU No. 1 is not used), whereas "mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'" uses both GPUs (No. 0 and No. 1) but the CPU is not used for the LBM calculation.
    I would like to use both GPUs and all 8 CPU cores. Do you have any recommended commands? And is my understanding correct?
    Thank you.
    Yuji

    • This reply was modified 9 months, 2 weeks ago by Yuji.
    in reply to: Pressure in openLB #7682
    Yuji
    Participant

    Dear all,

    Thank you. Understood. I will read it.

    in reply to: In examples at cylinder3d #7681
    Yuji
    Participant

    Dear Mathias,

    Thank you for your reply. I am going to read section 4.4 of the LBM book by Krüger et al.

    in reply to: Pressure in openLB #7676
    Yuji
    Participant

    Sorry again.
    I want to use "setSlipBoundary" on the GPU but got the following error:
    "terminate called after throwing an instance of 'std::runtime_error'
    what(): Legacy post processors not supported on GPU_CUDA
    Aborted"

    Could you tell me how to use "setSlipBoundary" on the GPU?

    in reply to: Multi GPUs Calculation #7672
    Yuji
    Participant

    Dear Adrian,

    Thank you for reply.

    I’m catching on.

    You said that "No, you do not need to add or change anything in order to use any kind of parallel processing in OpenLB". In examples/laminar/cavity3dBenchmark, why is an explicit device API call such as gpu::cuda::device::synchronize(); used then? Following your helpful comments, I would think it is not necessary.
    I apologize that my comprehension is lacking.

    in reply to: Pressure in openLB #7647
    Yuji
    Participant

    The gpu_only_mixed.mk configuration is used as config.mk in these cases.
