Yuji
Forum Replies Created
AuthorPosts
December 6, 2023 at 1:55 pm, in reply to: About the distance between the boundary node and stl in “setBouzidiBoundary” #7988
Yuji (Participant)
Thank you for your comments.
I have understood the concept of STLreader ;) though I cannot follow the code in detail. Regarding your questions:
A) Yes, I changed the characteristic length from 0.1 to 100.
B) No, I did not change the STL file. I only replaced the arguments, from
STLreader<T> stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 0.001 );
to
STLreader<T> stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 1, 2, true );
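For clarity, here is how I read the two constructor calls side by side (a sketch only; the parameter roles — voxel size, STL rescaling factor, indicator method, verbosity — are my interpretation of OpenLB's stlReader.h and should be checked against the release in use):

```cpp
// Sketch comparing the two STLreader constructions (parameter roles are
// my interpretation; confirm against OpenLB's stlReader.h):
//   STLreader( fileName, voxelSize, stlSize = 1, method = 2, verbose = false )

// Original call from the cylinder3d example: voxel size taken from the
// unit converter, STL geometry rescaled by a factor of 0.001.
STLreader<T> stlReaderA( "cylinder3d.stl",
                         converter.getConversionFactorLength(),
                         0.001 );

// Modified call: no rescaling (factor 1), indicator method 2,
// verbose output enabled.
STLreader<T> stlReaderB( "cylinder3d.stl",
                         converter.getConversionFactorLength(),
                         1, 2, true );
```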
Thank you for your support.
December 5, 2023 at 4:04 pm, in reply to: About the distance between the boundary node and stl in “setBouzidiBoundary” #7983
Yuji (Participant)
Thank you for your reply.
How did you mark the interior of the STL file in the material geometry?
>> I did it the same way as the cpp file in examples/cylinder3d, because that is the example I am using.
Just to be sure: Did you take into consideration that the Bouzidi boundary distances are computed in lattice units while the distance you see in Paraview is in physical units?
>> Yes, I did.
Yuji (Participant)
Sorry, I have an additional question.
4. Within a single process (one SuperLattice), can two BlockLattices in that process be split between OpenMP and CUDA for the intra-block calculation?
Yuji (Participant)
Thank you for your kind reply.
Sorry, I could not understand the statement “However, this is an implementation detail that may change in the future and has no impact on application / model code.” What is “application / model code”? I also have three further questions.
1. For CPU-only communication, I reckon that the intra-process communication between BlockLattices works by referring to the memory of the overlapping cells in order to synchronize the BlockLattices. Is that right, and where is this code located in release 1.6?
2. Is the inter-process communication between BlockLattices done via MPI on all platforms in release 1.6?
3. Does the number of MPI processes match the number of SuperLattices?
Yuji (Participant)
I was able to implement it. Thank you for your comments.
Yuji (Participant)
Dear Adrian,
Thank you so much. I am going to implement it.
Yuji (Participant)
Dear Adrian,
Thank you for your response.
To use "SuperGeometry.save()" and "SuperGeometry.load()", as well as "SuperLattice.save()" and "SuperLattice.load()", do I need to add #include "serializer.h" in superGeometry.h?
Sorry, I cannot understand exactly how to use save and load with SuperLattice.
Yuji (Participant)
Thank you. I was able to use “save” and “load”.
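The round trip I have in mind looks roughly like this (a sketch only; it assumes the SuperLattice and SuperGeometry objects expose the Serializable save()/load() interface, and the file names are made up):

```cpp
// Hypothetical checkpoint/restart sketch (file names are made up;
// assumes both objects provide Serializable save()/load()):

// Checkpoint: write the current state to disk.
sLattice.save( "cavity3d.lattice" );
superGeometry.save( "cavity3d.geometry" );

// Restart: construct the objects with the same block decomposition as
// before, then restore the serialized state instead of re-initializing.
sLattice.load( "cavity3d.lattice" );
superGeometry.load( "cavity3d.geometry" );
```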
When I use “save” and “load”, I currently have to re-create the SuperGeometry.
Is the SuperGeometry also able to be saved?
Yuji (Participant)
Dear Adrian,
Thank you for your very kind reply. I am gradually coming to understand it.
I would like to run an LBM calculation with 2.5*10^11 grid points (I know this implies very large memory consumption).
I reckon “gpu_hybrid_mixed.mk” is useful for my case because, when the code is compiled with gpu_hybrid_mixed.mk, both CPU and GPU are used for the LBM calculation.
Now I have two GPUs and my CPU has 8 cores.
In my understanding, the command “mpirun -np 2 ./cavity3d” uses 2 CPU cores and 1 GPU (GPU no. 1 is not used), whereas “mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'” uses both GPUs (no. 0 and no. 1) but the CPU is not used for the LBM calculation.
I would like to use 2 GPUs and 8 CPU cores. Do you have any recommended commands? And is my understanding correct?
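To make the second command's device assignment explicit, the per-rank mapping can be sketched on its own (the mpirun line is kept as a comment because it needs OpenMPI and a built cavity3d; outside mpirun the local-rank variable is unset, so it defaults to 0 here):

```shell
# Each MPI local rank restricts itself to one GPU via CUDA_VISIBLE_DEVICES.
# The full launch would be, e.g.:
#   mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'

# The per-rank mapping itself (rank defaults to 0 when run outside mpirun):
OMPI_COMM_WORLD_LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
echo "local rank ${OMPI_COMM_WORLD_LOCAL_RANK} -> GPU ${CUDA_VISIBLE_DEVICES}"
```

With two local ranks, rank 0 then sees only GPU 0 and rank 1 only GPU 1; whether the remaining CPU cores can also take part in the LBM calculation depends on the platform mix chosen at compile time, which is exactly my question above.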
Thank you.
Yuji
Yuji (Participant)
Dear all,
Thank you. Understood. I will read it.
Yuji (Participant)
Dear Mathias,
Thank you for your reply. I am going to read Section 4.4 of the LBM book by Krüger et al.
Yuji (Participant)
Sorry again.
I want to use “setSlipBoundary” on GPU but got the following error:
terminate called after throwing an instance of 'std::runtime_error'
  what(): Legacy post processors not supported on GPU_CUDA
Aborted
Could you tell me how to use “setSlipBoundary” on GPU?
Yuji (Participant)
Dear Adrian,
Thank you for your reply.
I am starting to understand.
You said that “No, you do not need to add or change anything in order to use any kind of parallel processing in OpenLB”. In examples/laminar/cavity3dBenchmark, why are calls such as gpu::cuda::device::synchronize(); used then? Following your helpful comments, I would think they are not necessary.
I apologize for my lack of comprehension.
Yuji (Participant)
The gpu_only_mixed.mk configuration in config.mk is used in these cases.