Turbulence in closed space
November 11, 2020 at 4:50 pm #5282 | jdm (Participant)
Dear all,
I am trying to create a turbulence simulation in a closed space consisting of a 50 cm cube with a sphere of 12.5 cm radius at its center. This geometry is described in an STL file.
I would like to have an inlet at the surface of the sphere diffusing into the cube (it should not be a diffusion from the inside of the sphere into the cube). I have a vector defining the direction in which I want to create the inlet at the surface of the sphere and in which I want the inlet to diffuse.
To simplify, let's say the sphere position is {0, 0, 0} and the direction is {-1, 0, 0}.
I currently have this piece of code in the prepareGeometry function, after importing the STL file:

superGeometry.rename( 0, 2, indicator );
superGeometry.rename( 2, 1, stlReader );
superGeometry.clean();

IndicatorCircle3D<T> inflow(
  // inlet center position: sphere radius + cylinder length/2
  -12.75, 0, 0,
  -1, 0, 0,
  1 );
IndicatorCylinder3D<T> layerInflow( inflow, 0.5 );
superGeometry.rename( 1, 3, layerInflow );

superGeometry.clean();
superGeometry.innerClean();
superGeometry.checkForErrors();

However, by doing so, the inlet is not created and no diffusion occurs.
If the inlet center position is instead defined as -12.5 (the sphere radius), then the inlet is created and diffuses inside the sphere, even though the IndicatorCircle3D has been created with a normal pointing toward the sphere's exterior (-1 in this example).
I also tried to define the IndicatorCylinder3D using

Vector<T,3> inflow_in(-12.5, 0, 0);
Vector<T,3> inflow_out(-13, 0, 0);
IndicatorCylinder3D<T> layerInflow(inflow_in, inflow_out, 1);

with the same result (no inlet created, no diffusion).
Could someone point out what I'm doing wrong here?
Thanks in advance
November 11, 2020 at 8:53 pm #5283 | stephan (Moderator)

Dear jdm,
thanks for posting.
I am not quite sure if I understand your setup correctly.
Could you please attach a ParaView screenshot which visualizes the geometry, or alternatively provide a little more explanation/code?

Anyway, a possible issue could be that the indicator doesn't match the grid correctly.
Maybe have a look at the standard examples, where inflows and geometry primitives are used.

Please also note that we have a spring school coming up next year, where we could help you more efficiently with realizing your own simulation in OpenLB.
BR
Stephan

November 11, 2020 at 9:05 pm #5284 | jdm (Participant)

Dear Stephan,
Thanks for your quick answer.
Here is a screenshot of the simulation corresponding to the second case (inlet positioned on the left, at {-12.5, 0, 0}, and thus spreading inside the sphere instead of outside), showing the velocity.

I re-used many pieces of code from the examples nozzle3d and aorta3d.
What could be done if the issue is coming from the indicator not matching the grid?

Thanks again for your time!
November 13, 2020 at 12:39 pm #5318 | stephan (Moderator)

Hi jdm,
thanks for posting further information.
In ParaView you can also open the geometry file located in the vtk folder, then visualize it as points with colors corresponding to the material numbers.
Subsequently, slice the domain along the inlet section you want to observe closely.

If I understand correctly, in the second case you plotted, the program works as you want it to.
You could try to run the first case with different resolutions to isolate the issue further.

The indicator should be programmed with a buffer which depends on the mesh size, such that the nodes which you'd like to be declared as inlet or boundary are hit correctly.
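As a rough sketch, assuming your UnitConverter is called converter and re-using the coordinates from your first post (the exact offsets are only a guess and may need tuning), the buffered inlet indicator could look like this:

// enlarge the cylinder by about one lattice cell in length and radius so that
// the voxel centres around the sphere surface actually fall inside the indicator
T dx = converter.getConversionFactorLength();   // physical size of one lattice cell
Vector<T,3> inletIn ( -12.5 + dx, 0, 0 );       // reaches slightly into the sphere surface
Vector<T,3> inletOut( -13.0 - dx, 0, 0 );       // extends slightly beyond the outer end
IndicatorCylinder3D<T> layerInflow( inletIn, inletOut, 1. + dx );
superGeometry.rename( 1, 3, layerInflow );

The standard examples (e.g. aorta3d) follow the same idea and enlarge their in- and outflow indicators by a multiple of getConversionFactorLength().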
BR
Stephan

November 14, 2020 at 10:39 pm #5321 | jdm (Participant)

Hi Stephan,
Thanks for your advice.
This is a slice I obtained when I visualise the geometry using the vtk file, instead of the stl used as input:

[image: site png]
If I understand correctly, could it be that the geometry is inverted, and that diffusion is possible only inside the sphere and the walls?
If this is the case, how can I invert the materials, setting the inside of the sphere and the walls as solid material and allowing diffusion in the space between the sphere and the walls?
At the moment, after importing the stl file, I have

superGeometry.rename( 0, 2, indicator );
superGeometry.rename( 2, 1, stlReader );
superGeometry.clean();

which I believe is setting the materials for the stl file.
Thanks for your help!
November 15, 2020 at 3:38 pm #5323 | mathias (Keymaster)

Dear jdm,
you can use the standard rename function starting from what you have right now:
superGeometry.rename( int fromMaterial, int toMaterial );
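For example, a minimal sketch of swapping the fluid and solid regions via an unused temporary material number (4 is arbitrary here, and whether this is the right inversion depends on which regions currently carry 1 and 2):

superGeometry.rename( 1, 4 );   // park the current fluid nodes on an unused material number
superGeometry.rename( 2, 1 );   // former material 2 becomes the new fluid region
superGeometry.rename( 4, 2 );   // the parked nodes become the new solid/boundary material
superGeometry.clean();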
Best
Mathias

November 16, 2020 at 7:45 pm #5326 | jdm (Participant)

Hi both,
Thanks to your help I have been able to solve a couple of issues. The geometry is now correct, and the cylinder used as the inlet is created at the right position.
I also changed the reference length of the simulation geometry (charPhysLength is set to 0.01) as my STL file is in cm and not m.

However, running the simulation with the cylinder as an inlet now raises a seg fault.
It is raised during the setBoundaryValues function, when trying to set:

lattice.iniEquilibrium( superGeometry, 3, rhoF, uSol );
Here is the seg fault backtrace if needed:

#6  0x00007ff2e04da070 in olb::Dynamics<double, olb::descriptors::D3Q19<> >::iniEquilibrium(olb::Cell<double, olb::descriptors::D3Q19<> >&, double, double const*) () from /home/jdm/git/closed_space/build/libclosed_space.so
#7  0x00007ff2e062aa76 in olb::BlockLatticeStructure3D<double, olb::descriptors::D3Q19<> >::iniEquilibrium(olb::BlockIndicatorF3D<double>&, olb::AnalyticalF3D<double, double>&, olb::AnalyticalF3D<double, double>&) () from /home/jdm/git/closed_space/build/libclosed_space.so
#8  0x00007ff2e04e264a in olb::SuperLattice3D<double, olb::descriptors::D3Q19<> >::iniEquilibrium(olb::FunctorPtr<olb::SuperIndicatorF3D<double> >&&, olb::AnalyticalF3D<double, double>&, olb::AnalyticalF3D<double, double>&) () from /home/jdm/git/closed_space/build/libclosed_space.so
#9  0x00007ff2e04e3d98 in olb::SuperLattice3D<double, olb::descriptors::D3Q19<> >::iniEquilibrium(olb::SuperGeometry3D<double>&, int, olb::AnalyticalF3D<double, double>&, olb::AnalyticalF3D<double, double>&) () from /home/jdm/git/closed_space/build/libclosed_space.so
#10 0x00007ff2e0499fcd in openlb_sim::setBoundaryValues(olb::UnitConverter<double, olb::descriptors::D3Q19<> > const&, olb::SuperLattice3D<double, olb::descriptors::D3Q19<> >&, olb::SuperGeometry3D<double>&, int) () from /home/jdm/git/closed_space/build/libclosed_space.so
Both prepareGeometry and prepareLattice return OK before the seg fault.
Parameters are taken from the nozzle3d example and weren't raising a seg fault before (when the diffusion of the inlet was happening inside the sphere).
Changing the charPhysLength and the stlReader scale back to 1 (so in meters) doesn't have an impact.
As said at the beginning of this post, the cylinder used as the inlet is correctly placed in front of the sphere. I tried placing it further away, at mid-distance between the sphere and the wall as shown in the figure, with similar results.
(In this figure, the cylinder used as inlet is displayed with material 0 instead of 3, for visualisation purposes only.)

Does anybody have an idea what could be causing this issue?
Thanks again for your help,
jdm

November 19, 2020 at 3:42 pm #5348 | mathias (Keymaster)

Difficult to tell what exactly is going wrong from a distance. You should start from a working example and change the code step by step, checking that every step works fine. We have a "bring your own problem" section at our spring schools where we help users to get started.

Best
Mathias
December 1, 2020 at 7:04 pm #5381 | jdm (Participant)

Dear all,
Thanks to your help I have been able to build my showcase.
I am still missing one functionality though, which is to retrieve the velocity at one point in space. Is there a method like Get3DVelocity({x, y, z})?

I also encountered a problem when running an OpenLB example with OMP.
On a 72-core / 144-thread computer (CentOS, Red Hat 4.8.5), the nozzle3d example took 3 times longer with OMP enabled than without OMP, while still using all 144 threads at 100% during the whole simulation.
Has anybody ever faced this issue? How can it be solved?
I tried using MPI and the runtime was 19 times quicker with MPI than without any parallel mode.
However, I unfortunately cannot (easily) use MPI for my own simulation, which is why I would like to use OMP.

Thanks in advance for your help!
jdm

December 2, 2020 at 2:19 pm #5384 | stephan (Moderator)

Dear jdm,
I am glad to hear that you made progress with your showcase.
To assess the velocity field, you can use SuperLatticeVelocity3D, transform that via AnalyticalFfromSuperF3D and then use the resulting function with the typical (output, input) structure.
With the input value you can specify the location of the probe.
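As a rough sketch (sLattice, DESCRIPTOR and the probe coordinates are just placeholders here):

// wrap the velocity functor in an interpolating analytical functor
SuperLatticeVelocity3D<T, DESCRIPTOR> velocity( sLattice );
AnalyticalFfromSuperF3D<T> interpolVelocity( velocity, true );

T point[3] = { 0.1, 0., 0. };   // probe position in physical coordinates
T vel[3]   = { };               // output: the three velocity components at that point
interpolVelocity( vel, point );

If you need the result in physical units, SuperLatticePhysVelocity3D( sLattice, converter ) can be used in place of SuperLatticeVelocity3D.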
Please note that this procedure can be found in various examples.

Considering the parallel mode, we can recommend MPI.
Since I have not encountered the OMP issue before, I cannot help you with that right away.

BR
Stephan

December 3, 2020 at 10:42 am #5392 | Adrian (Keymaster)

Some questions on the OpenMP issue:
– Did you use the OMP or the HYBRID compilation mode? (I assume OMP?)
– Which if any external environment variables were set up for OpenMP? Especially concerning thread pinning?
– This is probably a dual socket system? If so the results might improve when using HYBRID mode with one MPI process per socket and OpenMP only socket-local [1].

As a sidenote: Using more OpenMP threads than there are physical cores didn't improve LBM performance on any system / software that I have tested.
[1]: Both binding MPI processes to the sockets and OpenMP threads to the cores
December 3, 2020 at 11:00 am #5393 | jdm (Participant)

Dear all,
Thanks Stephan and Adrian for your answer.
– I used the OMP mode, not the Hybrid one, indeed.
– here is my OpenMP environment:
OPENMP DISPLAY ENVIRONMENT BEGIN
  _OPENMP = '201511'
  OMP_DYNAMIC = 'FALSE'
  OMP_NESTED = 'FALSE'
  OMP_NUM_THREADS = '144'
  OMP_SCHEDULE = 'DYNAMIC'
  OMP_PROC_BIND = 'TRUE'
  OMP_PLACES = '{0},{72},{1},{73},{2},{74},{3},{75},{4},{76},{5},{77},{6},{78},{7},{79},{8},{80},{9},{81},{10},{82},{11},{83},{12},{84},{13},{85},{14},{86},{15},{87},{16},{88},{17},{89},{18},{90},{19},{91},{20},{92},{21},{93},{22},{94},{23},{95},{24},{96},{25},{97},{26},{98},{27},{99},{28},{100},{29},{101},{30},{102},{31},{103},{32},{104},{33},{105},{34},{106},{35},{107},{36},{108},{37},{109},{38},{110},{39},{111},{40},{112},{41},{113},{42},{114},{43},{115},{44},{116},{45},{117},{46},{118},{47},{119},{48},{120},{49},{121},{50},{122},{51},{123},{52},{124},{53},{125},{54},{126},{55},{127},{56},{128},{57},{129},{58},{130},{59},{131},{60},{132},{61},{133},{62},{134},{63},{135},{64},{136},{65},{137},{66},{138},{67},{139},{68},{140},{69},{141},{70},{142},{71},{143}'
  OMP_STACKSIZE = '0'
  OMP_WAIT_POLICY = 'PASSIVE'
  OMP_THREAD_LIMIT = '4294967295'
  OMP_MAX_ACTIVE_LEVELS = '2147483647'
  OMP_CANCELLATION = 'FALSE'
  OMP_DEFAULT_DEVICE = '0'
  OMP_MAX_TASK_PRIORITY = '0'
  OMP_DISPLAY_AFFINITY = 'FALSE'
  OMP_AFFINITY_FORMAT = 'level %L thread %i affinity %A'
OPENMP DISPLAY ENVIRONMENT END
– Unfortunately, I cannot easily use MPI for my actual simulation, so I would like to stick to OMP.
I also tried OMP on a much more modest computer (6 cores / 6 threads) and noticed only moderate speedup (~1.4, with all cores running at 100%).

Best,
jdm

December 6, 2020 at 9:26 pm #5398 | Adrian (Keymaster)

Ok, some things you could try:
– Set OMP_NUM_THREADS to 72 to match your physical core count
– Set OMP_PROC_BIND=close and OMP_PLACES=cores
– Try pinning the threads to the cores using likwid-pin [1]

What kind of performance are you observing when using MPI? (Should be reported in the CLI output as MLUPs)
I suspect you are using release 1.3? Our latest release [2], which was published shortly after your initial post, changes the entire core data structure and the way propagation is performed, so it might be worth trying whether this improves your performance results. I have not performed detailed benchmarks using only OpenMP with this latest version, but the implementation in this area was simplified significantly.
[1]: https://github.com/RRZE-HPC/likwid/wiki/Likwid-Pin
[2]: https://www.openlb.net/news/openlb-release-1-4-available-for-download/

December 7, 2020 at 3:16 pm #5399 | jdm (Participant)

Hi Adrian,
Thanks a lot for your detailed answer.
I updated to OpenLB 1.4 on my small system (6 cores / 6 threads) and noticed an improvement when using OMP. When running the nozzle3d example on 1.3 the speedup was around x1.4, while it is now around x2.9 with 1.4.
I couldn't try it yet on the bigger system (72 cores / 144 threads) unfortunately, but will as soon as possible. As using OMP on this big system was increasing the run time (taking 3 times longer), it will be very interesting to see if version 1.4 improves the situation.

With version 1.3, when running the nozzle3d example with MPI, the speedup was about x19 on my big system (72 cores).
However, when trying to build my own simulation with OpenLB 1.4, I now get a compilation error.
In my own simulation, OpenLB is compiled as a shared library (using the -fPIC cxx flag), and the simulation code sits outside of the olb-1.4r0 folder. This wasn't a problem with OpenLB 1.3, but I guess it is now, due to the relative path used to include functors3D.h in particleOperations3D.h.
Do you know how I can solve this issue?

Best,
jdm

December 7, 2020 at 3:29 pm #5400 | jdm (Participant)