Low Re flow inside Microchannel
This topic has 8 replies, 2 voices, and was last updated 2 years, 9 months ago by Adrian.
October 25, 2021 at 8:01 pm #6096 | saadbin (Participant)
Hello:
Recently I have been trying to run some simulations with very small characteristic lengths and consistently running into memory issues. I am guessing that, because the characteristic length is so small, del_x becomes small and drives the memory requirement very high. Am I missing anything? What is a possible way to overcome this? Thanks in advance.

Here are some parameters I set for the cylinder3d case that I built on:
Relaxation time = 0.53
Char. Length = 0.000138 m
Phys. Viscosity = 0.000015 m2/s
Re = 0.1
N = 20

More details:
[MpiManager] Sucessfully initialized, numThreads=4
[UnitConverter] —————– UnitConverter information —————–
[UnitConverter] — Parameters:
[UnitConverter] Resolution: N= 20
[UnitConverter] Lattice velocity: latticeU= 5e-05
[UnitConverter] Lattice relaxation frequency: omega= 1.88679
[UnitConverter] Lattice relaxation time: tau= 0.53
[UnitConverter] Characteristical length(m): charL= 0.000136
[UnitConverter] Characteristical speed(m/s): charU= 0.0110294
[UnitConverter] Phys. kinematic viscosity(m^2/s): charNu= 1.5e-05
[UnitConverter] Phys. density(kg/m^d): charRho= 1
[UnitConverter] Characteristical pressure(N/m^2): charPressure= 0
[UnitConverter] Mach number: machNumber= 8.66025e-05
[UnitConverter] Reynolds number: reynoldsNumber= 0.1
[UnitConverter] Knudsen number: knudsenNumber= 0.000866025
[UnitConverter]
[UnitConverter] — Conversion factors:
[UnitConverter] Voxel length(m): physDeltaX= 6.8e-06
[UnitConverter] Time step(s): physDeltaT= 3.08267e-08
[UnitConverter] Velocity factor(m/s): physVelocity= 220.588
[UnitConverter] Density factor(kg/m^3): physDensity= 1
[UnitConverter] Mass factor(kg): physMass= 3.14432e-16
[UnitConverter] Viscosity factor(m^2/s): physViscosity= 0.0015
[UnitConverter] Force factor(N): physForce= 2.25e-06
[UnitConverter] Pressure factor(N/m^2): physPressure= 48659.2
[UnitConverter] -------------------------------------------------------------

October 26, 2021 at 9:09 pm #6106 | Adrian (Keymaster)
By memory issues you mean that you run out of memory on the system? One possibility is that the domain decomposition doesn't do what it should and you end up with large empty areas that nevertheless need to be allocated on the system (as OpenLB only offers blocks of directly addressed grids at this time). Whether this is the case here is hard to tell from the provided information.
In general, small characteristic lengths do not necessitate large resolutions, as the characteristic length is only a parameter of the physical parametrization from which the actual lattice quantities are derived.
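For concreteness, here is a minimal sketch of that parametrization as it appears in the cylinder3d example, with the numbers quoted in the first post plugged in. It assumes the usual cylinder3d typedefs (T, DESCRIPTOR) and the OpenLB headers; the exact constructor signature may differ between releases.

// Sketch only: the physical quantities merely set the conversion factors;
// the lattice itself is fixed by N and tau, so a small charL alone does not
// increase the number of cells or the memory footprint.
UnitConverterFromResolutionAndRelaxationTime<T, DESCRIPTOR> const converter(
  20,               // resolution N: voxels per characteristic length
  (T)   0.53,       // latticeRelaxationTime tau (must be > 0.5)
  (T)   0.000136,   // charPhysLength [m]
  (T)   0.0110294,  // charPhysVelocity [m/s], here Re*nu/charL
  (T)   1.5e-5,     // physViscosity [m^2/s]
  (T)   1.0         // physDensity [kg/m^3]
);
converter.print();  // prints a table like the one quoted above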
November 5, 2021 at 3:55 pm #6156 | saadbin (Participant)
Adrian, thanks for the feedback.
It was actually a RAM issue, as the number of voxels was crossing 1e7. I previously set my domain size excessively high compared to the length scale. However, even after fixing that, I am getting another error. Please see the snapshot of the error message here:
https://drive.google.com/file/d/1cX0MzHUhEOSvmzRw0MHqFjfXqPuo6Ru8/view?usp=sharing
The log file is here: https://drive.google.com/file/d/1X57SFN4bG6WiuCvW0I5Wgyxo26_jqZEx/view?usp=sharing
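As a rough sense of scale for the ~1e7 voxels mentioned above, a back-of-the-envelope estimate (not OpenLB code, assuming D3Q19 with double-precision populations only):

#include <cstdio>

int main() {
  const double numVoxels = 1e7;                              // voxel count reported above
  const double popBytes  = numVoxels * 19 * sizeof(double);  // D3Q19, double precision
  std::printf("~%.2f GB for the populations alone\n", popBytes / 1e9);  // ~1.52 GB
  // Additional fields, halo/overlap copies and the decomposition overhead
  // Adrian mentioned come on top of this figure.
}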
Apart from the error, there are several key things I am concerned about:
1. My time step (del_t = 5.4803e-10 s) is very small, which I cannot control because it is automatically calculated from the relaxation time (0.53) and del_x = L/N = 9e-7 m (because of the very small characteristic length L = 136 microns and resolution N = 150). Consequently, my lattice velocity (U*del_t/del_x) is very small at Re = 0.1, where U = Re*viscosity/L = (0.1*1.5e-5)/(136e-6). Is there any way to reduce the time step and hence the lattice velocity, and thereby make the time propagation faster? (A worked sketch of how del_t follows from these parameters is included after this post.)
2. My case is similar to the cylinder3d case; the difference is that instead of a single cylinder at the center, I have a bunch of small cylinder-like structures on the floor, each with an approximate diameter of 3 microns, while my channel dimensions are about 400*136*136 micron^3. In my understanding, the resolution needs to be large to resolve the structures on the floor, so I set N = 150. Let me know if my understanding of the resolution requirement is incorrect. A snapshot of the geometry is here: https://drive.google.com/file/d/1DDSyMybGBFhPVx3mRCz-JDyS4x1iAMHY/view?usp=sharing
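For reference, a minimal sketch (not OpenLB code) of how the time step and lattice velocity above follow from tau, N and charL, assuming the standard BGK relation nu_lattice = (tau - 0.5)/3 and the fixed-relaxation-time parametrization used here:

#include <cstdio>

int main() {
  const double tau    = 0.53;     // lattice relaxation time
  const double charL  = 136e-6;   // characteristic length [m]
  const int    N      = 150;      // resolution
  const double nuPhys = 1.5e-5;   // physical kinematic viscosity [m^2/s]
  const double Re     = 0.1;

  const double deltaX = charL / N;                         // ~9.07e-7 m
  const double nuLat  = (tau - 0.5) / 3.0;                 // lattice viscosity
  const double deltaT = nuLat * deltaX * deltaX / nuPhys;  // ~5.48e-10 s
  const double charU  = Re * nuPhys / charL;               // ~0.011 m/s
  const double latU   = charU * deltaT / deltaX;           // ~6.7e-6 (dimensionless)

  std::printf("deltaX=%g m, deltaT=%g s, latticeU=%g\n", deltaX, deltaT, latU);
  // With tau and N fixed, deltaT scales with deltaX^2, so a larger deltaT means
  // either raising tau or lowering N, at the cost of accuracy and stability.
}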
November 5, 2021 at 5:44 pm #6157 | saadbin (Participant)
A correction to my question on the first point:
Is there any way to ‘INCREASE’ the time step and hence the lattice velocity, and thereby make the time propagation faster?

November 8, 2021 at 10:45 am #6160 | Adrian (Keymaster)
In order to investigate the failing assertion: can you compile the program in debug mode and start it in a debugger to obtain a backtrace? (E.g., in case this is unfamiliar, run
gdb ./cylinder3d
and type bt
once it has crashed.)

1: There is likely more than one possible set of unit conversion parameters that fits your model. How did you decide on the relaxation time? How is the unit converter set up, i.e. which parameters does it fix?
2: Looking at your geometry, your resolution requirement is correct. (Your N is relative to which physical length? For a first test it should be sufficient to choose e.g. some low multiple of 10 cells to discretize a 3 micron wide cylinder.)
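For orientation, a rough back-of-the-envelope (assuming deltaX = charL/N): with charL = 136 micron and N = 150, deltaX is about 0.91 micron, so a 3 micron pillar is spanned by only roughly 3 cells. Resolving it with about 10 cells, as suggested above, would correspond to N of roughly 10*136/3, i.e. around 450, if N stays defined relative to the channel width.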
November 29, 2021 at 3:57 pm #6200 | saadbin (Participant)
1. I tried the gdb debugger to figure out the origin (see the image here: https://drive.google.com/file/d/1O99J2tEe6WEl2TNDuoLZmfluHS2hK1iv/view?usp=sharing). From what I understand from it, the scaling error is coming from the following line inside cylinder3d.cpp:
SuperLatticeYplus3D<T, DESCRIPTOR> yPlus( sLattice, converter, superGeometry, stlReader, 5 );
This is defined inside src/functors/lattice/turbulentF3D.hh, where line 87 calls the function ‘normalize’:
normal = util::normalize(normalTemp);
which is throwing the assertion error. Could you let me know what this line is actually doing?
And if I modify this line not to normalize, what problems would that cause for the physics? If I change it to “normal = normalTemp;” the error is not thrown anymore.

2. Currently my N is relative to the width of the channel (136 micron), not the width of the obstacle (3 micron). I am setting the width of the channel as the characteristic length because I am targeting a specific Knudsen number (mean free path / charL). As the mean free path of air is constant, I was thinking I have to fix charL to achieve my target Knudsen number. Do you think this approach is wrong? What do you suggest otherwise?
Thanks!
November 29, 2021 at 10:01 pm #6201 | Adrian (Keymaster)
1: Unfortunately your image link is not set to public. The easiest way would be to simply post the backtrace log as text. In any case: the only assertion in util::normalize is scale > 0, i.e. preventing division by zero (as scale >= 0 in any case). This means that normalTemp is zero, which in turn suggests a material geometry issue.
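For illustration only, a rough sketch of what a normalize helper of this kind does; this is not the verbatim OpenLB implementation, and names may differ:

#include <cassert>
#include <cmath>
#include <vector>

std::vector<double> normalizeSketch(const std::vector<double>& v) {
  double scale = 0;
  for (double x : v) { scale += x * x; }
  scale = std::sqrt(scale);   // Euclidean norm, always >= 0
  assert(scale > 0);          // the assertion that fires when v is the zero vector
  std::vector<double> unit(v.size());
  for (std::size_t i = 0; i < v.size(); ++i) { unit[i] = v[i] / scale; }
  return unit;                // unit-length vector pointing in the direction of v
}

Simply assigning normal = normalTemp instead silences the assertion, but it passes a zero-length (non-unit) normal on to the subsequent STL distance calculation, which is why fixing the geometry itself is the safer route (see the following reply).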
November 29, 2021 at 10:06 pm #6202 | saadbin (Participant)
Very sorry, I just updated the permission for the image; you can look at it now, I hope.
December 10, 2021 at 2:33 pm #6215 | Adrian (Keymaster)
As I wrote above, the assertion error is due to an invalid value for normalTemp, which in turn is caused by a cell of material 1 being surrounded by material 5 (boundary) in such a way that the normal directions cancel (but counter is still non-zero, so that the following code is wrongly called).

I'd suggest taking a closer look at your material geometry and, e.g., increasing the resolution to prevent such degenerate fluid cells. Additionally (or alternatively) you can add a non-zero check to the conditional in line 79.
The normalization itself is necessary to fulfill the preconditions of the STL distance calculation in line 95.
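A minimal sketch of the kind of guard suggested here, using the variable names from the discussion above; exact line numbers and names may differ between OpenLB versions:

// Inside the loop in turbulentF3D.hh, before the normalization:
T normSq = normalTemp[0]*normalTemp[0]
         + normalTemp[1]*normalTemp[1]
         + normalTemp[2]*normalTemp[2];
if (counter > 0 && normSq > 0) {
  normal = util::normalize(normalTemp);  // safe: the norm is strictly positive
  // ... STL distance calculation (around the original line 95) follows here ...
}
// Degenerate cells (material 1 fully surrounded by material 5 with cancelling
// normals) are then skipped instead of triggering the assertion.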