Low Re flow inside Microchannel

OpenLB – Open Source Lattice Boltzmann Code Forums on OpenLB General Topics Low Re flow inside Microchannel

Viewing 9 posts - 1 through 9 (of 9 total)
  • #6096

    Recently I have been trying to run simulations with very small characteristic lengths, and I consistently run into memory issues. I am guessing that the very small characteristic length makes del_x small, which drives the memory requirement very high. Am I missing anything? Is there a way to overcome this? Thanks in advance.

    Here are some parameters I set for the cylinder3d case that I built on:

    Relaxation time = 0.53
    Char. Length = 0.000138 m
    Phys. Viscosity = 0.000015 m2/s
    Re = 0.1
    N = 20

    More details:

    [MpiManager] Sucessfully initialized, numThreads=4
    [UnitConverter] —————– UnitConverter information —————–
    [UnitConverter] — Parameters:
    [UnitConverter] Resolution: N= 20
    [UnitConverter] Lattice velocity: latticeU= 5e-05
    [UnitConverter] Lattice relaxation frequency: omega= 1.88679
    [UnitConverter] Lattice relaxation time: tau= 0.53
    [UnitConverter] Characteristical length(m): charL= 0.000136
    [UnitConverter] Characteristical speed(m/s): charU= 0.0110294
    [UnitConverter] Phys. kinematic viscosity(m^2/s): charNu= 1.5e-05
    [UnitConverter] Phys. density(kg/m^d): charRho= 1
    [UnitConverter] Characteristical pressure(N/m^2): charPressure= 0
    [UnitConverter] Mach number: machNumber= 8.66025e-05
    [UnitConverter] Reynolds number: reynoldsNumber= 0.1
    [UnitConverter] Knudsen number: knudsenNumber= 0.000866025
    [UnitConverter] — Conversion factors:
    [UnitConverter] Voxel length(m): physDeltaX= 6.8e-06
    [UnitConverter] Time step(s): physDeltaT= 3.08267e-08
    [UnitConverter] Velocity factor(m/s): physVelocity= 220.588
    [UnitConverter] Density factor(kg/m^3): physDensity= 1
    [UnitConverter] Mass factor(kg): physMass= 3.14432e-16
    [UnitConverter] Viscosity factor(m^2/s): physViscosity= 0.0015
    [UnitConverter] Force factor(N): physForce= 2.25e-06
    [UnitConverter] Pressure factor(N/m^2): physPressure= 48659.2
    [UnitConverter] ————————————————————-
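    [Editor's note: the derived quantities in the log above can be reproduced with a few lines of standalone C++. This is a sketch of the standard BGK unit conversion, not OpenLB code; the helper names are illustrative.]

    ```cpp
    #include <cmath>

    // Grid spacing from characteristic length and resolution.
    double physDeltaX(double charL, int N) { return charL / N; }

    // The lattice viscosity follows from the BGK relation
    // nu_lattice = c_s^2 * (tau - 1/2), with c_s^2 = 1/3 for D3Q19;
    // delta_t is then fixed by matching the physical viscosity.
    double physDeltaT(double tau, double dx, double nuPhys) {
      double nuLattice = (tau - 0.5) / 3.0;
      return nuLattice * dx * dx / nuPhys;
    }

    // Lattice velocity is the characteristic speed in lattice units.
    double latticeVelocity(double charU, double dx, double dt) {
      return charU * dt / dx;
    }

    // With the parameters from the log (N = 20, tau = 0.53, charL = 136e-6 m,
    // nu = 1.5e-5 m^2/s, charU = Re*nu/charL ~ 0.0110294 m/s) this yields
    // dx = 6.8e-6 m, dt ~ 3.08267e-8 s and latticeU ~ 5e-5, matching the output.
    ```
    
    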


    By memory issues, do you mean that you run out of memory on the system? One possibility is that the domain decomposition does not do what it should, and you end up with large empty areas that nevertheless need to be allocated (as OpenLB currently only offers blocks of directly addressed grids). Whether this is the case here is hard to tell from the provided information.

    In general, small characteristic lengths do not necessitate large resolutions: the characteristic length is only a parameter of the physical parametrization from which the actual lattice quantities are derived.


    Adrian, thanks for the feedback.

    It was actually a RAM issue: the number of voxels was exceeding 1e7 because I had set my domain size excessively large compared to the length scale. However, even after fixing that, I am getting another error. Please see the snapshot of the error message here:

    The log file is here:

    Apart from the error, several key things to be concerned about for me:

    1. My time step (del_t = 5.4803e-10 s) is very small, which I cannot control because it is automatically calculated from the relaxation time (0.53) and del_x = L/N = 9e-7 m (due to the very small characteristic length L = 136 microns and resolution N = 150). Consequently, my lattice velocity (U*del_t/del_x) is very small at Re = 0.1, where U = Re*viscosity/L = (0.1*1.5e-5)/(136e-6). Is there any way to reduce the time step and hence the lattice velocity, and thereby make the time propagation faster?

    2. My case is similar to the cylinder3d case; the difference is that instead of a single cylinder at the center, I have a number of small cylinder-like structures on the floor, each with an approximate diameter of 3 microns, in a channel of about 400*136*136 micron^3. In my understanding, the resolution needs to be high to resolve the structures on the floor, so I set N = 150. Let me know if my understanding of the resolution requirement is incorrect. A snapshot of the geometry is here:


    Correction to my question on the first point:
    Is there any way to ‘INCREASE’ the time step and hence the lattice velocity, and thereby make the time propagation faster?
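    [Editor's note: the scaling behind this question can be sketched in a few lines. This is a standalone illustration of the fixed-tau (diffusive) parametrization, not OpenLB code.]

    ```cpp
    // Under a fixed relaxation time, the time step is determined by
    // delta_t = nu_lattice * delta_x^2 / nu_phys with nu_lattice = (tau - 0.5)/3,
    // so delta_t grows linearly with (tau - 0.5) and quadratically with delta_x.
    double deltaT(double tau, double dx, double nuPhys) {
      return (tau - 0.5) / 3.0 * dx * dx / nuPhys;
    }

    // With nu = 1.5e-5 m^2/s and dx = 136e-6/150 ~ 9.07e-7 m:
    //   deltaT(0.53, dx, nu) ~ 5.48e-10 s  (the value from the log)
    //   deltaT(0.80, dx, nu)               (10x larger, since 0.3/0.03 = 10)
    // i.e. raising tau or coarsening dx enlarges the time step, at the cost
    // of accuracy and stability margin.
    ```
    
    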


    In order to investigate the failing assertion: can you compile the program in debug mode and start it in a debugger to obtain a backtrace? (In case this is unfamiliar: run gdb ./cylinder3d and type bt once it has crashed.)

    1: There is likely more than one possible set of unit conversion parameters that fits your model. How did you decide on the relaxation time? How is the unit converter set up, i.e. which parameters does it fix?

    2: Looking at your geometry, your resolution requirement is correct. (Your N is relative to which physical length? For a first test it should be sufficient to choose, e.g., some low multiple of 10 cells to discretize a 3 micron wide cylinder.)
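    [Editor's note: the arithmetic implied here can be made explicit. A standalone sketch, assuming the parameters quoted in this thread (charL = 136 microns, N = 150, 3 micron cylinders):]

    ```cpp
    #include <cmath>

    // Number of lattice cells spanning an obstacle of the given diameter,
    // when the resolution N refers to the characteristic length charL.
    double cellsAcross(double diameter, double charL, int N) {
      return diameter * N / charL;
    }

    // Resolution (relative to charL) required to put a desired number of
    // cells across the obstacle.
    int resolutionFor(double cellsWanted, double charL, double diameter) {
      return static_cast<int>(std::ceil(cellsWanted * charL / diameter));
    }

    // cellsAcross(3e-6, 136e-6, 150)      ~ 3.3 cells per cylinder (too coarse)
    // resolutionFor(10.0, 136e-6, 3e-6)   = 454 to get ~10 cells per cylinder
    ```
    
    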


    1. I tried the gdb debugger to figure out the origin (see the image here). From what I understand, the scaling error comes from the following line inside cylinder3d.cpp:

    SuperLatticeYplus3D<T, DESCRIPTOR> yPlus( sLattice, converter, superGeometry, stlReader, 5 );

    This is defined inside src/functors/lattice/turbulentF3D.hh, where line 87 calls the function ‘normalize’:

    normal = util::normalize(normalTemp);

    – which is throwing the assertion error. Could you let me know what this line is actually doing?
    And if I modify this line to not normalize, what problems would that cause for the physics? If I change it to “normal = normalTemp;”, the error is no longer thrown.

    2. Currently my N is relative to the width of the channel (136 microns), not the width of the obstacle (3 microns). I am setting the width of the channel as the characteristic length because I am targeting a specific Knudsen number (mean free path / charL). As the mean free path of air is constant, I was thinking I have to fix charL to achieve my target Knudsen number. Do you think this approach is wrong? What do you suggest otherwise?



    1: Unfortunately, your image link is not set to public; the easiest way would be simply to post the backtrace log as text. In any case: the only assertion in util::normalize is scale > 0, i.e. it prevents division by zero (as scale >= 0 in any case). This means that normalTemp is zero, which in turn suggests a material geometry issue.
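    [Editor's note: a minimal standalone sketch of such a normalization, with the same precondition described above, plus a guarded variant along the lines of the non-zero check suggested later in the thread. This is illustrative code, not the actual OpenLB implementation.]

    ```cpp
    #include <array>
    #include <cassert>
    #include <cmath>

    // Normalize a 3-vector; the precondition mirrors util::normalize:
    // the norm must be strictly positive, otherwise the assertion fires.
    std::array<double, 3> normalizeSketch(const std::array<double, 3>& v) {
      double scale = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
      assert(scale > 0); // trips exactly when the input vector is zero
      return { v[0]/scale, v[1]/scale, v[2]/scale };
    }

    // Guarded variant: report failure for a zero vector (e.g. a degenerate
    // cell whose boundary normals cancel) instead of asserting.
    bool tryNormalize(const std::array<double, 3>& v, std::array<double, 3>& out) {
      double scale = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
      if (scale <= 0) return false;
      out = { v[0]/scale, v[1]/scale, v[2]/scale };
      return true;
    }
    ```
    
    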


    Very sorry, I just updated the permission for the image; I hope you can see it now.


    As I wrote above, the assertion error is due to an invalid value for normalTemp, which in turn is caused by a cell of material 1 being surrounded by material 5 (boundary) in such a way that the normal directions cancel (but counter is still non-zero, so that the following code is wrongly called).

    I’d suggest taking a closer look at your material geometry and, e.g., increasing the resolution to prevent such degenerate fluid cells. Additionally (or alternatively), you can add a non-zero check to the conditional in line 79.

    The normalization itself is necessary to fulfill the preconditions of the STL distance calculation in line 95.
