steed188
Forum Replies Created
steed188Participant
Hi Johanna,
Thank you. I’m interested in the cooperation project. How can I get further information?
Best wishes,
steed188

steed188Participant
Hi Johanna,
The compilation is OK, but the simulation diverges immediately.
I posted the main program and the temperature NeumannBC I wrote on GitLab. Would you mind helping to check where the problem is? https://gitlab.com/steed188/openlb
This is an LES + Boussinesq approximation simulation of the convective heat transfer problem.
The invocation of the NeumannBC is in lines 512-529 of CesT.cpp, and the NeumannBC itself is implemented in AdvectionDiffusionNeumannBoundaryProcessor3D.h.
Thank you.
with best wishes,
steed188

steed188Participant
Hello Johanna and Antoniowu,
I’m also writing a Neumann BC for the ADlattice. I created a LocalPostProcessor3D and its corresponding PostProcessorGenerator3D. In the LocalPostProcessor3D, the core idea is to compute the density of the adjacent cell and assign it directly to the boundary cell. But it doesn’t seem to work. The core code of the LocalPostProcessor3D is like this:
template <typename T, typename descriptor>
void AdNeumannBoundaryProcessor3D<T, descriptor>::processSubDomain(
    BlockLattice<T, descriptor>& blockLattice,
    int x0_, int x1_, int y0_, int y1_, int z0_, int z1_)
{
  int newX0, newX1, newY0, newY1, newZ0, newZ1;
  if (util::intersect(x0, x1, y0, y1, z0, z1,
                      x0_, x1_, y0_, y1_, z0_, z1_,
                      newX0, newX1, newY0, newY1, newZ0, newZ1)) {
#ifdef PARALLEL_MODE_OMP
#pragma omp parallel for
#endif
    for (int iX = newX0; iX <= newX1; ++iX) {
      for (int iY = newY0; iY <= newY1; ++iY) {
        for (int iZ = newZ0; iZ <= newZ1; ++iZ) {
          // copy the density of the adjacent cell onto the boundary cell
          T temperature = blockLattice.get(iX - direction[0],
                                           iY - direction[1],
                                           iZ - direction[2]).computeRho();
          blockLattice.get(iX, iY, iZ).defineRho(temperature);
        }
      }
    }
  }
}

And I call it like this:
PostProcessorGenerator3D<T, ADDESCRIPTOR>* ImplementADNeumann =
    new AdNeumannBoundaryProcessorGenerator3D<T, ADDESCRIPTOR>(
        x0, x1, y0, y1, z0, z1, directionX, directionY, directionZ);
adLattice.addPostProcessor(*ImplementADNeumann);

Do you have any idea what is wrong?
best wishes,
steed188

steed188Participant
Oh, thank you, Mathias.
Do you mean that smagoPrefactor in SmagorinskyBoussinesqCouplingGenerator3D is actually the Smagorinsky constant itself, rather than smagoPrefactor = cSmago * cSmago * descriptors::invCs2<T, NSDESCRIPTOR>() * descriptors::invCs2<T, NSDESCRIPTOR>() * 2 * util::sqrt(2)?
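Just to state the two readings explicitly, here is a small sketch using the same names as above (this is only how I understand the two options, not a confirmed usage):

Code:// Reading 1: pass the Smagorinsky constant itself to the coupling generator
T smagoPrefactor = cSmago;

// Reading 2: pass the combined prefactor as I computed it above
T smagoPrefactorCombined = cSmago * cSmago
    * descriptors::invCs2<T, NSDESCRIPTOR>()
    * descriptors::invCs2<T, NSDESCRIPTOR>()
    * 2 * util::sqrt(2);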
By the way, I’m also confused about two parameters of SmagorinskyBoussinesqCouplingGenerator3D, deltaTemp and T0. Should deltaTemp always be 1, meaning I have to normalize T_high and T_low to 1 and 0, or can I just use the real temperature difference (like deltaTemp = 20 °C)? And what is T0? The Rayleigh-Bénard example uses the low temperature. Should it always be so, or is it just a reference temperature that I can set, such as (T_high - T_low)/2?

steed188Participant
Dear Markus,
I used mpiexec from OpenMPI to run the simulation. My case is like a rectangular wind tunnel with a single box inside it. I used the Smagorinsky model, Re = 125,000, and the mesh has 16 million cells.
For the 16 CPUs of one node, it looks like this:
[Timer] Lattice-Timesteps | CPU time/estim | REAL time/estim | ETA | MLUPs
[Timer] 5000/600500 ( 0%) | 1059.57/127254.36 | 1060.94/127419.13 | 126359 | 0.00

For the 64 CPUs of 4 nodes, it looks like this:

[Timer] Lattice-Timesteps | CPU time/estim | REAL time/estim | ETA | MLUPs
[Timer] 5000/600500 ( 0%) | 3143.54/377539.15 | 3149.02/378197.78 | 375049 | 0.00

It seems that the calculation time on 4 nodes is about 3 times that of 1 node?
I also tried the cylinder2d example with a mesh 1000 times the size of the original one. The calculation time on 4 nodes is almost the same as on 1 node.
My cluster has 4 nodes. Each node is an HP ProLiant DL360 Gen9 with 2 Xeon E5-2667 processors at 3.2 GHz; every processor has 8 cores, and every node has 128 GB of memory.
My Makefile.inc is shown below:
Code:#CXX := g++
#CXX := icpc -D__aligned__=ignored
#CXX := mpiCC
CXX := mpic++

CC := gcc # necessary for zlib, for Intel use icc

OPTIM := -O3 -Wall -march=native -mtune=native # for gcc
#OPTIM := -O3 -Wall -xHost # for Intel compiler
DEBUG := -g -DOLB_DEBUG

CXXFLAGS := $(OPTIM)
#CXXFLAGS := $(DEBUG)

CXXFLAGS += -std=c++0x
#CXXFLAGS += -std=c++11
#CXXFLAGS += -fdiagnostics-color=auto
#CXXFLAGS += -std=gnu++14

ARPRG := ar
#ARPRG := xiar # mandatory for intel compiler

LDFLAGS :=

#PARALLEL_MODE := OFF
PARALLEL_MODE := MPI
#PARALLEL_MODE := OMP
#PARALLEL_MODE := HYBRID

MPIFLAGS :=
OMPFLAGS := -fopenmp

#BUILDTYPE := precompiled
BUILDTYPE := generic

best wishes,
steed188

steed188Participant
Dear Marc,
Thank you for your advice. I will try to use the periodic BC, and I’m still waiting for my professor’s permission for the spring school. 🙂
best
steed188

steed188Participant
Dear Marc,
In my experience with other CFD codes, a zero-gradient BC usually means that only the gradient in the mainstream direction is zero, while the gradients in the other directions are not. If I pass uAv, the convection BC will make the velocity uniform; is that still a zero-gradient BC? But if I do not pass uAv, the simulation cannot continue.
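In symbols, what I mean by a streamwise zero-gradient outlet (taking x as the mainstream direction) is only
$\left.\partial u_i / \partial x\right|_{\text{outlet}} = 0$ for $i = x, y, z$,
while the velocity may still vary with y and z across the outlet face.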
And the periodic BC does not fit my case, because the inlet of my case is fixed.
Is there any other method to solve the problem?
with best wishes,
steed188

steed188Participant
Dear Marc,
Yes, I passed uAv to the convection boundary constructor. But if I do not pass it, as shown below, the simulation diverges immediately.
Code:bc.addConvectionBoundary(superGeometry, 4, omega);

Is there a way to use the convection BC without passing uAv in a high-Reynolds-number flow?
best,
steed188

steed188Participant
Dear Marc,
I used the interpolated pressure BC, but it was still not stable in my high-Reynolds-number case, so I tried the convection BC.
Do you mean that the convection BC DOES make the velocity uniform at the outflow?

I also tried the periodic BC. I defined a velocity at the inlet at iT = 0 and made the inlet and outlet periodic in the x-direction, but it did not seem to work. It looks like this:
Code:cuboidGeometry.setPeriodicity( true, false, false );
…
sLattice.defineDynamics(superGeometry, 3, &bulkDynamics); //Inlet
sLattice.defineDynamics( superGeometry, 4, &bulkDynamics ); //Outlet
……
sLattice.iniEquilibrium(superGeometry, 3, rhoF, uF);
sLattice.iniEquilibrium(superGeometry, 4, rhoF, uF);
sLattice.defineRhoU( superGeometry, 3, rhoF, uF );
sLattice.defineRhoU( superGeometry, 4, rhoF, uF );
….
sLattice.defineU(superGeometry, 3, uSol); // define inlet velocity

Best,
steed188

steed188Participant
I found the problem.
I had mistakenly put the indicator functor that creates the whole simulation field, "IndicatorCuboid3D<T> extendedDomain", before the main function. That means the IndicatorCuboid3D was not contained in any function but acted as a global definition.
With this mistake the simulation itself ran without errors and the results were correct, but it caused the fault when dealing with SuperData3D.
But in my VTK files written from SuperDataF, the data at the borders of the cuboids are missing. Does SuperData deal with the overlap data by itself, or do I have to do something to fetch the overlap data myself?
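For completeness, the change I made is essentially the following sketch (the extents and origin are illustrative values, not my exact setup):

Code:// Before: constructed at file scope, outside any function – this caused the SuperData3D fault
// IndicatorCuboid3D<T> extendedDomain( extend, origin );

int main( int argc, char* argv[] )
{
  olbInit( &argc, &argv );

  // After: the indicator is constructed inside the function that uses it
  Vector<T,3> extend( 4.0, 2.5, 3.8 );   // illustrative domain extents
  Vector<T,3> origin( 0.0, 0.0, 0.0 );   // illustrative origin
  IndicatorCuboid3D<T> extendedDomain( extend, origin );

  // ... geometry and lattice setup as before ...
}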

August 5, 2017 at 3:10 pm, in reply to: How to use a SuperIdentity3D pointer to pass between two time steps #2685
steed188Participant
I may have found the problem.
It seems that SuperData3D can store the data correctly, and I can access the cell data after I convert it into a SuperDataF3D. But if I use vtmWriter.addFunctor to write the SuperDataF3D, it goes wrong.
Is it right to write a SuperDataF3D to VTK files, or should I convert it to another functor first?
with best wishes,
steed188

steed188Participant
After thousands of checks, I finally found why my convection BC didn’t work! 😀
I had chosen the local boundary condition, and that is why the BC didn’t work. When I chose the Interp BC, it worked!
May I ask what the difference between these two kinds of BC is?

By the way, since you said that I should use a flux functor to calculate uAv every time step: I found that all of the flux functors are based on a circular shape, although my outlet is a rectangle. Is there a functor that can be used for a rectangular shape?
My temporary solution is to define a circle inside the rectangle and use the flux functor to calculate uAv on it. It can be calculated this way, but I have no idea whether it is correct.

steed188Participant
The outflow is defined as below. The outflow BC is on the wall from point (0, 0, 0) to point (0, 2.5, 3.8). The positive x-direction points into the fluid.
Code:Vector<T, 3> outCenter(0, 1.8, 1.8);
T outRadius = 0.15;
Vector<T, 3> outNormal( T(1), T(), T() );
IndicatorCircle3D<T> outlet( outCenter, outNormal, outRadius);
IndicatorCylinder3D<T> outflow( outlet, 2 * converter.getLatticeL() );
superGeometry.rename( 2, 4, 1, outflow );

The outflow BC seems to be set correctly, because if I use the pressure boundary it works well in low-Re flow.
Then I use the same IndicatorCircle3D on the outflow with material number 4 for the flux functor, as below.
Code:Vector<T, 3> outCenter(0, 1.8, 1.8);
T outRadius = 0.15;
Vector<T, 3> outNormal( T(1), T(), T() );
IndicatorCircle3D<T> outlet( outCenter, outNormal, outRadius );
std::list<int> materials = { 1, 3, 4 };
SuperLatticePhysVelocityFlux3D<T, DESCRIPTOR> vFluxOutflow0( sLattice, converter, superGeometry, outlet, materials );

The mean velocity is a negative number. Does that mean the flux is negative? And the convection BC didn’t work.
I don’t know where it goes wrong.
with best
steed188

steed188Participant
I tried the *uAv, but there seems to be no difference.
I first defined a global pointer uAv and initialized it.
Code:T aveOut = 0;
T* uAv = &aveOut;

Then I defined the convection BC before the iteration.
Code:void prepareLattice( … ) {
bc.addConvectionBoundary(superGeometry, 4, omega, uAv);
sLattice.defineRhoU(superGeometry, 4,rhoF, uF); //rhoF=1, uF=(0,0,0)
}

Lastly, I calculated the average velocity every time step and pointed the pointer to it:
Code:void getResults() {
IndicatorCircle3D<T> outflow( outCenter, outNormal, outRadius );
std::list<int> materials = { 1, 4 };
SuperLatticePhysVelocityFlux3D<T, DESCRIPTOR> vFluxOutflow( sLattice, converter, superGeometry, outflow, materials );
int input[5] = {0};
T flux[5] = {0.};
vFluxOutflow( flux, input );
T meanSpeed = flux[0] / flux[1];
uAv = &meanSpeed;
}
I thought that this way I had imposed the average velocity on *uAv for the BC.
I also tried setting a fixed value to *uAv. There was no difference.
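To make my doubt concrete: should the update go through the pointer instead of re-pointing it, i.e. something like the following sketch (plain C++, same names as above; I am not sure this is the intended usage)?

Code:// in getResults(), after computing meanSpeed:
// uAv = &meanSpeed;   // what I did: meanSpeed is a local variable and goes out of scope
*uAv = meanSpeed;      // write through the pointer, so aveOut itself holds the new value
                       // and the BC, which was constructed with uAv (= &aveOut), reads the updated average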
with best,
steed188