Adrian
Forum Replies Created
Adrian (Keymaster)
Which cpp file? What exactly are you doing? Such a general error hints at a basic issue with your setup. As for WSL usage, we have a tech report on that. I would suggest following the user guide to get started.
Adrian (Keymaster)
Mapping STL-described geometries onto the regular lattice used for LBM (“meshing”) is supported natively in OpenLB, without external tools, via the STLreader indicator. You can check out the user guide and/or example cases that use STL geometries, such as examples/turbulence/aorta3d, for reference.

Adrian (Keymaster)
There is additional documentation on the geometry setup in chapter 3. You may also find Doxygen convenient for looking up specific methods.
The conditionals in the lattice setup of the 2D Poiseuille case exist only to illustrate the different supported boundary conditions (you control this via the definitions at the top of the file).
The basic approach followed in many OpenLB cases is to:
1. Separate the spatial domain into blocks (CuboidGeometry)
2. Assign material numbers to the cells of the discretized domain (SuperGeometry) to group them into e.g. bulk and boundary cells
3. Assign local cell models (Dynamics) and boundary conditions to cells using material indicators as a proxy
4. Define initial and boundary values
5. Start the core simulation loop with periodic output of results

Do you have specific questions? Otherwise there really is no way around reading the code alongside the documentation if you want to further familiarize yourself with the code. One thing I want to highlight is that the specific way the example cases are set up is only a convention that we found useful over the years – nothing stops you from structuring things differently using the core OpenLB classes.
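The five steps above can be sketched as pseudocode in the style of a typical OpenLB case. Apart from CuboidGeometry, SuperGeometry, and Dynamics, which are named in this thread, the class and function names below (HeuristicLoadBalancer, defineDynamics, setBounceBackBoundary, and so on) are assumptions; check the user guide and the shipped examples for the exact API of your release.

```cpp
// Pseudocode sketch -- not compilable as-is; names beyond
// CuboidGeometry/SuperGeometry/Dynamics are assumed, verify against the examples.

// 1. Separate the spatial domain into blocks
CuboidGeometry3D<T> cuboidGeometry(domainIndicator, physDeltaX, noOfCuboids);
HeuristicLoadBalancer<T> loadBalancer(cuboidGeometry);

// 2. Assign material numbers to group cells (e.g. 1 = bulk, 2 = boundary)
SuperGeometry<T,3> superGeometry(cuboidGeometry, loadBalancer);
superGeometry.rename(0, 2, domainIndicator);
superGeometry.rename(2, 1, {1,1,1});

// 3. Assign local cell models and boundary conditions via material indicators
SuperLattice<T,DESCRIPTOR> sLattice(superGeometry);
sLattice.defineDynamics<BGKdynamics>(superGeometry, 1);
setBounceBackBoundary(sLattice, superGeometry, 2);

// 4. Define initial and boundary values
sLattice.defineRhoU(superGeometry, 1, rho, u);
sLattice.initialize();

// 5. Core simulation loop with periodic output
for (std::size_t iT = 0; iT < maxTimeSteps; ++iT) {
  sLattice.collideAndStream();
  if (iT % outputInterval == 0) {
    // write VTK output, evaluate functors, etc.
  }
}
```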
If you want hands-on instruction with practical exercises you may find our upcoming Spring School interesting. Of course you can also ask questions in this forum.
Adrian (Keymaster)
Thank you for your interest in OpenLB! I read that you looked through the user guide, but did you see chapter “10: Step by Step: Using OpenLB for Applications” there? This is probably exactly what you are searching for.
As for the geometry of the poiseuille 2d case: Which case are you looking at? The geometry setup in
examples/laminar/poiseuille2d
consists of 33 lines including comments and empty lines, so I am confused about what you mean.

Adrian (Keymaster)
Please do not post the same question twice. See my answer in the other thread.
Adrian (Keymaster)
The error you are encountering is that your nvcc defaults to a standard older than C++11 (where nullptr was added). Does a manual execution of

nvcc -std=c++11 -c tinystr.cpp -o build/tinystr.o

work? Are you sure the environment where you build OpenLB is using CUDA 11’s nvcc? (It is quite easy to mix this up depending on where and how you installed CUDA; there may also be multiple versions installed in parallel.) In any case, as per the release notes, OpenLB 1.6 requires at least CUDA 11.4.
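A quick way to rule out a mixed-up toolchain is to check which nvcc the shell actually resolves. The snippet below is a generic sketch; the commented compile line mirrors the command above and assumes the same working directory as the failing build.

```shell
# Show which nvcc the build environment resolves first on PATH; with several
# CUDA installations in parallel, this is often not the one you expect.
NVCC="$(command -v nvcc || true)"
echo "nvcc on PATH: ${NVCC:-none found}"

# If one is found, print its release string to compare against CUDA >= 11.4:
if [ -n "$NVCC" ]; then
  "$NVCC" --version | grep -i release
fi

# Then retry the failing compile with the C++ standard forced explicitly:
#   nvcc -std=c++11 -c tinystr.cpp -o build/tinystr.o
```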
Adrian (Keymaster)
The issue here is likely that the CUDA release is too old (the diagnostic control option should be independent of any architecture setting). To confirm, you can remove the option in line 78 of rules.mk.

The CUDA_ARCH value for a K80 should be 30, as it belongs to the Kepler generation.

Adrian (Keymaster)
It is very unlikely that this is related to your computer configuration and the functor setup you listed. The duplicated files, on the other hand, are almost certainly caused by your specific changes / the way you (wrongly) call the non-MPI OpenLB application. This is not a bug in OpenLB.
You are also using the deprecated legacy particle code (see the user guide for the current approach).
In any case, the core OpenLB developer team cannot offer this level of detailed support in this forum (we answer questions here alongside our actual research and development work), especially considering that you do not share your work.
If you want this kind of personal support, you should consider attending our Spring School or finding some other way for both of us to get something out of this process (I know we suggested this to you before, but the level of your questions is again getting out of hand, sorry).
Adrian (Keymaster)
So you are again doing what I told you many times before won’t work?… You cannot magically turn a non-parallel build into a parallel program by prefixing its execution with mpirun.

Periodic boundaries for resolved/subgrid-scale particles definitely work using MPI. Or are you talking about some legacy particle mode?
Please explain in detail what you expect the screenshots to tell me; as far as I can tell, they simply list the files in tmp. What is the difference in behavior between 1.5 and 1.6? What did you modify in channel3d (as it doesn’t contain particles in the release version)?

Adrian (Keymaster)
The screenshot also only shows files for a single cuboid – confirming that you ran a non-MPI OpenLB case via mpirun. If you had set this up correctly, you would see at least one cuboid file per process.

Adrian (Keymaster)
Judging by your screenshot, the “case you ran before in 1.5” is the same channel3d you are running now?

Please describe exactly what you are doing. As I mentioned, the 8-fold duplication of VTK output points strongly towards you still using mpirun -np 8 for a non-MPI executable. You likely also have a contaminated tmp folder and build (which is why I suggested fully clearing the directory).

The line of code you posted will do exactly what its comment describes. Again, the duplicated lines you see are due to not compiling the application with MPI enabled but trying to run it using mpirun.

Adrian (Keymaster)
…that is because you still used
mpirun despite not activating MPI (as I explained in my post). Do you not see the connection between 8 processes and 8 outputs? This never worked in the way you described. Of course, VTK output also works without MPI if you execute the program in the correct way. In any case, happy to hear that it works.

For your second point: this sounds as if you did not fully clear the tmp directory between individual tries.

Adrian (Keymaster)
This looks very much as if you did not actually activate MPI in the config.
mpirun doesn’t warn you if you try to call it with a non-MPI executable – it will just start the same program n times without any work distribution. You probably also noticed that all terminal output is duplicated eight times?

If you change the config to use MPI (see the user guide or the example configs) and recompile, everything should work.
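This duplication mechanism is easy to reproduce without MPI at all: launching n independent copies of a program that knows nothing about work distribution simply repeats its output n times, which is exactly what mpirun does with a non-MPI binary. A minimal stand-in (the simulate function below is a hypothetical placeholder for a non-MPI OpenLB case, not real OpenLB output):

```shell
# Stand-in for a non-MPI OpenLB executable: it always does the full job itself.
simulate() {
  echo "writing cuboid0.vti"
}

# "mpirun -np 3 ./case" with a non-MPI binary degenerates to exactly this:
# three independent copies, each printing (and writing) the same thing.
for rank in 0 1 2; do
  simulate
done
# prints "writing cuboid0.vti" three times
```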
January 19, 2024 at 11:04 am, in reply to: nvcc fatal : Option ‘--generate-code arch=compute_60’, missing code #8143

Adrian (Keymaster)
You mean you managed to fix it?
In any case, generating device code for a later generation than the one a GPU has would explain the Thrust exception.
January 17, 2024 at 11:28 am, in reply to: nvcc fatal : Option ‘--generate-code arch=compute_60’, missing code #8130

Adrian (Keymaster)
Happy to hear that the examples are working now.
What is the issue with your case? Does it work on the CPU?