Accessing data during simulation
April 5, 2016 at 1:33 pm #1821
fernanor (Member)
Hi all,

I’m new to this board, and first want to say: nice solver! :) No hassle to compile, it works on every architecture I tried, with various settings and multiple compilers. Really nice work there.

Now, I’m not actually from the CFD community, but more of a visualization guy. What I would like to try with the OpenLB solver is integrating so-called tightly coupled in-situ visualization.

What this means is that I would need access to the memory where the data is being processed, to run some algorithm on it every N time steps. I was thinking of something like a FinalizeTimestep() call in the solver, into which I could drop my DoVisualizationStuff(timestep_results) call, which is implemented in a library of mine. I’ve done similar things with OpenFOAM (foam-extend, actually), but I prefer an LB solver for various reasons, mostly educational ones.

What I would need access to more specifically are (preferably) the state variables (i.e. more or less what gets written to the vti files) and, even better, the actual fluxes calculated by the solver.

So here are the questions:

- Which files do I need to look at to find a good place to call the visualization?
- Which data objects do I need to learn about in order to pass them to my algorithm?
- Is this even possible?

By the way, of course I would only need the data available on the current MPI rank (for starters, anyway 🙂).

Thanks for reading this long post; I hope someone has a clue as to how this is achieved. Also, thanks in advance for helping me out!

Best,
Oli
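To make the idea concrete, here is a minimal sketch of the hook pattern described above. Note that `InSituHook`, `finalizeTimestep`, and the callback are hypothetical names taken from the post, not OpenLB API; a real integration would pass the solver's own lattice data instead of a plain vector:

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical in-situ hook: the solver calls the registered callback
// every N time steps, passing the rank-local field data by reference.
class InSituHook {
public:
  using Callback =
      std::function<void(std::size_t step, const std::vector<double>& field)>;

  InSituHook(std::size_t interval, Callback cb)
    : _interval(interval), _cb(std::move(cb)) {}

  // Called by the solver at the end of each time step
  // (the "FinalizeTimestep()" idea from the post).
  void finalizeTimestep(std::size_t step, const std::vector<double>& field) {
    if (step % _interval == 0) {
      _cb(step, field);  // DoVisualizationStuff(...) would go here
    }
  }

private:
  std::size_t _interval;
  Callback _cb;
};
```

The point of the callback indirection is that the visualization library stays decoupled from the solver: the solver only needs one extra call site in its time loop.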
April 5, 2016 at 3:41 pm #2314
albert.mink (Moderator)

Hi Oli,

are you really sure you want a DoVisualizationStuff() after EVERY collide-and-stream step? This is not recommended for performance reasons. Usually, computation and advanced post-processing are strictly separated. Keep in mind that LBM is formulated on the mesoscale, so the raw data in OpenLB has to be processed anyway; see prepareResults() in the OpenLB examples. This ‘internal’ post-processing is realized by the functor concept.

A quick-and-dirty workaround would be to write VTK data every time step and process it with a separate tool. As the output of OpenLB is VTK-based, this provides very general post-processing for the user.

By the way, which data are you interested in? Macroscopic velocity, discrete particle distributions, …?

Q1: The OpenLB examples call the function prepareResults(), where the internal post-processing happens.
Q2: The data hierarchy goes from Cells to BlockLattice3D to SuperLattice3D, where cells hold the raw data, a BlockLattice is a bunch of cells, and the SuperLattice handles parallelisation.
Q3: I am sure it is. It may need some adaptation, but as OpenLB is open source, such modifications are possible.

Looking forward to your reply!
Albert

Have a look at the example aorta3d; there you will find fluxes!
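The Q2 hierarchy can be illustrated with a heavily simplified sketch. These structs are stand-ins that only mirror the *shape* of OpenLB's Cell / BlockLattice3D / SuperLattice3D classes, not their actual interfaces; the density functor is the standard zeroth moment of the distributions:

```cpp
#include <array>
#include <numeric>
#include <vector>

// Simplified stand-ins for OpenLB's data hierarchy (Q2):
// a Cell holds the raw distribution functions f_i (here 19 values,
// as in a D3Q19 lattice), a BlockLattice is a bunch of cells, and a
// SuperLattice groups blocks for parallelisation (rank-local blocks).
struct Cell {
  std::array<double, 19> f{};  // discrete particle distributions
  double density() const {     // zeroth moment: rho = sum_i f_i
    return std::accumulate(f.begin(), f.end(), 0.0);
  }
};

struct BlockLattice {
  std::vector<Cell> cells;
};

struct SuperLattice {
  std::vector<BlockLattice> blocks;

  // Functor-style access: derive a macroscopic quantity from the raw
  // data, analogous in spirit to what OpenLB's functors do inside
  // prepareResults().
  double totalDensity() const {
    double rho = 0.0;
    for (const auto& b : blocks) {
      for (const auto& c : b.cells) {
        rho += c.density();
      }
    }
    return rho;
  }
};
```

In the real library the macroscopic quantities are obtained through functor objects rather than hand-written loops, but the data flow (raw distributions → per-cell moments → lattice-wide result) is the same.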
April 5, 2016 at 8:08 pm #2315
fernanor (Member)

Hi Albert!

First, thanks a lot for the quick reply; your answer gave me some good starting points. The main point of in-situ visualization is to avoid writing out data, at least to a file I/O system. The goal is to sort of “process” the data, create a reduced representation of some form, and only write out that very small amount of data. The use case is a large-scale simulation on a supercomputer, where you can easily rack up a couple of terabytes.

Say, for example, you are only interested in vortex core lines in some fluid simulation. What you do is perform all the calculations on your compute nodes and discard all additional information. Needless to say, writing out the data for a few 1D manifolds is much smaller than writing out all relevant data and performing the analysis as a post-process. It will also be much faster, since you completely skip the file I/O for the post-process step (if you consider both simulation and post-process time). There are many other advantages (and disadvantages) to consider, but that is the basic idea.

I haven’t looked at the code yet, but in the end I don’t really want the *raw* data, but basically the information right before it gets written to some medium. Q1 seems to answer exactly that.

And when I said fluxes, I was thinking in terms of Navier-Stokes. My knowledge of the inner workings of the discretization methods is rather limited. I was told, however, that LB methods work on small neighborhoods (i.e. what you said in Q2), essentially only face neighbors, which is a nice trait for large scales. Maybe there are pitfalls I don’t know about yet, but that’s why I’m here :) I will try to learn more about how LB works, and specifically about the OpenLB code, and maybe then I can ask more specific questions. I just don’t want to dig too much when there is an obvious reason I shouldn’t.

So thanks once more for the quick and accurate reply; I’ll post again after checking out some things.
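The reduction idea above can be sketched very simply. Instead of writing the full field to disk every step, a tiny summary is computed on the rank-local data and only that is persisted; a real pipeline would run a feature extractor (e.g. vortex core line detection) in place of the toy statistics here. All names below are illustrative, not OpenLB API:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Tiny in-situ reduction: collapse a rank-local scalar field into a
// few numbers. Writing this summary instead of the full field is what
// avoids the terabytes of file I/O mentioned in the post.
struct FieldSummary {
  double min;
  double max;
  double mean;
};

FieldSummary summarize(const std::vector<double>& field) {
  FieldSummary s{std::numeric_limits<double>::max(),
                 std::numeric_limits<double>::lowest(), 0.0};
  for (double v : field) {
    s.min = std::min(s.min, v);
    s.max = std::max(s.max, v);
    s.mean += v;
  }
  s.mean /= static_cast<double>(field.size());  // assumes non-empty field
  return s;
}
```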