VTK Repeat Write
OpenLB – Open Source Lattice Boltzmann Code › Forums › on OpenLB › Bug Reports › VTK Repeat Write
This topic has 11 replies, 2 voices, and was last updated 1 year, 1 month ago by Rookie.
January 20, 2024 at 1:25 pm | #8147 | Rookie (Participant)
Dear OpenLB developers,
I don’t know why I’m getting the following error when trying to open a VTK file with ParaView. Upon opening the VTI file, I noticed that the information is being duplicated. After re-downloading OpenLB-1.6, when I use the command ‘mpirun -np 8 ./channel3d,’ the generated VTI file exhibits the issue depicted in the following figure.
ERROR: In /build/paraview-vlxewD/paraview-5.7.0/VTK/IO/XMLParser/vtkXMLParser.cxx, line 391
vtkXMLDataParser (0x55651a73c800): Error parsing XML in stream at line 42, column 24575, byte index 6041862: mismatched tag

ERROR: In /build/paraview-vlxewD/paraview-5.7.0/VTK/IO/XML/vtkXMLReader.cxx, line 515
vtkXMLImageDataReader (0x55651afc00c0): Error parsing input file. ReadXMLInformation aborting.

ERROR: In /build/paraview-vlxewD/paraview-5.7.0/VTK/Common/ExecutionModel/vtkExecutive.cxx, line 779
vtkCompositeDataPipeline (0x55651a6cfc80): Algorithm vtkXMLImageDataReader(0x55651afc00c0) returned failure for request: vtkInformation (0x55651b922c70)
Debug: Off
Modified Time: 510228
Reference Count: 1
Registered Events: (none)
Request: REQUEST_INFORMATION
ALGORITHM_AFTER_FORWARD: 1
FORWARD_DIRECTION: 0

https://i.postimg.cc/L8PRwhPk/repeat.png
Best regards,
Rookie

January 20, 2024 at 1:46 pm | #8148 | Adrian (Keymaster)

This looks very much as if you did not actually activate MPI in the config. mpirun doesn’t warn you if you try to call it with a non-MPI executable; it will just start the same program n times without any work distribution. You probably also noticed that all terminal output is duplicated eight times? If you change the config to use MPI (see the user guide or the example configs) and recompile, everything should work.
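Adrian’s point is easy to reproduce with any non-MPI command. A minimal sketch, assuming an MPI launcher (Open MPI or MPICH) is on the PATH:

```shell
# mpirun performs no check that the target program actually uses MPI:
# it simply launches N independent copies. A plain echo is therefore
# printed four times, exactly like the duplicated terminal output of
# a non-MPI OpenLB binary started with mpirun -np 8.
mpirun -np 4 echo "hello from a non-MPI program"
```

Each independent copy also writes its own VTI output to the same file names, which is consistent with the duplicated content and “mismatched tag” XML errors shown above.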
January 20, 2024 at 2:29 pm | #8149 | Rookie (Participant)

Dear Adrian,
I didn’t activate MPI as described in the User Guide, because in a previous instance I downloaded OpenLB-1.6 and was able to open VTK files without configuring anything. Even without activating MPI, I observed 8 outputs in the terminal. After activating MPI, the terminal output occurred only once, and the VTK files could be opened successfully. However, I would like to know how to ensure the correctness of my VTK files without activating MPI. Is that possible?

Furthermore, I’ve observed that at times, among all the output files, one or several files can be opened successfully. In contrast, the files that cannot be opened exhibit duplicated output, repeated anywhere from a few times up to eight times. This also causes the files to become very large.
Best regards,
Rookie

January 20, 2024 at 3:46 pm | #8150 | Adrian (Keymaster)

…that is because you still used mpirun despite not activating MPI (as I explained in my post). Do you not see the connection between 8 processes and 8 outputs? This never worked in the way you described. Of course, VTK output also works without MPI if you execute the program in the correct way. In any case, happy to hear that it works.

For your second point: this sounds as if you did not fully clear the tmp directory between individual tries.

January 21, 2024 at 4:51 am | #8152 | Rookie (Participant)

Dear Adrian,
I found the case I ran before in version 1.5. At that time, I didn’t set MPI and I noticed that only the VTI file output in the 0th step was repeated 8 times, while others were normal. After modifying the files, the issue of duplicated output occurred not only in the 0th step. You can identify which ones are duplicated by observing the file sizes, and files exceeding 100MB are all duplicated. I still don’t know how to solve this problem. Then I found that this line of code is related to the setting of duplicated output:
// display messages from every single mpi process
// clout.setMultiOutput(false);

However, if this line of code is not enabled, doesn’t it mean that the output occurs only once in total for multithreading? I conducted tests on another computer, and the same issue occurred.
https://i.postimg.cc/F1t0Pq5p/repeat.png
Best regards,
Rookie

January 22, 2024 at 10:52 am | #8158 | Adrian (Keymaster)

Judging by your screenshot, the “case you ran before in 1.5” is the same channel3d you are running now? Please describe exactly what you are doing. As I mentioned, the 8-fold duplication of VTK output points strongly towards you still using mpirun -np 8 for a non-MPI executable. You likely also have a contaminated tmp folder and build (which is why I suggested fully clearing the directory). The line of code you posted will do exactly what its comment describes. Again, the duplicated lines you see are due to not compiling the application with MPI enabled but trying to run it using mpirun.

January 22, 2024 at 10:53 am | #8159 | Adrian (Keymaster)

The screenshot also only shows files for a single cuboid, confirming that you run a non-MPI OpenLB case via mpirun. If you had set this up correctly you would see at least one cuboid file per process.

January 22, 2024 at 12:12 pm | #8160 | Rookie (Participant)

The screenshot I shared above is the output after I made some modifications. The following screenshot shows the output of the original version 1.5 file; in the zeroth step, it repeated the information.
https://postimg.cc/gallery/9TbmvYm

I added some physical quantity outputs. I believe this should not be the cause of the problem. I understand that you suggested enabling MPI:

CXX := mpic++
PARALLEL_MODE := MPI

Indeed, this allows for the correct output of VTI files. However, I need to implement periodic boundaries for particles, and I can only run that with MPI turned off. Therefore, I disabled MPI, but still used “mpirun -np 8 ./channel3d” to run the program. What puzzles me is why some VTI files in the same run are output correctly, while others show duplication. Is there any relevant setting to prevent the VTI file from repeatedly writing information after the first write?

I didn’t understand your point about completely deleting the directory. I did rename the output folder, and when outputting, two folders appear: one named “tmp” and the other with the name I specified. Is there an issue with this, and should I make changes here?
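Putting Adrian’s advice together, a typical clean-rebuild cycle looks roughly like this. This is a sketch; the exact config file name (config.mk in recent OpenLB releases) and the example path are assumptions and may differ between versions:

```shell
# 1. In the build config at the OpenLB root (config.mk in recent
#    releases), enable MPI:
#      CXX           := mpic++
#      PARALLEL_MODE := MPI

# 2. Rebuild from a clean state so no non-MPI object files survive,
#    and clear stale output that would mix with new files:
cd examples/turbulence/channel3d   # path assumed; adjust to your case
make clean
rm -rf tmp
make

# 3. Only now is launching with mpirun meaningful; each of the 8
#    ranks will write its own cuboid files instead of 8 copies
#    fighting over the same ones:
mpirun -np 8 ./channel3d
```

Renaming the output folder does not help here: the stale build and any leftover tmp contents are what need to go before retrying.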
January 22, 2024 at 12:21 pm | #8161 | Adrian (Keymaster)

So you are doing again what I told you many times before won’t work?… You cannot magically turn a non-parallel build into a parallel program by prefixing its execution with mpirun.

Periodic boundaries for resolved/subgrid-scale particles definitely work using MPI. Or are you talking about some legacy particle mode?

Please explain in detail what you expect the screenshots to tell me; as far as I can tell, they simply list the files in tmp. What is the difference in behavior between 1.5 and 1.6? What did you modify in channel3d (as it doesn’t contain particles in the release version)?

January 22, 2024 at 12:55 pm | #8162 | Rookie (Participant)

I really don’t have much knowledge about parallel processing, and I apologize for any confusion I may have caused you. The screenshot I provided earlier was from running the original channel3d program without parallel compilation but using mpirun to execute it. What I want to convey is that the large file in the 0th step, which cannot be opened in ParaView, is due to repeated information writing, while the other files can be opened normally.
I have incorporated subgrid particles into the channel3d file, and I have achieved some of the functionalities I wanted. The periodicity is implemented using this document: olb-1.6r0/src/particles/subgrid3DLegacyFramework/boundaries/periodicBoundary3D.h. I made some modifications to the original files, but these should not be the cause of the repeated writing issue. The repeated writing issue occurred only after I added some additional output. Could it be due to limitations in my computer configuration?
SuperVTMwriter3D<T> vtmWriter("channel3d");
SuperVTMwriter3D<T> vtmWriterStartTime("startingTimechannel3d");
SuperLatticeGeometry3D<T, DESCRIPTOR> geometry(sLattice, superGeometry);
SuperLatticePhysVelocity3D<T, DESCRIPTOR> velocity(sLattice, converter);
SuperLatticePhysPressure3D<T, DESCRIPTOR> pressure(sLattice, converter);
SuperLatticePhysStrainRateFD3D<T, DESCRIPTOR> strainrate(superGeometry, sLattice, materialslist, converter);
SuperLatticePhysDissipationFD3D<T, DESCRIPTOR> dissipation(superGeometry, sLattice, materialslist, converter);
SuperLatticePhysEffectiveDissipationFD3D<T, DESCRIPTOR> effdissipation(superGeometry, sLattice, materialslist, converter, [&](Cell<T,DESCRIPTOR>& cell) -> double {
return BulkDynamics::CollisionO().computeEffectiveOmega(cell, bulkDynamicsParams);
});
SuperLatticePhysVorticityFD3D<T,DESCRIPTOR> vorticity(superGeometry, sLattice, materialslist, converter);
SuperLatticePhysStressFD3D<T,DESCRIPTOR> stress(superGeometry, sLattice, materialslist, converter);
SuperIsotropicHomogeneousTKE3D<T,DESCRIPTOR> TKE(sLattice, converter);
SuperLatticePhysEnstrophyFD3D<T,DESCRIPTOR> enstrophy(superGeometry, sLattice, materialslist, converter);
vtmWriter.addFunctor(geometry);
vtmWriter.addFunctor(velocity);
vtmWriter.addFunctor(pressure);
vtmWriter.addFunctor(sAveragedVel);
vtmWriter.addFunctor(sAveragedPre);
vtmWriter.addFunctor(sAveragedVelcc);
vtmWriter.addFunctor(wss);
vtmWriter.addFunctor(sAveragedWss);
vtmWriter.addFunctor(strainrate);
vtmWriter.addFunctor(dissipation);
vtmWriter.addFunctor(effdissipation);
vtmWriter.addFunctor(vorticity);
vtmWriter.addFunctor(stress);
vtmWriter.addFunctor(TKE);
vtmWriter.addFunctor(enstrophy);
vtmWriter.addFunctor(sAveragedVor);

vtmWriterStartTime.addFunctor(velocity);
vtmWriterStartTime.addFunctor(pressure);

January 22, 2024 at 2:49 pm | #8164 | Adrian (Keymaster)

It is very unlikely that this is related to your computer configuration or the functor setup you listed. The duplicated files, on the other hand, are almost certainly caused by your specific changes and the way you (wrongly) call the non-MPI OpenLB application. This is not a bug in OpenLB.
You are also using the deprecated legacy particle code (see the user guide for the current approach).
In any case, the core OpenLB developer team can not offer this detailed level of support in this forum (we are answering questions here alongside our actual research and development work), especially considering that you do not share your work.
If you want this kind of personal support you should consider attending our spring school or finding some other way for both of us to get something out of this process (I know we suggested this to you before but your level of questions is again getting out of hand, sorry).
January 23, 2024 at 5:01 am | #8165 | Rookie (Participant)

I would like to express my gratitude to you and your team once again. Thank you for your previous assistance. OpenLB is indeed a fantastic software. I initially tried Palabos, but the active forum and better alignment with my research led me to choose OpenLB. I apologize for taking up your time with some simple questions (which I couldn’t figure out even after some thought).