
Yuji

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 34 total)
  • in reply to: Method to generate effective relaxation time as output in VTK #9866
    Yuji
    Participant

    OK. I can suggest an alternative Step 3.

    Step 3-1. Implement a new struct SmagorinskyEffectiveOmegaToWatchSGSv in src/dynamics/collisionLES.h:

    
    namespace detail {

    template <typename COLLISION, typename DESCRIPTOR, typename MOMENTA, typename EQUILIBRIUM>
    struct SmagorinskyEffectiveOmegaToWatchSGSv {
      using MomentaF = typename MOMENTA::template type<DESCRIPTOR>;
      using CollisionO = typename COLLISION::template type<DESCRIPTOR, MOMENTA, EQUILIBRIUM>;

      template <concepts::Cell CELL, concepts::Parameters PARAMETERS, typename V=typename CELL::value_t>
      V computeEffectiveOmega(CELL& cell, PARAMETERS& parameters) any_platform {
        V piNeqNormSqr { };
        MomentaF().computePiNeqNormSqr(cell, piNeqNormSqr);
        const V rho = MomentaF().computeRho(cell);
        const V omega = parameters.template get<descriptors::OMEGA>();
        const V smagorinsky = parameters.template get<collision::LES::SMAGORINSKY>();
        V piNeqNorm = util::sqrt(piNeqNormSqr);
        V preFactor = smagorinsky*smagorinsky
                    * descriptors::invCs2<V,DESCRIPTOR>()*descriptors::invCs2<V,DESCRIPTOR>()
                    * 2 * util::sqrt(2);
        /// Molecular relaxation time
        V tauMol = V{1} / omega;
        /// Turbulent relaxation time
        V tauTurb = V{0.5} * (util::sqrt(tauMol*tauMol + preFactor / rho * piNeqNorm) - tauMol);
        /// Effective relaxation time
        V tauEff = tauMol + tauTurb;
        /// Store tauEff on the cell so it can later be read back as a field for VTK output
        cell.template setField<descriptors::SCALAR>(tauEff);
        return V{1} / tauEff;
      }

      template <concepts::Cell CELL, concepts::Parameters PARAMETERS, typename V=typename CELL::value_t>
      CellStatistic<V> apply(CELL& cell, PARAMETERS& parameters) any_platform {
        parameters.template set<descriptors::OMEGA>(
          computeEffectiveOmega(cell, parameters));

        return CollisionO().apply(cell, parameters);
      }
    };

    } // namespace detail
    
    template <typename COLLISION>
    struct SmagorinskyEffectiveOmegaToWatchSGSv {
      using parameters = typename COLLISION::parameters::template include<
        descriptors::OMEGA, LES::SMAGORINSKY
      >;
    
      static_assert(COLLISION::parameters::template contains<descriptors::OMEGA>(),
                    "COLLISION must be parametrized using relaxation frequency OMEGA");
    
      static std::string getName() {
        return "SmagorinskyEffectiveOmegaToWatchSGSv<" + COLLISION::getName() + ">";
      }
    
      template <typename DESCRIPTOR, typename MOMENTA, typename EQUILIBRIUM>
      using type = detail::SmagorinskyEffectiveOmegaToWatchSGSv<COLLISION,DESCRIPTOR,MOMENTA,EQUILIBRIUM>;
    };

    Step 3-2. Define BulkDynamics in your cpp file. Since the wrapper takes a collision operator as its template argument, it could look, for example, like this (a sketch following the dynamics::Tuple convention used elsewhere in OpenLB):
    using BulkDynamics = dynamics::Tuple<T, DESCRIPTOR, momenta::BulkTuple, equilibria::SecondOrder, SmagorinskyEffectiveOmegaToWatchSGSv<collision::BGK>>;
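    The arithmetic inside computeEffectiveOmega can be sanity-checked in isolation. The following standalone sketch (plain C++, no OpenLB dependency; the sample values for omega, rho, piNeqNorm and the Smagorinsky constant are made up for illustration) reproduces the tauMol / tauTurb / tauEff formulas from Step 3-1:

    ```cpp
    #include <cassert>
    #include <cmath>

    // Standalone copy of the effective-omega arithmetic from Step 3-1.
    // invCs2 = 3 for the usual D3Q19 lattice.
    double computeEffectiveOmega(double omega, double rho, double piNeqNorm,
                                 double smagorinsky, double invCs2 = 3.0) {
      const double preFactor = smagorinsky * smagorinsky
                             * invCs2 * invCs2
                             * 2.0 * std::sqrt(2.0);
      const double tauMol  = 1.0 / omega;            // molecular relaxation time
      const double tauTurb = 0.5 * (std::sqrt(tauMol * tauMol
                             + preFactor / rho * piNeqNorm) - tauMol);
      const double tauEff  = tauMol + tauTurb;       // effective relaxation time
      return 1.0 / tauEff;
    }

    int main() {
      const double omega = 1.8, rho = 1.0, smagorinsky = 0.12;
      // With zero non-equilibrium stress the effective omega equals the molecular one:
      assert(std::abs(computeEffectiveOmega(omega, rho, 0.0, smagorinsky) - omega) < 1e-12);
      // Any positive stress adds turbulent viscosity, so the effective omega drops:
      assert(computeEffectiveOmega(omega, rho, 0.01, smagorinsky) < omega);
      return 0;
    }
    ```

    This also shows the expected behaviour of the stored tauEff field: it never drops below the molecular relaxation time.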

    If you get somewhere with this, let me know.

    Best regards,
    Yuji

    in reply to: Method to generate effective relaxation time as output in VTK #9858
    Yuji
    Participant

    I forgot to mention an important thing:
    #define DISABLE_CSE must come
    before including the OpenLB header files.

    in reply to: Method to generate effective relaxation time as output in VTK #9854
    Yuji
    Participant

    Sorry, you also need to add
    vtmWriter.addFunctor(viscosity);
    after viscosity.getName() = "viscosity";

    in reply to: Method to generate effective relaxation time as output in VTK #9849
    Yuji
    Participant

    Thank you for commenting.
    I can suggest the steps below to get the VTK files.

    1. Add #define DISABLE_CSE in your cpp file.
    2. Add the FIELD to the DESCRIPTOR, for example using DESCRIPTOR = D3Q19<SCALAR>; in your cpp file.
    3. Add cell.template setField<descriptors::SCALAR>(tauEff); in the file under the src/dynamics directory where the tauEff you want to see is computed.
    4. Add
    SuperLatticeField3D<T, DESCRIPTOR, SCALAR> viscosity(sLattice);
    viscosity.getName() = "viscosity";
    in the getResults function of your cpp file.
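    For reference, steps 2 and 4 together would look roughly like this in the cpp file (a sketch following the OpenLB example conventions; the sLattice and vtmWriter names are assumed from the shipped examples):

    ```cpp
    // Step 2: descriptor extended with the SCALAR field (at file scope)
    using DESCRIPTOR = D3Q19<SCALAR>;

    // Step 4: inside getResults(), read the field written by
    // setField<descriptors::SCALAR> back from the lattice and
    // register it with the VTK writer
    SuperLatticeField3D<T, DESCRIPTOR, SCALAR> viscosity(sLattice);
    viscosity.getName() = "viscosity";
    vtmWriter.addFunctor(viscosity);
    ```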

    Best regards,
    Yuji

    Yuji
    Participant

    Dear @aseidler,
    could you try mpirun with "--mca btl_smcuda_use_cuda_ipc 0"? For example: mpirun -np 2 --mca btl_smcuda_use_cuda_ipc 0 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'

    We discussed a similar topic in https://www.openlb.net/forum/topic/multi-gpus-calculation/
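    The CUDA_VISIBLE_DEVICES trick in that command assigns each local MPI rank its own GPU. The per-rank selection can be sketched without MPI (a hypothetical two-rank run; the loop variable stands in for the value Open MPI exports as OMPI_COMM_WORLD_LOCAL_RANK):

    ```shell
    # Actual launch (requires Open MPI and 2 GPUs):
    #   mpirun -np 2 --mca btl_smcuda_use_cuda_ipc 0 \
    #     bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'
    #
    # Simulation of what each of the two local ranks would see:
    for OMPI_COMM_WORLD_LOCAL_RANK in 0 1; do
      export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
      echo "local rank ${OMPI_COMM_WORLD_LOCAL_RANK} sees CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
    done
    ```

    Each process then only sees one device, so device 0 inside rank 1 is physically GPU 1.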

    Yuji
    Participant

    Thank you for your comments.
    I have understood the concept of the STL reader ;) though I cannot understand the code in detail.

    As for your questions:
    A) Yes, I changed the characteristic length from 0.1 to 100.
    B) No, I did not change the STL file. I just replaced the arguments, from STLreader<T> stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 0.001 ); to stlReader( "cylinder3d.stl", converter.getConversionFactorLength(), 1, 2, true );

    Thank you for your support.

    Yuji
    Participant

    Thank you for your reply.

    How did you mark the interior of the STL file in the material geometry?
    >> I did it the same way as the cpp file in examples/cylinder3d, because that is what I used.

    Just to be sure: Did you take into consideration that the Bouzidi boundary distances are computed in lattice units while the distance you see in Paraview is in physical units?
    >> Yes I did.

    in reply to: Communication between BlockLattices #7859
    Yuji
    Participant

    Sorry, I have an additional question.
    4. Within a specific process (SuperLattice), can the 2 BlockLattices in that process be split between OpenMP and CUDA for the intra-block calculation?

    in reply to: Communication between BlockLattices #7858
    Yuji
    Participant

    Thank you for your kind reply.
    Sorry, I could not understand what you meant by "However, this is an implementation detail that may change in the future and has no impact on application / model code." What is "application / model code"?

    Also, I have 3 further questions.

    1. For CPU-only communication, I reckon the intra-process communication between BlockLattices synchronizes them by referring to the memory of the overlapping cells. Is that right, and where is this code located in release 1.6?

    2. Is the inter-process communication between BlockLattices done via MPI on all platforms in release 1.6?

    3. Does the number of MPI processes match the number of SuperLattices?

    in reply to: Check point using GPU #7850
    Yuji
    Participant

    I was able to do the implementation. Thank you for your comments.

    in reply to: Check point using GPU #7846
    Yuji
    Participant

    Dear Adrian,

    Thank you so much. I am going to implement it.

    in reply to: Check point using GPU #7843
    Yuji
    Participant

    Dear Adrian,

    Thank you for response.
    To use SuperGeometry.save() and SuperGeometry.load() as well as SuperLattice.save() and SuperLattice.load(), do I need to add #include "serializer.h" in superGeometry.h?
    Sorry, I cannot understand exactly how save and load work for the SuperLattice.

    in reply to: Check point using GPU #7837
    Yuji
    Participant

    Thank you. I was able to do "save" and "load".
    When I use "save" and "load", I have to re-create the SuperGeometry.
    Can the SuperGeometry also be saved?

    in reply to: Multi GPUs Calculation #7685
    Yuji
    Participant

    Dear Adrian,

    Thank you for your very kind reply. I am gradually getting to understand it.

    I would like to run an LBM calculation with 2.5*10^11 grid points (I know this means a huge memory consumption).
    I reckon gpu_hybrid_mixed.mk is useful for my case, because when compiling with gpu_hybrid_mixed.mk, both the CPU and the GPU are used for the LBM calculation.
    Now I have two GPUs, and my CPU has 8 cores.
    In my understanding, the command "mpirun -np 2 ./cavity3d" uses 2 CPU cores and 1 GPU (GPU No.1 is not used); on the other hand, the command "mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'" uses 2 GPUs (GPUs No.0 and No.1), but the CPU is not used for the LBM calculation.
    I would like to use 2 GPUs and 8 CPU cores. Do you have any recommended commands? And is my understanding correct?
    Thank you.
    Yuji
