小刘lyw
Posts
-
How to monitor outlet particle velocity with ParaView
@YHY Haha, I have only just started with this myself; feel free to reach out any time if you have questions.
-
CFDEM run aborted with signal 9 (Killed)
@李东岳 Thank you for clearing that up!
-
CFDEM run aborted with signal 9 (Killed)
@李东岳 Hello, the simulation injects roughly seven million particles per second, so it is indeed quite large. Is there any way around this? Would lowering the Young's modulus, running on more nodes, or switching off the particle-particle interaction forces help?
-
CFDEM run aborted with signal 9 (Killed)
The run aborts without any error output; the only message printed is the following:
mpirun noticed that process rank 0 with PID 44269 on node n09 exited on signal 9 (Killed).
Has anyone run into a similar problem?
-
How to monitor outlet particle velocity with ParaView
Hello everyone, I have recently been running a proppant-transport simulation; the setup looks like this. The ends of the model's six branches serve as the outlets, and I would like to use ParaView post-processing to extract the particle velocities at the six outlets.
I tried the approach from the article below, mapping particle velocities onto the outlet STL surfaces to extract them, but the result was poor:
https://blog.csdn.net/qq_22182289/article/details/117090868
Could anyone recommend a suitable filter for this, or would it be possible to add something to the LIGGGHTS input script to achieve it instead?
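A minimal post-processing sketch, outside ParaView, using the plain VTK Python bindings: it samples a small box just upstream of each outlet and averages the particle velocities found there. It assumes the particle dump is a legacy VTK polydata file with a velocity array named "v"; the file name, array names, and box coordinates are illustrative, not values from the case above.

    import numpy as np
    import vtk
    from vtk.util.numpy_support import vtk_to_numpy

    # Read one particle dump (assumed to be legacy VTK polydata).
    reader = vtk.vtkPolyDataReader()
    reader.SetFileName("particles_50000.vtk")   # hypothetical dump file name
    reader.Update()
    data = reader.GetOutput()

    pos = vtk_to_numpy(data.GetPoints().GetData())         # particle centres, shape (N, 3)
    vel = vtk_to_numpy(data.GetPointData().GetArray("v"))  # particle velocities, shape (N, 3)

    # One axis-aligned sampling box per branch outlet (hypothetical coordinates, in m).
    outlets = {
        "branch_1": (np.array([0.95, -0.02, -0.02]), np.array([1.00, 0.02, 0.02])),
        # ... add the remaining five outlet boxes here
    }

    for name, (lo, hi) in outlets.items():
        inside = np.all((pos >= lo) & (pos <= hi), axis=1)
        if inside.any():
            print(name, "particles:", inside.sum(), "mean velocity:", vel[inside].mean(axis=0))
        else:
            print(name, "no particles in the sampling box")

The same idea inside the ParaView GUI is a Clip filter with a box around each outlet followed by exporting the spreadsheet view, which avoids mapping the particles onto the outlet STL at all.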
-
OpenFOAM v2012 waves2foam wave-generation problem
@bike-北辰 I may be imagining it, but were your semicolons typed with a Chinese (full-width) input method?
-
ParaView crashes when post-processing particles with Glyph Sphere
Hello everyone, I am post-processing particles in ParaView 5.9.1; the VTK file is about 900 MB. The Glyph filter runs fine with the Arrow glyph type, but after switching to Sphere the program becomes unresponsive and a dialog pops up, as shown below.
There is no error message; the program then crashes, and the system event log shows the following error entry.
Could anyone tell me what is causing this?
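One likely factor is that the Sphere glyph builds a full sphere mesh for every particle, so glyphing all points of a roughly 900 MB dump can exhaust memory where the much cheaper Arrow does not. A minimal pvpython sketch that only glyphs a subset of the points is given below; the file name, the "radius" array name, and the stride value are assumptions for illustration.

    from paraview.simple import LegacyVTKReader, Glyph, Show, Render

    # Load the particle dump (assumed to be a legacy .vtk file).
    particles = LegacyVTKReader(FileNames=["particles.vtk"])

    # Glyph only every Nth particle so the sphere geometry stays manageable.
    glyph = Glyph(Input=particles, GlyphType="Sphere")
    glyph.ScaleArray = ["POINTS", "radius"]   # assumed name of the particle radius array
    glyph.ScaleFactor = 1.0
    glyph.GlyphMode = "Every Nth Point"
    glyph.Stride = 100                        # render every 100th particle only

    Show(glyph)
    Render()

The same controls are in the GUI under the Glyph filter's Masking settings (Glyph Mode and Stride); switching the representation to Point Gaussian is another way to draw all particles as spheres without generating sphere meshes.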
-
Adaptive mesh refinement: rapid Courant number growth causes the run to abort
@李东岳 CFDEM is using the IB (immersed boundary) solver; the CFD side uses the pisoFoam solver.
-
Adaptive mesh refinement: rapid Courant number growth causes the run to abort
Hello everyone, I am currently using adaptive mesh refinement to simulate a sphere falling inside a wellbore, but the run aborts, and at the failing time step the Courant number grows exponentially.
Reducing the time step lets the run go a little further, but the Courant number still blows up in the same way. How should this be resolved?
Below is the error output at the point where the run aborts:
-
How to monitor particle flow rate in CFDEM
@chapaofan Thank you very much for the idea!
-
How to monitor particle flow rate in CFDEM
Hello everyone, I will soon be simulating a large number of fine particles flowing with a fluid through a multi-branch pipe network, and I want to obtain the particle distribution among the different branches from the simulation. A paper mentions that the distribution across branches can be obtained by monitoring the particle flow rate in each branch ( https://onepetro.org/SJ/article/28/04/1650/516504/Experimental-and-3D-Numerical-Investigation-on ). I have the following questions:
1. How can I monitor the particle flow rate at one or several locations in CFDEM? (See the sketch after this list.)
2. Does the flow rate in each branch actually reflect the particle distribution, i.e., can the distribution be obtained by monitoring the flow rate?
Any guidance would be much appreciated!
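Not the method used in the cited paper, just one possible post-processing route: place a counting box at the entrance of each branch and count the particles inside it for every dump file, which gives the distribution over time directly. On the LIGGGHTS side the documented fix massflow/mesh (which counts particles crossing a mesh face) may also be worth a look. The sketch below only assumes a series of legacy VTK particle dumps; the file pattern and box coordinates are illustrative.

    import glob
    import numpy as np
    import vtk
    from vtk.util.numpy_support import vtk_to_numpy

    # One axis-aligned counting box at the entrance of each branch (hypothetical coordinates, in m).
    branches = {
        "branch_1": (np.array([0.50, -0.02, -0.02]), np.array([0.55, 0.02, 0.02])),
        # ... one box per branch
    }

    def particle_positions(path):
        """Return particle centre coordinates from one legacy VTK dump."""
        reader = vtk.vtkPolyDataReader()
        reader.SetFileName(path)
        reader.Update()
        return vtk_to_numpy(reader.GetOutput().GetPoints().GetData())

    # Hypothetical dump series written by the coupled run.
    for path in sorted(glob.glob("post/particles_*.vtk")):
        pos = particle_positions(path)
        counts = {name: int(np.all((pos >= lo) & (pos <= hi), axis=1).sum())
                  for name, (lo, hi) in branches.items()}
        print(path, counts)

The per-dump counts already answer question 2 (the distribution itself); an approximate per-branch flow rate for question 1 can be obtained by tracking how many particles have passed each box between successive dumps, for example by following particle IDs.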
-
CFDEM run aborted, MPI_ABORT was invoked on rank 9
@星星星星晴 OK, I will check that. Thank you very much!
-
CFDEM run aborted, MPI_ABORT was invoked on rank 9
The simulation runs normally with 6 spheres, but after increasing the count to 7 the following error appears.
The above shows the 7-sphere case while it was still running normally.
-
CFDEM run aborted, MPI_ABORT was invoked on rank 9
Could anyone tell me what might be causing this?
-
Ball-sealer simulation: flow remains at an outlet after a ball has plugged it
Hello everyone, I am trying to simulate balls being carried along a wellbore by the fluid and plugging the perforation holes. The ball diameter is 15 mm and the wellbore diameter is 110 mm; the perforations are laid out in a spiral pattern, and the six perforation holes, each 12 mm in diameter, are set as outlets. The fluid is water and the inlet velocity is 4 m/s. The simulation is fully resolved, with a mesh size of about 3 mm, and the case was built by modifying twoSpheresGlowinskiMPI. I now find that after a ball plugs an outlet, the ball velocity shows as 0 and the flow rate at the blocked outlet drops, but it does not go to zero. Could anyone suggest what might be causing this? Below are my current results.
-
Sharing my first compilation and installation of CFDEM + OpenFOAM + LIGGGHTS
@小刘lyw Solved it: changing "python-numpy" to "python3-numpy" fixed it.
-
Sharing my first compilation and installation of CFDEM + OpenFOAM + LIGGGHTS
@lixin19981013 Hello, following your reply I changed "libvtk6-dev" to "libvtk9-dev", but I still get the following error:
E: Unable to locate package python-numpy
What could be the reason?
-
Help: interDyMFoam adaptive mesh refinement problem
Hello everyone, I am new to CFDEM. I recently built my own case based on twoSpheresGlowinskiMPI, using dynamicRefine; the case is essentially the same as the reference case, but the test run aborts during the computation. I am not sure what I have overlooked and would appreciate any guidance.
Error description
Without adaptive refinement the case runs normally. With adaptive refinement (interDyMFoam) it also runs normally as long as no refinement takes place (the refinement criterion has not yet been reached); as soon as the mesh starts to refine, the computation aborts.
The error output is as follows:
Time = 0.0001
Selected 0 cells for refinement out of 1613716.
Selected 0 split points out of a possible 0.
Courant Number mean: 0.000199582 max: 0.0804916
- evolve()
timeStepFraction() = 2
Starting up LIGGGHTS
Executing command: 'run 10 '
Setting up run at Fri Jul 26 12:59:28 2024
Memory usage per processor = 6.75634 Mbytes
Step Atoms KinEng rke Volume dragtota dragtota dragtota
1 1 2.5261073e-08 0 0.0260508 0 0 0
CFD Coupling established at step 10
11 1 3.5918126e-08 0 0.0260508 0 0 0
Loop time of 0.000962037 on 16 procs for 10 steps with 1 atoms, finish time Fri Jul 26 12:59:28 2024
Pair time (%) = 4.70437e-06 (0.489001)
Neigh time (%) = 0 (0)
Comm time (%) = 4.65044e-06 (0.483395)
Outpt time (%) = 6.52643e-05 (6.78397)
Other time (%) = 0.000887418 (92.2436)
Nlocal: 0.0625 ave 1 max 0 min
Histogram: 15 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
LIGGGHTS finished
Foam::cfdemCloudIB::reAllocArrays()
nr particles = 1
evolve done.
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 1.06984e-07, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1, Final residual = 4.99282e-07, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 1, Final residual = 6.95219e-10, No Iterations 2
DICPCG: Solving for p, Initial residual = 1, Final residual = 9.26695e-07, No Iterations 573
time step continuity errors : sum local = 3.69904e-10, global = -1.28591e-13, cumulative = -1.28591e-13
DICPCG: Solving for p, Initial residual = 0.0476412, Final residual = 9.26405e-07, No Iterations 513
time step continuity errors : sum local = 3.16922e-08, global = -3.49546e-12, cumulative = -3.62406e-12
DICPCG: Solving for p, Initial residual = 0.0055667, Final residual = 9.51869e-07, No Iterations 500
time step continuity errors : sum local = 3.3706e-08, global = -6.10801e-12, cumulative = -9.73206e-12
DICPCG: Solving for p, Initial residual = 0.00178549, Final residual = 9.94251e-07, No Iterations 473
time step continuity errors : sum local = 3.53268e-08, global = -5.71629e-12, cumulative = -1.54484e-11
No finite volume options present
particleCloud.calcVelocityCorrection()
DICPCG: Solving for phiIB, Initial residual = 1, Final residual = 9.63924e-07, No Iterations 501
ExecutionTime = 30.7 s  ClockTime = 32 s
Time = 0.0002
Selected 784 cells for refinement out of 1613716.
Refined from 1613716 to 1619204 cells.
Selected 0 split points out of a possible 784.
Courant Number mean: 0.126921 max: 9.46381
- evolve()
timeStepFraction() = 2
Starting up LIGGGHTS
Executing command: 'run 10 '
Setting up run at Fri Jul 26 12:59:56 2024
Memory usage per processor = 6.75634 Mbytes
Step Atoms KinEng rke Volume dragtota dragtota dragtota
11 1 3.5918126e-08 0 0.0260508 0 0 0.017347056
CFD Coupling established at step 20
21 1 3.6972707e-08 0 0.0260508 0 0 0.017347056
Loop time of 0.00149142 on 16 procs for 10 steps with 1 atoms, finish time Fri Jul 26 12:59:56 2024
Pair time (%) = 3.75399e-06 (0.251706)
Neigh time (%) = 0 (0)
Comm time (%) = 3.65471e-06 (0.24505)
Outpt time (%) = 8.7527e-05 (5.86871)
Other time (%) = 0.00139648 (93.6345)
Nlocal: 0.0625 ave 1 max 0 min
Histogram: 15 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
LIGGGHTS finished
nr particles = 1
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 15 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------