Help needed: interDyMFoam adaptive mesh refinement problem
Hi everyone, I'm new to CFDEM. I recently tried to build my own case based on the twoSpheresGlowinskiMPI tutorial, using dynamicRefine for adaptive mesh refinement. The case I built is essentially identical to the reference case, but the test run aborts partway through the computation. I can't tell what I may have overlooked and would appreciate any guidance.
Error description
Without adaptive refinement, the case computes normally. With the interDyMFoam adaptive refinement enabled, the case also runs normally as long as the mesh has not yet been refined (the refinement criterion has not been triggered); as soon as the mesh starts to refine, the computation aborts.
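For reference, my constant/dynamicMeshDict follows the dynamicRefineFvMesh setup of the reference case. The sketch below only illustrates the structure; the refinement field name (interFace), the threshold values, maxCells, and the flux list are placeholders rather than my exact settings:

// constant/dynamicMeshDict (sketch; field name and numbers are placeholders)
dynamicFvMesh   dynamicRefineFvMesh;

dynamicRefineFvMeshCoeffs
{
    refineInterval   1;              // check the refinement criterion every time step
    field            interFace;      // placeholder: the field that drives refinement
    lowerRefineLevel 0.001;          // refine cells where the field is in this range
    upperRefineLevel 0.999;
    unrefineLevel    10;             // unrefine when the field leaves the range
    nBufferLayers    1;              // buffer of cells around the refined region
    maxRefinement    2;              // maximum number of split levels per cell
    maxCells         2000000;        // global cell-count limit
    correctFluxes    ( (phi none) ); // illustrative; list all surface fluxes here
    dumpLevel        true;           // write the cellLevel field for inspection
}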
The error log is as follows:

Time = 0.0001

Selected 0 cells for refinement out of 1613716.
Selected 0 split points out of a possible 0.
Courant Number mean: 0.000199582 max: 0.0804916
- evolve()
timeStepFraction() = 2
Starting up LIGGGHTS
Executing command: 'run 10 '
Setting up run at Fri Jul 26 12:59:28 2024
Memory usage per processor = 6.75634 Mbytes
Step Atoms KinEng rke Volume dragtota dragtota dragtota
   1     1 2.5261073e-08 0 0.0260508 0 0 0
CFD Coupling established at step 10
  11     1 3.5918126e-08 0 0.0260508 0 0 0
Loop time of 0.000962037 on 16 procs for 10 steps with 1 atoms, finish time Fri Jul 26 12:59:28 2024

Pair time (%) = 4.70437e-06 (0.489001)
Neigh time (%) = 0 (0)
Comm time (%) = 4.65044e-06 (0.483395)
Outpt time (%) = 6.52643e-05 (6.78397)
Other time (%) = 0.000887418 (92.2436)

Nlocal: 0.0625 ave 1 max 0 min
Histogram: 15 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
LIGGGHTS finished
Foam::cfdemCloudIB::reAllocArrays()
nr particles = 1
evolve done.
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 1.06984e-07, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1, Final residual = 4.99282e-07, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 1, Final residual = 6.95219e-10, No Iterations 2
DICPCG: Solving for p, Initial residual = 1, Final residual = 9.26695e-07, No Iterations 573
time step continuity errors : sum local = 3.69904e-10, global = -1.28591e-13, cumulative = -1.28591e-13
DICPCG: Solving for p, Initial residual = 0.0476412, Final residual = 9.26405e-07, No Iterations 513
time step continuity errors : sum local = 3.16922e-08, global = -3.49546e-12, cumulative = -3.62406e-12
DICPCG: Solving for p, Initial residual = 0.0055667, Final residual = 9.51869e-07, No Iterations 500
time step continuity errors : sum local = 3.3706e-08, global = -6.10801e-12, cumulative = -9.73206e-12
DICPCG: Solving for p, Initial residual = 0.00178549, Final residual = 9.94251e-07, No Iterations 473
time step continuity errors : sum local = 3.53268e-08, global = -5.71629e-12, cumulative = -1.54484e-11
No finite volume options present
particleCloud.calcVelocityCorrection()
DICPCG: Solving for phiIB, Initial residual = 1, Final residual = 9.63924e-07, No Iterations 501
ExecutionTime = 30.7 s ClockTime = 32 s

Time = 0.0002

Selected 784 cells for refinement out of 1613716.
Refined from 1613716 to 1619204 cells.
Selected 0 split points out of a possible 784.
Courant Number mean: 0.126921 max: 9.46381
- evolve()
timeStepFraction() = 2
Starting up LIGGGHTS
Executing command: 'run 10 '
Setting up run at Fri Jul 26 12:59:56 2024
Memory usage per processor = 6.75634 Mbytes
Step Atoms KinEng rke Volume dragtota dragtota dragtota
  11     1 3.5918126e-08 0 0.0260508 0 0 0.017347056
CFD Coupling established at step 20
  21     1 3.6972707e-08 0 0.0260508 0 0 0.017347056
Loop time of 0.00149142 on 16 procs for 10 steps with 1 atoms, finish time Fri Jul 26 12:59:56 2024

Pair time (%) = 3.75399e-06 (0.251706)
Neigh time (%) = 0 (0)
Comm time (%) = 3.65471e-06 (0.24505)
Outpt time (%) = 8.7527e-05 (5.86871)
Other time (%) = 0.00139648 (93.6345)

Nlocal: 0.0625 ave 1 max 0 min
Histogram: 15 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds = 0
LIGGGHTS finished
nr particles = 1
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 15 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
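For completeness, the coupling settings in constant/couplingProperties are also kept essentially as in the tutorial. The sketch below is under that assumption: only couplingInterval 10 is directly confirmed by the "run 10" / "CFD Coupling established at step 10" lines in the log above; the other entries are the tutorial defaults as I understand them.

// constant/couplingProperties (sketch; assumes the twoSpheresGlowinskiMPI defaults)
modelType         none;       // IB coupling; no A/B drag model type
couplingInterval  10;         // DEM steps per CFD step, matches "run 10" in the log
voidFractionModel IB;         // immersed-boundary void fraction
locateModel       engineIB;
dataExchangeModel twoWayMPI;
forceModels
(
    ShirgaonkarIB
    ArchimedesIB
);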