Firedrake memory + performance #2683
-
Hi, when I use
it seems to be a problem with my MPI; how can I deal with it?
Replies: 7 comments 6 replies
-
I suspect this is related to the recent MPI communicator changes from @JDBetteridge.
-
@tu1620808124 I have been seeing this issue for a very long time running MPICH on Arch Linux (well before I refactored communicators in PyOP2). The full message is:
I believe it is coming from MPICH itself. I see the same warning running other Python scripts and PETSc executables. If the above warning is causing some other issues, specifically with Firedrake, feel free to elaborate below.
-
I am solving a time-dependent problem with an implicit time discretization and a DG method in space; the number of degrees of freedom is well below 100,000. After a certain number of timesteps on multiple cores the memory becomes insufficient, but there is no problem on a single core. I'm not sure if it's related.
-
Could you run your code under a memory profiler and share the results? memory_profiler is the tool suggested on the Firedrake website. Check that you aren't keeping every timestep in memory, which would cause the memory use to grow continually. If you are still seeing issues, try to create a minimal failing example and post it here. I'm still not sure it's related to the leaked handle pool objects.
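A minimal sketch of the kind of check that suggestion points at, using memory_profiler (the tool mentioned above). The script and names below are illustrative only, with a NumPy update standing in for the actual Firedrake solve:

from memory_profiler import profile
import numpy as np

@profile
def leaky_time_loop(nsteps=200, ndofs=100_000):
    # Anti-pattern: appending the state every step keeps all timesteps
    # alive, so memory grows linearly with the number of steps.
    history = []
    u = np.zeros(ndofs)
    for step in range(nsteps):
        u = u + 1.0               # stand-in for solver.solve()
        history.append(u.copy())  # every timestep retained in memory
    return history

@profile
def bounded_time_loop(nsteps=200, ndofs=100_000):
    # Only the current state is kept, so memory use stays flat.
    u = np.zeros(ndofs)
    for step in range(nsteps):
        u = u + 1.0               # stand-in for solver.solve()
    return u

if __name__ == "__main__":
    leaky_time_loop()
    bounded_time_loop()

Saved as, say, check_memory.py and run with mprof run python check_memory.py followed by mprof plot, the first function shows memory climbing with the step count while the second stays flat.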
-
@tu1620808124 I have taken your example and run it on my workstation and am seeing very similar numbers, so I don't think there is an issue with your installation. A few points to help you track down your performance issues:
Run the memory profiler with
mprof run --multiprocess python script.py
and the plot will show the memory consumption of each rank individually.
The diff below moves the solver setup out of the time loop, building a NonlinearVariationalProblem and NonlinearVariationalSolver once and reusing them at every step:
$ diff -u sipg_heat.py sipg_heat2.py
--- sipg_heat.py 2022-12-12 10:14:27.315673505 +0000
+++ sipg_heat2.py 2022-12-12 10:34:33.005629677 +0000
@@ -5,7 +5,6 @@
#define mesh
mesh = RectangleMesh(256, 256,1.0,1.0)
V = FunctionSpace(mesh, "DG", 1)
-bc = []
x,y = SpatialCoordinate(mesh)
n = FacetNormal(mesh)
@@ -33,9 +32,12 @@
f = (1+8*pi**2)*sin(2*pi*x)*sin(2*pi*y)*exp(tc)
F = a_temp + a_time - f*v*dx
+nlvp = NonlinearVariationalProblem(F, u)
+solver = NonlinearVariationalSolver(nlvp)
+
while abs(t - t_end) > 0.1*dt:
    tc.assign(t + dt)
-    solve(F == 0, u, bc)
+    solver.solve()
    u_.assign(u)
    t += dt
If you have any more questions, I propose we convert this to a GitHub discussion and others can chime in with suggestions.
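For reference, a self-contained sketch of the solver-reuse pattern shown in the diff above. To keep it short it uses a plain CG discretization of a heat equation rather than the SIPG form from sipg_heat.py, so the form and parameters here are illustrative, not the original ones:

from firedrake import *

mesh = RectangleMesh(64, 64, 1.0, 1.0)
V = FunctionSpace(mesh, "CG", 1)   # the original script uses DG 1

u = Function(V)    # solution at the new time level
u_ = Function(V)   # solution at the previous time level
v = TestFunction(V)

dt = 0.01
tc = Constant(0.0)  # current time, updated inside the loop

x, y = SpatialCoordinate(mesh)
f = (1 + 8*pi**2)*sin(2*pi*x)*sin(2*pi*y)*exp(tc)

# Backward-Euler residual for u_t - div(grad(u)) = f
F = ((u - u_)/dt)*v*dx + inner(grad(u), grad(v))*dx - f*v*dx

# Built once, outside the time loop.
problem = NonlinearVariationalProblem(F, u)
solver = NonlinearVariationalSolver(problem)

t = 0.0
t_end = 1.0
while abs(t - t_end) > 0.1*dt:
    tc.assign(t + dt)
    solver.solve()   # reuses the same solver objects every timestep
    u_.assign(u)
    t += dt

Building the problem and solver once means the underlying solver data structures are reused across timesteps rather than being rebuilt at every call to solve(F == 0, u, bc).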
-
It would be very nice to fix these warnings as they do indicate things like missing