Memory leak with DJL >0.8.0 #1562
nwjnilsson asked this question in Q&A (unanswered, 0 replies)
Hi,
I'm facing a bit of an issue with newer versions of DJL. My project is based on a Q-learning example taken from https://github.com/kingyuluk/RL-FlappyBird, but I wanted to run it on the later versions of DJL. The problem is that simply specifying, for example, version 0.15.0 in the pom makes memory consumption go out of control. This does not occur with 0.8.0, which that project used originally.
The only changes I made when selecting a newer version (besides the pom) were to the `Tracker` declaration on line 92 and the initializer on line 217 in `src/main/java/com/kingyu/rlbird/ai/TrainBird.java`. So instead of `Tracker exploreRate = new LinearTracker.Builder()...`, I have `Tracker exploreRate = LinearTracker.builder()...`, and on line 217 I just added `Parameter.Type.WEIGHT` as a second argument to `optInitializer(...)`. I don't see why those changes would cause any issues, and I can't find anything else that wouldn't be compatible with 0.15.0. I would greatly appreciate it if someone could help me out here so that I'm not stuck with DJL 0.8.0 and PyTorch 1.6.0.
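For context, the two changes look roughly like this (a minimal sketch against the 0.15.0 API; the epsilon constants and the loss function are placeholders standing in for whatever `TrainBird.java` actually uses):

```java
import ai.djl.nn.Parameter;
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.initializer.NormalInitializer;
import ai.djl.training.loss.Loss;
import ai.djl.training.tracker.LinearTracker;
import ai.djl.training.tracker.Tracker;

public class ApiMigration {
    // Placeholder values; the real constants live in TrainBird.java
    static final float INITIAL_EPSILON = 0.01f;
    static final float FINAL_EPSILON = 0.0001f;
    static final int EXPLORE = 3_000_000;

    public static void main(String[] args) {
        // Line 92: static builder() factory instead of `new LinearTracker.Builder()`
        Tracker exploreRate = LinearTracker.builder()
                .setBaseValue(INITIAL_EPSILON)
                .optSlope(-(INITIAL_EPSILON - FINAL_EPSILON) / EXPLORE)
                .optMinValue(FINAL_EPSILON)
                .build();

        // Line 217: optInitializer now also takes the parameter type it applies to
        DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.l2Loss())
                .optInitializer(new NormalInitializer(), Parameter.Type.WEIGHT);
    }
}
```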
Edit: I should probably add that I'm running on Linux, CPU only, with Java 8.
Edit 2: I looked through the history of the TicTacToe RL example in the DJL repo as well, since the FlappyBird example is clearly built off of that, and I still couldn't spot the problem.
Edit 3: I think I found the issue. It seems to be incorrect usage of NDManagers, so some resources weren't being released; see the sketch below.
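In case anyone hits the same thing: the fix amounts to scoping short-lived NDArrays to a sub-manager so their native memory is released deterministically each iteration. A minimal sketch of the idiom (not the actual TrainBird code; the shape and values are made up):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;

public class SubManagerIdiom {
    public static void main(String[] args) {
        try (NDManager manager = NDManager.newBaseManager()) {
            for (int step = 0; step < 1_000; step++) {
                // Every NDArray created from `manager` stays attached to it
                // until the manager closes, so a long training loop leaks
                // native memory. A per-iteration sub-manager avoids that:
                try (NDManager sub = manager.newSubManager()) {
                    NDArray observation = sub.ones(new Shape(4, 80, 80));
                    NDArray scaled = observation.mul(0.5f); // also owned by `sub`
                    // ... feed into the agent, compute loss, etc. ...
                } // closing `sub` frees everything created inside this step
            }
        }
    }
}
```

Anything that has to outlive a step can be handed to the parent with `ndarray.attach(manager)` before the sub-manager closes.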
Thanks,
J