Opened 15 years ago
Closed 6 years ago
#5331 closed defect (fixed)
Memory leak when removing data layers
Reported by: | bastiK | Owned by: | team |
---|---|---|---|
Priority: | minor | Milestone: | |
Component: | Core | Version: | latest |
Keywords: | Cc: |
Description (last modified by )
When opening a large file and removing the layer repeatedly, memory leaks.
I tested with a 6 MB file; JOSM's memory usage grows by about 7 MB each round.
Attachments (1)
Change History (18)
comment:1 by , 15 years ago
Resolution: | → fixed |
---|---|
Status: | new → closed |
comment:2 by , 15 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
Did not help. Directly after layer removal, the memory usage does not drop at all.
follow-up: 4 comment:3 by , 15 years ago
There were no remaining OsmPrimitives in the heap dump after the layer was removed in my test, but it's quite possible that there are multiple leaks. I suppose you've run the garbage collector multiple times after removing the layer and checked the real heap usage (not just the memory size of the Java process). jconsole is a good tool for this.
You can create a heap dump (using jps to get the PID of JOSM and jmap -dump:live,format=b,file=heap2.bin <PID>) and send it to me, but note that it might contain your OSM password (maybe it's different with OAuth).
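For reference, the same kind of live-objects dump can also be triggered from inside the running JVM via the standard HotSpotDiagnosticMXBean; a minimal standalone sketch (plain JDK API, not JOSM code), equivalent to the jmap command above:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // Look up the HotSpot diagnostic MXBean of this JVM.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // "true" dumps live objects only, i.e. a GC runs first,
        // just like jmap's "live" option.
        diag.dumpHeap("heap2.bin", true);
    }
}
```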
comment:4 by , 15 years ago
Replying to jttt:
There were no remaining OsmPrimitives in the heap dump after the layer was removed in my test, but it's quite possible that there are multiple leaks. I suppose you've run the garbage collector multiple times after removing the layer and checked the real heap usage (not just the memory size of the Java process). jconsole is a good tool for this.
Yes, I used jconsole to investigate this. I mostly looked at the tenured generation only.
You can create a heap dump (using jps to get the PID of JOSM and jmap -dump:live,format=b,file=heap2.bin <PID>) and send it to me, but note that it might contain your OSM password (maybe it's different with OAuth).
No problem, I use -Djosm.home=$HOME/.josm-dev for tests. However, I think it is quite easy to reproduce: just open a small country extract of ~10 MB and click "Perform GC" in jconsole multiple times. Then click the waste bin in the layer list. The map view goes away, but another click on "Perform GC" does not free any memory. Repeat to accumulate the leak.
comment:5 by , 15 years ago
That's what I did; maybe some plugin is causing the leak. Also, "Perform GC" sometimes doesn't take effect the first time, so I usually press it multiple times and wait a while to get a correct result.
comment:6 by , 15 years ago
OK, it's still leaking. I had only checked in the heap dump that all primitives are removed, but it looks like something else remains in memory.
I'm sorry, I was so sure it was caused by a remaining reference to the dataset that I didn't actually check jconsole.
comment:7 by , 15 years ago
While you are at it: the Purge action with "clear undo/redo" should free memory, but only does so partly. OK, it's probably my own fault then. :)
comment:8 by , 15 years ago
comment:9 by , 15 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
comment:10 by , 15 years ago
Still not perfect: there will always be one instance of MapView in memory, and some use cases and plugins will probably lead to other memory leaks.
comment:11 by , 14 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
I would just like to mention that this is still really noticeable if you open a really large *.osm file. I happen to have a ~115 MB file that I open from time to time (2010 TIGER data of Allegheny, Pennsylvania, USA) to get info from when I need to fix up an area. If I open it, JOSM's memory usage jumps to over 1000 MB (with or without plugins installed) from 152 MB (without plugins) on a fresh start of JOSM via the command line on Windows. When I then close the "layer" that the file created, JOSM doesn't release ANY of the data, and closing JOSM is the only way to clear it. Annoying when you're in the groove editing stuff.
Also, if I open it again during the same JOSM session, the memory usage goes up even higher, since the memory from the first open was never released.
comment:12 by , 13 years ago
Description: | modified (diff) |
---|
I can reproduce the problem. Adding data.clear() to OsmDataLayer.java#destroy helps: Java then reports that more of the heap is free ("allocated, but free" in JOSM’s status report).
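For illustration, the idea amounts to roughly the following (a sketch only; the real change is in the attached free_more_heap.patch, and the exact shape of OsmDataLayer.destroy() is assumed here):

```java
// Sketch, not the actual patch: assumes OsmDataLayer overrides Layer.destroy()
// and keeps its DataSet in a field named "data".
@Override
public void destroy() {
    super.destroy();
    // Explicitly drop all primitives so the heap they occupy can be reclaimed
    // even if a stale reference to the DataSet itself survives somewhere.
    data.clear();
}
```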
Analyzing the heap dumps with Eclipse's MAT gives about the same results: if data.clear() is called, the heap is around 60 MB, with the top consumers being ImageCaches in swing.….GTKEngine and BufferedImages in MapView (so MapView seems to leak as well; at least I believe it should be destroyed when there are no more layers).
Without data.clear() the heap uses 320 MB, of which 280 MB belong to DataSet. The runners-up are again GTKEngine and BufferedImages in MapView.
Unfortunately, Java rarely shrinks the heap once it has been allocated, so even if your system is running out of memory, Java will not give back the "free heap". Apparently there are some JVM options (see (1)), but they did not work for me (somewhere on Stack Overflow it was mentioned that Min/MaxHeapFreeRatio doesn't work with the parallel(?) GC, and that the JVM auto-selects that collector on machines with multiple CPUs and enough RAM; this applies to my system, so my heap will probably never shrink).
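A quick way to see this distinction is to compare used against committed heap after a GC; a small standalone sketch using the plain JDK monitoring API (not JOSM code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapStats {
    public static void main(String[] args) {
        System.gc();  // hint only; the same numbers are visible in jconsole
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "used" is what live objects occupy; "committed" is what the JVM has
        // actually claimed from the OS. After removing a layer, "used" should
        // drop, but "committed" usually stays high unless the collector honours
        // -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio and shrinks the heap.
        System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}
```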
In any case I’ll attach the patch since having a free heap is still better than nothing. If I find no more issues, I’ll commit it.
(1) https://stopcoding.wordpress.com/2010/04/12/more-on-the-incredible-shrinking-jvm-heap/
by , 13 years ago
Attachment: | free_more_heap.patch added |
---|
Clear the dataset so it doesn't linger once the associated layer has been destroyed. This allows Java to reclaim some of the heap.
comment:13 by , 13 years ago
When all references to the OsmDataLayer object are removed, this shouldn't be necessary - why doesn't the GC work in this case?
comment:14 by , 13 years ago
Hm, good point. However, in all the other dumps the only reference I found was via Main.java's "listeners" ArrayList. Since it only holds WeakReferences, it can't be the reason the DataSet lingers around. Once I have fixed my failing disk I'll look into the GC more closely. Maybe it simply decides that clearing the WeakReference is not worth the effort, but that this changes once clear() is called. I haven't actually looked into what happens if I open another .osm file from a completely different region: if the GC is working correctly and we don't hold any references to the DataSet, then the GC should reclaim the heap instead of expanding it further (or running out of memory).
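For reference, this is the behaviour a weak reference should show once the last strong reference is gone; a small standalone sketch (not JOSM code) of the reasoning above:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object dataSet = new byte[64 << 20];                 // stand-in for a large DataSet
        WeakReference<Object> listenerRef = new WeakReference<>(dataSet);

        dataSet = null;   // drop the last strong reference, as removing the layer should
        System.gc();      // hint; a real GC may need actual memory pressure instead

        // If the only remaining reference really is weak, the GC clears it and the
        // memory becomes reclaimable; a non-null result here would point to a hidden
        // strong reference somewhere.
        System.out.println("cleared: " + (listenerRef.get() == null));
    }
}
```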
comment:15 by , 13 years ago
How did you search for references? Did you use "Path to GC Roots" in the popup menu, or simply look at the places where references could be? WeakReferences shouldn't make a difference; I think there is an option somewhere to exclude them before the analysis.
comment:16 by , 13 years ago
I looked at the dominator tree, as recommended in the tutorials I read.
comment:17 by , 6 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
(In [3441]) Fix #5331 Memory leak when removing data layers