
Opened 11 years ago

Closed 3 years ago

#5331 closed defect (fixed)

Memory leak when removing data layers

Reported by: bastiK
Owned by: team
Priority: minor
Milestone:
Component: Core
Version: latest
Keywords:
Cc:

Description (last modified by xeen)

When opening a large file and removing the layer repeatedly, memory leaks.

I tested a 6 MB file and JOSM's memory usage increases by about 7 MB each round.

Attachments (1)

free_more_heap.patch (497 bytes) - added by xeen 9 years ago.
clear dataset so it doesn't linger around once the associated layer has been destroyed. This allows Java to reclaim some of the heap.


Change History (18)

comment:1 Changed 11 years ago by jttt

Resolution: fixed
Status: new → closed

(In [3441]) Fix #5331 Memory leak when removing data layers

comment:2 Changed 11 years ago by bastiK

Resolution: fixed
Status: closed → reopened

Did not help. Directly after layer removal, the memory usage does not drop at all.

comment:3 Changed 11 years ago by jttt

There were no remaining OsmPrimitives in the heap dump after the layer was removed in my test, but it's quite possible that there are multiple leaks. I assume you ran the garbage collector multiple times after removing the layer and checked real heap usage (not just the memory size of the Java process). jconsole is a good tool for this.

You can create a heap dump (use jps to get JOSM's PID, then jmap -dump:live,format=b,file=heap2.bin <PID>) and send it to me, but note that it might contain your OSM password (maybe it's different with OAuth).
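Spelled out as a shell session, the recipe above looks like this (the PID 12345 is made up for illustration; jps and jmap ship with the JDK):

```shell
# List running Java processes to find JOSM's PID.
jps -l
# Suppose the output includes "12345 josm.jar"; dump only live
# (reachable) objects in binary format to heap2.bin:
jmap -dump:live,format=b,file=heap2.bin 12345
# heap2.bin can then be opened in Eclipse MAT or jvisualvm.
```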

comment:4 in reply to:  3 Changed 11 years ago by bastiK

Replying to jttt:

> There were no remaining OsmPrimitives in the heap dump after the layer was removed in my test, but it's quite possible that there are multiple leaks. I assume you ran the garbage collector multiple times after removing the layer and checked real heap usage (not just the memory size of the Java process). jconsole is a good tool for this.

Yes, I used jconsole to investigate this, mostly looking at the tenured generation only.

> You can create a heap dump (use jps to get JOSM's PID, then jmap -dump:live,format=b,file=heap2.bin <PID>) and send it to me, but note that it might contain your OSM password (maybe it's different with OAuth).

No problem, I use -Djosm.home=$HOME/.josm-dev for tests. However, it should be quite easy to reproduce: just open a small country extract of ~10 MB and click "Perform GC" in jconsole multiple times. Then click the waste bin in the layer list. The map view goes away, but another click on "Perform GC" does not free any memory. Repeat to accumulate the leak.

comment:5 Changed 11 years ago by jttt

That's what I did; maybe some plugin is causing the leak. Also, "Perform GC" sometimes doesn't work the first time; I usually press it multiple times and wait for a while to get a correct result.

comment:6 Changed 11 years ago by jttt

OK, it's still leaking. I had only checked in the heap dump that all primitives were removed, but it looks like something else remains in memory.
I'm sorry, I was so sure it was caused by a remaining reference to the dataset that I didn't actually check jconsole.

comment:7 Changed 11 years ago by bastiK

While you are at it: the Purge action with "clear undo/redo" should free memory, but only does so partly. OK, it's probably my own fault then. :)

comment:8 Changed 11 years ago by jttt

(In [3443]) Fix some of the references/forgotten listeners that keeps MapView alive after all layers are removed (see #5331)

comment:9 Changed 11 years ago by jttt

Resolution: fixed
Status: reopened → closed

(In [3444]) Fix #5331 Memory leak when removing data layers

comment:10 Changed 11 years ago by jttt

Still not perfect: there will always be one instance of MapView in memory, and some use cases and plugins will probably lead to other memory leaks.

comment:11 Changed 10 years ago by rickmastfan67

Resolution: fixed
Status: closed → reopened

I would just like to mention that this is still really noticeable if you open a really large *.osm file. I happen to have a ~115 MB file that I open from time to time (2010 TIGER data for Allegheny, Pennsylvania, USA) to get info from when I need to fix up an area. If I open it, JOSM's memory usage jumps from 152 MB (without plugins) to over 1000 MB (with or without plugins installed) on a fresh start of JOSM from the command line on Windows. When I then close the "layer" that the file created, JOSM doesn't release ANY of the data; closing JOSM is the only way to clear it. Annoying when you're in the groove editing stuff.

Also, if I open it again during the same JOSM session, memory usage climbs even higher, since the memory from the first open was never released.

comment:12 Changed 9 years ago by xeen

Description: modified (diff)

I can reproduce the problem. Adding data.clear() to OsmDataLayer.java#destroy helps: Java then reports that more of the heap is free ("allocated, but free" in JOSM’s status report).
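The one-line idea behind the patch can be sketched with stand-in classes (DataSet and OsmDataLayer below are simplified stand-ins for illustration, not JOSM's real API):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins illustrating the patch's idea: clear the dataset
// in destroy() so that any lingering reference to the layer no longer
// pins every primitive in memory.
class DataSet {
    private final List<String> primitives = new ArrayList<>();

    void add(String primitive) { primitives.add(primitive); }
    int size() { return primitives.size(); }

    // Drop all primitives so the GC can reclaim them even if something
    // still holds a reference to this DataSet.
    void clear() { primitives.clear(); }
}

class OsmDataLayer {
    final DataSet data = new DataSet();

    void destroy() {
        data.clear(); // the one-line change the patch proposes
    }
}

public class FreeMoreHeapSketch {
    public static void main(String[] args) {
        OsmDataLayer layer = new OsmDataLayer();
        layer.data.add("node 1");
        layer.data.add("way 2");
        layer.destroy();
        System.out.println(layer.data.size()); // prints 0
    }
}
```

Even if the layer itself is still (wrongly) referenced somewhere, clearing the dataset releases the bulk of the memory, which matches the 320 MB → 60 MB difference reported below.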

Analyzing the heap dumps with Eclipse's MAT gives about the same results: if data.clear() is called, the heap is around 60 MB, with the top consumers being ImageCaches in swing.….GTKEngine and BufferedImages in MapView (so MapView seems to leak as well; at least I believe it should be destroyed when there are no more layers).

Without data.clear(), the heap uses 320 MB, of which 280 MB belong to DataSet. The runners-up are again GTKEngine and the BufferedImages in MapView.

Unfortunately, Java rarely shrinks the heap once it has been allocated, so even if your system is running out of memory, Java will not give back the "free heap". Apparently there are some JVM options (see (1)), but they did not work for me (somewhere on Stack Overflow it was mentioned that the Min/MaxHeapFreeRatio options don't work with the parallel(?) GC, and that the JVM picks that collector automatically when it detects multiple CPUs and enough RAM. This applies to my system, so my heap will probably never shrink).
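For reference, the HotSpot options alluded to in (1) are the heap-free-ratio flags below (the values are illustrative, not a recommendation; as the comment notes, these flags were historically ignored by the throughput/parallel collector that the JVM selects on multi-CPU machines with enough RAM):

```shell
# Ask HotSpot to return memory to the OS more eagerly: shrink the heap
# when more than 30% of it is free, grow it when less than 20% is free.
java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 -jar josm.jar
```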

In any case I'll attach the patch, since having free heap is still better than nothing. If I find no more issues, I'll commit it.

(1) https://stopcoding.wordpress.com/2010/04/12/more-on-the-incredible-shrinking-jvm-heap/

Last edited 9 years ago by xeen

Changed 9 years ago by xeen

Attachment: free_more_heap.patch added


comment:13 Changed 9 years ago by bastiK

When all references to the OsmDataLayer object are removed, this shouldn't be necessary - why doesn't the GC work in this case?

comment:14 Changed 9 years ago by xeen

Hm, good point. However, in all other dumps the only reference I found was via Main.java's "listeners" ArrayList. Since it only holds WeakReferences, it can't be the reason DataSet lingers around. Once I have fixed my failing disk I'll look into the GC more closely. Maybe it simply decides that clearing the WeakReference is not worth the effort, but that changes once clear() is called. I haven't actually looked into what happens if I open another .osm file in a completely different region: if the GC is working correctly and we don't hold any references to the DataSet, it should reclaim the heap instead of expanding further (or running out of memory).
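The point about the listeners list can be demonstrated in isolation: a WeakReference entry by itself cannot keep an object alive, so if the DataSet lingers, some other strong reference must exist. A minimal sketch (DataSet here is a stand-in class, not JOSM's; System.gc() is only a hint, so the code polls rather than relying on a single call):

```java
import java.lang.ref.WeakReference;

// Shows that a WeakReference-based listener entry does not prevent
// collection: once the last strong reference is dropped, the object
// becomes eligible for GC and the weak reference is cleared.
public class WeakListenerDemo {
    static class DataSet { }

    public static void main(String[] args) throws InterruptedException {
        DataSet strong = new DataSet();
        WeakReference<DataSet> listenerEntry = new WeakReference<>(strong);

        // While a strong reference exists, the weak reference resolves.
        System.out.println(listenerEntry.get() != null); // prints true

        strong = null; // drop the last strong reference
        // Poll a few times, since a single System.gc() hint may be ignored.
        for (int i = 0; i < 10 && listenerEntry.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println(listenerEntry.get() == null
                ? "collected despite the listener entry"
                : "still alive (GC declined the hint)");
    }
}
```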

comment:15 Changed 9 years ago by jttt

How did you search for references? Did you use "Path to GC Roots" in the popup menu, or simply look at places where references could be? A WeakReference shouldn't make a difference; I think there is an option somewhere to exclude these from the analysis.

comment:16 Changed 9 years ago by xeen

I looked at the dominator tree, as recommended in the tutorials I read.

comment:17 Changed 3 years ago by Don-vip

Resolution: fixed
Status: reopened → closed
