Index: applications/editors/josm/plugins/imagerycache/GPL-v2.0.txt
===================================================================
--- applications/editors/josm/plugins/imagerycache/GPL-v2.0.txt	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/GPL-v2.0.txt	(revision 29363)
@@ -0,0 +1,339 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+                            NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License along
+    with this program; if not, write to the Free Software Foundation, Inc.,
+    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
Index: applications/editors/josm/plugins/imagerycache/GPL-v3.0.txt
===================================================================
--- applications/editors/josm/plugins/imagerycache/GPL-v3.0.txt	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/GPL-v3.0.txt	(revision 29363)
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.  To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License.  Each licensee is addressed as "you".  "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy.  The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy.  Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies.  Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License.  If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it.  "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form.  A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities.  However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work.  For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met.  This License explicitly affirms your unlimited
+permission to run the unmodified Program.  The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work.  This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force.  You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright.  Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below.  Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7.  This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy.  This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged.  This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit.  Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source.  This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge.  You need not require recipients to copy the
+    Corresponding Source along with the object code.  If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source.  Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling.  In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage.  For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product.  A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source.  The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information.  But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed.  Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law.  If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it.  (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.)  You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10.  If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term.  If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License.  Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License.  If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program.  Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance.  However,
+nothing other than this License grants you permission to propagate or
+modify any covered work.  These actions infringe copyright if you do
+not accept this License.  Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License.  You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations.  If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License.  For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based.  The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version.  For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement).  To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients.  "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License.  You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all.  For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work.  The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number.  If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation.  If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions.  However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<http://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
Index: applications/editors/josm/plugins/imagerycache/README
===================================================================
--- applications/editors/josm/plugins/imagerycache/README	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/README	(revision 29363)
@@ -0,0 +1,13 @@
+README 
+======
+
+Readme for JOSM ImageryCache plugin
+
+    * Author of the plugin: Alexei Kasatkin
+    * License: GPL v3 (see GPL-v3.0.txt); applies to all sources except those under the org.mapdb package
+    * License of MapDB: Apache License, Version 2.0 (http://www.apache.org/licenses/)
+    
+The source code in the org.mapdb package has been copied from https://github.com/jankotek/MapDB; all rights belong to its original authors.
+
+This plugin replaces the default tile loader for TMSLayer and SlippyMap with a new, database-backed one.
+As a result, the entire tile cache is stored in a few files instead of thousands of them.
Index: applications/editors/josm/plugins/imagerycache/build.xml
===================================================================
--- applications/editors/josm/plugins/imagerycache/build.xml	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/build.xml	(revision 29363)
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!--
+** This is a template build file for a JOSM plugin.
+**
+** Maintaining versions
+** ====================
+** See README.template
+**
+** Usage
+** =====
+** Call "ant help" to get possible build targets.
+**
+-->
+<project name="ImageryCache" default="dist" basedir=".">
+
+    <!-- enter the SVN commit message -->
+    <property name="commit.message" value="JOSM/ImageryCache: Initial commit"/>
+    <!-- enter the *lowest* JOSM version this plugin is currently compatible with -->
+    <property name="plugin.main.version" value="5779"/>
+
+    <!-- Configure these properties (replace "..." accordingly).
+         See http://josm.openstreetmap.de/wiki/DevelopersGuide/DevelopingPlugins
+    -->
+    <property name="plugin.author" value="Alexei Kasatkin"/>
+    <property name="plugin.class" value="org.openstreetmap.josm.plugins.imagerycache.ImageryCachePlugin"/>
+    <property name="plugin.description" value="This experimental plugin allows JOSM to store tile cache in database files, not in huge cache directories"/>
+    <property name="plugin.icon" value="images/session.png"/> 
+<!--    <property name="plugin.link" value="http://wiki.openstreetmap.org/wiki/JOSM/Plugins/ImageryCache"/> -->
+    <!--<property name="plugin.early" value="..."/>-->
+    <!--<property name="plugin.requires" value="..."/>-->
+    <!--<property name="plugin.stage" value="..."/>-->
+
+    <!-- ** include targets that all plugins have in common ** -->
+    <import file="../build-common.xml"/>
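+    <!-- Usage sketch (illustrative): "ant dist" (the default target) builds the plugin jar
+         through the imported common targets; "ant runjosm" is the target wired to the IDE
+         "run" action in nbproject/project.xml and starts JOSM with this plugin. -->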
+  
+</project>
Index: applications/editors/josm/plugins/imagerycache/nbproject/project.xml
===================================================================
--- applications/editors/josm/plugins/imagerycache/nbproject/project.xml	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/nbproject/project.xml	(revision 29363)
@@ -0,0 +1,67 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://www.netbeans.org/ns/project/1">
+    <type>org.netbeans.modules.ant.freeform</type>
+    <configuration>
+        <general-data xmlns="http://www.netbeans.org/ns/freeform-project/1">
+            <name>ImageryCache</name>
+        </general-data>
+        <general-data xmlns="http://www.netbeans.org/ns/freeform-project/2">
+            <!-- Do not use the project properties dialog when editing this file manually. -->
+            <name>ImageryCache</name>
+            <properties/>
+            <folders>
+                <source-folder>
+                    <label>ImageryCache</label>
+                    <location>.</location>
+                    <encoding>UTF-8</encoding>
+                </source-folder>
+                <source-folder>
+                    <label>src</label>
+                    <type>java</type>
+                    <location>src</location>
+                    <encoding>UTF-8</encoding>
+                </source-folder>
+            </folders>
+            <ide-actions>
+                <action name="build">
+                    <target>compile</target>
+                </action>
+                <action name="clean">
+                    <target>clean</target>
+                </action>
+                <action name="run">
+                    <target>runjosm</target>
+                </action>
+                <action name="rebuild">
+                    <target>clean</target>
+                    <target>compile</target>
+                </action>
+            </ide-actions>
+            <view>
+                <items>
+                    <source-folder style="packages">
+                        <label>src</label>
+                        <location>src</location>
+                    </source-folder>
+                    <source-file>
+                        <location>build.xml</location>
+                    </source-file>
+                </items>
+                <context-menu>
+                    <ide-action name="build"/>
+                    <ide-action name="rebuild"/>
+                    <ide-action name="clean"/>
+                    <ide-action name="run"/>
+                </context-menu>
+            </view>
+            <subprojects/>
+        </general-data>
+        <java-data xmlns="http://www.netbeans.org/ns/freeform-project-java/3">
+            <compilation-unit>
+                <package-root>src</package-root>
+                <classpath mode="compile">../../core/src</classpath>
+                <source-level>1.6</source-level>
+            </compilation-unit>
+        </java-data>
+    </configuration>
+</project>
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/AsyncWriteEngine.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/AsyncWriteEngine.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/AsyncWriteEngine.java	(revision 29363)
@@ -0,0 +1,286 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.lang.ref.WeakReference;
+import java.util.concurrent.*;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.*;
+
+/**
+ * {@link Engine} wrapper which provides asynchronous serialization and writes.
+ *  This class takes an object instance and passes it to a background thread (using a queue)
+ *  where it is serialized and written to disk.
+ * <p/>
+ * Asynchronous writes do not affect commit durability; the write queue is flushed before each commit.
+ *
+ * @author Jan Kotek
+ */
+public class AsyncWriteEngine extends EngineWrapper implements Engine {
+
+    protected static final AtomicLong threadCounter = new AtomicLong();
+    protected final long threadNum = threadCounter.incrementAndGet();
+
+    protected final BlockingQueue<Long> newRecids = new ArrayBlockingQueue<Long>(128);
+
+    protected volatile boolean closeInProgress = false;
+    protected final CountDownLatch shutdownCondition = new CountDownLatch(2);
+    protected final int asyncFlushDelay;
+
+    protected static final Object DELETED = new Object();
+    protected final Locks.RecidLocks writeLocks = new Locks.LongHashMapRecidLocks();
+
+    protected final ReentrantReadWriteLock commitLock;
+
+    protected Throwable writerFailedException = null;
+
+
+    protected final LongConcurrentHashMap<Fun.Tuple2<Object,Serializer>> items = new LongConcurrentHashMap<Fun.Tuple2<Object, Serializer>>();
+
+    protected final Thread newRecidsThread = new Thread("MapDB prealloc #"+threadNum){
+        @Override public void run() {
+            try{
+                for(;;){
+                    if(closeInProgress || (parentEngineWeakRef!=null && parentEngineWeakRef.get()==null) || writerFailedException!=null) return;
+                    Long newRecid = getWrappedEngine().put(Utils.EMPTY_STRING, Serializer.EMPTY_SERIALIZER);
+                    newRecids.put(newRecid);
+                }
+            } catch (Throwable e) {
+                writerFailedException = e;
+            }finally {
+                shutdownCondition.countDown();
+            }
+        }
+    };
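+
+    // Note: the prealloc thread above keeps a small pool of empty records in the
+    // newRecids queue, so put() can hand out a record id immediately; the actual
+    // value is serialized and persisted later by writerThread below.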
+
+    protected final Thread writerThread = new Thread("MapDB writer #"+threadNum){
+        @Override public void run() {
+            try{
+
+                for(;;){
+                    LongMap.LongMapIterator<Fun.Tuple2<Object,Serializer>> iter = items.longMapIterator();
+
+                    if(!iter.moveToNext()){
+                        //empty map, pause for a moment to give it chance to fill
+                        if(closeInProgress || (parentEngineWeakRef!=null && parentEngineWeakRef.get()==null) || writerFailedException!=null) return;
+                        Thread.sleep(asyncFlushDelay);
+
+                    }else do{
+                        //iterate over items and write them
+                        long recid = iter.key();
+
+                        writeLocks.lock(recid);
+                        try{
+                            Fun.Tuple2<Object,Serializer> value = iter.value();
+                            if(value.a==DELETED){
+                                AsyncWriteEngine.super.delete(recid, value.b);
+                            }else{
+                                AsyncWriteEngine.super.update(recid, value.a, value.b);
+                            }
+                            items.remove(recid, value);
+                        }finally {
+                            writeLocks.unlock(recid);
+                        }
+                    }while(iter.moveToNext());
+
+                }
+            } catch (Throwable e) {
+                writerFailedException = e;
+            }finally {
+                shutdownCondition.countDown();
+            }
+        }
+    };
+
+
+
+    protected AsyncWriteEngine(Engine engine, boolean _transactionsDisabled, boolean _powerSavingMode, int _asyncFlushDelay) {
+        super(engine);
+
+        newRecidsThread.setDaemon(true);
+        writerThread.setDaemon(true);
+
+        commitLock = _transactionsDisabled? null: new ReentrantReadWriteLock();
+        newRecidsThread.start();
+        writerThread.start();
+        asyncFlushDelay = _asyncFlushDelay;
+
+    }
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        checkState();
+
+
+        if(commitLock!=null) commitLock.readLock().lock();
+        try{
+
+            try {
+                Long recid = newRecids.take();
+                update(recid, value, serializer);
+                return recid;
+            } catch (InterruptedException e) {
+                throw new RuntimeException(e);
+            }
+        }finally{
+            if(commitLock!=null) commitLock.readLock().unlock();
+        }
+
+    }
+
+    protected void checkState() {
+        if(closeInProgress) throw new IllegalAccessError("db has been closed");
+        if(writerFailedException!=null) throw new RuntimeException("Writer thread failed", writerFailedException);
+    }
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        checkState();
+        if(commitLock!=null) commitLock.readLock().lock();
+        try{
+            writeLocks.lock(recid);
+            try{
+                Fun.Tuple2<Object,Serializer> item = items.get(recid);
+                if(item!=null){
+                    if(item.a == DELETED) return null;
+                    return (A) item.a;
+                }
+
+                return super.get(recid, serializer);
+            }finally{
+                writeLocks.unlock(recid);
+            }
+        }finally{
+            if(commitLock!=null) commitLock.readLock().unlock();
+        }
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        checkState();
+        if(commitLock!=null) commitLock.readLock().lock();
+        try{
+
+            writeLocks.lock(recid);
+            try{
+                items.put(recid, new Fun.Tuple2(value,serializer));
+            }finally{
+                writeLocks.unlock(recid);
+            }
+        }finally{
+            if(commitLock!=null) commitLock.readLock().unlock();
+        }
+
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        checkState();
+        writeLocks.lock(recid);
+        try{
+            Fun.Tuple2<Object, Serializer> existing = items.get(recid);
+            A oldValue = existing!=null? (A) existing.a : super.get(recid, serializer);
+            if(oldValue == expectedOldValue || (oldValue!=null && oldValue.equals(expectedOldValue))){
+                items.put(recid, new Fun.Tuple2(newValue,serializer));
+                return true;
+            }else{
+                return false;
+            }
+        }finally{
+            writeLocks.unlock(recid);
+
+        }
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer) {
+        update(recid, (A) DELETED, serializer);
+    }
+
+    @Override
+    public void close() {
+        try {
+            if(closeInProgress) return;
+            closeInProgress = true;
+            //put preallocated recids back to store
+            for(Long recid = newRecids.poll(); recid!=null; recid = newRecids.poll()){
+                super.delete(recid, Serializer.EMPTY_SERIALIZER);
+            }
+            //TODO commit after returning recids?
+
+            //wait for worker threads to shutdown
+            shutdownCondition.await();
+
+
+            super.close();
+        } catch (InterruptedException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+
+
+    protected WeakReference<Engine> parentEngineWeakRef = null;
+
+    /**
+     * The main thread may die, leaving the writer thread orphaned.
+     * To prevent this, the worker threads periodically check whether a WeakReference has been GCed.
+     * This method sets a WeakReference to the user-facing Engine; once that instance
+     * has been GCed the user can no longer reach this store and the writer thread may exit.
+     *
+     * @param parentEngineReference reference to the user-facing Engine
+     */
+    public void setParentEngineReference(Engine parentEngineReference) {
+        parentEngineWeakRef = new WeakReference<Engine>(parentEngineReference);
+    }
+
+    @Override
+    public void commit() {
+        checkState();
+        if(commitLock==null){
+            super.commit();
+            return;
+        }
+        commitLock.writeLock().lock();
+        try{
+            while(!items.isEmpty()) {
+                checkState();
+                LockSupport.parkNanos(100);
+            }
+
+            super.commit();
+        }finally {
+            commitLock.writeLock().unlock();
+        }
+    }
+
+    @Override
+    public void rollback() {
+        checkState();
+        if(commitLock == null) throw new UnsupportedOperationException("transactions disabled");
+        commitLock.writeLock().lock();
+        try{
+            while(!items.isEmpty()) LockSupport.parkNanos(100);
+
+            super.rollback();
+        }finally {
+            commitLock.writeLock().unlock();
+        }
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Atomic.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Atomic.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Atomic.java	(revision 29363)
@@ -0,0 +1,808 @@
+package org.mapdb;
+
+/*
+ * Adopted from Apache Harmony with following copyright:
+ *
+ * Written by Doug Lea with assistance from members of JCP JSR-166
+ * Expert Group and released to the public domain, as explained at
+ * http://creativecommons.org/licenses/publicdomain
+ */
+
+/**
+ * A small toolkit of classes that support lock-free thread-safe
+ * programming on single records.  In essence, the classes here
+ * provide an atomic conditional update operation of the form:
+ * <p>
+ *
+ * <pre>
+ *   boolean compareAndSet(expectedValue, updateValue);
+ * </pre>
+ *
+ * <p>This method (which varies in argument types across different
+ * classes) atomically sets a record to the {@code updateValue} if it
+ * currently holds the {@code expectedValue}, reporting {@code true} on
+ * success. Classes here also contain methods to get and
+ * unconditionally set values.
+ *
+ * <p>The specifications of these methods make it possible to
+ * employ more efficient internal DB locking. A compare-and-swap
+ * operation is typically faster than using transactions, a global lock or other
+ * concurrency protection.
+ *
+ * <p>Instances of classes
+ * {@link Atomic.Boolean},
+ * {@link Atomic.Integer},
+ * {@link Atomic.Long},
+ * {@link Atomic.String} and
+ * {@link Atomic.Var}
+ * each provide access and updates to a single record of the
+ * corresponding type.  Each class also provides appropriate utility
+ * methods for that type.  For example, classes {@code Atomic.Long} and
+ * {@code Atomic.Integer} provide atomic increment methods.  One
+ * application is to generate unique keys for Maps:
+ *
+ * <pre>
+ *    Atomic.Long id = Atomic.getLong("mapId");
+ *    map.put(id.getAndIncrement(), "something");
+ * </pre>
+ *
+ * <p>Atomic classes are designed primarily as building blocks for
+ * implementing non-blocking data structures and related infrastructure
+ * classes.  The {@code compareAndSet} method is not a general
+ * replacement for locking.  It applies only when critical updates for an
+ * object are confined to a <em>single</em> record.
+ *
+ * <p>Atomic classes are not general purpose replacements for
+ * {@code java.lang.Integer} and related classes.  They do <em>not</em>
+ * define methods such as {@code hashCode} and
+ * {@code compareTo}.  (Because atomic records are expected to be
+ * mutated, they are poor choices for hash table keys.)  Additionally,
+ * classes are provided only for those types that are commonly useful in
+ * intended applications. Other types have to be wrapped into the general {@link Atomic.Var}.
+ * <p/>
+ * You can also hold floats using
+ * {@link java.lang.Float#floatToIntBits} and
+ * {@link java.lang.Float#intBitsToFloat} conversions, and doubles using
+ * {@link java.lang.Double#doubleToLongBits} and
+ * {@link java.lang.Double#longBitsToDouble} conversions.
+ *
+ */
+final public class Atomic {
+
+    private Atomic(){}
+
+    /**
+     * Creates new record with given name and initial value.
+     *
+     * @param db to create record in
+     * @param name name of new record
+     * @param initVal initial value for Atomic.Long
+     * @throws IllegalArgumentException if name is already used
+     * @return Atomic.Long record
+     */
+    public static Long createLong(DB db, java.lang.String name, long  initVal) {
+        db.checkNameNotExists(name);
+        long recid = db.getEngine().put(initVal, Serializer.LONG_SERIALIZER);
+        db.getNameDir().put(name, recid);
+        return new Long(db.getEngine(), recid);
+    }
+
+    /**
+     * Gets or creates new record with given name.
+     * If name does not exist, new record is created with default value <code>0</code>
+     *
+     * @param db to get record from
+     * @param name of record
+     * @return record
+     */
+    public static Long getLong(DB db, java.lang.String name) {
+        java.lang.Long recid = db.nameDir.get(name);
+        return  recid == null ?
+            createLong(db, name, 0) :
+            new Long(db.getEngine(),recid);
+    }
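+
+    // Usage sketch (illustrative; assumes an open DB instance `db`, the record
+    // name "counter" is made up):
+    //   Atomic.Long seq = Atomic.getLong(db, "counter");  // created with 0 if absent
+    //   long id = seq.incrementAndGet();                   // retries via compareAndSet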
+
+    /**
+     * Creates new record with given name and initial value.
+     *
+     * @param db to create record in
+     * @param name name of new record
+     * @param initVal initial value for Atomic.Integer
+     * @throws IllegalArgumentException if name is already used
+     * @return  Atomic.Integer record
+     */
+    public static Integer createInteger(DB db, java.lang.String name, int  initVal) {
+        db.checkNameNotExists(name);
+        long recid = db.getEngine().put(initVal, Serializer.INTEGER_SERIALIZER);
+        db.getNameDir().put(name, recid);
+        return new Integer(db.getEngine(), recid);
+    }
+
+    /**
+     * Gets or creates new record with given name.
+     * If name does not exist, new record is created with default value <code>0</code>
+     *
+     * @param db to get record from
+     * @param name of record
+     * @return record
+    */
+    public static Integer getInteger(DB db, java.lang.String name) {
+        java.lang.Long recid = db.nameDir.get(name);
+        return  recid == null ?
+                createInteger(db, name, 0) :
+                new Integer(db.getEngine(),recid);
+    }
+
+    /**
+     * Creates new record with given name and initial value.
+     *
+     * @param db to create record in
+     * @param name name of new record
+     * @param initVal initial value for Atomic.Boolean
+     * @throws IllegalArgumentException if name is already used
+     * @return Atomic.Boolean record
+     */
+    public static Boolean createBoolean(DB db, java.lang.String name, boolean  initVal) {
+        db.checkNameNotExists(name);
+        long recid = db.getEngine().put(initVal, Serializer.BOOLEAN_SERIALIZER);
+        db.getNameDir().put(name, recid);
+        return new Boolean(db.getEngine(), recid);
+    }
+
+    /**
+     * Gets or creates new record with given name.
+     * If name does not exist, new record is created with default value <code>false</code>
+     *
+     * @param db to get record from
+     * @param name of record
+     * @return record
+     */
+    public static Boolean getBoolean(DB db, java.lang.String name) {
+        java.lang.Long recid = db.nameDir.get(name);
+        return  recid == null ?
+                createBoolean(db, name, false) :
+                new Boolean(db.getEngine(),recid);
+    }
+
+    /**
+     * Creates new record with given name and initial value.
+     *
+     * @param db to create record in
+     * @param name name of new record
+     * @param initVal initial value for Atomic.String, cannot be null
+     * @throws IllegalArgumentException if name is already used or initVal is null
+     * @return Atomic.String record
+     */
+    public static String createString(DB db, java.lang.String name, java.lang.String  initVal) {
+        if(initVal==null) throw new IllegalArgumentException("initVal may not be null");
+        db.checkNameNotExists(name);
+        long recid = db.getEngine().put(initVal, Serializer.STRING_SERIALIZER);
+        db.getNameDir().put(name, recid);
+        return new String(db.getEngine(), recid);
+    }
+
+    /**
+     * Gets or creates new record with given name.
+     * If name does not exist, new record is created with default value <code>""</code>
+     *
+     * @param db to get record from
+     * @param name of record
+     * @return record
+     */
+    public static String getString(DB db, java.lang.String name) {
+        java.lang.Long recid = db.nameDir.get(name);
+        return  recid == null ?
+                createString(db, name, "") :
+                new String(db.getEngine(),recid);
+    }
+
+    /**
+     * Creates new record with given name and initial value.
+     *
+     * @param db to create record in
+     * @param name name of new record
+     * @param initVal initial value for Atomic.Var
+     * @param serializer used to convert value from/to binary form
+     * @throws IllegalArgumentException if name is already used
+     * @return Atomic.Var record
+     */
+    @SuppressWarnings("unchecked")
+	public static <E> Var<E> createVar(DB db, java.lang.String name, E  initVal, Serializer<E> serializer) {
+        db.checkNameNotExists(name);
+        if(serializer == null) serializer = (Serializer<E>) db.getDefaultSerializer();
+        long recid = db.getEngine().put(initVal, serializer);
+        db.getNameDir().put(name, recid);
+        return new Var<E>(db.getEngine(), recid, serializer);
+    }
+
+    /**
+     * Gets or creates new record with given name.
+     * If name does not exist, new record is created with default value <code>null</code>
+     *
+     * @param db to get record from
+     * @param name of record
+     * @return record
+     */
+    @SuppressWarnings("unchecked")
+	public static <E> Var<E> getVar(DB db, java.lang.String name, Serializer<E> serializer) {
+        java.lang.Long recid = db.nameDir.get(name);
+        if(serializer == null) serializer = (Serializer<E>) db.getDefaultSerializer();
+        return  recid == null ?
+                createVar(db, name, null, serializer) :
+                new Var<E>(db.getEngine(),recid, serializer);
+    }
+
+
+
+    /**
+     * An {@code int} record that may be updated atomically.  An
+     * {@code Atomic.Integer} is used in applications such as atomically
+     * incremented counters, and cannot be used as a replacement for an
+     * {@link java.lang.Integer}. However, this class does extend
+     * {@code Number} to allow uniform access by tools and utilities that
+     * deal with numerically-based classes.
+     */
+    public final static class Integer extends Number {
+
+		private static final long serialVersionUID = 4615119399830853054L;
+		
+		protected final Engine engine;
+        protected final long recid;
+
+        public Integer(Engine engine, long recid) {
+            this.engine = engine;
+            this.recid = recid;
+        }
+
+        /**
+         * Gets the current value.
+         *
+         * @return the current value
+         */
+        public final int get() {
+            return engine.get(recid, Serializer.INTEGER_SERIALIZER);
+        }
+
+        /**
+         * Sets to the given value.
+         *
+         * @param newValue the new value
+         */
+        public final void set(int newValue) {
+            engine.update(recid, newValue, Serializer.INTEGER_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically sets to the given value and returns the old value.
+         *
+         * @param newValue the new value
+         * @return the previous value
+         */
+        public final int getAndSet(int newValue) {
+            for (;;) {
+                int current = get();
+                if (compareAndSet(current, newValue))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically sets the value to the given updated value
+         * if the current value {@code ==} the expected value.
+         *
+         * @param expect the expected value
+         * @param update the new value
+         * @return true if successful. False return indicates that
+         * the actual value was not equal to the expected value.
+         */
+        public final boolean compareAndSet(int expect, int update) {
+            return engine.compareAndSwap(recid, expect, update, Serializer.INTEGER_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically increments by one the current value.
+         *
+         * @return the previous value
+         */
+        public final int getAndIncrement() {
+            for (;;) {
+                int current = get();
+                int next = current + 1;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically decrements by one the current value.
+         *
+         * @return the previous value
+         */
+        public final int getAndDecrement() {
+            for (;;) {
+                int current = get();
+                int next = current - 1;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically adds the given value to the current value.
+         *
+         * @param delta the value to add
+         * @return the previous value
+         */
+        public final int getAndAdd(int delta) {
+            for (;;) {
+                int current = get();
+                int next = current + delta;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically increments by one the current value.
+         *
+         * @return the updated value
+         */
+        public final int incrementAndGet() {
+            for (;;) {
+                int current = get();
+                int next = current + 1;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Atomically decrements by one the current value.
+         *
+         * @return the updated value
+         */
+        public final int decrementAndGet() {
+            for (;;) {
+                int current = get();
+                int next = current - 1;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Atomically adds the given value to the current value.
+         *
+         * @param delta the value to add
+         * @return the updated value
+         */
+        public final int addAndGet(int delta) {
+            for (;;) {
+                int current = get();
+                int next = current + delta;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Returns the String representation of the current value.
+         * @return the String representation of the current value.
+         */
+        public java.lang.String toString() {
+            return java.lang.Integer.toString(get());
+        }
+
+
+        public int intValue() {
+            return get();
+        }
+
+        public long longValue() {
+            return (long)get();
+        }
+
+        public float floatValue() {
+            return (float)get();
+        }
+
+        public double doubleValue() {
+            return (double)get();
+        }
+
+    }
+
+
+    /**
+     * A {@code long} record that may be updated atomically.   An
+     * {@code Atomic.Long} is used in applications such as atomically
+     * incremented sequence numbers, and cannot be used as a replacement
+     * for a {@link java.lang.Long}. However, this class does extend
+     * {@code Number} to allow uniform access by tools and utilities that
+     * deal with numerically-based classes.
+     */
+    public final static class Long extends Number{
+
+		private static final long serialVersionUID = 2882620413591274781L;
+		
+		protected final Engine engine;
+        protected final long recid;
+
+        public Long(Engine engine, long recid) {
+            this.engine = engine;
+            this.recid = recid;
+        }
+
+
+        /**
+         * Gets the current value.
+         *
+         * @return the current value
+         */
+        public final long get() {
+            return engine.get(recid, Serializer.LONG_SERIALIZER);
+        }
+
+        /**
+         * Sets to the given value.
+         *
+         * @param newValue the new value
+         */
+        public final void set(long newValue) {
+            engine.update(recid, newValue, Serializer.LONG_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically sets to the given value and returns the old value.
+         *
+         * @param newValue the new value
+         * @return the previous value
+         */
+        public final long getAndSet(long newValue) {
+            while (true) {
+                long current = get();
+                if (compareAndSet(current, newValue))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically sets the value to the given updated value
+         * if the current value {@code ==} the expected value.
+         *
+         * @param expect the expected value
+         * @param update the new value
+         * @return true if successful. False return indicates that
+         * the actual value was not equal to the expected value.
+         */
+        public final boolean compareAndSet(long expect, long update) {
+            return engine.compareAndSwap(recid, expect, update, Serializer.LONG_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically increments by one the current value.
+         *
+         * @return the previous value
+         */
+        public final long getAndIncrement() {
+            while (true) {
+                long current = get();
+                long next = current + 1;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically decrements by one the current value.
+         *
+         * @return the previous value
+         */
+        public final long getAndDecrement() {
+            while (true) {
+                long current = get();
+                long next = current - 1;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically adds the given value to the current value.
+         *
+         * @param delta the value to add
+         * @return the previous value
+         */
+        public final long getAndAdd(long delta) {
+            while (true) {
+                long current = get();
+                long next = current + delta;
+                if (compareAndSet(current, next))
+                    return current;
+            }
+        }
+
+        /**
+         * Atomically increments by one the current value.
+         *
+         * @return the updated value
+         */
+        public final long incrementAndGet() {
+            for (;;) {
+                long current = get();
+                long next = current + 1;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Atomically decrements by one the current value.
+         *
+         * @return the updated value
+         */
+        public final long decrementAndGet() {
+            for (;;) {
+                long current = get();
+                long next = current - 1;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Atomically adds the given value to the current value.
+         *
+         * @param delta the value to add
+         * @return the updated value
+         */
+        public final long addAndGet(long delta) {
+            for (;;) {
+                long current = get();
+                long next = current + delta;
+                if (compareAndSet(current, next))
+                    return next;
+            }
+        }
+
+        /**
+         * Returns the String representation of the current value.
+         * @return the String representation of the current value.
+         */
+        public java.lang.String toString() {
+            return java.lang.Long.toString(get());
+        }
+
+
+        public int intValue() {
+            return (int)get();
+        }
+
+        public long longValue() {
+            return get();
+        }
+
+        public float floatValue() {
+            return (float)get();
+        }
+
+        public double doubleValue() {
+            return (double)get();
+        }
+
+    }
+
+
+    /**
+     * A {@code boolean} record that may be updated atomically.
+     */
+    public final static class Boolean {
+
+        protected final Engine engine;
+        protected final long recid;
+
+        public Boolean(Engine engine, long recid) {
+            this.engine = engine;
+            this.recid = recid;
+        }
+
+
+        /**
+         * Returns the current value.
+         *
+         * @return the current value
+         */
+        public final boolean get() {
+            return engine.get(recid, Serializer.BOOLEAN_SERIALIZER);
+        }
+
+        /**
+         * Atomically sets the value to the given updated value
+         * if the current value {@code ==} the expected value.
+         *
+         * @param expect the expected value
+         * @param update the new value
+         * @return true if successful. False return indicates that
+         * the actual value was not equal to the expected value.
+         */
+        public final boolean compareAndSet(boolean expect, boolean update) {
+            return engine.compareAndSwap(recid, expect, update, Serializer.BOOLEAN_SERIALIZER);
+        }
+
+
+        /**
+         * Unconditionally sets to the given value.
+         *
+         * @param newValue the new value
+         */
+        public final void set(boolean newValue) {
+            engine.update(recid, newValue, Serializer.BOOLEAN_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically sets to the given value and returns the previous value.
+         *
+         * @param newValue the new value
+         * @return the previous value
+         */
+        public final boolean getAndSet(boolean newValue) {
+            for (;;) {
+                boolean current = get();
+                if (compareAndSet(current, newValue))
+                    return current;
+            }
+        }
+
+        /**
+         * Returns the String representation of the current value.
+         * @return the String representation of the current value.
+         */
+        public java.lang.String toString() {
+            return java.lang.Boolean.toString(get());
+        }
+
+    }
+
+    /**
+     * A {@code String} record that may be updated atomically.
+     */
+    public final static class String{
+
+        protected final Engine engine;
+        protected final long recid;
+
+        public String(Engine engine, long recid) {
+            this.engine = engine;
+            this.recid = recid;
+        }
+
+
+        public java.lang.String toString() {
+            return get();
+        }
+
+        /**
+         * Returns the current value.
+         *
+         * @return the current value
+         */
+        public final java.lang.String get() {
+            return engine.get(recid, Serializer.STRING_SERIALIZER);
+        }
+
+        /**
+         * Atomically sets the value to the given updated value
+         * if the current value equals the expected value.
+         *
+         * @param expect the expected value
+         * @param update the new value
+         * @return true if successful. False return indicates that
+         * the actual value was not equal to the expected value.
+         */
+        public final boolean compareAndSet(java.lang.String expect, java.lang.String update) {
+            return engine.compareAndSwap(recid, expect, update, Serializer.STRING_SERIALIZER);
+        }
+
+
+        /**
+         * Unconditionally sets to the given value.
+         *
+         * @param newValue the new value
+         */
+        public final void set(java.lang.String newValue) {
+            engine.update(recid, newValue, Serializer.STRING_SERIALIZER);
+        }
+
+
+        /**
+         * Atomically sets to the given value and returns the previous value.
+         *
+         * @param newValue the new value
+         * @return the previous value
+         */
+        public final java.lang.String getAndSet(java.lang.String newValue) {
+            for (;;) {
+                java.lang.String current = get();
+                if (compareAndSet(current, newValue))
+                    return current;
+            }
+        }
+
+    }
+
+    /**
+     * Atomically updated variable which may contain any type of record.
+     */
+    public static final class Var<E> {
+
+        protected final Engine engine;
+        protected final long recid;
+        protected final Serializer<E> serializer;
+
+        public Var(Engine engine, long recid, Serializer<E> serializer) {
+            this.engine = engine;
+            this.recid = recid;
+            this.serializer = serializer;
+        }
+
+        public java.lang.String toString() {
+            E v = get();
+            return v==null? null : v.toString();
+        }
+
+        /**
+         * Returns the current value.
+         *
+         * @return the current value
+         */
+        public final E get() {
+            return engine.get(recid, serializer);
+        }
+
+        /**
+         * Atomically sets the value to the given updated value
+         * if the current value equals the expected value.
+         *
+         * @param expect the expected value
+         * @param update the new value
+         * @return true if successful. False return indicates that
+         * the actual value was not equal to the expected value.
+         */
+        public final boolean compareAndSet(E expect, E update) {
+            return engine.compareAndSwap(recid, expect, update, serializer);
+        }
+
+
+        /**
+         * Unconditionally sets to the given value.
+         *
+         * @param newValue the new value
+         */
+        public final void set(E newValue) {
+            engine.update(recid, newValue, serializer);
+        }
+
+
+        /**
+         * Atomically sets to the given value and returns the previous value.
+         *
+         * @param newValue the new value
+         * @return the previous value
+         */
+        public final E getAndSet(E newValue) {
+            for (;;) {
+                E current = get();
+                if (compareAndSet(current, newValue))
+                    return current;
+            }
+        }
+
+
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeKeySerializer.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeKeySerializer.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeKeySerializer.java	(revision 29363)
@@ -0,0 +1,182 @@
+package org.mapdb;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+/**
+ * Custom serializer for BTreeMap keys.
+ * It is used to take advantage of delta compression.
+ *
+ * @param <K>
+ */
+public abstract class BTreeKeySerializer<K>{
+    public abstract void serialize(DataOutput out, int start, int end, Object[] keys) throws IOException;
+
+    public abstract Object[] deserialize(DataInput in, int start, int end, int size) throws IOException;
+
+    static final class BasicKeySerializer extends BTreeKeySerializer<Object> {
+
+        protected final Serializer defaultSerializer;
+
+        BasicKeySerializer(Serializer defaultSerializer) {
+            this.defaultSerializer = defaultSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, int start, int end, Object[] keys) throws IOException {
+            for(int i = start;i<end;i++){
+                defaultSerializer.serialize(out,keys[i]);
+            }
+        }
+
+        @Override
+        public Object[] deserialize(DataInput in, int start, int end, int size) throws IOException{
+            Object[] ret = new Object[size];
+            for(int i=start; i<end; i++){
+                ret[i] = defaultSerializer.deserialize(in,-1);
+            }
+            return ret;
+        }
+    }
+
+
+    public static final  BTreeKeySerializer<Long> ZERO_OR_POSITIVE_LONG = new BTreeKeySerializer<Long>() {
+        @Override
+        public void serialize(DataOutput out, int start, int end, Object[] keys) throws IOException {
+            if(start>=end) return;
+//            System.out.println(start+" - "+end+" - "+Arrays.toString(keys));
+            long prev = (Long)keys[start];
+            Utils.packLong(out,prev);
+            for(int i=start+1;i<end;i++){
+                long curr = (Long)keys[i];
+                Utils.packLong(out, curr-prev);
+                prev = curr;
+            }
+        }
+
+        @Override
+        public Object[] deserialize(DataInput in, int start, int end, int size) throws IOException {
+            Object[] ret = new Long[size];
+            long prev = 0 ;
+            for(int i = start; i<end; i++){
+                ret[i] = prev = prev + Utils.unpackLong(in);
+            }
+            return ret;
+        }
+    };
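+
+    // Worked example (illustrative): ZERO_OR_POSITIVE_LONG writes the keys [100, 105, 107]
+    // as packLong(100), packLong(5), packLong(2); every key after the first is stored as
+    // the delta from its predecessor, and deserialize() restores the originals by summing
+    // the deltas.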
+
+    public static final  BTreeKeySerializer<Integer> ZERO_OR_POSITIVE_INT = new BTreeKeySerializer<Integer>() {
+        @Override
+        public void serialize(DataOutput out, int start, int end, Object[] keys) throws IOException {
+            if(start>=end) return;
+//            System.out.println(start+" - "+end+" - "+Arrays.toString(keys));
+            int prev = (Integer)keys[start];
+            Utils.packLong(out,prev);
+            for(int i=start+1;i<end;i++){
+                int curr = (Integer)keys[i];
+                Utils.packInt(out, curr-prev);
+                prev = curr;
+            }
+        }
+
+        @Override
+        public Object[] deserialize(DataInput in, int start, int end, int size) throws IOException {
+            Object[] ret = new Integer[size]; // must be Integer[]; the boxed ints below would not fit into a Long[]
+            int prev = 0 ;
+            for(int i = start; i<end; i++){
+                ret[i] = prev = prev + Utils.unpackInt(in);
+            }
+            return ret;
+        }
+    };
+
+
+    public static final  BTreeKeySerializer<String> STRING = new BTreeKeySerializer<String>() {
+
+        @Override
+        public void serialize(DataOutput out, int start, int end, Object[] keys) throws IOException {
+            byte[] previous = null;
+            for (int i = start; i < end; i++) {
+                byte[] b = ((String) keys[i]).getBytes(Utils.UTF8);
+                leadingValuePackWrite(out, b, previous, 0);
+                previous = b;
+            }
+        }
+
+        @Override
+        public Object[] deserialize(DataInput in, int start, int end, int size) throws IOException {
+            Object[] ret = new Object[size];
+            byte[] previous = null;
+            for (int i = start; i < end; i++) {
+                byte[] b = leadingValuePackRead(in, previous, 0);
+                if (b == null) continue;
+                ret[i] = new String(b,Utils.UTF8);
+                previous = b;
+            }
+            return ret;
+        }
+    };
+
+    /**
+     * Read previously written data
+     *
+     * @author Kevin Day
+     */
+    public static byte[] leadingValuePackRead(DataInput in, byte[] previous, int ignoreLeadingCount) throws IOException {
+        int len = Utils.unpackInt(in) - 1;  // 0 indicates null
+        if (len == -1)
+            return null;
+
+        int actualCommon = Utils.unpackInt(in);
+
+        byte[] buf = new byte[len];
+
+        if (previous == null) {
+            actualCommon = 0;
+        }
+
+
+        if (actualCommon > 0) {
+            in.readFully(buf, 0, ignoreLeadingCount);
+            System.arraycopy(previous, ignoreLeadingCount, buf, ignoreLeadingCount, actualCommon - ignoreLeadingCount);
+        }
+        in.readFully(buf, actualCommon, len - actualCommon);
+        return buf;
+    }
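+
+    // Worked example (illustrative): with ignoreLeadingCount=0, writing the key "mapdb"
+    // after "mapzen" stores its length (packed as length+1 = 6), the common-prefix length 3
+    // ("map"), and only the remaining bytes "db"; leadingValuePackRead() then restores the
+    // shared prefix from the previously decoded key.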
+
+    /**
+     * This method is used for delta compression for keys.
+     * Writes the contents of buf to the DataOutput out, with special encoding if
+     * there are common leading bytes in the previous group stored by this compressor.
+     *
+     * @author Kevin Day
+     */
+    public static void leadingValuePackWrite(DataOutput out, byte[] buf, byte[] previous, int ignoreLeadingCount) throws IOException {
+        if (buf == null) {
+            Utils.packInt(out, 0);
+            return;
+        }
+
+        int actualCommon = ignoreLeadingCount;
+
+        if (previous != null) {
+            int maxCommon = buf.length > previous.length ? previous.length : buf.length;
+
+            if (maxCommon > Short.MAX_VALUE) maxCommon = Short.MAX_VALUE;
+
+            for (; actualCommon < maxCommon; actualCommon++) {
+                if (buf[actualCommon] != previous[actualCommon])
+                    break;
+            }
+        }
+
+
+        // there are enough common bytes to justify compression
+        Utils.packInt(out, buf.length + 1);// store as +1, 0 indicates null
+        Utils.packInt(out, actualCommon);
+        out.write(buf, 0, ignoreLeadingCount);
+        out.write(buf, actualCommon, buf.length - actualCommon);
+
+
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/BTreeMap.java	(revision 29363)
@@ -0,0 +1,2175 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+/*
+ * NOTE: some code (and javadoc) used in this class
+ * comes from Apache Harmony with following copyright:
+ *
+ * Written by Doug Lea with assistance from members of JCP JSR-166
+ * Expert Group and released to the public domain, as explained at
+ * http://creativecommons.org/licenses/publicdomain
+ */
+
+package org.mapdb;
+
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.ConcurrentNavigableMap;
+
+import static org.mapdb.SerializationHeader.*;
+
+/**
+ * A scalable concurrent {@link ConcurrentNavigableMap} implementation.
+ * The map is sorted according to the {@linkplain Comparable natural
+ * ordering} of its keys, or by a {@link Comparator} provided at map
+ * creation time.
+ *
+ * <p>Insertion, removal,
+ * update, and access operations safely execute concurrently by
+ * multiple threads.  Iterators are <i>weakly consistent</i>, returning
+ * elements reflecting the state of the map at some point at or since
+ * the creation of the iterator.  They do <em>not</em> throw {@link
+ * ConcurrentModificationException}, and may proceed concurrently with
+ * other operations. Ascending key ordered views and their iterators
+ * are faster than descending ones.
+ * <p>
+ * It is possible to obtain a <i>consistent</i> iterator by using the <code>snapshot()</code>
+ * method.
+ *
+ * <p>All <tt>Map.Entry</tt> pairs returned by methods in this class
+ * and its views represent snapshots of mappings at the time they were
+ * produced. They do <em>not</em> support the <tt>Entry.setValue</tt>
+ * method. (Note however that it is possible to change mappings in the
+ * associated map using <tt>put</tt>, <tt>putIfAbsent</tt>, or
+ * <tt>replace</tt>, depending on exactly which effect you need.)
+ *
+ * <p>Beware that, unlike in most collections, the <tt>size</tt>
+ * method is <em>not</em> a constant-time operation. Because of the
+ * asynchronous nature of these maps, determining the current number
+ * of elements requires a traversal of the elements.  Additionally,
+ * the bulk operations <tt>putAll</tt>, <tt>equals</tt>, and
+ * <tt>clear</tt> are <em>not</em> guaranteed to be performed
+ * atomically. For example, an iterator operating concurrently with a
+ * <tt>putAll</tt> operation might view only some of the added
+ * elements.
+ *
+ * <p>This class and its views and iterators implement all of the
+ * <em>optional</em> methods of the {@link Map} and {@link Iterator}
+ * interfaces. Like most other concurrent collections, this class does
+ * <em>not</em> permit the use of <tt>null</tt> keys or values because some
+ * null return values cannot be reliably distinguished from the absence of
+ * elements.
+ *
+ * <p>Theoretical design of BTreeMap is based on <a href="http://www.cs.cornell.edu/courses/cs4411/2009sp/blink.pdf">paper</a>
+ * from Philip L. Lehman and S. Bing Yao. More practical aspects of BTreeMap implementation are based on <a href="http://www.doc.ic.ac.uk/~td202/">notes</a>
+ * and <a href="http://www.doc.ic.ac.uk/~td202/btree/">demo application</a> from Thomas Dinsdale-Young.
+ * The B-Linked-Tree used here does not require locking for reads. An update locks only one, two or three nodes.
+ * <p/>
+ * This B-Linked-Tree structure does not support removal well; deleting an entry does not collapse tree nodes. Massive
+ * deletion causes empty nodes and performance loss. There is a workaround in the form of a compaction process, but it is not
+ * implemented yet.
+ *
+ * @author Jan Kotek
+ * @author some parts by Doug Lea
+ */
+@SuppressWarnings({ "unchecked", "rawtypes" })
+//TODO better tests for BTreeMap without values (set)
+public class BTreeMap<K,V> extends AbstractMap<K,V>
+        implements ConcurrentNavigableMap<K,V>, Bind.MapWithModificationListener<K,V>{
+
+
+
+
+    /** default maximal node size */
+    protected static final int DEFAULT_MAX_NODE_SIZE = 32;
+
+
+    /** recid under which reference to rootRecid is stored */
+    protected final long rootRecidRef;
+
+    /** Serializer used to convert keys from/into binary form.
+     * TODO delta packing on BTree keys*/
+    protected final BTreeKeySerializer keySerializer;
+    /** Serializer used to convert values from/into binary form*/
+    protected final Serializer<V> valueSerializer;
+
+    /** keys are sorted by this*/
+    protected final Comparator comparator;
+
+    /** holds node level locks*/
+    protected final Locks.RecidLocks nodeLocks = new Locks.LongHashMapRecidLocks();
+
+    /** maximal node size allowed in this BTree*/
+    protected final int maxNodeSize;
+
+    /** DB Engine in which entries are persisted */
+    protected final Engine engine;
+
+    /** is this a Map or Set?  if false, entries do not have values, only keys are allowed*/
+    protected final boolean hasValues;
+
+    /** if true, values are stored outside of BTree nodes and only referenced from them */
+    protected final boolean valsOutsideNodes;
+
+
+    protected final long treeRecid;
+
+
+    private final KeySet keySet;
+
+    private final EntrySet entrySet = new EntrySet(this);
+
+    private final Values values = new Values(this);
+    protected final Serializer defaultSerializer;
+
+
+    static class BTreeRootSerializer implements  Serializer<BTreeRoot>{
+        protected final Serializer defaultSerializer;
+
+        BTreeRootSerializer(Serializer defaultSerializer) {
+            this.defaultSerializer = defaultSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, BTreeRoot value) throws IOException {
+            out.writeByte(SerializationHeader.B_TREE_MAP_ROOT_HEADER);
+            out.writeLong(value.rootRecidRef);
+            out.writeBoolean(value.hasValues);
+            out.writeBoolean(value.valsOutsideNodes);
+            out.writeInt(value.maxNodeSize);
+            defaultSerializer.serialize(out, value.keySerializer);
+            defaultSerializer.serialize(out, value.valueSerializer);
+            defaultSerializer.serialize(out, value.comparator);
+
+        }
+
+        @Override
+        public BTreeRoot deserialize(DataInput in, int available) throws IOException {
+            BTreeRoot ret = new BTreeRoot();
+            if(in.readUnsignedByte()!=SerializationHeader.B_TREE_MAP_ROOT_HEADER) throw new InternalError();
+            ret.rootRecidRef = in.readLong();
+            ret.hasValues = in.readBoolean();
+            ret.valsOutsideNodes = in.readBoolean();
+            ret.maxNodeSize = in.readInt();
+            ret.keySerializer = (BTreeKeySerializer) defaultSerializer.deserialize(in, -1);
+            ret.valueSerializer = (Serializer) defaultSerializer.deserialize(in, -1);
+            ret.comparator = (Comparator) defaultSerializer.deserialize(in, -1);
+            return ret;
+        }
+    }
+
+    /** data record which holds information about this BTree. The BTreeMap class is not serialized itself. */
+    static final class BTreeRoot{
+        long rootRecidRef;
+        boolean hasValues;
+        boolean valsOutsideNodes;
+        int maxNodeSize;
+        BTreeKeySerializer keySerializer;
+        Serializer valueSerializer;
+        Comparator comparator;
+
+
+
+    }
+
+    /** if <code>valsOutsideNodes</code> is true, this class is stored in nodes instead of the values themselves.
+     * It contains a reference (recid) to the actual value. Its equals/hashCode assertions prevent it from leaking outside of the Map.*/
+    protected static final class ValRef{
+        /** reference to actual value */
+        final long recid;
+        public ValRef(long recid) {
+            this.recid = recid;
+        }
+
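+        // equals/hashCode deliberately throw: a ValRef is an internal pointer to the stored value
+        // and must never leak outside of the map or be compared as an ordinary value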
+        @Override
+        public boolean equals(Object obj) {
+            throw new InternalError();
+        }
+
+        @Override
+        public int hashCode() {
+            throw new InternalError();
+        }
+
+    }
+
+
+    /** common interface for BTree node */
+    protected interface BNode{
+        boolean isLeaf();
+        Object[] keys();
+        Object[] vals();
+        Object highKey();
+        long[] child();
+        long next();
+    }
+
+    protected final static class DirNode implements BNode{
+        final Object[] keys;
+        final long[] child;
+
+        DirNode(Object[] keys, long[] child) {
+            this.keys = keys;
+            this.child = child;
+        }
+
+        @Override public boolean isLeaf() { return false;}
+
+        @Override public Object[] keys() { return keys;}
+        @Override public Object[] vals() { return null;}
+
+        @Override public Object highKey() {return keys[keys.length-1];}
+
+        @Override public long[] child() { return child;}
+
+        @Override public long next() {return child[child.length-1];}
+
+        @Override public String toString(){
+            return "Dir(K"+Arrays.toString(keys)+", C"+Arrays.toString(child)+")";
+        }
+
+    }
+
+
+    protected final static class LeafNode implements BNode{
+        final Object[] keys;
+        final Object[] vals;
+        final long next;
+
+        LeafNode(Object[] keys, Object[] vals, long next) {
+            this.keys = keys;
+            this.vals = vals;
+            this.next = next;
+        }
+
+        @Override public boolean isLeaf() { return true;}
+
+        @Override public Object[] keys() { return keys;}
+        @Override public Object[] vals() { return vals;}
+
+        @Override public Object highKey() {return keys[keys.length-1];}
+
+        @Override public long[] child() { return null;}
+        @Override public long next() {return next;}
+
+        @Override public String toString(){
+            return "Leaf(K"+Arrays.toString(keys)+", V"+Arrays.toString(vals)+", L="+next+")";
+        }
+    }
+
+
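+    /*
+     * Serialized node layout (as written by the serializer below):
+     *   header byte - node type (leaf/dir) and whether the first/last key slot is a null sentinel
+     *   size byte   - number of keys, including null sentinels (must fit into one byte)
+     *   longs       - leaf: packed 'next' recid; dir: packed child recids
+     *   keys        - written by keySerializer, null sentinels excluded
+     *   values      - leaf nodes with values only: packed recids (valsOutsideNodes) or serialized values
+     */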
+    protected final Serializer<BNode> nodeSerializer = new Serializer<BNode>() {
+        @Override
+        public void serialize(DataOutput out, BNode value) throws IOException {
+            final boolean isLeaf = value.isLeaf();
+
+            //header byte encodes node type (leaf/dir) and which edge keys are null; key count is written as the following byte
+            if(value.keys().length>255) throw new InternalError();
+            if(!isLeaf && value.child().length!= value.keys().length) throw new InternalError();
+            if(isLeaf && hasValues && value.vals().length!= value.keys().length) throw new InternalError();
+
+            //check node integrity in paranoid mode
+            if(CC.PARANOID){
+                int len = value.keys().length;
+                for(int i=value.keys()[0]==null?2:1;
+                  i<(value.keys()[len-1]==null?len-1:len);
+                  i++){
+                    int comp = comparator.compare(value.keys()[i-1], value.keys()[i]);
+                    int limit = i==len-1 ? 1:0 ;
+                    if(comp>=limit){
+                        throw new AssertionError("BTreeNode format error, wrong key order at #"+i+"\n"+value);
+                    }
+                }
+
+            }
+
+
+            final boolean left = value.keys()[0] == null;
+            final boolean right = value.keys()[value.keys().length-1] == null;
+
+
+            final int header;
+
+            if(isLeaf)
+                if(right){
+                    if(left)
+                        header = B_TREE_NODE_LEAF_LR;
+                    else
+                        header = B_TREE_NODE_LEAF_R;
+                }else{
+                    if(left)
+                        header = B_TREE_NODE_LEAF_L;
+                    else
+                        header = B_TREE_NODE_LEAF_C;
+                }
+            else{
+                if(right){
+                    if(left)
+                        header = B_TREE_NODE_DIR_LR;
+                    else
+                        header = B_TREE_NODE_DIR_R;
+                }else{
+                    if(left)
+                        header = B_TREE_NODE_DIR_L;
+                    else
+                        header = B_TREE_NODE_DIR_C;
+                }
+            }
+
+
+
+            out.write(header);
+            out.write(value.keys().length);
+
+            //recids (packed longs) go first, so the tree structure can be reconstructed without the key serializer
+            if(isLeaf){
+                Utils.packLong(out, ((LeafNode) value).next);
+            }else{
+                for(long child : ((DirNode)value).child)
+                    Utils.packLong(out, child);
+            }
+
+
+
+            keySerializer.serialize(out,left?1:0,
+                    right?value.keys().length-1:value.keys().length,
+                    value.keys());
+
+            if(isLeaf && hasValues){
+                for(int i=0; i<value.vals().length; i++){
+                    Object val = value.vals()[i];
+                    if(valsOutsideNodes){
+                        long recid = val!=null?  ((ValRef)val).recid :0;
+                        Utils.packLong(out, recid);
+                    }else{
+                        valueSerializer.serialize(out, (V) val);
+                    }
+                }
+            }
+        }
+
+        @Override
+        public BNode deserialize(DataInput in, int available) throws IOException {
+            final int header = in.readUnsignedByte();
+            final int size = in.readUnsignedByte();
+            //header value indicates whether this is a leaf node
+            final boolean isLeaf =
+                    header == B_TREE_NODE_LEAF_C  || header == B_TREE_NODE_LEAF_L ||
+                    header == B_TREE_NODE_LEAF_LR || header == B_TREE_NODE_LEAF_R;
+            final int start =
+                (header==B_TREE_NODE_LEAF_L  || header == B_TREE_NODE_LEAF_LR || header==B_TREE_NODE_DIR_L  || header == B_TREE_NODE_DIR_LR) ?
+                1:0;
+
+            final int end =
+                (header==B_TREE_NODE_LEAF_R  || header == B_TREE_NODE_LEAF_LR || header==B_TREE_NODE_DIR_R  || header == B_TREE_NODE_DIR_LR) ?
+                size-1:size;
+
+
+            if(isLeaf){
+                long next = Utils.unpackLong(in);
+                Object[] keys = (Object[]) keySerializer.deserialize(in, start,end,size);
+                if(keys.length!=size) throw new InternalError();
+                Object[] vals  = null;
+
+                if(hasValues){
+                    vals = new Object[size];
+                    for(int i=0;i<size;i++){
+                        if(valsOutsideNodes){
+                            long recid = Utils.unpackLong(in);
+                            vals[i] = recid==0? null: new ValRef(recid);
+                        }else{
+                            vals[i] = valueSerializer.deserialize(in, size-1);
+                        }
+                    }
+                }
+                return new LeafNode(keys, vals, next);
+            }else{
+                long[] child = new long[size];
+                for(int i=0;i<size;i++)
+                    child[i] = Utils.unpackLong(in);
+                Object[] keys = (Object[]) keySerializer.deserialize(in, start,end,size);
+                if(keys.length!=size) throw new InternalError();
+                return new DirNode(keys, child);
+            }
+        }
+    };
+
+
+    /** Constructor used to create a new BTreeMap without an existing record (recid) in the Engine.
+     *  This constructor creates a new record and saves all configuration parameters there.
+     *  Constructor args define the BTreeMap format; they are stored in the db and cannot be changed later.
+     *
+     * @param engine used for persistence
+     * @param maxNodeSize maximal BTree node size; a node splits when its number of entries grows beyond this
+     * @param hasValues is this a Map or a Set? If false, only keys are stored, no values
+     * @param valsOutsideNodes store values outside of BTree nodes, in separate records?
+     * @param defaultSerializer serializer used to serialize/deserialize the other serializers. May be null for default value.
+     * @param keySerializer Serializer used for keys. May be null for default value. TODO delta packing
+     * @param valueSerializer Serializer used for values. May be null for default value
+     * @param comparator Comparator to sort keys in this BTree, may be null.
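+     *
+     * <p>Illustrative sketch (assumes {@code engine} is an already initialised Engine; all
+     * serializers and the comparator fall back to their defaults):
+     * <pre>{@code
+     *   BTreeMap<Long, String> map = new BTreeMap<Long, String>(
+     *           engine,
+     *           32,     // maxNodeSize (DEFAULT_MAX_NODE_SIZE)
+     *           true,   // hasValues: behave as a Map, not a Set
+     *           false,  // valsOutsideNodes: keep values inside nodes
+     *           null, null, null, null); // default serializers and comparator
+     * }</pre>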
+     */
+    public BTreeMap(Engine engine, int maxNodeSize, boolean hasValues, boolean valsOutsideNodes,
+                    Serializer defaultSerializer,
+                    BTreeKeySerializer<K> keySerializer, Serializer<V> valueSerializer, Comparator<K> comparator) {
+        if(maxNodeSize%2!=0) throw new IllegalArgumentException("maxNodeSize must be divisible by 2");
+        if(maxNodeSize<6) throw new IllegalArgumentException("maxNodeSize too low");
+        if(maxNodeSize>126) throw new IllegalArgumentException("maxNodeSize too high");
+        SerializerBase.assertSerializable(keySerializer);
+        SerializerBase.assertSerializable(valueSerializer);
+        SerializerBase.assertSerializable(comparator);
+
+
+        if(defaultSerializer==null) defaultSerializer = Serializer.BASIC_SERIALIZER;
+
+
+        this.defaultSerializer = defaultSerializer;
+        this.hasValues = hasValues;
+        this.valsOutsideNodes = valsOutsideNodes;
+        this.engine = engine;
+        this.maxNodeSize = maxNodeSize;
+        this.comparator = comparator==null? Utils.COMPARABLE_COMPARATOR : comparator;
+        //TODO when delta packing implemented, add assertion for COMPARABLE_COMPARATOR
+        this.keySerializer = keySerializer==null ?  new BTreeKeySerializer.BasicKeySerializer(defaultSerializer) :  keySerializer;
+        this.valueSerializer = valueSerializer==null ? (Serializer<V>) defaultSerializer : valueSerializer;
+
+        this.keySet = new KeySet(this, hasValues);
+
+        LeafNode emptyRoot = new LeafNode(new Object[]{null, null}, new Object[]{null, null}, 0);
+        long rootRecidVal = engine.put(emptyRoot, nodeSerializer);
+        rootRecidRef = engine.put(rootRecidVal,Serializer.LONG_SERIALIZER);
+
+        BTreeRoot r = new BTreeRoot();
+        r.hasValues = this.hasValues;
+        r.valsOutsideNodes = this.valsOutsideNodes;
+        r.rootRecidRef = this.rootRecidRef;
+        r.maxNodeSize =  this.maxNodeSize;
+        r.keySerializer =  this.keySerializer;
+        r.valueSerializer =  this.valueSerializer;
+        r.comparator =  this.comparator;
+        this.treeRecid = engine.put(r, new BTreeRootSerializer(this.defaultSerializer));
+    }
+
+
+
+    /**
+     * Constructor used to load an existing BTreeMap (with an assigned recid).
+     * The map was already created and saved to the Engine; this constructor just loads it.
+     *
+     * @param engine used for persistence
+     * @param recid under which BTreeMap was stored
+     * @param defaultSerializer used to deserialize other serializers and comparator
+     */
+    public BTreeMap(Engine engine, long recid, Serializer defaultSerializer) {
+        this.engine = engine;
+        this.treeRecid = recid;
+        if(defaultSerializer==null) defaultSerializer = Serializer.BASIC_SERIALIZER;
+        this.defaultSerializer = defaultSerializer;
+
+        BTreeRoot r = engine.get(recid, new BTreeRootSerializer(defaultSerializer));
+        this.hasValues = r.hasValues;
+        this.rootRecidRef = r.rootRecidRef;
+        this.maxNodeSize = r.maxNodeSize;
+        this.keySerializer = r.keySerializer;
+        this.valueSerializer = r.valueSerializer;
+        this.comparator = r.comparator;
+        this.valsOutsideNodes = r.valsOutsideNodes;
+
+        this.keySet = new KeySet(this, hasValues);
+    }
+
+
+
+    /**
+     * Finds the index of the first key equal to or greater than the given key.
+     * If all keys are smaller it returns `keys.length`.
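+     * <p>Illustrative example: with {@code keys = {null, 10, 20, 30}} (leading null sentinel) and
+     * {@code key = 25} the result is 3 (index of 30, the first key >= 25); with {@code key = 35}
+     * the result is 4, i.e. {@code keys.length}.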
+     */
+    protected final int findChildren(final Object key, final Object[] keys) {
+        int left = 0;
+        if(keys[0] == null) left++;
+        int right = keys[keys.length-1] == null ? keys.length-1 :  keys.length;
+
+        int middle;
+
+        // binary search
+        while (true) {
+            middle = (left + right) / 2;
+            if(keys[middle]==null) return middle; //null is positive infinity
+            if (comparator.compare(keys[middle], key) < 0) {
+                left = middle + 1;
+            } else {
+                right = middle;
+            }
+            if (left >= right) {
+                return  right;
+            }
+        }
+
+    }
+
+    @Override
+	public V get(Object key){
+        if(key==null) throw new NullPointerException();
+        K v = (K) key;
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+        BNode A = engine.get(current, nodeSerializer);
+
+        //dive until  leaf
+        while(!A.isLeaf()){
+            current = nextDir((DirNode) A, v);
+            A = engine.get(current, nodeSerializer);
+        }
+
+        //now at leaf level
+        LeafNode leaf = (LeafNode) A;
+        int pos = findChildren(v, leaf.keys);
+        while(pos == leaf.keys.length){
+            //follow next link on leaf until necessary
+            leaf = (LeafNode) engine.get(leaf.next, nodeSerializer);
+            pos = findChildren(v, leaf.keys);
+        }
+
+        if(pos==leaf.keys.length-1){
+            return null; //pos points at the last slot (the high-key sentinel), which never holds a real entry
+        }
+        //finish search
+        if(leaf.keys[pos]!=null && 0==comparator.compare(v,leaf.keys[pos])){
+            Object ret = (hasValues? leaf.vals[pos] : Utils.EMPTY_STRING);
+            return valExpand(ret);
+        }else
+            return null;
+    }
+
+    protected V valExpand(Object ret) {
+        if(valsOutsideNodes && ret!=null) {
+            long recid = ((ValRef)ret).recid;
+            ret = engine.get(recid, valueSerializer);
+        }
+        return (V) ret;
+    }
+
+    protected long nextDir(DirNode d, Object key) {
+        int pos = findChildren(key, d.keys) - 1;
+        if(pos<0) pos = 0;
+        return d.child[pos];
+    }
+
+
+    @Override
+    public V put(K key, V value){
+        if(key==null||value==null) throw new NullPointerException();
+        return put2(key,value, false);
+    }
+
+    protected V put2(K v, V value2, final boolean putOnlyIfAbsent){
+        if(v == null) throw new IllegalArgumentException("null key");
+        if(value2 == null) throw new IllegalArgumentException("null value");
+        Utils.checkMapValueIsNotCollecion(value2);
+
+        V value = value2;
+        if(valsOutsideNodes){
+            long recid = engine.put(value2, valueSerializer);
+            value = (V) new ValRef(recid);
+        }
+
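+        // recids of parent dir nodes visited on the way down; replayed bottom-up when node splits propagate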
+        int stackPos = -1;
+        long[] stackVals = new long[4];
+
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+
+        BNode A = engine.get(current, nodeSerializer);
+        while(!A.isLeaf()){
+            long t = current;
+            current = nextDir((DirNode) A, v);
+            if(current == A.child()[A.child().length-1]){
+                //followed a right link rather than a parent-child edge, so do not record a parent
+            }else{
+                //stack push t
+                stackPos++;
+                if(stackVals.length == stackPos) //grow if needed
+                    stackVals = Arrays.copyOf(stackVals, stackVals.length*2);
+                stackVals[stackPos] = t;
+            }
+            A = engine.get(current, nodeSerializer);
+        }
+        int level = 1;
+
+        long p=0;
+
+        while(true){
+            boolean found;
+            do{
+                nodeLocks.lock(current);
+                found = true;
+                A = engine.get(current, nodeSerializer);
+                int pos = findChildren(v, A.keys());
+                if(pos<A.keys().length-1 &&  v!=null && A.keys()[pos]!=null &&
+                        0==comparator.compare(v,A.keys()[pos])){
+
+                    Object oldVal =   (hasValues? A.vals()[pos] : Utils.EMPTY_STRING);
+                    if(putOnlyIfAbsent){
+                        //is not absent, so quit
+                        nodeLocks.unlock(current);
+                        nodeLocks.assertNoLocks();
+                        V ret =  valExpand(oldVal);
+                        notify(v,ret, value2);
+                        return ret;
+                    }
+                    //insert new
+                    Object[] vals = null;
+                    if(hasValues){
+                        vals = Arrays.copyOf(A.vals(), A.vals().length);
+                        vals[pos] = value;
+                    }
+
+                    A = new LeafNode(Arrays.copyOf(A.keys(), A.keys().length), vals, ((LeafNode)A).next);
+                    engine.update(current, A, nodeSerializer);
+                    //key already existed; its value was replaced in place
+                    nodeLocks.unlock(current);
+                    nodeLocks.assertNoLocks();
+                    V ret =  valExpand(oldVal);
+                    notify(v,ret, value2);
+                    return ret;
+                }
+
+                if(A.highKey() != null && comparator.compare(v, A.highKey())>0){
+                    //follow link until necessary
+                    nodeLocks.unlock(current);
+                    found = false;
+                    int pos2 = findChildren(v, A.keys());
+                    while(A!=null && pos2 == A.keys().length){
+                        //TODO lock?
+                        long next = A.next();
+
+                        if(next==0) break;
+                        current = next;
+                        A = engine.get(current, nodeSerializer);
+                    }
+
+                }
+
+
+            }while(!found);
+
+            // can the new item be inserted into A without splitting it?
+            if(A.keys().length - (A.isLeaf()?2:1)<maxNodeSize){
+                int pos = findChildren(v, A.keys());
+                Object[] keys = Utils.arrayPut(A.keys(), pos, v);
+
+                if(A.isLeaf()){
+                    Object[] vals = hasValues? Utils.arrayPut(A.vals(), pos, value): null;
+                    LeafNode n = new LeafNode(keys, vals, ((LeafNode)A).next);
+                    engine.update(current, n, nodeSerializer);
+                }else{
+                    if(p==0)
+                        throw new InternalError();
+                    long[] child = Utils.arrayLongPut(A.child(), pos, p);
+                    DirNode d = new DirNode(keys, child);
+                    engine.update(current, d, nodeSerializer);
+                }
+
+                nodeLocks.unlock(current);
+                nodeLocks.assertNoLocks();
+                notify(v,  null, value2);
+                return null;
+            }else{
+                //node is not safe, it requires splitting
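+                // B-link split outline: move the upper half of the entries into a new right sibling B,
+                // keep the lower half in A together with a link to B, then insert B's separator key into
+                // the parent taken from the stack, or create a new root if A was the root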
+                final boolean isRoot = (current == rootRecid);
+
+                final int pos = findChildren(v, A.keys());
+                final Object[] keys = Utils.arrayPut(A.keys(), pos, v);
+                final Object[] vals = (A.isLeaf() && hasValues)? Utils.arrayPut(A.vals(), pos, value) : null;
+                final long[] child = A.isLeaf()? null : Utils.arrayLongPut(A.child(), pos, p);
+                final int splitPos = keys.length/2;
+                BNode B;
+                if(A.isLeaf()){
+                    Object[] vals2 = null;
+                    if(hasValues){
+                        vals2 = Arrays.copyOfRange(vals, splitPos, vals.length);
+                        vals2[0] = null;
+                    }
+
+                    B = new LeafNode(
+                                Arrays.copyOfRange(keys, splitPos, keys.length),
+                                vals2,
+                                ((LeafNode)A).next);
+                }else{
+                    B = new DirNode(Arrays.copyOfRange(keys, splitPos, keys.length),
+                                Arrays.copyOfRange(child, splitPos, keys.length));
+                }
+                long q = engine.put(B, nodeSerializer);
+                if(A.isLeaf()){  // splitPos+1 is there so A gets the new high value (key)
+                    Object[] keys2 = Arrays.copyOf(keys, splitPos+2);
+                    keys2[keys2.length-1] = keys2[keys2.length-2];
+                    Object[] vals2 = null;
+                    if(hasValues){
+                        vals2 = Arrays.copyOf(vals, splitPos+2);
+                        vals2[vals2.length-1] = null;
+                    }
+                    //TODO check high/low keys overlap
+                    A = new LeafNode(keys2, vals2, q);
+                }else{
+                    long[] child2 = Arrays.copyOf(child, splitPos+1);
+                    child2[splitPos] = q;
+                    A = new DirNode(Arrays.copyOf(keys, splitPos+1), child2);
+                }
+                engine.update(current, A, nodeSerializer);
+
+                if(!isRoot){
+                    nodeLocks.unlock(current);
+                    p = q;
+                    v = (K) A.highKey();
+                    level = level+1;
+                    if(stackPos!=-1){ //if stack is not empty
+                        current = stackVals[stackPos--];
+                    }else{
+                        current = -1; //TODO pointer to left most node at level level
+                        throw new InternalError();
+                    }
+                }else{
+                    BNode R = new DirNode(
+                            new Object[]{A.keys()[0], A.highKey(), B.highKey()},
+                            new long[]{current,q, 0});
+
+
+                    long newRootRecid = engine.put(R, nodeSerializer);
+
+                    //TODO tree root locking
+                    engine.update(rootRecidRef, newRootRecid, Serializer.LONG_SERIALIZER);
+
+
+                    //TODO update tree levels
+                    nodeLocks.unlock(current);
+                    nodeLocks.assertNoLocks();
+                    notify(v, null, value2);
+                    return null;
+                }
+            }
+        }
+    }
+
+
+    class BTreeIterator{
+        LeafNode currentLeaf;
+        K lastReturnedKey;
+        int currentPos;
+
+        BTreeIterator(){
+            //find left-most leaf
+            final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+            BNode node = engine.get(rootRecid, nodeSerializer);
+            while(!node.isLeaf()){
+                node = engine.get(node.child()[0], nodeSerializer);
+            }
+            currentLeaf = (LeafNode) node;
+            currentPos = 1;
+
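+            // a leaf whose keys array holds only the two null sentinels contains no real entries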
+            while(currentLeaf.keys.length==2){
+                //follow link until leaf is not empty
+                if(currentLeaf.next == 0){
+                    currentLeaf = null;
+                    return;
+                }
+                currentLeaf = (LeafNode) engine.get(currentLeaf.next, nodeSerializer);
+            }
+        }
+
+        public boolean hasNext(){
+            return currentLeaf!=null;
+        }
+
+        public void remove(){
+            if(lastReturnedKey==null) throw new IllegalStateException();
+            BTreeMap.this.remove(lastReturnedKey);
+            lastReturnedKey = null;
+        }
+
+        protected void moveToNext(){
+            if(currentLeaf==null) return;
+            lastReturnedKey = (K) currentLeaf.keys[currentPos];
+            currentPos++;
+            if(currentPos == currentLeaf.keys.length-1){
+                //move to next leaf
+                if(currentLeaf.next==0){
+                    currentLeaf = null;
+                    currentPos=-1;
+                    return;
+                }
+                currentPos = 1;
+                currentLeaf = (LeafNode) engine.get(currentLeaf.next, nodeSerializer);
+                while(currentLeaf.keys.length==2){
+                    if(currentLeaf.next ==0){
+                        currentLeaf = null;
+                        currentPos=-1;
+                        return;
+                    }
+                    currentLeaf = (LeafNode) engine.get(currentLeaf.next, nodeSerializer);
+                }
+            }
+        }
+    }
+
+    @Override
+	public V remove(Object key) {
+        return remove2(key, null);
+    }
+
+    private V remove2(Object key, Object value) {
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+        BNode A = engine.get(current, nodeSerializer);
+        while(!A.isLeaf()){
+            current = nextDir((DirNode) A, key);
+            A = engine.get(current, nodeSerializer);
+        }
+
+        while(true){
+
+            nodeLocks.lock(current);
+            A = engine.get(current, nodeSerializer);
+            int pos = findChildren(key, A.keys());
+            if(pos<A.keys().length&& key!=null && A.keys()[pos]!=null &&
+                    0==comparator.compare(key,A.keys()[pos])){
+                //delete from node
+                Object oldVal =  hasValues? A.vals()[pos] : Utils.EMPTY_STRING;
+                oldVal = valExpand(oldVal);
+                if(value!=null && !value.equals(oldVal)){
+                    nodeLocks.unlock(current);
+                    return null;
+                }
+                //pos is the last slot (high-key sentinel); it never holds a real entry in this node
+                if(pos == A.keys().length-1 && value == null){
+                    nodeLocks.unlock(current);
+                    return null;
+                }
+
+                Object[] keys2 = new Object[A.keys().length-1];
+                System.arraycopy(A.keys(),0,keys2, 0, pos);
+                System.arraycopy(A.keys(), pos+1, keys2, pos, keys2.length-pos);
+
+                Object[] vals2 = null;
+                if(hasValues){
+                    vals2 = new Object[A.vals().length-1];
+                    System.arraycopy(A.vals(),0,vals2, 0, pos);
+                    System.arraycopy(A.vals(), pos+1, vals2, pos, vals2.length-pos);
+                }
+
+                A = new LeafNode(keys2, vals2, ((LeafNode)A).next);
+                engine.update(current, A, nodeSerializer);
+                nodeLocks.unlock(current);
+                notify((K)key, (V)oldVal, null);
+                return (V) oldVal;
+            }else{
+                nodeLocks.unlock(current);
+                //follow link until necessary
+                if(A.highKey() != null && comparator.compare(key, A.highKey())>0){
+                    int pos2 = findChildren(key, A.keys());
+                    while(pos2 == A.keys().length){
+                        //TODO lock?
+                        current = ((LeafNode)A).next;
+                        A = engine.get(current, nodeSerializer);
+                    }
+                }else{
+                    return null;
+                }
+            }
+        }
+
+    }
+
+
+    @Override
+    public void clear() {
+        Iterator iter = keyIterator();
+        while(iter.hasNext()){
+            iter.next();
+            iter.remove();
+        }
+    }
+
+
+    class BTreeKeyIterator extends BTreeIterator implements Iterator<K>{
+
+        @Override
+        public K next() {
+            if(currentLeaf == null) throw new NoSuchElementException();
+            K ret = (K) currentLeaf.keys[currentPos];
+            moveToNext();
+            return ret;
+        }
+    }
+
+    class BTreeValueIterator extends BTreeIterator implements Iterator<V>{
+
+        @Override
+        public V next() {
+            if(currentLeaf == null) throw new NoSuchElementException();
+            Object ret = currentLeaf.vals[currentPos];
+            moveToNext();
+            return valExpand(ret);
+        }
+
+    }
+
+    class BTreeEntryIterator extends BTreeIterator implements  Iterator<Entry<K, V>>{
+
+        @Override
+        public Entry<K, V> next() {
+            if(currentLeaf == null) throw new NoSuchElementException();
+            K ret = (K) currentLeaf.keys[currentPos];
+            Object val = currentLeaf.vals[currentPos];
+            moveToNext();
+            return makeEntry(ret, valExpand(val));
+
+        }
+    }
+
+
+
+
+
+
+    protected Entry<K, V> makeEntry(Object key, Object value) {
+        if(value instanceof ValRef) throw new InternalError();
+        return new SimpleImmutableEntry<K, V>((K)key,  (V)value);
+    }
+
+
+    @Override
+    public boolean isEmpty() {
+        return !keyIterator().hasNext();
+    }
+
+    @Override
+    public int size(){
+        long size = 0;
+        BTreeIterator iter = new BTreeIterator();
+        while(iter.hasNext()){
+            iter.moveToNext();
+            size++;
+        }
+        return (int) size;
+    }
+
+    @Override
+    public V putIfAbsent(K key, V value) {
+        if(key == null || value == null) throw new NullPointerException();
+        return put2(key, value, true);
+    }
+
+    @Override
+    public boolean remove(Object key, Object value) {
+        if(key == null || (value == null)) throw new NullPointerException();
+        return remove2(key, value)!=null;
+    }
+
+    @Override
+    public boolean replace(K key, V oldValue, V newValue) {
+        if(key == null || oldValue == null || newValue == null ) throw new NullPointerException();
+
+        long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+        BNode node = engine.get(current, nodeSerializer);
+        //dive until leaf is found
+        while(!node.isLeaf()){
+            current = nextDir((DirNode) node, key);
+            node = engine.get(current, nodeSerializer);
+        }
+
+        nodeLocks.lock(current);
+        LeafNode leaf = (LeafNode) engine.get(current, nodeSerializer);
+
+        int pos = findChildren(key, node.keys());
+        while(pos==leaf.keys.length){
+            //follow leaf link until necessary
+            nodeLocks.lock(leaf.next);
+            nodeLocks.unlock(current);
+            current = leaf.next;
+            leaf = (LeafNode) engine.get(current, nodeSerializer);
+            pos = findChildren(key, leaf.keys); // search the freshly loaded leaf, not the stale 'node' reference
+        }
+
+        boolean ret = false;
+        if( key!=null && leaf.keys()[pos]!=null &&
+                0==comparator.compare(key,leaf.keys[pos])){
+            Object val  = leaf.vals[pos];
+            val = valExpand(val);
+            if(oldValue.equals(val)){ //TODO use comparator here?
+                Object[] vals = Arrays.copyOf(leaf.vals, leaf.vals.length);
+                notify(key, oldValue, newValue);
+                if(valsOutsideNodes){
+                    long recid = engine.put(newValue, valueSerializer);
+                    newValue = (V) new ValRef(recid);
+                }
+                vals[pos] = newValue;
+                leaf = new LeafNode(Arrays.copyOf(leaf.keys, leaf.keys.length), vals, leaf.next);
+
+                engine.update(current, leaf, nodeSerializer);
+
+                ret = true;
+            }
+        }
+        nodeLocks.unlock(current);
+        return ret;
+    }
+
+    @Override
+    public V replace(K key, V value) {
+        if(key == null || value == null) throw new NullPointerException();
+        final long rootRecid = engine.get(rootRecidRef,Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+        BNode node = engine.get(current, nodeSerializer);
+        //dive until leaf is found
+        while(!node.isLeaf()){
+            current = nextDir((DirNode) node, key);
+            node = engine.get(current, nodeSerializer);
+        }
+
+        nodeLocks.lock(current);
+        LeafNode leaf = (LeafNode) engine.get(current, nodeSerializer);
+
+        int pos = findChildren(key, node.keys());
+        while(pos==leaf.keys.length){
+            //follow leaf link until necessary
+            nodeLocks.lock(leaf.next);
+            nodeLocks.unlock(current);
+            current = leaf.next;
+            leaf = (LeafNode) engine.get(current, nodeSerializer);
+            pos = findChildren(key, leaf.keys); // search the freshly loaded leaf, not the stale 'node' reference
+        }
+
+        Object ret = null;
+        if( key!=null && leaf.keys()[pos]!=null &&
+                0==comparator.compare(key,leaf.keys[pos])){
+            Object[] vals = Arrays.copyOf(leaf.vals, leaf.vals.length);
+            Object oldVal = vals[pos];
+            ret =  valExpand(oldVal);
+            notify(key, (V)ret, value);
+            if(valsOutsideNodes && value!=null){
+                long recid = engine.put(value, valueSerializer);
+                value = (V) new ValRef(recid);
+            }
+            vals[pos] = value;
+            leaf = new LeafNode(Arrays.copyOf(leaf.keys, leaf.keys.length), vals, leaf.next);
+            engine.update(current, leaf, nodeSerializer);
+
+
+        }
+        nodeLocks.unlock(current);
+        return (V)ret;
+    }
+
+
+    @Override
+    public Comparator<? super K> comparator() {
+        return comparator;
+    }
+
+
+    @Override
+    public Map.Entry<K,V> firstEntry() {
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        BNode n = engine.get(rootRecid, nodeSerializer);
+        while(!n.isLeaf()){
+            n = engine.get(n.child()[0], nodeSerializer);
+        }
+        LeafNode l = (LeafNode) n;
+        //follow link until necessary
+        while(l.keys.length==2){
+            if(l.next==0) return null;
+            l = (LeafNode) engine.get(l.next, nodeSerializer);
+        }
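+        // keys[0] is the left null sentinel, so the first real entry sits at index 1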
+        return makeEntry(l.keys[1], hasValues?valExpand(l.vals[1]):Utils.EMPTY_STRING);
+    }
+
+
+    @Override
+    public Entry<K, V> pollFirstEntry() {
+        while(true){
+            Entry<K, V> e = firstEntry();
+            if(e==null || remove(e.getKey(),e.getValue())){
+                return e;
+            }
+        }
+    }
+
+    @Override
+    public Entry<K, V> pollLastEntry() {
+        while(true){
+            Entry<K, V> e = lastEntry();
+            if(e==null || remove(e.getKey(),e.getValue())){
+                return e;
+            }
+        }
+    }
+
+
+    protected Entry<K,V> findSmaller(K key,boolean inclusive){
+        if(key==null) throw new NullPointerException();
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        BNode n = engine.get(rootRecid, nodeSerializer);
+
+        Entry<K,V> k = findSmallerRecur(n, key, inclusive);
+        if(k==null || (k.getValue()==null)) return null;
+        return k;
+    }
+
+    private Entry<K, V> findSmallerRecur(BNode n, K key, boolean inclusive) {
+        final boolean leaf = n.isLeaf();
+        final int start = leaf ? n.keys().length-2 : n.keys().length-1;
+        final int end = leaf?1:0;
+        final int res = inclusive? 1 : 0;
+        for(int i=start;i>=end; i--){
+            final Object key2 = n.keys()[i];
+            int comp = (key2==null)? -1 : comparator.compare(key2, key);
+            if(comp<res){
+                if(leaf){
+                    return key2==null ? null :
+                            makeEntry(key2, hasValues?valExpand(n.vals()[i]):Utils.EMPTY_STRING);
+                }else{
+                    final long recid = n.child()[i];
+                    if(recid==0) continue;
+                    BNode n2 = engine.get(recid, nodeSerializer);
+                    Entry<K,V> ret = findSmallerRecur(n2, key, inclusive);
+                    if(ret!=null) return ret;
+                }
+            }
+        }
+
+        return null;
+    }
+
+
+    @Override
+    public Map.Entry<K,V> lastEntry() {
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        BNode n = engine.get(rootRecid, nodeSerializer);
+        Entry e = lastEntryRecur(n);
+        if(e!=null && e.getValue()==null) return null;
+        return e;
+    }
+
+
+    private Map.Entry<K,V> lastEntryRecur(BNode n){
+        if(n.isLeaf()){
+            //follow next node if available
+            if(n.next()!=0){
+                BNode n2 = engine.get(n.next(), nodeSerializer);
+                return lastEntryRecur(n2);
+            }
+
+            //iterate over keys to find last non null key
+            for(int i=n.keys().length-1; i>=0;i--){
+                Object k = n.keys()[i];
+                if(k!=null) {
+                    return makeEntry(k, hasValues?valExpand(n.vals()[i]):Utils.EMPTY_STRING);
+                }
+            }
+        }else{
+            //dir node, dive deeper
+            for(int i=n.child().length-1; i>=0;i--){
+                long childRecid = n.child()[i];
+                if(childRecid==0) continue;
+                BNode n2 = engine.get(childRecid, nodeSerializer);
+                Entry<K,V> ret = lastEntryRecur(n2);
+                if(ret!=null) return ret;
+            }
+        }
+        return null;
+    }
+
+    @Override
+	public Map.Entry<K,V> lowerEntry(K key) {
+        if(key==null) throw new NullPointerException();
+        return findSmaller(key, false);
+    }
+
+    @Override
+	public K lowerKey(K key) {
+        Entry<K,V> n = lowerEntry(key);
+        return (n == null)? null : n.getKey();
+    }
+
+    @Override
+	public Map.Entry<K,V> floorEntry(K key) {
+        if(key==null) throw new NullPointerException();
+        return findSmaller(key, true);
+    }
+
+    @Override
+	public K floorKey(K key) {
+        Entry<K,V> n = floorEntry(key);
+        return (n == null)? null : n.getKey();
+    }
+
+    @Override
+	public Map.Entry<K,V> ceilingEntry(K key) {
+        if(key==null) throw new NullPointerException();
+        return findLarger(key, true);
+    }
+
+    protected Entry<K, V> findLarger(K key, boolean inclusive) {
+        if(key==null) return null;
+        K v = (K) key;
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        long current = rootRecid;
+        BNode A = engine.get(current, nodeSerializer);
+
+        //dive until  leaf
+        while(!A.isLeaf()){
+            current = nextDir((DirNode) A, v);
+            A = engine.get(current, nodeSerializer);
+        }
+
+        //now at leaf level
+        LeafNode leaf = (LeafNode) A;
+        //follow link until first matching node is found
+        final int comp = inclusive?1:0;
+        while(true){
+            for(int i=1;i<leaf.keys.length-1;i++){
+                if(leaf.keys[i]==null) continue;
+
+                if(comparator.compare(key, leaf.keys[i])<comp){
+                    return makeEntry(leaf.keys[i], hasValues?valExpand(leaf.vals[i]):Utils.EMPTY_STRING);
+                }
+
+
+            }
+            if(leaf.next==0) return null; //reached end
+            leaf = (LeafNode) engine.get(leaf.next, nodeSerializer);
+        }
+
+    }
+
+    @Override
+	public K ceilingKey(K key) {
+        if(key==null) throw new NullPointerException();
+        Entry<K,V> n = ceilingEntry(key);
+        return (n == null)? null : n.getKey();
+    }
+
+    @Override
+	public Map.Entry<K,V> higherEntry(K key) {
+        if(key==null) throw new NullPointerException();
+        return findLarger(key, false);
+    }
+
+    @Override
+	public K higherKey(K key) {
+        if(key==null) throw new NullPointerException();
+        Entry<K,V> n = higherEntry(key);
+        return (n == null)? null : n.getKey();
+    }
+
+    @Override
+    public boolean containsKey(Object key) {
+        if(key==null) throw new NullPointerException();
+        return get(key)!=null;
+    }
+
+    @Override
+    public boolean containsValue(Object value){
+        if(value ==null) throw new NullPointerException();
+        Iterator<V> valueIter = valueIterator();
+        while(valueIter.hasNext()){
+            if(value.equals(valueIter.next()))
+                return true;
+        }
+        return false;
+    }
+
+
+    @Override
+    public K firstKey() {
+        Entry<K,V> e = firstEntry();
+        if(e==null) throw new NoSuchElementException();
+        return e.getKey();
+    }
+
+    @Override
+    public K lastKey() {
+        Entry<K,V> e = lastEntry();
+        if(e==null) throw new NoSuchElementException();
+        return e.getKey();
+    }
+
+
+    @Override
+    public ConcurrentNavigableMap<K,V> subMap(K fromKey,
+                                              boolean fromInclusive,
+                                              K toKey,
+                                              boolean toInclusive) {
+        if (fromKey == null || toKey == null)
+            throw new NullPointerException();
+        return new SubMap<K,V>
+                ( this, fromKey, fromInclusive, toKey, toInclusive);
+    }
+
+    @Override
+    public ConcurrentNavigableMap<K,V> headMap(K toKey,
+                                               boolean inclusive) {
+        if (toKey == null)
+            throw new NullPointerException();
+        return new SubMap<K,V>
+                (this, null, false, toKey, inclusive);
+    }
+
+    @Override
+    public ConcurrentNavigableMap<K,V> tailMap(K fromKey,
+                                               boolean inclusive) {
+        if (fromKey == null)
+            throw new NullPointerException();
+        return new SubMap<K,V>
+                (this, fromKey, inclusive, null, false);
+    }
+
+    @Override
+    public ConcurrentNavigableMap<K,V> subMap(K fromKey, K toKey) {
+        return subMap(fromKey, true, toKey, false);
+    }
+
+    @Override
+    public ConcurrentNavigableMap<K,V> headMap(K toKey) {
+        return headMap(toKey, false);
+    }
+
+    @Override
+    public ConcurrentNavigableMap<K,V> tailMap(K fromKey) {
+        return tailMap(fromKey, true);
+    }
+
+
+    Iterator<K> keyIterator() {
+        return new BTreeKeyIterator();
+    }
+
+    Iterator<V> valueIterator() {
+        return new BTreeValueIterator();
+    }
+
+    Iterator<Map.Entry<K,V>> entryIterator() {
+        return new BTreeEntryIterator();
+    }
+
+
+    /* ---------------- View methods -------------- */
+
+    @Override
+	public NavigableSet<K> keySet() {
+        return keySet;
+    }
+
+    @Override
+	public NavigableSet<K> navigableKeySet() {
+        return keySet;
+    }
+
+    @Override
+	public Collection<V> values() {
+        return values;
+    }
+
+    @Override
+	public Set<Map.Entry<K,V>> entrySet() {
+        return entrySet;
+    }
+
+    @Override
+	public ConcurrentNavigableMap<K,V> descendingMap() {
+        throw new UnsupportedOperationException("descending not supported");
+    }
+
+    @Override
+	public NavigableSet<K> descendingKeySet() {
+        throw new UnsupportedOperationException("descending not supported");
+    }
+
+    static final <E> List<E> toList(Collection<E> c) {
+        // Using size() here would be a pessimization.
+        List<E> list = new ArrayList<E>();
+        for (E e : c){
+            list.add(e);
+        }
+        return list;
+    }
+
+
+
+    static final class KeySet<E> extends AbstractSet<E> implements NavigableSet<E> {
+        protected final ConcurrentNavigableMap<E,Object> m;
+        private final boolean hasValues;
+        KeySet(ConcurrentNavigableMap<E,Object> map, boolean hasValues) {
+            m = map;
+            this.hasValues = hasValues;
+        }
+        @Override
+		public int size() { return m.size(); }
+        @Override
+		public boolean isEmpty() { return m.isEmpty(); }
+        @Override
+		public boolean contains(Object o) { return m.containsKey(o); }
+        @Override
+		public boolean remove(Object o) { return m.remove(o) != null; }
+        @Override
+		public void clear() { m.clear(); }
+        @Override
+		public E lower(E e) { return m.lowerKey(e); }
+        @Override
+		public E floor(E e) { return m.floorKey(e); }
+        @Override
+		public E ceiling(E e) { return m.ceilingKey(e); }
+        @Override
+		public E higher(E e) { return m.higherKey(e); }
+        @Override
+		public Comparator<? super E> comparator() { return m.comparator(); }
+        @Override
+		public E first() { return m.firstKey(); }
+        @Override
+		public E last() { return m.lastKey(); }
+        @Override
+		public E pollFirst() {
+            Map.Entry<E,Object> e = m.pollFirstEntry();
+            return e == null? null : e.getKey();
+        }
+        @Override
+		public E pollLast() {
+            Map.Entry<E,Object> e = m.pollLastEntry();
+            return e == null? null : e.getKey();
+        }
+        @Override
+		public Iterator<E> iterator() {
+            if (m instanceof BTreeMap)
+                return ((BTreeMap<E,Object>)m).keyIterator();
+            else
+                return ((BTreeMap.SubMap<E,Object>)m).keyIterator();
+        }
+        @Override
+		public boolean equals(Object o) {
+            if (o == this)
+                return true;
+            if (!(o instanceof Set))
+                return false;
+            Collection<?> c = (Collection<?>) o;
+            try {
+                return containsAll(c) && c.containsAll(this);
+            } catch (ClassCastException unused)   {
+                return false;
+            } catch (NullPointerException unused) {
+                return false;
+            }
+        }
+        @Override
+		public Object[] toArray()     { return toList(this).toArray();  }
+        @Override
+		public <T> T[] toArray(T[] a) { return toList(this).toArray(a); }
+        @Override
+		public Iterator<E> descendingIterator() {
+            return descendingSet().iterator();
+        }
+        @Override
+		public NavigableSet<E> subSet(E fromElement,
+                                      boolean fromInclusive,
+                                      E toElement,
+                                      boolean toInclusive) {
+            return new KeySet<E>(m.subMap(fromElement, fromInclusive,
+                    toElement,   toInclusive),hasValues);
+        }
+        @Override
+		public NavigableSet<E> headSet(E toElement, boolean inclusive) {
+            return new KeySet<E>(m.headMap(toElement, inclusive),hasValues);
+        }
+        @Override
+		public NavigableSet<E> tailSet(E fromElement, boolean inclusive) {
+            return new KeySet<E>(m.tailMap(fromElement, inclusive),hasValues);
+        }
+        @Override
+		public NavigableSet<E> subSet(E fromElement, E toElement) {
+            return subSet(fromElement, true, toElement, false);
+        }
+        @Override
+		public NavigableSet<E> headSet(E toElement) {
+            return headSet(toElement, false);
+        }
+        @Override
+		public NavigableSet<E> tailSet(E fromElement) {
+            return tailSet(fromElement, true);
+        }
+        @Override
+		public NavigableSet<E> descendingSet() {
+            return new KeySet(m.descendingMap(),hasValues);
+        }
+
+        @Override
+        public boolean add(E k) {
+            if(hasValues)
+                throw new UnsupportedOperationException();
+            else
+                return m.put(k,  Utils.EMPTY_STRING) == null;
+        }
+    }
+
+    static final class Values<E> extends AbstractCollection<E> {
+        private final ConcurrentNavigableMap<Object, E> m;
+        Values(ConcurrentNavigableMap<Object, E> map) {
+            m = map;
+        }
+        @Override
+		public Iterator<E> iterator() {
+            if (m instanceof BTreeMap)
+                return ((BTreeMap<Object,E>)m).valueIterator();
+            else
+                return ((SubMap<Object,E>)m).valueIterator();
+        }
+        @Override
+		public boolean isEmpty() {
+            return m.isEmpty();
+        }
+        @Override
+		public int size() {
+            return m.size();
+        }
+        @Override
+		public boolean contains(Object o) {
+            return m.containsValue(o);
+        }
+        @Override
+		public void clear() {
+            m.clear();
+        }
+        @Override
+		public Object[] toArray()     { return toList(this).toArray();  }
+        @Override
+		public <T> T[] toArray(T[] a) { return toList(this).toArray(a); }
+    }
+
+    static final class EntrySet<K1,V1> extends AbstractSet<Map.Entry<K1,V1>> {
+        private final ConcurrentNavigableMap<K1, V1> m;
+        EntrySet(ConcurrentNavigableMap<K1, V1> map) {
+            m = map;
+        }
+
+        @Override
+		public Iterator<Map.Entry<K1,V1>> iterator() {
+            if (m instanceof BTreeMap)
+                return ((BTreeMap<K1,V1>)m).entryIterator();
+            else
+                return ((SubMap<K1,V1>)m).entryIterator();
+        }
+
+        @Override
+		public boolean contains(Object o) {
+            if (!(o instanceof Map.Entry))
+                return false;
+            Map.Entry<K1,V1> e = (Map.Entry<K1,V1>)o;
+            K1 key = e.getKey();
+            if(key == null) return false;
+            V1 v = m.get(key);
+            return v != null && v.equals(e.getValue());
+        }
+        @Override
+		public boolean remove(Object o) {
+            if (!(o instanceof Map.Entry))
+                return false;
+            Map.Entry<K1,V1> e = (Map.Entry<K1,V1>)o;
+            K1 key = e.getKey();
+            if(key == null) return false;
+            return m.remove(key,
+                    e.getValue());
+        }
+        @Override
+		public boolean isEmpty() {
+            return m.isEmpty();
+        }
+        @Override
+		public int size() {
+            return m.size();
+        }
+        @Override
+		public void clear() {
+            m.clear();
+        }
+        @Override
+		public boolean equals(Object o) {
+            if (o == this)
+                return true;
+            if (!(o instanceof Set))
+                return false;
+            Collection<?> c = (Collection<?>) o;
+            try {
+                return containsAll(c) && c.containsAll(this);
+            } catch (ClassCastException unused)   {
+                return false;
+            } catch (NullPointerException unused) {
+                return false;
+            }
+        }
+        @Override
+		public Object[] toArray()     { return toList(this).toArray();  }
+        @Override
+		public <T> T[] toArray(T[] a) { return toList(this).toArray(a); }
+    }
+
+
+
+    static protected  class SubMap<K,V> extends AbstractMap<K,V> implements  ConcurrentNavigableMap<K,V> {
+
+        protected final BTreeMap<K,V> m;
+
+        protected final K lo;
+        protected final boolean loInclusive;
+
+        protected final K hi;
+        protected final boolean hiInclusive;
+
+        public SubMap(BTreeMap<K,V> m, K lo, boolean loInclusive, K hi, boolean hiInclusive) {
+            this.m = m;
+            this.lo = lo;
+            this.loInclusive = loInclusive;
+            this.hi = hi;
+            this.hiInclusive = hiInclusive;
+            if(lo!=null && hi!=null && m.comparator.compare(lo, hi)>0){
+                    throw new IllegalArgumentException();
+            }
+
+
+        }
+
+
+/* ----------------  Map API methods -------------- */
+
+        @Override
+		public boolean containsKey(Object key) {
+            if (key == null) throw new NullPointerException();
+            K k = (K)key;
+            return inBounds(k) && m.containsKey(k);
+        }
+
+        @Override
+		public V get(Object key) {
+            if (key == null) throw new NullPointerException();
+            K k = (K)key;
+            return ((!inBounds(k)) ? null : m.get(k));
+        }
+
+        @Override
+		public V put(K key, V value) {
+            checkKeyBounds(key);
+            return m.put(key, value);
+        }
+
+        @Override
+		public V remove(Object key) {
+            K k = (K)key;
+            return (!inBounds(k))? null : m.remove(k);
+        }
+
+        @Override
+        public int size() {
+            Iterator<K> i = keyIterator();
+            int counter = 0;
+            while(i.hasNext()){
+                counter++;
+                i.next();
+            }
+            return counter;
+        }
+
+        @Override
+		public boolean isEmpty() {
+            return !keyIterator().hasNext();
+        }
+
+        @Override
+		public boolean containsValue(Object value) {
+            if(value==null) throw new NullPointerException();
+            Iterator<V> i = valueIterator();
+            while(i.hasNext()){
+                if(value.equals(i.next()))
+                    return true;
+            }
+            return false;
+        }
+
+        @Override
+		public void clear() {
+            Iterator<K> i = keyIterator();
+            while(i.hasNext()){
+                i.next();
+                i.remove();
+            }
+        }
+
+
+        /* ----------------  ConcurrentMap API methods -------------- */
+
+        @Override
+		public V putIfAbsent(K key, V value) {
+            checkKeyBounds(key);
+            return m.putIfAbsent(key, value);
+        }
+
+        @Override
+		public boolean remove(Object key, Object value) {
+            K k = (K)key;
+            return inBounds(k) && m.remove(k, value);
+        }
+
+        @Override
+		public boolean replace(K key, V oldValue, V newValue) {
+            checkKeyBounds(key);
+            return m.replace(key, oldValue, newValue);
+        }
+
+        @Override
+		public V replace(K key, V value) {
+            checkKeyBounds(key);
+            return m.replace(key, value);
+        }
+
+        /* ----------------  SortedMap API methods -------------- */
+
+        @Override
+		public Comparator<? super K> comparator() {
+            return m.comparator();
+        }
+
+        /* ----------------  Relational methods -------------- */
+
+        @Override
+		public Map.Entry<K,V> lowerEntry(K key) {
+            if(key==null)throw new NullPointerException();
+            if(tooLow(key))return null;
+
+            if(tooHigh(key))
+                return lastEntry();
+
+            Entry<K,V> r = m.lowerEntry(key);
+            return r!=null && !tooLow(r.getKey()) ? r :null;
+        }
+
+        @Override
+		public K lowerKey(K key) {
+            Entry<K,V> n = lowerEntry(key);
+            return (n == null)? null : n.getKey();
+        }
+
+        @Override
+		public Map.Entry<K,V> floorEntry(K key) {
+            if(key==null) throw new NullPointerException();
+            if(tooLow(key)) return null;
+
+            if(tooHigh(key)){
+                return lastEntry();
+            }
+
+            Entry<K,V> ret = m.floorEntry(key);
+            if(ret!=null && tooLow(ret.getKey())) return null;
+            return ret;
+
+        }
+
+        @Override
+		public K floorKey(K key) {
+            Entry<K,V> n = floorEntry(key);
+            return (n == null)? null : n.getKey();
+        }
+
+        @Override
+		public Map.Entry<K,V> ceilingEntry(K key) {
+            if(key==null) throw new NullPointerException();
+            if(tooHigh(key)) return null;
+
+            if(tooLow(key)){
+                return firstEntry();
+            }
+
+            Entry<K,V> ret = m.ceilingEntry(key);
+            if(ret!=null && tooHigh(ret.getKey())) return null;
+            return ret;
+        }
+
+        @Override
+        public K ceilingKey(K key) {
+            Entry<K,V> k = ceilingEntry(key);
+            return k!=null? k.getKey():null;
+        }
+
+        @Override
+        public Entry<K, V> higherEntry(K key) {
+            if(key==null) throw new NullPointerException();
+            if(tooHigh(key)) return null;
+            if(tooLow(key)) return firstEntry(); //mirror ceilingEntry(): keys below the range start at firstEntry
+            Entry<K,V> r = m.higherEntry(key);
+            return r!=null && inBounds(r.getKey()) ? r : null;
+        }
+
+        @Override
+        public K higherKey(K key) {
+            Entry<K,V> k = higherEntry(key);
+            return k!=null? k.getKey():null;
+        }
+
+
+        @Override
+		public K firstKey() {
+            Entry<K,V> e = firstEntry();
+            if(e==null) throw new NoSuchElementException();
+            return e.getKey();
+        }
+
+        @Override
+		public K lastKey() {
+            Entry<K,V> e = lastEntry();
+            if(e==null) throw new NoSuchElementException();
+            return e.getKey();
+        }
+
+
+        @Override
+		public Map.Entry<K,V> firstEntry() {
+            Entry<K,V> k =
+                    lo==null ?
+                    m.firstEntry():
+                    m.findLarger(lo, loInclusive);
+            return k!=null && inBounds(k.getKey())? k : null;
+
+        }
+
+        @Override
+		public Map.Entry<K,V> lastEntry() {
+            Entry<K,V> k =
+                    hi==null ?
+                    m.lastEntry():
+                    m.findSmaller(hi, hiInclusive);
+
+            return k!=null && inBounds(k.getKey())? k : null;
+        }
+
+        @Override
+        public Entry<K, V> pollFirstEntry() {
+            while(true){
+                Entry<K, V> e = firstEntry();
+                if(e==null || remove(e.getKey(),e.getValue())){
+                    return e;
+                }
+            }
+        }
+
+        @Override
+        public Entry<K, V> pollLastEntry() {
+            while(true){
+                Entry<K, V> e = lastEntry();
+                if(e==null || remove(e.getKey(),e.getValue())){
+                    return e;
+                }
+            }
+        }
+
+
+
+
+        /**
+         * Utility to create submaps, where the given bounds override
+         * unbounded (null) ones and/or are checked against the existing bounds.
+         */
+        private SubMap<K,V> newSubMap(K fromKey,
+                                      boolean fromInclusive,
+                                      K toKey,
+                                      boolean toInclusive) {
+
+//            if(fromKey!=null && toKey!=null){
+//                int comp = m.comparator.compare(fromKey, toKey);
+//                if((fromInclusive||!toInclusive) && comp==0)
+//                    throw new IllegalArgumentException();
+//            }
+
+            if (lo != null) {
+                if (fromKey == null) {
+                    fromKey = lo;
+                    fromInclusive = loInclusive;
+                }
+                else {
+                    int c = m.comparator.compare(fromKey, lo);
+                    if (c < 0 || (c == 0 && !loInclusive && fromInclusive))
+                        throw new IllegalArgumentException("key out of range");
+                }
+            }
+            if (hi != null) {
+                if (toKey == null) {
+                    toKey = hi;
+                    toInclusive = hiInclusive;
+                }
+                else {
+                    int c = m.comparator.compare(toKey, hi);
+                    if (c > 0 || (c == 0 && !hiInclusive && toInclusive))
+                        throw new IllegalArgumentException("key out of range");
+                }
+            }
+            return new SubMap<K,V>(m, fromKey, fromInclusive,
+                    toKey, toInclusive);
+        }
+
+        @Override
+		public SubMap<K,V> subMap(K fromKey,
+                                  boolean fromInclusive,
+                                  K toKey,
+                                  boolean toInclusive) {
+            if (fromKey == null || toKey == null)
+                throw new NullPointerException();
+            return newSubMap(fromKey, fromInclusive, toKey, toInclusive);
+        }
+
+        @Override
+		public SubMap<K,V> headMap(K toKey,
+                                   boolean inclusive) {
+            if (toKey == null)
+                throw new NullPointerException();
+            return newSubMap(null, false, toKey, inclusive);
+        }
+
+        @Override
+		public SubMap<K,V> tailMap(K fromKey,
+                                   boolean inclusive) {
+            if (fromKey == null)
+                throw new NullPointerException();
+            return newSubMap(fromKey, inclusive, null, false);
+        }
+
+        @Override
+		public SubMap<K,V> subMap(K fromKey, K toKey) {
+            return subMap(fromKey, true, toKey, false);
+        }
+
+        @Override
+		public SubMap<K,V> headMap(K toKey) {
+            return headMap(toKey, false);
+        }
+
+        @Override
+		public SubMap<K,V> tailMap(K fromKey) {
+            return tailMap(fromKey, true);
+        }
+
+        @Override
+		public SubMap<K,V> descendingMap() {
+            throw new UnsupportedOperationException("Descending not supported");
+        }
+
+        @Override
+        public NavigableSet<K> navigableKeySet() {
+            return new KeySet<K>((ConcurrentNavigableMap<K,Object>) this,m.hasValues);
+        }
+
+
+        /* ----------------  Utilities -------------- */
+
+
+
+        private boolean tooLow(K key) {
+            if (lo != null) {
+                int c = m.comparator.compare(key, lo);
+                if (c < 0 || (c == 0 && !loInclusive))
+                    return true;
+            }
+            return false;
+        }
+
+        private boolean tooHigh(K key) {
+            if (hi != null) {
+                int c = m.comparator.compare(key, hi);
+                if (c > 0 || (c == 0 && !hiInclusive))
+                    return true;
+            }
+            return false;
+        }
+
+        private boolean inBounds(K key) {
+            return !tooLow(key) && !tooHigh(key);
+        }
+
+        private void checkKeyBounds(K key) throws IllegalArgumentException {
+            if (key == null)
+                throw new NullPointerException();
+            if (!inBounds(key))
+                throw new IllegalArgumentException("key out of range");
+        }
+
+
+
+
+
+        @Override
+        public NavigableSet<K> keySet() {
+            return new KeySet<K>((ConcurrentNavigableMap<K,Object>) this, m.hasValues);
+        }
+
+        @Override
+        public NavigableSet<K> descendingKeySet() {
+            throw new UnsupportedOperationException("Descending not supported");
+        }
+
+
+
+        @Override
+        public Set<Entry<K, V>> entrySet() {
+            return new EntrySet<K, V>(this);
+        }
+
+
+        /*
+         * ITERATORS
+         */
+
+        abstract class Iter<E> implements Iterator<E> {
+            Entry<K,V> current = SubMap.this.firstEntry();
+            Entry<K,V> last = null;
+
+
+            @Override
+			public boolean hasNext() {
+                return current!=null;
+            }
+
+
+            public void advance() {
+                if(current==null) throw new NoSuchElementException();
+                last = current;
+                current = SubMap.this.higherEntry(current.getKey());
+            }
+
+            @Override
+			public void remove() {
+                if(last==null) throw new IllegalStateException();
+                SubMap.this.remove(last.getKey());
+                last = null;
+            }
+
+        }
+        Iterator<K> keyIterator() {
+            return new Iter<K>() {
+                @Override
+                public K next() {
+                    advance();
+                    return last.getKey();
+                }
+            };
+        }
+
+        Iterator<V> valueIterator() {
+            return new Iter<V>() {
+
+                @Override
+                public V next() {
+                    advance();
+                    return last.getValue();
+                }
+            };
+        }
+
+        Iterator<Map.Entry<K,V>> entryIterator() {
+            return new Iter<Entry<K, V>>() {
+                @Override
+                public Entry<K, V> next() {
+                    advance();
+                    return last;
+                }
+            };
+        }
+
+    }
+
+
+    /**
+     * Makes a read-only snapshot view of the current Map. The snapshot is immutable and is not affected by modifications made by other threads.
+     * Useful if you need a consistent view of the Map.
+     * <p>
+     * Maintaining a snapshot has some overhead; the underlying Engine is closed after the Map view is GCed.
+     * Please make sure to release the reference to this Map view, so the snapshot view can be garbage collected.
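+     * <p>
+     * A minimal usage sketch (hypothetical names; assumes an existing BTreeMap instance called {@code map}):
+     * <pre>
+     *     NavigableMap&lt;String,String&gt; view = map.snapshot();
+     *     map.put("key", "new value");   // later modifications...
+     *     view.get("key");               // ...are not visible through the snapshot view
+     * </pre>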
+     *
+     * @return snapshot
+     */
+    public NavigableMap<K,V> snapshot(){
+        Engine snapshot = SnapshotEngine.createSnapshotFor(engine);
+
+        return new BTreeMap<K, V>(snapshot,treeRecid, defaultSerializer);
+    }
+
+
+
+    protected final Object modListenersLock = new Object();
+    protected Bind.MapListener<K,V>[] modListeners = new Bind.MapListener[0];
+
+    @Override
+    public void addModificationListener(Bind.MapListener<K,V> listener) {
+        synchronized (modListenersLock){
+            Bind.MapListener<K,V>[] modListeners2 =
+                    Arrays.copyOf(modListeners,modListeners.length+1);
+            modListeners2[modListeners2.length-1] = listener;
+            modListeners = modListeners2;
+        }
+
+    }
+
+    @Override
+    public void removeModificationListener(Bind.MapListener<K,V> listener) {
+        synchronized (modListenersLock){
+            for(int i=0;i<modListeners.length;i++){
+                if(modListeners[i]==listener) modListeners[i]=null;
+            }
+        }
+    }
+
+    protected void notify(K key, V oldValue, V newValue) {
+        if(oldValue instanceof ValRef) throw new InternalError();
+        if(newValue instanceof ValRef) throw new InternalError();
+
+        Bind.MapListener<K,V>[] modListeners2  = modListeners;
+        for(Bind.MapListener<K,V> listener:modListeners2){
+            if(listener!=null)
+                listener.update(key, oldValue, newValue);
+        }
+    }
+
+
+    /**
+     * Closes underlying storage and releases all resources.
+     * Used mostly with temporary collections where engine is not accessible.
+     */
+    public void close(){
+        engine.close();
+    }
+
+    public void printTreeStructure() {
+        final long rootRecid = engine.get(rootRecidRef, Serializer.LONG_SERIALIZER);
+        printRecur(this, rootRecid, "");
+    }
+
+    private static void printRecur(BTreeMap m, long recid, String s) {
+        BTreeMap.BNode n = (BTreeMap.BNode) m.engine.get(recid, m.nodeSerializer);
+        System.out.println(s+recid+"-"+n);
+        if(!n.isLeaf()){
+            for(int i=0;i<n.child().length-1;i++){
+                long recid2 = n.child()[i];
+                if(recid2!=0)
+                    printRecur(m, recid2, s+"  ");
+            }
+        }
+    }
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Bind.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Bind.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Bind.java	(revision 29363)
@@ -0,0 +1,191 @@
+package org.mapdb;
+
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * Collection binding
+ *
+ * @author Jan Kotek
+ */
+public final class Bind {
+
+    private Bind(){}
+
+    public static <K2,K1> Iterable<K1> findSecondaryKeys(final NavigableSet<Fun.Tuple2<K2,K1>> secondaryKeys, final K2 secondaryKey) {
+        return new Iterable<K1>(){
+            @Override
+            public Iterator<K1> iterator() {
+                //use range query to get all values
+                final Iterator<Fun.Tuple2<K2,K1>> iter =
+                    ((NavigableSet)secondaryKeys) //cast is workaround for generics
+                        .subSet(
+                                Fun.t2(secondaryKey,null), //NULL represents lower bound, everything is larger than null
+                                Fun.t2(secondaryKey,Fun.HI) // HI is the upper bound, everything is smaller than HI
+                        ).iterator();
+
+                return new Iterator<K1>() {
+                    @Override
+                    public boolean hasNext() {
+                        return iter.hasNext();
+                    }
+
+                    @Override
+                    public K1 next() {
+                        return iter.next().b;
+                    }
+
+                    @Override
+                    public void remove() {
+                        iter.remove();
+                    }
+                };
+            }
+        };
+
+    }
+
+
+    public interface MapListener<K,V>{
+        void update(K key, V oldVal, V newVal);
+    }
+
+    public interface MapWithModificationListener<K,V> extends Map<K,V> {
+        public void addModificationListener(MapListener<K,V> listener);
+        public void removeModificationListener(MapListener<K,V> listener);
+    }
+
+    public static void size(MapWithModificationListener map, final Atomic.Long size){
+        //set initial value first if necessary
+        if(size.get() == 0 && !map.isEmpty())
+            size.set(map.size()); //TODO long overflow?
+
+        map.addModificationListener(new MapListener() {
+            @Override
+            public void update(Object key, Object oldVal, Object newVal) {
+                if(oldVal == null && newVal!=null)
+                    size.incrementAndGet();
+                else if(oldVal!=null && newVal == null)
+                    size.decrementAndGet();
+                else{
+                    //update does not change collection size
+                }
+            }
+        });
+    }
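+
+    /*
+     * Usage sketch for size(): keeps an Atomic.Long counter in sync with the map size.
+     * Hypothetical names; assumes 'map' is a MapWithModificationListener and 'counter' an Atomic.Long
+     * (for example one created alongside the map).
+     *
+     *     Atomic.Long counter = ...;          // counter record created elsewhere
+     *     Bind.size(map, counter);            // from now on the counter tracks inserts and removals
+     *     long n = counter.get();             // cheap O(1) size read
+     */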
+
+    public static <K,V, V2> void secondaryValue(MapWithModificationListener<K, V> map,
+                                              final Map<K, V2> secondary,
+                                              final Fun.Function2<V2, K, V> fun){
+        //fill if empty
+        if(secondary.isEmpty()){
+            for(Map.Entry<K,V> e:map.entrySet())
+                secondary.put(e.getKey(), fun.run(e.getKey(),e.getValue()));
+        }
+        //hook listener
+        map.addModificationListener(new MapListener<K, V>() {
+            @Override
+            public void update(K key, V oldVal, V newVal) {
+                if(newVal == null){
+                    //removal
+                    secondary.remove(key);
+                }else{
+                    secondary.put(key, fun.run(key,newVal));
+                }
+            }
+        });
+    }
+
+    public static <K,V, K2> void secondaryKey(MapWithModificationListener<K, V> map,
+                                                final NavigableSet<Fun.Tuple2<K2, K>> secondary,
+                                                final Fun.Function2<K2, K, V> fun){
+        //fill if empty
+        if(secondary.isEmpty()){
+            for(Map.Entry<K,V> e:map.entrySet()){
+                secondary.add(Fun.t2(fun.run(e.getKey(),e.getValue()), e.getKey()));
+            }
+        }
+        //hook listener
+        map.addModificationListener(new MapListener<K, V>() {
+            @Override
+            public void update(K key, V oldVal, V newVal) {
+                if(newVal == null){
+                    //removal
+                    secondary.remove(Fun.t2(fun.run(key, oldVal), key));
+                }else if(oldVal==null){
+                    //insert
+                    secondary.add(Fun.t2(fun.run(key,newVal), key));
+                }else{
+                    //update, must remove old key and insert new
+                    K2 oldKey = fun.run(key, oldVal);
+                    K2 newKey = fun.run(key, newVal);
+                    if(oldKey == newKey || oldKey.equals(newKey)) return;
+                    secondary.remove(Fun.t2(oldKey, key));
+                    secondary.add(Fun.t2(newKey,key));
+                }
+            }
+        });
+    }
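+
+    /*
+     * Usage sketch combining secondaryKey() and findSecondaryKeys(). Hypothetical names; assumes
+     * 'persons' is a MapWithModificationListener<String,String> mapping a person name to a city, and
+     * 'byCity' is a NavigableSet<Fun.Tuple2<String,String>> whose comparator sorts null low and Fun.HI
+     * high, as findSecondaryKeys() requires.
+     *
+     *     Bind.secondaryKey(persons, byCity, new Fun.Function2<String, String, String>() {
+     *         @Override public String run(String name, String city) {
+     *             return city;                              // the secondary key is the city
+     *         }
+     *     });
+     *     // iterate all person names living in "Prague"
+     *     for (String name : Bind.findSecondaryKeys(byCity, "Prague")) {
+     *         System.out.println(name);
+     *     }
+     */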
+
+    public static <K,V> void mapInverse(MapWithModificationListener<K,V> primary,
+                                        NavigableSet<Fun.Tuple2<V, K>> inverse) {
+        Bind.secondaryKey(primary,inverse, new Fun.Function2<V, K,V>(){
+            @Override public V run(K key, V value) {
+                return value;
+            }
+        });
+    }
+
+
+    public static <K,V,C> void histogram(MapWithModificationListener<K,V> primary, final ConcurrentMap<C,Long> histogram,
+                                  final Fun.Function2<C, K, V> entryToCategory){
+
+        MapListener<K,V> listener = new MapListener<K, V>() {
+            @Override public void update(K key, V oldVal, V newVal) {
+                if(newVal == null){
+                    //removal
+                    C category = entryToCategory.run(key,oldVal);
+                    incrementHistogram(category, -1);
+                }else if(oldVal==null){
+                    //insert
+                    C category = entryToCategory.run(key,newVal);
+                    incrementHistogram(category, 1);
+                }else{
+                    //update, must remove old key and insert new
+                    C oldCat = entryToCategory.run(key, oldVal);
+                    C newCat = entryToCategory.run(key, newVal);
+                    if(oldCat == newCat || oldCat.equals(newCat)) return;
+                    incrementHistogram(oldCat,-1);
+                    incrementHistogram(newCat,1);
+                }
+
+            }
+
+            /** atomically update counter in histogram*/
+            private void incrementHistogram(C category, long i) {
+                for(;;){
+                    Long oldCount = histogram.get(category);
+                    if(oldCount == null){
+                        //insert new count
+                        if(histogram.putIfAbsent(category,i) == null)
+                            return;
+                    }else{
+                        //increase existing count
+                        Long newCount = oldCount+i;
+                        if(histogram.replace(category,oldCount, newCount))
+                            return;
+                    }
+                }
+            }
+        };
+
+        primary.addModificationListener(listener);
+    }
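+
+    /*
+     * Usage sketch for histogram(): maintains a per-category counter as the primary map changes.
+     * Hypothetical names; assumes 'orders' is a MapWithModificationListener<Long,String> mapping an
+     * order id to its status string, and 'counts' is a ConcurrentMap<String,Long>.
+     *
+     *     Bind.histogram(orders, counts, new Fun.Function2<String, Long, String>() {
+     *         @Override public String run(Long orderId, String status) {
+     *             return status;                            // the category is the order status
+     *         }
+     *     });
+     *     Long pending = counts.get("PENDING");             // number of orders currently in that status
+     */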
+
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CC.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CC.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CC.java	(revision 29363)
@@ -0,0 +1,61 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+/**
+ * Compiler Configuration. There are some static final boolean fields which describe the features MapDB was compiled with.
+ * <p/>
+ * MapDB can be compiled with or without certain features. For example, fine-grained logging is useful for debugging,
+ * but should not be present in a production version. Java does not have a preprocessor, so
+ * we use <a href="http://en.wikipedia.org/wiki/Dead_code_elimination">dead code elimination</a> to achieve the same effect.
+ * <p/>
+ * Typical usage:
+ * <pre>
+ *     if(CC.PARANOID && arg.calculateSize()!=33){  //calculateSize may take long time
+ *         throw new IllegalArgumentException("wrong size");
+ *     }
+ * </pre>
+ *
+ * @author  Jan Kotek
+ */
+public interface CC {
+
+
+    /**
+     * Compile with more assertions and verifications.
+     * For example, HashMap may check whether keys implement their hash function correctly.
+     * This may slow MapDB down by a factor of thousands.
+     */
+    boolean PARANOID = false;
+
+    /**
+     * Compile with fine trace logging statements (Logger.debug and Logger.trace).
+     */
+    boolean LOG_TRACE = false;
+
+    /**
+     * Log lock/unlock events. Useful for diagnosing deadlocks.
+     */
+    boolean LOG_LOCKS = false;
+
+    /**
+     * If true, MapDB will display warnings when the user is using the MapDB API in a wrong way.
+     */
+    boolean LOG_HINTS = true;
+
+}
+
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHardRef.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHardRef.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHardRef.java	(revision 29363)
@@ -0,0 +1,83 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+
+/**
+ * Cache which keeps created objects reachable via hard references.
+ * It checks free memory every N operations (10000). If the available memory drops below roughly 25% of the maximum heap (or below 10 MB), the cache is cleared.
+ *
+ * @author Jan Kotek
+ */
+public class CacheHardRef extends CacheLRU {
+
+    final static int CHECK_EVERY_N = 10000;
+
+    int counter = 0;
+
+    public CacheHardRef(Engine engine, int initialCapacity) {
+        super(engine, new LongConcurrentHashMap<Object>(initialCapacity));
+    }
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        checkFreeMem();
+        return super.get(recid, serializer);
+    }
+
+    private void checkFreeMem() {
+        if((counter++)%CHECK_EVERY_N==0 ){
+
+            Runtime r = Runtime.getRuntime();
+            long max = r.maxMemory();
+            if(max == Long.MAX_VALUE)
+                return;
+
+            double free = r.freeMemory();
+            double total = r.totalMemory();
+            //'free' is reported relative to the current 'total' heap, not 'max'.
+            //If the heap grew to 'max', the extra (max-total) would also be free, so count it as free.
+            free = free + (max-total);
+
+            if(CC.LOG_TRACE)
+                Utils.LOG.fine("DBCache: freemem = " +free + " = "+(100.0*free/max)+"%");
+
+            if(free<1e7 || free*4 <max){
+                checkClosed(cache).clear();
+            }
+        }
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        checkFreeMem();
+        super.update(recid, value, serializer);
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer){
+        checkFreeMem();
+        super.delete(recid,serializer);
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        checkFreeMem();
+        return super.compareAndSwap(recid, expectedOldValue, newValue, serializer);
+    }
+}
+
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHashTable.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHashTable.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheHashTable.java	(revision 29363)
@@ -0,0 +1,169 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+/**
+ * Fixed-size cache which uses a hash table.
+ * It is thread-safe and requires only minimal locking.
+ * Items are evicted at random: a hash collision simply overwrites the older entry.
+ * <p/>
+ * This is a simple, concurrent, low-overhead, random cache.
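+ * <p>
+ * A minimal wiring sketch (hypothetical; assumes an existing {@link Engine} instance named {@code engine}):
+ * <pre>
+ *     Engine cached = new CacheHashTable(engine, 32768);   // about 32k cache slots
+ * </pre>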
+ *
+ * @author Jan Kotek
+ */
+public class CacheHashTable extends EngineWrapper implements Engine {
+
+
+    protected final Locks.RecidLocks locks = new Locks.SegmentedRecidLocks(16);
+
+    protected HashItem[] items;
+    protected final int cacheMaxSize;
+
+    /**
+     * Salt added to keys before hashing, so it is harder to trigger a hash collision attack.
+     */
+    protected final long hashSalt = Utils.RANDOM.nextLong();
+
+
+    private static class HashItem {
+        final long key;
+        final Object val;
+
+        private HashItem(long key, Object val) {
+            this.key = key;
+            this.val = val;
+        }
+    }
+
+
+
+    public CacheHashTable(Engine engine, int cacheMaxSize) {
+        super(engine);
+        this.items = new HashItem[cacheMaxSize];
+        this.cacheMaxSize = cacheMaxSize;
+    }
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        final long recid = getWrappedEngine().put(value, serializer);
+        final int pos = position(recid);
+        try{
+            locks.lock(pos);
+            checkClosed(items)[position(recid)] = new HashItem(recid, value);
+        }finally{
+            locks.unlock(pos);
+        }
+        return recid;
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public <A> A get(long recid, Serializer<A> serializer) {
+        final int pos = position(recid);
+        HashItem[] items2 = checkClosed(items);
+        HashItem item = items2[pos];
+        if(item!=null && recid == item.key)
+            return (A) item.val;
+
+        try{
+            locks.lock(pos);
+            //not in cache, fetch and add
+            final A value = getWrappedEngine().get(recid, serializer);
+            if(value!=null)
+                items2[pos] = new HashItem(recid, value);
+            return value;
+        }finally{
+            locks.unlock(pos);
+        }
+    }
+
+    private int position(long recid) {
+        return Math.abs(Utils.longHash(recid^hashSalt))%cacheMaxSize;
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        final int pos = position(recid);
+        try{
+            locks.lock(pos);
+            checkClosed(items)[pos] = new HashItem(recid, value);
+            getWrappedEngine().update(recid, value, serializer);
+        }finally {
+            locks.unlock(pos);
+        }
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        final int pos = position(recid);
+        try{
+            HashItem[] items2 = checkClosed(items);
+            locks.lock(pos);
+            HashItem item = items2[pos];
+            if(item!=null && item.key == recid){
+                //found in cache, so compare values
+                if(item.val == expectedOldValue || item.val.equals(expectedOldValue)){
+                    //found matching entry in cache, so just update and return true
+                    items2[pos] = new HashItem(recid, newValue);
+                    getWrappedEngine().update(recid, newValue, serializer);
+                    return true;
+                }else{
+                    return false;
+                }
+            }else{
+                boolean ret = getWrappedEngine().compareAndSwap(recid, expectedOldValue, newValue, serializer);
+                if(ret) items2[pos] = new HashItem(recid, newValue);
+                return ret;
+            }
+        }finally {
+            locks.unlock(pos);
+        }
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer){
+        final int pos = position(recid);
+        try{
+            locks.lock(pos); //lock the same hash slot as the other methods, not the raw recid
+            getWrappedEngine().delete(recid,serializer);
+            HashItem[] items2 = checkClosed(items);
+            HashItem item = items2[pos];
+            if(item!=null && recid == item.key)
+                items2[pos] = null;
+        }finally {
+            locks.unlock(pos);
+        }
+    }
+
+
+    @Override
+    public void close() {
+        super.close();
+        //dereference to prevent memory leaks
+        items = null;
+    }
+
+    @Override
+    public void rollback() {
+        for(int i = 0;i<items.length;i++)
+            items[i] = null;
+        super.rollback();
+    }
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheLRU.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheLRU.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheLRU.java	(revision 29363)
@@ -0,0 +1,112 @@
+package org.mapdb;
+
+/**
+ * Least Recently Used cache.
+ * If the cache is full, it removes the least recently used items to make space.
+ */
+public class CacheLRU extends EngineWrapper {
+
+
+    protected LongMap<Object> cache;
+
+    protected final Locks.RecidLocks locks = new Locks.SegmentedRecidLocks(16);
+
+
+    public CacheLRU(Engine engine, int cacheSize) {
+        this(engine, new LongConcurrentLRUMap<Object>(cacheSize, (int) (cacheSize*0.8)));
+    }
+
+    public CacheLRU(Engine engine, LongMap<Object> cache){
+        super(engine);
+        this.cache = cache;
+    }
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        long recid =  super.put(value, serializer);
+        try{
+            locks.lock(recid);
+            checkClosed(cache).put(recid, value);
+        }finally {
+            locks.unlock(recid);
+        }
+        return recid;
+    }
+
+    @SuppressWarnings("unchecked")
+	@Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        Object ret = cache.get(recid);
+        if(ret!=null) return (A) ret;
+        try{
+            locks.lock(recid);
+            ret = super.get(recid, serializer);
+            if(ret!=null) checkClosed(cache).put(recid, ret);
+            return (A) ret;
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        try{
+            locks.lock(recid);
+            checkClosed(cache).put(recid, value);
+            super.update(recid, value, serializer);
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer){
+        try{
+            locks.lock(recid);
+            checkClosed(cache).remove(recid);
+            super.delete(recid,serializer);
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        try{
+            locks.lock(recid);
+            Engine engine = getWrappedEngine();
+            LongMap cache2 = checkClosed(cache);
+            Object oldValue = cache2.get(recid);
+            if(oldValue!=null && (oldValue == expectedOldValue || oldValue.equals(expectedOldValue))){
+                //found matching entry in cache, so just update and return true
+                cache2.put(recid, newValue);
+                engine.update(recid, newValue, serializer);
+                return true;
+            }else{
+                boolean ret = engine.compareAndSwap(recid, expectedOldValue, newValue, serializer);
+                if(ret) cache2.put(recid, newValue);
+                return ret;
+            }
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+
+    @SuppressWarnings("rawtypes")
+	@Override
+    public void close() {
+        Object cache2 = cache;
+        if(cache2 instanceof LongConcurrentLRUMap)
+            ((LongConcurrentLRUMap)cache2).destroy();
+        cache = null;
+        super.close();
+    }
+
+    @Override
+    public void rollback() {
+        //TODO locking here?
+        checkClosed(cache).clear();
+        super.rollback();
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheWeakSoftRef.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheWeakSoftRef.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CacheWeakSoftRef.java	(revision 29363)
@@ -0,0 +1,216 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.lang.ref.ReferenceQueue;
+import java.lang.ref.SoftReference;
+import java.lang.ref.WeakReference;
+
+/**
+ * Instance cache which uses <code>SoftReference</code> or <code>WeakReference</code>.
+ * Items can be removed from the cache by the Garbage Collector: soft references are cleared when
+ * memory runs low, weak references as soon as no strong references to the item remain.
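+ * <p>
+ * A minimal wiring sketch (hypothetical; assumes an existing {@link Engine} named {@code engine}):
+ * <pre>
+ *     Engine cached = new CacheWeakSoftRef(engine, true);   // true = weak refs, false = soft refs
+ * </pre>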
+ *
+ * @author Jan Kotek
+ */
+public class CacheWeakSoftRef extends EngineWrapper implements Engine {
+
+
+    protected final Locks.RecidLocks locks = new Locks.LongHashMapRecidLocks();
+
+    protected interface CacheItem{
+        long getRecid();
+        Object get();
+    }
+
+    protected static final class CacheWeakItem<A> extends WeakReference<A> implements CacheItem{
+
+        final long recid;
+
+        public CacheWeakItem(A referent, ReferenceQueue<A> q, long recid) {
+            super(referent, q);
+            this.recid = recid;
+        }
+
+        @Override
+        public long getRecid() {
+            return recid;
+        }
+    }
+
+    protected static final class CacheSoftItem<A> extends SoftReference<A> implements CacheItem{
+
+        final long recid;
+
+        public CacheSoftItem(A referent, ReferenceQueue<A> q, long recid) {
+            super(referent, q);
+            this.recid = recid;
+        }
+
+        @Override
+        public long getRecid() {
+            return recid;
+        }
+    }
+
+    @SuppressWarnings("rawtypes")
+	protected ReferenceQueue queue = new ReferenceQueue();
+
+    protected Thread queueThread = new Thread("MapDB GC collector"){
+        @Override
+		public void run(){
+            runRefQueue();
+        }
+    };
+
+
+    protected LongConcurrentHashMap<CacheItem> items = new LongConcurrentHashMap<CacheItem>();
+
+
+    final protected boolean useWeakRef;
+
+    public CacheWeakSoftRef(Engine engine, boolean useWeakRef){
+        super(engine);
+        this.useWeakRef = useWeakRef;
+
+        queueThread.setDaemon(true);
+        queueThread.start();
+    }
+
+
+    /** Collects items from GC and removes them from cache */
+    protected void runRefQueue(){
+        try{
+            final ReferenceQueue<?> queue = this.queue;
+            final LongConcurrentHashMap<CacheItem> items = this.items;
+
+            while(true){
+                CacheItem item = (CacheItem) queue.remove();
+                items.remove(item.getRecid(), item);
+                if(Thread.interrupted()) return;
+            }
+        }catch(InterruptedException e){
+            //this is expected, so just silently exit thread
+        }
+    }
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        long recid = getWrappedEngine().put(value, serializer);
+        putItemIntoCache(recid, value);
+        return recid;
+    }
+
+    @SuppressWarnings("unchecked")
+	@Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        LongConcurrentHashMap<CacheItem> items2 = checkClosed(items);
+        CacheItem item = items2.get(recid);
+        if(item!=null){
+            Object o = item.get();
+            if(o == null)
+                items2.remove(recid);
+            else{
+                return (A) o;
+            }
+        }
+
+        try{
+            locks.lock(recid);
+            Object value = getWrappedEngine().get(recid, serializer);
+            if(value!=null) putItemIntoCache(recid, value);
+
+            return (A) value;
+        }finally{
+            locks.unlock(recid);
+        }
+
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        try{
+            locks.lock(recid);
+            putItemIntoCache(recid, value);
+            getWrappedEngine().update(recid, value, serializer);
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+    @SuppressWarnings("unchecked")
+	private <A> void putItemIntoCache(long recid, A value) {
+        ReferenceQueue<A> q = checkClosed(queue);
+        checkClosed(items).put(recid, useWeakRef?
+            new CacheWeakItem<A>(value, q, recid) :
+            new CacheSoftItem<A>(value, q, recid));
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer){
+        try{
+            locks.lock(recid);
+            checkClosed(items).remove(recid);
+            getWrappedEngine().delete(recid,serializer);
+        }finally {
+            locks.unlock(recid);
+        }
+
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        try{
+            locks.lock(recid);
+            CacheItem item = checkClosed(items).get(recid);
+            Object oldValue = item==null? null: item.get() ;
+            if(item!=null && item.getRecid() == recid &&
+                    (oldValue == expectedOldValue || (oldValue!=null && oldValue.equals(expectedOldValue)))){
+                //found matching entry in cache, so just update and return true
+                putItemIntoCache(recid, newValue);
+                getWrappedEngine().update(recid, newValue, serializer);
+                return true;
+            }else{
+                boolean ret = getWrappedEngine().compareAndSwap(recid, expectedOldValue, newValue, serializer);
+                if(ret) putItemIntoCache(recid, newValue);
+                return ret;
+            }
+        }finally {
+            locks.unlock(recid);
+        }
+    }
+
+
+    @Override
+    public void close() {
+        super.close();
+        items = null;
+        queue = null;
+        
+        if (queueThread != null) {
+            queueThread.interrupt();
+            queueThread = null;
+        }
+    }
+
+
+    @Override
+    public void rollback() {
+        items.clear();
+        super.rollback();
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/CompressLZF.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/CompressLZF.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/CompressLZF.java	(revision 29363)
@@ -0,0 +1,371 @@
+/*
+ * Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
+ * Version 1.0, and under the Eclipse Public License, Version 1.0
+ * (http://h2database.com/html/license.html).
+ *
+ * This code is based on the LZF algorithm from Marc Lehmann. It is a
+ * re-implementation of the C code:
+ * http://cvs.schmorp.de/liblzf/lzf_c.c?view=markup
+ * http://cvs.schmorp.de/liblzf/lzf_d.c?view=markup
+ *
+ * According to a mail from Marc Lehmann, it's OK to use his algorithm:
+ * Date: 2010-07-15 15:57
+ * Subject: Re: Question about LZF licensing
+ * ...
+ * The algorithm is not copyrighted (and cannot be copyrighted afaik) - as long
+ * as you wrote everything yourself, without copying my code, that's just fine
+ * (looking is of course fine too).
+ * ...
+ *
+ * Still I would like to keep his copyright info:
+ *
+ * Copyright (c) 2000-2005 Marc Alexander Lehmann <schmorp@schmorp.de>
+ * Copyright (c) 2005 Oren J. Maurice <oymaurice@hazorea.org.il>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ *   1.  Redistributions of source code must retain the above copyright notice,
+ *       this list of conditions and the following disclaimer.
+ *
+ *   2.  Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *
+ *   3.  The name of the author may not be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ''AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+package org.mapdb;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.Arrays;
+
+/**
+ * <p>
+ * This class implements the LZF lossless data compression algorithm. LZF is a
+ * Lempel-Ziv variant with byte-aligned output, and optimized for speed.
+ * </p>
+ * <p>
+ * Safety/Use Notes:
+ * </p>
+ * <ul>
+ * <li>Each instance should be used by a single thread only.</li>
+ * <li>The data buffers should be smaller than 1 GB.</li>
+ * <li>For performance reasons, safety checks on expansion are omitted.</li>
+ * <li>Invalid compressed data can cause an ArrayIndexOutOfBoundsException.</li>
+ * </ul>
+ * <p>
+ * The LZF compressed format knows literal runs and back-references:
+ * </p>
+ * <ul>
+ * <li>Literal run: directly copy bytes from input to output.</li>
+ * <li>Back-reference: copy previous data to output stream, with specified
+ * offset from location and length. The length is at least 3 bytes.</li>
+ * </ul>
+ *<p>
+ * The first byte of the compressed stream is the control byte. For literal
+ * runs, the highest three bits of the control byte are not set, and the lower
+ * bits are the literal run length, and the next bytes are data to copy directly
+ * into the output. For back-references, the highest three bits of the control
+ * byte are the back-reference length. If all three bits are set, then the
+ * back-reference length is stored in the next byte. The lower bits of the
+ * control byte combined with the next byte form the offset for the
+ * back-reference.
+ * </p>
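+ * <p>
+ * A worked example (illustrative values only): control byte 0x45 = binary 010 00101. The high three
+ * bits hold 2, so the back-reference length is 2 + 2 = 4 bytes; the low five bits (5) combined with the
+ * following offset byte 0x0A give a distance of (5 * 256) + 10 + 1 = 1291 bytes back in the output.
+ * A control byte below 32, e.g. 0x02, instead starts a literal run of 0x02 + 1 = 3 bytes copied verbatim.
+ * </p>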
+ */
+public final class CompressLZF{
+
+    /**
+     * The number of entries in the hash table. The size is a trade-off between
+     * hash collisions (reduced compression) and speed (amount that fits in CPU
+     * cache).
+     */
+    private static final int HASH_SIZE = 1 << 14;
+
+    /**
+     * The maximum number of literals in a chunk (32).
+     */
+    private static final int MAX_LITERAL = 1 << 5;
+
+    /**
+     * The maximum offset allowed for a back-reference (8192).
+     */
+    private static final int MAX_OFF = 1 << 13;
+
+    /**
+     * The maximum back-reference length (264).
+     */
+    private static final int MAX_REF = (1 << 8) + (1 << 3);
+
+    /**
+     * Hash table for matching byte sequences (reused for performance).
+     */
+    private int[] cachedHashTable;
+
+    /**
+     * Return an int whose lower 2 bytes are the bytes at index and index+1.
+     */
+    private static int first(byte[] in, int inPos) {
+        return (in[inPos] << 8) | (in[inPos + 1] & 255);
+    }
+
+    /**
+     * Shift v 1 byte left, add value at index inPos+2.
+     */
+    private static int next(int v, byte[] in, int inPos) {
+        return (v << 8) | (in[inPos + 2] & 255);
+    }
+
+    /**
+     * Compute the address in the hash table.
+     */
+    private static int hash(int h) {
+        return ((h * 2777) >> 9) & (HASH_SIZE - 1);
+    }
+
+    public int compress(byte[] in, int inLen, byte[] out, int outPos) {
+        int inPos = 0;
+        if (cachedHashTable == null) {
+            cachedHashTable = new int[HASH_SIZE];
+        }
+        int[] hashTab = cachedHashTable;
+        int literals = 0;
+        outPos++;
+        int future = first(in, 0);
+        while (inPos < inLen - 4) {
+            byte p2 = in[inPos + 2];
+            // next
+            future = (future << 8) + (p2 & 255);
+            int off = hash(future);
+            int ref = hashTab[off];
+            hashTab[off] = inPos;
+            // if (ref < inPos
+            //       && ref > 0
+            //       && (off = inPos - ref - 1) < MAX_OFF
+            //       && in[ref + 2] == p2
+            //       && (((in[ref] & 255) << 8) | (in[ref + 1] & 255)) ==
+            //           ((future >> 8) & 0xffff)) {
+            if (ref < inPos
+                        && ref > 0
+                        && (off = inPos - ref - 1) < MAX_OFF
+                        && in[ref + 2] == p2
+                        && in[ref + 1] == (byte) (future >> 8)
+                        && in[ref] == (byte) (future >> 16)) {
+                // match
+                int maxLen = inLen - inPos - 2;
+                if (maxLen > MAX_REF) {
+                    maxLen = MAX_REF;
+                }
+                if (literals == 0) {
+                    // multiple back-references,
+                    // so there is no literal run control byte
+                    outPos--;
+                } else {
+                    // set the control byte at the start of the literal run
+                    // to store the number of literals
+                    out[outPos - literals - 1] = (byte) (literals - 1);
+                    literals = 0;
+                }
+                int len = 3;
+                while (len < maxLen && in[ref + len] == in[inPos + len]) {
+                    len++;
+                }
+                len -= 2;
+                if (len < 7) {
+                    out[outPos++] = (byte) ((off >> 8) + (len << 5));
+                } else {
+                    out[outPos++] = (byte) ((off >> 8) + (7 << 5));
+                    out[outPos++] = (byte) (len - 7);
+                }
+                out[outPos++] = (byte) off;
+                // move one byte forward to allow for a literal run control byte
+                outPos++;
+                inPos += len;
+                // rebuild the future, and store the last bytes to the hashtable.
+                // Storing hashes of the last bytes in back-reference improves
+                // the compression ratio and only reduces speed slightly.
+                future = first(in, inPos);
+                future = next(future, in, inPos);
+                hashTab[hash(future)] = inPos++;
+                future = next(future, in, inPos);
+                hashTab[hash(future)] = inPos++;
+            } else {
+                // copy one byte from input to output as part of literal
+                out[outPos++] = in[inPos++];
+                literals++;
+                // at the end of this literal chunk, write the length
+                // to the control byte and start a new chunk
+                if (literals == MAX_LITERAL) {
+                    out[outPos - literals - 1] = (byte) (literals - 1);
+                    literals = 0;
+                    // move ahead one byte to allow for the
+                    // literal run control byte
+                    outPos++;
+                }
+            }
+        }
+        // write the remaining few bytes as literals
+        while (inPos < inLen) {
+            out[outPos++] = in[inPos++];
+            literals++;
+            if (literals == MAX_LITERAL) {
+                out[outPos - literals - 1] = (byte) (literals - 1);
+                literals = 0;
+                outPos++;
+            }
+        }
+        // writes the final literal run length to the control byte
+        out[outPos - literals - 1] = (byte) (literals - 1);
+        if (literals == 0) {
+            outPos--;
+        }
+        return outPos;
+    }
+
+    public void expand(byte[] in, int inPos, int inLen, byte[] out, int outPos, int outLen) {
+        // if ((inPos | outPos | outLen) < 0) {
+        if (inPos < 0 || outPos < 0 || outLen < 0) {
+            throw new IllegalArgumentException();
+        }
+        do {
+            int ctrl = in[inPos++] & 255;
+            if (ctrl < MAX_LITERAL) {
+                // literal run of length = ctrl + 1,
+                ctrl++;
+                // copy to output and move forward this many bytes
+                System.arraycopy(in, inPos, out, outPos, ctrl);
+                outPos += ctrl;
+                inPos += ctrl;
+            } else {
+                // back reference
+                // the highest 3 bits are the match length
+                int len = ctrl >> 5;
+                // if the length is maxed, add the next byte to the length
+                if (len == 7) {
+                    len += in[inPos++] & 255;
+                }
+                // minimum back-reference is 3 bytes,
+                // so 2 was subtracted before storing size
+                len += 2;
+
+                // ctrl is now the offset for a back-reference...
+                // the logical AND operation removes the length bits
+                ctrl = -((ctrl & 0x1f) << 8) - 1;
+
+                // the next byte augments/increases the offset
+                ctrl -= in[inPos++] & 255;
+
+                // copy the back-reference bytes from the given
+                // location in output to current position
+                ctrl += outPos;
+                if (outPos + len >= out.length) {
+                    // reduce array bounds checking
+                    throw new ArrayIndexOutOfBoundsException();
+                }
+                for (int i = 0; i < len; i++) {
+                    out[outPos++] = out[ctrl++];
+                }
+            }
+        } while (outPos < outLen);
+    }
+
+
+    public static final Serializer<byte[]> SERIALIZER = new Serializer<byte[]>() {
+
+        final ThreadLocal<CompressLZF> LZF = new ThreadLocal<CompressLZF>() {
+            @Override
+            protected CompressLZF initialValue() {
+                return new CompressLZF();
+            }
+        };
+
+        @Override
+        public void serialize(DataOutput out, byte[] value) throws IOException {
+            if (value == null) return;
+
+            CompressLZF lzf = LZF.get();
+            byte[] outbuf = new byte[value.length + 40];
+            int len = lzf.compress(value, value.length, outbuf, 0);
+            //check if compressed data are larger than the original
+            if (value.length <= len) {
+                //in this case do not compress data, write 0 as indicator
+                Utils.packInt(out, 0);
+                out.write(value);
+            } else {
+                Utils.packInt(out, value.length); //write original decompressed size
+                out.write(outbuf, 0, len);
+            }
+        }
+
+        @Override
+        public byte[] deserialize(DataInput in, int available) throws IOException {
+            if (available == 0) return null;
+            //get original decompressed size
+            DataInput2 in2 = (DataInput2) in;
+            int origPos = in2.pos;
+            int expendedLen = Utils.unpackInt(in);
+            byte[] inbuf = new byte[available - (in2.pos - origPos)];
+            in.readFully(inbuf);
+            if (expendedLen == 0) {
+                //special case, data are not compressed
+                return inbuf;
+            }
+            byte[] outbuf = new byte[expendedLen + 40];
+
+            CompressLZF lzf = LZF.get();
+            lzf.expand(inbuf, 0, inbuf.length, outbuf, 0, expendedLen);
+            outbuf = Arrays.copyOf(outbuf, expendedLen);
+
+            return outbuf;
+        }
+
+
+    };
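+    /*
+     * Illustrative round trip for SERIALIZER (a minimal sketch, not part of the
+     * original MapDB source; DataOutput2/DataInput2 are package-private, so this
+     * only works inside org.mapdb):
+     *
+     *   DataOutput2 out = new DataOutput2();
+     *   CompressLZF.SERIALIZER.serialize(out, payload);         // packed length header + data
+     *   byte[] stored = out.copyBytes();
+     *   byte[] restored = CompressLZF.SERIALIZER.deserialize(
+     *           new DataInput2(stored), stored.length);         // equals payload
+     *
+     * The packed-int header is 0 when the bytes were stored uncompressed,
+     * otherwise it carries the original (decompressed) length.
+     */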
+
+    /**
+     * Wraps existing serializer and compresses its input/output
+     */
+    public static <E> Serializer<E> serializerCompressWrapper(Serializer<E> serializer) {
+        return new SerializerCompressWrapper<E>(serializer);
+    }
+
+
+    protected static class SerializerCompressWrapper<E> implements Serializer<E>, Serializable {
+        protected final Serializer<E> serializer;
+        public SerializerCompressWrapper(Serializer<E> serializer) {
+            this.serializer = serializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, E value) throws IOException {
+            //serialize to byte[]
+            DataOutput2 out2 = new DataOutput2();
+            serializer.serialize(out2, value);
+            byte[] b = out2.copyBytes();
+            CompressLZF.SERIALIZER.serialize(out, b);
+        }
+
+        @Override
+        public E deserialize(DataInput in, int available) throws IOException {
+            byte[] b = CompressLZF.SERIALIZER.deserialize(in, available);
+            DataInput2 in2 = new DataInput2(b);
+            return serializer.deserialize(in2, b.length);
+        }
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/DB.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/DB.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/DB.java	(revision 29363)
@@ -0,0 +1,451 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.lang.ref.WeakReference;
+import java.util.*;
+import java.util.concurrent.CopyOnWriteArrayList;
+
+/**
+ * A database with easy access to named maps and other collections.
+ *
+ * @author Jan Kotek
+ */
+@SuppressWarnings("unchecked")
+public class DB {
+
+    /** Engine which provides persistence for this DB*/
+    protected Engine engine;
+    /** Already loaded named collections. It is important to keep collections as singletons because of in-memory locking. */
+    protected Map<String, WeakReference<?>> collections = new HashMap<String, WeakReference<?>>();
+
+    /** view over named records */
+    protected Map<String, Long> nameDir;
+
+    /** Default serializer used for persistence. Handles POJOs and other objects that require writable access to the Engine. */
+    protected Serializer<?> defaultSerializer;
+
+    /**
+     * Constructs a new DB. It is just a thin layer over {@link Engine}, which does the real work.
+     * @param engine engine which provides persistence for this DB
+     */
+    public DB(final Engine engine){
+        this.engine = engine;
+        // load serializer
+        final CopyOnWriteArrayList<SerializerPojo.ClassInfo> classInfos = engine.get(Engine.CLASS_INFO_RECID, SerializerPojo.serializer);
+        this.defaultSerializer = new SerializerPojo(classInfos){
+            @Override
+            protected void saveClassInfo() {
+                //hook to save classes if they are updated
+                //I did not want to create a direct dependency between SerializerPojo and Engine
+                engine.update(Engine.CLASS_INFO_RECID, registered, SerializerPojo.serializer);
+            }
+        };
+
+        //open name dir
+        nameDir = HTreeMap.preinitNamedDir(engine);
+
+    }
+
+    /**
+     * Opens an existing or creates a new Hash Tree Map.
+     * This collection performs well under concurrent access.
+     * It is best for large keys and large values.
+     *
+     * @param name of map
+     * @param <K> key type
+     * @param <V> value type
+     * @return map
+     */
+    synchronized public <K,V> HTreeMap<K,V> getHashMap(String name){
+        checkNotClosed();
+        HTreeMap<K,V> ret = (HTreeMap<K, V>) getFromWeakCollection(name);
+        if(ret!=null) return ret;
+        Long recid = nameDir.get(name);
+        if(recid!=null){
+            //open existing map
+            ret = new HTreeMap<K,V>(engine, recid,defaultSerializer);
+            if(!ret.hasValues) throw new ClassCastException("Collection is Set, not Map");
+        }else{
+            //create new map
+            ret = new HTreeMap<K,V>(engine,true,Utils.RANDOM.nextInt(), defaultSerializer,null, null);
+            nameDir.put(name, ret.rootRecid);
+        }
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
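+    /*
+     * Illustrative usage (a minimal sketch, not part of the original MapDB source;
+     * "myFile" and "myMap" are placeholder names):
+     *
+     *   DB db = DBMaker.newFileDB(new File("myFile")).make();
+     *   Map<String, String> map = db.getHashMap("myMap");
+     *   map.put("key", "value");
+     *   db.commit();
+     *   db.close();
+     */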
+
+
+    /**
+     * Creates new HashMap with more specific arguments
+     *
+     * @param name of map to create
+     * @param keySerializer used to convert keys into/from binary form. Use null for default value.
+     * @param valueSerializer used to convert values into/from binary form. Use null for default value.
+     * @param <K> key type
+     * @param <V> value type
+     * @throws IllegalArgumentException if name is already used
+     * @return newly created map
+     */
+    synchronized public <K,V> HTreeMap<K,V> createHashMap(
+            String name, Serializer<K> keySerializer, Serializer<V> valueSerializer){
+        checkNameNotExists(name);
+        HTreeMap<K,V> ret = new HTreeMap<K,V>(engine, true,Utils.RANDOM.nextInt(), defaultSerializer, keySerializer, valueSerializer);
+        nameDir.put(name, ret.rootRecid);
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+    /**
+     * Opens an existing or creates a new Hash Tree Set.
+     *
+     * @param name of Set
+     * @param <K> item type
+     * @return set
+     */
+    synchronized public <K> Set<K> getHashSet(String name){
+        checkNotClosed();
+        Set<K> ret = (Set<K>) getFromWeakCollection(name);
+        if(ret!=null) return ret;
+        Long recid = nameDir.get(name);
+        if(recid!=null){
+            //open existing map
+            HTreeMap<K,Object> m = new HTreeMap<K,Object>(engine, recid, defaultSerializer);
+            if(m.hasValues) throw new ClassCastException("Collection is Map, not Set");
+            ret = m.keySet();
+        }else{
+            //create new map
+            HTreeMap<K,Object> m = new HTreeMap<K,Object>(engine, false,Utils.RANDOM.nextInt(), defaultSerializer, null, null);
+            ret = m.keySet();
+            nameDir.put(name, m.rootRecid);
+        }
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+
+    /**
+     * Creates new HashSet.
+     * @param name of set to create
+     * @param serializer used to convert keys into/from binary form. Use null for default value.
+     * @param <K> item type
+     * @throws IllegalArgumentException if name is already used
+     * @return newly created set
+     */
+    
+    synchronized public <K> Set<K> createHashSet(String name, Serializer<K> serializer){
+        checkNameNotExists(name);
+        HTreeMap<K,Object> ret = new HTreeMap<K,Object>(engine, false,Utils.RANDOM.nextInt(), defaultSerializer, serializer, null);
+        nameDir.put(name, ret.rootRecid);
+        Set<K> ret2 = ret.keySet();
+        collections.put(name, new WeakReference<Object>(ret2));
+        return ret2;
+    }
+
+    /**
+     * Opens an existing or creates a new B-linked-tree Map.
+     * This collection performs well under concurrent access.
+     * The only trade-off is deletes, which cause tree fragmentation.
+     * It is ordered and best suited for small keys and values.
+     *
+     * @param name of map
+     * @param <K> key type
+     * @param <V> value type
+     * @return map
+     */
+    
+    synchronized public <K,V> BTreeMap<K,V> getTreeMap(String name){
+        checkNotClosed();
+        BTreeMap<K,V> ret = (BTreeMap<K,V>) getFromWeakCollection(name);
+        if(ret!=null) return ret;
+        Long recid = nameDir.get(name);
+        if(recid!=null){
+            //open existing map
+            ret = new BTreeMap<K,V>(engine, recid,defaultSerializer);
+            if(!ret.hasValues) throw new ClassCastException("Collection is Set, not Map");
+        }else{
+            //create new map
+            ret = new BTreeMap<K,V>(engine,BTreeMap.DEFAULT_MAX_NODE_SIZE, true, false, defaultSerializer, null, null, null);
+            nameDir.put(name, ret.treeRecid);
+        }
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+    /**
+     * Creates new TreeMap.
+     * @param name of map to create
+     * @param nodeSize maximal node size; a larger node overflows and a new BTree node is created. Use a large number for small keys and a small number for large keys.
+     * @param valuesStoredOutsideNodes if true, values are stored outside of BTree nodes. Use 'true' if your values are large.
+     * @param keySerializer used to convert keys into/from binary form. Use null for default value.
+     * @param valueSerializer used to convert values into/from binary form. Use null for default value.
+     * @param comparator used to sort keys. Use null for default value. TODO delta packing
+     * @param <K> key type
+     * @param <V> value type
+     * @throws IllegalArgumentException if name is already used
+     * @return newly created map
+     */
+    synchronized public <K,V> BTreeMap<K,V> createTreeMap(
+            String name, int nodeSize, boolean valuesStoredOutsideNodes,
+            BTreeKeySerializer<K> keySerializer, Serializer<V> valueSerializer, Comparator<K> comparator){
+        checkNameNotExists(name);
+        BTreeMap<K,V> ret = new BTreeMap<K,V>(engine, nodeSize, true,valuesStoredOutsideNodes, defaultSerializer, keySerializer, valueSerializer, comparator);
+        nameDir.put(name, ret.treeRecid);
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
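+    /*
+     * Illustrative call (a minimal sketch, not part of the original MapDB source;
+     * nulls select the default serializers and comparator as documented above):
+     *
+     *   BTreeMap<Long, String> map = db.createTreeMap(
+     *           "myTree",            // name
+     *           120,                 // nodeSize: large number for small keys
+     *           false,               // values stored inside nodes
+     *           null, null, null);   // default key/value serializers and comparator
+     */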
+
+
+    /**
+     * Gets the name directory. Key is the name, value is the recid under which the named record is stored.
+     * @return name directory map
+     */
+    public Map<String, Long> getNameDir(){
+        return nameDir;
+    }
+
+
+    /**
+     * Opens an existing or creates a new B-linked-tree Set.
+     *
+     * @param name of set
+     * @param <K> item type
+     * @return set
+     */
+    synchronized public <K> NavigableSet<K> getTreeSet(String name){
+        checkNotClosed();
+        NavigableSet<K> ret = (NavigableSet<K>) getFromWeakCollection(name);
+        if(ret!=null) return ret;
+        Long recid = nameDir.get(name);
+        if(recid!=null){
+            //open existing map
+            BTreeMap<K,Object> m = new BTreeMap<K,Object>(engine,  recid, defaultSerializer);
+            if(m.hasValues) throw new ClassCastException("Collection is Map, not Set");
+            ret = m.keySet();
+        }else{
+            //create new map
+            BTreeMap<K,Object> m =  new BTreeMap<K,Object>(engine,BTreeMap.DEFAULT_MAX_NODE_SIZE,
+                    false, false, defaultSerializer, null, null, null);
+            nameDir.put(name, m.treeRecid);
+            ret = m.keySet();
+        }
+
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+    /**
+     * Creates new TreeSet.
+     * @param name of set to create
+     * @param nodeSize maximal node size; a larger node overflows and a new BTree node is created. Use a large number for small keys and a small number for large keys.
+     * @param serializer used to convert keys into/from binary form. Use null for default value.
+     * @param comparator used to sort keys. Use null for default value. TODO delta packing
+     * @param <K> item type
+     * @throws IllegalArgumentException if name is already used
+     * @return newly created set
+     */
+    synchronized public <K> NavigableSet<K> createTreeSet(String name, int nodeSize, BTreeKeySerializer<K> serializer, Comparator<K> comparator){
+        checkNameNotExists(name);
+        BTreeMap<K,Object> ret = new BTreeMap<K,Object>(engine, nodeSize, false, false, defaultSerializer, serializer, null, comparator);
+        nameDir.put(name, ret.treeRecid);
+        NavigableSet<K> ret2 = ret.keySet();
+        collections.put(name, new WeakReference<Object>(ret2));
+        return ret2;
+    }
+
+//    synchronized public <E> Queue<E> getQueue(String name){
+//        Long recid = nameDir.get(name);
+//        if(recid==null){
+//
+//        }else{
+//            return new Queues.Lifo<E>(engine, getDefaultSerializer(),  recid, true);
+//        }
+//    }
+
+
+    synchronized public <E> Queue<E> getQueue(String name) {
+        Long recid = nameDir.get(name);
+        if(recid == null){
+            recid = Queues.createQueue(engine, getDefaultSerializer(), getDefaultSerializer());
+            nameDir.put(name,recid);
+        }
+        Queue<E> ret = Queues.getQueue(engine,getDefaultSerializer(),recid);
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+    synchronized public <E> Queue<E> getStack(String name) {
+        Long recid = nameDir.get(name);
+        if(recid == null){
+            recid = Queues.createStack(engine, getDefaultSerializer(), getDefaultSerializer(), true);
+            nameDir.put(name,recid);
+        }
+        Queue<E> ret = Queues.getStack(engine, getDefaultSerializer(),recid);
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+    }
+
+    synchronized public <E> Queue<E> getCircularQueue(String name) {
+        Long recid = nameDir.get(name);
+        if(recid == null){
+            recid = Queues.createCircularQueue(engine, getDefaultSerializer(), getDefaultSerializer(), 1000000);
+            nameDir.put(name,recid);
+        }
+        Queues.CircularQueue<E> ret = Queues.getCircularQueue(engine,getDefaultSerializer(),recid);
+        collections.put(name, new WeakReference<Object>(ret));
+        return ret;
+
+    }
+
+    synchronized public <E> Queue<E> createQueue(String name, Serializer<E> serializer) {
+        checkNameNotExists(name);
+        if(serializer==null) serializer=getDefaultSerializer();
+        Long recid = Queues.createQueue(engine, getDefaultSerializer(), serializer);
+        nameDir.put(name,recid);
+        return getQueue(name);
+    }
+
+    synchronized public <E> Queue<E> createStack(String name, Serializer<E> serializer, boolean useLocks) {
+        checkNameNotExists(name);
+        if(serializer==null) serializer=getDefaultSerializer();
+        Long recid = Queues.createStack(engine, getDefaultSerializer(), serializer, useLocks);
+        nameDir.put(name,recid);
+        return getStack(name);
+    }
+
+    synchronized public <E> Queue<E> createCircularQueue(String name, Serializer<E> serializer, long size) {
+        checkNameNotExists(name);
+        if(serializer==null) serializer=getDefaultSerializer();
+        Long recid = Queues.createCircularQueue(engine, getDefaultSerializer(), serializer, size);
+        nameDir.put(name,recid);
+        return getCircularQueue(name);
+    }
+
+
+    /**
+     * Checks that object with given name does not exist yet.
+     * @param name to check
+     * @throws IllegalArgumentException if name is already used
+     */
+    protected void checkNameNotExists(String name) {
+        if(nameDir.get(name)!=null)
+            throw new IllegalArgumentException("Name already used: "+name);
+    }
+
+
+    /**
+     * Closes database.
+     * All other methods will throw 'IllegalAccessError' after this method has been called.
+     * <p/>
+     * !! It is necessary to call this method before the JVM exits !!
+     */
+    synchronized public void close(){
+        if(engine == null) return;
+        engine.close();
+        //dereference db to prevent memory leaks
+        engine = null;
+        collections = null;
+        defaultSerializer = null;
+    }
+
+    /**
+     * All collections are weakly referenced to prevent two instances of the same collection in memory.
+     * This is mainly for locking; two instances of the same lock simply would not work.
+     */
+    protected Object getFromWeakCollection(String name){
+
+        WeakReference<?> r = collections.get(name);
+        if(r==null) return null;
+        Object o = r.get();
+        if(o==null) collections.remove(name);
+        return o;
+    }
+
+
+
+    protected void checkNotClosed() {
+        if(engine == null) throw new IllegalAccessError("DB was already closed");
+    }
+
+    /**
+     * @return true if DB is closed and can no longer be used
+     */
+    public synchronized  boolean isClosed(){
+        return engine == null;
+    }
+
+    /**
+     * Commit changes made on collections loaded by this DB
+     *
+     * @see org.mapdb.Engine#commit()
+     */
+    synchronized public void commit() {
+        checkNotClosed();
+        engine.commit();
+    }
+
+    /**
+     * Rollback changes made on collections loaded by this DB
+     *
+     * @see org.mapdb.Engine#rollback()
+     */
+    synchronized public void rollback() {
+        checkNotClosed();
+        engine.rollback();
+    }
+
+    /**
+     * Performs storage maintenance.
+     * Typically compacts the underlying storage and reclaims unused space.
+     * <p/>
+     * NOTE: MapDB does not have smart defragmentation algorithms, so compaction usually recreates the entire
+     * store from scratch. This may require additional disk space.
+     */
+    synchronized public void compact(){
+        engine.compact();
+    }
+
+
+    /**
+     * Makes a read-only snapshot view of the DB and all of its collections.
+     * Collections loaded by this instance are not affected (they are still mutable).
+     * You have to load new collections from the DB returned by this method.
+     *
+     * @return readonly snapshot view
+     */
+    synchronized public DB snapshot(){
+        Engine snapshot = SnapshotEngine.createSnapshotFor(engine);
+        return new DB (snapshot);
+    }
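+    /*
+     * Illustration of the snapshot semantics described above (a minimal sketch,
+     * not part of the original MapDB source; "myMap" is a placeholder name):
+     *
+     *   DB snap = db.snapshot();
+     *   Map<String, String> frozen = snap.getHashMap("myMap");  // read-only view
+     *   // collections obtained from the original db remain mutable
+     */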
+
+    /**
+     * @return default serializer used in this DB; it handles POJOs and other objects.
+     */
+    public  Serializer getDefaultSerializer() {
+        return defaultSerializer;
+    }
+
+    /**
+     * @return underlying engine which takes care of persistence for this DB.
+     */
+    public Engine getEngine() {
+        return engine;
+    }
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/DBMaker.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/DBMaker.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/DBMaker.java	(revision 29363)
@@ -0,0 +1,639 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import org.mapdb.EngineWrapper.ByteTransformEngine;
+import org.mapdb.EngineWrapper.ReadOnlyEngine;
+
+import java.io.File;
+import java.io.IOError;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.NavigableSet;
+import java.util.Set;
+
+/**
+ * A builder class for creating and opening a database.
+ *
+ * @author Jan Kotek
+ */
+public class DBMaker {
+
+    protected static final byte CACHE_DISABLE = 0;
+    protected static final byte CACHE_FIXED_HASH_TABLE = 1;
+    protected static final byte CACHE_HARD_REF = 2;
+    protected static final byte CACHE_WEAK_REF = 3;
+    protected static final byte CACHE_SOFT_REF = 4;
+    protected static final byte CACHE_LRU = 5;
+
+    protected byte _cache = CACHE_FIXED_HASH_TABLE;
+    protected int _cacheSize = 1024*32;
+
+    /** file to open, if null opens in memory store */
+    protected File _file;
+
+    protected boolean _journalEnabled = true;
+
+    protected boolean _asyncWriteEnabled = true;
+    protected int _asyncFlushDelay = 100;
+
+    protected boolean _deleteFilesAfterClose = false;
+    protected boolean _readOnly = false;
+    protected boolean _closeOnJvmShutdown = false;
+
+    protected boolean _compressionEnabled = false;
+
+    protected byte[] _xteaEncryptionKey = null;
+
+    protected boolean _freeSpaceReclaimDisabled = false;
+
+    protected boolean _checksumEnabled = false;
+
+    protected boolean _ifInMemoryUseDirectBuffer = false;
+
+    protected boolean _failOnWrongHeader = false;
+
+    protected boolean _RAF = false;
+    protected boolean _powerSavingMode = false;
+    protected boolean _appendStorage;
+
+    /** use static factory methods, or make subclass */
+    protected DBMaker(){}
+
+    /** Creates new in-memory database. Changes are lost after the JVM exits.
+     * <p/>
+     * This will use heap memory, so the Garbage Collector is affected.
+     */
+    public static DBMaker newMemoryDB(){
+        DBMaker m = new DBMaker();
+        m._file = null;
+        return  m;
+    }
+
+    /** Creates new in-memory database. Changes are lost after the JVM exits.
+     * <p/>
+     * This will use DirectByteBuffer outside of the heap, so the Garbage Collector is not affected.
+     *
+     */
+    public static DBMaker newDirectMemoryDB(){
+        DBMaker m = new DBMaker();
+        m._file = null;
+        m._ifInMemoryUseDirectBuffer = true;
+        return  m;
+    }
+
+
+    /**
+     * Creates or opens an append-only database stored in a file.
+     * This database uses a different format than the usual file DB.
+     *
+     * @param file storage file
+     * @return maker
+     */
+    public static DBMaker newAppendFileDB(File file) {
+        DBMaker m = new DBMaker();
+        m._file = file;
+        m._appendStorage = true;
+        return m;
+    }
+
+
+
+    /**
+     * Creates new BTreeMap backed by temporary file storage.
+     * This is a quick way to create a 'throw away' collection.
+     *
+     * <p>Storage is created in the temp folder and deleted on JVM shutdown.
+     */
+    public static <K,V> BTreeMap<K,V> newTempTreeMap(){
+        return newTempFileDB()
+                .deleteFilesAfterClose()
+                .closeOnJvmShutdown()
+                .journalDisable()
+                .make()
+                .getTreeMap("temp");
+    }
+
+    /**
+     * Creates new HTreeMap backed by temporary file storage.
+     * This is a quick way to create a 'throw away' collection.
+     *
+     * <p>Storage is created in the temp folder and deleted on JVM shutdown.
+     */
+    public static <K,V> HTreeMap<K,V> newTempHashMap(){
+        return newTempFileDB()
+                .deleteFilesAfterClose()
+                .closeOnJvmShutdown()
+                .journalDisable()
+                .make()
+                .getHashMap("temp");
+    }
+
+    /**
+     * Creates new TreeSet backed by temporary file storage.
+     * This is a quick way to create a 'throw away' collection.
+     *
+     * <p>Storage is created in the temp folder and deleted on JVM shutdown.
+     */
+    public static <K> NavigableSet<K> newTempTreeSet(){
+        return newTempFileDB()
+                .deleteFilesAfterClose()
+                .closeOnJvmShutdown()
+                .journalDisable()
+                .make()
+                .getTreeSet("temp");
+    }
+
+    /**
+     * Creates new HashSet backed by temporary file storage.
+     * This is a quick way to create a 'throw away' collection.
+     * <p>
+     * Storage is created in the temp folder and deleted on JVM shutdown.
+     */
+    public static <K> Set<K> newTempHashSet(){
+        return newTempFileDB()
+                .deleteFilesAfterClose()
+                .closeOnJvmShutdown()
+                .journalDisable()
+                .make()
+                .getHashSet("temp");
+    }
+
+    /**
+     * Creates new database in a temporary folder.
+     *
+     * @return maker
+     */
+    public static DBMaker newTempFileDB() {
+        try {
+            return newFileDB(File.createTempFile("mapdb-temp","db"));
+        } catch (IOException e) {
+            throw new IOError(e);
+        }
+    }
+
+
+    /** Creates or opens a database stored in a file. */
+    public static DBMaker newFileDB(File file){
+        DBMaker m = new DBMaker();
+        m._file = file;
+        return  m;
+    }
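+    /*
+     * Illustrative builder chain (a minimal sketch, not part of the original
+     * MapDB source; the file name is a placeholder):
+     *
+     *   DB db = DBMaker.newFileDB(new File("testdb"))
+     *                  .closeOnJvmShutdown()
+     *                  .encryptionEnable("password")
+     *                  .make();
+     */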
+
+
+
+    /**
+     * Transaction journal is enabled by default.
+     * You must call <b>DB.commit()</b> to save your changes.
+     * It is possible to disable the transaction journal for better write performance.
+     * In this case all integrity checks are sacrificed for speed.
+     * <p/>
+     * If the transaction journal is disabled, all changes are written DIRECTLY into the store.
+     * You must call the DB.close() method before exit,
+     * otherwise your store <b>WILL BE CORRUPTED</b>.
+     *
+     *
+     * @return this builder
+     */
+    public DBMaker journalDisable(){
+        this._journalEnabled = false;
+        return this;
+    }
+
+    /**
+     * Instance cache is enabled by default.
+     * This greatly decreases serialization overhead and improves performance.
+     * Call this method to disable the instance cache, so an object will always be deserialized.
+     * <p/>
+     * This may work around some problems.
+     *
+     * @return this builder
+     */
+    public DBMaker cacheDisable(){
+        this._cache = CACHE_DISABLE;
+        return this;
+    }
+
+    /**
+     * Enables unbounded hard reference cache.
+     * This cache is good if you have a lot of available memory.
+     * <p/>
+     * All fetched records are added to a HashMap and stored with a hard reference.
+     * To prevent OutOfMemoryErrors, JDBM monitors free memory;
+     * if it falls below 25%, the cache is cleared.
+     *
+     * @return this builder
+     */
+    public DBMaker cacheHardRefEnable(){
+        this._cache = CACHE_HARD_REF;
+        return this;
+    }
+
+
+    /**
+     * Enables unbounded cache which uses <code>WeakReference</code>.
+     * Items are removed from cache by Garbage Collector
+     *
+     * @return this builder
+     */
+    public DBMaker cacheWeakRefEnable(){
+        this._cache = CACHE_WEAK_REF;
+        return this;
+    }
+
+    /**
+     * Enables unbounded cache which uses <code>SoftReference</code>.
+     * Items are removed from cache by Garbage Collector
+     *
+     * @return this builder
+     */
+    public DBMaker cacheSoftRefEnable(){
+        this._cache = CACHE_SOFT_REF;
+        return this;
+    }
+
+    /**
+     * Enables Least Recently Used cache. It is a fixed-size cache that removes the least recently used items to make space.
+     *
+     * @return this builder
+     */
+    public DBMaker cacheLRUEnable(){
+        this._cache = CACHE_LRU;
+        return this;
+    }
+    /**
+     * Enables compatibility storage mode for 32-bit JVMs.
+     * <p/>
+     * By default MapDB uses memory-mapped files. However, a 32-bit JVM can only address 2GB of memory.
+     * Also, some older JVMs do not handle large memory-mapped files well.
+     * We can use {@code RandomAccessFile}, which is slower, but safer and more compatible.
+     * Use this if you are experiencing <b>java.lang.OutOfMemoryError: Map failed</b> exceptions.
+     */
+    public DBMaker randomAccessFileEnable() {
+        this._RAF = true;
+        return this;
+    }
+
+
+    /**
+     * Checks the current JVM for known problems. If the JVM does not handle large memory-mapped files well, this option
+     * disables memory-mapped files and uses the safer but slower {@code RandomAccessFile} instead.
+     */
+    public DBMaker randomAccessFileEnableIfNeeded() {
+        this._RAF = !Utils.JVMSupportsLargeMappedFiles();
+        return this;
+    }
+
+    /**
+     * Sets cache size. Interpretation depends on the cache type.
+     * For fixed-size caches (such as the FixedHashTable cache) it is the maximal number of items in the cache.
+     * <p/>
+     * For unbounded caches (such as the HardRef cache) it is the initial capacity of the underlying table (HashMap).
+     * <p/>
+     * Default cache size is 32768.
+     *
+     * @param cacheSize new cache size
+     * @return this builder
+     */
+    public DBMaker cacheSize(int cacheSize){
+        this._cacheSize = cacheSize;
+        return this;
+    }
+
+
+    /**
+     * By default all modifications are queued and written to disk on the Background Writer Thread,
+     * so all modifications are performed in asynchronous mode and do not block.
+     * <p/>
+     * It is possible to disable the Background Writer Thread, but this greatly hurts concurrency.
+     * Without async writes, all threads block until all previous writes have finished (single big lock).
+     *
+     * <p/>
+     * This may work around some problems
+     *
+     * @return this builder
+     */
+    public DBMaker asyncWriteDisable(){
+        this._asyncWriteEnabled = false;
+        return this;
+    }
+
+    /**
+     * //TODO put this nice comment somewhere
+     * By default all objects are serialized on the Background Writer Thread.
+     * <p/>
+     * This may improve performance. For example, with single-threaded access, Async Serialization offloads
+     * a lot of work to a second core. Or when multiple values are added into a single tree node,
+     * the node has to be serialized only once. Without Async Serialization the node is serialized each time
+     * it is updated.
+     * <p/>
+     * On the other hand, Async Serialization moves all serialization into a single thread. This
+     * hurts performance with many concurrent, independent updates.
+     * <p/>
+     * Async Serialization may also produce some unexpected results when your data classes are not
+     * immutable. Consider the example below. If Async Serialization is disabled, it always prints 'Peter'.
+     * If it is enabled (the default), it creates a race condition and randomly prints 'Peter' or 'Jack'.
+     * <pre>
+     *     Person person = new Person();
+     *     person.setName("Peter");
+     *     map.put(id, person)
+     *     person.setName("Jack");
+     *     //long pause
+     *     println(map.get(id).getName());
+     * </pre>
+     *
+     * <p/>
+     * This may also work around some problems.
+     *
+     * @return this builder
+     */
+
+
+    /**
+     * Sets the flush interval for the write cache; by default it is 0.
+     * <p/>
+     * When a BTreeMap is constructed from an ordered set, the tree node size increases linearly with each
+     * item added. Each time a new key is added to a tree node, its size changes and
+     * the storage needs to find a new place for it. So constructing a BTreeMap from an ordered set leads to large
+     * store fragmentation.
+     * <p/>
+     *  Setting a flush interval is a workaround, as the BTreeMap node is then always updated in memory (write cache)
+     *  and only the final version of the node is stored on disk.
+     *
+     *
+     * @param delay flush write cache every N milliseconds
+     * @return this builder
+     */
+    public DBMaker asyncFlushDelay(int delay){
+        _asyncFlushDelay = delay;
+        return this;
+    }
+
+
+    /**
+     * Try to delete files after the DB is closed.
+     * File deletion may silently fail, especially on Windows, where the buffer needs to be unmapped before the file can be deleted.
+     *
+     * @return this builder
+     */
+    public DBMaker deleteFilesAfterClose(){
+        this._deleteFilesAfterClose = true;
+        return this;
+    }
+
+    /**
+     * Adds a JVM shutdown hook and closes the DB just before the JVM exits.
+     *
+     * @return this builder
+     */
+    public DBMaker closeOnJvmShutdown(){
+        this._closeOnJvmShutdown = true;
+        return this;
+    }
+
+    /**
+     * Enables record compression.
+     * <p/>
+     * Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
+     *
+     * @return this builder
+     */
+    public DBMaker compressionEnable(){
+        this._compressionEnabled = true;
+        return this;
+    }
+
+
+    /**
+     * Encrypts storage using the XTEA algorithm.
+     * <p/>
+     * XTEA is a sound encryption algorithm. However, the implementation in JDBM was not peer-reviewed.
+     * JDBM only encrypts record data, so an attacker may see the number of records and their sizes.
+     * <p/>
+     * Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
+     *
+     * @param password for encryption
+     * @return this builder
+     */
+    public DBMaker encryptionEnable(String password){
+        try {
+            return encryptionEnable(password.getBytes(Utils.UTF8));
+        } catch (UnsupportedEncodingException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+
+
+    /**
+     * Encrypts storage using the XTEA algorithm.
+     * <p/>
+     * XTEA is a sound encryption algorithm. However, the implementation in JDBM was not peer-reviewed.
+     * JDBM only encrypts record data, so an attacker may see the number of records and their sizes.
+     * <p/>
+     * Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
+     *
+     * @param password for encryption
+     * @return this builder
+     */
+    public DBMaker encryptionEnable(byte[] password){
+        _xteaEncryptionKey = password;
+        return this;
+    }
+
+
+    /**
+     * Adds a CRC32 checksum at the end of each record to check data integrity.
+     * It throws 'IOException("CRC32 does not match, data broken")' on de-serialization if the data are corrupted.
+     * <p/>
+     * Make sure you enable this every time you reopen the store, otherwise record de-serialization fails.
+     *
+     * @return this builder
+     */
+    public DBMaker checksumEnable(){
+        this._checksumEnabled = true;
+        return this;
+    }
+
+
+    /**
+     * Open store in read-only mode. Any modification attempt will throw
+     * <code>UnsupportedOperationException("Read-only")</code>
+     *
+     * @return this builder
+     */
+    public DBMaker readOnly(){
+        this._readOnly = true;
+        return this;
+    }
+
+
+
+    /**
+     * In this mode existing free space is not reused,
+     * but records are added to the end of the store.
+     * <p/>
+     * This slightly improves write performance, as the store does not have
+     * to traverse the list of free records to find and reuse an existing position.
+     * <p/>
+     * It also decreases the chance of store corruption, as existing data
+     * are not overwritten with new records.
+     * <p/>
+     * When this mode is used for a long time, the store becomes fragmented
+     * and eventually requires defragmentation.
+     * <p/>
+     * NOTE: this mode is not append-only, just a setting for an update-in-place store.
+     *
+     *
+     * @return this builder
+     */
+    public DBMaker freeSpaceReclaimDisable(){
+        this._freeSpaceReclaimDisabled = true;
+        return this;
+    }
+
+    /**
+     * Enables power saving mode.
+     * Typically MapDB runs daemon threads in an infinite loop with delays and spin locks:
+     * <pre>
+     *     while(true){
+     *         Thread.sleep(1000);
+     *         doSomething();
+     *     }
+     *
+     *    while(write_finished){
+     *         write_chunk;
+     *         sleep(10 nanoseconds)  //so OS gets chance to finish async writing
+     *     }
+     *
+     * </pre>
+     * This brings a bit more stability (prevents deadlocks) and some extra speed.
+     * However, it causes higher CPU usage than necessary, and the CPU also wakes up every
+     * N seconds.
+     * <p>
+     * On power-constrained devices (phones, laptops..) spending extra energy for speed
+     * is not desired. So this setting tells MapDB to prefer
+     * energy efficiency over speed and stability. This is a global setting, so
+     * it may affect any MapDB component where it makes sense.
+     * <p>
+     * Currently it is used only in {@link AsyncWriteEngine}, where the power setting
+     * may prevent the Background Writer Thread from exiting if the main thread dies.
+     *
+     * @return this builder
+     */
+    public DBMaker powerSavingModeEnable(){
+        this._powerSavingMode = true;
+        return this;
+    }
+
+
+    /** constructs DB using current settings */
+    public DB make(){
+        return new DB(makeEngine());
+    }
+
+    
+    public TxMaker makeTxMaker(){
+        return new TxMaker(makeEngine());
+    }
+
+    /** constructs Engine using current settings */
+    public Engine makeEngine(){
+
+
+        if(_readOnly && _file==null)
+            throw new UnsupportedOperationException("Can not open in-memory DB in read-only mode.");
+
+        if(_readOnly && !_file.exists()){
+            throw new UnsupportedOperationException("Can not open non-existing file in read-only mode.");
+        }
+
+        Engine engine;
+
+        if(!_appendStorage){
+            Volume.Factory folFac = _file == null?
+                Volume.memoryFactory(_ifInMemoryUseDirectBuffer):
+                Volume.fileFactory(_readOnly, _RAF, _file);
+
+            engine = _journalEnabled ?
+                new StorageJournaled(folFac, _freeSpaceReclaimDisabled, _deleteFilesAfterClose, _failOnWrongHeader, _readOnly):
+                new StorageDirect(folFac, _freeSpaceReclaimDisabled, _deleteFilesAfterClose , _failOnWrongHeader, _readOnly);
+        }else{
+            if(_file==null) throw new UnsupportedOperationException("Append Storage format is not supported with in-memory dbs");
+            engine = new StorageAppend(_file, _RAF, _readOnly, !_journalEnabled);
+        }
+
+        AsyncWriteEngine engineAsync = null;
+        if(_asyncWriteEnabled && !_readOnly){
+            engineAsync = new AsyncWriteEngine(engine,!_journalEnabled,  _powerSavingMode, _asyncFlushDelay);
+            engine = engineAsync;
+        }
+
+        if(_checksumEnabled){
+            engine = new ByteTransformEngine(engine, Serializer.CRC32_CHECKSUM);
+        }
+
+        if(_xteaEncryptionKey!=null){
+            engine = new ByteTransformEngine(engine, new EncryptionXTEA(_xteaEncryptionKey));
+        }
+
+
+        if(_compressionEnabled){
+            engine = new ByteTransformEngine(engine, CompressLZF.SERIALIZER);
+        }
+
+        engine = new SnapshotEngine(engine);
+
+        if(_cache == CACHE_DISABLE){
+            //do not wrap engine in cache
+        }else if(_cache == CACHE_FIXED_HASH_TABLE){
+            engine = new CacheHashTable(engine,_cacheSize);
+        }else if (_cache == CACHE_HARD_REF){
+            engine = new CacheHardRef(engine,_cacheSize);
+        }else if (_cache == CACHE_WEAK_REF){
+            engine = new CacheWeakSoftRef(engine,true);
+        }else if (_cache == CACHE_SOFT_REF){
+            engine = new CacheWeakSoftRef(engine,false);
+        }else if (_cache == CACHE_LRU){
+            engine = new CacheLRU(engine, _cacheSize);
+        }
+
+
+        if(_readOnly)
+            engine = new ReadOnlyEngine(engine);
+
+        if(engineAsync!=null)
+            engineAsync.setParentEngineReference(engine);
+
+        if(_closeOnJvmShutdown){
+            final Engine engine2 = engine;
+            Runtime.getRuntime().addShutdownHook(new Thread("JDBM shutdown") {
+                @Override
+                public void run() {
+                    if(!engine2.isClosed())
+                        engine2.close();
+                }
+            });
+        }
+
+        return engine;
+    }
+
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataInput2.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataInput2.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataInput2.java	(revision 29363)
@@ -0,0 +1,133 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.DataInput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+/**
+ * Wraps {@link ByteBuffer} and provides {@link DataInput}
+ *
+ * @author Jan Kotek
+ */
+public final class DataInput2 implements DataInput {
+
+    ByteBuffer buf;
+    int pos;
+
+    public DataInput2(final ByteBuffer buf, final int pos) {
+        this.buf = buf;
+        this.pos = pos;
+    }
+
+    public DataInput2(byte[] b) {
+        //TODO create implementation which uses raw byte[] and replace all refs
+        this(ByteBuffer.wrap(b),0);
+    }
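+    /*
+     * Illustrative usage (a minimal sketch, not part of the original MapDB source;
+     * DataOutput2/DataInput2 are package-private, so this only works inside org.mapdb):
+     *
+     *   DataOutput2 out = new DataOutput2();
+     *   out.writeLong(42L);
+     *   DataInput2 in = new DataInput2(out.copyBytes());
+     *   long v = in.readLong();   // 42L; 'pos' has advanced by 8
+     */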
+
+    @Override
+    public void readFully(byte[] b) throws IOException {
+        readFully(b, 0, b.length);
+    }
+
+    @Override
+    public void readFully(byte[] b, int off, int len) throws IOException {
+        //naive, but the only thread-safe way
+        //TODO investigate
+        for(int i=off;i<off+len;i++){
+            b[i] = readByte();
+        }
+    }
+
+    @Override
+    public int skipBytes(final int n) throws IOException {
+        pos +=n;
+        return n;
+    }
+
+    @Override
+    public boolean readBoolean() throws IOException {
+        return buf.get(pos++) ==1;
+    }
+
+    @Override
+    public byte readByte() throws IOException {
+        return buf.get(pos++);
+    }
+
+    @Override
+    public int readUnsignedByte() throws IOException {
+        return buf.get(pos++)& 0xff;
+    }
+
+    @Override
+    public short readShort() throws IOException {
+        final short ret = buf.getShort(pos);
+        pos+=2;
+        return ret;
+    }
+
+    @Override
+    public int readUnsignedShort() throws IOException {
+        return (( (buf.get(pos++) & 0xff) << 8) |
+                ( (buf.get(pos++) & 0xff)));
+    }
+
+    @Override
+    public char readChar() throws IOException {
+        return (char) readInt();
+    }
+
+    @Override
+    public int readInt() throws IOException {
+        final int ret = buf.getInt(pos);
+        pos+=4;
+        return ret;
+    }
+
+    @Override
+    public long readLong() throws IOException {
+        final long ret = buf.getLong(pos);
+        pos+=8;
+        return ret;
+    }
+
+    @Override
+    public float readFloat() throws IOException {
+        final float ret = buf.getFloat(pos);
+        pos+=4;
+        return ret;
+    }
+
+    @Override
+    public double readDouble() throws IOException {
+        final double ret = buf.getDouble(pos);
+        pos+=8;
+        return ret;
+    }
+
+    @Override
+    public String readLine() throws IOException {
+        return readUTF();
+    }
+
+    @Override
+    public String readUTF() throws IOException {
+        return SerializerBase.deserializeString(this);
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataOutput2.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataOutput2.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/DataOutput2.java	(revision 29363)
@@ -0,0 +1,145 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+/**
+ * Provides a {@link DataOutput} implementation on top of a growable {@code byte[]}.
+ * <p/>
+ * {@link java.io.ByteArrayOutputStream} is not used as it requires {@code byte[]} copying.
+ *
+ * @author Jan Kotek
+ */
+public final class DataOutput2 implements DataOutput {
+
+    byte[] buf;
+    int pos;
+
+    DataOutput2(){
+        pos = 0;
+        buf = new byte[16]; //TODO take hint from serializer for initial size
+    }
+
+    byte[] copyBytes(){
+        return Arrays.copyOf(buf, pos);
+    }
+
+    /**
+     * Makes sure there will be enough space in the buffer to write n bytes.
+     */
+    private void ensureAvail(final int n) {
+        if (pos + n >= buf.length) {
+            int newSize = Math.max(pos + n, buf.length * 2);
+            buf = Arrays.copyOf(buf, newSize);
+        }
+    }
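+    /*
+     * Illustration of the growth behaviour (a minimal sketch, not part of the
+     * original MapDB source): starting from the 16-byte initial buffer,
+     * ensureAvail() doubles the capacity (or jumps straight to pos + n if that
+     * is larger), and copyBytes() trims the result to exactly 'pos' bytes:
+     *
+     *   DataOutput2 out = new DataOutput2();
+     *   for (int i = 0; i < 100; i++) out.writeByte(i);  // buf grows 16 -> 32 -> 64 -> 128
+     *   byte[] exact = out.copyBytes();                  // length == 100
+     */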
+
+
+    @Override
+    public void write(final int b) throws IOException {
+        ensureAvail(1);
+        buf[pos++] = (byte) b;
+    }
+
+    @Override
+    public void write(final byte[] b) throws IOException {
+        write(b, 0, b.length);
+    }
+
+    @Override
+    public void write(final byte[] b, final int off, final int len) throws IOException {
+        ensureAvail(len);
+        System.arraycopy(b, off, buf, pos, len);
+        pos += len;
+    }
+
+    @Override
+    public void writeBoolean(final boolean v) throws IOException {
+        ensureAvail(1);
+        buf[pos++] = (byte) (v ? 1 : 0);
+    }
+
+    @Override
+    public void writeByte(final int v) throws IOException {
+        ensureAvail(1);
+        buf[pos++] = (byte) (v);
+    }
+
+    @Override
+    public void writeShort(final int v) throws IOException {
+        ensureAvail(2);
+        buf[pos++] = (byte) (0xff & (v >> 8));
+        buf[pos++] = (byte) (0xff & (v));
+    }
+
+    @Override
+    public void writeChar(final int v) throws IOException {
+        writeInt(v);
+    }
+
+    @Override
+    public void writeInt(final int v) throws IOException {
+        ensureAvail(4);
+        buf[pos++] = (byte) (0xff & (v >> 24));
+        buf[pos++] = (byte) (0xff & (v >> 16));
+        buf[pos++] = (byte) (0xff & (v >> 8));
+        buf[pos++] = (byte) (0xff & (v));
+    }
+
+    @Override
+    public void writeLong(final long v) throws IOException {
+        ensureAvail(8);
+        buf[pos++] = (byte) (0xff & (v >> 56));
+        buf[pos++] = (byte) (0xff & (v >> 48));
+        buf[pos++] = (byte) (0xff & (v >> 40));
+        buf[pos++] = (byte) (0xff & (v >> 32));
+        buf[pos++] = (byte) (0xff & (v >> 24));
+        buf[pos++] = (byte) (0xff & (v >> 16));
+        buf[pos++] = (byte) (0xff & (v >> 8));
+        buf[pos++] = (byte) (0xff & (v));
+    }
+
+    @Override
+    public void writeFloat(final float v) throws IOException {
+        ensureAvail(4);
+        writeInt(Float.floatToIntBits(v));
+    }
+
+    @Override
+    public void writeDouble(final double v) throws IOException {
+        ensureAvail(8);
+        writeLong(Double.doubleToLongBits(v));
+    }
+
+    @Override
+    public void writeBytes(final String s) throws IOException {
+        writeUTF(s);
+    }
+
+    @Override
+    public void writeChars(final String s) throws IOException {
+        writeUTF(s);
+    }
+
+    @Override
+    public void writeUTF(final String s) throws IOException {
+        SerializerBase.serializeString(this, s);
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/EncryptionXTEA.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/EncryptionXTEA.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/EncryptionXTEA.java	(revision 29363)
@@ -0,0 +1,254 @@
+/*
+ * Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
+ * Version 1.0, and under the Eclipse Public License, Version 1.0
+ * (http://h2database.com/html/license.html).
+ * Initial Developer: H2 Group
+ */
+package org.mapdb;
+
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+/**
+ * An implementation of the XTEA block cipher algorithm.
+ * <p>
+ * This implementation uses 32 rounds.
+ * The best attack reported as of 2009 is 36 rounds (Wikipedia).
+ * <p/>
+ * It requires a 32-byte encryption key, so a SHA-256 password hash is used.
+ */
+public final class EncryptionXTEA implements Serializer<byte[]>{
+
+    /**
+     * Block sizes are always multiples of this number.
+     */
+    public static final int ALIGN = 16;
+
+    private static final int DELTA = 0x9E3779B9;
+    private final int k0, k1, k2, k3, k4, k5, k6, k7, k8, k9, k10, k11, k12, k13, k14, k15;
+    private final int k16, k17, k18, k19, k20, k21, k22, k23, k24, k25, k26, k27, k28, k29, k30, k31;
+
+
+    public EncryptionXTEA(byte[] password) {
+        byte[] b = getHash(password, false);
+        int[] key = new int[4];
+        for (int i = 0; i < 16;) {
+            key[i / 4] = (b[i++] << 24) + ((b[i++] & 255) << 16) + ((b[i++] & 255) << 8) + (b[i++] & 255);
+        }
+        int[] r = new int[32];
+        for (int i = 0, sum = 0; i < 32;) {
+            r[i++] = sum + key[sum & 3];
+            sum += DELTA;
+            r[i++] = sum + key[ (sum >>> 11) & 3];
+        }
+        k0 = r[0]; k1 = r[1]; k2 = r[2]; k3 = r[3]; k4 = r[4]; k5 = r[5]; k6 = r[6]; k7 = r[7];
+        k8 = r[8]; k9 = r[9]; k10 = r[10]; k11 = r[11]; k12 = r[12]; k13 = r[13]; k14 = r[14]; k15 = r[15];
+        k16 = r[16]; k17 = r[17]; k18 = r[18]; k19 = r[19]; k20 = r[20]; k21 = r[21]; k22 = r[22]; k23 = r[23];
+        k24 = r[24]; k25 = r[25]; k26 = r[26]; k27 = r[27]; k28 = r[28]; k29 = r[29]; k30 = r[30]; k31 = r[31];
+    }
+
+
+    public void encrypt(byte[] bytes, int off, int len) {
+        if (len % ALIGN != 0) {
+            throw new InternalError("unaligned len " + len);
+        }
+        for (int i = off; i < off + len; i += 8) {
+            encryptBlock(bytes, bytes, i);
+        }
+    }
+
+    public void decrypt(byte[] bytes, int off, int len) {
+        if (len % ALIGN != 0) {
+            throw new InternalError("unaligned len " + len);
+        }
+        for (int i = off; i < off + len; i += 8) {
+            decryptBlock(bytes, bytes, i);
+        }
+    }
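+    /*
+     * Illustrative encrypt/decrypt round trip (a minimal sketch, not part of the
+     * original MapDB source); buffers must be a multiple of ALIGN (16) bytes:
+     *
+     *   EncryptionXTEA xtea = new EncryptionXTEA("password".getBytes());
+     *   byte[] block = new byte[32];              // already padded to ALIGN
+     *   xtea.encrypt(block, 0, block.length);     // in-place
+     *   xtea.decrypt(block, 0, block.length);     // restores the original bytes
+     */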
+
+    private void encryptBlock(byte[] in, byte[] out, int off) {
+        int y = (in[off] << 24) | ((in[off+1] & 255) << 16) | ((in[off+2] & 255) << 8) | (in[off+3] & 255);
+        int z = (in[off+4] << 24) | ((in[off+5] & 255) << 16) | ((in[off+6] & 255) << 8) | (in[off+7] & 255);
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k0; z += (((y >>> 5) ^ (y << 4)) + y) ^ k1;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k2; z += (((y >>> 5) ^ (y << 4)) + y) ^ k3;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k4; z += (((y >>> 5) ^ (y << 4)) + y) ^ k5;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k6; z += (((y >>> 5) ^ (y << 4)) + y) ^ k7;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k8; z += (((y >>> 5) ^ (y << 4)) + y) ^ k9;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k10; z += (((y >>> 5) ^ (y << 4)) + y) ^ k11;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k12; z += (((y >>> 5) ^ (y << 4)) + y) ^ k13;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k14; z += (((y >>> 5) ^ (y << 4)) + y) ^ k15;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k16; z += (((y >>> 5) ^ (y << 4)) + y) ^ k17;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k18; z += (((y >>> 5) ^ (y << 4)) + y) ^ k19;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k20; z += (((y >>> 5) ^ (y << 4)) + y) ^ k21;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k22; z += (((y >>> 5) ^ (y << 4)) + y) ^ k23;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k24; z += (((y >>> 5) ^ (y << 4)) + y) ^ k25;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k26; z += (((y >>> 5) ^ (y << 4)) + y) ^ k27;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k28; z += (((y >>> 5) ^ (y << 4)) + y) ^ k29;
+        y += (((z << 4) ^ (z >>> 5)) + z) ^ k30; z += (((y >>> 5) ^ (y << 4)) + y) ^ k31;
+        out[off] = (byte) (y >> 24); out[off+1] = (byte) (y >> 16); out[off+2] = (byte) (y >> 8); out[off+3] = (byte) y;
+        out[off+4] = (byte) (z >> 24); out[off+5] = (byte) (z >> 16); out[off+6] = (byte) (z >> 8); out[off+7] = (byte) z;
+    }
+
+    private void decryptBlock(byte[] in, byte[] out, int off) {
+        int y = (in[off] << 24) | ((in[off+1] & 255) << 16) | ((in[off+2] & 255) << 8) | (in[off+3] & 255);
+        int z = (in[off+4] << 24) | ((in[off+5] & 255) << 16) | ((in[off+6] & 255) << 8) | (in[off+7] & 255);
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k31; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k30;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k29; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k28;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k27; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k26;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k25; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k24;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k23; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k22;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k21; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k20;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k19; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k18;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k17; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k16;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k15; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k14;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k13; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k12;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k11; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k10;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k9; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k8;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k7; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k6;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k5; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k4;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k3; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k2;
+        z -= (((y >>> 5) ^ (y << 4)) + y) ^ k1; y -= (((z << 4) ^ (z >>> 5)) + z) ^ k0;
+        out[off] = (byte) (y >> 24); out[off+1] = (byte) (y >> 16); out[off+2] = (byte) (y >> 8); out[off+3] = (byte) y;
+        out[off+4] = (byte) (z >> 24); out[off+5] = (byte) (z >> 16); out[off+6] = (byte) (z >> 8); out[off+7] = (byte) z;
+    }
+
+
+    /**
+     * The first 32 bits of the fractional parts of the cube roots of the first
+     * sixty-four prime numbers. Used for SHA256 password hash
+     */
+    private static final int[] K = { 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
+            0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, 0xd807aa98,
+            0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe,
+            0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, 0x0fc19dc6,
+            0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
+            0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3,
+            0xd5a79147, 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138,
+            0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e,
+            0x92722c85, 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
+            0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, 0x19a4c116,
+            0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
+            0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814,
+            0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 };
+
+
+    /**
+     * Calculate the hash code for the given data.
+     *
+     * @param data the data to hash
+     * @param nullData whether the data array should be zeroed out after the hash has been calculated
+     * @return the hash code
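+     * <p/>
+     * A minimal usage sketch (variable names are illustrative):
+     * <pre>
+     *     byte[] secret = ...;                    // caller-provided bytes
+     *     byte[] digest = getHash(secret, true);  // 32-byte SHA-256 digest; 'secret' is zeroed afterwards
+     * </pre>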
+     */
+    public static byte[] getHash(byte[] data, boolean nullData) {
+        int byteLen = data.length;
+        int intLen = ((byteLen + 9 + 63) / 64) * 16;
+        byte[] bytes = new byte[intLen * 4];
+        System.arraycopy(data, 0, bytes, 0, byteLen);
+        if (nullData) {
+            Arrays.fill(data, (byte) 0);
+        }
+        bytes[byteLen] = (byte) 0x80;
+        int[] buff = new int[intLen];
+        for (int i = 0, j = 0; j < intLen; i += 4, j++) {
+            buff[j] = readInt(bytes, i);
+        }
+        buff[intLen - 2] = byteLen >>> 29;
+        buff[intLen - 1] = byteLen << 3;
+        int[] w = new int[64];
+        int[] hh = { 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
+                0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19 };
+        for (int block = 0; block < intLen; block += 16) {
+            for (int i = 0; i < 16; i++) {
+                w[i] = buff[block + i];
+            }
+            for (int i = 16; i < 64; i++) {
+                int x = w[i - 2];
+                int theta1 = rot(x, 17) ^ rot(x, 19) ^ (x >>> 10);
+                x = w[i - 15];
+                int theta0 = rot(x, 7) ^ rot(x, 18) ^ (x >>> 3);
+                w[i] = theta1 + w[i - 7] + theta0 + w[i - 16];
+            }
+
+            int a = hh[0], b = hh[1], c = hh[2], d = hh[3];
+            int e = hh[4], f = hh[5], g = hh[6], h = hh[7];
+
+            for (int i = 0; i < 64; i++) {
+                int t1 = h + (rot(e, 6) ^ rot(e, 11) ^ rot(e, 25))
+                        + ((e & f) ^ ((~e) & g)) + K[i] + w[i];
+                int t2 = (rot(a, 2) ^ rot(a, 13) ^ rot(a, 22))
+                        + ((a & b) ^ (a & c) ^ (b & c));
+                h = g;
+                g = f;
+                f = e;
+                e = d + t1;
+                d = c;
+                c = b;
+                b = a;
+                a = t1 + t2;
+            }
+            hh[0] += a;
+            hh[1] += b;
+            hh[2] += c;
+            hh[3] += d;
+            hh[4] += e;
+            hh[5] += f;
+            hh[6] += g;
+            hh[7] += h;
+        }
+        byte[] result = new byte[32];
+        for (int i = 0; i < 8; i++) {
+            writeInt(result, i * 4, hh[i]);
+        }
+        Arrays.fill(w, 0);
+        Arrays.fill(buff, 0);
+        Arrays.fill(hh, 0);
+        Arrays.fill(bytes, (byte) 0);
+        return result;
+    }
+
+    private static int rot(int i, int count) {
+        return (i << (32 - count)) | (i >>> count);
+    }
+
+    private static int readInt(byte[] b, int i) {
+        return ((b[i] & 0xff) << 24) + ((b[i + 1] & 0xff) << 16)
+                + ((b[i + 2] & 0xff) << 8) + (b[i + 3] & 0xff);
+    }
+
+    private static void writeInt(byte[] b, int i, int value) {
+        b[i] = (byte) (value >> 24);
+        b[i + 1] = (byte) (value >> 16);
+        b[i + 2] = (byte) (value >> 8);
+        b[i + 3] = (byte) value;
+    }
+
+
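+    // Record format: a single unsigned byte with the number of padding bytes added to reach
+    // ALIGN-byte block alignment, followed by the encrypted (value + padding) bytes; null values write nothing.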
+    @Override
+    public void serialize(DataOutput out, byte[] value) throws IOException {
+        if(value==null) return;
+        int len = value.length;
+        if(len%ALIGN!=0)
+            len += ALIGN - len%ALIGN;
+        //write length difference
+        out.writeByte(len-value.length);
+        //write actual data
+        byte[] encrypted = Arrays.copyOf(value,len);
+        encrypt(encrypted,0, encrypted.length);
+        out.write(encrypted);
+    }
+
+    @Override
+    public byte[] deserialize(DataInput in, int available) throws IOException {
+        if(available==0) return null;
+        int cut = in.readUnsignedByte(); //number of padding bytes added for block alignment
+        byte[] b = new byte[available-1];
+        in.readFully(b);
+        decrypt(b, 0, b.length);
+        if(cut!=0)
+            b = Arrays.copyOf(b, b.length-cut);
+        return b;
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Engine.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Engine.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Engine.java	(revision 29363)
@@ -0,0 +1,190 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+/**
+ * Central interface for managing records.
+ * It is a primitive key-value store, with <code>long</code> keys and object instances as values.
+ * It contains basic CRUD operations.
+ * <p/>
+ * MapDB (unlike other DBs) does not use binary <code>byte[]</code> for its values.
+ * Instead it takes an object instance together with a serializer, and manages serialization/deserialization itself.
+ * Integrating serialization into the database gives us a lot of flexibility.
+ * <p/>
+ * There is a {@link Storage} class which implements basic persistence. Most MapDB features
+ * come from {@link EngineWrapper}s; they are stacked on top of each other
+ * to provide asynchronous writes, an instance cache, encryption and so on.
+ * The <code>Engine</code> stack is a very elegant and uniform way to handle additional functionality;
+ * other DBs need an ORM framework to achieve similar features.
+ * <p/>
+ * In the default configuration MapDB runs with this <code>Engine</code> stack:
+ * <pre>
+ *                     [DISK IO]
+ *       StorageJournaled - permanent record storage with journaled transactions
+ *       AsyncWriteEngine - asynchronous writes to storage
+ *    ByteTransformEngine - compression or encryption (optional)
+ *         CacheHashTable - instance cache
+ *         SnapshotEngine - support for snapshots
+ *                     [USER API]
+ * </pre>
+ *
+ * <p/>
+ * Engine uses a 'recid' to identify records. There is zero error handling in case the recid is invalid
+ * (a random number or an already deleted record). Passing an illegal recid may result in anything
+ * (returning null, throwing EOF or even corrupting the store). Engine is considered a low-level component
+ * and it is the responsibility of upper layers (collections) to ensure the recid is consistent.
+ * The lack of error handling is a trade-off for speed (similar to manual memory management in C++).
+ * <p/>
+ * Engine must support {@code null} record values. You may insert, update and fetch null records.
+ * Nulls play an important role in recid preallocation and asynchronous writes.
+ * <p/>
+ * A recid can be reused after it has been deleted. If your application relies on recids being unique,
+ * you should update the record with a null value instead of deleting it.
+ * A null record consumes only 8 bytes in the store and is preserved during defragmentation.
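+ * <p/>
+ * A minimal usage sketch (illustrative only; it assumes an already opened <code>Engine</code>
+ * instance and uses {@link Serializer#BASIC_SERIALIZER} purely for brevity):
+ * <pre>
+ *     Engine engine = ...;                                            // obtained elsewhere
+ *     long recid = engine.put("hello", Serializer.BASIC_SERIALIZER);  // insert
+ *     Object val  = engine.get(recid, Serializer.BASIC_SERIALIZER);   // fetch
+ *     engine.update(recid, "world", Serializer.BASIC_SERIALIZER);     // overwrite
+ *     engine.delete(recid, Serializer.BASIC_SERIALIZER);              // remove
+ *     engine.commit();
+ *     engine.close();
+ * </pre>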
+ *
+ * @author Jan Kotek
+ */
+public interface Engine {
+
+    long NAME_DIR_RECID = 1;
+    long CLASS_INFO_RECID = 2;
+    long LAST_RESERVED_RECID = 7;
+
+
+    /**
+     * Insert new record.
+     *
+     * @param value record to be added
+     * @param serializer used to convert record into/from binary form
+     * @param <A> type of record
+     * @return recid (record identifier) under which record is stored.
+     */
+    <A> long put(A value, Serializer<A> serializer);
+
+    /**
+     * Get existing record.
+     * <p/>
+     * The recid must be a number returned by the 'put' method.
+     * Behaviour for an invalid recid (a random number or an already deleted record)
+     * is undefined; typically it returns null or throws 'EndOfFileException'.
+     *
+     * @param recid (record identifier) under which record was persisted
+     * @param serializer used to deserialize record from binary form
+     * @param <A> record type
+     * @return record matching given recid, or null if record is not found under given recid.
+     */
+    <A> A get(long recid, Serializer<A> serializer);
+
+    /**
+     * Update existing record with new value.
+     * <p/>
+     * The recid must be a number returned by the 'put' method.
+     * Behaviour for an invalid recid (a random number or an already deleted record)
+     * is undefined; typically it throws 'EndOfFileException',
+     * but it may also corrupt the store.
+     *
+     * @param recid (record identifier) under which record was persisted.
+     * @param value new record value to be stored
+     * @param serializer used to serialize record into binary form
+     * @param <A> record type
+     */
+    <A> void update(long recid, A value, Serializer<A> serializer);
+
+
+    /**
+     * Updates existing record in atomic <a href="http://en.wikipedia.org/wiki/Compare-and-swap">(Compare And Swap)</a> manner.
+     * Value is modified only if the old value matches the expected value. There are three ways to match values; MapDB may use any of them:
+     * <ol>
+     *    <li>Equality check <code>oldValue==expectedOldValue</code> when old value is found in instance cache</li>
+     *    <li>Deserializing <code>oldValue</code> using <code>serializer</code> and checking <code>oldValue.equals(expectedOldValue)</code></li>
+     *    <li>Serializing <code>expectedOldValue</code> using <code>serializer</code> and comparing the binary array with the already serialized <code>oldValue</code></li>
+     * </ol>
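+     * <p/>
+     * For example (an illustrative sketch; the serializer choice is an assumption):
+     * <pre>
+     *     boolean swapped = engine.compareAndSwap(recid, "old", "new", Serializer.BASIC_SERIALIZER);
+     * </pre>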
+     * <p/>
+     * The recid must be a number returned by the 'put' method.
+     * Behaviour for an invalid recid (a random number or an already deleted record)
+     * is undefined; typically it throws 'EndOfFileException',
+     * but it may also corrupt the store.
+     *
+     * @param recid (record identifier) under which record was persisted.
+     * @param expectedOldValue old value to be compared with existing record
+     * @param newValue to be written if values are matching
+     * @param serializer used to serialize record into binary form
+     * @param <A> record type
+     * @return true if values matched and newValue was written
+     */
+    <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer);
+
+    /**
+     * Remove existing record from store/cache
+     *
+     * <p/>
+     * The recid must be a number returned by the 'put' method.
+     * Behaviour for an invalid recid (a random number or an already deleted record)
+     * is undefined; typically it throws 'EndOfFileException',
+     * but it may also corrupt the store.
+     *
+     * @param recid (record identifier) under which the record was persisted
+     * @param serializer which may be used in some circumstances to deserialize and store the old object
+     */
+    <A> void delete(long recid, Serializer<A>  serializer);
+
+
+
+    /**
+     * Close the store/cache. This method must be called before the JVM exits to flush all caches and prevent store corruption.
+     * It also releases resources used by MapDB (disk, memory, ...).
+     * <p/>
+     * The Engine can no longer be used after this method has been called. If the Engine is used after closing, it may
+     * throw any exception, including <code>NullPointerException</code>.
+     * <p/>
+     * There is a configuration option {@link DBMaker#closeOnJvmShutdown()} which uses a shutdown hook to automatically
+     * close the Engine when the JVM shuts down.
+     */
+    void close();
+
+
+    /**
+     * Checks whether Engine was closed.
+     *
+     * @return true if engine was closed
+     */
+    public boolean isClosed();
+
+    /**
+     * Makes all changes made since the previous commit/rollback permanent.
+     * In transactional mode (on by default) this means creating a journal file and replaying it to storage.
+     * In other modes it may flush disk caches or do nothing at all (check your config options).
+     */
+    void commit();
+
+    /**
+     * Undoes all changes made in the current transaction.
+     * If transactions are disabled it throws {@link UnsupportedOperationException}.
+     *
+     * @throws UnsupportedOperationException if transactions are disabled
+     */
+    void rollback() throws UnsupportedOperationException;
+
+    /**
+     * Checks whether you can write into this Engine. It may be read-only in some cases (snapshots, read-only files).
+     *
+     * @return true if engine is read-only
+     */
+    boolean isReadOnly();
+
+    void compact();
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/EngineWrapper.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/EngineWrapper.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/EngineWrapper.java	(revision 29363)
@@ -0,0 +1,381 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+
+import java.io.IOError;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+
+
+/**
+ * Engine wrapper (adapter). It implements all methods of the Engine interface by delegating to a wrapped Engine.
+ *
+ * @author Jan Kotek
+ */
+public abstract class EngineWrapper implements Engine{
+
+    private Engine engine;
+
+    protected EngineWrapper(Engine engine){
+        if(engine == null) throw new IllegalArgumentException();
+        this.engine = engine;
+    }
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        return getWrappedEngine().put(value, serializer);
+    }
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        return getWrappedEngine().get(recid, serializer);
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        getWrappedEngine().update(recid, value, serializer);
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        return getWrappedEngine().compareAndSwap(recid, expectedOldValue, newValue, serializer);
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer) {
+        getWrappedEngine().delete(recid, serializer);
+    }
+
+    @Override
+    public void close() {
+        Engine e = engine;
+        if(e!=null)
+            e.close();
+        engine = null;
+    }
+
+    @Override
+    public boolean isClosed() {
+        return engine==null;
+    }
+
+    @Override
+    public void commit() {
+        getWrappedEngine().commit();
+    }
+
+    @Override
+    public void rollback() {
+        getWrappedEngine().rollback();
+    }
+
+
+    @Override
+    public boolean isReadOnly() {
+        return getWrappedEngine().isReadOnly();
+    }
+
+    @Override
+    public void compact() {
+        getWrappedEngine().compact();
+    }
+
+    /**
+     * Wraps an <code>Engine</code> and throws
+     * <code>UnsupportedOperationException("Read-only")</code>
+     * on any modification attempt.
+     */
+    public static class ReadOnlyEngine extends EngineWrapper {
+
+
+        public ReadOnlyEngine(Engine engine){
+            super(engine);
+        }
+
+        @Override
+        public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+        @Override
+        public <A> long put(A value, Serializer<A> serializer) {
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+        @Override
+        public <A> void update(long recid, A value, Serializer<A> serializer) {
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+        @Override
+        public <A> void delete(long recid, Serializer<A> serializer){
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+
+
+        @Override
+        public void commit() {
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+        @Override
+        public void rollback() {
+            throw new UnsupportedOperationException("Read-only");
+        }
+
+
+        @Override
+        public boolean isReadOnly() {
+            return true;
+        }
+
+    }
+
+    /**
+     * Wrapper which transforms binary data. Useful for compression or encryption.
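+     * <p/>
+     * For example (a sketch; <code>blockSerializer</code> here stands for any <code>Serializer&lt;byte[]&gt;</code>
+     * that compresses or encrypts the raw record bytes):
+     * <pre>
+     *     Engine transformed = new EngineWrapper.ByteTransformEngine(baseEngine, blockSerializer);
+     * </pre>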
+     */
+    public static class ByteTransformEngine extends  EngineWrapper {
+
+        //TODO compare and swap
+
+        protected Serializer<byte[]> blockSerializer;
+
+        public ByteTransformEngine(Engine engine, Serializer<byte[]> blockSerializer) {
+            super(engine);
+            this.blockSerializer = blockSerializer;
+        }
+
+        @Override
+        public <A> long put(A value, Serializer<A> serializer) {
+            //serialize to byte array, and pass it down with alternative serializer
+            try {
+                Engine e = getWrappedEngine();
+                Serializer<byte[]> ser = checkClosed(blockSerializer);
+
+                if(value ==null){
+                    return e.put(null, ser);
+                }
+
+                DataOutput2 out = new DataOutput2();
+                serializer.serialize(out,value);
+                byte[] b = out.copyBytes();
+
+                return e.put(b, ser);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public <A> A get(long recid, Serializer<A> serializer) {
+            //get decompressed array
+            try {
+                byte[] b = getWrappedEngine().get(recid, checkClosed(blockSerializer));
+                if(b==null) return null;
+
+                //deserialize
+                DataInput2 in = new DataInput2(ByteBuffer.wrap(b),0);
+
+                return serializer.deserialize(in,b.length);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public <A> void update(long recid, A value, Serializer<A> serializer) {
+            //serialize to byte array, and pass it down with alternative serializer
+            try {
+                DataOutput2 out = new DataOutput2();
+                serializer.serialize(out,value);
+                byte[] b = out.copyBytes();
+
+                getWrappedEngine().update(recid, b, checkClosed(blockSerializer));
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+
+        @Override
+        public void close() {
+            super.close();
+            blockSerializer = null;
+        }
+
+
+    }
+
+    public static class DebugEngine extends EngineWrapper{
+
+        //TODO CAS
+
+        final Queue<Record> records = new ConcurrentLinkedQueue<Record>();
+
+
+        protected static final class Record{
+            final long recid;
+            final String desc;
+//            final String thread = Thread.currentThread().getName();
+//            final Exception stackTrace = new Exception();
+
+            public Record(long recid, String desc) {
+                this.recid = recid;
+                this.desc = desc;
+            }
+        }
+
+        public DebugEngine(Engine engine) {
+            super(engine);
+        }
+
+        @Override
+        public <A> long put(A value, Serializer<A> serializer) {
+            long recid =  super.put(value, serializer);
+            records.add(new Record(recid,
+                    "INSERT \n  val:"+value+"\n  ser:"+serializer
+            ));
+            return recid;
+        }
+
+        @Override
+        public <A> A get(long recid, Serializer<A> serializer) {
+            A ret =  super.get(recid, serializer);
+            records.add(new Record(recid,
+                    "GET \n  val:"+ret+"\n  ser:"+serializer
+            ));
+            return ret;
+        }
+
+        @Override
+        public <A> void update(long recid, A value, Serializer<A> serializer) {
+            super.update(recid, value, serializer);
+            records.add(new Record(recid,
+                    "UPDATE \n  val:"+value+"\n  ser:"+serializer
+            ));
+
+        }
+
+        @Override
+        public <A> void delete(long recid, Serializer<A> serializer){
+            super.delete(recid,serializer);
+            records.add(new Record(recid,"DEL"));
+        }
+    }
+
+    public Engine getWrappedEngine(){
+        return checkClosed(engine);
+    }
+
+    protected static <V> V checkClosed(V v){
+        if(v==null) throw new IllegalAccessError("DB has been closed");
+        return v;
+    }
+
+
+    /**
+     * Checks that record instances were not modified while in the cache.
+     * Useful to diagnose strange problems with the instance cache.
+     */
+    public static class ImmutabilityCheckEngine extends EngineWrapper{
+
+        protected static class Item {
+            final Serializer serializer;
+            final Object item;
+            final int oldChecksum;
+
+            public Item(Serializer serializer, Object item) {
+                if(item==null || serializer==null) throw new AssertionError("null");
+                this.serializer = serializer;
+                this.item = item;
+                oldChecksum = checksum();
+                if(oldChecksum!=checksum()) throw new AssertionError("inconsistent serialization");
+            }
+
+            private int checksum(){
+                try {
+                    DataOutput2 out = new DataOutput2();
+                    serializer.serialize(out, item);
+                    byte[] bb = out.copyBytes();
+                    return Arrays.hashCode(bb);
+                }catch(IOException e){
+                    throw new IOError(e);
+                }
+            }
+
+            void check(){
+                int newChecksum = checksum();
+                if(oldChecksum!=newChecksum) throw new AssertionError("Record instance was modified: \n  "+item+"\n  "+serializer);
+            }
+        }
+
+        protected LongConcurrentHashMap<Item> items = new LongConcurrentHashMap<Item>();
+
+        protected ImmutabilityCheckEngine(Engine engine) {
+            super(engine);
+        }
+
+        @Override
+        public <A> A get(long recid, Serializer<A> serializer) {
+            Item item = items.get(recid);
+            if(item!=null) item.check();
+            A ret = super.get(recid, serializer);
+            if(ret!=null) items.put(recid, new Item(serializer,ret));
+            return ret;
+        }
+
+        @Override
+        public <A> long put(A value, Serializer<A> serializer) {
+            long ret =  super.put(value, serializer);
+            if(value!=null) items.put(ret, new Item(serializer,value));
+            return ret;
+        }
+
+        @Override
+        public <A> void update(long recid, A value, Serializer<A> serializer) {
+            Item item = items.get(recid);
+            if(item!=null) item.check();
+            super.update(recid, value, serializer);
+            if(value!=null) items.put(recid, new Item(serializer,value));
+        }
+
+        @Override
+        public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+            Item item = items.get(recid);
+            if(item!=null) item.check();
+            boolean ret = super.compareAndSwap(recid, expectedOldValue, newValue, serializer);
+            if(ret && newValue!=null) items.put(recid, new Item(serializer,newValue));
+            return ret;
+        }
+
+        @Override
+        public void close() {
+            super.close();
+            for(Iterator<Item> iter = items.valuesIterator(); iter.hasNext();){
+                iter.next().check();
+            }
+            items.clear();
+        }
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Fun.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Fun.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Fun.java	(revision 29363)
@@ -0,0 +1,286 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.Serializable;
+
+/**
+ * Functional utilities: tuples, functions, callback interfaces, etc.
+ *
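+ * <p/>
+ * For example (an illustrative sketch):
+ * <pre>
+ *     Fun.Tuple2&lt;String,Integer&gt; t = Fun.t2("answer", 42);
+ *     // tuples compare component by component; HI is larger than any other value
+ *     boolean smaller = t.compareTo(Fun.t2("answer", Fun.HI())) &lt; 0;    // true
+ * </pre>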
+ * @author Jan Kotek
+ */
+@SuppressWarnings({ "unchecked", "rawtypes" })
+public final class Fun {
+
+    private Fun(){}
+
+    /** Positive-infinity object; it compares larger than anything else. Used in tuple comparators.
+     * Negative infinity is represented by 'null'. */
+    public static final Object HI = new Object(){
+        @Override public String toString() {
+            return "HI";
+        }
+    };
+
+    /** autocast version of `HI`*/
+    public static final <A> A HI(){
+        return (A) HI;
+    }
+
+    public static <A,B> Tuple2<A,B> t2(A a, B b) {
+        return new Tuple2<A, B>(a,b);
+    }
+
+    public static <A,B,C> Tuple3<A,B,C> t3(A a, B b, C c) {
+        return new Tuple3<A, B, C>(a, b, c);
+    }
+
+    public static <A,B,C,D> Tuple4<A,B,C,D> t4(A a, B b, C c, D d) {
+        return new Tuple4<A, B, C, D>(a,b,c,d);
+    }
+
+
+    static public final class Tuple2<A,B> implements Comparable, Serializable {
+		
+    	private static final long serialVersionUID = -8816277286657643283L;
+		
+		final public A a;
+        final public B b;
+
+        public Tuple2(A a, B b) {
+            this.a = a;
+            this.b = b;
+        }
+
+        @Override public int compareTo(Object o) {
+            final Tuple2 oo = (Tuple2) o;
+            if(a!=oo.a){
+                if(a==null || oo.a==HI) return -1;
+                if(a==HI || oo.a==null) return 1;
+
+                final int c = ((Comparable)a).compareTo(oo.a);
+                if(c!=0) return c;
+            }
+
+            if(b!=oo.b){
+                if(b==null || oo.b==HI) return -1;
+                if(b==HI || oo.b==null) return 1;
+
+                final int i = ((Comparable)b).compareTo(oo.b);
+                if(i!=0) return i;
+            }
+            return 0;
+        }
+
+        @Override public boolean equals(Object o) {
+            if (this == o) return true;
+            if (o == null || getClass() != o.getClass()) return false;
+
+            final Tuple2 tuple2 = (Tuple2) o;
+
+            if (a != null ? !a.equals(tuple2.a) : tuple2.a != null) return false;
+            if (b != null ? !b.equals(tuple2.b) : tuple2.b != null) return false;
+
+            return true;
+        }
+
+        @Override public int hashCode() {
+            int result = a != null ? a.hashCode() : 0;
+            result = 31 * result + (b != null ? b.hashCode() : 0);
+            return result;
+        }
+
+        @Override public String toString() {
+            return "Tuple2[" + a +", "+b+"]";
+        }
+    }
+
+    static public class Tuple3<A,B,C> implements Comparable, Serializable{
+
+    	private static final long serialVersionUID = 11785034935947868L;
+    	
+		final public A a;
+        final public B b;
+        final public C c;
+
+        public Tuple3(A a, B b, C c) {
+            this.a = a;
+            this.b = b;
+            this.c = c;
+        }
+
+        @Override public int compareTo(Object o) {
+            final Tuple3 oo = (Tuple3) o;
+            if(a!=oo.a){
+                if(a==null || oo.a==HI) return -1;
+                if(a==HI ||  oo.a==null) return 1;
+
+                final int c = ((Comparable)a).compareTo(oo.a);
+                if(c!=0) return c;
+            }
+
+            if(b!=oo.b){
+                if(b==null || oo.b==HI) return -1;
+                if(b==HI || oo.b==null) return 1;
+
+                final int i = ((Comparable)b).compareTo(oo.b);
+                if(i!=0) return i;
+            }
+
+            if(c!=oo.c){
+                if(c==null || oo.c==HI) return -1;
+                if(c==HI || oo.c==null) return 1;
+
+                final int i = ((Comparable)c).compareTo(oo.c);
+                if(i!=0) return i;
+            }
+
+            return 0;
+        }
+
+
+        @Override public String toString() {
+            return "Tuple3[" + a +", "+b+", "+c+"]";
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            if (this == o) return true;
+            if (o == null || getClass() != o.getClass()) return false;
+
+            Tuple3 tuple3 = (Tuple3) o;
+
+            if (a != null ? !a.equals(tuple3.a) : tuple3.a != null) return false;
+            if (b != null ? !b.equals(tuple3.b) : tuple3.b != null) return false;
+            if (c != null ? !c.equals(tuple3.c) : tuple3.c != null) return false;
+
+            return true;
+        }
+
+        @Override
+        public int hashCode() {
+            int result = a != null ? a.hashCode() : 0;
+            result = 31 * result + (b != null ? b.hashCode() : 0);
+            result = 31 * result + (c != null ? c.hashCode() : 0);
+            return result;
+        }
+    }
+
+    static public class Tuple4<A,B,C,D> implements Comparable, Serializable{
+
+    	private static final long serialVersionUID = 1630397500758650718L;
+    	
+		final public A a;
+        final public B b;
+        final public C c;
+        final public D d;
+
+        public Tuple4(A a, B b, C c, D d) {
+            this.a = a;
+            this.b = b;
+            this.c = c;
+            this.d = d;
+        }
+
+        @Override public int compareTo(Object o) {
+            final Tuple4 oo = (Tuple4) o;
+            if(a!=oo.a){
+                if(a==null || oo.a==HI) return -1;
+                if(a==HI || oo.a==null) return 1;
+
+                final int c = ((Comparable)a).compareTo(oo.a);
+                if(c!=0) return c;
+            }
+
+            if(b!=oo.b){
+                if(b==null || oo.b==HI) return -1;
+                if(b==HI || oo.b==null) return 1;
+
+                final int i = ((Comparable)b).compareTo(oo.b);
+                if(i!=0) return i;
+            }
+
+            if(c!=oo.c){
+                if(c==null || oo.c==HI) return -1;
+                if(c==HI || oo.c==null) return 1;
+
+                final int i = ((Comparable)c).compareTo(oo.c);
+                if(i!=0) return i;
+            }
+
+            if(d!=oo.d){
+                if(d==null || oo.d==HI) return -1;
+                if(d==HI || oo.d==null) return 1;
+
+                final int i = ((Comparable)d).compareTo(oo.d);
+                if(i!=0) return i;
+            }
+
+
+            return 0;
+        }
+
+
+        @Override public String toString() {
+            return "Tuple4[" + a +", "+b+", "+c+", "+d+"]";
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            if (this == o) return true;
+            if (o == null || getClass() != o.getClass()) return false;
+
+            Tuple4 tuple4 = (Tuple4) o;
+
+            if (a != null ? !a.equals(tuple4.a) : tuple4.a != null) return false;
+            if (b != null ? !b.equals(tuple4.b) : tuple4.b != null) return false;
+            if (c != null ? !c.equals(tuple4.c) : tuple4.c != null) return false;
+            if (d != null ? !d.equals(tuple4.d) : tuple4.d != null) return false;
+
+            return true;
+        }
+
+        @Override
+        public int hashCode() {
+            int result = a != null ? a.hashCode() : 0;
+            result = 31 * result + (b != null ? b.hashCode() : 0);
+            result = 31 * result + (c != null ? c.hashCode() : 0);
+            result = 31 * result + (d != null ? d.hashCode() : 0);
+            return result;
+        }
+    }
+
+
+    public interface Function1<R,A>{
+        R run(A a);
+    }
+
+    public interface Function2<R,A,B>{
+        R run(A a, B b);
+    }
+
+    public interface Runnable2<A,B>{
+        void run(A a, B b);
+    }
+
+    public interface Runnable3<A,B,C>{
+        void run(A a, B b, C c);
+    }
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/HTreeMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/HTreeMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/HTreeMap.java	(revision 29363)
@@ -0,0 +1,1190 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.mapdb;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * Thread safe concurrent HashMap
+ * <p/>
+ * This map uses the full 32-bit hash from the beginning; there is no initial load factor and no rehashing.
+ * Technically it is not a hash table but a hash tree, with nodes expanding when they become full.
+ * <p/>
+ * This map is suitable for record counts of 1e9 and over.
+ * Larger numbers of records increase hash collisions, and performance
+ * degrades linearly with the number of records (separate chaining).
+ * <p/>
+ * Concurrent scalability is achieved by splitting the HashMap into 16 segments, each with a separate lock,
+ * very similar to ConcurrentHashMap.
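+ * <p/>
+ * A minimal construction sketch (illustrative only; in practice instances are usually obtained
+ * via DBMaker rather than constructed directly, and the argument choices below are assumptions):
+ * <pre>
+ *     HTreeMap&lt;String,Long&gt; map =
+ *         new HTreeMap&lt;String,Long&gt;(engine, true, 0, null, null, null);  // hasValues=true, default serializers
+ *     map.put("counter", 1L);
+ *     Long one = map.get("counter");
+ * </pre>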
+ *
+ * @author Jan Kotek
+ */
+@SuppressWarnings({ "unchecked", "rawtypes" })
+public class HTreeMap<K,V>   extends AbstractMap<K,V> implements ConcurrentMap<K, V>, Bind.MapWithModificationListener<K,V> {
+
+
+    protected static final int BUCKET_OVERFLOW = 4;
+
+    /** is this a Map or a Set? If false, entries do not have values; only keys are stored */
+    protected final boolean hasValues;
+
+    /**
+     * Salt added to the hash before rehashing, so it is harder to trigger a hash collision attack.
+     */
+    protected final int hashSalt;
+
+
+    protected final Serializer<K> keySerializer;
+    protected final Serializer<V> valueSerializer;
+
+    protected final Serializer defaultSerialzierForSnapshots;
+
+
+
+
+    /** node which holds key-value pair */
+    protected static class LinkedNode<K,V>{
+        final K key;
+        final V value;
+        final long next;
+
+        LinkedNode(final long next, final K key, final V value ){
+            this.key = key;
+            this.value = value;
+            this.next = next;
+        }
+    }
+
+    static class HashRootSerializer implements Serializer<HashRoot>{
+
+        private Serializer defaultSerializer;
+
+        public HashRootSerializer(Serializer defaultSerializer) {
+            this.defaultSerializer = defaultSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, HashRoot value) throws IOException {
+            out.writeBoolean(value.hasValues);
+            out.writeInt(value.hashSalt);
+            for(int i=0;i<16;i++){
+                Utils.packLong(out, value.segmentRecids[i]);
+            }
+            defaultSerializer.serialize(out,value.keySerializer);
+            defaultSerializer.serialize(out,value.valueSerializer);
+
+        }
+
+        @Override
+        public HashRoot deserialize(DataInput in, int available) throws IOException {
+            if(available==0) return null;
+            HashRoot r = new HashRoot();
+            r.hasValues = in.readBoolean();
+            r.hashSalt = in.readInt();
+            r.segmentRecids = new long[16];
+            for(int i=0;i<16;i++){
+                r.segmentRecids[i] = Utils.unpackLong(in);
+            }
+            r.keySerializer = (Serializer) defaultSerializer.deserialize(in, -1);
+            r.valueSerializer = (Serializer) defaultSerializer.deserialize(in, -1);
+            return r;
+        }
+
+    }
+
+
+    static class HashRoot{
+        long[] segmentRecids;
+        boolean hasValues;
+        int hashSalt;
+        Serializer keySerializer;
+        Serializer valueSerializer;
+    }
+
+
+    final Serializer<LinkedNode<K,V>> LN_SERIALIZER = new Serializer<LinkedNode<K,V>>() {
+        @Override
+        public void serialize(DataOutput out, LinkedNode<K,V> value) throws IOException {
+            Utils.packLong(out, value.next);
+            keySerializer.serialize(out,value.key);
+            if(hasValues)
+                valueSerializer.serialize(out,value.value);
+        }
+
+        @Override
+        public LinkedNode<K,V> deserialize(DataInput in, int available) throws IOException {
+            return new LinkedNode<K, V>(
+                    Utils.unpackLong(in),
+                    (K) keySerializer.deserialize(in,-1),
+                    hasValues? (V) valueSerializer.deserialize(in,-1) : (V) Utils.EMPTY_STRING
+            );
+        }
+    };
+
+
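+    /** Serializer for directory nodes: a 16-bit bitmap marking which of the 16 eight-slot subarrays are
+     * non-null, followed by the packed recids of each non-null subarray. */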
+    static final Serializer<long[][]>DIR_SERIALIZER = new Serializer<long[][]>() {
+        @Override
+        public void serialize(DataOutput out, long[][] value) throws IOException {
+            if(value.length!=16) throw new InternalError();
+
+            //first write a mask which indicates which subarrays are non-null
+            int nulls = 0;
+            for(int i = 0;i<16;i++){
+                if(value[i]!=null)
+                    nulls |= 1<<i;
+            }
+            out.writeShort(nulls);
+
+            //write non null subarrays
+            for(int i = 0;i<16;i++){
+                if(value[i]!=null){
+                    if(value[i].length!=8) throw new InternalError();
+                    for(long l:value[i]){
+                        Utils.packLong(out, l);
+                    }
+                }
+            }
+        }
+
+
+        @Override
+        public long[][] deserialize(DataInput in, int available) throws IOException {
+
+            final long[][] ret = new long[16][];
+
+            //there are 16 subarrays; each bit of the mask indicates whether a subarray is present (non-null)
+            int nulls = in.readUnsignedShort();
+            for(int i=0;i<16;i++){
+                if((nulls & 1)!=0){
+                    final long[] subarray = new long[8];
+                    for(int j=0;j<8;j++){
+                        subarray[j] = Utils.unpackLong(in);
+                    }
+                    ret[i] = subarray;
+                }
+                nulls = nulls>>>1;
+            }
+
+            return ret;
+        }
+    };
+
+
+    /** list of segments, this is immutable*/
+    protected final long[] segmentRecids;
+
+    protected final ReentrantReadWriteLock[] segmentLocks = new ReentrantReadWriteLock[16];
+    {
+        for(int i=0;i< 16;i++)  segmentLocks[i]=new ReentrantReadWriteLock();
+    }
+
+    protected final Engine engine;
+    public final long rootRecid;
+
+
+    /**
+     * Constructor used to create new HTreeMap without existing record (recid) in Engine.
+     * This constructor creates a new record and saves all configuration parameters there.
+     * The constructor args define the HTreeMap format; they are stored in the db and cannot be changed later.
+     *
+     * @param engine used for persistence
+     * @param hasValues Map or Set? If false, only keys are stored and entries have no values
+     * @param defaultSerializer serializer used to serialize/deserialize other serializers. May be null for the default value.
+     * @param keySerializer Serializer used for keys. May be null for the default value.
+     * @param valueSerializer Serializer used for values. May be null for the default value.
+     */
+    public HTreeMap(Engine engine, boolean hasValues, int hashSalt, Serializer defaultSerializer, Serializer<K> keySerializer, Serializer<V> valueSerializer) {
+        this.engine = engine;
+        this.hasValues = hasValues;
+        this.hashSalt = hashSalt;
+        SerializerBase.assertSerializable(keySerializer);
+        SerializerBase.assertSerializable(valueSerializer);
+
+        if(defaultSerializer == null) defaultSerializer = Serializer.BASIC_SERIALIZER;
+        this.defaultSerialzierForSnapshots = defaultSerializer;
+        this.keySerializer = keySerializer==null ? (Serializer<K>) defaultSerializer : keySerializer;
+        this.valueSerializer = valueSerializer==null ? (Serializer<V>) defaultSerializer : valueSerializer;
+
+
+        //preallocate segmentRecids, so we don't have to lock on those later
+        segmentRecids = new long[16];
+        for(int i=0;i<16;i++)
+            segmentRecids[i] = engine.put(new long[16][], DIR_SERIALIZER);
+        HashRoot r = new HashRoot();
+        r.hasValues = hasValues;
+        r.hashSalt = hashSalt;
+        r.segmentRecids = segmentRecids;
+        r.keySerializer = this.keySerializer;
+        r.valueSerializer = this.valueSerializer;
+        this.rootRecid = engine.put(r, new HashRootSerializer(defaultSerializer));
+    }
+
+    /**
+     * Constructor used to load existing HTreeMap (with assigned recid).
+     * Map was already created and saved to Engine, this constructor just loads it.
+     *
+     * @param engine used for persistence
+     * @param rootRecid recid under which the HTreeMap was stored
+     * @param defaultSerializer used to deserialize other serializers and comparator
+     */
+    public HTreeMap(Engine engine, long rootRecid, Serializer defaultSerializer) {
+        if(rootRecid == 0) throw new IllegalArgumentException("recid is 0");
+        this.engine = engine;
+        this.rootRecid = rootRecid;
+        //load all fields from store
+        if(defaultSerializer==null) defaultSerializer = Serializer.BASIC_SERIALIZER;
+        this.defaultSerialzierForSnapshots = defaultSerializer;
+        HashRoot r = engine.get(rootRecid, new HashRootSerializer(defaultSerializer));
+        this.segmentRecids = r.segmentRecids;
+        this.hasValues = r.hasValues;
+        this.hashSalt = r.hashSalt;
+        this.keySerializer = r.keySerializer;
+        this.valueSerializer = r.valueSerializer;
+    }
+
+    /** hack used for the Name Directory (the name-to-recid map stored under {@link Engine#NAME_DIR_RECID}) */
+    static final Map<String, Long> preinitNamedDir(Engine engine){
+        HashRootSerializer serializer = new HashRootSerializer(Serializer.BASIC_SERIALIZER);
+        //check if record already exist
+        HashRoot r = engine.get(Engine.NAME_DIR_RECID, serializer);
+        if(r!=null)
+            return new HTreeMap<String, Long>(engine, Engine.NAME_DIR_RECID, Serializer.BASIC_SERIALIZER);
+
+        if(engine.isReadOnly())
+            return Collections.unmodifiableMap(new HashMap<String, Long>());
+
+        //preallocate segmentRecids
+        long[] segmentRecids = new long[16];
+        for(int i=0;i<16;i++)
+            segmentRecids[i] = engine.put(new long[16][], DIR_SERIALIZER);
+        r = new HashRoot();
+        r.hasValues = true;
+        r.segmentRecids = segmentRecids;
+        r.keySerializer = Serializer.BASIC_SERIALIZER;
+        r.valueSerializer = Serializer.BASIC_SERIALIZER;
+        engine.update(Engine.NAME_DIR_RECID, r, serializer);
+        //and now load it
+        return new HTreeMap<String, Long>(engine, Engine.NAME_DIR_RECID, Serializer.BASIC_SERIALIZER);
+
+    }
+
+    @Override
+    public boolean containsKey(final Object o){
+        return get(o)!=null;
+    }
+
+    @Override
+    public int size() {
+        long counter = 0;
+
+        //sum record counts across all 16 segments
+        for(int i=0;i<16;i++){
+            try{
+                segmentLocks[i].readLock().lock();
+
+                final long dirRecid = segmentRecids[i];
+                counter+=recursiveDirCount(dirRecid);
+            }finally {
+                segmentLocks[i].readLock().unlock();
+            }
+        }
+
+        if(counter>Integer.MAX_VALUE)
+            return Integer.MAX_VALUE;
+
+        return (int) counter;
+    }
+
+    private long recursiveDirCount(final long dirRecid) {
+        long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+        long counter = 0;
+        for(long[] subdir:dir){
+            if(subdir == null) continue;
+            for(long recid:subdir){
+                if(recid == 0) continue;
+                if((recid&1)==0){
+                    //reference to another subdir
+                    recid = recid>>>1;
+                    counter += recursiveDirCount(recid);
+                }else{
+                    //reference to linked list, count it
+                    recid = recid>>>1;
+                    while(recid!=0){
+                        LinkedNode n = engine.get(recid, LN_SERIALIZER);
+                        if(n!=null){
+                            counter++;
+                            recid =  n.next;
+                        }else{
+                            recid = 0;
+                        }
+                    }
+                }
+            }
+        }
+        return counter;
+    }
+
+    @Override
+    public boolean isEmpty() {
+        //search tree, until we find first non null
+        for(int i=0;i<16;i++){
+            try{
+                segmentLocks[i].readLock().lock();
+
+                long dirRecid = segmentRecids[i];
+                long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+                for(long[] d:dir){
+                    if(d!=null) return false;
+                }
+                //this segment is empty, keep checking the remaining segments
+            }finally {
+                segmentLocks[i].readLock().unlock();
+            }
+        }
+
+        return true;
+    }
+
+
+
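+    // Recids stored in directory slots carry a tag in their lowest bit:
+    // (recid<<1)|1 references a LinkedNode bucket, while (recid<<1)|0 references a child directory.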
+    @Override
+	public V get(final Object o){
+        if(o==null) return null;
+        final int h = hash(o);
+        final int segment = h >>>28;
+        try{
+            segmentLocks[segment].readLock().lock();
+            long recid = segmentRecids[segment];
+            for(int level=3;level>=0;level--){
+                long[][] dir = engine.get(recid, DIR_SERIALIZER);
+                if(dir == null) return null;
+                int slot = (h>>>(level*7 )) & 0x7F;
+                if(slot>=128) throw new InternalError();
+                if(dir[slot/8]==null) return null;
+                recid = dir[slot/8][slot%8];
+                if(recid == 0) return null;
+                if((recid&1)!=0){ //last bit indicates if the referenced record is a LinkedNode
+                    recid = recid>>>1;
+                    while(true){
+                        LinkedNode<K,V> ln = engine.get(recid, LN_SERIALIZER);
+                        if(ln == null) return null;
+                        if(ln.key.equals(o)) return ln.value;
+                        if(ln.next==0) return null;
+                        recid = ln.next;
+                    }
+                }
+
+                recid = recid>>>1;
+            }
+
+            return null;
+        }finally {
+            segmentLocks[segment].readLock().unlock();
+        }
+    }
+
+    @Override
+    public V put(final K key, final V value){
+        if (key == null)
+            throw new IllegalArgumentException("null key");
+
+        if (value == null)
+            throw new IllegalArgumentException("null value");
+
+        Utils.checkMapValueIsNotCollecion(value);
+
+        final int h = hash(key);
+        final int segment = h >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+            long dirRecid = segmentRecids[segment];
+
+            int level = 3;
+            while(true){
+                long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+                final int slot =  (h>>>(7*level )) & 0x7F;
+                if(slot>127) throw new InternalError();
+
+                if(dir == null ){
+                    //create new dir
+                    dir = new long[16][];
+                }
+
+                if(dir[slot/8] == null){
+                    dir[slot/8] = new long[8];
+                }
+
+                int counter = 0;
+                long recid = dir[slot/8][slot%8];
+
+                if(recid!=0){
+                    if((recid&1) == 0){
+                        dirRecid = recid>>>1;
+                        level--;
+                        continue;
+                    }
+                    recid = recid>>>1;
+
+                    //traverse linked list, try to replace previous value
+                    LinkedNode<K,V> ln = engine.get(recid, LN_SERIALIZER);
+
+                    while(ln!=null){
+                        if(ln.key.equals(key)){
+                            //found, replace value at this node
+                            V oldVal = ln.value;
+                            ln = new LinkedNode<K, V>(ln.next, ln.key, value);
+                            engine.update(recid, ln, LN_SERIALIZER);
+                            notify(key,  oldVal, value);
+                            return oldVal;
+                        }
+                        recid = ln.next;
+                        ln = recid==0? null : engine.get(recid, LN_SERIALIZER);
+                        counter++;
+                    }
+                    //key was not found at linked list, so just append it to beginning
+                }
+
+
+                //check if linked list has overflow and needs to be expanded to new dir level
+                if(counter>=BUCKET_OVERFLOW && level>=1){
+                    long[][] nextDir = new long[16][];
+
+                    {
+                        //add newly inserted record
+                        int pos =(h >>>(7*(level-1) )) & 0x7F;
+                        nextDir[pos/8] = new long[8];
+                        nextDir[pos/8][pos%8] = (engine.put(new LinkedNode<K, V>(0, key, value), LN_SERIALIZER) <<1) | 1;
+                    }
+
+
+                    //redistribute linked bucket into new dir
+                    long nodeRecid = dir[slot/8][slot%8]>>>1;
+                    while(nodeRecid!=0){
+                        LinkedNode<K,V> n = engine.get(nodeRecid, LN_SERIALIZER);
+                        final long nextRecid = n.next;
+                        int pos = (hash(n.key) >>>(7*(level -1) )) & 0x7F;
+                        if(nextDir[pos/8]==null) nextDir[pos/8] = new long[8];
+                        n = new LinkedNode<K, V>(nextDir[pos/8][pos%8]>>>1, n.key, n.value);
+                        nextDir[pos/8][pos%8] = (nodeRecid<<1) | 1;
+                        engine.update(nodeRecid, n, LN_SERIALIZER);
+                        nodeRecid = nextRecid;
+                    }
+
+                    //insert nextDir and update parent dir
+                    long nextDirRecid = engine.put(nextDir, DIR_SERIALIZER);
+                    int parentPos = (h>>>(7*level )) & 0x7F;
+                    dir[parentPos/8][parentPos%8] = (nextDirRecid<<1) | 0;
+                    engine.update(dirRecid, dir, DIR_SERIALIZER);
+                    notify(key, null, value);
+                    return null;
+                }else{
+                    // record does not exist in linked list, so create new one
+                    recid = dir[slot/8][slot%8]>>>1;
+                    long newRecid = engine.put(new LinkedNode<K, V>(recid, key, value), LN_SERIALIZER);
+                    dir[slot/8][slot%8] = (newRecid<<1) | 1;
+                    engine.update(dirRecid, dir, DIR_SERIALIZER);
+                    notify(key, null, value);
+                    return null;
+                }
+            }
+
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+
+    @Override
+    public V remove(Object key){
+
+        final int h = hash(key);
+        final int segment = h >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+
+            final  long[] dirRecids = new long[4];
+            int level = 3;
+            dirRecids[level] = segmentRecids[segment];
+
+            while(true){
+                long[][] dir = engine.get(dirRecids[level], DIR_SERIALIZER);
+                final int slot =  (h>>>(7*level )) & 0x7F;
+                if(slot>127) throw new InternalError();
+
+                if(dir == null ){
+                    //create new dir
+                    dir = new long[16][];
+                }
+
+                if(dir[slot/8] == null){
+                    dir[slot/8] = new long[8];
+                }
+
+//                int counter = 0;
+                long recid = dir[slot/8][slot%8];
+
+                if(recid!=0){
+                    if((recid&1) == 0){
+                        level--;
+                        dirRecids[level] = recid>>>1;
+                        continue;
+                    }
+                    recid = recid>>>1;
+
+                    //traverse linked list, try to remove node
+                    LinkedNode<K,V> ln = engine.get(recid, LN_SERIALIZER);
+                    LinkedNode<K,V> prevLn = null;
+                    long prevRecid = 0;
+                    while(ln!=null){
+                        if(ln.key.equals(key)){
+                            //remove from linkedList
+                            if(prevLn == null ){
+                                //referenced directly from dir
+                                if(ln.next==0){
+                                    recursiveDirDelete(h, level, dirRecids, dir, slot);
+
+
+                                }else{
+                                    dir[slot/8][slot%8] = (ln.next<<1)|1;
+                                    engine.update(dirRecids[level], dir, DIR_SERIALIZER);
+                                }
+
+                            }else{
+                                //referenced from LinkedNode
+                                prevLn = new LinkedNode<K, V>(ln.next, prevLn.key, prevLn.value);
+                                engine.update(prevRecid, prevLn, LN_SERIALIZER);
+                            }
+                            //found, remove this node
+                            engine.delete(recid, LN_SERIALIZER);
+                            notify((K) key, ln.value, null);
+                            return ln.value;
+                        }
+                        prevRecid = recid;
+                        prevLn = ln;
+                        recid = ln.next;
+                        ln = recid==0? null : engine.get(recid, LN_SERIALIZER);
+//                        counter++;
+                    }
+                    //key was not found at linked list, so it does not exist
+                    return null;
+                }
+                //recid is 0, so entry does not exist
+                return null;
+
+            }
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+
+    private void recursiveDirDelete(int h, int level, long[] dirRecids, long[][] dir, int slot) {
+        //was only item in linked list, so try to collapse the dir
+        dir[slot/8][slot%8] = 0;
+        //one record was zeroed out, check if subarray can be collapsed to null
+        boolean allZero = true;
+        for(long l:dir[slot/8]){
+            if(l!=0){
+                allZero = false;
+                break;
+            }
+        }
+        if(allZero)
+            dir[slot/8] = null;
+        allZero = true;
+        for(long[] l:dir){
+            if(l!=null){
+                allZero = false;
+                break;
+            }
+        }
+
+        if(allZero){
+            //delete from parent dir
+            if(level==3){
+                //parent is the segment; the recid of this dir cannot change, so just replace it with an empty dir
+                engine.update(dirRecids[level], new long[16][], DIR_SERIALIZER);
+            }else{
+                engine.delete(dirRecids[level], DIR_SERIALIZER);
+
+                final long[][] parentDir = engine.get(dirRecids[level + 1], DIR_SERIALIZER);
+                final int parentPos = (h >>> (7 * (level + 1))) & 0x7F;
+                recursiveDirDelete(h,level+1,dirRecids, parentDir, parentPos);
+                //parentDir[parentPos/8][parentPos%8] = 0;
+                //engine.update(dirRecids[level + 1],parentDir,DIR_SERIALIZER);
+
+            }
+        }else{
+            engine.update(dirRecids[level], dir, DIR_SERIALIZER);
+        }
+    }
+
+    @Override
+    public void clear() {
+        for(int i = 0; i<16;i++) try{
+            segmentLocks[i].writeLock().lock();
+
+            final long dirRecid = segmentRecids[i];
+            recursiveDirClear(dirRecid);
+
+            //replace dir with an empty one, as the segment recid is immutable
+            engine.update(dirRecid, new long[16][], DIR_SERIALIZER);
+
+        }finally {
+            segmentLocks[i].writeLock().unlock();
+        }
+    }
+
+    private void recursiveDirClear(final long dirRecid) {
+        final long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+        if(dir == null) return;
+        for(long[] subdir:dir){
+            if(subdir==null) continue;
+            for(long recid:subdir){
+                if(recid == 0) continue;
+                if((recid&1)==0){
+                    //another dir
+                    recid = recid>>>1;
+                    //recursively remove dir
+                    recursiveDirClear(recid);
+                    engine.delete(recid, DIR_SERIALIZER);
+                }else{
+                    //linked list to delete
+                    recid = recid>>>1;
+                    while(recid!=0){
+                        LinkedNode n = engine.get(recid, LN_SERIALIZER);
+                        engine.delete(recid,LN_SERIALIZER);
+                        notify((K)n.key, (V)n.value , null);
+                        recid = n.next;
+                    }
+                }
+
+            }
+        }
+    }
+
+
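+    /** Linear scan over all values; runs in O(size) and may be expensive for large maps. */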
+    @Override
+    public boolean containsValue(Object value) {
+        for (V v : values()) {
+            if (v.equals(value)) return true;
+        }
+        return false;
+    }
+
+    @Override
+    public void putAll(Map<? extends K, ? extends V> m) {
+        for(Entry<? extends K, ? extends V> e:m.entrySet()){
+            put(e.getKey(),e.getValue());
+        }
+    }
+
+
+    private final Set<K> _keySet = new AbstractSet<K>() {
+
+        @Override
+        public int size() {
+            return HTreeMap.this.size();
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return HTreeMap.this.isEmpty();
+        }
+
+        @Override
+        public boolean contains(Object o) {
+            return HTreeMap.this.containsKey(o);
+        }
+
+        @Override
+        public Iterator<K> iterator() {
+            return new KeyIterator();
+        }
+
+        @Override
+        public boolean add(K k) {
+            if(HTreeMap.this.hasValues)
+                throw new UnsupportedOperationException();
+            else
+                return HTreeMap.this.put(k, (V) Utils.EMPTY_STRING) == null;
+        }
+
+        @Override
+        public boolean remove(Object o) {
+//            if(o instanceof Entry){
+//                Entry e = (Entry) o;
+//                return HTreeMap.this.remove(((Entry) o).getKey(),((Entry) o).getValue());
+//            }
+            return HTreeMap.this.remove(o)!=null;
+
+        }
+
+
+        @Override
+        public void clear() {
+            HTreeMap.this.clear();
+        }
+    };
+
+    @Override
+    public Set<K> keySet() {
+        return _keySet;
+    }
+
+    private final Collection<V> _values = new AbstractCollection<V>(){
+
+        @Override
+        public int size() {
+            return HTreeMap.this.size();
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return HTreeMap.this.isEmpty();
+        }
+
+        @Override
+        public boolean contains(Object o) {
+            return HTreeMap.this.containsValue(o);
+        }
+
+
+
+        @Override
+        public Iterator<V> iterator() {
+            return new ValueIterator();
+        }
+
+    };
+
+    @Override
+    public Collection<V> values() {
+        return _values;
+    }
+
+    private Set<Entry<K,V>> _entrySet = new AbstractSet<Entry<K,V>>(){
+
+        @Override
+        public int size() {
+            return HTreeMap.this.size();
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return HTreeMap.this.isEmpty();
+        }
+
+        @Override
+        public boolean contains(Object o) {
+            if(o instanceof  Entry){
+                Entry e = (Entry) o;
+                Object val = HTreeMap.this.get(e.getKey());
+                return val!=null && val.equals(e.getValue());
+            }else
+                return false;
+        }
+
+        @Override
+        public Iterator<Entry<K, V>> iterator() {
+            return new EntryIterator();
+        }
+
+
+        @Override
+        public boolean add(Entry<K, V> kvEntry) {
+            K key = kvEntry.getKey();
+            V value = kvEntry.getValue();
+            if(key==null || value == null) throw new NullPointerException();
+            HTreeMap.this.put(key, value);
+            return true;
+        }
+
+        @Override
+        public boolean remove(Object o) {
+            if(o instanceof Entry){
+                Entry e = (Entry) o;
+                Object key = e.getKey();
+                if(key == null) return false;
+                return HTreeMap.this.remove(key, e.getValue());
+            }
+            return false;
+        }
+
+
+        @Override
+        public void clear() {
+            HTreeMap.this.clear();
+        }
+    };
+
+    @Override
+    public Set<Entry<K, V>> entrySet() {
+        return _entrySet;
+    }
+
+
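+    /**
+     * Mixes the key's hashCode with the hash salt and spreads the high bits downwards,
+     * so both the 4-bit segment index and the 7-bit directory slots are well distributed.
+     */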
+    protected  int hash(final Object key) {
+        int h = key.hashCode() ^ hashSalt;
+        h ^= (h >>> 20) ^ (h >>> 12);
+        return h ^ (h >>> 7) ^ (h >>> 4);
+    }
+
+
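+    /**
+     * Base iterator: walks the directory tree in hash order and loads one linked-list
+     * bucket at a time into {@code currentLinkedList} as alternating key/value pairs.
+     */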
+    abstract class HashIterator{
+
+        protected Object[] currentLinkedList;
+        protected int currentLinkedListPos = 0;
+
+        private K lastReturnedKey = null;
+
+        private int lastSegment = 0;
+
+        HashIterator(){
+            currentLinkedList = findNextLinkedNode(0);
+        }
+
+        public void remove() {
+            final K keyToRemove = lastReturnedKey;
+            if (lastReturnedKey == null)
+                throw new IllegalStateException();
+
+            lastReturnedKey = null;
+            HTreeMap.this.remove(keyToRemove);
+        }
+
+        public boolean hasNext(){
+            return currentLinkedList!=null && currentLinkedListPos<currentLinkedList.length;
+        }
+
+
+        protected void  moveToNext(){
+            lastReturnedKey = (K) currentLinkedList[currentLinkedListPos];
+            currentLinkedListPos+=2;
+            if(currentLinkedListPos==currentLinkedList.length){
+                final int lastHash = hash(lastReturnedKey);
+                currentLinkedList = advance(lastHash);
+                currentLinkedListPos = 0;
+            }
+        }
+
+        private Object[] advance(int lastHash){
+
+            int segment = lastHash >>>28;
+
+            //two phases, first find old item and increase hash
+            try{
+                segmentLocks[segment].readLock().lock();
+
+                long dirRecid = segmentRecids[segment];
+                int level = 3;
+                //dive into tree, finding last hash position
+                while(true){
+                    long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+                    int pos = (lastHash>>>(7 * level)) & 0x7F;
+
+                    //if the slot is empty or holds a linked list, we cannot dive deeper
+                    if(dir[pos/8]==null || dir[pos/8][pos%8]==0 || (dir[pos/8][pos%8]&1)==1) {
+                        //increase hash by 1
+                        if(level!=0){
+                            lastHash = ((lastHash>>>(7 * level)) + 1) << (7*level); //should use mask and XOR
+                        }else
+                            lastHash +=1;
+                        if(lastHash==0){
+                            return null;
+                        }
+                        break;
+                    }
+
+                    //reference is dir, move to next level
+                    dirRecid = dir[pos/8][pos%8]>>>1;
+                    level--;
+                }
+
+            }finally {
+                segmentLocks[segment].readLock().unlock();
+            }
+            return findNextLinkedNode(lastHash);
+
+
+        }
+
+        private Object[] findNextLinkedNode(int hash) {
+            //second phase, start search from increased hash to find next items
+            for(int segment = Math.max(hash >>>28, lastSegment); segment<16;segment++)try{
+
+                lastSegment = Math.max(segment,lastSegment);
+                segmentLocks[segment].readLock().lock();
+
+                long dirRecid = segmentRecids[segment];
+                Object ret[] = findNextLinkedNodeRecur(dirRecid, hash, 3);
+                //System.out.println(Arrays.asList(ret));
+                if(ret !=null) return ret;
+                hash = 0;
+            }finally {
+                segmentLocks[segment].readLock().unlock();
+            }
+
+            return null;
+        }
+
+        private Object[] findNextLinkedNodeRecur(long dirRecid, int newHash, int level){
+            long[][] dir = engine.get(dirRecid, DIR_SERIALIZER);
+            if(dir == null) return null;
+            int pos = (newHash>>>(level*7))  & 0x7F;
+            boolean first = true;
+            while(pos<128){
+                if(dir[pos/8]!=null){
+                    long recid = dir[pos/8][pos%8];
+                    if(recid!=0){
+                        if((recid&1) == 1){
+                            recid = recid>>1;
+                            //found linked list, load it into array and return
+                            Object[] array = new Object[2];
+                            int arrayPos = 0;
+                            while(recid!=0){
+                                LinkedNode ln = engine.get(recid, LN_SERIALIZER);
+                                if(ln==null){
+                                    recid = 0;
+                                    continue;
+                                }
+                                //increase array size if needed
+                                if(arrayPos == array.length)
+                                    array = Arrays.copyOf(array, array.length+2);
+                                array[arrayPos++] = ln.key;
+                                array[arrayPos++] = ln.value;
+                                recid = ln.next;
+                            }
+                            return array;
+                        }else{
+                            //found another dir, continue dive
+                            recid = recid>>1;
+                            Object[] ret = findNextLinkedNodeRecur(recid, first ? newHash : 0, level - 1);
+                            if(ret != null) return ret;
+                        }
+                    }
+                }
+                first = false;
+                pos++;
+            }
+            return null;
+        }
+    }
+
+    class KeyIterator extends HashIterator implements  Iterator<K>{
+
+        @Override
+        public K next() {
+        	if(currentLinkedList == null)
+        		throw new NoSuchElementException();
+            K key = (K) currentLinkedList[currentLinkedListPos];
+            moveToNext();
+            return key;
+        }
+    }
+
+    class ValueIterator extends HashIterator implements  Iterator<V>{
+
+        @Override
+        public V next() {
+        	if(currentLinkedList == null)
+        		throw new NoSuchElementException();
+            V value = (V) currentLinkedList[currentLinkedListPos+1];
+            moveToNext();
+            return value;
+        }
+    }
+
+    class EntryIterator extends HashIterator implements  Iterator<Entry<K,V>>{
+
+        @Override
+        public Entry<K, V> next() {
+        	if(currentLinkedList == null)
+        		throw new NoSuchElementException();
+            K key = (K) currentLinkedList[currentLinkedListPos];
+            moveToNext();
+            return new Entry2(key);
+        }
+    }
+
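+    /**
+     * Map.Entry view backed by the map: getValue() reads the current value from the map
+     * and setValue() writes through via put(). Note that equals() compares keys only.
+     */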
+    class Entry2 implements Entry<K,V>{
+
+        private final K key;
+
+        Entry2(K key) {
+            this.key = key;
+        }
+
+        @Override
+        public K getKey() {
+            return key;
+        }
+
+        @Override
+        public V getValue() {
+            return HTreeMap.this.get(key);
+        }
+
+        @Override
+        public V setValue(V value) {
+            return HTreeMap.this.put(key,value);
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            return (o instanceof Entry) && key.equals(((Entry) o).getKey());
+        }
+
+        @Override
+        public int hashCode() {
+            final V value = HTreeMap.this.get(key);
+            return (key == null ? 0 : key.hashCode()) ^
+                    (value == null ? 0 : value.hashCode());
+        }
+    }
+
+
+    @Override
+    public V putIfAbsent(K key, V value) {
+        if(key==null||value==null) throw new NullPointerException();
+        Utils.checkMapValueIsNotCollecion(value);
+        final int segment = HTreeMap.this.hash(key) >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+
+            if (!containsKey(key))
+                 return put(key, value);
+            else
+                 return get(key);
+
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+    @Override
+    public boolean remove(Object key, Object value) {
+        if(key==null||value==null) throw new NullPointerException();
+        final int segment = HTreeMap.this.hash(key) >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+
+            if (containsKey(key) && get(key).equals(value)) {
+                remove(key);
+                return true;
+            }else
+                return false;
+
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+    @Override
+    public boolean replace(K key, V oldValue, V newValue) {
+        if(key==null||oldValue==null||newValue==null) throw new NullPointerException();
+        final int segment = HTreeMap.this.hash(key) >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+
+            if (containsKey(key) && get(key).equals(oldValue)) {
+                 put(key, newValue);
+                 return true;
+            } else
+                return false;
+
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+    @Override
+    public V replace(K key, V value) {
+        if(key==null||value==null) throw new NullPointerException();
+        final int segment = HTreeMap.this.hash(key) >>>28;
+        try{
+            segmentLocks[segment].writeLock().lock();
+
+            if (containsKey(key))
+                return put(key, value);
+            else
+                return null;
+        }finally {
+            segmentLocks[segment].writeLock().unlock();
+        }
+    }
+
+
+    /**
+     * Makes a read-only snapshot view of the current Map. The snapshot is immutable and not affected by modifications made by other threads.
+     * Useful if you need a consistent view of the Map.
+     * <p>
+     * Maintaining a snapshot has some overhead; the underlying Engine is closed after the Map view is GCed.
+     * Please make sure to release the reference to this Map view, so the snapshot can be garbage collected.
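+     * <p>
+     * A minimal usage sketch (the variable {@code map} stands for an existing HTreeMap instance):
+     * <pre>{@code
+     *   Map<String, String> snap = map.snapshot();
+     *   map.put("key", "new value");   // later modifications...
+     *   snap.get("key");               // ...are not visible through the snapshot
+     * }</pre>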
+     *
+     * @return snapshot
+     */
+    public Map<K,V> snapshot(){
+        Engine snapshot = SnapshotEngine.createSnapshotFor(engine);
+        return new HTreeMap<K, V>(snapshot,rootRecid, defaultSerialzierForSnapshots);
+    }
+
+
+    protected final Object modListenersLock = new Object();
+    protected Bind.MapListener<K,V>[] modListeners = new Bind.MapListener[0];
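+    // notify() reads the current modListeners array without locking and skips null entries;
+    // the array is only modified while holding modListenersLock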
+
+    @Override
+    public void addModificationListener(Bind.MapListener<K,V> listener) {
+        synchronized (modListenersLock){
+            Bind.MapListener<K,V>[] modListeners2 =
+                    Arrays.copyOf(modListeners,modListeners.length+1);
+            modListeners2[modListeners2.length-1] = listener;
+            modListeners = modListeners2;
+        }
+
+    }
+
+    @Override
+    public void removeModificationListener(Bind.MapListener<K,V> listener) {
+        synchronized (modListenersLock){
+            for(int i=0;i<modListeners.length;i++){
+                if(modListeners[i]==listener) modListeners[i]=null;
+            }
+        }
+    }
+
+    protected void notify(K key, V oldValue, V newValue) {
+        Bind.MapListener<K,V>[] modListeners2  = modListeners;
+        for(Bind.MapListener<K,V> listener:modListeners2){
+            if(listener!=null)
+                listener.update(key, oldValue, newValue);
+        }
+    }
+
+    /**
+     * Closes the underlying storage and releases all resources.
+     * Used mostly with temporary collections where the engine is not otherwise accessible.
+     */
+    public void close(){
+        engine.close();
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Locks.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Locks.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Locks.java	(revision 29363)
@@ -0,0 +1,141 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.util.concurrent.locks.LockSupport;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Contains various concurrent locking utilities
+ */
+public final class Locks {
+
+    private Locks(){}
+
+
+    /**
+     * An array of ReentrantLocks of conceptually unbounded size.
+     * Used for per-record locking.
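+     * <p>
+     * A minimal usage sketch (the recid value is arbitrary):
+     * <pre>{@code
+     *   Locks.RecidLocks locks = new Locks.SegmentedRecidLocks(16);
+     *   long recid = 42;
+     *   locks.lock(recid);
+     *   try {
+     *       // work with the record
+     *   } finally {
+     *       locks.unlock(recid);
+     *   }
+     * }</pre>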
+     */
+    public interface RecidLocks{
+        /**
+         * Unlocks the given recid. Throws an unspecified exception if the recid is not locked.
+         * @param recid record id to unlock
+         */
+        public void unlock(final long recid);
+
+        /**
+         * Throws an exception if the current thread holds any locks.
+         * Used to assert that all recids were properly released.
+         */
+        public void assertNoLocks();
+        /**
+         * Locks the record with the given recid. If the record is already locked, blocks until the lock becomes available.
+         * @param recid record id to lock
+         */
+        public void lock(final long recid);
+    }
+
+    /**
+     * Holds all existing locks in a hash map.
+     * Lock/unlock operations look up the lock in the map and act accordingly.
+     * Useful if only a handful of locks is held at a time.
+     */
+    public static class LongHashMapRecidLocks implements RecidLocks{
+
+        protected final LongConcurrentHashMap<Thread> locks = new LongConcurrentHashMap<Thread>();
+
+        public void unlock(final long recid) {
+            if(CC.LOG_LOCKS)
+                Utils.LOG.finest("UNLOCK R:"+recid+" T:"+Thread.currentThread().getId());
+
+            final Thread t = locks.remove(recid);
+            if(t!=Thread.currentThread())
+                throw new InternalError("unlocked wrong thread");
+
+        }
+
+        public void assertNoLocks(){
+            if(CC.PARANOID){
+                LongMap.LongMapIterator<Thread> i = locks.longMapIterator();
+                while(i.moveToNext()){
+                    if(i.value()==Thread.currentThread()){
+                        throw new InternalError("Node "+i.key()+" is still locked");
+                    }
+                }
+            }
+        }
+
+        public void lock(final long recid) {
+            if(CC.LOG_LOCKS)
+                Utils.LOG.finest("TRYLOCK R:"+recid+" T:"+Thread.currentThread().getId());
+
+            //feel free to rewrite if you know a better (more efficient) way
+            if(locks.get(recid)==Thread.currentThread()){
+                //check node is not already locked by this thread
+                throw new InternalError("node already locked by current thread: "+recid);
+            }
+
+
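+            // spin (parking briefly between attempts) until no other thread holds this recid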
+            while(locks.putIfAbsent(recid, Thread.currentThread()) != null){
+                LockSupport.parkNanos(10);
+            }
+            if(CC.LOG_LOCKS)
+                Utils.LOG.finest("LOCK R:"+recid+" T:"+Thread.currentThread().getId());
+        }
+    }
+
+    /**
+     * Fixed-size array of locks. <code>recid % locks.length</code> (modulo)
+     * determines which lock is used.
+     */
+    public static class SegmentedRecidLocks implements RecidLocks{
+
+        protected final ReentrantLock[] locks;
+
+        protected final int numSegments;
+
+        /**
+         * @param numSegments number of locks; a larger number means better concurrency but more memory overhead. A good value is 16.
+         */
+        public SegmentedRecidLocks(int numSegments) {
+            this.numSegments = numSegments;
+            locks = new ReentrantLock[numSegments];
+            for(int i=0;i<numSegments;i++)
+                locks[i] = new ReentrantLock();
+        }
+
+        @Override
+        public void unlock(long recid) {
+            locks[((int) (recid % numSegments))].unlock();
+        }
+
+        @Override
+        public void assertNoLocks() {
+            for(ReentrantLock l:locks){
+                if(l.isLocked())
+                    throw new InternalError("Some node is still locked by current thread");
+            }
+        }
+
+        @Override
+        public void lock(long recid) {
+            locks[((int) (recid % numSegments))].lock();
+        }
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentHashMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentHashMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentHashMap.java	(revision 29363)
@@ -0,0 +1,973 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+/* This code was adopted from Apache Harmony 'ConcurrentHashMap' with following
+ * copyright:
+ *
+ * Written by Doug Lea with assistance from members of JCP JSR-166
+ * Expert Group and released to the public domain, as explained at
+ * http://creativecommons.org/licenses/publicdomain
+ */
+
+package org.mapdb;
+import java.io.Serializable;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Thread-safe LongMap. A refactored version of 'ConcurrentHashMap'.
+ *
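+ * <p>
+ * A minimal usage sketch:
+ * <pre>{@code
+ *   LongConcurrentHashMap<String> map = new LongConcurrentHashMap<String>();
+ *   map.put(42L, "value");
+ *   String v = map.get(42L);   // "value"
+ *   map.remove(42L);
+ * }</pre>
+ *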
+ * @author Jan Kotek
+ * @author Doug Lea
+ */
+public class LongConcurrentHashMap< V>
+        extends LongMap<V> implements Serializable  {
+    private static final long serialVersionUID = 7249069246763182397L;
+
+    /*
+     * The basic strategy is to subdivide the table among Segments,
+     * each of which itself is a concurrently readable hash table.
+     */
+
+    /* ---------------- Constants -------------- */
+
+    /**
+     * The default initial capacity for this table,
+     * used when not otherwise specified in a constructor.
+     */
+    static final int DEFAULT_INITIAL_CAPACITY = 16;
+
+    /**
+     * Salt added to keys before hashing, so it is harder to trigger a hash-collision attack.
+     */
+    protected final long hashSalt = Utils.RANDOM.nextLong();
+
+
+    /**
+     * The default load factor for this table, used when not
+     * otherwise specified in a constructor.
+     */
+    static final float DEFAULT_LOAD_FACTOR = 0.75f;
+
+    /**
+     * The default concurrency level for this table, used when not
+     * otherwise specified in a constructor.
+     */
+    static final int DEFAULT_CONCURRENCY_LEVEL = 16;
+
+    /**
+     * The maximum capacity, used if a higher value is implicitly
+     * specified by either of the constructors with arguments.  MUST
+     * be a power of two <= 1<<30 to ensure that entries are indexable
+     * using ints.
+     */
+    static final int MAXIMUM_CAPACITY = 1 << 30;
+
+    /**
+     * The maximum number of segments to allow; used to bound
+     * constructor arguments.
+     */
+    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
+
+    /**
+     * Number of unsynchronized retries in size and containsValue
+     * methods before resorting to locking. This is used to avoid
+     * unbounded retries if tables undergo continuous modification
+     * which would make it impossible to obtain an accurate result.
+     */
+    static final int RETRIES_BEFORE_LOCK = 2;
+
+    /* ---------------- Fields -------------- */
+
+    /**
+     * Mask value for indexing into segments. The upper bits of a
+     * key's hash code are used to choose the segment.
+     */
+    final int segmentMask;
+
+    /**
+     * Shift value for indexing within segments.
+     */
+    final int segmentShift;
+
+    /**
+     * The segments, each of which is a specialized hash table
+     */
+    final Segment<V>[] segments;
+
+
+    /* ---------------- Small Utilities -------------- */
+
+
+    /**
+     * Returns the segment that should be used for key with given hash
+     * @param hash the hash code for the key
+     * @return the segment
+     */
+    final Segment<V> segmentFor(int hash) {
+        return segments[(hash >>> segmentShift) & segmentMask];
+    }
+
+
+    /* ---------------- Inner Classes -------------- */
+
+    /**
+     * LongConcurrentHashMap list entry. Note that this is never exported
+     * out as a user-visible Map.Entry.
+     *
+     * Because the value field is volatile, not final, it is legal wrt
+     * the Java Memory Model for an unsynchronized reader to see null
+     * instead of initial value when read via a data race.  Although a
+     * reordering leading to this is not likely to ever actually
+     * occur, the Segment.readValueUnderLock method is used as a
+     * backup in case a null (pre-initialized) value is ever seen in
+     * an unsynchronized access method.
+     */
+    static final class HashEntry<V> {
+        final long key;
+        final int hash;
+        volatile V value;
+        final HashEntry<V> next;
+
+        HashEntry(long key, int hash, HashEntry<V> next, V value) {
+            this.key = key;
+            this.hash = hash;
+            this.next = next;
+            this.value = value;
+        }
+
+        @SuppressWarnings("unchecked")
+        static <V> HashEntry<V>[] newArray(int i) {
+            return new HashEntry[i];
+        }
+    }
+
+    /**
+     * Segments are specialized versions of hash tables.  This
+     * subclasses from ReentrantLock opportunistically, just to
+     * simplify some locking and avoid separate construction.
+     */
+    static final class Segment<V> extends ReentrantLock implements Serializable {
+        /*
+         * Segments maintain a table of entry lists that are ALWAYS
+         * kept in a consistent state, so can be read without locking.
+         * Next fields of nodes are immutable (final).  All list
+         * additions are performed at the front of each bin. This
+         * makes it easy to check changes, and also fast to traverse.
+         * When nodes would otherwise be changed, new nodes are
+         * created to replace them. This works well for hash tables
+         * since the bin lists tend to be short. (The average length
+         * is less than two for the default load factor threshold.)
+         *
+         * Read operations can thus proceed without locking, but rely
+         * on selected uses of volatiles to ensure that completed
+         * write operations performed by other threads are
+         * noticed. For most purposes, the "count" field, tracking the
+         * number of elements, serves as that volatile variable
+         * ensuring visibility.  This is convenient because this field
+         * needs to be read in many read operations anyway:
+         *
+         *   - All (unsynchronized) read operations must first read the
+         *     "count" field, and should not look at table entries if
+         *     it is 0.
+         *
+         *   - All (synchronized) write operations should write to
+         *     the "count" field after structurally changing any bin.
+         *     The operations must not take any action that could even
+         *     momentarily cause a concurrent read operation to see
+         *     inconsistent data. This is made easier by the nature of
+         *     the read operations in Map. For example, no operation
+         *     can reveal that the table has grown but the threshold
+         *     has not yet been updated, so there are no atomicity
+         *     requirements for this with respect to reads.
+         *
+         * As a guide, all critical volatile reads and writes to the
+         * count field are marked in code comments.
+         */
+
+        private static final long serialVersionUID = 2249069246763182397L;
+
+        /**
+         * The number of elements in this segment's region.
+         */
+        transient volatile int count;
+
+        /**
+         * Number of updates that alter the size of the table. This is
+         * used during bulk-read methods to make sure they see a
+         * consistent snapshot: If modCounts change during a traversal
+         * of segments computing size or checking containsValue, then
+         * we might have an inconsistent view of state so (usually)
+         * must retry.
+         */
+        transient int modCount;
+
+        /**
+         * The table is rehashed when its size exceeds this threshold.
+         * (The value of this field is always <tt>(int)(capacity *
+         * loadFactor)</tt>.)
+         */
+        transient int threshold;
+
+        /**
+         * The per-segment table.
+         */
+        transient volatile HashEntry<V>[] table;
+
+        /**
+         * The load factor for the hash table.  Even though this value
+         * is same for all segments, it is replicated to avoid needing
+         * links to outer object.
+         * @serial
+         */
+        final float loadFactor;
+
+        Segment(int initialCapacity, float lf) {
+            loadFactor = lf;
+            setTable(HashEntry.<V>newArray(initialCapacity));
+        }
+
+        @SuppressWarnings("unchecked")
+        static <V> Segment<V>[] newArray(int i) {
+            return new Segment[i];
+        }
+
+        /**
+         * Sets table to new HashEntry array.
+         * Call only while holding lock or in constructor.
+         */
+        void setTable(HashEntry<V>[] newTable) {
+            threshold = (int)(newTable.length * loadFactor);
+            table = newTable;
+        }
+
+        /**
+         * Returns properly casted first entry of bin for given hash.
+         */
+        HashEntry<V> getFirst(int hash) {
+            HashEntry<V>[] tab = table;
+            return tab[hash & (tab.length - 1)];
+        }
+
+        /**
+         * Reads value field of an entry under lock. Called if value
+         * field ever appears to be null. This is possible only if a
+         * compiler happens to reorder a HashEntry initialization with
+         * its table assignment, which is legal under memory model
+         * but is not known to ever occur.
+         */
+        V readValueUnderLock(HashEntry<V> e) {
+            lock();
+            try {
+                return e.value;
+            } finally {
+                unlock();
+            }
+        }
+
+        /* Specialized implementations of map methods */
+
+        V get(final long key, int hash) {
+            if (count != 0) { // read-volatile
+                HashEntry<V> e = getFirst(hash);
+                while (e != null) {
+                    if (e.hash == hash && key == e.key) {
+                        V v = e.value;
+                        if (v != null)
+                            return v;
+                        return readValueUnderLock(e); // recheck
+                    }
+                    e = e.next;
+                }
+            }
+            return null;
+        }
+
+        boolean containsKey(final long key, int hash) {
+            if (count != 0) { // read-volatile
+                HashEntry<V> e = getFirst(hash);
+                while (e != null) {
+                    if (e.hash == hash && key == e.key)
+                        return true;
+                    e = e.next;
+                }
+            }
+            return false;
+        }
+
+        boolean containsValue(Object value) {
+            if (count != 0) { // read-volatile
+                HashEntry<V>[] tab = table;
+                //int len = tab.length;
+                for (HashEntry<V> aTab : tab) {
+                    for (HashEntry<V> e = aTab; e != null; e = e.next) {
+                        V v = e.value;
+                        if (v == null) // recheck
+                            v = readValueUnderLock(e);
+                        if (value.equals(v))
+                            return true;
+                    }
+                }
+            }
+            return false;
+        }
+
+        boolean replace(long key, int hash, V oldValue, V newValue) {
+            lock();
+            try {
+                HashEntry<V> e = getFirst(hash);
+                while (e != null && (e.hash != hash || key!=e.key))
+                    e = e.next;
+
+                boolean replaced = false;
+                if (e != null && oldValue.equals(e.value)) {
+                    replaced = true;
+                    e.value = newValue;
+                }
+                return replaced;
+            } finally {
+                unlock();
+            }
+        }
+
+        V replace(long key, int hash, V newValue) {
+            lock();
+            try {
+                HashEntry<V> e = getFirst(hash);
+                while (e != null && (e.hash != hash || key != e.key))
+                    e = e.next;
+
+                V oldValue = null;
+                if (e != null) {
+                    oldValue = e.value;
+                    e.value = newValue;
+                }
+                return oldValue;
+            } finally {
+                unlock();
+            }
+        }
+
+
+        V put(long key, int hash, V value, boolean onlyIfAbsent) {
+            lock();
+            try {
+                int c = count;
+                if (c++ > threshold) // ensure capacity
+                    rehash();
+                HashEntry<V>[] tab = table;
+                int index = hash & (tab.length - 1);
+                HashEntry<V> first = tab[index];
+                HashEntry<V> e = first;
+                while (e != null && (e.hash != hash || key!=e.key))
+                    e = e.next;
+
+                V oldValue;
+                if (e != null) {
+                    oldValue = e.value;
+                    if (!onlyIfAbsent)
+                        e.value = value;
+                }
+                else {
+                    oldValue = null;
+                    ++modCount;
+                    tab[index] = new HashEntry<V>(key, hash, first, value);
+                    count = c; // write-volatile
+                }
+                return oldValue;
+            } finally {
+                unlock();
+            }
+        }
+
+        void rehash() {
+            HashEntry<V>[] oldTable = table;
+            int oldCapacity = oldTable.length;
+            if (oldCapacity >= MAXIMUM_CAPACITY)
+                return;
+
+            /*
+             * Reclassify nodes in each list to new Map.  Because we are
+             * using power-of-two expansion, the elements from each bin
+             * must either stay at same index, or move with a power of two
+             * offset. We eliminate unnecessary node creation by catching
+             * cases where old nodes can be reused because their next
+             * fields won't change. Statistically, at the default
+             * threshold, only about one-sixth of them need cloning when
+             * a table doubles. The nodes they replace will be garbage
+             * collectable as soon as they are no longer referenced by any
+             * reader thread that may be in the midst of traversing table
+             * right now.
+             */
+
+            HashEntry<V>[] newTable = HashEntry.newArray(oldCapacity<<1);
+            threshold = (int)(newTable.length * loadFactor);
+            int sizeMask = newTable.length - 1;
+            for (HashEntry<V> e : oldTable) {
+                // We need to guarantee that any existing reads of old Map can
+                //  proceed. So we cannot yet null out each bin.
+                if (e != null) {
+                    HashEntry<V> next = e.next;
+                    int idx = e.hash & sizeMask;
+
+                    //  Single node on list
+                    if (next == null)
+                        newTable[idx] = e;
+
+                    else {
+                        // Reuse trailing consecutive sequence at same slot
+                        HashEntry<V> lastRun = e;
+                        int lastIdx = idx;
+                        for (HashEntry<V> last = next;
+                             last != null;
+                             last = last.next) {
+                            int k = last.hash & sizeMask;
+                            if (k != lastIdx) {
+                                lastIdx = k;
+                                lastRun = last;
+                            }
+                        }
+                        newTable[lastIdx] = lastRun;
+
+                        // Clone all remaining nodes
+                        for (HashEntry<V> p = e; p != lastRun; p = p.next) {
+                            int k = p.hash & sizeMask;
+                            HashEntry<V> n = newTable[k];
+                            newTable[k] = new HashEntry<V>(p.key, p.hash,
+                                    n, p.value);
+                        }
+                    }
+                }
+            }
+            table = newTable;
+        }
+
+        /**
+         * Remove; match on key only if value null, else match both.
+         */
+        V remove(final long key, int hash, Object value) {
+            lock();
+            try {
+                int c = count - 1;
+                HashEntry<V>[] tab = table;
+                int index = hash & (tab.length - 1);
+                HashEntry<V> first = tab[index];
+                HashEntry<V> e = first;
+                while (e != null && (e.hash != hash || key!=e.key))
+                    e = e.next;
+
+                V oldValue = null;
+                if (e != null) {
+                    V v = e.value;
+                    if (value == null || value.equals(v)) {
+                        oldValue = v;
+                        // All entries following removed node can stay
+                        // in list, but all preceding ones need to be
+                        // cloned.
+                        ++modCount;
+                        HashEntry<V> newFirst = e.next;
+                        for (HashEntry<V> p = first; p != e; p = p.next)
+                            newFirst = new HashEntry<V>(p.key, p.hash,
+                                                          newFirst, p.value);
+                        tab[index] = newFirst;
+                        count = c; // write-volatile
+                    }
+                }
+                return oldValue;
+            } finally {
+                unlock();
+            }
+        }
+
+        void clear() {
+            if (count != 0) {
+                lock();
+                try {
+                    HashEntry<V>[] tab = table;
+                    for (int i = 0; i < tab.length ; i++)
+                        tab[i] = null;
+                    ++modCount;
+                    count = 0; // write-volatile
+                } finally {
+                    unlock();
+                }
+            }
+        }
+    }
+
+
+
+    /* ---------------- Public operations -------------- */
+
+    /**
+     * Creates a new, empty map with the specified initial
+     * capacity, load factor and concurrency level.
+     *
+     * @param initialCapacity the initial capacity. The implementation
+     * performs internal sizing to accommodate this many elements.
+     * @param loadFactor  the load factor threshold, used to control resizing.
+     * Resizing may be performed when the average number of elements per
+     * bin exceeds this threshold.
+     * @param concurrencyLevel the estimated number of concurrently
+     * updating threads. The implementation performs internal sizing
+     * to try to accommodate this many threads.
+     * @throws IllegalArgumentException if the initial capacity is
+     * negative or the load factor or concurrencyLevel are
+     * nonpositive.
+     */
+    public LongConcurrentHashMap(int initialCapacity,
+                                 float loadFactor, int concurrencyLevel) {
+        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
+            throw new IllegalArgumentException();
+
+        if (concurrencyLevel > MAX_SEGMENTS)
+            concurrencyLevel = MAX_SEGMENTS;
+
+        // Find power-of-two sizes best matching arguments
+        int sshift = 0;
+        int ssize = 1;
+        while (ssize < concurrencyLevel) {
+            ++sshift;
+            ssize <<= 1;
+        }
+        segmentShift = 32 - sshift;
+        segmentMask = ssize - 1;
+        this.segments = Segment.newArray(ssize);
+
+        if (initialCapacity > MAXIMUM_CAPACITY)
+            initialCapacity = MAXIMUM_CAPACITY;
+        int c = initialCapacity / ssize;
+        if (c * ssize < initialCapacity)
+            ++c;
+        int cap = 1;
+        while (cap < c)
+            cap <<= 1;
+
+        for (int i = 0; i < this.segments.length; ++i)
+            this.segments[i] = new Segment<V>(cap, loadFactor);
+    }
+
+    /**
+     * Creates a new, empty map with the specified initial capacity,
+     * and with default load factor (0.75) and concurrencyLevel (16).
+     *
+     * @param initialCapacity the initial capacity. The implementation
+     * performs internal sizing to accommodate this many elements.
+     * @throws IllegalArgumentException if the initial capacity of
+     * elements is negative.
+     */
+    public LongConcurrentHashMap(int initialCapacity) {
+        this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
+    }
+
+    /**
+     * Creates a new, empty map with a default initial capacity (16),
+     * load factor (0.75) and concurrencyLevel (16).
+     */
+    public LongConcurrentHashMap() {
+        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
+    }
+
+    /**
+     * Returns <tt>true</tt> if this map contains no key-value mappings.
+     *
+     * @return <tt>true</tt> if this map contains no key-value mappings
+     */
+    @Override
+	public boolean isEmpty() {
+        final Segment<V>[] segments = this.segments;
+        /*
+         * We keep track of per-segment modCounts to avoid ABA
+         * problems in which an element in one segment was added and
+         * in another removed during traversal, in which case the
+         * table was never actually empty at any point. Note the
+         * similar use of modCounts in the size() and containsValue()
+         * methods, which are the only other methods also susceptible
+         * to ABA problems.
+         */
+        int[] mc = new int[segments.length];
+        int mcsum = 0;
+        for (int i = 0; i < segments.length; ++i) {
+            if (segments[i].count != 0)
+                return false;
+            else
+                mcsum += mc[i] = segments[i].modCount;
+        }
+        // If mcsum happens to be zero, then we know we got a snapshot
+        // before any modifications at all were made.  This is
+        // probably common enough to bother tracking.
+        if (mcsum != 0) {
+            for (int i = 0; i < segments.length; ++i) {
+                if (segments[i].count != 0 ||
+                    mc[i] != segments[i].modCount)
+                    return false;
+            }
+        }
+        return true;
+    }
+
+    /**
+     * Returns the number of key-value mappings in this map.  If the
+     * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
+     * <tt>Integer.MAX_VALUE</tt>.
+     *
+     * @return the number of key-value mappings in this map
+     */
+    @Override
+	public int size() {
+        final Segment<V>[] segments = this.segments;
+        long sum = 0;
+        long check = 0;
+        int[] mc = new int[segments.length];
+        // Try a few times to get accurate count. On failure due to
+        // continuous async changes in table, resort to locking.
+        for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
+            check = 0;
+            sum = 0;
+            int mcsum = 0;
+            for (int i = 0; i < segments.length; ++i) {
+                sum += segments[i].count;
+                mcsum += mc[i] = segments[i].modCount;
+            }
+            if (mcsum != 0) {
+                for (int i = 0; i < segments.length; ++i) {
+                    check += segments[i].count;
+                    if (mc[i] != segments[i].modCount) {
+                        check = -1; // force retry
+                        break;
+                    }
+                }
+            }
+            if (check == sum)
+                break;
+        }
+        if (check != sum) { // Resort to locking all segments
+            sum = 0;
+            for (Segment<V> segment : segments) segment.lock();
+            for (Segment<V> segment : segments) sum += segment.count;
+            for (Segment<V> segment : segments) segment.unlock();
+        }
+        if (sum > Integer.MAX_VALUE)
+            return Integer.MAX_VALUE;
+        else
+            return (int)sum;
+    }
+
+    @Override
+    public Iterator<V> valuesIterator() {
+        return new ValueIterator();
+    }
+
+    @Override
+    public LongMapIterator<V> longMapIterator() {
+        return new MapIterator();
+    }
+
+    /**
+     * Returns the value to which the specified key is mapped,
+     * or {@code null} if this map contains no mapping for the key.
+     *
+     * <p>More formally, if this map contains a mapping from a key
+     * {@code k} to a value {@code v} such that {@code key == k},
+     * then this method returns {@code v}; otherwise it returns
+     * {@code null}.  (There can be at most one such mapping.)
+     *
+     * @throws NullPointerException if the specified key is null
+     */
+    @Override
+	public V get(long key) {
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).get(key, hash);
+    }
+
+    /**
+     * Tests if the specified object is a key in this table.
+     *
+     * @param  key   possible key
+     * @return <tt>true</tt> if and only if the specified object
+     *         is a key in this table, as determined by the
+     *         <tt>equals</tt> method; <tt>false</tt> otherwise.
+     * @throws NullPointerException if the specified key is null
+     */
+    public boolean containsKey(long key) {
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).containsKey(key, hash);
+    }
+
+    /**
+     * Returns <tt>true</tt> if this map maps one or more keys to the
+     * specified value. Note: This method requires a full internal
+     * traversal of the hash table, and so is much slower than
+     * method <tt>containsKey</tt>.
+     *
+     * @param value value whose presence in this map is to be tested
+     * @return <tt>true</tt> if this map maps one or more keys to the
+     *         specified value
+     * @throws NullPointerException if the specified value is null
+     */
+    public boolean containsValue(Object value) {
+        if (value == null)
+            throw new NullPointerException();
+
+        // See explanation of modCount use above
+
+        final Segment<V>[] segments = this.segments;
+        int[] mc = new int[segments.length];
+
+        // Try a few times without locking
+        for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
+            //int sum = 0;
+            int mcsum = 0;
+            for (int i = 0; i < segments.length; ++i) {
+                //int c = segments[i].count;
+                mcsum += mc[i] = segments[i].modCount;
+                if (segments[i].containsValue(value))
+                    return true;
+            }
+            boolean cleanSweep = true;
+            if (mcsum != 0) {
+                for (int i = 0; i < segments.length; ++i) {
+                    //int c = segments[i].count;
+                    if (mc[i] != segments[i].modCount) {
+                        cleanSweep = false;
+                        break;
+                    }
+                }
+            }
+            if (cleanSweep)
+                return false;
+        }
+        // Resort to locking all segments
+        for (Segment<V> segment : segments) segment.lock();
+        boolean found = false;
+        try {
+            for (Segment<V> segment : segments) {
+                if (segment.containsValue(value)) {
+                    found = true;
+                    break;
+                }
+            }
+        } finally {
+            for (Segment<V> segment : segments) segment.unlock();
+        }
+        return found;
+    }
+
+
+    /**
+     * Maps the specified key to the specified value in this table.
+     * Neither the key nor the value can be null.
+     *
+     * <p> The value can be retrieved by calling the <tt>get</tt> method
+     * with a key that is equal to the original key.
+     *
+     * @param key key with which the specified value is to be associated
+     * @param value value to be associated with the specified key
+     * @return the previous value associated with <tt>key</tt>, or
+     *         <tt>null</tt> if there was no mapping for <tt>key</tt>
+     * @throws NullPointerException if the specified key or value is null
+     */
+    @Override
+	public V put(long key, V value) {
+        if (value == null)
+            throw new NullPointerException();
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).put(key, hash, value, false);
+    }
+
+    /**
+     * If the specified key is not already associated with a value, associates it with the given value.
+     *
+     * @return the previous value associated with the specified key,
+     *         or <tt>null</tt> if there was no mapping for the key
+     * @throws NullPointerException if the specified key or value is null
+     */
+    public V putIfAbsent(long key, V value) {
+        if (value == null)
+            throw new NullPointerException();
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).put(key, hash, value, true);
+    }
+
+
+    /**
+     * Removes the key (and its corresponding value) from this map.
+     * This method does nothing if the key is not in the map.
+     *
+     * @param  key the key that needs to be removed
+     * @return the previous value associated with <tt>key</tt>, or
+     *         <tt>null</tt> if there was no mapping for <tt>key</tt>
+     * @throws NullPointerException if the specified key is null
+     */
+    @Override
+    public V remove(long key) {
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).remove(key, hash, null);
+    }
+
+    /**
+     * Removes the entry for a key only if it is currently mapped to a given value.
+     *
+     * @return <tt>true</tt> if the value was removed
+     * @throws NullPointerException if the specified key is null
+     */
+    public boolean remove(long key, Object value) {
+        final int hash = Utils.longHash(key^hashSalt);
+        return value != null && segmentFor(hash).remove(key, hash, value) != null;
+    }
+
+    /**
+     * Replaces the entry for a key only if it is currently mapped to a given value.
+     *
+     * @return <tt>true</tt> if the value was replaced
+     * @throws NullPointerException if any of the arguments are null
+     */
+    public boolean replace(long key, V oldValue, V newValue) {
+        if (oldValue == null || newValue == null)
+            throw new NullPointerException();
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).replace(key, hash, oldValue, newValue);
+    }
+
+    /**
+     * Replaces the entry for a key only if it is currently mapped to some value.
+     *
+     * @return the previous value associated with the specified key,
+     *         or <tt>null</tt> if there was no mapping for the key
+     * @throws NullPointerException if the specified key or value is null
+     */
+    public V replace(long key, V value) {
+        if (value == null)
+            throw new NullPointerException();
+        final int hash = Utils.longHash(key^hashSalt);
+        return segmentFor(hash).replace(key, hash, value);
+    }
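+
+    /**
+     * Illustrative usage sketch only (not part of the upstream MapDB source): shows how
+     * the atomic update methods above compose. The method name is hypothetical.
+     */
+    private static String atomicUpdateExample() {
+        LongConcurrentHashMap<String> cache = new LongConcurrentHashMap<String>(16);
+        cache.putIfAbsent(42L, "a");   // stores "a", returns null (no previous mapping)
+        cache.putIfAbsent(42L, "b");   // keeps "a", returns the existing value "a"
+        cache.replace(42L, "a", "c");  // compare-and-set style swap, returns true
+        cache.remove(42L, "a");        // no-op: the current value is "c", so returns false
+        return cache.get(42L);         // "c"
+    }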
+
+    /**
+     * Removes all of the mappings from this map.
+     */
+    @Override
+    public void clear() {
+        for (Segment<V> segment : segments) segment.clear();
+    }
+
+
+
+
+
+    /* ---------------- Iterator Support -------------- */
+
+    abstract class HashIterator {
+        int nextSegmentIndex;
+        int nextTableIndex;
+        HashEntry<V>[] currentTable;
+        HashEntry<V> nextEntry;
+        HashEntry<V> lastReturned;
+
+        HashIterator() {
+            nextSegmentIndex = segments.length - 1;
+            nextTableIndex = -1;
+            advance();
+        }
+
+
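+        /*
+         * Walks the map from the last segment towards the first and, within each segment,
+         * from the last bucket towards the first, following each bucket's chain of
+         * HashEntry nodes. Leaves nextEntry pointing at the next element to hand out,
+         * or null once the traversal is exhausted.
+         */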
+        final void advance() {
+            if (nextEntry != null && (nextEntry = nextEntry.next) != null)
+                return;
+
+            while (nextTableIndex >= 0) {
+                if ( (nextEntry = currentTable[nextTableIndex--]) != null)
+                    return;
+            }
+
+            while (nextSegmentIndex >= 0) {
+                Segment<V> seg = segments[nextSegmentIndex--];
+                if (seg.count != 0) {
+                    currentTable = seg.table;
+                    for (int j = currentTable.length - 1; j >= 0; --j) {
+                        if ( (nextEntry = currentTable[j]) != null) {
+                            nextTableIndex = j - 1;
+                            return;
+                        }
+                    }
+                }
+            }
+        }
+
+        public boolean hasNext() { return nextEntry != null; }
+
+        HashEntry<V> nextEntry() {
+            if (nextEntry == null)
+                throw new NoSuchElementException();
+            lastReturned = nextEntry;
+            advance();
+            return lastReturned;
+        }
+
+        public void remove() {
+            if (lastReturned == null)
+                throw new IllegalStateException();
+            LongConcurrentHashMap.this.remove(lastReturned.key);
+            lastReturned = null;
+        }
+    }
+
+    final class KeyIterator
+        extends HashIterator
+        implements Iterator<Long>
+    {
+        @Override
+        public Long next() { return super.nextEntry().key; }
+    }
+
+    final class ValueIterator
+        extends HashIterator
+        implements Iterator<V>
+    {
+        @Override
+        public V next() { return super.nextEntry().value; }
+    }
+
+
+    final class MapIterator extends HashIterator implements LongMapIterator<V>{
+
+        private long key;
+        private V value;
+
+        @Override
+        public boolean moveToNext() {
+            if(!hasNext()) return false;
+            HashEntry<V> next = nextEntry();
+            key = next.key;
+            value = next.value;
+            return true;
+        }
+
+        @Override
+        public long key() {
+            return key;
+        }
+
+        @Override
+        public V value() {
+            return value;
+        }
+    }
+
+
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentLRUMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentLRUMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongConcurrentLRUMap.java	(revision 29363)
@@ -0,0 +1,928 @@
+package org.mapdb;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+import java.lang.ref.WeakReference;
+import java.util.*;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.logging.Level;
+import java.util.logging.Logger;
+
+/**
+ * An LRU cache implementation based upon ConcurrentHashMap and other techniques to reduce
+ * contention and synchronization overhead, in order to utilize multiple CPU cores more effectively.
+ * <p/>
+ * Note that the implementation does not follow a true LRU (least-recently-used) eviction
+ * strategy. Instead it strives to remove the least recently used items, but when the initial
+ * cleanup does not remove enough items to reach the 'acceptableWaterMark' limit, it can
+ * remove more items forcefully regardless of access order.
+ *
+ * MapDB note: reworked to implement LongMap. Original comes from:
+ * https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/java/org/apache/solr/util/LongConcurrentLRUMap.java
+ * 
+ */
+@SuppressWarnings({"unchecked","rawtypes","unused"}) //TODO unused stuff?
+public class LongConcurrentLRUMap<V> extends LongMap<V> {
+  private static Logger log = Logger.getLogger(LongConcurrentLRUMap.class.getName());
+
+  private final LongConcurrentHashMap<CacheEntry<V>> map;
+  private final int upperWaterMark, lowerWaterMark;
+  private final ReentrantLock markAndSweepLock = new ReentrantLock(true);
+  private boolean isCleaning = false;  // not volatile... piggybacked on other volatile vars
+  private final boolean newThreadForCleanup;
+  private volatile boolean islive = true;
+  private final Stats stats = new Stats();
+  private final int acceptableWaterMark;
+  private long oldestEntry = 0;  // not volatile, only accessed in the cleaning method
+  private CleanupThread cleanupThread;
+
+  public LongConcurrentLRUMap(int upperWaterMark, final int lowerWaterMark, int acceptableWatermark,
+                              int initialSize, boolean runCleanupThread, boolean runNewThreadForCleanup) {
+    if (upperWaterMark < 1) throw new IllegalArgumentException("upperWaterMark must be > 0");
+    if (lowerWaterMark >= upperWaterMark)
+      throw new IllegalArgumentException("lowerWaterMark must be < upperWaterMark");
+    map = new LongConcurrentHashMap<CacheEntry<V>>(initialSize);
+    newThreadForCleanup = runNewThreadForCleanup;
+    this.upperWaterMark = upperWaterMark;
+    this.lowerWaterMark = lowerWaterMark;
+    this.acceptableWaterMark = acceptableWatermark;
+    if (runCleanupThread) {
+      cleanupThread = new CleanupThread(this);
+      cleanupThread.start();
+    }
+  }
+
+  public LongConcurrentLRUMap(int size, int lowerWatermark) {
+    this(size, lowerWatermark, (int) Math.floor((lowerWatermark + size) / 2),
+            (int) Math.ceil(0.75 * size), false, false);
+  }
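+
+  /**
+   * Illustrative sketch only (not part of the upstream Solr/MapDB source): constructing a
+   * cache with explicit watermarks. Once more than 'upperWaterMark' entries are present,
+   * markAndSweep() trims the map back towards the lower/acceptable watermarks.
+   * The method name is hypothetical.
+   */
+  private static LongConcurrentLRUMap<String> exampleCache() {
+    // upper=1000, lower=800, acceptable=900, initial capacity=750, no cleanup threads
+    LongConcurrentLRUMap<String> cache =
+            new LongConcurrentLRUMap<String>(1000, 800, 900, 750, false, false);
+    cache.put(1L, "tile");  // counted by stats.putCounter while the cache is live
+    cache.get(1L);          // refreshes lastAccessed via stats.accessCounter
+    return cache;
+  }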
+
+  public void setAlive(boolean live) {
+    islive = live;
+  }
+
+  public V get(long key) {
+    CacheEntry<V> e = map.get(key);
+    if (e == null) {
+      if (islive) stats.missCounter.incrementAndGet();
+      return null;
+    }
+    if (islive) e.lastAccessed = stats.accessCounter.incrementAndGet();
+    return e.value;
+  }
+
+    @Override
+    public boolean isEmpty() {
+        return map.isEmpty();
+    }
+
+  public V remove(long key) {
+    CacheEntry<V> cacheEntry = map.remove(key);
+    if (cacheEntry != null) {
+      stats.size.decrementAndGet();
+      return cacheEntry.value;
+    }
+    return null;
+  }
+
+  public V put(long key, V val) {
+    if (val == null) return null;
+    CacheEntry<V> e = new CacheEntry<V>(key, val, stats.accessCounter.incrementAndGet());
+    CacheEntry<V> oldCacheEntry = map.put(key, e);
+    int currentSize;
+    if (oldCacheEntry == null) {
+      currentSize = stats.size.incrementAndGet();
+    } else {
+      currentSize = stats.size.get();
+    }
+    if (islive) {
+      stats.putCounter.incrementAndGet();
+    } else {
+      stats.nonLivePutCounter.incrementAndGet();
+    }
+
+    // Check if we need to clear out old entries from the cache.
+    // isCleaning variable is checked instead of markAndSweepLock.isLocked()
+    // for performance because every put invocation will check until
+    // the size is back to an acceptable level.
+    //
+    // There is a race between the check and the call to markAndSweep, but
+    // it's unimportant because markAndSweep actually acquires the lock or returns if it can't.
+    //
+    // Thread safety note: isCleaning read is piggybacked (comes after) other volatile reads
+    // in this method.
+    if (currentSize > upperWaterMark && !isCleaning) {
+      if (newThreadForCleanup) {
+        new Thread() {
+          @Override
+          public void run() {
+            markAndSweep();
+          }
+        }.start();
+      } else if (cleanupThread != null){
+        cleanupThread.wakeThread();
+      } else {
+        markAndSweep();
+      }
+    }
+    return oldCacheEntry == null ? null : oldCacheEntry.value;
+  }
+
+  /**
+   * Removes items from the cache to bring the size down
+   * to an acceptable value ('acceptableWaterMark').
+   * <p/>
+   * It is done in two stages. In the first stage, least recently used items are evicted.
+   * If, after the first stage, the cache size is still greater than the 'acceptableWaterMark'
+   * config parameter, the second stage takes over.
+   * <p/>
+   * The second stage is more intensive and tries to bring down the cache size
+   * to the 'lowerWaterMark' config parameter.
+   */
+  private void markAndSweep() {
+    // if we want to keep at least 1000 entries, then timestamps of
+    // current through current-1000 are guaranteed not to be the oldest (but that does
+    // not mean there are 1000 entries in that group... it's actually anywhere between
+    // 1 and 1000).
+    // Also, if we want to remove 500 entries, then
+    // oldestEntry through oldestEntry+500 are guaranteed to be
+    // removed (however many there are there).
+
+    if (!markAndSweepLock.tryLock()) return;
+    try {
+      long oldestEntry = this.oldestEntry;
+      isCleaning = true;
+      this.oldestEntry = oldestEntry;     // volatile write to make isCleaning visible
+
+      long timeCurrent = stats.accessCounter.get();
+      int sz = stats.size.get();
+
+      int numRemoved = 0;
+      int numKept = 0;
+      long newestEntry = timeCurrent;
+      long newNewestEntry = -1;
+      long newOldestEntry = Long.MAX_VALUE;
+
+      int wantToKeep = lowerWaterMark;
+      int wantToRemove = sz - lowerWaterMark;
+
+      CacheEntry<V>[] eset = new CacheEntry[sz];
+      int eSize = 0;
+
+      // System.out.println("newestEntry="+newestEntry + " oldestEntry="+oldestEntry);
+      // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));
+
+      for (Iterator<CacheEntry<V>> iter = map.valuesIterator(); iter.hasNext();) {
+        CacheEntry<V> ce  = iter.next();
+        // set lastAccessedCopy to avoid more volatile reads
+        ce.lastAccessedCopy = ce.lastAccessed;
+        long thisEntry = ce.lastAccessedCopy;
+
+        // since the wantToKeep group is likely to be bigger than wantToRemove, check it first
+        if (thisEntry > newestEntry - wantToKeep) {
+          // this entry is guaranteed not to be in the bottom
+          // group, so do nothing.
+          numKept++;
+          newOldestEntry = Math.min(thisEntry, newOldestEntry);
+        } else if (thisEntry < oldestEntry + wantToRemove) { // entry in bottom group?
+          // this entry is guaranteed to be in the bottom group
+          // so immediately remove it from the map.
+          evictEntry(ce.key);
+          numRemoved++;
+        } else {
+          // This entry *could* be in the bottom group.
+          // Collect these entries to avoid another full pass... this is wasted
+          // effort if enough entries are normally removed in this first pass.
+          // An alternate impl could make a full second pass.
+          if (eSize < eset.length-1) {
+            eset[eSize++] = ce;
+            newNewestEntry = Math.max(thisEntry, newNewestEntry);
+            newOldestEntry = Math.min(thisEntry, newOldestEntry);
+          }
+        }
+      }
+
+      // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));
+      // TODO: allow this to be customized in the constructor?
+      int numPasses=1; // maximum number of linear passes over the data
+
+      // if we didn't remove enough entries, then make more passes
+      // over the values we collected, with updated min and max values.
+      while (sz - numRemoved > acceptableWaterMark && --numPasses>=0) {
+
+        oldestEntry = newOldestEntry == Long.MAX_VALUE ? oldestEntry : newOldestEntry;
+        newOldestEntry = Long.MAX_VALUE;
+        newestEntry = newNewestEntry;
+        newNewestEntry = -1;
+        wantToKeep = lowerWaterMark - numKept;
+        wantToRemove = sz - lowerWaterMark - numRemoved;
+
+        // iterate backward to make it easy to remove items.
+        for (int i=eSize-1; i>=0; i--) {
+          CacheEntry<V> ce = eset[i];
+          long thisEntry = ce.lastAccessedCopy;
+
+          if (thisEntry > newestEntry - wantToKeep) {
+            // this entry is guaranteed not to be in the bottom
+            // group, so do nothing but remove it from the eset.
+            numKept++;
+            // remove the entry by moving the last element to its position
+            eset[i] = eset[eSize-1];
+            eSize--;
+
+            newOldestEntry = Math.min(thisEntry, newOldestEntry);
+            
+          } else if (thisEntry < oldestEntry + wantToRemove) { // entry in bottom group?
+
+            // this entry is guaranteed to be in the bottom group
+            // so immediately remove it from the map.
+            evictEntry(ce.key);
+            numRemoved++;
+
+            // remove the entry by moving the last element to its position
+            eset[i] = eset[eSize-1];
+            eSize--;
+          } else {
+            // This entry *could* be in the bottom group, so keep it in the eset,
+            // and update the stats.
+            newNewestEntry = Math.max(thisEntry, newNewestEntry);
+            newOldestEntry = Math.min(thisEntry, newOldestEntry);
+          }
+        }
+        // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " esetSz="+ eSize + " sz-numRemoved=" + (sz-numRemoved));
+      }
+
+
+
+      // if we still didn't remove enough entries, then make another pass while
+      // inserting into a priority queue
+      if (sz - numRemoved > acceptableWaterMark) {
+
+        oldestEntry = newOldestEntry == Long.MAX_VALUE ? oldestEntry : newOldestEntry;
+        newOldestEntry = Long.MAX_VALUE;
+        newestEntry = newNewestEntry;
+        newNewestEntry = -1;
+        wantToKeep = lowerWaterMark - numKept;
+        wantToRemove = sz - lowerWaterMark - numRemoved;
+
+        PQueue<V> queue = new PQueue<V>(wantToRemove);
+
+        for (int i=eSize-1; i>=0; i--) {
+          CacheEntry<V> ce = eset[i];
+          long thisEntry = ce.lastAccessedCopy;
+
+          if (thisEntry > newestEntry - wantToKeep) {
+            // this entry is guaranteed not to be in the bottom
+            // group, so do nothing but remove it from the eset.
+            numKept++;
+            // removal not necessary on last pass.
+            // eset[i] = eset[eSize-1];
+            // eSize--;
+
+            newOldestEntry = Math.min(thisEntry, newOldestEntry);
+            
+          } else if (thisEntry < oldestEntry + wantToRemove) {  // entry in bottom group?
+            // this entry is guaranteed to be in the bottom group
+            // so immediately remove it.
+            evictEntry(ce.key);
+            numRemoved++;
+
+            // removal not necessary on last pass.
+            // eset[i] = eset[eSize-1];
+            // eSize--;
+          } else {
+            // This entry *could* be in the bottom group.
+            // add it to the priority queue
+
+            // everything in the priority queue will be removed, so keep track of
+            // the lowest value that ever comes back out of the queue.
+
+            // first reduce the size of the priority queue to account for
+            // the number of items we have already removed while executing
+            // this loop so far.
+            queue.myMaxSize = sz - lowerWaterMark - numRemoved;
+            while (queue.size() > queue.myMaxSize && queue.size() > 0) {
+              CacheEntry otherEntry = queue.pop();
+              newOldestEntry = Math.min(otherEntry.lastAccessedCopy, newOldestEntry);
+            }
+            if (queue.myMaxSize <= 0) break;
+
+            Object o = queue.myInsertWithOverflow(ce);
+            if (o != null) {
+              newOldestEntry = Math.min(((CacheEntry)o).lastAccessedCopy, newOldestEntry);
+            }
+          }
+        }
+
+        // Now delete everything in the priority queue.
+        // avoid using pop() since order doesn't matter anymore
+        for (CacheEntry<V> ce : queue.getValues()) {
+          if (ce==null) continue;
+          evictEntry(ce.key);
+          numRemoved++;
+        }
+
+        // System.out.println("items removed:" + numRemoved + " numKept=" + numKept + " initialQueueSize="+ wantToRemove + " finalQueueSize=" + queue.size() + " sz-numRemoved=" + (sz-numRemoved));
+      }
+
+      oldestEntry = newOldestEntry == Long.MAX_VALUE ? oldestEntry : newOldestEntry;
+      this.oldestEntry = oldestEntry;
+    } finally {
+      isCleaning = false;  // set before markAndSweep.unlock() for visibility
+      markAndSweepLock.unlock();
+    }
+  }
+
+  private static class PQueue<V> extends PriorityQueue<CacheEntry<V>> {
+    int myMaxSize;
+    final Object[] heap;
+    
+    PQueue(int maxSz) {
+      super(maxSz);
+      heap = getHeapArray();
+      myMaxSize = maxSz;
+    }
+
+    
+    Iterable<CacheEntry<V>> getValues() {
+      return (Iterable) Collections.unmodifiableCollection(Arrays.asList(heap));
+    }
+
+    @Override
+    protected boolean lessThan(CacheEntry a, CacheEntry b) {
+      // reverse the parameter order so that the queue keeps the oldest items
+      return b.lastAccessedCopy < a.lastAccessedCopy;
+    }
+
+    // necessary because maxSize is private in base class
+    public CacheEntry<V> myInsertWithOverflow(CacheEntry<V> element) {
+      if (size() < myMaxSize) {
+        add(element);
+        return null;
+      } else if (size() > 0 && !lessThan(element, (CacheEntry<V>) heap[1])) {
+        CacheEntry<V> ret = (CacheEntry<V>) heap[1];
+        heap[1] = element;
+        updateTop();
+        return ret;
+      } else {
+        return element;
+      }
+    }
+  }
+
+    /** A PriorityQueue maintains a partial ordering of its elements such that the
+     * least element can always be found in constant time.  Put()'s and pop()'s
+     * require log(size) time.
+     *
+     * <p><b>NOTE</b>: This class will pre-allocate a full array of
+     * length <code>maxSize+1</code> if instantiated via the
+     * {@link #PriorityQueue(int,boolean)} constructor with
+     * <code>prepopulate</code> set to <code>true</code>.
+     *
+     * @lucene.internal
+     */
+    private static abstract class PriorityQueue<T> {
+        private int size;
+        private final int maxSize;
+        private final T[] heap;
+
+        public PriorityQueue(int maxSize) {
+            this(maxSize, true);
+        }
+
+        public PriorityQueue(int maxSize, boolean prepopulate) {
+            size = 0;
+            int heapSize;
+            if (0 == maxSize)
+                // We allocate 1 extra to avoid if statement in top()
+                heapSize = 2;
+            else {
+                if (maxSize == Integer.MAX_VALUE) {
+                    // Don't wrap heapSize to -1, in this case, which
+                    // causes a confusing NegativeArraySizeException.
+                    // Note that very likely this will simply then hit
+                    // an OOME, but at least that's more indicative to the
+                    // caller that this value is too big.  We don't +1
+                    // in this case, but it's very unlikely in practice
+                    // one will actually insert this many objects into
+                    // the PQ:
+                    heapSize = Integer.MAX_VALUE;
+                } else {
+                    // NOTE: we add +1 because all access to heap is
+                    // 1-based not 0-based.  heap[0] is unused.
+                    heapSize = maxSize + 1;
+                }
+            }
+            heap = (T[]) new Object[heapSize]; // T is an unbounded type, so this unchecked cast always works
+            this.maxSize = maxSize;
+
+            if (prepopulate) {
+                // If sentinel objects are supported, populate the queue with them
+                T sentinel = getSentinelObject();
+                if (sentinel != null) {
+                    heap[1] = sentinel;
+                    for (int i = 2; i < heap.length; i++) {
+                        heap[i] = getSentinelObject();
+                    }
+                    size = maxSize;
+                }
+            }
+        }
+
+        /** Determines the ordering of objects in this priority queue.  Subclasses
+         *  must define this one method.
+         *  @return <code>true</code> iff parameter <tt>a</tt> is less than parameter <tt>b</tt>.
+         */
+        protected abstract boolean lessThan(T a, T b);
+
+        /**
+         * This method can be overridden by extending classes to return a sentinel
+         * object which will be used by the {@link PriorityQueue#PriorityQueue(int,boolean)}
+         * constructor to fill the queue, so that the code which uses that queue can always
+         * assume it's full and only change the top without attempting to insert any new
+         * object.<br>
+         *
+         * Those sentinel values should always compare worse than any non-sentinel
+         * value (i.e., {@link #lessThan} should always favor the
+         * non-sentinel values).<br>
+         *
+         * By default, this method returns null, which means the queue will not be
+         * filled with sentinel values. Otherwise, the value returned will be used to
+         * pre-populate the queue with sentinel values.<br>
+         *
+         * If this method is extended to return a non-null value, then the following
+         * usage pattern is recommended:
+         *
+         * <pre class="prettyprint">
+         * // extends getSentinelObject() to return a non-null value.
+         * PriorityQueue&lt;MyObject&gt; pq = new MyQueue&lt;MyObject&gt;(numHits);
+         * // save the 'top' element, which is guaranteed to not be null.
+         * MyObject pqTop = pq.top();
+         * &lt;...&gt;
+         * // now in order to add a new element, which is 'better' than top (after
+         * // you've verified it is better), it is as simple as:
+         * pqTop.change();
+         * pqTop = pq.updateTop();
+         * </pre>
+         *
+         * <b>NOTE:</b> if this method returns a non-null value, it will be called by
+         * the {@link PriorityQueue#PriorityQueue(int,boolean)} constructor
+         * {@link #size()} times, relying on a new object to be returned and will not
+         * check if it's null again. Therefore you should ensure any call to this
+         * method creates a new instance and behaves consistently, e.g., it cannot
+         * return null if it previously returned non-null.
+         *
+         * @return the sentinel object to use to pre-populate the queue, or null if
+         *         sentinel objects are not supported.
+         */
+        protected T getSentinelObject() {
+            return null;
+        }
+
+        /**
+         * Adds an Object to a PriorityQueue in log(size) time. If one tries to add
+         * more objects than the maxSize supplied at construction, an
+         * {@link ArrayIndexOutOfBoundsException} is thrown.
+         *
+         * @return the new 'top' element in the queue.
+         */
+        public final T add(T element) {
+            size++;
+            heap[size] = element;
+            upHeap();
+            return heap[1];
+        }
+
+        /**
+         * Adds an Object to a PriorityQueue in log(size) time.
+         * It returns the object (if any) that was
+         * dropped off the heap because it was full. This can be
+         * the given parameter (in case it is smaller than the
+         * full heap's minimum, and couldn't be added), or another
+         * object that was previously the smallest value in the
+         * heap and now has been replaced by a larger one, or null
+         * if the queue wasn't yet full with maxSize elements.
+         */
+        public T insertWithOverflow(T element) {
+            if (size < maxSize) {
+                add(element);
+                return null;
+            } else if (size > 0 && !lessThan(element, heap[1])) {
+                T ret = heap[1];
+                heap[1] = element;
+                updateTop();
+                return ret;
+            } else {
+                return element;
+            }
+        }
+
+        /** Returns the least element of the PriorityQueue in constant time. */
+        public final T top() {
+            // We don't need to check size here: if maxSize is 0,
+            // then heap is length 2 array with both entries null.
+            // If size is 0 then heap[1] is already null.
+            return heap[1];
+        }
+
+        /** Removes and returns the least element of the PriorityQueue in log(size)
+         time. */
+        public final T pop() {
+            if (size > 0) {
+                T result = heap[1];       // save first value
+                heap[1] = heap[size];     // move last to first
+                heap[size] = null;        // permit GC of objects
+                size--;
+                downHeap();               // adjust heap
+                return result;
+            } else
+                return null;
+        }
+
+        /**
+         * Should be called when the Object at top changes values. Still log(n) worst
+         * case, but it's at least twice as fast to
+         *
+         * <pre class="prettyprint">
+         * pq.top().change();
+         * pq.updateTop();
+         * </pre>
+         *
+         * instead of
+         *
+         * <pre class="prettyprint">
+         * o = pq.pop();
+         * o.change();
+         * pq.push(o);
+         * </pre>
+         *
+         * @return the new 'top' element.
+         */
+        public final T updateTop() {
+            downHeap();
+            return heap[1];
+        }
+
+        /** Returns the number of elements currently stored in the PriorityQueue. */
+        public final int size() {
+            return size;
+        }
+
+        /** Removes all entries from the PriorityQueue. */
+        public final void clear() {
+            for (int i = 0; i <= size; i++) {
+                heap[i] = null;
+            }
+            size = 0;
+        }
+
+        private final void upHeap() {
+            int i = size;
+            T node = heap[i];          // save bottom node
+            int j = i >>> 1;
+            while (j > 0 && lessThan(node, heap[j])) {
+                heap[i] = heap[j];       // shift parents down
+                i = j;
+                j = j >>> 1;
+            }
+            heap[i] = node;            // install saved node
+        }
+
+        private final void downHeap() {
+            int i = 1;
+            T node = heap[i];          // save top node
+            int j = i << 1;            // find smaller child
+            int k = j + 1;
+            if (k <= size && lessThan(heap[k], heap[j])) {
+                j = k;
+            }
+            while (j <= size && lessThan(heap[j], node)) {
+                heap[i] = heap[j];       // shift up child
+                i = j;
+                j = i << 1;
+                k = j + 1;
+                if (k <= size && lessThan(heap[k], heap[j])) {
+                    j = k;
+                }
+            }
+            heap[i] = node;            // install saved node
+        }
+
+        /** This method returns the internal heap array as Object[].
+         * @lucene.internal
+         */
+        protected final Object[] getHeapArray() {
+            return (Object[]) heap;
+        }
+    }
+
+
+  private void evictEntry(long key) {
+    CacheEntry<V> o = map.remove(key);
+    if (o == null) return;
+    stats.size.decrementAndGet();
+    stats.evictionCounter.incrementAndGet();
+    evictedEntry(o.key,o.value);
+  }
+
+  /**
+   * Returns the 'n' least recently accessed entries present in this cache.
+   *
+   * This uses a TreeSet to collect the 'n' oldest items, ordered by ascending last access time,
+   * and returns a LinkedHashMap containing 'n' or fewer entries.
+   * @param n the number of oldest items needed
+   * @return a LinkedHashMap containing 'n' or fewer entries
+   */
+  public Map<Long,V> getOldestAccessedItems(int n) {
+    Map<Long,V> result = new LinkedHashMap<Long,V>();
+    if (n <= 0)
+      return result;
+    TreeSet<CacheEntry<V>> tree = new TreeSet<CacheEntry<V>>();
+    markAndSweepLock.lock();
+    try {
+      for (Iterator<CacheEntry<V>> iter = map.valuesIterator(); iter.hasNext();) {
+        CacheEntry<V> ce = iter.next();
+        ce.lastAccessedCopy = ce.lastAccessed;
+        if (tree.size() < n) {
+          tree.add(ce);
+        } else {
+          if (ce.lastAccessedCopy < tree.first().lastAccessedCopy) {
+            tree.remove(tree.first());
+            tree.add(ce);
+          }
+        }
+      }
+    } finally {
+      markAndSweepLock.unlock();
+    }
+    for (CacheEntry<V> e : tree) {
+      result.put(e.key, e.value);
+    }
+    return result;
+  }
+
+  public Map<Long,V> getLatestAccessedItems(int n) {
+    Map<Long,V> result = new LinkedHashMap<Long,V>();
+    if (n <= 0)
+      return result;
+    TreeSet<CacheEntry<V>> tree = new TreeSet<CacheEntry<V>>();
+    // we need to grab the lock since we are changing lastAccessedCopy
+    markAndSweepLock.lock();
+    try {
+      for (Iterator<CacheEntry<V>> iter = map.valuesIterator(); iter.hasNext();) {
+        CacheEntry<V> ce = iter.next();
+        ce.lastAccessedCopy = ce.lastAccessed;
+        if (tree.size() < n) {
+          tree.add(ce);
+        } else {
+          if (ce.lastAccessedCopy > tree.last().lastAccessedCopy) {
+            tree.remove(tree.last());
+            tree.add(ce);
+          }
+        }
+      }
+    } finally {
+      markAndSweepLock.unlock();
+    }
+    for (CacheEntry<V> e : tree) {
+      result.put(e.key, e.value);
+    }
+    return result;
+  }
+
+  public int size() {
+    return stats.size.get();
+  }
+
+    @Override
+    public Iterator<V> valuesIterator() {
+        final Iterator<CacheEntry<V>> iter = map.valuesIterator();
+        return new Iterator<V>(){
+
+            @Override
+            public boolean hasNext() {
+                return iter.hasNext();
+            }
+
+            @Override
+            public V next() {
+                return iter.next().value; //TODO can be value null if already evicted?
+            }
+
+            @Override
+            public void remove() {
+                iter.remove(); //TODO is exposing remove ok? any impact on cache?
+            }
+        };
+    }
+
+    @Override
+    public LongMapIterator<V> longMapIterator() {
+        final LongMapIterator<CacheEntry<V>> iter = map.longMapIterator();
+        return new LongMapIterator<V>() {
+            @Override
+            public boolean moveToNext() {
+                return iter.moveToNext();
+            }
+
+            @Override
+            public long key() {
+                return iter.key();
+            }
+
+            @Override
+            public V value() {
+                return iter.value().value; //TODO can be value null if already evicted?
+            }
+
+            @Override
+            public void remove() {
+                iter.remove(); //TODO is exposing remove ok? any impact on cache?
+            }
+        };
+    }
+
+  public void clear() {
+    map.clear();
+  }
+
+  public LongMap<CacheEntry<V>> getMap() {
+    return map;
+  }
+
+  private static class CacheEntry<V> implements Comparable<CacheEntry<V>> {
+    long key;
+    V value;
+    volatile long lastAccessed = 0;
+    long lastAccessedCopy = 0;
+
+
+    public CacheEntry(long key, V value, long lastAccessed) {
+      this.key = key;
+      this.value = value;
+      this.lastAccessed = lastAccessed;
+    }
+
+    public void setLastAccessed(long lastAccessed) {
+      this.lastAccessed = lastAccessed;
+    }
+
+    @Override
+    public int compareTo(CacheEntry<V> that) {
+      if (this.lastAccessedCopy == that.lastAccessedCopy) return 0;
+      return this.lastAccessedCopy < that.lastAccessedCopy ? 1 : -1;
+    }
+
+    @Override
+    public int hashCode() {
+      return value.hashCode();
+    }
+
+    @Override
+    public boolean equals(Object obj) {
+      // compare against the other entry's value rather than the raw argument,
+      // keeping equals() consistent with hashCode() (which delegates to value)
+      return obj instanceof CacheEntry && value.equals(((CacheEntry<?>) obj).value);
+    }
+
+    @Override
+    public String toString() {
+      return "key: " + key + " value: " + value + " lastAccessed:" + lastAccessed;
+    }
+  }
+
+  private boolean isDestroyed = false;
+  public void destroy() {
+    try {
+      if(cleanupThread != null){
+        cleanupThread.stopThread();
+      }
+    } finally {
+      isDestroyed = true;
+    }
+  }
+
+  public Stats getStats() {
+    return stats;
+  }
+
+
+  protected static class Stats {
+    private final AtomicLong accessCounter = new AtomicLong(0),
+            putCounter = new AtomicLong(0),
+            nonLivePutCounter = new AtomicLong(0),
+            missCounter = new AtomicLong();
+    private final AtomicInteger size = new AtomicInteger();
+    private AtomicLong evictionCounter = new AtomicLong();
+
+    public long getCumulativeLookups() {
+      return (accessCounter.get() - putCounter.get() - nonLivePutCounter.get()) + missCounter.get();
+    }
+
+    public long getCumulativeHits() {
+      return accessCounter.get() - putCounter.get() - nonLivePutCounter.get();
+    }
+
+    public long getCumulativePuts() {
+      return putCounter.get();
+    }
+
+    public long getCumulativeEvictions() {
+      return evictionCounter.get();
+    }
+
+    public int getCurrentSize() {
+      return size.get();
+    }
+
+    public long getCumulativeNonLivePuts() {
+      return nonLivePutCounter.get();
+    }
+
+    public long getCumulativeMisses() {
+      return missCounter.get();
+    }
+
+    public void add(Stats other) {
+      accessCounter.addAndGet(other.accessCounter.get());
+      putCounter.addAndGet(other.putCounter.get());
+      nonLivePutCounter.addAndGet(other.nonLivePutCounter.get());
+      missCounter.addAndGet(other.missCounter.get());
+      evictionCounter.addAndGet(other.evictionCounter.get());
+      size.set(Math.max(size.get(), other.size.get()));
+    }
+  }
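+
+  /**
+   * Illustrative sketch only (not part of the upstream source): deriving a hit ratio from
+   * the counters exposed by {@link Stats}. The method name is hypothetical.
+   */
+  private static double hitRatio(Stats s) {
+    long lookups = s.getCumulativeLookups();  // hits + misses
+    return lookups == 0 ? 0.0 : (double) s.getCumulativeHits() / lookups;
+  }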
+
+
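+  /*
+   * Background sweeper: put() wakes this thread via wakeThread() instead of running
+   * markAndSweep() inline. The cache is held through a WeakReference so the thread does not
+   * keep an otherwise unreachable map alive; the run() loop exits once that reference is
+   * cleared or stopThread() is called.
+   */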
+  private static class CleanupThread extends Thread {
+    private WeakReference<LongConcurrentLRUMap> cache;
+
+    private boolean stop = false;
+
+    public CleanupThread(LongConcurrentLRUMap c) {
+      cache = new WeakReference<LongConcurrentLRUMap>(c);
+    }
+
+    @Override
+    public void run() {
+      while (true) {
+        synchronized (this) {
+          if (stop) break;
+          try {
+            this.wait();
+          } catch (InterruptedException e) {}
+        }
+        if (stop) break;
+        LongConcurrentLRUMap c = cache.get();
+        if(c == null) break;
+        c.markAndSweep();
+      }
+    }
+
+    void wakeThread() {
+      synchronized(this){
+        this.notify();
+      }
+    }
+
+    void stopThread() {
+      synchronized(this){
+        stop=true;
+        this.notify();
+      }
+    }
+  }
+
+  @Override
+  protected void finalize() throws Throwable {
+    try {
+      if(!isDestroyed){
+        log.log(Level.SEVERE, "LongConcurrentLRUMap was not destroyed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!");
+        destroy();
+      }
+    } finally {
+      super.finalize();
+    }
+  }
+
+  /** Override this method to get notified about evicted entries. */
+  protected void evictedEntry(long key, V value){
+
+  }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongHashMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongHashMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongHashMap.java	(revision 29363)
@@ -0,0 +1,480 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.Serializable;
+import java.util.Arrays;
+import java.util.ConcurrentModificationException;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+/**
+ * LongHashMap is an implementation of LongMap without concurrency locking.
+ * This code is an adaptation of 'HashMap' from Apache Harmony, refactored to support primitive long keys.
+ */
+public class LongHashMap<V> extends LongMap<V> implements Serializable {
+
+    private static final long serialVersionUID = 362340234235222265L;
+
+    /*
+     * Actual count of entries
+     */
+    transient int elementCount;
+
+    /*
+     * The internal data structure to hold Entries
+     */
+    transient Entry<V>[] elementData;
+
+    /*
+     * modification count, to keep track of structural modifications between the
+     * HashMap and the iterator
+     */
+    transient int modCount = 0;
+
+    /*
+     * default size that a HashMap created using the default constructor would
+     * have.
+     */
+    private static final int DEFAULT_SIZE = 16;
+
+    /*
+     * maximum ratio of (stored elements)/(storage size) which does not lead to
+     * rehash
+     */
+    final float loadFactor;
+
+    /**
+     * Salt added to keys before hashing, so it is harder to trigger a hash collision attack.
+     */
+    protected final long hashSalt = hashSaltValue();
+
+    protected long hashSaltValue() {
+        return Utils.RANDOM.nextLong();
+    }
+
+    /*
+     * maximum number of elements that can be put in this map before having to
+     * rehash
+     */
+    int threshold;
+
+    static class Entry<V>{
+        final int origKeyHash;
+
+        final long key;
+        V value;
+        Entry<V> next;
+
+
+
+        public Entry(long key, int hash) {
+            this.key = key;
+            this.origKeyHash = hash;
+        }
+    }
+
+    private static class AbstractMapIterator<V>  {
+        private int position = 0;
+        int expectedModCount;
+        Entry<V> futureEntry;
+        Entry<V> currentEntry;
+        Entry<V> prevEntry;
+
+        final LongHashMap<V> associatedMap;
+
+        AbstractMapIterator(LongHashMap<V> hm) {
+            associatedMap = hm;
+            expectedModCount = hm.modCount;
+            futureEntry = null;
+        }
+
+        public boolean hasNext() {
+            if (futureEntry != null) {
+                return true;
+            }
+            while (position < associatedMap.elementData.length) {
+                if (associatedMap.elementData[position] == null) {
+                    position++;
+                } else {
+                    return true;
+                }
+            }
+            return false;
+        }
+
+        final void checkConcurrentMod() throws ConcurrentModificationException {
+            if (expectedModCount != associatedMap.modCount) {
+                throw new ConcurrentModificationException();
+            }
+        }
+
+        final void makeNext() {
+            checkConcurrentMod();
+            if (!hasNext()) {
+                throw new NoSuchElementException();
+            }
+            if (futureEntry == null) {
+                currentEntry = associatedMap.elementData[position++];
+                futureEntry = currentEntry.next;
+                prevEntry = null;
+            } else {
+                if(currentEntry!=null){
+                    prevEntry = currentEntry;
+                }
+                currentEntry = futureEntry;
+                futureEntry = futureEntry.next;
+            }
+        }
+
+        public final void remove() {
+            checkConcurrentMod();
+            if (currentEntry==null) {
+                throw new IllegalStateException();
+            }
+            if(prevEntry==null){
+                int index = currentEntry.origKeyHash & (associatedMap.elementData.length - 1);
+                associatedMap.elementData[index] = associatedMap.elementData[index].next;
+            } else {
+                prevEntry.next = currentEntry.next;
+            }
+            currentEntry = null;
+            expectedModCount++;
+            associatedMap.modCount++;
+            associatedMap.elementCount--;
+
+        }
+    }
+
+
+    private static class EntryIterator <V> extends AbstractMapIterator<V> implements LongMapIterator<V> {
+
+        EntryIterator (LongHashMap<V> map) {
+            super(map);
+        }
+
+
+        @Override
+        public boolean moveToNext() {
+            if(!hasNext()) return false;
+            makeNext();
+            return true;
+        }
+
+        @Override
+        public long key() {
+            return currentEntry.key;
+        }
+
+        @Override
+        public V value() {
+            return (V) currentEntry.value;
+        }
+    }
+
+
+    private static class ValueIterator <V> extends AbstractMapIterator<V> implements Iterator<V> {
+
+        ValueIterator (LongHashMap<V> map) {
+            super(map);
+        }
+
+        @Override
+        public V next() {
+            makeNext();
+            return currentEntry.value;
+        }
+    }
+    /**
+     * Create a new element array
+     *
+     * @param s the size of the new element array
+     * @return Reference to the element array
+     */
+    @SuppressWarnings("unchecked")
+    Entry<V>[] newElementArray(int s) {
+        return new Entry[s];
+    }
+
+    /**
+     * Constructs a new empty {@code HashMap} instance.
+     */
+    public LongHashMap() {
+        this(DEFAULT_SIZE);
+    }
+
+    /**
+     * Constructs a new {@code HashMap} instance with the specified capacity.
+     *
+     * @param capacity
+     *            the initial capacity of this hash map.
+     * @throws IllegalArgumentException
+     *                when the capacity is less than zero.
+     */
+    public LongHashMap(int capacity) {
+        this(capacity, 0.75f);  // default load factor of 0.75
+    }
+
+    /**
+     * Calculates the capacity of storage required for storing given number of
+     * elements
+     *
+     * @param x
+     *            number of elements
+     * @return storage size
+     */
+    private static final int calculateCapacity(int x) {
+        if(x >= 1 << 30){
+            return 1 << 30;
+        }
+        if(x == 0){
+            return 16;
+        }
+        x = x -1;
+        x |= x >> 1;
+        x |= x >> 2;
+        x |= x >> 4;
+        x |= x >> 8;
+        x |= x >> 16;
+        return x + 1;
+    }
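+
+    /*
+     * Worked example (illustrative): calculateCapacity(100) folds the high bit downwards,
+     * 99 -> 115 -> 127, and returns 127 + 1 = 128, i.e. the requested capacity rounded up
+     * to the next power of two.
+     */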
+
+    /**
+     * Constructs a new {@code HashMap} instance with the specified capacity and
+     * load factor.
+     *
+     * @param capacity
+     *            the initial capacity of this hash map.
+     * @param loadFactor
+     *            the initial load factor.
+     * @throws IllegalArgumentException
+     *                when the capacity is less than zero or the load factor is
+     *                less or equal to zero.
+     */
+    public LongHashMap(int capacity, float loadFactor) {
+        if (capacity >= 0 && loadFactor > 0) {
+            capacity = calculateCapacity(capacity);
+            elementCount = 0;
+            elementData = newElementArray(capacity);
+            this.loadFactor = loadFactor;
+            computeThreshold();
+        } else {
+            throw new IllegalArgumentException();
+        }
+    }
+
+    /**
+     * Removes all mappings from this hash map, leaving it empty.
+     *
+     * @see #isEmpty
+     * @see #size
+     */
+    @Override
+    public void clear() {
+        if (elementCount > 0) {
+            elementCount = 0;
+            Arrays.fill(elementData, null);
+            modCount++;
+        }
+    }
+
+
+    /**
+     * Computes the threshold for rehashing
+     */
+    private void computeThreshold() {
+        threshold = (int) (elementData.length * loadFactor);
+    }
+
+    /**
+     * Returns the value of the mapping with the specified key.
+     *
+     * @param key
+     *            the key.
+     * @return the value of the mapping with the specified key, or {@code null}
+     *         if no mapping for the specified key is found.
+     */
+    @Override
+    public V get(long key) {
+        Entry<V> m = getEntry(key);
+        if (m != null) {
+            return m.value;
+        }
+        return null;
+    }
+
+    final Entry<V> getEntry(long key) {
+        int hash = Utils.longHash(key^hashSalt);
+        int index = hash & (elementData.length - 1);
+        return findNonNullKeyEntry(key, index, hash);
+    }
+
+    final Entry<V> findNonNullKeyEntry(long key, int index, int keyHash) {
+        Entry<V> m = elementData[index];
+        while (m != null
+                && (m.origKeyHash != keyHash || key!=m.key)) {
+            m = m.next;
+        }
+        return m;
+    }
+
+
+
+    /**
+     * Returns whether this map is empty.
+     *
+     * @return {@code true} if this map has no elements, {@code false}
+     *         otherwise.
+     * @see #size()
+     */
+    @Override
+    public boolean isEmpty() {
+        return elementCount == 0;
+    }
+
+    /**
+     * Maps the specified key to the specified value.
+     *
+     * @param key
+     *            the key.
+     * @param value
+     *            the value.
+     * @return the value of any previous mapping with the specified key or
+     *         {@code null} if there was no such mapping.
+     */
+    @Override
+    public V put(long key, V value) {
+        Entry<V> entry;
+        int hash = Utils.longHash(key^hashSalt);
+        int index = hash & (elementData.length - 1);
+        entry = findNonNullKeyEntry(key, index, hash);
+        if (entry == null) {
+           modCount++;
+           entry = createHashedEntry(key, index, hash);
+           if (++elementCount > threshold) {
+               rehash();
+           }
+        }
+
+        V result = entry.value;
+        entry.value = value;
+        return result;
+    }
+
+
+    Entry<V> createHashedEntry(long key, int index, int hash) {
+        Entry<V> entry = new Entry<V>(key,hash);
+        entry.next = elementData[index];
+        elementData[index] = entry;
+        return entry;
+    }
+
+
+
+    void rehash(int capacity) {
+        int length = calculateCapacity((capacity == 0 ? 1 : capacity << 1));
+
+        Entry<V>[] newData = newElementArray(length);
+        for (int i = 0; i < elementData.length; i++) {
+            Entry<V> entry = elementData[i];
+            elementData[i] = null;
+            while (entry != null) {
+                int index = entry.origKeyHash & (length - 1);
+                Entry<V> next = entry.next;
+                entry.next = newData[index];
+                newData[index] = entry;
+                entry = next;
+            }
+        }
+        elementData = newData;
+        computeThreshold();
+    }
+
+    void rehash() {
+        rehash(elementData.length);
+    }
+
+    /**
+     * Removes the mapping with the specified key from this map.
+     *
+     * @param key
+     *            the key of the mapping to remove.
+     * @return the value of the removed mapping or {@code null} if no mapping
+     *         for the specified key was found.
+     */
+    @Override
+    public V remove(long key) {
+        Entry<V> entry = removeEntry(key);
+        if (entry != null) {
+            return entry.value;
+        }
+        return null;
+    }
+
+
+    final Entry<V> removeEntry(long key) {
+        int index = 0;
+        Entry<V> entry;
+        Entry<V> last = null;
+
+        int hash = Utils.longHash(key^hashSalt);
+        index = hash & (elementData.length - 1);
+        entry = elementData[index];
+        while (entry != null && !(entry.origKeyHash == hash && key == entry.key)) {
+             last = entry;
+             entry = entry.next;
+        }
+
+        if (entry == null) {
+            return null;
+        }
+        if (last == null) {
+            elementData[index] = entry.next;
+        } else {
+            last.next = entry.next;
+        }
+        modCount++;
+        elementCount--;
+        return entry;
+    }
+
+    /**
+     * Returns the number of elements in this map.
+     *
+     * @return the number of elements in this map.
+     */
+    @Override
+    public int size() {
+        return elementCount;
+    }
+
+    @Override
+    public Iterator<V> valuesIterator() {
+        return new ValueIterator<V>(this);
+    }
+
+    @Override
+    public LongMapIterator<V> longMapIterator() {
+        return new EntryIterator<V>(this);
+    }
+
+
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongMap.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongMap.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/LongMap.java	(revision 29363)
@@ -0,0 +1,117 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.util.Iterator;
+
+/**
+ * Same as 'java.util.Map' but uses primitive 'long' keys to minimise boxing (and GC) overhead.
+ *
+ * @author Jan Kotek
+ */
+public abstract class LongMap<V> {
+
+    /**
+     * Removes all mappings from this hash map, leaving it empty.
+     *
+     * @see #isEmpty
+     * @see #size
+     */
+    public abstract void clear();
+
+    /**
+     * Returns the value of the mapping with the specified key.
+     *
+     * @param key the key.
+     * @return the value of the mapping with the specified key, or {@code null}
+     *         if no mapping for the specified key is found.
+     */
+    public abstract V get(long key);
+
+    /**
+     * Returns whether this map is empty.
+     *
+     * @return {@code true} if this map has no elements, {@code false}
+     *         otherwise.
+     * @see #size()
+     */
+    public abstract boolean isEmpty();
+
+    /**
+     * Maps the specified key to the specified value.
+     *
+     * @param key   the key.
+     * @param value the value.
+     * @return the value of any previous mapping with the specified key or
+     *         {@code null} if there was no such mapping.
+     */
+    public abstract V put(long key, V value);
+
+
+    /**
+     * Removes the mapping with the specified key from this map.
+     *
+     * @param key the key of the mapping to remove.
+     * @return the value previously mapped to this key, or {@code null} if no such mapping existed.
+     */
+    public abstract V remove(long key);
+
+    /**
+     * Returns the number of elements in this map.
+     *
+     * @return the number of elements in this map.
+     */
+    public abstract int size();
+
+
+    /**
+     * @return iterator over values in map
+     */
+    public abstract Iterator<V> valuesIterator();
+
+    public abstract LongMapIterator<V> longMapIterator();
+
+
+    /** Iterates over LongMap keys and values without boxing the long keys */
+    public interface LongMapIterator<V>{
+        boolean moveToNext();
+        long key();
+        V value();
+
+        void remove();
+    }
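+
+    /* Example iteration without boxing the keys (illustrative sketch only;
+     * 'map' stands for any LongMap<String> instance):
+     *
+     *   LongMapIterator<String> it = map.longMapIterator();
+     *   while(it.moveToNext()){
+     *       long key = it.key();       // primitive key, no Long allocation
+     *       String value = it.value();
+     *   }
+     */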
+
+    @Override
+	public String toString(){
+        final StringBuilder b = new StringBuilder();
+        b.append(getClass().getSimpleName());
+        b.append('[');
+        boolean first = true;
+        LongMapIterator<V> iter = longMapIterator();
+        while(iter.moveToNext()){
+            if(first){
+                first = false;
+            }else{
+                b.append(", ");
+            }
+            b.append(iter.key());
+            b.append(" => ");
+            b.append(iter.value());
+        }
+        b.append(']');
+        return b.toString();
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Queues.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Queues.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Queues.java	(revision 29363)
@@ -0,0 +1,572 @@
+package org.mapdb;
+
+
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Various queue algorithms.
+ */
+public final class Queues {
+
+    private Queues(){}
+
+
+    public static abstract class SimpleQueue<E> implements java.util.Queue<E>{
+
+        protected final Engine engine;
+        protected final Serializer<E> serializer;
+
+        protected final Atomic.Long head;
+
+
+        protected static class NodeSerializer<E> implements Serializer<Node<E>> {
+            private final Serializer<E> serializer;
+
+            public NodeSerializer(Serializer<E> serializer) {
+                this.serializer = serializer;
+            }
+
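+            //the shared EMPTY node is written as a zero-length record and recognised on read by available==0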
+            @Override
+            public void serialize(DataOutput out, Node<E> value) throws IOException {
+                if(value==Node.EMPTY) return;
+                Utils.packLong(out,value.next);
+                serializer.serialize(out, value.value);
+            }
+
+            @Override
+            public Node<E> deserialize(DataInput in, int available) throws IOException {
+                if(available==0)return Node.EMPTY;
+                return new Node<E>(Utils.unpackLong(in), serializer.deserialize(in,-1));
+            }
+        }
+
+        protected final Serializer<Node<E>> nodeSerializer;
+
+
+        public SimpleQueue(Engine engine, Serializer<E> serializer, long headRecid) {
+            this.engine = engine;
+            this.serializer = serializer;
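+            //headRecid == 0 means this is a new queue, so allocate a fresh head record initialised to 0 (empty)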
+            if(headRecid == 0) headRecid = engine.put(0L, Serializer.LONG_SERIALIZER);
+            head = new Atomic.Long(engine,headRecid);
+            nodeSerializer = new NodeSerializer<E>(serializer);
+        }
+
+
+        /**
+         * Closes the underlying storage and releases all resources.
+         * Used mostly with temporary collections where the engine is not accessible.
+         */
+        public void close(){
+            engine.close();
+        }
+
+
+
+        protected static final class Node<E>{
+
+            protected static final Node EMPTY = new Node(0L, null);
+
+            final protected long next;
+            final protected E value;
+
+            public Node(long next, E value) {
+                this.next = next;
+                this.value = value;
+            }
+
+            @Override
+            public boolean equals(Object o) {
+                if (this == o) return true;
+                if (o == null || getClass() != o.getClass()) return false;
+
+                Node node = (Node) o;
+
+                if (next != node.next) return false;
+                if (value != null ? !value.equals(node.value) : node.value != null) return false;
+
+                return true;
+            }
+
+            @Override
+            public int hashCode() {
+                int result = (int) (next ^ (next >>> 32));
+                result = 31 * result + (value != null ? value.hashCode() : 0);
+                return result;
+            }
+        }
+
+        @Override
+        public void clear() {
+            while(!isEmpty())
+                remove();
+        }
+
+
+        @Override
+        public E remove() {
+            E ret = poll();
+            if(ret == null) throw new NoSuchElementException();
+            return ret;
+        }
+
+
+        @Override
+        public E element() {
+            E ret = peek();
+            if(ret == null) throw new NoSuchElementException();
+            return ret;
+
+        }
+
+
+        @Override
+        public boolean offer(E e) {
+            return add(e);
+        }
+
+
+
+        @Override
+        public boolean isEmpty() {
+            return head.get()==0;
+        }
+
+
+        @Override
+        public int size() {
+            throw new UnsupportedOperationException();
+        }
+
+
+        @Override
+        public boolean contains(Object o) {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public Iterator<E> iterator() {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public Object[] toArray() {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public <T> T[] toArray(T[] a) {
+            throw new UnsupportedOperationException();
+        }
+
+
+        @Override
+        public boolean remove(Object o) {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public boolean containsAll(Collection<?> c) {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public boolean addAll(Collection<? extends E> c) {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public boolean removeAll(Collection<?> c) {
+            throw new UnsupportedOperationException();
+        }
+
+        @Override
+        public boolean retainAll(Collection<?> c) {
+            throw new UnsupportedOperationException();
+        }
+    }
+
+    /**
+     * Last-in-first-out (LIFO) lock-free queue.
+     *
+     * @param <E> element type
+     */
+    public static class Stack<E> extends SimpleQueue<E> {
+
+        protected final boolean useLocks;
+        protected final Locks.RecidLocks locks;
+
+
+
+        public Stack(Engine engine,  Serializer<E> serializer, long headerRecid, boolean useLocks) {
+            super(engine, serializer, headerRecid);
+            this.useLocks = useLocks;
+            locks = useLocks? new Locks.LongHashMapRecidLocks() : null;
+        }
+
+        @Override
+        public E peek() {
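+            //optimistic read: fetch head, read its node, then re-read head and only return the value if head did not change in between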
+            while(true){
+                long head2 = head.get();
+                if(0 == head2) return null;
+                Node<E> n = engine.get(head2, nodeSerializer);
+                long head3 = head.get();
+                if(0 == head3) return null;
+                if(head2 == head3) return (E) n.value;
+            }
+        }
+
+        @Override
+        public E poll() {
+            long head2 = 0;
+            Node<E> n;
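+            //spin: (re)read head, optionally lock its recid, and retry until head is CAS-ed from the current node to its successor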
+            do{
+                if(useLocks && head2!=0)locks.unlock(head2);
+                head2 =head.get();
+                if(head2 == 0) return null;
+
+                if(useLocks && head2!=0)locks.lock(head2);
+                n = engine.get(head2, nodeSerializer);
+            }while(n==null || !head.compareAndSet(head2, n.next));
+            if(useLocks && head2!=0){
+                engine.delete(head2,Serializer.LONG_SERIALIZER);
+                locks.unlock(head2);
+            }else{
+                engine.update(head2, null, nodeSerializer);
+            }
+            return (E) n.value;
+        }
+
+
+        @Override
+        public boolean add(E e) {
+            long head2 = head.get();
+            Node<E> n = new Node<E>(head2, e);
+            long recid = engine.put(n, nodeSerializer);
+            while(!head.compareAndSet(head2, recid)){
+                //failed to update head, so read new value and start over
+                head2 = head.get();
+                n = new Node<E>(head2, e);
+                engine.update(recid, n, nodeSerializer);
+            }
+            return true;
+        }
+    }
+
+    protected static final class StackRoot{
+        final long headerRecid;
+        final boolean useLocks;
+        final Serializer serializer;
+
+        public StackRoot(long headerRecid, boolean useLocks, Serializer serializer) {
+            this.headerRecid = headerRecid;
+            this.useLocks = useLocks;
+            this.serializer = serializer;
+        }
+    }
+
+    protected static final class StackRootSerializer implements Serializer<StackRoot>{
+
+        final Serializer<Serializer> serialierSerializer;
+
+        public StackRootSerializer(Serializer<Serializer> serialierSerializer) {
+            this.serialierSerializer = serialierSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, StackRoot value) throws IOException {
+            out.write(SerializationHeader.MAPDB_STACK);
+            Utils.packLong(out, value.headerRecid);
+            out.writeBoolean(value.useLocks);
+            serialierSerializer.serialize(out,value.serializer);
+        }
+
+        @Override
+        public StackRoot deserialize(DataInput in, int available) throws IOException {
+            if(in.readUnsignedByte()!=SerializationHeader.MAPDB_STACK) throw new InternalError();
+            return new StackRoot(
+                    Utils.unpackLong(in),
+                    in.readBoolean(),
+                    serialierSerializer.deserialize(in,-1)
+            );
+        }
+    }
+
+    static <E> long createStack(Engine engine, Serializer<Serializer> serializerSerializer, Serializer<E> serializer, boolean useLocks){
+        long headerRecid = engine.put(0L, Serializer.LONG_SERIALIZER);
+        StackRoot root = new StackRoot(headerRecid, useLocks, serializer);
+        StackRootSerializer rootSerializer = new StackRootSerializer(serializerSerializer);
+        return engine.put(root, rootSerializer);
+    }
+
+    static <E> Stack<E> getStack(Engine engine, Serializer<Serializer> serializerSerializer, long rootRecid){
+        StackRoot root = engine.get(rootRecid, new StackRootSerializer(serializerSerializer));
+        return new Stack<E>(engine, root.serializer, root.headerRecid, root.useLocks);
+    }
+
+    /**
+     * First-in-first-out (FIFO) lock-free queue.
+     *
+     * @param <E> element type
+     */
+    public static class Queue<E> extends SimpleQueue<E> {
+
+        protected final Atomic.Long tail;
+        protected final Atomic.Long size;
+
+        public Queue(Engine engine, Serializer<E> serializer, long headerRecid, long nextTailRecid, long sizeRecid) {
+            super(engine, serializer,headerRecid);
+            tail = new Atomic.Long(engine,nextTailRecid);
+            size = new Atomic.Long(engine,sizeRecid);
+        }
+
+
+        @Override
+        public boolean isEmpty() {
+            return head.get() == 0;
+        }
+
+        public boolean add(E item){
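+            //allocate an EMPTY placeholder record that becomes the next tail, then CAS the record at the current tail from EMPTY to the new node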
+            final long nextTail = engine.put((Node<E>)Node.EMPTY, nodeSerializer);
+            Node<E> n = new Node<E>(nextTail, item);
+            long tail2 = tail.get();
+            while(!engine.compareAndSwap(tail2, (Node<E>)Node.EMPTY, n, nodeSerializer)){
+                tail2 = tail.get();
+            }
+            head.compareAndSet(0,tail2);
+            tail.set(nextTail);
+            size.incrementAndGet();
+            return true;
+        }
+
+        public E poll(){
+            while(true){
+                long head2 = head.get();
+                if(head2 == 0)return null;
+                Node<E> n = engine.get(head2,nodeSerializer);
+                if(n==null){
+                    //TODO we need to know when the queue is empty so we can break out of the loop.
+                    // It is not entirely clear under which concurrent situation n==null occurs, hence the 'size' hack,
+                    // but the 'size' hack is probably not thread-safe.
+                    if(size.get()==0)return null ;
+                    continue;
+                }
+                if(!engine.compareAndSwap(head2,n, (Node<E>)Node.EMPTY, nodeSerializer))
+                    continue;
+                if(!head.compareAndSet(head2,n.next)) throw new InternalError();
+                size.decrementAndGet();
+                return n.value;
+            }
+        }
+
+        @Override
+        public E peek() {
+            long head2 = head.get();
+            if(head2==0) return null;
+            Node<E> n = engine.get(head2,nodeSerializer);
+            while(n == null){
+                if(size.get()==0) return null;
+                n = engine.get(head2,nodeSerializer);
+            }
+
+            return n.value;
+        }
+    }
+
+
+    protected static final class QueueRoot{
+        final long headerRecid;
+        final long nextTailRecid;
+        final Serializer serializer;
+        final long sizeRecid;
+
+        public QueueRoot(long headerRecid, long nextTailRecid, long sizeRecid, Serializer serializer) {
+            this.headerRecid = headerRecid;
+            this.nextTailRecid = nextTailRecid;
+            this.serializer = serializer;
+            this.sizeRecid = sizeRecid;
+        }
+    }
+
+    protected static final class QueueRootSerializer implements Serializer<QueueRoot>{
+
+        final Serializer<Serializer> serialierSerializer;
+
+        public QueueRootSerializer(Serializer<Serializer> serialierSerializer) {
+            this.serialierSerializer = serialierSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, QueueRoot value) throws IOException {
+            out.write(SerializationHeader.MAPDB_QUEUE);
+            Utils.packLong(out, value.headerRecid);
+            Utils.packLong(out, value.nextTailRecid);
+            Utils.packLong(out, value.sizeRecid);
+            serialierSerializer.serialize(out,value.serializer);
+        }
+
+        @Override
+        public QueueRoot deserialize(DataInput in, int available) throws IOException {
+            if(in.readUnsignedByte()!=SerializationHeader.MAPDB_QUEUE) throw new InternalError();
+            return new QueueRoot(
+                    Utils.unpackLong(in),
+                    Utils.unpackLong(in),
+                    Utils.unpackLong(in),
+                    serialierSerializer.deserialize(in,-1)
+                    );
+        }
+    }
+
+    static <E> long createQueue(Engine engine, Serializer<Serializer> serializerSerializer, Serializer<E> serializer){
+        long headerRecid = engine.put(0L, Serializer.LONG_SERIALIZER);
+        long nextTail = engine.put(SimpleQueue.Node.EMPTY, new SimpleQueue.NodeSerializer(null));
+        long nextTailRecid = engine.put(nextTail, Serializer.LONG_SERIALIZER);
+        long sizeRecid = engine.put(0L, Serializer.LONG_SERIALIZER);
+        QueueRoot root = new QueueRoot(headerRecid, nextTailRecid, sizeRecid, serializer);
+        QueueRootSerializer rootSerializer = new QueueRootSerializer(serializerSerializer);
+        return engine.put(root, rootSerializer);
+    }
+
+
+    static <E> Queue<E> getQueue(Engine engine, Serializer<Serializer> serializerSerializer, long rootRecid){
+        QueueRoot root = engine.get(rootRecid, new QueueRootSerializer(serializerSerializer));
+        return new Queue<E>(engine, root.serializer, root.headerRecid, root.nextTailRecid,root.sizeRecid);
+    }
+
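+    /**
+     * Fixed size circular queue backed by a ring of pre-allocated node records.
+     * add() writes into the slot at the insert pointer and advances it, so once
+     * the ring wraps around older entries are overwritten. All operations take
+     * a single global lock.
+     */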
+    public static class CircularQueue<E> extends SimpleQueue<E> {
+
+        protected final Atomic.Long headInsert;
+        //TODO is there a way to implement this without global locks?
+        protected final Lock lock = new ReentrantLock();
+        protected final long size;
+
+        public CircularQueue(Engine engine, Serializer serializer, long headRecid, long headInsertRecid, long size) {
+            super(engine, serializer, headRecid);
+            headInsert = new Atomic.Long(engine, headInsertRecid);
+            this.size = size;
+        }
+
+        @Override
+        public boolean add(Object o) {
+            lock.lock();
+            try{
+                long nRecid = headInsert.get();
+                Node<E> n = engine.get(nRecid, nodeSerializer);
+                n = new Node<E>(n.next, (E) o);
+                engine.update(nRecid, n, nodeSerializer);
+                headInsert.set(n.next);
+                //move 'poll' head if it points to currently replaced item
+                head.compareAndSet(nRecid, n.next);
+                return true;
+            }finally {
+                lock.unlock();
+            }
+        }
+
+        @Override
+        public E poll() {
+            lock.lock();
+            try{
+                long nRecid = head.get();
+                Node<E> n = engine.get(nRecid, nodeSerializer);
+                engine.update(nRecid, new Node<E>(n.next, null), nodeSerializer);
+                head.set(n.next);
+                return n.value;
+            }finally {
+                lock.unlock();
+            }
+        }
+
+        @Override
+        public E peek() {
+            lock.lock();
+            try{
+                long nRecid = head.get();
+                Node<E> n = engine.get(nRecid, nodeSerializer);
+                return n.value;
+            }finally {
+                lock.unlock();
+            }
+        }
+    }
+
+    protected static final class CircularQueueRoot{
+        final long headerRecid;
+        final long headerInsertRecid;
+        final Serializer serializer;
+        final long sizeRecid;
+
+        public CircularQueueRoot(long headerRecid, long headerInsertRecid, long sizeRecid, Serializer serializer) {
+            this.headerRecid = headerRecid;
+            this.headerInsertRecid = headerInsertRecid;
+            this.serializer = serializer;
+            this.sizeRecid = sizeRecid;
+        }
+    }
+
+    protected static final class CircularQueueRootSerializer implements Serializer<CircularQueueRoot>{
+
+        final Serializer<Serializer> serialierSerializer;
+
+        public CircularQueueRootSerializer(Serializer<Serializer> serialierSerializer) {
+            this.serialierSerializer = serialierSerializer;
+        }
+
+        @Override
+        public void serialize(DataOutput out, CircularQueueRoot value) throws IOException {
+            out.write(SerializationHeader.MAPDB_CIRCULAR_QUEUE);
+            Utils.packLong(out, value.headerRecid);
+            Utils.packLong(out, value.headerInsertRecid);
+            Utils.packLong(out, value.sizeRecid);
+            serialierSerializer.serialize(out,value.serializer);
+        }
+
+        @Override
+        public CircularQueueRoot deserialize(DataInput in, int available) throws IOException {
+            if(in.readUnsignedByte()!=SerializationHeader.MAPDB_CIRCULAR_QUEUE) throw new InternalError();
+            return new CircularQueueRoot(
+                    Utils.unpackLong(in),
+                    Utils.unpackLong(in),
+                    Utils.unpackLong(in),
+                    serialierSerializer.deserialize(in,-1)
+            );
+        }
+    }
+
+    static <E> long createCircularQueue(Engine engine, Serializer<Serializer> serializerSerializer, Serializer<E> serializer, long size){
+        if(size<2) throw new IllegalArgumentException();
+        //insert N empty nodes linked into a circle
+        long prevRecid = 0;
+        long firstRecid = 0;
+        Serializer<SimpleQueue.Node> nodeSer = new SimpleQueue.NodeSerializer(serializerSerializer);
+        for(long i=0;i<size;i++){
+            SimpleQueue.Node n = new SimpleQueue.Node(prevRecid, null);
+            prevRecid = engine.put(n, nodeSer);
+            if(firstRecid==0) firstRecid = prevRecid;
+        }
+        //update first node to point to last recid
+        engine.update(firstRecid, new SimpleQueue.Node(prevRecid, null), nodeSer );
+
+        long headerRecid = engine.put(prevRecid, Serializer.LONG_SERIALIZER);
+        long headerInsertRecid = engine.put(prevRecid, Serializer.LONG_SERIALIZER);
+
+        CircularQueueRoot root = new CircularQueueRoot(headerRecid, headerInsertRecid, size, serializer);
+        CircularQueueRootSerializer rootSerializer = new CircularQueueRootSerializer(serializerSerializer);
+        return engine.put(root, rootSerializer);
+    }
+
+
+    static <E> CircularQueue<E> getCircularQueue(Engine engine, Serializer<Serializer> serializerSerializer, long rootRecid){
+        CircularQueueRoot root = engine.get(rootRecid, new CircularQueueRootSerializer(serializerSerializer));
+        return new CircularQueue<E>(engine, root.serializer, root.headerRecid, root.headerInsertRecid,root.sizeRecid);
+    }
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializationHeader.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializationHeader.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializationHeader.java	(revision 29363)
@@ -0,0 +1,185 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+/**
+ * Header byte used at the start of each record to indicate its data type.
+ * WARNING: the values below must be unique!
+ *
+ * @author Jan Kotek
+ */
+interface SerializationHeader {
+
+    int  NULL = 0;
+    int  POJO = 1;
+    int  BOOLEAN_TRUE = 2;
+    int  BOOLEAN_FALSE = 3;
+    int  INTEGER_MINUS_1 = 4;
+    int  INTEGER_0 = 5;
+    int  INTEGER_1 = 6;
+    int  INTEGER_2 = 7;
+    int  INTEGER_3 = 8;
+    int  INTEGER_4 = 9;
+    int  INTEGER_5 = 10;
+    int  INTEGER_6 = 11;
+    int  INTEGER_7 = 12;
+    int  INTEGER_8 = 13;
+    int  INTEGER_255 = 14;
+    int  INTEGER_PACK_NEG = 15;
+    int  INTEGER_PACK = 16;
+    int  LONG_MINUS_1 = 17;
+    int  LONG_0 = 18;
+    int  LONG_1 = 19;
+    int  LONG_2 = 20;
+    int  LONG_3 = 21;
+    int  LONG_4 = 22;
+    int  LONG_5 = 23;
+    int  LONG_6 = 24;
+    int  LONG_7 = 25;
+    int  LONG_8 = 26;
+    int  LONG_PACK_NEG = 27;
+    int  LONG_PACK = 28;
+    int  LONG_255 = 29;
+    int  LONG_MINUS_MAX = 30;
+    int  SHORT_MINUS_1 = 31;
+    int  SHORT_0 = 32;
+    int  SHORT_1 = 33;
+    int  SHORT_255 = 34;
+    int  SHORT_FULL = 35;
+    int  BYTE_MINUS_1 = 36;
+    int  BYTE_0 = 37;
+    int  BYTE_1 = 38;
+    int  BYTE_FULL = 39;
+    int  CHAR = 40;
+    int  FLOAT_MINUS_1 = 41;
+    int  FLOAT_0 = 42;
+    int  FLOAT_1 = 43;
+    int  FLOAT_255 = 44;
+    int  FLOAT_SHORT = 45;
+    int  FLOAT_FULL = 46;
+    int  DOUBLE_MINUS_1 = 47;
+    int  DOUBLE_0 = 48;
+    int  DOUBLE_1 = 49;
+    int  DOUBLE_255 = 50;
+    int  DOUBLE_SHORT = 51;
+    int  DOUBLE_FULL = 52;
+    int  DOUBLE_ARRAY = 53;
+    int  BIGDECIMAL = 54;
+    int  BIGINTEGER = 55;
+    int  FLOAT_ARRAY = 56;
+    int  INTEGER_MINUS_MAX = 57;
+    int  SHORT_ARRAY = 58;
+    int  BOOLEAN_ARRAY = 59;
+
+    int  ARRAY_INT_B_255 = 60;
+    int  ARRAY_INT_B_INT = 61;
+    int  ARRAY_INT_S = 62;
+    int  ARRAY_INT_I = 63;
+    int  ARRAY_INT_PACKED = 64;
+
+    int  ARRAY_LONG_B = 65;
+    int  ARRAY_LONG_S = 66;
+    int  ARRAY_LONG_I = 67;
+    int  ARRAY_LONG_L = 68;
+    int  ARRAY_LONG_PACKED = 69;
+
+    int  CHAR_ARRAY = 70;
+    int ARRAY_BYTE = 71;
+    int ARRAY_BYTE_ALL_EQUAL = 72;
+
+    int  ARRAY_OBJECT = 73;
+    //special cases for BTree values which store references
+    int  ARRAY_OBJECT_PACKED_LONG = 74;
+    int  ARRAYLIST_PACKED_LONG = 75;
+    int ARRAY_OBJECT_ALL_NULL = 76;
+    int ARRAY_OBJECT_NO_REFS = 77;
+
+    int  STRING_EMPTY = 101;
+    int  STRING = 103;
+
+    int  ARRAYLIST = 105;
+
+
+    int  TREEMAP = 107;
+    int  UUID = 108;
+    int  HASHMAP = 109;
+
+    int  LINKEDHASHMAP = 111;
+
+
+    int  TREESET = 113;
+
+    int  HASHSET = 115;
+
+    int  LINKEDHASHSET = 117;
+
+    int  LINKEDLIST = 119;
+
+    int  SERIALIZER_COMPRESSION_WRAPPER = 120;
+
+    int  VECTOR = 121;
+    int  IDENTITYHASHMAP = 122;
+    int  HASHTABLE = 123;
+    int  LOCALE = 124;
+    int  PROPERTIES = 125;
+
+    int  CLASS = 126;
+    int  DATE = 127;
+    int FUN_HI = 128;
+
+    int STRING_SERIALIZER = 129;
+    int COMPARABLE_COMPARATOR = 130;
+    int COMPARABLE_COMPARATOR_WITH_NULLS = 131;
+    int BASIC_SERIALIZER = 132;
+    int THIS_SERIALIZER = 133;
+
+    int TUPLE2 = 134;
+    int TUPLE3 = 135;
+    int TUPLE4 = 136;
+    int B_TREE_MAP_ROOT_HEADER = 137;
+    int B_TREE_NODE_LEAF_LR = 138;
+    int B_TREE_NODE_LEAF_L = 139;
+    int B_TREE_NODE_LEAF_R = 140;
+    int B_TREE_NODE_LEAF_C = 141;
+    int B_TREE_NODE_DIR_LR = 142;
+    int B_TREE_NODE_DIR_L = 143;
+    int B_TREE_NODE_DIR_R = 144;
+    int B_TREE_NODE_DIR_C = 145;
+    int B_TREE_BASIC_KEY_SERIALIZER = 146;
+
+
+    int B_TREE_SERIALIZER_POS_LONG = 147;
+    int B_TREE_SERIALIZER_STRING = 148;
+    int B_TREE_SERIALIZER_POS_INT = 149;
+    int LONG_SERIALIZER = 150;
+    int INTEGER_SERIALIZER = 151;
+    int EMPTY_SERIALIZER = 152;
+    int CRC32_SERIALIZER = 153;
+    int MAPDB_STACK = 154;
+    int MAPDB_QUEUE = 155;
+    int MAPDB_CIRCULAR_QUEUE = 156;
+
+    /**
+     * used for a reference to an already serialized object in the object graph
+     */
+    int OBJECT_STACK = 166;
+
+    int JAVA_SERIALIZATION = 172;
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Serializer.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Serializer.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Serializer.java	(revision 29363)
@@ -0,0 +1,177 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.mapdb;
+
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.zip.CRC32;
+
+/**
+ * Provides serialization and deserialization
+ *
+ * @author Jan Kotek
+ */
+public interface Serializer<A> {
+
+    /**
+     * Serializes the content of an object into a DataOutput.
+     *
+     * @param out DataOutput to write the object into
+     * @param value object to serialize
+     */
+    public void serialize( DataOutput out, A value)
+            throws IOException;
+
+
+    /**
+     * Deserializes the content of an object from a DataInput.
+     *
+     * @param in DataInput to read serialized data from
+     * @param available how many bytes are available in the DataInput for reading; may be -1 (in streams) or 0 (null).
+     * @return deserialized object
+     * @throws java.io.IOException
+     */
+    public A deserialize( DataInput in, int available)
+            throws IOException;
+
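+    /*
+     * A minimal custom implementation might look like the following sketch
+     * (illustrative example only, not part of the original source):
+     *
+     *   Serializer<Short> SHORT_SERIALIZER = new Serializer<Short>() {
+     *       @Override
+     *       public void serialize(DataOutput out, Short value) throws IOException {
+     *           out.writeShort(value);
+     *       }
+     *
+     *       @Override
+     *       public Short deserialize(DataInput in, int available) throws IOException {
+     *           return in.readShort();
+     *       }
+     *   };
+     */
+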
+    /**
+     * Serializes strings using UTF8 encoding.
+     * Used mainly for testing.
+     * Does not handle null values.
+     */
+    
+    Serializer<String> STRING_SERIALIZER = new Serializer<String>() {
+
+        @Override
+		public void serialize(DataOutput out, String value) throws IOException {
+            final byte[] bytes = value.getBytes(Utils.UTF8);
+            out.write(bytes);
+        }
+
+
+        @Override
+		public String deserialize(DataInput in, int available) throws IOException {
+            byte[] bytes = new byte[available];
+            in.readFully(bytes);
+            return new String(bytes, Utils.UTF8);
+        }
+    };
+
+
+
+
+    /** Serializes Long into 8 bytes, used mainly for testing.
+     * Does not handle null values.*/
+     
+     Serializer<Long> LONG_SERIALIZER = new Serializer<Long>() {
+        @Override
+        public void serialize(DataOutput out, Long value) throws IOException {
+            out.writeLong(value);
+        }
+
+        @Override
+        public Long deserialize(DataInput in, int available) throws IOException {
+            return in.readLong();
+        }
+    };
+
+    /** Serializes Integer into 4 bytes, used mainly for testing.
+     * Does not handle null values.*/
+    
+    Serializer<Integer> INTEGER_SERIALIZER = new Serializer<Integer>() {
+        @Override
+        public void serialize(DataOutput out, Integer value) throws IOException {
+            out.writeInt(value);
+        }
+
+        @Override
+        public Integer deserialize(DataInput in, int available) throws IOException {
+            return in.readInt();
+        }
+    };
+
+    
+    Serializer<Boolean> BOOLEAN_SERIALIZER = new Serializer<Boolean>() {
+        @Override
+        public void serialize(DataOutput out, Boolean value) throws IOException {
+            out.writeBoolean(value);
+        }
+
+        @Override
+        public Boolean deserialize(DataInput in, int available) throws IOException {
+            if(available==0) return null;
+            return in.readBoolean();
+        }
+    };
+
+    
+
+
+    /** always writes zero length data, and always deserializes it as an empty String */
+    Serializer<Object> EMPTY_SERIALIZER = new Serializer<Object>() {
+        @Override
+        public void serialize(DataOutput out, Object value) throws IOException {
+            if(value!=Utils.EMPTY_STRING) throw new IllegalArgumentException();
+        }
+
+        @Override
+        public Object deserialize(DataInput in, int available) throws IOException {
+            if(available!=0) throw new InternalError();
+            return Utils.EMPTY_STRING;
+        }
+    };
+
+    /** basic serializer for most classes in 'java.lang' and 'java.util' packages*/
+    @SuppressWarnings("unchecked")
+    Serializer<Object> BASIC_SERIALIZER = new SerializerBase();
+
+
+
+    /**
+     * Adds a CRC32 checksum at the end of each record to check data integrity.
+     * On deserialization it throws 'IOException("CRC32 does not match, data broken")' if the data is corrupted.
+     */
+    
+    public static final Serializer<byte[]> CRC32_CHECKSUM = new Serializer<byte[]>() {
+        @Override
+        public void serialize(DataOutput out, byte[] value) throws IOException {
+            if(value == null || value.length==0) return;
+            CRC32 crc = new CRC32();
+            crc.update(value);
+            out.write(value);
+            out.writeInt((int) crc.getValue());
+        }
+
+        @Override
+        public byte[] deserialize(DataInput in, int available) throws IOException {
+            if(available==0) return null;
+            byte[] value = new byte[available-4];
+            in.readFully(value);
+            CRC32 crc = new CRC32();
+            crc.update(value);
+            int checksum = in.readInt();
+            if(checksum!=(int)crc.getValue()){
+                throw new IOException("CRC32 does not match, data broken");
+            }
+            return value;
+        }
+    };
+
+
+}
+
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerBase.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerBase.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerBase.java	(revision 29363)
@@ -0,0 +1,1457 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.mapdb;
+
+import java.io.*;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.util.*;
+
+import static org.mapdb.SerializationHeader.*;
+
+/**
+ * Serializer which uses 'header byte' to serialize/deserialize
+ * most classes from the 'java.lang' and 'java.util' packages.
+ *
+ * @author Jan Kotek
+ */
+@SuppressWarnings({ "unchecked", "rawtypes" })
+public class SerializerBase implements Serializer{
+
+
+    static final Set knownSerializable = new HashSet(Arrays.asList(
+            BTreeKeySerializer.STRING,
+            BTreeKeySerializer.ZERO_OR_POSITIVE_LONG,
+            BTreeKeySerializer.ZERO_OR_POSITIVE_INT,
+
+            Utils.COMPARABLE_COMPARATOR, Utils.COMPARABLE_COMPARATOR_WITH_NULLS,
+
+            Serializer.STRING_SERIALIZER, Serializer.LONG_SERIALIZER, Serializer.INTEGER_SERIALIZER,
+            Serializer.EMPTY_SERIALIZER, Serializer.BASIC_SERIALIZER, Serializer.CRC32_CHECKSUM
+    ));
+
+    public static void assertSerializable(Object o){
+        if(o!=null && !(o instanceof Serializable)
+                && !knownSerializable.contains(o)){
+            throw new IllegalArgumentException("Not serializable: "+o.getClass());
+        }
+    }
+
+    /**
+     * Utility class similar to ArrayList, but with fast identity search.
+     */
+    final static class FastArrayList<K> {
+
+        private int size = 0;
+        private K[] elementData = (K[]) new Object[1];
+
+        boolean forwardRefs = false;
+
+        K get(int index) {
+            if (index >= size)
+                throw new IndexOutOfBoundsException();
+            return elementData[index];
+        }
+
+        void add(K o) {
+            if (elementData.length == size) {
+                //grow array if necessary
+                elementData = Arrays.copyOf(elementData, elementData.length * 2);
+            }
+
+            elementData[size] = o;
+            size++;
+        }
+
+        int size() {
+            return size;
+        }
+
+
+        /**
+         * This method is the reason why ArrayList is not used.
+         * It searches for an item in the list and returns its index,
+         * using identity comparison rather than 'equals()'.
+         * One could argue that a TreeMap should be used instead,
+         * but we do not expect large object trees.
+         * This search is very fast compared to Maps: it does not allocate
+         * new instances or use method calls.
+         *
+         * @param obj object to find in the list
+         * @return index of the object in the list, or -1 if not found
+         */
+        int identityIndexOf(Object obj) {
+            for (int i = 0; i < size; i++) {
+                if (obj == elementData[i]){
+                    forwardRefs = true;
+                    return i;
+                }
+            }
+            return -1;
+        }
+
+    }
+
+
+
+
+    @Override
+    public void serialize(final DataOutput out, final Object obj) throws IOException {
+        serialize(out, obj, null);
+    }
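+
+    /*
+     * Typical round trip through this serializer (illustrative sketch only;
+     * 'out' and 'in' stand for any DataOutput/DataInput pair):
+     *
+     *   SerializerBase s = new SerializerBase();
+     *   s.serialize(out, new ArrayList<Object>(Arrays.asList(1, "two", 3L)));
+     *   ArrayList copy = (ArrayList) s.deserialize(in, -1);
+     */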
+
+
+    public void serialize(final DataOutput out, final Object obj, FastArrayList<Object> objectStack) throws IOException {
+
+        /**try to find object on stack if it exists*/
+        if (objectStack != null) {
+            int indexInObjectStack = objectStack.identityIndexOf(obj);
+            if (indexInObjectStack != -1) {
+                //object was already serialized, just write reference to it and return
+                out.write(OBJECT_STACK);
+                Utils.packInt(out, indexInObjectStack);
+                return;
+            }
+            //add this object to objectStack
+            objectStack.add(obj);
+        }
+
+        final Class clazz = obj != null ? obj.getClass() : null;
+
+        /** first try to serialize object without initializing object stack*/
+        if (obj == null) {
+            out.write(NULL);
+            return;
+        } else if (clazz == Boolean.class) {
+            if ((Boolean) obj)
+                out.write(BOOLEAN_TRUE);
+            else
+                out.write(BOOLEAN_FALSE);
+            return;
+        } else if (clazz == Integer.class) {
+            final int val = (Integer) obj;
+            writeInteger(out, val);
+            return;
+        } else if (clazz == Double.class) {
+            double v = (Double) obj;
+            if (v == -1d)
+                out.write(DOUBLE_MINUS_1);
+            else if (v == 0d)
+                out.write(DOUBLE_0);
+            else if (v == 1d)
+                out.write(DOUBLE_1);
+            else if (v >= 0 && v <= 255 && (int) v == v) {
+                out.write(DOUBLE_255);
+                out.write((int) v);
+            } else if (v >= Short.MIN_VALUE && v <= Short.MAX_VALUE && (short) v == v) {
+                out.write(DOUBLE_SHORT);
+                out.writeShort((int) v);
+            } else {
+                out.write(DOUBLE_FULL);
+                out.writeDouble(v);
+            }
+            return;
+        } else if (clazz == Float.class) {
+            float v = (Float) obj;
+            if (v == -1f)
+                out.write(FLOAT_MINUS_1);
+            else if (v == 0f)
+                out.write(FLOAT_0);
+            else if (v == 1f)
+                out.write(FLOAT_1);
+            else if (v >= 0 && v <= 255 && (int) v == v) {
+                out.write(FLOAT_255);
+                out.write((int) v);
+            } else if (v >= Short.MIN_VALUE && v <= Short.MAX_VALUE && (short) v == v) {
+                out.write(FLOAT_SHORT);
+                out.writeShort((int) v);
+
+            } else {
+                out.write(FLOAT_FULL);
+                out.writeFloat(v);
+            }
+            return;
+        } else if (clazz == BigInteger.class) {
+            out.write(BIGINTEGER);
+            byte[] buf = ((BigInteger) obj).toByteArray();
+            Utils.packInt(out, buf.length);
+            out.write(buf);
+            return;
+        } else if (clazz == BigDecimal.class) {
+            out.write(BIGDECIMAL);
+            BigDecimal d = (BigDecimal) obj;
+            byte[] buf = d.unscaledValue().toByteArray();
+            Utils.packInt(out, buf.length);
+            out.write(buf);
+            Utils.packInt(out, d.scale());
+            return;
+        } else if (clazz == Long.class) {
+            final long val = (Long) obj;
+            writeLong(out, val);
+            return;
+        } else if (clazz == Short.class) {
+            short val = (Short) obj;
+            if (val == -1)
+                out.write(SHORT_MINUS_1);
+            else if (val == 0)
+                out.write(SHORT_0);
+            else if (val == 1)
+                out.write(SHORT_1);
+            else if (val > 0 && val < 255) {
+                out.write(SHORT_255);
+                out.write(val);
+            } else {
+                out.write(SHORT_FULL);
+                out.writeShort(val);
+            }
+            return;
+        } else if (clazz == Byte.class) {
+            byte val = (Byte) obj;
+            if (val == -1)
+                out.write(BYTE_MINUS_1);
+            else if (val == 0)
+                out.write(BYTE_0);
+            else if (val == 1)
+                out.write(BYTE_1);
+            else {
+                out.write(BYTE_FULL);
+                out.writeByte(val);
+            }
+            return;
+        } else if (clazz == Character.class) {
+            out.write(CHAR);
+            out.writeChar((Character) obj);
+            return;
+        } else if (clazz == String.class) {
+            String s = (String) obj;
+            if (s.length() == 0) {
+                out.write(STRING_EMPTY);
+            } else {
+                out.write(STRING);
+                serializeString(out, s);
+            }
+            return;
+        } else if (obj instanceof Class) {
+            out.write(CLASS);
+            serializeClass(out, (Class)obj);
+            return;
+        } else if (obj instanceof int[]) {
+            writeIntArray(out, (int[]) obj);
+            return;
+        } else if (obj instanceof long[]) {
+            writeLongArray(out, (long[]) obj);
+            return;
+        } else if (obj instanceof short[]) {
+            out.write(SHORT_ARRAY);
+            short[] a = (short[]) obj;
+            Utils.packInt(out, a.length);
+            for(short s:a) out.writeShort(s);
+            return;
+        } else if (obj instanceof boolean[]) {
+            out.write(BOOLEAN_ARRAY);
+            boolean[] a = (boolean[]) obj;
+            Utils.packInt(out, a.length);
+            for(boolean s:a) out.writeBoolean(s); //TODO pack 8 booleans to single byte
+            return;
+        } else if (obj instanceof double[]) {
+            out.write(DOUBLE_ARRAY);
+            double[] a = (double[]) obj;
+            Utils.packInt(out, a.length);
+            for(double s:a) out.writeDouble(s);
+            return;
+        } else if (obj instanceof float[]) {
+            out.write(FLOAT_ARRAY);
+            float[] a = (float[]) obj;
+            Utils.packInt(out, a.length);
+            for(float s:a) out.writeFloat(s);
+            return;
+        } else if (obj instanceof char[]) {
+            out.write(CHAR_ARRAY);
+            char[] a = (char[]) obj;
+            Utils.packInt(out, a.length);
+            for(char s:a) out.writeChar(s);
+            return;
+        } else if (obj instanceof byte[]) {
+            byte[] b = (byte[]) obj;
+            serializeByteArray(out, b);
+            return;
+        } else if (clazz == Date.class) {
+            out.write(DATE);
+            out.writeLong(((Date) obj).getTime());
+            return;
+        } else if (clazz == UUID.class) {
+            out.write(UUID);
+            out.writeLong(((UUID) obj).getMostSignificantBits());
+            out.writeLong(((UUID)obj).getLeastSignificantBits());
+            return;
+        } else if(clazz == BTreeKeySerializer.BasicKeySerializer.class){
+            out.write(B_TREE_BASIC_KEY_SERIALIZER);
+            if(((BTreeKeySerializer.BasicKeySerializer)obj).defaultSerializer!=this) throw new InternalError();
+            return;
+        } else if(clazz == CompressLZF.SerializerCompressWrapper.class){
+            out.write(SERIALIZER_COMPRESSION_WRAPPER);
+            serialize(out, ((CompressLZF.SerializerCompressWrapper)obj).serializer, objectStack);
+            return;
+
+        } else if(obj == BTreeKeySerializer.ZERO_OR_POSITIVE_LONG){
+            out.write(B_TREE_SERIALIZER_POS_LONG);
+            return;
+        } else if(obj == BTreeKeySerializer.ZERO_OR_POSITIVE_INT){
+            out.write(B_TREE_SERIALIZER_POS_INT);
+            return;
+        } else if(obj == Serializer.STRING_SERIALIZER){
+            out.write(SerializationHeader.STRING_SERIALIZER);
+            return;
+        } else if(obj == Serializer.LONG_SERIALIZER){
+            out.write(SerializationHeader.LONG_SERIALIZER);
+            return;
+        } else if(obj == Serializer.INTEGER_SERIALIZER){
+            out.write(SerializationHeader.INTEGER_SERIALIZER);
+            return;
+        } else if(obj == Serializer.EMPTY_SERIALIZER){
+            out.write(SerializationHeader.EMPTY_SERIALIZER);
+            return;
+        } else if(obj == Serializer.CRC32_CHECKSUM){
+            out.write(SerializationHeader.CRC32_SERIALIZER);
+            return;
+        } else if(obj == BTreeKeySerializer.STRING){
+            out.write(B_TREE_SERIALIZER_STRING);
+            return;
+        } else if(obj == Utils.COMPARABLE_COMPARATOR){
+            out.write(COMPARABLE_COMPARATOR);
+            return;
+        } else if(obj == Utils.COMPARABLE_COMPARATOR_WITH_NULLS){
+            out.write(COMPARABLE_COMPARATOR_WITH_NULLS);
+            return;
+        } else if(obj == BASIC_SERIALIZER){
+            out.write(SerializationHeader.BASIC_SERIALIZER);
+            return;
+        } else if(obj == Fun.HI){
+            out.write(FUN_HI);
+            return;
+        } else if(obj == this){
+            out.write(THIS_SERIALIZER);
+            return;
+        }
+
+
+
+
+        /** classes below need the object stack, so initialize it if not already initialized */
+        if (objectStack == null) {
+            objectStack = new FastArrayList();
+            objectStack.add(obj);
+        }
+
+
+        if (obj instanceof Object[]) {
+            Object[] b = (Object[]) obj;
+            boolean packableLongs = b.length <= 255;
+            boolean allNull = true;
+            if (packableLongs) {
+                //check if it contains packable longs
+                for (Object o : b) {
+                    if(o!=null){
+                        allNull=false;
+                        if (o.getClass() != Long.class || ((Long) o < 0 && (Long) o != Long.MAX_VALUE)) {
+                            packableLongs = false;
+                        }
+                    }
+
+                    if(!packableLongs && !allNull)
+                        break;
+                }
+            }else{
+                //check for all null
+                for (Object o : b) {
+                    if(o!=null){
+                        allNull=false;
+                        break;
+                    }
+                }
+            }
+            if(allNull){
+                out.write(ARRAY_OBJECT_ALL_NULL);
+                Utils.packInt(out, b.length);
+
+                // Write class for components
+                Class<?> componentType = obj.getClass().getComponentType();
+                serializeClass(out, componentType);
+
+            }else if (packableLongs) {
+                //packable Longs are a special case, often used in JDBM to reference fields
+                out.write(ARRAY_OBJECT_PACKED_LONG);
+                out.write(b.length);
+                for (Object o : b) {
+                    if (o == null)
+                        Utils.packLong(out, 0);
+                    else
+                        Utils.packLong(out, (Long) o + 1);
+                }
+
+            } else {
+                out.write(ARRAY_OBJECT);
+                Utils.packInt(out, b.length);
+
+                // Write class for components
+                Class<?> componentType = obj.getClass().getComponentType();
+                serializeClass(out, componentType);
+
+                for (Object o : b)
+                    serialize(out, o, objectStack);
+
+            }
+
+        } else if (clazz == ArrayList.class) {
+            ArrayList l = (ArrayList) obj;
+            boolean packableLongs = l.size() < 255;
+            if (packableLongs) {
+                //packable Longs are a special case, often used in JDBM to reference fields
+                for (Object o : l) {
+                    if (o != null && (o.getClass() != Long.class || ((Long) o < 0 && (Long) o != Long.MAX_VALUE))) {
+                        packableLongs = false;
+                        break;
+                    }
+                }
+            }
+            if (packableLongs) {
+                out.write(ARRAYLIST_PACKED_LONG);
+                out.write(l.size());
+                for (Object o : l) {
+                    if (o == null)
+                        Utils.packLong(out, 0);
+                    else
+                        Utils.packLong(out, (Long) o + 1);
+                }
+            } else {
+                serializeCollection(ARRAYLIST, out, obj, objectStack);
+            }
+
+        } else if (clazz == java.util.LinkedList.class) {
+            serializeCollection(LINKEDLIST, out, obj, objectStack);
+        } else if (clazz == Vector.class) {
+            serializeCollection(VECTOR, out, obj, objectStack);
+        } else if (clazz == TreeSet.class) {
+            TreeSet l = (TreeSet) obj;
+            out.write(TREESET);
+            Utils.packInt(out, l.size());
+            serialize(out, l.comparator(), objectStack);
+            for (Object o : l)
+                serialize(out, o, objectStack);
+        } else if (clazz == HashSet.class) {
+            serializeCollection(HASHSET, out, obj, objectStack);
+        } else if (clazz == LinkedHashSet.class) {
+            serializeCollection(LINKEDHASHSET, out, obj, objectStack);
+        } else if (clazz == TreeMap.class) {
+            TreeMap l = (TreeMap) obj;
+            out.write(TREEMAP);
+            Utils.packInt(out, l.size());
+            serialize(out, l.comparator(), objectStack);
+            for (Object o : l.keySet()) {
+                serialize(out, o, objectStack);
+                serialize(out, l.get(o), objectStack);
+            }
+        } else if (clazz == HashMap.class) {
+            serializeMap(HASHMAP, out, obj, objectStack);
+        } else if (clazz == IdentityHashMap.class) {
+            serializeMap(IDENTITYHASHMAP, out, obj, objectStack);
+        } else if (clazz == LinkedHashMap.class) {
+            serializeMap(LINKEDHASHMAP, out, obj, objectStack);
+        } else if (clazz == Hashtable.class) {
+            serializeMap(HASHTABLE, out, obj, objectStack);
+        } else if (clazz == Properties.class) {
+            serializeMap(PROPERTIES, out, obj, objectStack);
+        } else if (clazz == Locale.class){
+            out.write(LOCALE);
+            Locale l = (Locale) obj;
+            out.writeUTF(l.getLanguage());
+            out.writeUTF(l.getCountry());
+            out.writeUTF(l.getVariant());
+        } else if (clazz == Fun.Tuple2.class){
+            out.write(TUPLE2);
+            Fun.Tuple2 t = (Fun.Tuple2) obj;
+            serialize(out, t.a, objectStack);
+            serialize(out, t.b, objectStack);
+        } else if (clazz == Fun.Tuple3.class){
+            out.write(TUPLE3);
+            Fun.Tuple3 t = (Fun.Tuple3) obj;
+            serialize(out, t.a, objectStack);
+            serialize(out, t.b, objectStack);
+            serialize(out, t.c, objectStack);
+        } else if (clazz == Fun.Tuple4.class){
+            out.write(TUPLE4);
+            Fun.Tuple4 t = (Fun.Tuple4) obj;
+            serialize(out, t.a, objectStack);
+            serialize(out, t.b, objectStack);
+            serialize(out, t.c, objectStack);
+            serialize(out, t.d, objectStack);
+        } else {
+            serializeUnknownObject(out, obj, objectStack);
+        }
+
+    }
+
+
+    protected void serializeClass(DataOutput out, Class clazz) throws IOException {
+        //TODO override in SerializerPojo
+        out.writeUTF(clazz.getName());
+    }
+
+
+    static void serializeString(DataOutput out, String obj) throws IOException {
+        final int len = obj.length();
+        Utils.packInt(out, len);
+        for (int i = 0; i < len; i++) {
+            int c = (int) obj.charAt(i); //TODO investigate if c could be negative here
+            Utils.packInt(out, c);
+        }
+
+    }
+
+    private void serializeMap(int header, DataOutput out, Object obj, FastArrayList<Object> objectStack) throws IOException {
+        Map l = (Map) obj;
+        out.write(header);
+        Utils.packInt(out, l.size());
+        for (Object o : l.keySet()) {
+            serialize(out, o, objectStack);
+            serialize(out, l.get(o), objectStack);
+        }
+    }
+
+    private void serializeCollection(int header, DataOutput out, Object obj, FastArrayList<Object> objectStack) throws IOException {
+        Collection l = (Collection) obj;
+        out.write(header);
+        Utils.packInt(out, l.size());
+
+        for (Object o : l)
+            serialize(out, o, objectStack);
+
+    }
+
+    private void serializeByteArray(DataOutput out, byte[] b) throws IOException {
+        boolean allEqual = b.length>0;
+        //check if all values in byte[] are equal
+        for(int i=1;i<b.length;i++){
+            if(b[i-1]!=b[i]){
+                allEqual=false;
+                break;
+            }
+        }
+        if(allEqual){
+            out.write(ARRAY_BYTE_ALL_EQUAL);
+            Utils.packInt(out, b.length);
+            out.write(b[0]);
+        }else{
+            out.write(ARRAY_BYTE);
+            Utils.packInt(out, b.length);
+            out.write(b);
+        }
+    }
+
+
+    private void writeLongArray(DataOutput da, long[] obj) throws IOException {
+        long max = Long.MIN_VALUE;
+        long min = Long.MAX_VALUE;
+        for (long i : obj) {
+            max = Math.max(max, i);
+            min = Math.min(min, i);
+        }
+
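+        //pick the most compact encoding the whole value range allows: unsigned bytes, packed non-negative longs, shorts, ints, or full 8-byte longs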
+        if (0 <= min && max <= 255) {
+            da.write(ARRAY_LONG_B);
+            Utils.packInt(da, obj.length);
+            for (long l : obj)
+                da.write((int) l);
+        } else if (0 <= min) {
+            da.write(ARRAY_LONG_PACKED);
+            Utils.packInt(da, obj.length);
+            for (long l : obj)
+                Utils.packLong(da, l);
+        } else if (Short.MIN_VALUE <= min && max <= Short.MAX_VALUE) {
+            da.write(ARRAY_LONG_S);
+            Utils.packInt(da, obj.length);
+            for (long l : obj)
+                da.writeShort((short) l);
+        } else if (Integer.MIN_VALUE <= min && max <= Integer.MAX_VALUE) {
+            da.write(ARRAY_LONG_I);
+            Utils.packInt(da, obj.length);
+            for (long l : obj)
+                da.writeInt((int) l);
+        } else {
+            da.write(ARRAY_LONG_L);
+            Utils.packInt(da, obj.length);
+            for (long l : obj)
+                da.writeLong(l);
+        }
+
+    }
+
+
+    private void writeIntArray(DataOutput da, int[] obj) throws IOException {
+        int max = Integer.MIN_VALUE;
+        int min = Integer.MAX_VALUE;
+        for (int i : obj) {
+            max = Math.max(max, i);
+            min = Math.min(min, i);
+        }
+
+        boolean fitsInByte = 0 <= min && max <= 255;
+        boolean fitsInShort = Short.MIN_VALUE <= min && max <= Short.MAX_VALUE;
+
+
+        if (obj.length <= 255 && fitsInByte) {
+            da.write(ARRAY_INT_B_255);
+            da.write(obj.length);
+            for (int i : obj)
+                da.write(i);
+        } else if (fitsInByte) {
+            da.write(ARRAY_INT_B_INT);
+            Utils.packInt(da, obj.length);
+            for (int i : obj)
+                da.write(i);
+        } else if (0 <= min) {
+            da.write(ARRAY_INT_PACKED);
+            Utils.packInt(da, obj.length);
+            for (int l : obj)
+                Utils.packInt(da, l);
+        } else if (fitsInShort) {
+            da.write(ARRAY_INT_S);
+            Utils.packInt(da, obj.length);
+            for (int i : obj)
+                da.writeShort(i);
+        } else {
+            da.write(ARRAY_INT_I);
+            Utils.packInt(da, obj.length);
+            for (int i : obj)
+                da.writeInt(i);
+        }
+
+    }
+
+
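+    /**
+     * Writes an Integer: small constants (-1..8, MIN_VALUE) get a single-byte header,
+     * values 9..254 a header plus one raw byte, everything else is packed (negatives as packed -val).
+     */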
+    private void writeInteger(DataOutput da, final int val) throws IOException {
+        if (val == -1)
+            da.write(INTEGER_MINUS_1);
+        else if (val == 0)
+            da.write(INTEGER_0);
+        else if (val == 1)
+            da.write(INTEGER_1);
+        else if (val == 2)
+            da.write(INTEGER_2);
+        else if (val == 3)
+            da.write(INTEGER_3);
+        else if (val == 4)
+            da.write(INTEGER_4);
+        else if (val == 5)
+            da.write(INTEGER_5);
+        else if (val == 6)
+            da.write(INTEGER_6);
+        else if (val == 7)
+            da.write(INTEGER_7);
+        else if (val == 8)
+            da.write(INTEGER_8);
+        else if (val == Integer.MIN_VALUE)
+            da.write(INTEGER_MINUS_MAX);
+        else if (val > 0 && val < 255) {
+            da.write(INTEGER_255);
+            da.write(val);
+        } else if (val < 0) {
+            da.write(INTEGER_PACK_NEG);
+            Utils.packInt(da, -val);
+        } else {
+            da.write(INTEGER_PACK);
+            Utils.packInt(da, val);
+        }
+    }
+
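+    /** Same encoding scheme as writeInteger, but for Long values (packLong for the packed forms). */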
+    private void writeLong(DataOutput da, final long val) throws IOException {
+        if (val == -1)
+            da.write(LONG_MINUS_1);
+        else if (val == 0)
+            da.write(LONG_0);
+        else if (val == 1)
+            da.write(LONG_1);
+        else if (val == 2)
+            da.write(LONG_2);
+        else if (val == 3)
+            da.write(LONG_3);
+        else if (val == 4)
+            da.write(LONG_4);
+        else if (val == 5)
+            da.write(LONG_5);
+        else if (val == 6)
+            da.write(LONG_6);
+        else if (val == 7)
+            da.write(LONG_7);
+        else if (val == 8)
+            da.write(LONG_8);
+        else if (val == Long.MIN_VALUE)
+            da.write(LONG_MINUS_MAX);
+        else if (val > 0 && val < 255) {
+            da.write(LONG_255);
+            da.write((int) val);
+        } else if (val < 0) {
+            da.write(LONG_PACK_NEG);
+            Utils.packLong(da, -val);
+        } else {
+            da.write(LONG_PACK);
+            Utils.packLong(da, val);
+        }
+    }
+
+
+
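+    /** Reads a String written by serializeString: a packed length followed by one packed char per character. */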
+    static String deserializeString(DataInput buf) throws IOException {
+        int len = Utils.unpackInt(buf);
+        char[] b = new char[len];
+        for (int i = 0; i < len; i++)
+            b[i] = (char) Utils.unpackInt(buf);
+
+        return new String(b);
+    }
+
+
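+    /** Deserializes a single record; a zero capacity means the record is null. */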
+    @Override
+    public Object deserialize(DataInput is, int capacity) throws IOException {
+        if(capacity==0) return null;
+        return deserialize(is, null);
+    }
+
+    public Object deserialize(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+
+        Object ret = null;
+
+        final int head = is.readUnsignedByte();
+
+        /** first try to deserialize object without allocating object stack*/
+        switch (head) {
+            case NULL:
+                break;
+            case BOOLEAN_TRUE:
+                ret = Boolean.TRUE;
+                break;
+            case BOOLEAN_FALSE:
+                ret = Boolean.FALSE;
+                break;
+            case INTEGER_MINUS_1:
+                ret = -1;
+                break;
+            case INTEGER_0:
+                ret = 0;
+                break;
+            case INTEGER_1:
+                ret = 1;
+                break;
+            case INTEGER_2:
+                ret = 2;
+                break;
+            case INTEGER_3:
+                ret = 3;
+                break;
+            case INTEGER_4:
+                ret = 4;
+                break;
+            case INTEGER_5:
+                ret = 5;
+                break;
+            case INTEGER_6:
+                ret = 6;
+                break;
+            case INTEGER_7:
+                ret = 7;
+                break;
+            case INTEGER_8:
+                ret = 8;
+                break;
+            case INTEGER_MINUS_MAX:
+                ret = Integer.MIN_VALUE;
+                break;
+            case INTEGER_255:
+                ret = is.readUnsignedByte();
+                break;
+            case INTEGER_PACK_NEG:
+                ret = -Utils.unpackInt(is);
+                break;
+            case INTEGER_PACK:
+                ret = Utils.unpackInt(is);
+                break;
+            case LONG_MINUS_1:
+                ret = (long) -1;
+                break;
+            case LONG_0:
+                ret = (long) 0;
+                break;
+            case LONG_1:
+                ret = (long) 1;
+                break;
+            case LONG_2:
+                ret = (long) 2;
+                break;
+            case LONG_3:
+                ret = (long) 3;
+                break;
+            case LONG_4:
+                ret = (long) 4;
+                break;
+            case LONG_5:
+                ret = (long) 5;
+                break;
+            case LONG_6:
+                ret = (long) 6;
+                break;
+            case LONG_7:
+                ret = (long) 7;
+                break;
+            case LONG_8:
+                ret = (long) 8;
+                break;
+            case LONG_255:
+                ret = (long) is.readUnsignedByte();
+                break;
+            case LONG_PACK_NEG:
+                ret = -Utils.unpackLong(is);
+                break;
+            case LONG_PACK:
+                ret = Utils.unpackLong(is);
+                break;
+            case LONG_MINUS_MAX:
+                ret = Long.MIN_VALUE;
+                break;
+            case SHORT_MINUS_1:
+                ret = (short) -1;
+                break;
+            case SHORT_0:
+                ret = (short) 0;
+                break;
+            case SHORT_1:
+                ret = (short) 1;
+                break;
+            case SHORT_255:
+                ret = (short) is.readUnsignedByte();
+                break;
+            case SHORT_FULL:
+                ret = is.readShort();
+                break;
+            case BYTE_MINUS_1:
+                ret = (byte) -1;
+                break;
+            case BYTE_0:
+                ret = (byte) 0;
+                break;
+            case BYTE_1:
+                ret = (byte) 1;
+                break;
+            case BYTE_FULL:
+                ret = is.readByte();
+                break;
+            case SHORT_ARRAY:
+                int size = Utils.unpackInt(is);
+                ret = new short[size];
+                for(int i=0;i<size;i++) ((short[])ret)[i] = is.readShort();
+                break;
+            case BOOLEAN_ARRAY:
+                size = Utils.unpackInt(is);
+                ret = new boolean[size];
+                for(int i=0;i<size;i++) ((boolean[])ret)[i] = is.readBoolean();
+                break;
+            case DOUBLE_ARRAY:
+                size = Utils.unpackInt(is);
+                ret = new double[size];
+                for(int i=0;i<size;i++) ((double[])ret)[i] = is.readDouble();
+                break;
+            case FLOAT_ARRAY:
+                size = Utils.unpackInt(is);
+                ret = new float[size];
+                for(int i=0;i<size;i++) ((float[])ret)[i] = is.readFloat();
+                break;
+            case CHAR_ARRAY:
+                size = Utils.unpackInt(is);
+                ret = new char[size];
+                for(int i=0;i<size;i++) ((char[])ret)[i] = is.readChar();
+                break;
+            case CHAR:
+                ret = is.readChar();
+                break;
+            case FLOAT_MINUS_1:
+                ret = (float) -1;
+                break;
+            case FLOAT_0:
+                ret = (float) 0;
+                break;
+            case FLOAT_1:
+                ret = (float) 1;
+                break;
+            case FLOAT_255:
+                ret = (float) is.readUnsignedByte();
+                break;
+            case FLOAT_SHORT:
+                ret = (float) is.readShort();
+                break;
+            case FLOAT_FULL:
+                ret = is.readFloat();
+                break;
+            case DOUBLE_MINUS_1:
+                ret = (double) -1;
+                break;
+            case DOUBLE_0:
+                ret = (double) 0;
+                break;
+            case DOUBLE_1:
+                ret = (double) 1;
+                break;
+            case DOUBLE_255:
+                ret = (double) is.readUnsignedByte();
+                break;
+            case DOUBLE_SHORT:
+                ret = (double) is.readShort();
+                break;
+            case DOUBLE_FULL:
+                ret = is.readDouble();
+                break;
+            case BIGINTEGER:
+                ret = new BigInteger(deserializeArrayByte(is));
+                break;
+            case BIGDECIMAL:
+                ret = new BigDecimal(new BigInteger(deserializeArrayByte(is)), Utils.unpackInt(is));
+                break;
+            case STRING:
+                ret = deserializeString(is);
+                break;
+            case STRING_EMPTY:
+                ret = Utils.EMPTY_STRING;
+                break;
+            case CLASS:
+                ret = deserializeClass(is);
+                break;
+            case DATE:
+                ret = new Date(is.readLong());
+                break;
+            case UUID:
+                ret = new UUID(is.readLong(), is.readLong());
+                break;
+            case ARRAY_INT_B_255:
+                ret = deserializeArrayIntB255(is);
+                break;
+            case ARRAY_INT_B_INT:
+                ret = deserializeArrayIntBInt(is);
+                break;
+            case ARRAY_INT_S:
+                ret = deserializeArrayIntSInt(is);
+                break;
+            case ARRAY_INT_I:
+                ret = deserializeArrayIntIInt(is);
+                break;
+            case ARRAY_INT_PACKED:
+                ret = deserializeArrayIntPack(is);
+                break;
+            case ARRAY_LONG_B:
+                ret = deserializeArrayLongB(is);
+                break;
+            case ARRAY_LONG_S:
+                ret = deserializeArrayLongS(is);
+                break;
+            case ARRAY_LONG_I:
+                ret = deserializeArrayLongI(is);
+                break;
+            case ARRAY_LONG_L:
+                ret = deserializeArrayLongL(is);
+                break;
+            case ARRAY_LONG_PACKED:
+                ret = deserializeArrayLongPack(is);
+                break;
+            case ARRAYLIST_PACKED_LONG:
+                ret = deserializeArrayListPackedLong(is);
+                break;
+            case ARRAY_BYTE_ALL_EQUAL:
+                byte[] b = new byte[Utils.unpackInt(is)];
+                Arrays.fill(b, is.readByte());
+                ret = b;
+                break;
+            case ARRAY_BYTE:
+                ret =  deserializeArrayByte(is);
+                break;
+            case LOCALE :
+                ret = new Locale(is.readUTF(),is.readUTF(),is.readUTF());
+                break;
+            case COMPARABLE_COMPARATOR:
+                ret = Utils.COMPARABLE_COMPARATOR;
+                break;
+            case SerializationHeader.LONG_SERIALIZER:
+                ret = LONG_SERIALIZER;
+                break;
+            case SerializationHeader.INTEGER_SERIALIZER:
+                ret = INTEGER_SERIALIZER;
+                break;
+            case SerializationHeader.EMPTY_SERIALIZER:
+                ret = EMPTY_SERIALIZER;
+                break;
+            case SerializationHeader.CRC32_SERIALIZER:
+                ret = Serializer.CRC32_CHECKSUM;
+                break;
+            case B_TREE_SERIALIZER_POS_LONG:
+                ret = BTreeKeySerializer.ZERO_OR_POSITIVE_LONG;
+                break;
+            case B_TREE_SERIALIZER_POS_INT:
+                ret = BTreeKeySerializer.ZERO_OR_POSITIVE_INT;
+                break;
+            case B_TREE_SERIALIZER_STRING:
+                ret = BTreeKeySerializer.STRING;
+                break;
+            case COMPARABLE_COMPARATOR_WITH_NULLS:
+                ret = Utils.COMPARABLE_COMPARATOR_WITH_NULLS;
+                break;
+            case B_TREE_BASIC_KEY_SERIALIZER:
+                ret = new BTreeKeySerializer.BasicKeySerializer(this);
+                break;
+            case SerializationHeader.BASIC_SERIALIZER:
+                ret = BASIC_SERIALIZER;
+                break;
+            case SerializationHeader.STRING_SERIALIZER:
+                ret = Serializer.STRING_SERIALIZER;
+                break;
+            case TUPLE2:
+                ret = new Fun.Tuple2(deserialize(is, objectStack), deserialize(is, objectStack));
+                break;
+            case TUPLE3:
+                ret = new Fun.Tuple3(deserialize(is, objectStack), deserialize(is, objectStack), deserialize(is, objectStack));
+                break;
+            case TUPLE4:
+                ret = new Fun.Tuple4(deserialize(is, objectStack), deserialize(is, objectStack), deserialize(is, objectStack), deserialize(is, objectStack));
+                break;
+            case FUN_HI:
+                ret = Fun.HI;
+                break;
+            case THIS_SERIALIZER:
+                ret = this;
+                break;
+            case JAVA_SERIALIZATION:
+                throw new InternalError("Wrong header, data were probably serialized with java.lang.ObjectOutputStream, not with JDBM serialization");
+            case ARRAY_OBJECT_PACKED_LONG:
+                ret = deserializeArrayObjectPackedLong(is);
+                break;
+            case ARRAY_OBJECT_ALL_NULL:
+                ret = deserializeArrayObjectAllNull(is);
+                break;
+            case ARRAY_OBJECT_NO_REFS:
+                ret = deserializeArrayObjectNoRefs(is);
+                break;
+
+            case -1:
+                throw new EOFException();
+
+        }
+
+        if (ret != null || head == NULL) {
+            if (objectStack != null)
+                objectStack.add(ret);
+            return ret;
+        }
+
+        /**  something else which needs object stack initialized*/
+
+        if (objectStack == null)
+            objectStack = new FastArrayList();
+        int oldObjectStackSize = objectStack.size();
+
+        switch (head) {
+            case OBJECT_STACK:
+                ret = objectStack.get(Utils.unpackInt(is));
+                break;
+            case ARRAYLIST:
+                ret = deserializeArrayList(is, objectStack);
+                break;
+            case ARRAY_OBJECT:
+                ret = deserializeArrayObject(is, objectStack);
+                break;
+            case LINKEDLIST:
+                ret = deserializeLinkedList(is, objectStack);
+                break;
+            case TREESET:
+                ret = deserializeTreeSet(is, objectStack);
+                break;
+            case HASHSET:
+                ret = deserializeHashSet(is, objectStack);
+                break;
+            case LINKEDHASHSET:
+                ret = deserializeLinkedHashSet(is, objectStack);
+                break;
+            case VECTOR:
+                ret = deserializeVector(is, objectStack);
+                break;
+            case TREEMAP:
+                ret = deserializeTreeMap(is, objectStack);
+                break;
+            case HASHMAP:
+                ret = deserializeHashMap(is, objectStack);
+                break;
+            case IDENTITYHASHMAP:
+                ret = deserializeIdentityHashMap(is, objectStack);
+                break;
+            case LINKEDHASHMAP:
+                ret = deserializeLinkedHashMap(is, objectStack);
+                break;
+            case HASHTABLE:
+                ret = deserializeHashtable(is, objectStack);
+                break;
+            case PROPERTIES:
+                ret = deserializeProperties(is, objectStack);
+                break;
+            case SERIALIZER_COMPRESSION_WRAPPER:
+                ret = CompressLZF.serializerCompressWrapper((Serializer) deserialize(is, objectStack));
+                break;
+            default:
+                ret = deserializeUnknownHeader(is, head, objectStack);
+                break;
+        }
+
+        if (head != OBJECT_STACK && objectStack.size() == oldObjectStackSize) {
+            //check if object was not already added to stack as part of collection
+            objectStack.add(ret);
+        }
+
+
+        return ret;
+    }
+
+    private byte[] deserializeArrayByte(DataInput is) throws IOException {
+        byte[] bb = new byte[Utils.unpackInt(is)];
+        is.readFully(bb);
+        return bb;
+    }
+
+
+    protected  Class deserializeClass(DataInput is) throws IOException {
+        //TODO override 'deserializeClass' in SerializerPojo
+        try {
+            return Class.forName(is.readUTF());
+        } catch (ClassNotFoundException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+
+
+
+    private long[] deserializeArrayLongL(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        long[] ret = new long[size];
+        for (int i = 0; i < size; i++)
+            ret[i] = is.readLong();
+        return ret;
+    }
+
+
+    private long[] deserializeArrayLongI(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        long[] ret = new long[size];
+        for (int i = 0; i < size; i++)
+            ret[i] = is.readInt();
+        return ret;
+    }
+
+
+    private long[] deserializeArrayLongS(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        long[] ret = new long[size];
+        for (int i = 0; i < size; i++)
+            ret[i] = is.readShort();
+        return ret;
+    }
+
+
+    private long[] deserializeArrayLongB(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        long[] ret = new long[size];
+        for (int i = 0; i < size; i++) {
+            ret[i] = is.readUnsignedByte();
+            if (ret[i] < 0)
+                throw new EOFException();
+        }
+        return ret;
+    }
+
+
+    private int[] deserializeArrayIntIInt(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        int[] ret = new int[size];
+        for (int i = 0; i < size; i++)
+            ret[i] = is.readInt();
+        return ret;
+    }
+
+
+    private int[] deserializeArrayIntSInt(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        int[] ret = new int[size];
+        for (int i = 0; i < size; i++)
+            ret[i] = is.readShort();
+        return ret;
+    }
+
+
+    private int[] deserializeArrayIntBInt(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        int[] ret = new int[size];
+        for (int i = 0; i < size; i++) {
+            ret[i] = is.readUnsignedByte();
+            if (ret[i] < 0)
+                throw new EOFException();
+        }
+        return ret;
+    }
+
+
+    private int[] deserializeArrayIntPack(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        if (size < 0)
+            throw new EOFException();
+
+        int[] ret = new int[size];
+        for (int i = 0; i < size; i++) {
+            ret[i] = Utils.unpackInt(is);
+        }
+        return ret;
+    }
+
+    private long[] deserializeArrayLongPack(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        if (size < 0)
+            throw new EOFException();
+
+        long[] ret = new long[size];
+        for (int i = 0; i < size; i++) {
+            ret[i] = Utils.unpackLong(is);
+        }
+        return ret;
+    }
+
+    private int[] deserializeArrayIntB255(DataInput is) throws IOException {
+        int size = is.readUnsignedByte();
+        if (size < 0)
+            throw new EOFException();
+
+        int[] ret = new int[size];
+        for (int i = 0; i < size; i++) {
+            ret[i] = is.readUnsignedByte();
+            if (ret[i] < 0)
+                throw new EOFException();
+        }
+        return ret;
+    }
+
+
+    private Object[] deserializeArrayObject(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        Class clazz = deserializeClass(is);
+        Object[] s = (Object[]) Array.newInstance(clazz, size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++){
+            s[i] = deserialize(is, objectStack);
+        }
+        return s;
+    }
+
+    private Object[] deserializeArrayObjectNoRefs(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        Class clazz = deserializeClass(is);
+        Object[] s = (Object[]) Array.newInstance(clazz, size);
+        for (int i = 0; i < size; i++){
+            s[i] = deserialize(is, null);
+        }
+        return s;
+    }
+
+
+    private Object[] deserializeArrayObjectAllNull(DataInput is) throws IOException {
+        int size = Utils.unpackInt(is);
+        Class clazz = deserializeClass(is);
+        Object[] s = (Object[]) Array.newInstance(clazz, size);
+        return s;
+    }
+
+
+    private Object[] deserializeArrayObjectPackedLong(DataInput is) throws IOException {
+        int size = is.readUnsignedByte();
+        Object[] s = new Object[size];
+        for (int i = 0; i < size; i++) {
+            long l = Utils.unpackLong(is);
+            if (l == 0)
+                s[i] = null;
+            else
+                s[i] = l - 1;
+        }
+        return s;
+    }
+
+
+    private ArrayList<Object> deserializeArrayList(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        ArrayList<Object> s = new ArrayList<Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++) {
+            s.add(deserialize(is, objectStack));
+        }
+        return s;
+    }
+
+    private ArrayList<Object> deserializeArrayListPackedLong(DataInput is) throws IOException {
+        int size = is.readUnsignedByte();
+        if (size < 0)
+            throw new EOFException();
+
+        ArrayList<Object> s = new ArrayList<Object>(size);
+        for (int i = 0; i < size; i++) {
+            long l = Utils.unpackLong(is);
+            if (l == 0)
+                s.add(null);
+            else
+                s.add(l - 1);
+        }
+        return s;
+    }
+
+
+    private java.util.LinkedList deserializeLinkedList(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        java.util.LinkedList s = new java.util.LinkedList();
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.add(deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private Vector<Object> deserializeVector(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        Vector<Object> s = new Vector<Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.add(deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private HashSet<Object> deserializeHashSet(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        HashSet<Object> s = new HashSet<Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.add(deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private LinkedHashSet<Object> deserializeLinkedHashSet(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        LinkedHashSet<Object> s = new LinkedHashSet<Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.add(deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private TreeSet<Object> deserializeTreeSet(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+        TreeSet<Object> s = new TreeSet<Object>();
+        objectStack.add(s);
+        Comparator comparator = (Comparator) deserialize(is, objectStack);
+        if (comparator != null)
+            s = new TreeSet<Object>(comparator);
+
+        for (int i = 0; i < size; i++)
+            s.add(deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private TreeMap<Object, Object> deserializeTreeMap(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        TreeMap<Object, Object> s = new TreeMap<Object, Object>();
+        objectStack.add(s);
+        Comparator comparator = (Comparator) deserialize(is, objectStack);
+        if (comparator != null)
+            s = new TreeMap<Object, Object>(comparator);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private HashMap<Object, Object> deserializeHashMap(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        HashMap<Object, Object> s = new HashMap<Object, Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+    private IdentityHashMap<Object, Object> deserializeIdentityHashMap(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        IdentityHashMap<Object, Object> s = new IdentityHashMap<Object, Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+    private LinkedHashMap<Object, Object> deserializeLinkedHashMap(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        LinkedHashMap<Object, Object> s = new LinkedHashMap<Object, Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private Hashtable<Object, Object> deserializeHashtable(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        Hashtable<Object, Object> s = new Hashtable<Object, Object>(size);
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+
+    private Properties deserializeProperties(DataInput is, FastArrayList<Object> objectStack) throws IOException {
+        int size = Utils.unpackInt(is);
+
+        Properties s = new Properties();
+        objectStack.add(s);
+        for (int i = 0; i < size; i++)
+            s.put(deserialize(is, objectStack), deserialize(is, objectStack));
+        return s;
+    }
+
+    /** Override this method to extend SerializerBase functionality. */
+    protected void serializeUnknownObject(DataOutput out, Object obj, FastArrayList<Object> objectStack) throws IOException {
+        throw new InternalError("Could not serialize unknown object: "+obj.getClass().getName());
+    }
+    /** Override this method to extend SerializerBase functionality. */
+    protected Object deserializeUnknownHeader(DataInput is, int head, FastArrayList<Object> objectStack) throws IOException {
+        throw new InternalError("Unknown serialization header: " + head);
+    }
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerPojo.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerPojo.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/SerializerPojo.java	(revision 29363)
@@ -0,0 +1,620 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.mapdb;
+
+import java.io.*;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.Field;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * Serializer which handles POJO, object graphs etc.
+ *
+ * @author  Jan Kotek
+ */
+public class SerializerPojo extends SerializerBase{
+
+
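+    /**
+     * Serializes the class catalog (list of ClassInfo): for every class its name,
+     * enum/externalizable flags and, unless externalizable, its field descriptions.
+     */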
+    protected static final Serializer<CopyOnWriteArrayList<ClassInfo>> serializer = new Serializer<CopyOnWriteArrayList<ClassInfo>>() {
+
+        @Override
+		public void serialize(DataOutput out, CopyOnWriteArrayList<ClassInfo> obj) throws IOException {
+            Utils.packInt(out, obj.size());
+            for (ClassInfo ci : obj) {
+                out.writeUTF(ci.getName());
+                out.writeBoolean(ci.isEnum);
+                out.writeBoolean(ci.isExternalizable);
+                if(ci.isExternalizable) continue; //no fields
+
+                Utils.packInt(out, ci.fields.size());
+                for (FieldInfo fi : ci.fields) {
+                    out.writeUTF(fi.getName());
+                    out.writeBoolean(fi.isPrimitive());
+                    out.writeUTF(fi.getType());
+                }
+            }
+        }
+
+        @Override
+		public CopyOnWriteArrayList<ClassInfo> deserialize(DataInput in, int available) throws IOException{
+            if(available==0) return new CopyOnWriteArrayList<ClassInfo>();
+
+            int size = Utils.unpackInt(in);
+            ArrayList<ClassInfo> ret = new ArrayList<ClassInfo>(size);
+
+            for (int i = 0; i < size; i++) {
+                String className = in.readUTF();
+                boolean isEnum = in.readBoolean();
+                boolean isExternalizable = in.readBoolean();
+
+                int fieldsNum = isExternalizable? 0 : Utils.unpackInt(in);
+                FieldInfo[] fields = new FieldInfo[fieldsNum];
+                for (int j = 0; j < fieldsNum; j++) {
+                    fields[j] = new FieldInfo(in.readUTF(), in.readBoolean(), in.readUTF(), classForName(className));
+                }
+                ret.add(new ClassInfo(className, fields,isEnum,isExternalizable));
+            }
+            return new CopyOnWriteArrayList<ClassInfo>(ret);
+        }
+    };
+
+    protected final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+    private static Class<?> classForName(String className) {
+        try {
+            return Class.forName(className);
+        } catch (ClassNotFoundException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+
+
+
+    public SerializerPojo(CopyOnWriteArrayList<ClassInfo> registered){
+        if(registered == null)
+            this.registered = new CopyOnWriteArrayList<ClassInfo>();
+        else
+            this.registered = registered;
+    }
+
+    /**
+     * Stores info about a single class stored in JDBM.
+     * Roughly corresponds to 'java.io.ObjectStreamClass'
+     */
+    protected static class ClassInfo {
+
+        private final String name;
+        private final List<FieldInfo> fields = new ArrayList<FieldInfo>();
+        private final Map<String, FieldInfo> name2fieldInfo = new HashMap<String, FieldInfo>();
+        private final Map<String, Integer> name2fieldId = new HashMap<String, Integer>();
+        private ObjectStreamField[] objectStreamFields;
+
+        final boolean isEnum;
+
+        final boolean isExternalizable;
+
+        ClassInfo(final String name, final FieldInfo[] fields, final boolean isEnum, final boolean isExternalizable) {
+            this.name = name;
+            this.isEnum = isEnum;
+            this.isExternalizable = isExternalizable;
+
+            for (FieldInfo f : fields) {
+                this.name2fieldId.put(f.getName(), this.fields.size());
+                this.fields.add(f);
+                this.name2fieldInfo.put(f.getName(), f);
+            }
+        }
+
+        public String getName() {
+            return name;
+        }
+
+        public FieldInfo[] getFields() {
+            return fields.toArray(new FieldInfo[fields.size()]);
+        }
+
+        public FieldInfo getField(String name) {
+            return name2fieldInfo.get(name);
+        }
+
+        public int getFieldId(String name) {
+            Integer fieldId = name2fieldId.get(name);
+            if(fieldId != null)
+                return fieldId;
+            return -1;
+        }
+
+        public FieldInfo getField(int serialId) {
+            return fields.get(serialId);
+        }
+
+        public int addFieldInfo(FieldInfo field) {
+            name2fieldId.put(field.getName(), fields.size());
+            name2fieldInfo.put(field.getName(), field);
+            fields.add(field);
+            return fields.size() - 1;
+        }
+
+        public ObjectStreamField[] getObjectStreamFields() {
+            return objectStreamFields;
+        }
+
+        public void setObjectStreamFields(ObjectStreamField[] objectStreamFields) {
+            this.objectStreamFields = objectStreamFields;
+        }
+
+
+    }
+
+    /**
+     * Stores info about a single field stored in JDBM.
+     * Roughly corresponds to 'java.io.ObjectStreamField'
+     */
+    static class FieldInfo {
+        private final String name;
+        private final boolean primitive;
+        private final String type;
+        private Class<?> typeClass;
+        // Class containing this field
+        private final Class<?> clazz;
+        private Object setter;
+        private Object getter;
+
+        public FieldInfo(String name, boolean primitive, String type, Class<?> clazz) {
+            this.name = name;
+            this.primitive = primitive;
+            this.type = type;
+            this.clazz = clazz;
+            try {
+                this.typeClass = Class.forName(type);
+            } catch (ClassNotFoundException e) {
+                this.typeClass = null;
+            }
+            initSetter();
+            initGetter();
+        }
+
+        private void initSetter() {
+            // Set setter
+            String setterName = "set" + firstCharCap(name);
+
+            Class<?> aClazz = clazz;
+
+            // iterate over class hierarchy, until root class
+            while (aClazz != Object.class) {
+                // check if there is a setter method
+                try {
+                    Method m = aClazz.getMethod(setterName, typeClass);
+                    if (m != null) {
+                        setter = m;
+                        return;
+                    }
+                } catch (Exception e) {
+                    // e.printStackTrace();
+                }
+
+                // no setter method, access the field directly
+                try {
+                    Field f = aClazz.getDeclaredField(name);
+                    // security manager may not be happy about this
+                    if (!f.isAccessible())
+                        f.setAccessible(true);
+                    setter = f;
+                    return;
+                } catch (Exception e) {
+//					e.printStackTrace();
+                }
+                // move to superclass
+                aClazz = aClazz.getSuperclass();
+            }
+        }
+
+        private void initGetter() {
+            // Set getter
+            String getterName = "get" + firstCharCap(name);
+
+            Class<?> aClazz = clazz;
+
+            // iterate over class hierarchy, until root class
+            while (aClazz != Object.class) {
+                // check if there is a getter method
+                try {
+                    Method m = aClazz.getMethod(getterName);
+                    if (m != null) {
+                        getter = m;
+                        return;
+                    }
+                } catch (Exception e) {
+                    // e.printStackTrace();
+                }
+
+                // no getter method, access the field directly
+                try {
+                    Field f = aClazz.getDeclaredField(name);
+                    // security manager may not be happy about this
+                    if (!f.isAccessible())
+                        f.setAccessible(true);
+                    getter = f;
+                    return;
+                } catch (Exception e) {
+//					e.printStackTrace();
+                }
+                // move to superclass
+                aClazz = aClazz.getSuperclass();
+            }
+        }
+
+        public FieldInfo(ObjectStreamField sf, Class<?> clazz) {
+            this(sf.getName(), sf.isPrimitive(), sf.getType().getName(), clazz);
+        }
+
+        public String getName() {
+            return name;
+        }
+
+        public boolean isPrimitive() {
+            return primitive;
+        }
+
+        public String getType() {
+            return type;
+        }
+
+        private String firstCharCap(String s) {
+            return Character.toUpperCase(s.charAt(0)) + s.substring(1);
+        }
+    }
+
+
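+    /** class catalog: the registered ClassInfo list plus lookup maps between a Class and its numeric id */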
+    CopyOnWriteArrayList<ClassInfo> registered;
+    Map<Class<?>, Integer> class2classId = new HashMap<Class<?>, Integer>();
+    Map<Integer, Class<?>> classId2class = new HashMap<Integer, Class<?>>();
+
+
+
+    public void registerClass(Class<?> clazz) throws IOException {
+        if(clazz != Object.class)
+            assertClassSerializable(clazz);
+
+        if (containsClass(clazz))
+            return;
+
+        ObjectStreamField[] streamFields = getFields(clazz);
+        FieldInfo[] fields = new FieldInfo[streamFields.length];
+        for (int i = 0; i < fields.length; i++) {
+            ObjectStreamField sf = streamFields[i];
+            fields[i] = new FieldInfo(sf, clazz);
+        }
+
+        ClassInfo i = new ClassInfo(clazz.getName(), fields,clazz.isEnum(), Externalizable.class.isAssignableFrom(clazz));
+        class2classId.put(clazz, registered.size());
+        classId2class.put(registered.size(), clazz);
+        registered.add(i);
+
+
+        saveClassInfo();
+    }
+
+    /** Action performed after classInfo was modified; feel free to override. */
+    protected void saveClassInfo() {
+
+    }
+
+    private ObjectStreamField[] getFields(Class<?> clazz) {
+        ObjectStreamField[] fields = null;
+        ClassInfo classInfo = null;
+        Integer classId = class2classId.get(clazz);
+        if (classId != null) {
+            classInfo = registered.get(classId);
+            fields = classInfo.getObjectStreamFields();
+        }
+        if (fields == null) {
+            ObjectStreamClass streamClass = ObjectStreamClass.lookup(clazz);
+            FastArrayList<ObjectStreamField> fieldsList = new FastArrayList<ObjectStreamField>();
+            while (streamClass != null) {
+                for (ObjectStreamField f : streamClass.getFields()) {
+                    fieldsList.add(f);
+                }
+                clazz = clazz.getSuperclass();
+                streamClass = ObjectStreamClass.lookup(clazz);
+            }
+            fields = new ObjectStreamField[fieldsList.size()];
+            for (int i = 0; i < fields.length; i++) {
+                fields[i] = fieldsList.get(i);
+            }
+            if(classInfo != null)
+                classInfo.setObjectStreamFields(fields);
+        }
+        return fields;
+    }
+
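+    /** Fails with NotSerializableException unless the class is already registered or implements Serializable. */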
+    private void assertClassSerializable(Class<?> clazz) throws NotSerializableException, InvalidClassException {
+        if(containsClass(clazz))
+            return;
+
+        if (!Serializable.class.isAssignableFrom(clazz))
+            throw new NotSerializableException(clazz.getName());
+    }
+
+    public Object getFieldValue(String fieldName, Object object) {
+        try {
+            registerClass(object.getClass());
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+        ClassInfo classInfo = registered.get(class2classId.get(object.getClass()));
+        return getFieldValue(classInfo.getField(fieldName), object);
+    }
+
+    public Object getFieldValue(FieldInfo fieldInfo, Object object) {
+
+        Object fieldAccessor = fieldInfo.getter;
+        try {
+            if (fieldAccessor instanceof Method) {
+                Method m = (Method) fieldAccessor;
+                return m.invoke(object);
+            } else {
+                Field f = (Field) fieldAccessor;
+                return f.get(object);
+            }
+        } catch (Exception e) {
+
+        }
+
+        throw new NoSuchFieldError(object.getClass() + "." + fieldInfo.getName());
+    }
+
+    public void setFieldValue(String fieldName, Object object, Object value) {
+        try {
+            registerClass(object.getClass());
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+        ClassInfo classInfo = registered.get(class2classId.get(object.getClass()));
+        setFieldValue(classInfo.getField(fieldName), object, value);
+    }
+
+    public void setFieldValue(FieldInfo fieldInfo, Object object, Object value) {
+
+        Object fieldAccessor = fieldInfo.setter;
+        try {
+            if (fieldAccessor instanceof Method) {
+                Method m = (Method) fieldAccessor;
+                m.invoke(object, value);
+            } else {
+                Field f = (Field) fieldAccessor;
+                f.set(object, value);
+            }
+            return;
+        } catch (Throwable e) {
+            e.printStackTrace();
+        }
+
+        throw new NoSuchFieldError(object.getClass() + "." + fieldInfo.getName());
+    }
+
+    public boolean containsClass(Class<?> clazz) {
+        return (class2classId.get(clazz) != null);
+    }
+
+    public int getClassId(Class<?> clazz) {
+        Integer classId = class2classId.get(clazz);
+        if(classId != null) {
+            return classId;
+        }
+        throw new Error("Class is not registered: " + clazz);
+    }
+
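+    /**
+     * Serializes an unknown object as a POJO record: POJO header, packed class id,
+     * enum ordinal if applicable, then the field count followed by (field id, value) pairs.
+     */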
+    @Override
+    protected void serializeUnknownObject(DataOutput out, Object obj, FastArrayList<Object> objectStack) throws IOException {
+        out.write(SerializationHeader.POJO);
+
+        registerClass(obj.getClass());
+
+        //write class header
+        int classId = getClassId(obj.getClass());
+        Utils.packInt(out, classId);
+        ClassInfo classInfo = registered.get(classId);
+
+        if(classInfo.isExternalizable){
+            throw new InternalError("Can not serialize Externalizable class");
+//            Externalizable o = (Externalizable) obj;
+//            DataInputOutput out2 = (DataInputOutput) out;
+//            try{
+//                out2.serializer = this;
+//                out2.objectStack = objectStack;
+//                o.writeExternal(out2);
+//            }finally {
+//                out2.serializer = null;
+//                out2.objectStack = null;
+//            }
+//            return;
+        }
+
+
+        if(classInfo.isEnum) {
+            int ordinal = ((Enum<?>)obj).ordinal();
+            Utils.packInt(out, ordinal);
+        }
+
+        ObjectStreamField[] fields = getFields(obj.getClass());
+        Utils.packInt(out, fields.length);
+
+        for (ObjectStreamField f : fields) {
+            //write field ID
+            int fieldId = classInfo.getFieldId(f.getName());
+            if (fieldId == -1) {
+                //field does not exist in the class definition stored in db,
+                //probably a new field was added, so add a field descriptor
+                fieldId = classInfo.addFieldInfo(new FieldInfo(f, obj.getClass()));
+                saveClassInfo();
+            }
+            Utils.packInt(out, fieldId);
+            //and write value
+            Object fieldValue = getFieldValue(classInfo.getField(fieldId), obj);
+            serialize(out, fieldValue, objectStack);
+        }
+    }
+
+
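+    /**
+     * Reads back a POJO record: looks up the ClassInfo by id, creates the instance without calling
+     * its constructor (or resolves the enum constant by ordinal), then reads and sets each field.
+     */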
+    @Override
+    protected Object deserializeUnknownHeader(DataInput in, int head, FastArrayList<Object> objectStack) throws IOException {
+        if(head!=SerializationHeader.POJO) throw new InternalError();
+
+        //read class header
+        try {
+            int classId = Utils.unpackInt(in);
+            ClassInfo classInfo = registered.get(classId);
+//            Class clazz = Class.forName(classInfo.getName());
+            Class<?> clazz = classId2class.get(classId);
+            if(clazz == null)
+                clazz = Class.forName(classInfo.getName());
+            assertClassSerializable(clazz);
+
+            Object o;
+
+            if(classInfo.isEnum) {
+                int ordinal = Utils.unpackInt(in);
+                o = clazz.getEnumConstants()[ordinal];
+            }
+            else {
+                o = createInstanceSkippinkConstructor(clazz);
+            }
+
+            objectStack.add(o);
+
+            if(classInfo.isExternalizable){
+                throw new InternalError("can not serialize Externalizable class");
+//                Externalizable oo = (Externalizable) o;
+//                DataInputOutput in2 = (DataInputOutput) in;
+//                try{
+//                    in2.serializer = this;
+//                    in2.objectStack = objectStack;
+//                    oo.readExternal(in2);
+//                }finally {
+//                    in2.serializer = null;
+//                    in2.objectStack = null;
+//                }
+
+            }else{
+                int fieldCount = Utils.unpackInt(in);
+                for (int i = 0; i < fieldCount; i++) {
+                    int fieldId = Utils.unpackInt(in);
+                    FieldInfo f = classInfo.getField(fieldId);
+                    Object fieldValue = deserialize(in, objectStack);
+                    setFieldValue(f, o, fieldValue);
+                }
+            }
+            return o;
+        } catch (Exception e) {
+            throw new Error("Could not instanciate class", e);
+        }
+    }
+
+
+    static private Method sunConstructor = null;
+    static private Object sunReflFac = null;
+    static private Method androidConstructor = null;
+
+    static{
+        try{
+            Class clazz = Class.forName("sun.reflect.ReflectionFactory");
+            if(clazz!=null){
+                Method getReflectionFactory = clazz.getMethod("getReflectionFactory");
+                sunReflFac = getReflectionFactory.invoke(null);
+                sunConstructor = clazz.getMethod("newConstructorForSerialization",
+                        java.lang.Class.class, java.lang.reflect.Constructor.class);
+            }
+        }catch(Exception e){
+            //ignore
+        }
+
+        if(sunConstructor == null)try{
+            //try android way
+            Method newInstance = ObjectInputStream.class.getDeclaredMethod("newInstance", Class.class, Class.class);
+            newInstance.setAccessible(true);
+            androidConstructor = newInstance;
+
+        }catch(Exception e){
+            //ignore
+        }
+
+
+    }
+
+
+    private static Map<Class<?>, Constructor<?>> class2constuctor = new ConcurrentHashMap<Class<?>, Constructor<?>>();
+
+    /**
+     * For POJO serialization we need to instantiate a class without invoking its constructor.
+     * There are two ways to do it:
+     * <p/>
+     *   Using proprietary API on Oracle JDK and OpenJDK
+     *   sun.reflect.ReflectionFactory.getReflectionFactory().newConstructorForSerialization()
+     *   more at http://www.javaspecialists.eu/archive/Issue175.html
+     * <p/>
+     *   Using 'ObjectInputStream.newInstance' on Android
+     *   http://stackoverflow.com/a/3448384
+     * <p/>
+     *   If none of these works we fall back to plain reflection, which requires a no-arg constructor.
+     */
+    @SuppressWarnings("restriction")
+	protected <T> T createInstanceSkippinkConstructor(Class<T> clazz)
+            throws NoSuchMethodException, InvocationTargetException, IllegalAccessException, InstantiationException {
+
+        if(sunConstructor !=null){
+            //Sun specific way
+            Constructor<?> intConstr = class2constuctor.get(clazz);
+
+            if (intConstr == null) {
+                Constructor<?> objDef = Object.class.getDeclaredConstructor();
+                intConstr = (Constructor<?>) sunConstructor.invoke(sunReflFac, clazz, objDef);
+                class2constuctor.put(clazz, intConstr);
+            }
+
+            return (T)intConstr.newInstance();
+        }else if(androidConstructor!=null){
+            //android (harmony) specific way
+            return (T)androidConstructor.invoke(null, clazz, Object.class);
+        }else{
+            //try usual generic stuff which does not skip constructor
+            Constructor<?> c = class2constuctor.get(clazz);
+            if(c==null){
+                c =clazz.getConstructor();
+                if(!c.isAccessible()) c.setAccessible(true);
+                class2constuctor.put(clazz,c);
+            }
+            return (T)c.newInstance();
+        }
+    }
+
+//    protected abstract Object deserialize(DataInput in, FastArrayList objectStack) throws IOException, ClassNotFoundException;
+
+//    protected abstract void serialize(DataOutput out, Object fieldValue, FastArrayList objectStack) throws IOException;
+//
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/SnapshotEngine.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/SnapshotEngine.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/SnapshotEngine.java	(revision 29363)
@@ -0,0 +1,157 @@
+package org.mapdb;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * Naive implementation of snapshots on top of the storage engine.
+ * On update it takes the old value and stores it aside.
+ * <p/>
+ * TODO merge snapshots down with Storage for best performance
+ *
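+ * <p/>
+ * Rough usage sketch (hypothetical names; assumes an Engine already wrapped in a SnapshotEngine):
+ * <pre>
+ *   Engine snap = SnapshotEngine.createSnapshotFor(engine); // point-in-time view
+ *   Object val  = snap.get(recid, serializer);              // sees values as of snapshot creation
+ *   snap.close();                                           // discard the snapshot
+ * </pre>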
+ * @author Jan Kotek
+ */
+public class SnapshotEngine extends EngineWrapper{
+
+    protected final Locks.RecidLocks locks = new Locks.LongHashMapRecidLocks();
+
+    protected final static Object NOT_EXIST = new Object();
+    protected final static Object NOT_INIT_YET = new Object();
+
+
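+    /** currently open snapshots; the map is used only as a concurrent set, the value is ignored */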
+    protected final Map<Snapshot, String> snapshots = new ConcurrentHashMap<Snapshot, String>();
+
+
+    protected SnapshotEngine(Engine engine) {
+        super(engine);
+    }
+
+    public Engine snapshot() {
+        return new Snapshot();
+    }
+
+    /** protects <code>snapshots</code> when modified */
+    protected final ReentrantReadWriteLock snapshotsLock = new ReentrantReadWriteLock();
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        long recid = super.put(value, serializer);
+        locks.lock(recid);
+        try{
+            for(Snapshot s:snapshots.keySet()){
+                s.oldValues.putIfAbsent(recid, NOT_EXIST);
+            }
+            return recid;
+        }finally{
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        locks.lock(recid);
+        try{
+            boolean ret =  super.compareAndSwap(recid, expectedOldValue, newValue, serializer);
+            if(ret==true){
+                for(Snapshot s:snapshots.keySet()){
+                    s.oldValues.putIfAbsent(recid, expectedOldValue);
+                }
+            }
+            return ret;
+        }finally{
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        locks.lock(recid);
+        try{
+            Object val = NOT_INIT_YET;
+            for(Snapshot s:snapshots.keySet()){
+                if(s.oldValues.get(recid)==null){
+                    if(val == NOT_INIT_YET)
+                        val = get(recid, serializer);
+                    s.oldValues.put(recid,val);
+                }
+            }
+
+            super.update(recid, value, serializer);
+        }finally{
+            locks.unlock(recid);
+        }
+    }
+
+    @Override
+    public  <A> void delete(long recid, Serializer<A> serializer) {
+        locks.lock(recid);
+        try{
+            Object val = NOT_INIT_YET;
+            for(Snapshot s:snapshots.keySet()){
+                if(s.oldValues.get(recid)==null){
+                    if(val == NOT_INIT_YET)
+                        val = get(recid, serializer);
+                    s.oldValues.put(recid,val);
+                }
+            }
+
+            super.delete(recid,serializer);
+        }finally{
+            locks.unlock(recid);
+        }
+    }
+
+    public static Engine createSnapshotFor(Engine engine) {
+        SnapshotEngine se = null;
+        while(true){
+            if(engine instanceof SnapshotEngine){
+                se = (SnapshotEngine) engine;
+                break;
+            }else if(engine instanceof EngineWrapper){
+                engine = ((EngineWrapper)engine).getWrappedEngine();
+            }else{
+                throw new IllegalArgumentException("Could not create Snapshot for Engine: "+engine);
+            }
+        }
+
+        return se.snapshot();
+    }
+
+    protected class Snapshot extends ReadOnlyEngine{
+
+        protected LongConcurrentHashMap oldValues = new LongConcurrentHashMap();
+
+        public Snapshot() {
+            super(SnapshotEngine.this);
+            snapshots.put(Snapshot.this, "");
+        }
+
+
+        @Override
+        public <A> A get(long recid, Serializer<A> serializer) {
+            locks.lock(recid);
+            try{
+                Object ret = oldValues.get(recid);
+                if(ret!=null){
+                    if(ret==NOT_EXIST) return null;
+                    return (A) ret;
+                }
+                return SnapshotEngine.this.getWrappedEngine().get(recid, serializer);
+            }finally{
+                locks.unlock(recid);
+            }
+        }
+
+        @Override
+        public boolean isClosed() {
+            return oldValues == null;
+        }
+
+        @Override
+        public void close() {
+            snapshots.remove(Snapshot.this);
+            oldValues.clear();
+            oldValues = null;
+        }
+    }
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageAppend.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageAppend.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageAppend.java	(revision 29363)
@@ -0,0 +1,341 @@
+package org.mapdb;
+
+import java.io.File;
+import java.io.IOError;
+import java.io.IOException;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * Append-only storage. Uses a different file format than the Direct and Journaled storage engines.
+ */
+public class StorageAppend implements Engine{
+
+    protected final File file;
+    protected final boolean useRandomAccessFile;
+    protected final boolean readOnly;
+
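+    /*
+     * Record locations are packed into a single long: the upper bits hold the
+     * log file number, the lower 28 bits (FILE_OFFSET_MASK) hold the offset
+     * inside that file.
+     */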
+    protected final static long FILE_NUMBER_SHIFT = 28;
+    protected final static long FILE_OFFSET_MASK = 0x0FFFFFFFL;
+
+    protected final static long FILE_HEADER = 56465465456465L;
+
+    protected final ReentrantReadWriteLock appendLock = new ReentrantReadWriteLock();
+    protected final static Long THUMBSTONE = Long.MIN_VALUE;
+    protected final static int THUMBSTONE_SIZE = -3;
+    protected final static long EOF = -1;
+    protected final static long COMMIT = -2;
+    protected final static long ROLLBACK = -3; //must differ from COMMIT so replayLog can distinguish rollback marks from commit marks
+
+    protected Volume currentVolume;
+    protected long currentVolumeNum;
+    protected int currentFileOffset;
+    protected long maxRecid = 10;
+
+    protected LongConcurrentHashMap<Volume> volumes = new LongConcurrentHashMap<Volume>();
+    protected final LongConcurrentHashMap<Long> recidsInTx = new LongConcurrentHashMap<Long>();
+
+
+    protected final Volume recidsTable = new Volume.MemoryVol(true);
+    protected static final int MAX_FILE_SIZE = 1024 * 1024 * 10;
+
+    public StorageAppend(File file, boolean useRandomAccessFile, boolean readOnly, boolean transactionsDisabled) {
+        this.file = file;
+        this.useRandomAccessFile = useRandomAccessFile;
+        this.readOnly = readOnly;
+        //TODO special mode with transactions disabled
+
+        File zeroFile = getFileNum(0);
+        if(zeroFile.exists()){
+            replayLog();
+        }else{
+            //create zero file
+            currentVolume = Volume.volumeForFile(zeroFile, useRandomAccessFile, readOnly);
+            currentVolume.ensureAvailable(8);
+            currentVolume.putLong(0, FILE_HEADER);
+            currentFileOffset = 8;
+            volumes.put(0L, currentVolume);
+        }
+
+
+
+
+    }
+
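+    /*
+     * Log record layout: 8-byte recid, 4-byte size, followed by the record data.
+     * A size equal to THUMBSTONE_SIZE marks a deleted record, and the special
+     * recids EOF, COMMIT and ROLLBACK act as control marks in the stream.
+     */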
+    protected void replayLog() {
+        try{
+        for(long fileNum=0;;fileNum++){
+            File f = getFileNum(fileNum);
+            if(!f.exists()) return;
+            currentVolume = Volume.volumeForFile(f, useRandomAccessFile, readOnly);
+            volumes.put(fileNum, currentVolume);
+            currentVolumeNum = fileNum;
+
+            //replay file and rebuild recid index table
+            LongHashMap<Long> recidsTable2 = new LongHashMap<Long>();
+            if(!currentVolume.isEmpty()){
+                int pos =0;
+                long header = currentVolume.getLong(pos); pos+=8;
+                if(header!=FILE_HEADER) throw new InternalError();
+
+                for(;;){
+                    long recid = currentVolume.getLong(pos); pos+=8;
+                    maxRecid = Math.max(recid, maxRecid);
+
+                    if(recid == EOF || recid == 0){
+                        break; //end of file
+                    }else if(recid == COMMIT){
+                        //move entries from the temporary table into the live recid table
+                        commitRecids(recidsTable2);
+                        continue;
+                    }else if(recid == ROLLBACK){
+                        //do not use last recids
+                        recidsTable2.clear();
+                        continue;
+                    }
+
+                    long filePos = (fileNum<<FILE_NUMBER_SHIFT) | pos;
+                    int size = currentVolume.getInt(pos); pos+=4;
+                    if(size!=THUMBSTONE_SIZE){
+                        //skip data
+                        pos+=size;
+                        //store location within the log files in memory
+                        recidsTable2.put(recid, filePos);
+                    }else{
+                        //record was deleted (THUMBSTONE mark)
+                        recidsTable2.put(recid, THUMBSTONE);
+                    }
+                }
+
+            }
+        }
+        }catch(IOError e){
+            //TODO error is part of workflow, but maybe change workflow?
+        }
+    }
+
+    protected File getFileNum(long fileNum) {
+        return new File(file.getPath()+"."+fileNum);
+    }
+
+
+    protected void commitRecids(LongMap<Long> recidsTable2) {
+        LongMap.LongMapIterator<Long> iter = recidsTable2.longMapIterator();
+        while(iter.moveToNext()){
+            long recidsTableOffset = iter.key()*8;
+            recidsTable.ensureAvailable(recidsTableOffset+8);
+            recidsTable.putLong(recidsTableOffset, iter.value());
+        }
+        recidsTable2.clear();
+    }
+
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out, value);
+            appendLock.writeLock().lock();
+            try{
+
+                long newRecid = maxRecid++; //TODO free recid management
+                update2(newRecid, out);
+                rollOverFile();
+                return newRecid;
+            }finally {
+                appendLock.writeLock().unlock();
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
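+    /** appends [recid][size][data] at the current offset and remembers the packed location in recidsInTx until commit */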
+    protected void update2(long recid, DataOutput2 out) {
+        currentVolume.ensureAvailable(currentFileOffset+8+4+out.pos);
+        currentVolume.putLong(currentFileOffset,recid);
+        currentFileOffset+=8;
+        long filePos = (currentVolumeNum<<FILE_NUMBER_SHIFT) | currentFileOffset;
+
+        currentVolume.putInt(currentFileOffset,out.pos);
+        currentFileOffset+=4;
+        currentVolume.putData(currentFileOffset,out.buf, out.pos);
+        currentFileOffset+=out.pos;
+        recidsInTx.put(recid, filePos);
+    }
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        appendLock.readLock().lock();
+        try {
+            Long fileNum2 = recidsInTx.get(recid);
+            if(fileNum2 == null)
+                    fileNum2 = recidsTable.getLong(recid*8);
+
+            if(THUMBSTONE.equals(fileNum2)){  //compare by value; after commit the location is re-boxed when read back from recidsTable
+                //record was deleted;
+                return null;
+            }
+
+            if(fileNum2 == 0){
+                return serializer.deserialize(new DataInput2(new byte[0]), 0);
+            }
+
+            long fileNum = fileNum2;
+
+            long fileOffset = fileNum & FILE_OFFSET_MASK;
+            if(fileOffset>MAX_FILE_SIZE) throw new InternalError();
+            fileNum = fileNum>>>FILE_NUMBER_SHIFT;
+            Volume v = volumes.get(fileNum);
+
+            int size = v.getInt(fileOffset);
+            DataInput2 input = v.getDataInput(fileOffset+4, size);
+
+            return serializer.deserialize(input, size);
+        } catch (IOException e) {
+            throw new IOError(e);
+        }finally {
+            appendLock.readLock().unlock();
+        }
+
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out, value);
+            appendLock.writeLock().lock();
+            try {
+                update2(recid, out);
+                rollOverFile();
+            }finally {
+                appendLock.writeLock().unlock();
+            }
+
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+        appendLock.writeLock().lock();
+        try{
+            Object oldVal = get(recid, serializer);
+            //TODO compare binary stuff?
+            if((oldVal==null && expectedOldValue==null)|| (oldVal!=null && oldVal.equals(expectedOldValue))){
+                DataOutput2 out = new DataOutput2();
+                try {
+                    serializer.serialize(out, newValue); //TODO serialize outside of APPEND_LOCK
+                } catch (IOException e) {
+                    throw new IOError(e);
+                }
+                update2(recid, out);
+                rollOverFile();
+                return true;
+            }else{
+                return false;
+            }
+        }finally {
+            appendLock.writeLock().unlock();
+        }
+
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A> serializer){
+        //append a tombstone (THUMBSTONE) mark into the log
+        appendLock.writeLock().lock();
+        try{
+
+            currentVolume.ensureAvailable(currentFileOffset+8+4);
+            currentVolume.putLong(currentFileOffset, recid);
+            currentFileOffset+=8;
+            currentVolume.putInt(currentFileOffset, THUMBSTONE_SIZE);
+            currentFileOffset+=4;
+            recidsInTx.put(recid, THUMBSTONE);
+            rollOverFile();
+        }finally {
+            appendLock.writeLock().unlock();
+        }
+
+    }
+
+    @Override
+    public void close() {
+        currentVolume = null;
+        volumes = null;
+    }
+
+    @Override
+    public boolean isClosed() {
+        return volumes==null;
+    }
+
+    @Override
+    public void commit() {
+        //append commit mark
+        appendLock.writeLock().lock();
+        try{
+            commitRecids(recidsInTx);
+            currentVolume.ensureAvailable(currentFileOffset+8);
+            currentVolume.putLong(currentFileOffset, COMMIT);
+            currentFileOffset+=8;
+            currentVolume.sync();
+            rollOverFile();
+        }finally {
+            appendLock.writeLock().unlock();
+        }
+    }
+
+    @Override
+    public void rollback() throws UnsupportedOperationException {
+        //append rollback mark
+        appendLock.writeLock().lock();
+        try{
+            currentVolume.ensureAvailable(currentFileOffset+8);
+            currentVolume.putLong(currentFileOffset, ROLLBACK);
+            currentFileOffset+=8;
+            currentVolume.sync();
+            recidsInTx.clear();
+            rollOverFile();
+        }finally {
+            appendLock.writeLock().unlock();
+        }
+
+
+    }
+
+
+    @Override
+    public boolean isReadOnly() {
+        return readOnly;
+    }
+
+    @Override
+    public void compact() {
+        //TODO implement compaction on StorageAppend
+    }
+
+    /** check if current file is too big, if yes finish it and start next file */
+    protected void rollOverFile(){
+        if(currentFileOffset<MAX_FILE_SIZE-8) return;
+
+
+        currentVolume.ensureAvailable(currentFileOffset+8);
+        currentVolume.putLong(currentFileOffset, EOF);
+        currentVolume.sync();
+        currentVolumeNum++;
+        currentVolume = Volume.volumeForFile(
+              getFileNum(currentVolumeNum), useRandomAccessFile, readOnly);
+        currentVolume.ensureAvailable(MAX_FILE_SIZE);
+        currentVolume.putLong(0, FILE_HEADER);
+        currentFileOffset = 8;
+        currentVolume.sync();
+        volumes.put(currentVolumeNum,currentVolume);
+
+    }
+
+}
+
+
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageDirect.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageDirect.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageDirect.java	(revision 29363)
@@ -0,0 +1,722 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.File;
+import java.io.IOError;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+/**
+ * Storage Engine which saves records directly into a file.
+ * It is used when the transaction journal is disabled.
+ *
+ * @author Jan Kotek
+ */
+public class StorageDirect  implements Engine {
+
+
+
+
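+    /*
+     * Index file values pack a 16-bit record size into the top bits and a
+     * 48-bit offset into the phys file into the lower bits (PHYS_OFFSET_MASK).
+     */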
+    static final long PHYS_OFFSET_MASK = 0x0000FFFFFFFFFFFFL;
+
+
+    /** File header. First 4 bytes are 'JDBM', last two bytes are store format version */
+    static final long HEADER = 5646556656456456L;
+
+
+    static final int RECID_CURRENT_PHYS_FILE_SIZE = 1;
+    static final int RECID_CURRENT_INDEX_FILE_SIZE = 2;
+
+
+    /** offset in index file which points to FREEINDEX list (free slots in index file) */
+    static final int RECID_FREE_INDEX_SLOTS = 3;
+
+
+    //TODO slots 5 to 18 are currently unused
+
+
+
+    static final int RECID_FREE_PHYS_RECORDS_START = 20;
+
+    static final int NUMBER_OF_PHYS_FREE_SLOT =1000 + 1535;
+    static final int MAX_RECORD_SIZE = 65535;
+
+    /** must be smaller than 127 */
+    static final byte LONG_STACK_NUM_OF_RECORDS_PER_PAGE = 100;
+
+    static final int LONG_STACK_PAGE_SIZE =   8 + LONG_STACK_NUM_OF_RECORDS_PER_PAGE * 8;
+
+    /** offset in index file from which normal physid starts */
+    static final int INDEX_OFFSET_START = RECID_FREE_PHYS_RECORDS_START +NUMBER_OF_PHYS_FREE_SLOT;
+    public static final String DATA_FILE_EXT = ".p";
+
+
+    protected final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+    protected final boolean appendOnly;
+    protected final boolean deleteFilesOnExit;
+    protected final boolean failOnWrongHeader;
+    protected final boolean readOnly;
+
+    volatile protected Volume phys;
+    volatile protected Volume index;
+
+    public StorageDirect(Volume.Factory volFac, boolean appendOnly,
+                   boolean deleteFilesOnExit, boolean failOnWrongHeader, boolean readOnly) {
+
+        this.appendOnly = appendOnly;
+        this.deleteFilesOnExit = deleteFilesOnExit;
+        this.failOnWrongHeader = failOnWrongHeader;
+        this.readOnly = readOnly;
+
+        try{
+            lock.writeLock().lock();
+
+
+            phys = volFac.createPhysVolume();
+            index = volFac.createIndexVolume();
+            phys.ensureAvailable(8);
+            index.ensureAvailable(INDEX_OFFSET_START*8);
+
+            final long header = index.isEmpty()? 0 : index.getLong(0);
+            if(header!=HEADER){
+                if(failOnWrongHeader) throw new IOError(new IOException("Wrong file header"));
+                else writeInitValues();
+            }
+
+            File indexFile = index.getFile();
+            if(!(this instanceof StorageJournaled) && indexFile !=null
+                    && new File(indexFile.getPath()+ StorageJournaled.TRANS_LOG_FILE_EXT).exists()){
+                throw new IllegalAccessError("Could not open DB in Direct Mode; WriteAhead log file exists, it may contain some data.");
+            }
+
+        }finally {
+            lock.writeLock().unlock();
+        }
+
+    }
+
+    public StorageDirect(Volume.Factory volFac){
+        this(volFac, false, false,false, false);
+    }
+
+
+
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        if(value == null||serializer==null) throw new NullPointerException();
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out,value);
+            //TODO log warning if record is too big
+
+
+            try{
+                lock.writeLock().lock();
+                //update index file, find free recid
+                long recid = longStackTake(RECID_FREE_INDEX_SLOTS);
+                if(recid == 0){
+                    //could not reuse recid, so create new one
+                    final long indexSize = index.getLong(RECID_CURRENT_INDEX_FILE_SIZE * 8);
+                    if(indexSize%8!=0) throw new InternalError();
+                    recid = indexSize/8;
+                    //grow buffer if necessary
+                    index.ensureAvailable(indexSize+8);
+                    index.putLong(RECID_CURRENT_INDEX_FILE_SIZE * 8, indexSize + 8);
+                }
+
+                if(out.pos<MAX_RECORD_SIZE){
+                    //is small size and can be stored in single record
+                    //get physical record; the first 16 bits are the record size, the remaining 48 bits are the record offset in the phys file
+                    final long indexValue = out.pos!=0?
+                            freePhysRecTake(out.pos):
+                            0L;
+
+                    phys.putData(indexValue&PHYS_OFFSET_MASK, out.buf, out.pos);
+                    index.putLong(recid * 8, indexValue);
+                }else{
+                    putLargeLinkedRecord(out, recid);
+                }
+
+                return recid - INDEX_OFFSET_START;
+            }finally {
+                lock.writeLock().unlock();
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
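+    /*
+     * Records larger than MAX_RECORD_SIZE are split into chunks of at most
+     * MAX_RECORD_SIZE-8 bytes. Each stored chunk starts with an 8-byte physid
+     * of the next chunk (0 terminates the chain) and the index slot points to
+     * the first chunk.
+     */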
+    private void putLargeLinkedRecord(DataOutput2 out, long recid) throws IOException {
+        //large size, needs to link multiple records together
+        //start splitting from end, so we can build up linked list
+        final int chunkSize = MAX_RECORD_SIZE-8;
+        int lastArrayPos = out.pos;
+        int arrayPos = out.pos - out.pos%chunkSize;
+        long lastChunkPhysId = 0;
+        while(arrayPos>=0){
+            final int currentChunkSize = lastArrayPos-arrayPos;
+            byte[] b = new byte[currentChunkSize+8]; //TODO reuse byte[]
+            //append reference to prev physId
+            ByteBuffer.wrap(b).putLong(0, lastChunkPhysId);
+            //copy chunk
+            System.arraycopy(out.buf, arrayPos, b, 8, currentChunkSize);
+            //and write current chunk
+            lastChunkPhysId = freePhysRecTake(currentChunkSize+8);
+            phys.putData(lastChunkPhysId&PHYS_OFFSET_MASK, b, b.length);
+            lastArrayPos = arrayPos;
+            arrayPos-=chunkSize;
+        }
+        index.putLong(recid * 8, lastChunkPhysId);
+    }
+
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        if(serializer==null) throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid += INDEX_OFFSET_START;
+        try{
+            try{
+                lock.readLock().lock();
+                final long indexValue = index.getLong(recid * 8) ;
+                return recordGet2(indexValue, phys, serializer);
+            }finally{
+                lock.readLock().unlock();
+            }
+
+
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
+
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer){
+        if(value == null||serializer==null) throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid+=INDEX_OFFSET_START;
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out,value);
+
+            try{
+                lock.writeLock().lock();
+
+                final long oldIndexVal = index.getLong(recid * 8);
+                final long oldSize = oldIndexVal>>>48;
+
+                //check if we need to split new records into multiple one
+                if(out.pos<MAX_RECORD_SIZE){
+                    //check if size has changed
+                    if(oldSize == 0 && out.pos==0){
+                        //do nothing
+                    }else if(oldSize == out.pos && oldSize!=MAX_RECORD_SIZE){
+                        //size is the same, so just write new data
+                        phys.putData(oldIndexVal&PHYS_OFFSET_MASK, out.buf, out.pos);
+                    }else if(oldSize != 0 && out.pos==0){
+                        //new record has zero size, just delete old phys one
+                        freePhysRecPut(oldIndexVal);
+                        index.putLong(recid * 8, 0L);
+                    }else{
+                        //size has changed, so write into new location
+                        final long newIndexValue = freePhysRecTake(out.pos);
+                        phys.putData(newIndexValue&PHYS_OFFSET_MASK, out.buf, out.pos);
+                        //update index file with new location
+                        index.putLong(recid * 8, newIndexValue);
+
+                        //and set old phys record as free
+                        unlinkPhysRecord(oldIndexVal,recid);
+                    }
+                }else{
+                    putLargeLinkedRecord(out, recid);
+                    //and set old phys record as free
+                    unlinkPhysRecord(oldIndexVal,recid);
+                }
+            }finally {
+                lock.writeLock().unlock();
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
+
+   @Override
+   public <A> void delete(long recid, Serializer<A> serializer){
+        if(serializer==null)throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid+=INDEX_OFFSET_START;
+        try{
+            lock.writeLock().lock();
+            final long oldIndexVal = index.getLong(recid * 8);
+            index.putLong(recid * 8, 0L);
+            longStackPut(RECID_FREE_INDEX_SLOTS,recid);
+            unlinkPhysRecord(oldIndexVal,recid);
+        }catch(IOException e){
+            throw new IOError(e);
+        }finally {
+            lock.writeLock().unlock();
+        }
+    }
+
+    @Override
+    public void commit() {
+        //TODO sync here?
+    }
+
+    @Override
+    public void rollback() {
+        throw new UnsupportedOperationException("Can not rollback, transactions disabled.");
+    }
+
+
+
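+   /*
+    * Long stack page layout: the first byte of the 8-byte header holds the
+    * number of values stored on the page, while the lower 48 bits of the same
+    * long hold the physid of the previous page (0 if none). Values are stored
+    * as longs starting at offset 8.
+    */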
+   protected long longStackTake(final long listRecid) throws IOException {
+        final long dataOffset = index.getLong(listRecid * 8) &PHYS_OFFSET_MASK;
+        if(dataOffset == 0)
+            return 0; //there is no such list, so just return 0
+
+        writeLock_checkLocked();
+
+
+        final int numberOfRecordsInPage = phys.getUnsignedByte(dataOffset);
+
+        if(numberOfRecordsInPage<=0) throw new InternalError();
+        if(numberOfRecordsInPage>LONG_STACK_NUM_OF_RECORDS_PER_PAGE) throw new InternalError();
+
+        final long ret = phys.getLong (dataOffset+numberOfRecordsInPage*8);
+
+        //was it the only record on that page?
+        if(numberOfRecordsInPage == 1){
+            //yes, delete this page
+            final long previousListPhysid =phys.getLong(dataOffset) &PHYS_OFFSET_MASK;
+            if(previousListPhysid !=0){
+                //update index so it points to previous page
+                index.putLong(listRecid * 8, previousListPhysid | (((long) LONG_STACK_PAGE_SIZE) << 48));
+            }else{
+                //zero out index
+                index.putLong(listRecid * 8, 0L);
+            }
+            //put space used by this page into free list
+            freePhysRecPut(dataOffset | (((long)LONG_STACK_PAGE_SIZE)<<48));
+        }else{
+            //no, it was not the last record on this page, so just decrement the counter
+            phys.putUnsignedByte(dataOffset, (byte) (numberOfRecordsInPage - 1));
+        }
+        return ret;
+
+    }
+
+
+   protected void longStackPut(final long listRecid, final long offset) throws IOException {
+       writeLock_checkLocked();
+
+       //index position was cleared, put into free index list
+        final long listPhysid2 = index.getLong(listRecid * 8) &PHYS_OFFSET_MASK;
+
+        if(listPhysid2 == 0){ //empty list?
+            //yes empty, create new page and fill it with values
+            final long listPhysid = freePhysRecTake(LONG_STACK_PAGE_SIZE) &PHYS_OFFSET_MASK;
+            if(listPhysid == 0) throw new InternalError();
+            //set previous Free Index List page to zero as this is first page
+            phys.putLong(listPhysid, 0L);
+            //set number of free records in this page to 1
+            phys.putUnsignedByte(listPhysid, (byte) 1);
+
+            //set  record
+            phys.putLong(listPhysid + 8, offset);
+            //and update index file with new page location
+            index.putLong(listRecid * 8, (((long) LONG_STACK_PAGE_SIZE) << 48) | listPhysid);
+        }else{
+            final int numberOfRecordsInPage = phys.getUnsignedByte(listPhysid2);
+            if(numberOfRecordsInPage == LONG_STACK_NUM_OF_RECORDS_PER_PAGE){ //is current page full?
+                //yes it is full, so we need to allocate new page and write our number there
+
+                final long listPhysid = freePhysRecTake(LONG_STACK_PAGE_SIZE) &PHYS_OFFSET_MASK;
+                if(listPhysid == 0) throw new InternalError();
+                //final ByteBuffers dataBuf = dataBufs[((int) (listPhysid / BUF_SIZE))];
+                //set location to previous page
+                phys.putLong(listPhysid, listPhysid2);
+                //set number of free records in this page to 1
+                phys.putUnsignedByte(listPhysid, (byte) 1);
+                //set free record
+                phys.putLong(listPhysid +  8, offset);
+                //and update index file with new page location
+                index.putLong(listRecid * 8, (((long) LONG_STACK_PAGE_SIZE) << 48) | listPhysid);
+            }else{
+                //there is space on page, so just write released recid and increase the counter
+                phys.putLong(listPhysid2 +  8 + 8 * numberOfRecordsInPage, offset);
+                phys.putUnsignedByte(listPhysid2, (byte) (numberOfRecordsInPage + 1));
+            }
+        }
+   }
+
+
+
+
+	protected long freePhysRecTake(final int requiredSize) throws IOException {
+        writeLock_checkLocked();
+
+        if(requiredSize<=0) throw new InternalError();
+
+        long freePhysRec = (appendOnly
+                //TODO !HACK! to 'fix' issue 69
+                || Thread.currentThread().getStackTrace().length>256)
+                ? 0L:
+                findFreePhysSlot(requiredSize);
+        if(freePhysRec!=0){
+            return freePhysRec;
+        }
+
+
+
+        //No free records found, so let's increase the file size.
+        //We need to take care of growing ByteBuffers.
+        //Also the max size of a ByteBuffer is 2GB, so we need to use multiple ones.
+
+        final long physFileSize = index.getLong(RECID_CURRENT_PHYS_FILE_SIZE*8);
+        if(physFileSize <=0) throw new InternalError("illegal file size:"+physFileSize);
+
+        //check if new record would be overflowing BUF_SIZE
+        if(physFileSize%Volume.BUF_SIZE+requiredSize<=Volume.BUF_SIZE){
+            //no, so just increase file size
+            phys.ensureAvailable(physFileSize+requiredSize);
+            //so just increase buffer size
+            index.putLong(RECID_CURRENT_PHYS_FILE_SIZE * 8, physFileSize + requiredSize);
+
+            //and return this
+            return (((long)requiredSize)<<48) | physFileSize;
+        }else{
+            //the new size would overlap the 2GB ByteBuffer boundary,
+            //so we create an empty 'padding' record to fill the space up to the boundary
+
+            final long  freeSizeToCreate = Volume.BUF_SIZE -  physFileSize%Volume.BUF_SIZE;
+            if(freeSizeToCreate == 0) throw new InternalError();
+
+            final long nextBufferStartOffset = physFileSize + freeSizeToCreate;
+            if(nextBufferStartOffset%Volume.BUF_SIZE!=0) throw new InternalError();
+
+            //increase the disk size
+            phys.ensureAvailable(physFileSize + freeSizeToCreate + requiredSize);
+            index.putLong(RECID_CURRENT_PHYS_FILE_SIZE * 8, physFileSize + freeSizeToCreate + requiredSize);
+
+            //mark 'padding' free record
+            freePhysRecPut((freeSizeToCreate<<48)|physFileSize);
+
+            //and finally return position at beginning of new buffer
+            return (((long)requiredSize)<<48) | nextBufferStartOffset;
+        }
+
+    }
+
+
+
+    private void writeInitValues() {
+        writeLock_checkLocked();
+
+        //zero out all index values
+        for(int i=1;i<=INDEX_OFFSET_START+Engine.LAST_RESERVED_RECID;i++){
+            index.putLong(i*8, 0L);
+        }
+
+        //write headers
+        phys.putLong(0, HEADER);
+        index.putLong(0L,HEADER);
+        if(index.getLong(0L)!=HEADER)
+            throw new InternalError();
+
+
+        //and set current sizes
+        index.putLong(RECID_CURRENT_PHYS_FILE_SIZE * 8, 8L);
+        index.putLong(RECID_CURRENT_INDEX_FILE_SIZE * 8, INDEX_OFFSET_START * 8 + Engine.LAST_RESERVED_RECID*8 + 8);
+    }
+
+
+    protected void writeLock_checkLocked() {
+        if(!lock.writeLock().isHeldByCurrentThread())
+            throw new IllegalAccessError("no write lock");
+    }
+
+
+
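+    /*
+     * Free physical records are kept in NUMBER_OF_PHYS_FREE_SLOT size buckets,
+     * each bucket being a long stack rooted at RECID_FREE_PHYS_RECORDS_START + slot.
+     * Sizes below 1535 get one bucket per byte, larger sizes are grouped in
+     * 64-byte steps, and MAX_RECORD_SIZE has its own last bucket.
+     */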
+    final int freePhysRecSize2FreeSlot(final int size){
+        if(size>MAX_RECORD_SIZE) throw new IllegalArgumentException("too big record");
+        if(size<0) throw new IllegalArgumentException("negative size");
+
+        if(size<1535)
+            return size-1;
+        else if(size == MAX_RECORD_SIZE)
+            return NUMBER_OF_PHYS_FREE_SLOT-1;
+        else
+            return 1535 -1 + (size-1535)/64;
+    }
+
+    @Override
+    public void close() {
+        try{
+            lock.writeLock().lock();
+
+            phys.close();
+            index.close();
+            if(deleteFilesOnExit){
+                phys.deleteFile();
+                index.deleteFile();
+            }
+            phys = null;
+            index = null;
+
+        }finally {
+            lock.writeLock().unlock();
+        }
+    }
+
+    @Override
+    public boolean isClosed(){
+        return index == null;
+    }
+
+    protected  <A> A recordGet2(long indexValue, Volume data, Serializer<A> serializer) throws IOException {
+        final long dataPos = indexValue & PHYS_OFFSET_MASK;
+        final int dataSize = (int) (indexValue>>>48);
+        if(dataPos == 0) return serializer.deserialize(new DataInput2(new byte[0]),0);
+
+        if(dataSize<MAX_RECORD_SIZE){
+            //single record
+            DataInput2 in = data.getDataInput(dataPos, dataSize);
+            final A value = serializer.deserialize(in,dataSize);
+
+            if( in.pos != dataSize + (data.isSliced()?dataPos%Volume.BUF_SIZE:0))
+                throw new InternalError("Data were not fully read.");
+            return value;
+        }else{
+            //large linked record
+            ArrayList<DataInput2> ins = new ArrayList<DataInput2>();
+            ArrayList<Integer> sizes = new ArrayList<Integer>();
+            int recSize = 0;
+            long nextLink = indexValue;
+            while(nextLink!=0){
+                int currentSize = (int) (nextLink>>>48);
+                recSize+= currentSize-8;
+                DataInput2 in = data.getDataInput(nextLink & PHYS_OFFSET_MASK, currentSize);
+                nextLink = in.readLong();
+                ins.add(in);
+                sizes.add(currentSize - 8);
+            }
+            //construct byte[]
+            byte[] b = new byte[recSize];
+            int pos = 0;
+            for(int i=0;i<ins.size();i++){
+                DataInput2 in = ins.set(i,null);
+                int size = sizes.get(i);
+                in.readFully(b, pos, size);
+                pos+=size;
+            }
+            DataInput2 in = new DataInput2(b);
+            final A value = serializer.deserialize(in,recSize);
+
+            if( in.pos != recSize)
+                throw new InternalError("Data were not fully read.");
+            return value;
+        }
+    }
+
+
+
+    protected void freePhysRecPut(final long indexValue) throws IOException {
+        if((indexValue &PHYS_OFFSET_MASK)==0) throw new InternalError("zero indexValue: ");
+        final int size =  (int) (indexValue>>>48);
+
+        final long listRecid = RECID_FREE_PHYS_RECORDS_START + freePhysRecSize2FreeSlot(size);
+        longStackPut(listRecid, indexValue);
+    }
+
+    protected long findFreePhysSlot(int requiredSize) throws IOException {
+        int slot = freePhysRecSize2FreeSlot(requiredSize);
+        //check if this slot can contain smaller records,
+        if(requiredSize>1 && slot==freePhysRecSize2FreeSlot(requiredSize-1))
+            slot ++; //yes, in this case we have to start at next slot with bigger record and divide it
+
+        while(slot< NUMBER_OF_PHYS_FREE_SLOT){
+
+            final long v = longStackTake(RECID_FREE_PHYS_RECORDS_START +slot);
+            if(v!=0){
+                //we found it, check if we need to split record
+                final int foundRecSize = (int) (v>>>48);
+                if(foundRecSize!=requiredSize){
+
+                    //yes we need split
+                    final long newIndexValue =
+                            ((long)(foundRecSize - requiredSize)<<48) | //encode size into new free record
+                                    (v & PHYS_OFFSET_MASK) +   requiredSize; //and encode new free record phys offset
+                    freePhysRecPut(newIndexValue);
+                }
+
+                //return offset combined with required size
+                return (v & PHYS_OFFSET_MASK) |
+                        (((long)requiredSize)<<48);
+            }else{
+                slot++;
+            }
+        }
+        return 0;
+
+    }
+
+    @Override
+    public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer){
+        if(expectedOldValue == null||newValue==null||serializer==null) throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        try{
+            lock.writeLock().lock();
+            Object oldVal = get(recid, serializer);
+            if((oldVal==null && expectedOldValue==null)|| (oldVal!=null && oldVal.equals(expectedOldValue))){
+                update(recid, newValue, serializer);
+                return true;
+            }else{
+                return false;
+            }
+        }finally{
+            lock.writeLock().unlock();
+        }
+
+    }
+
+
+
+    @Override
+    public boolean isReadOnly() {
+        return readOnly;
+    }
+
+
+    protected void unlinkPhysRecord(long indexVal, long recid) throws IOException {
+        int size = (int) (indexVal >>>48);
+        if(size==0) return;
+        if(size<MAX_RECORD_SIZE){
+            freePhysRecPut(indexVal);
+        }else{
+            while(indexVal!=0){
+                //traverse linked record
+                long nextIndexVal = phys.getLong(indexVal&PHYS_OFFSET_MASK);
+                freePhysRecPut(indexVal);
+                indexVal = nextIndexVal;
+            }
+        }
+
+    }
+
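+    /*
+     * Compaction copies all live records into a temporary ".compact" store and
+     * then swaps the files: the old index/phys files are renamed aside and the
+     * compacted ones are renamed into their place.
+     */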
+    @Override
+    public void compact(){
+        if(readOnly) throw new IllegalAccessError();
+        if(index.getFile()==null) throw new UnsupportedOperationException("compact not supported for memory storage yet");
+        lock.writeLock().lock();
+        try{
+            //create secondary files for compaction
+            //TODO RAF
+            //TODO memory based stores
+            final File indexFile = index.getFile();
+            final File physFile = phys.getFile();
+            final boolean isRaf = index instanceof Volume.RandomAccessFileVol;
+            Volume.Factory fab = Volume.fileFactory(false, isRaf, new File(indexFile+".compact"));
+            StorageDirect store2 = new StorageDirect(fab);
+
+            //transfer stack of free recids
+            for(long recid =longStackTake(RECID_FREE_INDEX_SLOTS);
+                recid!=0; recid=longStackTake(RECID_FREE_INDEX_SLOTS)){
+                store2.longStackPut(RECID_FREE_INDEX_SLOTS, recid); //arguments are (listRecid, value)
+            }
+
+            //iterate over recids and transfer physical records
+            final long indexSize = index.getLong(RECID_CURRENT_INDEX_FILE_SIZE*8)/8;
+
+
+            store2.lock.writeLock().lock();
+            for(long recid = INDEX_OFFSET_START; recid<indexSize;recid++){
+                //read data from first store
+                long physOffset = index.getLong(recid*8);
+                long physSize = physOffset >>> 48;
+                //TODO linked records larger than 64KB
+                physOffset = physOffset & PHYS_OFFSET_MASK;
+
+                //write index value into second storage
+                store2.index.ensureAvailable(recid*8+8);
+
+                //get free place in second store, and write data there
+                if(physSize!=0){
+                    DataInput2 in = phys.getDataInput(physOffset, (int)physSize);
+                    long physOffset2 =
+                            store2.freePhysRecTake((int)physSize) & PHYS_OFFSET_MASK;
+
+                    store2.phys.ensureAvailable((physOffset2 & PHYS_OFFSET_MASK)+physSize);
+                    synchronized (in.buf){
+                        //copy directly from buffer
+                        in.buf.limit((int) (in.pos+physSize));
+                        in.buf.position(in.pos);
+                        store2.phys.putData(physOffset2, in.buf);
+                    }
+                    store2.index.putLong(recid*8, (physSize<<48)|physOffset2);
+                }else{
+                    //just write zeroes
+                    store2.index.putLong(recid*8, 0);
+                }
+            }
+
+            store2.index.putLong(RECID_CURRENT_INDEX_FILE_SIZE*8, indexSize*8);
+
+            File indexFile2 = store2.index.getFile();
+            File physFile2 = store2.phys.getFile();
+            store2.lock.writeLock().unlock();
+            store2.close();
+
+            long time = System.currentTimeMillis();
+            File indexFile_ = new File(indexFile.getPath()+"_"+time+"_orig");
+            File physFile_ = new File(physFile.getPath()+"_"+time+"_orig");
+
+            index.close();
+            phys.close();
+            if(!indexFile.renameTo(indexFile_))throw new InternalError();
+            if(!physFile.renameTo(physFile_))throw new InternalError();
+
+            if(!indexFile2.renameTo(indexFile))throw new InternalError();
+            //TODO process may fail in middle of rename, analyze sequence and add recovery
+            if(!physFile2.renameTo(physFile))throw new InternalError();
+
+            indexFile_.delete();
+            physFile_.delete();
+
+            Volume.Factory fac2 = Volume.fileFactory(false, isRaf, indexFile);
+            index = fac2.createIndexVolume();
+            phys = fac2.createPhysVolume();
+
+        }catch(IOException e){
+            throw new IOError(e);
+        }finally {
+            lock.writeLock().unlock();
+        }
+    }
+
+
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageJournaled.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageJournaled.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/StorageJournaled.java	(revision 29363)
@@ -0,0 +1,713 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.IOError;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+
+/**
+ * StorageDirect subclass which provides transactions backed by a write-ahead journal.
+ * Index file data are kept in memory plus the transaction journal; phys file data are stored only in the transaction journal until commit.
+ *
+ * @author Jan Kotek
+ */
+public class StorageJournaled extends StorageDirect implements Engine {
+
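+    /*
+     * Journal instructions are encoded as a single long: the upper 16 bits hold
+     * the instruction code, the lower 48 bits (PHYS_OFFSET_MASK) hold the target
+     * file offset. replayLogFile() decodes them the same way.
+     */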
+    protected static final long WRITE_INDEX_LONG = 1L <<48;
+    protected static final long WRITE_INDEX_LONG_ZERO = 2L <<48;
+    protected static final long WRITE_PHYS_LONG = 3L <<48;
+    protected static final long WRITE_PHYS_ARRAY = 4L <<48;
+
+    protected static final long WRITE_SKIP_BUFFER = 444L <<48;
+    /** last instruction in log file */
+    protected static final long WRITE_SEAL = 111L <<48;
+    /** written at offset 8 of the log file; indicates that the log was completely written */
+    protected static final long LOG_SEAL = 4566556446554645L;
+    public static final String TRANS_LOG_FILE_EXT = ".t";
+
+
+    protected Volume transLog;
+    protected final Volume.Factory volFac;
+    protected long transLogOffset;
+
+
+    protected long indexSize;
+    protected long physSize;
+    protected final LongMap<long[]> recordLogRefs = new LongHashMap<long[]>();
+    protected final LongMap<Long> recordIndexVals = new LongHashMap<Long>();
+    protected final LongMap<long[]> longStackPages = new LongHashMap<long[]>();
+    protected final LongMap<ArrayList<Long>> transLinkedPhysRecods = new LongHashMap<ArrayList<Long>>();
+
+
+    public StorageJournaled(Volume.Factory volFac){
+        this(volFac, false, false, false, false);
+    }
+
+    public StorageJournaled(Volume.Factory volFac, boolean appendOnly,
+                            boolean deleteFilesOnExit, boolean failOnWrongHeader, boolean readOnly) {
+        super(volFac,  appendOnly, deleteFilesOnExit, failOnWrongHeader, readOnly);
+        lock.writeLock().lock();
+        try{
+            this.volFac = volFac;
+            this.transLog = volFac.createTransLogVolume();
+            reloadIndexFile();
+            replayLogFile();
+            transLog = null;
+        }finally{
+            lock.writeLock().unlock();
+        }
+    }
+
+
+    protected void reloadIndexFile() {
+        transLogOffset = 0;
+        writeLock_checkLocked();
+        recordLogRefs.clear();
+        recordIndexVals.clear();
+        longStackPages.clear();
+        transLinkedPhysRecods.clear();
+        indexSize = index.getLong(RECID_CURRENT_INDEX_FILE_SIZE *8);
+        physSize = index.getLong(RECID_CURRENT_PHYS_FILE_SIZE*8);
+        writeLock_checkLocked();
+    }
+
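+    /*
+     * The transaction log starts with a 16-byte header: the HEADER magic at
+     * offset 0 and a seal slot at offset 8, which stays 0 until commit() writes
+     * LOG_SEAL there to mark the log as complete and safe to replay.
+     */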
+    protected void openLogIfNeeded(){
+       if(transLog!=null) return;
+       transLog = volFac.createTransLogVolume();
+       transLog.ensureAvailable(16);
+       transLog.putLong(0, HEADER);
+       transLog.putLong(8, 0L);
+       transLogOffset = 16;
+    }
+
+
+
+
+
+    @Override
+    public <A> long put(A value, Serializer<A> serializer) {
+        if(value==null||serializer==null) throw new NullPointerException();
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out,value);
+
+            try{
+                lock.writeLock().lock();
+                //update index file, find free recid
+                long recid = longStackTake(RECID_FREE_INDEX_SLOTS);
+                if(recid == 0){
+                    //could not reuse recid, so create new one
+                    if(indexSize%8!=0) throw new InternalError();
+                    recid = indexSize/8;
+                    indexSize+=8;
+                }
+
+                if(out.pos<MAX_RECORD_SIZE){
+                    //get physical record
+                    // first 16 bits are the record size, the remaining 48 bits are the record offset in the phys file
+                    final long indexValue = out.pos!=0?
+                        freePhysRecTake(out.pos):0L;
+                    writeIndexValToTransLog(recid, indexValue);
+
+                    //write new phys data into trans log
+                    writeOutToTransLog(out, recid, indexValue);
+                    checkBufferRounding();
+                }else{
+                    putLargeLinkedRecord(out, recid);
+                }
+
+
+
+                return recid-INDEX_OFFSET_START;
+            }finally {
+                lock.writeLock().unlock();
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
+    private void putLargeLinkedRecord(DataOutput2 out, long recid) throws IOException {
+        openLogIfNeeded();
+        //large size, needs to link multiple records together
+        //start splitting from end, so we can build up linked list
+        final int chunkSize = MAX_RECORD_SIZE-8;
+        int lastArrayPos = out.pos;
+        int arrayPos = out.pos - out.pos%chunkSize;
+        long lastChunkPhysId = 0;
+        ArrayList<Long> journalRefs = new ArrayList<Long>();
+        ArrayList<Long> physRecords = new ArrayList<Long>();
+        while(arrayPos>=0){
+            final int currentChunkSize = lastArrayPos-arrayPos;
+            byte[] b = new byte[currentChunkSize+8]; //TODO reuse byte[]
+            //append reference to prev physId
+            ByteBuffer.wrap(b).putLong(0, lastChunkPhysId);
+            //copy chunk
+            System.arraycopy(out.buf, arrayPos, b, 8, currentChunkSize);
+            //and write current chunk
+            lastChunkPhysId = freePhysRecTake(currentChunkSize+8);
+            physRecords.add(lastChunkPhysId);
+            //phys.putData(lastChunkPhysId&PHYS_OFFSET_MASK, b, b.length);
+
+            transLog.ensureAvailable(transLogOffset+10+currentChunkSize+8);
+            transLog.putLong(transLogOffset, WRITE_PHYS_ARRAY|(lastChunkPhysId&PHYS_OFFSET_MASK));
+            transLogOffset+=8;
+            transLog.putUnsignedShort(transLogOffset, currentChunkSize+8);
+            transLogOffset+=2;
+            final Long transLogReference = (((long)currentChunkSize)<<48)|(transLogOffset+8);
+            journalRefs.add(transLogReference);
+            transLog.putData(transLogOffset,b, b.length);
+            transLogOffset+=b.length;
+
+            checkBufferRounding();
+
+            lastArrayPos = arrayPos;
+            arrayPos-=chunkSize;
+        }
+        transLinkedPhysRecods.put(recid,physRecords);
+        writeIndexValToTransLog(recid, lastChunkPhysId);
+        long[] journalRefs2 = new long[journalRefs.size()];
+        for(int i=0;i<journalRefs2.length;i++){
+            journalRefs2[i] = journalRefs.get(i);
+        }
+        recordLogRefs.put(recid, journalRefs2);
+    }
+
+    protected void checkBufferRounding() throws IOException {
+        if(transLogOffset%Volume.BUF_SIZE > Volume.BUF_SIZE - MAX_RECORD_SIZE*2){
+            //position is too close to the end of the ByteBuffer (1GB)
+            //so start writing into new buffer
+            transLog.ensureAvailable(transLogOffset+8);
+            transLog.putLong(transLogOffset,WRITE_SKIP_BUFFER);
+            transLogOffset += Volume.BUF_SIZE-transLogOffset%Volume.BUF_SIZE;
+        }
+    }
+
+    protected void writeIndexValToTransLog(long recid, long indexValue) throws IOException {
+        //write new index value into transaction log
+        openLogIfNeeded();
+        transLog.ensureAvailable(transLogOffset+16);
+        transLog.putLong(transLogOffset, WRITE_INDEX_LONG | (recid * 8));
+        transLogOffset+=8;
+        transLog.putLong(transLogOffset, indexValue);
+        transLogOffset+=8;
+        recordIndexVals.put(recid,indexValue);
+    }
+
+    protected void writeOutToTransLog(DataOutput2 out, long recid, long indexValue) throws IOException {
+        openLogIfNeeded();
+        transLog.ensureAvailable(transLogOffset+10+out.pos);
+        transLog.putLong(transLogOffset, WRITE_PHYS_ARRAY|(indexValue&PHYS_OFFSET_MASK));
+        transLogOffset+=8;
+        transLog.putUnsignedShort(transLogOffset, out.pos);
+        transLogOffset+=2;
+        final long transLogReference = (((long)out.pos)<<48)|transLogOffset;
+        recordLogRefs.put(recid, new long[]{transLogReference}); //store reference to transaction log, so we can load data quickly
+        transLog.putData(transLogOffset,out.buf, out.pos);
+        transLogOffset+=out.pos;
+    }
+
+
+    @Override
+    public <A> A get(long recid, Serializer<A> serializer) {
+        if(serializer==null)throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid+=INDEX_OFFSET_START;
+
+        try{
+            lock.readLock().lock();
+
+            long[] indexVals = recordLogRefs.get(recid);
+            if(indexVals!=null){
+                if(indexVals.length==1){
+                    //single record
+                    if(indexVals[0] == Long.MIN_VALUE)
+                        return null; //was deleted
+                    //record is in transaction log
+                    return recordGet2(indexVals[0], transLog, serializer);
+                }else{
+                    //read linked record from journal
+                    //first calculate total size
+                    int size = 0;
+                    for(long physId:indexVals) size+= physId>>>48;
+                    byte[] b = new byte[size];
+                    //now load it in chunks
+                    int pos = 0;
+                    for(long physId:indexVals){
+                        int curChunkSize = (int) (physId>>>48);
+                        long offset = physId&PHYS_OFFSET_MASK;
+                        DataInput2 in = transLog.getDataInput(offset, curChunkSize);
+                        in.readFully(b,pos,curChunkSize);
+                        pos+=curChunkSize;
+                    }
+                    if(size!=pos) throw new InternalError();
+                    //now deserialize
+                    DataInput2 in = new DataInput2(b);
+                    A ret = serializer.deserialize(in, size);
+                    if(in.pos!=size) throw new InternalError("Data were not fully read");
+                    return ret;
+                }
+            }else{
+                //not in transaction log, read from file
+                final long indexValue = index.getLong(recid*8) ;
+                 return recordGet2(indexValue, phys, serializer);
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }finally{
+            lock.readLock().unlock();
+        }
+    }
+
+    @Override
+    public <A> void update(long recid, A value, Serializer<A> serializer) {
+        if(value==null||serializer==null) throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid+=INDEX_OFFSET_START;
+
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out,value);
+
+            try{
+                lock.writeLock().lock();
+
+                //check if size has changed
+                long oldIndexVal = getIndexLong(recid);
+                long oldSize = oldIndexVal>>>48;
+
+                //check if we need to split new records into multiple one
+                if(out.pos<MAX_RECORD_SIZE){
+                    if(oldSize == 0 && out.pos==0){
+                        //do nothing
+                    } else if(oldSize == out.pos ){
+                        //size is the same, so just write new data
+                        writeOutToTransLog(out, recid, oldIndexVal);
+                    }else if(oldSize != 0 && out.pos==0){
+                        //new record has zero size, just delete old phys one
+                        freePhysRecPut(oldIndexVal);
+                        writeIndexValToTransLog(recid, 0L);
+                    }else{
+                        //size has changed, so write into new location
+                        final long newIndexValue = freePhysRecTake(out.pos);
+
+                        writeOutToTransLog(out, recid, newIndexValue);
+                        //update index file with new location
+                        writeIndexValToTransLog(recid, newIndexValue);
+
+                        //and set old phys record as free
+                        unlinkPhysRecord(oldIndexVal,recid);
+                    }
+                }else{
+                    unlinkPhysRecord(oldIndexVal,recid); //unlink must be first to release currently used space
+                    putLargeLinkedRecord(out, recid);
+                }
+
+
+                checkBufferRounding();
+            }finally {
+                lock.writeLock().unlock();
+            }
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+
+    }
+
+    private long getIndexLong(long recid) {
+        Long v = recordIndexVals.get(recid);
+        return (v!=null) ? v :
+             index.getLong(recid * 8);
+    }
+
+    @Override
+    public <A> void delete(long recid, Serializer<A>  serializer){
+        if(serializer==null) throw new NullPointerException();
+        if(recid<=0) throw new IllegalArgumentException("recid");
+        recid+=INDEX_OFFSET_START;
+
+        try{
+            lock.writeLock().lock();
+            openLogIfNeeded();
+
+            transLog.ensureAvailable(transLogOffset+8);
+            transLog.putLong(transLogOffset, WRITE_INDEX_LONG_ZERO | (recid*8));
+            transLogOffset+=8;
+            longStackPut(RECID_FREE_INDEX_SLOTS,recid);
+            recordLogRefs.put(recid, new long[]{Long.MIN_VALUE});
+            //check if is in transaction
+            long oldIndexVal = getIndexLong(recid);
+            recordIndexVals.put(recid,0L);
+            unlinkPhysRecord(oldIndexVal,recid);
+
+
+            checkBufferRounding();
+
+        }catch(IOException e){
+            throw new IOError(e);
+        }finally {
+            lock.writeLock().unlock();
+        }
+    }
+
+
+    @Override
+    public void close() {
+        super.close();
+
+        if(transLog!=null){
+             transLog.sync();
+             transLog.close();
+             if(deleteFilesOnExit){
+                transLog.deleteFile();
+            }
+        }
+
+        transLog = null;
+        //TODO delete trans log logic
+    }
+
+    @Override
+    public void commit() {
+        try{
+            lock.writeLock().lock();
+
+            //dump long stack pages
+            LongMap.LongMapIterator<long[]> iter = longStackPages.longMapIterator();
+            while(iter.moveToNext()){
+                transLog.ensureAvailable(transLogOffset+8+2+LONG_STACK_PAGE_SIZE);
+                transLog.putLong(transLogOffset, WRITE_PHYS_ARRAY|iter.key());
+                transLogOffset+=8;
+                transLog.putUnsignedShort(transLogOffset, LONG_STACK_PAGE_SIZE);
+                transLogOffset+=2;
+                for(long l:iter.value()){
+                    transLog.putLong(transLogOffset, l);
+                    transLogOffset+=8;
+                }
+                checkBufferRounding();
+            }
+
+            //update physical and logical filesize
+            writeIndexValToTransLog(RECID_CURRENT_PHYS_FILE_SIZE, physSize);
+            writeIndexValToTransLog(RECID_CURRENT_INDEX_FILE_SIZE, indexSize);
+
+
+            //seal log file
+            transLog.ensureAvailable(transLogOffset+8);
+            transLog.putLong(transLogOffset, WRITE_SEAL);
+            transLogOffset+=8;
+            //flush log file
+            transLog.sync();
+            //and write a marker that it was sealed
+            transLog.putLong(8, LOG_SEAL);
+            transLog.sync();
+
+            replayLogFile();
+            reloadIndexFile();
+
+        }catch(IOException e){
+            throw new IOError(e);
+        }finally{
+            lock.writeLock().unlock();
+        }
+    }
+
+    protected void replayLogFile(){
+
+            writeLock_checkLocked();
+            transLogOffset = 0;
+
+            if(transLog!=null){
+                transLog.sync();
+            }
+
+
+            //read headers
+            if(transLog.isEmpty() || transLog.getLong(0)!=HEADER || transLog.getLong(8) !=LOG_SEAL){
+                //wrong headers, discard log
+                transLog.close();
+                transLog.deleteFile();
+                transLog = null;
+                return;
+            }
+
+
+            //all good, start replay
+            transLogOffset=16;
+            long ins = transLog.getLong(transLogOffset);
+            transLogOffset+=8;
+
+            while(ins!=WRITE_SEAL && ins!=0){
+
+                final long offset = ins&PHYS_OFFSET_MASK;
+                ins -=offset;
+
+                if(ins == WRITE_INDEX_LONG_ZERO){
+                    index.ensureAvailable(offset+8);
+                    index.putLong(offset, 0L);
+                }else if(ins == WRITE_INDEX_LONG){
+                    final long value = transLog.getLong(transLogOffset);
+                    transLogOffset+=8;
+                    index.ensureAvailable(offset+8);
+                    index.putLong(offset, value);
+                }else if(ins == WRITE_PHYS_LONG){
+                    final long value = transLog.getLong(transLogOffset);
+                    transLogOffset+=8;
+                    phys.ensureAvailable(offset+8);
+                    phys.putLong(offset, value);
+                }else if(ins == WRITE_PHYS_ARRAY){
+                    final int size = transLog.getUnsignedShort(transLogOffset);
+                    transLogOffset+=2;
+                    //transfer byte[] directly from log file without copying into memory
+                    DataInput2 input = transLog.getDataInput(transLogOffset, size);
+                    synchronized (input.buf){
+                        input.buf.position(input.pos);
+                        input.buf.limit(input.pos+size);
+                        phys.ensureAvailable(offset+size);
+                        phys.putData(offset, input.buf);
+                        input.buf.clear();
+                    }
+                    transLogOffset+=size;
+                }else if(ins == WRITE_SKIP_BUFFER){
+                    transLogOffset += Volume.BUF_SIZE-transLogOffset%Volume.BUF_SIZE;
+                }else{
+                    throw new InternalError("unknown trans log instruction: "+(ins>>>48));
+                }
+
+                ins = transLog.getLong(transLogOffset);
+                transLogOffset+=8;
+            }
+            transLogOffset=0;
+
+            //flush dbs
+            phys.sync();
+            index.sync();
+            //and discard log
+            transLog.putLong(0, 0);
+            transLog.putLong(8, 0); //destroy seal to prevent log file from being replayed
+            transLog.close();
+            transLog.deleteFile();
+            transLog = null;
+    }
+
+
+    @Override
+    public void rollback() {
+        lock.writeLock().lock();
+        try{
+        //discard trans log
+        if(transLog!=null){
+            transLog.close();
+            transLog.deleteFile();
+            transLog = null;
+        }
+
+        reloadIndexFile();
+        }finally{
+            lock.writeLock().unlock();
+        }
+
+    }
+
+    @Override
+    public void compact() {
+        lock.writeLock().lock();
+        try{
+            if(transLog!=null && !transLog.isEmpty())
+                throw new IllegalAccessError("Journal not empty; commit first, then compact");
+            super.compact();
+        }finally {
+            lock.writeLock().unlock();
+        }
+    }
+
+
+    private long[] getLongStackPage(final long physOffset, boolean read){
+        long[] buf = longStackPages.get(physOffset);
+        if(buf == null){
+            buf = new long[LONG_STACK_NUM_OF_RECORDS_PER_PAGE+1];
+            if(read)
+                for(int i=0;i<buf.length;i++){
+                    buf[i] = phys.getLong(physOffset+i*8);
+                }
+            longStackPages.put(physOffset,buf);
+        }
+        return buf;
+    }
+
+    @Override
+    protected long longStackTake(final long listRecid) throws IOException {
+        final long dataOffset = getIndexLong(listRecid) & PHYS_OFFSET_MASK;
+        if(dataOffset == 0)
+            return 0; //there is no such list, so just return 0
+
+        writeLock_checkLocked();
+
+        long[] buf = getLongStackPage(dataOffset,true);
+
+        final int numberOfRecordsInPage = (int) (buf[0]>>>(8*7));
+
+
+        if(numberOfRecordsInPage<=0)
+            throw new InternalError();
+        if(numberOfRecordsInPage>LONG_STACK_NUM_OF_RECORDS_PER_PAGE) throw new InternalError();
+
+        final long ret = buf[numberOfRecordsInPage];
+
+        final long previousListPhysid = buf[0] & PHYS_OFFSET_MASK;
+
+        //was it the only record on that page?
+        if(numberOfRecordsInPage == 1){
+            //yes, delete this page
+            long value = previousListPhysid !=0 ?
+                    previousListPhysid | (((long) LONG_STACK_PAGE_SIZE) << 48) :
+                    0L;
+            //update index so it points to previous (or none)
+            writeIndexValToTransLog(listRecid, value);
+
+            //put space used by this page into free list
+            longStackPages.remove(dataOffset); //TODO write zeroes to phys file
+            freePhysRecPut(dataOffset | (((long)LONG_STACK_PAGE_SIZE)<<48));
+        }else{
+            //no, it was not the last record on this page, so just decrement the counter
+            buf[0] = previousListPhysid | ((1L*numberOfRecordsInPage-1L)<<(8*7));
+        }
+        return ret;
+
+    }
+
+    @Override
+    protected void longStackPut(final long listRecid, final long offset) throws IOException {
+        writeLock_checkLocked();
+
+        //locate the current page of this list (lower 48 bits of the index value hold the page offset)
+        final long listPhysid2 = getIndexLong(listRecid) & PHYS_OFFSET_MASK;
+
+        if(listPhysid2 == 0){ //empty list?
+            //yes empty, create new page and fill it with values
+            final long listPhysid = freePhysRecTake(LONG_STACK_PAGE_SIZE) &PHYS_OFFSET_MASK;
+            if(listPhysid == 0) throw new InternalError();
+            long[] buf = getLongStackPage(listPhysid,false);
+            //set number of records in this page to 1
+            buf[0] = 1L<<(8*7);
+            //set the record
+            buf[1] = offset;
+            //and update index file with new page location
+            writeIndexValToTransLog(listRecid, (((long) LONG_STACK_PAGE_SIZE) << 48) | listPhysid);
+        }else{
+            long[] buf = getLongStackPage(listPhysid2,true);
+            final int numberOfRecordsInPage = (int) (buf[0]>>>(8*7));
+            if(numberOfRecordsInPage == LONG_STACK_NUM_OF_RECORDS_PER_PAGE){ //is current page full?
+                //yes it is full, so we need to allocate new page and write our number there
+                final long listPhysid = freePhysRecTake(LONG_STACK_PAGE_SIZE) &PHYS_OFFSET_MASK;
+                if(listPhysid == 0) throw new InternalError();
+                long[] bufNew = getLongStackPage(listPhysid,false);
+                //link to previous page and set number of records in this page to 1
+                bufNew[0] = listPhysid2 | (1L<<(8*7));
+                //set the record
+                bufNew[1] = offset;
+                //and update index file with new page location
+                writeIndexValToTransLog(listRecid,(((long) LONG_STACK_PAGE_SIZE) << 48) | listPhysid);
+            }else{
+                //there is space on page, so just write released recid and increase the counter
+                buf[1+numberOfRecordsInPage] = offset;
+                buf[0] = (buf[0]&PHYS_OFFSET_MASK) | ((1L*numberOfRecordsInPage+1L)<<(8*7));
+            }
+        }
+    }
+
+
+
+    @Override
+	protected long freePhysRecTake(final int requiredSize) throws IOException {
+        writeLock_checkLocked();
+
+        if(requiredSize<=0) throw new InternalError();
+
+        long freePhysRec = appendOnly? 0L:
+                findFreePhysSlot(requiredSize);
+        if(freePhysRec!=0){
+            return freePhysRec;
+        }
+
+        //No free records found, so let's increase the file size.
+        //We need to take care of growing ByteBuffers.
+        //Also the max size of a ByteBuffer is 2GB, so we need to use multiple ones.
+
+        final long oldFileSize = physSize;
+        if(oldFileSize <=0) throw new InternalError("illegal file size:"+oldFileSize);
+
+        //check if new record would be overflowing BUF_SIZE
+        if(oldFileSize%Volume.BUF_SIZE+requiredSize<=Volume.BUF_SIZE){
+            //no, so just increase the file size
+            physSize+=requiredSize;
+
+            //and return the new record (size in the upper 16 bits, offset in the lower 48)
+            return (((long)requiredSize)<<48) | oldFileSize;
+        }else{
+            //the new record would cross the 2GB ByteBuffer boundary,
+            //so create an empty 'padding' record up to that boundary
+
+            final long  freeSizeToCreate = Volume.BUF_SIZE -  oldFileSize%Volume.BUF_SIZE;
+            if(freeSizeToCreate == 0) throw new InternalError();
+
+            final long nextBufferStartOffset = oldFileSize + freeSizeToCreate;
+            if(nextBufferStartOffset%Volume.BUF_SIZE!=0) throw new InternalError();
+
+            //increase the disk size
+            physSize += freeSizeToCreate + requiredSize;
+
+            //mark 'padding' free record
+            freePhysRecPut((freeSizeToCreate<<48)|oldFileSize);
+
+            //and finally return position at beginning of new buffer
+            return (((long)requiredSize)<<48) | nextBufferStartOffset;
+        }
+
+    }
+
+    @Override
+    protected void unlinkPhysRecord(long indexVal, long recid) throws IOException {
+        if(indexVal == 0) return;
+
+        ArrayList<Long> linkedInTrans = transLinkedPhysRecods.remove(recid);
+        if(linkedInTrans!=null){
+            for(Long l:linkedInTrans){
+                freePhysRecPut(l);
+            }
+            return;
+        }
+
+        if((indexVal>>>48)<MAX_RECORD_SIZE){  //check size
+            //single record
+            freePhysRecPut(indexVal);
+            return;
+        }
+
+        while(indexVal!=0){
+            freePhysRecPut(indexVal);
+            final long offset = indexVal & PHYS_OFFSET_MASK;
+            indexVal = phys.getLong(offset); //read next value
+        }
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxBlock.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxBlock.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxBlock.java	(revision 29363)
@@ -0,0 +1,9 @@
+package org.mapdb;
+
+/**
+ * Wraps a single transaction in a block.
+ */
+public interface TxBlock {
+
+    void tx(DB db) throws TxRollbackException;
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxMaker.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxMaker.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxMaker.java	(revision 29363)
@@ -0,0 +1,175 @@
+package org.mapdb;
+
+import java.util.LinkedHashSet;
+import java.util.Set;
+
+/**
+ * Transaction factory
+ *
+ * @author Jan Kotek
+ */
+public class TxMaker {
+
+    protected static final Fun.Tuple2<Object, Serializer> DELETED = new Fun.Tuple2(null, Serializer.STRING_SERIALIZER);
+
+    protected Engine engine;
+
+    protected final Object lock = new Object();
+
+    protected final LongMap<TxEngine> globalMod = new LongHashMap<TxEngine>();
+
+
+    public TxMaker(Engine engine) {
+        if(engine==null) throw new IllegalArgumentException();
+        this.engine = engine;
+    }
+
+    
+    public DB makeTx(){
+        return new DB(new TxEngine(engine));
+    }
+
+    public void close() {
+        if(engine!=null)
+            engine.close();
+    }
+
+    /**
+     * Executes the given block within a single transaction.
+     * If the block throws {@code TxRollbackException}, execution is retried until it succeeds.
+     *
+     * @param txBlock block to execute
+     */
+    public void execute(TxBlock txBlock) {
+        for(;;){
+            DB tx = makeTx();
+            try{
+                txBlock.tx(tx);
+                if(!tx.isClosed())
+                    tx.commit();
+                return;
+            }catch(TxRollbackException e){
+                //failed, so try again
+            }
+        }
+    }
+
+    protected class TxEngine extends EngineWrapper{
+
+        protected LongMap<Fun.Tuple2<?, Serializer>> modItems =
+                new LongHashMap<Fun.Tuple2<?, Serializer>>();
+
+        protected Set<Long> newItems = new LinkedHashSet<Long>();
+
+
+        protected TxEngine(Engine engine) {
+            super(engine);
+        }
+
+        @Override
+        public <A> long put(A value, Serializer<A> serializer) {
+            if(isClosed()) throw new IllegalAccessError("already closed");
+            synchronized (lock){
+                long recid = engine.put(Utils.EMPTY_STRING, Serializer.EMPTY_SERIALIZER);
+                newItems.add(recid);
+                modItems.put(recid, Fun.t2(value, (Serializer)serializer));
+                globalMod.put(recid, TxEngine.this);
+                return recid;
+            }
+        }
+
+        @Override
+        public <A> A get(long recid, Serializer<A> serializer) {
+            if(isClosed()) throw new IllegalAccessError("already closed");
+            synchronized (lock){
+                Fun.Tuple2 t = modItems.get(recid);
+                if(t!=null){
+                    return (A) t.a;
+                    //TODO compare serializers?
+                }else{
+                    return super.get(recid, serializer);
+                }
+            }
+        }
+
+        @Override
+        public <A> void update(long recid, A value, Serializer<A> serializer) {
+            if(isClosed()) throw new IllegalAccessError("already closed");
+            synchronized (lock){
+                TxEngine other = globalMod.get(recid);
+                if(other!=null && other!=TxEngine.this) {
+                    rollback();
+                    throw new TxRollbackException();
+                }
+                modItems.put(recid, new Fun.Tuple2(value, serializer));
+                globalMod.put(recid, TxEngine.this);
+            }
+
+        }
+
+        @Override
+        public <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer) {
+            if(isClosed()) throw new IllegalAccessError("already closed");
+            throw new IllegalAccessError("Compare and Swap not supported in Tx mode");
+        }
+
+        @Override
+        public <A> void delete(long recid, Serializer<A> serializer){
+            if(isClosed()) throw new IllegalAccessError("already closed");
+            synchronized (lock){
+                TxEngine other = globalMod.get(recid);
+                if(other!=null && other!=TxEngine.this) {
+                    rollback();
+                    throw new TxRollbackException();
+                }
+                modItems.put(recid, DELETED);
+                globalMod.put(recid, TxEngine.this);
+            }
+        }
+
+        @Override
+        public void commit() {
+            synchronized (lock){
+                //remove locally modified items from global list
+                LongMap.LongMapIterator<Fun.Tuple2<?, Serializer>> iter = modItems.longMapIterator();
+                while(iter.moveToNext()){
+                    TxEngine other = globalMod.remove(iter.key());
+                    if(other!=TxEngine.this) throw new InternalError();
+                    Fun.Tuple2<?, Serializer> t = iter.value();
+                    engine.update(iter.key(), t.a, t.b);
+                }
+                modItems = null;
+                newItems = null;
+
+                engine.commit();
+            }
+
+        }
+
+        @Override
+        public void rollback() {
+            synchronized (lock){
+                //remove locally modified items from global list
+                LongMap.LongMapIterator iter = modItems.longMapIterator();
+                while(iter.moveToNext()){
+                    TxEngine other = globalMod.remove(iter.key());
+                    if(other!=TxEngine.this) throw new InternalError();
+                }
+                //delete preallocated items
+                for(long recid:newItems){
+                    engine.delete(recid, Serializer.EMPTY_SERIALIZER);
+                }
+                modItems = null;
+                newItems = null;
+            }
+
+        }
+
+        @Override
+        public void close() {
+            rollback();
+        }
+    }
+
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxRollbackException.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxRollbackException.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/TxRollbackException.java	(revision 29363)
@@ -0,0 +1,9 @@
+package org.mapdb;
+
+/**
+ * Exception thrown when a transaction is rolled back.
+ * @author Jan Kotek
+ */
+public class TxRollbackException extends RuntimeException {
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Utils.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Utils.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Utils.java	(revision 29363)
@@ -0,0 +1,265 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.*;
+import java.nio.ByteBuffer;
+import java.util.*;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.logging.Logger;
+
+/**
+ * Various IO related utilities
+ *
+ * @author Jan Kotek
+ * @author Nathan Sweet wrote long packer utils
+ */
+@SuppressWarnings("unchecked")
+final public class Utils {
+
+    static final Logger LOG = Logger.getLogger("JDBM");
+
+
+    @SuppressWarnings("rawtypes")
+	public static final Comparator<Comparable> COMPARABLE_COMPARATOR = new Comparator<Comparable>() {
+        @Override
+        public int compare(Comparable o1, Comparable o2) {
+            return o1.compareTo(o2);
+        }
+    };
+
+    @SuppressWarnings("rawtypes")
+	public static final Comparator<Comparable> COMPARABLE_COMPARATOR_WITH_NULLS = new Comparator<Comparable>() {
+        @Override
+        public int compare(Comparable o1, Comparable o2) {
+            return o1 == null && o2 != null ? -1 : (o1 != null && o2 == null ? 1 : o1.compareTo(o2));
+        }
+    };
+
+
+    public static final String EMPTY_STRING = "";
+    public static final String UTF8 = "UTF8";
+    public static Random RANDOM = new Random();
+
+
+    /**
+     * Pack a non-negative long into an output stream.
+     * It will occupy 1-10 bytes depending on the value (lower values occupy less space).
+     *
+     * @param out DataOutput to put value into
+     * @param value to be serialized, must be non-negative
+     * @throws java.io.IOException
+     */
+    static public void packLong(DataOutput out, long value) throws IOException {
+
+        if (value < 0) {
+            throw new IllegalArgumentException("negative value: keys=" + value);
+        }
+
+        while ((value & ~0x7FL) != 0) {
+            out.write((((int) value & 0x7F) | 0x80));
+            value >>>= 7;
+        }
+        out.write((byte) value);
+    }
+
+
+    /**
+     * Unpack a non-negative long value from the input stream.
+     *
+     * @param in The input stream.
+     * @return The long value.
+     * @throws java.io.IOException
+     */
+    static public long unpackLong(DataInput in) throws IOException {
+
+        long result = 0;
+        for (int offset = 0; offset < 64; offset += 7) {
+            long b = in.readUnsignedByte();
+            result |= (b & 0x7F) << offset;
+            if ((b & 0x80) == 0) {
+                return result;
+            }
+        }
+        throw new Error("Malformed long.");
+    }
+
+
+    /**
+     * Pack a non-negative int into an output stream.
+     * It will occupy 1-5 bytes depending on the value (lower values occupy less space).
+     *
+     * @param in DataOutput to put value into
+     * @param value to be serialized, must be non-negative
+     * @throws IOException
+     */
+
+    static public void packInt(DataOutput in, int value) throws IOException {
+        if (value < 0) {
+            throw new IllegalArgumentException("negative value: keys=" + value);
+        }
+
+        while ((value & ~0x7F) != 0) {
+            in.write(((value & 0x7F) | 0x80));
+            value >>>= 7;
+        }
+
+        in.write((byte) value);
+    }
+
+    static public int unpackInt(DataInput is) throws IOException {
+        for (int offset = 0, result = 0; offset < 32; offset += 7) {
+            int b = is.readUnsignedByte();
+            result |= (b & 0x7F) << offset;
+            if ((b & 0x80) == 0) {
+                return result;
+            }
+        }
+        throw new Error("Malformed int.");
+    }
+
+
+    public static int longHash(final long key) {
+        int h = (int)(key ^ (key >>> 32));
+        h ^= (h >>> 20) ^ (h >>> 12);
+        return h ^ (h >>> 7) ^ (h >>> 4);
+    }
+
+    /** clone value using serialization */
+    public static <E> E clone(E value, Serializer<E> serializer){
+        try{
+            DataOutput2 out = new DataOutput2();
+            serializer.serialize(out,value);
+            DataInput2 in = new DataInput2(ByteBuffer.wrap(out.copyBytes()), 0);
+
+            return serializer.deserialize(in,out.pos);
+        }catch(IOException ee){
+            throw new IOError(ee);
+        }
+    }
+
+    /** Expand array size by 1 and put value at given position. No items from the original array are lost. */
+    public static Object[] arrayPut(final Object[] array, final int pos, final Object value){
+        final Object[] ret = Arrays.copyOf(array, array.length+1);
+        if(pos<array.length){
+            System.arraycopy(array, pos, ret, pos+1, array.length-pos);
+        }
+        ret[pos] = value;
+        return ret;
+    }
+
+    public static long[] arrayLongPut(final long[] array, final int pos, final long value) {
+        final long[] ret = Arrays.copyOf(array,array.length+1);
+        if(pos<array.length){
+            System.arraycopy(array,pos,ret,pos+1,array.length-pos);
+        }
+        ret[pos] = value;
+        return ret;
+    }
+
+
+    /** Compute the smallest power of two that is >= value (minimum 2). */
+    public static int nextPowTwo(final int value){
+        int ret = 2;
+        while(ret<value)
+            ret = ret<<1;
+        return ret;
+    }
+
+    /**
+     * Create temporary file in temp folder. All associated db files will be deleted on JVM exit.
+     */
+    public static File tempDbFile() {
+        try{
+            File index = File.createTempFile("mapdb","db");
+            index.deleteOnExit();
+            new File(index.getPath()+StorageDirect.DATA_FILE_EXT).deleteOnExit();
+            new File(index.getPath()+ StorageJournaled.TRANS_LOG_FILE_EXT).deleteOnExit();
+
+            return index;
+        }catch(IOException e){
+            throw new IOError(e);
+        }
+    }
+
+    /** check if Operating System is Windows */
+    public static boolean isWindows(){
+        String os = System.getProperty("os.name");
+        return os!=null && (os.toLowerCase().indexOf("win") >= 0);
+
+    }
+
+    /** check if Operating System is Android */
+    public static boolean isAndroid(){
+        return "Dalvik".equalsIgnoreCase(System.getProperty("java.vm.name"));
+    }
+
+
+    /**
+     * Check if large files can be mapped into memory.
+     * For example, a 32bit JVM can only address 2GB, so large files cannot be mapped
+     * and this function returns false for a 32bit JVM.
+     *
+     */
+    public static boolean JVMSupportsLargeMappedFiles() {
+        String prop = System.getProperty("os.arch");
+        if(prop!=null && prop.contains("64")) return true;
+        //TODO better check for 32bit JVM
+        return false;
+    }
+
+    private static boolean collectionAsMapValueLogged = false;
+
+    public static void checkMapValueIsNotCollecion(Object value){
+        if(!CC.LOG_HINTS || collectionAsMapValueLogged) return;
+        if(value instanceof Collection || value instanceof Map){
+            collectionAsMapValueLogged = true;
+            LOG.warning("You should not use collections as Map values. MapDB requires keys/values to be immutable! Check out the MultiMap example for 1:N mapping.");
+        }
+    }
+
+    public static void printer(final AtomicLong value){
+        new Thread("printer"){
+            {
+                setDaemon(true);
+            }
+
+
+            @Override
+            public void run() {
+                long startValue = value.get();
+                long startTime = System.currentTimeMillis();
+                long old = value.get();
+                while(true){
+
+                    try {
+                        Thread.sleep(1000);
+                    } catch (InterruptedException e) {
+                        return;
+                    }
+
+                    long current = value.get();
+                    long totalSpeed = 1000*(current-startValue)/(System.currentTimeMillis()-startTime);
+                    System.out.print("total: "+current+" - items per last second: "+(current-old)+" - avg items per second: "+totalSpeed+"\r");
+                    old = current;
+                }
+
+            }
+        }.start();
+    }
+
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/mapdb/Volume.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/mapdb/Volume.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/mapdb/Volume.java	(revision 29363)
@@ -0,0 +1,857 @@
+/*
+ *  Copyright (c) 2012 Jan Kotek
+ *
+ *  Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ *  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.mapdb;
+
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOError;
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.nio.ByteBuffer;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.AsynchronousFileChannel;
+import java.nio.channels.FileChannel;
+import java.nio.file.StandardOpenOption;
+import java.util.Arrays;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.logging.Level;
+
+/**
+ * MapDB abstraction over raw storage (file, disk partition, memory etc...)
+ *
+ * @author Jan Kotek
+ */
+public abstract class Volume {
+
+    public static final int BUF_SIZE = 1<<30;
+    public static final int INITIAL_SIZE = 1024*32;
+
+    abstract public void ensureAvailable(final long offset);
+
+    abstract public void putLong(final long offset, final long value);
+    abstract public void putInt(long offset, int value);
+    abstract public void putByte(final long offset, final byte value);
+
+    abstract public void putData(final long offset, final byte[] value, int size);
+    abstract public void putData(final long offset, final ByteBuffer buf);
+
+    abstract public long getLong(final long offset);
+    abstract public int getInt(long offset);
+    abstract public byte getByte(final long offset);
+
+
+
+    abstract public DataInput2 getDataInput(final long offset, final int size);
+
+    abstract public void close();
+
+    abstract public void sync();
+
+    public abstract boolean isEmpty();
+
+    public abstract void deleteFile();
+
+    public abstract boolean isSliced();
+
+
+    public final void putUnsignedShort(final long offset, final int value){
+        putByte(offset, (byte) (value>>8));
+        putByte(offset+1, (byte) (value));
+    }
+
+    public final int getUnsignedShort(long offset) {
+        return (( (getByte(offset) & 0xff) << 8) |
+                ( (getByte(offset+1) & 0xff)));
+    }
+
+    public int getUnsignedByte(long offset) {
+        return getByte(offset) & 0xff;
+    }
+
+    public void putUnsignedByte(long offset, int b) {
+        putByte(offset, (byte)(b & 0xff));
+    }
+
+    /** returns underlying file if it exists */
+    abstract public File getFile();
+
+
+    /**
+     * Factory which creates two/three volumes used by each MapDB Storage Engine
+     */
+    public static interface Factory {
+        Volume createIndexVolume();
+        Volume createPhysVolume();
+        Volume createTransLogVolume();
+    }
+
+    public static Volume volumeForFile(File f, boolean useRandomAccessFile, boolean readOnly) {
+        return useRandomAccessFile ?
+                new RandomAccessFileVol(f, readOnly):
+                new MappedFileVol(f, readOnly);
+    }
+
+
+    public static Factory fileFactory(final boolean readOnly, final boolean RAF, final File indexFile){
+        return fileFactory(readOnly, RAF, indexFile,
+                new File(indexFile.getPath() + StorageDirect.DATA_FILE_EXT),
+                new File(indexFile.getPath() + StorageJournaled.TRANS_LOG_FILE_EXT));
+    }
+
+    public static Factory fileFactory(final boolean readOnly,
+                                      final boolean RAF,
+                                      final File indexFile,
+                                      final File physFile,
+                                      final File transLogFile) {
+        return new Factory() {
+            @Override
+            public Volume createIndexVolume() {
+                return volumeForFile(indexFile, RAF, readOnly);
+            }
+
+            @Override
+            public Volume createPhysVolume() {
+                return volumeForFile(physFile, RAF, readOnly);
+            }
+
+            @Override
+            public Volume createTransLogVolume() {
+                return volumeForFile(transLogFile, RAF, readOnly);
+            }
+        };
+    }
+
+
+    public static Factory memoryFactory(final boolean useDirectBuffer) {
+        return new Factory() {
+
+            @Override public Volume createIndexVolume() {
+                return new MemoryVol(useDirectBuffer);
+            }
+
+            @Override public Volume createPhysVolume() {
+                return new MemoryVol(useDirectBuffer);
+            }
+
+            @Override public Volume createTransLogVolume() {
+                return new MemoryVol(useDirectBuffer);
+            }
+        };
+    }
+
+
+    /**
+     * Abstract Volume over a bunch of ByteBuffers.
+     * It leaves ByteBuffer details (allocation, disposal) to subclasses.
+     * Most methods are final for better performance (the JIT compiler can inline them).
+     */
+    abstract static public class ByteBufferVol extends Volume{
+
+
+        protected final ReentrantLock growLock = new ReentrantLock();
+
+        protected ByteBuffer[] buffers;
+        protected final boolean readOnly;
+
+        protected ByteBufferVol(boolean readOnly) {
+            this.readOnly = readOnly;
+        }
+
+
+        @Override
+        public final void ensureAvailable(long offset) {
+            int buffersPos = (int) (offset/ BUF_SIZE);
+
+            //check for most common case, this is already mapped
+            if(buffersPos<buffers.length && buffers[buffersPos]!=null &&
+                    buffers[buffersPos].capacity()>=offset% BUF_SIZE)
+                return;
+
+            growLock.lock();
+            try{
+                //check second time
+                if(buffersPos<buffers.length && buffers[buffersPos]!=null &&
+                        buffers[buffersPos].capacity()>=offset% BUF_SIZE)
+                    return;
+
+
+                //grow array if necessary
+                if(buffersPos>=buffers.length){
+                    buffers = Arrays.copyOf(buffers, Math.max(buffersPos+1, buffers.length * 2));
+                }
+
+                //just remap file buffer
+                ByteBuffer newBuf = makeNewBuffer(offset);
+                if(readOnly)
+                    newBuf = newBuf.asReadOnlyBuffer();
+
+                buffers[buffersPos] = newBuf;
+            }finally{
+                growLock.unlock();
+            }
+        }
+
+        protected abstract ByteBuffer makeNewBuffer(long offset);
+
+        protected final ByteBuffer internalByteBuffer(long offset) {
+            final int pos = ((int) (offset / BUF_SIZE));
+            if(pos>=buffers.length) throw new IOError(new EOFException("offset: "+offset));
+            return buffers[pos];
+        }
+
+
+
+        @Override public final void putLong(final long offset, final long value) {
+            internalByteBuffer(offset).putLong((int) (offset% BUF_SIZE), value);
+        }
+
+        @Override public final void putInt(final long offset, final int value) {
+            internalByteBuffer(offset).putInt((int) (offset% BUF_SIZE), value);
+        }
+
+
+        @Override public final void putByte(final long offset, final byte value) {
+            internalByteBuffer(offset).put((int) (offset % BUF_SIZE), value);
+        }
+
+
+
+        @Override public final void putData(final long offset, final byte[] value, final int size) {
+            final ByteBuffer b1 = internalByteBuffer(offset);
+            final int bufPos = (int) (offset% BUF_SIZE);
+
+            synchronized (b1){
+                b1.position(bufPos);
+                b1.put(value, 0, size);
+            }
+        }
+
+        @Override public final void putData(final long offset, final ByteBuffer buf) {
+            final ByteBuffer b1 = internalByteBuffer(offset);
+            final int bufPos = (int) (offset% BUF_SIZE);
+            //no overlap, so just write the value
+            synchronized (b1){
+                b1.position(bufPos);
+                b1.put(buf);
+            }
+        }
+
+        @Override final public long getLong(long offset) {
+            try{
+                return internalByteBuffer(offset).getLong((int) (offset% BUF_SIZE));
+            }catch(IndexOutOfBoundsException e){
+                throw new IOError(new EOFException());
+            }
+        }
+
+        @Override final public int getInt(long offset) {
+            try{
+                return internalByteBuffer(offset).getInt((int) (offset% BUF_SIZE));
+            }catch(IndexOutOfBoundsException e){
+                throw new IOError(new EOFException());
+            }
+        }
+
+
+        @Override public final byte getByte(long offset) {
+            try{
+                return internalByteBuffer(offset).get((int) (offset% BUF_SIZE));
+            }catch(IndexOutOfBoundsException e){
+                throw new IOError(new EOFException());
+            }
+        }
+
+
+        @Override
+        public final DataInput2 getDataInput(long offset, int size) {
+            final ByteBuffer b1 = internalByteBuffer(offset);
+            final int bufPos = (int) (offset% BUF_SIZE);
+            return new DataInput2(b1, bufPos);
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return buffers[0]==null || buffers[0].capacity()==0;
+        }
+
+        @Override
+        public boolean isSliced(){
+            return true;
+        }
+
+
+
+        /**
+         * Hack to unmap MappedByteBuffer.
+         * Unmap is necessary on Windows, otherwise file is locked until JVM exits or BB is GCed.
+         * There is no public JVM API to unmap buffer, so this tries to use SUN proprietary API for unmap.
+         * Any error is silently ignored (for example SUN API does not exist on Android).
+         */
+        protected void unmap(MappedByteBuffer b){
+            try{
+                if(unmapHackSupported){
+
+                    // need to dispose old direct buffer, see bug
+                    // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
+                    Method cleanerMethod = b.getClass().getMethod("cleaner", new Class[0]);
+                    if(cleanerMethod!=null){
+                        cleanerMethod.setAccessible(true);
+                        Object cleaner = cleanerMethod.invoke(b, new Object[0]);
+                        if(cleaner!=null){
+                            Method clearMethod = cleaner.getClass().getMethod("clean", new Class[0]);
+                            if(clearMethod!=null)
+                                clearMethod.invoke(cleaner, new Object[0]);
+                        }
+                    }
+                }
+            }catch(Exception e){
+                unmapHackSupported = false;
+                Utils.LOG.log(Level.WARNING, "ByteBufferVol Unmap failed", e);
+            }
+        }
+
+        private static boolean unmapHackSupported = true;
+        static{
+            try{
+                unmapHackSupported =
+                        Class.forName("sun.nio.ch.DirectBuffer")!=null;
+            }catch(Exception e){
+                unmapHackSupported = false;
+            }
+        }
+
+
+    }
+
+    public static final class MappedFileVol extends ByteBufferVol {
+
+        protected final File file;
+        protected final FileChannel fileChannel;
+        protected final FileChannel.MapMode mapMode;
+        protected final java.io.RandomAccessFile raf;
+
+        static final int BUF_SIZE_INC = 1024*1024;
+
+        public MappedFileVol(File file, boolean readOnly) {
+            super(readOnly);
+            this.file = file;
+            this.mapMode = readOnly? FileChannel.MapMode.READ_ONLY: FileChannel.MapMode.READ_WRITE;
+            try {
+                this.raf = new java.io.RandomAccessFile(file, readOnly?"r":"rw");
+                this.fileChannel = raf.getChannel();
+
+                final long fileSize = fileChannel.size();
+                if(fileSize>0){
+                    //map existing data
+                    buffers = new ByteBuffer[(int) (1+fileSize/BUF_SIZE)];
+                    for(int i=0;i<=fileSize/BUF_SIZE;i++){
+                        final long offset = 1L*BUF_SIZE*i;
+                        buffers[i] = fileChannel.map(mapMode, offset, Math.min(BUF_SIZE, fileSize-offset));
+                        if(mapMode == FileChannel.MapMode.READ_ONLY)
+                            buffers[i] = buffers[i].asReadOnlyBuffer();
+                        //TODO what if 'fileSize % 8 != 0'?
+                    }
+                }else{
+                    buffers = new ByteBuffer[1];
+                    buffers[0] = fileChannel.map(mapMode, 0, INITIAL_SIZE);
+                    if(mapMode == FileChannel.MapMode.READ_ONLY)
+                        buffers[0] = buffers[0].asReadOnlyBuffer();
+
+                }
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public void close() {
+            growLock.lock();
+            try{
+                fileChannel.close();
+                raf.close();
+                if(!readOnly)
+                    sync();
+                for(ByteBuffer b:buffers){
+                    if(b!=null && (b instanceof MappedByteBuffer)){
+                        unmap((MappedByteBuffer) b);
+                    }
+                }
+                buffers = null;
+            } catch (IOException e) {
+                throw new IOError(e);
+            }finally{
+                growLock.unlock();
+            }
+
+        }
+
+        @Override
+        public void sync() {
+            if(readOnly) return;
+            for(ByteBuffer b:buffers){
+                if(b!=null && (b instanceof MappedByteBuffer)){
+                    ((MappedByteBuffer)b).force();
+                }
+            }
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return buffers[0]==null || buffers[0].capacity()==0;
+        }
+
+        @Override
+        public void deleteFile() {
+            file.delete();
+        }
+
+        @Override
+        public File getFile() {
+            return file;
+        }
+
+        @Override
+        protected ByteBuffer makeNewBuffer(long offset) {
+            try {
+                //unmap old buffer on windows
+                int bufPos = (int) (offset/BUF_SIZE);
+                if(bufPos<buffers.length && buffers[bufPos]!=null){
+                    unmap((MappedByteBuffer) buffers[bufPos]);
+                    buffers[bufPos] = null;
+                }
+
+                long newBufSize =  offset% BUF_SIZE;
+                newBufSize = newBufSize - newBufSize%BUF_SIZE_INC + BUF_SIZE_INC; //round up to a multiple of BUF_SIZE_INC
+                return fileChannel.map(
+                        mapMode,
+                        offset - offset% BUF_SIZE, newBufSize );
+            } catch (IOException e) {
+                if(e.getCause()!=null && e.getCause() instanceof OutOfMemoryError){
+                    throw new RuntimeException("File could not be mapped to memory, common problem on 32bit JVM. Use `DBMaker.newRandomAccessFileDB()` as workaround",e);
+                }
+
+                throw new IOError(e);
+            }
+        }
+    }
+
+    public static final class MemoryVol extends ByteBufferVol {
+        protected final boolean useDirectBuffer;
+
+        @Override
+        public String toString() {
+            return super.toString()+",direct="+useDirectBuffer;
+        }
+
+        public MemoryVol(boolean useDirectBuffer) {
+            super(false);
+            this.useDirectBuffer = useDirectBuffer;
+            ByteBuffer b0 = useDirectBuffer?
+                    ByteBuffer.allocateDirect(INITIAL_SIZE) :
+                    ByteBuffer.allocate(INITIAL_SIZE);
+            buffers = new ByteBuffer[]{b0};
+        }
+
+        @Override protected ByteBuffer makeNewBuffer(long offset) {
+            final int newBufSize = Utils.nextPowTwo((int) (offset % BUF_SIZE));
+            //double size of existing in-memory-buffer
+            ByteBuffer newBuf = useDirectBuffer?
+                    ByteBuffer.allocateDirect(newBufSize):
+                    ByteBuffer.allocate(newBufSize);
+            final int buffersPos = (int) (offset/ BUF_SIZE);
+            final ByteBuffer oldBuffer = buffers[buffersPos];
+            if(oldBuffer!=null){
+                //copy old buffer if it exists
+                synchronized (oldBuffer){
+                    oldBuffer.rewind();
+                    newBuf.put(oldBuffer);
+                }
+            }
+            return newBuf;
+        }
+
+        @Override public void close() {
+            growLock.lock();
+            try{
+                for(ByteBuffer b:buffers){
+                    if(b!=null && (b instanceof MappedByteBuffer)){
+                        unmap((MappedByteBuffer)b);
+                    }
+                }
+                buffers = null;
+            }finally{
+                growLock.unlock();
+            }
+        }
+
+        @Override public void sync() {}
+
+        @Override public void deleteFile() {}
+
+        @Override
+        public File getFile() {
+            return null;
+        }
+    }
+
+
+    public static final class RandomAccessFileVol extends Volume{
+
+        protected final File file;
+        protected final boolean readOnly;
+
+        protected java.io.RandomAccessFile raf;
+        protected long pos;
+
+        public RandomAccessFileVol(File file, boolean readOnly) {
+            this.file = file;
+            this.readOnly = readOnly;
+
+            try {
+                this.raf = new java.io.RandomAccessFile(file, readOnly? "r":"rw");
+                this.raf.seek(0);
+                pos = 0;
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public void ensureAvailable(long offset) {
+            //we do not have a list of ByteBuffers, so ensure size does not have to do anything
+        }
+
+        @Override
+        synchronized public void putLong(long offset, long value) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+8;
+                raf.writeLong(value);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public void putInt(long offset, int value) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+4;
+                raf.writeInt(value);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+
+        @Override
+        synchronized public void putByte(long offset, byte value) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+1;
+                raf.writeByte(0xFF & value);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+
+        }
+
+        @Override
+        synchronized public void putData(long offset, byte[] value, int size) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+size;
+                raf.write(value,0,size);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public void putData(long offset, ByteBuffer buf) {
+            try {
+                int size = buf.limit()-buf.position();
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+size;
+                byte[] b = new byte[size];
+                buf.get(b);
+                putData(offset, b, size);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+
+        }
+
+        @Override
+        synchronized public long getLong(long offset) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+8;
+                return raf.readLong();
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public int getInt(long offset) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+4;
+                return raf.readInt();
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+
+        }
+
+
+        @Override
+        synchronized public byte getByte(long offset) {
+
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+1;
+                return raf.readByte();
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+
+        }
+
+        @Override
+        synchronized public DataInput2 getDataInput(long offset, int size) {
+            try {
+                if(pos!=offset){
+                    raf.seek(offset);
+                }
+                pos=offset+size;
+                byte[] b = new byte[size];
+                raf.readFully(b);
+                return new DataInput2(b);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public void close() {
+            try {
+                raf.close();
+                raf = null;
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+
+        }
+
+        @Override
+        synchronized public void sync() {
+            try {
+                raf.getFD().sync();
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        synchronized public boolean isEmpty() {
+            return file.length()==0;
+        }
+
+        @Override
+        synchronized public void deleteFile() {
+            file.delete();
+        }
+
+        @Override
+        public boolean isSliced(){
+            return false;
+        }
+
+        @Override
+        synchronized public File getFile() {
+            return file;
+        }
+    }
+
+    public static class AsyncFileChannelVol extends Volume{
+
+
+        protected AsynchronousFileChannel channel;
+        protected final boolean readOnly;
+        protected final File file;
+
+        public AsyncFileChannelVol(File file, boolean readOnly){
+            this.readOnly = readOnly;
+            this.file = file;
+            try {
+                this.channel = readOnly?
+                        AsynchronousFileChannel.open(file.toPath(),StandardOpenOption.READ):
+                        AsynchronousFileChannel.open(file.toPath(),StandardOpenOption.READ, StandardOpenOption.WRITE);
+
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public void ensureAvailable(long offset) {
+            //we do not have a list of ByteBuffers, so ensure size does not have to do anything
+        }
+
+
+
+        protected void await(Future<Integer> future, int size) {
+            try {
+                int res = future.get();
+                if(res!=size) throw new InternalError("not enough bytes");
+            } catch (InterruptedException e) {
+                throw new RuntimeException(e);
+            } catch (ExecutionException e) {
+                throw new RuntimeException(e);
+            }
+        }
+
+        @Override
+        public void putByte(long offset, byte value) {
+            ByteBuffer b = ByteBuffer.allocate(1);
+            b.put(0, value);
+            await(channel.write(b, offset),1);
+        }
+        @Override
+        public void putInt(long offset, int value) {
+            ByteBuffer b = ByteBuffer.allocate(4);
+            b.putInt(0, value);
+            await(channel.write(b, offset),4);
+        }
+
+        @Override
+        public void putLong(long offset, long value) {
+            ByteBuffer b = ByteBuffer.allocate(8);
+            b.putLong(0, value);
+            await(channel.write(b, offset),8);
+        }
+
+        @Override
+        public void putData(long offset, byte[] value, int size) {
+            ByteBuffer b = ByteBuffer.wrap(value);
+            b.limit(size);
+            await(channel.write(b,offset),size);
+        }
+
+        @Override
+        public void putData(long offset, ByteBuffer buf) {
+            await(channel.write(buf,offset), buf.limit() - buf.position());
+        }
+
+
+
+        @Override
+        public long getLong(long offset) {
+            ByteBuffer b = ByteBuffer.allocate(8);
+            await(channel.read(b, offset), 8);
+            b.rewind();
+            return b.getLong();
+        }
+
+        @Override
+        public byte getByte(long offset) {
+            ByteBuffer b = ByteBuffer.allocate(1);
+            await(channel.read(b, offset), 1);
+            b.rewind();
+            return b.get();
+        }
+
+        @Override
+        public int getInt(long offset) {
+            ByteBuffer b = ByteBuffer.allocate(4);
+            await(channel.read(b, offset), 4);
+            b.rewind();
+            return b.getInt();
+        }
+
+
+
+        @Override
+        public DataInput2 getDataInput(long offset, int size) {
+            ByteBuffer b = ByteBuffer.allocate(size);
+            await(channel.read(b, offset), size);
+            b.rewind();
+            return new DataInput2(b,0);
+        }
+
+        @Override
+        public void close() {
+            try {
+                channel.close();
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public void sync() {
+            try {
+                channel.force(true);
+            } catch (IOException e) {
+                throw new IOError(e);
+            }
+        }
+
+        @Override
+        public boolean isEmpty() {
+            return file.length()==0;
+        }
+
+        @Override
+        public void deleteFile() {
+            file.delete();
+        }
+
+        @Override
+        public boolean isSliced() {
+            return false;
+        }
+
+        @Override
+        public File getFile() {
+            return file;
+        }
+    }
+
+
+}
+
Index: applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/ImageryCachePlugin.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/ImageryCachePlugin.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/ImageryCachePlugin.java	(revision 29363)
@@ -0,0 +1,31 @@
+package org.openstreetmap.josm.plugins.imagerycache;
+
+import java.io.File;
+import org.openstreetmap.gui.jmapviewer.OsmTileLoader;
+import org.openstreetmap.gui.jmapviewer.interfaces.TileLoaderListener;
+import org.openstreetmap.josm.gui.layer.TMSLayer;
+import org.openstreetmap.josm.plugins.Plugin;
+import org.openstreetmap.josm.plugins.PluginInformation;
+
+/**
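+ * Plugin entry point: installs a custom tile loader factory so that TMS tiles
+ * are cached on disk in per-source MapDB databases (see {@link OsmDBTilesLoader}).
+ *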
+ * @author Alexei Kasatkin
+ */
+public class ImageryCachePlugin extends Plugin {
+    
+    TMSLayer.TileLoaderFactory factory = new TMSLayer.TileLoaderFactory() {
+        @Override
+        public OsmTileLoader makeTileLoader(TileLoaderListener listener) {
+            String cachePath = TMSLayer.PROP_TILECACHE_DIR.get();
+            if (cachePath != null && !cachePath.isEmpty()) {
+                return new OsmDBTilesLoader(listener, new File(cachePath));
+            }
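+            // No tile cache directory configured: returning null here is expected
+            // to make TMSLayer fall back to its default tile loader.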
+            return null;
+        }
+    };
+
+    public ImageryCachePlugin(PluginInformation info) {
+        super(info);
+        TMSLayer.setCustomTileLoaderFactory(factory);
+    }
+    
+}
Index: applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/OsmDBTilesLoader.java
===================================================================
--- applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/OsmDBTilesLoader.java	(revision 29363)
+++ applications/editors/josm/plugins/imagerycache/src/org/openstreetmap/josm/plugins/imagerycache/OsmDBTilesLoader.java	(revision 29363)
@@ -0,0 +1,388 @@
+package org.openstreetmap.josm.plugins.imagerycache;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.Serializable;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.net.URLConnection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+import java.util.logging.Level;
+import java.util.logging.Logger;
+import org.mapdb.DB;
+import org.mapdb.DBMaker;
+import org.mapdb.Serializer;
+import org.openstreetmap.gui.jmapviewer.JobDispatcher;
+import org.openstreetmap.gui.jmapviewer.OsmTileLoader;
+import org.openstreetmap.gui.jmapviewer.Tile;
+import org.openstreetmap.gui.jmapviewer.interfaces.TileJob;
+import org.openstreetmap.gui.jmapviewer.interfaces.TileLoaderListener;
+import org.openstreetmap.gui.jmapviewer.interfaces.TileSource;
+import org.openstreetmap.gui.jmapviewer.interfaces.TileSource.TileUpdate;
+
+/**
+ * 
+ * @author Alexei Kasatkin, based on OsmFileCacheTileLoader by Jan Peter Stotz and Stefan Zeller
+ */
+class OsmDBTilesLoader extends OsmTileLoader {
+    
+    
+    private static final Logger log = Logger.getLogger(OsmDBTilesLoader.class.getName());
+    public static final long FILE_AGE_ONE_DAY = 1000 * 60 * 60 * 24;
+    public static final long FILE_AGE_ONE_WEEK = FILE_AGE_ONE_DAY * 7;
+
+    
+    
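+    /**
+     * Data access object that keeps one MapDB database per tile source.
+     * Tiles are stored in a persistent hash map ("tiles") keyed by a long id.
+     */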
+    static class TileDAOMapDB {
+        protected HashMap<String, DB> dbs = new HashMap<String, DB>();
+        protected HashMap<String, Map<Long,DBTile>> storages  = new HashMap<String, Map<Long,DBTile>>();
+        private final File cacheFolder;
+        
+        /**
+         * Lazily creates the MapDB database associated with the given tile source,
+         * or returns the cached instance.
+         * @param source tile source name
+         */
+        private synchronized DB getDB(String source) {
+            DB db = dbs.get(source);
+            if (db==null) {
+                try {
+                    db = DBMaker
+                            .newFileDB(new File(cacheFolder, source.replaceAll("[\\\\/:*?\"<>| ]", "_")))
+                            .randomAccessFileEnableIfNeeded()
+                            .journalDisable()
+                            .closeOnJvmShutdown()
+                            .make();
+                    dbs.put(source, db);
+                } catch (Exception e) {
+                    log.warning("Error: Cannot create MapDB file");
+                    e.printStackTrace();
+                }
+            }
+            return db;
+        }
+
+        private synchronized Map<Long,DBTile> getStorage(String source) {
+            Map<Long, DBTile> m = storages.get(source);
+            if (m == null) {
+                try {
+                    DB d = getDB(source);
+                    m = d.getHashMap("tiles");
+                    storages.put(source, m);
+                    log.log(Level.FINEST, "Created storage {0}", source);
+                } catch (Exception e) {
+                    log.severe("Error: Cannot create HashMap in MapDB storage");
+                    e.printStackTrace();
+                }
+            }
+            return m;
+        }
+        
+        public TileDAOMapDB(File cacheFolder) {
+            this.cacheFolder = cacheFolder;
+        }
+        
+                
+        DBTile getById(String source, long id) {
+            return getStorage(source).get(id);
+        }
+
+        protected void updateModTime(String source, long id, DBTile dbTile) {
+            log.finest("Updating modification time");
+            getStorage(source).put(id, dbTile);
+        }
+
+        protected void updateTile(String source, long id, DBTile dbTile) {
+            log.finest("Updating tile in base");
+            getStorage(source).put(id, dbTile);
+        }
+
+        protected void deleteTile(String source, long id) {
+            getStorage(source).remove(id);
+        }
+
+
+    }
+            
+    TileDAOMapDB dao;
+                        
+   
+    protected long maxCacheFileAge = FILE_AGE_ONE_WEEK;
+    protected long recheckAfter = FILE_AGE_ONE_DAY;
+
+    
+    public OsmDBTilesLoader(TileLoaderListener smap, File cacheFolder) {
+        super(smap);
+        dao = new TileDAOMapDB(cacheFolder);
+    }
+    
+    @Override
+    public TileJob createTileLoaderJob(final Tile tile) {
+        return new DatabaseLoadJob(tile);
+    }
+    
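+    /**
+     * Value object stored in MapDB: the encoded tile image, its metadata
+     * and the time it was last fetched or validated.
+     */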
+    static class DBTile implements Serializable {
+        byte data[];
+        Map<String, String> metaData;
+        long lastModified;
+    }
+
+    protected class DatabaseLoadJob implements TileJob {
+
+        private final Tile tile;
+        File tileCacheDir;
+        DBTile dbTile = null;
+        long fileAge = 0;
+        boolean fileTilePainted = false;
+        
+        long id;
+        String sourceName;
+        
+        public DatabaseLoadJob(Tile tile) {
+            this.tile = tile;
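+            // Pack zoom, x and y into a single long used as the MapDB key.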
+            id = 0x01000000L * tile.getZoom() + 0x00200000L * tile.getXtile() + tile.getYtile();
+            sourceName = tile.getSource().getName();
+        }
+
+        @Override
+        public Tile getTile() {
+            return tile;
+        }
+
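+        /**
+         * Serves the tile from the local MapDB cache when possible; if the cached
+         * copy is missing or older than {@code maxCacheFileAge}, a download/refresh
+         * is performed (asynchronously when a stale cached tile was already painted).
+         */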
+        @Override
+        public void run() {
+            synchronized (tile) {
+                if ((tile.isLoaded() && !tile.hasError()) || tile.isLoading())
+                    return;
+                tile.initLoading();
+            }
+            if (loadTileFromFile()) {
+                return;
+            }
+            if (fileTilePainted) {
+                TileJob job = new TileJob() {
+                    public void run() {
+                        loadOrUpdateTile();
+                    }
+                    public Tile getTile() {
+                        return tile;
+                    }
+                };
+                JobDispatcher.getInstance().addJob(job);
+            } else {
+                loadOrUpdateTile();
+            }
+        }
+
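+        /**
+         * Tries to load the tile from the MapDB cache.
+         * @return true if a cached tile was found and is still considered fresh
+         *         (no further network access needed), false otherwise
+         */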
+        private boolean loadTileFromFile() {
+            ByteArrayInputStream bin = null;
+            try {
+                dbTile = dao.getById(sourceName, id);
+                
+                if (dbTile == null) return false;
+
+                if ("no-tile".equals(tile.getValue("tile-info")))
+                {
+                    tile.setError("No tile at this zoom level");
+                    if (dbTile!=null) {
+                        dao.deleteTile(sourceName, id);
+                    }
+                } else {
+                    bin = new ByteArrayInputStream(dbTile.data);
+                    if (bin.available() == 0)
+                        throw new IOException("Data empty");
+                    tile.loadImage(bin);
+                    bin.close();
+                }
+
+                fileAge = dbTile.lastModified;
+                boolean oldTile = System.currentTimeMillis() - fileAge > maxCacheFileAge;
+                if (!oldTile) {
+                    tile.setLoaded(true);
+                    listener.tileLoadingFinished(tile, true);
+                    fileTilePainted = true;
+                    return true;
+                }
+                listener.tileLoadingFinished(tile, true);
+                fileTilePainted = true;
+            } catch (Exception e) {
+                try {
+                    if (bin != null) {
+                        bin.close();
+                        dao.deleteTile(sourceName, id);
+                    }
+                } catch (Exception e1) {
+                }
+                dbTile = null;
+                fileAge = 0;
+            }
+            return false;
+        }
+
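+        // Timestamp that makes a revalidated tile count as "old" again once
+        // recheckAfter has elapsed, instead of waiting for the full maxCacheFileAge.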
+        long getLastModTime() {
+            return System.currentTimeMillis() - maxCacheFileAge + recheckAfter;
+        }
+                
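+        /**
+         * Downloads the tile from the server, honouring the source's update policy
+         * (If-Modified-Since, Last-Modified, ETag, If-None-Match). On HTTP 304, a
+         * matching ETag or an up-to-date Last-Modified header only the cache
+         * timestamp is refreshed; otherwise the tile data and metadata are stored
+         * back into MapDB.
+         */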
+        private void loadOrUpdateTile() {
+            
+            try {
+                URLConnection urlConn = loadTileFromOsm(tile);
+                final TileUpdate tileUpdate = tile.getSource().getTileUpdate();
+                if (dbTile != null) {
+                    switch (tileUpdate) {
+                    case IfModifiedSince:   // (1)
+                        urlConn.setIfModifiedSince(fileAge);
+                        break;
+                    case LastModified:      // (2)
+                        if (!isOsmTileNewer(fileAge)) {
+                            log.finest("LastModified test: local version is up to date: " + tile);
+                            dbTile.lastModified = getLastModTime();
+                            dao.updateModTime(sourceName, id, dbTile);
+                            return;
+                        }
+                        break;
+                    }
+                } else {
+                    dbTile = new DBTile();
+                }
+                
+                if (tileUpdate == TileSource.TileUpdate.ETag || tileUpdate == TileSource.TileUpdate.IfNoneMatch) {
+                    String fileETag = tile.getValue("etag");
+                    if (fileETag != null) {
+                        switch (tileUpdate) {
+                        case IfNoneMatch:   // (3)
+                            urlConn.addRequestProperty("If-None-Match", fileETag);
+                            break;
+                        case ETag:          // (4)
+                            if (hasOsmTileETag(fileETag)) {
+                                dbTile.lastModified = getLastModTime();
+                                dao.updateModTime(sourceName, id, dbTile);
+                                return;
+                            }
+                        }
+                    }
+                    tile.putValue("etag", urlConn.getHeaderField("ETag"));
+                }
+                if (urlConn instanceof HttpURLConnection && ((HttpURLConnection)urlConn).getResponseCode() == 304) {
+                    // If the If-Modified-Since or If-None-Match header has been set
+                    // and the server answers with HTTP 304 ("Not Modified")
+                    log.finest("Answer from HTTP: 304 / ETag test: local version is up to date: " + tile);
+                    dbTile.lastModified = getLastModTime();
+                    dao.updateModTime(sourceName, id, dbTile);
+                    return;
+                }
+
+                loadTileMetadata(tile, urlConn);
+                dbTile.metaData = tile.getMetadata();
+
+                if ("no-tile".equals(tile.getValue("tile-info")))
+                {
+                    tile.setError("No tile at this zoom level");
+                    listener.tileLoadingFinished(tile, true);
+                } else {
+                    for(int i = 0; i < 5; ++i) {
+                        if (urlConn instanceof HttpURLConnection && ((HttpURLConnection)urlConn).getResponseCode() == 503) {
+                            Thread.sleep(5000+(new Random()).nextInt(5000));
+                            continue;
+                        }
+                        log.log(Level.FINE, "Loading from OSM: {0}", tile);
+                        byte[] buffer = loadTileInBuffer(urlConn);
+                        if (buffer != null) {
+                            tile.loadImage(new ByteArrayInputStream(buffer));
+                            tile.setLoaded(true);
+                            dbTile.data = buffer;
+                            dbTile.lastModified = System.currentTimeMillis();
+                            dao.updateTile(sourceName, id, dbTile);
+                            listener.tileLoadingFinished(tile, true);
+                            break;
+                        }
+                    }
+                }
+                
+            } catch (Exception e) {
+                tile.setError(e.getMessage());
+                listener.tileLoadingFinished(tile, false);
+                try {
+                    log.log(Level.SEVERE, "Failed loading {0}: {1}", new Object[]{tile.getUrl(), e.getMessage()});
+                    e.printStackTrace();
+                } catch(IOException i) {
+                }
+            } finally {
+                tile.finishLoading();
+            }
+        }
+        
+        
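+        // Reads the whole HTTP response body into a byte array; returns null if it is empty.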
+        protected byte[] loadTileInBuffer(URLConnection urlConn) throws IOException {
+            InputStream input = urlConn.getInputStream();
+            ByteArrayOutputStream bout = new ByteArrayOutputStream(input.available());
+            byte[] buffer = new byte[2048];
+            boolean finished = false;
+            do {
+                int read = input.read(buffer);
+                if (read >= 0) {
+                    bout.write(buffer, 0, read);
+                } else {
+                    finished = true;
+                }
+            } while (!finished);
+            if (bout.size() == 0)
+                return null;
+            return bout.toByteArray();
+        }
+
+        /**
+         * Performs a <code>HEAD</code> request for retrieving the
+         * <code>LastModified</code> header value.
+         *
+         * Note: This only works with servers that provide the
+         * <code>LastModified</code> header:
+         * <ul>
+         * <li>{@link tilesources.OsmTileSource.CycleMap} - supported</li>
+         * <li>{@link tilesources.OsmTileSource.Mapnik} - not supported</li>
+         * </ul>
+         *
+         * @param fileAge modification time of the cached tile, in milliseconds
+         * @return <code>true</code> if the tile on the server is newer than the
+         *         file
+         * @throws IOException
+         */
+        protected boolean isOsmTileNewer(long fileAge) throws IOException {
+            URL url;
+            url = new URL(tile.getUrl());
+            HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
+            prepareHttpUrlConnection(urlConn);
+            urlConn.setRequestMethod("HEAD");
+            urlConn.setReadTimeout(30000); // 30 seconds read timeout
+            // System.out.println("Tile age: " + new
+            // Date(urlConn.getLastModified()) + " / "
+            // + new Date(fileAge));
+            long lastModified = urlConn.getLastModified();
+            if (lastModified == 0)
+                return true; // no LastModified time returned
+            return (lastModified > fileAge);
+        }
+
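+        /**
+         * Performs a <code>HEAD</code> request and compares the server's ETag
+         * with the cached one.
+         * @return true if they match, or if the server does not send an ETag
+         *         (the cached tile is then considered up to date)
+         */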
+        protected boolean hasOsmTileETag(String eTag) throws IOException {
+            URL url;
+            url = new URL(tile.getUrl());
+            HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
+            prepareHttpUrlConnection(urlConn);
+            urlConn.setRequestMethod("HEAD");
+            urlConn.setReadTimeout(30000); // 30 seconds read timeout
+            String osmETag = urlConn.getHeaderField("ETag");
+            if (osmETag == null)
+                return true;
+            return (osmETag.equals(eTag));
+        }
+
+    }
+    
+}
