Well, the most obvious cause of this is a memory leak in your application. But if you've thoroughly investigated using tools like JConsole, YourKit, JProfiler, or any of the other profiling and analysis tools out there, and you can eliminate your code as the source of the problem, it may be due to a bug (or bugs) in the JVM.
One symptom is an OutOfMemoryError accompanied by a message such as:
"java.lang.OutOfMemoryError: requested 32756 bytes for xxx. Out of swap space?".
Sun bug number 4697804 describes how this can happen when the garbage collector needs to allocate a little more space during its run and tries to resize the heap, but fails because the machine is out of swap space. One suggested workaround is to ensure that the JVM never tries to resize the heap, by setting the minimum heap size equal to the maximum heap size.
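For example, you can pin the heap with the standard -Xms and -Xmx options (the 512m size and the jar name here are purely illustrative; choose values appropriate to your application):

```shell
# Min heap == max heap, so the JVM never attempts to resize the heap mid-collection
java -Xms512m -Xmx512m -jar myapp.jar
```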
Another workaround is to ensure you have configured sufficient swap space on your machine to accommodate all the programs you are running concurrently.
Another cause of the OOM exception can be native memory being consumed while the heap remains relatively static. The symptom to look out for is the process size growing while heap usage stays relatively level. Native memory can be consumed by a number of things, the JIT compiler being one and NIO ByteBuffers being another. Sun bug number 6210541 discusses a still-unsolved problem where the JVM allocates a direct ByteBuffer in some circumstances that is never garbage collected, effectively eating native memory. Guy Korland's blog discusses this problem here and here.
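To see why direct buffers produce exactly this symptom, here is a minimal sketch (the 64 MB size is arbitrary) showing that a direct ByteBuffer's storage comes from native memory, not the Java heap:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // Allocates 64 MB of native (off-heap) memory; only a tiny
        // ByteBuffer object lives on the Java heap.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long heapAfter = rt.totalMemory() - rt.freeMemory();

        // The heap delta is tiny compared to the 64 MB allocated, which is
        // why process size can grow while heap usage stays level.
        System.out.println("direct capacity: " + direct.capacity());
        System.out.println("heap delta (bytes): " + (heapAfter - heapBefore));
    }
}
```

If such a buffer is never garbage collected, as in the bug above, that native allocation is never released, even though the heap looks healthy in your profiler.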
By default, Jetty will allocate its own direct ByteBuffers for I/O if the NIO SelectChannelConnector is configured, and MappedByteBuffers to memory-map static files. However, if you're on Windows, you may have disabled the memory-mapped buffers to avoid the file-locking problem whereby Windows will not permit you to overwrite a file that has been mapped into memory; in that case you might be vulnerable to this JVM problem.
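If you suspect direct buffers are the culprit, one mitigation is to tell the connector to use heap buffers instead. A sketch, assuming a Jetty 6-style jetty.xml (the port is illustrative):

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <!-- Use heap ByteBuffers instead of direct (native) buffers -->
      <Set name="useDirectBuffers">false</Set>
    </New>
  </Arg>
</Call>
```

This trades some I/O efficiency for keeping buffer memory on the heap, where it is visible to profilers and subject to normal garbage collection.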