The way that objects are allocated in JikesRVM can be difficult to grasp for someone new to the code base. This document provides a detailed look at some of the paths through the JikesRVM - MMTk interface code to help bootstrap understanding of the process. The process and code illustrated below are current as of March 2011, svn revision 16052 (between JikesRVM 3.1.1 and 3.1.2).
The best starting place to understand the allocation sequence is in the class org.jikesrvm.mm.mminterface.MemoryManager, which is a facade class for the MMTk allocators. MMTk provides a variety of memory management plans which are designed to be independent of the actual language being implemented. The MemoryManager class orchestrates the services of MMTk to allocate memory, and adds the structure necessary to make the allocated memory into Java objects.
The method allocateScalar is where all scalar (i.e. non-array) objects are allocated. The parameters of this method specify the object to be allocated in sufficient detail that when this method is compiled by the opt compiler, all of the parameters are compile-time constants, allowing maximum optimization. Working through the body of the method,
Selected.Mutator mutator = Selected.Mutator.get();
As mentioned above, MMTk provides many different memory management plans, one of which is selected at build time. This call acquires a pointer to the thread-local per-mutator component of MMTk. Much of MMTk's performance comes from providing unsynchronized thread-local data structures for the frequently used operations, so rather than provide a single interface object, it provides a per-thread interface object for both mutator and collector threads.
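The per-mutator idea can be sketched in plain Java. This is a hypothetical illustration, not MMTk's real API: the class and field names (MutatorContext, cursor) are invented, and the "space" is just a counter, but it shows why a thread-local context makes the allocation fast path lock-free.

```java
// Hypothetical sketch, loosely modelled on Selected.Mutator.get().
// Names are illustrative stand-ins, not the real MMTk classes.
public class PerMutatorSketch {
    // Each mutator thread owns its own allocation cursor, so the common
    // case needs no synchronization; only refilling a local buffer from
    // the shared space would need to synchronize.
    static class MutatorContext {
        long cursor = 0;
        long alloc(int bytes) {
            long result = cursor;   // bump-pointer allocation, thread-local
            cursor += bytes;
            return result;
        }
    }

    static final ThreadLocal<MutatorContext> MUTATOR =
        ThreadLocal.withInitial(MutatorContext::new);

    public static void main(String[] args) {
        MutatorContext m = MUTATOR.get(); // analogous to Selected.Mutator.get()
        System.out.println(m.alloc(16));  // 0
        System.out.println(m.alloc(24));  // 16
    }
}
```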
allocator = mutator.checkAllocator(org.jikesrvm.runtime.Memory.alignUp(size, MIN_ALIGNMENT), align, allocator);
An MMTk plan in general provides several spaces where objects can be allocated, each with their own characteristics. JikesRVM is free to request allocation in any of these spaces, but sometimes there are constraints only available on a per-allocation basis that might force MMTk to override JikesRVM's request. For example, JikesRVM may specify that objects allocated by a particular class are allocated in MMTk's non-moving space. At execution time, one such object may turn out to be too large for allocation in the general non-moving space provided by that particular plan, and so MMTk needs to promote the object to the Large Object Space (LOS), which is also non-moving, but has high space overheads. This call will generally compile down to zero or a handful of instructions.
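The override decision described above can be sketched as follows. The allocator constants, the size threshold, and the exact logic are all assumptions for illustration; the real checkAllocator is plan-specific.

```java
// Hypothetical sketch of the allocator-override decision. The constants
// (DEFAULT, NONMOVING, LOS) and the threshold are invented for the demo.
public class CheckAllocatorSketch {
    static final int DEFAULT = 0, NONMOVING = 1, LOS = 2;
    static final int MAX_NONMOVING_BYTES = 8 * 1024; // assumed plan limit

    // Round size up to a power-of-two alignment, as Memory.alignUp does.
    static int alignUp(int size, int alignment) {
        return (size + alignment - 1) & ~(alignment - 1);
    }

    // If the object is too large for the requested space, promote the
    // allocation to the Large Object Space, which is also non-moving.
    static int checkAllocator(int bytes, int align, int requested) {
        if (bytes > MAX_NONMOVING_BYTES) return LOS;
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(checkAllocator(alignUp(64, 8), 8, NONMOVING));    // 1
        System.out.println(checkAllocator(alignUp(16384, 8), 8, NONMOVING)); // 2
    }
}
```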
Address region = allocateSpace(mutator, size, align, offset, allocator, site);
This calls a method of MemoryManager, common to all allocation methods (for arrays and other special objects), that calls
Address region = mutator.alloc(bytes, align, offset, allocator, site);
to actually allocate memory from the current MMTk plan.
Object result = ObjectModel.initializeScalar(region, tib, size);
Now we call the JikesRVM object model to initialize the allocated region as a scalar object, and then
mutator.postAlloc(ObjectReference.fromObject(result), ObjectReference.fromObject(tib), size, allocator);
we call MMTk's postAlloc method to perform initialization that can only be performed after an object has been initialized by the virtual machine.
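The whole sequence traced above can be condensed into a minimal end-to-end sketch: align the size, allocate raw memory, lay the object out, then run post-allocation work. Everything here is a simplified stand-in for the MemoryManager/MMTk code, not the real JikesRVM API: the bump-pointer "space" is a static counter and initializeScalar/postAlloc are stubs.

```java
// Minimal sketch of the allocateScalar pipeline; all names are simplified
// stand-ins for the real MemoryManager/ObjectModel/MMTk methods.
public class AllocateScalarSketch {
    static final int MIN_ALIGNMENT = 4;
    static long cursor = 0;          // toy bump-pointer "space"
    static long lastPostAlloc = -1;  // records the postAlloc call for the demo

    static int alignUp(int size, int alignment) {
        return (size + alignment - 1) & ~(alignment - 1);
    }
    static long alloc(int bytes) {   // stands in for mutator.alloc(...)
        long region = cursor;
        cursor += bytes;
        return region;
    }
    static long initializeScalar(long region) {
        return region;               // the real code writes the header/TIB here
    }
    static void postAlloc(long object) {
        lastPostAlloc = object;      // e.g. remembered-set or logging work
    }

    static long allocateScalar(int size) {
        int bytes = alignUp(size, MIN_ALIGNMENT);
        long region = alloc(bytes);
        long result = initializeScalar(region);
        postAlloc(result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(allocateScalar(10)); // 0; rounded up to 12 bytes
        System.out.println(allocateScalar(10)); // 12
    }
}
```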
Compiler integration
The allocateScalar method discussed above is only actually called from one place, the method resolvedNewScalar(int ...) in the class org.jikesrvm.runtime.RuntimeEntrypoints. This class provides methods that are accessed directly by the compilers, via fields in the org.jikesrvm.runtime.Entrypoints class. The 'resolved' part of the method name indicates that the class of object being allocated is resolved at compile time (recall that the Java Language Spec requires that classes are only loaded, resolved etc. when they are needed - sometimes it's necessary to compile code that performs classloading and then allocate the object).
RuntimeEntrypoints also contains an overload, resolvedNewScalar(RVMClass), that is used by the reflection API to allocate objects. It's instructive to look at this method, as it performs essentially the same operations as the compiler when compiling the call to resolvedNewScalar(int...).
Working backwards from this point requires delving into the individual compilers.
There is a different baseline compiler for each architecture. The relevant code in the baseline compiler for the ia32 architecture is in the class org.jikesrvm.compilers.baseline.ia32.BaselineCompilerImpl. The method emit_resolved_new(RVMClass) is responsible for generating code to execute the 'new' bytecode when the target class is already resolved. Looking at this method, you can see it does essentially what the
resolvedNewScalar(RVMClass) method in RuntimeEntrypoints does, then generates Intel machine code to perform the call to the resolvedNewScalar entrypoint. Note how the work of calculating the size, alignment etc. of the object is performed by the compiler, at compile time.
Similar code exists in the PPC baseline compiler.
Optimizing Compiler
The optimizing compiler is paradoxically somewhat simpler than the baseline compiler, in that injection of the call to the entrypoint is done in an architecture-independent level of compiler IR. (An overview of the JikesRVM optimizing compiler can be found in .)
In HIR (the high-level Intermediate Representation), allocation is expressed as a 'new' opcode. During the translation from HIR to LIR (Low-level IR), this and other opcodes are translated into instructions by the class org.jikesrvm.compilers.opt.hir2lir.ExpandRuntimeServices. The method perform(IR) performs this translation, selecting particular operations via a large switch statement. The NEW_opcode case performs the task we're interested in, doing essentially the same job as the baseline compiler, but generating IR rather than machine instructions. The compiler generates a 'call' operation, and then (if the compilation policy decides it's required) inlines it.
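The switch-driven lowering can be sketched with a toy IR. This is not the real opt-compiler IR: the Opcode enum, Instr record, and the lower method are invented for illustration; the real ExpandRuntimeServices.perform(IR) walks actual IR instructions and builds a real call to the entrypoint.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the HIR-to-LIR expansion: a 'new' operation is
// rewritten into a call to a runtime entrypoint; other ops pass through.
public class LoweringSketch {
    enum Opcode { NEW, ADD, CALL }
    record Instr(Opcode op, String detail) {}

    // One case per opcode, selected by a switch as in perform(IR).
    static List<Instr> lower(List<Instr> hir) {
        List<Instr> lir = new ArrayList<>();
        for (Instr i : hir) {
            switch (i.op()) {
                case NEW -> lir.add(new Instr(Opcode.CALL,
                        "resolvedNewScalar(" + i.detail() + ")"));
                default -> lir.add(i);
            }
        }
        return lir;
    }

    public static void main(String[] args) {
        List<Instr> hir = List.of(new Instr(Opcode.NEW, "Foo"),
                                  new Instr(Opcode.ADD, "x, y"));
        for (Instr i : lower(hir)) {
            System.out.println(i.op() + " " + i.detail());
        }
    }
}
```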
At this point in code generation, all the methods called by RuntimeEntrypoints.resolvedNewScalar(int...) which are annotated @Inline are also inlined into the current method. This inlining extends through to the MMTk code so that the allocation sequence can be optimized down to a handful of instructions.
It can be instructive to look at the various levels of IR generated for object allocation using a simple test program and the OptTestHarness utility described elsewhere in the user guide.