This page gives a brief outline of the major control flows in the execution of a garbage collector in MMTk. For simplicity, we focus on the MarkSweep collector, although much of the discussion will be relevant to other collectors.
This page assumes you have a basic knowledge of garbage collection. If you don't, please see one of the standard texts, such as The Garbage Collection Handbook.
An MMTk Plan is required to provide five classes. They must have consistent names, starting with a common prefix and ending with a suffix that indicates which framework class each one inherits from. In the case of the MarkSweep plan, the prefix is "MS".
The global class of the plan is a subclass of org.mmtk.plan.Plan. This class encapsulates data structures that are shared among multiple threads.
Policies are implemented as subclasses of org.mmtk.policy.Space, and it is in the policy class that the mechanics of a particular mechanism (like mark-sweep) are implemented. A particular instance of a policy is known generically as a space. The task of a Plan is to create the policy (Space) objects that manage the heap and to integrate them into the MMTk framework; the static initializer of a Plan and its subclasses defines the spaces that make up an MMTk plan.
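The following sketch, based on the MMTk sources, shows the relevant definitions in the MS class; details such as the space name and the VMRequest argument vary between MMTk versions and are illustrative only.

    public class MS extends StopTheWorld {
      /* The mark-sweep space: a global policy (Space) object, created in the
         static initializer and shared by all threads. */
      public static final MarkSweepSpace msSpace =
          new MarkSweepSpace("ms", VMRequest.discontiguous());

      /* The static final space descriptor, allowing rapid operations on the space. */
      public static final int MARK_SWEEP = msSpace.getDescriptor();
    }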
In this code fragment, we see the MS plan defined. Note that we generally also define a static final space descriptor. This is an optimization that allows some rapid operations on spaces.
A Space is a global object, shared among multiple mutator threads. Each policy will also have one or more thread-local classes which provide unsynchronized allocation. These classes are subclasses of org.mmtk.utility.alloc.Allocator, and in the case of MarkSweep, the thread-local class is called MarkSweepLocal. Instances of MarkSweepLocal are created as part of a mutator context.
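A minimal sketch of this, following the MMTk sources (the field name ms is the one used there, but treat the details as illustrative):

    public class MSMutator extends StopTheWorldMutator {
      /* Thread-local allocator for the shared mark-sweep space.  Allocation
         through this object is unsynchronized. */
      protected MarkSweepLocal ms = new MarkSweepLocal(MS.msSpace);
    }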
The design pattern is that the local Allocator will allocate space from a thread-local buffer, and when that is exhausted it will allocate a new buffer from the global Space, performing appropriate locking. The constructor of the MarkSweepLocal specifies the space from which the allocator will allocate global memory.
MMTk provides two methods for allocating an object. These are provided by the MSMutator class, to give each plan the opportunity to use fast, unsynchronized thread-local allocation before falling back to a slower synchronized slow path.
The version implemented in MarkSweep looks like this:
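(The following is a sketch based on the MMTk sources; the exact signature, for example the allocation-site argument, differs between versions.)

    @Inline
    @Override
    public Address alloc(int bytes, int align, int offset, int allocator, int site) {
      if (allocator == MS.ALLOC_DEFAULT) {
        /* This request is for the default (mark-sweep) space: use the fast,
           unsynchronized thread-local allocator. */
        return ms.alloc(bytes, align, offset);
      }
      /* Otherwise delegate to the super-class (e.g. immortal or large-object
         allocation). */
      return super.alloc(bytes, align, offset, allocator, site);
    }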
The basic structure of this method is common to all MMTk plans. First they decide whether the operation applies to this level of abstraction (if (allocator == MS.ALLOC_DEFAULT)), and if so, delegate to the appropriate place, otherwise pass it up the chain to the super-class. In the case of MarkSweep, MSMutator delegates the allocation to its thread-local MarkSweepLocal object. The alloc method of MarkSweepLocal is inherited from SegregatedFreeListLocal (mark-sweep is not the only way of managing free-list allocation), and looks like this:
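A sketch of that method, modelled on the MMTk sources (helper names such as getSizeClass and the freeList table follow SegregatedFreeListLocal, but the details are illustrative):

    @Inline
    public final Address alloc(int bytes, int align, int offset) {
      int sizeClass = getSizeClass(bytes);
      /* Look in the thread-local free list for a cell of the right size class. */
      Address cell = freeList.get(sizeClass);
      if (!cell.isZero()) {
        /* Success: unlink the cell, clear the free-list link and return it,
           with no synchronization. */
        freeList.set(sizeClass, cell.loadAddress());
        cell.store(Address.zero());
        return cell;
      }
      /* The thread-local buffer is exhausted: take the slow path. */
      return allocSlow(bytes, align, offset);
    }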
This is a standard pattern for thread-local allocation: first we look in the thread-local free list, and if that succeeds we return the result immediately. If unsuccessful, we request space from the global policy via Allocator.allocSlow. This is the common interface that all Allocators use to request space from the global policy. This will eventually call the allocator-specific allocSlowOnce method. The workings of the allocSlowOnce method are very policy-specific and not appropriate to examine at this stage, but eventually all policies will attempt to acquire fresh virtual memory via the Space.acquire method. Space.acquire is the only correct way for a policy to allocate new virtual memory for its own use.
The logic of Space.acquire is, in outline: poll the plan to determine whether a garbage collection is required before the request can be satisfied; poll triggers the collection and returns true if it has done so. If a collection was triggered, acquire returns a null address so that the caller can retry; otherwise it maps fresh virtual memory and returns its address.
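The retry logic lives in the allocator slow path. The sketch below is modelled loosely on Allocator.allocSlowInline; the emergency-collection bookkeeping shown here is illustrative rather than the exact MMTk code.

    public final Address allocSlowInline(int bytes, int alignment, int offset) {
      boolean emergencyAttempted = false;
      while (true) {
        /* Policy-specific slow path: may recycle partially-used blocks and
           eventually calls Space.acquire for fresh virtual memory, which may
           in turn trigger a GC. */
        Address result = allocSlowOnce(bytes, alignment, offset);
        if (!result.isZero())
          return result;

        /* A GC occurred but the allocation still failed.  After an ordinary
           collection fails, the plan requests an emergency collection (which,
           for example, drops soft references).  If even that fails, give up. */
        if (emergencyAttempted)
          throw new OutOfMemoryError();
        emergencyAttempted = true;
      }
    }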
This code fragment shows the retry logic in the allocator. We try allocSlowOnce, which may recycle partially-used blocks and eventually call Space.acquire. If a GC occurred, we try again. Eventually the plan will request an emergency collection, which will (for example) cause soft references to be dropped. If this fails, we throw an OutOfMemoryError.
In a stop-the-world garbage collector like MarkSweep, the mutator threads run until memory is exhausted, then all mutator threads are suspended, the collector threads are activated, and they perform a garbage collection. After the GC is complete, the collector threads are suspended and the mutator threads resume. MMTk also has some support for concurrent collectors, in which one or more collector threads can be scheduled to run alongside the mutator, either exclusively or in addition to (hopefully briefer) stop-the-world phases.
Thread scheduling in MMTk is handled by a GC controller thread, implemented in the singleton class ControllerCollectorContext, held in the static field Plan.controlCollectorContext. Whenever a collection is initiated, it is done by calling methods on this object.
As mentioned above, every attempt to allocate fresh virtual memory calls the current plan's poll(...) method. This initiates a GC by calling controlCollectorContext.request(), which in a stop-the-world collector like MarkSweep pauses the mutator threads and then wakes the collector threads. The main loop of the garbage collector is simply the run() method of ParallelCollector, shown below.
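(A sketch following ParallelCollector.run in the MMTk sources.)

    public void run() {
      while (true) {
        /* Sleep until the controller wakes this collector thread. */
        park();
        /* Perform a garbage collection, then go back to sleep. */
        collect();
      }
    }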
The collect() method is specific to the type of collector, and in StopTheWorldCollector it looks like this:
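(Again a sketch based on the MMTk sources; the essential point is that a stop-the-world collection simply runs the plan's phase stack to completion.)

    public void collect() {
      /* Execute the plan's (complex) collection phase from start to finish. */
      Phase.beginNewPhaseStack(Phase.scheduleComplex(global().collection));
    }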
Every garbage collection consists of a series of steps. Each step is either executed once (e.g. updating the mark state before marking the heap), or in parallel on all available collector threads (e.g. the parallel mark phase). The actual work of a step is done by the collectionPhase method of the global, collector or mutator class of a plan.
In early versions of MMTk, the main collection method was a template method, calling individual methods for each phase of the collection. As the number of collectors in MMTk grew, this became unwieldy and has been replaced with a configurable mechanism of phases.
The class org.mmtk.plan.Simple defines the basic structure of most of MMTk's garbage collectors. First it defines the phases of a collection.
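A small excerpt of those definitions, modelled on org.mmtk.plan.Simple (the real list of phases is considerably longer, and the string names are illustrative):

    public static final short PREPARE     = Phase.createSimple("prepare");
    public static final short STACK_ROOTS = Phase.createSimple("stack-roots");
    public static final short ROOTS       = Phase.createSimple("roots");
    public static final short CLOSURE     = Phase.createSimple("closure");
    public static final short RELEASE     = Phase.createSimple("release");
    public static final short COMPLETE    = Phase.createSimple("complete");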
Each phase of the collection is represented by a 16-bit integer, an index into a table of Phase objects. Simple phases are scheduled, and combined into sequences, or complex phases.
A simple phase can be scheduled in one of 4 ways:
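In the Phase class these correspond to scheduling a phase globally (executed exactly once, by a single collector thread), per collector (executed by every collector thread in parallel), per mutator (executed on behalf of every mutator thread), or concurrently (executed by concurrent collector threads while the mutators run). A sketch of how scheduled simple phases are combined into a complex phase, modelled on org.mmtk.plan.Simple, looks like this:

    /* Illustrative: part of the root-scanning-and-closure sequence. */
    public short rootClosurePhase = Phase.createComplex("initial-closure",
        Phase.scheduleMutator  (PREPARE),
        Phase.scheduleGlobal   (PREPARE),
        Phase.scheduleCollector(PREPARE),
        Phase.scheduleCollector(STACK_ROOTS),
        Phase.scheduleGlobal   (STACK_ROOTS),
        Phase.scheduleCollector(ROOTS),
        Phase.scheduleGlobal   (ROOTS),
        Phase.scheduleGlobal   (CLOSURE),
        Phase.scheduleCollector(CLOSURE));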
Between every phase of a collection, the collector threads rendezvous at a synchronization barrier. The actual execution of a collector's phases is done in the method Phase.processPhaseStack. This method handles resuming a concurrent collection as well as running a full stop-the-world collection.
The actual work of a collection phase is done (as mentioned above) in the collectionPhase method of the major Plan classes.
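The excerpt below is a sketch of MS.collectionPhase based on the MMTk sources (msTrace is the plan's global Trace object; the exact prepare/release calls are illustrative):

    @Inline
    @Override
    public void collectionPhase(short phaseId) {
      if (phaseId == PREPARE) {
        super.collectionPhase(phaseId);   // outside-in: super-class prepares first
        msTrace.prepare();
        msSpace.prepare();
        return;
      }
      if (phaseId == CLOSURE) {
        msTrace.prepare();
        return;
      }
      if (phaseId == RELEASE) {
        msTrace.release();                // inside-out: release msSpace first...
        msSpace.release();
        super.collectionPhase(phaseId);   // ...then the super-class
        return;
      }
      /* Delegate everything else to the super-class. */
      super.collectionPhase(phaseId);
    }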
This excerpt shows how the global MS plan implements collectionPhase, illustrating the key phases of a simple stop-the-world collector. The prepare phase performs tasks such as changing the mark state, the closure phase performs a transitive closure over the heap (the mark phase of a mark-sweep algorithm), and the release phase performs any post-collection steps. Where possible, a plan is structured so that each layer of inheritance deals only with the objects it creates, i.e. the MS class operates on the msSpace and delegates work on all other spaces to the super-class where they are defined. By convention the PREPARE phase is performed outside-in (super-class preparation first) and RELEASE is done inside-out (local first, super-class second).
The main operation of a tracing collector is the transitive closure operation, where all (or a subset) of the object graph is visited. Some collectors such as generational collectors perform these operations in more than one way, e.g. a nursery collection in a generational collector does not trace through pointers into the mature space, while a full-heap collection does. All MMTk collectors are designed to run using several parallel threads, using data structures that have unsynchronized thread-local and synchronized global components in the same way as MMTk's policy classes.
MMTk's trace operation uses the following terminology:
Each distinct transitive closure operation is defined as a subclass of TraceLocal. The closure is performed in the collectionPhase method of the plan-specific CollectorContext class, as sketched below.
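For MarkSweep this is MSCollector; the sketch below (field and method names modelled on the MMTk sources, with most phases omitted) shows where the closure is driven from.

    public class MSCollector extends StopTheWorldCollector {
      /* The collector-thread-local transitive closure for a full-heap mark. */
      protected final MSTraceLocal fullTrace = new MSTraceLocal(global().msTrace);

      @Override
      public void collectionPhase(short phaseId, boolean primary) {
        if (phaseId == MS.CLOSURE) {
          /* Compute the transitive closure from the roots reported earlier. */
          fullTrace.completeTrace();
          return;
        }
        super.collectionPhase(phaseId, primary);
      }
    }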
The initial starting point for the closure is computed by the STACK_ROOTS and ROOTS phases, which add root locations to a buffer by calling TraceLocal.reportDelayedRootEdge. The closure operation proceeds by invoking traceObject on each root location (in method processRootEdge), and then invoking scanObject on each heap object encountered. Note that the CLOSURE operation is performed multiple times in each GC, due to processing of reference types.