
Using Maven2 to build projects which use JNI code

This page documents my experiences with supporting projects which use JNI code, and describes the solution I developed and the tools I use, in the hope that it may save somebody else this pain.


I work in telecoms, which means that we have quite a lot of projects which have a greater or lesser quantity of native code, either because they interface to the operating system at a low level, or simply as a way of dealing with the real-time requirements of our software. As time has gone by, we've built up a reasonably complex hierarchy of applications with native code which depend on libraries with native code, and some of the libraries even export native-level symbols to application native code.

Our investigations into converting JNI-free projects to a Maven2 build process were extremely positive, and it therefore soon became desirable to convert all of our projects, including those with JNI requirements, to Maven2. This led to the following requirements for the build process.


  1. The build/release process should match as closely as possible that of a non-JNI project - checkout followed by 'mvn package', etc.
  2. As this functionality will be common to many projects, long incantations of plugin configuration in each pom are unacceptable; it must be possible to either factor everything out to a common parent pom, or just to have sensible defaults which build everything. For example, it should be possible to make use of a library using JNI code - and have it work for unit tests, assemblies, etc - just by adding it to your <dependencies/> element.
  3. It should be possible to run an application with as little extraneous scripting as possible. Essentially, this means: unzip assembly; run java -jar on application jar.
  4. It should be possible to call functions in one JNI library from another. Typically this means having the library's include files and dll available at compile time, and having the library available for dynamic linking at runtime.
  5. The build process must be portable from one platform to another. I'm happy to require that building for Windows requires Cygwin, but it should be possible to have builds for different platforms alongside each other in the repository, and the right build be chosen when building.
  6. Our legacy build process must continue to work until the whole company can be migrated to maven2.

For bonus points:

  1. During the development cycle, it shouldn't be necessary to run the entire maven build cycle for each change. The user's IDE can be configured to build the Java side, so we need a means of quickly building the native side. This also offers an easier upgrade path from our legacy build system, fwiw.

Possible solutions

There are obviously many ways to skin this particular cat; I'll discuss a few of the options.


The first place to look is obviously to see if anybody else has solved this problem. I therefore made a start with the native-maven-plugin. This, however, has a number of problems, which means that it totally fails to meet my requirements.

  • The showstopper is that the native code and the java code are separate projects. This means that it's impossible to get things to build in the right order, because we have to do the following.
    1. compile java source to class files
    2. run javah on class files
    3. compile and link native code
    4. run unit tests for java code

Some messing about with a very complex project structure is possible, but it's ugly and very hard to set up for each of many projects.

  • If you build a library which has a JNI component, making use of it is very complex. Essentially, maven2 downloads the jni library to the depths of your local repository, where it then can't be found by a System.load().
    • update: this can be worked around by judicious use of a recent maven-dependencies-plugin build.
  • Again, when you depend on a library with a JNI component, you need complex incantations in your pom, to depend on both the jar, and a separate profile for each target platform to depend on the native library.
  • Building the native code always requires running the full maven build.

JNI library as an attached artifact

The next possibility considered was to build the native library as an 'attached artifact' - in much the same way as a javadoc or source jar can be attached to the main artifact. This solved some of the problems, especially the problem with build order; depending on a library with jni code was still a nightmare, however.

JNI library archived within the jar

The solution I ended up using was to store the compiled jni library in the jar alongside the class files.

This means either cross-compiling for all possible architectures or, more simply, having a different jar for each architecture. The latter fits quite well with our setup - where almost all of our machines are Linux-i386, with a smattering of win32 boxes.

Sadly System.load() can't cope with loading libraries from within a jar, so we'll therefore need a custom loader which extracts the library to a temporary file at runtime; this is obviously achievable, however.
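The extract-to-a-temporary-file idea can be sketched as follows. This is a simplified, hypothetical illustration of what such a loader does (the class and method names are made up, and a real loader would go on to call System.load() on the extracted file, which is omitted here so the sketch can run anywhere):

```java
import java.io.*;

public class LoaderSketch {

    // Copy a classpath resource to a temporary file and return it; a real
    // JNI loader would then call System.load() on the resulting path.
    static File extract(String resource) throws IOException {
        InputStream in = LoaderSketch.class.getResourceAsStream(resource);
        if (in == null) {
            throw new FileNotFoundException(resource + " not on classpath");
        }
        try {
            File tmp = File.createTempFile("jni", ".lib");
            tmp.deleteOnExit();
            try (OutputStream out = new FileOutputStream(tmp)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
            return tmp;
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Use a resource that is always present so the sketch runs as-is.
        File f = extract("/java/lang/Object.class");
        System.out.println(f.length() > 0);
    }
}
```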

Freehep-NAR plugin

Since I originally wrote this, the freehep project have been working on porting their nar plugin to Maven 2. As of the time of writing (13 October 2006), it's still in Alpha - but I think this may represent a more polished alternative to this problem than my solution. I'd certainly suggest that anybody thinking about this problem takes a look at it.


For an example of a project using this implementation, you may like to download simple-native-example. That directory contains source archives, as well as binaries compiled for Linux-i386 and javadocs, and may provide helpful examples of the concepts below.


The general idea here, as mentioned above, is that we will build our JNI library into a jar, alongside the class files. That means that we end up with a different jar for each system architecture we are targeting; to distinguish between the different versions, we actually put a system identifier into the names of the artifacts we produce. More on this below.

Directory structure

We need a place to put native code. This should naturally be src/main/native in the standard directory layout; for a more complex project it may be appropriate to further split this into src and include.

The JNI library should be built straight into a subdirectory of ${outputDirectory}/classes (ie, target/classes); by doing so, it will automatically get built into the project jar, as well as being in the right place when running the project from an IDE which just uses the .class files straight out of ${outputDirectory}/classes. I decided to put my JNI libraries in META-INF/lib, but this is obviously not set in stone.

Building the JNI

The first thing we need to do (after creating the .java files, with suitable native methods, of course) is to build the JNI library. I chose to do this using a Makefile, and a maven-antrun-plugin execution, as this makes it reusable between different build environments, and is easily run independently of Maven. It's also an easy way of making sure the right files, and only the right files, get recompiled.

The Makefile needs to:

  • use javah to create .h files from the .class files in the outputDirectory.
  • use gcc to compile C source into object files
  • link the object files into the dynamic library.
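A minimal Makefile along these lines might look like the sketch below. All the class, file, and path names are illustrative, and the flags shown are for gcc on Linux:

```makefile
# Illustrative sketch only: class names, paths and flags are assumptions.
CLASSES := target/classes
NATIVE  := src/main/native
LIB     := $(CLASSES)/META-INF/lib/libmylib.so

all: $(LIB)

# Link the compiled C into a dynamic library, straight into target/classes
# so that it ends up inside the project jar.
$(LIB): target/native/com_example_MyClass.h $(NATIVE)/mylib.c
	mkdir -p $(dir $@)
	gcc -shared -fPIC -o $@ $(NATIVE)/mylib.c \
	    -Itarget/native \
	    -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux

# Generate the JNI header from the compiled class files.
target/native/com_example_MyClass.h: $(CLASSES)/com/example/MyClass.class
	javah -d target/native -classpath $(CLASSES) com.example.MyClass
```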

We now need make to be run after the .class files are built; I did this by attaching an execution to the process-classes phase. I've also added an ant task to write a file containing the compile-time classpath; this can then be used from the Makefile to build the classpath argument for javah.
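A pom fragment along these lines would do both jobs; the execution id and file name are illustrative (maven-antrun-plugin exposes the compile classpath to its ant tasks as the `maven.compile.classpath` reference):

```xml
<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>build-native</id>
      <phase>process-classes</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- Write the compile-time classpath where the Makefile can read it -->
          <property name="cp" refid="maven.compile.classpath"/>
          <echo file="${project.build.directory}/compile-classpath">${cp}</echo>
          <!-- Then hand over to make -->
          <exec executable="make" dir="${basedir}" failonerror="true"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>
```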

Library loader

We now have our JNI library on the class path, so we need a way of loading it. I created a separate project which extracts JNI libraries from the class path and then loads them; this is added as a dependency to the pom, obviously.

To use it, I simply call com.wapmx.nativeutils.jniloader.NativeLoader.loadLibrary(libname). More information is in the javadoc for NativeLoader.

I generally prefer to wrap such things in a try/catch block, as follows:
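A sketch of the pattern is below. The class and method names are made up, and System.loadLibrary stands in for NativeLoader.loadLibrary so that the example compiles on its own; the point is that a load failure surfaces as an ExceptionInInitializerError at class-load time rather than as an UnsatisfiedLinkError on the first native call:

```java
public class Demo {
    public static void main(String[] args) {
        try {
            MyJniWrapper.frobnicate(1);
            System.out.println("native library loaded");
        } catch (ExceptionInInitializerError e) {
            // Expected here, since no "mylib" library exists in this sketch.
            System.out.println("init failed as expected without the native library");
        }
    }
}

class MyJniWrapper {
    static {
        try {
            // The real call would be:
            //   com.wapmx.nativeutils.jniloader.NativeLoader.loadLibrary("mylib");
            System.loadLibrary("mylib");
        } catch (UnsatisfiedLinkError e) {
            // Fail fast: without the native library this class is unusable.
            throw new ExceptionInInitializerError(e);
        }
    }

    static native int frobnicate(int input);
}
```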

We should now be at the point where our junit tests work from maven; mvn test should just work! It should also work fine from an IDE.

The artifactId

As I mentioned earlier, we need to be able to keep builds for multiple architectures alongside one another in the repository, so we need to distinguish between different builds somehow. We do this by putting an identifier into the artifactId.

It turns out that we have to distinguish between features of a system which are hard to determine programmatically - for example, the libc and libstdc++ versions used by the system. We therefore first create a system property, ${mx.sysinfo}, which encodes this information. This needs to be set in `/etc/mavenrc` or `.mavenrc`: for example, on an i386 Linux system using glibc version 2.3 and libstdc++ version 6, we might have:
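The mavenrc file might then contain something like the following; the exact value shown is purely illustrative:

```shell
# /etc/mavenrc or ~/.mavenrc -- the sysinfo value is an illustrative example
MAVEN_OPTS="$MAVEN_OPTS -Dmx.sysinfo=Linux-i386-glibc2.3-stdcpp6"
```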

The precise value you use doesn't really matter (unless you want to share your artifacts outside your organisation), nor even the name of the property, provided you are consistent and the values you choose adequately distinguish between architectures.

It's then simple to use this in your pom; where previously you might have <project>...<artifactId>my-artifact</artifactId>...</project>, add a suffix to the artifactId thus: <project>...<artifactId>my-artifact-${mx.sysinfo}</artifactId>...</project>. Similarly, whenever we refer to that artifact, we again add -${mx.sysinfo} to the names of the artifactIds.


Releasing the project is the same as normal - mvn release:prepare followed by mvn release:perform - except that you need to do the perform step on each platform you wish to support. To repeat a release, you can do:
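For example, the release plugin's connectionUrl parameter lets you repeat the perform step from an existing tag; the repository URL and tag name here are hypothetical:

```shell
# Hypothetical repository URL and tag name
mvn release:perform \
    -DconnectionUrl=scm:svn:http://svn.example.com/repos/my-project/tags/my-artifact-1.0
```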


It really is as simple as that. You should now be able to use your JNI-enabled project exactly as you would any other project - with the one proviso that, if you depend on it from another project, you must put a -${mx.sysinfo} on the end of the artifactId. For example:
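A dependency declaration would then look like this (groupId and version are placeholders):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>my-artifact-${mx.sysinfo}</artifactId>
  <version>1.0</version>
</dependency>
```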


The rest of this page isn't relevant to a single JNI project - it describes some extra knobs and whistles I've added in order to handle our more extensive use of JNI.

Functions exported at the native level

Sometimes it's useful for a JNI library to export symbols for other JNI libraries to use. For example, we might want to create a "utilities" library with functions such as JNU_ThrowByName.

We therefore end up with a "library" project and an "application" project; an example of such a pair, together with its source, is available for download.

Building an "includes" zip

We need to make the .h and .so/.dll files from the library project available when compiling the application project. To do this, I've configured the pom of the library project to build a zip file containing the include files. We also put the .so file in the zip, as it's much easier to extract from there, it separates runtime and compile-time dependencies more nicely, and because we might as well.

Here's an example assembly descriptor (which can be packaged into the shared build-scripts artifact as below):
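The original descriptor isn't reproduced here, but one along the following lines would package the headers and the built library (the id, directories and patterns are illustrative, following the layout described above):

```xml
<assembly>
  <id>include</id>
  <formats>
    <format>zip</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <!-- Header files for compiling against the library -->
    <fileSet>
      <directory>src/main/native</directory>
      <outputDirectory>include</outputDirectory>
      <includes>
        <include>**/*.h</include>
      </includes>
    </fileSet>
    <!-- The built dynamic library, for linking -->
    <fileSet>
      <directory>target/classes/META-INF/lib</directory>
      <outputDirectory>lib</outputDirectory>
      <includes>
        <include>**/*.so</include>
        <include>**/*.dll</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>
```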


And a pom snippet to use it (which can be factored out to a common parent pom):
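A sketch of such a snippet follows; the descriptor path is illustrative, and note that on Maven2-era assembly plugin versions the lifecycle-bound goal was `attached` (newer versions use `single`):

```xml
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>include-zip</id>
      <phase>package</phase>
      <goals>
        <!-- "single" on newer assembly plugin versions -->
        <goal>attached</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>src/main/assembly/include.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```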


Unpacking includes and library archives

Now we need to make the build for the application project unpack the includes zip, before trying to compile the native code. We can use the dependencies plugin for this:
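A configuration along these lines should work; the phase and output directory are chosen to match the paths used below, and the zip-only filter keeps ordinary jar dependencies from being unpacked too:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-includes</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>unpack-dependencies</goal>
      </goals>
      <configuration>
        <!-- Only the includes zips, not the jar dependencies -->
        <includeTypes>zip</includeTypes>
        <outputDirectory>${project.build.directory}/extracted</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```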


We thus end up with target/extracted/include, and target/extracted/lib; the Makefile can now be set up to use those two paths at appropriate points.

Ensuring libraries are extracted in the right order

At runtime, we must ensure that our library project's JNI is extracted (and loaded) before we try to load the application project's JNI - otherwise the dynamic linker won't be able to find the library. To this end, I have this in a static initialiser in the application:
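The ordering can be sketched as below. The library names are made up, and a recording stub stands in for NativeLoader.loadLibrary so the sketch is self-contained; the real initialiser would call the loader directly, library project first:

```java
import java.util.*;

public class LoadOrderSketch {
    static final List<String> loaded = new ArrayList<>();

    // Stand-in for NativeLoader.loadLibrary: records the order in which
    // libraries would be extracted and loaded.
    static void loadLibrary(String name) {
        loaded.add(name);
    }

    static {
        // The library project's JNI must be extracted and loaded first, so
        // the dynamic linker can resolve its symbols when the application's
        // own library is loaded.
        loadLibrary("mylibrary");
        loadLibrary("myapplication");
    }

    public static void main(String[] args) {
        System.out.println(loaded);
    }
}
```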

Making the OS dynamic linker work

The operating system must be able to find the dynamic library from the library project at runtime for the application project; for Linux, this means that you have to set LD_LIBRARY_PATH to include wherever the native-loader extracts the libraries to. If we don't, we'll get a "cannot open shared object file" error.

For example, to make this work when running the tests in the application project, we can do:
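A surefire configuration along these lines should do it. The target/tmplib path is illustrative (and kept relative); java.library.tmpdir is the property the native-loader consults for its extraction directory, so the two values must agree:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemProperties>
      <property>
        <!-- Where the native-loader extracts libraries to -->
        <name>java.library.tmpdir</name>
        <value>target/tmplib</value>
      </property>
    </systemProperties>
    <environmentVariables>
      <!-- Must point at the same directory, so the dynamic linker finds
           the library project's extracted .so -->
      <LD_LIBRARY_PATH>target/tmplib</LD_LIBRARY_PATH>
    </environmentVariables>
  </configuration>
</plugin>
```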

You obviously also need to make java.library.tmpdir and LD_LIBRARY_PATH agree if running java directly. A similar trick is no doubt possible on Win32 (with PATH instead of LD_LIBRARY_PATH), but I haven't tested that.

Also note that, on glibc 2.3.6 at least, LD_LIBRARY_PATH is parsed at startup, and any non-existent absolute directories ignored; so unless your tmplib exists when you start, you'll still get an error. It's probably easiest just to keep LD_LIBRARY_PATH relative.

Shared build scripts

We can make use of the unpacking of includes zips to factor out the complicated parts of the Makefile to a single Makefile which is used by all JNI projects; the example projects demonstrate how this might be done.


A previous version of this wiki page suggested putting the platform discriminator in the jar's classifier; I no longer recommend this as it causes problems with snapshot dependencies, and requires a hideous fudge to set a classifier on the main artifact of a build.

Contacting me

If you find this information useful, or you have problems with it, please let me know. I can be contacted at



  1. Regarding the native-maven-plugin showstopper:

    If the code could be broken into three projects, it seems like it would remove the circular dependency problem:

    • main java code
    • jni java code
    • native code

    The jni java code would have no dependencies. The native code would depend on the jni java code, so it could use the jni java class with javah. The main java code would have a compile dependency on jni java code and a runtime dependency on the native code. The main java code project would contain the unit tests.

    It might be annoying to distribute two pure java jars, and a native lib for each platform. The two pure java jars could be combined into a single jar with a fourth packaging project.

  2. I've run into cases where I have more than one type of native code, or I'm using different languages, and I suggest that src/main/native does not give enough distinction.

     How about:

    • src/main/cpp
    • src/main/php
    • src/main/basic

    etc. That would follow the existing standard of src/main/java where the dir is designated by language rather than environment.

  3. Scott: It's true; it is possible to make this work, and as such the word "showstopper" is probably a bit strong. However, I am still left with a number of other problems here:

    • I still need to set up and maintain a forest of projects for every single native project in the company; there are many of these and sharing setup between them is difficult.
    • The hoops you have to jump through to make it work are, imho, opaque, and I am left with a project structure which I barely understand, let alone any of my co-developers.
    • I mentioned that, ideally, I'd like to retain the ability to build my native code with a Makefile. Doing so is difficult when the native library becomes the primary artifact of a project.

    Brill: yes, if you have projects where you have a variety of languages, what you suggest makes sense. The joy of maven is such that you can configure all this yourself. From my point of view, "native" comprises the very specific case of code written to link into a native library, which is then used with the JNI. Unless you're doing something very odd, I'd imagine this is exclusively C or C++ (which I have no desire to separate), and other things belong better in, for example, src/main/scripts. There is, however, nothing in my setup which would stop you using your choice if you find it more appropriate.

  4. Hi Richard, EXCELLENT post!  I regret that I hadn't read this last week, as I just rolled my own solution.  I think yours is a bit cleaner in how you're handling native compiles.  However, I thought I'd mention a way to bundle native exe/bin files on a platform-independent basis, since this might help others as well. The net effect is that a cross-platform JAR file is created with native binaries for several platforms.

    Basically, I'm bundling native exe/dll/so files in a special package "com.mycompany.bin" (avoiding the name "native" because of its special use in .java code).  These get extracted from the jar file at runtime, as you have done, by an unpacker class that looks at the System.getProperty() values for "os.name" and "os.arch".

    For example, a caller asks the unpacker to get the native "myprogram" program for the current platform.  It could be OSX, Linux, Win, and different architectures as well.  I search for a property file using the program name, os, and architecture values.  This is very similar to using ResourceBundles to load natural language properties.  Since the program's platform is determinate at runtime, the properties file path can also be determined (kind of like looking up a locale-suffixed bundle name, but it looks like "myprogram_Windows" in this case).  The properties file values are then loaded (using Apache Commons Configuration).  Then whole sets of bundled files are extracted to a target path, based on the properties (commons configuration allows for arrays of property values, so an array of files is extracted based on the relative classpath).

    here's an example fileset:
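The original example wasn't preserved on this page, but a properties file along these lines (all names hypothetical) illustrates the idea:

```properties
# myprogram_Windows.properties -- hypothetical example
# Commons Configuration reads the comma-separated value as an array of
# files to extract from the jar, relative to the classpath root.
bin.files = com/mycompany/bin/myprogram.exe, com/mycompany/bin/support.dll
```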

    Once everything is extracted, based on entries in the properties files, and there were no IOExceptions, the native code is loaded for whatever use case exists. It's really clean because any program can use the native files just as easily as a JAR file from Maven.  I store my native JAR bundles in my local Maven repo this way.

    Since everything is bundled into one JAR file, I don't actually have to think about what platform it's running on. The appropriate files are unpacked at runtime for me. It has worked great so far.  I think one of the keys to being successful in the long run is to pick a method that works, and stick with it for everything native.

    cheers, Jason

  5. Thanks for the feedback, Jason. I like the idea of having everything in one jar file - but the question is, how do you create the jar file with libraries for lots of different platforms in it? Presumably you have to either cross-compile, or have a build system which spans multiple boxes?

  6. Actually, it's much easier if you don't use javah to produce a .h file which is then used from C code. In that case the native binary can be compiled completely independently of Java, and before building the JNI artifact. It makes the Maven dependencies much simpler and more straightforward.

    In this case it's possible to create binaries on different platforms, in different environments, etc, and then use Maven with the appropriate binary in the testing environment.

    A native build system may produce a set of jars (merely zip archives; it's not even necessary to use the JDK) for each platform, named mylibrary-native-{$version}-{$platform}.jar and containing only the dll/so/lib/etc, and upload them into the Maven repository as artifacts with a classifier (this is easy to do even without Maven).

    The Maven build system would then use a dependency on those artifacts to build and test the JNI java classes.

    There is one issue with this approach - it's impossible to verify at compile time whether the binary implements all the required native functions, but that is usually covered by test cases anyway.

  7. If anyone comes across this excellent article like I did, but is struggling to get this working on Mac OS X:

    You need to add the variable DYLD_LIBRARY_PATH instead of, or as well as (if you want to support multiple build OSs), LD_LIBRARY_PATH.