Using JPA in real projects (part 1)

Is JPA only good for samples?

This question is of course a bit provocative. But if you look at all the JPA samples out in the wild, then hardly any of them can be applied to real world projects without fundamental changes.

This post tries to cover a few JPA aspects as well as show off some maven-foo from a big real world project. I am personally using Apache OpenJPA because it works well and I’m a committer on the project (which means I can immediately fix bugs if I hit one). I will try to motivate my friends from JBoss to provide a parallel guide for Hibernate, and maybe we’ll even find some Glassfish/EclipseLink geek.

One of the most fundamental differences between the various JPA providers is where they store the state information for the loaded entities. OpenJPA stores this info directly in the entities (EclipseLink as well, afaik) and Hibernate stores it in the EntityManager and sometimes in the 1:n proxies used for lazy loading (if no weaving is used). None of this is defined in the spec but is product-specific behaviour. Please always keep this in mind when applying JPA techniques to another JPA provider.

I’ll have to split this article into 2 parts, otherwise it would be too much for a good read. Today’s part will focus on the general project setup; the 2nd one will cover some coding practices usable for JPA based projects.

The Project Infrastructure and Setup

A general note on my project structure: my project is not a sample but fairly big (40k users, 5 million page hits, 600++ JSF pages) and consists of 10++ WebApps, each of them having their own backend (JPA + db + businesslogic), frontend (JSF and backing beans) and api (remote APIs) JARs. Thus I have all my shared configuration in myprj/parent/fe, myprj/parent/be and myprj/parent/api maven modules, containing the pom.xml pointed to as <parent> by all backends, frontends and apis respectively.

├── parent
│   ├── api
│   ├── be (<- here I keep all my shared backend configuration) 
│   ├── fe
├── webapp1
│   ├── api
│   ├── be (referencing ../../parent/be/pom.xml)
│   └── fe
├── webapp2
│   ├── api
│   ├── be (referencing ../../parent/be/pom.xml)
│   └── fe

Backend Unit Test Setup

1. All my backend unit tests use testng and really do hit the database! A business process test which doesn’t touch the database is worth nothing imo…
We are using a local MySQL installation for the tests and an Apache Maven profile for switching to other databases like Oracle and PostgreSQL (which we both use in production).

2. We have a special testng test-group called createData which other tests can depend on via @Test(dependsOnGroups="createData"). Or we just use @Test(dependsOnMethods="myTestMethodCreatingTheData").
That way all tests which create some pretty complex set of test-data run first. All tests which need this data as a base for their own work run afterwards.

3. Each test must be re-runnable and clean up its own mess in @BeforeClass. We use @BeforeClass because this also works if you kill your test in the debugger. Nice goodie: you can also check the produced data in the database later on. Too bad that there is no easy way to automatically prove this. The best bet is to make all your colleagues aware of it and tell them that they have to throw the next party if they introduce a broken or un-repeatable test 😉
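The group ordering and cleanup described above could look like this sketch (class, method and entity names are all made up, and you need the TestNG dependency on the classpath):

```java
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

// hypothetical example: this class belongs to the createData group
class CustomerDataTest {
    @Test(groups = "createData")
    public void createTestCustomers() {
        // persist a complex set of test customers via the EntityManager
    }
}

// hypothetical example: a business process test building on that data
class OrderProcessTest {
    @BeforeClass
    public void cleanup() {
        // wipe whatever a previous (maybe killed) run left behind,
        // so the test is re-runnable
    }

    @Test(dependsOnGroups = "createData")
    public void testOrderProcess() {
        // works against the customers created by the createData group
    }
}
```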

The Enhancement Question

I’ve outlined the details and pitfalls of JPA enhancement in a previous post.
I’m a big fan of build-time-enhancement because it a.) works nicely with OpenJPA and b.) my testng unit tests run much faster (because I only enhance those entities once). I also like the fact that I know exactly what will run on the server, and my unit tests will hit side effects early on. In a big project you’ll hit enhancement and state side effects which make your app act differently in unit tests and on the EE server more often than you’d guess.
Of course, this might differ if you use another JPA provider.

For enabling build-time-enhancement with OpenJPA I have the openjpa-maven-plugin configured in my parent-be.pom. (Be careful with the plugin setup, otherwise you get ClassNotFoundExceptions during the code coverage report run.)
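A minimal sketch of such a build-time-enhancement configuration might look like this (the plugin version and the exact set of configuration options are assumptions, not the author’s exact pom):

```xml
<!-- sketch of a build-time enhancement setup, not the original config -->
<plugin>
  <groupId>org.apache.openjpa</groupId>
  <artifactId>openjpa-maven-plugin</artifactId>
  <version>2.2.0</version>
  <configuration>
    <includes>${jpa-includes}</includes>
    <excludes>${jpa-excludes}</excludes>
    <addDefaultConstructor>true</addDefaultConstructor>
    <enforcePropertyRestrictions>true</enforcePropertyRestrictions>
    <sqlAction>${openjpa.sql.action}</sqlAction>
  </configuration>
  <executions>
    <execution>
      <id>enhancer</id>
      <phase>process-classes</phase>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```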

You might have spotted a few maven properties which I later define in each projects pom. That way I can keep my common configuration generic and still have a way to tweak the behaviour for each sub-project. Again a nice benefit: You can easily use mvn -Dsomeproperty=anothervalue to tweak those settings on the commandline.

  • ${jpa-includes} for defining the comma separated list of classes which should get enhanced, e.g. "mycomp/project/modulea/backend/*.class,mycomp/project/modulea/backend/otherstuff/*.class"
  • ${jpa-excludes} the opposite of jpa-includes
  • ${openjpa.sql.action} to define what should be done during DB schema creation. This can be build for always creating the whole DB schema (CREATE TABLE statements), or refresh for generating only ALTER TABLE statements for the changes. I’ll come back to this later.
  • connection and credentials properties used to be able to run the schema creation against Oracle, MySQL and PostgreSQL (switched via maven profiles)
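In a sub-project’s pom these properties might then be set like this (the values are made-up examples, not taken from the original project):

```xml
<!-- hypothetical example values for the shared configuration properties -->
<properties>
  <jpa-includes>mycomp/project/modulea/backend/*.class</jpa-includes>
  <jpa-excludes></jpa-excludes>
  <openjpa.sql.action>build</openjpa.sql.action>
</properties>
```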

Creating the Database

For doing tests with a real database we of course need to create the schema first. We do NOT let JPA do any automatic database schema changes on JPA-startup. Doing so might unrecoverably trash your production database, so it’s always turned off!
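In OpenJPA terms, turning auto-DDL off simply means never setting the schema synchronization property in a production persistence.xml. A sketch of what to avoid (the property itself is OpenJPA-specific):

```xml
<!-- never in production: automatic schema creation on JPA startup -->
<!--
<property name="openjpa.jdbc.SynchronizeMappings"
          value="buildSchema(ForeignKeys=true)"/>
-->
```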

Instead we trigger the SQL schema creation process via the Apache OpenJPA openjpa-maven-plugin manually (for the configuration see below):

$> mvn openjpa:sql

Then we check the generated SQL in target/database.sql and copy it to the structure we have in each of our backend projects:

├── mysql
│   ├── createdb.sql
│   ├── createindex.sql
│   ├── database.sql
│   └── schema_delta.sql
├── oracle
│   ├── createdb.sql
│   ├── createindex.sql
│   ├── database.sql
│   └── schema_delta.sql
└── postgres
    ├── createdb.sql
    ├── createindex.sql
    ├── database.sql
    └── schema_delta.sql

The following files are involved in the db setup:


createdb.sql

This file creates the database itself. It is optional, as not every database supports creating a whole database via SQL. In MySQL we just do the following:

DROP DATABASE IF EXISTS ProjextXDatabase;
CREATE DATABASE ProjextXDatabase;
USE ProjextXDatabase;

In Oracle this is not that easy. It’s a major pain to drop and then set up a whole data store, and you cannot easily connect to a data store which doesn’t exist anymore via Oracle’s JDBC driver. Instead, we just drop all the tables, one statement per table:

DROP TABLE AnotherTable CASCADE CONSTRAINTS PURGE;

If you have a better idea, then please speak up 😉
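One common workaround (a sketch, not from the original post) is a PL/SQL block that drops whatever tables the current schema contains, so the script does not need to list them by name:

```sql
-- drop all tables of the current user, whatever they are called
BEGIN
  FOR t IN (SELECT table_name FROM user_tables) LOOP
    EXECUTE IMMEDIATE
      'DROP TABLE "' || t.table_name || '" CASCADE CONSTRAINTS PURGE';
  END LOOP;
END;
/
```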


database.sql

This is the exact 1:1 DDL/schema file we generated via JPA (in my case via the openjpa-maven-plugin’s mvn openjpa:sql mentioned above). It is simply copied over from target/database.sql; the content remains unchanged. It runs after the createdb.sql file.


createindex.sql

This file contains the initial index tweaks which were not generated in the DDL. In Oracle and PostgreSQL this file e.g. contains all the indices on foreign keys, because OpenJPA doesn’t generate them (I remember that Hibernate does, correct?). In MySQL we don’t need those because MySQL automatically adds indices for foreign keys itself.

But this is of course a good place to add all the performance tuning stuff you ever wanted 😉
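For illustration, a foreign-key index entry in createindex.sql might look like this (table and column names are invented):

```sql
-- hypothetical example: index a foreign key that the DDL didn't cover
CREATE INDEX idx_order_customer ON CustomerOrder (customer_id);
```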


schema_delta.sql

This one is really a goldie! Once a project goes into production we do not generate full database schemas anymore! Instead we switch the openjpa-maven-plugin to the refresh mode. In this mode OpenJPA will compare the entities with the state of the configured database and only generate ALTER TABLE and similar statements for the changes in target/database.sql. This works surprisingly well!

We then review the generated schema changes and append the content to src/main/sql/[dbvendor]/schema_delta.sql. Of course we also add clean comments about the product revision in which the change got made. That way an administrator just picks the last n entries from this file and is easily able to bring the production database to the latest revision.

Doing this step manually is very important! From time to time there are changes (renaming a column for example) which cannot be handled by the generated DDL. Such changes or small migration updates need to be maintained manually.
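An entry in schema_delta.sql could then look like this sketch (revision numbers, table and column names are all invented):

```sql
-- ----------------------------------------------------------
-- changes for release 1.4.0
-- ----------------------------------------------------------
ALTER TABLE Customer ADD nickname VARCHAR(64);

-- manual migration: a renamed column cannot come from the generated DDL
ALTER TABLE Customer RENAME COLUMN mail TO email;
```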

How to create the DB for my tests?

This one is pretty easy if you know the trick: We just make use of the sql-maven-plugin.

Here is the configuration I use in my project:

The default profile runs against MySQL: it creates the database, imports the testdata and runs all unit tests via surefire. A dedicated property allows skipping the sql plugin and the tests, and we add profiles for Oracle and PostgreSQL accordingly.
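Such a setup might be sketched as follows (driver, URL, credentials and versions are assumptions, not the author’s exact configuration):

```xml
<!-- sketch: recreate the schema before the tests run -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>sql-maven-plugin</artifactId>
  <version>1.5</version>
  <dependencies>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.21</version>
    </dependency>
  </dependencies>
  <configuration>
    <driver>com.mysql.jdbc.Driver</driver>
    <url>jdbc:mysql://localhost/test</url>
    <username>test</username>
    <password>test</password>
  </configuration>
  <executions>
    <execution>
      <id>create-db</id>
      <phase>process-test-resources</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <srcFiles>
          <srcFile>src/main/sql/mysql/createdb.sql</srcFile>
          <srcFile>src/main/sql/mysql/database.sql</srcFile>
          <srcFile>src/main/sql/mysql/createindex.sql</srcFile>
        </srcFiles>
      </configuration>
    </execution>
  </executions>
</plugin>
```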

Whenever you run your build, the database will be freshly set up in the process-test-resources phase. The database will then be exactly as in production!

Guess we are now basically ready to start hacking on our project!

The 2nd part will focus on how to handle JPA stuff in the application code. Stay tuned!


About struberg
I'm an Apache Software Foundation member and Java Champion blogging about Java, µC, TheASF, OpenWebBeans, Maven, MyFaces, CODI, GIT, OpenJPA, TomEE, DeltaSpike, ...

12 Responses to Using JPA in real projects (part 1)

  1. JL Cetina says:

    Thanks Struberg, i will wait for the second part!!!

  2. JL Cetina says:

    Maybe in the 2nd part you can show how to create the entities with metamodel (ReverseMappingTool) with Maven.

    • struberg says:


      Thanks for the feedback!

      I’m afraid the openjpa-maven-plugin doesn’t have the reverse-mapping functionality right now. Previously this was not possible because different OpenJPA versions did this quite differently. Now this is not a problem anymore as we moved the plugin to the OpenJPA project itself. And as the maintainer of the openjpa-maven-plugin I’ve now created a JIRA issue [1] and will try to work on it in the next few months. Any help or feedback is welcome btw!

      Right now you’ll need to use the antrun-maven-plugin for it.



  3. Thanks Mark.
    Can we use the openjpa-maven-plugin with other implementations of JPA, like EclipseLink or Hibernate?
    Can you please attach some source code for this maven structure and JPA stuff for your next article?

    eagerly waiting for the next part 😉

  4. JL Cetina says:

    I did the generator with this:






  5. Anthony Fryer says:

    Nice article. I am interested in your technique for creating the database and generating the ALTER TABLE commands in a script for updating an existing production database. I use dbmaintain to do this and to also create test database schemas. I imagine you could use the two in combination by making the script you generate using OpenJPA a dbmaintain script, by just putting the script in a location dbmaintain knows about.

    One thing about using mysql for your tests is you need to have mysql installed. I use mysql in production, but for unit tests and also for setting up development environments I use hsqldb, which can be downloaded as a maven dependency (and is already provided with TomEE). I find having a zero-install requirement means I can develop on machines that I may not have install permissions on, say for example a Windows 7 machine at a workplace with retentive security policies. I once worked on a machine like this and it was the catalyst for setting up my build with hsqldb, because I couldn’t install mysql.

    Regarding using enhancement, I am also a big fan and have seen big differences in behaviour between enhanced and non-enhanced classes. I don’t trust my test case results when run with non-enhanced classes, so for me it’s a must-have. I recently found an eclipse plugin that will supposedly work with the openjpa-maven-plugin so you can get the enhancement also happening inside eclipse. I haven’t had a chance to test it out yet, but the doco is at

    • struberg says:

      Hi Anthony!

      Thanks for the feedback! Yes, dynamic enhancement can be pretty dirty and especially under Hibernate completely changes the behaviour of the JPA container.

      Regarding your question about how I create my database: I described it in the ‘schema_delta.sql’ paragraph. When I create a new project I set ${openjpa.sql.action} to build. That way I can always generate the full DB schema with mvn openjpa:sql.

      Once the project moves to production the first time, I switch ${openjpa.sql.action} to refresh, thus I only get the ALTER TABLE statements when triggering a new schema generation. OpenJPA does this by looking at the current schema in the database and automatically generating the required changes. I then copy the generated schema changes over to the bottom of my schema_delta.sql file.

      Of course, I need to run this with each of the database vendors I use in my project (oracle, mysql, postgresql atm). But better than fu***g up the database due to auto-DDL 🙂


  6. Heinz Huber says:

    Interesting insights! Looking forward to the second part.

  7. Michiel Vermandel says:

    Hi Struberg.

    Thanks for your great articles.
    Any chance we’ll get to see part 2 in the near future?

