
Dockerizing Oracle Commerce (ATG) 11.1

by Vina Rakotondrainibe | Oracle Commerce Expert and Cloud Deployment Specialist | Paris area

I should call this thread "falling in love with Docker". I started my experiment with JBoss AS 7.1 because it is open source, and found out that it has a JNDI resolution issue with the dynamo: name pattern that you can find pretty much everywhere in ATG files (e.g. the Dynamo messaging system's topics). Once I had my Docker build files set up for AS 7.1, it took me literally 5 minutes to switch my container to EAP 6.1, and it worked! I have never reconfigured an environment so fast.

The specifications

For this test I used the following stack:

  • JBoss EAP 6.1.0
  • JDK 7
  • MySQL
  • All on Ubuntu 14.04
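
For reference, here is a minimal sketch of how such a stack could be baked into the reenter/ubuntu-jboss-eap-6.1 image used below. Package names, paths, and the jboss.sh script are assumptions on my part, and the EAP distribution has to be downloaded from Red Hat beforehand:

cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openjdk-7-jdk unzip
# jboss-eap-6.1.0.zip must already be present in the build context
COPY jboss-eap-6.1.0.zip /tmp/
RUN unzip -q /tmp/jboss-eap-6.1.0.zip -d /opt && rm /tmp/jboss-eap-6.1.0.zip
# a start/stop wrapper script, referenced by the docker run commands below
COPY scripts/jboss.sh /opt/scripts/jboss.sh
EOF
docker build -t reenter/ubuntu-jboss-eap-6.1 .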

Please note that I did not intend to build a production environment, and Docker is not supported by Oracle for Oracle Commerce. I wanted to build a proof of concept showing that it is possible to put a front-end e-commerce application (here CRS 11, the Oracle Commerce Reference Store) into a container and run it just like in the old days on bare metal.

Basically, I made three containers:

  • A MySQL container (started as sketched just after this list)
  • An ATG instance running the full-blown CRS EAR, which exposes a ServerLockManager (SLM) on port 9012
  • An ATG instance meant to be a repeatable member of a load-balanced cluster. Yes, my experiment was not about session replication on JBoss. By the way, I found out that fewer and fewer customers use session replication these days. The main reason is that integrators do not really measure the constraints of its use and run into design mistakes such as memory leaks, unserializable objects, or simply huge session objects, leading to an out-of-memory error or a replication timeout.
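
The database container is the first one to start, since the other two link to it. A minimal sketch of how it might be launched with the official mysql image; the tag, credentials, and schema name below are placeholders, not values from the original setup:

docker run -d --name database \
  -e MYSQL_ROOT_PASSWORD=change_me \
  -e MYSQL_DATABASE=atgcore \
  mysql:5.6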

Configuration management

In nearly 14 years of ATG experience, I had never been able to configure three instances so quickly on the same machine. That is because I could never reuse the same configuration out of the box twice: at the very least, you need to reassign ports in /atg/dynamo/Configuration, right?
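
To make that concrete, here is the kind of per-instance localconfig override you used to need. A hypothetical sketch: the property names are the standard Dynamo ones, but the port values are purely illustrative.

# repeat with different values in each instance's localconfig layer
mkdir -p localconfig/atg/dynamo
cat > localconfig/atg/dynamo/Configuration.properties <<'EOF'
httpPort=8180
rmiPort=8961
drpPort=8951
EOF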

With Docker, that is ancient history. For my experiment, I reused the exact same ATG server configuration everywhere: by just changing which host port the JBoss HTTP port (i.e. 8080) maps to, I ran an SLM and a front end.

Starting the instances

You would start the SLM with this command:

docker run -d -p 8080:8080 -p 9990:9990 -p 9012:9012 \
  -h atg_front_slm --name atg_front_slm \
  -v /home/vrakoton/docker-workspace/runtime/atg/config:/srv/jboss/config \
  -v /home/vrakoton/docker-workspace/runtime/atg/ear/slm:/opt/jboss-eap-6.1/standalone/deployments \
  -v /home/vrakoton/docker-workspace/runtime/atg/log/slm:/var/log/jboss \
  --link database:database \
  reenter/ubuntu-jboss-eap-6.1 /opt/scripts/jboss.sh start

And the first front-end instance like this:

docker run -d -p 8180:8080 \
  --name atg_front1 -h atg_front1 \
  -v /home/vrakoton/docker-workspace/runtime/atg/config:/srv/jboss/config \
  -v /home/vrakoton/docker-workspace/runtime/atg/ear/front1:/opt/jboss-eap-6.1/standalone/deployments \
  -v /home/vrakoton/docker-workspace/runtime/atg/log/front1:/var/log/jboss \
  --link database:database --link atg_front_slm:atg_front_slm \
  reenter/ubuntu-jboss-eap-6.1 /opt/scripts/jboss.sh start

A third instance would start with this command:

docker run -d -p 8280:8080 \
  --name atg_front2 -h atg_front2 \
  -v /home/vrakoton/docker-workspace/runtime/atg/config:/srv/jboss/config \
  -v /home/vrakoton/docker-workspace/runtime/atg/ear/front2:/opt/jboss-eap-6.1/standalone/deployments \
  -v /home/vrakoton/docker-workspace/runtime/atg/log/front2:/var/log/jboss \
  --link database:database --link atg_front_slm:atg_front_slm \
  reenter/ubuntu-jboss-eap-6.1 /opt/scripts/jboss.sh start

And so on. You can notice several things about these shell commands:

  • The -p host_port:container_port options are necessary to reach the instance's pages and the JBoss console to do some checks
  • The --name and -h options give the container a name and set its Linux hostname, so that the references to the SLM, cache invalidation event listeners, etc. remain valid
  • We mount different folders for the JBoss log and deployment volumes of each container. The reason for a separate log folder is obvious. The reason for a per-instance deployment folder may not be clear at first, but it is necessary if you do not want your instances to continuously redeploy the application each time a new one starts: as the .deployed marker file gets touched, it triggers a redeploy order from the deployment scanner (see the sketch after this list)
  • In every command, we link the container to the database container
  • For each front server, we link it back to the SLM so it can connect to it at each repository startup
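
To illustrate that per-instance deployment folder: one deployments directory per instance keeps the scanner's marker files isolated. The directory layout matches the volume mounts above, but the EAR name is hypothetical:

mkdir -p /home/vrakoton/docker-workspace/runtime/atg/ear/{slm,front1,front2}
cp ATGProduction.ear /home/vrakoton/docker-workspace/runtime/atg/ear/front1/
# the .dodeploy marker asks the EAP 6 deployment scanner to deploy the archive
touch /home/vrakoton/docker-workspace/runtime/atg/ear/front1/ATGProduction.ear.dodeploy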

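Once the containers are up, a quick sanity check could look like this; /crs is the Reference Store's default context root, so adjust it if your EAR differs:

docker ps
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8180/crs/
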
Conclusion

It is a shame containers are not officially supported by Oracle in the ATG support matrix, because after more than 14 years working on ATG, this is the first time I have managed to get so many instances running in less than half a day. The longest step was familiarizing myself with Docker: the build process and the command line options. It took me two days to learn it properly.

I don't think there is anything risky about running a pure Java process inside a container, and it would be acceptable on continuous integration platforms (you need to build a RHEL image though). Even Oracle Guided Search can be bundled into a container: as is well explained here (among other things), a container does not add overhead, unlike virtualization. The prerequisite is to run Docker on a RHEL host in order to avoid system library conflicts for the indexing processes.

Now that Docker has released Swarm and Compose, it is time to try a multi-host configuration. You can find the source of this test on GitHub.
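
As a first step in that direction, the topology above could be captured in a Compose file. This is an untested sketch in the v1 syntax of the time, reusing the names and paths from the commands above (only the SLM and one front end are shown):

cat > docker-compose.yml <<'EOF'
database:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: change_me
atg_front_slm:
  image: reenter/ubuntu-jboss-eap-6.1
  command: /opt/scripts/jboss.sh start
  hostname: atg_front_slm
  ports:
    - "8080:8080"
    - "9012:9012"
  volumes:
    - /home/vrakoton/docker-workspace/runtime/atg/config:/srv/jboss/config
    - /home/vrakoton/docker-workspace/runtime/atg/ear/slm:/opt/jboss-eap-6.1/standalone/deployments
  links:
    - database
atg_front1:
  image: reenter/ubuntu-jboss-eap-6.1
  command: /opt/scripts/jboss.sh start
  hostname: atg_front1
  ports:
    - "8180:8080"
  volumes:
    - /home/vrakoton/docker-workspace/runtime/atg/config:/srv/jboss/config
    - /home/vrakoton/docker-workspace/runtime/atg/ear/front1:/opt/jboss-eap-6.1/standalone/deployments
  links:
    - database
    - atg_front_slm
EOF
docker-compose up -d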
