Oracle Commerce Go Live Checklist

by Vina Rakotondrainibe | Oracle Commerce Expert and Cloud Deployment Specialist
Paris area


In this article, we will look at every aspect of your platform to make sure everything is ready for an Oracle Commerce 11.1 go-live.

Technical Requirements

Oracle issues a very detailed technical requirements sheet for each Oracle Commerce version. Versions of web servers, JDK, application servers and databases are listed precisely in a PDF document that you can download from the Oracle support portal. Connect to that portal and search for "Oracle Commerce supported environment" to find the document which corresponds to your version.

There has been a long debate about how up to date the Oracle supported stack really is, because the listed application server versions are sometimes two years old. Customers are tempted to use the latest versions of the underlying product stack because they encounter platform bugs which are fixed in more recent versions.

You must know that anytime you have an issue on your e-commerce platform, Oracle support will refuse to help if you cannot reproduce the issue on an environment running the exact supported versions. This is nearly impossible if, for example, the issue only happens in production under heavy load.

Note: Oracle no longer supports minor versions of application servers, so you need to ensure you have the right one.

Global OS and Platform Considerations

  • Separate logs into a dedicated /var/log partition for the web servers and the application server instances.
  • Make sure the maximum number of open files is high enough for the apache, endeca and jboss users (i.e. at least 5000). You can find the currently set value with the ulimit command.
  • Make sure firewalls are properly configured to allow communication between the web servers and the AJP, HTTP, etc. ports on your application servers (depending on the protocol you use for load balancing). It is important to close all other ports between your N-tier layers for security reasons. This adds constraints, but you will be glad to have a secure platform when hackers attack your site.
  • Make sure firewalls also control the network communication between your application servers and your databases. The heart of your business is at stake.
  • Make sure root cannot log in to your systems; only sudoers should.
  • Make sure log rotation is in place for all log files.
  • Use tools like Chef, Ansible or Puppet to automate your configuration. An Oracle Commerce platform is complicated to deploy and maintain without automation.
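The open-file check above is easy to script. A minimal sketch, to be run as each service account (apache, endeca, jboss) after applying your limits changes; the 5000 floor is the value suggested in the list:

```shell
#!/bin/sh
# Sanity-check the soft open-file limit for the current user.
# Run once per service account after editing limits.conf.
required=5000
current=$(ulimit -Sn)
case "$current" in
    unlimited)
        echo "OK: open-file limit is unlimited" ;;
    *)
        if [ "$current" -lt "$required" ]; then
            echo "WARN: open-file limit $current is below $required"
        else
            echo "OK: open-file limit is $current"
        fi ;;
esac
```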


Web Servers

  • Make sure static resources are served by the web servers (especially if your licensing model is the per-hit licence model).
  • Use mod_jk for JBoss EAP 6.0 if you do not use the official Red Hat mod_cluster load balancer (which requires buying a license). We found that the open source version had issues detecting node subscription/unsubscription in the balancer's pool, and some glitches in domain monitoring. Oracle Commerce is agnostic about the load balancing technology/protocol as long as the application server vendor supports it.
  • Make sure you have enough workers on Apache to serve the traffic. The total number of workers across all web servers should match the number of requests all your application server instances can serve.
  • Configure sticky sessions.
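As a sketch of the mod_jk setup above, sticky sessions are enabled in workers.properties; worker names, hosts and ports below are hypothetical and must match the jvmRoute of your JBoss instances:

```properties
# workers.properties (sketch) - mod_jk load balancer with sticky sessions
worker.list=lb
worker.lb.type=lb
worker.lb.balance_workers=node1,node2
worker.lb.sticky_session=true

# One AJP worker per application server instance (example hosts/ports)
worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=app2.example.com
worker.node2.port=8009
```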

Oracle Commerce Platform

  • If you still have a per-CPU licence (the new Oracle Commerce licence model is now based on a per-request fee), you will pay for the total number of CPUs on your bare metal machines even if you do not use them all. Oracle applies this policy to all virtualization solutions unless you use the Oracle VM technology.
  • Make sure you have an EAR for each application (i.e. one for Commerce, one for SLM, one for CSC, etc.). The single-EAR approach is the wrong path: the EAR is bigger, and you do not know what is activated in it and what is not, even with experienced ATG developers.
  • Install a separate EAR for the Server Lock Managers (SLM) with the minimal modules in it (the DAS module should do).
  • Make sure the SLMs are configured and that all instances can access them. You will need at least three groups of SLMs: one each for the front-office, CSC and BCC clusters. There can be a maximum of two SLM instances (working in failover mode) per group.
  • Make sure external configuration is managed separately from the EAR content. To do this, pass a JVM argument telling ATG where the external configuration files are stored.
  • Check that scenarioManager.xml (on all instances), internalScenarioManager.xml (on BCC or CSC instances) and workflowProcessmanager.xml (on the BCC instance) are present in your external configuration and set properly for each environment.
  • Make sure you have the capacity to manage instances with large heaps if you choose to allocate a significant amount of memory. Taking a heap dump of a large heap takes much longer than of a small one, and during that time your cluster must be able to cope with the load on fewer instances. Sometimes you are better off with more instances and less heap memory each. Not to mention that to read a 16 GB heap dump, you need almost that amount of RAM on your developer machine.
  • Your application delivery procedure should use a rolling restart or blue/green strategy, not a stop-all-instances approach.
  • Make sure session replication is in place if you do not want to lose navigation and shopping cart contexts during rolling restarts. Be aware that session replication consumes heap memory and network bandwidth, and requires tuning of the strategy, the frequency and your application's session footprint.
  • Make sure you have a mechanism to clean up incomplete orders.
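To illustrate the SLM wiring above: each instance points its ClientLockManager at the SLM group through the external configuration layer. A minimal sketch, assuming a two-instance failover group (host names and the port are examples, adjust to your topology):

```properties
# localconfig/atg/dynamo/service/ClientLockManager.properties (sketch)
useLockServer=true
# Both SLM instances of the group, so the client can fail over
lockServerAddress=slm1.example.com,slm2.example.com
lockServerPort=9010,9010
```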

Oracle Guided Search

  • Install your Endeca applications under /srv/endeca/apps (and not under /opt as shown in all the Endeca examples). Server folder hierarchy conventions have evolved since those documents were written.
  • Reconfigure all Endeca services to write their logs to /var/log/endeca. This can be done in the application topology XML configuration files delivered with your application AND in the Tomcat log configuration files for CAS, Tools and Frameworks, and Platform Services.
  • Make sure your dgraphs are in different restart groups and load balanced properly.
  • By default, index generation is performed in parallel, so depending on the number of applications, make sure you have enough memory to run the N dgidx processes (one per language, per Oracle's best practices) in parallel on your ITL servers. You can find the memory used by a dgidx in the Dgidx.log file under each application's log folder.
  • MDEXes (i.e. dgraph processes) should run on dedicated servers. You should have at least two server boxes (virtual or physical) to allow failover and high availability. MDEXes for preview and for live should not be on the same servers: you do not want issues in the authoring cluster to impact your production operation.
  • Increase the number of threads per dgraph based on the number of cores available on your servers (ideally 1 thread = 1 core). With the power of today's machines, you can allocate more threads than cores; 1 thread = 1 core is a conservative rule.
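The dgraph thread count above is typically set as a dgraph argument in the EAC application configuration. A hypothetical AppConfig.xml fragment, assuming the standard deployment template structure (the value 8 is an example to match an 8-core box):

```xml
<!-- AppConfig.xml fragment (sketch): pass --threads to all dgraphs -->
<dgraph-defaults>
  <args>
    <arg>--threads</arg>
    <arg>8</arg>
  </args>
</dgraph-defaults>
```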


Database

  • Use an Oracle database on Linux for highly loaded applications: the other supported databases are difficult to tune and will not cope easily with heavy transactional load. I also seriously doubt that Oracle Commerce is as thoroughly tested on these alternative databases as it is on Oracle database.
  • Make sure you have enough sessions on your database to support all the connections coming from all your application server instances (the total number of sessions should at least equal the sum of the maximum connections set on all datasources).
  • Use shared server connections on Oracle. Dedicated connections are not really required and can be resource-consuming on the database machines if you have many instances.
  • Separate your databases into different instances (at least one for the core commerce schema, one for the switching schemas and one for the back office schemas) or use Oracle RAC (not at the same price, though). Working on schema separation and/or database clustering is important if you want flexibility during maintenance.
  • Make sure database backups are in place and that at least a failover mechanism is available (Data Guard and/or RAC).
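The session sizing above can be verified directly on the database. For example, on Oracle:

```sql
-- Configured ceilings for sessions and processes
SELECT name, value FROM v$parameter WHERE name IN ('sessions', 'processes');

-- Sessions currently open: this should stay below the ceiling, with
-- headroom for the sum of all datasource maximum pool sizes.
SELECT COUNT(*) AS current_sessions FROM v$session;
```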