Note:

You are reading the Adobe Experience Manager 6.x version of performance tuning tips. This article is also available for version 5.x.

Response time is slow for editing and visitor requests

Response time is poor when authors edit content, or websites respond slowly to visitor requests.

Cause

The following factors influence performance problems in AEM:

  • Improper design
  • Application code
  • Lack of caching
  • Bad disk I/O configuration
  • Memory sizing
  • Network bandwidth and latency
  • AEM installed on certain Windows Server 2008 and 2012 versions where memory management is an issue

Modifying the out-of-the-box configuration as described below can help improve AEM performance.

Preventing performance issues

Here are some steps you can take to ensure that you find and fix performance issues before they have an impact on your users:

  1. Implement and execute load tests that simulate realistic scenarios in both author and publish instances. Researching and defining expected load is a crucial step in this process. This step helps you demonstrate whether the AEM application, architecture, and AEM installation will perform well once it is live in a production environment.

    Results of this exercise help determine whether a misconfiguration, application issue, sizing or hardware problem, or other issue is affecting system performance. See also the performance guidelines and monitoring guidelines.

  2. In addition to load testing, stress testing helps to define the maximum load the system can handle. This test can help you prepare for traffic spikes. More information on performance testing can be found here.

  3. Install the recommended AEM service packs, cumulative fix packs and hot fixes:

    https://helpx.adobe.com/experience-manager/aem-releases-updates.html

  4. If you are using Windows server, then review this article.

  5. If you are planning to load large amounts of Assets (images, videos, and so on) into AEM then make sure you apply the Assets best practices.

  6. Provision enough RAM and avoid IO saturation

    If you intend to run production at any scale, provision the Linux environment with as much RAM as the segment tar files will grow to between offline compaction runs (or at online compaction peaks). In addition, the following measures help avoid I/O saturation (a combined shell sketch follows item 7 below):

    • Separate OS/Data/Logging disks
    • Mount Data disks with noatime
    • Set readahead buffers to less than 32 on the data disks.
    • Ideally, use xfs rather than ext4 on the data disks.
    • If running RedHat in a VM, make certain the entropy pool is always above 1K bits (use rng-tools if necessary).
  7. Disable Transparent Huge Pages on Linux

    AEM performs fine-grained reads/writes, while Linux Transparent Huge Pages are optimized for large operations, so it is recommended to disable Transparent Huge Pages when using Mongo or Tar storage.
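
    A minimal shell sketch covering the disk and Transparent Huge Pages recommendations above, assuming a RHEL/CentOS-style system and a hypothetical data disk /dev/sdb1 mounted at /mnt/aem-data (device names, mount points, and sysfs paths are placeholders and vary by distribution; on RHEL 6 the THP path is under redhat_transparent_hugepage):

    # /etc/fstab entry: mount the data disk with noatime (xfs preferred over ext4)
    /dev/sdb1  /mnt/aem-data  xfs  defaults,noatime  0 0

    # Set the readahead buffer below 32 sectors on the data disk (not persistent across reboots)
    blockdev --setra 16 /dev/sdb1

    # Check and disable Transparent Huge Pages
    cat /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag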

Enabling transient workflows

Transient workflows can be used for any workflows that:

  • are run often.
  • do not need the workflow history.

They will generate a performance boost in those situations.

This use case typically arises with high-volume asset ingestion.

Follow the procedure documented on https://helpx.adobe.com/experience-manager/6-4/assets/using/performance-tuning-guidelines.html#Workflows
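
The linked procedure enables this through the workflow model editor. As a rough repository-level sketch only (both the model path and the Boolean transient property name are assumptions about how the Transient Workflow option is persisted in AEM 6.x; verify them against the linked guide for your version):

    /etc/workflow/models/dam/update_asset/jcr:content
      - transient = true (Boolean)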

Tuning Sling Job Queues

Bulk upload of large assets is typically a very resource-intensive process. By default, the number of concurrent threads per job queue is equal to the number of CPU cores. This default can cause an overall performance impact and high Java heap consumption.

Adobe recommends that you do not exceed 50% of the CPU cores. To adjust this value, go to the following:

http://<host>:<port>/system/console/configMgr/org.apache.sling.event.jobs.QueueConfiguration

Set queue.maxparallel to a value that represents 50% of the CPU cores of the server that hosts your AEM instance. For example, for 8 CPU cores, set the value to 4.  
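
Alternatively, the same change can be shipped as an OSGi configuration file picked up from the crx-quickstart/install folder instead of being edited in the web console. This is only a sketch of the deployment pattern: the file suffix and the queue name are illustrative assumptions, and the factory configuration you adjust must correspond to the queue that processes your upload jobs.

    # Hypothetical file: crx-quickstart/install/org.apache.sling.event.jobs.QueueConfiguration-myqueue.cfg
    queue.name=Granite Workflow Queue
    queue.maxparallel=4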

Tuning your Oak Repository

First make sure that you have the latest Oak version installed for your AEM 6 instance. Check the recommended hot fixes page mentioned above.

Oak Query Engine/Index optimizations

  1. Create custom oak indexes for all frequently used search queries.

    • For information on how to analyze slow queries, see this article.
    • Create the custom indexes under the oak:index node for all properties that you search on, by following this article.
    • For each custom Lucene-based index, set the includedPaths (String[]) property to restrict the index to certain content paths. Then restrict applicable searches to those paths (a minimal index sketch follows the note at the end of this section).
  2. JVM parameters

    Add these JVM parameters to the AEM start script to prevent expensive queries from overloading the system. Note that these are the default values starting with AEM 6.3.

    • -Doak.queryLimitInMemory=500000 (see also Oak documentation)
    • -Doak.queryLimitReads=100000 (see also Oak documentation)
    • -Dupdate.limit=250000 (only for DocumentNodeStore, eg. MongoMK, RDBMK)

    The following option may also improve performance, but it changes the meaning of the result size call. Specifically, only query restrictions that are part of the index being used are considered when calculating the size.

    Additionally, ACLs are not applied to the results, so nodes that are not visible to the current session are still included in the returned count. As such, the returned count can be higher than the actual number of results, and the accurate count can only be determined by iterating through the results.

  3. Lucene index configuration

    Open /system/console/configMgr/org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProviderService and

    • enable CopyOnRead (enabled by default since AEM 6.2)
    • enable CopyOnWrite (enabled by default since AEM 6.2)
    • enable Prefetch Index Files (enabled by default since AEM 6.2)

    See http://jackrabbit.apache.org/oak/docs/query/lucene.html for more information about the available parameters

    Because some paths do not need to be indexed, you can do the following:

    In CRXDE Lite, go to /oak:index/lucene and set a multivalue string property (String[]) named excludedPaths with these values: /var, /etc/workflow/instances, /etc/replication.

  4. Data Store

    If you use AEM Assets or have an AEM application that uses binary files extensively, Adobe recommends that you use an external datastore. Using an external datastore helps ensure maximum performance. See the documentation for detailed instructions.

    When using a FileDataStore, tune cacheSizeInMB to a percentage of your available heap. A conservative value is 2% of the max heap.  For example, for an 8 GB heap:

    maxCachedBinarySize=1048576
    cacheSizeInMB=164

    Note that maxCachedBinarySize is set to 1 MB (1048576). As such, it only caches files that are a maximum of 1 MB.  Tuning this setting to a smaller value may also make sense.

    When dealing with a large number of binaries, you want to maximize performance. Therefore, Adobe recommends that an external datastore is used instead of the default node store. In addition, Adobe recommends that you tune the following parameters (an example configuration file is sketched after the note below):

    • maxCachedBinarySize=10485760
    • cacheSizeInMB=4096

Note:

The cacheSizeInMB setting can cause the Java process to run out of memory if it is set too high. For example, if you have the max heap size set to 8 GB (-Xmx8g) and you expect AEM and your application to utilize a combined heap of 4 GB, then it makes sense to set cacheSizeInMB to 82 instead of 164. A value in the range of 2-10% of the max heap is a safe configuration. However, Adobe recommends that you validate setting changes with load testing while also monitoring memory utilization.
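
For reference, both data store parameters above typically live in the FileDataStore OSGi configuration. The sketch below assumes the standard org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore PID described in the AEM data store documentation; the path and minRecordLength values are placeholders that do not come from this article, and the exact quoting depends on whether you use a .cfg or .config file.

    # crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config
    path=./crx-quickstart/repository/datastore
    minRecordLength=4096
    maxCachedBinarySize=10485760
    cacheSizeInMB=4096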
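
To make the includedPaths advice in step 1 above concrete, here is a minimal property sketch of a custom Lucene index definition; the index name, indexed property, and content path are illustrative only, and the full structure is described in the Oak Lucene documentation linked earlier.

    /oak:index/myCustomIndex
      - jcr:primaryType = "oak:QueryIndexDefinition"
      - type = "lucene"
      - async = "async"
      - compatVersion = 2 (Long)
      - includedPaths = ["/content/my-site"] (String[])
      + indexRules/nt:base/properties/myProperty
          - name = "myProperty"
          - propertyIndex = true (Boolean)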

Mongo storage tuning

  1. MongoBlobStore cache size: The blobstore is used to store and read large binary objects. Internally, the store is implemented with a cache and splits binaries into relatively small blocks (data, hash code, or indirect hash) so that each block fits in memory.

    In a default setup, the MongoBlobStore uses a fixed cache size of 16 MB. For deployments where more RAM is available and blob storage is frequently accessed (for example, when the Lucene index is large), increase the cache size. This cache size only applies when you use the MongoBlobStore (default), not when using an external blobstore.

    • You can configure the cache size (in MB) by way of blobCacheSize setting on DocumentNodeStoreService.
      For example: blobCacheSize=1024
    • Please also review AEM-MongoDB checklist.
  2. Document cache size: To optimize the performance of reading nodes from MongoDB, tune the cache sizes of the DocumentNodeStore. The default cache size is 256 MB, which is distributed among the various caches used in the DocumentNodeStore. See http://jackrabbit.apache.org/oak/docs/nodestore/documentmk.html#cache

    • You can configure the cache size (MB) by way of the cache setting on DocumentNodeStoreService. For example, cache=2048.
    • Set all of the following cache configurations in crx-quickstart/install/org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.config and then load test with various values to see what the optimal configuration is for your environment. The remaining cache percentage is given to the document cache (a combined configuration example follows this list):
      • cache=2048
      • nodeCachePercentage=35
      • childrenCachePercentage=20
      • diffCachePercentage=30
      • docChildrenCachePercentage=10
    • With the above configuration, the percentages total 95%.  The remaining 5% of the cache is given to documentCache.
        documentCache = cache - nodeCache - childrenCache - diffCache - docChildrenCache
    • While distributing the cache percentages, ensure that the cache left for documentCache is not very large. That is, keep it to roughly 500 MB or less; a large documentCache can increase the time taken to perform cache invalidation.
  3. Cache settings in AEM 6.2 with Oak 1.4.x:

    • In AEM 6.2, changes were made to the way these cache settings work. In AEM 6.2 with Oak 1.4, there is a new cache: prevDocCache.  You can configure this cache using the setting prevDocCachePercentage. Default is 4.
    • The documentCache uses the remaining cache MB (cache setting minus the size of all other caches):
        documentCache = cache - nodeCache - childrenCache - diffCache - docChildrenCache - prevDocCache
  4. Implement the MongoDB production checklist
    http://docs.mongodb.org/manual/administration/production-checklist/ - according to MongoDB support, many of the items there have a large impact on performance. For any questions, contact MongoDB Support directly.

  5. Read performance: Add this query string parameter to your MongoDB URL on each AEM node:  ?readPreference=secondaryPreferred
    This parameter tells the driver to read from secondary replica set members where possible, which gives some added read performance (it is also included in the combined configuration sketch after this list).

  6. Increase thread pool for oak-observation: Open /system/console/configMgr/org.apache.sling.commons.threads.impl.DefaultThreadPool.factory
    Set the name to oak-observation and set the min and max pool size to 20 (an example factory configuration file follows this list).

  7. Increase observation queue length: Create a file named com.adobe.granite.repository.impl.SlingRepositoryManager.cfg containing the parameter oak.observation.queue-length=50000. Place it under the crx-quickstart/install folder.

  8. Avoid long-running queries: Set the system property -Doak.mongo.maxQueryTimeMS=60000 in the JVM parameters to avoid queries running longer than 1 minute.
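
Several of the settings above end up in the same DocumentNodeStoreService configuration file mentioned in step 2. A combined sketch, with placeholder MongoDB host and database names (the exact quoting and typing of values depends on whether a .cfg or .config file is used):

    # crx-quickstart/install/org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService.config
    mongouri=mongodb://mongo1:27017,mongo2:27017/?readPreference=secondaryPreferred
    db=aem
    cache=2048
    nodeCachePercentage=35
    childrenCachePercentage=20
    diffCachePercentage=30
    docChildrenCachePercentage=10
    blobCacheSize=1024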
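
Similarly, the oak-observation thread pool change from step 6 can be deployed as a Sling Commons Threads factory configuration; the file suffix is arbitrary, and the property names below (name, minPoolSize, maxPoolSize) should be verified against the factory configuration shown in the web console for your version.

    # crx-quickstart/install/org.apache.sling.commons.threads.impl.DefaultThreadPool.factory-oakobservation.config
    name=oak-observation
    minPoolSize=20
    maxPoolSize=20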

Tar storage tuning

Micro kernels do not use memory-mapped files directly; however, the JDK internally uses memory-mapped files for efficient reading. Certain Windows 64-bit operating systems can fail to clean up memory-mapped files and consume all the native OS memory. Make sure to install the performance-related patches/hotfixes from Microsoft (see KB 2731284) and Oracle.

  • An alternative option is to disable memory-mapped mode by adding tarmk.mode=32 to SegmentNodeStoreService.config until the operating system issue is resolved. The downside is that disabling memory mapping makes I/O more intensive; make sure to raise the I/O page lock limit.

TarMK revision clean (compaction)

For AEM 6.2, Adobe recommends that you use offline compaction by way of the oak-run tool. Please also review the AEM 6.x Maintenance Guide.
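
A minimal command-line sketch of an offline compaction run; the oak-run version, heap size, and repository path are placeholders, so use the oak-run version that matches your Oak version and stop the AEM instance before running it:

    # List and remove unreferenced checkpoints, then compact the segment store
    java -Xmx4g -jar oak-run-<version>.jar checkpoints crx-quickstart/repository/segmentstore
    java -Xmx4g -jar oak-run-<version>.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
    java -Xmx4g -jar oak-run-<version>.jar compact crx-quickstart/repository/segmentstore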

Starting with AEM 6.3, online compaction (also known as online revision cleanup) is enabled by default. See this page for more information. Please also visit the online revision cleanup FAQ.

Cloud Manager for Adobe Managed Services (AMS) customers

Cloud Manager (AMS customers only) helps customers ensure successful AEM deployments through guided performance testing and autoscaling.

What to do next?

For further help concerning your AEM performance issues:

  1. Join the discussion at the Adobe forums for any question or feedback.  

  2. Get official support

    1. Collect the following data:
      1. Profiler output
      2. Thread Dumps
      3. All the log files
    2. Contact AEM Customer Care
