This article describes common asset tuning configurations and solutions that help optimize asset performance.
- Adobe recommends enabling HTTPS because many organizations have firewalls that sniff HTTP traffic, which adversely impacts uploads and corrupts files.
- For large file uploads, prefer wired connections over wireless.
- Disable full-text search for binary files via the Tika configuration of the Oak index.
- Set optimal JVM parameters, such as the following, and use Java 8:
  - -XX:+UseConcMarkSweepGC : enable the Concurrent Mark Sweep (CMS) collector
  - -Doak.queryLimitInMemory=500000
  - -Doak.queryLimitReads=100000
  - -Dupdate.limit=250000
  - -Doak.fastQuerySize=true
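A minimal sketch of how these flags might be wired into a start script; the heap size (-Xmx8g) and the CQ_JVM_OPTS variable name are assumptions to illustrate, not values from this article. Size the heap to your server:

```shell
# Hypothetical start-script fragment (e.g. in crx-quickstart/bin/start).
# -Xmx8g is an assumed heap size; the remaining flags are those listed above.
CQ_JVM_OPTS="-Xmx8g \
 -XX:+UseConcMarkSweepGC \
 -Doak.queryLimitInMemory=500000 \
 -Doak.queryLimitReads=100000 \
 -Dupdate.limit=250000 \
 -Doak.fastQuerySize=true"
export CQ_JVM_OPTS
```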
- Tuning the Sling job queues: Bulk uploads of large assets can be very resource intensive. By default, the number of concurrent threads per job queue equals the number of CPU cores, which can degrade overall performance and drive up Java heap consumption. Adobe recommends not exceeding 50% of the cores. To change this value, go to http://host:port/system/console/configMgr/org.apache.sling.event.jobs.QueueConfiguration and set queue.maxparallel to 50% of the CPU cores of the server hosting your AEM instance (e.g., for 8 CPU cores, set the value to 4).
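The calculation above can be sketched as follows; getconf is a common way to count cores on Linux/macOS, and the commented curl line uses placeholder host/credentials (localhost:4502, admin:admin) that are assumptions, not part of this article:

```shell
# Sketch: derive 50% of the host's CPU cores for queue.maxparallel.
CORES=$(getconf _NPROCESSORS_ONLN)
MAX_PARALLEL=$(( CORES / 2 ))
if [ "$MAX_PARALLEL" -lt 1 ]; then MAX_PARALLEL=1; fi   # keep at least one worker

# Apply via the OSGi console (illustrative only; on a factory configuration
# you would target the specific config instance PID):
# curl -u admin:admin -d apply=true -d "queue.maxparallel=$MAX_PARALLEL" \
#   "http://localhost:4502/system/console/configMgr/org.apache.sling.event.jobs.QueueConfiguration"
echo "$MAX_PARALLEL"
```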
- Tune the cache size for CQBufferedImageCache: Consider a system with a max heap (-Xmx parameter) of 5 GB, an Oak BlobCache set at 1 GB, and a Document cache set at 2 GB. In this case, the buffered cache would take a maximum of 1.25 GB, leaving only 0.75 GB of memory for unexpected spikes; eventually, the JVM fails with OutOfMemoryErrors. To solve the problem, reduce the configured max size of the buffered image cache. When uploading large numbers of assets to Adobe Experience Manager, tune the buffered cache size via the OSGi Web Console:
  1. Go to http://host:port/system/console/configMgr/com.day.cq.dam.core.impl.cache.CQBufferedImageCache
  2. Set the property cq.dam.image.cache.max.memory in bytes. For example, 1073741824 is 1 GB (1024*1024*1024 = 1 GB).
  Note: From AEM 6.1 SP1, if you use a sling:osgiConfig node to configure this property, make sure to set the data type to Long.
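The byte arithmetic for cq.dam.image.cache.max.memory can be sketched as below; 1 GB is the article's example size, and the cache should be budgeted against your max heap:

```shell
# Sketch: compute the cq.dam.image.cache.max.memory value in bytes.
CACHE_GB=1
CACHE_BYTES=$(( CACHE_GB * 1024 * 1024 * 1024 ))
echo "$CACHE_BYTES"   # 1073741824 for 1 GB
```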
- When dealing with large numbers of binaries, Adobe recommends using an external data store instead of the default node store to maximize performance, and tuning the following parameters. When using the FileDataStore, tune cacheSizeInMB to a percentage of your available heap (a conservative value is 2% of the max heap). For example, for an 8 GB heap:
  maxCachedBinarySize=1048576
  cacheSizeInMB=164
  Note that maxCachedBinarySize is set to 1 MB (1048576) so that only files of at most 1 MB are cached; tuning it to a smaller value can make sense.
Caution: The cacheSizeInMB setting can cause the Java process to run out of memory if it is set too high. For example, if the max heap size is 8 GB (-Xmx8g) and you expect AEM and your application to use a combined heap of 4 GB, then it makes sense to set cacheSizeInMB to 82 instead of 164. A value in the range of 2-10% of the max heap is a safe configuration; however, it is highly recommended to validate changes to these settings with load testing while monitoring memory utilization.
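As a sketch, these settings could be installed as an OSGi configuration file; the file name below is the standard FileDataStore PID, the path value is a typical default, and plain key=value notation is used for readability (the exact typed .config syntax depends on how you install configurations):

```
# org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config (illustrative)
path=./crx-quickstart/repository/datastore
maxCachedBinarySize=1048576
cacheSizeInMB=164
```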
- The DAM Update Asset workflow contains a full suite of steps that are configured for tasks, such as Scene7 PTIFF generation and InDesign Server integration. However, most users may not require several of these steps. Adobe recommends you create a custom copy of the DAM Update Asset workflow model, and remove any unnecessary steps. In this case, update the launchers for DAM Update Asset to point to the new model.
- Transient workflows: To optimize for high ingestion loads, Adobe suggests switching the DAM Update Asset and XMP Metadata Writeback workflows to transient workflows. As the name implies, runtime data for the intermediate steps of a transient workflow is not persisted in the JCR when it runs (the output renditions are persisted, of course). This yields roughly a 10% reduction in workflow processing time, significantly reduces repository growth, reduces the number of TAR files to compact, and removes the need to purge these workflow instances. If your business requires persisting or archiving workflow runtime data for audit purposes, do not enable this feature.
- Selective rendition generation: Only generate the renditions you need by adding conditions to the asset processing workflow, so that more costly renditions are generated only for select assets (for example, in the Process Thumbnails step of the DAM Update Asset workflow model).
- Shared data store among instances: Implementing an S3 or shared file data store can save disk space and increase network throughput in large-scale implementations. However, maintaining such a deployment involves additional tasks, which can be a worthwhile tradeoff for better performance.
- Maintenance: Normally, run purging workflows on a weekly basis. In resource-intensive scenarios, such as wide-scale asset ingestion, run them more frequently.
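As an illustration, a scheduled purge can be defined as a factory instance of the Adobe Granite Workflow Purge Configuration in the OSGi console; the property names and values below are assumptions to be verified against your AEM version, shown in plain key=value notation:

```
# com.adobe.granite.workflow.purge.Scheduler-assets.config (illustrative)
scheduledpurge.name=Weekly asset workflow purge
scheduledpurge.workflowStatus=COMPLETED
scheduledpurge.daysold=7
```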