AEM 6.x Maintenance Guide

This article provides details on various maintenance activities for AEM 6.x.  AEM Oak repository maintenance is critical to keeping the system performant.  Lack of maintenance causes instability, performance issues, and disk space outages.

Oak Tar SegmentNodeStore Maintenance

Oak Mongo Storage Maintenance

Binary File (BLOB Store or DataStore) Maintenance

Workflow Purge Maintenance

Version Purge Maintenance

Lucene Binaries Cleanup

Audit Log Purge Maintenance

Maintenance Activities for AEM 6.x Instances

Oak Tar SegmentNodeStore Maintenance

The SegmentNodeStore is the default Oak storage mechanism that ships with AEM. This persistence layer writes data to tar files under crx-quickstart/repository/segmentstore. The storage is an append-only format, so when content is created, updated, or deleted, the files in the segmentstore folder only grow. Over time, this can affect the performance and stability of AEM. To reclaim disk space and improve performance, you need to run tar compaction, also known as "Revision Cleanup", maintenance.  Revision Cleanup can be run both offline (with AEM stopped) and online (with AEM running); however, Online Revision Cleanup is only safe for use in AEM 6.3 and later versions. See the related documentation for your AEM version.

Comparison of Offline vs. Online Compaction

The table below provides a quick reference on the differences between online and offline revision cleanup:

Offline Compaction                 | Online Compaction
-----------------------------------|---------------------------------------------------------------
AEM 6.0 - 6.3                      | AEM 6.3 and later versions
Downtime required                  | No downtime
Cleans all old revisions           | Cleans revisions prior to the last time online compaction ran
Must run to completion             | Runs during the maintenance window
Runs faster                        | Performance restricted by system activities
Recommended to run bi-weekly       | Runs daily (default 2AM - 5AM)

For AEM versions 6.0 through 6.2, Online Revision Cleanup must be disabled; see this article for steps on disabling it. In addition, there are known scenarios where Offline Revision Cleanup can fail. See this article for how to avoid those issues and improve the performance of offline compaction.
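Offline compaction is run with the oak-run command-line tool while AEM is fully stopped, typically listing and removing unreferenced checkpoints before compacting. The sketch below builds that command sequence; it assumes an oak-run jar whose version matches your installed Oak release, and the heap size and helper names are illustrative (exact oak-run options vary by version, so check the documentation for yours):

```python
import subprocess

SEGMENTSTORE = "crx-quickstart/repository/segmentstore"

def oak_run_steps(jar="oak-run.jar", heap="-Xmx8g"):
    """Build the typical offline-compaction command sequence (a sketch)."""
    steps = [
        ["checkpoints", SEGMENTSTORE],                     # list existing checkpoints
        ["checkpoints", SEGMENTSTORE, "rm-unreferenced"],  # remove unreferenced checkpoints
        ["compact", SEGMENTSTORE],                         # rewrite tar files, purging old revisions
    ]
    return [["java", heap, "-jar", jar, *step] for step in steps]

def compact_offline():
    # AEM must be completely stopped before any of these commands run.
    for cmd in oak_run_steps():
        subprocess.run(cmd, check=True)
```

Running `compact_offline()` from the AEM installation directory executes the three steps in order; `check=True` aborts the sequence if any step fails.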

Recommended Schedule

On AEM 6.0 through AEM 6.2, it is recommended to run offline compaction bi-weekly.  On AEM 6.3, offline compaction is not required.  Instead, it is recommended to monitor online compaction, which is scheduled (by default) to run daily from 2AM - 5AM.

Sample Log Output

Here are sample log messages showing successful runs of Revision Cleanup.


Offline Revision Cleanup for AEM 6.2 / Oak:

Apache Jackrabbit Oak 1.4.13
Compacting crx-quickstart/repository/segmentstore
    before
        Fri Oct 14 12:19:30 EDT 2016, data00000a.tar
        Wed May 17 10:37:46 EDT 2017, data00001b.tar
        Fri Oct 14 12:20:22 EDT 2016, data00002a.tar
        Wed May 17 10:37:45 EDT 2017, data00011b.tar
        Sun Jul 16 16:12:39 EDT 2017, journal.log
        Fri Oct 14 12:19:24 EDT 2016, repo.lock
    size 7.7 GB (7712731491 bytes)
    -> compacting
    -> cleaning up
    -> removed old file data00074a.tar
    -> removed old file data00073a.tar
    -> removed old file data00072a.tar
    -> removed old file data00018b.tar
    -> writing new journal.log: a838c3e9-613f-4095-abba-939c437882e7:59384 root
    after
        Fri Oct 14 12:19:30 EDT 2016, data00000a.tar
        Wed May 17 10:37:46 EDT 2017, data00001b.tar
        Wed May 17 10:37:46 EDT 2017, data00003b.tar
        Wed May 17 10:37:45 EDT 2017, data00004b.tar
        Mon Jul 17 11:11:28 EDT 2017, journal.log
        Fri Oct 14 12:19:24 EDT 2016, repo.lock
    size 6.4 GB (6385295920 bytes)
    removed files [data00057a.tar, data00065a.tar, data00020b.tar, data00018b.tar, data00050b.tar, data00073a.tar, data00058a.tar, data00069a.tar, data00060a.tar, data00063a.tar, data00074a.tar, data00066a.tar, data00055a.tar, data00062a.tar, data00036b.tar, data00070a.tar, data00068a.tar, data00072a.tar, data00067a.tar, data00049b.tar, data00061a.tar, data00056a.tar, data00064a.tar, data00059a.tar]
    added files [data00050c.tar, data00065b.tar, data00073b.tar, data00056b.tar, data00072b.tar, data00066b.tar, data00069b.tar, data00063b.tar, data00018c.tar, data00058b.tar, data00060b.tar, data00074b.tar, data00020c.tar, data00059b.tar, data00070b.tar, data00062b.tar, data00061b.tar, data00036c.tar, data00075a.tar, data00057b.tar, data00049c.tar]
Compaction succeeded in 21.76 s (21s).


Online Revision Cleanup on AEM 6.3:

TarMK GC #2: started
TarMK GC #2: ...
TarMK GC #2: estimation started
TarMK GC #2: estimation completed in 6.373 ms (6 ms). Segmentstore size has increased since the last garbage collection from 417.9 MB (417905664 bytes) to 844.2 MB (844169728 bytes), an increase of 426.3 MB (426264064 bytes) or 102%. This is greater than sizeDeltaEstimation=100.0 MB (100000000 bytes), so running garbage collection
TarMK GC #2: compaction started, ...
TarMK GC #2: estimated number of nodes to compact is 442708, based on 442307 nodes compacted to 417905664 bytes on disk in previous compaction and current size of 418285056 bytes on disk.
TarMK GC #2: compaction cycle 0 completed in 52.96 s (52956 ms). Compacted ff940e56-3a69-4721-ab80-74210b7beae9.00000034 to 8ddf7c10-1af6-41f8-aa99-6b750e09e00b.00000533
TarMK GC #2: ...
TarMK GC #2: compaction succeeded in 53.07 s (53072 ms), after 1 cycles
TarMK GC #2: cleanup started.
TarMK GC #2: current repository size is 530.8 MB (530752512 bytes)
cleanup marking files for deletion: ...
cleanup completed in 1.584 s (1584 ms). Post cleanup size is 223.9 MB (223906816 bytes) and space reclaimed 306.8 MB (306845696 bytes).
Removed files: ...
TarMK closed: ...

Oak Mongo Storage Maintenance

Some customers have chosen to use MongoDB (DocumentNodeStore) as the storage platform for Oak instead of the default Tar SegmentNodeStore.  This storage platform provides high availability via clustering of AEM instances.  As with Tar storage, all writes, including deletes, cause more data to be written.  To clean up old, unneeded revisions of data, run DocumentNodeStore Revision Garbage Collection (also known as "Revision Cleanup").  This maintenance is configured on all AEM versions to run automatically starting at 2AM every day, via the same maintenance task as Tar Online Revision Cleanup.

The maintenance task runs only on the leader AEM cluster node, against the primary replica of the MongoDB cluster.  It works like this:

  1. Check whether any checkpoint exists that is older than 24 hours (if one does, garbage collection is skipped)
  2. Query MongoDB for deleted documents which are older than the max revision age
  3. Delete documents in batches
  4. Query MongoDB for "split" revision documents older than the max revision age
  5. Delete documents in batches
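The steps above can be sketched against an in-memory stand-in for MongoDB. The document shapes, the batch size, and the `revision_gc` helper are illustrative only, not Oak's actual implementation:

```python
import time

BATCH_SIZE = 450               # hypothetical batch size; Oak batches internally
MAX_REVISION_AGE = 24 * 3600   # seconds; mirrors the default 24-hour window

def revision_gc(docs, checkpoints, now=None):
    """Sketch of Revision Garbage Collection over an in-memory document list.

    docs: list of dicts with "type" ("deleted", "split", or "live") and
          "modified" (epoch seconds); checkpoints: list of epoch seconds.
    """
    now = now or time.time()
    cutoff = now - MAX_REVISION_AGE
    # Step 1: skip collection entirely if an old checkpoint still exists
    if any(ts < cutoff for ts in checkpoints):
        return 0
    # Steps 2 and 4: find deleted and split documents older than the cutoff
    candidates = [d for d in docs
                  if d["type"] in ("deleted", "split") and d["modified"] < cutoff]
    # Steps 3 and 5: delete them in batches
    removed = 0
    for i in range(0, len(candidates), BATCH_SIZE):
        for d in candidates[i:i + BATCH_SIZE]:
            docs.remove(d)
            removed += 1
    return removed
```

The early return in step 1 corresponds to the `ignoredGCDueToCheckPoint` flag visible in the log output below.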

Recommended Schedule

This maintenance needs to run at least daily.  By default it starts at 2AM and runs to completion.  The more frequently it runs, the less garbage each run has to collect and the faster it finishes; frequent runs also improve overall Oak write performance.  Note that while it runs, it affects MongoDB performance, so it should only run when minimal users are on the system.

Sample Log Output

The sample log output below shows the starting and ending log messages for a successful run of Revision Garbage Collection:

16.08.2017 23:38:26.521 *INFO* [sling-oak-observation-173] org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Starting revision garbage collection. Revisions older than [2017-08-15 23:38:26.516] will be removed
16.08.2017 23:38:26.727 *INFO* [sling-oak-observation-173] org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Proceeding to delete [2620] documents
16.08.2017 23:38:27.265 *INFO* [sling-oak-observation-173] org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Proceeding to delete [0] previous documents
16.08.2017 23:38:27.279 *INFO* [sling-oak-observation-173] org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Revision garbage collection finished in 766.2 ms. VersionGCStats{ignoredGCDueToCheckPoint=false, deletedDocGCCount=2620, splitDocGCCount=3, intermediateSplitDocGCCount=0, timeToCollectDeletedDocs=170.4 ms, timeTakenToDeleteDeletedDocs=561.3 ms, timeTakenToCollectAndDeleteSplitDocs=10.52 ms}

Oak Binary File (BLOB) Maintenance

As of AEM 6.3, the default Oak repository storage configuration is the Tar SegmentStore with a FileDataStore configured for storing binary files.  By default, a DataStore stores binary files that are 4KB or larger.  In a default installation, the datastore is located under crx-quickstart/repository/datastore on the server's file system.  These files include jar files, images, JavaScript, CSS, packages, Lucene index files, and any other binary files uploaded to AEM.  Those files are referenced by binary node properties within the revisions of the NodeStore.

The DataStore is designed to store files uniquely: if the same file is uploaded to two different locations in Oak, only one copy is stored.  Because a single stored file can be referenced from many locations, files are never deleted from the datastore until DataStore Garbage Collection runs.

BLOB GC (also known as "DataStore Garbage Collection") maintenance applies to File, S3, and Mongo BLOB binary stores in Oak.  See the official documentation for more details on this topic.
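The store-uniquely-then-collect behavior can be illustrated with a small content-addressed store. The SHA-256 digest and the shard directories mimic the general shape of a FileDataStore layout but are assumptions, as is the `blob_gc` helper; real DataStore GC is a mark-and-sweep over blob references driven by Oak itself, not a file walk:

```python
import hashlib
import os
import time

def blob_path(root, content):
    """Content-addressed path: identical content always maps to the same file."""
    digest = hashlib.sha256(content).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], digest[4:6], digest)

def store_blob(root, content):
    """Write a blob only if it is not already present (deduplication)."""
    path = blob_path(root, content)
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(content)
    return path

def blob_gc(root, referenced_paths, max_age_seconds):
    """Mark-and-sweep sketch: delete files that are unreferenced and old enough."""
    cutoff = time.time() - max_age_seconds
    deleted = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if path not in referenced_paths and os.path.getmtime(path) < cutoff:
                os.remove(path)
                deleted += 1
    return deleted
```

The age cutoff in the sweep phase is why GC reports a "max modification time" in the log output below: recently added blobs are left alone even if no reference has been indexed for them yet.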

Recommended Schedule

By default, this is a weekly maintenance task that starts on Saturday mornings at 2AM and runs to completion.  Running it weekly is recommended to avoid running out of disk space; however, if the storage volumes have ample space, it can safely be run less frequently.  Skipping this maintenance only risks wasted disk space.

Sample Log Output

The sample log output below shows the starting and ending log messages for a successful run of DataStore Garbage Collection:

Starting Blob garbage collection with markOnly [false]
Collected (1024) blob references
Deleted blobs [...]
Blob garbage collection completed in 4.569 s (4568 ms). Number of blobs deleted [6341] with max modification time of [2017-07-16 22:03:55.295]

Workflow Purge Maintenance

Whenever a workflow is started in AEM, a workflow history is created under /etc/workflow/instances in the Oak repository.  Over time, workflow history nodes pile up and affect system performance.  To avoid this situation, you need to run Workflow Purge Maintenance.

See this documentation for details on how to configure and run this maintenance task.
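The purge parameters visible in the log sample below (dry run, days old, models) can be sketched against an in-memory stand-in. The dict shapes and the `purge_workflows` helper are hypothetical; the real task operates on nodes under /etc/workflow/instances:

```python
from datetime import datetime, timedelta

def purge_workflows(instances, models, days_old, dry_run=True, now=None):
    """Select (and optionally delete) completed workflow instances.

    instances: list of dicts with "model", "status", and "ended" (datetime).
    Only completed instances of the given models that ended more than
    days_old days ago are purged; dry_run=True reports without deleting.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days_old)
    purgeable = [i for i in instances
                 if i["model"] in models
                 and i["status"] == "COMPLETED"
                 and i["ended"] < cutoff]
    if not dry_run:
        for i in purgeable:
            instances.remove(i)
    return purgeable
```

Running with `dry_run=True` first mirrors the recommended practice of previewing what a purge configuration would remove before letting it delete anything.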

Recommended Schedule

This maintenance task needs to be run on a weekly basis.

Sample Log Output

The log messages below show a successful run of Workflow Purge Maintenance:

*INFO* [Workflow Purge: 'DAM Workflows'] com.adobe.granite.workflow.core.mbean.WorkflowOperationsImpl Begin workflow purge with the following parameters:
dry run: false
days old: 0
models: {/etc/workflow/models/dam/update_asset/jcr:content/model ,/etc/workflow/models/dam-xmp-writeback/jcr:content/model} purge save threshold: {20} purge query count: {1000}
*INFO* [Workflow Purge: 'DAM Workflows'] com.adobe.granite.workflow.core.mbean.WorkflowOperationsImpl Cleaned up 1006 instances
*INFO* [Workflow Purge: 'DAM Workflows'] com.adobe.granite.workflow.core.mbean.WorkflowOperationsImpl Finished running Workflow Purge. Purged: 1006 items.  Elapsed time (seconds): 0
*INFO* [Workflow Purge: 'DAM Workflows'] com.adobe.granite.workflow.core.mbean.PurgeScheduler Finished running  Workflow Purge Configuration: DAM Workflows.  Purged: 1006 items.  Elapsed time (seconds): 0

Version Purge Maintenance

In a default AEM installation, versions are created when you publish or unpublish pages or assets, or upload or replace assets.  Versions are stored as nodes under /jcr:system/jcr:versionStorage in the Oak repository.  Those nodes keep references to binary files in the datastore.  Over time, versions pile up, which affects system performance and disk utilization: the search indexes, the Tar or Mongo storage, and the DataStore all get bloated with data from old version histories.  To reclaim disk space and regain system performance, you need to run Version Purge.
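The retention idea behind version purging (keep the newest few versions per path, drop the rest) can be sketched in-memory. The real task works through the JCR version storage and is configured via the maintenance UI; the dict shapes and `purge_versions` helper below are hypothetical:

```python
def purge_versions(version_storage, keep=5):
    """Keep the newest `keep` versions per path; return what was purged.

    version_storage: dict mapping a content path to a list of version
    dicts with "name" and "created" (a sortable timestamp).
    """
    purged = {}
    for path, versions in version_storage.items():
        ordered = sorted(versions, key=lambda v: v["created"], reverse=True)
        stale = ordered[keep:]
        if stale:
            purged[path] = [v["name"] for v in stale]
            version_storage[path] = ordered[:keep]  # retain only the newest
    return purged
```

Because every purged version also drops its references into the datastore, version purge is most effective when followed by DataStore Garbage Collection, which can then reclaim the newly unreferenced binaries.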

Recommended Schedule

This maintenance task needs to be run on a monthly basis.

Sample Log Output

Version Purge only writes messages to the logs when it successfully purges versions.  If it fails to purge a version, it logs an error and continues purging the other versions.

The log message below is an example of a successful purge of a version:

INFO [pool-11-thread-10-Maintenance Queue(com/adobe/granite/maintenance/job/VersionPurgeTask)] Purged version 1.0 of /content/geometrixx/en/jcr:content

The error below is an example of a failed version purge:

ERROR [pool-11-thread-10-Maintenance Queue(com/adobe/granite/maintenance/job/VersionPurgeTask)] Unable to purge version 1.1 for /content/geometrixx/en/jcr:content : OakIntegrity0001: Unable to delete referenced node
javax.jcr.ReferentialIntegrityException: OakIntegrity0001: Unable to delete referenced node
at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(
at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(
at org.apache.jackrabbit.oak.jcr.version.ReadWriteVersionManager.removeVersion(

Lucene Binaries Cleanup

By using the Lucene Binaries Cleanup task, you can purge Lucene binaries and reduce the running datastore size requirement. Lucene's binary churn is reclaimed daily, instead of depending on a successful data store garbage collection run as before.

Though the maintenance task was developed to reduce Lucene-related revision garbage, there are general efficiency gains when running the task:

  • The weekly execution of the data store garbage collection task will complete more quickly
  • It may also slightly improve the overall AEM performance 

You can access the Lucene Binaries Cleanup task from: AEM > Tools > Operations > Maintenance > Daily Maintenance Window > Lucene Binaries Cleanup.

Audit Log Purge Maintenance

In a default AEM installation, audit log entries are created whenever pages or assets are created, uploaded, modified, deleted, or (un)published.  Audit logs are stored as nodes under /var/audit in the Oak repository.  Over time, these nodes pile up and affect system performance.  To avoid that situation, you need to run Audit Log Purge Maintenance.

See this documentation for details on how to configure and run Audit Log Purge Maintenance.

Recommended Schedule

This maintenance task needs to be run on a monthly basis.

Sample Log Output

The log messages below show a successful run of Audit Log Purge Maintenance:

16.08.2017 23:19:48.765 *INFO* [pool-81-thread-1] {name = test, type = [PageVersion Created, PageRestored, PageValid, PageMoved, PageDeleted, PageModified, PageCreated, PageInvalid, PageRolled Out], contentPath = /content, minimumAge = 5} activated
16.08.2017 23:19:48.799 *INFO* [sling-threadpool-d679d698-60cf-4039-9702-55136a780492-(apache-sling-job-thread-pool)-1-Maintenance Queue(com/adobe/granite/maintenance/job/AuditLogMaintenanceTask)] test - AuditLog purge starting execution
16.08.2017 23:19:48.800 *INFO* [sling-threadpool-d679d698-60cf-4039-9702-55136a780492-(apache-sling-job-thread-pool)-1-Maintenance Queue(com/adobe/granite/maintenance/job/AuditLogMaintenanceTask)] test - Node /var/audit/ does not exist in the repository, skipping current rule execution.
16.08.2017 23:19:48.800 *INFO* [sling-threadpool-d679d698-60cf-4039-9702-55136a780492-(apache-sling-job-thread-pool)-1-Maintenance Queue(com/adobe/granite/maintenance/job/AuditLogMaintenanceTask)] test - AuditLog purge execution is over