CRX Offline Backup and Restore Procedure

Question / Problem

How do I make an offline backup for CRX 1.4.1 or 1.4.2? How do I restore the backup?

Answer / Resolution

Offline Backup

An offline backup creates a consistent backup of a clustered repository. The backup is taken from a stopped cluster node.

Warning: Copying the repository files while the repository is running is not supported. If the repository files are copied while the repository is running, the backup may be in an inconsistent state: nodes and the search index in the copied repository may be corrupt, and the copied repository may not start. When starting, the exception "Slave has same identity as master" may be thrown (if a network connection to the original repository server exists), because the copied repository thinks it is still running in cluster mode.

Requirements and Configuration

This documentation applies to CRX 1.4.1 and 1.4.2.

The offline backup requires that the Tar persistence manager (TarPM) is used; the TarPM is the default persistence manager for CRX. This documentation uses the following directories:

  • shared/namespaces
  • shared/nodetypes
  • shared/journal
  • shared/repository/datastore
  • shared/version
  • shared/workspaces/x

The following directories exist on both cluster nodes (master and slave):

  • clusterNode
  • clusterNode/repository/index
  • clusterNode/repository/meta
  • clusterNode/version/copy
  • clusterNode/workspaces/* (* stands for all workspaces in use)
  • clusterNode/workspaces/*/copy
  • clusterNode/workspaces/*/index

To improve performance, we assume that one cluster node is configured as the preferred master and that the backup is made on a slave node. This is not a requirement, just a performance optimization. For details on configuring a cluster node as the preferred master, see the Tar PM and Cluster configuration documentation.

Backup Procedure

  • For each cluster node, back up the file that contains the cluster node id. When using CRX Quickstart, this id is stored in clusterNode/. When not using CRX Quickstart, the cluster node id is stored in the file repository.xml.
  • Stop one (preferably the slave) cluster node.
  • Back up all files of this cluster node, including subdirectories.
  • Afterwards, back up the files in the following shared directories:
    • shared/namespaces
    • shared/nodetypes
    • shared/repository/datastore
  • Start the cluster node again.
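The backup steps above can be sketched as a shell script. This is an illustrative sketch only: the directory layout is mocked in a temporary directory, and the CLUSTER_NODE, SHARED, and BACKUP paths are assumptions to adapt to your installation. Stopping and restarting the cluster node is not shown.

```shell
#!/bin/sh
# Sketch of the offline backup steps. All paths are assumptions; adjust
# CLUSTER_NODE, SHARED, and BACKUP to your installation before real use.
set -e

# For demonstration only: build a mock repository layout in a temp directory.
WORK=$(mktemp -d)
CLUSTER_NODE="$WORK/clusterNode"
SHARED="$WORK/shared"
BACKUP="$WORK/backup"
mkdir -p "$CLUSTER_NODE/repository/index" \
         "$SHARED/namespaces" "$SHARED/nodetypes" \
         "$SHARED/repository/datastore"
echo "data" > "$SHARED/repository/datastore/file.bin"

# 1. The cluster node must be stopped at this point (not shown here).

# 2. Back up all files of the stopped cluster node, including subdirectories.
mkdir -p "$BACKUP"
cp -rp "$CLUSTER_NODE" "$BACKUP/clusterNode"

# 3. Back up the shared directories listed in the procedure.
for d in namespaces nodetypes repository/datastore; do
    mkdir -p "$BACKUP/shared/$d"
    cp -rp "$SHARED/$d/." "$BACKUP/shared/$d"
done

# 4. Restart the cluster node (not shown here).
echo "backup complete"
```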

Restore a Backup (Emergency Restore)

  • Stop both cluster nodes.
  • Delete the following directories in the clusterNode directory on all cluster nodes:
    • repository, shared, version, workspaces
  • Delete all files and directories in the shared directory.
  • Restore the backup of the cluster node (including subdirectories) to one cluster node.
  • Copy all files of the restored cluster node to all other cluster nodes. After that, each cluster node contains the exact same data.
  • Delete the file clusterNode/revision.log on all cluster nodes.
  • Delete the files **/ on all cluster nodes if they exist.
  • For each cluster node, restore the file that contains the cluster node id (clusterNode/ or repository.xml). This ensures that each cluster node has a different cluster node id.
  • Restore the backup of the shared directory:
    • shared/namespaces
    • shared/nodetypes
    • shared/repository/datastore
  • Copy the contents of the following directories from one cluster node to the shared directory:
    • clusterNode/version/copy/data*.tar to shared/version
    • clusterNode/workspaces/*/copy/data*.tar to shared/workspaces/* (for each workspace)
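The restore steps above can likewise be sketched for a single node. Again, this is only a sketch under assumptions: the layout (including the workspace name crx.default and the data00000.tar file name) is mocked in a temporary directory, and copying the restored node to the other cluster nodes and restoring each node's own cluster node id file are not shown.

```shell
#!/bin/sh
# Sketch of the emergency restore for one cluster node. Paths, workspace
# names, and tar file names are assumptions; adapt and test before real use.
# Both cluster nodes must be stopped before any of these steps.
set -e

# For demonstration only: mock a backup plus a stale installation.
WORK=$(mktemp -d)
BACKUP="$WORK/backup"
NODE1="$WORK/node1/clusterNode"
SHARED="$WORK/shared"
mkdir -p "$BACKUP/clusterNode/version/copy" \
         "$BACKUP/clusterNode/workspaces/crx.default/copy" \
         "$BACKUP/shared/namespaces" "$BACKUP/shared/nodetypes" \
         "$BACKUP/shared/repository/datastore" \
         "$NODE1/repository" "$NODE1/workspaces" "$SHARED/journal"
touch "$BACKUP/clusterNode/version/copy/data00000.tar" \
      "$BACKUP/clusterNode/workspaces/crx.default/copy/data00000.tar" \
      "$NODE1/revision.log"

# 1. Delete stale directories in clusterNode (repeat on every node).
for d in repository shared version workspaces; do
    rm -rf "$NODE1/$d"
done

# 2. Delete all files and directories in the shared directory.
rm -rf "$SHARED"/*

# 3. Restore the cluster node backup to one node
#    (then copy it to all other nodes -- not shown).
cp -rp "$BACKUP/clusterNode/." "$NODE1"

# 4. Delete revision.log on every node.
rm -f "$NODE1/revision.log"

# 5. Each node must then get back its own cluster node id file (not shown).

# 6. Restore the backup of the shared directories.
for d in namespaces nodetypes repository/datastore; do
    mkdir -p "$SHARED/$d"
    cp -rp "$BACKUP/shared/$d/." "$SHARED/$d"
done

# 7. Copy the tar files from one node into the shared version and
#    workspace directories.
mkdir -p "$SHARED/version"
cp -p "$NODE1/version/copy/"data*.tar "$SHARED/version"
for w in "$NODE1/workspaces/"*/; do
    name=$(basename "$w")
    mkdir -p "$SHARED/workspaces/$name"
    cp -p "$w/copy/"data*.tar "$SHARED/workspaces/$name"
done

echo "restore complete"
```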

Example Script Files

The offline backup and restore may be automated using script files. You may use the following two script files as templates. WARNING: These scripts are for demonstration purposes only. Remove the exit at the beginning, adjust the variable paths, and test the scripts thoroughly in your environment before using them.

Affected Versions

CRX 1.4.1 and 1.4.2



