Offloading distributes processing tasks among Experience Manager instances in a topology. With offloading, you can use specific Experience Manager instances to perform specific types of processing. This specialization enables you to maximize the usage of available server resources.

Offloading is based on the Apache Sling Discovery and Sling JobManager features. To use offloading, you add Experience Manager clusters to a topology and identify the job topics that each cluster processes. A cluster consists of one or more Experience Manager instances, so a single instance is considered to be a cluster.

For information about adding instances to a topology, see Administering Topologies.

This page describes how to configure Experience Manager instances to create topologies and consume jobs. For information about using Java APIs to create jobs and job consumers, see Creating and Consuming Jobs for Offloading.

Job Distribution

The Sling JobManager and JobConsumer enable the creation of jobs that are processed in a topology:

  • JobManager: A service that creates jobs for specific topics. 
  • JobConsumer: A service that executes jobs of one or more topics. Multiple JobConsumer services can be registered for the same topic. 

When JobManager creates a job, the Offloading framework selects an Experience Manager cluster in the topology to execute the job:

  • The cluster must include one or more instances that are running a JobConsumer that is registered for the job topic.
  • The topic must be enabled for at least one instance in the cluster.

See Configuring Topic Consumption for information about refining job distribution.
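The two eligibility rules above can be sketched in a few lines. The dictionary-based cluster model below is purely illustrative and is not the actual Sling API or data structure:

```python
# Illustrative sketch of the cluster-eligibility rules; the dict-based
# instance model is hypothetical, not the actual Sling data structures.

def eligible_clusters(clusters, topic):
    """Return the clusters that can execute a job with the given topic."""
    result = []
    for cluster in clusters:
        # Rule 1: at least one instance runs a JobConsumer registered
        # for the job topic.
        registered = any(topic in inst["registered"] for inst in cluster)
        # Rule 2: the topic is enabled on at least one instance.
        enabled = any(topic in inst["enabled"] for inst in cluster)
        if registered and enabled:
            result.append(cluster)
    return result

TOPIC = "com/adobe/granite/workflow/offloading"
workers = [{"registered": {TOPIC}, "enabled": {TOPIC}}]
authors = [{"registered": {TOPIC}, "enabled": set()}]  # topic disabled

# Only the worker cluster qualifies: the author cluster has a registered
# consumer, but the topic is not enabled on any of its instances.
print(eligible_clusters([authors, workers], TOPIC) == [workers])  # True
```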


When the Offloading framework selects a cluster to execute a job, and the cluster consists of multiple instances, Sling Distribution determines which instance in the cluster executes the job.

Job Payloads

The Offloading framework supports job payloads that associate jobs with resources in the repository. Job payloads are useful when jobs are created for processing resources and the job is offloaded to another computer. 

Upon creation of a job, the payload is only guaranteed to be located on the instance that creates the job. When offloading the job, replication agents ensure that the payload is created on the instance that eventually consumes the job. When job execution is complete, reverse replication causes the payload to be copied back to the instance that created the job.

Administering Topologies

Topologies are loosely coupled Experience Manager clusters that participate in offloading. A cluster consists of one or more Experience Manager server instances (a single instance is considered to be a cluster).

Each Experience Manager instance runs the following Offloading-related services:

  • Discovery Service: Sends requests to a Topology Connector to join the topology.
  • Topology Connector: Receives the join requests and either accepts or refuses each request.

The Discovery Service of every member of the topology points to the Topology Connector on one of the members. In the sections that follow, this member is referred to as the root member.


Each cluster in the topology contains an instance that is recognized as the leader. The cluster leader interacts with the topology on behalf of the other members of the cluster. When the leader leaves the cluster, a new leader for the cluster is automatically chosen.

Viewing the Topology

Use Topology Browser to explore the state of the topology in which the Experience Manager instance is participating. Topology Browser shows the clusters and instances of the topology.

For each cluster, you see a list of cluster members that indicates the order in which each member joined the cluster and which member is the Leader. The Current property indicates the instance that you are currently administering.

For each instance in the cluster, you can see several topology-related properties:

  • A whitelist of topics for the instance's job consumer.
  • The endpoints that are exposed for connecting with the topology.
  • The job topics for which the instance is registered for offloading.
  • The job topics that the instance processes.

Use the following procedure to view the topology:

  1. Using the Touch UI, click the Tools tab. (http://localhost:4502/tools.html)

  2. In the Granite Operations area, click Offloading Browser.

  3. In the navigation panel, click Topology Browser.

    The clusters that are participating in the topology appear.

  4. Click a cluster to see a list of the instances in the cluster and their ID, Current status, and Leader status.

  5. Click an instance ID to see more detailed properties.

You can also use the Web Console to view topology information. The console provides further information about the topology clusters:

  • Which instance is the local instance.
  • The Topology Connector services that this instance uses to connect to the topology (outgoing), and the services that connect to this instance (incoming).
  • Change history for the topology and instance properties.

Use the following procedure to open the Topology Management page of the Web Console:

  1. Open the Web Console in your browser. (http://localhost:4502/system/console)

  2. Click Main > Topology Management.


Configuring Topology Membership

The Apache Sling Resource-Based Discovery Service runs on each instance to control how Experience Manager instances interact with a topology.

The Discovery Service sends periodic POST requests (heartbeats) to Topology Connector services to establish and maintain connections with the topology. The Topology Connector service maintains a whitelist of IP addresses or host names that are allowed to join the topology:

  • To join an instance to a topology, specify the URL of the Topology Connector service of the root member. 
  • To enable an instance to join a topology, add the instance to the whitelist of the root member's Topology Connector service.

Use the Web Console or a sling:OsgiConfig node to configure the following properties of the org.apache.sling.discovery.impl.Config service:

  • Heartbeat timeout (seconds) (heartbeatTimeout): The amount of time in seconds to wait for a heartbeat response before the targeted instance is considered unavailable. Default value: 20
  • Heartbeat interval (seconds) (heartbeatInterval): The amount of time in seconds between heartbeats. Default value: 15
  • Minimal Event Delay (seconds) (minEventDelay): When a change occurs to the topology, the amount of time to delay the change of state from TOPOLOGY_CHANGING to TOPOLOGY_CHANGED. Each change that occurs while the state is TOPOLOGY_CHANGING increases the delay by this amount of time. This delay prevents listeners from being flooded with events. To use no delay, specify 0 or a negative number. Default value: 3
  • Topology Connector URLs (topologyConnectorUrls): The URLs of the Topology Connector services to which heartbeat messages are sent. Default value: http://localhost:4502/libs/sling/topology/connector
  • Topology Connector Whitelist (topologyConnectorWhitelist): The list of IP addresses or host names that the local Topology Connector service allows in the topology. Default values: localhost, 127.0.0.1
  • Repository Descriptor Name (leaderElectionRepositoryDescriptor): Default value: none
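As a quick illustration of how the first two properties interact: with the defaults, one missed heartbeat is tolerated only briefly, because the timeout (20 seconds) is just 5 seconds longer than the interval (15 seconds). The helper function below is hypothetical, not part of the Discovery Service:

```python
# Illustration of the default heartbeat settings: an instance is declared
# unavailable once no heartbeat has arrived for heartbeatTimeout seconds.

HEARTBEAT_INTERVAL = 15  # seconds between heartbeats (default)
HEARTBEAT_TIMEOUT = 20   # seconds before an instance is considered gone

def available(last_heartbeat_age):
    """True while the last heartbeat is younger than the timeout."""
    return last_heartbeat_age < HEARTBEAT_TIMEOUT

print(available(15))  # True  - heartbeat arrived on schedule
print(available(30))  # False - a full interval missed, timeout exceeded
```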

Use the following procedure to connect an Experience Manager instance to the root member of a topology. The procedure points the instance to the Topology Connector URL of the root topology member. Perform this procedure on all members of the topology.

  1. Open the Web Console in your browser. (http://localhost:4502/system/console)

  2. Click Main > Topology Management.

  3. Click Configure Discovery Service.

  4. Add an item to the Topology Connector URLs property, and specify the URL of the root topology member's Topology Connector service. The URL is in the form http://rootservername:4502/libs/sling/topology/connector.

Perform the following procedure on the root member of the topology. The procedure adds the names of the other topology members to its Discovery Service whitelist.

  1. Open the Web Console in your browser. (http://localhost:4502/system/console)

  2. Click Main > Topology Management.

  3. Click Configure Discovery Service.

  4. For each member of the topology, add an item to the Topology Connector Whitelist property, and specify the host name or IP address of the topology member.

Configuring Topic Consumption

Use Offloading Browser to configure topic consumption for the Experience Manager instances in the topology. For each instance, you can specify the topics that it consumes. For example, to configure your topology so that only one instance consumes topics of a specific type, disable the topic on all instances except for one.

Jobs are distributed among the instances that have the associated topic enabled, using round-robin logic.
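As an illustration of that round-robin behavior, the following sketch rotates jobs through the instances that have a topic enabled. The make_dispatcher helper and the instance model are assumptions made for this example, not the framework's actual implementation:

```python
from itertools import cycle

# Sketch of round-robin distribution: jobs for a topic rotate through
# the instances that have the topic enabled.

def make_dispatcher(instances, topic):
    enabled = [inst for inst in instances if topic in inst["enabled"]]
    rotation = cycle(enabled)
    return lambda: next(rotation)["id"]

TOPIC = "com/adobe/granite/workflow/offloading"
instances = [
    {"id": "worker-1", "enabled": {TOPIC}},
    {"id": "worker-2", "enabled": {TOPIC}},
    {"id": "author-1", "enabled": set()},  # topic disabled here
]
dispatch = make_dispatcher(instances, TOPIC)
print([dispatch() for _ in range(4)])
# ['worker-1', 'worker-2', 'worker-1', 'worker-2']
```

Note that author-1 never receives a job because the topic is disabled on it, which matches the behavior you configure in Offloading Browser.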

  1. Using the Touch UI, click the Tools tab. (http://localhost:4502/tools.html)

  2. In the Granite Operations area, click Offloading Browser.

  3. In the navigation panel, click Offloading Browser.

    The offloading topics and the server instances that can consume the topics appear.

  4. To disable the consumption of a topic for an instance, below the topic name click Disable beside the instance.

  5. To configure all topic consumption for an instance, click the instance identifier below any topic. 

  6. Click one of the following buttons beside a topic to configure the consumption behavior for the instance, and then click Save:

    • Enabled: This instance consumes jobs of this topic. 
    • Disabled: This instance does not consume jobs of this topic. 
    • Exclusive: This instance consumes jobs only of this topic. 

    Note: When you select Exclusive for a topic, all of the other topics are automatically set to Disabled.

Installed Job Consumers

Several JobConsumer implementations are installed with Experience Manager. The topics for which these JobConsumers are registered appear in Offloading Browser. Additional topics that appear are those that custom JobConsumers have registered. The following table describes the default JobConsumers.

  • Job topic: /
    Service PID: org.apache.sling.event.impl.jobs.deprecated.EventAdminBridge
    Description: Installed with Apache Sling. Processes jobs that the OSGi event admin generates, for backward compatibility.
  • Job topic: com/day/cq/replication/job/*
    Service PID: com.day.cq.replication.impl.AgentManagerImpl
    Description: A replication agent that replicates job payloads.
  • Job topic: com/adobe/granite/workflow/offloading
    Service PID: com.adobe.granite.workflow.core.offloading.WorkflowOffloadingJobConsumer
    Description: Processes jobs that the DAM Update Asset Offloader workflow generates.

Disabling and Enabling Topics For an Instance

The Apache Sling Job Consumer Manager service provides topic whitelist and blacklist properties. Configure these properties to enable or disable the processing of specific topics on an Experience Manager instance. 

Note: If the instance belongs to a topology, you can also use Offloading Browser on any computer in the topology to enable or disable topics. 

The logic that creates the list of enabled topics first allows all of the topics that are in the whitelist, and then removes topics that are on the blacklist. By default, all topics are enabled (the whitelist value is *) and no topics are disabled (the blacklist has no value).
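The whitelist-then-blacklist computation can be sketched as follows. Using Python's fnmatch to interpret the * wildcard is an assumption made for illustration; it is not necessarily how Sling matches topic patterns:

```python
from fnmatch import fnmatch

# Sketch of the enabled-topic computation: keep topics matching the
# whitelist, then drop topics matching the blacklist.

def enabled_topics(all_topics, whitelist=("*",), blacklist=()):
    allowed = [t for t in all_topics
               if any(fnmatch(t, pat) for pat in whitelist)]
    return [t for t in allowed
            if not any(fnmatch(t, pat) for pat in blacklist)]

topics = ["com/day/cq/replication/job/publish",
          "com/adobe/granite/workflow/offloading"]

# Defaults: everything is enabled.
print(enabled_topics(topics))
# Blacklist the workflow offloading topic on this instance.
print(enabled_topics(topics, blacklist=("com/adobe/granite/workflow/*",)))
```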

Use Web Console or a sling:OsgiConfig node to configure the following properties. For sling:OsgiConfig nodes, the PID of the Job Consumer Manager service is org.apache.sling.event.impl.jobs.JobConsumerManager.

  • Topic Whitelist (job.consumermanager.whitelist): A list of topics that the local job consumers process. The default value of * causes jobs for all topics to be passed to the registered JobConsumer services.
  • Topic Blacklist (job.consumermanager.blacklist): A list of topics that the local job consumers do not process.

Creating Replication Agents For Offloading

The offloading framework uses replication to transport resources between author and worker. The offloading framework automatically creates replication agents when instances join the topology. The agents are created with default values. You must manually change the password that the agents use for authentication. 

Caution:

A known issue with the automatically-generated replication agents requires you to manually create new replication agents. Follow the procedure in Problems Using the Automatically Generated Replication Agents before you create the agents for Offloading.

Create the replication agents that transport job payloads between instances for offloading. The following illustration shows the agents that are required to offload from the author to a worker instance. The author has a Sling ID of 1 and the worker instance has a Sling ID of 2: 


This setup requires the following three agents:

  1. An outgoing agent on the author instance that replicates to the worker instance.
  2. A reverse agent on the author instance that pulls from the outbox on the worker instance.
  3. An outbox agent on the worker instance.

This replication scheme is similar to that used between author and publish instances. However, for the offloading situation all of the instances involved are authoring instances. 

 

Note:

The Offloading framework uses the topology to obtain the IP addresses of the offloading instances. The framework then automatically creates the replication agents based on these IP addresses. If the IP addresses of the offloading instances later change, the change is automatically propagated across the topology after the instance restarts. However, the Offloading framework does not automatically update the replication agents to reflect the new IP addresses. To avoid this situation, use fixed IP addresses for all instances in the topology.

Naming the Replication Agents for Offloading

Use a specific format for the Name property of the replication agents so that the offloading framework automatically uses the correct agent for specific worker instances.

Naming the outgoing agent on the author instance:

offloading_<slingid>, where <slingid> is the Sling ID of the worker instance. 

Example: offloading_f5c8494a-4220-49b8-b079-360a72f71559

Naming the reverse agent on the author instance:

offloading_reverse_<slingid>, where <slingid> is the Sling ID of the worker instance. 

Example: offloading_reverse_f5c8494a-4220-49b8-b079-360a72f71559

Naming the outbox on the worker instance:

offloading_outbox
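The naming convention is mechanical enough to capture in a couple of hypothetical helper functions. These are shown only to pin down the format; the offloading framework does not expose such an API:

```python
# Hypothetical helpers that produce agent names following the
# offloading naming convention described above.

def outgoing_agent_name(worker_sling_id):
    """Name of the outgoing agent on the author instance."""
    return "offloading_" + worker_sling_id

def reverse_agent_name(worker_sling_id):
    """Name of the reverse agent on the author instance."""
    return "offloading_reverse_" + worker_sling_id

OUTBOX_AGENT_NAME = "offloading_outbox"  # fixed name on the worker

sling_id = "f5c8494a-4220-49b8-b079-360a72f71559"
print(outgoing_agent_name(sling_id))
# offloading_f5c8494a-4220-49b8-b079-360a72f71559
print(reverse_agent_name(sling_id))
# offloading_reverse_f5c8494a-4220-49b8-b079-360a72f71559
```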

Creating the outgoing agent

  1. Create a Replication Agent on author. (See the documentation for replication agents.) Specify any Title. The Name must follow the naming convention.

  2. Create the agent using the following properties:

    • Settings > Serialization Type: Default
    • Transport > Transport URI: http://<ip of target instance>:<port>/bin/receive?sling:authRequestLogin=1
    • Transport > Transport User: Replication user on the target instance
    • Transport > Transport Password: Replication user password on the target instance
    • Extended > HTTP Method: POST
    • Triggers > Ignore Default: True

Creating the reverse agent

  1. Create a Reverse Replication Agent on author. (See the documentation for replication agents.) Specify any Title. The Name must follow the naming convention.

  2. Create the agent using the following properties:

    • Settings > Serialization Type: Default
    • Transport > Transport URI: http://<ip of target instance>:<port>/bin/receive?sling:authRequestLogin=1
    • Transport > Transport User: Replication user on the target instance
    • Transport > Transport Password: Replication user password on the target instance
    • Extended > HTTP Method: GET

Creating the outbox agent

  1. Create a Replication Agent on the worker instance. (See the documentation for replication agents.) Specify any Title. The Name must be offloading_outbox.

  2. Create the agent using the following properties:

    • Settings > Serialization Type: Default
    • Transport > Transport URI: repo://var/replication/outbox
    • Triggers > Ignore Default: True

Finding the Sling ID

Obtain the Sling ID of an Experience Manager instance using either of the following methods:

  • Open the Web Console and, in the Sling Settings, find the value of the Sling ID property (http://localhost:4502/system/console/status-slingsettings). This method is useful if the instance is not yet part of the topology.
  • Use the Topology browser if the instance is already part of the topology.

Offloading the Processing of DAM Assets

Configure the instances of a topology so that specific instances perform the background processing of assets that are added or updated in DAM.

By default, Experience Manager executes the DAM Update Asset workflow when an asset is added to or changed in DAM. Change the default behavior so that Experience Manager instead executes the DAM Update Asset Offloader workflow. This workflow generates a JobManager job with the topic com/adobe/granite/workflow/offloading. Then, configure the topology so that the job is offloaded to a dedicated worker.

Caution:

Workflows used with workflow offloading must not be transient. For example, the DAM Update Asset workflow must not be transient when used for asset offloading. To set or unset the transient flag on a workflow, see Transient Workflows.

The following procedure assumes the following characteristics for the offloading topology:

  • One or more Experience Manager instances are authoring instances that users interact with to add or update DAM assets.
  • Users do not directly interact with one or more Experience Manager instances that process the DAM assets. These instances are dedicated to the background processing of DAM assets.

  1. On each Experience Manager instance, configure the Discovery Service so that it points to the root Topology Connector. (See Configuring Topology Membership.)

  2. Configure the root Topology Connector so that the connecting instances are on the whitelist.

  3. Open Offloading Browser and disable the com/adobe/granite/workflow/offloading topic on the instances with which users interact to upload or change DAM assets.

  4. On each instance that users interact with to upload or change DAM assets, configure workflow launchers to use the DAM Update Asset Offloading workflow:

    1. Open the Workflow console. (localhost:4502/libs/cq/workflow/content/console.html)
    2. Click the Launcher tab.
    3. Locate the two Launcher configurations that execute the DAM Update Asset workflow. One launcher configuration event type is Node Created, and the other type is Node Modified.
    4. Change both event types so that they execute the DAM Update Asset Offloading workflow. (For information about launcher configurations, see Starting Workflows When Nodes Change.)
  5. On the instances that perform the background processing of DAM assets, disable the workflow launchers that execute the DAM Update Asset workflow. 

Known Offloading Issues

The following issues are related to the behavior of the Offloading framework.

Working Offline

You can use offloading without a network interface when all instances run on the same computer using different ports. This setup is useful for demonstration or development purposes. To enable it, configure all IP addresses and URLs using localhost as the computer name.

To use offloading offline, you should initially start your Experience Manager instances when the computer is offline so that the replication agents are created using localhost as the IP address. Using localhost causes the correct behavior if the computer later goes online.

If you initially start your instances when the computer has a network interface, the IP address of the computer is used in the properties of the replication agents. If you later go offline, the replication agents will not function correctly.

Problems Using the Automatically-Generated Replication Agents

By default, the offloading framework automatically creates the replication agents for transporting resources between the author and the offloading workers. These agents are meant to be administered using the Granite Replication Agent UI and therefore do not appear in the Experience Manager Replication Agent console. However, limitations of the Granite Replication Agent UI prevent the full management of replication agents. Therefore, use the Experience Manager Replication console and the following procedure to implement these changes:

  • Disable the automatic creation of the replication agents.
  • Remove any automatically-created agents.
  • Manually create the agents that the Offloading framework requires.

  1. To disable the automatic creation of replication agents for offloading, use CRXDE Lite on each computer in the topology to delete the /libs/granite/offloading/config.author/com.adobe.granite.offloading.impl.transporter.OffloadingAgentManager node.

  2. On all instances in the topology, use CRXDE Lite to delete all replication agent nodes below /etc/replication/agents.author. The node names are prefixed with offloading_ as per the required naming convention.

  3. To manually create the Offloading replication agents, use the procedure in Creating Replication Agents for Offloading.
