Triggers is an integration between Adobe Campaign and Adobe Analytics using the pipeline. The pipeline retrieves users' actions, or triggers, from your website. A cart abandonment is an example of a trigger. Triggers are processed in Adobe Campaign to send emails in near real time.
Summary: This document describes the components that make up the integration in Adobe Campaign, explains common configurations, and shows how to fix common issues.
Solution: Adobe Campaign Classic (v6.11, v7)
Audience: Administrators, advanced users

If you have questions about this article or about any other Adobe Campaign topic, ask the Community.

Overview

Triggers run marketing actions within a short range of time following a user’s action. The typical response time is less than one hour.

The integration is agile: configuration is minimal and no third party is involved.
It also supports high volumes of traffic without impacting the performance of marketing activities. As an example, the integration can process one million triggers per hour.

Architecture

What is Pipeline?

Pipeline is a messaging system hosted in the Experience Cloud that uses Apache Kafka. It is a way to easily pass data between solutions. Further, Pipeline is a message queue rather than a database. Producers push events in the pipeline and the consumers listen to the flow and do what they want with the event. Events are kept for a few days but no more. The purpose is to listen 24/7 and process events immediately.


How does Pipeline work?

The "pipelined" process is always running on the Adobe Campaign marketing server. It connects to the pipeline, retrieves the events, and processes them immediately. 


The pipelined process logs in to the Experience Cloud authentication service using the private key and receives a token in return. The token is then used to authenticate when retrieving the events. Triggers are retrieved from a REST web service using a simple GET request; the response is in JSON format. Parameters of the request include the name of the trigger and a pointer that indicates the last message retrieved. The pipelined process handles this automatically.

Usage

Note:

This is one specific example among various possible implementations.

The pipeline events are downloaded automatically. These events can be monitored using a form.


A recurrent campaign workflow does a query on triggers and if they match the marketing criteria, it starts a delivery.


Pipeline Configuration

Overview

Authentication parameters such as the customer ID, the private key, and the authentication endpoint are configured in the instance configuration files.
The list of triggers to be processed is configured in an option. It is in JSON format.

Each trigger is processed immediately by JavaScript code and saved into a database table, with no further processing in real time.
The triggers are then used for targeting by a campaign workflow that sends emails. The campaign is set up so that a customer who has both trigger events receives an email.

Prerequisites

Using Experience Cloud Triggers in Campaign requires:

  • Adobe Campaign version 6.11 build 8705 or later.
  • Adobe Analytics Ultimate, Premium, Foundation, OD, Select, Prime, Mobile Apps, or Standard.

Prerequisite configurations are:

  • Creation of a private key file, then creation of the OAuth application registered with that key.
  • Configuration of the triggers in Adobe Analytics.

The Adobe Analytics configuration is out of the scope of this document. 
Adobe Campaign requires the following information from Adobe Analytics:

  • The name of the OAuth application.
  • The IMSOrgId. It is the identifier of the Experience Cloud customer.
  • The names of the triggers configured in Analytics.
  • The name and format of the data fields to reconcile with the Marketing database.

Note:

Part of this configuration is a custom development and requires the following:

  • Working knowledge of JSON, XML, and JavaScript parsing in Adobe Campaign.
  • Working knowledge of the QueryDef and Writer APIs.
  • Working notions of encryption and authentication using private keys.

Since editing the JS code requires technical skills, do not attempt it without the proper understanding.

Triggers are saved to a database table. Thus, trigger data can be safely used by marketing operators in targeting workflows.

Authentication and configuration files

Authentication is required since Pipeline is hosted in the Adobe Experience Cloud.
When the marketing server, for example one hosted on premise, logs in to Pipeline, it must authenticate to establish a secure connection.
Authentication uses a pair of public and private keys. This key pair serves the same function as a user/password pair, but is more secure.

IMSOrgId

The IMSOrgId is the identifier of the customer on the Adobe Experience Cloud.
Set it in the instance serverConf.xml file, under the IMSOrgId attribute.

Example:

<redirection IMSOrgId="C5E715(…)98A4@AdobeOrg" (…)

Key generation

The key is a pair of files in RSA format, 4096 bits long. It can be generated with an open-source tool such as OpenSSL. Each time the tool is run, a new key is randomly generated.

For the sake of convenience, the steps are listed below:

  • openssl genrsa -out <private_key.pem> 4096
  • openssl rsa -pubout -in <private_key.pem> -out <public_key.pem>

Example private_key.pem file:

-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAtqcYzt5WGGABxUJSfe1Xy8sAALrfVuDYURpdgbBEmS3bQMDb
(…)
64+YQDOSNFTKLNbDd+bdAA+JoYwUCkhFyvrILlgvlSBvwAByQ2Lx
-----END RSA PRIVATE KEY-----

Example public_key.pem file:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtqcYzt5WGGABxUJSfe1X
(…)
EwIDAQAB
-----END PUBLIC KEY-----

Note:

Do not generate keys with PuTTYgen; OpenSSL is the recommended tool.

OAuth client creation in Adobe Experience Cloud

An application of type JWT must be created by logging in to Adobe Developer Connection and following these steps:

  1. Select the Service Account (JWT Assertion).
  2. Enter the Application name.
  3. Register the public key.
  4. Select the Organization (Admin Account is mandatory).
  5. Select the trigger's scope.

Application ID configuration in Adobe Campaign

The application ID of the OAuth client that was created must be configured in Adobe Campaign: edit the instance configuration file and set the appName attribute of the pipelined element.

Example:

<pipelined autoStart="true" appName="applicationID" authPrivateKey="@qQf146pexBksGvo0esVIDO(…)"/>

Key encryption

To be used by pipelined, the private key must be encrypted.
Encryption is done using the cryptString JavaScript function and must be performed on the same instance as pipelined.
A sample of private key encryption with JavaScript is available in the Annexes.

Key registration in Adobe Campaign

The encrypted private key must be registered in Adobe Campaign: edit the instance configuration file and set the authPrivateKey attribute of the pipelined element.

Example:

<pipelined autoStart="true" appName="applicationID" authPrivateKey="@qQf146pexBksGvo0esVIDO(…)"/>

Pipelined process auto-start

The pipelined process must be started automatically.

To do this, set autoStart="true" on the pipelined element in the configuration file:

<pipelined autoStart="true" appName="applicationID" authPrivateKey="@qQf146pexBksGvo0esVIDO(…)"/>

Pipelined process restart

It can also be started manually using the command line:
nlserver start pipelined@instance

A restart is required for the changes to take effect:
nlserver restart pipelined@instance

In case of error

Look for errors on the standard output (if you started manually) or in the pipelined log file. Refer to the Troubleshooting section of this document for more information on resolving issues.

Pipelined configuration options

  • appName: ID of the OAuth application registered in Adobe Developer Connection (where the public key was uploaded). See https://marketing.adobe.com/developer/documentation/authentication-1/auth-service-account-1
  • authGatewayEndpoint: URL used to get "gateway tokens". Default: https://api.omniture.com
  • authPrivateKey: private key (whose public part was uploaded in Adobe Developer Connection), AES-encrypted with the XtkKey option: cryptString("PRIVATE_KEY")
  • disableAuth: disables authentication (connecting without gateway tokens is only accepted by some development Pipeline endpoints).
  • discoverPipelineEndpoint: URL used to discover the Pipeline Services endpoint for this tenant. Default: https://producer-pipeline-pnw.adobe.net
  • dumpStatePeriodSec: period between two dumps of the process internal state in var/INSTANCE/pipelined.json. The internal state is also accessible on demand at http://INSTANCE:7781/pipelined/status
  • forcedPipelineEndpoint: disables the discovery of the Pipeline Services endpoint and forces a value.
  • monitorServerPort: the pipelined process listens on this port to provide the process internal state at http://INSTANCE:PORT/pipelined/status. Default: 7781
  • pointerFlushMessageCount: when this number of messages is processed, the offsets are saved in the database. Default: 1000
  • pointerFlushPeriodSec: after this period, the offsets are saved in the database. Default: 5 (seconds)
  • processingJSThreads: number of dedicated threads processing messages with custom JS connectors. Default: 4
  • processingThreads: number of dedicated threads processing messages with built-in code. Default: 4
  • retryPeriodSec: delay between retries when there are processing errors. Default: 30 (seconds)
  • retryValiditySec: discard the message if it is not successfully processed after this period (too many retries). Default: 300 (seconds)
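
As a hedged illustration, assuming these options are set as attributes of the pipelined element in the instance configuration file (as appName and authPrivateKey are in the examples above), a tuned configuration could look like this:

<pipelined autoStart="true"
           appName="applicationID"
           authPrivateKey="@qQf146pexBksGvo0esVIDO(…)"
           processingThreads="8"
           processingJSThreads="8"
           retryPeriodSec="60"
           retryValiditySec="600"/>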

Pipeline option NmsPipeline_Config

Once authentication works, pipelined can retrieve the events and process them. It only processes triggers that are configured in Adobe Campaign and ignores the others. The triggers must have been generated in Analytics and pushed to the pipeline beforehand.
The option can also be configured with a wildcard to catch all triggers regardless of name.
Triggers are configured in an option under Administration > Platform > Options. The option name is NmsPipeline_Config, its data type is "long text", and its value is in JSON format.

Example 1

This example specifies two triggers.

Paste the JSON code from this template into the option value. Make sure to remove comments.

{
  "topics": [ // List of "topics" that pipelined listens to.
    {
      "name": "triggers", // Name of the first topic: triggers.
      "consumer": "customer_dev", // Name of the instance that listens.
      "triggers": [ // Array of triggers.
        {
          "name": "3e8a2ba7-fccc-49bb-bdac-33ee33cf02bf", // TriggerType ID from Analytics
          "jsConnector": "cus:triggers.js" // JavaScript library holding the processing function.
        }, {
          "name": "2da3fdff-13af-4c51-8ed0-05802a572e94", // Second TriggerType ID
          "jsConnector": "cus:triggers.js" // Can use the same JS for all.
        }
      ]
    }
  ]
}

Example 2

This example catches all triggers.

{
  "topics": [
    {
      "name": "triggers",
      "consumer": "customer_dev",
      "triggers": [
        {
          "name": "*",
          "jsConnector": "cus:pipeline.js"
        }
      ]
    }
  ]
}

Note:

The trigger UID that corresponds to a specific trigger name in the Analytics interface can be found in the URL query-string parameters of the Triggers interface. The triggerType UID is passed in the pipeline data stream, and code can be written in pipeline.js to map the trigger UID to a user-friendly label that can be stored in a Trigger Name column of the pipelineEvents schema.

The Consumer parameter

The pipeline works with a producer/consumer model: there can be many consumers on the same queue, and messages are consumed separately by each one. Each consumer gets its own copy of the messages.

The "consumer" parameter identifies the instance as one of these consumers; it is the identity of the instance calling the pipeline. You can fill it with the instance name. The pipeline service keeps track of the messages retrieved by each consumer, and using a different consumer for each instance ensures that every message is delivered to each instance.

How to configure the Pipeline option

Add or edit Experience Cloud triggers under the "triggers" array; do not edit the rest.
Make sure that the JSON is valid; this website can help: http://jsonlint.com/

  • "name" is the trigger ID. A wildcard "*" catches all triggers.
  • "Consumer" is any unique string that uniquely identifies the nlserver instance. It can usually be the instance name itself. For multiple environments (dev/stage/prod), please ensure it is unique for each of them so that each instance gets a copy of the message.
  • Pipelined also supports the "aliases" topic.

Restart pipelined after making changes.

Processing events

Processing events in Javascript

JS file

Pipeline uses a user-defined JavaScript function to process each message.

It is configured in the NmsPipeline_Config option, under the "jsConnector" attribute. This JavaScript is called every time an event is received and is run by the pipelined process.

The sample JS file is cus:triggers.js.

JS function

The pipeline JavaScript must start with a specific function.

This function is called once for every event:

    function processPipelineMessage(xmlTrigger) {}

It should return <undefined/>.
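
As a minimal sketch, a connector that simply logs each event and acknowledges it could look like this (the @triggerId attribute and logInfo() are described later in this document):

function processPipelineMessage(xmlTrigger)
{
  // Log the trigger name, then acknowledge the event
  logInfo("Received trigger: " + xmlTrigger.@triggerId);
  return <undefined/>;
}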

Restart pipelined after editing the JS.

Trigger data format

The trigger data is passed to the JS function. It's in XML format.

  • The @triggerId attribute contains the name of the trigger.
  • The enrichments element contains the data generated by Analytics and is attached to the trigger. It's in JSON format.
  • @offset is the "pointer" to the message. It indicates the order of the message within the queue.
  • @partition is a container of messages within the queue. The offset is relative to a partition. There are about 15 partitions in the queue.

Example:

<trigger offset="1500435" partition="4" triggerId="LogoUpload_1_Visits_from_specific_Channel_or_ppp">
  <enrichments>{"analyticsHitSummary":{"dimensions":{"eVar01":{"type":"string","data":["PI4INE1ETDF6UK35GO13X7HO2ITLJHVH"],"name":"eVar01","source":"session summary"},"timeGMT":{"type":"int","data":[1469164186,1469164195],"name":"timeGMT","source":"session summary"}},"products":{}}}</enrichments>
  <aliases/>
</trigger>
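
In the pipelined JavaScript, these attributes can be read with E4X syntax, for example:

// Reading the trigger attributes (E4X syntax)
var triggerId = xmlTrigger.@triggerId.toString(); // name of the trigger
var offset    = parseInt(xmlTrigger.@offset);     // position of the message within its partition
var partition = parseInt(xmlTrigger.@partition);  // partition holding the message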

Enrichment data format

Note:

This is one specific example among various possible implementations.

The content is defined in Analytics for each trigger. It's in JSON format.
For example, in a trigger LogoUpload_uploading_Visits:

  • eVar01 can contain the shopper ID, in string format. It is used to reconcile with Campaign recipients: the value must be matched to find the shopper ID, which is the primary key.
  • timeGMT can contain the time of the trigger on the Analytics side. It's in UTC Epoch format (seconds since 01/01/1970 UTC).

Example:

{
  "analyticsHitSummary": {
    "dimensions": {
      "eVar01": {
        "type": "string",
        "data": ["PI4INE1ETDF6UK35GO13X7HO2ITLJHVH"],
        "name": "eVar01",
        "source": "session summary"
      },
      "timeGMT": {
        "type": "int",
        "data": [1469164186, 1469164195],
        "name": "timeGMT",
        "source": "session summary"
      }
    },
    "products": {}
  }
}
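
As an illustration, once the enrichments have been parsed (see the parsing sample later in this document), the epoch timeGMT can be converted to a JavaScript date:

// timeGMT is in seconds since 01/01/1970 UTC; JavaScript dates use milliseconds
var enrichments = JSON.parse(xmlTrigger.enrichments.toString());
var epochSec = enrichments.analyticsHitSummary.dimensions.timeGMT.data[0];
var timeGMT = new Date(epochSec * 1000);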

Order of events processing

The events are processed one at a time, by order of offset. Each thread of the pipeline processes a different partition.

The ‘offset’ of the last event retrieved is stored in the database. Therefore, if the process is stopped, it restarts from the last message. This data is stored in the built-in schema xtk:pipelineOffset.

This pointer is specific to each instance and each consumer. Therefore, when many instances access the same pipeline with different consumers, they each get all the messages and in the same order.

The "consumer" parameter of the pipeline option identifies the calling instance. 

Currently, there is no way to have different queues for separate environments such as 'staging' or 'dev'.

Logging and error handling

Logs written with logInfo() are directed to the pipelined log. Errors raised with logError() are written to the pipelined log and cause the event to be placed in a retry queue.
Messages in error are retried several times, within the retry duration set in the pipelined options.
For debugging and monitoring purposes, the full trigger data can be written into the trigger table, in the "data" field, in XML format. Alternatively, a logInfo() containing the trigger data serves the same purpose.
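
For example, to dump the raw event to the pipelined log:

// Write the full trigger data to the pipelined log for debugging
logInfo("Trigger data: " + xmlTrigger.toXMLString());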

Parsing the data

This sample JS code parses the eVar01 in the enrichments.

function processPipelineMessage(xmlTrigger)
{
  // (…)
  var shopper_id = ""
  if (xmlTrigger.enrichments.length() > 0)
  {
    if (xmlTrigger.enrichments.toString().match(/eVar01/) != undefined)
    {
      var enrichments = JSON.parse(xmlTrigger.enrichments.toString())
      shopper_id = enrichments.analyticsHitSummary.dimensions.eVar01.data[0]
    }
  }
  // (…)
}

Be cautious when parsing to avoid errors: since the same code is used for all triggers, most fields are optional and can be left empty when not present.
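
A hedged sketch of such defensive parsing, leaving the field empty when the enrichment is absent:

// Defensive access to an optional enrichment field
var timeGMT = ""
try {
  var dims = JSON.parse(xmlTrigger.enrichments.toString()).analyticsHitSummary.dimensions
  if (dims.timeGMT && dims.timeGMT.data.length > 0)
    timeGMT = dims.timeGMT.data[0]
} catch (e) {
  // leave the field empty when the enrichment is missing or malformed
}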

Storing the trigger

Note:

This is one specific example among various possible implementations.

This sample JS code saves the trigger to the database.

function processPipelineMessage(xmlTrigger)
{
  // (…) timeNow, triggerType, timeGMT and shopper_id are prepared above
  var event =
    <pipelineEvent
      xtkschema = "cus:pipelineEvent"
      _operation = "insert"
      created = {timeNow}
      lastModified = {timeNow}
      triggerType = {triggerType}
      timeGMT = {timeGMT}
      shopper_id = {shopper_id}
      data = {xmlTrigger.toXMLString()}
    />
  xtk.session.Write(event)
  return <undef/>;
}

Constraints

Performance of this code must be optimal since it runs at high frequency: poorly tuned code, or volumes above one million trigger events per hour on the marketing server, can negatively affect other marketing activities.

The context of this JavaScript is limited: not all functions of the API are available. For example, getOption() and getCurrentDate() do not work.

To enable faster processing, several threads of this script are executed at the same time. The code must be thread safe.
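
For instance, a thread-safe connector keeps all of its state in local variables, a sketch:

// Several threads run this function concurrently:
// avoid global mutable state and keep everything local.
function processPipelineMessage(xmlTrigger)
{
  var shopper_id = ""; // safe: local to this invocation
  // (…) parse, reconcile, and write using local variables only
  return <undef/>;
}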

Storing the events

Opmerking:

It is a specific example from various possible implementations.

Pipeline event schema

Events are stored in a database table, which is used by marketing campaigns to target customers and enrich emails with trigger data.
Although each trigger can have a distinct data structure, all triggers can be held in a single table.
The triggerType field identifies which trigger the data originates from.

Here is the structure of this table (a sample schema definition follows):

  • pipelineEventId (long, primary key): the trigger's internal primary key.
  • data (memo, Trigger Data): the full contents of the trigger data in XML format, for debugging and audit purposes.
  • triggerType (string 50, TriggerType): the name of the trigger. Identifies the behavior of the customer on the website.
  • shopper_id (string 32, shopper_id): the shopper's internal identifier, set by the reconciliation workflow. If zero, the customer is unknown in Campaign.
  • shopper_key (long, shopper_key): the shopper's external identifier as captured by Analytics.
  • created (datetime, Created): the time when the event was created in Campaign.
  • lastModified (datetime, Last Modified): the last time the event was modified in Campaign.
  • timeGMT (datetime, Timestamp): the time when the event was generated in Analytics.
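
A minimal srcSchema sketch matching this structure could look as follows; the attribute names come from the table above, while the exact definition (keys, indexes, labels) depends on your data model:

<srcSchema name="pipelineEvent" namespace="cus" label="Pipeline Event">
  <element name="pipelineEvent" label="Pipeline Event" autopk="true">
    <attribute name="data"         type="memo"     label="Trigger Data"/>
    <attribute name="triggerType"  type="string"   length="50" label="TriggerType"/>
    <attribute name="shopper_id"   type="string"   length="32" label="shopper_id"/>
    <attribute name="shopper_key"  type="long"     label="shopper_key"/>
    <attribute name="created"      type="datetime" label="Created"/>
    <attribute name="lastModified" type="datetime" label="Last Modified"/>
    <attribute name="timeGMT"      type="datetime" label="Timestamp"/>
  </element>
</srcSchema>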

Displaying the events

The events can be displayed with a simple form based on the events schema.


Processing the events

Reconciliation workflow

Reconciliation is the process of matching the customer from Analytics into the Campaign database. For example, the criteria for matching can be the shopper_id.

For performance reasons, the matching must be done in batch mode by a workflow. 
The frequency must be set to 15 minutes to optimize the workload. As a consequence, the delay between the reception of an event in Adobe Campaign and its processing by a marketing workflow can be up to 15 minutes.

Options for unit reconciliation in JS

In theory, it is possible to run the reconciliation query for each trigger in the JS. This has a higher performance impact but gives faster results, and can be required for specific use cases where reactivity is needed. A hedged sketch is shown below.

It can be difficult if no index is set on shopper_id. Also, if the reconciliation criteria live on a database server separate from the marketing server, the query uses a database link, which has poor performance.
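
A sketch of such a unit query with the QueryDef API, assuming a hypothetical cus:shopper schema holding a shopper_id attribute:

// Look up the shopper in the marketing database
// (cus:shopper and @shopper_id are hypothetical names)
var query = xtk.queryDef.create(
  <queryDef schema="cus:shopper" operation="select">
    <select><node expr="@id"/></select>
    <where><condition expr={"@shopper_id = '" + shopper_id + "'"}/></where>
  </queryDef>);
var shopper = query.ExecuteQuery();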

Purge workflow

Triggers are processed within the hour, so there is no reason to keep them for long. The volume can reach about one million triggers per hour, which is why a purge workflow must be put in place. The purge runs once per day and deletes all triggers older than three days.

Campaign workflow

The trigger campaign workflow is often similar to other recurring campaign workflows already in use.
For example, it can start with a query on the triggers looking for specific events during the last day; that target is then used to send the email. Enrichments or other data can come from the trigger. The workflow can be safely used by marketing operators as it requires no configuration.


Pipeline Monitoring

Pipeline process status

The pipelined status web service gives information on the status of the pipelined process.

It can be accessed manually using a browser or automatically with a monitoring application.

It is a REST web service; its indicators are described below.
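
For example, with the default monitoring port (7781, see monitorServerPort in the configuration options), the status can be fetched with a browser or a command-line client such as curl:

curl http://localhost:7781/pipelined/status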


Indicators

This section lists the indicators in the status web service.
Recommended indicators to monitor are highlighted.

  • Consumer: name of the client pulling the triggers. Configured in the pipeline option.
  • http-request
    • last-alive-ms-ago: time in ms since a connection check was made.
    • last-failed-cnx-ms-ago: time in ms since the last time the connection check failed.
    • pipeline-host: name of the host where the pipeline data is pulled from.
  • pointer
    • current-offsets: value of the pointer into the pipeline, per child thread.
    • last-flush-ms-ago: time in ms since a batch of triggers was retrieved.
    • next-offsets-flush: time to wait until the next batch, when finished.
    • processed-since-last-flush: number of triggers processed in the last batch.
  • routing
    • triggers: list of triggers retrieved. Configured in the pipelined option.
  • stats
    • average-pointer-flush-time-ms: average processing time for one batch of triggers.
    • average-trigger-processing-time-ms: average time spent parsing the triggers data.
    • bytes-read: number of bytes read from the queue since the process was started.
    • current-messages: current number of pending messages that have been pulled from the queue and are awaiting processing. This indicator should be close to zero.
    • current-retries: current number of messages that have failed processing and are awaiting retry.
    • peak-messages: maximum number of pending messages the process has been handling since it was started.
    • pointer-flushes: number of batches of messages processed since the start.
    • routing-JS-custom: number of messages that were processed by the custom JS.
    • trigger-discarded: number of messages that were discarded after too many retries due to processing errors.
    • trigger-processed: number of messages that were processed without an error.
    • trigger-received: number of messages received from the queue.

These stats are displayed per processing thread.

  • average-trigger-processing-time-ms: average time spent parsing the triggers data.
  • is-JS-processor: value "1" if this thread uses the custom JS.
  • trigger-discarded: number of messages that were discarded after too many retries due to processing errors. This indicator should be zero.
  • trigger-failures: number of processing errors in the JS. This indicator should be zero.
  • trigger-received: number of messages received from the queue. 

 

  • Settings: they are set in the config files.
    • flush-pointer-msg-count: number of messages in a batch.
    • flush-pointer-period-ms: time between two batches, in milliseconds.
    • processing-threads-JS: number of processing threads running the custom JS.
    • retry-period-ms: time between two retries when a processing error occurs.
    • retry-validity-duration-ms: duration from the time processing is retried until the message is discarded.

Pipeline messages Report

This report displays the number of messages per hour in the last five days.


Troubleshooting Pipeline

Pipelined fails with error "No task corresponds to the mask 'pipelined@'"

Your version of Adobe Campaign does not support the pipeline.

  1. Check if the pipelined element is present in the config file. If it is not, the version does not support the pipeline.
  2. Upgrade to version 6.11 build 8705 or later.

Pipelined fails with "'' aurait dû commencer par '[' ou '{'" (iRc=16384)

This French message means "should have begun with '[' or '{'": it is a JSON parsing error, raised because the NmsPipeline_Config option is not set.
Set the JSON configuration in the NmsPipeline_Config option. See the "Pipeline option NmsPipeline_Config" section of this page.

Pipelined fails with "the subject must be a valid organization or client"

The IMSOrgId configuration is not valid.

  1. Check that the IMSOrgId is set in the serverConf.xml.
  2. Look for an empty IMSOrgId in the instance config file that can override the default. If so, remove it.
  3. Check that the IMSOrgId matches that of the customer in the Experience Cloud. 

Pipelined fails with "invalid key"

The @authPrivateKey parameter of the instance config file is incorrect.

  1. Check that the authPrivateKey is set.
  2. Check that the authPrivateKey value starts with @, ends with =, and is about 2600 characters long.
  3. Look for the original key and check that it is: in RSA format, 4096 bits long, and starts with -----BEGIN RSA PRIVATE KEY-----.
    If necessary, re-create the key and register it on Adobe Developer Connection.
  4. Check that the key was encoded within the same instance as pipelined.
    If necessary, redo the encoding using the sample JavaScript or workflow.

Pipelined fails with "unable to read the token during authentication"

The private key has an invalid format.

  1. Run the steps for key encryption on this page.
  2. Check that the key is encrypted on the same instance.
  3. Check that the authPrivateKey in the config file matches the generated key.
    Make sure to use OpenSSL to generate the key pair. PuTTYgen, for example, does not generate the proper format.

No triggers are retrieved

When the pipelined process is running and no triggers are retrieved:

  1. Make sure that the trigger is active in Analytics and is generating events.
  2. Make sure that the pipelined process is running.
  3. Look for errors in the pipelined log.
  4. Look for errors on the pipelined status page: trigger-discarded and trigger-failures should be zero.
  5. Check that the trigger name is configured in the NmsPipeline_Config option. If there is a doubt, use the wildcard option.
  6. Keep in mind that there can be a delay of a few hours after the trigger is configured in Analytics before it becomes active.

Events are not linked to a customer

When some events are not linked to a customer:

  1. Check that the reconciliation workflow is running, if applicable.
  2. Check that the event contains a customer ID.
  3. Make a query on the customer table using the customer ID.
  4. Check the frequency of the customer import. New customers are imported into Adobe Campaign with a workflow.

Latency in events processing

This occurs when the Analytics timestamp is much older than the creation date of the event in Campaign.

Normal latency is less than 15 minutes. 

  1. Check if the pipelined process has been running.
  2. Look for errors in pipelined.log that can cause retries. Fix the errors, if applicable.
  3. Check the pipelined status page for the queue size. If the queue size is large, improve the performance of the JS.
  4. Since the delay tends to increase with volume, configure the triggers in Analytics to send fewer messages.

Annexes

How to use the Key encryption JavaScript

Run a JavaScript to encrypt the private key. It is required for the pipeline configuration.

Here is a code sample that you can use to run the cryptString function:

/*
USAGE:
  nlserver javascript -instance:<instancename> -file -arg:"<private_key.pem file>" -file encryptKey.js
*/

function usage()
{
  return "USAGE:\n" +
    '  nlserver javascript -instance:<instancename> -file -arg:"<private_key.pem file>" -file encryptKey.js\n'
}

var fn = application.arg;
if( fn == "" )
  logError("Missing key file\n" + usage());

// Open the PEM file
var plaintext = loadFile(fn)

if( !plaintext.match(/^-----BEGIN RSA PRIVATE KEY-----/) )
  logError("File should be an RSA private key")

logInfo("Encrypted key:\n" + cryptString(plaintext))

On the server, execute the Javascript:

nlserver javascript -instance:<instancename> -file -arg:"<private_key.pem file>" -file encryptKey.js

Copy the encrypted key from the console output and paste it into the authPrivateKey attribute of the configuration file.
