Goal

This document provides data estimates for Performance Monitoring Toolset and helps users size hardware requirements for their own deployments. The document helps in estimating:

  • The number of ElasticSearch instances that must be mapped to a single instance of ColdFusion.
  • The amount of data generated by Performance Monitoring Toolset or ColdFusion that is stored in ElasticSearch based on the traffic received by ColdFusion.

Background of this document

The main queries from users center around the following:

  • Determining ElasticSearch requirements for a particular deployment.
  • Assessing ElasticSearch requirements for a specified number of ColdFusion servers.

In the absence of benchmarking data, customers have had to size deployments by experience, running the risk of under- or over-provisioning.

Scope of the document

This document provides the basic building blocks for estimating the data generated by Performance Monitoring Toolset. 

We will also discuss the data generated by one ColdFusion node as a building block and extend it to different deployments.

Basic configuration of Performance Monitoring Toolset

For production deployments, it is recommended that Performance Monitoring Toolset be deployed separately from the ColdFusion server.

Performance Monitoring Toolset has two components:

  1. Monitoring server
  2. Datastore

The configurations for the servers are specified below:

Minimum requirements for Performance Monitoring Toolset server:

  1. RAM: 8 GB if Performance Monitoring Toolset server and Datastore server are installed on the same machine. If Performance Monitoring Toolset server is installed separately, then 4 GB RAM is required.
  2. Hard Disk: Free disk space of 40 GB, if Performance Monitoring Toolset server and Datastore server are installed on the same machine. For stand-alone Performance Monitoring Toolset server installation, 10 GB hard disk is recommended.

Minimum requirements for Datastore server (ElasticSearch 5.6.3):

  1. RAM: 8 GB, if Performance Monitoring Toolset server and Datastore server are installed on the same machine.
  2. Hard Disk: Free disk space of 40 GB. We recommend SSDs for optimized performance of ElasticSearch I/O.
  3. The minimum RAM required for the ElasticSearch process is 4 GB.

If you have multiple ColdFusion instances configured with Performance Monitoring Toolset and your servers receive a large number of requests, it is recommended to install a Datastore cluster.

For more information on monitoring the health of a Datastore, see Monitor health of Datastore.

Use cases

Performance Monitoring Toolset with ElasticSearch cluster:

For large deployments with a topology of several ColdFusion nodes, an ElasticSearch cluster is recommended, and it must be backed by appropriate hardware.

Performance Monitoring Toolset with single ElasticSearch node:

The table below shows the setup of a single ElasticSearch node based on the metrics data generated, and it also covers the archiving strategy.

We recommend a heap of 8 GB for Performance Monitoring Toolset.

For a non-request data size of 5 KB:

| Requests/CF Node/Day | Index Size/CF Node/Day | CF Nodes | ES Nodes Recommended | Duration Without Archiving |
|----------------------|------------------------|----------|----------------------|----------------------------|
| 10k                  | 110 MB                 | 1        | 1                    | 1 month                    |
|                      |                        | 10       | 1                    | 1 month                    |
|                      |                        | 30       | 1                    | 1 month                    |
| 50k                  | 180 MB                 | 1        | 1                    | 1 month                    |
|                      |                        | 10       | 1                    | 20 days                    |
|                      |                        | 30       | 1                    | 7 days                     |
| 100k                 | 265 MB                 | 1        | 1                    | 1 month                    |
|                      |                        | 10       | 1                    | 20 days                    |
|                      |                        | 30       | 3 (cluster)          | 30 days                    |
| 1M                   | 1850 MB                | 1        | 1                    | 20 days                    |
|                      |                        | 10       | 3 (cluster)          | 7 days                     |
|                      |                        | 30       | 7 (cluster)          | 10 days                    |

 
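To see how "Duration Without Archiving" figures like those above follow from disk capacity, the sketch below estimates how many days of indices fit on a single ElasticSearch node. This is illustrative only: the 40 GB figure is the minimum free-disk requirement stated earlier, and the 85% usable-disk fraction is an assumption for this sketch, not a published number.

```python
# Rough sizing sketch. Assumptions: 40 GB free disk (from the minimum
# requirements above); treating 85% of it as usable for indices is an
# illustrative guess, not an Adobe-published figure.

def days_without_archiving(index_mb_per_cf_node_per_day, cf_nodes,
                           free_disk_gb=40, usable_fraction=0.85):
    """Estimate how many days of metrics a single ES node can hold."""
    daily_mb = index_mb_per_cf_node_per_day * cf_nodes
    usable_mb = free_disk_gb * 1024 * usable_fraction
    return usable_mb / daily_mb

# One CF node at 100k requests/day (~265 MB/day) fits months of data,
# while 30 such nodes exhaust a single ES node within a week, which is
# why the table recommends a cluster at that scale.
print(days_without_archiving(265, 1))
print(days_without_archiving(265, 30))
```

The same function can be used with your own measured index size per day and actual disk budget to decide when to move from a single node to a cluster.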

Archive data in Performance Monitoring Toolset

As data grows over time, the need to archive it properly grows as well.

Archiving helps you to maintain regular backups of your data and comply with an organization's data retention policies. Using archiving, you can remove obsolete production data via a well-defined process. In Performance Monitoring Toolset, you can archive records in a repository and also set the frequency of archival and data retention period.

Archiving

First, create a repository by specifying the following fields:

| Field | Description |
|-------|-------------|
| Name  | Name of the repository to be created. |
| Path  | Path to the repository. To register a shared file system repository, mount the same shared file system to the same location on all master and data nodes. Windows path format: C:/path/to/repository. Non-Windows path format: /opt/path/to/repository. |

You must also whitelist the paths in the datastore configuration file at <Path to Performance Monitoring Toolset>/datastore/config/elasticsearch.yml.

For example,

In the elasticsearch.yml file, add the following:

path.repo: ["C:/path/to/repository"]

You can perform archiving both manually and automatically.

  1. Manual - Go to the Archiving section, select the repository, and trigger archiving manually.
  2. Auto-archiving - You can also schedule archiving to run on a regular basis. For this, fill in the following details.

        a. Enable: Select this check box to enable archiving for the selected repository.

        b. Repository name: Lists all repositories created.

        c. Archive data older than: Enter the number of days of data to retain when archiving runs. If you specify 30 days, only the data for the last 30 days is retained; the rest is archived.

        d. Frequency: Enter the interval in days to schedule the archiving periodically.
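The retention setting above can be illustrated with a short sketch; split_for_archiving is a hypothetical helper written for this document, not part of the toolset:

```python
from datetime import date, timedelta

# Hypothetical illustration of "Archive data older than": each scheduled
# run keeps the most recent retain_days of daily indices and archives
# everything older.
def split_for_archiving(index_dates, today, retain_days=30):
    cutoff = today - timedelta(days=retain_days)
    keep = sorted(d for d in index_dates if d >= cutoff)
    archive = sorted(d for d in index_dates if d < cutoff)
    return keep, archive

# 60 daily indices with a 30-day retention: the older part is archived.
days = [date(2024, 1, 1) + timedelta(days=i) for i in range(60)]
keep, archive = split_for_archiving(days, today=date(2024, 2, 29))
```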

ElasticSearch system configuration

Performance Monitoring Toolset uses ElasticSearch as the datastore. Since ElasticSearch performs file-intensive I/O operations, the use of SSD drives is recommended.

You can set the Heartbeat interval option in the Performance Monitoring Toolset properties section while configuring the web server connector with ColdFusion. This setting determines the frequency at which connector metrics are posted to ElasticSearch.

You must allocate an optimal heap size to ElasticSearch, based on the availability of memory and other processes on the machine.

The JVM heap size for ElasticSearch can be set by editing the jvm.options file in the <pmt_root>/datastore/config directory.

To set the lower and upper heap size to 8 GB, update the following settings:

  • -Xms8g
  • -Xmx8g

The heap size must be no more than 50% of your physical RAM. This ensures that there is enough physical RAM left for kernel file system caches.

To avoid resizing of the JVM heap in runtime, start the JVM with the initial heap size equal to the maximum heap size.

The default garbage collection used in ElasticSearch is CMS. Using Serial GC is not recommended on a multicore CPU.

For further details on sizing the JVM heap, see ElasticSearch heap sizing.

Operating System Settings

File swapping/paging

To disable swapping between main memory and secondary storage, in the host system, use the following system command on Linux:

sudo swapoff -a

To disable it permanently, edit the /etc/fstab file and comment out any lines that contain the word swap.

On Windows, disable the paging file via:

System Properties > Advanced > Performance > Advanced > Virtual memory

On Linux, ensure that in sysctl the value of vm.swappiness is set to 1. This reduces the kernel's tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.

Yet another option is to use mlockall on Linux/Unix systems, or VirtualLock on Windows, to try to lock the process address space into RAM, preventing any ElasticSearch memory from being swapped out. This can be done by adding the following line to the config/elasticsearch.yml file:

bootstrap.memory_lock: true

In Performance Monitoring Toolset, this file is located in the <pmt_root>/datastore/config directory. (On ElasticSearch versions earlier than 5.0, the setting was named bootstrap.mlockall.)

This setting makes the JVM lock its memory, preventing it from being swapped out by the OS.

Open file descriptors limit

(This applies to *nix systems only.)

To set the number of open file descriptors for system users, edit the file /etc/security/limits.conf .

Set the number to at least 65536.

os_user soft nofile <no_of_files>

os_user hard nofile <no_of_files>

For example,

* soft nofile 65536

* hard nofile 65536

Process limit per user

Set the maximum number of processes available to a user to at least 2048.

Add the following to the /etc/security/limits.conf file:

* soft nproc <no_of_proc>

* hard nproc <no_of_proc>

Alternatively, use the ulimit command as follows:

ulimit -u 2048

Virtual memory

ElasticSearch uses an mmapfs directory by default on 64-bit systems to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.

On Linux, you can increase the limits by running the following command as root:

sysctl -w vm.max_map_count=262144

To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf.

To verify the set value after rebooting, run sysctl vm.max_map_count.

Port range and connection timeout

Increase the port range and decrease the TCP connection timeout by using the following directives in the /etc/sysctl.conf file:

net.ipv4.ip_local_port_range = 1024 65535

net.ipv4.tcp_fin_timeout = 30

Configure an ElasticSearch cluster

The datastore for Performance Monitoring Toolset can be a cluster of ElasticSearch nodes hosted across multiple machines. This allows the datastore to be scaled horizontally or vertically as needed.

Edit the elasticsearch.yml file located at <pmt_root>/datastore/config.

Add the IP address and port pair of each ElasticSearch node to the discovery.zen.ping.unicast.hosts property as follows:

discovery.zen.ping.unicast.hosts: ["xx.xx.xx.xx:9300", "zz.zz.zz.zz:9301"]

The port value should be set to the transport.tcp.port used for node-to-node communication. The port value can be found in the elasticsearch.yml file of the respective node.

One of the ElasticSearch nodes should be set as the master by setting the following property to true:

node.master: true

All other ElasticSearch nodes should have this property set to false:

node.master: false

Set the following property to (total number of master-eligible nodes / 2) + 1, rounded down:

discovery.zen.minimum_master_nodes

This prevents the “split-brain” problem by determining the number of nodes that need to be in communication to elect a master.
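The quorum formula can be expressed as a small sketch (the function name is ours, for illustration):

```python
# Quorum rule behind discovery.zen.minimum_master_nodes (ES 5.x):
# floor(master_eligible / 2) + 1. At most one network partition can
# contain that many master-eligible nodes, which prevents split brain.
def minimum_master_nodes(master_eligible):
    return master_eligible // 2 + 1

# A 3-node cluster needs 2 master-eligible nodes in contact to elect a
# master; note that a 2-node cluster needs both, so it cannot tolerate
# losing either node.
print(minimum_master_nodes(3))
```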

Restart the node for the configuration changes to take effect.

Configure size of heap of ElasticSearch

By default, the Performance Monitoring Toolset Datastore service is configured to use a heap with a minimum and maximum size of 2 GB. This is sufficient if you are not configuring many ColdFusion servers for monitoring. However, it is important to configure the heap size to its optimum value to get the desired performance benefits.

But how do you change the heap size of ElasticSearch? There are two ways to configure it.

Non-Windows and Windows platforms (only if ElasticSearch runs via the command prompt)

Follow these steps to configure the heap size:

  1. Navigate to <pmt_install_root>/datastore/config.

  2. Open the jvm.options configuration file. ElasticSearch assigns the entire heap specified in jvm.options via two settings: Xms (minimum heap size) and Xmx (maximum heap size).

  3. Change the values for the Xms and Xmx settings.

  4. Save the changes.

  5. Restart ColdFusion 2018 Performance Monitoring Toolset Datastore Service from the command prompt or the scripts located at <pmt_install_root>/bin.

Windows (If ElasticSearch runs via Services)

Configuring the heap size for ElasticSearch is different if it runs via Windows services.

To configure the heap size, follow the steps below:

  1. Navigate to this location <pmt_install_root>/datastore/bin.

  2. In the command prompt, run the command elasticsearch-service.bat manager.

  3. The ColdFusion 2018 Performance Monitoring Toolset Datastore configuration wizard launches.

  4. Navigate to the tab Java.

  5. Change the values in the Initial memory pool and Maximum memory pool text fields.

  6. To save the changes, click OK.

  7. Restart ColdFusion 2018 Performance Monitoring Toolset Datastore Service from services.msc.

A few recommendations for configuring the ElasticSearch heap size:

  1. Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal.
  2. The more heap available to ElasticSearch, the more memory it can use for caching. But note that too much heap can subject you to long garbage collection pauses.
  3. Set Xmx to no more than 50% of your physical RAM, to ensure that there is enough physical RAM left for kernel file system caches. It should not exceed 32 GB.
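Recommendations 1 and 3 combine into a simple rule of thumb, sketched below; the 31 GB cutoff is a common conservative choice for staying under the JVM's compressed-object-pointers threshold, not an exact limit:

```python
def recommended_heap_gb(physical_ram_gb):
    # At most 50% of physical RAM (leaving the rest for the kernel file
    # system cache), and stay below ~32 GB so the JVM can keep using
    # compressed object pointers; 31 GB is a conservative cutoff.
    return min(physical_ram_gb // 2, 31)

# e.g. a 16 GB machine gets an 8 GB heap; a 128 GB machine still caps at 31 GB.
print(recommended_heap_gb(16))
print(recommended_heap_gb(128))
```

Use the resulting value for both Xms and Xmx (recommendation 1) so the heap is never resized at runtime.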
