This document provides data estimates for Performance Monitoring Toolset and helps users size the hardware requirements for their own deployments. The document helps you estimate:
The main questions from users center around the following:
In the absence of benchmarking, customers have had to adopt a size-by-experience approach, at the risk that the tool's requirements are not represented accurately.
This document provides the basic building blocks for estimating the data generated by Performance Monitoring Toolset.
We also discuss the data generated by one ColdFusion node as a building block and extend it to different deployments.
For production deployments, it is recommended that Performance Monitoring Toolset be deployed separately from the ColdFusion server.
Performance Monitoring Toolset has two components: the Performance Monitoring Toolset server and the Datastore (ElasticSearch).
The configurations for the servers are specified below:
Minimum requirements for Performance Monitoring Toolset
Minimum requirements for Performance Monitoring Toolset server:
Minimum requirements for Datastore server (ElasticSearch 5.6.3):
If you have multiple ColdFusion instances configured with Performance Monitoring Toolset and your servers receive a large number of requests, it is recommended to install a Datastore cluster.
For more information on monitoring the health of a Datastore, see Monitor health of Datastore.
Performance Monitoring Toolset with ElasticSearch cluster:
For large deployments with a topology that has several ColdFusion nodes, an ElasticSearch cluster is recommended, backed by appropriate hardware.
Performance Monitoring Toolset with single ElasticSearch node:
The table below shows the setup of a single ElasticSearch node based on the metrics data generated, and it also covers the archiving strategy.
We recommend a heap of 8 GB for Performance Monitoring Toolset.
Assuming a non-request data size of 5 KB:
| Requests/CF Node/Day | Index Size/CF Node/Day | ElasticSearch nodes recommended | Archiving Strategy (days of data kept before archiving is needed) |
| --- | --- | --- | --- |
| 10k | 110 MB | | |
| 50k | 180 MB | | |
| 100k | 265 MB | | |
| 1M | 1850 MB | | |
As data grows over time, the need to archive it properly grows as well.
Archiving helps you to maintain regular backups of your data and comply with an organization's data retention policies. Using archiving, you can remove obsolete production data via a well-defined process. In Performance Monitoring Toolset, you can archive records in a repository and also set the frequency of archival and data retention period.
First, create a repository by specifying the following fields:
| Fields | Description |
| --- | --- |
| Name | Name of the repository to be created. |
| Path | Path to the repository. To register a shared file system repository, mount the same shared filesystem to the same location on all master and data nodes. On Windows, the path format is C:/path/to/repository; on non-Windows systems, it is /opt/path/to/repository. You must also whitelist the paths in the datastore configuration file at <Path to Performance Monitoring Toolset>/datastore/config/elasticsearch.yml. For example, in the elasticsearch.yml file, add the following: path.repo: ["C:/path/to/repository"] |
You can perform archiving both manually and automatically.
a. Enable: Choose this check-box to enable archiving for the selected repository.
b. Repository name: Lists all repositories created.
c. Archive data older than: Enter the number of days that data has to be retained from the time when archiving starts. If you have specified 30 days, only data for the last 30 days is retained. The rest is archived.
d. Frequency: Enter the interval in days to schedule the archiving periodically.
For more details, see the following:
Performance Monitoring Toolset uses ElasticSearch as the datastore. Since ElasticSearch performs file-intensive I/O operations, the use of SSD drives is recommended.
You can set the Heartbeat interval option in the Performance Monitoring Toolset properties section while configuring the web server connector with ColdFusion. This setting determines the frequency at which the connector metrics are posted to ElasticSearch.
You must allocate an optimal heap size to ElasticSearch, based on the availability of memory and other processes on the machine.
The JVM heap size for ElasticSearch can be set by editing the jvm.options file in the <pmt_root>/datastore/config directory.
To set the lower and upper heap size to 8 GB, update the following settings:
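Using the standard ElasticSearch JVM flags in jvm.options, this looks like:
-Xms8g
-Xmx8g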
The Heap Size must be no more than 50% of your physical RAM. This is to ensure that there is enough physical RAM left for kernel file system caches.
To avoid resizing of the JVM heap in runtime, start the JVM with the initial heap size equal to the maximum heap size.
The default garbage collection used in ElasticSearch is CMS. Using Serial GC is not recommended on a multicore CPU.
For further details on sizing the JVM heap, see ElasticSearch heap sizing.
To disable swapping between main memory and secondary storage, in the host system, use the following system command on Linux:
sudo swapoff -a
To disable it permanently, edit the /etc/fstab file and comment out any lines that contain the word swap.
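For example, a typical swap entry (the device name below is only illustrative) would be commented out as follows:
# /dev/sda2  none  swap  sw  0  0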
On Windows, disable the paging file via:
System Properties > Advanced > Performance > Advanced > Virtual memory
On Linux, ensure that in sysctl the value of vm.swappiness is set to 1. This reduces the kernel's tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.
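For example, you can apply the setting immediately with the sysctl command and persist it across reboots by adding the second line to /etc/sysctl.conf:
sudo sysctl -w vm.swappiness=1
vm.swappiness = 1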
Yet another option is to use mlockall on Linux/Unix systems, or VirtualLock on Windows, to try to lock the process address space into RAM, preventing any ElasticSearch memory from being swapped out. This can be done by adding this line to the config/elasticsearch.yml file:
bootstrap.memory_lock: true
You can also enable mlockall by using the following setting in the elasticsearch.yml file in the <pmt_root>/datastore/config directory.
bootstrap.mlockall: true
This enables JVM to lock its memory to prevent it from being swapped by the OS.
(This is limited to *nix systems only.)
To set the number of open file descriptors for system users, edit the file /etc/security/limits.conf .
Set the number to at least 65536.
os_user soft nofile <no_of_files>
os_user hard nofile <no_of_files>
For example,
* soft nofile 65536
* hard nofile 65536
Set the maximum number of processes available to a user to at least 2048.
In the /etc/security/limits.conf file, add:
* soft nproc <no_of_proc>
* hard nproc <no_of_proc>
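For example,
* soft nproc 2048
* hard nproc 2048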
Alternatively, use the ulimit command as follows:
ulimit -u 2048
ElasticSearch uses an mmapfs directory by default on 64-bit systems to store its indices. The default operating system limits on mmap counts are likely to be too low, which may result in out-of-memory exceptions.
On Linux, you can increase the limits by running the following command as root:
sysctl -w vm.max_map_count=262144
To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf.
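For example, add the following line to /etc/sysctl.conf:
vm.max_map_count = 262144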
To verify the set value, run sysctl vm.max_map_count after rebooting.
Increase the port range and decrease the TCP connection timeout by using the following directives in the /etc/sysctl.conf file:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
The datastore can be a cluster of ElasticSearch nodes hosted across multiple hosts. This allows the datastore to be scaled horizontally or vertically as needed.
Edit the elasticsearch.yml file located at <pmt_root>/datastore/config.
Add the ip-address port pair of the ElasticSearch node to the discovery.zen.ping.unicast.hosts property as follows:
discovery.zen.ping.unicast.hosts: ["xx.xx.xx.xx:9300", "zz.zz.zz.zz:9301"]
The port value should be set to the transport.tcp.port used for node-to-node communication. The port value can be found in the elasticsearch.yml file of the respective node.
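For example, a node's elasticsearch.yml might contain the following (9300 is the default transport port):
transport.tcp.port: 9300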
One of the ElasticSearch nodes should be set as the master by setting the following property to true:
node.master: true
All other ElasticSearch nodes should have this property set to false:
node.master: false
Set the following property to (total number of master-eligible nodes / 2) + 1, rounding the division down:
discovery.zen.minimum_master_nodes
This prevents the “split-brain” problem by determining the number of nodes that need to be in communication to elect a master.
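For example, in a cluster with three master-eligible nodes the value is 3 / 2 + 1 = 2 (using integer division), so each node's elasticsearch.yml would contain:
discovery.zen.minimum_master_nodes: 2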
Restart the node for the configuration changes to take effect.
By default, the Performance Monitoring Toolset Datastore service is configured to use a heap with a minimum and maximum size of 2 GB. This is good enough if you are not configuring a lot of ColdFusion servers for monitoring purposes. However, it is important to configure the heap size to its optimum value to get the desired performance benefits.
How do you change the heap size of ElasticSearch? There are two ways to configure the heap size.
Here’s the list of steps that should be followed to configure the heap size:
Navigate to this location <pmt_install_root>/datastore/config.
Open the jvm.options configuration file. ElasticSearch assigns the entire heap specified in jvm.options via two settings: Xms (minimum heap size) and Xmx (maximum heap size).
Change the values for the Xms and Xmx settings.
Save the changes.
Restart ColdFusion 2018 Performance Monitoring Toolset Datastore Service from the command prompt or the scripts located at <pmt_install_root>/bin.
Configuring the heap size for ElasticSearch is different if it runs as a Windows service.
To configure the heap size, follow the steps below:
Navigate to this location <pmt_install_root>/datastore/bin.
In the command prompt, run the command elasticsearch-service.bat manager.
The ColdFusion 2018 Performance Monitoring Toolset Datastore configuration wizard launches.
Navigate to the tab Java.
Change the values in the Initial memory pool and Maximum memory pool text fields.
To save the changes, click OK.
Restart ColdFusion 2018 Performance Monitoring Toolset Datastore Service from services.msc.
A few recommendations for configuring the ES heap size: set the initial heap size (Xms) equal to the maximum heap size (Xmx) so that the heap is not resized at runtime, and keep the heap at no more than 50% of physical RAM so that enough memory remains for kernel file system caches.