Introduction
Adobe Connect uses the latest WebRTC framework for providing enhanced Audio and Video capabilities. This Enhanced Audio-Video setup typically runs on multiple Linux® nodes that have specific roles. There are signalling nodes, media nodes, and recording nodes. This setup also uses PostgreSQL and Redis databases that can be installed on one or on separate machines depending on the usage.
Each node’s setup is done by copying the installer zip file to it, editing the config files, running the dependency installation script, and finally running the main installation script. The following topics are covered:
- Prepare the environment
- Upgrading a system running 12.0, 12.1, or 12.1.5
- Configure Enhanced Audio-Video in Adobe Connect Service (CPS)
These steps are described in the Installation section.
Pre-requisite and System Requirements
Estimate the size for Enhanced Audio/Video servers/nodes
The file, session_participants_count.sql, can be run to estimate the size of the enhanced audio/video servers. The output from the SQL queries is the input to the calculator, which is an Excel file, Additional Enhanced A/V hardware estimator.xlsx.
The calculator helps estimate the number of VMs needed based on your past usage of Adobe Connect. It needs the following set of inputs:
- The number of server CPU cores and RAM.
- The peak number of concurrent sessions in the last 12 months and the average number of attendees, determined by running the attached SQL queries.
- The estimated number of publishers in each session. A publisher is a meeting attendee (host, presenter, or participant) connecting their microphone in the meeting room (whether muted or unmuted) and connecting their webcam in the meeting room (whether live or paused).
Both the session_participants_count.sql and Additional Enhanced A/V hardware estimator.xlsx files are included in the Installer package and are not available elsewhere.
Estimate FQDN and SSL certificate requirements
FQDN:
- One public DNS record for the external Application Load Balancer, e.g., webrtc.example.com
- One public DNS record for each Media Server, e.g., media-1.example.com
SSL CERTIFICATES:
- One SSL certificate for the LB FQDN
- TURNS configuration is recommended for Media Servers, so one certificate for each Media Server
Understand Network Architecture
Port Opening Requirements
Source | Destination | Port | Protocol | Use |
---|---|---|---|---|
Signalling Node | Redis | 6379 | TCP | |
Signalling Node | Postgres | 5432 | TCP | |
Signalling Node | Recording Node | 5000-5100 | TCP | |
Signalling Node | ASR Node | 6000-6100 | TCP | |
Signalling Node | SIP or Media-Server Node | 5060 | UDP | For SIP Signalling |
Signalling Node | SIP or Media-Server Node | 5060 | TCP | |
Signalling Node | CPS Server | 80 | TCP | For POC or Demo setups running without SSL |
Signalling Node | CPS Server/LB | 443 | TCP | |
Recording Node | Media-Server Node (*) | 443 | TCP | TURNS |
Recording Node | Media-Server Node (*) | 3478 | UDP | STUN/TURN |
Recording Node | Media-Server Node (*) | 3478 | TCP | STUN/TURN |
Recording Node | Media-Server Node (*) | 30000-65535 | UDP | SRTP (real-time media flow) |
Recording Node | CPS Server | 80 | TCP | For POC or Demo setups running without SSL |
Recording Node | CPS Server/LB | 443 | TCP | |
Recording Node | Redis | 6379 | TCP | |
Recording Node | Signalling Node | 8090 | TCP | |
Recording Node | Signalling Node | 18443 | TCP | For POC or Demo setups running without LB |
Recording Node | WebRTC Load Balancer (**) | 443 | TCP | |
Media Server Node | Redis | 6379 | TCP | |
Media Server Node | Postgres | 5432 | TCP | |
Media Server Node | stun.l.google.com | 19302 | UDP | For discovering Public IP. Verifying NAT type. |
Media Server Node | stun1.l.google.com | 19302 | UDP | For discovering Public IP. Verifying NAT type. |
Media Server Node | Signalling Node | 18443 | TCP | For POC or Demo setups running without LB |
Media Server Node | Media Server Node | 8445 | TCP | For performing clustering in case of multiple Media Servers |
Media Server Node | WebRTC Load Balancer (**) | 443 | TCP | For registration on WebRTC Gateway |
CPS Server | WebRTC Load Balancer | 443 | TCP | |
CPS Server | Recording Node | 80 | TCP | Downloading recording files |
CPS Server | Signalling Node | 18443 | TCP | For POC or Demo setups running without LB |
Users/Client/Internet | Media-Server Node (*) | 443 | TCP | TURNS (Audio-Video over TLS) |
Users/Client/Internet | Media-Server Node (*) | 3478 | UDP | STUN/TURN |
Users/Client/Internet | Media-Server Node (*) | 3478 | TCP | STUN/TURN |
Users/Client/Internet | Media-Server Node (*) | 30000-65535 | UDP | SRTP (real-time media flow) |
Users/Client/Internet | CPS Server | 80 | TCP | For POC or Demo setups running without SSL |
Users/Client/Internet | CPS Server/LB | 443 | TCP | |
Users/Client/Internet | Signalling Node | 18443 | TCP | For POC or Demo setups running without LB |
Users/Client/Internet | WebRTC Load Balancer | 443 | TCP | |
SIP Node | Redis | 6379 | TCP | In most cases, the SIP service runs on the Media Server node |
SIP Node | Postgres | 5432 | TCP | |
SIP Node | WebRTC Load Balancer | 443 | TCP | |
SIP Node | Signalling Node | 18443 | TCP | For POC or Demo setups running without LB |
Auto Captioning/ASR Node | Redis | 6379 | TCP | |
Auto Captioning/ASR Node | Signalling Node | 8080 | TCP | For Media Connection |
Auto Captioning/ASR Node | Media-Server Node (*) | 30000-65535 | UDP | SRTP (real-time media flow) |
Auto Captioning/ASR Node | Media-Server Node (*) | 443 | TCP | TURNS |
Auto Captioning/ASR Node | Media-Server Node (*) | 3478 | UDP | STUN/TURN |
Auto Captioning/ASR Node | Media-Server Node (*) | 3478 | TCP | STUN/TURN |
Auto Captioning/ASR Node | Signalling Node | 8090 | TCP | |
Auto Captioning/ASR Node | CPS Server | 80 | TCP | For POC or Demo setups running without SSL |
Auto Captioning/ASR Node | CPS Server/LB | 443 | TCP | |
* Media servers in most cases have a public IP address, but in restricted environments where access to Adobe Connect over the Internet is not needed, the ports should be opened for their private IP addresses.
** There is an external WebRTC Load Balancer for routing traffic to multiple Signalling nodes and for SSL offloading. The LB acts as an interface between the new WebRTC servers and CPS and end users.
Load balancing is typically done via an external Application Load Balancer with the below configuration.
HTTPS port 443 to Signalling node HTTP port 18443 - CPS and end users connect to this listener; it is the entry point to the new WebRTC cluster.
HTTPS port 443 to Signalling node HTTP port 9090 - Used by the administrator to connect to the WebRTC admin web panel to configure TURNS and SIP/Telephony.
For more information, see Set up load balancing for on-premise setup of Enhanced Audio/Video (WebRTC).
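To spot-check a few of these ports from a given source node before installation, you can use curl and nc (provided by the nmap-ncat package on RHEL). This is only an illustrative sketch; substitute the hostnames and ports that apply to the node you are testing:
# TCP checks (replace the hostnames with your actual nodes)
nc -vz signalling-node.example.com 18443
nc -vz media-1.example.com 443
nc -vz media-1.example.com 3478
# UDP check for STUN/TURN (UDP probes can appear open even when filtered)
nc -vzu media-1.example.com 3478
# HTTP health endpoint exposed by the Signalling node
curl -v http://signalling-node.example.com:18443/health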
System Requirements for Enhanced Audio/Video Servers
- OS – Red Hat Enterprise Linux 64-bit version 8.6
- Third-party libraries (Open source) that are installed on the servers:
- Podman 4.2.0
- Virtualenv 20.19.0
- Python 3.9
- Pip 21.3.1
- Python libraries (pydantic, pyhocon, docker, redis, packaging, psycopg2-binary, python-json-logger, pystun3, coloredlogs, colorama)
A typical enhanced AV setup needs at least three Red Hat servers: one each for the Signalling, Media, and Recording nodes. Based on the meeting load, the estimator could suggest multiple Media-Server, Recording, and Signalling nodes. An Application Load Balancer is used for SSL offloading and for communication between CPS, end-user clients, and the Signalling nodes.
Some setups, depending on the load, could have just two servers, with the Recording service containers running on the Signalling node.
Here we describe the most common three-server setup in detail.
- Provision 3 Red Hat servers. Kindly refer to the Hardware estimator for configuration.
- The installation requires a non-root user with sudo access.
- After a fresh install of Red Hat, create a new user-id.
- To enable sudo for the new user ID on RHEL, add the ID to the wheel group (see the consolidated example after this list):
- Become root by running su.
- Run usermod -aG wheel your_user_id.
- Log out and back in again using the new ID.
- Assign Static IP addresses to Signalling and Recording Nodes.
- Provision and assign a Public IP address for the Media server. If using 1:1 NAT, the public IP should be mapped to the private IP of the Media server.
- Create a public DNS record for the Media Server. It is highly recommended to set up TURNS (connection over TLS on port 443), so also provision an SSL certificate for this FQDN.
- Open the required network ports between the three nodes. Kindly refer to the Network Architecture section.
- Set up an external Application Load Balancer as described in the previous section. Also, set up a public DNS record for the LB and provision an SSL certificate.
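As a consolidated example of the user setup above, assuming a hypothetical user ID conadmin on a fresh RHEL 8.6 install:
# Run as root (su -)
useradd conadmin
passwd conadmin
# Grant sudo access by adding the user to the wheel group
usermod -aG wheel conadmin
# Log out and log back in as conadmin before running the installer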
Copy the Installer zip file
-
Copy the NCC_Onprem_12_4_Installer.zip to the home directory of all the nodes.
For example, scp -i ssh_key.pem NCC_Onprem_12_4_Installer.zip my-user@webrtc.corp.example.com:/home/my-user/
-
Optionally, verify the downloaded signed zip using jarsigner. jarsigner is installed as part of Java.
- Verify whether Java is installed by using the command java -version. If Java is installed, the Java version is printed as output.
- If Java is not present on the machine, install Java:
sudo yum install java-1.8.0-openjdk-devel
- Now, copy the below command into the Terminal window and press Enter.
jarsigner -verify -verbose NCC_Onprem_12_4_Installer.zip
- The output of the verification contains:
- list of files inside the zip
- certificate information from Adobe for authentication
- the successful output message "jar is verified" or the unsuccessful "jar is not verified"
- If the certificate information is valid and the successful verification message is printed, you can use the zip contents and proceed to installation; otherwise, contact Adobe Support.
-
Extract the ZIP. Make sure that the files have appropriate permissions.
Use the following commands. Do not run any command with root/sudo access unless clearly specified.
unzip NCC_Onprem_12_4_Installer.zip
Traverse to the Installer parent directory, for example,
cd ~/ncc-onprem-installer/
sudo chmod +x ExternalDependencies/install.sh
sudo chmod +x MainInstall.sh
sudo chmod +x check-connectivity.sh
sudo chmod +x uninstall.sh
Note: When running the setup in an environment with no internet access or a locked-down environment, also run the below command:
sudo chmod +x ExternalDependencies/package-util.sh
-
Execute the dependency installation script.
- Step 1 (offline installation only): When running the setup in an environment with no internet access or a locked-down environment, run the below command to install external dependencies. If you have internet access, skip to step 2.
Traverse to the Installer parent directory. For example, ensure you are in the ~/ncc-onprem-installer/ directory.
Execute bash ExternalDependencies/package-util.sh --install. This installs all required external dependencies on the box.
This is needed once per node.
- Step 2: Traverse to the Installer parent directory. For example, ensure you are in the ~/ncc-onprem-installer/ directory.
Execute bash ExternalDependencies/install.sh. This installs all required external dependencies on the box.
This is needed once per node.
Installation Process
The following steps walk through installing the WebRTC environment (as an example) on four separate Red Hat instances: a Signalling node, a Recording node, a Media-Server node, and an ASR node. Pay close attention to the services that are configured in containers.conf for each node’s instructions. You need to configure each node with a specific set of services/containers in the configuration files.
** On a lab system, you can install all these ‘nodes’ (Signalling, Recording, Media, and ASR Server) on one Linux instance. In that case, in the containers.conf you would set ‘count=1’ for all servers/containers needed for your environment.
Signalling node
On the Signalling nodes, you would typically run the following services. Each service runs as a docker container.
- config (Configuration Service)
- cas (New Connect API Service)
- apigw (API Gateway/Router)
- liveswitch-gateway (WebRTC Gateway Service)
Signalling node(s) are typically in the private subnet, accessible to the Connect client via an external Load Balancer.
Procedure
-
Edit the Hosts file
On the Signalling node, the hosts file should be updated. You can use a text editor like nano or vi.
- Open the /etc/hosts file using nano or vi editor e.g. sudo vi /etc/hosts
- Add the following line at the end. Replace the <private-ip> with the private IP of the host. Kindly note the space between each word below.
<private-ip> cas.ncc.svc.cluster.local gw.fm.ncc.internal config.ncc.svc.cluster.local pi.ncc.svc.cluster.local auth.ncc.svc.cluster.local
192.168.1.100 cas.ncc.svc.cluster.local gw.fm.ncc.internal config.ncc.svc.cluster.local pi.ncc.svc.cluster.local auth.ncc.svc.cluster.local
On the ASR node, the hosts file should be updated. You can use a text editor like nano or vi.
- Open the /etc/hosts file using nano or vi editor e.g. sudo vi /etc/hosts
- Add the following line at the end. Replace the <private-ip> with the private IP of the host. Kindly note the space between each word below.
<private-ip> gw.fm.ncc.internal
Note: Replace <private-ip> with the private IP of the Signalling host.
-
Edit the Configuration files
- Edit the configuration file, present in, ncc-onprem-installer/Config/config.conf. The instructions to edit the file are added as comments in the config file. This file needs to be edited for each host separately.
- Now, edit ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file. This file needs to be edited for each host separately, depending on the services to be installed and the number of containers to be deployed.
- On a typical Signalling Node, you would install:
- casServer
- configService
- apiGateway
- gatewayServer
- redis
- postgres
- lb (Optional. Kindly refer to the Load Balancer section below for additional configuration steps.)
- So, set count=1 for all the services above and 0 for all others. (A quick way to review these counts is shown after this list.)
- Important: Kindly set restart=1 for configService, after making any change to Config.conf.
- In setups requiring multiple Signalling nodes, the Redis and Postgres databases are only installed on the first node.
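Before running the installer, a quick way to review which services are enabled on this node is to list the count entries in containers.conf (assuming, as the file's inline comments describe, each service block carries its own count setting):
grep -n "count" ~/ncc-onprem-installer/Config/containers.conf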
-
Execute the main installer script.
Switch to the Main installer directory cd ~/ncc-onprem-installer/
Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
Success Message:
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message:
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. Please refer to the troubleshooting section.
-
Verify Installation
- Health Check API
For the health check of the node, you can browse to the URL: http://<private_ip>:18443/health. A healthy response must be 200 OK {"apigw":"ok"}.
To verify from the Red Hat machine, you can use the curl command.
For example, curl -v http://172.31.56.203:18443/health
- Verify Container Status
In the Terminal window, run the command docker ps.
The output looks like the one below, and the STATUS should not be restarting for any of the containers.
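For a more compact view of container health, docker ps can be formatted to print only names and status; any container stuck in a Restarting state should be investigated with docker logs:
docker ps --format 'table {{.Names}}\t{{.Status}}'
# Inspect a problematic container, for example:
docker logs cas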
-
Use the bundled Nginx Load Balancer aka lb container
The installer is now bundled with an open-source Nginx container for SSL off-loading.
This should be used only for labs or small setups with only 1 Signalling node.
You should install the LB on the Signalling Node.
When using this bundled LB, the Signalling node should be in the public subnets (or DMZ) and assigned a public IP or mapped to a public IP via 1:1 NAT. Clients connect to the node directly via the LB URL on port 443.
In setups where access to Connect is not required from the Internet, the Signalling node should be in the private subnet but still accessible from the internal network on port 443.
Requirements:
- Public/Private DNS records depending on the use case
- SSL certificate for the DNS record
- Private key for the SSL certificate
Steps for using the Load Balancer:
- mkdir -p ~/connect/loadbalancer/certs
- Copy the SSL certificate (issued for the FQDN of the Signalling node) and the private key to the certs directory. Ensure that the names are cert.crt and key.key. (See the example after these steps.)
- In the config.conf file, add the FQDN in the hostnames>lb section and also in the hostnames>fqdn section as this would now be the entry point of the WebRTC cluster.
- In the container.conf file, update the count of lb to 1.
- Now, run bash MainInstall.sh.
If using a wildcard certificate with an AWS EC2 instance:
- Create an A record in Route53. For example, If the wildcard cert is *.example.com then create the A record as webrtc.example.com and map it to the Elastic IP of the EC2 instance.
- Copy the wildcard cert and key to the ~/connect/loadbalancer/certs directory. Ensure that the names are cert.crt and key.key.
- Add the FQDN. For example, webrtc.example.com in the hostnames>lb section and also in hostnames>fqdn.
- In the container.conf file, update the count of lb to 1.
- Now, run bash MainInstall.sh.
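For either case, copying the certificate and key into place might look like the following sketch (the source file names are placeholders for however your certificate authority delivered them):
mkdir -p ~/connect/loadbalancer/certs
# The bundled lb container expects these exact names
cp /path/to/fullchain.pem ~/connect/loadbalancer/certs/cert.crt
cp /path/to/private.pem ~/connect/loadbalancer/certs/key.key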
Media Server Node
On the Media node(s), you would run the following services:
- liveswitch-media-server (Media Server) – For enhanced AudioVideo
- liveswitch-sip-connector (SIP service) – For using SIP
Media nodes should be in the public subnets (or DMZ) and assigned a public IP or mapped to a public IP via 1:1 NAT. Clients connect to the Media node’s public IP directly.
The Enhanced AV (WebRTC) client uses the ICE method for connectivity and tries to establish an audio-video stream with the Media server over the below ports and protocols (a sample firewall configuration follows the list):
- UDP ports in the range 30000 – 65535 handle real-time media flow.
- UDP and TCP port 3478 handle STUN and TURN traffic to help clients stream through the firewalls.
- TLS over port 443 to enable TURNS to ensure high availability for streaming in restricted networks.
- C12 WebRTC-based Client tries all options (TCP and UDP) before it switches to TLS over port 443.
- UDP and TCP port 5060 handle SIP traffic for trunk/PBX registration and inbound/outbound calling. (Required if you are using SIP)
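If the Media node's ports are managed locally with firewalld (an assumption; many environments open these ports on a perimeter firewall instead), the rules might look like this sketch:
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=3478/tcp --add-port=3478/udp
sudo firewall-cmd --permanent --add-port=30000-65535/udp
# Only if SIP/telephony is used
sudo firewall-cmd --permanent --add-port=5060/tcp --add-port=5060/udp
sudo firewall-cmd --reload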
Procedure
-
Execute the main installer script.
Switch to the Main installer directory cd ~/ncc-onprem-installer/
Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
Success Message
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information.Please refer to troubleshooting section.
-
Verify Installation
In the Terminal window, run the command docker ps and ensure that the liveswitch-media-server container is running.
Recording Node
Recording nodes should run in a private network. On recording nodes, you can run one or more instances of:
- hcr (Recording container. The number of hcr containers decides how many simultaneous recordings can run.)
- recordingserver (Web server that serves recording files to CPS. One per Recording node.)
Recording nodes should be reachable on:
- TCP 80 from the local network, so that CPS can download the recordings.
- TCP 5000-5100 from the local network; individual recording containers are bound to host ports in that range.
- TCP 8090 from the local network.
Recording nodes should be able to reach the Media nodes on their public IP on the ports listed in the Media node section, and CPS on port 443, to make a successful recording.
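Before running the bundled check-connectivity.sh, a quick manual check from a Recording node might look like this (hostnames are placeholders for your Media node and CPS server):
# Media node must be reachable on its public address
nc -vz media-1.example.com 443
nc -vzu media-1.example.com 3478
# CPS must be reachable on 443
nc -vz connect.example.com 443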
Procedure
-
Edit the Configuration files
- Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf.
- Now, edit ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file.
- On a Recording Server node, you would install:
- recordingContainer
- recordingserver
- So, set count >= 1 for recordingContainer, 1 for recordingserver, and 0 for all the others.
-
Execute the main installer script
Switch to the main installer directory: cd ~/ncc-onprem-installer/.
Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
Success Message
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. Please refer to the troubleshooting section.
-
Verify Installation
In the Terminal window, run the command docker ps and ensure that the hcr and recordingserver containers are running.
ASR Node
ASR or Closed Captioning nodes should run in a private network. On ASR nodes, you can run one or more instances of asr (ASR container. The number of ASR containers decides how many meetings can have closed captioning running simultaneously).
ASR nodes should be reachable on TCP 6000-6100 from the local network; individual ASR containers are bound to host ports in that range.
Procedure
-
Edit the Configuration files.
-
Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf.
-
Now, edit ~/ncc-onprem-installer/Config/containers.conf.
-
On an ASR Server node, you would install:
asrContainer
So, set count >= 1 for asrContainer and 0 for all the others.
-
Execute the main installer script.
Switch to the main installer directory: cd ~/ncc-onprem-installer/.
Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
Success Message:
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information.
Prepare the environment
Copy the Installer zip file.
-
Copy the NCC_Onprem_12_4_Installer.zip to the home directory of all the nodes.
For example, scp -i ssh_key.pem NCC_Onprem_12_4_Installer.zip my-user@webrtc.corp.example.com:/home/my-user/
-
Optionally, verify the downloaded signed zip using jarsigner. jarsigner is installed as part of Java.
- Verify whether Java is installed by using the command java -version. If Java is installed, the Java version is printed as output.
- If Java is not present on the machine, install Java:
sudo yum install java-1.8.0-openjdk-devel
- Now, copy the below command into the Terminal window and press Enter.
jarsigner -verify -verbose NCC_Onprem_12_4_Installer.zip
- The output of the verification contains:
- list of files inside the zip
- certificate information from Adobe for authentication
- the successful output message "jar is verified" or the unsuccessful "jar is not verified"
- If the certificate information is valid and the successful verification message is printed, you can use the zip contents and proceed to installation; otherwise, contact Adobe Support.
-
Extract the ZIP. Make sure that the files have appropriate permissions.
Use the following commands. Do not run any command with root/sudo access unless clearly specified.
unzip NCC_Onprem_12_4_Installer.zip
Traverse to the Installer parent directory, for example,
cd ~/ncc-onprem-installer/
sudo chmod +x ExternalDependencies/install.sh
sudo chmod +x MainInstall.sh
sudo chmod +x check-connectivity.sh
sudo chmod +x uninstall.sh
-
Execute the dependency installation script.
Traverse to the Installer parent directory. For example, ensure you are in the ~/ncc-onprem-installer/ directory.
Execute bash ExternalDependencies/install.sh. This installs all required external dependencies on the box. This is needed once per node.
The upgrade process requires downtime.
Signalling node
On the Signalling nodes, you would typically run the following services. Each service runs as a docker container.
- config (Configuration Service)
- cas (New Connect API Service)
- apigw (API Gateway/Router)
- liveswitch-gateway (WebRTC Gateway Service)
Procedure
-
Edit the Configuration files
- Edit the configuration file, present in, ncc-onprem-installer/Config/config.conf. The instructions to edit the file are added as comments in the config file. This file needs to be edited for each host separately. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates.
- Now, edit ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file. This file needs to be edited for each host separately.
- When upgrading the Signalling Node you would typically install:
- casServer
- configService
- apiGateway
- gatewayServer
- Kindly refer to the file from the previous installation to identify the services to be installed and the number of containers to be deployed. Set count=1 for all the previously installed services and 0 for all others.
- Important: Kindly set restart=1 for configService, after making any change to Config.conf file or when upgrading using a newer installer version.
- Important: When upgrading, the shipped Redis and Postgres databases are not installed, so keep their count at 0 in the containers.conf file.
-
Switch to the Main installer directory cd ~/ncc-onprem-installer/
Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message.
Success Message:
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message:
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information.
-
Verify Installation
- Health Check API
For the health check of the node, you can browse to the URL http://<private_ip>:18443/health. A healthy response must be 200 OK {"apigw":"ok"}. To verify from the Red Hat machine, you can use the curl command.
For example, curl -v http://172.31.56.203:18443/health
Verify Container Status
In the Terminal window, run the command docker ps
The output looks like the one below, and the STATUS should not be restarting for any of the containers.
Media Server Node
On the Media node(s), you would run the following services:
- liveswitch-media-server (Media Server) – For enhanced AudioVideo
- liveswitch-sip-connector (SIP service) – For using SIP
Procedure
-
Edit the Configuration files.
-
Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates.
Note: If you are in a restricted environment, leave blank entries for externalStunUrls0 and externalStunUrls1 in the gatewayServer section. For example, externalStunUrls0="" , externalStunUrls1="".
-
Now, edit ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file.
-
On a Media Server Node you would upgrade:
- mediaServer
- sipServer (If SIP/Telephony is needed)
-
Kindly refer to the file from the previous installation to identify the services to be installed and the number of containers to be deployed. Set count=1 for all the previously installed services and 0 for all others.
-
In the mediaServer block update values for mediaNodePublicIP and mediaNodePublicFQDN. Kindly refer to the inline comments for more detail.
-
Execute the main installer script
Switch to the Main installer directory cd ~/ncc-onprem-installer/
Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message.
Success Message:
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message:
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. Please refer to the troubleshooting section.
-
Verify Installation
In the Terminal window, run the command docker ps and ensure that the liveswitch-media-server and liveswitch-sip-connector(optional install) containers are running.
Recording Node
Recording nodes should run in a private network. On recording nodes, you can run one or more instances of:
- hcr (Recording container. The number of hcr containers equals the number of concurrent recordings that can run.)
Procedure
-
Edit the Configuration files
- Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates.
- Now, edit ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file.
- On a Recording Server Node, you would upgrade:
- recordingContainer
- Kindly refer to the containers.conf from the previous installation directory and set the same value of the count for recordingContainer and 0 for all the other services.
-
Execute the main installer script.
Switch to the main installer directory: cd ~/ncc-onprem-installer/
Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message.
Success Message:
2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful.
Failure Message:
2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. Please refer to the troubleshooting section.
-
Verify Installation
In the Terminal window, run the command docker ps and ensure that the hcr containers are running.
Configure Enhanced Audio-Video in Adobe Connect Service (CPS)
The following steps need to be performed on all Adobe Connect 12 (CPS) servers:
- Add the below configuration to the custom.ini of all the Connect servers, present at <Installation_Dir>/Connect/custom.ini.
# comma separated list of CAS discovery URLs
# Possible values
WEBRTC_CAS_DISCOVERY_URLS=http://<Signalling Node IP >:18443/api/cps/ingest
WEBRTC_CAS_DISCOVERY_URLS=http://<Load Balancer URL>/api/cps/ingest
# Shared secret for CAS, used to sign requests to CAS. Enter the one that you set under the hmac section in the config.conf file on the Signalling node.
WEBRTC_CAS_SHARED_SECRET=CorrectHorseBatteryStaple
- Save the file and restart the connectpro service. (A filled-in example follows.)
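For example, in a POC setup without a load balancer, if the Signalling node's private IP is 192.168.1.100 (the same illustrative address used in the hosts-file example above), the entries could look like this:
WEBRTC_CAS_DISCOVERY_URLS=http://192.168.1.100:18443/api/cps/ingest
WEBRTC_CAS_SHARED_SECRET=CorrectHorseBatteryStaple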
Post Installation Process
-
Verify Network Connectivity
Run the Connectivity Test workflow to verify network connectivity between the different nodes.
- Login to the Signalling Node.
- Traverse to the Installer parent directory, for example,
cd ~/ncc-onprem-installer/
- Run the command bash check-connectivity.sh.
- The connectivity tool uses the provided values in the Config.conf file to run basic network tests like:
- HTTP requests between different services
- Network connectivity tests on the required TCP and UDP ports
- Name resolution
- Database connectivity
- This test checks the required connectivity between the nodes and services and shares the results to help diagnose network connectivity issues.
-
Test Adobe Connect Meeting with Enhanced Audio-Video
- Create a new meeting and ensure that the Enhanced Audio/Video option is selected.
- Join the meeting room and turn on the Mic and the Camera.
- If possible also join the room from a different PC or Mobile device and verify the audio-video exchange.
- Try Screen Sharing.
- Start recording, wait for 30 seconds, and then stop. Now verify that the recording is accessible.
Setup of Additional Features
Configure SIP
To configure SIP, you must access the FM admin web panel.
-
Visit: http://<your-load-balancer-address>:9090/admin and log in with the username and password configured in config.conf. If you are not using a Load Balancer then browse to http://<Signalling_Node-address>:9090/admin.
-
Visit APPLICATIONS, then adobeconnect, and scroll down to Channels. Configure the SIP Outbound Caller ID Configuration for both the default (*) and broadcast-* channel groups.
-
From the main menu go to DEPLOYMENTS and edit the SIP configuration.
Upload TURNS certificate to FM admin
To provide SSL termination for TURNS bindings, you can upload a certificate to the FM admin panel.
The certificate must be in the PFX format.
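If your certificate and private key were issued in PEM format, they can usually be converted to PFX with openssl before uploading (file names below are placeholders):
openssl pkcs12 -export -in cert.crt -inkey key.key -out turns-cert.pfx
# You are prompted to set an export password for the PFX file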
Follow the steps below:
-
Visit: http://<your-load-balancer-address>:9090/admin and log in with the username and password configured in config.conf. If you are not using a Load Balancer then browse to http://<Signalling_Node-address>:9090/admin.
-
Go to CERTIFICATES.
-
Under File Certificates, select the + sign and upload the certificate for the domain that you're using.
Configure the TURNS binding
This is needed so that we can have secure TURN communication.
If you've already uploaded the certificate as discussed in the previous section, perform the following steps:
-
Navigate to DEPLOYMENTS, then select Default.
-
Scroll down to the section, Advanced Configuration.
-
Configure the TURNS Bindings with the certificate that you uploaded earlier.
Uninstallation
-
Change to the installer root directory (the root directory contains uninstall.sh): cd ~/ncc-onprem-installer.
-
Execute, bash uninstall.sh.
-
To remove the Postgres and Redis database data directories from the Signalling node, run the below command.
sudo rm -rf ~/connect
This removes all NCC components and images from the system, but the external dependencies (Python, pip, Virtualenv, and Python packages) are not deleted.
Troubleshooting
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
This error message is displayed when the DOCKER_HOST environment variable is not properly set. Follow the steps below:
- Traverse to the Installer parent directory, for example:
cd ~/ncc-onprem-installer/
- Run the command, source ~/.bash_profile
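To confirm the variable is set before retrying, you can check it along with the Podman socket (a minimal check; the exact socket path depends on your Podman configuration):
echo $DOCKER_HOST
systemctl --user status podman.socket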
If the above steps don’t resolve the issue, then follow the steps below:
- Remove Podman, sudo yum remove podman-remote
- Log out of the current user and log in again.
- Traverse to the Installer parent directory, for example:
cd ~/ncc-onprem-installer/
- Execute bash ExternalDependencies/install.sh.
Do not run any command with root/sudo access unless clearly specified.
2023-03-22 03:27:12 : ERROR : Main : 48 : Failed to start Postgres.
This error message is displayed when the Postgres docker container is unable to mount the host volume due to a permission issue. Please follow the steps below:
- Traverse to the Installer parent directory, for example,
cd ~/ncc-onprem-installer/
- Run the command, docker load -i images/postgres.tar.gz
- Then run:
docker run -d \
--env POSTGRES_PASSWORD='PostPass1234' \
--volume ~/connect/postgres/data:/var/lib/postgresql/data \
--publish 5432:5432 \
--network host \
--user 0 \
--restart always \
--name postgres \
docker-connect-release.dr-uw2.adobeitc.com/ncc-onprem/postgres:12.4.1
- This will fix the permission issues on the host directory.
- Now, run the below command to remove the Postgres docker container.
docker rm -f postgres
- Finally, run the MainInstaller again.
bash MainInstall.sh
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0/migration/#basesettings-has-moved-to-pydantic-settings for more details.
Please follow the steps below:
- Traverse to the Installer parent directory, for example,
cd ~/ncc-onprem-installer/
- Run the command, source bin/activate
- Then run this command to uninstall pydantic:
python3 -m pip uninstall pydantic
- Run this command to re-install pydantic:
python3 -m pip install "pydantic==1.*"
- Finally, run the MainInstaller again.
bash MainInstall.sh
The media server needs 1:1 NAT, which means any traffic egressing from or ingressing to the media server over the Internet needs to use the public IP. Once you have configured 1:1 NAT for the media server on your router or firewall, the media server auto-discovers its public IP using the STUN protocol. Go to the admin interface of the WebRTC gateway and make sure that the media server can detect its public IP address.
https://adminwebrtc.yourdomain.com/admin/#/servers
Media Server is not reachable from Clients even after 1:1 NAT working fine
The WebRTC Client uses ICE and tries to establish an audio-video stream with the media server in the order below:
- UDP in the port range 35000 to 65535
- TCP over port 3478
- TLS over port 443
The C12 WebRTC-based client will try all options (TCP and UDP) before it switches to TLS over port 443.
Connect 12.4.1 is by default designed with the assumption that CPS and the WebRTC/New Connect Cluster (aka NCC) are in the same VLAN/VPC or broadcast domain. If you have them in different networks, ensure that you have proper layer-3 routing between CPS and NCC/WebRTC so that there is network connectivity in both directions.
To check the network connectivity between CPS and NCC, use CURL and Telnet.
CPS and NCC connectivity work over HTTPS only.
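For example, from the CPS server, a basic reachability and TLS handshake check against the WebRTC load balancer might look like this (the hostname is a placeholder):
# Verify the TLS handshake to the WebRTC LB
curl -vk https://webrtc.example.com/
# Verify raw TCP reachability on 443
telnet webrtc.example.com 443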
The error below appears in the FMGW container logs. The second Redis instance is set to run as a replica.
ERROR [FM.LiveSwitch.Signalling.Server.IdleThread][-] 2022-07-21T12:46:29.439Z Unexpected error in idle thread.
StackExchange.Redis.RedisServerException: READONLY You can't write against a read only replica.
Solution:
Open the configuration file corresponding to the Redis service and modify the value of the attribute "slave-read-only" to "no".
As a workaround, follow the steps below:
- cd ncc-onprem-installer.
- mv Images images.
- sed -i 's/Images\//images\//' Installer/utils/global_constants.py.
- Uninstall - Execute uninstall.sh.
- Reinstall - Execute MainInstall.sh.
Influx DB exceptions in logs can be ignored for now.
After rebooting an instance and restarting a docker container, you may see exceptions in the logs for the cas container.
To resolve this issue, restart the cas container. Type docker restart cas.