Adobe Connect 12.2 Installation Guide (Enhanced Audio-Video Setup)

Introduction

Adobe Connect uses the latest WebRTC framework for providing enhanced Audio and Video capabilities. This Enhanced Audio-Video setup typically runs on multiple Linux® nodes that have specific roles. There are signalling nodes, media nodes, and recording nodes. This setup also uses PostgreSQL and Redis databases that can be installed on one or on separate machines depending on the usage.

Each node is set up by copying the installer zip file to it, editing the configuration files, running the dependency installation script, and finally running the main installation script. These steps are described in the Installation section.

Prerequisites and System Requirements

Estimate the size for Enhanced Audio/Video servers/nodes 

Run the file session_participants_count.sql to estimate the size of the enhanced audio/video servers. The output of its SQL queries is the input to the calculator, an Excel file named Additional Enhanced A/V hardware estimator.xlsx.

The calculator helps estimate the number of VMs needed based on your past usage of Adobe Connect. The calculator needs the following set of inputs.

  • The number of server CPU cores and RAM. 
  • The attached SQL queries are used to determine the peak number of concurrent sessions held in the last 12 months and the average number of attendees. 

  • The estimated number of publishers in each session. A publisher is a meeting attendee (host, presenter, or participant) connecting their microphone (whether muted or unmuted) or their webcam (whether live or paused) in the meeting room.

Note:

Both the session_participants_count.sql and Additional Enhanced A/V hardware estimator.xlsx files are included in the Installer package and are not available elsewhere. 

Estimate FQDN and SSL certificate requirements

FQDN:

  • One public DNS record for external Application Load Balancer e.g. webrtc.example.com

  • One public DNS record for each Media Server e.g. media-1.example.com

SSL CERTIFICATES:

  • One SSL certificate for LB FQDN 

  • TURNS configuration is recommended for Media Servers, so one certificate is needed for each Media Server.

Understand Network Architecture

Port Opening Requirements

Source                | Destination               | Port        | Protocol | Use
Signalling Node       | Redis                     | 6379        | TCP      |
Signalling Node       | Postgres                  | 5432        | TCP      |
Signalling Node       | Recording Node            | 5000-5100   | TCP      |
Signalling Node       | SIP or Media-Server Node  | 5060        | UDP      |
Signalling Node       | SIP or Media-Server Node  | 5060        | TCP      | SIP signalling
Signalling Node       | CPS Server                | 443         | TCP      |
Recording Node        | Media-Server Node (*)     | 443         | TCP      | TURNS
Recording Node        | Media-Server Node (*)     | 3478        | UDP      | STUN/TURN
Recording Node        | Media-Server Node (*)     | 3478        | TCP      | STUN/TURN
Recording Node        | Media-Server Node (*)     | 30000-65535 | UDP      | SRTP (real-time media flow)
Recording Node        | CPS Server                | 443         | TCP      |
Recording Node        | Redis                     | 6379        | TCP      |
Recording Node        | Signalling Node           | 8090        | TCP      |
Recording Node        | WebRTC Load Balancer (**) | 443         | TCP      |
Media Server Node     | Redis                     | 6379        | TCP      |
Media Server Node     | Postgres                  | 5432        | TCP      |
Media Server Node     | stun.l.google.com         | 19302       | UDP      | Discovering the public IP and verifying the NAT type
Media Server Node     | stun1.l.google.com        | 19302       | UDP      | Discovering the public IP and verifying the NAT type
Media Server Node     | WebRTC Load Balancer (**) | 443         | TCP      | Registration on the WebRTC Gateway
CPS Server            | WebRTC Load Balancer      | 443         | TCP      |
CPS Server            | Recording Node            | 80          | TCP      | Downloading recording files
Users/Client/Internet | Media-Server Node (*)     | 443         | TCP      | TURNS (audio-video over TLS)
Users/Client/Internet | Media-Server Node (*)     | 3478        | UDP      | STUN/TURN
Users/Client/Internet | Media-Server Node (*)     | 3478        | TCP      | STUN/TURN
Users/Client/Internet | Media-Server Node (*)     | 30000-65535 | UDP      | SRTP (real-time media flow)
Users/Client/Internet | CPS Server                | 443         | TCP      |
Users/Client/Internet | WebRTC Load Balancer      | 443         | TCP      |
SIP Node              | Redis                     | 6379        | TCP      | In most cases, the SIP service runs on the Media Server Node
SIP Node              | Postgres                  | 5432        | TCP      |
SIP Node              | WebRTC Load Balancer      | 443         | TCP      |

* Media servers in most cases have a public IP address, but in restricted environments where access to Adobe Connect is not needed over the Internet, the ports should be opened for their private IP addresses.

** There is an external WebRTC Load Balancer for routing traffic to multiple Signalling nodes and for SSL offloading. The LB acts as an interface between the new WebRTC servers, CPS, and end users.

Load balancing is typically done via an external Application Load Balancer with the configuration below.

  • HTTPS port 443 to the Signalling nodes' HTTP port 18443: CPS and end users connect to this listener, and it is the entry point to the new WebRTC cluster.

  • HTTPS port 443 to the Signalling nodes' HTTP port 9090: used by the administrator to connect to the WebRTC admin web panel to configure TURNS and SIP/Telephony.
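For illustration only, a minimal Nginx-style reverse-proxy sketch of the first listener (SSL offloading to the Signalling nodes' port 18443). The FQDN, certificate paths, and upstream IP are placeholders, and the 9090 admin listener would be configured analogously; an AWS ALB or a hardware load balancer achieves the same result with listener and target rules.

    # /etc/nginx/conf.d/webrtc-lb.conf (illustrative sketch only; not the shipped configuration)
    upstream signalling_nodes {
        server 192.168.1.100:18443;                 # add one line per Signalling node
    }

    server {
        listen 443 ssl;
        server_name webrtc.example.com;             # LB FQDN (placeholder)

        ssl_certificate     /etc/nginx/certs/cert.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/key.key;

        location / {
            proxy_pass http://signalling_nodes;
            proxy_http_version 1.1;                 # allow WebSocket upgrades
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }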

System Requirements for Enhanced Audio/Video Servers

  • OS – Red Hat Enterprise Linux 64-bit version 8.6
  • Third-party libraries (Open source) that are installed on the servers:
    • Podman 4.2.0
    • Virtualenv 20.19.0
    • Python 3.9
    • Pip 21.3.1
    • Python libraries (pydantic, pyhocon, docker, redis, packaging, psycopg2-binary, python-json-logger, pystun3, coloredlogs, colorama)
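The bundled ExternalDependencies/install.sh installs all of the above for you. Purely as an illustration of what that dependency layer contains, a rough manual equivalent on RHEL 8 might look like the following; the yum package names are assumptions, so always prefer the shipped script.

    # Illustrative only - ExternalDependencies/install.sh is the supported way to install these.
    sudo yum install -y podman                      # container runtime
    sudo yum install -y python39 python39-pip       # assumed RHEL 8 AppStream package names
    python3.9 -m pip install --user virtualenv==20.19.0
    python3.9 -m pip install --user pydantic pyhocon docker redis packaging psycopg2-binary \
        python-json-logger pystun3 coloredlogs colorama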

Prepare the environment

A typical enhanced AV setup needs at least three Red Hat servers: one each for the Signalling, Media, and Recording nodes. Depending on the meeting load, the estimator may suggest multiple Media-Server, Recording, and Signalling nodes. An Application Load Balancer is used for SSL offloading and for communication between CPS and end-user clients and the Signalling nodes.

Depending on the load, some setups could have just two servers, with the Recording service containers running on the Signalling node.

Here we describe the most common three-server setup in detail.

  • Provision 3 Red Hat servers. Kindly refer to the Hardware estimator for configuration.
  • The installation requires a non-root user with sudo access.
    • After a fresh install of Red Hat, create a new user-id.
    • To enable sudo for the new user ID on RHEL, add the ID to the wheel group:
      • Become root by running su.
      • Run usermod -aG wheel your_user_id.
      • Log out and back in again using the new ID.
  • Assign Static IP addresses to Signalling and Recording Nodes.
  • Provision and assign a Public IP address for the Media server. If using 1:1 NAT, the public IP should be mapped to the private IP of the Media server.
  • Create a public DNS record for the Media Server. It is highly recommended to set up TURNS (connection over TLS 443), so also provision an SSL certificate for this FQDN. 
  • Open the required network ports between the three nodes. Kindly refer to the Network Architecture section. 
  • Setup an external Application Load Balancer as described in the previous section. Also, setup a Public DNS record for the LB and provision an SSL certificate. 
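As an illustration, on RHEL 8 with firewalld (the default firewall), opening the Signalling-node ports from the port matrix above could look like the sketch below. Adapt the port list to each node's role and to your own firewall tooling; host firewall commands are not part of the installer.

    # Example for a Signalling node (run as the sudo-enabled user); adjust per node role
    sudo firewall-cmd --permanent --add-port=18443/tcp   # LB -> Signalling (WebRTC entry point)
    sudo firewall-cmd --permanent --add-port=9090/tcp    # LB -> Signalling (admin web panel)
    sudo firewall-cmd --permanent --add-port=8090/tcp    # Recording Node -> Signalling Node
    sudo firewall-cmd --reload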

Copy the Installer zip file

  1. Copy the NCC_Onprem_12_2_Installer.zip to the home directory of all the nodes.
    For example, scp -i ssh_key.pem NCC_Onprem_12_2_Installer.zip my-user@webrtc.corp.example.com:/home/my-user/

  2. Optionally, verify the downloaded signed zip using jarsigner. Jarsigner is installed as part of Java.

    1. Verify that Java is installed by using the command java -version. If Java is installed, the Java version is printed as output.
    2. If Java is not present on the machine, install it:
      sudo yum install java-1.8.0-openjdk-devel 
    3. Now, copy the below command into the Terminal window and press Enter.
      jarsigner -verify -verbose NCC_Onprem_12_2_Installer.zip
    4. The output of the verification contains:
      • the list of files inside the zip
      • certificate information from Adobe for authentication
      • the success message "jar is verified" or the failure message "jar is not verified"
    5. If the certificate information is valid and the success message is printed, you can use the zip contents and proceed to installation; otherwise, contact Adobe Support.
  3. Extract the ZIP. Make sure that the files have appropriate permissions.
    Use the following commands. Do not run any command with root/sudo access unless clearly specified.

    unzip NCC_Onprem_12_2_Installer.zip
    Traverse to the Installer parent directory, for example,
    cd  ~/ncc-onprem-installer/
    sudo chmod +x ExternalDependencies/install.sh
    sudo chmod +x MainInstall.sh
    sudo chmod +x check-connectivity.sh
    sudo chmod +x uninstall.sh

    Note:

    In an environment with no internet access or a locked-down environment, also run the command below:

    sudo chmod +x ExternalDependencies/package-util.sh
  4. Execute the dependency installation script.

    1. In an environment with no internet access or a locked-down environment, run the command below to install external dependencies. If you have internet access, skip to step 2.
      Traverse to the Installer parent directory. For Example, ensure you are in ~/ncc-onprem-installer/ directory.
      Execute bash ExternalDependencies/package-util.sh --install. This installs all required external dependencies on the box.

      This is needed once per node.
    2. Traverse to the Installer parent directory. For Example, ensure you are in ~/ncc-onprem-installer/ directory.
      Execute bash ExternalDependencies/install.sh. This installs all required external dependencies on the box.

      This is needed once per node.

Installation Process

The following steps walk through installing the WebRTC environment (as an example) on three separate Red Hat instances: a Signalling node, a Recording node, and a Media-Server node. Pay close attention to the services that are configured in containers.conf in each node's instructions; you need to configure each node with a specific set of services/containers in the configuration files.

** On a lab system, you can install all these ‘nodes’ (Signalling, Recording, and Media Server) on one Linux instance.  In that case, in the containers.conf you would set ‘count=1’ for all servers/containers needed for your environment. 

Signalling node

On the Signalling nodes, you would typically run the following services. Each service runs as a docker container.

  • config (Configuration Service)
  • cas (New Connect API Service)
  • apigw (API Gateway/Router)
  • liveswitch-gateway (WebRTC Gateway Service)

Signalling node(s) are typically in a private subnet, accessible to the Connect client via an external load balancer. 

Procedure

  1. Edit the Hosts file 

    On the Signalling node, the hosts file should be updated. You can use a text editor like nano or vi. 

    1. Open the /etc/hosts file using nano or vi editor e.g. sudo vi /etc/hosts
    2. Add the following line at the end. Replace the <private-ip> with the private IP of the host. Kindly note the space between each word below. 

    <private-ip> cas.ncc.svc.cluster.local gw.fm.ncc.internal config.ncc.svc.cluster.local pi.ncc.svc.cluster.local auth.ncc.svc.cluster.local

    Note:

    Change the example IP value to match your environment.

    192.168.1.100 cas.ncc.svc.cluster.local gw.fm.ncc.internal config.ncc.svc.cluster.local pi.ncc.svc.cluster.local auth.ncc.svc.cluster.local

  2. Edit the Configuration files 

    1. Edit the configuration file, present in, ncc-onprem-installer/Config/config.conf. The instructions to edit the file are added as comments in the config file. This file needs to be edited for each host separately.  
    2. Now, edit ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file. This file needs to be edited for each host separately, depending on the services to be installed and the number of containers to be deployed. 
    3. On a typical Signalling Node, you would install:
      • casServer
      • configService
      • apiGateway
      • gatewayServer
      • redis
      • postgres
      • lb (Optional. Kindly refer to the Load Balancer section below for additional configuration steps.)
    4. So, set count=1 for all the services above and 0 for all others. 
    5. Important: Kindly set restart=1 for configService, after making any change to Config.conf. 
    6. In setups requiring multiple Signalling nodes, the Redis and Postgres databases are only installed on the first node. 
  3. Execute the main installer script .
    Switch to the Main installer directory cd ~/ncc-onprem-installer/ 
    Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
    Success Message: 
                   2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message:
                   2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. 

    Please refer to troubleshooting section.

  4. Verify Installation

    • Health Check API 
      For the health check of the node, you can browse to the URL: http://<private_ip>:18443/health. A healthy response must be 200 OK {"apigw":"ok"}.  

      To verify from the RedHat machine you can use the CURL command 
      For example, curl -v http://172.31.56.203:18443/health 
    • Verify Container Status 
      In the Terminal window, run the command docker ps.
      The output should look like the example below, and the STATUS should not show restarting for any of the containers.
    Container Status

  5. Use the bundled Nginx Load Balancer aka lb container

    The installer is now bundled with an open-source Nginx container for SSL off-loading. 

    This should be used only for labs or small setups with only 1 Signalling node. 

    You should install the LB on the Signalling Node.

    When using this bundled LB, the Signalling node should be in the public subnets (or DMZ) and assigned a public IP or mapped to a public IP via 1:1 NAT. Clients connect to the node directly via the LB URL on port 443.

    In setups where access to Connect is not required from the Internet, the Signalling node should be in a private subnet but still accessible from the internal network on port 443.

    Requirements:

    • Public/Private DNS records depending on the use case
    • SSL certificate for the DNS record
    • Private key for the SSL certificate

    Steps for using the Load Balancer: 

    1. mkdir -p ~/connect/loadbalancer/certs
    2. Copy the SSL certificate (issued for FQDN of the Signalling node) and the private key to the certs directory. Ensure that the names are cert.crt and key.key.
    3. In the config.conf file, add the FQDN in the hostnames>lb section and also in the hostnames>fqdn section as this would now be the entry point of the WebRTC cluster.
    4. In the containers.conf file, update the count of lb to 1.
    5. Now, run bash MainInstall.sh.

    If using a wildcard certificate with an AWS EC2 instance: 

    1. Create an A record in Route53. For example, If the wildcard cert is *.example.com then create the A record as webrtc.example.com and map it to the Elastic IP of the EC2 instance.
    2. Copy the wildcard cert and key to the ~/connect/loadbalancer/certs directory. Ensure that the names are cert.crt and key.key.
    3. Add the FQDN. For example, webrtc.example.com in the hostnames>lb section and also in hostnames>fqdn.
    4. In the containers.conf file, update the count of lb to 1.
    5. Now, run bash MainInstall.sh.
Media Server Node

On the Media node(s), you would run the following services: 

  • liveswitch-media-server (Media Server) – For enhanced AudioVideo
  • liveswitch-sip-connector (SIP service) – For using SIP

Media nodes should be in the public subnets (or DMZ) and assigned a public IP or mapped to a public IP via 1:1 NAT. Clients connect to the Media node’s public IP directly. 

The Enhanced AV (WebRTC) client uses the ICE method for connectivity and tries to establish an audio-video stream with the Media server over the below ports and protocol: 

  • UDP ports in the range 30000 – 65535 handle real-time media flow.
  • UDP and TCP port 3478 handle STUN and TURN traffic to help clients stream through the firewalls.
  • TLS over port 443 to enable TURNS to ensure high availability for streaming in restricted networks. 
  • C12 WebRTC-based Client tries all options (TCP and UDP) before it switches to TLS over port 443.
  • UDP and TCP port 5060 handle SIP traffic for trunk/PBX registration and inbound/outbound calling. (Required if you are using SIP)
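For illustration, with firewalld on RHEL 8 the public-facing ports above could be opened on the Media node as follows; open 443 only if you use TURNS, and 5060 only if you use SIP, and adapt to your own firewall tooling.

    sudo firewall-cmd --permanent --add-port=443/tcp                        # TURNS (audio-video over TLS)
    sudo firewall-cmd --permanent --add-port=3478/tcp --add-port=3478/udp   # STUN/TURN
    sudo firewall-cmd --permanent --add-port=30000-65535/udp                # SRTP (real-time media)
    sudo firewall-cmd --permanent --add-port=5060/tcp --add-port=5060/udp   # SIP (only if SIP is used)
    sudo firewall-cmd --reload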

Procedure

  1. Execute the main installer script.
    Switch to the Main installer directory cd ~/ncc-onprem-installer/
    Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message.
    Success Message
           2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message
                   2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information.    

    Please refer to troubleshooting section.

  2. Verify Installation 

    In the Terminal window, run the command docker ps and ensure that the liveswitch-media-server container is running. 

Recording Node 

Recording nodes should run in a private network. On recording nodes, you can run one or more instances of: 

  • hcr (Recording container. The number of hcr containers determines how many simultaneous recordings can run.) 
  • recordingserver (Web server that serves recording files to CPS. One per Recording node.) 

Recording nodes should be reachable on: 

  • TCP 80 from the local network, so that CPS can download the recordings.
  • TCP 5000-5100 from the local network, individual recording containers would be bound to host ports in that range. 
  • TCP 8090 from the local network. 

Recording nodes should be able to reach the Media nodes on their public IPs on the ports listed in the Media Server Node section, and CPS on port 443, to make a successful recording.
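The bundled check-connectivity.sh script (described under Post Installation Process) is the authoritative test; for a quick manual spot-check from a Recording node, commands like the following can be used. Host names and IPs are placeholders for your own environment.

    nc -vz media-1.example.com 443           # TURNS listener on the Media node
    nc -vz media-1.example.com 3478          # STUN/TURN over TCP
    curl -vk https://connect.example.com/    # CPS reachability on port 443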

Procedure

  1. Edit the Configuration files 

    1. Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf.
    2. Now, edit  ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file.
    3. On a Recording Server Node, you would install:
      • recordingContainer  
      • recordingserver  
    4. So, set count >= 1 for recordingContainer, 1 for recordingserver, and 0 for all the others.
  2. Execute the main installer script
    Switch to the Main installer directory cd ~/ncc-onprem-installer/.
    Execute the main installer script, bash MainInstall.sh. Wait for the confirmation message. 
    Success Message
                    2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message
                    2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information

    Please refer to troubleshooting section.

  3. Verify Installation 

    In the Terminal window, run the command docker ps and ensure that the hcr and recordingserver containers are running. 

Upgrading a system running 12.0, 12.1, or 12.1.5 

Prepare the environment 

Copy the Installer zip file.

  1. Copy the NCC_Onprem_12_2_Installer.zip to the home directory of all the nodes.
    For example, scp -i ssh_key.pem NCC_Onprem_12_2_Installer.zip my-user@webrtc.corp.example.com:/home/my-user/

  2. Optionally, verify the downloaded signed zip using jarsigner. Jarsigner is installed as part of Java.

    1. Verify that Java is installed by using the command java -version. If Java is installed, the Java version is printed as output.
    2. If Java is not present on the machine, install it:
      sudo yum install java-1.8.0-openjdk-devel 
    3. Now, copy the below command into the Terminal window and press Enter.
      jarsigner -verify -verbose NCC_Onprem_12_2_Installer.zip
    4. The output of the verification contains:
      • the list of files inside the zip
      • certificate information from Adobe for authentication
      • the success message "jar is verified" or the failure message "jar is not verified"
    5. If the certificate information is valid and the success message is printed, you can use the zip contents and proceed to installation; otherwise, contact Adobe Support.
  3. Extract the ZIP. Make sure that the files have appropriate permissions.
    Use the following commands. Do not run any command with root/sudo access unless clearly specified.

    unzip NCC_Onprem_12_2_Installer.zip
    Traverse to the Installer parent directory, for example,
    cd  ~/ncc-onprem-installer/
    sudo chmod +x ExternalDependencies/install.sh
    sudo chmod +x MainInstall.sh
    sudo chmod +x check-connectivity.sh
    sudo chmod +x uninstall.sh

  4. Execute the dependency installation script.

    Traverse to the Installer parent directory, for example,
    Ensure you are in  ~/ncc-onprem-installer/ directory
    Execute bash ExternalDependencies/install.sh. This installs all required external dependencies on the box. 

    This is needed once per node.

Note:

The upgrade process requires downtime.

Signalling node 

On the Signalling nodes, you would typically run the following services. Each service runs as a docker container. 

  • config (Configuration Service)
  • cas (New Connect API Service)
  • apigw (API Gateway/Router)
  • liveswitch-gateway (WebRTC Gateway Service)

Procedure

  1. Edit the Configuration files 

    1. Edit the configuration file, present in, ncc-onprem-installer/Config/config.conf. The instructions to edit the file are added as comments in the config file. This file needs to be edited for each host separately. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates. 
    2. Now, edit ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file. This file needs to be edited for each host separately.  
    3. When upgrading the Signalling Node you would typically install: 
      • casServer
      • configService
      • apiGateway
      • gatewayServer
    4. Kindly refer to the file from the previous installation to identify the services to be installed and the number of containers to be deployed. Set count=1 for all the previously installed services and 0 for all others. 
    5. Important: Kindly set restart=1 for configService, after making any change to Config.conf file or when upgrading using a newer installer version. 
    6. Important: When upgrading, the shipped Redis and Postgres databases are not to be installed again, so keep their count at 0 in the containers.conf file. 
  2. Switch to the Main installer directory cd ~/ncc-onprem-installer/ 
    Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message. 
    Success Message:
                    2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message:
                  2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. 
  3. Verify Installation 

    • Health Check API
      For the health check of the node, you can browse to the URL http://<private_ip>:18443/health. A healthy response must be 200 OK {"apigw":"ok"}. To verify from the RedHat machine you can use the CURL command.
      For example, curl -v http://172.31.56.203:18443/health
    • Verify Container Status
      In the Terminal window, run the command docker ps
      The output should look like the one below, and the STATUS should not show restarting for any of the containers.

    Container Status

Media Server Node

On the Media node(s), you would run the following services:

  • liveswitch-media-server (Media Server) – For enhanced AudioVideo
  • liveswitch-sip-connector (SIP service) – For using SIP

Procedure

  1. Edit the Configuration files.

    1. Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates.

      Note:

      If you are in a restricted environment, leave blank entries for externalStunUrls0 and externalStunUrls1 in the gatewayServer section. For example, externalStunUrls0="" , externalStunUrls1="". 

    2. Now, edit ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file.

    3. On a Media Server Node you would upgrade: 

      • mediaServer
      • sipServer (If SIP/Telephony is needed)
    4. Kindly refer to the file from the previous installation to identify the services to be installed and the number of containers to be deployed. Set count=1 for all the previously installed services and 0 for all others. 

    5. In the mediaServer block, update the values for mediaNodePublicIP and mediaNodePublicFQDN. Kindly refer to the inline comments for more detail.

  2. Execute the main installer script 
    Switch to the Main installer directory cd ~/ncc-onprem-installer/
    Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message.
    Success Message:
                    2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message:
                 2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information. 

    Please refer to troubleshooting section.

  3. Verify Installation

    In the Terminal window, run the command docker ps and ensure that the liveswitch-media-server and (if installed) liveswitch-sip-connector containers are running.

Recording Node

Recording nodes should run in a private network. On recording nodes, you can run one or more instances of: 
  • hcr (Recording container. The number of hcr containers equals the number of concurrent recordings that can run.)

Procedure

  1. Edit the Configuration files 

    1. Edit the config.conf file or copy it from the Signalling node, ~/ncc-onprem-installer/Config/config.conf. You can use the config.conf file from the previous installation as a reference. Kindly do not replace the new file with the old one. The newer version has important updates. 
    2. Now, edit ~/ncc-onprem-installer/Config/containers.conf. The instructions to edit the file are added as comments in the file. 
    3. On a Recording Server Node, you would upgrade: 
      • recordingContainer
    4. Kindly refer to the containers.conf from the previous installation directory and set the same value of the count for recordingContainer and 0 for all the other services. 
  2. Execute the main installer script.
    Switch to the Main installer directory cd ~/ncc-onprem-installer/ 
    Execute the main installer script, bash MainInstall.sh. The installer automatically upgrades the services to the latest version. Wait for the confirmation message. 
    Success Message:
                   2023-01-31 18:21:34,033 : INFO : Main : 55 : Installation successful. 
    Failure Message:
                2023-01-31 20:04:44,849 : ERROR : Main : 59 : Installation failed. Check the installer.log for further information.

    Please refer to troubleshooting section.

  3. Verify Installation

    In the Terminal window, run the command docker ps and ensure that the hcr containers are running.

Configure Enhanced Audio-Video in Adobe Connect Service (CPS)

The following steps need to be performed on all Adobe Connect 12 (CPS) server(s):

  1. Add the below configs in custom.ini of all the Connect server(s) present at <Installation_Dir>/Connect/custom.ini.
    # comma separated list of CAS discovery URLs
    # Possible values
    WEBRTC_CAS_DISCOVERY_URLS=http://<Signalling Node IP >:18443/api/cps/ingest
    WEBRTC_CAS_DISCOVERY_URLS=http://<Load Balancer URL>/api/cps/ingest
    # Shared secret for CAS, used to sign requests to CAS. Enter the one that you set under the hmac section in the config.conf file on the Signalling node.
    WEBRTC_CAS_SHARED_SECRET=CorrectHorseBatteryStaple
  2. Save the file and restart the connectpro service.

Post Installation Process

  1. Verify Network Connectivity

    Run the Connectivity Test workflow to verify network connectivity between different nodes 

    1. Login to the Signalling Node.
    2. Traverse to the Installer parent directory, for example,
      cd ~/ncc-onprem-installer/
    3. Run the command bash check-connectivity.sh.
    4. The connectivity tool uses the provided values in the Config.conf file to run basic network tests like: 
      • HTTP requests between different services 
      • Network connectivity tests on the required TCP and UDP ports
      • Name resolution
      • Database connectivity
    5. This test checks the required connectivity between the nodes and services and shares the results to help diagnose network connectivity issues.
  2. Test Adobe Connect Meeting with Enhanced Audio-Video 

    1. Create a new meeting and ensure that Enhanced Audio/Video option is selected.
    2. Join the meeting room and turn on the Mic and the Camera.
    3. If possible also join the room from a different PC or Mobile device and verify the audio-video exchange.
    4. Try Screen Sharing.    
    5. Start Recording. Wait for 30s and Stop. Now, verify if the recording is accessible.

Setup of Additional Features 

Configure SIP 

To configure SIP, you must access the FM admin web panel. 

  1. Visit: http://<your-load-balancer-address>:9090/admin and log in with the username and password configured in config.conf. If you are not using a Load Balancer then browse to http://<Signalling_Node-address>:9090/admin.
  2. Visit APPLICATIONS, then adobeconnect, and then scroll down to Channels.
    Configure the SIP Outbound Caller ID Configuration for both the default (*) and broadcast-* channel groups.
    SIP outbound caller ID configuration

  3. From the main menu go to DEPLOYMENTS and edit the SIP configuration. 

    SIP configuration

Upload TURNS certificate to FM admin 

To provide SSL termination for TURNS bindings, you can upload a certificate to the FM admin panel.

The certificate must be in the PFX format.

Follow the steps below:

  1. Visit: http://<your-load-balancer-address>:9090/admin and log in with the username and password configured in config.conf. If you are not using a Load Balancer then browse to http://<Signalling_Node-address>:9090/admin.

  2. Go to CERTIFICATES

  3. Under File Certificates, select the + sign and upload the certificate for the domain that you're using. 
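If your certificate and private key are in PEM format (for example, the cert.crt and key.key files used for the bundled load balancer), a standard openssl command can package them into a PFX file before upload. The file names below are placeholders.

    # Package a PEM certificate and key into PFX; omit -certfile if there is no intermediate chain
    openssl pkcs12 -export -in cert.crt -inkey key.key -certfile chain.crt -out turns-cert.pfx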

Configure the TURNS binding 

This is needed so that we can have secure TURN communication.

If you've already uploaded the certificate as discussed in the previous section, perform the following steps:

  1. Navigate to DEPLOYMENTS, then select Default

  2. Scroll down to the section, Advanced Configuration.

  3. Configure the TURNS Bindings with the certificate that you uploaded earlier.

    Configure the TURNS binding

Uninstallation

  1. Change to the installer root directory (the root directory contains uninstall.sh), cd ~/ncc-onprem-installer.

  2. Execute, bash uninstall.sh.

  3. To remove the Postgres and Redis database data directories from the Signalling node, run the command below.

    sudo rm -rf ~/connect  

Note:

This removes all NCC components and images from the system, but the external dependencies (Python, pip, Virtualenv, and Python packages) are not deleted. 

Troubleshooting

docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

This error message is displayed when the DOCKER_HOST environment variable is not set properly. Follow the steps below:

  1. Traverse to the Installer parent directory, for example:
    cd ~/ncc-onprem-installer/
  2. Run the command, source ~/.bash_profile

If the above steps don’t resolve the issue, then follow the steps below:

  1. Remove Podman, sudo yum remove podman-remote 
  2. Log out of the current user and log in again.
  3. Traverse to the Installer parent directory, for example:
    cd  ~/ncc-onprem-installer/
  4. Execute bash ExternalDependencies/install.sh.
Note:

Do not run any command with root/sudo access unless clearly specified.

2023-03-22 03:27:12 : ERROR : Main : 48 : Failed to start Postgres.

This error message is displayed when the Postgres docker container is unable to mount the host volume due to a permission issue. Follow the steps below:

  1. Traverse to the Installer parent directory, for example,
    cd ~/ncc-onprem-installer/
  2. Run the command, docker load -i images/postgres.tar.gz
  3. Then run:
    docker run -d \
    --env POSTGRES_PASSWORD='PostPass1234' \
    --volume  ~/connect/postgres/data:/var/lib/postgresql/data \
    --publish 5432:5432 \
    --network host \
    --user 0 \
    --restart always \
    --name postgres \
    docker-connect-release.dr-uw2.adobeitc.com/ncc-onprem/postgres:12.2
  4. This will fix the permission issues on the host directory.
  5. Now, run the below command to remove the Postgres docker container.
    docker rm -f postgres
  6. Finally, run the MainInstaller again.
    bash MainInstall.sh 

The media server needs 1:1 NAT, which means any traffic egressing from or ingressing to the media server from the Internet must use the public IP. Once you have configured 1:1 NAT for the media server on your router or firewall, the media server auto-discovers its public IP using the STUN protocol. Go to the admin interface of the WebRTC gateway and make sure that the media server can detect its public IP address. 

https://adminwebrtc.yourdomain.com/admin/#/servers
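Since pystun3 is one of the Python dependencies installed on the nodes, a quick STUN lookup run from the Media node can also help confirm that the public IP is discoverable. This is an illustrative, assumed usage of that library (module name stun, function get_ip_info), not part of the installer; verify it against your environment.

    # Assumed pystun3 usage; prints (NAT type, external IP, external port) if STUN succeeds
    python3 -c "import stun; print(stun.get_ip_info(stun_host='stun.l.google.com', stun_port=19302))"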

Media Server is not reachable from clients even though 1:1 NAT is working fine 

The WebRTC Client uses ICE and tries to establish an audio-video stream with the media server in the order below:

  • UDP in the port range 30000 to 65535 

  • TCP over port 3478

  • TLS over port 443 

  • The C12 WebRTC-based client tries all options (TCP and UDP) before it switches to TLS over port 443.

Connect 12.2 is by default designed with the assumption that CPS and WebRTC and New Connect Cluster (aka NCC) are in the same VLAN/VPC or broadcast domain. In case you've them in different networks, ensure that you've proper layer three routing between CPS and NCC/WebRTC to have network connectivity in both directions. 

To check the network connectivity between CPS and NCC, use curl and telnet. 

CPS and NCC connectivity works over HTTPS only.
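For example (host names are placeholders; the health endpoint and expected response are the same ones used in the Verify Installation steps):

    # From a CPS server to the WebRTC/NCC side
    curl -vk https://<webrtc-load-balancer-fqdn>/health    # expect 200 OK {"apigw":"ok"}
    telnet <webrtc-load-balancer-fqdn> 443
    # From an NCC node back to CPS
    curl -vk https://<connect-fqdn>/
    telnet <connect-fqdn> 443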

The error below appears in the FMGW container logs. The second Redis instance is set to run as a replica. 

ERROR [FM.LiveSwitch.Signalling.Server.IdleThread][-] 2022-07-21T12:46:29.439Z Unexpected error in idle thread. 
StackExchange.Redis.RedisServerException: READONLY You can't write against a read only replica. 

Solution:  

Open the configuration file corresponding to the Redis service and modify the value of the attribute "slave-read-only" to “no”. 
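For example, if the Redis container shipped with the installer is running under the name redis (an assumption; check docker ps for the actual name), the same setting can also be applied at runtime with redis-cli, although editing the configuration file is what persists it across restarts.

    # Runtime equivalent (assumes the container is named "redis"); does not survive a container restart
    docker exec redis redis-cli CONFIG SET slave-read-only no
    # Verify the current value
    docker exec redis redis-cli CONFIG GET slave-read-only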

As a workaround, follow the steps below:

  1. cd ncc-onprem-installer.
  2. mv Images images.
  3. sed -i 's/Images\//images\//' Installer/utils/global_constants.py.
  4. Uninstall - Execute uninstall.sh.
  5. Reinstall - Execute MainInstall.sh.       

Influx DB exceptions in logs can be ignored for now.

After rebooting an instance and restarting a docker container, you may see exceptions in the logs of the cas container.

To resolve this issue, restart the cas container. Type docker restart cas.
