Running Multiple Instances
    Deploying Rocket.Chat with multiple instances improves scalability and reliability by distributing the workload across several Rocket.Chat instances as the user base grows. This document details the process of setting up a multi-instance Rocket.Chat deployment using Docker.

    While scaling out by adding more servers is recommended for high availability, you can better utilize your existing hardware by running multiple instances of the Rocket.Chat application (a Node.js/Meteor app) on your current host(s). This approach is particularly effective on multi-core machines. A good rule of thumb is to run N-1 Rocket.Chat instances, where N is the number of CPU cores. Running multiple instances on a single host requires a reverse proxy to manage traffic before it reaches your application.
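That rule of thumb can be sketched in shell (assuming a system where `getconf _NPROCESSORS_ONLN` reports the online core count, which holds on Linux and macOS):

```shell
# Suggest one Rocket.Chat instance per core, minus one core
# reserved for the OS and other processes (minimum of one).
CORES=$(getconf _NPROCESSORS_ONLN)
INSTANCES=$(( CORES > 1 ? CORES - 1 : 1 ))
echo "Suggested Rocket.Chat instances: $INSTANCES"
```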

    By following this document, you will be able to:

    • Set up multiple Rocket.Chat instances using Docker.

    • Configure a single MongoDB database to be shared among these instances.

    • Implement a reverse proxy server to manage traffic distribution.

    This document shows an example of how to run multiple Rocket.Chat instances. Your actual procedures may vary depending on your organization's policies and configurations.

    Prerequisites

    1. Servers:

       1. One server for MongoDB.

       2. One server for Nginx or your preferred reverse proxy.

       3. Two or more servers for deploying Rocket.Chat instances as per your needs.

    2. Docker must be installed and operational on all servers.

    3. Configure the servers to either allow all traffic from any source or restrict traffic to only allow communication between the MongoDB, Nginx, and Rocket.Chat servers.

    Deploying Rocket.Chat with multiple instances involves the following key phases:

    1. Set up the MongoDB database

    2. Deploy Rocket.Chat servers

    3. Configure a reverse proxy to manage traffic across the Rocket.Chat instances

    Set up MongoDB

    Configure MongoDB according to your preference and verify the setup. Alternatively, use the Docker command below to set up MongoDB on the MongoDB server:

    docker run -d --name mongodb \
      -e MONGODB_REPLICA_SET_MODE=primary \
      -e MONGODB_REPLICA_SET_NAME=rs0 \
      -e MONGODB_ADVERTISED_HOSTNAME=<mongodb-ip-address> \
      -e ALLOW_EMPTY_PASSWORD=yes \
      -p 27017:27017 \
      bitnami/mongodb:6.0

    Replace <mongodb-ip-address> with the IP address or hostname of your MongoDB server.
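If you prefer to manage MongoDB with Docker Compose as well, an equivalent compose.yml might look like the following sketch, which reuses the same image and environment variables as the command above:

```yaml
services:
  mongodb:
    image: bitnami/mongodb:6.0
    restart: always
    environment:
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_REPLICA_SET_NAME: rs0
      MONGODB_ADVERTISED_HOSTNAME: <mongodb-ip-address>
      ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "27017:27017"
```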

    Deploy Rocket.Chat servers

    This document outlines the steps to deploy two instances of Rocket.Chat using Docker, with both workspaces connected to the MongoDB database configured above.

    These instructions can be extended to deploy additional instances.

    1. Create a compose.yml file:
      On each Rocket.Chat server, create a compose.yml file with the following configuration:

      services:
        rocketchat:
          image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
          restart: always
          network_mode: host
          environment:
            MONGO_URL: "mongodb://<mongodb-ip>:27017/rocketchat?replicaSet=rs0"
            MONGO_OPLOG_URL: "mongodb://<mongodb-ip>:27017/local?replicaSet=rs0"
            ROOT_URL: http://<current-rocketchat-ip-or-hostname>:3000
            PORT: ${PORT:-3000}
            INSTANCE_IP: <current-rocketchat-ip-or-hostname>
          expose:
            - ${PORT:-3000}
          extra_hosts:
            - "rocketchat-01:<first-rocketchat-ip-or-hostname>"
            - "rocketchat-02:<second-rocketchat-ip-or-hostname>"
    2. Replace placeholders in each compose.yml file:

      1. <mongodb-ip>: The IP address or hostname of your MongoDB server.

      2. <current-rocketchat-ip-or-hostname>: The IP address or hostname of the server where this compose file is used to deploy Rocket.Chat.

      3. <first-rocketchat-ip-or-hostname> and <second-rocketchat-ip-or-hostname>: The IP addresses or hostnames of all the Rocket.Chat servers.

        Ensure that each server's compose.yml file includes its own IP or hostname as <current-rocketchat-ip-or-hostname> and lists the IP/hostname of every Rocket.Chat server under extra_hosts. The extra_hosts key facilitates inter-instance communication.
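For example, on the first Rocket.Chat server the relevant values might look like this excerpt (the 192.0.2.x addresses are hypothetical placeholders, not values from this guide):

```yaml
# Excerpt from compose.yml on the first server (example addresses only):
# MongoDB at 192.0.2.10, first server at 192.0.2.21, second at 192.0.2.22.
environment:
  MONGO_URL: "mongodb://192.0.2.10:27017/rocketchat?replicaSet=rs0"
  MONGO_OPLOG_URL: "mongodb://192.0.2.10:27017/local?replicaSet=rs0"
  ROOT_URL: http://192.0.2.21:3000
  PORT: 3000
  INSTANCE_IP: 192.0.2.21
extra_hosts:
  - "rocketchat-01:192.0.2.21"
  - "rocketchat-02:192.0.2.22"
```

On the second server, ROOT_URL and INSTANCE_IP would point at 192.0.2.22 instead, while extra_hosts stays identical on both.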

    3. Adjust MongoDB connection strings if using MongoDB Atlas:
      If you are using MongoDB Atlas, adjust the connection string in the MONGO_URL and MONGO_OPLOG_URL variables to the following format:

      mongodb://<user>:<pass>@host1:27017,host2:27017,host3:27017/<databaseName>?replicaSet=<replicaSet>&ssl=true&authSource=admin
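As a quick sanity check of that format, the following shell sketch assembles the string from its parts (the user, password, hosts, and replica set name below are hypothetical placeholders):

```shell
# Assemble an Atlas-style connection string from its components.
# All values here are example placeholders, not real credentials.
DB_USER="rcuser"
DB_PASS="secret"
HOSTS="host1:27017,host2:27017,host3:27017"
DB_NAME="rocketchat"
REPLICA_SET="atlas-rs0"
MONGO_URL="mongodb://${DB_USER}:${DB_PASS}@${HOSTS}/${DB_NAME}?replicaSet=${REPLICA_SET}&ssl=true&authSource=admin"
echo "$MONGO_URL"
```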
    4. Start the Rocket.Chat container:
      On each Rocket.Chat server, run the following command to start the container:

      docker compose up -d

    Extending to additional instances

    To deploy more than two Rocket.Chat instances:

    • Create additional compose.yml files for each new instance.

    • Add the IP address/hostname of the new instances to the extra_hosts section in all compose.yml files of existing instances.

    • Ensure each new compose.yml file includes the IP/hostname of all other Rocket.Chat servers.

    By following these steps, you can scale the deployment to include multiple Rocket.Chat instances, each connected to the same MongoDB database, and ready for configuration with a reverse proxy for load balancing.
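Concretely, adding a third instance means appending one entry to the extra_hosts section of every compose.yml, including the new instance's own file (the `rocketchat-03` name mirrors the pattern already used above):

```yaml
# extra_hosts on every Rocket.Chat server after adding a third instance
extra_hosts:
  - "rocketchat-01:<first-rocketchat-ip-or-hostname>"
  - "rocketchat-02:<second-rocketchat-ip-or-hostname>"
  - "rocketchat-03:<third-rocketchat-ip-or-hostname>"
```

The new server also needs to be added to the reverse proxy configuration, which is covered in the next section.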

    Configure reverse proxy

    Now that both instances are running, set up a reverse proxy to distribute traffic and balance the load among the Rocket.Chat instances. A reverse proxy also lets you access the workspace from a single host.

    In this document, we will use Nginx as the reverse proxy. To keep this example simple, TLS termination is not considered.

    1. On the Nginx server, create a compose.yml file:

      services:
        nginx:
          image: nginx
          network_mode: host
          container_name: nginx
          restart: unless-stopped
          volumes:
            - ./nginx.conf:/etc/nginx/conf.d/default.conf
      

    2. Set up a backend and define all the Rocket.Chat workspaces that Nginx will proxy requests to under the upstream block. Create an nginx.conf file with the following content:

      upstream backend {
          server <first-rocketchat-server>:3000;
          server <second-rocketchat-server>:3000;
      }
      
      server {
          listen 80;
          server_name <nginx-ip-or-hostname>;
      
          location / {
              proxy_pass http://backend;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
              proxy_set_header Host $http_host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto http;
              proxy_set_header X-Nginx-Proxy true;
              proxy_redirect off;
          }
      }
      

      Update the <nginx-ip-or-hostname> with the public IP address or hostname of the Nginx server.

      The Rocket.Chat servers added to the backend upstream correspond to the IP/hostname defined in the ROOT_URL variable of the compose file used to deploy each Rocket.Chat instance (port 3000 by default).

    3. Start Nginx using Docker Compose:

      docker compose up -d

    Accessing Rocket.Chat

    Once the reverse proxy server is running, visit the public IP address or hostname of the Nginx server on your browser. You can now explore your Rocket.Chat workspace and invite other users.

    To verify the connected instances running in your Rocket.Chat workspace, navigate to Administration > Workspace and click the Instances button under Deployment. The Rocket.Chat instances connected to your workspace are listed there.

    To access more configuration details for a specific instance, click the desired instance in the list.

    Ensure nodes can communicate

    To maintain optimal functionality in a multi-instance Rocket.Chat deployment, instances must communicate directly with one another. This direct communication is crucial for transmitting ephemeral events, like user typing indicators, between instances.

    To enable this peer-to-peer communication, set the INSTANCE_IP environment variable for each Rocket.Chat instance. This variable defines the IP address that is accessible by other instances. In a Docker environment where host networking isn't used, INSTANCE_IP should be set to the host's IP address rather than the Docker-assigned IP, as the latter is only reachable within the same Docker network on the host.

    Additionally, the TCP_PORT environment variable can be configured to specify a port for event communication. By default, Rocket.Chat selects a random port for each instance, but you can define TCP_PORT when dealing with firewalls or Docker environments to ensure the correct port is opened and accessible.
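For instance, pinning the inter-instance port in a bridged-network setup might look like this compose excerpt (the host IP 192.0.2.21 and port 4000 are assumed example values):

```yaml
# Excerpt: fix the inter-instance event port so firewall rules can allow it.
# 192.0.2.21 and 4000 are hypothetical placeholders.
environment:
  INSTANCE_IP: 192.0.2.21   # host IP reachable by the other instances
  TCP_PORT: 4000            # fixed port for inter-instance events
ports:
  - "4000:4000"             # publish the port when not using host networking
```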

    Both INSTANCE_IP and TCP_PORT are stored in the instances collection, which Rocket.Chat instances use to monitor and maintain connectivity. This internal registry allows each instance to discover new peers and establish TCP connections as new instances are added. If an instance fails to update its record, the record eventually expires and is removed from the registry, ensuring only active connections are maintained.

    Verify your database

    When using MongoDB Atlas or deploying your own database, verify the database configuration, as the database is a critical component of this setup. Ensure that you are running a replica set for the following reasons:

    • Database reliability: Replication ensures your data is backed up and another node is available if the primary node fails.

    • Oplog tailing: Enabling oplog with a replica set allows MongoDB to publish events for data synchronization across nodes. Rocket.Chat relies on this to monitor database events. For instance, if a message is sent on Instance 1 and you are connected to Instance 2, oplog tailing ensures Instance 2 receives the message insert event, displaying the new message.

