
Management Console Availability watchdog

Starting from v2.4.5, Resilio Connect is high-availability ready. This article outlines how to deploy the Resilio Connect Management Console (MC) and configure jobs to ensure high availability (HA) of your whole setup.

Management Console

Setting up Management Console Availability watchdog includes the following steps:

  1. Install two Management Consoles; consider one to be primary and the other secondary. The secondary MC should NOT be running by default.

  2. Set up a DNS CNAME for your MC, and ensure your local DNS server points to the primary MC's address by default.

  3. Ensure you always put the DNS name of the MC into the Agents' configuration file.

  4. Edit the primary MC's configuration file fields (root).backup.path and (root).backup.schedule:

    - point the backup location to either a shared network drive / SAN, OR*

    - point the backup location to the secondary MC over any preferred network protocol*

    - set the backup frequency to 1 hour using the provided cron syntax.

  5. Set up the crossover procedure, as described below.

* When using an SMB-mapped location, always use the UNC path (like \\Computer\Backups) instead of the mapped drive letter. Escape the backslashes on Windows (\\\\Computer\\Backups). Also, ensure that the user running the Agent has read-write access to the share.
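For illustration only, assuming the MC configuration file is JSON and using a hypothetical share name, the two backup fields from step 4 might look like this (the cron expression 0 * * * * runs the backup at the top of every hour):

```json
{
  "backup": {
    "path": "\\\\BackupServer\\MCBackups",
    "schedule": "0 * * * *"
  }
}
```

Check your installed configuration file for the exact key names and nesting before copying this fragment.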


Management Console crossover procedure:

1. Install python3 and pip3 (the scripts are tested with Python 3.6, so python3 should point to python3.6) along with the requests and simplejson libraries.

On Linux:


sudo apt-get install python3 python3-pip
sudo python3 -m pip install requests simplejson


On Windows, download and install python3.6 and python3-pip from the official site, then install the requests and simplejson libraries:

python3 -m pip install requests simplejson
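To quickly verify that the environment matches what the scripts expect, a small check like the following can help (a sketch, not part of the official scripts):

```python
# Quick environment check: verifies the Python version and that the
# required libraries are importable, without actually importing them.
import importlib.util
import sys

required = ["requests", "simplejson"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

print("python:", sys.version.split()[0])
print("missing libraries:", ", ".join(missing) if missing else "none")
```

If any library is reported missing, re-run the pip command above before continuing.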

2. Download and unpack the archive with the scripts on the server with the secondary MC. On Windows, unpack them into the same directory where the secondary MC is installed (where node.exe and srvctrl.cmd are located).

Adjust config.json as necessary:

PRIMARY_SERVER_ADDRESS: URL (protocol + host + port) of the primary Management Console server

BACKUPS_PATH: path to the primary Management Console's backup location (must be a regular file system path)

SECONDARY_SERVER_DIR_PATH: path where the secondary Management Console (srvctl) is installed. On Windows, escape the backslashes: "C:\\foo\\bar"

SECONDARY_SERVER_START_TIMEOUT: secondary server start timeout in seconds

PULL_BACKUP_SCRIPT_TIMEOUT: timeout in seconds for PULL_BACKUP_SCRIPT execution

PULL_LATEST_BACKUP_INTERVAL: time interval in seconds between attempts to pull the latest backup

CUSTOM_PRE_SCRIPT: command to execute a custom script before starting the secondary server; optional

CUSTOM_POST_SCRIPT: command to execute a custom script after starting the secondary server; optional

CUSTOM_SCRIPT_TIMEOUT: timeout in seconds for custom script execution

NOTIFICATOR_SCRIPT: command to execute a script that sends notifications. Important: the script must accept the message to be sent as its first argument, for example /bin/bash /path/to/ "Some message"

MAX_RETRIES_NUMBER: number of retries of the @utils.retrier decorator when the decorated function returns False

FAILURE_RETRY_INTERVAL: sleep time in seconds until the next attempt in the @utils.retrier decorator

HEALTH_CHECK_INTERVAL: sleep time in seconds until the next health check

HTTP_REQUEST_TIMEOUT: timeout in seconds for HTTP requests

FAILURE_GRACE_PERIOD: time in seconds after a failure during which a False check status isn't returned

WINDOWS_APP_DATA_DIR_PATH: storage path of the secondary MC, for example C:\\Windows\\System32\\config\\systemprofile\\AppData\\Roaming\\resilio-connect-server (Windows only)
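A hypothetical config.json for a Linux deployment might look like the following. All addresses, paths, and timeout values are illustrative only, and the file shipped in the archive may contain additional fields (such as WINDOWS_APP_DATA_DIR_PATH on Windows):

```json
{
  "PRIMARY_SERVER_ADDRESS": "https://primary-mc.example.com:8443",
  "BACKUPS_PATH": "/mnt/mc-backups",
  "SECONDARY_SERVER_DIR_PATH": "/opt/resilio-connect-server",
  "SECONDARY_SERVER_START_TIMEOUT": 120,
  "PULL_BACKUP_SCRIPT_TIMEOUT": 600,
  "PULL_LATEST_BACKUP_INTERVAL": 3600,
  "CUSTOM_PRE_SCRIPT": "/bin/bash /opt/ha-scripts/pre.sh",
  "CUSTOM_POST_SCRIPT": "/bin/bash /opt/ha-scripts/post.sh",
  "CUSTOM_SCRIPT_TIMEOUT": 60,
  "NOTIFICATOR_SCRIPT": "/bin/bash /opt/ha-scripts/notificator.sh",
  "MAX_RETRIES_NUMBER": 3,
  "FAILURE_RETRY_INTERVAL": 10,
  "HEALTH_CHECK_INTERVAL": 30,
  "HTTP_REQUEST_TIMEOUT": 15,
  "FAILURE_GRACE_PERIOD": 60
}
```

Start from the config.json bundled in the archive and change only the values that differ in your environment.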


3. Place the code to be executed into the *.sh files if you're on Linux, or replace those files with .cmd scripts if you're on Windows. Be sure to adjust the corresponding parameters in the config.


The pre script is executed when the failover logic is triggered, but before launching the secondary Management Console; the post script is executed after the attempt to launch the secondary Management Console. The notificator script is executed to notify the administrator about important events: it should contain custom code that accepts a message as its first argument and sends that message somewhere.
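As an illustration, here is a minimal notificator sketch in Python (the file name notificator.py, the log path, and the logic are all hypothetical; in practice you would call it from, or port it into, the *.sh/.cmd file referenced by NOTIFICATOR_SCRIPT):

```python
#!/usr/bin/env python3
# Hypothetical notificator sketch: takes the message as the first argument
# and appends it with a timestamp to a log file. Replace the body of
# notify() with mail/webhook logic as needed.
import sys
import time

def notify(message, log_path="/var/log/mc-watchdog-notify.log"):
    line = time.strftime("%Y-%m-%d %H:%M:%S") + " " + message + "\n"
    with open(log_path, "a") as f:
        f.write(line)
    return line

if __name__ == "__main__" and len(sys.argv) > 1:
    notify(sys.argv[1])
```

Invoked as python3 notificator.py "Some message", it satisfies the contract above: the message arrives as the first argument.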

4. Navigate to the directory where the scripts are located and launch the main script:

python3 --config /path/to/config.json --logging debug

For example, if config.json is in the same folder as the script:

python3 --config config.json --logging debug

This is the main executable script: it monitors the primary MC and launches the secondary one if necessary. The --logging option prints events to stdout. Debug logging is not compulsory, but highly advisable.

5. Add this script to system startup.

The script polls the primary server at the interval set by REQUESTER_LOOP_INTERVAL. If valid JSON is not returned, the script jumps to the failover part and starts the secondary MC. After launching the secondary server, the script starts requesting the secondary server's status; if the secondary server doesn't respond, the script notifies the admin and exits.
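The failover flow described above can be sketched as follows (function names, messages, and defaults are illustrative, not the actual script's internals):

```python
# Sketch of the watchdog failover pass, assuming health checks are
# callables returning True/False. Not the real script; illustrative only.
import time

def retrier(check, max_retries, retry_interval, sleep=time.sleep):
    """Re-run a health check up to max_retries times (mirrors @utils.retrier)."""
    for _ in range(max_retries):
        if check():
            return True
        sleep(retry_interval)
    return False

def failover(check_primary, start_secondary, check_secondary, notify,
             max_retries=3, retry_interval=10, sleep=time.sleep):
    """One failover pass: if the primary stays down, start and verify the secondary."""
    if retrier(check_primary, max_retries, retry_interval, sleep):
        return "primary-ok"
    notify("Primary MC is down, starting secondary MC")
    start_secondary()
    if retrier(check_secondary, max_retries, retry_interval, sleep):
        notify("Secondary MC is up")
        return "secondary-ok"
    notify("Secondary MC failed to start, manual intervention required")
    return "failed"
```

The injectable check/notify callables make the logic easy to exercise without a live server; the real script wraps this pass in an endless health-check loop.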

Note: this script doesn't automate switching Agents over to the secondary MC. You need to implement the DNS switch logic yourself, for example in the pre or post script.

Tracker Server

  1. Install an additional Tracker Server on a machine other than the primary MC.
  2. Set the following Default Agent Profile parameters:
     - Custom trackers: the tracker server running on the primary MC and your spare tracker server.
     - Custom tracker mode: High Availability.

Agents & Jobs HA

While deploying Agents, ensure you deliver a sync.conf file that contains the DNS name of your MC, not its IP address.
Each job's HA depends on the type of job you want to make highly available.

Synchronization job

HA is available only if you have a limited (1-3) number of RW agents. As an HA measure, simply set up one more RW agent on a separate machine.

Consolidation job

Set up one more agent as a "Destination" on a separate machine. Please note that destination agents MUST NOT save data from sources to the same physical location (i.e. data cannot be saved to the same network or SAN location).

Distribution job

Set up one more agent as a "Destination" on a separate machine. Ensure it has the fastest connection possible to the source and has the data pre-seeded. Alternatively, the reserve destination machine may point to the same data location as the source (i.e. the same network / SAN location).
Note that in either case the reserve agent will still need some time to receive metadata from the source. The exact time depends on data size, CPU power, and storage speed. Once the reserve agent has the metadata, it'll keep seeding data to all other destinations even if the source dies.

