
Consolidation Job


The Consolidation job is designed to collect data from multiple agents onto storage servers. This job is useful, for example, for collecting periodic reports from workstations. It's reusable, so new files can later be added on the source agents. To re-use the job, simply launch it manually with the "Start" button. At that point the source agents will rescan the folder and upload the new files. Files that are already present will be re-hashed and compared across the agents; matching files won't be re-synced.
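For illustration only, here is a minimal sketch of the "re-hash and skip matching files" idea described above. It is not the product's actual code; the hash choice, paths and the destination_hashes mapping are assumptions made for the example.

import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def files_to_upload(source_dir: Path, destination_hashes: dict[str, str]) -> list[Path]:
    """Files whose hash differs from the copy the destination already holds.

    destination_hashes maps a relative path to the hash uploaded earlier;
    it stands in for the state the job itself keeps track of.
    """
    pending = []
    for path in source_dir.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(source_dir))
        if destination_hashes.get(rel) != file_hash(path):
            pending.append(path)
    return pending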

The destination server gets Read-Only access to the share, which means it cannot upload updated files back to the source agents.

Mobile devices cannot participate in a Consolidation job.

This is how a Consolidation job is created and configured:

1) Go to JOBS -> Create new job -> Consolidation.

2) Give it a name and description. Both are optional and the defaults can be used. Check the option to use SHA2 hashing if preferred. Note that with SHA2 enabled, Agents running Connect version 2.0 and older will not be able to participate in this job.

3) Choose the source groups. Agents belonging to these groups will be uploading their data. You can create a new group or use the one(s) you already have, provided the agents do not produce conflicts. When creating a group, you can specify a schedule by which agents will be uploading data. See here for more details on creating and managing groups.

4) Choose the destination groups. Agents from these groups will be collecting data from the source agents. You can create a new group or pick an existing one, provided the agents do not create conflicts.

5) On the Path step, specify the source and destination share paths.
When using the default Path Macro, the Agent will start distributing the directory the macro points to unless you specify a subfolder there.

Using Agent tags is also possible. See here for more details on using tags.

6) The Consolidation job supports post commands, which make it possible to run a command or script once the transfer is complete. The trigger determines the moment the script is executed (see the example script after the list of triggers below). This step can be skipped.
Before file-indexing begins: right after the job is created, the agent starts indexing files in the specified directory. A script triggered at this moment can "cook the files before serving": for example, re-arrange them, add or remove files, and the like, so that the folder is indexed and distributed exactly the way you need.
After an agent completes downloading: the script runs on each destination agent as soon as that agent finishes downloading. Other agents may still be downloading the files, so it's recommended not to remove or update the distributed files with this trigger.
After all agents complete downloading: as opposed to the trigger above, in this case the script runs only after all destination agents have finished downloading all the files.
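As an example, here is a small script that could be attached to the "After all agents complete downloading" trigger on the destination. It is only a sketch: the folder paths are assumptions, and you would adapt them to your own destination share. It writes a manifest of everything that was collected, which is handy for checking that periodic reports arrived in full.

#!/usr/bin/env python3
"""Example post-download script (sketch; the paths below are assumptions)."""
import csv
from datetime import datetime
from pathlib import Path

COLLECTED = Path("/srv/consolidation/incoming")   # destination share path (assumption)
MANIFEST = COLLECTED.parent / "manifest.csv"      # where to write the summary (assumption)

def main() -> None:
    rows = []
    for path in sorted(COLLECTED.rglob("*")):
        if path.is_file():
            stat = path.stat()
            rows.append((str(path.relative_to(COLLECTED)), stat.st_size,
                         datetime.fromtimestamp(stat.st_mtime).isoformat()))
    with MANIFEST.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["relative_path", "size_bytes", "modified"])
        writer.writerows(rows)

if __name__ == "__main__":
    main()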

7) The Job scheduler defines when the job will be launched:
Run now - right after creation;
Run at - at the preferred date/time (local agents' time);
Repeat manually - job won't start until manually launched with "Start" button;
Repeat hourly - the job will run every N hours; the interval is a whole number of hours.
Repeat daily - the job will run on a daily basis at the selected time; the interval is a whole number of days.
Repeat weekly - the job will run on the selected days of the week; additionally you can set the exact time of day.

In all these periodic schedules (hourly, daily and weekly) it's possible to select the start and end points for the job.
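To illustrate how a periodic schedule with start and end points behaves, here is a conceptual sketch (not the product's scheduler) that lists the run times an hourly repeat would produce between the chosen bounds. The dates and the 6-hour interval are made up for the example.

from datetime import datetime, timedelta

def hourly_runs(start: datetime, end: datetime, every_n_hours: int) -> list[datetime]:
    """Run times for a "Repeat hourly" schedule between a start and end point."""
    runs = []
    current = start
    while current <= end:
        runs.append(current)
        current += timedelta(hours=every_n_hours)
    return runs

# Example: every 6 hours between two dates.
print(hourly_runs(datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 2, 8, 0), 6))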

8) Review the job details and save.
Right after that, the source agents will index the specified source share and upload the data to the destination agents. Any specified script/command will be executed at its selected trigger.
