
About Jobs

A job represents the execution of a process on a UiPath Robot, and it can run in either attended or unattended mode. You cannot launch jobs from Orchestrator on attended robots except for debugging purposes, and attended jobs cannot run under a locked screen.

Jobs can be launched from the following places:

  • Attended mode - UiPath Assistant, Robot Command Line Interface
  • Unattended mode - Automations Page > Jobs, Automations Page > Triggers, Automations Page > Processes

Three locations in Orchestrator enable you to configure and start a job: the Jobs, Triggers, and Processes pages. The Jobs page is the job control center, where you can monitor jobs that have already been launched, view their details and logs, and stop, kill, resume, or restart them.
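The same operations the Jobs page exposes are also available through the Orchestrator REST API. The following is a minimal sketch, assuming you already have an Orchestrator URL, a bearer token, and a folder ID; the placeholder values are hypothetical, and you should verify the endpoints against your instance's API reference (Swagger) before relying on them.

```python
# Minimal sketch: list running jobs in a folder and soft-stop one of them.
# ORCH_URL, TOKEN, and FOLDER_ID are placeholders for your own tenant values.
import requests

ORCH_URL = "https://your-orchestrator.example.com"   # hypothetical URL
TOKEN = "<bearer token>"
FOLDER_ID = "1234"                                    # Organization Unit Id

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "X-UIPATH-OrganizationUnitId": FOLDER_ID,         # scopes the call to a folder
}

# List jobs that are currently running.
resp = requests.get(
    f"{ORCH_URL}/odata/Jobs",
    params={"$filter": "State eq 'Running'"},
    headers=headers,
)
resp.raise_for_status()
jobs = resp.json()["value"]

# Soft-stop the first running job (use "Kill" to terminate it immediately).
if jobs:
    job_id = jobs[0]["Id"]
    requests.post(
        f"{ORCH_URL}/odata/Jobs({job_id})/UiPath.Server.Configuration.OData.StopJob",
        json={"strategy": "SoftStop"},
        headers=headers,
    ).raise_for_status()
```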

 

Job Execution


Job Sources

There are three possible job sources, depending on the job launching mechanism:

  • Manual - the job has been started and configured from the Jobs/Triggers/Processes pages, using the Start button.
  • Agent - the job has been started in attended mode from the UiPath Robot tray, UiPath Assistant, or the command line.
  • [Trigger_Name] - the job has been launched through a trigger, used for pre-planned job execution.

 

User-Machine Mappings

User-machine mappings enable you to tie unattended usage under particular users to specific machine templates. This gives you granular control over the execution targets of your automations. User-machine mappings can be tenant-based (not tied to a specific folder) or folder-based (tied to a specific folder).

Enabling user-machine mappings

  1. At the tenant level, navigate to Settings > General.
  2. In the Modern Folders section, enable/disable the corresponding toggle.


📘

Note

You need View and Edit on Machines, or View and Create on Machines, to change user-machine mappings at the tenant level.

  • Tenant Mappings - You can configure tenant user-machine mappings on the Machines page in Orchestrator by linking the users who usually log in on specific host machines to the associated machine templates. You can do this on the User-Machine Mappings tab when creating a template or editing an existing one. The resulting user-machine mappings become the only supported pairs for execution.

Users are depicted on the User-Machine Mappings page using the Windows credentials (Domain\Username) of their unattended robot if one has been created.


📘

Note

You need Edit permissions on Folders OR Edit on Subfolders to change user-machine mappings at the folder level.

  • Folder Mappings - You can configure user-machine mappings on a per-folder basis, meaning that, in a particular folder, on a machine template, you can limit the execution to specific users only. The resulting user-machine mappings then become the only supported pairs for execution in that folder.

Folder mappings act as subsets of template mappings and allow you to achieve the utmost level of granularity possible. Not providing folder-level mappings leaves template-level mappings in place as the defaults.


📘

Known Issue

If you disable the User-Machine Mappings feature, existing user-machine mappings are still applied when running jobs, even though they are no longer visible in the UI.

📘

Note

All changes made to tenant mappings are reflected at a folder level as follows:

  • Inherit from tenant - all user configuration changes made to tenant mappings are reflected at the folder level; adding users to or removing them from tenant mappings adds or removes them from folder mappings as well.
  • Specific user-machine mappings for this folder - adding a user to tenant mappings does not make them available for folder mappings; the user is excluded from folder mappings. Removing a user from tenant mappings removes them at the folder level as well.

🚧

Important

Users added to a folder after user-machine mappings have been configured are not added to the existing mappings and therefore cannot use that machine. Make sure to manually map these users to the machines so they can use them.

Execution Targets

Depending on the mechanism used for launching jobs in Orchestrator, you can choose and configure a job allocation strategy and, implicitly, an execution target. This article describes the allocation strategies and execution targets available when launching jobs from the Jobs page.

Learn about execution targets for triggers.

📘

Note

If the Robot becomes unresponsive (the robot machine is down, or the Robot Service crashes) during job execution, after reconnecting, it restarts the execution of the jobs that were running during the crash.


1. Allocate Dynamically

Dynamic allocation with no explicit user and machine selection allows you to execute a foreground process multiple times under the user and machine that become available first. Background processes get executed under any user, regardless of whether that user is busy, as long as you have sufficient runtimes.

Using the Allocate Dynamically option, you can execute a process up to 10,000 times in one job.
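For reference, the equivalent configuration can be sketched against the StartJobs API endpoint. This is a minimal example under the assumption of a modern folder, a valid release key, and a bearer token; all values are placeholders, and field names should be verified against your Orchestrator version's API reference.

```python
# Minimal sketch: start the same process three times with dynamic allocation.
# RELEASE_KEY identifies the deployed process; values here are placeholders.
import requests

ORCH_URL = "https://your-orchestrator.example.com"
headers = {
    "Authorization": "Bearer <token>",
    "X-UIPATH-OrganizationUnitId": "1234",
}

payload = {
    "startInfo": {
        "ReleaseKey": "<release key of the process>",
        "Strategy": "ModernJobsCount",   # dynamic allocation in modern folders
        "JobsCount": 3,                  # one pending job entry per execution
        "InputArguments": "{}",
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    json=payload,
    headers=headers,
)
resp.raise_for_status()
```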

2. User

The process is executed under a specific user. Only specifying the user results in Orchestrator allocating the machine dynamically. Specifying both the user and the machine means the job launches on that very user-machine pair.

3. Machine

The process is executed on one of the host machines attached to the selected machine template. Specifying the template displays an additional Connected Machines option, allowing you to select a specific host machine from the pool of connected host machines. Only specifying the machine results in Orchestrator allocating the user dynamically. Specifying both the user and the machine means the job launches on that very user-machine pair.

Make sure that runtimes matching the job type are allocated to the associated machine template. Only connected host machines associated with the active folder are displayed.
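To pin the execution target instead of allocating it dynamically, the same StartJobs call can carry an explicit user-machine pair. The sketch below is illustrative only: the MachineRobots field is an assumed name for expressing such pairs in modern folders, so confirm the exact startInfo schema in your instance's Swagger before use.

```python
# Minimal sketch: start a job on an explicit user-machine pair instead of
# letting Orchestrator allocate dynamically. All IDs are placeholders.
import requests

ORCH_URL = "https://your-orchestrator.example.com"
headers = {"Authorization": "Bearer <token>", "X-UIPATH-OrganizationUnitId": "1234"}

payload = {
    "startInfo": {
        "ReleaseKey": "<release key of the process>",
        "Strategy": "ModernJobsCount",
        "JobsCount": 1,
        # Assumed field: pins the job to one user (robot) on one machine;
        # omit it, or leave one side out, to let Orchestrator allocate that side.
        "MachineRobots": [{"MachineId": 42, "RobotId": 7}],
        "InputArguments": "{}",
    }
}

requests.post(
    f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    json=payload,
    headers=headers,
).raise_for_status()
```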

 

❗️

Dynamic Allocation Usage Convention

On each host machine connected through a machine template, you need to provision a Windows user for every user assigned to the folders that the machine template is assigned to.
Say you connected a server to Orchestrator using the key generated by the machine template FinanceT. That machine template is assigned to the FinanceExecution and FinanceHR folders, which also have 6 users assigned. Those 6 users need to be provisioned as Windows users on the server.

If you configure a job to execute the same process multiple times, a job entry is created for each execution. The jobs are ordered based on their priority and creation time, with higher priority, older jobs being placed first in line. As soon as a robot becomes available, it executes the next job in line. Until then, the jobs remain in a pending state.

Setup

  • 1 folder
  • 1 machine template with two runtimes
  • 2 users: john.smith and petri.ota
  • 2 processes that require user interaction: P1, which adds items to a queue, and P2, which processes the items in that queue
  • The machine template and the users must be associated to the folder containing the processes.

Desired Outcome

  • P1 is executed with a high priority by anyone.
  • P2 is executed with a low priority by petri.ota.

Required Job Configuration

  • Start a job using P1, do not assign it to any particular user, and set the priority to High.
  • Start a job using P2, assign it to petri.ota, and set the priority to Low.
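If you prefer to start these two jobs from the API rather than the UI, they could look roughly like the sketch below. The JobPriority field is an assumed name used purely for illustration; depending on your Orchestrator version, the priority may only be configurable at the process or trigger level, so verify the startInfo schema before relying on this.

```python
# Illustrative sketch of the two job configurations above via the REST API.
# "JobPriority" is an ASSUMED field name; confirm the actual schema in your
# Orchestrator's API reference. All keys and tokens are placeholders.
import requests

ORCH_URL = "https://your-orchestrator.example.com"
headers = {"Authorization": "Bearer <token>", "X-UIPATH-OrganizationUnitId": "1234"}
START = f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs"

# Job1: P1, dynamically allocated to whoever is available first, high priority.
job1 = {"startInfo": {
    "ReleaseKey": "<release key of P1>",
    "Strategy": "ModernJobsCount",
    "JobsCount": 1,
    "JobPriority": "High",   # assumed field name
}}

# Job2: P2, low priority; add the user-targeting field your version supports
# to pin execution to petri.ota.
job2 = {"startInfo": {
    "ReleaseKey": "<release key of P2>",
    "Strategy": "ModernJobsCount",
    "JobsCount": 1,
    "JobPriority": "Low",    # assumed field name
}}

for payload in (job1, job2):
    requests.post(START, json=payload, headers=headers).raise_for_status()
```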

 

Execution Priority


You can control which job has precedence over other competing jobs through the Job Priority field either when deploying the process or when configuring a job/trigger for that process. A job can have one of the following priorities: Low (↓), Normal (→), High (↑).

Starting a Job Manually

The priority is inherited from where it was initially configured. You can either leave it as it is or change it.

  • Automations Page > Jobs - inherits the priority set at the process level.
  • Automations Page > Triggers - inherits the priority set at the trigger level. If the trigger itself inherited the priority from the process level, that value is used.
  • Automations Page > Processes - uses the priority set for that process.

If you configure a job to execute the same process multiple times, a job entry is created for each execution. The jobs are ordered based on their priority and creation time, with higher priority, older jobs being placed first in line. As soon as a robot becomes available, it executes the next job in line. Until then, the jobs remain in a pending state.
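The ordering rule can be pictured as a simple sort over the pending job entries: higher priority first and, within the same priority, older jobs first. The snippet below is a self-contained conceptual illustration of that rule, not Orchestrator code.

```python
# Conceptual illustration of how pending jobs are ordered: by priority
# (High before Normal before Low), then by creation time (older first).
from datetime import datetime

PRIORITY_RANK = {"High": 0, "Normal": 1, "Low": 2}

pending_jobs = [
    {"id": 1, "priority": "Normal", "created": datetime(2023, 5, 1, 9, 0)},
    {"id": 2, "priority": "High",   "created": datetime(2023, 5, 1, 9, 5)},
    {"id": 3, "priority": "High",   "created": datetime(2023, 5, 1, 8, 55)},
]

queue = sorted(pending_jobs, key=lambda j: (PRIORITY_RANK[j["priority"]], j["created"]))
# A robot that becomes available picks the first entry in line.
print([j["id"] for j in queue])   # [3, 2, 1]
```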

Starting a Job Through a Trigger

The priority is set by default to Inherited, meaning it inherits the value at the process level. Choosing a process automatically updates the arrow icon to illustrate what value has been set at the process level. Any jobs launched by the trigger have the priority set at the trigger level. If the default Inherited is kept, the jobs are launched with the priority at the process level.
Any subsequent changes made at the process level are propagated to the trigger, and the jobs created through it implicitly.

📘

Note

If you start a job that requires user intervention on multiple Robots on the same machine, and that machine does not run Windows Server, the selected process is executed only by the first Robot, while the rest fail. An instance for each of these executions is created and displayed on the Jobs page.

Process Types


📘

Note

By default, any process can be edited while having associated running or pending jobs.

  • Running jobs associated with a modified process use the initial version of the process. The updated version is used for newly created jobs or at the next occurrence of the trigger.
  • Pending jobs associated with a modified process use the updated version.

There are two types of processes, according to the user interface requirements:

  • Background Process - Does not require a user interface or user intervention to be executed. For this reason, you can execute multiple such jobs simultaneously in unattended mode under the same user. Each execution requires an Unattended/NonProduction license. Background processes run in Session 0 when started in unattended mode.
  • Requires User Interface - Requires a user interface because the execution needs the UI to be generated, or the process contains interactive activities, such as Click. You can only execute one such process under a user at a time.

Click here for details about processes in Orchestrator.
The same user can execute multiple background processes and a single UI-requiring process simultaneously.

High-Density Robots


If you start a job on multiple High-Density Robots from the same Windows Server machine, the selected process is executed by each specified Robot at the same time. An instance for each of these executions is created and displayed on the Jobs page.

If you are using High-Density Robots and did not enable RDP on that machine, each time you start a job, the following error is displayed: “A specified logon session does not exist. It may already have been terminated.” To see how to set up your machine for High-Density Robots, please see the About Setting Up Windows Server for High-Density Robots page.

Long-Running Workflows


📘

Note

This feature is only supported for unattended environments. Starting a long-running process on an Attended Robot is not supported as the job cannot be killed from Orchestrator nor can it be effectively resumed.

Processes that require logical fragmentation or human intervention (validations, approvals, exception handling), such as invoice processing and performance reviews, are handled with a set of instruments in the UiPath suite: a dedicated project template in Studio called Orchestration Process, and actions and resource allocation capabilities in Orchestrator.

Broadly, you configure your workflow with a pair of activities. The workflow can be parameterized with the specifics of the execution, such that a suspended job can only be resumed after certain requirements have been met. Only after the requirements have been met are resources allocated for job resumption, thus ensuring no waste in consumption.

In Orchestrator, this is marked by the job being suspended while awaiting the requirements to be met, then resumed and executed as usual. Depending on which pair of activities you use, the completion requirements change, and the Orchestrator response adjusts accordingly.

Jobs

  • Activities: Start Job and Get Reference, Wait for Job and Resume
  • Use case: Introduce a job condition, such as uploading queue items.

After the main job has been suspended, the auxiliary job gets executed. After this process is complete, the main job is resumed. Depending on how you configured your workflow, the resumed job can make use of the data obtained from the auxiliary process execution.




If your workflow uses the Start Job and Get Reference activity to invoke another workflow, your Robot role should be updated with the following permissions:

  • View on Processes
  • View, Edit, Create on Jobs
  • View on Environments
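Outside of Studio, the start-and-wait pattern can be approximated over the REST API: start the auxiliary job, then poll it until it reaches a final state before continuing. This is only a sketch of the control flow that the two activities implement for you (the activities properly suspend the parent job instead of polling); URLs, tokens, and keys are placeholders, and endpoints should be checked against your instance's API reference.

```python
# Rough sketch of the "start a job, wait for it, then continue" pattern.
# The Studio activities suspend the parent job instead of polling; this
# polling loop only illustrates the control flow.
import time
import requests

ORCH_URL = "https://your-orchestrator.example.com"
headers = {"Authorization": "Bearer <token>", "X-UIPATH-OrganizationUnitId": "1234"}

# Start the auxiliary job and keep a reference to it.
resp = requests.post(
    f"{ORCH_URL}/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs",
    json={"startInfo": {"ReleaseKey": "<auxiliary release key>",
                        "Strategy": "ModernJobsCount", "JobsCount": 1}},
    headers=headers,
)
resp.raise_for_status()
job_id = resp.json()["value"][0]["Id"]

# Wait for the auxiliary job to finish before resuming the main logic.
while True:
    job = requests.get(f"{ORCH_URL}/odata/Jobs({job_id})", headers=headers).json()
    if job["State"] in ("Successful", "Faulted", "Stopped"):
        break
    time.sleep(30)
```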

Queues

  • Activities: Add Queue Item and Get Reference, Wait for Queue Item and Resume
  • Use case: Introduce a queue condition, such as having queue items processed.

After the main job has been suspended, the queue items need to be processed through the auxiliary job. After this process is complete, the main job is resumed. Depending on how you configured your workflow, the resumed job can make use of the output data obtained from the processed queue item.
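The queue condition in this pattern maps to adding items to a queue for the auxiliary job to consume. Below is a minimal sketch of adding one queue item over the REST API, assuming a queue named InvoiceQueue already exists in the folder (the queue name and item content are hypothetical); verify the endpoint against your Orchestrator's API reference.

```python
# Minimal sketch: add one item to an existing queue so the auxiliary job
# can process it. Queue name and content are placeholders.
import requests

ORCH_URL = "https://your-orchestrator.example.com"
headers = {"Authorization": "Bearer <token>", "X-UIPATH-OrganizationUnitId": "1234"}

item = {
    "itemData": {
        "Name": "InvoiceQueue",              # hypothetical queue name
        "Priority": "Normal",
        "SpecificContent": {"InvoiceId": "INV-0042", "Amount": 129.99},
        "Reference": "INV-0042",
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
    json=item,
    headers=headers,
)
resp.raise_for_status()
```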

Actions

Form Actions

  • Activities: Create Form Task, Wait for Form Task and Resume
  • Use case: Introduce user intervention conditions, found in Orchestrator as actions.

After the job has been suspended, an action is generated in Orchestrator (as configured in Studio).

Only after the action is completed is the job resumed.

Form actions need to be completed by the assigned user. User assignment can be handled directly in Orchestrator, or through the Assign Tasks activity.

External Actions

  • Activities: Create External Task, Wait for External Task and Resume
  • Use case: Introduce user intervention conditions, found in Orchestrator as actions.

After the job has been suspended, an action is generated in Orchestrator (as configured in Studio).

Only after the task is completed is the job resumed.

External actions can be completed by any user with Edit permissions on Actions, and access to the associated folder.

Document Validation Actions

  • Activities: Create Document Validation Action, Wait for Document Validation Action and Resume
  • Use case: Introduce user intervention conditions, found in Orchestrator as actions.

After the job has been suspended, an action is generated in Orchestrator (as configured in Studio).

Only after the task is completed is the job resumed.

Document Validation actions need to be completed by the assigned user. User assignment can be handled directly in Orchestrator, or through the Assign Tasks activity.





In order for the Robot to upload, download, and delete data from a storage bucket, it needs to be granted the appropriate permissions. This can be done by updating the Robot role with the following:

To upload document data:

  • View, Create on Storage Files
  • View on Storage Buckets

To delete document data after downloading:

  • View, Delete on Storage Files
  • View on Storage Buckets

Duration

  • Activity: Resume After Delay
  • Use case: Introduce a time interval as a delay, during which the workflow is suspended.

After the delay has passed, the job is resumed.

Job fragments are not restricted to being executed by the same Robot. They can be executed by any Robot that is available when the job is resumed and ready for execution. This also depends on the execution target configured when defining the job. Details here.

For example, say you define a job to be executed by specific Robots: X, Y, and Z. When you start the job, only Z is available, so the job is executed by Z until it is suspended awaiting user validation. After validation, when the job is resumed, only X is available, so the job is executed by X.

  • From a monitoring point of view, such a job is counted as one, regardless of being fragmented or executed by different Robots.
  • Suspended jobs cannot be assigned to Robots, only resumed ones can.

To check the triggers required for the resumption of a suspended job, check the Triggers tab on the Job Details window.

Recording


For faulted unattended jobs, if your process had the Enable Recording option switched on, you can download the corresponding execution media to check the last moments of the execution before failure.
The Download Recording option is only displayed on the Jobs window if you have View permissions on Execution Media.
