A job represents the execution of a process on a UiPath Robot. You can launch a job in either attended or unattended mode. Jobs cannot be launched from Orchestrator on attended robots, and attended robots cannot run under a locked screen.
| Attended Mode | Unattended Mode |
| --- | --- |
| UiPath Robot Tray<br>UiPath Assistant<br>Robot Command Line Interface | Monitoring Menu > Jobs<br>Automations Menu > Triggers<br>Automations Menu > Processes |
In Orchestrator, three pages enable you to configure and start a job: Jobs, Triggers, and Processes. The Jobs page is the jobs control center, where you can monitor already launched jobs, view their details and logs, and stop, kill, resume, or restart a job. The Jobs page displays the content of the current folder only.
Note:
By default, a process can be edited while it has associated running or pending jobs. Take the following into account:
Running jobs associated with a modified process use the initial version of the process. The updated version is used for newly created jobs, or at the next trigger of the same job.
Pending jobs associated with a modified process use the updated version.
Note
If the Robot becomes unresponsive during job execution (the robot machine is down, or the Robot Service crashes), it restarts the execution of the jobs that were running during the crash once it reconnects.
Manually Configure and Launch a Job
See details here.
Monitoring Menu > Jobs > Start
Automations Menu > Triggers > Start
Automations Menu > Processes > Start
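Manual starts can also be performed programmatically through the Orchestrator REST (OData) API. The sketch below builds the request body for the `StartJobs` endpoint; the endpoint path and field names follow the public Orchestrator API, but treat them as assumptions to verify against your Orchestrator version, and the release key is a placeholder.

```python
# Sketch: starting a job via the Orchestrator REST (OData) API.
# POST /odata/Jobs/UiPath.Server.Configuration.OData.StartJobs
# Field names are assumptions based on the public API; verify them
# against your Orchestrator version.

def build_start_jobs_payload(release_key, jobs_count=1, strategy="JobsCount"):
    """Build the request body for the StartJobs endpoint."""
    return {
        "startInfo": {
            "ReleaseKey": release_key,  # identifies the process (release); placeholder here
            "Strategy": strategy,       # e.g. "JobsCount" for dynamic allocation
            "JobsCount": jobs_count,    # how many job entries to create
        }
    }

payload = build_start_jobs_payload("dummy-release-key", jobs_count=2)
```

The resulting dictionary would be serialized as JSON and sent with your usual authenticated HTTP client.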
Stop/Kill/Resume/Restart a Job
See details here.
Monitoring Menu > Jobs > More Actions > Stop/Kill/Resume/Restart
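The same stop/kill operation is available through the REST (OData) API. A minimal sketch, assuming the `StopJob` endpoint path and `strategy` values of the public Orchestrator API (verify against your version):

```python
# Sketch: stopping or killing a running job through the Orchestrator
# REST (OData) API. Endpoint path and "strategy" values are assumptions
# based on the public API.

def build_stop_job_request(job_id: int, kill: bool = False):
    """Return the (path, body) pair for the StopJob endpoint.

    SoftStop asks the workflow to stop gracefully; Kill terminates the
    execution immediately.
    """
    path = f"/odata/Jobs({job_id})/UiPath.Server.Configuration.OData.StopJob"
    body = {"strategy": "Kill" if kill else "SoftStop"}
    return path, body
```

Choosing `SoftStop` lets the workflow finish its current transaction cleanly, which is usually preferable to `Kill`.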
View Job Details
See details here.
Useful when you need to troubleshoot faulted jobs. You can troubleshoot unattended faulted jobs by recording the unattended execution.
Monitoring Menu > Jobs > More Actions > Details
View Job Logs
See details here.
Monitoring Menu > Jobs > More Actions > View Logs
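Logs can also be pulled programmatically. The sketch below builds an OData query against the `RobotLogs` endpoint; the endpoint and its `JobKey`/`TimeStamp` fields follow the public Orchestrator API, but treat the exact names as assumptions to verify on your version.

```python
# Sketch: fetching the logs of a single job via the Orchestrator REST
# (OData) API. Field and endpoint names are assumptions based on the
# public API.

def job_logs_query(job_key: str) -> str:
    """OData query string that filters robot logs down to one job."""
    return f"/odata/RobotLogs?$filter=JobKey eq {job_key}&$orderby=TimeStamp"
```

Append the returned path to your Orchestrator base URL and send it with an authenticated GET request.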
Sources
There are three possible job sources, depending on the job launching mechanism:
- Manual - the job has been started and configured from the Jobs/Triggers/Processes pages, using the Start button.
- Agent - the job has been started in attended mode from the UiPath Robot tray, UiPath Assistant, or using the Command Line.
- [Trigger_Name] - the job has been launched through a trigger, used for preplanned job execution.
Execution Target
| Users | Robots |
| --- | --- |
| **User** - The process is executed under a specific user.<br>**Allocate Dynamically** - The foreground process is executed multiple times, under whichever user and machine become available first. If a user is also selected, only the machine allocation is done dynamically. Background processes are executed under any user, regardless of whether that user is busy, as long as you have sufficient runtimes. | **Specific Robots** - The process is executed by certain Robots, selected from the robots list.<br>**Allocate Dynamically** - The foreground process is executed multiple times, on whichever Robot becomes available first. Background processes are executed by any Robot, regardless of whether it is busy, as long as you have sufficient runtimes. |
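The execution-target choice maps onto the `Strategy` field of the `StartJobs` request body. A minimal sketch, assuming the strategy names used by the public Orchestrator API (`Specific` for pinned robots, `JobsCount` for dynamic allocation); verify the exact names against your Orchestrator version:

```python
# Sketch: building startInfo for a pinned vs. dynamically allocated job.
# Strategy names and field names are assumptions based on the public
# Orchestrator API.

def start_info_for_target(release_key, robot_ids=None, dynamic_count=None):
    """Build startInfo for a specific-robots or dynamically allocated job."""
    info = {"ReleaseKey": release_key}  # placeholder release key
    if robot_ids:
        info["Strategy"] = "Specific"   # run only on the listed robots
        info["RobotIds"] = robot_ids
    else:
        info["Strategy"] = "JobsCount"  # first available user/machine picks it up
        info["JobsCount"] = dynamic_count or 1
    return {"startInfo": info}
```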
Dynamic Allocation Usage Convention
On a host machine, you need to provision a Windows user for each user that belongs to the folders to which the corresponding machine template is assigned.
Say you connected a server to Orchestrator using the key generated by the machine template FinanceT. That machine template is assigned to the folders FinanceExecution and FinanceHR, to which 6 users are also assigned. Those 6 users need to be provisioned as Windows users on the server.
If you configure a job to execute the same process multiple times, a job entry is created for each execution. The jobs are ordered based on their priority and creation time, with higher priority, older jobs being placed first in line. As soon as a robot becomes available, it executes the next job in line. Until then, the jobs remain in a pending state.
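The ordering rule can be sketched as a small simulation (this is illustrative, not Orchestrator's actual code): higher priority wins, and among jobs of equal priority the oldest goes first.

```python
# Minimal simulation of how pending jobs are ordered: higher priority
# first, then older creation time.

PRIORITY_RANK = {"High": 0, "Normal": 1, "Low": 2}

def next_job(pending):
    """Pick the job a newly available robot would execute next."""
    return min(pending, key=lambda j: (PRIORITY_RANK[j["priority"]], j["created"]))

pending = [
    {"name": "A", "priority": "Normal", "created": 1},
    {"name": "B", "priority": "High",   "created": 3},
    {"name": "C", "priority": "High",   "created": 2},
]
print(next_job(pending)["name"])  # → C: highest priority, oldest among the High jobs
```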
Setup
- 1 folder
- 1 machine template with two runtimes
- 2 users: john.smith and petri.ota
- 2 processes which require user interaction: P1 - which adds queue items to a queue, P2 - which processes the items in the queue
- The machine template and the users must be associated to the folder containing the processes.
Desired Outcome
- P1 is executed with a high priority by anyone.
- P2 is executed with a low priority by petri.ota.
Required Job Configuration
- Start a job using P1, don't assign it to any particular user, set the priority to High.
- Start a job for P2, assign it to petri.ota, set the priority to Low.
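Expressed as `StartJobs` request bodies, the two configurations above might look as follows. This is a sketch: the `JobPriority` field name and values are assumptions based on the public Orchestrator API, the release keys are placeholders, and the ids that assign P2 to petri.ota are omitted.

```python
# P1: dynamically allocated, high priority - anyone may pick it up.
p1_body = {"startInfo": {
    "ReleaseKey": "p1-release-key",   # placeholder
    "Strategy": "JobsCount",
    "JobsCount": 1,
    "JobPriority": "High",            # assumed field name; verify on your version
}}

# P2: low priority, assigned to a specific user (assignment ids omitted).
p2_body = {"startInfo": {
    "ReleaseKey": "p2-release-key",   # placeholder
    "Strategy": "Specific",
    "JobPriority": "Low",
}}
```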
Execution Priority
You can control which job has precedence over other competing jobs through the Job Priority field either when deploying the process, or when configuring a job/trigger for that process. A job can have one of the following priorities: Low (↓), Normal (→), High (↑).
Starting a Job Manually
The priority is inherited from where it was initially configured. You can either leave it as it is, or change it.
Monitoring Menu > Jobs
Inherits the priority set at process level.
Automations Menu > Triggers
Inherits the priority set at trigger level. If the trigger itself inherited the priority at process level, then that one is used.
Automations Menu > Processes
Uses the priority set for that process.
For example, in the following screenshot, you can see that three different jobs were started on the same Robot. The first job is running, while the others are in a pending state.
Starting a Job Through a Trigger
The priority is set by default to Inherited, meaning it inherits the value set at process level. Choosing a process automatically updates the arrow icon to illustrate what value has been set at process level. Any jobs launched by the trigger have the priority set at trigger level. If the default Inherited is kept, the jobs are launched with the priority at process level.
Any subsequent changes made at process level are propagated to the trigger, and the jobs created through it implicitly.
Note
If you start a job that requires user intervention on multiple Robots on the same machine, and that machine does not run Windows Server, the selected process is executed only by the first Robot, while the rest fail. An instance for each of these executions is created and displayed on the Jobs page.
Process Types
There are two types of processes, according to their user interface requirements:
- Background Process - Does not require a user interface, nor user intervention to get executed. For this reason you can execute multiple such jobs in unattended mode on the same user simultaneously. Each execution requires an Unattended/NonProduction license. Background processes run in Session 0 when started in unattended mode.
For a more detailed description of how background processes work, see the Background Process Automation page.
- Requires User Interface - Requires a user interface, either because the execution needs UI to be generated, or because the process contains interactive activities, such as Click. You can execute only one such process per user at a time.
Click here for details about processes in Orchestrator.
The same user can execute multiple background processes and a single UI-requiring process at the same time.
High-Density Robots
If you start a job on multiple High-Density Robots from the same Windows Server machine, the selected process is executed by each specified Robot at the same time. An instance for each of these executions is created and displayed on the Jobs page.
If you are using High-Density Robots and did not enable RDP on that machine, each time you start a job, the following error is displayed: “A specified logon session does not exist. It may already have been terminated.” To see how to set up your machine for High-Density Robots, please see the About Setting Up Windows Server for High-Density Robots page.
Long-Running Workflows
Note
This feature is only supported for unattended environments. Starting a long-running process on an Attended Robot is not supported as the job cannot be killed from Orchestrator nor can it be effectively resumed.
Processes that require logical fragmentation or human intervention (validations, approvals, exception handling), such as invoice processing and performance reviews, are handled with a set of instruments in the UiPath suite: a dedicated project template in Studio called Orchestration Process, and actions and resource allocation capabilities in Orchestrator.
Broadly, you configure your workflow with a pair of activities. The workflow can be parameterized with the specifics of the execution, such that a suspended job can only be resumed if certain requirements have been met. Only after the requirements have been met, resources are allocated for job resumption, thus ensuring no waste in terms of consumption.
In Orchestrator, this is reflected by the job being suspended while awaiting the requirements to be met, then resumed and executed as usual. Depending on which pair of activities you use, the completion requirements change, and the Orchestrator response adjusts accordingly.
Jobs
| Activities | Use Case |
| --- | --- |
| Start Job and Get Reference<br>Wait for Job and Resume | Introduce a job condition, such as uploading queue items. After the main job has been suspended, the auxiliary job gets executed. After this process is complete, the main job is resumed. Depending on how you configured your workflow, the resumed job can make use of the data obtained from the auxiliary process execution.<br>If your workflow uses the Start Job and Get Reference activity to invoke another workflow, your Robot role should be updated with the following permissions: View on Processes; View, Edit, Create on Jobs; View on Environments. |
Queues
| Activities | Use Case |
| --- | --- |
| Add Queue Item and Get Reference<br>Wait for Queue Item and Resume | Introduce a queue condition, such as having queue items processed. After the main job has been suspended, the queue items need to be processed through the auxiliary job. After this process is complete, the main job is resumed. Depending on how you configured your workflow, the resumed job can make use of the output data obtained from the processed queue item. |
Actions
Form Actions
| Activities | Use Case |
| --- | --- |
| Create Form Task<br>Wait for Form Task and Resume | Introduce user intervention conditions, surfaced in Orchestrator as actions. After the job has been suspended, an action is generated in Orchestrator (as configured in Studio). Only after action completion is the job resumed.<br>Form actions need to be completed by the assigned user. User assignment can be handled directly in Orchestrator, or through the Assign Tasks activity. |
External Actions
| Activities | Use Case |
| --- | --- |
| Create External Task<br>Wait for External Task and Resume | Introduce user intervention conditions, surfaced in Orchestrator as actions. After the job has been suspended, an action is generated in Orchestrator (as configured in Studio). Only after task completion is the job resumed.<br>External actions can be completed by any user with Edit permissions on Actions and access to the associated folder. |
Document Validation Actions
| Activities | Use Case |
| --- | --- |
| Create Document Validation Action<br>Wait for Document Validation Action and Resume | Introduce user intervention conditions, surfaced in Orchestrator as actions. After the job has been suspended, an action is generated in Orchestrator (as configured in Studio). Only after task completion is the job resumed.<br>Document Validation actions need to be completed by the assigned user. User assignment can be handled directly in Orchestrator, or through the Assign Tasks activity.<br>In order for the Robot to upload, download, and delete data from a storage bucket, it needs to be granted the appropriate permissions, by updating the Robot role with the following: to upload document data: View, Create on Storage Files; View on Storage Buckets. To delete document data after downloading: View, Delete on Storage Files; View on Storage Buckets. |
Duration
| Activity | Use Case |
| --- | --- |
| Resume After Delay | Introduce a time interval as a delay, during which the workflow is suspended. After the delay has passed, the job is resumed. |
Job fragments are not restricted to being executed by the same Robot. They can be executed by any Robot that is available when the job is resumed and ready for execution, depending on the execution target configured when defining the job. Details here.
I defined my job to be executed by specific Robots, say X, Y and Z. When I start the job only Z is available, therefore my job is executed by Z until it gets suspended awaiting user validation. After it gets validated, and the job is resumed, only X is available, therefore the job is executed by X.
- From a monitoring point of view, such a job is counted as one, regardless of being fragmented or executed by different Robots.
- Suspended jobs cannot be assigned to Robots, only resumed ones can.
To check the triggers required for the resumption of a suspended job, check the Triggers tab on the Job Details window.
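The suspend/resume lifecycle can be summarized as a small state table. This is a simplified sketch: the state names match those shown on the Jobs page, but the transition set is reduced to the long-running scenario discussed here.

```python
# Simplified job-state transitions for a long-running workflow (sketch).
ALLOWED = {
    "Pending":   {"Running"},
    "Running":   {"Successful", "Faulted", "Stopped", "Suspended"},
    "Suspended": {"Resumed"},   # requirements met / action completed
    "Resumed":   {"Running"},   # an available robot picks the job up again
}

def can_transition(src: str, dst: str) -> bool:
    """True if a job may move from state src to state dst in this sketch."""
    return dst in ALLOWED.get(src, set())
```

Note that `Suspended` does not transition directly back to `Running`: the job is first marked resumed, and only then allocated to whichever robot is available.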
Recording
For unattended faulted jobs, if your process had the Enable Recording option switched on, you can download the corresponding execution media to check the last moments of the execution before failure.
The Download Recording option is only displayed on the Jobs window if you have View permissions on Execution Media.