Environment Type
Hardware requirements differ between your development environment and your production environment. While you could use the same hardware for testing and development as for production, doing so implies higher, unnecessary costs, especially for large-scale deployments.
Development Environments
These requirements assume a maximum of 100 Unattended Robots running simultaneously. Two machines can be used: one for Orchestrator and (optionally) Elasticsearch, and one for SQL Server, configured as follows:
Web Application Server
CPU Cores (>2GHz) | RAM (GB) | HDD (GB) |
---|---|---|
4 | 4 | 150 |
SQL Server
CPU Cores (>2GHz) | RAM (GB) | HDD (GB) |
---|---|---|
4 | 8 | 300 |
Production Environments
For production environments, it is highly recommended to provide one dedicated server for each role:
- Orchestrator web application.
- SQL Server Database Engine.
- Elasticsearch and Kibana.
For a Multi-Node Installation, in addition to the above, the following is also required:
- High Availability add-on (HAA) for Orchestrator (3+ HAA nodes are required for true high availability, and 6+ HAA nodes for geo-redundancy).
Note:
Multi-node Orchestrator deployments use RESP (REdis Serialization Protocol) for communication, and can therefore be configured using any solution relying on this protocol.
HAA is the only such solution supported by UiPath.
The hardware configuration for each required server depends on the size of your deployment, as detailed below. The hardware requirements presented here are based on tests in which a Robot was defined as follows:
- messages are sent from the Robot to Orchestrator at a rate of 1 message per second
- within 60 seconds, the Robot sends:
- 40 message logs
- 2 heartbeats
- 6 get asset requests
- 6 add queue item requests
- 6 get queue item requests
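Over those 60 seconds, this adds up to 40 + 2 + 6 + 6 + 6 = 60 messages, consistent with the stated rate of 1 message per second.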
Support up to 250 Unattended Robots
Web Application Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<20 | 4 | 4 | 100 |
<50 | 4 | 4 | 100 |
<100 | 4 | 4 | 150 |
<200 | 4 | 4 | 200 |
<250 | 4 | 4 | 200 |
Note:
For more than 200 Robots, increase the number of SQL connections in the web.config file to 200. To do this, add the Max Pool Size=200 setting to the connection string, so that it looks something like this:
<add name="Default" providerName="System.Data.SqlClient" connectionString="Server=SQL4142;Integrated Security=True;Database=UiPath;Max Pool Size=200;" />
SQL Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<20 | 4 | 8 | 100 |
<50 | 4 | 8 | 200 |
<100 | 4 | 8 | 300 |
<200 | 8 | 8 | SSD 400 |
<250 | 8 | 16 | SSD 400 |
Disk space requirements depend heavily on:
- Whether work queues are used. If they are, disk usage depends on the average number of transactions added daily or weekly, and on the size of each transaction (number of fields, size of each field).
- The retention period for successfully processed queue items (the customer should implement their own retention policy).
- Whether the messages logged by the Robots are stored in the database. If they are, a filter can be applied so that only specific log levels are stored in the DB (for example, store messages with log level Error and Critical in the DB, and store messages with log level Info, Warn, and Trace in Elasticsearch); see the sketch after this list.
- The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
- The retention period for old logged messages (the customer should implement their own retention policy).
- The logging level set in the Robot. For example, if the logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.
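As an illustration, this kind of level-based filtering is configured through NLog logger rules in Orchestrator's web.config file. The sketch below assumes the Robot logger and the target names (database, robotElasticBuffer) found in a typical installation; verify the actual names in the <nlog> section of your own file before editing. Because the first rule has no final attribute, Error and Critical messages are written to the database and still continue on to Elasticsearch:
<!-- hypothetical rule sketch: the database target receives only Error and Critical -->
<logger name="Robot.*" minlevel="Error" writeTo="database" />
<!-- Elasticsearch receives everything from Info upward, including Error and Critical -->
<logger name="Robot.*" minlevel="Info" writeTo="robotElasticBuffer" final="true" />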
Elasticsearch Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<20 | 4 | 4 | 100 |
<50 | 4 | 4 | 100 |
<100 | 4 | 8 | 150 |
<200 | 4 | 12 | 200 |
<250 | 4 | 12 | 300 |
Disk space requirements depend on:
- The retention period (the customer should implement their own retention policy).
- The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
- The logging level set in the Robot. For example, if the logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.
Note:
For more than 50 Robots, you need to instruct the Java Virtual Machine used by Elasticsearch to use 50% of the available RAM, by setting both the -Xms and -Xmx arguments to half of the total amount of memory. This can be done either through the ES_JAVA_OPTS environment variable or by editing the jvm.options file.
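For example, on an Elasticsearch machine with 16 GB of RAM, both values would be set to 8 GB. A minimal sketch of the two options, assuming a Linux host (file locations vary with your Elasticsearch version and install path):
# Option 1: set the heap via the environment variable before starting Elasticsearch
export ES_JAVA_OPTS="-Xms8g -Xmx8g"
# Option 2: add the flags to the jvm.options file, one per line
-Xms8g
-Xmx8g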
Support Between 250 and 500 Unattended Robots
Web Application Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<300 | 8 | 8 | 200 |
<400 | 8 | 8 | 220 |
<500 | 16 | 8 | 250 |
Note:
For more than 400 Robots, it is recommended to increase the number of CPU cores to 16.
SQL Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<300 | 16 | 32 | SSD 400 |
<400 | 16 | 32 | SSD 500 |
<500 | 16 | 32 | SSD 600 |
Note:
SQL Server Standard Edition uses a maximum of 16 CPU cores. For a virtual machine, make sure these cores are obtained as 4 virtual sockets with 4 cores each (and not as 2 sockets with 8 cores or 8 sockets with 2 cores). For Enterprise Edition, the socket/core combination used to obtain 16 cores does not matter.
For more than 300 Robots, consider not storing all logged messages in the SQL Server database. Store only the messages with log level Error and Critical in the DB, and store all messages (including Error and Critical) in Elasticsearch.
Elasticsearch Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
<300 | 4 | 12 | 300 |
<400 | 4 | 16 | 500 |
<500 | 4 | 16 | 600 |
Support for Over 500 Unattended Robots
If Orchestrator needs to support more than 500 Robots running simultaneously, you need to provide 2 or more Orchestrator nodes and 1 or more HAA nodes in a farm behind a Network Load Balancer. Each node should meet the hardware requirements for the number of Robots it serves, as dispatched by the Load Balancer. Keep in mind that SQL Server still runs on a single machine (even with Always On Availability Groups, the Primary Replica serves all I/O requests). Therefore, you need to:
- Increase the RAM on the SQL Server machine to 64 GB.
- Store ONLY Error and Critical log levels from the Robot in the DB.
SQL Server
Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
---|---|---|---|
>500 | 16 | 64 | SSD 800 |
SQL Server Standard Edition uses a maximum of 16 CPU cores. For a virtual machine, make sure these cores are obtained as 4 virtual sockets with 4 cores each (and not as 2 sockets with 8 cores or 8 sockets with 2 cores). For Enterprise Edition, the socket/core combination used to obtain 16 cores does not matter.
Large Scale Production Environment
The following environment is recommended to run 10,000 Attended Robots or 1,000 Unattended Robots:
- An F5 load balancer.
- Orchestrator - at least 6 instances that run on machines with 8 CPU Cores and 16 GB RAM.
- Robots - machines with 4 CPU Cores and 16 GB RAM.
- SQL Server - machines with 16 CPU Cores and 64 GB RAM.
The SQL machine has to be configured with 4 sockets/16 cores, that is, 4 cores per socket (rather than the default 8 sockets/16 cores).
A standard web.config file with the following adjustments:
- The database connection pool on each Orchestrator instance set to 200 (default 100).
- The logging method configured to use only Elasticsearch, with the Database target disabled.
Note that logging to the database can significantly slow down processing, especially if your workflow contains errors. If you do want to log to the database, set the NLog module buffer size to 10 (default 100).
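The buffer size is controlled by the bufferSize attribute of the NLog buffering wrapper around the database target in web.config. A minimal sketch, with placeholder target names (keep your existing inner Database target unchanged):
<target xsi:type="BufferingWrapper" name="database" bufferSize="10">
  <!-- your existing Database target goes here -->
</target>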
You can also check out hardware requirements for Studio and Robot.
TCP Ports
Port | Description |
---|---|
443 | Default port for communication between Users and Orchestrator, as well as between Orchestrator and the connected Robots. |
1433 | Default port for communication between Orchestrator and the SQL Server machine. |
9200 | Communication between Orchestrator and Elasticsearch. |
9300 | Communication between Elasticsearch nodes, if applicable. |
5601 | Default port used by the Kibana plugin, if applicable. |
3389 | Required for RDP automation, needed for High-Density Robots. |
See Also
Software Requirements
About Logs