The tables below describe what is required for each single-service node. In a High Availability configuration, each of the HA server instances is such a single-service node.
|Product|Debian|CentOS|RHEL|Ubuntu|Windows Server|Helm Charts|SLES|
|---|---|---|---|---|---|---|---|
|Artifactory|8.x, 9.x, 10.x|7.x, 8.x|7.x, 8.x|16.04, 18.04, 20.04|2008 R2, 2016 or 2019|2.x, 3.x|12 SP 5|
|Mission Control|8.x, 9.x, 10.x|7.x, 8.x|7.x, 8.x|16.04, 18.04, 20.04| |2.x, 3.x|12 SP 5|
|Xray|8.x, 9.x, 10.x|7.x, 8.x|7.x, 8.x|16.04, 18.04, 20.04| |2.x, 3.x| |
|Distribution|8.x, 9.x, 10.x|7.x, 8.x|7.x, 8.x|16.04, 18.04, 20.04| |2.x, 3.x| |
|Pipelines| |7.x, 8.x|7.x, 8.x|16.04, 18.04, 20.04|Build nodes only|2.x, 3.x| |
Reserving Ports for Services
As JFrog adds services to the JFrog Platform portfolio, ports must be "reserved" for the Platform to ensure that these services work properly. To this end, JFrog recommends reserving ports 8000-8100 (this is in addition to the existing internal ports documented below).
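How you keep this range free is environment-specific. As one illustrative approach on Linux (an assumption, not a step mandated by JFrog), the kernel can be told not to hand these ports out as ephemeral ports:

```shell
# Prevent the kernel from allocating 8000-8100 as ephemeral ports (runtime setting)
sudo sysctl -w net.ipv4.ip_local_reserved_ports=8000-8100

# Persist across reboots (the file name under /etc/sysctl.d/ is arbitrary)
echo "net.ipv4.ip_local_reserved_ports = 8000-8100" | sudo tee /etc/sysctl.d/99-reserved-ports.conf
```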
|Product|Processor|Memory|Storage|External Network Port|Internal Network Ports (default)|Databases/Third Party Applications|
|---|---|---|---|---|---|---|
|Artifactory (Version 7.0 and above)| | |Based on expected artifact storage volume. Fast disk with free space that is at least 3 times the total size of stored artifacts.| | | |
|Mission Control|4 cores|12 GB|100 GB| | |Elasticsearch 7.8.0 and 7.8.1 (for Mission Control version 4.6.0); Elasticsearch 7.10.2 (for Mission Control versions 4.7.0 to 4.7.7); Elasticsearch 7.12.1 (for Mission Control version 4.7.8); Elasticsearch 7.13.2 (for Mission Control version 4.7.9)|
|Xray|See the Xray sizing table below.| | | | | |
|Distribution (Version 2.0 and above)|3 cores|5 GB|50 GB| | | |
|Pipelines (Version 1.0 and above)|4 cores|8 GB|100 GB| | | |

The processor, memory, and storage figures above are minimum requirements, assuming the service runs with an external database.

The Xray requirements below are based on the size of your environment. Use a dedicated server for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

|Environment size|Processor|Memory|Storage|
|---|---|---|---|
|Up to 100K indexed artifacts, and 1K artifacts/builds per day|Xray and DB: 6 CPU|Xray and DB: 24 GB|Xray and DB: 500 GB (SSD, 3000 IOPS)|
|Up to 1M indexed artifacts, and 10K artifacts/builds per day| | | |
|Up to 2M indexed artifacts, and 20K artifacts/builds per day| | | |
|Up to 10M indexed artifacts, and 50K artifacts/builds per day| | | |
|Over 10M indexed artifacts, and 50K artifacts/builds per day|Contact JFrog Support for sizing requirements.| | |

The number of nodes above refers to High Availability (HA) setups, not Disaster Recovery.
Artifactory, Xray, Mission Control, and other JFrog products must all be assigned static IP addresses. These services also need to be able to communicate directly with each other over the same LAN connection; hosting them in geographically distant locations may cause health checks to temporarily fail. Ensure that the required ports are open and that no firewall blocks communication between these services.
Java-based products (Artifactory, Distribution, Mission Control) must run with JDK 11+. The JDK is already bundled with these applications.
JVM Memory Allocation
While not a strict requirement, we recommend that you modify the JVM memory parameters used to run Artifactory.
You should reserve at least 512 MB for Artifactory; the larger your repository or number of concurrent users, the larger you should make the -Xms and -Xmx values.
Set your JVM parameters in the system.yaml configuration file.
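As a minimal sketch, assuming the shared.extraJavaOpts key supported by recent Artifactory 7.x system.yaml templates (check the template bundled with your installation for the exact key and defaults):

```yaml
# system.yaml fragment - JVM sizing for Artifactory
# Assumes the shared.extraJavaOpts key; verify against the system.yaml
# template shipped with your Artifactory version.
shared:
  extraJavaOpts: "-Xms512m -Xmx4g"   # raise -Xmx as repository size and concurrency grow
```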
Artifactory has been tested with the latest versions of:
- Safari (for Mac)
- Edge (Chromium-based versions)
System Time Synchronization
The JFrog Platform requires time synchronization between all JFrog services within the same Platform.
Unsynchronized services may cause issues during authentication and token verification.
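As an illustrative check on systemd-based Linux hosts (assuming timedatectl and an NTP service such as chronyd or systemd-timesyncd are available; other time-sync tooling works just as well):

```shell
# Check whether the system clock is synchronized ("System clock synchronized: yes")
timedatectl status

# Enable NTP-based synchronization if it is not already active
sudo timedatectl set-ntp true
```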
For Docker and Docker Compose installations, JFrog services require Docker v18 and above (for Pipelines, Docker 18.09 and above) and Docker Compose v1.24 and above to be installed on the machine on which you want to run them.
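A quick way to confirm the installed versions against these minimums (output format varies slightly between Docker releases):

```shell
# Verify the Docker Engine and Docker Compose versions
docker --version
docker-compose --version
```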
Helm Chart Requirements
For Helm Charts installations, JFrog services require the following prerequisites (a quick verification sketch follows the list):
- Kubernetes 1.12+ (for installation instructions, see Kubernetes installation)
- A Kubernetes cluster with:
  - Dynamic storage provisioning enabled
  - A default StorageClass set for persistent storage
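As a rough check against an existing cluster (assuming kubectl is already configured for that cluster):

```shell
# Show client and server Kubernetes versions (server should be 1.12 or later)
kubectl version

# List StorageClasses; the default one is annotated with "(default)"
kubectl get storageclass
```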
Artifactory - Working with Very Large Storage
In most cases, our recommendation is for storage that is at least 3 times the total size of stored artifacts in order to accommodate system backups. However, when working with a very large volume of artifacts, the recommendation may vary greatly according to the specific setup of your system. Therefore, when working with over 10 TB of stored artifacts, please contact JFrog Support, who will work with you to provide a storage recommendation customized to your specific setup.
Allocated storage space may vary
Xray downloads fetched artifacts and deletes them after indexing. However, running more parallel indexing processes means more temporary files exist at the same time, which requires more space.
This is especially applicable for large BLOBs such as Docker images.
Use a dedicated node for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.
In most cases, we recommend using an SSD drive for Xray for better performance. An NFS drive is not recommended: Xray is a disk I/O-intensive service, a slow NFS server can suffer from I/O bottlenecks, and NFS is mostly used for storage replication.
Since the local storage used by the Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.
File Handle Allocation Limit
Avoid performance bottlenecks
In the process of deep recursive scan in which Xray indexes artifacts and their dependencies (metadata), Xray needs to concurrently manage many open files. The default maximum number of files that can be opened concurrently on Linux systems is usually too low for the indexing process and can therefore cause a performance bottleneck. For optimal performance, we recommend increasing the number of files that can be opened concurrently to 100,000 (or the maximum your system can handle) by following the steps below.
First, determine the current file handle allocation limit on the machine running Xray.
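On most Linux systems (an assumption; the exact command may vary by distribution), the system-wide maximum and the per-process limit can be read as follows:

```shell
# System-wide maximum number of open file handles
cat /proc/sys/fs/file-max

# Open file limit for the current shell/process
ulimit -n
```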
Then, set the following parameters in the relevant .conf file to the lower of 100,000 or the file handle allocation limit determined above. The example below shows the relevant parameters set to 100000; the actual setting for your installation may differ, depending on the file handle allocation limit of your system.
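A minimal sketch of such an entry, assuming the limit is managed through /etc/security/limits.conf and that the Xray services run under a dedicated user (both the file path and the user name are placeholders for your own setup):

```
# /etc/security/limits.conf (assumed location; adjust for your distribution)
# "xray" is a placeholder for the user account that runs the Xray services
xray  soft  nofile  100000
xray  hard  nofile  100000
```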