The following tables describe what is required for each single-service node. In a High Availability configuration, a single-service node represents each of the HA server instances.
Minimum System and Application Requirements
From version 7.41.4, Artifactory supports installation on ARM64 architecture through Helm and Docker installations. You must set up an external database for Artifactory, since the bundled database is not supported with the ARM64 installation. The Artifactory installation pulls the ARM64 image automatically when you run the Helm or Docker installation on an ARM64 platform.
Currently, ARM64 support is not available for other JFrog products.
Artifactory, Xray, and other JFrog products all need to be set with static IP addresses. These services also need to be able to communicate directly with each other over the same LAN connection. Hosting these services in geographically distant locations may cause health checks to temporarily fail. Ensure the ports are open and no firewalls block communications between these services.
Java-based products (Artifactory, Distribution, Insight, Mission Control) must run with JDK 11+. The JDK is already bundled into the applications.
JVM Memory Allocation
While not a strict requirement, we recommend that you modify the JVM memory parameters used to run Artifactory.
You should reserve at least 512MB for Artifactory. The larger your repository or the higher your number of concurrent users, the larger you should set the -Xms and -Xmx values.
Set your JVM parameters in the system.yaml configuration file.
shared:
  extraJavaOpts: "-Xms512m -Xmx2g"
Artifactory has been tested with the latest versions of:
The JFrog Platform requires time synchronization between all JFrog services within the same Platform.
Unsynchronized services may cause issues during authentication and token verification.
Docker Requirements
For Docker and Docker Compose installations, JFrog services require Docker v18 and above (18.09 and above for Pipelines) and Docker Compose v1.24 and above to be installed on the machine on which you want to run them.
For installation instructions, refer to the Docker and Docker Compose documentation.
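As a quick sanity check, a version comparison along the following lines can confirm that an installed version meets the minimum. This is a minimal sketch: the `installed` value below is a placeholder, and in practice you would parse it from the output of `docker --version` or `docker-compose --version`.

```shell
#!/bin/sh
# Compare an installed version string against the required minimum.
# "18.09" is the Pipelines minimum quoted above; "20.10.7" is a placeholder.
required="18.09"
installed="20.10.7"   # e.g. parsed from: docker --version

# sort -V orders version strings numerically; if the required version
# sorts first (or is equal), the installed version meets the minimum.
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "Docker version OK"
else
  echo "Docker version too old"
fi
```

The same comparison works for the Docker Compose v1.24 minimum by swapping in the relevant version strings.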
Helm Chart Requirements
For Helm Chart installations, JFrog services require the following prerequisites:
JFrog validates compatibility with the core Kubernetes distribution. Since Kubernetes distribution vendors may apply additional logic or hardening (for example, OpenShift and Rancher), JFrog Platform deployments on such vendor platforms might not be fully supported.
Artifactory Storage Requirements
In most cases, we recommend storage that is at least 3 times the total size of stored artifacts, in order to accommodate system backups. However, when working with a very large volume of artifacts, the recommendation may vary greatly according to the specific setup of your system. Therefore, when working with over 10 TB of stored artifacts, contact JFrog Support, who will work with you to provide a storage recommendation customized to your specific setup.
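The 3x guideline translates into a simple calculation. A minimal sketch, where the filestore path and the current usage figure are placeholders to be replaced with your own values:

```shell
#!/bin/sh
# Estimate recommended storage as 3x the current artifact volume.
# ARTIFACT_DIR is a hypothetical filestore path; substitute your own.
ARTIFACT_DIR="/var/opt/jfrog/artifactory/data"

# Placeholder figure; in practice, measure with something like:
#   du -s --block-size=1G "$ARTIFACT_DIR" | cut -f1
used_gb=500

recommended_gb=$((used_gb * 3))
echo "Recommended storage: ${recommended_gb} GB"
```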
Xray downloads fetched artifacts and deletes them after indexing. However, running more parallel indexing processes produces more temporary files at the same time, and therefore requires more space.
This is especially applicable for large BLOBs such as Docker images.
While Artifactory can use a Networked File System (NFS) for its binary storage, you should not install the application itself on an NFS. The Artifactory application needs very fast, reliable access to its configuration files, and any NFS latency in reading these files will result in poor performance. Therefore, install Artifactory on a local disk mounted directly to the host.
To use an NFS to store binaries, use the "file-system" binarystore.xml configuration with the additional "<baseDataDir>" setting.
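As an illustration, a "file-system" template configuration with the data directory pointed at an NFS mount could look like the following sketch. The mount path is a hypothetical example, and the exact provider settings should be checked against the binarystore.xml documentation for your Artifactory version.

```xml
<config version="2">
    <!-- Use the built-in file-system chain template -->
    <chain template="file-system"/>
    <provider id="file-system" type="file-system">
        <!-- Hypothetical NFS mount point for binary storage -->
        <baseDataDir>/mnt/nfs/artifactory</baseDataDir>
    </provider>
</config>
```

Only the binaries live on the NFS mount; the application and its configuration files stay on the local disk, as described above.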
Xray Requirements
Use a dedicated node for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.
In most cases, we recommend using an SSD drive for Xray for better performance. We do not recommend using an NFS drive: Xray is a disk I/O-intensive service, a slow NFS server can create I/O bottlenecks, and NFS is intended mostly for storage replication.
Since the local storage used by Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.
During a deep recursive scan, in which Xray indexes artifacts and their dependencies (metadata), Xray needs to concurrently manage many open files. The default maximum number of concurrently open files on Linux systems is usually too low for the indexing process and can therefore cause a performance bottleneck. For optimal performance, we recommend increasing the number of files that can be opened concurrently to 100,000 (or the maximum your system can handle) by following the steps below.
Use the following command to determine the current file handle allocation limit:
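The exact command was not preserved in this extract; on a typical Linux system, either of the following shows the relevant limits (a sketch, assuming Linux):

```shell
#!/bin/sh
# Per-process open-file limit for the current shell/user:
ulimit -n

# System-wide maximum number of file handles:
cat /proc/sys/fs/file-max
```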
Then, set the following parameters in your .conf file to the lower of 100,000 or the file handle allocation limit determined above. The following example shows the relevant parameters in the .conf file set to 100000; the actual setting for your installation may differ depending on the file handle allocation limit in your system.
root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000