The tables below describe what is required for each single-service node. In a High Availability configuration, each of the HA server instances is a single-service node.
Artifactory, Xray, Mission Control, and other JFrog products must all be set with static IP addresses, and these services must be able to communicate directly with each other over the same LAN. Hosting the services in geographically distant locations may cause health checks to fail intermittently. Ensure that the required ports are open and that no firewalls block communication between the services.
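A quick way to confirm that a service port on a peer node is reachable is a plain TCP probe. The sketch below uses bash's `/dev/tcp` pseudo-device; the host (`127.0.0.1`) and port (`8082`) are placeholders for a peer node's address and the service port you need to verify.

```shell
# Probe TCP connectivity to a peer node (host and port are placeholders).
if timeout 3 bash -c 'cat < /dev/null > /dev/tcp/127.0.0.1/8082' 2>/dev/null; then
  echo "port open"
else
  echo "port blocked"
fi
```

Run the probe from each node toward every other node to confirm that no firewall sits between them.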
Java-based products (Artifactory, Distribution, Mission Control) must run with JDK 11, which is already bundled into the applications.
JVM Memory Allocation
While not a strict requirement, we recommend that you modify the JVM memory parameters used to run Artifactory.
You should reserve at least 512 MB for Artifactory. The larger your repository or number of concurrent users, the larger the -Xms and -Xmx values you need to set.
Set your JVM parameters in the system.yaml configuration file.
shared:
  extraJavaOpts: "-Xms512m -Xmx2g"
Artifactory has been tested with the latest versions of:
The JFrog Platform requires time synchronization between all JFrog services within the same Platform.
Unsynchronized services may cause issues during authentication and token verification.
For Docker and Docker Compose installations, JFrog services require Docker v18 and above (18.09 and above for Pipelines) and Docker Compose v1.24 and above to be installed on the machine on which you want to run them.
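A minimal sketch for checking these minimums from a script. The `version_ge` helper is defined here for illustration (it is not a JFrog or Docker tool) and relies on GNU `sort -V`; the installed version numbers are hard-coded placeholders that you would normally parse from `docker --version` and `docker-compose --version`.

```shell
# version_ge A B: succeeds when dotted version A >= B (helper defined here).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare installed versions (placeholders) against the documented minimums:
# Docker 18 (18.09 for Pipelines) and Docker Compose 1.24.
version_ge "20.10.7" "18.09" && echo "Docker version OK"
version_ge "1.29.2" "1.24" && echo "Docker Compose version OK"
```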
For installation instructions, please refer to the Docker and Docker Compose documentation.
The Oracle database is not supported on Docker and Docker Compose installations for Artifactory.
For Helm Charts installations, JFrog services require the following prerequisites:
In most cases, we recommend storage that is at least 3 times the total size of stored artifacts, in order to accommodate system backups. However, when working with a very large volume of artifacts, the recommendation may vary greatly according to the specific setup of your system. Therefore, when working with over 10 TB of stored artifacts, please contact JFrog Support, who will work with you to provide a storage recommendation customized to your specific setup.
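The 3x rule of thumb above can be turned into a quick estimate. The artifact total below is a placeholder; substitute your own measured figure.

```shell
# Estimate minimum storage as 3x the total artifact size (rule of thumb above).
# TOTAL_ARTIFACTS_GB is a placeholder; substitute your own measured total.
TOTAL_ARTIFACTS_GB=500
echo "Minimum recommended storage: $((TOTAL_ARTIFACTS_GB * 3)) GB"
```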
Xray downloads fetched artifacts and deletes them after indexing. However, running more parallel indexing processes means more temporary files exist at the same time, which requires more space. This is especially applicable to large BLOBs such as Docker images.
During a deep recursive scan, in which Xray indexes artifacts and their dependencies (metadata), Xray needs to concurrently manage many open files. The default maximum number of concurrently open files on Linux systems is usually too low for the indexing process and can therefore cause a performance bottleneck. For optimal performance, we recommend increasing the number of files that can be opened concurrently to 100,000 (or the maximum your system can handle) by following the steps below.
Use the following command to determine the current file handle allocation limit:
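The command itself is missing from this page; on Linux, the per-process open-file limits are reported by the `ulimit` shell builtin:

```shell
ulimit -Sn   # current soft limit on open file descriptors
ulimit -Hn   # current hard limit
```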
Then, set the following parameters in your limits.conf file to the lower of 100,000 or the file handle allocation limit determined above. The example below shows the relevant parameters in the limits.conf file set to 100000; the actual setting for your installation may differ depending on the file handle allocation limit in your system.
root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
Video: https://www.youtube.com/embed/bPhYrgjV0so