JFrog Artifactory 5.x User Guide


This page describes how to set up a set of Artifactory nodes as an Artifactory HA system.

Each of the HA components is configured individually and a common setup file is configured to bring together all of the components in the system as a whole.



Artifactory HA is supported from Artifactory 3.1 and above. If you are running a previous version of Artifactory, you first need to upgrade as described in Upgrading Artifactory.

All nodes within the same Artifactory HA installation must be running the same Artifactory version and the same JVM version.


Artifactory HA is supported with an Enterprise License. When setting up your HA cluster you need to install a different license on each of the Artifactory nodes in the cluster.

If you have more cluster nodes than the number of licenses provided, you may purchase additional Artifactory Enterprise licenses.


Artifactory HA requires the following hardware:

  • Load balancer with session affinity (sticky session)
  • NFS (Network File System)
  • External database server with a single URL to the database
  • All the Artifactory HA components (Artifactory cluster nodes, database server, NFS server and load balancer) must be within the same fast LAN
  • All the HA nodes must communicate with each other through dedicated TCP ports

Artifactory HA requires an external database and currently supports Oracle, MySQL, MS SQL and PostgreSQL. For details on how to configure any of these databases please refer to Configuring the Database.
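For illustration, a MySQL backend is typically defined with JDBC connection properties along the following lines (the property names and values here are an assumption based on a standard JDBC setup; see Configuring the Database for the authoritative names):

```
type=mysql
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://db.mycompany.local:3306/artifactory?characterEncoding=UTF-8
username=artifactory
password=password
```

The single database URL is what allows all cluster nodes to share one consistent view of the repository metadata.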


Home Directories

When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system, and a common $CLUSTER_HOME directory that is found on the NFS.

The general layout of these directories is as follows:


$ARTIFACTORY_HOME
    |- etc/
        |- logback.xml
        |- artifactory.lic
    |- data/
        |- tmp/
    |- logs/
    |- bin/
    |- misc/
    |- webapps/
    |- tomcat/
        |- lib/
            |- <jdbc driver>

$CLUSTER_HOME
    |- ha-etc/
        |- mimetypes.xml
        |- ui/
        |- plugins/
    |- ha-data/
        |- filestore/
        |- tmp/
    |- ha-backup/

Note that shared configuration files such as mimetypes.xml, which reside under the $ARTIFACTORY_HOME/etc folder in a standalone installation, are moved to the $CLUSTER_HOME/ha-etc folder in an HA installation.


Each of the Artifactory cluster nodes must have full write privileges on the $CLUSTER_HOME directory tree.

Configuring Artifactory HA

For Artifactory HA to operate properly you need to ensure that the cluster is configured correctly as a shared system, and that each specific server is configured correctly as a node in the system.

To ensure correct configuration of your system at each step, we recommend the following procedure:

  1. Configure the cluster
    Then, for each cluster node in your system do the following:
  2. Install the cluster node
  3. Configure the cluster node
  4. Test your HA configuration

Configuring the Cluster

You need to create a shared $CLUSTER_HOME directory on your NFS storage which is visible and writable to all the Artifactory cluster nodes in your system.

The contents of this shared directory are as follows:

ha-etc/
Shared configuration files for the cluster, including the backend storage and information on all the Artifactory cluster nodes in your system.

ha-data/
Shared data for the cluster. Among other things, this directory contains the filestore when using Artifactory in db-filesystem mode.

ha-backup/
Shared backup directory for automatic backups performed from one of the Artifactory cluster nodes.


You need to manually create the following two files for the shared cluster configuration:

A cluster properties file, containing configuration parameters that are shared by all of the Artifactory cluster nodes. This file is fully described below.

A storage configuration file, identical to the file used in a regular Artifactory installation. This file replaces the corresponding file in each individual cluster node, since each node reads the properties from this file in the common location under $CLUSTER_HOME. It is created in the right location when you configure the first Artifactory cluster node in your system.

Mounting the NFS from Artifactory HA nodes

When mounting the NFS on the client side, make sure to mount with client-side caching disabled. This ensures that nodes in your HA cluster will immediately see any changes to the NFS made by other nodes.
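As an illustration, such a mount could be declared in /etc/fstab as follows. The server path and mount point are placeholders, and the caching-related options shown are standard Linux NFS client options; verify the exact option required against your NFS version's documentation:

```
# Shared cluster home exported from the NFS server (paths are illustrative).
# noac and lookupcache=none disable client-side attribute and lookup caching
# so that changes made by one node are immediately visible to the others.
nfs-server:/export/artifactory  /mnt/shared/artifactory  nfs  rw,hard,noac,lookupcache=none  0 0
```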

If there is a firewall between the cluster nodes, both the Hazelcast port (10001) and the Tomcat port (default 8081) should be open between all nodes.


The cluster properties file contains the following property:

Security token: an ASCII string token that you select, which is used to send secured messages between the servers. This can be any string you choose (like a password).

Make sure that each cluster has a unique security token.
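For example, assuming the property is named security.token, a cluster properties file could be as simple as the following sketch (the token value is illustrative; choose your own secret):

```
security.token=E4QLmLc22satXYZ
```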

Installing a Cluster Node

As mentioned in the Requirements section above, Artifactory HA is supported from Artifactory 3.1 and above.

Whether you are performing a new installation or upgrading a current one, we recommend that you have all of your Artifactory cluster nodes installed and fully functional with Artifactory 3.1 or above as separate servers before configuring them to be a part of the HA cluster.

We recommend that you complete the configuration of each Artifactory cluster node, and its integration into your HA cluster as described below before going on to the next node.

New installation

For a new installation, simply follow the instructions in Installing Artifactory.

For the first node that you install, you need to move the shared configuration file from your server's $ARTIFACTORY_HOME/etc/ directory to your $CLUSTER_HOME/ha-etc/ directory.

Upgrading a current installation of Artifactory Pro to Artifactory HA

  1. Verify that the Artifactory server on the current installation you are upgrading is shut down
  2. To upgrade your current installation to Artifactory 3.1 and above, please refer to Upgrading Artifactory
  3. Copy $ARTIFACTORY_HOME/etc/ and $ARTIFACTORY_HOME/data/ from your current installation to the corresponding locations under $CLUSTER_HOME.
    You only need to do this once, so for any subsequent servers that you upgrade to configure into your Artifactory HA system, this step can be omitted.

JDBC driver

You should also verify that your database JDBC driver is correctly located in $ARTIFACTORY_HOME/tomcat/lib/ for each Artifactory cluster node.

Configuring a Cluster Node

  1. Shut down the Artifactory cluster node.

  2. Create an HA node properties file under $ARTIFACTORY_HOME/etc/ and populate it with the following parameters (you can use the file under $ARTIFACTORY_HOME/misc/ha/ as a template to define your file):

    node.id
    A unique, descriptive name for this server. Make sure that each node has an id that is unique on your whole network.

    cluster.home
    The location of the $CLUSTER_HOME directory that you set up on your NFS.

    context.url
    The context url that should be used to communicate with this server within the cluster. Use an explicit IP address: the host must be explicitly defined as an IP address and not as a host name.

    membership.port
    The port that should be used to communicate with this server within the cluster. If not specified, Artifactory will allocate a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in your organization's security systems such as firewalls.

    primary
    (true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and its value defaults to "false".

    hazelcast.interface
    Optional. When nodes in the same cluster are running on different networks (e.g. nodes on different docker hosts), set this value to match the server's internal IP address. If this parameter is set, context.url must be set with the machine's externally accessible address.

    File permissions: on Linux, once the file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).
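For example, a node properties file for a server called art1, connected to a mounted drive with $CLUSTER_HOME at /mnt/shared/artifactory/clusterhome, communicating with the other nodes through port 10001 and configured as the primary, could look like the following sketch (the IP address shown is illustrative):

```
node.id=art1
cluster.home=/mnt/shared/artifactory/clusterhome
context.url=http://10.0.0.11:8081/artifactory
membership.port=10001
primary=true
```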

  3. Set a valid Artifactory HA license in your $ARTIFACTORY_HOME/etc/artifactory.lic file. If this file exists (from a previous installation of Artifactory Pro), then simply replace your Pro license with the Artifactory HA license, otherwise create the file and populate it with your HA license.

  4. Mount the $CLUSTER_HOME directory as defined in the cluster.home property of the node properties file under $ARTIFACTORY_HOME/etc/.

  5. Test your HA configuration after each cluster node that you add to your system.

  6. After you have installed the first cluster node and verified that your system is working correctly as an HA installation, you should configure the Custom URL Base. 
    In the Admin tab under Configuration | General, set the Custom URL Base field to the URL of the Load Balancer.

Once you have successfully installed, configured and tested a cluster node you may go on to the next one that you want to add to your system.

Testing Your HA Configuration

The following are a series of tests you can do to verify that your system is configured correctly as an HA installation:

  1. Directly Access the Artifactory UI for the server you have just configured
  2. In the Admin module go to Advanced | System Logs to view the log and verify that you see an entry for Artifactory Cluster Home.
  3. The bottom of the module navigation bar should also indicate that you are running with Artifactory HA. In case of an error you will see an error message in the page header.
  4. Access Artifactory through your load balancer and log in as Admin.
  5. In the Home module, you should see that Artifactory HA is Available.

  6. In the Admin module go to Configuration. There should be a section called High Availability. When selected, you should see a table with details on all the Artifactory nodes in your cluster.

  7. In the Admin module under Configuration | General, verify that the Custom URL Base field is correctly configured to the URL of the Load Balancer.
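The checks above can also be scripted. Artifactory exposes a REST ping endpoint (GET /api/system/ping, which returns "OK" from a healthy node), so a sketch like the following can verify reachability through the load balancer; the LB_URL value is an illustrative placeholder:

```shell
# Ping Artifactory through the load balancer (URL is illustrative).
LB_URL="${LB_URL:-http://artifactory.mycompany.local/artifactory}"

# /api/system/ping returns the plain text "OK" when the node is healthy.
if RESPONSE=$(curl -sf --max-time 5 "$LB_URL/api/system/ping" 2>/dev/null) \
   && [ "$RESPONSE" = "OK" ]; then
  STATUS="reachable"
else
  STATUS="not reachable"
fi
echo "Artifactory at $LB_URL is $STATUS"
```

Running this against each node's own context.url, and then against the load balancer, confirms both the nodes and the balancing layer.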

Upgrading Artifactory HA

From version 3.5, the procedure for upgrading Artifactory HA changed.

The sections below provide upgrade instructions according to your current version, and assume you are upgrading to the latest version.

Upgrading from Any Version below 3.5

Upgrading Artifactory HA from a version below 3.5 requires shutting down all of your Artifactory HA nodes, upgrading and restarting the Master node, and then upgrading and restarting the slaves one node at a time as follows:

Configure an explicit IP in your context.url

Before continuing with your upgrade, make sure you have configured the context.url of your cluster nodes with an explicit IP address.

  1. Shut down all of your Artifactory HA nodes one at a time.
  2. Upgrade the Master node using the regular procedure described in Upgrading Artifactory, and then restart it.
  3. Upgrade and restart all the other Slave nodes one at a time using the regular procedure described in Upgrading Artifactory.

Upgrading from Version 3.5+

If your current version is 3.5 or higher, upgrading an HA cluster is done by first upgrading the master node, restarting it, and then upgrading the rest of the slave nodes one at a time.

At any time, at least one Artifactory node continues to run which means that there is no disruption of service.

This zero-downtime upgrade process should be executed as follows:

  1. Perform a graceful shutdown of the Artifactory master node. While the master node is down, the load balancer should redirect all queries to the slave nodes.
  2. Upgrade the master node using the regular procedure described in Upgrading Artifactory.
  3. Restart the master node. When the master starts up, it recognizes that the HA cluster nodes are not all running the same version of Artifactory, and consequently the system is limited to allowing uploads and downloads. 
    Any attempt to perform other actions such as changing the DB schema, modifying permissions, changing repository configuration and more, are strictly blocked. This limitation will continue until all the cluster nodes are once again running the same version.

    Version inconsistency generates exceptions

    Running the HA cluster nodes with different versions generates exceptions which can be seen in the log files and reflect the temporary inconsistent state during the upgrade process. This is normal and should be ignored until all the cluster nodes are once again running the same version.

  4. For each slave node

    1. Perform a graceful shutdown of Artifactory.

    2. Upgrade it using the regular procedure described in Upgrading Artifactory.

    3. Restart the node.

Once all nodes have been upgraded to the same version, your Artifactory HA installation will be fully functional again.