This page describes how to set up a set of Artifactory nodes as an Artifactory HA system.
Each of the HA components is configured individually and a common setup file is configured to bring together all of the components in the system as a whole.
Artifactory HA is supported from Artifactory 3.1 and above. If you are running a previous version of Artifactory please first upgrade to v3.1 as described in Upgrading Artifactory.
All nodes within the same Artifactory HA installation must be running the same Artifactory version and the same JVM version.
Artifactory HA is provided as a feature of the Artifactory Pro Enterprise Value Pack with licenses for a set number of cluster nodes.
When setting up Artifactory HA you need to install a different license on each of the Artifactory nodes in the cluster.
If you have more cluster nodes than the number of licenses provided you may purchase additional Artifactory HA licenses.
Artifactory HA requires the following hardware:
- Load balancer with session affinity (sticky session)
- NFS (Network File System)
- External database server with a single URL to the database
- All the Artifactory HA components (Artifactory cluster nodes, database server, NFS server and load balancer) must be within the same fast LAN
- All the HA nodes must communicate with each other through dedicated TCP ports
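The session-affinity requirement can be met by most load balancers. As an illustrative sketch only (nginx and the node addresses below are assumptions, not part of the Artifactory distribution), an nginx configuration with IP-based stickiness in front of two cluster nodes could look like:

```
upstream artifactory-ha {
    ip_hash;                  # session affinity: same client IP always hits the same node
    server 10.0.0.11:8081;    # hypothetical cluster node 1
    server 10.0.0.12:8081;    # hypothetical cluster node 2
}

server {
    listen 80;
    location / {
        proxy_pass http://artifactory-ha;
    }
}
```

Any load balancer that supports sticky sessions (hardware or software) can be used; consult its documentation for the equivalent configuration.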
Artifactory HA requires an external database and currently supports Oracle, MySQL, MS SQL and PostgreSQL. For details on how to configure any of these databases please refer to Changing the Default Storage.
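As a hedged illustration of what such a configuration involves (the authoritative property reference is in Changing the Default Storage; the values below are placeholders), a storage.properties entry for a MySQL backend might look something like:

```
type=mysql
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://db.example.local:3306/artifactory?characterEncoding=UTF-8
username=artifactory
password=changeme
```

All cluster nodes must point at the same database through the same single URL.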
When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system, and a common $CLUSTER_HOME directory located on the NFS.
The general layout of these directories is as follows: each node's $ARTIFACTORY_HOME contains the database JDBC driver (under tomcat/lib/) and, under etc/, the node configuration files artifactory.system.properties and ha-node.properties.

Note that the artifactory.system.properties file under the $ARTIFACTORY_HOME/etc folder should be replaced with an ha-node.properties file; the artifactory.system.properties file is moved to the shared configuration location under $CLUSTER_HOME.

Each of the Artifactory cluster nodes must have full write privileges on the $CLUSTER_HOME directory tree.
Configuring Artifactory HA
For Artifactory HA to operate properly you need to ensure that the cluster is configured correctly as a shared system, and that each specific server is configured correctly as a node in the system.
To ensure correct configuration of your system at each step, we recommend the following procedure:
- Configure the cluster
Then, for each cluster node in your system do the following:
- Install the cluster node
- Configure the cluster node
- Test your HA configuration
Configuring the Cluster
You need to create a shared $CLUSTER_HOME directory on your NFS storage which is visible and writable to all the Artifactory cluster nodes in your system.
The contents of this shared directory are as follows:

- Shared configuration files for the cluster, including the backend storage configuration and information on all the Artifactory cluster nodes in your system
- Shared data for the cluster. Among other things, this directory contains the filestore when using Artifactory in db-filesystem mode.
- Shared backup directory for automatic backups performed from one of the Artifactory cluster nodes.
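To make this shared directory available identically on every node, each cluster node typically mounts the same NFS export. A hypothetical /etc/fstab entry (the server name and export path are placeholders to adapt to your environment) could be:

```
nfs-server.example.local:/export/artifactory  /mnt/shared/artifactory/clusterhome  nfs  rw,hard,intr  0  0
```

The mount point must be the same path you later configure as $CLUSTER_HOME on each node.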
You need to manually create the following two files for the shared cluster configuration:

| File | Description |
|---|---|
| cluster.properties | Configuration parameters that are shared by all of the Artifactory cluster nodes |
| artifactory.system.properties | Identical to the artifactory.system.properties file of a standard Artifactory installation. This file replaces the one under $ARTIFACTORY_HOME/etc. |

In addition, a storage.properties file is created in the right location when you configure the first Artifactory cluster node in your system and is fully described below.

The cluster.properties file contains the following property:

| Property | Description |
|---|---|
| security.token | An ASCII string token that you select, which is used to send secured messages between the servers. This can be any string you choose (like a password). |
For example, a cluster.properties file for an Artifactory HA installation could be:
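A minimal sketch of such a file; the property name security.token and the value shown are assumptions to adapt (the token can be any string you choose):

```
security.token=SomeSuperSecretToken123
```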
Make sure that each cluster has a unique security token.
Installing a Cluster Node
As mentioned in the Requirements section above, Artifactory HA is supported from Artifactory 3.1 and above.
Whether you are performing a new installation or upgrading a current one, we recommend that you have all of your Artifactory cluster nodes installed and fully functional with Artifactory 3.1 or above as separate servers before configuring them to be a part of the HA cluster.
We recommend that you complete the configuration of each Artifactory cluster node, and its integration into your HA cluster as described below before going on to the next node.
For a new installation, simply install Artifactory 3.1 as described in Installing Artifactory.
For the first node that you install, you need to move the storage.properties file from your server's $ARTIFACTORY_HOME/etc/ directory to the shared configuration location under $CLUSTER_HOME.
Upgrading a current installation of Artifactory Pro to Artifactory HA
- Verify that the Artifactory server on the current installation you are upgrading is shut down
- To upgrade your current installation to Artifactory 3.1 and above, please refer to Upgrading Artifactory
- Move the contents of $ARTIFACTORY_HOME/data/ from your current installation to the corresponding locations under $CLUSTER_HOME. You only need to do this once, so for any subsequent servers that you upgrade to configure into your Artifactory HA system, this step can be omitted.
You should also verify that your database JDBC driver is correctly located in $ARTIFACTORY_HOME/tomcat/lib/ for each Artifactory cluster node.
Configuring a Cluster Node
Shut down the Artifactory cluster node.
Create an $ARTIFACTORY_HOME/etc/ha-node.properties file and populate it with the following parameters (you can use $ARTIFACTORY_HOME/misc/ha/ha-node.properties.template as a template to define your own ha-node.properties file):
| Parameter | Description |
|---|---|
| node.id | Unique descriptive name of this server. Make sure that each node has an id that is unique on your whole network. |
| cluster.home | The location of the $CLUSTER_HOME directory that you set up on your NFS. |
| context.url | The context URL that should be used to communicate with this server within the cluster. Make sure your context URL does not end with a slash ("/") character, and that the host is explicitly defined as an IP address and not as a host name. |
| membership.port | The port that should be used to communicate with this server within the cluster. If not specified, Artifactory will allocate a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in all of your organization's security systems, such as firewalls. |
| primary | (true/false) Indicates if this is the primary server. There must be one (and only one) server in the cluster configured to be the primary server. For other servers this parameter is optional and its value defaults to "false". |
For example, an ha-node.properties file for a server called art1, connected to a mounted drive at /mnt/shared/artifactory/clusterhome, communicating with the other nodes through port 10001, and configured as the "primary", would be as follows:
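A sketch of such a file; the node id and cluster home parameter names are assumptions based on the template file, and the IP address in context.url is a hypothetical placeholder for this node's actual address:

```
node.id=art1
cluster.home=/mnt/shared/artifactory/clusterhome
context.url=http://10.0.0.11:8081/artifactory
membership.port=10001
primary=true
```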
ha-node.properties file permissions
On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).
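As a sketch of that step (the path, and the artifactory user as owner, are assumptions to adjust for your installation; the chown line requires root and an existing artifactory user, so it is shown commented out):

```shell
#!/bin/sh
# Sketch of the ha-node.properties permissions step.
# HA_NODE_FILE stands in for $ARTIFACTORY_HOME/etc/ha-node.properties.
HA_NODE_FILE=${HA_NODE_FILE:-./ha-node.properties}
touch "$HA_NODE_FILE"                 # ensure the file exists for this demo
# chown artifactory: "$HA_NODE_FILE"  # requires root; "artifactory" user is an assumption
chmod 644 "$HA_NODE_FILE"             # -rw-r--r--
ls -l "$HA_NODE_FILE"
```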
Set a valid Artifactory HA license in your $ARTIFACTORY_HOME/etc/artifactory.lic file. If this file exists (from a previous installation of Artifactory Pro), simply replace your Pro license with the Artifactory HA license; otherwise create the file and populate it with your HA license.
Verify that this node has access to the $CLUSTER_HOME directory as defined in the ha-node.properties file.
Test your HA configuration after each cluster node that you add to your system.
After you have installed the first cluster node and verified that your system is working correctly as an HA installation, you should configure the Custom URL Base.
In the Admin tab under Configuration | General, set the Custom URL Base field to the URL of the Load Balancer.
Once you have successfully installed, configured and tested a cluster node you may go on to the next one that you want to add to your system.
Testing Your HA Configuration
The following are a series of tests you can do to verify that your system is configured correctly as an HA installation:
- Directly access the Artifactory UI for the server you have just configured
- In the Admin tab go to Advanced | System Logs to view the log and verify that you see an entry for Artifactory Cluster Home.
- The footer pane of the Artifactory UI should also indicate that you are running with Artifactory HA. In case of an error you will see an error message in the page header.
- Access Artifactory through your load balancer and log in as Admin.
In the Home tab, under Welcome, you should see that High Availability is Available under Pro Add-ons.
In the Admin tab go to Configuration. There should be a section called High Availability. When selected, you should see a table with details on all the Artifactory nodes in your cluster.
In the Admin tab under Configuration | General, verify that the Custom URL Base field is correctly configured to the URL of the Load Balancer.
Upgrading Artifactory HA
From version 3.5, the procedure for upgrading Artifactory HA changed.
The sections below provide upgrade instructions according to your current version, and assume you are upgrading to the latest version.
Upgrading from Any Version below 3.5
Upgrading Artifactory HA from a version below 3.5 requires shutting down all of your Artifactory HA nodes, upgrading and restarting the Master node, and then upgrading and restarting the slaves one node at a time, as follows:
Configure an explicit IP in your context.url
Before continuing with your upgrade, make sure you have configured the context.url of your cluster nodes with an explicit IP address.
- Shut down all of your Artifactory HA nodes one at a time
- Upgrade the Master node using the regular procedure described in Upgrading Artifactory, and then restart it.
- Upgrade and restart all the other Slave nodes one at a time using the regular procedure described in Upgrading Artifactory.
Upgrading from Version 3.5+
If your current version is 3.5 or higher, upgrading an HA cluster is done by first upgrading the master node, restarting it, and then upgrading the rest of the slave nodes one at a time.
At any time, at least one Artifactory node continues to run, which means that there is no disruption of service.
This zero-downtime upgrade process should be executed as follows:
- Perform a graceful shutdown of the Artifactory master node. While the master node is down, the load balancer should redirect all queries to the slave nodes.
- Upgrade the master node using the regular procedure described in Upgrading Artifactory.
Restart the master node. When the master starts up, it recognizes that the HA cluster nodes are not all running the same version of Artifactory, and consequently the system is limited to allowing uploads and downloads.
Any attempt to perform other actions such as changing the DB schema, modifying permissions, changing repository configuration and more, are strictly blocked. This limitation will continue until all the cluster nodes are once again running the same version.
Version inconsistency generates exceptions
Running the HA cluster nodes with different versions generates exceptions which can be seen in the log files and reflect the temporary inconsistent state during the upgrade process. This is normal and should be ignored until all the cluster nodes are once again running the same version.
For each slave node:
- Perform a graceful shutdown of Artifactory
- Upgrade it using the regular procedure described in Upgrading Artifactory
- Restart the node
Once all nodes have been upgraded to the same version, your Artifactory HA installation will be fully functional again.