
Overview

This page describes how to set up a set of Artifactory nodes as an Artifactory HA system.

Each of the HA components is configured individually and a common setup file is configured to bring together all of the components in the system as a whole.

Requirements

Version

Artifactory HA is supported from Artifactory 3.1 and above. If you are running a previous version of Artifactory please first upgrade to v3.1 as described in Upgrading Artifactory.

All nodes within the same Artifactory HA installation must be running the same Artifactory version and the same JVM version.

Licensing

Artifactory HA is provided as a feature of the Artifactory Pro Enterprise Value Pack with licenses for a set number of cluster nodes.

When setting up Artifactory HA you need to install a different license on each of the Artifactory nodes in the cluster.

If you have more cluster nodes than the number of licenses provided, you may purchase additional Artifactory HA licenses.

Hardware

Artifactory HA requires the following hardware:

  • Load balancer with session affinity (sticky session)
  • NFS (Network File System)
  • External database server with a single URL to the database

Network

  • All the Artifactory HA components (Artifactory cluster nodes, database server, NFS server and load balancer) must be within the same fast LAN
  • All the HA nodes must communicate with each other through dedicated TCP ports

Database

Artifactory HA requires an external database and currently supports Oracle, MySQL, MS SQL and PostgreSQL. For details on how to configure any of these databases please refer to Changing the Default Storage.

Home Directories

When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system, and a common $CLUSTER_HOME directory that is found on the NFS.

The general layout of these directories is as follows:

|- $ARTIFACTORY_HOME 
    |- etc/
        |- ha-node.properties      
        |- logback.xml
        |- artifactory.lic
    |- data/
        |- tmp/
        |- artifactory.properties
    |- logs/
    |- bin/
    |- misc/
    |- webapps/
    |- tomcat/
        |- lib/
            |- <jdbc driver>

|- $CLUSTER_HOME 
    |- ha-etc/
        |- cluster.properties
        |- storage.properties
        |- artifactory.system.properties
        |- mimetypes.xml
        |- ui/
        |- plugins/
    |- ha-data/
        |- filestore/
        |- tmp/
        |- artifactory.properties
    |- ha-backup/

 

artifactory.system.properties and ha-node.properties

Note that the artifactory.system.properties file in the $ARTIFACTORY_HOME/etc folder is replaced by an ha-node.properties file; the artifactory.system.properties file is moved to the $CLUSTER_HOME/ha-etc folder.

Privileges

Each of the Artifactory cluster nodes must have full write privileges on the $CLUSTER_HOME directory tree.


Configuring Artifactory HA

For Artifactory HA to operate properly you need to ensure that the cluster is configured correctly as a shared system, and that each specific server is configured correctly as a node in the system.

To ensure correct configuration of your system at each step, we recommend the following procedure:

  1. Configure the cluster
    Then, for each cluster node in your system do the following:
  2. Install the cluster node
  3. Configure the cluster node
  4. Test your HA configuration

Configuring the Cluster

You need to create a shared $CLUSTER_HOME directory on your NFS storage which is visible and writable to all the Artifactory cluster nodes in your system.
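As a sketch, the shared directory tree can be created as follows (an illustrative local path is used here; in production this is a directory on your NFS mount, e.g. /mnt/shared/artifactory/clusterhome):

```shell
# Create the shared cluster layout (substitute your real NFS mount point)
CLUSTER_HOME=./clusterhome
mkdir -p "$CLUSTER_HOME"/ha-etc "$CLUSTER_HOME"/ha-data/filestore "$CLUSTER_HOME"/ha-backup
# Every cluster node needs full write privileges on this tree
chmod -R u+rwX,g+rwX "$CLUSTER_HOME"
```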

The contents of this shared directory are as follows:

ha-etc

Shared configuration files for the cluster, including the backend storage and information on all the Artifactory cluster nodes in your system

ha-data

Shared data for the cluster. Among other things, this directory contains the filestore when using Artifactory in db-filesystem mode.

ha-backup

Shared backup directory for automatic backups performed from one of the Artifactory cluster nodes.

 

You need to manually create the following two files for the shared cluster configuration:

$CLUSTER_HOME/ha-etc/cluster.properties

Configuration parameters that are shared by all of the Artifactory cluster nodes

$CLUSTER_HOME/ha-etc/storage.properties

Identical to the storage.properties file used in a regular Artifactory installation.

This file replaces the storage.properties file in each individual cluster node since each node reads the properties from this file in the common location under $CLUSTER_HOME.

The file is created in the right location when you configure the first Artifactory cluster node in your system and is fully described below.
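For reference, a storage.properties file for MySQL in db-filesystem mode typically looks like the following sketch. The host, database name and credentials are placeholders, and property names may vary by version; consult Changing the Default Storage for the authoritative list:

```
type=mysql
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://db-host:3306/artifactory?characterEncoding=UTF-8&elideSetAutoCommits=true
username=artifactory
password=password
binary.provider.type=filesystem
```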

 

The cluster.properties file contains the following property:

security.token=<your_selected_token>

An ASCII string token that you select, which is used to send secured messages between the servers. This can be any string you choose (like a password).

 

For example, a cluster.properties file for an Artifactory HA installation could be:

security.token=76b07383dcda344979681e01efa5ac50

Uniqueness

Make sure that each cluster has a unique security token.
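One simple way to produce a unique token is with openssl, as sketched below (any method that yields a unique ASCII string works; the local path is a stand-in for your NFS cluster home):

```shell
# Generate a 32-character hex token and write the shared cluster.properties
CLUSTER_HOME=./clusterhome            # substitute your NFS cluster home
mkdir -p "$CLUSTER_HOME/ha-etc"
echo "security.token=$(openssl rand -hex 16)" > "$CLUSTER_HOME/ha-etc/cluster.properties"
```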


Installing a Cluster Node

As mentioned in the Requirements section above, Artifactory HA is supported from Artifactory 3.1 and above.

Whether you are performing a new installation or upgrading a current one, we recommend that you have all of your Artifactory cluster nodes installed and fully functional with Artifactory 3.1 or above as separate servers before configuring them to be a part of the HA cluster.

 

We recommend that you complete the configuration of each Artifactory cluster node, and its integration into your HA cluster as described below before going on to the next node.

New installation

For a new installation, simply install Artifactory 3.1 as described in Installing Artifactory.

For the first node that you install you need to move the storage.properties file from your server's $ARTIFACTORY_HOME/etc/ directory to your $CLUSTER_HOME/ha-etc/ directory.
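For example, the move can be sketched as follows (illustrative local paths; the touch line stands in for the file created by the Artifactory installation):

```shell
ARTIFACTORY_HOME=./artifactory        # substitute your real Artifactory home
CLUSTER_HOME=./clusterhome            # substitute your NFS cluster home
mkdir -p "$ARTIFACTORY_HOME/etc" "$CLUSTER_HOME/ha-etc"
touch "$ARTIFACTORY_HOME/etc/storage.properties"   # stand-in for the installed file
# First node only: move storage.properties to the shared location
mv "$ARTIFACTORY_HOME/etc/storage.properties" "$CLUSTER_HOME/ha-etc/"
```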

Upgrading a current installation of Artifactory Pro to Artifactory HA

  1. Verify that the Artifactory server on the current installation you are upgrading is shut down
  2. To upgrade your current installation to Artifactory 3.1 and above, please refer to Upgrading Artifactory
  3. Copy $ARTIFACTORY_HOME/etc/ and $ARTIFACTORY_HOME/data/ from your current installation to the corresponding locations under $CLUSTER_HOME.
    You only need to do this once, so for any subsequent servers that you upgrade to configure into your Artifactory HA system, this step can be omitted.
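The copy in step 3 can be sketched as follows. This assumes etc/ and data/ map to ha-etc/ and ha-data/ respectively; the paths are illustrative, and the mkdir/touch lines stand in for an existing installation:

```shell
ARTIFACTORY_HOME=./artifactory        # substitute your current installation
CLUSTER_HOME=./clusterhome            # substitute your NFS cluster home
# Stand-ins for an existing installation's files:
mkdir -p "$ARTIFACTORY_HOME/etc" "$ARTIFACTORY_HOME/data"
touch "$ARTIFACTORY_HOME/etc/storage.properties" "$ARTIFACTORY_HOME/data/artifactory.properties"
# First node only: copy configuration and data to the shared locations
mkdir -p "$CLUSTER_HOME/ha-etc" "$CLUSTER_HOME/ha-data"
cp -a "$ARTIFACTORY_HOME/etc/." "$CLUSTER_HOME/ha-etc/"
cp -a "$ARTIFACTORY_HOME/data/." "$CLUSTER_HOME/ha-data/"
```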
     

JDBC driver

You should also verify that your database JDBC driver is correctly located in $ARTIFACTORY_HOME/tomcat/lib/ for each Artifactory cluster node.

Configuring a Cluster Node

  1. Shut down the Artifactory cluster node.

  2. Create an $ARTIFACTORY_HOME/etc/ha-node.properties file and populate it with the following parameters (you can use $ARTIFACTORY_HOME/misc/ha/ha-node.properties.template as a template to define your ha-node.properties file): 

    node.id

    Unique descriptive name of this server.

    Uniqueness

    Make sure that each node has an id that is unique on your whole network.

    cluster.home

    The location of $CLUSTER_HOME that you set up on your NFS.

    context.url

    The context url that should be used to communicate with this server within the cluster. 

    Don't end with a slash ("/")

    Make sure your context url does not end with a slash character

    Use an explicit IP address

    The host must be explicitly defined as an IP address and not as a host name.

    membership.port

    The port that should be used to communicate with this server within the cluster.
    If not specified, Artifactory will allocate a port automatically. However, we recommend setting this to a fixed value to ensure that the allocated port is open in all of your organization's security systems, such as firewalls.

    primary

    (true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and defaults to "false".

     

    For example, an ha-node.properties file for a server called art1 connected to a mounted drive with $CLUSTER_HOME at /mnt/shared/artifactory/clusterhome, communicating with the other nodes through port 10001, and configured as the "primary" would be as follows:

    node.id=art1
    cluster.home=/mnt/shared/artifactory/clusterhome
    context.url=http://10.0.0.121:8081/artifactory
    membership.port=10001
    primary=true

    ha-node.properties file permissions

    On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--)

  3. Set a valid Artifactory HA license in your $ARTIFACTORY_HOME/etc/artifactory.lic file. If this file exists (from a previous installation of Artifactory Pro), then simply replace your Pro license with the Artifactory HA license, otherwise create the file and populate it with your HA license.

  4. Mount the $CLUSTER_HOME directory as defined in the cluster.home property of $ARTIFACTORY_HOME/etc/ha-node.properties

  5. Test your HA configuration after each cluster node that you add to your system.

  6. After you have installed the first cluster node and verified that your system is working correctly as an HA installation, you should configure the Custom URL Base. 
    In the Admin tab under Configuration | General, set the Custom URL Base field to the URL of the Load Balancer.

Once you have successfully installed, configured and tested a cluster node you may go on to the next one that you want to add to your system.
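Putting steps 2 and 4 together, preparing a node might look like the following sketch. The paths are local stand-ins, the property values come from the example above, and the mount command is shown as a comment since it requires root and a real NFS export (the server path shown is hypothetical):

```shell
ARTIFACTORY_HOME=./artifactory        # e.g. /opt/artifactory in a real install
mkdir -p "$ARTIFACTORY_HOME/etc"

# Step 2: create ha-node.properties with the example values from above
cat > "$ARTIFACTORY_HOME/etc/ha-node.properties" <<'EOF'
node.id=art1
cluster.home=/mnt/shared/artifactory/clusterhome
context.url=http://10.0.0.121:8081/artifactory
membership.port=10001
primary=true
EOF
chmod 644 "$ARTIFACTORY_HOME/etc/ha-node.properties"

# Step 4: mount the cluster home (run as root; hypothetical NFS export):
#   mount -t nfs nfs-server:/export/artifactory/clusterhome /mnt/shared/artifactory/clusterhome
```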


Testing Your HA Configuration

The following are a series of tests you can do to verify that your system is configured correctly as an HA installation:

  1. Directly Access the Artifactory UI for the server you have just configured
  2. In the Admin tab go to Advanced | System Logs to view the log and verify that you see an entry for Artifactory Cluster Home.
    Artifactory HA Log File 
  3. The footer pane of the Artifactory UI should also indicate that you are running with Artifactory HA. In case of an error you will see an error message in the page header.
    Artifactory HA in the UI footer pane
  4. Access Artifactory through your load balancer and log in as Admin.
  5. In the Home tab, under Welcome, you should see that High Availability is Available under Pro Add-ons
     

  6. In the Admin tab go to Configuration. There should be a section called High Availability. When selected, you should see a table with details on all the Artifactory nodes in your cluster, as displayed below.

    Artifactory HA Section

  7. In the Admin tab under Configuration | General, verify that the Custom URL Base field is correctly configured to the URL of the Load Balancer.


Upgrading Artifactory HA

From version 3.5, the procedure for upgrading Artifactory HA changed.

The sections below provide upgrade instructions according to your current version, and assume you are upgrading to the latest version.

Upgrading from Any Version below 3.5

Upgrading Artifactory HA from a version below 3.5 requires shutting down all of your Artifactory HA nodes, upgrading and restarting the Master node, and then upgrading and restarting the slaves one node at a time, as follows:

Configure an explicit IP in your context.url

Before continuing with your upgrade, make sure you have configured the context.url of your cluster nodes with an explicit IP address.

  1. Shut down all of your Artifactory HA nodes one at a time
  2. Upgrade the Master node using the regular procedure described in Upgrading Artifactory, and then restart it.
  3. Upgrade and restart all the other Slave nodes one at a time using the regular procedure described in Upgrading Artifactory.

Upgrading from Version 3.5+

If your current version is 3.5 or higher, upgrading an HA cluster is done by first upgrading the master node, restarting it, and then upgrading the rest of the slave nodes one at a time.

At any time, at least one Artifactory node continues to run which means that there is no disruption of service.

This zero-downtime upgrade process should be executed as follows:

  1. Perform a graceful shutdown of the Artifactory master node. While the master node is down, the load balancer should redirect all queries to the slave nodes.
  2. Upgrade the master node using the regular procedure described in Upgrading Artifactory.
  3. Restart the master node. When the master starts up, it recognizes that the HA cluster nodes are not all running the same version of Artifactory, and consequently the system is limited to allowing uploads and downloads. 
    Any attempt to perform other actions such as changing the DB schema, modifying permissions, changing repository configuration and more, are strictly blocked. This limitation will continue until all the cluster nodes are once again running the same version.

    Version inconsistency generates exceptions

    Running the HA cluster nodes with different versions generates exceptions which can be seen in the log files and reflect the temporary inconsistent state during the upgrade process. This is normal and should be ignored until all the cluster nodes are once again running the same version.

  4. For each slave node

    1. Perform a graceful shutdown of Artifactory.

    2. Upgrade it using the regular procedure described in Upgrading Artifactory.

    3. Restart the node

Once all nodes have been upgraded to the same version, your Artifactory HA installation will be fully functional again.

 

 

 

 

 

 
