
Overview

This page describes how to set up a set of Artifactory nodes as an Artifactory HA cluster.

Each of the HA components is configured individually and a common setup file is configured to bring together all of the components in the system as a whole. 

Requirements

Version

Artifactory HA is supported from Artifactory 3.1 and above. If you are running a previous version of Artifactory, you first need to upgrade as described in Upgrading Artifactory.

All nodes within the same Artifactory HA installation must be running the same Artifactory version and the same JVM version.

Licensing

Artifactory HA is supported with an Enterprise License (https://www.jfrog.com/pricing/). Each node in the cluster must be activated with a different license; however, this is managed transparently and automatically by the Artifactory Cluster License Manager.

Hardware

Artifactory HA requires the following hardware:

  • Load balancer
  • External database server with a single URL to the database

Network

  • All the Artifactory HA components (Artifactory cluster nodes, database server and load balancer) must be within the same fast LAN.
  • All the HA nodes must communicate with each other through dedicated TCP ports.
  • Network communication must be enabled between the cluster nodes for each of the following: the context.url, and the hazelcast.interface together with the membership.port.

Database

Artifactory HA requires an external database and currently supports Oracle, MySQL, MS SQL and PostgreSQL. For details on how to configure any of these databases please refer to Configuring the Database.


Home Directory

When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system.

The general layout of these directories is as follows:

|- $ARTIFACTORY_HOME
    |- access
    |- etc/
        |- ha-node.properties
        |- logback.xml
        |- artifactory.cluster.lic
        |- mimetypes.xml
        |- cluster.id
        |- binarystore.xml
        |- db.properties
        |- plugins
        |- ui
        |- security
            |- access
                |- etc
    |- data/
        |- tmp/
        |- artifactory.properties
    |- logs/
    |- webapps/
    |- tomcat/
        |- lib/
            |- <jdbc driver>/
    |- bin/
    |- misc/
        |- backup/
        |- support

Installing Artifactory HA

An Artifactory HA node is first installed as an Artifactory Pro instance, and is then modified to operate as a node in the HA cluster by configuring the $ARTIFACTORY_HOME/etc/ha-node.properties file. Once the primary node is set up, it is used to create a bootstrap bundle which is then used to configure the secondary nodes in the cluster. 

The Bootstrap Bundle

The bootstrap bundle, bootstrap.bundle.tar.gz, contains a set of security keys and configuration files required for the proper functioning of the cluster. During the process of installing and configuring the HA nodes, the bootstrap bundle is generated by calling the Create Bootstrap Bundle REST API endpoint on the primary node. The same bootstrap bundle should be copied manually to each secondary node during its installation process (into the etc folder). There is no need to unpack the archive; Artifactory handles this when starting up.

The Installation Process

The binary storage in an HA installation must be accessible to all nodes in the cluster. This can be achieved by mounting a Network File System (NFS) on each cluster node, by using shared object storage, or by using the nodes' local file systems together with a mechanism that synchronizes the binaries between them.

The installation procedure involves two stages:

  1. Setting up your storage configuration   
    The storage configuration varies depending on your decision for two parameters of your setup:
    1. Binary store: Do you plan to use Filesystem Storage to store binaries on your nodes' filesystems, or a Cloud Storage provider such as S3, GCS or any other S3-compliant provider?
    2. NFS: Do you plan to use the Network File System (NFS) or not?
  2. Installing the cluster nodes  
    Once your storage is configured and set up, the rest of the installation process is identical

Setting Up Your Storage Configuration

Your choice of binary store and whether or not you use the NFS leads to one of the following four options for setting up your storage configuration:

Using Filesystem Storage with the NFS


To set up your HA cluster to use filesystem storage with the NFS, follow these steps which are detailed below:

  • Create and configure $ARTIFACTORY_HOME/etc/ha-node.properties 
  • Create an NFS mount 
  • Configure the binarystore.xml file

Once you have completed configuring your filestore, you are ready to complete the HA installation process by installing the cluster nodes.

Create ha-node.properties

Create the $ARTIFACTORY_HOME/etc/ha-node.properties file and populate it with the following parameters:

node.id

Unique descriptive name of this server.

Note: Uniqueness

Make sure that each node has an id that is unique on your whole network.

context.url

The context url that should be used to communicate with this server within the cluster.

There are two ways to specify the context.url field:

  • As an explicit IP address
  • As a host name. In this case, you need to specify the hazelcast.interface field with wildcards. For details, please refer to the description for hazelcast.interface field below.

membership.port

The port that should be used to communicate with this server within the cluster. If not specified, Artifactory allocates a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in your organization's security systems, such as firewalls.

primary

(true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and defaults to "false".

artifactory.ha.data.dir

This property provides the full path to the root directory of your NFS binary storage.
artifactory.ha.backup.dir

This property provides the full path to the root directory of your Artifactory backup data on the NFS.

hazelcast.interface

[Optional] When nodes in the same cluster are running on different networks (e.g. nodes on different docker hosts), set this value to match the server's internal IP address.

If you have specified the context.url as a host name, you need to use the wildcard character (an asterisk, '*') so that the value covers the server's internal IP address as well as those of all other members in the cluster.

For example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.3.33    node.b

then the hazelcast.interface field should be set to 10.1.*.*

Another example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.2.33    node.b

then the hazelcast.interface field should be set to 10.1.2.*

Tip: ha-node.properties file permissions

On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).
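For example, on a typical Linux installation the ownership and permissions could be applied as follows; the artifactory user and group names are assumptions and may differ on your system:

# Assumes Artifactory runs as the 'artifactory' user and group, and that
# ARTIFACTORY_HOME points at this node's home directory.
chown artifactory:artifactory $ARTIFACTORY_HOME/etc/ha-node.properties
chmod 644 $ARTIFACTORY_HOME/etc/ha-node.properties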

The example below shows how an ha-node.properties file may be configured for using filesystem storage with the NFS:

node.id=art1
context.url=http://10.0.0.121:8081/artifactory
membership.port=10001
primary=true
artifactory.ha.data.dir=/mnt/shared/artifactory/ha-data
artifactory.ha.backup.dir=/mnt/shared/artifactory/ha-backup
# hazelcast.interface is optional
hazelcast.interface=192.168.0.2

Create an NFS mount

When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system, and a common $DATA_DIR, accessible to all nodes, to host all your filestore binaries.

Create an NFS mount which will be accessible to all nodes. This mount will serve as the $DATA_DIR.

In addition, you need to set up a $BACKUP_DIR that must be accessible by the master node. It may be located on the same NFS mount; however, this is not compulsory.

Note: Privileges

Each of the Artifactory cluster nodes must have full write privileges on the $DATA_DIR directory tree.

Note: Mounting the NFS from Artifactory HA nodes

When mounting the NFS on the client side, make sure to add the following option for the mount command:

lookupcache=none

This ensures that nodes in your HA cluster will immediately see any changes to the NFS made by other nodes.
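As an illustration only, the mount command could look like the following; the file server name and paths are placeholders, and an equivalent /etc/fstab entry can be used instead:

# Mount the shared NFS export on each cluster node with client-side lookup caching disabled.
# 'fileserver' and the export/mount paths are placeholders.
mount -t nfs -o rw,lookupcache=none fileserver:/export/artifactory /mnt/shared/artifactory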

Configure the binarystore.xml File

The default binarystore.xml that comes with Artifactory out-of-the-box contains the file-system template. Since this is exactly the configuration you need, there is no need to modify the binarystore.xml file.

In this configuration, Artifactory uses the artifactory.ha.data.dir as the location for all binaries.
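For reference, the out-of-the-box configuration is essentially the following sketch; the config version attribute shown here is an assumption and may vary between Artifactory releases:

<!-- Default binarystore.xml: binaries are stored on the filesystem (the NFS data dir) -->
<config version="2">
    <chain template="file-system"/>
</config>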

You are now ready to complete the HA installation process by installing the cluster nodes.

Using Filesystem Storage Without the NFS


To set up your HA cluster to use filesystem storage without the NFS, follow these steps which are detailed below:

  • Create and configure $ARTIFACTORY_HOME/etc/ha-node.properties 
  • Configure the binarystore.xml file

Create ha-node.properties

Create the $ARTIFACTORY_HOME/etc/ha-node.properties file and populate it with the following parameters:

node.id

Unique descriptive name of this server.

Note: Uniqueness

Make sure that each node has an id that is unique on your whole network.

context.url

The context url that should be used to communicate with this server within the cluster.

There are two ways to specify the context.url field:

  • As an explicit IP address
  • As a host name. In this case, you need to specify the hazelcast.interface field with wildcards. For details, please refer to the description for hazelcast.interface field below.

membership.port

The port that should be used to communicate with this server within the cluster. If not specified, Artifactory allocates a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in your organization's security systems, such as firewalls.

primary

(true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and defaults to "false".

hazelcast.interface

[Optional] When nodes in the same cluster are running on different networks (e.g. nodes on different docker hosts), set this value to match the server's internal IP address.

If you have specified the context.url as a host name, you need to use the wildcard character (an asterisk, '*') so that the value covers the server's internal IP address as well as those of all other members in the cluster.

For example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.3.33    node.b

then the hazelcast.interface field should be set to 10.1.*.*

Another example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.2.33    node.b

then the hazelcast.interface field should be set to 10.1.2.*

Tip: ha-node.properties file permissions

On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).

The example below shows how the ha-node.properties file might be configured for your cluster nodes to use filesystem storage without the NFS:

node.id=art1
context.url=http://10.0.0.121:8081/artifactory
membership.port=10001
primary=true
# hazelcast.interface is optional
hazelcast.interface=192.168.0.2

Configure the binarystore.xml File

The default binarystore.xml that comes with Artifactory out-of-the-box contains the file-system template, which uses the NFS. Therefore, to set up your filestore so that it does not use the NFS, you need to modify this file.

Warning: Take care when modifying binarystore.xml

Making changes to this file may result in losing binaries stored in Artifactory!

If you are not sure of what you are doing, please contact JFrog Support for assistance.

We recommend using the cluster-file-system template which is one of the built-in templates that come with Artifactory out-of-the-box. This configuration uses the default filestore location (under $ARTIFACTORY_HOME/data) to store binaries locally on the filesystem, unless specified otherwise. A mechanism connected to all other nodes in the cluster is used to keep binaries synchronized and accessible to all nodes, based on the required redundancy value (which is 2 by default).

Tip: How to use the cluster-file-system template

 To learn how to configure your binarystore.xml to use the cluster-file-system template, please refer to Basic Configuration Elements under Configuring the Filestore.

Note: If your cluster has only two nodes, we recommend changing the lenientLimit from its default value of 0, which would otherwise prevent writes to Artifactory if one of the nodes goes down.
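As a rough sketch only, a binarystore.xml based on the cluster-file-system template could look like the following; the provider and element names follow the Configuring the Filestore documentation but should be verified against your Artifactory version:

<config version="2">
    <chain template="cluster-file-system"/>
    <!-- Optional overrides for the sharding mechanism used by this template -->
    <provider id="sharding-cluster" type="sharding-cluster">
        <redundancy>2</redundancy>
        <!-- With only two nodes, raise lenientLimit so writes succeed when one node is down -->
        <lenientLimit>1</lenientLimit>
    </provider>
</config>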

 

You are now ready to complete the HA installation process by installing the cluster nodes.

Using Cloud Storage With the NFS


To set up your HA cluster to use cloud storage with the NFS, follow these steps which are detailed below:

  • Create and configure $ARTIFACTORY_HOME/etc/ha-node.properties 
  • Create an NFS mount 
  • Configure the binarystore.xml file

Create ha-node.properties

Create the ha-node.properties file and populate it with the following parameters:

node.id

Unique descriptive name of this server.

Note: Uniqueness

Make sure that each node has an id that is unique on your whole network.

context.url

The context url that should be used to communicate with this server within the cluster.

There are two ways to specify the context.url field:

  • As an explicit IP address
  • As a host name. In this case, you need to specify the hazelcast.interface field with wildcards. For details, please refer to the description for hazelcast.interface field below.

membership.port

The port that should be used to communicate with this server within the cluster. If not specified, Artifactory allocates a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in your organization's security systems, such as firewalls.

primary

(true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and defaults to "false".

artifactory.ha.data.dir

This property provides the full path to the root directory of your NFS binary storage.
artifactory.ha.backup.dir

This property provides the full path to the root directory of your Artifactory backup data on the NFS.

hazelcast.interface

[Optional] When nodes in the same cluster are running on different networks (e.g. nodes on different docker hosts), set this value to match the server's internal IP address.

If you have specified the context.url as a host name, you need to use the wildcard character (an asterisk, '*') so that the value covers the server's internal IP address as well as those of all other members in the cluster.

For example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.3.33    node.b

then the hazelcast.interface field should be set to 10.1.*.*

Another example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.2.33    node.b

then the hazelcast.interface field should be set to 10.1.2.*

 

The example below shows how the ha-node.properties file might be configured for your cluster nodes to use cloud storage with the NFS:

node.id=art1
context.url=http://10.0.0.121:8081/artifactory
membership.port=10001
primary=true
artifactory.ha.data.dir=/mnt/shared/artifactory/ha-data
artifactory.ha.backup.dir=/mnt/shared/artifactory/ha-backup
hazelcast.interface=192.168.0.2

Tip: ha-node.properties file permissions

On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).

Create an NFS mount

When setting up Artifactory HA you need to configure the $ARTIFACTORY_HOME directory separately for each of the Artifactory cluster nodes in your system, and a common $DATA_DIR, accessible to all nodes, to host all your filestore binaries.

 

Create an NFS mount which will be accessible to all nodes. This mount will serve as the $DATA_DIR.

In addition, you need to set up a $BACKUP_DIR that must be accessible by the master node. It may be located on the same NFS mount; however, this is not compulsory.

Note: Privileges

Each of the Artifactory cluster nodes must have full write privileges on the $DATA_DIR directory tree.

Note: Mounting the NFS from Artifactory HA nodes

When mounting the NFS on the client side, make sure to add the following option for the mount command:

lookupcache=none

This ensures that nodes in your HA cluster will immediately see any changes to the NFS made by other nodes.

 

Configure the binarystore.xml file

The default binarystore.xml that comes with Artifactory out-of-the-box contains the file-system template. Therefore, to set up your filestore to use cloud storage with the NFS, you need to modify this file.

 

Warning: Take care when modifying the binarystore.xml file

Making changes to this file may result in losing binaries stored in Artifactory!

If you are not sure of what you are doing, please contact JFrog Support for assistance.

 

We recommend using either the s3 chain or the google-storage chain, which are among the built-in chain templates that come with Artifactory out-of-the-box. These chains use the shared filestore location (under $DATA_DIR) as a staging area for binaries before they are moved to cloud storage.

Tip:  To learn how to configure your binarystore.xml to use the s3 and google-storage chain templates, please refer to Basic Configuration Elements under Configuring the Filestore.
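As a rough sketch only, an s3-based binarystore.xml could look like the following; the provider and element names follow the Configuring the Filestore documentation but should be verified against your Artifactory version, and the bracketed values are placeholders:

<config version="2">
    <chain template="s3"/>
    <!-- Cloud storage credentials and bucket; values are placeholders -->
    <provider id="s3" type="s3">
        <endpoint>s3.amazonaws.com</endpoint>
        <identity>[YOUR_ACCESS_KEY]</identity>
        <credential>[YOUR_SECRET_KEY]</credential>
        <bucketName>[YOUR_BUCKET]</bucketName>
    </provider>
</config>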

You are now ready to complete the HA installation process by installing the cluster nodes.

Using Cloud Storage Without the NFS


To set up your HA cluster to use cloud storage without the NFS, follow these steps which are detailed below:

  • Create and configure $ARTIFACTORY_HOME/etc/ha-node.properties 
  • Configure the binarystore.xml file

Create ha-node.properties

Create the ha-node.properties file and populate it with the following parameters:

node.id

Unique descriptive name of this server.

Note: Uniqueness

Make sure that each node has an id that is unique on your whole network.

context.url

The context url that should be used to communicate with this server within the cluster.

There are two ways to specify the context.url field:

  • As an explicit IP address
  • As a host name. In this case, you need to specify the hazelcast.interface field with wildcards. For details, please refer to the description for hazelcast.interface field below.

membership.port

The port that should be used to communicate with this server within the cluster. If not specified, Artifactory allocates a port automatically; however, we recommend setting this to a fixed value to ensure that the allocated port is open in your organization's security systems, such as firewalls.

primary

(true | false) Indicates whether this is the primary server. There must be one (and only one) server in the cluster configured as the primary server. For other servers this parameter is optional and defaults to "false".

hazelcast.interface

[Optional] When nodes in the same cluster are running on different networks (e.g. nodes on different docker hosts), set this value to match the server's internal IP address.

If you have specified the context.url as a host name, you need to use the wildcard character (an asterisk, '*') so that the value covers the server's internal IP address as well as those of all other members in the cluster.

For example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.3.33    node.b

then the hazelcast.interface field should be set to 10.1.*.*

Another example, if you have two nodes with the following parameters:

Node    IP           Host name
A       10.1.2.22    node.a
B       10.1.2.33    node.b

then the hazelcast.interface field should be set to 10.1.2.*

Tip: ha-node.properties file permissions

On Linux, once the ha-node.properties file is created, the Artifactory user should be set as its owner and its permissions should be set to 644 (-rw-r--r--).

The example below shows how the ha-node.properties file might be configured for your cluster nodes to use cloud storage without the NFS:

node.id=art1
context.url=http://10.0.0.121:8081/artifactory
membership.port=10001
primary=true
# hazelcast.interface is optional
hazelcast.interface=192.168.0.2

Configure the binarystore.xml File

The default binarystore.xml that comes with Artifactory out-of-the-box contains the file-system template. Therefore, to set up your filestore to use cloud storage without the NFS, you need to modify this file.

Warning: Take care when modifying binarystore.xml

Making changes to this file may result in losing binaries stored in Artifactory!

If you are not sure of what you are doing, please contact JFrog Support for assistance.

We recommend using either the cluster-s3 chain or the cluster-google-storage chain which are among the built-in templates that come with Artifactory out-of-the-box. These templates use a mechanism connected to all other nodes in the cluster to keep binaries synchronized and accessible to all nodes according to the required redundancy (which is 2 by default). Binaries are first stored locally on each node (under $ARTIFACTORY_HOME/data/eventual by default), with additional copies on other nodes according to the redundancy configured, before moving on to persistent cloud storage.

Tip: How to use the cluster-s3 and cluster-google-storage chain templates

To learn how to configure your binarystore.xml to use the cluster-s3 and cluster-google-storage chain templates, please refer to Basic Configuration Elements under Configuring the Filestore.
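As a minimal sketch, the chain selection itself is a one-line change; cloud credentials, bucket settings and redundancy overrides are then added through provider elements as described under Configuring the Filestore. The config version attribute is an assumption:

<config version="2">
    <!-- Binaries are staged locally (data/eventual by default) and synchronized across nodes
         before being persisted to cloud storage; provider settings are configured separately. -->
    <chain template="cluster-s3"/>
</config>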

You are now ready to complete the HA installation process by installing the cluster nodes.

Installing the Cluster Nodes

Once you have completed setting up your filestore configuration, the process for installing the cluster nodes is identical and described in the steps below:

  1. Install the primary node 
  2. Create the bootstrap bundle 
  3. Add licenses  
  4. Set the cluster's URL Base 
  5. Add secondary nodes 

Installing the Primary Node

Go through a regular installation of Artifactory Pro as described in Installing Artifactory, and then convert it to the HA primary node by adding the ha-node.properties file you created when you set up your storage configuration to the $ARTIFACTORY_HOME/etc folder. Do not start up the instance yet.

Note that an external database must be configured for use at this point, as mentioned in the Requirements section.

You should also verify that your database JDBC driver is correctly located in $ARTIFACTORY_HOME/tomcat/lib for each Artifactory cluster node.
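For example, for a MySQL-backed cluster the driver could be copied as follows; the driver file name and version are placeholders:

# Copy the JDBC driver into Tomcat's lib folder on every cluster node (file name is a placeholder)
cp mysql-connector-java-5.1.40-bin.jar $ARTIFACTORY_HOME/tomcat/lib/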

Creating the Bootstrap Bundle

First, start up the primary node. Once your primary node is up and running, you can create the bootstrap bundle by calling the Create Bootstrap Bundle REST API endpoint on the primary node. This creates the bundle, bootstrap.bundle.tar.gz,  and stores it under $ARTIFACTORY_HOME/etc. You will need the bootstrap bundle later on when adding secondary nodes.
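For example, the call could be made with curl as shown below; the host, port and credentials are placeholders, and the endpoint path should be verified against the Create Bootstrap Bundle REST API documentation:

# Create the bootstrap bundle on the primary node as an admin user.
# The bundle is written to $ARTIFACTORY_HOME/etc/bootstrap.bundle.tar.gz on the primary.
curl -u admin:password -X POST http://10.0.0.121:8081/artifactory/api/system/bootstrap_bundle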

Note: The bootstrap bundle file is only used when none of the files it includes are present in the corresponding locations on the secondary cluster nodes. Once Artifactory has finished with it (having either used it or deemed it unnecessary), the bundle file is deleted since it contains sensitive files.

Tip: We recommend backing up the bootstrap bundle to a folder separate from the Artifactory cluster data and the ARTIFACTORY_HOME folder until you have added all your secondary nodes and verified that the cluster is up and running correctly.

Add Licenses

There are several ways you can add licenses to the cluster; these are described under Adding Licenses below.

Since the primary node is currently the only operative node, you can install your licenses there. Once you add the secondary nodes to the cluster, they will be licensed automatically through the Cluster License Manager.

All licenses used must be Enterprise licenses. 

Set the URL Base

After you have installed the node and verified that your system is working correctly as an HA installation, you should configure the Custom URL Base. 
In the Admin tab under Configuration | General, set the Custom URL Base field to the URL of the Load Balancer.

Add Secondary Nodes

You should also verify that your database JDBC driver is correctly located in $ARTIFACTORY_HOME/tomcat/lib for each Artifactory cluster node.

To add secondary nodes, for each node, follow these steps:

  1. Create an ha-node.properties file according to how you want to set up your storage configuration
  2. Go through a new Artifactory Pro installation as described in Installing Artifactory. Do not start up the instance yet.
    Note that an external database must be configured for use at this point, as mentioned in the Requirements section.
  3. Once the Artifactory Pro installation is complete, add the ha-node.properties file you created to the $ARTIFACTORY_HOME/etc folder.

  4. Copy the bootstrap bundle you created on the primary node, bootstrap.bundle.tar.gz, to the $ARTIFACTORY_HOME/etc folder on the secondary node (see the example after these steps).

    Warning: Bootstrap Bundle and db.properties
    This is a critical step in the installation process. The bootstrap bundle must be installed in each secondary node before you start it up for it to operate correctly in the cluster.
    Note also that if the $ARTIFACTORY_HOME/etc folder in your secondary node already contains a db.properties file, make sure to remove it. The presence of this file will prevent the bootstrap bundle from being properly extracted when you start up the secondary node, causing the installation to fail.

  5. Start up the cluster node. Upon starting up, the node is automatically allocated a license by the Cluster License Manager, and is automatically configured through the bootstrap bundle.

  6. Test your HA configuration after adding each cluster node to your system.
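The following sketch illustrates copying the bundle and clearing a pre-existing db.properties on a secondary node; the host name and paths are placeholders and assume the same directory layout on both nodes:

# On the secondary node: remove any pre-existing db.properties so the bundle can be extracted,
# then copy the bootstrap bundle created on the primary node into the etc folder.
rm -f $ARTIFACTORY_HOME/etc/db.properties
scp admin@primary-node:/var/opt/jfrog/artifactory/etc/bootstrap.bundle.tar.gz $ARTIFACTORY_HOME/etc/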

Warning: Ensure network communication

Make sure that network communication is enabled between the cluster nodes for each of the following:

  • context.url
  • hazelcast.interface and membership.port (used together. For example, 172.24.0.1:10001)

Upgrading Artifactory HA

Upgrading Artifactory HA depends on which version you are starting from. For detailed instructions, please refer to Upgrading an Enterprise HA Cluster.


Testing Your HA Configuration

The following are a series of tests you can do to verify that your system is configured correctly as an HA installation:

  1. Directly access the Artifactory UI for the server you have just configured.
  2. In the Admin module go to Advanced | System Logs to view the log and verify that you see an entry for HA Node ID.
    Artifactory HA Log File
  3. The bottom of the module navigation bar should also indicate that you are running with an Enterprise license. In case of an error, you will see an error message in the page header.
  4. Access Artifactory through your load balancer and log in as Admin.
  5. In the Admin module go to Configuration. There should be a section called High Availability. When selected, you should see a table with details of all the Artifactory nodes in your cluster, as displayed below.

    HA Cluster Nodes

  6. In the Admin module under Configuration | General, verify that the Custom URL Base field is correctly configured to the URL of the Load Balancer.

Cluster License Management

Artifactory 5.0 introduces an automated license management interface for HA clusters through which all licenses are allocated automatically to nodes as they are added to the cluster. A batch of licenses can be added through the UI and REST API to any node in a cluster.

A new node starting up will request an available license from the pool automatically, and will be allocated the license with the latest expiry date. The license is also automatically returned to the pool if the node is shut down or removed from the HA cluster.

Note: Which license is allocated?

Adding a license through a node does not necessarily mean that the license will be attached to that specific node. The license is added to the available pool, and the available license with the latest expiry date is allocated to the node.

Once you have purchased a set of licenses, they are provided to you as a space-separated or newline-separated list.

Adding Licenses

There are three ways that licenses can be added to an HA cluster:

Tip: Specifying multiple licenses

When specifying multiple licenses, whether in the Artifactory UI, using the REST API, or in the artifactory.cluster.license file, make sure that the licenses are separated by a newline.

Using the UI

Through the UI, in the Admin module, under Configuration | Artifactory Licenses, you can view all licenses uploaded to your cluster.

Cluster licenses

To add licenses to your cluster, click New and copy your license key(s) into the License Key entry field. You can also simply drag and drop the file containing the license key(s) into the same field. Make sure that each license is separated by a newline.

Add cluster licenses

Using the REST API

You can also add licenses through the Install License REST API endpoint.
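For illustration, such a call might look like the following; the endpoint path and payload shape are assumptions and should be verified against the Install License REST API documentation, and the host and credentials are placeholders:

# Add a license to the cluster through any node (credentials, host and key are placeholders).
curl -u admin:password -X POST -H "Content-Type: application/json" \
     -d '{"licenseKey": "YOUR_LICENSE_KEY"}' \
     http://artifactory-node:8081/artifactory/api/system/licenses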

Using the Primary Node's Filesystem

To accommodate spinning up Artifactory HA nodes using automation, before booting up your primary node, you can place the artifactory.cluster.license file in its $ARTIFACTORY_HOME/etc folder. Upon being booted up, the primary node automatically extracts one of the licenses.

Similarly, upon being started up, each secondary node also automatically extracts one of the remaining available licenses.
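For example, as part of an automated provisioning script you might stage the license file before the first startup; the source path is a placeholder:

# Place the newline-separated license file in the node's etc folder before first startup.
cp /path/to/artifactory.cluster.license $ARTIFACTORY_HOME/etc/artifactory.cluster.license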

 

License Expiry

Nodes running with a license that is about to expire will automatically be updated with a new license available from the pool. Artifactory administrators can manually delete the expired license from within the UI or using the REST API.

Deleting Licenses

A license can be deleted under one of the following conditions:

  • It is not currently being used.
  • There is an alternative license available in the pool. In this case, the node to which the deleted license was attached will automatically be allocated an alternative license.

Note: Perpetual License

Artifactory licenses are perpetual and may continue to activate an Artifactory instance indefinitely; however, an instance running on an expired license may not be upgraded and is not eligible for support.

REST API

You can manage your Artifactory HA licenses using the HA License Information, Install HA Cluster Licenses and Delete HA Cluster License REST API endpoints. 


Screencast

A short screencast is available at: https://www.youtube.com/embed/E4EngY2hCqM