Configuring the Migration
Before migrating your data away from the NFS, make sure all nodes in your HA cluster are up and running. Then, to configure the migration of your data for the use cases described above, follow this procedure:
- Verify versions
- Verify configuration files are synchronized
- Edit the ha-node.properties file
- Copy data to the new location
- Configure binarystore.xml to match your setup
- Test the configuration
Verify Versions
Before proceeding with transferring your data, you need to verify that all cluster nodes are running exactly the same version of Artifactory, which must be 5.0 or above. To verify the version running on each node in your HA cluster, in the Admin module under Configuration | High Availability, check the Version column of the table displaying your HA nodes.
Verify Configuration Files are Synchronized
When upgrading your HA cluster from version 4.x to version 5.x, an automatic conversion process synchronizes the configuration files on all cluster nodes, replacing the $CLUSTER_HOME/ha-etc folder that was used in v4.x. Once you have verified that all nodes are running the same version, you should verify that all configuration files are synchronized between the nodes. For each node, navigate to its $ARTIFACTORY_HOME/etc folder and verify the following:
|File|Description|
|---|---|
|ha-node.properties|Each node should still have this file configured as described in Create ha-node.properties.|
|db.properties|This file was introduced in Artifactory 5.0 and defines the connection to the database. The password specified in this file is encrypted by the key in the master.key file.|
|binarystore.xml|This file opens up the full set of options to configure your binary storage without the NFS. It contains the binary provider configuration according to how you wish to store your binaries. For each of the use cases described above, you can find the corresponding binary provider configuration under Configure binarystore.xml.|
|master.key|This file contains the key used to encrypt and decrypt files that are used to synchronize the cluster nodes.|
From version 5.0, Artifactory HA synchronizes configuration files across all cluster nodes: a change made to one of these files on any node triggers a mechanism that propagates the change to the other nodes.
Because changes on one node are automatically synchronized to the other nodes, take care not to modify the same file on two different nodes simultaneously, since changes you make on one node could overwrite the changes you make on the other.
Edit the ha-node.properties File
Locate the ha-node.properties file on each node under $ARTIFACTORY_HOME/etc, and comment out or remove the following entries; otherwise, Artifactory will continue to write to the shared file system according to the previously configured path.
Copy Data to the New Location
Once you have verified your configuration files are correctly synchronized, you are ready to migrate your data. The sub-sections below describe how to migrate your data for the three use-cases described in the Overview above.
Use Case 1: NFS → Local FS
For this use case, you first need to ensure that there is enough storage available on each node to accommodate the volume of data in your
/data folder at the desired redundancy. In general, you need to comply with the following formula:

storage per node = (maximum storage × redundancy) / number of nodes

For example, if:
- You expect the maximum storage in your environment to be 100 TB
- Your redundancy is 2
- You have 4 nodes in your cluster,
Then each node should have at least (100 × 2) / 4 = 50 TB of storage available.
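The formula above can be checked with a quick shell calculation (values taken from the example; shell arithmetic is integer-only):

```shell
# Per-node storage requirement:
#   storage_per_node = (max_storage * redundancy) / nodes
TOTAL_TB=100
REDUNDANCY=2
NODES=4
PER_NODE_TB=$(( TOTAL_TB * REDUNDANCY / NODES ))
echo "$PER_NODE_TB"   # prints 50
```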
For a redundancy of N, copy the data from your NFS to N of the nodes in your cluster.
For example, for a redundancy of 2, and assuming you have two nodes named "Node1" and "Node2", copy the $CLUSTER_HOME/ha-data folder to the $ARTIFACTORY_HOME/data folder on each of Node1 and Node2.
Optimize distribution of your files
Once you have copied your filestore to each of the N nodes according to the desired redundancy, we recommend invoking the Optimize System Storage REST API endpoint in order to balance storage amongst all nodes in the cluster.
Use Case 2: NFS Eventual + S3 → Local FS Eventual + S3
This use case refers to using S3 as persistent storage, but it is equally applicable to other cloud object store providers such as GCS, Ceph, OpenStack, and other supported vendors.
In this use case, you only need to ensure that there are no files left in the
eventual folder of your NFS. If any files are still there, they should be moved to your cloud storage provider's bucket, or to the eventual folder on one of the nodes' local file systems.
Use Case 3: NFS → Local FS Eventual + S3
Migrating the filestore of a single-node installation to S3 is normally an automatic procedure handled by Artifactory. However, when moving an HA filestore off the NFS, the automatic procedure does not work because the folder structure changes.
In this case, you need to copy the data under $CLUSTER_HOME/ha-data from your NFS to the bucket on your cloud storage provider (here too, the other providers described in Use Case 2 are also supported), while making sure that there are no files left in the _pre folders of the eventual binary provider on each node's local file system.
Configure binarystore.xml
In this step you need to configure binarystore.xml to match the setup of the use case you have selected. Note that the three use cases above converge on one of two final configurations:
- All data is stored on the cluster nodes' local filesystems (labelled here as Local FS)
- The cluster nodes use their local filesystems as an eventual binary provider, and data is persistently stored on S3 (labelled here as Local FS Eventual + S3)
Node downtime required
To modify the binarystore.xml file for a node, you first need to gracefully shut down the node, modify the file, and then restart the node for your new configuration to take effect.
Local FS
In this example, all data is stored on the nodes' file systems. For the sake of this example, we will assume that:
- We have 3 nodes
- We want redundancy = 2
To accomplish this setup, you need to:
- Copy the data from the $CLUSTER_HOME/ha-data folder on your NFS to the $ARTIFACTORY_HOME/data folder on two of the nodes.
- Once all data has been copied, place the binarystore.xml file under $ARTIFACTORY_HOME/etc on each cluster node.
- Finally, gracefully restart each node for the changes to take effect.
Optimizing the redundant storage
After restarting your system, you can trigger optimization using the REST API so that all three nodes are utilized for redundancy. For details, please refer to Optimize System Storage.
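As a sketch, the optimization can be triggered with a single authenticated POST to the Optimize System Storage endpoint (the base URL, port, and credentials below are assumptions; adjust them for your installation):

```shell
# Hypothetical base URL and credentials; adjust for your environment.
URL="http://localhost:8081/artifactory/api/system/storage/optimize"
# Trigger storage optimization; prints the server's response,
# or a note if no Artifactory instance is reachable.
curl -u admin:password -X POST "$URL" \
  || echo "no Artifactory instance reachable at $URL"
```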
In this use case, the binarystore.xml used with the NFS before migration would look like the following if you are using one of the default file-system templates.
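For reference, a minimal binarystore.xml based on the default file-system template would look roughly like this (a sketch following the standard chain-template syntax; the config version attribute may differ in your distribution):

```xml
<config version="2">
    <chain template="file-system"/>
</config>
```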
After migrating the data, the new binarystore.xml placed on each cluster node can use the cluster-file-system template.
While you don't need to configure anything else, this is what the cluster-file-system template looks like:
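The sketch below assumes the standard chain-template syntax; the provider ids, the redundancy value, and the crossNetworkStrategy behaviors shown are common defaults that you should verify against your distribution:

```xml
<!-- Minimal form: select the template as-is -->
<config version="2">
    <chain template="cluster-file-system"/>
</config>

<!-- Expanded sketch of what the template resolves to -->
<config version="2">
    <chain>
        <provider id="cache-fs" type="cache-fs">
            <provider id="sharding-cluster" type="sharding-cluster">
                <sub-provider id="state-aware" type="state-aware"/>
                <dynamic-provider id="remote" type="remote"/>
                <property name="zones" value="local,remote"/>
            </provider>
        </provider>
    </chain>
    <provider id="sharding-cluster" type="sharding-cluster">
        <redundancy>2</redundancy>
        <readBehavior>crossNetworkStrategy</readBehavior>
        <writeBehavior>crossNetworkStrategy</writeBehavior>
    </provider>
</config>
```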
Local FS Eventual + S3
In this example, data is temporarily stored on the file system of each node using an Eventual binary provider, and is then passed on to your S3 object storage for persistent storage.
In this use case, the binarystore.xml that used your NFS for cache and eventual storage, with your object store on S3, would look like the following before migration if you are using the s3 template.
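A sketch of an s3-template configuration, assuming the standard chain-template syntax (the endpoint is illustrative and the bracketed values are placeholders for your account-specific parameters):

```xml
<config version="2">
    <chain template="s3"/>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
```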
After migrating your filestore to S3 (and no longer using the NFS), your binarystore.xml should use the cluster-s3 template, which looks like this:
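In its minimal form, this is simply a chain declaration selecting the template (a sketch; the config version attribute may vary between distributions):

```xml
<config version="2">
    <chain template="cluster-s3"/>
</config>
```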
Because you must configure the s3 provider with parameters specific to your account (while leaving all others at their recommended values), if you choose to use this template, your binarystore.xml configuration file should look like this:
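A sketch, assuming the standard chain-template syntax; the endpoint is illustrative and the bracketed values are placeholders for your account-specific parameters:

```xml
<config version="2">
    <chain template="cluster-s3"/>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
```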
Testing Your Configuration
To test your configuration, you can simply deploy an artifact to Artifactory and then inspect your persistent storage (whether on your nodes' file systems or on your cloud provider) to verify that the artifact has been stored correctly.
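As a sketch, such a deployment test can be done with curl (the base URL, the repository name generic-local, and the credentials are assumptions; adjust them for your setup):

```shell
# Hypothetical URL, repository, and credentials; adjust for your environment.
ARTIFACTORY_URL="http://localhost:8081/artifactory"
# Create a small test file and upload it with an authenticated PUT (-T).
echo "migration test" > /tmp/migration-test.txt
curl -u admin:password -T /tmp/migration-test.txt \
  "$ARTIFACTORY_URL/generic-local/migration-test.txt" \
  || echo "no Artifactory instance reachable; deploy skipped"
```

After the upload succeeds, check your persistent storage for the new binary and verify its checksum matches the deployed file.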