The HA architecture consists of three building blocks: a load balancer, the application(s), and common resources.
The JFrog support team is available to help you configure the JFrog service's cluster nodes. Configuring your load balancer, database, and object store is up to your organization's IT staff.
The load balancer is the entry point to your JFrog Platform Deployment and optimally distributes requests to the nodes in your system. A load balancer is only required for the Artifactory service, which is then responsible for routing and balancing requests between the nodes of the other services. For additional information, refer to Configuring Load Balancer.
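As a minimal sketch, an nginx reverse proxy in front of two Artifactory nodes might look like the following (the hostnames and port are illustrative placeholders, not values from this document; adapt them to your environment):

```
upstream artifactory_cluster {
    # Both Artifactory cluster nodes; nginx round-robins between them by default
    server artifactory-node1.example.com:8082;
    server artifactory-node2.example.com:8082;
}

server {
    listen 80;
    location / {
        # Forward all requests to the cluster, preserving client headers
        proxy_pass http://artifactory_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

A production setup would typically also terminate TLS at the load balancer; see Configuring Load Balancer for the recommended settings.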
A JFrog service (or application) running in HA mode represents a cluster of two or more nodes that share common resources.
- All JFrog services (Artifactory, Xray, Mission Control, Distribution) can be run in HA mode, though only Artifactory requires a load balancer.
- Each cluster node runs all of the microservices described in the System Architecture.
Each service requires a set of common resources. The resources vary per service but typically include at least one database.
Local Area Network
To ensure good performance, all the components of your HA installation must be installed on the same high-speed LAN.
In theory, HA could work over a Wide Area Network (WAN), however in practice, network latency makes it impractical to achieve the performance required for high availability systems.
Cloud-Native High Availability
From Artifactory 7.17.4, all nodes in the high availability cluster can perform cluster-wide tasks such as replication, garbage collection, backups, exports, and imports. Every node in the cluster can serve any of these tasks, and if a node goes down, the remaining nodes in the cluster perform the tasks instead. By default, a new node (member) added to the cluster can perform cluster-wide tasks without user intervention.
The "taskAffinity": "any" attribute is set by default on all the nodes in the cluster when installing Artifactory version 7.17.4 and above, and is configured under the Nodes section in the Artifactory System YAML. To remove this functionality from a node, set "taskAffinity": "none".
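As a sketch, the per-node setting in the Artifactory System YAML might look like this (the node id is an illustrative placeholder):

```
shared:
  node:
    id: art-node-1
    # "any" (the default from 7.17.4) lets this node run cluster-wide tasks;
    # set to "none" to exclude this node from cluster-wide tasks
    taskAffinity: any
```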
Backward Compatibility when Upgrading HA Environments
To maintain backward compatibility when upgrading to Artifactory 7.17.4 from a previous version, the primary: true attribute is maintained. To use this new functionality, add taskAffinity: any to each of the nodes in the cluster in the Artifactory System YAML.
Today, in many customer environments, the primary node is specifically configured with access to an NFS mount for Artifactory backups.
With the introduction of Cloud-Native High Availability, where any node can create a backup, you will need to grant all nodes write access to the mount used for backups. Alternatively, you can exclude all nodes but one from managing cluster-wide tasks. This mimics the previous behavior, where only the primary node can write to the NFS mount.
If you are moving to Cloud-Native High Availability, it is recommended to use a shared drive for backup paths; with a local drive path, the backup is saved on whichever node triggers the backup operation, which can lead to confusion.
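To mimic the single-primary backup behavior described above, a sketch of the per-node Artifactory System YAML settings might look like the following (node ids are illustrative placeholders):

```
# system.yaml on the one node that should run cluster-wide tasks (e.g. backups)
shared:
  node:
    id: art-node-1
    taskAffinity: any

# system.yaml on every other node in the cluster
shared:
  node:
    id: art-node-2
    taskAffinity: none
```

With this layout, only art-node-1 needs write access to the backup mount, just as the primary node did before 7.17.4.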