Overview

JFrog services can be configured for High Availability with a cluster of 2 or more active/active nodes on the same Local Area Network (LAN).

Setting Up High Availability on Mission Control

High Availability configuration for Mission Control requires a cluster of 3 or more active/active nodes on the same LAN.

An HA configuration provides the following benefits:

Optimal Resilience

Maximize your uptime. If one or more nodes are unavailable or down for an upgrade, the load is shared among the remaining nodes, ensuring optimal resilience and uptime.

Improved Performance with Load Balancing

Scale your environment with as many nodes as you need. All cluster nodes in an HA configuration are synchronized, and jointly share and balance the workload between them. When a node becomes unavailable, the cluster automatically spreads the workload across the remaining nodes.

Managed Heavy Loads

Accommodate larger load bursts with no compromise to performance. With horizontal server scalability, you can easily increase your capacity to meet any load requirements as your organization grows.

Always Synchronized

Data, configuration, cached objects, and scheduled job changes are seamlessly and instantly synchronized across all cluster nodes.

JFrog Subscription Levels

SELF-HOSTED
ENTERPRISE X 
ENTERPRISE+
HA Architecture

The HA architecture consists of three building blocks: a load balancer, the application(s), and common resources.

Getting Help

The JFrog support team is available to help you configure the JFrog service's cluster nodes. Configuring your load balancer, database, and object store is the responsibility of your organization's IT staff.

Load Balancer

The load balancer is the entry point to your JFrog Platform Deployment and optimally distributes requests to the nodes in your system. The load balancer is only required for the Artifactory service, which is then responsible for routing and balancing requests between the nodes of the other services. For additional information, refer to Configuring Load Balancer.

Application

A JFrog service (or application) running in HA mode represents a cluster of two or more nodes that share common resources.

  • All JFrog services (Artifactory, Xray, Mission Control, Distribution) can be run in HA mode, though only Artifactory requires a load balancer.
  • Each cluster node runs all of the microservices described in the System Architecture.
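For example, each Artifactory cluster node identifies itself in its Artifactory System YAML. A minimal sketch, assuming the shared.node section of system.yaml (the node ID and IP below are illustrative placeholders; a real file contains additional settings):

```yaml
# system.yaml on one cluster node (placeholder values)
shared:
  node:
    id: art-node-1      # unique ID for this node within the cluster
    ip: 10.0.0.11       # address other cluster nodes use to reach this node
```

Each node in the cluster carries its own copy of this file with its own id and ip.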

Common Resources

Each service requires a set of common resources. The resources vary per service but typically include at least one database.

Local Area Network

To ensure good performance, all the components of your HA installation must be installed on the same high-speed LAN.

In theory, HA could work over a Wide Area Network (WAN); in practice, however, network latency makes it impractical to achieve the performance required of high availability systems.


Cloud-Native High Availability

From Artifactory 7.17.4, all nodes in the high availability cluster can perform tasks such as replication, garbage collection, backups, exporting, and importing. Every node in the cluster can perform any of these tasks, and if a node goes down, the remaining nodes in the cluster take over these tasks. By default, a new node (member) added to the cluster can perform cluster-wide tasks without user intervention.

The "taskAffinity": "any" attribute is set by default on all the nodes in the cluster when installing Artifactory version 7.17.4 or above, and is configured under the Nodes section of the Artifactory System YAML. To remove this functionality from a node, set "taskAffinity": "none".
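As a sketch, assuming taskAffinity sits under the shared.node section of the Artifactory System YAML (placement may vary by version):

```yaml
# system.yaml fragment (illustrative)
shared:
  node:
    taskAffinity: any    # default from 7.17.4: node may run cluster-wide tasks
    # taskAffinity: none # use instead to exclude this node from such tasks
```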

Backward Compatibility when Upgrading HA Environments

To maintain backward compatibility when upgrading to Artifactory 7.17.4 from a previous version, the primary: true attribute is maintained.
To use the new functionality, add taskAffinity: any to each of the nodes in the cluster in the Artifactory System YAML.
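On an upgraded node, that might look like the following system.yaml fragment (a sketch, assuming both attributes live under shared.node):

```yaml
# system.yaml fragment on a node upgraded from pre-7.17.4 (illustrative)
shared:
  node:
    primary: true        # kept for backward compatibility after the upgrade
    taskAffinity: any    # added to opt the node in to cluster-wide tasks
```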

Prerequisites

In many customer environments, the primary node is specifically configured with access to an NFS mount for Artifactory backups.

With the introduction of Cloud-Native High Availability, where any node can create a backup, all nodes need write access to the mount used for backups. Alternatively, you can exclude all nodes but one from managing cluster-wide tasks. This mimics the previous behavior, where only the primary node writes to the NFS mount.

If you are moving to Cloud-Native High Availability, it is recommended to use a shared drive path for backups (if you use a local drive path, the backup will be saved on whichever node triggers the backup operation, which can lead to confusion).
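To mimic the single-primary backup behavior described above, one node keeps taskAffinity: any while every other node is set to none. A sketch, assuming the shared.node section of each node's system.yaml:

```yaml
# system.yaml fragment on the one node allowed to run cluster-wide tasks
# (e.g. backups to the NFS mount); illustrative only
shared:
  node:
    taskAffinity: any

# system.yaml fragment on every other node in the cluster:
#
# shared:
#   node:
#     taskAffinity: none
```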


Installing and Upgrading to HA

For additional information, refer to the Installation and Upgrade sections.

Copyright © 2021 JFrog Ltd.