Xray seamlessly and instantly synchronizes all data, configuration, cached objects and scheduled job changes across all cluster nodes.
Xray’s self-monitoring mechanism, which alerts you to system availability issues, has been enhanced to indicate which node is affected.
In addition, Xray provides cluster health information on a new “High Availability” page, showing the health of every node and every microservice.
Xray allows you to easily install a full HA cluster in minutes, or upgrade your existing Xray environment.
The cluster consists of three layers: the load balancer, the application layer, and the common resources.
The load balancer is the entry point to your Xray HA cluster, optimally distributing requests to the Xray microservices on the cluster nodes. It recognizes the type of each incoming request and adds it to the corresponding microservice queue, based on the current load on that microservice type on each node. For example, when an indexing request arrives, Xray checks the load on the indexing microservice across all cluster nodes and places the new request in the queue with the fewest pending indexing requests.
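As a conceptual illustration only (this is not Xray code), the Python sketch below shows this kind of least-pending selection; the node names and queue depths are hypothetical:

# Conceptual sketch of least-pending routing -- not actual Xray code.
# pending_requests maps each node to its per-microservice queue depths.
pending_requests = {
    "node-1": {"indexer": 12, "analysis": 3},
    "node-2": {"indexer": 5, "analysis": 9},
    "node-3": {"indexer": 8, "analysis": 1},
}

def pick_node(microservice):
    # Return the node whose queue for this microservice type is shortest.
    return min(pending_requests, key=lambda node: pending_requests[node][microservice])

print(pick_node("indexer"))  # "node-2" has the fewest pending indexing requests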
It is the responsibility of your organization to manage and configure the load balancer correctly.
The sample below sketches a basic load balancer configuration. NGINX is used here purely for illustration; the node addresses, port, and hostname are placeholders that must be adapted to your environment:
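# Minimal NGINX reverse-proxy sketch for an Xray HA cluster (illustrative only).
upstream xray_cluster {
    least_conn;                        # route each request to the node with the fewest active connections
    server 10.0.0.1:8000;              # Xray node 1 (placeholder address and port)
    server 10.0.0.2:8000;              # Xray node 2
}

server {
    listen 80;
    server_name xray.example.com;      # placeholder hostname

    location / {
        proxy_pass http://xray_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}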
More details are available in your load balancer vendor's documentation.
Xray HA presents a cluster of two or more Xray nodes that share common resources. Each cluster node runs all four Xray microservices: server, indexer, analysis, and persist.
The common resources are separated into three units: the PostgreSQL database, the MongoDB database, and the RabbitMQ message broker.
The storage used by Xray is not a common resource; only node-specific files, such as configuration and temporary files, are saved to disk.
Synchronization
Critical data is shared across the cluster nodes through the common-resource databases. Local temporary data, such as log files and artifacts being processed, is kept separately on each node in its config and data folders.
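A purely illustrative sketch of this split is shown below; the keys, hosts, and paths are hypothetical and do not reflect Xray's actual configuration schema:

# Hypothetical node configuration -- keys and values are illustrative only.
# Every node points at the same shared resources...
shared:
  postgresql: postgres://db.internal:5432/xraydb
  mongodb: mongodb://db.internal:27017/xray
  rabbitmq: amqp://mq.internal:5672
# ...while config and data folders stay local to each node.
local:
  configDir: /var/opt/xray/config
  dataDir: /var/opt/xray/data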
Xray implements caching for a variety of objects, including permissions, watches, and builds. These caches are kept consistent across the cluster: the node on which a change occurs sends messages via RabbitMQ to all other nodes, triggering a reload of the affected cache.
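This pattern can be sketched in Python with the pika RabbitMQ client; the exchange name, host, and callback below are illustrative assumptions, not Xray internals:

import pika

# Connect to the shared RabbitMQ instance (placeholder host).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="mq.internal"))
channel = connection.channel()

# A fanout exchange delivers every message to all bound queues,
# so every node sees each cache-invalidation event.
channel.exchange_declare(exchange="cache-sync", exchange_type="fanout")

def publish_invalidation(object_type):
    # Called on the node where the change occurred.
    channel.basic_publish(exchange="cache-sync", routing_key="", body=object_type)

# Each node binds its own exclusive queue to the exchange...
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="cache-sync", queue=result.method.queue)

def on_message(ch, method, properties, body):
    # ...and reloads the affected cache when a message arrives.
    print("reloading %s cache" % body.decode())  # e.g. permissions, watches, builds

channel.basic_consume(queue=result.method.queue, on_message_callback=on_message, auto_ack=True)
channel.start_consuming()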
Xray uses RabbitMQ to ensure that periodic jobs are run by only a single node in the cluster, rather than by all nodes. The node that executes a given job may change from run to run; however, Xray verifies that every scheduled job is executed only once.
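This guarantee can be sketched with a RabbitMQ work queue, in which RabbitMQ delivers each message to exactly one of the competing consumers; the queue name and host are again illustrative assumptions, not Xray's implementation:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="mq.internal"))
channel = connection.channel()

# One durable queue shared by all nodes; RabbitMQ delivers each job
# message to exactly one of the competing consumers, so exactly one
# node executes each scheduled job.
channel.queue_declare(queue="scheduled-jobs", durable=True)

def on_job(ch, method, properties, body):
    print("this node is executing job: %s" % body.decode())
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the job completes

channel.basic_qos(prefetch_count=1)  # take one job at a time
channel.basic_consume(queue="scheduled-jobs", on_message_callback=on_job)
channel.start_consuming()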