Always Synchronized
Xray seamlessly and instantly synchronizes all data, configuration, cached objects and scheduled job changes across all cluster nodes.
Enhanced Monitoring
Xray’s self-monitoring mechanism, which alerts you to system availability issues, has been enhanced to let you know which node is affected.
In addition, Xray provides cluster health information on a new “High Availability” page, showing the health of every node and every microservice.
Easy HA Setup
Xray allows you to easily install a full HA cluster in minutes, or upgrade your existing Xray environment to an HA cluster.
Architecture
The Xray HA architecture consists of three layers: load balancer, application, and common resources.
Load Balancer
The load balancer is the entry point to your Xray HA cluster, optimally distributing requests to the Xray microservices on the cluster nodes. It recognizes the type of each request and adds it to the corresponding microservice queue, according to the current load on that microservice type on each of the nodes. For example, when receiving an indexing request, Xray checks the load on the indexing microservice on all of the cluster nodes and places the new request in the queue with the least pending indexing requests.
Managing and configuring the load balancer correctly is the responsibility of your organization.
The code sample below shows a basic example of a load balancer configuration:
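The sketch below uses nginx purely as an illustration; the upstream node addresses, server name, and Xray server port (the default 8000 is assumed here) are placeholders for your own environment.

```nginx
# Illustrative nginx configuration for an Xray HA cluster.
# Replace the upstream addresses, server_name and port with your own values.
upstream xray_cluster {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    server 10.0.0.3:8000;
}

server {
    listen 80;
    server_name xray.mycompany.com;

    location / {
        proxy_pass http://xray_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```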
Application Layer
Xray HA consists of a cluster of two or more Xray nodes that share common resources. Each cluster node runs all four Xray microservices:
- Indexer - Responsible for the indexing process, including:
- Recursively extracting artifacts and builds
- Collecting artifact metadata from accompanying files
- Building an artifact components graph representation
- Persist - Responsibilities include:
- Matching the given components graph with the public component information
- Completing component naming
- Storing the data in the relevant databases (graph data in PostgreSQL and component metadata in MongoDB)
- Analysis - Responsible for enriching component metadata such as vulnerabilities, licenses and versions.
- Server - Responsibilities include:
- Generating violations by matching analysis data with watches and policies
- Hosting the API and UI endpoints
- Running scheduled jobs such as the database synchronization process
Xray Resources
The Xray common resources are separated into three units:
- PostgreSQL: Components Graph Database
- Every artifact and build indexed by Xray is broken down into multiple components. These components, and the relationships between them, are represented in a checksum-based components graph.
- Xray uses PostgreSQL to store and query this components graph. PostgreSQL must be installed externally.
- Default port 5432 should be open for communication between each of the nodes in the cluster and the database server.
- MongoDB: Components Metadata and Configuration
- Xray comes out of the box with a rich component metadata database. This database is updated on a daily basis using the database sync process.
- Xray uses MongoDB to store this component metadata database, as well as all Xray configuration, such as watches, policies, and violations. MongoDB must be installed externally.
- Default port 27017 should be open for communication between each of the nodes in the cluster and the database server.
- RabbitMQ: Microservice Communication and Messaging
- Xray has multiple flows, such as scanning, impact analysis, and database sync. Each flow consists of multiple steps that are processed by the different Xray services listed above.
- Xray uses RabbitMQ to manage these different flows and track synchronous and asynchronous communication between the services.
- Default port 5672 should be open for communication between each of the nodes in the cluster.
- RabbitMQ is installed as part of the Xray installation on every node. In an HA architecture, RabbitMQ uses queue mirroring between the different RabbitMQ nodes. A quick way to verify connectivity on the default ports is shown in the example after this list.
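As a quick sanity check, you can verify that each cluster node can reach the common resources on these default ports. The host names below are illustrative placeholders; this is not part of the Xray installation itself, just a simple TCP connectivity test.

```shell
# Illustrative connectivity checks from a cluster node (host names are placeholders).
nc -zv postgres.mycompany.local 5432      # PostgreSQL - components graph database
nc -zv mongodb.mycompany.local 27017      # MongoDB - component metadata and configuration
nc -zv xray-node2.mycompany.local 5672    # RabbitMQ on another cluster node
```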
Filestore
The storage used by Xray is not a common resource; only node-specific files, such as configuration and temporary files, are saved to disk.
Synchronization
Critical and Temporary Data
Critical data is shared across the cluster nodes by using the same databases as common resources. Local temporary data, such as log files and artifacts being processed, is kept separately on each node, in its config and data folders.
Cached Objects
Xray caches a variety of objects, including permissions, watches, and builds. These caches are automatically updated on each change: the node on which the change was made sends synchronous messages via RabbitMQ to all other nodes, triggering a cache reload.
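As a generic illustration of this broadcast pattern (not Xray's actual implementation), the sketch below publishes a cache-invalidation event to a RabbitMQ fanout exchange using the Python pika client; the exchange name, message body, and connection details are all hypothetical.

```python
import pika

# Hypothetical sketch of broadcast cache invalidation over RabbitMQ.
# Exchange and message names are illustrative, not Xray's internal naming.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A fanout exchange delivers every message to all bound queues,
# so every node that binds a queue sees the invalidation event.
channel.exchange_declare(exchange="cache_invalidation", exchange_type="fanout")

# Publisher side: the node on which a watch changed broadcasts the event.
channel.basic_publish(exchange="cache_invalidation", routing_key="", body=b"watches")

# Consumer side (runs on every node): bind a private queue and reload
# the relevant cache whenever an event arrives.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="cache_invalidation", queue=result.method.queue)

def on_invalidation(ch, method, properties, body):
    print(f"reloading cache: {body.decode()}")  # stand-in for the actual reload

channel.basic_consume(queue=result.method.queue,
                      on_message_callback=on_invalidation,
                      auto_ack=True)
channel.start_consuming()
```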
Scheduled Jobs
Xray uses RabbitMQ to ensure that periodic jobs are run by only a single node in the cluster, rather than by all nodes. The executing node may change from run to run; however, Xray ensures that every scheduled job is executed only once.
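A generic sketch of this single-execution pattern (again, not Xray's actual implementation) is a shared RabbitMQ work queue with competing consumers: each scheduled run is published once, every node consumes from the same queue, and RabbitMQ delivers each message to exactly one consumer. The queue and job names below are hypothetical.

```python
import pika

# Hypothetical competing-consumers sketch for single-node job execution.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# One durable queue shared by all cluster nodes.
channel.queue_declare(queue="scheduled_jobs", durable=True)

# Scheduler side: each due job is published exactly once.
channel.basic_publish(exchange="", routing_key="scheduled_jobs", body=b"db_sync")

# Worker side (runs on every node): RabbitMQ delivers each message to
# only one consumer, so each job executes on a single node.
def run_job(ch, method, properties, body):
    print(f"running job: {body.decode()}")  # stand-in for the actual job
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="scheduled_jobs", on_message_callback=run_job)
channel.start_consuming()
```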