To upgrade from version 2.6 or below, you first need to upgrade to version 2.7.x as described in the Upgrading Xray 2 documentation, and then continue the upgrade from version 2.7.x to 3.x.
From version 3.x, MongoDB is no longer used by Xray, except during the initial migration phase from version 2.x. Data is automatically migrated from MongoDB to PostgreSQL from version 2.7.x and above. After upgrading to version 2.7.x, you must ensure that all data migrations are complete before proceeding to upgrade to version 3.x. The xray-migration-readiness tool enables you to verify that all data migrations are complete. Download the tool and follow the instructions in its readme file.
JFrog Xray v3.x is only compatible with JFrog Artifactory v7.x. To upgrade, you must first install JFrog Artifactory 7.x.
The following upgrade methods are supported:
The installer script works with all supported upgrade methods (RPM, Debian, and Docker Compose). It provides an interactive way to upgrade Xray and its dependencies.
Stop the service.
```shell
xray stop
docker ps -a --format '{{.Names}}' | grep ^xray_* | xargs docker rm -f
```
```shell
cd /opt/jfrog/xray/scripts
./xray.sh stop
```
Extract the contents of the compressed archive and go to the extracted folder. The installer script is located in the extracted folder.
```shell
tar -xvf jfrog-xray-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-xray-<version>-<compose|rpm|deb>
```
This .env file is used by docker-compose and is updated during installations and upgrades. Note that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.
Run the installer script.
Note: the script will prompt you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and joinKey.

```shell
./config.sh
```
```shell
./install.sh
```
Check that the migration has completed successfully by reviewing the following files:
- Migration log file: $JFROG_HOME/xray/var/log/migration.log
- system.yaml configuration: $JFROG_HOME/xray/var/etc/system.yaml. This newly created file contains your current custom configurations in the new format.

Ensure that a large file handle limit is specified before you start Xray.
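The current limit can be checked from the shell before starting Xray; a quick sketch (the 100000 figure below is a commonly used value, not an official requirement):

```shell
# Print the current soft limit on open file descriptors for this shell.
ulimit -n
# To raise it persistently for the user running Xray, an entry like the
# following is typically added to /etc/security/limits.conf (illustrative):
#   xray soft nofile 100000
#   xray hard nofile 100000
```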
Start the Xray service.
```shell
systemctl start xray.service
```
Starting from Xray 3.8.x, the stop and restart actions on Xray are not applied to the RabbitMQ process. On the start action, if RabbitMQ is not running, it will be started. If you want the script to perform stop and restart actions on RabbitMQ as well, set shared.rabbitMq.autoStop to true in the system.yaml. Note that this flag is not consumed in Docker Compose installations.
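As a sketch, the flag from the note above goes under the shared section of system.yaml (merge this into your existing file rather than replacing it):

```yaml
shared:
  rabbitMq:
    autoStop: true
```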
```shell
systemctl start xray
```
```shell
cd jfrog-xray-<version>-compose
# Starting from Xray 3.8.x, RabbitMQ has been moved to a compose file of its
# own, and needs to be started before the other services.
docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d
# Starting from Xray 3.8.x, PostgreSQL needs to be started before the other services.
docker-compose -p xray-postgres -f docker-compose-postgres-9-5-2v.yaml up -d
docker-compose -p xray up -d
docker-compose -p xray ps
docker-compose -p xray down
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.

```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
After the migration has completed successfully, it is recommended to complete the following steps:
The RPM upgrade bundles Xray and all its dependencies. It is provided as native RPM packages, in which Xray and its dependencies are installed separately. Use this method if you are automating installations.
Stop the current service.

```shell
cd /opt/jfrog/xray/scripts
./xray.sh stop
```
Extract the contents of the compressed archive and go to the extracted folder.
```shell
tar -xvf jfrog-xray-<version>-rpm.tar.gz
cd jfrog-xray-<version>-rpm
```
Install Xray as a service on Red Hat compatible Linux distributions, as a root user.
```shell
yum -y install ./xray/xray.rpm
```
Check that the migration has completed successfully by reviewing the following files:
- Migration log file: $JFROG_HOME/xray/var/log/migration.log
- system.yaml configuration: $JFROG_HOME/xray/var/etc/system.yaml. This newly created file contains your current custom configurations in the new format.

Ensure that a large file handle limit is specified before you start Xray.
Make sure the third-party services are running.

If the PostgreSQL database that was packaged as part of 2.x is installed, it will be used in the current installation. It can be managed using the following commands:
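The bundled PostgreSQL runs as a regular system service, so typical management commands look like the following (the exact unit name depends on the PostgreSQL version installed with 2.x; postgresql-<version> below is a placeholder):

```shell
systemctl status postgresql-<version>
systemctl stop postgresql-<version>
systemctl start postgresql-<version>
```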
From version 3.x, RabbitMQ is packaged and managed as part of the Xray RPM. Any action (stop, start, status) on the main Xray service is performed on RabbitMQ as well. The existing RabbitMQ RPM that was installed as part of 2.x can be uninstalled after Xray 3.x is successfully installed and running.
From version 3.x, MongoDB is not used by Xray except during the migration phase. On the first start of version 3.x, data is automatically migrated from MongoDB to PostgreSQL. Make sure both databases are up and running before the Xray services are started. During the migration, Xray will not be accessible. The migration duration depends on the size of the data to be migrated.
Start Xray.
```shell
systemctl start xray.service
```

```shell
systemctl start xray
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI.
Check Xray Log.
```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
After the migration has completed successfully, it is recommended to complete the following steps:
The Debian upgrade bundles Xray and all its dependencies. It is provided as native Debian packages, in which Xray and its dependencies are installed separately. Use this method if you are automating installations.
Stop the current server.
```shell
cd /opt/jfrog/xray/scripts
./xray.sh stop
```
Extract the contents of the compressed archive and go to the extracted folder.
```shell
tar -xvf jfrog-xray-<version>-deb.tar.gz
cd jfrog-xray-<version>-deb
```
Install Xray as a service on Debian-compatible Linux distributions, as a root user.

```shell
dpkg -i ./xray/xray.deb
```
Check that the migration has completed successfully by reviewing the following files:
- Migration log file: $JFROG_HOME/xray/var/log/migration.log
- system.yaml configuration: $JFROG_HOME/xray/var/etc/system.yaml. This newly created file contains your current custom configurations in the new format.

Ensure that a large file handle limit is specified before you start Xray.
Set the Artifactory connection details.
Make sure the third-party services are running.

If the PostgreSQL database that was packaged as part of 2.x is installed, it will be used in the current installation. It can be managed using the following commands:
From 3.x, RabbitMQ is packaged and managed as part of the Xray DEB. Any action (stop, start, status) on the main Xray service is performed on RabbitMQ as well. The existing RabbitMQ DEB that was installed as part of 2.x can be uninstalled after Xray 3.x is successfully installed and running.
From 3.x, MongoDB is not used by Xray except during the migration phase. On the first start of 3.x, data is migrated from MongoDB to PostgreSQL. Make sure both databases are up and running before the Xray services are started. MongoDB can be uninstalled after Xray 3.x is successfully installed and running.
Start Xray.
```shell
systemctl start xray.service
```

```shell
systemctl start xray
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.
```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
After the migration has completed successfully, it is recommended to complete the following steps:
This section describes the process to upgrade your Xray High Availability cluster.
The Xray load balancer is no longer required when upgrading an Xray HA cluster from version 2.x to 3.x, as all Xray requests are now routed through the Platform Router in the JFrog Platform.
The following installation methods are supported:
Perform the following steps for each node in your system. When starting up each node, make sure to enter the correct details according to whether it is the first node or an additional node being added to the cluster.

The instructions below assume that you are upgrading from a 2.x official Docker installation.
Stop all the cluster nodes that are set up for HA by running the following command on each node.

```shell
./xray stop all
```
Extract the contents of the compressed archive and go to the extracted folder.
```shell
tar -xvf jfrog-xray-<version>-compose.tar.gz
cd jfrog-xray-<version>-compose
```
This .env file is used by docker-compose and is updated during installations and upgrades. Note that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.
Run the config.sh script to set up folders with the required ownership. Note: the script will prompt you with a series of mandatory inputs, including whether this node is part of a cluster, and will configure the needed system.yaml.

```shell
./config.sh
```
Note: For the first node upgrade, make sure to select "N" when prompted whether you are adding an additional node to an existing product cluster. For the subsequent additional nodes, make sure to select "Y" and provide the Join Key and JFrog URL.
Start the node. Note: Run this command only from the extracted folder.
Manage Xray using docker-compose commands.
```shell
cd jfrog-xray-<version>-compose
# Starting from Xray 3.8.x, PostgreSQL needs to be started before the other services.
# If PostgreSQL 9.5.2 is running, use:
docker-compose -p xray-postgres -f docker-compose-postgres-9-5-2v.yaml up -d
# If PostgreSQL 10.13 is running, use:
docker-compose -p xray-postgres -f docker-compose-postgres-10-13v.yaml up -d
# If PostgreSQL 12.3 is running, use:
docker-compose -p xray-postgres -f docker-compose-postgres.yaml up -d
docker-compose -p xray up -d
docker-compose -p xray ps
docker-compose -p xray down
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.

```shell
docker-compose -p xray logs
```
Stop all the cluster nodes that are set up for HA by running the following command on each node.

```shell
service xray stop
```
Make sure your MongoDB and PostgreSQL databases are running in the background.
Extract the contents of the compressed archive and go to the extracted folder.

```shell
tar -xvf jfrog-xray-<version>-rpm.tar.gz
```

```shell
tar -xvf jfrog-xray-<version>-deb.tar.gz
```
Run the install.sh script to set up folders with the required ownership.

```shell
./install.sh
```
Add the following to the $<PostgreSQL home folder>/data/pg_hba.conf file.

```
host all all 0.0.0.0/0 md5
```
Add the following to the $<PostgreSQL home folder>/data/postgresql.conf file.

```
listen_addresses='*'
```
Restart PostgreSQL.

```shell
service postgresql-<version> stop
service postgresql-<version> start
```
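The two file edits above can be scripted idempotently; a minimal sketch that uses a scratch directory in place of the real PostgreSQL data folder (swap in your actual path):

```shell
# Stand-in for $<PostgreSQL home folder>/data; replace with the real path.
PGDATA="$(mktemp -d)"
touch "$PGDATA/pg_hba.conf" "$PGDATA/postgresql.conf"

# Append a line to a file only if it is not already present verbatim.
append_once() {
  grep -qxF "$2" "$1" || printf '%s\n' "$2" >> "$1"
}

append_once "$PGDATA/pg_hba.conf"     'host all all 0.0.0.0/0 md5'
append_once "$PGDATA/postgresql.conf" "listen_addresses='*'"
# Re-running is harmless: the line is not duplicated.
append_once "$PGDATA/pg_hba.conf"     'host all all 0.0.0.0/0 md5'

grep -c 'host all all' "$PGDATA/pg_hba.conf"   # the rule appears exactly once
```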
From Xray version 3.x, RabbitMQ is packaged and managed as part of the Xray RPM. Any action (stop, start, status) on the main Xray service is performed on RabbitMQ as well.
Start the Xray node.
```shell
systemctl start xray.service
```

```shell
systemctl start xray
```
Manage Xray using the following commands.

```shell
systemctl stop xray.service
```

```shell
service xray stop|status|restart
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.

```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
Stop all the cluster nodes that are set up for HA by running the following command on each node.

```shell
service xray stop
```
Extract the contents of the compressed archive and go to the extracted folder.

```shell
tar -xvf jfrog-xray-<version>-rpm.tar.gz
```

```shell
tar -xvf jfrog-xray-<version>-deb.tar.gz
```
Run the installer script to set up folders with the required ownership.

```shell
./install.sh
```
Modify the system.yaml file located in the $JFROG_HOME/xray/var/etc folder with the following configurations.

```yaml
shared:
  rabbitMq:
    active:
      node:
        name: <node-name>   # use the same name across all subsequent nodes
        ip: <first-node-ip>
```
Start the Xray node.
```shell
systemctl start xray.service
```

```shell
systemctl start xray
```
Manage Xray using the following commands.

```shell
systemctl stop xray.service
```

```shell
service xray stop|status|restart
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.

```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
After the migration has completed successfully, it is recommended to complete the following steps:
The following upgrade methods are supported:
The installer script works with all supported upgrade methods (RPM, Debian, and Docker Compose). It provides an interactive way to upgrade Xray and its dependencies.
Stop the service.

```shell
systemctl stop xray.service
```

```shell
service xray stop
```

```shell
cd jfrog-xray-<version>-compose
docker-compose -p xray down
```
Extract the contents of the compressed archive and go to the extracted folder. The installer script is located in the extracted folder.
Note: For Docker Compose upgrades, make sure to merge any customizations in your current docker-compose.yaml file into the newly extracted version of the docker-compose.yaml file.

```shell
tar -xvf jfrog-xray-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-xray-<version>-<compose|rpm|deb>
```
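One way to spot customizations that need to be carried over is to diff the two compose files before overwriting anything (the path to the previous file below is illustrative):

```shell
diff -u /path/to/previous/docker-compose.yaml jfrog-xray-<version>-compose/docker-compose.yaml
```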
Copy the contents of the
Run the installer script.
If needed, the script will prompt you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and joinKey.

```shell
./config.sh
```

```shell
./install.sh
```
Start the Xray service.

```shell
systemctl start xray.service
```

```shell
systemctl start xray
```

```shell
cd jfrog-xray-<version>-compose
docker-compose -p xray up -d
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the Xray log.

```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
Download Xray (RPM or Debian)
Stop the current server.

```shell
systemctl stop xray.service
```

```shell
service xray stop
```
Extract the contents of the compressed archive and go to the extracted folder.
```shell
tar -xvf jfrog-xray-<version>-<rpm|deb>.tar.gz
cd jfrog-xray-<version>-<rpm|deb>
```
Starting from Xray 3.8.x, the RabbitMQ bundled with Xray has been upgraded, and it requires an upgrade of the Erlang library. This library can be found in $JFROG_HOME/xray/app/third-party. For more information, see Installing Erlang.
To upgrade the dependencies for RabbitMQ, you will need to upgrade the Erlang library.
Install Xray as a service, as a root user, on Red Hat-compatible (RPM) or Debian-compatible (Debian) Linux distributions.

```shell
yum -y install ./xray/xray.rpm
```

```shell
dpkg -i ./xray/xray.deb
```
Start Xray.

```shell
systemctl start xray.service
```

```shell
systemctl start xray
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI.
Check Xray Log.
```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
Remember to back up the RabbitMQ password and to add it back to the
Stop the current server.

```shell
cd $JFROG_HOME/xray/app/bin
./xray.sh stop
```
Extract the contents of the compressed archive and go to the extracted folder.

```shell
mv jfrog-xray-<version>-linux.tar.gz /opt/jfrog/
cd /opt/jfrog
tar -xf jfrog-xray-<version>-linux.tar.gz
```
Starting from Xray 3.8.x, the RabbitMQ bundled with Xray has been upgraded, and it requires an upgrade of the Erlang library. This library can be found in $JFROG_HOME/xray/app/third-party. For more information, see Installing Erlang.
Replace the existing $JFROG_HOME/xray/app with the new app folder.
```shell
# Export variables to simplify commands
export JFROG_HOME=/opt/jfrog
export JF_NEW_VERSION=/opt/jfrog/jfrog-xray-<version>-linux
# Remove the old app
rm -rf $JFROG_HOME/xray/app
# Copy the new app
cp -fr $JF_NEW_VERSION/app $JFROG_HOME/xray/
# Remove the extracted new version
rm -rf $JF_NEW_VERSION
```
Manage Xray.
```shell
$JFROG_HOME/xray/app/bin/xray.sh start|stop
```
Check Xray Log.
```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```
When upgrading from Xray 3.x to 3.x charts, there are breaking changes. Also, when upgrading from Xray 3.x to 4.x charts, there are breaking RabbitMQ changes.
Downtime is required to perform an upgrade.
To upgrade Xray:
Update the existing deployed version to the new version.

```shell
helm upgrade <myrelease> center/jfrog/xray --set common.xrayVersion=[version number]
```
If Xray was installed without providing a value to postgresql.postgresqlPassword (the password was autogenerated), follow these instructions.
Get the current password by running the following.
```shell
POSTGRES_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
```
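Kubernetes stores secret values base64-encoded, which is why the command pipes through base64 --decode; the round trip can be sanity-checked locally with a dummy value:

```shell
# Encode a dummy password the way it would appear inside a Secret,
# then decode it the same way the command above does.
ENCODED=$(printf 'my-pg-password' | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"   # prints the original value
```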
Upgrade the release by passing the previously auto-generated secret.
```shell
helm upgrade <myrelease> center/jfrog/xray --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD}
```
If Xray was installed without providing a value to rabbitmq.rabbitmqPassword (or rabbitmq-ha.rabbitmqPassword, depending on the chart in use) and the password was autogenerated, follow these instructions.
Get the current password by running the following.
```shell
RABBITMQ_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
```
Upgrade the release by passing the previously auto-generated secret.
```shell
helm upgrade <myrelease> center/jfrog/xray --set rabbitmq.rabbitmqPassword=${RABBITMQ_PASSWORD}
# or, when using the rabbitmq-ha chart:
helm upgrade <myrelease> center/jfrog/xray --set rabbitmq-ha.rabbitmqPassword=${RABBITMQ_PASSWORD}
```
Upgrade the release by passing the previously auto-generated secrets.
```shell
helm upgrade --install xray --namespace xray center/jfrog/xray \
  --set rabbitmq-ha.rabbitmqPassword=<rabbit-password> \
  --set postgresql.post
```
Access Xray from your browser at http://<jfrogUrl>/ui/, then go to the Security & Compliance tab in the Application module in the UI. Check the status of your deployed helm releases.

```shell
helm status xray
```
The recommended migration process has two main steps:
Get the service names OLD_PG_SERVICE_NAME and OLD_MONGO_SERVICE_NAME using the command below. In the example output, the OLD_PG_SERVICE_NAME and OLD_MONGO_SERVICE_NAME values are <OLD_RELEASE_NAME>-postgresql and <OLD_RELEASE_NAME>-mongodb respectively.
```shell
$ kubectl get svc
NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
<OLD_RELEASE_NAME>-mongodb                 ClusterIP      10.101.56.69     <none>        27017/TCP                     114m
<OLD_RELEASE_NAME>-postgresql              ClusterIP      10.101.250.74    <none>        5432/TCP                      114m
<OLD_RELEASE_NAME>-rabbitmq-ha             ClusterIP      None             <none>        15672/TCP,5672/TCP,4369/TCP   114m
<OLD_RELEASE_NAME>-rabbitmq-ha-discovery   ClusterIP      None             <none>        15672/TCP,5672/TCP,4369/TCP   114m
<OLD_RELEASE_NAME>-xray-analysis           ClusterIP      10.104.138.63    <none>        7000/TCP                      114m
<OLD_RELEASE_NAME>-xray-indexer            ClusterIP      10.106.72.163    <none>        7002/TCP                      114m
<OLD_RELEASE_NAME>-xray-persist            ClusterIP      10.103.20.33     <none>        7003/TCP                      114m
<OLD_RELEASE_NAME>-xray-server             LoadBalancer   10.105.121.175   <pending>     80:32326/TCP                  114m
```
Save the previous passwords OLD_PG_PASSWORD and OLD_MONGO_PASSWORD, or extract them from the secrets of the existing PostgreSQL and MongoDB pods.
```shell
# Example:
OLD_PG_PASSWORD=$(kubectl get secret -n <namespace> <OLD_RELEASE_NAME>-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
OLD_MONGO_PASSWORD=$(kubectl get secret -n <namespace> <OLD_RELEASE_NAME>-mongodb -o jsonpath="{.data.mongodb-password}" | base64 --decode)
```
Stop the old Xray pods (scale down replicas to 0). The PostgreSQL and MongoDB pods will still remain active.
```shell
$ kubectl scale statefulsets \
    <REPLACE_OLD_RELEASE_NAME>-rabbitmq-ha \
    <REPLACE_OLD_RELEASE_NAME>-xray-analysis \
    <REPLACE_OLD_RELEASE_NAME>-xray-indexer \
    <REPLACE_OLD_RELEASE_NAME>-xray-persist \
    <REPLACE_OLD_RELEASE_NAME>-xray-server \
    --replicas=0
```
Run helm install (not upgrade) with the new version, say xray-new, in the following way.
Verify that all probes are disabled.

```
--set router.livenessProbe.enabled=false
--set router.readinessProbe.enabled=false
--set indexer.livenessProbe.enabled=false
--set indexer.readinessProbe.enabled=false
--set analysis.livenessProbe.enabled=false
--set analysis.readinessProbe.enabled=false
--set server.livenessProbe.enabled=false
--set server.readinessProbe.enabled=false
--set persist.livenessProbe.enabled=false
--set persist.readinessProbe.enabled=false
```
Point to the previous PostgreSQL pod (user, password, database).

```
--set postgresql.enabled=false
--set database.user=<OLD_PG_USERNAME>
--set database.password=<OLD_PG_PASSWORD>
--set database.url="postgres://<SERVICE_NAME_POSTGRES>:5432/xraydb?sslmode=disable"
```
Point to the previous MongoDB pod (user, password, database).

```
--set xray.mongoUsername=<OLD_MONGO_USERNAME>
--set xray.mongoPassword=<OLD_MONGO_PASSWORD>
--set xray.mongoUrl="mongodb://<SERVICE_NAME_MONGODB>:27017/?authSource=xray&authMechanism=SCRAM-SHA-1"
```
This will trigger the migration process, as in the example below.

```yaml
# Create a customvalues.yaml file
router:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
indexer:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
analysis:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
server:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
persist:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
postgresql:
  enabled: false
database:
  user: <OLD_PG_USERNAME>
  password: <OLD_PG_PASSWORD>
  url: "postgres://<SERVICE_NAME_POSTGRES>:5432/xraydb?sslmode=disable"
xray:
  mongoUsername: <OLD_MONGO_USERNAME>
  mongoPassword: <OLD_MONGO_PASSWORD>
  mongoUrl: "mongodb://<SERVICE_NAME_MONGODB>:27017/?authSource=xray&authMechanism=SCRAM-SHA-1"
  masterKey: <PREVIOUS_MASTER_KEY>
  jfrogUrl: <NEW_ARTIFACTORY_URL>
  joinKey: <JOIN_KEY>
rabbitmq:
  enabled: true
  auth:
    password: <PASSWORD>
rabbitmq-ha:
  enabled: false
```
Apply the customvalues.yaml file during installation.

```shell
helm upgrade --install xray-new center/jfrog/xray -f customvalues.yaml
```
Create a customvalues.yaml file.

```yaml
replicaCount: 0
postgresql:
  postgresqlPassword: <NEW_PG_PASSWORD>
rabbitmq:
  enabled: true
  auth:
    password: <PASSWORD>
rabbitmq-ha:
  enabled: false
xray:
  masterKey: <PREVIOUS_MASTER_KEY>
  jfrogUrl: <NEW_ARTIFACTORY_URL>
  joinKey: <JOIN_KEY>
unifiedUpgradeAllowed: true
databaseUpgradeReady: true
```
Apply the customvalues.yaml file during installation.

```shell
helm upgrade --install xray-new center/jfrog/xray -f customvalues.yaml
```
To migrate PostgreSQL data between the old and new pods:
Connect to the new PostgreSQL pod (you can obtain the name by running kubectl get pods).

```shell
$ kubectl exec -it <NAME> bash
```
Once logged in, create a dump file from the previous database using pg_dump, connecting to the previous PostgreSQL chart.

```shell
$ pg_dump -h <OLD_PG_SERVICE_NAME> -U xray DATABASE_NAME > /tmp/backup.sql
```
After running this command, you will be prompted for a password; this is the previous chart password, OLD_PG_PASSWORD. This operation could take some time depending on the database size.
Once you have the backup file, you can restore it with this command.
```shell
$ psql -U xray DATABASE_NAME < /tmp/backup.sql
```
After running the command above, you will be prompted for a password; this is the current chart password. This operation could take some time depending on the database size.
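To run both steps non-interactively, the standard PGPASSWORD environment variable understood by the PostgreSQL client tools can be used (a sketch; NEW_PG_PASSWORD here stands for the current chart password, and the other placeholders are as in the commands above):

```shell
PGPASSWORD="$OLD_PG_PASSWORD" pg_dump -h <OLD_PG_SERVICE_NAME> -U xray DATABASE_NAME > /tmp/backup.sql
PGPASSWORD="$NEW_PG_PASSWORD" psql -U xray DATABASE_NAME < /tmp/backup.sql
```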
Next, run the upgrade command one final time to start Xray.
```yaml
# Create a customvalues.yaml file
xray:
  masterKey: <PREVIOUS_MASTER_KEY>
  jfrogUrl: <NEW_ARTIFACTORY_URL>
  joinKey: <JOIN_KEY>
rabbitmq:
  enabled: true
  auth:
    password: <PASSWORD>
rabbitmq-ha:
  enabled: false
postgresql:
  postgresqlPassword: <NEW_PG_PASSWORD>
unifiedUpgradeAllowed: true
databaseUpgradeReady: true
```
Apply the values file during the installation.
```shell
helm upgrade --install xray-new center/jfrog/xray -f customvalues.yaml
```
Restore access to the new Xray by running the command below to remove the old Xray deployment and Helm release.

```shell
helm delete <OLD_RELEASE_NAME>
```
Xray should now be ready for use.
The RabbitMQ-HA chart has been removed from the 7.x chart versions; therefore, before upgrading to the 7.x chart versions, you will need to perform the RabbitMQ migration (assuming you are not using the Bitnami RabbitMQ). This section describes the steps for migrating from the RabbitMQ-HA chart to the Bitnami RabbitMQ chart before upgrading to chart version 7.x and above.
For this procedure, you will need to choose whether to migrate without existing queues (assuming that all queues are empty) or to migrate while Xray is down.
This procedure assumes that all queues are empty. While running the helm upgrade, make sure that there are no indexing tasks or Watches running.
Upgrade Xray with Bitnami RabbitMQ (disabling RabbitMQ-HA).
```yaml
rabbitmq-ha:
  enabled: false
rabbitmq:
  enabled: true
  auth:
    username: guest
    password: password
```
This migration option, which requires downtime, is intended for situations where there are unfinished tasks running in Xray, but the migration to Bitnami RabbitMQ is necessary.
Upgrade Xray with both RabbitMQs (RabbitMQ-HA and Bitnami RabbitMQ) and scale down the Xray services to 0 (replicaCount: 0).
Both RabbitMQs should be scaled down to one replica. Both RabbitMQs should have the
```yaml
xray:
  replicaCount: 0
rabbitmq-ha:
  enabled: true
  replicaCount: 1
rabbitmq:
  enabled: true
  replicaCount: 1
  auth:
    username: guest
    password: guest
```
Go into the Bitnami RabbitMQ pod and run the following.
```shell
export OLD_RMQ=rabbit@<RELEASE_NAME>-rabbitmq-ha-0.<RELEASE_NAME>-rabbitmq-ha-discovery.<NAMESPACE_NAME>.svc.cluster.local && \
rabbitmqctl stop_app && \
rabbitmqctl join_cluster $OLD_RMQ && \
rabbitmqctl start_app
```
The process of data synchronization between RabbitMQ-HA and the new Bitnami RabbitMQ node begins. You can check the queue status with rabbitmqctl list_queues. The synchronization status can also be viewed from the RabbitMQ dashboard of the old RabbitMQ (RabbitMQ-HA). When all the data has been synchronized between the cluster nodes, run a helm upgrade to disable RabbitMQ-HA. This removes the old RabbitMQ-HA and brings up the Xray services.
```yaml
xray:
  replicaCount: 1
rabbitmq-ha:
  enabled: false
rabbitmq:
  enabled: true
  replicaCount: 1
  auth:
    username: guest
    password: guest
```
Finally, remove the old node from the Bitnami RabbitMQ.
```shell
rabbitmqctl forget_cluster_node rabbit@<RELEASE_NAME>-rabbitmq-ha-0.<RELEASE_NAME>-rabbitmq-ha-discovery.<NAMESPACE_NAME>.svc.cluster.local
```
Enable Xray in Artifactory.
This section describes the process of upgrading your Xray High Availability cluster using the interactive script. The upgrade supports the following installation types:
Follow these steps to upgrade Xray using the interactive script.
Before running the upgrade on Linux and native installers, you will need to stop RabbitMQ.
For Docker Compose, you will need to run the
Stop the Xray services on all secondary nodes.

You must stop the services on the secondary nodes before stopping the services on the master node.

```shell
$JFROG_HOME/xray/app/bin/xray.sh stop
```

```shell
docker-compose -p xray down
```

```shell
systemctl stop xray
```
Next, stop the Xray services on the master node.

```shell
$JFROG_HOME/xray/app/bin/xray.sh stop
```

```shell
docker-compose -p xray down
```

```shell
systemctl stop xray
```
Replace the existing app folder on all nodes (master and secondary). Upgrade Xray on the master node by running the installer script.
If needed, the script will prompt you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and joinKey.

```shell
./config.sh
```

```shell
./install.sh
```
Start the Xray services on the master node.

```shell
$JFROG_HOME/xray/app/bin/xray.sh start
```

```shell
docker-compose -p xray up -d
```

```shell
systemctl start xray
```
Upgrade Xray on the secondary nodes by running the installer script.

```shell
./config.sh
```

```shell
./install.sh
```
Start the Xray services on the secondary nodes.

```shell
$JFROG_HOME/xray/app/bin/xray.sh start
```

```shell
docker-compose -p xray up -d
```

```shell
systemctl start xray
```