Upgrading Mission Control

JFrog Installation & Setup Documentation


Overview

The procedure to upgrade Mission Control depends on your installation type. We strongly recommend reading through this page before proceeding with your upgrade.

Mission Control is Moving to Artifactory as a Service

From JFrog Artifactory version 7.27, Mission Control is integrated directly into Artifactory as a service. You no longer need to install Mission Control to use the features it provides; you only need to enable the service in Artifactory.

The metrics capabilities that were provided by Mission Control will now be provided through JFrog Insight. To learn more about how to install Insight, see Installing Insight.

To learn more about how Mission Control has been integrated into Artifactory and how to migrate to the Mission Control microservice, see Migrating Platform Deployments and License Buckets.

You must install JFrog Insight to use trends and charts after you migrate to the Mission Control microservice. For more information, see Migrating from Mission Control to Insight.

If you wish to continue using Mission Control, refer to Mission Control Requirements and Supported Platforms for Mission Control before upgrading.

Note

Make sure to use the same upgrade method (RPM, Debian, Docker, etc.) as the one you initially used to install Mission Control.

Upgrading to version 4.x for the first time?

It is recommended that you first review what's new with the latest JFrog Platform, including breaking changes, deprecated features, and more. See What's New: Self-Hosted.

Before You Proceed

JFrog Mission Control 4.x can only be installed as part of the JFrog Platform Deployment installation. Make sure this installation is complete before continuing.

Default Home Directory

The default Mission Control home directory is defined according to the installation type. For additional details, see the JFrog Product Directory Structure page.

Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product.
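
For example, on a Linux installation rooted at /opt/jfrog, you might set it as follows (the path is illustrative; substitute your actual installation root):

    # Point JFROG_HOME at the root directory containing the deployed product
    export JFROG_HOME=/opt/jfrog
    # Mission Control data, configuration, and logs then live under mc/var
    ls $JFROG_HOME/mc/var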

Setting Up High Availability on Mission Control

High Availability configuration for Mission Control requires a cluster of 3 or more active/active nodes on the same LAN.

Upgrading from Versions below 3.5.1

Warning

Before you upgrade, ensure that the operating system version that you use is supported. See System Requirements for detailed information on operating system support.

To upgrade from version 3.5 and below, you first need to upgrade to version 3.5.1, and then continue to upgrade from version 3.5.1 to 4.x.

Upgrading from version 3.5 and below to 4.x is not supported.

Upgrading from Version 3.5.1 to 4.x

Warning

Before you upgrade, ensure that the operating system version that you use is supported. See System Requirements for detailed information on operating system support.

JFrog Mission Control v4.x is only compatible with JFrog Artifactory v7.x. To upgrade, you must first install JFrog Artifactory 7.x. For more information, see Installing Artifactory.

There are several new concepts introduced in Mission Control 4.x that improve the installation and customization process. For more information, see What's New: Self-Hosted.

To upgrade to version 4.x, you'll first need to unpack the Mission Control installer archive without installing the services, and then export and import your licenses using the migration procedure below.

Note

When using Mission Control releases prior to 4.7.5, and using the Export/Import functionality to duplicate or replicate a Mission Control instance, the license buckets must be loaded manually post-import, as they are not included in the export. For more information, see the Mission Control Release Notes.

Warning

Data other than your licenses, such as your service information and insights, will not be available after the upgrade.

  1. Download Mission Control.

  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<compose|rpm|deb>

    .env file included within the Docker Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.
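
    For example, a minimal backup before starting the upgrade (the backup location is illustrative):

    # .env is a dot file; use ls -a to see it
    cp .env /path/to/backup/.env.backup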

  3. Copy the jfmcDataExport.sh migration script from the <extracted folder>/third-party/postgresql/ directory to the /tmp directory of the machine (or container) hosting your Mission Control v3.5.1 database.

    For a Linux Archive installation, the script is located at <extract-folder>/app/third-party/postgresql.

  4. Run the following commands on the machine (or container) hosting the Mission Control v3.5.1 database.

    Native Postgres Installation

    chown postgres:postgres /tmp/jfmcDataExport.sh
    cd /tmp/
    su postgres -c "POSTGRES_PATH=/opt/PostgreSQL/9.6/bin PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"

    Postgres in Docker container

    docker exec -it <postgres_container_id> bash
    su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/9.6/bin PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"
    docker cp <postgres_container_id>:/tmp/jfmcDataExport.tar.gz /tmp/jfmcDataExport.tar.gz
    # If database host is different from JFrog Mission Control host,
    # Then copy /tmp/jfmcDataExport.tar.gz from database host to JFrog Mission Control host (e.g. with scp)

    Command-line options

    --host=HOST           database server host (default: "127.0.0.1")
    --port=PORT           database server port (default: "5432")
    --user=USER           database user name (default: "jfmc")
    --database=DATABASE   database name to connect to (default: "mission_control")
    --schema=SCHEMA       database schema name to connect to (default: "jfmc_server")
    --output=OUTPUT       path to output dir where jfmcDataExport.tar.gz will be created (default: ".")
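
    For example, a hedged invocation combining these options for a remote database (the host and credentials are illustrative):

    su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/9.6/bin PGPASSWORD=password \
      bash /tmp/jfmcDataExport.sh --host=10.0.0.12 --port=5432 --user=jfmc \
      --database=mission_control --schema=jfmc_server --output=/tmp"
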
  5. Check the output of the above command.

    2019-10-28T16:13:18.277Z [shell] [INFO ] [] [jfmcDataExport.sh:425 ] [main] - Exporting license buckets...
    2019-10-28T16:13:18.313Z [shell] [INFO ] [] [jfmcDataExport.sh:428 ] [main] - Exporting managed licenses...
    2019-10-28T16:13:18.349Z [shell] [INFO ] [] [jfmcDataExport.sh:419 ] [main] - Bundling exported data...
    2019-10-28T16:13:18.365Z [shell] [INFO ] [] [jfmcDataExport.sh:421 ] [main] - Mission Control data dumped to: ./jfmcDataExport.tar.gz
    
  6. Remove the old package. For RPM and Debian installations, run the commands below to erase the old package. For the ZIP installation, stop the services and proceed.

    Note: This step is only needed when you are installing Mission Control 4.x on the same server where the old version was running.

    RPM - Remove old packages

    # Uninstall Mission Control
    yum remove jfmc
    rm -fr /var/opt/jfrog/mission-control
     
    # Uninstall PostgreSQL
    /opt/PostgreSQL/9.6/uninstall-postgresql
    rm -fr /var/opt/postgres
    #For SystemD systems
    rm -fr /lib/systemd/system/postgresql-9.6.service /etc/systemd/system/multi-user.target.wants/postgresql-9.6.service
    systemctl daemon-reload
    systemctl reset-failed
    #For SystemV systems
     
     
    # Uninstall Elasticsearch
    yum remove -y elasticsearch-oss
    rm -fr /etc/elasticsearch
    rm -fr /usr/share/elasticsearch

    Docker Installations - stop and remove the containers

    # For docker-compose installations
    docker-compose -f ./jfmc-compose.json -p jfmc down
     
    # For docker installations
    mission-control stop
    docker ps -a --format '{{.Names}}' | grep '^jfmc_' | xargs docker rm -f

    Debian - Remove old package

    # Uninstall Mission Control
    apt-get purge jfmc
    rm -fr /var/opt/jfrog/mission-control
     
    # Uninstall PostgreSQL
    /opt/PostgreSQL/9.6/uninstall-postgresql
    rm -fr /var/opt/postgres
    rm -rf /var/spool/mail/postgres
    rm -rf /opt/PostgreSQL
    rm -rf /tmp/postgresql_installer_*
    rm -rf /etc/selinux/targeted/active/modules/100/postgresql
    #For SystemD systems
    rm -fr /lib/systemd/system/postgresql-9.6.service /etc/systemd/system/multi-user.target.wants/postgresql-9.6.service
    systemctl daemon-reload
    systemctl reset-failed
    #For SystemV systems
    rm -rf  /etc/init.d/postgres-9.6
     
     
    # Uninstall Elasticsearch
    apt-get purge elasticsearch-oss
    rm -fr /etc/elasticsearch
    rm -fr /usr/share/elasticsearch
  7. Install Mission Control.

  8. Copy the exported data.

    mkdir -p $JFROG_HOME/mc/var/bootstrap/mc
    cp /tmp/jfmcDataExport.tar.gz $JFROG_HOME/mc/var/bootstrap/mc
     
    # NOTE : The following is needed only for docker-compose installer
    chown -R 1050:1050 $JFROG_HOME/mc/var/bootstrap
  9. Restart Mission Control.

  10. Validate that the import was successful. The file should be renamed to jfmcDataExport.tar.gz.done; it will be renamed to jfmcDataExport.tar.gz.failed if the import procedure failed.
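
    A quick way to check the result from the shell (paths follow step 8):

    ls $JFROG_HOME/mc/var/bootstrap/mc/
    # jfmcDataExport.tar.gz.done   -> import succeeded
    # jfmcDataExport.tar.gz.failed -> import failed; check the log in the next step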

  11. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Upgrading from Version 4.x to 4.x

Upgrading to Mission Control 4.6.x

Upgrading to Mission Control 4.6.x requires Artifactory 7.11.x. To enable the new metrics and trends, you must perform the following:

  1. Stop Artifactory and Mission Control services.

  2. Upgrade to Artifactory 7.11.x and to Mission Control 4.6.x.

  3. Update the Artifactory System YAML with the Elasticsearch URL, username, and password. For more information, see Enabling Trends in Topology and Trends. A sketch of the relevant section appears after this list.

  4. Restart Artifactory for the changes to take effect.
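
A sketch of the Elasticsearch connection details in the Artifactory System YAML (key names and values are illustrative; see Topology and Trends for the authoritative reference):

    shared:
       elasticsearch:
           url: http://<elasticsearch_host>:9200
           username: <username>
           password: <password>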

The following upgrade methods are supported:

Note

When you upgrade a Mission Control high availability cluster, ensure that you trigger the upgrade process on all the nodes simultaneously.

Interactive Script Upgrade (Recommended)

This method supports all install types, including Docker Compose, RPM, and Debian.

  1. Stop the service.

    systemd OS

    systemctl stop mc
    

    systemv OS

    service mc stop

    Docker Compose

    cd jfrog-mc-<version>-compose
    docker-compose -p mc down
  2. Extract the contents of the compressed archive and go to the extracted folder.

    Note: Make sure to merge your customizations from your current docker-compose.yaml file into the newly extracted version of the docker-compose.yaml file.

    tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<compose|rpm|deb>

    Note

    Copy the contents of the .env file from the previous installation to the newly created .env file in this archive, except for the version variables, as carrying over old versions will affect the upgrade.
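
    For example, to review the differences before merging (the path to the previous installation is illustrative):

    # Carry over your custom values, but keep the version variables
    # from the newly extracted .env
    diff /path/to/previous/jfrog-mc-compose/.env .env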

  3. Run the installer script.

    Note: If needed, the script will prompt you with a series of mandatory inputs, including the jfrogURL (custom base URL) and joinKey.

    Compose

    ./config.sh

    RPM/DEB

    ./install.sh
  4. Start and manage the Mission Control service.

    systemd OS

    systemctl start|stop mc

    systemv OS

    service mc start|stop

    Docker Compose

    cd jfrog-mc-<version>-compose
    docker-compose -p mc up -d
    docker-compose -p mc ps
    docker-compose -p mc down
  5. Access Mission Control from your browser at http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check Mission Control Log.

    tail -f $JFROG_HOME/mc/var/log/console.log
Manual RPM/Debian Upgrade
  1. Stop the current server.

    systemd OS

    systemctl stop mc

    systemv OS

    service mc stop
  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-<rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<rpm|deb>
  3. Configure Elasticsearch.

    Note

    If you are upgrading from Mission Control version 4.5.x and lower, you need to upgrade Elasticsearch. This package can be located in the extracted contents at jfrog-mc-<version>-<rpm|deb>/third-party/elasticsearch/elasticsearch-oss-<version>.<rpm|deb>. For upgrade steps, refer to the Elasticsearch documentation.
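
    For example, on an RPM-based system, the packaged upgrade might be applied as follows (a sketch only; follow the Elasticsearch documentation for the full procedure):

    rpm -U jfrog-mc-<version>-rpm/third-party/elasticsearch/elasticsearch-oss-<version>.rpm
    systemctl restart elasticsearch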

  4. When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; the Search Guard step below (step 5) can then be skipped.

    shared:
       elasticsearch:
           external: true
  5. It is recommended to install the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.

    1. The Search Guard package can be located in the extracted contents at jfrog-mc-<version>-<rpm|deb>/third-party/elasticsearch/search-guard-<version>.zip.

      For installation steps, refer to Search Guard documentation.

    2. Add an admin user to Search Guard which will ensure authenticated communication with Elasticsearch.

      1. The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Also add the username and password to the Shared Configurations section of the system.yaml file (see step 4 above); a sketch appears at the end of this step.

        /etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
         
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
            hash: "<hashed_password>"
            backend_roles:
               - "admin"
            description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

    3. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.

      1. Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

        sg_config:
               dynamic:
                  http:
                     anonymous_auth_enabled: true #set this to true
      2. Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig.

        sg_anonymous:
            backend_roles:
                - sg_anonymous_backendrole
      3. Add this snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig.

        sg_anonymous:
          cluster_permissions:
            - cluster:monitor/health
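
    Putting it together, a sketch of the resulting Shared Configurations entry (from step 4) and a quick check of the anonymous health endpoint. Key names, host, and port are assumed defaults:

    shared:
       elasticsearch:
           username: <username>
           password: <clear_text_password>

    # Verify that the cluster health endpoint answers without authentication
    curl -s http://localhost:9200/_cluster/health
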
  6. Install Mission Control as a service, as a root user, using the relevant package for your distribution.

    rpm

    yum -y install ./mc/mc.rpm

    Debian

    dpkg -i ./mc/mc.deb
  7. Set the Artifactory connection details.

  8. Start and manage Mission Control.

    service mc start|stop
  9. Access Mission Control from your browser at http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  10. Check Mission Control Log.

    tail -f $JFROG_HOME/mc/var/log/console.log
Linux Archive Upgrade
  1. Stop the current server.

    Stop Mission Control

    cd $JFROG_HOME/mc/app/bin
    ./mc.sh stop
  2. Configure Elasticsearch.

    Note

    If you are upgrading from Mission Control version 4.5.x and lower, you need to upgrade Elasticsearch. This package can be located in the extracted contents at mc/app/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz. For upgrade steps, refer to the Elasticsearch documentation.

  3. When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; the Search Guard step below (step 4) can then be skipped.

    shared:
       elasticsearch:
           external: true
  4. It is recommended to install the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.

    1. The Search Guard package can be located in the extracted contents at mc/app/third-party/elasticsearch/search-guard-<version>.zip.

      For installation steps, refer to Search Guard documentation.

    2. Add an admin user to Search Guard which will ensure authenticated communication with Elasticsearch.

      1. The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Also add the username and password to the Shared Configurations section of the system.yaml file (see step 3 above); a sketch appears at the end of this step.

        /etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
         
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
            hash: "<hashed_password>"
            backend_roles:
               - "admin"
            description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

    3. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.

      1. Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

        sg_config:
               dynamic:
                  http:
                     anonymous_auth_enabled: true #set this to true
      2. Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig.

        sg_anonymous:
            backend_roles:
                - sg_anonymous_backendrole
      3. Add this snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig.

        sg_anonymous:
          cluster_permissions:
            - cluster:monitor/health
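
    As in the RPM/Debian section, a sketch of the resulting Shared Configurations entry (from step 3) and a health-endpoint check. Key names, host, and port are assumed defaults:

    shared:
       elasticsearch:
           username: <username>
           password: <clear_text_password>

    # Verify that the cluster health endpoint answers without authentication
    curl -s http://localhost:9200/_cluster/health
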
  5. Extract the contents of the compressed archive and go to the extracted folder.

    Untar

    tar -xvf jfrog-mc-<version>-linux.tar.gz
    
  6. Replace the existing $JFROG_HOME/mc/app with the new app folder.

    Upgrade

    # Export variables to simplify commands
    export JFROG_HOME=/opt/jfrog
    export JF_NEW_VERSION=/opt/jfrog/mc-4.x
     
    # Remove app
    rm -rf $JFROG_HOME/mc/app
     
    # Copy new app
    cp -r $JF_NEW_VERSION/app $JFROG_HOME/mc
     
    # Remove extracted new version
    rm -rf $JF_NEW_VERSION
  7. Run the migration script to remove old service directories.

    Run the migration script with the same privileges as your current Mission Control installation. The script removes the old service directories and redundant service YAML files in the router, and translates your current configurations to the new configuration format, according to the new file system layout.

    # The JFROG_HOME variable must point to the new installation
    export JFROG_HOME=<Full path to jfrog directory, for example: /opt/jfrog>
    cd $JFROG_HOME/mc/app/bin
    ./migrate.sh

    Check that the migration completed successfully by reviewing the following files:

    - Migration log: $JFROG_HOME/mc/var/log/migration.log
    - system.yaml configuration: $JFROG_HOME/mc/var/etc/system.yaml
      This newly created file will contain your current custom configurations in the new format.
  8. Manage Mission Control.

    $JFROG_HOME/mc/app/bin/mc.sh start|stop
  9. Access Mission Control from your browser at http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  10. Check Mission Control Log.

    tail -f $JFROG_HOME/mc/var/log/console.log
Helm Upgrade

Once you have a new chart version, you can update your deployment.
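
For example, using the standard Helm 3 workflow to pick up the latest chart version (a sketch; the jfrog chart repository is assumed to be configured already):

    helm repo update
    helm search repo jfrog/mission-control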

Non-Compatible Upgrades

In cases where a new version is not compatible with the existing deployed version (see the relevant Changelog), you will need to do the following:

  • Deploy a new version alongside the old version (and set a new release name)

  • Copy configurations and data from the old deployment to the new one

Note

Downtime is required to perform an upgrade.

Data export is done with a migration script called jfmcDataExport.sh (available under the files directory in the Mission Control chart).

To upgrade:

  1. Verify that you have upgraded Artifactory to v7.x. For more information, see Helm Upgrade.

  2. Update the existing deployed version to the updated version.

    helm upgrade mission-control jfrog/mission-control
  3. Stop the old Mission Control pod by scaling its replicas down to 0 (PostgreSQL remains in place).

    $ kubectl scale statefulsets <OLD_RELEASE_NAME>-mission-control --replicas=0
  4. Export data from the old PostgreSQL instance in the following way.

    1. Connect to the old PostgreSQL pod (you can get the name by running kubectl get pods).

      $ kubectl exec -it <OLD_RELEASE_NAME>-postgresql bash
    2. Copy the jfmcDataExport.sh file into the pod and run the following commands.

      $ kubectl cp ./jfmcDataExport.sh <OLD_RELEASE_NAME>-postgresql:/tmp/jfmcDataExport.sh
      $ chown postgres:postgres /tmp/jfmcDataExport.sh
      $ su postgres -c "PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"

      Note: On 2.x charts, where the operating system user postgres does not exist, run ./jfmcDataExport.sh --output=/tmp and provide the jfmc user password.

    3. Copy the exported file to your local system.

      $ kubectl cp <OLD_RELEASE_NAME>-postgresql:/tmp/jfmcDataExport.tar.gz ./jfmcDataExport.tar.gz
  5. Install the new Mission Control and copy the exported file.

    1. Run helm install for the new version, using a new release name (for example, mission-control-new).

    2. Copy the exported tar file to the new Mission Control pod.

      $ kubectl cp ./jfmcDataExport.tar.gz <NEW_RELEASE_NAME>-mission-control:/opt/jfrog/mc/var/bootstrap/mc/jfmcDataExport.tar.gz -c mission-control
    3. Restart the new Mission Control pod.

    4. Validate that the import was successful. The file should be renamed to jfmcDataExport.tar.gz.done (it will be renamed to jfmcDataExport.tar.gz.failed if the import procedure failed).

  6. Run the following command to remove the old Mission Control deployment and Helm release.

    helm delete <OLD_RELEASE_NAME>
  7. Access Mission Control from your browser at http://<jfrogUrl>/ui/, then go to the Dashboard tab in the Application module in the UI.

  8. Check the status of your deployed Helm releases.

    helm status mission-control

    Mission Control should now be available.