Overview

This page describes the different ways you can install and configure JFrog Mission Control, as a single node or as a high availability cluster. Additional information on high availability can be found here.


Before You Begin

System Requirements

Before installing Mission Control, refer to System Requirements for information on supported platforms, supported browsers, and other requirements.

System Architecture

To learn about the JFrog Platform Deployment, refer to System Architecture.

Installing Mission Control

Before installing Mission Control 4.x, you must first install JFrog Artifactory 7.x.

Installation Steps

The installation procedure involves the following main steps:

  1. Download Mission Control as per your required installer type (Linux Archive, Docker Compose, RPM, Debian).
  2. Install Mission Control either as a single node installation, or high availability cluster.
    1. Install third party dependencies (PostgreSQL and Elasticsearch databases, included in the archive)
    2. Install Mission Control
  3. Configure the service
    1. Connection to Artifactory (joinKey and jfrogUrl)
    2. Additional optional configuration including changing default credentials for databases
  4. Start the Service using the start scripts or OS service management.
  5. Check the Service Log to verify the status of the service.

Default Home Directory

The default Mission Control home directory is defined according to the installation type. For additional details see the Product Directory Structure page.

Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product.

Artifactory Plugins for Mission Control

Mission Control 3.x relied on Artifactory user plugins to manage and monitor Artifactory. From version 4.0, those plugins are no longer used. When upgrading to Artifactory 7.x, those user plugins will be automatically removed from Artifactory. The plugins that will be removed are:

  • propertySetsConfig.groovy
  • haClusterDump.groovy
  • httpSsoConfig.groovy
  • repoLayoutsConfig.groovy
  • ldapGroupsConfig.groovy
  • internalUser.groovy
  • ldapSettingsConfig.groovy
  • pluginsConfig.groovy
  • proxiesConfig.groovy
  • requestRouting.groovy

JFrog Subscription Levels: SELF-HOSTED, ENTERPRISE X, ENTERPRISE+

Single Node Installation

The following installation methods are supported:

Interactive Script Installation (recommended)

The installer script provides an interactive way to install Mission Control and its dependencies. All install types are supported: Docker Compose, Linux Archive, RPM, and Debian. Use this installer for Docker Compose installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<compose|rpm|deb>

    OS user permissions for Linux archive

    When running Mission Control, the installation script creates a user called jfmc by default which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    Linux archive
    mv jfrog-mc-<version>-linux.tar.gz /opt/
    cd /opt
    tar -xf jfrog-mc-<version>-linux.tar.gz
    mv jfrog-mc-<version>-linux mc
    cd mc

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you've made any changes to the file, remember to back it up before an upgrade.
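
    If you need a backup, a simple copy before the upgrade is enough; for example (the backup file name is just an illustration):

    cp .env .env.backup.$(date +%F)   # dot files may be hidden; use ls -a to verify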

  3. Run the installer script.
    Note: the script will prompt you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey.

    Docker Compose
    ./config.sh
    RPM/DEB
    ./install.sh

    Prerequisites for Linux archive

    Refer to the prerequisites for Mission Control in the Linux Archive section before running the install script.

    Linux archive
    ./install.sh --user <user name> --group <group name>
    
    -h | --help                                       : [optional] display usage
    -u | --user                                       : [optional] (default: jfmc) user that will be used to run the product; created if it does not exist
    -g | --group                                      : [optional] (default: jfmc) group that will be used to run the product; created if it does not exist
  4. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.
  5. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv
    service mc start|stop
    Docker Compose
    cd jfrog-mc-<version>-compose
    docker-compose -p mc up -d
    docker-compose -p mc ps
    docker-compose -p mc down

    Mission Control can be installed and managed as a service in a Linux archive installation. Refer to the start Mission Control section under Linux Archive Installation for more details.

    Linux archive
    mc/app/bin/mc.sh start|stop
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. This file is not log rotated for Darwin installations. Learn more on how to configure the log rotation.
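
    A hedged example of what such a rotation rule could look like with logrotate (the path assumes the product is deployed under /opt/jfrog; copytruncate is used because the services keep the file open; this file is not shipped with the product):

    # /etc/logrotate.d/mc-console - a sketch, adjust paths and limits to your environment
    /opt/jfrog/mc/var/log/console.log {
        size 25M
        rotate 4
        compress
        missingok
        copytruncate
    }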

Linux Archive Installation

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and move it into the mc directory.

    tar -xvf jfrog-mc-<version>-linux.tar.gz
    mv jfrog-mc-<version>-linux mc
  3. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
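
    For example, a minimal sketch of the database entry with placeholder credentials (the same keys appear in the examples later on this page):

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql://localhost:5432/mission_control
        username: jfmc
        password: password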

  4. Prepare for the Elasticsearch installation by increasing the map count. For additional information, refer to the Elasticsearch documentation.

    sudo sysctl -w vm.max_map_count=262144

    To make this change permanent, remember to update the vm.max_map_count setting in /etc/sysctl.conf.
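
    For example:

    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf   # persist the setting
    sudo sysctl -p                                                  # reload the settings without a reboot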

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

    When connecting an external instance of Elasticsearch to Mission Control, add the following flag to the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; step (6) can then be skipped. Elasticsearch will be treated as external and must be marked as such in the system.yaml.

    shared:
       elasticsearch:
           external: true

    When Elasticsearch is external, the URL to the Elasticsearch instance will also need to be provided. 

    • When using the packaged version of Elasticsearch, the URL will be as follows.

      shared:
         elasticsearch:
             url: http://localhost:9200
    • When using a non-packaged version of Elasticsearch, the URL will be as follows.

      shared:
         elasticsearch:
             url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>

      When SearchGuard is configured, add the username and password as follows.

      shared:
         elasticsearch:
             username: <USERNAME_SET_IN_SEARCHGUARD>
             password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
  6. As an alternative, you can use the Elasticsearch package bundled with Mission Control, located in the extracted contents at mc/app/third-party/elasticsearch/elasticsearch-oss-7.8.0.tar.gz. For installation steps, refer to the Elasticsearch documentation.

  7. It is recommended to install the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.

    1. The Search Guard package is located in the extracted contents at mc/app/third-party/elasticsearch/search-guard-7.8.0.zip. For installation steps, refer to the Search Guard documentation.

    2. Add an admin user to Search Guard to ensure authenticated communication with Elasticsearch.
      1. The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Add the username and password generated here to the Shared Configurations section, as specified in step (5) above.

        <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-7.8.0/plugins/search-guard-7.8/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
            hash: "<hashed_password>"
            backend_roles:
               - "admin"
            description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-7.8.0/plugins/search-guard-7.8/sgconfig/.


    3. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      1. Enable anonymous auth in the sg_config.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-7.8.0/plugins/search-guard-7.8/sgconfig/

        sg_config:
               dynamic:
                  http:
                     anonymous_auth_enabled: true #set this to true
      2. Map the anonymous user "sg_anonymous" to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-7.8.0/plugins/search-guard-7.8/sgconfig

        sg_anonymous:
            backend_roles:
                - sg_anonymous_backendrole
      3. Add this snippet at the end of the sg_roles.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-7.8.0/plugins/search-guard-7.8/sgconfig

        sg_anonymous:
          cluster_permissions:
            - cluster:monitor/health
  8. Start PostgreSQL and Elasticsearch.
  9. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details (optional).
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.
  10. Start and manage the Mission Control service as the user who extracted the tar.
    As a process

    Daemon Process
    mc/app/bin/mc.sh start

    Manage the process.

    mc/app/bin/mc.sh start|stop|status|restart

    As a service
    Mission Control is packaged as an archive file with an install script that can be used to install it as a service running under a custom user. This is currently supported on Linux systems.

    OS User Permissions

    When running Mission Control as a service, the installation script creates a user called jfmc (by default) which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    To install Mission Control as a service, execute the following command as root: 

    User and group can be passed through mc/var/etc/system.yaml as shared.user and shared.group. These take precedence over values passed through the command line on install.

    mc/app/bin/installService.sh --user <user name> --group <group name>
    
    -u | --user                                       : [optional] (default: jfmc) user that will be used to run the product; created if it does not exist
    -g | --group                                      : [optional] (default: jfmc) group that will be used to run the product; created if it does not exist

    The user and group will be stored in the mc/var/etc/system.yaml at the end of the installation.
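
    For example, the resulting entries in mc/var/etc/system.yaml look like this (a sketch using the default names):

    shared:
      user: jfmc
      group: jfmc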
    To manage the service, use the systemd or init.d commands depending on your system. 

    Using systemd
     systemctl <start|stop|status> mc.service
    Using init.d
    service mc <start|stop|status>
  11. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  12. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Manual RPM Installation

The RPM installation bundles Mission Control and all its dependencies as native RPM packages, where Mission Control and each dependency are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.

  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-rpm.tar.gz
    cd jfrog-mc-<version>-rpm
  3. Install Mission Control. You must run this as the root user.

    rpm -Uvh --replacepkgs ./mc/mc.rpm
  4. Install PostgreSQL and start the PostgreSQL service.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch and start the Elasticsearch service.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

    When connecting an external instance of Elasticsearch to Mission Control, add the following flag to the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; step (6) can then be skipped.

    shared:
       elasticsearch:
           external: true

    As an alternative, you can use the Elasticsearch package bundled with Mission Control, located in the extracted contents at jfrog-mc-<version>-rpm/third-party/elasticsearch/elasticsearch-oss-7.8.0.rpm. For installation steps, refer to the Elasticsearch documentation.

  6. It is recommended to install the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.

    1. The Search Guard package is located in the extracted contents at jfrog-mc-<version>-rpm/third-party/elasticsearch/search-guard-7.8.0.zip. For installation steps, refer to the Search Guard documentation.

    2. Add an admin user to Search Guard to ensure authenticated communication with Elasticsearch.
      1. The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Add the username and password generated here to the Shared Configurations section, as specified in step (5) above.

        /etc/elasticsearch/plugins/search-guard-7.8/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
            hash: "<hashed_password>"
            backend_roles:
               - "admin"
            description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7.8/sgconfig/.


    3. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      1. Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7.8/sgconfig/

        sg_config:
               dynamic:
                  http:
                     anonymous_auth_enabled: true #set this to true
      2. Map the anonymous user "sg_anonymous" to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7.8/sgconfig

        sg_anonymous:
            backend_roles:
                - sg_anonymous_backendrole
      3. Add this snippet at the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7.8/sgconfig

        sg_anonymous:
          cluster_permissions:
            - cluster:monitor/health
  7. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.
  8. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  10. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Manual Debian Installation

The Debian installation bundles Mission Control and all its dependencies as native Debian packages, where Mission Control and each dependency are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-deb.tar.gz
    cd jfrog-mc-<version>-deb
  3. Install Mission Control. You must run this as the root user.

    dpkg -i ./mc/mc.deb
  4. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

    When connecting an external instance of Elasticsearch to Mission Control, add the following flag to the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; step (6) can then be skipped.

    shared:
       elasticsearch:
           external: true

    As an alternative, you can use the Elasticsearch package bundled with Mission Control, located in the extracted contents at jfrog-mc-<version>-deb/third-party/elasticsearch/elasticsearch-oss-7.8.0.deb. For installation steps, refer to the Elasticsearch documentation.

  6. It is recommended to install the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.

    1. The Search Guard package is located in the extracted contents at jfrog-mc-<version>-deb/third-party/elasticsearch/search-guard-7.8.0.zip. For installation steps, refer to the Search Guard documentation.

    2. Add an admin user to Search Guard to ensure authenticated communication with Elasticsearch.
      1. The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Add the username and password generated here to the Shared Configurations section, as specified in step (5) above.

        /usr/share/elasticsearch/plugins/search-guard-7.8/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
            hash: "<hashed_password>"
            backend_roles:
               - "admin"
            description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.


    3. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      1. Enable anonymous auth in the sg_config.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/

        sg_config:
               dynamic:
                  http:
                     anonymous_auth_enabled: true #set this to true
      2. Map the anonymous user "sg_anonymous" to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/

        sg_anonymous:
            backend_roles:
                - sg_anonymous_backendrole
      3. Add this snippet at the end of the sg_roles.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/

        sg_anonymous:
          cluster_permissions:
            - cluster:monitor/health
  7. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.

  8. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  10. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Helm Chart Installation

Deploying Mission Control for Small, Medium or Large Installations

The chart directory includes three values files, one for each installation type: small, medium, and large. These files set recommended resource requests and limits for your installation. You can find them in the corresponding chart directory.

  1. Add the JFrog Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io 
    
  2. Update the repository.

    helm repo update
  3. Initiate the installation by providing the join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
                 --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key:
    kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
    
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>; in the second, always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration (optional) including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml. 
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml jfrog/mission-control

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control

HA Installation

The following describes how to set up a Mission Control HA cluster with more than one node. For more information about HA, see System Architecture.

Prerequisites

All nodes within the same Mission Control HA installation must be running the same Mission Control version.

Database

Mission Control HA requires an external PostgreSQL database. Make sure to install it before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.

Network

  • All the Mission Control HA components (Mission Control cluster nodes, database server and Elasticsearch) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.

The following installation methods are supported:

Linux Archive/RPM/Debian Installation

First node installation steps:

  1. Install the first node. The installation is identical to the single node installation.

    Important: make sure not to start Mission Control.

  2. Configure the system.yaml file with the database and first node configuration details. For example:

    First node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  3. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    Systemv OS
    service mc start|stop
  4. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  5. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Additional node installation steps:

For a node to join a cluster, the node must have the same database configuration and the Master Key. Install all additional nodes using the same steps described above, with the additional steps below:

  1. Configure the system.yaml file for the additional node with the master key, database, and active node configuration details. For example:

    Additional node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "$JFROG_HOME/mc/data/elasticsearch/config/unicast_hosts.txt"
  2. Copy the master.key from the first node to $JFROG_HOME/mc/var/etc/security/master.key on the additional node.
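
    For example, using scp (a sketch assuming the same $JFROG_HOME on both nodes and SSH access between them):

    scp $JFROG_HOME/mc/var/etc/security/master.key <user>@<additional-node-ip>:$JFROG_HOME/mc/var/etc/security/master.key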
  3. Add the username and password as configured for Elasticsearch on the master node to the additional node as well. Add them to the Shared Configurations section in the $JFROG_HOME/mc/var/etc/system.yaml file.
  4. Copy the client and node certificates from the Elasticsearch config folder on the master node to a new directory named sg_certs under the extracted folder on the additional node.

    #Linux Archive
    #Source directory on master node - mc/app/third-party/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    cd mc/
    mkdir sg_certs
    #Add the certificates to mc/sg_certs on the additional node
    
    #RPM
    #Source directory on master node - /etc/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    cd jfrog-mc-<version>-rpm/
    mkdir sg_certs
    #Add the certificates to jfrog-mc-<version>-rpm/sg_certs on the additional node
    
    #Debian
    #Source directory on master node - /usr/share/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    cd jfrog-mc-<version>-deb/
    mkdir sg_certs
    #Add the certificates to jfrog-mc-<version>-deb/sg_certs on the additional node
  5. Start the additional node.

  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Docker Compose Installation

First node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.

  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
  3. Configure the system.yaml file with the database and first node configuration details. For example:

    First node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  4. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.
  5. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  7. Check the Mission Control log.

    docker-compose -p mc logs

Additional node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose
  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
    
  3. Configure the system.yaml file for the secondary node with the database and active node configuration details. For example:

    Additional node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "/var/opt/jfrog/mc/data/elasticsearch/config/unicast_hosts.txt"
  4. Copy the master.key from the first node to $JFROG_HOME/mc/var/etc/security/master.key on the additional node.
  5. Add the jfmc user to the elasticsearch group to allow it to update the cluster configuration.

    usermod -a -G elasticsearch jfmc
    
  6. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.

  7. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  8. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  9. Check the Mission Control log.

    docker-compose -p mc logs

Helm Installation HA

Important

Currently, it is not possible to connect a JFrog product (e.g., Mission Control) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.

Deploying Mission Control for Small, Medium or Large Installations

The chart directory includes three values files, one for each installation type: small, medium, and large. These files set recommended resource requests and limits for your installation. You can find them in the corresponding chart directory.

High Availability

For high availability of Mission Control, set the replicaCount in the values.yaml file to >1 (3 is recommended). It is highly recommended to also run RabbitMQ as an HA cluster. The following command starts Mission Control with 3 replicas per service and 3 replicas for RabbitMQ.

helm upgrade --install mission-control --namespace mission-control --set replicaCount=3 --set rabbitmq-ha.replicaCount=3 jfrog/mission-control
  1. Add the JFrog Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io 
    
  2. Update the repository.

    helm repo update
  3. Initiate the installation by providing the join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
                 --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key: 
    kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
    
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>; in the second, always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration (optional) including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml. 
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml jfrog/mission-control

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control

Product Configuration

After installing and before running Mission Control, you may set the following configurations.

Where to find the system configurations?

You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/mc/var/etc folder. For more information, see Mission Control YAML Configuration.

If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.
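
For example (the template file name below is an assumption; use whichever template is present in your etc folder):

cd $JFROG_HOME/mc/var/etc
cp system.full-template.yaml system.yaml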

For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.

Artifactory Connection Details

Mission Control requires a working Artifactory server and a suitable license. The Mission Control connection to Artifactory requires 2 parameters:

  • jfrogUrl - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082.
    Set it in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
  • join.key - This is the "secret" key required by Artifactory for registering and authenticating the Mission Control server.
    You can fetch the Artifactory joinKey (join key) from the JPD UI in the Administration module | Security | Settings | Join Key.
    Set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
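
Together, a minimal Shared Configurations sketch for the Artifactory connection looks like this (placeholder values, matching the HA examples earlier on this page):

shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <Artifactory Join Key>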

Changing PostgreSQL Database Credentials

Mission Control comes bundled with a PostgreSQL database out-of-the-box, pre-configured with default credentials.

These commands are indicative and assume some familiarity with PostgreSQL; do not copy and paste them blindly. For Docker Compose, you will need to enter the PostgreSQL container before you run them.

To change the default credentials:

PostgreSQL
#1. Change the password for the Mission Control user
# Access PostgreSQL as the jfmc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfmc -W
# Securely change the password for user "jfmc". Enter and then retype the password at the prompt.
\password jfmc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfmc -W

#2. Change the password for the scheduler user
# Access PostgreSQL as the jfisc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisc -W
# Securely change the password for user "jfisc". Enter and then retype the password at the prompt.
\password jfisc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisc -W

#3. Change the password for the insight server user
# Access PostgreSQL as the jfisv user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisv -W
# Securely change the password for user "jfisv". Enter and then retype the password at the prompt.
\password jfisv
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisv -W
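
For Docker Compose installations, enter the PostgreSQL container first; for example (mc_postgres is the container name used by the bundled compose files elsewhere on this page):

docker exec -it mc_postgres bash
psql -d mission_control -U jfmc -W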

Changing Elasticsearch Credentials

The Search Guard tool is used to manage authentication. To change the password for the default user, provide a hashed password in the Search Guard configuration.

  1. Obtain the username used to access Elasticsearch from the elasticsearch.username key in $JFROG_HOME/mc/var/etc/system.yaml.
  2. Generate the hashed password by providing the clear-text password as input.

    $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/tools/hash.sh -p <password_in_text_format>
    
  3. Update the configuration for the default user with the output from the previous step.

    Other flavours
    vi $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/sgconfig/sg_internal_users.yml
    #Scroll in the file to find an entry for the username of the default user
    #Update the value for "hash" with the hash content obtained from previous step
    <default_username>:
       hash: <hash_output_from_previous_step>
  4. Run the command to initialize Search Guard.
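
    For example, a minimal sgadmin invocation (certificate file names and locations are assumptions; adjust them to your setup):

    cd $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/tools
    ./sgadmin.sh -cd ../sgconfig -icl -nhnv \
      -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key   # admin certificates generated for your cluster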

Add Certificates when Connecting to SSL Enabled Elasticsearch

Other flavours
cd $JFROG_HOME/mc/var/etc/security/keys/trusted
#Copy the certificates to this location and restart MC services

Set your PostgreSQL and Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

Load a Custom Certificate to Elasticsearch Search Guard 

If you prefer to use custom certificates when Search Guard is enabled with TLS in Elasticsearch, you can use the search-guard-tlstool to generate Search Guard certificates.

The tool to generate Search Guard certificates is available in $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-1.6.tar.gz. For more information about generating certificates, see Search Guard TLS Tool.

  1. Run the tool to generate the certificates.

    tar -xvf $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-1.6.tar.gz
    cp $JFROG_HOME/app/third-party/elasticsearch/config/tlsconfig.yml $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-1.6/config
    cd $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-1.6/tools
    ./sgtlstool.sh -c ../config/tlsconfig.yml -ca -crt # a folder named "out" will be created with all the required certificates
    cd out
    
  2. Copy the generated certificates (localhost.key, localhost.pem, root-ca.pem, sgadmin.key, sgadmin.pem) to the target location based on the installer type.

    Native
    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem  /etc/elasticsearch/certs/
    Docker Compose
    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem $JFROG_HOME/mc/var/data/elasticsearch/certs

Configuring a Custom Elasticsearch Role

The Search Guard tool is used to manage authentication. By default, an admin user is required to authenticate Elasticsearch. As an alternative to this, a new user can be configured to authenticate Elasticsearch by assigning a custom role with permissions for the application to work.

  1. Add the following snippet to define a new role with custom permissions:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/sgconfig/sg_roles.yml
    
    #Add the following snippet to define a new role with custom permissions
    
    <role_name>:
      cluster_permissions:
        - cluster:monitor/health
        - cluster:monitor/main
        - cluster:monitor/state
        - "indices:admin/template/get"
      index_permissions:
        - index_patterns:
            - "*"
          allowed_actions:
            - "indices:monitor/health"
            - "indices:monitor/stats"
            - "indices:monitor/settings/get"
            - "indices:admin/aliases/get"
            - "indices:admin/get"
            - "indices:admin/create"
            - "indices:admin/delete"
            - "indices:admin/rollover"
            - SGS_CRUD


  2. Add the following snippet to add a new user:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/sgconfig/sg_internal_users.yml
    
    
    # Add the following snippet to add a new user
    
    <user_name>:
      hash: <Hash_password>
      backend_roles:
        - "<role_name>"   //role_name defined in previous step
      description: "<description>"


    1. Run the following command to generate a hashed password:

      $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/tools/hash.sh -p <clear_text_password>
  3. Add the following snippet to map the new username to the role defined in the previous step:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-<major_version_number>/sgconfig/sg_roles_mapping.yml
    
    # Add the following snippet to map the new username to the role defined in the previous step
    
    <role_name>:
      users:
        - "<user_name>"
  4. Initialize Search Guard to upload the above configuration changes.
  5. Set the new credentials in the $JFROG_HOME/mc/var/etc/system.yaml file:

    shared:
        elasticsearch:
               username: <user_name>
               password: <clear_text_password>
    
    
  6. Restart Mission Control services.

Setting up Your PostgreSQL Databases, Users and Schemas

Database and schema names can only be changed for a new installation. Changing the names during an upgrade will result in the loss of existing data.

Helm Users

Create a single user with permission to all schemas. Use this user's credentials during your Helm installation on this page.

  1. Log in to the PostgreSQL database as an admin and execute the following commands.

    PostgreSQL Database, Schema and User Creation
    CREATE DATABASE mission_control WITH ENCODING='UTF8' TABLESPACE=pg_default;
    #    Exit from current login
    \q
    #    Login to the mission_control database using the admin user (by default it is postgres)
    psql -U postgres mission_control
    CREATE USER jfmc WITH PASSWORD 'password';
    GRANT ALL ON DATABASE mission_control TO jfmc;
    CREATE SCHEMA IF NOT EXISTS jfmc_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA jfmc_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_scheduler AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_scheduler TO jfmc;
  2. Configure the system.yaml file with the database configuration details according to the information above. For example:

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql://localhost:5432/mission_control
        username: jfmc
        password: password


For Advanced Users

Manual Docker Compose Installation

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you've made any changes to the file, remember to back it up before an upgrade.

  2. Create the following folder structure under $JFROG_HOME/mc. The bracketed values are the required owner UID and GID for each folder.

     -- [1050 1050] var
        -- [1050 1050] data
           -- [1000 1000] elasticsearch
              -- [1000 1000] data
           -- [999  999 ] postgres
        -- [1050 1050] etc
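
    A sketch of creating this structure with the required ownership (assumes $JFROG_HOME is set and the hierarchy shown above; run as root):

    mkdir -p $JFROG_HOME/mc/var/data/elasticsearch/data \
             $JFROG_HOME/mc/var/data/postgres \
             $JFROG_HOME/mc/var/etc
    chown -R 1050:1050 $JFROG_HOME/mc/var                      # default owner for var and etc
    chown -R 1000:1000 $JFROG_HOME/mc/var/data/elasticsearch   # Elasticsearch folders
    chown -R 999:999   $JFROG_HOME/mc/var/data/postgres        # PostgreSQL folder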
  3. Copy the appropriate docker-compose template from the templates folder to the extracted folder and rename it to docker-compose.yaml.

    Note: the commands below assume you are using the template docker-compose-postgres-es.yaml.

    Requirement                                          Template
    Mission Control with externalised databases          docker-compose.yaml
    Mission Control with Elasticsearch and PostgreSQL    docker-compose-postgres-es.yaml
  4. Update the .env file.

    ## The installation directory for Mission Control. If not entered, the script will prompt you for this input. Default [$HOME/.jfrog/mc]
    ROOT_DATA_DIR=
    
    ## Public IP of this machine
    HOST_IP=
  5. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.

      Verify that the host's ID and IP are added to the system.yaml. This is important to ensure that other products and Platform Deployments can reach this instance.

  6. For Elasticsearch to work correctly, increase the map count. For additional information, see the Elasticsearch documentation.
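
    For example, the same command used in the Linux Archive installation:

    sudo sysctl -w vm.max_map_count=262144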

  7. Create the necessary tables and users using the createPostgresUsers.sh script.
    • Start the PostgreSQL container.

      docker-compose -p mc up -d postgres
    • Copy the script into the PostgreSQL container.

      docker cp ./third-party/postgresql/createPostgresUsers.sh mc_postgres:/
    • Exec into the container and execute the script. This will create the database tables and users.

      PostgreSQL 9.x
      docker exec -t mc_postgres bash -c "chmod +x /createPostgresUsers.sh && gosu postgres /createPostgresUsers.sh"
      PostgreSQL 10.x/12.x
      docker exec -t mc_postgres bash -c "export DB_PASSWORD=password1 &&
      chmod +x /createPostgresUsers.sh && su-exec postgres /createPostgresUsers.sh"
  8. Start Mission Control using docker-compose commands.

    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  9. Access Mission Control from your browser at: http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.

  10. Check the Mission Control log.

    docker-compose -p mc logs

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. The installation scripts add a cron job to rotate the console.log file every hour.

    This is not done for manual Docker Compose installations. Learn more on how to configure the log rotation.

