Overview

This page describes the different ways you can install and configure JFrog Mission Control, as a single node or as a high availability cluster. Additional information on high availability can be found here.


Before You Begin

System Requirements

Before installing Mission Control, refer to System Requirements for information on supported platforms, supported browsers, and other requirements.

When installing Mission Control, you must run the installation as a root user or provide sudo access to a non-root user. 

You will need to have admin permissions on the installation machine in the following cases:

  • Native installer - always requires admin permissions
  • Archive installer - requires admin permissions only during installation
  • Docker installer - does not require admin permissions

Use a dedicated server for Mission Control with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

System Architecture

To learn about the JFrog Platform Deployment, refer to System Architecture.

Installing Mission Control

Before installing Mission Control 4.x, you must first install JFrog Artifactory 7.x.

Installation Steps

The installation procedure involves the following main steps:

  1. Download Mission Control according to your required installer type (Linux Archive, Docker Compose, RPM, Debian).
  2. Install Mission Control either as a single node installation, or high availability cluster.
    1. Install third party dependencies (PostgreSQL and Elasticsearch databases, included in the archive)
    2. Install Mission Control
  3. Configure the service
    1. Connection to Artifactory (joinKey  and  jfrogUrl)
    2. Additional optional configuration including changing default credentials for databases
  4. Start the Service using the start scripts or OS service management.
  5. Check the Service Log to verify the status of the service.

The default Mission Control home directory is defined according to the installation type. For additional details see the Product Directory Structure page.

Note: This guide uses  $JFROG_HOME  to represent the JFrog root directory containing the deployed product.
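For example, a quick sketch of setting this variable, assuming you extracted the product under /opt as recommended later on this page:

export JFROG_HOME=/opt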

Mission Control 3.x relied on Artifactory user plugins to manage and monitor Artifactory. From version 4.0, these plugins are no longer used. When you upgrade to Artifactory 7.x, the user plugins are automatically removed from Artifactory. The plugins that will be removed are:

  • propertySetsConfig.groovy
  • haClusterDump.groovy
  • httpSsoConfig.groovy
  • repoLayoutsConfig.groovy
  • ldapGroupsConfig.groovy
  • internalUser.groovy
  • ldapSettingsConfig.groovy
  • pluginsConfig.groovy
  • proxiesConfig.groovy
  • requestRouting.groovy

JFrog Subscription Levels

SELF-HOSTED: ENTERPRISE X and ENTERPRISE+


Single Node Installation

The following installation methods are supported:

Interactive Script Installation (recommended)

All install types are supported: Docker Compose, Linux Archive, RPM, and Debian.

The installer script provides an interactive way to install Mission Control and its dependencies. This installer should be used for Docker Compose installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<compose|rpm|deb>

    When running Mission Control, the installation script creates a user called jfmc by default, which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    mv jfrog-mc-<version>-linux.tar.gz /opt/
    cd /opt
    tar -xf jfrog-mc-<version>-linux.tar.gz
    mv jfrog-mc-<version>-linux mc
    cd mc

    The extracted folder contains a .env file that is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you've made any changes to the file, remember to back it up before an upgrade.
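    A minimal sketch of such a backup, run from the extracted folder (the target filename is just an example):

    cp .env ~/mc-env-backup-$(date +%Y%m%d)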

  3. Run the installer script.
    Note: the script will prompt you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey.

    ./config.sh
    ./install.sh

    Refer to the prerequisites for Mission Control in Linux Archive before running the install script.

    ./install.sh --user <user name> --group <group name>
    
    -h | --help                                       : [optional] display usage
    -u | --user                                       : [optional] (default: jfmc) user which will be used to run the product; it will be created if it does not exist
    -g | --group                                      : [optional] (default: jfmc) group which will be used to run the product; it will be created if it does not exist
  4. Validate and customize the product configuration  (optional), including the third party dependencies connection details and ports.
  5. Start and manage the Mission Control service.

    # systemd (native installations)
    systemctl start|stop mc.service
    # init.d (native installations)
    service mc start|stop
    # Docker Compose installations
    cd jfrog-mc-<version>-compose
    docker-compose -p mc up -d
    docker-compose -p mc ps
    docker-compose -p mc down

    Mission Control can be installed and managed as a service in a Linux archive installation. Refer to the start Mission Control section under Linux Archive Manual Installation for more details.

    mc/app/bin/mc.sh start|stop
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

    The console.log file can grow quickly since all services write to it. This file is not log rotated for Darwin installations. Learn more on how to configure the log rotation.
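    A minimal logrotate sketch for console.log (a hypothetical /etc/logrotate.d/mc entry; adjust the path to your JFrog home and tune the size and retention to your needs):

    /opt/mc/var/log/console.log {
        size 25M
        rotate 10
        compress
        missingok
        notifempty
        # copytruncate keeps the file handle valid for the services writing to it
        copytruncate
    }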

Linux Archive Installation

  1. Download Mission Control.

  2. Extract the contents of the compressed archive under JFROG_HOME and move it into the mc directory.

    tar -xvf jfrog-mc-<version>-linux.tar.gz
    mv jfrog-mc-<version>-linux mc
  3. Install PostgreSQL by following the steps detailed in Installing PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps. Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  4. Prepare for the Elasticsearch installation by increasing the map count. For more information, see the Elasticsearch documentation.

    sudo sysctl -w vm.max_map_count=262144

    To make this change permanent, remember to update the vm.max_map_count setting in /etc/sysctl.conf.
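    A sketch of making the setting persistent and applying it (run as root):

    echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
    sysctl -p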

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.

    You can install the package available at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.

    1. Install Search Guard. The Search Guard package can be located in the extracted contents at <JFROG_HOME>/mc/app/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch. 
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
           hash: "<hashed_password>"
           backend_roles:
             - "admin"
           description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.
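        A minimal sketch of appending the snippet from the shell, assuming a hypothetical admin user named mcadmin (replace the hash placeholder with the output of hash.sh above):

        SGCONFIG=<JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig
        printf '%s\n' 'mcadmin:' '   hash: "<hashed_password>"' '   backend_roles:' '     - "admin"' '   description: "Insight Elastic admin user"' >> "$SGCONFIG/sg_internal_users.yml"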

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true #set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health
  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        external: true
        url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
        username: <USERNAME_SET_IN_SEARCHGUARD>
        password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
           

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the system.yaml file.

    shared:
      elasticsearch:
        url: <URL_TO_ELASTICSEARCH>:<ELASTICSEARCH_PORT>
        external: true
        aes:
          signed: true
          serviceName: <AES_SERVICE_NAME>
          region: <AES_SERVICE_REGION>
          accessKey: <AWS_ACCESS_KEY>
          secretKey: <AWS_SECRET_KEY>
  7. Start PostgreSQL and Elasticsearch.

  8. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details (optional).
    3. Set any additional configurations (for example: ports, node id) using the Mission Control System YAML.

  9. Start and manage the Mission Control service as the user who extracted the tar.
    As a process

    <JFROG_HOME>/mc/app/bin/mc.sh start

    Manage the process.

    <JFROG_HOME>/mc/app/bin/mc.sh start|stop|status|restart

    As a service, Mission Control is packaged as an archive file and an install script that can be used to install it as a service running under a custom user. This is currently supported on Linux systems.

    When running Mission Control as a service, the installation script creates a user called jfmc (by default) which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    To install Mission Control as a service, execute the following command as root:

    User and group can be passed through <JFROG_HOME>/mc/var/etc/system.yaml as shared.user and shared.group. These values take precedence over values passed through the command line on install.

    <JFROG_HOME>/mc/app/bin/installService.sh --user <user name, default: jfmc> --group <group name, default: jfmc>

    -u | --user                                       : [optional] (default: jfmc) user which will be used to run the product; it will be created if it does not exist
    -g | --group                                      : [optional] (default: jfmc) group which will be used to run the product; it will be created if it does not exist

    The user and group will be stored in the <JFROG_HOME>/mc/var/etc/system.yaml file at the end of the installation.
    To manage the service, use the systemd or init.d commands depending on your system.

    systemctl <start|stop|status> mc.service
    service mc <start|stop|status>
  10. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  11. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Manual RPM Installation

The RPM installation bundles Mission Control and all its dependencies as native RPM packages; Mission Control and its dependencies are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.

  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-rpm.tar.gz
    cd jfrog-mc-<version>-rpm
  3. Install Mission Control. You must run as a root user.

    rpm -Uvh --replacepkgs ./mc/mc.rpm
  4. Install PostgreSQL and start the PostgreSQL service.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.
    You can install the package available at jfrog-mc-<version>-rpm/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.

    When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

    shared:
      elasticsearch:
        external: true


    1. Install Search Guard. The Search Guard package can be located in the extracted contents at jfrog-mc-<version>-rpm/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch. 
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        /etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
           hash: "<hashed_password>"
           backend_roles:
             - "admin"
           description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true #set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health
  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        external: true
        url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
        username: <USERNAME_SET_IN_SEARCHGUARD>
        password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
           

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.

    shared:
      elasticsearch:
        url: <URL_TO_ELASTICSEARCH>:<ELASTICSEARCH_PORT>
        external: true
        aes:
          signed: true
          serviceName: <AES_SERVICE_NAME>
          region: <AES_SERVICE_REGION>
          accessKey: <AWS_ACCESS_KEY>
          secretKey: <AWS_SECRET_KEY>
  7. Customize the product configuration.

    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using Mission Control System YAML.

  8. Start and manage the Mission Control service.

    systemctl start|stop mc.service
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  10. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Manual Debian Installation

The Debian installation bundles Mission Control and all its dependencies as native Debian packages; Mission Control and its dependencies are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-deb.tar.gz
    cd jfrog-mc-<version>-deb
  3. Install Mission Control. You must run as a root user.

    dpkg -i ./mc/mc.deb
  4. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.


    You can install the package available at jfrog-mc-<version>-deb/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.

    1. Install Search Guard. The Search Guard package can be located in the extracted contents at jfrog-mc-<version>-deb/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch. 
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        /usr/share/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
        
        #This will output a hashed password (<hash_password>), make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        <username>:
           hash: "<hashed_password>"
           backend_roles:
             - "admin"
           description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true #set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health



  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        external: true
        url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
        username: <USERNAME_SET_IN_SEARCHGUARD>
        password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
           

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.

    shared:
      elasticsearch:
        url: <URL_TO_ELASTICSEARCH>:<ELASTICSEARCH_PORT>
        external: true
        aes:
          signed: true
          serviceName: <AES_SERVICE_NAME>
          region: <AES_SERVICE_REGION>
          accessKey: <AWS_ACCESS_KEY>
          secretKey: <AWS_SECRET_KEY>
  7. Customize the product configuration.

    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using Mission Control System YAML.

  8. Start and manage the Mission Control service.

    systemctl start|stop mc.service
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard  tab in the Application module in the UI.
  10. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Helm Chart Installation

The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory.

  1. Add the ChartCenter Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io 
    
  2. Update the repository.

    helm repo update
  3. Initiate installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
                 --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key:
    kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
    
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>; in the second case, it means always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration   (optional)  including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml  and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml. 
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard  tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control

HA Installation

The following describes how to set up a Mission Control HA cluster with more than one node. For more information about HA, see  System Architecture .

Prerequisites

All nodes within the same Mission Control HA installation must be running the same Mission Control version.

For a Mission Control HA cluster to work correctly, you must have at least three nodes in the cluster.


Database

Mission Control HA requires an external PostgreSQL database. Make sure to install it before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.

Network

  • All the Mission Control HA components (Mission Control cluster nodes, database server and Elasticsearch) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.

The following installation methods are supported:

Linux Archive/RPM/Debian Installation

First node installation steps:

  1. Install the first node. The installation is identical to the single node installation.

    Important: make sure not to start Mission Control.

  2. Configure the system.yaml  file with the database and first node configuration details. For example,

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  3. Start and manage the Mission Control service.

    systemctl start|stop mc.service
    service mc start|stop
  4. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  5. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Additional node installation steps:

For a node to join a cluster, the node must have the same database configuration and the same master key. Install all additional nodes using the same steps described above, with the following additional steps:

  1. Configure the system.yaml  file for the additional node with master key, database and active node configurations. For example,

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "$JFROG_HOME/mc/data/elasticsearch/config/unicast_hosts.txt"
  2. Copy the master.key file, located at $JFROG_HOME/mc/var/etc/security/master.key, from the first node to the same path on the additional node.
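    A minimal sketch of the copy, assuming SSH access from the additional node to the first node (replace <first-node> with its hostname or IP):

    scp root@<first-node>:$JFROG_HOME/mc/var/etc/security/master.key $JFROG_HOME/mc/var/etc/security/master.key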
  3. On the additional node, add the same Elasticsearch username and password that were configured on the master node. Add them to the Shared Configurations section in the $JFROG_HOME/mc/var/etc/system.yaml file.
  4. Copy the client and node certificates from the Elasticsearch config folder on the master node to a new directory named sg-certs under the extracted folder on the additional node.

    #Linux Archive
    #Source directory on master node - mc/app/third-party/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    #Add them to mc/sg-certs on additional node
    
    
    #RPM
    #Source directory on master node - /etc/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    #Add them to jfrog-mc-<version>-rpm/sg-certs on additional node
    
    #Debian
    cd jfrog-mc-<version>-deb/
    mkdir sg-certs
    #Source directory on master node - /usr/share/elasticsearch/config contains localhost.key, localhost.pem, root-ca.pem
    #Add them to jfrog-mc-<version>-deb/sg-certs on additional node
  5. Start the additional node.

  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Docker Compose Installation

First node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose

    The extracted folder contains a .env file that is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.

  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
  3. Configure the system.yaml file with the database and first node configuration details. For example,

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  4. Validate and customize the product configuration   (optional), including the third party dependencies connection details and ports.
  5. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  7. Check Mission Control Log.

    docker-compose -p mc logs

Additional node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose
  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
    
  3. Configure the system.yaml file for the secondary node with database and active node configurations. For example,

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgresql://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "/var/opt/jfrog/mc/data/elasticsearch/config/unicast_hosts.txt"
  4. Copy the master.key from the first node to the additional node located at $JFROG_HOME/mc/var/etc/security/master.key.
  5. Add the jfmc user to the elasticsearch group so that it can update the cluster configuration.

    usermod -a -G elasticsearch jfmc
    
  6. Validate and customize the product configuration   (optional), including the third party dependencies connection details and ports.

  7. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  8. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.
  9. Check the Mission Control log.

    docker-compose -p mc logs

Helm Installation HA

Currently, it is not possible to connect a JFrog product (e.g., Mission Control) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.

The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory.

For high availability of Mission Control, set the replicaCount in the values.yaml file to >1 (the recommended value is 3).

helm upgrade --install mission-control --namespace mission-control --set replicaCount=3 jfrog/mission-control
  1. Add the ChartCenter Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io 
    
  2. Update the repository.

    helm repo update
  3. Initiate installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
                 --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key: 
    kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
    
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>; in the second case, it means always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration   (optional)  including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml  and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml. 
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http://<jfrogUrl>/ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control



Product Configuration

After installing and before running Mission Control, you may set the following configurations.

You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/mc/var/etc folder. For more information, see  Mission Control YAML Configuration .

If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.

For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.

Artifactory Connection Details

Mission Control requires a working Artifactory server and a suitable license. The Mission Control connection to Artifactory requires two parameters:

  • jfrogUrl - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082
    Set it in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
  • join.key - This is the "secret" key required by Artifactory for registering and authenticating the Mission Control server.
    You can fetch the Artifactory joinKey (join key) from the JPD UI in the Administration module | Security | Settings | Join Key.
    Set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
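For reference, a minimal sketch of how these two values appear together in the Shared Configurations section (placeholders match the system.yaml examples elsewhere on this page):

shared:
  jfrogUrl: <JFrog URL>
  security:
    joinKey: <Artifactory Join Key>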

Changing PostgreSQL Database Credentials

Mission Control comes bundled with a PostgreSQL database out-of-the-box, pre-configured with default credentials.

These commands are indicative and assume some familiarity with PostgreSQL; do not copy and paste them blindly. For docker-compose, you will need to enter the PostgreSQL container (for example, with docker exec) before you run them.

To change the default credentials:

#1. Change the password for the mission control user
# Access PostgreSQL as the jfmc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfmc -W
# Securely change the password for user "jfmc". Enter and then retype the password at the prompt.
\password jfmc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfmc -W

#2. Change the password for the scheduler user
# Access PostgreSQL as the jfisc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisc -W
# Securely change the password for user "jfisc". Enter and then retype the password at the prompt.
\password jfisc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisc -W

#3. Change the password for the insight server user
# Access PostgreSQL as the jfisv user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisv -W
# Securely change the password for user "jfisv". Enter and then retype the password at the prompt.
\password jfisv
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisv -W

Changing Elasticsearch Credentials

The Search Guard tool is used to manage authentication. To change the password for the default user, provide a hashed password in the Search Guard configuration.

  1. Obtain the username used to access Elasticsearch from $JFROG_HOME/mc/var/etc/system.yaml, available at elasticsearch.username.
  2. Generate the hashed password by providing the clear-text password as input.

    $ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p <password_in_text_format>
    
  3. Update the configuration for the default user with the output from the previous step.

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
    #Scroll in the file to find an entry for the username of the default user
    #Update the value for "hash" with the hash content obtained from previous step
    <default_username>:
       hash: <hash_output_from_previous_step>
  4. Run the command to initialize Search Guard.
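    A sketch of the initialization, mirroring the sgadmin.sh invocation shown later on this page (assumes the sgadmin certificates are in the current directory):

    cd $ELASTICSEARCH_HOME/plugins/search-guard-7/tools
    bash ./sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/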

Add Certificates when Connecting to SSL Enabled Elasticsearch

cd $JFROG_HOME/mc/var/etc/security/keys/trusted
#Copy the certificates to this location and restart MC services

Set your PostgreSQL and Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

Load a Custom Certificate to Elasticsearch Search Guard 

If you prefer to use custom certificates when Search Guard is enabled with TLS in Elasticsearch, you can use the search-guard-tlstool to generate Search Guard certificates.

The tool to generate Search Guard certificates is available in $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>.tar.gz. For more information about generating certificates, see Search Guard TLS Tool.

  1. Run the tool to generate the certificates.

    tar -xvf $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>.tar.gz
    cp $JFROG_HOME/app/third-party/elasticsearch/config/tlsconfig.yml $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>/config
    cd $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>/tools
    ./sgtlstool.sh -c ../config/tlsconfig.yml -ca -crt   # a folder named "out" will be created with all the required certificates
    cd out 
    
  2. Copy the generated certificates (localhost.key, localhost.pem, root-ca.pem, sgadmin.key, sgadmin.pem) to the target location based on the installer type.

    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem  /etc/elasticsearch/certs/
    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem $JFROG_HOME/mc/var/data/elasticsearch/certs

Configuring a Custom Elasticsearch Role

The Search Guard tool is used to manage authentication. By default, an admin user is required to authenticate with Elasticsearch. As an alternative, a new user can be configured to authenticate with Elasticsearch by assigning a custom role with the permissions the application needs.

  1. Add the following snippet to define a new role with custom permissions:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles.yml
    
    #Add the following snippet to define a new role with custom permissions
    
    <role_name>:
      cluster_permissions:
        - cluster:monitor/health
        - cluster:monitor/main
        - cluster:monitor/state
        - "indices:admin/template/get"
      index_permissions:
        - index_patterns:
            - "*"
          allowed_actions:
            - "indices:monitor/health"
            - "indices:monitor/stats"
            - "indices:monitor/settings/get"
            - "indices:admin/aliases/get"
            - "indices:admin/get"
            - "indices:admin/create"
            - "indices:admin/delete"
            - "indices:admin/rollover"
            - SGS_CRUD


  2. Add the following snippet to add a new user:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
    
    
    # Add the following snippet to add a new user
    
    <user_name>:
      hash: <Hash_password>
      backend_roles:
        - "<role_name>"   //role_name defined in previous step
      description: "<description>"


    1. Run the following command to generate a hashed password:

      $ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
  3. Add the following snippet to map the new username to the role defined in the previous step:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles_mapping.yml
    
    # Add the following snippet to map the new username to the role defined in the previous step
    
    <role_name>:
      users:
        - "<user_name>"
  4. Initialize Search Guard to upload the configuration changes made above.

    export JAVA_HOME=<JFROG_HOME>/mc/app/third-party/java
    
    cd $ELASTICSEARCH_HOME/plugins/search-guard-7/tools
    
    bash ../tools/sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/


  5. Set the new credentials in the $JFROG_HOME/mc/var/etc/system.yaml file:

    shared:
      elasticsearch:
        username: <user_name>
        password: <clear_text_password>
    
    
  6. Restart Mission Control services.
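    A sketch of the restart, using the service-management commands shown earlier on this page (pick the variant matching your installer type):

    # Linux Archive (process)
    <JFROG_HOME>/mc/app/bin/mc.sh restart

    # Native installations (service)
    systemctl restart mc.service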


Installing PostgreSQL 


Do not use a PostgreSQL password that contains special characters such as ~ = # @ $ /; Mission Control may not work if you configure a password with these characters.

RPM

  1. Install PostgreSQL.

    # Run the following commands from the extracted jfrog-mc-<version>-rpm directory.
    # Note: use PostgreSQL rpms with el6 when installing on CentOS 6 and RHEL 6, and use postgresql12-12.5-1 packages
    # Note: use PostgreSQL rpms with el8 when installing on CentOS 8 and RHEL 8
    
    mkdir -p /var/opt/postgres/data
    
    rpm -ivh --replacepkgs ./third-party/postgresql/libicu-50.2-3.el7.x86_64.rpm   # only needed on AWS instances
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql12-libs-12.5-1PGDG.rhel7.x86_64.rpm
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql12-12.5-1PGDG.rhel7.x86_64.rpm
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql12-server-12.5-1PGDG.rhel7.x86_64.rpm
    
    chown -R postgres:postgres /var/opt/postgres
    
    export PGDATA="/var/opt/postgres/data"
    export PGSETUP_INITDB_OPTIONS="-D /var/opt/postgres/data"
    
    # For centos 7&8 / rhel 7&8 
    sed -i "s~^Environment=PGDATA=.*~Environment=PGDATA=/var/opt/postgres/data~" /lib/systemd/system/postgresql-12.service
    systemctl daemon-reload
    /usr/pgsql-12/bin/postgresql-12-setup initdb
    
    # For centos 6 / rhel 6
    sed -i "s~^PGDATA=.*~PGDATA=/var/opt/postgres/data~" /etc/init.d/postgresql-12
    service postgresql-12 initdb
    
    Replace "ident" and "peer" with "trust" in postgres hba configuration files ie /var/opt/postgres/data/pg_hba.conf
    
  2. Configure PostgreSQL to allow external IP connections.

  3. By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file, found under:

    • Docker Compose: $JFROG_HOME/mc/var/data/postgres/data
    • Native installations: /var/opt/postgres/data

    To grant all IPs access you may add the below, under the IPv4 local connections section.

    host    all             all             0.0.0.0/0               trust

    Add the following line to /var/opt/postgres/data/postgresql.conf.

    listen_addresses='*'
    port=5432
  4. Start PostgreSQL.

    systemctl start postgresql-12.service 
    
    or 
    
    service postgresql-12 start
  5. Set up the database and user.

    ## run the script to seed the tables and schemas needed by Mission Control
    cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
    source /etc/locale.conf
    
    cd /tmp && su postgres -c "POSTGRES_PATH=/usr/pgsql-12/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"

Debian   

Prerequisites

It is recommended to ensure your apt-get package lists are up to date, using the following commands.

apt-get update
apt-get install -f -y
apt-get update
# Create the file repository configuration to pull postgresql dependencies

cp -f /etc/apt/sources.list /etc/apt/sources.list.origfile
sh -c 'echo "deb http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'
sh -c 'echo "deb-src http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'  
  
cp -f /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.origfile
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'

wget --no-check-certificate --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
Install Steps
  1. Install PostgreSQL.
    Run the following commands from the extracted jfrog-mc-<version>-deb directory.

    mkdir -p /var/opt/postgres/data
    
    
    # Choose the package that matches your distribution (the pgdg suffix
    # indicates the target release):

    # Ubuntu 16.04
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg16.04+1_amd64.deb
    # Ubuntu 18.04
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg18.04+1_amd64.deb
    # Ubuntu 20.04
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg20.04+1_amd64.deb

    # Debian 8: move the backports list aside before installing Postgres dependencies
    mv /etc/apt/sources.list.d/backports.list /etc/apt >/dev/null
    apt-get update
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg80+1_amd64.deb
    # then restore it after installing Postgres dependencies
    mv /etc/apt/backports.list /etc/apt/sources.list.d/backports.list >/dev/null
    apt-get update

    # Debian 9
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg90+1_amd64.deb

    # Debian 10
    apt update -y
    apt-get install wget sudo -y
    apt-get install -y gnupg gnupg1 gnupg2
    dpkg -i ./third-party/postgresql/postgresql-12_12.5-1.pgdg100+1_amd64.deb
  2. Stop the PostgreSQL service.

    systemctl stop postgresql.service
  3. Change permissions for the postgres folder.

    chown -R postgres:postgres /var/opt/postgres
    
    sed -i "s~^data_directory =.*~data_directory = '/var/opt/postgres/data'~" "/etc/postgresql/12/main/postgresql.conf"
    sed -i "s~^hba_file =.*~hba_file = '/var/opt/postgres/data/pg_hba.conf'~" "/etc/postgresql/12/main/postgresql.conf"
    sed -i "s~^ident_file =.*~ident_file = '/var/opt/postgres/data/pg_ident.conf'~" "/etc/postgresql/12/main/postgresql.conf"
    
    su postgres -c "/usr/lib/postgresql/12/bin/initdb --pgdata=/var/opt/postgres/data"
  4. Configure PostgreSQL to allow external IP connections.

  5. By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file, found under:

    • Docker Compose: $JFROG_HOME/mc/var/data/postgres/data
    • Native installations: /var/opt/postgres/data

    To grant all IPs access you may add the below, under the IPv4 local connections section:

    host    all             all             0.0.0.0/0               trust

    Add the following line to /etc/postgresql/12/main/postgresql.conf.

    listen_addresses='*'
  6. Start PostgreSQL.

    systemctl start postgresql.service 
    
    or 
    
    service postgresql start
  7. Set up the database and user.

    ## run the script to seed the tables and schemas needed by Mission Control
    cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
    source /etc/default/locale
    
    cd /tmp && su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/12/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"
  8. Put back the original pgdg.list.

    mv /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.tmp &&
    cp -f /etc/apt/sources.list.d/pgdg.list.origfile /etc/apt/sources.list.d/pgdg.list
  9. Remove backup files.

    rm -f /etc/apt/sources.list.d/pgdg.list.tmp
    rm -f /etc/apt/sources.list.d/pgdg.list.origfile
  10. Put back the original sources.list.

    mv /etc/apt/sources.list /etc/apt/sources.list.tmp &&
    cp -f /etc/apt/sources.list.origfile /etc/apt/sources.list
  11. Remove the backup files.

    rm -f /etc/apt/sources.list.tmp &&
    rm -f /etc/apt/sources.list.origfile

Linux Archive

Postgres binaries are no longer bundled with the Linux Archive installer for Mission Control. Remember to install Postgres manually.

# Create the psql database (the script "mc/app/third-party/postgresql/createPostgresUsers.sh" , responsible for seeding Postgres assumes this database exists)
<pgsql bin path>/psql template1
<postgres prompt>: CREATE DATABASE <user_name>;
<postgres prompt>: \q
 
## run the script to seed the tables and schemas needed by Mission Control
POSTGRES_PATH=<pgsql bin path> mc/app/third-party/postgresql/createPostgresUsers.sh


Setting up Your PostgreSQL Databases, Users and Schemas

Database and schema names can only be changed for a new installation. Changing the names during an upgrade will result in the loss of existing data.

Create a single user with permission to all schemas. Use this user's credentials during your Helm installation on this page.

  1. Log in to the PostgreSQL database as an admin and execute the following commands.

    CREATE DATABASE mission_control WITH ENCODING='UTF8' TABLESPACE=pg_default;
    #    Exit from current login
    \q
    #    Login to $DB_NAME database using admin user (by default its postgres)
    psql -U postgres mission_control
    CREATE USER jfmc WITH PASSWORD 'password';
    GRANT ALL ON DATABASE mission_control TO jfmc;
    CREATE SCHEMA IF NOT EXISTS jfmc_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA jfmc_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_scheduler AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_scheduler TO jfmc;
  2. Configure the  system.yaml  file with the database configuration details according to the information above. For example.

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql://localhost:5432/mission_control
        username: jfmc
        password: password
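    To verify the setup, a quick check that logs in as the new user and lists the schemas (assumes PostgreSQL on localhost with the default port):

    psql 'postgresql://jfmc:password@localhost:5432/mission_control' -c '\dn'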

For Advanced Users

Manual Docker Compose Installation

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz

    The extracted folder contains a .env file that is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you've made any changes to the file, remember to back it up before an upgrade.

  2. Create the following folder structure under $JFROG_HOME/mc.

     # Folder structure under $JFROG_HOME/mc; the bracketed numbers are the
     # required owner UID and GID of each folder.
     -- [1050 1050]  var
     -- [1050 1050]  data
     -- [1000 1000]  data/elasticsearch
     -- [999  999 ]  postgres
     -- [1050 1050]  etc
  3. Copy the appropriate docker-compose template from the templates folder to the extracted folder and rename it docker-compose.yaml.

    The commands below assume you are using the template: docker-compose-postgres.yaml.

    Requirement                            Template
    Mission Control with Elasticsearch     docker-compose.yaml
    PostgreSQL                             docker-compose-postgres.yaml
  4. Update the .env file.

    ## The installation directory for Mission Control. If not entered, the script will prompt you for this input. Default [$HOME/.jfrog/mc]
    ROOT_DATA_DIR=
    
    ## Public IP of this machine
    HOST_IP=
    
    ## Configuration on the first bootstrap of the cluster. Set this only for the first node.
    ES_MASTER_NODE_SETTINGS="cluster.initial_master_nodes=<node-ip>"
  5. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using Mission Control System YAML.

      Verify that the host's ID and IP are added to the system.yaml. This is important to ensure that other products and Platform Deployments can reach this instance.
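      For reference, a sketch of the relevant keys in the shared section (key names follow the standard JFrog system.yaml; values are placeholders):

      shared:
        node:
          id: <unique node id>
          ip: <host ip>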

  6. For Elasticsearch to work correctly, increase the map count. For additional information, see Elasticsearch documentation
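    For example, the same sysctl command used in the Linux Archive installation above:

    sudo sysctl -w vm.max_map_count=262144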

  7. Create the necessary tables and users using the script: "createPostgresUsers.sh". 
    • Start the PostgreSQL container.

      docker-compose -p mc-postgres -f docker-compose-postgres.yaml up -d
    • Copy the script into the PostgreSQL container.

      docker cp ./third-party/postgresql/createPostgresUsers.sh mc_postgres:/
    • Exec into the container and execute the script. This will create the database tables and users.

      docker exec -t mc_postgres bash -c "chmod +x /createPostgresUsers.sh && gosu postgres /createPostgresUsers.sh"

      # Alternatively, on image variants that use su-exec, and with a custom database password:
      docker exec -t mc_postgres bash -c "export DB_PASSWORD=password1 && chmod +x /createPostgresUsers.sh && su-exec postgres /createPostgresUsers.sh"
  8. Run the following commands.

    mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/sgconfig
    mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/config
    touch ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
    chown -R 1000:1000 ${ROOT_DATA_DIR}/var/data/elasticsearch
    chmod 777 ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
  9. Start Mission Control using docker-compose commands.

    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  10. Access Mission Control from your browser at: http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.

  11. Check the Mission Control log.

    docker-compose -p mc logs

    The console.log file can grow quickly since all services write to it. The installation scripts add a cron job to log rotate the console.log file every hour; this is not done for manual Docker Compose installations. Learn more on how to configure the log rotation.