
Overview

This page provides a guide to the different ways you can install and configure JFrog Mission Control, both single node and high availability. Additional information on high availability can be found here.

To install Mission Control 4.x, you must first install JFrog Artifactory 7.x.

Before you install Mission Control, refer to the additional information on supported platforms, browsers and other requirements, and on the system architecture.

Installation Steps

The installation procedure involves the following main steps:

  1. Download Mission Control as per your required installer type (Linux Archive, Docker Compose, RPM, Debian).
  2. Install Mission Control either as a single node installation, or high availability cluster.
    1. Install third party dependencies (PostgreSQL and Elasticsearch databases, included in the archive)
    2. Install Mission Control
  3. Configure the service
    1. Connection to Artifactory (joinKey and jfrogUrl)
    2. Additional optional configuration including changing default credentials for databases
  4. Start the Service using the start scripts or OS service management.
  5. Check the service log to verify the status of the service.

Default Home Directory

The default Mission Control home directory is defined according to the installation type. For additional details see the Product Directory Structure page.

Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product.
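The note above can be made concrete with a quick sketch; the /opt/jfrog location below is only an assumed example, not a required path.

```shell
# Illustrative only: point JFROG_HOME at the root that contains the deployed product.
# /opt/jfrog is an assumed example location; your installation type determines the real path.
export JFROG_HOME=/opt/jfrog

# Every $JFROG_HOME-relative path in this guide then resolves against it:
echo "$JFROG_HOME/mc/var/etc/system.yaml"
# → /opt/jfrog/mc/var/etc/system.yaml
```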

Artifactory Plugins for Mission Control

Mission Control 3.x relied on Artifactory user plugins to manage and monitor Artifactory. Starting with version 4.0, these plugins are no longer used. When you upgrade to Artifactory 7.x, they are automatically removed from Artifactory. The plugins that will be removed are:

  1. propertySetsConfig.groovy
  2. haClusterDump.groovy
  3. httpSsoConfig.groovy
  4. repoLayoutsConfig.groovy
  5. ldapGroupsConfig.groovy
  6. internalUser.groovy
  7. ldapSettingsConfig.groovy
  8. pluginsConfig.groovy
  9. proxiesConfig.groovy
  10. requestRouting.groovy

Single Node Installation

The following installation methods are supported:

Interactive Script Installation (recommended)

The installer script provides an interactive way to install Mission Control and its dependencies. All install types are supported: Docker Compose, Linux Archive, RPM and Debian. This is the installer to use for Docker Compose.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
    cd jfrog-mc-<version>-<compose|rpm|deb>

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you've made any changes to the file, remember to back it up before an upgrade.

  3. Run the installer script.
    Note: the script will prompt you for a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey.

    Docker Compose
    ./config.sh
    RPM/DEB
    ./install.sh
  4. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.
  5. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv
    service mc start|stop
    Docker Compose
    cd jfrog-mc-<version>-compose
    docker-compose -p mc up -d
    docker-compose -p mc ps
    docker-compose -p mc down
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  7. Check Mission Control Log.

    tail -f $JFROG_HOME/mc/var/log/console.log

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. This file is not log rotated for Darwin installations. Learn more on how to configure the log rotation.

Linux Archive Installation

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and move it into the mc directory.

    tar -xvf jfrog-mc-<version>-linux.tar.gz
    mv jfrog-mc-<version>-linux mc
  3. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  4. Prepare for Elasticsearch installation by increasing the map count. For additional information, refer to the Elasticsearch documentation.

    sudo sysctl -w vm.max_map_count=262144

    To make this change permanent, remember to update the vm.max_map_count setting in /etc/sysctl.conf

  5. Install Elasticsearch.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  6. Start PostgreSQL and Elasticsearch.

  7. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.
  8. Start and manage the Mission Control service as the user who extracted the tar.

    mc/app/bin/mc.sh start|stop|status|restart
  9. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  10. Check Mission Control Log.

    tail -f $JFROG_HOME/mc/var/log/console.log
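Steps 3 and 5 above both point at the Shared Configurations section of system.yaml. A combined sketch of what that section might look like is below; the database keys follow the first-node example later on this page, while the exact Elasticsearch key names are an assumption, so verify them against your system.yaml template.

```yaml
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: postgres://<ip:port>/mission_control?sslmode=disable
    username: <username>
    password: <password>
  # Elasticsearch key names below are illustrative; check your template
  elasticsearch:
    url: http://<ip>:9200
    username: <username>
    password: <password>
```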

Manual RPM Installation

The RPM installation bundles Mission Control and all its dependencies as native RPM packages; Mission Control and its dependencies are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.

  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-rpm.tar.gz
    cd jfrog-mc-<version>-rpm
  3. Install Mission Control. You must run the installation as the root user.

    rpm -Uvh --replacepkgs ./mc/mc.rpm
  4. Install PostgreSQL and start PostgreSQL service.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch and start Elasticsearch service.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  6. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.
  7. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  8. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  9. Check Mission Control Log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Manual Debian Installation

The Debian installation bundles Mission Control and all its dependencies as native Debian packages; Mission Control and its dependencies are installed separately. Use this method if you are automating installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc-<version>-deb.tar.gz
    cd jfrog-mc-<version>-deb
  3. Install Mission Control. You must run the installation as the root user.

    dpkg -i ./mc/mc.deb
  4. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch.

    Elasticsearch is required and must be installed before continuing with the next installation steps.

    Set your Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  6. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.

  7. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  8. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  9. Check Mission Control Log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

HA Installation

The following describes how to set up a Mission Control HA cluster with more than one node. For more information about HA, see System Architecture.

Prerequisites

All nodes within the same Mission Control HA installation must be running the same Mission Control version.

Database

Mission Control HA requires an external PostgreSQL database. Make sure to install it before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, including HA, load balancing and replication. For more information, see the PostgreSQL documentation.

Network

  • All the Mission Control HA components (Mission Control cluster nodes, database server and Elasticsearch) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.

The following installation methods are supported:

Linux Archive/RPM/Debian Installation

First node installation steps:

  1. Install the first node. The installation is identical to the single node installation.

    Important: make sure not to start Mission Control.

  2. Configure the system.yaml file with the database and first node configuration details. For example,

    First node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgres://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  3. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop
  4. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  5. Check Mission Control Log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log
Additional node installation steps:

For a node to join a cluster, it must have the same database configuration and master key as the first node. Install all additional nodes using the same steps described above, with the following additional steps:

  1. Configure the system.yaml file for the additional node with master key, database and active node configurations. For example,

    Additional node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgres://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "$JFROG_HOME/mc/data/elasticsearch/config/unicast_hosts.txt"
  2. Copy the master.key from the first node to the additional node located at $JFROG_HOME/mc/var/etc/security/master.key.
  3. Start the additional node.

  4. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.
  5. Check Mission Control Log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log
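Step 2 above (copying master.key between nodes) is usually an scp or rsync between hosts. The sketch below simulates it locally with two temporary directories standing in for the first and additional node, just to show that the key lands at the same relative path on both:

```shell
# Simulation only: two temp dirs stand in for the first node and an additional node.
# On real hosts you would use scp/rsync between machines.
NODE1=$(mktemp -d)   # first node's $JFROG_HOME
NODE2=$(mktemp -d)   # additional node's $JFROG_HOME
mkdir -p "$NODE1/mc/var/etc/security" "$NODE2/mc/var/etc/security"
echo "0123456789abcdef" > "$NODE1/mc/var/etc/security/master.key"

# The copy itself: same relative path on both nodes
cp "$NODE1/mc/var/etc/security/master.key" "$NODE2/mc/var/etc/security/master.key"

# Both nodes must now hold an identical key
cmp -s "$NODE1/mc/var/etc/security/master.key" "$NODE2/mc/var/etc/security/master.key" && echo "keys match"
# → keys match
```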

Docker Compose Installation

First node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.

  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
  3. Configure the system.yaml file with the database and first node configuration details. For example,

    First node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgres://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
  4. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.
  5. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  6. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.

  7. Check Mission Control Log.

    docker-compose -p mc logs

Additional node installation steps:

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz
    cd jfrog-mc-<version>-compose
  2. Run the config.sh script to set up folders with the required ownership.

    ./config.sh
    
  3. Configure the system.yaml file for the secondary node with database and active node configurations. For example,

    Additional node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: postgres://<ip:port>/mission_control?sslmode=disable
        username: <username>
        password: <password>
      jfrogUrl: <JFrog URL>
      security:
        joinKey: <Artifactory Join Key>
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "/var/opt/jfrog/mc/data/elasticsearch/config/unicast_hosts.txt"
  4. Copy the master.key from the first node to the additional node located at $JFROG_HOME/mc/var/etc/security/master.key.
  5. Add the jfmc user to the elasticsearch group so that it can update the cluster configuration.

    usermod -a -G elasticsearch jfmc
    
  6. Validate and customize the product configuration (optional), including the third party dependencies connection details and ports.

  7. Start and manage Mission Control using docker-compose commands.

    cd jfrog-mc-<version>-compose
    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  8. Access Mission Control from your browser at: http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module.

  9. Check Mission Control Log.

    docker-compose -p mc logs

Product Configuration

After installing and before running Mission Control, you may set the following configurations.

Where to find the system configurations?

You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/mc/var/etc folder.

If you don't have a system.yaml file in this folder, copy the template available in the folder and rename it system.yaml.

Artifactory Connection Details

Mission Control requires a working Artifactory server and a suitable license. The Mission Control connection to Artifactory requires two parameters:

  • jfrogUrl - the URL of the machine where JFrog Artifactory is deployed, or of the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082.
    Set it in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
  • joinKey - the "secret" key required by Artifactory for registering and authenticating the Mission Control server.
    You can fetch the Artifactory joinKey (join key) from the JPD UI in the Administration module | Security | Settings | Join Key, or from the Artifactory filesystem.
    Set the joinKey used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
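Put together, the two parameters land in system.yaml like this (a minimal sketch; the values are placeholders and mirror the first-node example earlier on this page):

```yaml
shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <Artifactory Join Key>
```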

Changing PostgreSQL Database Credentials

Mission Control comes bundled with a PostgreSQL database out-of-the-box, pre-configured with default credentials.

These commands are indicative and assume some familiarity with PostgreSQL; do not copy and paste them as-is. For Docker Compose, you will need to exec into the PostgreSQL container before you run them.

To change the default credentials:

PostgreSQL
#1.  Change password for mission control user
# Access PostgreSQL as the jfmc user adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfmc -W
# Securely change the password for user "jfmc". Enter and then retype the password at the prompt.
\password jfmc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfmc -W

#2.  Change password for scheduler user
# Access PostgreSQL as the jfisc user adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisc -W
# Securely change the password for user "jfisc". Enter and then retype the password at the prompt.
\password jfisc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisc -W

#3. Change password for executor user
# Access PostgreSQL as the jfise user adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfise -W
# Securely change the password for user "jfise". Enter and then retype the password at the prompt.
\password jfise
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfise -W

#4. Change password for insight server user
# Access PostgreSQL as the jfisv user adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisv -W
# Securely change the password for user "jfisv". Enter and then retype the password at the prompt.
\password jfisv
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisv -W

Changing Elasticsearch Database Credentials

Elasticsearch
curl -u elastic -XPOST '<elasticsearch_endpoint>/_xpack/security/user/elastic/_password' -H 'Content-Type: application/json' -d '
{
  "password" : "s3cr3t"
}'

Set your PostgreSQL and Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
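After changing credentials, the updated values go back into the Shared Configurations section. A minimal sketch is below; the Elasticsearch key names are an assumption, so verify them against your system.yaml template.

```yaml
shared:
  database:
    username: jfmc
    password: <new PostgreSQL password>
  # Key names below are illustrative; check your template
  elasticsearch:
    username: elastic
    password: <new Elasticsearch password>
```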


Installing Third Party Applications

PostgreSQL Installation

Linux Archive
# The binary is available in the folder: "mc/app/third-party/postgresql". Untar it to a suitable folder as in the example below (which extracts to <user_name>'s home directory).
su <user_name>
tar -xvf ./postgresql-9.6.11-1-linux-x64-binaries.tar.gz -C ~/

# As a sudo user, create the data folders to be used by Postgres and make this user (<user_name>) the owner
sudo su
mkdir -p  /var/lib/pgsql/data
chown -R <user_name> /var/lib/pgsql/data
mkdir -p /usr/local/pgsql/data
chown -R <user_name> /usr/local/pgsql/data

# For your convenience, and to avoid Postgres failing to start because of path issues, set the path and locale.
su <user_name>
export PATH="$PATH:/home/<user_name>/pgsql/bin"
LC_ALL="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"

# Initialize the data directory
initdb -D /usr/local/pgsql/data

# Start Postgres service
pg_ctl -D /usr/local/pgsql/data -l logfile start

# Enable Postgres connectivity from remote servers. Needed only for HA or externalized database installations.
# Add the following line to /usr/local/pgsql/data/pg_hba.conf
host    all             all             0.0.0.0/0               md5

#Add the following line to /usr/local/pgsql/data/postgresql.conf
listen_addresses='*'

# Restart Postgres service
pg_ctl -D /usr/local/pgsql/data -l logfile stop
pg_ctl -D /usr/local/pgsql/data -l logfile start

# Create the psql database (the script "mc/scripts/createPostgresUsers.sh", responsible for seeding Postgres, assumes this database exists)
psql template1
<postgres prompt>: CREATE DATABASE <user_name>;
<postgres prompt>: \q

#run the script to seed the tables and schemas needed by Mission Control
mc/app/third-party/postgresql/createPostgresUsers.sh
RPM
# Install PostgreSQL and setup database, users and schema
# Note: Avoid using users home directories for data folder.

# Set LC_ALL Environment Variable
export LC_ALL="C"

# Navigate to the extracted folder and run the install PostgreSQL: 
cd jfrog-mc-<version>-rpm
./third-party/postgresql/postgresql-*.run --unattendedmodeui none --mode unattended --datadir /var/opt/jfrog/postgres --serverport 5432

#Start PostgreSQL service
service postgresql-9.6 start

# Enable Postgres connectivity from remote servers. Needed only for HA or externalized database installations.
# Add the following line to /usr/local/pgsql/data/pg_hba.conf
host    all             all             0.0.0.0/0               md5

#Add the following line to /usr/local/pgsql/data/postgresql.conf
listen_addresses='*'

#  Re-start PostgreSQL service
service postgresql-9.6 stop
service postgresql-9.6 start

#Copy Create Users Script
cp ./third-party/postgresql/createPostgresUsers.sh /tmp/createPostgresUsers.sh
cd /tmp
su postgres -c "POSTGRES_PATH=/opt/PostgreSQL/9.6/bin PGPASSWORD=postgres bash /tmp/createPostgresUsers.sh"
Debian
# Set JAVA_HOME.
export JAVA_HOME=/opt/jfrog/mc/app/third-party/java
export PATH=$PATH:/opt/jfrog/mc/app/third-party/java/bin

# Install PostgreSQL and setup database, users and schema,
# Note: Avoid using users home directories for data folder.

# Set LC_ALL Environment Variable
export LC_ALL="C"

# Install PostgreSQL
./third-party/postgresql/postgresql-*.run --unattendedmodeui none --mode unattended --datadir /var/opt/jfrog/postgres --serverport 5432

# Start PostgreSQL service
service postgresql-9.6 start

# Enable Postgres connectivity from remote servers. Needed only for HA or externalized database installations.
# Add the following line to /usr/local/pgsql/data/pg_hba.conf
host    all             all             0.0.0.0/0               md5

#Add the following line to /usr/local/pgsql/data/postgresql.conf
listen_addresses='*'

#  Re-start PostgreSQL service
service postgresql-9.6 stop
service postgresql-9.6 start


# Copy Create Users Script
cp ./third-party/postgresql/createPostgresUsers.sh /tmp/createPostgresUsers.sh
# cd to /tmp and execute Create Users Script as postgres
cd /tmp
su postgres -c "POSTGRES_PATH=/opt/PostgreSQL/9.6/bin PGPASSWORD=postgres bash /tmp/createPostgresUsers.sh"

Elasticsearch Installation

Linux Archive
# The binary is available in the folder: "mc/app/third-party/elasticsearch". Copy it to a suitable folder (<elastic_search_location>) and untar it.
su <user_name>
tar -xvf elasticsearch-oss-6.6.0.tar.gz

# Set java home
export JAVA_HOME=<install_path>/jfmc/app/third-party/java
export PATH=$PATH:<install_path>/jfmc/app/third-party/java/bin

# For HA setup, add the following to elasticsearch.yml in elasticsearch-6.6.0/config
transport.host: 0.0.0.0
transport.publish_host: <publish IP of the server, reachable from other nodes>
discovery.zen.hosts_provider: file
discovery.zen.minimum_master_nodes: 2 # Applicable from the second node onwards

# Start ElasticSearch in detached mode.
# start as <user_name>
./elasticsearch-6.6.0/bin/elasticsearch -d

# Restart ElasticSearch
ps -ef | grep elasticsearch
# Find the PID
kill -15 <PID>
<elastic_search_location>/elasticsearch-6.6.0/bin/elasticsearch -d
RPM
# Install Elasticsearch
rpm -ivh --replacepkgs ./third-party/elasticsearch/elasticsearch-oss-6.6.0.rpm

# Set JAVA_HOME using the /etc/sysconfig/elasticsearch file. For example, 
JAVA_HOME=/opt/jfrog/mc/app/third-party/java


# Start ElasticSearch Service
service elasticsearch start # for  systemv OS
systemctl start elasticsearch.service # for systemD OS
Debian
# Set JAVA_HOME.
export JAVA_HOME=/opt/jfrog/mc/app/third-party/java
export PATH=$PATH:/opt/jfrog/mc/app/third-party/java/bin

# Install ElasticSearch
dpkg -i ./third-party/elasticsearch/elasticsearch-oss-6.6.0.deb
# Pass JAVA_HOME to elasticsearch service
# Edit /etc/default/elasticsearch with following content
JAVA_HOME=/opt/jfrog/mc/app/third-party/java

# Enable Elasticsearch Service to start automatically when the system boots up
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

# Start ElasticSearch Service
service elasticsearch start # for  systemv OS
systemctl start elasticsearch.service # for systemD OS

For Advanced Users

Manual Docker Compose Installation

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc-<version>-compose.tar.gz

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Notice that some operating systems do not display dot files by default. If you've made any changes to the file, remember to backup before an upgrade.

  2. Create the following folder structure under $JFROG_HOME/mc (the bracketed numbers are the required owner UID and GID).

     -- [1050 1050] var
        -- [1050 1050] data
           -- [1000 1000] elasticsearch
              -- [1000 1000] data
           -- [999  999] postgres
        -- [1050 1050] etc
  3. Copy the appropriate docker-compose template from the templates folder to the extracted folder and rename it docker-compose.yaml.

    NOTE: The commands below assume you are using the template: docker-compose-postgres-es.yaml

    Requirement                                       | Template
    Mission Control with externalized databases       | docker-compose.yaml
    Mission Control with Elasticsearch and PostgreSQL | docker-compose-postgres-es.yaml
  4. Update the .env file

    ## The installation directory for Mission Control. If not entered, the script will prompt you for this input. Default [$HOME/.jfrog/mc]
    ROOT_DATA_DIR=
    
    ## Public IP of this machine
    HOST_IP=
  5. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control system.yaml configuration file.

      Ensure the host's ID and IP are added to the system.yaml. This is important so that other products and Platform Deployments can reach this instance.

  6. For Elasticsearch to work correctly, increase the map count. For additional information, refer to the Elasticsearch documentation.

  7. Create the necessary tables and users using the script: "createPostgresUsers.sh". 
    • Start the PostgreSQL container.

      docker-compose -p mc up -d postgres
    • Copy the script into the PostgreSQL container.

      docker cp ./third-party/postgresql/createPostgresUsers.sh mc_postgres:/
    • Exec into the container and execute the script. This will create the database tables and users.

      docker exec -t mc_postgres bash -c "chmod +x /createPostgresUsers.sh && gosu postgres /createPostgresUsers.sh"
  8. Start Mission Control using docker-compose commands.

    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  9. Access Mission Control from your browser at: http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.

  10. Check Mission Control log

    docker-compose -p mc logs

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. The installation scripts add a cron job to log rotate the console.log file every hour.

    This is not done for manual Docker Compose installations. Learn more on how to configure the log rotation.
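For manual Docker Compose installations, one way to get similar behavior is a logrotate definition driven by an hourly cron entry. A sketch is below; the file name, rotation count, and log path are assumptions, not something the installer creates.

```
# Hypothetical /etc/logrotate.d/mc-console definition (paths and values are assumptions)
/var/opt/jfrog/mc/var/log/console.log {
    rotate 24
    compress
    missingok
    copytruncate
}

# Hourly cron entry (e.g. in root's crontab) to drive the rotation:
# 0 * * * * /usr/sbin/logrotate /etc/logrotate.d/mc-console
```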


Copyright © 2019 JFrog Ltd.