
Overview

JFrog Mission Control is available as a standalone ZIP file installation for all supported 64 bit Linux flavors: CentOS, Red Hat, Debian and Ubuntu.

Prerequisites

To run Mission Control from a ZIP installation, you need to have JDK 8 installed.

Installation Instructions

Once you have downloaded Mission Control, follow these steps:

  1. Extract the contents of the compressed archive, and rename the extracted folder to remove the version number, operating system and architecture.

    unzip jfmc-<version>-<os>-<arch>.zip
    mv jfmc-<version>-<os>-<arch> jfmc
  2. Set up ElasticSearch and Postgres, which can be preinstalled. Binaries for both databases are also available in the zip-dependencies folder.
    For instructions on using the bundled binaries, see the 3rd Party Binaries section below; the Postgres section there also covers setting it up manually.

  3. Edit the jfmc/scripts/setEnvDefaults.sh environment file and provide the relevant environment variables for your ElasticSearch and Postgres installations.
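
As a sketch, the edits made in step 3 typically look like the following; all values below are placeholders to be replaced with the details of your own ElasticSearch and Postgres installations:

```shell
# Example entries in jfmc/scripts/setEnvDefaults.sh (all values are placeholders)
export DB_TYPE=postgresql
export DB_HOST="localhost"        # host running your Postgres server
export DB_PORT=5432               # Postgres port
export DB_NAME=mission_control    # database used by Mission Control
```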


The mc.key File

Once you install and run the first Mission Control node, an mc.key file is generated and saved in the file system. The mc key is an internal secret used by Mission Control to encrypt sensitive data, and it must be synced between the Mission Control cluster nodes. After the application is started, the mc key can be found at the following path: <JFMC_DATA>/security/mc.key.
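
In an HA setup the key must be present on every node. One way to sync it is to copy it from the first node, for example with scp; the user and target host below are placeholders:

```shell
# Copy the generated key from the first node to another cluster node (host is a placeholder)
scp <JFMC_DATA>/security/mc.key user@<second_node>:<JFMC_DATA>/security/mc.key
```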


  4. Start Mission Control.

    jfmc/bin/jfmc.sh start

    New Environment File

    Upon completing the ZIP installation, Mission Control creates a new environment file, jfmc/data/setenv.sh.
    This is the environment file that will be used from here on. Any further changes that may need to be made in environment variables should be made in this file.

Script Actions

The Mission Control script installed with the ZIP installation can be used for a variety of actions using the following syntax:

jfmc/bin/jfmc.sh <action>

Where <action> can take one of the following values:

start

Start the Mission Control services in the background. Corresponding process IDs will be stored as jfmc/run/<service_name>.pid.

stop

Stops the Mission Control services. This command gets the process ID from the jfmc/run/<service_name>.pid file, stops the process and removes the file.

status

Checks if the Mission Control services are running, as follows:

  • Obtains the process ID from jfmc/run/<service_name>.pid and checks if the process is running.

  • If the pid file exists but the process is not running, checks if any other processes are running for this service. This will show any processes that are out of sync (for example, if a pid file was removed or modified).

Out of sync processes

Any processes that are out of sync should manually be removed.
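
One way to clear an out-of-sync process, assuming you have identified the affected service, is to stop the stray process and delete its stale pid file; the service name and PID below are placeholders:

```shell
# Find any stray process for the affected service (service name is a placeholder)
ps -ef | grep <service_name>
# Stop the process found above, then remove the stale pid file
kill -15 <PID>
rm -f jfmc/run/<service_name>.pid
```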

restart

Restarts Mission Control services by calling stop and start sequentially.

getInternalCerts

Mission Control uses certificates for secure internal communication between its services. Normally, this call is run automatically on every start action to make sure the services are all furnished with the required certificates. You can make this call manually if, for any reason, automatic generation of certificates failed.

Environment Variables

The ZIP file installation requires a set of environment variables in order to run. These are provided through the environment file. Note that some of the variables in the environment file are for internal use and should not be modified. Following are the environment variables (with default values) that you should modify to match your own installation:

PostgreSQL:
  • export DB_TYPE=postgresql
  • export DB_HOST="localhost"
  • export DB_PORT=5432
  • export DB_NAME=mission_control
  • export DB_TABLESPACE="pg_default"
  • export DB_SSLMODE="false"
  • export JFIS_DB_SCHEMA=insight_server
  • export JFEX_DB_SCHEMA=insight_executor
  • export JFSC_DB_SCHEMA=insight_scheduler
  • export JFMC_DB_SCHEMA=jfmc_server

Database and ElasticSearch credentials can be set in the property file jfmc/etc/mission-control.properties:
# These values are used by default; the file needs to be edited if there is a need to change them
# Jfmc server credentials
jfmc.db.username=jfmc
jfmc.db.password=password


# Insight server credentials
jfis.db.username=jfis
jfis.db.password=password


# Executor credentials
jfex.db.username=jfex
jfex.db.password=password


# Scheduler credentials
jfsc.db.username=jfsc
jfsc.db.password=password


# Elasticsearch credentials
elastic.username=admin
elastic.password=admin
Ports used by the Mission Control Service:
  • export JFMC_PORT=8080

  • export JFMC_SCHEDULER_PORT=8085

  • export JFMC_EXECUTOR_PORT=8087

  • export JFMC_INSIGHT_SERVER_PORT=8090

  • export JFMC_INSIGHT_SERVER_SSL_PORT=8089 (removed in 3.3.0)

    High Availability related variables:
  • export NODEHEALTHCHECK_KILL_ONMAXFAILURES=true
  • export JFMC_ES_CLUSTER_SETUP="YES"
  • export JFMC_HOST_IP=<Publish IP of this server, for connecting from other nodes>
  • export JFIS_ELASTIC_UNICAST_HOST_FILE=<Elasticsearch conf location/unicast_hosts.txt>
Options for each Java service:
  • export JFMC_EXTRA_JAVA_OPTS="-Xmx2g"
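
For example, to raise the heap ceiling for the Java services, the variable could be set as follows (the 4g value is purely illustrative, not a sizing recommendation):

```shell
# Illustrative heap setting applied to all Mission Control Java services
export JFMC_EXTRA_JAVA_OPTS="-Xmx4g"
```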

Changing Port Settings

The following table describes the different Mission Control services, the default port allocated to the service, and the environment variable through which the port can be modified:

Service              Default Port               Environment Variable
Mission Control      8080                       JFMC_PORT
Insight server       8090                       JFMC_INSIGHT_SERVER_PORT
                     8089 (removed in 3.3.0)    JFMC_INSIGHT_SERVER_SSL_PORT
Insight scheduler    8085                       JFMC_SCHEDULER_PORT
Insight executor     8087                       JFMC_EXECUTOR_PORT

If port conflicts are detected, you can change the port allocated as follows:

  1. If the Mission Control services are running, stop them.

    $MC_HOME/bin/jfmc.sh stop
  2. In $MC_HOME/data/setenv.sh, modify the environment variable corresponding to the service with the port conflict, as described in the table above.

  3. Start the Mission Control services.

    $MC_HOME/bin/jfmc.sh start
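
Before picking a new port, you can check which of the default ports are already in use on the host, for example with ss (netstat works similarly on older systems):

```shell
# List TCP listeners on the default Mission Control ports to spot conflicts
ss -ltn | grep -E ':(8080|8085|8087|8090)\b' || echo "no listeners on the default ports"
```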

3rd Party Binaries

Binaries for ElasticSearch and Postgres are included in the archive and can be used if needed. The instructions below are specific to these binaries.

ElasticSearch

  • The binary is available in the folder: "zip-dependencies". Copy it to a suitable folder (<elastic_search_location>) and extract it.

    tar -xvf elasticsearch-oss-6.6.0.tar.gz
  • Install search guard plugin. This is an optional step if you want to secure your ElasticSearch with authentication.
    The binary for search guard plugin (search-guard-6-6.6.0-24.zip) will be available in the folder: "zip-dependencies". Copy this file to a suitable location <search_guard_location>

    ./elasticsearch-6.6.0/bin/elasticsearch-plugin install file:///<search_guard_location>/search-guard-6-6.6.0-24.zip
  • Generate search guard certificates. This is applicable if you are configuring non-HA or first node of HA setup.
    The tool to generate search guard certificates will be available in the folder: "zip-dependencies". Copy this tar ball to a suitable location <search_guard_cert_tool_location>

    tar -xvf <search_guard_cert_tool_location>/search-guard-tlstool-1.6.tar.gz
    cp jfmc/scripts/elasticsearch/config/tlsconfig.yml <search_guard_cert_tool_location>/config
    cd <search_guard_cert_tool_location>/tools
    ./sgtlstool.sh -c ../config/tlsconfig.yml -ca -crt
    # A folder named "out" will be created with all the required certificates
    cd out
    cp localhost.key localhost.pem root-ca.pem <elastic_search_location>/elasticsearch-6.6.0/config
    cp root-ca.pem sgadmin.key sgadmin.pem <elastic_search_location>/elasticsearch-6.6.0/plugins/search-guard-6/tools

    For nodes other than first node in HA setup, copy the certificates from the first node to the corresponding location.

  • Configure ElasticSearch for search guard by updating the following properties in <elastic_search_location>/elasticsearch-6.6.0/config/elasticsearch.yml

    searchguard.ssl.transport.pemcert_filepath: localhost.pem
    searchguard.ssl.transport.pemkey_filepath: localhost.key
    searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
    searchguard.ssl.transport.enforce_hostname_verification: false
    searchguard.ssl.transport.resolve_hostname: false
    searchguard.nodes_dn:
    - CN=localhost,OU=Ops,O=localhost\, Inc.,DC=localhost,DC=com
    searchguard.authcz.admin_dn:
    - CN=sgadmin,OU=Ops,O=sgadmin\, Inc.,DC=sgadmin,DC=com
    searchguard.enterprise_modules_enabled: false
  • For an HA setup, add the following to elasticsearch.yml

    transport.host: 0.0.0.0
    transport.publish_host: <publish IP of the server, that can be accessed from other nodes>
    discovery.zen.hosts_provider: file
    discovery.zen.minimum_master_nodes: 2 # This is applicable from the second node onwards
    
  • Start ElasticSearch in detached mode.

    ./elasticsearch-6.6.0/bin/elasticsearch -d
  • Initialize Search Guard.  This is applicable if you are configuring non-HA or first node of HA setup.

    cd <elastic_search_location>/elasticsearch-6.6.0/plugins/search-guard-6/tools 
    ./sgadmin.sh -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/
  • Restart ElasticSearch

    ps -ef | grep elasticsearch
    # Find the PID
    kill -15 <PID>
    <elastic_search_location>/elasticsearch-6.6.0/bin/elasticsearch -d
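
After the restart, you can verify that ElasticSearch is responding; 9200 is the ElasticSearch default HTTP port, and the credentials below match the defaults shown in mission-control.properties (drop -u if you did not install the search guard plugin):

```shell
# Query cluster health; adjust credentials/host/port if you changed the defaults
curl -u admin:admin http://localhost:9200/_cluster/health?pretty
```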

Postgres

  • The binary is available in the folder: "zip-dependencies". Untar it to a suitable folder, as in the example below (which extracts to the user's home directory).

    tar -xvf ./postgresql-9.6.11-1-linux-x64-binaries.tar.gz -C ~/
  • As a sudo user, create the data folders to be used by Postgres and make this user (<user_name>) the owner

    sudo su
    mkdir -p  /var/lib/pgsql/data
    chown -R <user_name> /var/lib/pgsql/data
    mkdir -p /usr/local/pgsql/data
    chown -R <user_name> /usr/local/pgsql/data
  • For your convenience, and to avoid Postgres failing to start up because of path issues, set the path and locale.

    su <user_name>
    export PATH="$PATH:/home/<user_name>/pgsql/bin"
    export LC_ALL="en_US.UTF-8"
    export LC_CTYPE="en_US.UTF-8"
  • Enabling Postgres connectivity from remote servers
    Add the following line to  /usr/local/pgsql/data/pg_hba.conf 

    host    all             all             0.0.0.0/0               md5

    Add the following line to /usr/local/pgsql/data/postgresql.conf

    listen_addresses='*'
  • Initialize the data directory and start Postgres

    initdb -D /usr/local/pgsql/data
    pg_ctl -D /usr/local/pgsql/data -l logfile start
  • Create the psql database (the script jfmc/scripts/createPostgresUsers.sh, which seeds Postgres, assumes this database exists)

    psql template1
    <postgres prompt>: CREATE DATABASE <username>;
    <postgres prompt>: \q
    ./jfmc/scripts/createPostgresUsers.sh
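
After running the seeding script, you can confirm that the expected roles and schemas exist; the database name below is the default from the environment file:

```shell
# List the roles and schemas created by createPostgresUsers.sh
psql -d mission_control -c '\du'
psql -d mission_control -c '\dn'
```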