Standard Installation
The JFrog Mission Control Docker Compose installer can be downloaded from the Mission Control Download Page.
The docker-compose commands in this guide use "jfmc" as the project name, in the form docker-compose -p <project name> <action>.
Download and extract the jfmc-compose-<version>.zip.
unzip jfmc-compose-<version>.zip
Set the JFMC_MOUNT_ROOT variable in the setenv.sh file to the mount path. This path is used to store the data, configuration, and logs of all the Mission Control services, including the databases Mission Control uses.
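For example, the variable is assumed to be exported in setenv.sh along these lines (the path /opt/jfrog/jfmc is an arbitrary placeholder; use any mount path with sufficient disk space):
export JFMC_MOUNT_ROOT=/opt/jfrog/jfmc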
Load the environment variables into the session that will run the docker-compose actions.
source ./setenv.sh
Reload required
You need to reload the setenv.sh file and restart the services every time the value of an environment variable is modified.
source ./setenv.sh
docker-compose -f ./jfmc-compose.json -p jfmc down
docker-compose -f ./jfmc-compose.json -p jfmc up -d
Mission Control services run as a non-root user with UID and GID 1050 by default. The mount for each service should be owned by the user running within it.
Each service can run with a custom UID and GID. To do so, set a new key-value pair under each service in the compose file:
"user": "<uid>:<gid>"
Make sure the mount point for each service is owned by this set UID and GID.
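As an illustration only, a service entry in jfmc-compose.json might look like the following sketch (the service name and surrounding structure are assumptions based on a typical Compose JSON file; only the "user" key is prescribed here):
"services": {
    "insight-server": {
        ...existing service definition...,
        "user": "1050:1050"
    }
}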
For the default UID and GID, execute the following steps to prepare the mounted directories:
# For Mission Control services, the default is 1050:1050
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-server
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/etc/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/etc/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-server
chown -R 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc

# For Elasticsearch, the default is 1000:1000
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/data
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/sgconfig
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch

# For PostgreSQL:
# Ignore this part if you are running with the default compose file.
# PostgreSQL runs as the root user by default; Docker will create the mount point with the right permissions.
# For a custom UID and GID:
# mkdir -p ${JFMC_MOUNT_ROOT}/postgres/data
# chown -R customUID:customGID ${JFMC_MOUNT_ROOT}/postgres

# For MongoDB (removed in 3.4.0):
# Ignore this part if you are running with the default compose file.
# MongoDB runs as the root user by default; Docker will create the mount point with the right permissions.
# For a custom UID and GID:
# mkdir -p ${JFMC_MOUNT_ROOT}/mongodb/db
# chown -R customUID:customGID ${JFMC_MOUNT_ROOT}/mongodb
- Create a PostgreSQL database and users.
Launch the "postgres container", and login as admin user.
# Start the postgres container
docker-compose -f ./jfmc-compose.json -p jfmc up -d postgres
# Check that it started successfully and get the container ID
docker ps -a
# Exec into the container
docker exec -it <Container_id> /bin/bash
# Log in to psql with the admin credentials
psql -U postgres
- Create the PostgreSQL database, schema, and users as described in the Configuring PostgreSQL section; the commands typically follow the pattern sketched below.
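As an illustration only (the user name, password, and database name below are placeholders; the authoritative names and statements are in the Configuring PostgreSQL section), the statements at the psql prompt generally look like this:
-- Placeholder names: replace jfmc_user, password, and mission_control
-- with the values from the Configuring PostgreSQL section
CREATE USER jfmc_user WITH PASSWORD 'password';
CREATE DATABASE mission_control WITH OWNER = jfmc_user ENCODING = 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE mission_control TO jfmc_user;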
Launch Mission Control.
docker-compose -f ./jfmc-compose.json -p jfmc up -d
Initialize the Elasticsearch Search Guard plugin.
docker exec -it jfmc_elasticsearch_1 bash -c "cd /usr/share/elasticsearch/plugins/search-guard-6/tools; ./sgadmin.sh -p ${ELASTIC_TRANSPORT_PORT} -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/"
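Optionally, one way to confirm that all services came up is to list the containers in the project:
docker-compose -f ./jfmc-compose.json -p jfmc ps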
HA Installation
Setting up the First Node
Download and extract the jfmc-compose-<version>.zip.
unzip jfmc-compose-<version>.zip
Set the JFMC_MOUNT_ROOT variable in the setenv.sh file to the mount path. This path is used to store the data, configuration, and logs of all the Mission Control services, including the databases Mission Control uses.
Load the environment variables into the session that will run the docker-compose actions.
source ./setenv.sh
Reload required
You need to reload the setenv.sh file and restart the services every time the value of an environment variable is modified.
source ./setenv.sh
docker-compose -f ./jfmc-compose-ha.json -p jfmc down
docker-compose -f ./jfmc-compose-ha.json -p jfmc up -d
Mission Control services run as a non-root user with UID and GID 1050 by default. The mount for each service should be owned by the user running within it.
Each service can run with a custom UID and GID. To do so, set a new key-value pair under each service in the compose file:
"user": "<uid>:<gid>"
Make sure the mount point for each service is owned by this set UID and GID.
For the default UID and GID, execute the following steps to prepare the mounted directories.
# For Mission Control services, the default is 1050:1050
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-server
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-executor
chown -R 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc

# For Elasticsearch, the default is 1000:1000
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/data
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/sgconfig
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch

# Create the Elasticsearch unicast hosts file.
# This file is modified by insight-server and read by Elasticsearch.
# Note: used only in Mission Control HA mode.
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/config
echo "" > ${JFMC_MOUNT_ROOT}/elasticsearch/config/unicast_hosts.txt
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch
- Create a PostgreSQL database, schema, and users by following the steps in Using External Databases.
- Modify the PostgreSQL connection details in setenv.sh to point to the newly set up external PostgreSQL.
Modify the HA-related environment variables.
export JFMC_ES_CLUSTER_SETUP="YES"
# The host IP will be used by other HA nodes to connect and join the Elasticsearch cluster through the transport port
export JFMC_HOST_IP=<host_ip>
export ELASTIC_TRANSPORT_PORT=9300
# Set this to true for an HA installation - the mission-control service will terminate itself if insight-server
# is unhealthy, which in turn indicates that the node is unhealthy
export NODEHEALTHCHECK_KILL_ONMAXFAILURES=true
Make sure <JFMC_HOST_IP>:<ELASTIC_TRANSPORT_PORT> is accessible from the other nodes.
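For example, from another node you might verify connectivity with a tool such as netcat (assuming it is installed; substitute the actual host IP and transport port):
nc -zv <JFMC_HOST_IP> <ELASTIC_TRANSPORT_PORT>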
Load the environment variables.
source ./setenv.sh
Launch Mission Control.
docker-compose -f ./jfmc-compose-ha.json -p jfmc up -d
Initialize the Elasticsearch Search Guard plugin.
docker exec -it jfmc_elasticsearch_1 bash -c "cd /usr/share/elasticsearch/plugins/search-guard-6/tools; ./sgadmin.sh -p ${ELASTIC_TRANSPORT_PORT} -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/"
Setting up the Second/Additional Nodes
- Complete the first three steps from the first-node instructions above.
- Modify the PostgreSQL connection details in setenv.sh to point to the existing external PostgreSQL.
Modify the HA-related environment variables.
export JFMC_ES_CLUSTER_SETUP="YES" # Host IP will be used by other HA nodes to connect and join the elastic cluster through transport port export JFMC_HOST_IP=<host_ip> export ELASTIC_TRANSPORT_PORT=9300 # Set this to true for HA installation - mission-control service will commit suicide if insight-server is unhealthy which in turn will indicate the node is unhealthy export NODEHEALTHCHECK_KILL_ONMAXFAILURES=true export ELASTIC_MIN_MASTER_NODES=2
Make sure <JFMC_HOST_IP>:<ELASTIC_TRANSPORT_PORT> is accessible from the other nodes.
Copy the mc.key content from the first node; it can be found in ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key.
Paste the copied mc.key into ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key on the current node.
Make sure ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key has the right ownership (default 1050:1050).
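For example, one way to copy the key is over SSH (the hostname first-node and user are placeholders; this assumes both nodes use the same JFMC_MOUNT_ROOT):
# Run on the current node; ${JFMC_MOUNT_ROOT} expands locally on both sides of the copy
scp user@first-node:${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key
chown 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key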
Load the environment variables.
source ./setenv.sh
Launch Mission Control.
docker-compose -f ./jfmc-compose-ha.json -p jfmc up -d