The JFrog Mission Control Docker Compose installer can be downloaded from the JFrog download page.

Info: The docker-compose actions use "jfmc" as the project name in the docker-compose -p <project name> <action> command.
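For example, listing the containers of this project uses the same project name (shown here only as an illustration of the -p flag, not as a required step):

Code Block
# All docker-compose actions on this page use the jfmc project name
docker-compose -f ./jfmc-compose.json -p jfmc ps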
Download and extract the jfmc-compose-<version>.zip.
Code Block
unzip jfmc-compose-<version>.zip
Set the JFMC_MOUNT_ROOT variable in the setenv.sh file to the mount path. This path will be used to store the data, configuration, and logs of all the Mission Control services, including the databases Mission Control uses.
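For example, a minimal sketch of the relevant line in setenv.sh, assuming a hypothetical mount path of /opt/jfrog/jfmc-mount; use whatever path fits your environment:

Code Block
# Hypothetical mount path - replace with the actual path on your host
export JFMC_MOUNT_ROOT=/opt/jfrog/jfmc-mount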
Load the environment variables into the session that will run the docker-compose actions.
Code Block
source ./setenv.sh
Info: You need to reload the setenv.sh file and restart the services every time an environment variable value is modified.
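As a rough sketch of that reload-and-restart cycle, assuming the default compose file and the jfmc project name used throughout this page:

Code Block
# Re-read the updated environment variables into the current shell
source ./setenv.sh
# Recreate the services so they pick up the new values
docker-compose -f ./jfmc-compose.json -p jfmc up -d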
Mission Control services run as a non-root user, with UID and GID 1050 by default. The mount for each service should be owned by the user running within it. To run a service as a different user, change the user field of that service in the compose file:

Code Block
"user": "<uid>:<gid>"

Warning: Make sure the mount point for each service is owned by this set UID and GID.
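As an illustration only, a hypothetical fragment of a service definition in jfmc-compose.json with the user field set; the service and image names below are placeholders, not values from the actual compose file:

Code Block
{
  "services": {
    "some-jfmc-service": {
      "image": "some-jfmc-image:<version>",
      "user": "1050:1050"
    }
  }
}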
For the default UID and GID, execute the following steps to prepare the mounted directories:
Code Block
# For Mission Control services default is 1050:1050,
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-server
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/etc/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/etc/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-executor
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/support/insight-server
chown -R 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc
# For Elasticsearch default is 1000:1000,
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/data
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/sgconfig
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch
# For PostgreSQL,
# Ignore this part if you are running with default compose file
# PostgreSQL runs as root user by default, docker will take care of creating mount point with right permissions
# For custom UID and GID,
# mkdir -p ${JFMC_MOUNT_ROOT}/postgres/data
# chown -R customUID:customGID ${JFMC_MOUNT_ROOT}/postgres
# For MongoDB (removed in 3.4.0),
# Ignore this part if you are running with default compose file
# Mongo DB runs as root user by default, docker will take care of creating mount point with right permissions
# For custom UID and GID,
# mkdir -p ${JFMC_MOUNT_ROOT}/mongodb/db
# chown -R customUID:customGID ${JFMC_MOUNT_ROOT}/mongodb
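As an optional sanity check, the numeric ownership of the prepared directories can be confirmed before continuing:

Code Block
# List the mount root with numeric UID:GID so the 1050:1050 and 1000:1000 ownership is easy to verify
ls -ln ${JFMC_MOUNT_ROOT}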
Launch the "postgres container", and login as admin user.
Code Block |
---|
#Start the postgres container docker-compose -f ./jfmc-compose.json -p jfmc up -d postgres #Check if it started successfully and get container-id docker ps -a #Exec into the container docker exec -it <Container_id> /bin/bash #Login as admin user and login to psql with admin credentials psql -U postgres |
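Once connected, listing the existing databases is a quick way to confirm that PostgreSQL is reachable; the same check can also be run in one shot from the host (a sketch, reusing the container id from docker ps -a above):

Code Block
# Inside psql, \l lists the databases; the same check run from the host:
docker exec -it <Container_id> psql -U postgres -c "\l"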
Launch Mission Control.
Code Block
docker-compose -f ./jfmc-compose.json -p jfmc up -d
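As an optional check that the services came up, the container states for the jfmc project can be listed:

Code Block
# Show the state of all containers in the jfmc project
docker-compose -f ./jfmc-compose.json -p jfmc ps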
Initialise the Elasticsearch Search Guard plugin.

Code Block
docker exec -it jfmc_elasticsearch_1 bash -c "cd /usr/share/elasticsearch/plugins/search-guard-6/tools; ./sgadmin.sh -p ${ELASTIC_TRANSPORT_PORT} -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/"
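If the command does not report success, the Elasticsearch container logs are the first place to look (an optional check, assuming the default container name jfmc_elasticsearch_1):

Code Block
# Tail the Elasticsearch container logs to confirm Search Guard initialised cleanly
docker logs --tail 100 jfmc_elasticsearch_1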
For a High Availability (HA) installation, perform the following steps on the first node.

Download and extract the jfmc-compose-<version>.zip.
Code Block
unzip jfmc-compose-<version>.zip
Set the JFMC_MOUNT_ROOT variable in the setenv.sh file to the mount path. This path will be used to store the data, configuration, and logs of all the Mission Control services, including the databases Mission Control uses.
Load the environment variables into the session that will run the docker-compose actions.
Code Block
source ./setenv.sh
Info: You need to reload the setenv.sh file and restart the services every time an environment variable value is modified.
Mission Control services run as a non-root user, with UID and GID 1050 by default. The mount for each service should be owned by the user running within it. To run a service as a different user, change the user field of that service in the compose file:

Code Block
"user": "<uid>:<gid>"

Warning: Make sure the mount point for each service is owned by this set UID and GID.
For the default UID and GID, execute the following steps to prepare the mounted directories:
Code Block
# For Mission Control services default is 1050:1050,
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-server
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-scheduler
mkdir -p ${JFMC_MOUNT_ROOT}/jfmc/logs/insight-executor
chown -R 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc
# For Elasticsearch default is 1000:1000,
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/data
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/sgconfig
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch
# Create the elasticsearch unicast file
# This file will be modified by insight-server and read by elasticsearch
# Note: only used in Mission Control HA mode
mkdir -p ${JFMC_MOUNT_ROOT}/elasticsearch/config
echo "" > ${JFMC_MOUNT_ROOT}/elasticsearch/config/unicast_hosts.txt
chown -R 1000:1000 ${JFMC_MOUNT_ROOT}/elasticsearch
Modify the HA-related environment variables.

Code Block
export JFMC_ES_CLUSTER_SETUP="YES"
# The host IP will be used by other HA nodes to connect and join the Elasticsearch cluster through the transport port
export JFMC_HOST_IP=<host_ip>
export ELASTIC_TRANSPORT_PORT=9300
# Set this to true for an HA installation - the mission-control service will shut itself down if insight-server is unhealthy, which in turn indicates the node is unhealthy
export NODEHEALTHCHECK_KILL_ONMAXFAILURES=true
Note: Make sure <JFMC_HOST_IP>:<ELASTIC_TRANSPORT_PORT> is accessible from the other nodes.
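A simple way to verify this from another node, assuming netcat is available there; replace the placeholders with the actual values:

Code Block
# Run from another HA node; a successful/open result means the transport port is reachable
nc -zv <JFMC_HOST_IP> <ELASTIC_TRANSPORT_PORT>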
Load the environment variables.
Code Block
source ./setenv.sh
Launch Mission Control.
Code Block
docker-compose -f ./jfmc-compose-ha.json -p jfmc up -d
Initialise the Elasticsearch Search Guard plugin.

Code Block
docker exec -it jfmc_elasticsearch_1 bash -c "cd /usr/share/elasticsearch/plugins/search-guard-6/tools; ./sgadmin.sh -p ${ELASTIC_TRANSPORT_PORT} -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/"
On each additional HA node, modify the HA-related environment variables.

Code Block
export JFMC_ES_CLUSTER_SETUP="YES"
# The host IP will be used by other HA nodes to connect and join the Elasticsearch cluster through the transport port
export JFMC_HOST_IP=<host_ip>
export ELASTIC_TRANSPORT_PORT=9300
# Set this to true for an HA installation - the mission-control service will shut itself down if insight-server is unhealthy, which in turn indicates the node is unhealthy
export NODEHEALTHCHECK_KILL_ONMAXFAILURES=true
export ELASTIC_MIN_MASTER_NODES=2
Note: Make sure <JFMC_HOST_IP>:<ELASTIC_TRANSPORT_PORT> is accessible from the other nodes.
Copy the mc.key content from the first node; it can be found in ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key. Paste the copied mc.key into ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key on the current node (see the sketch after the note below).
Note: Make sure ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key has the right ownership (default 1050:1050).
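One way to do this, assuming SSH access from the current node to the first node and the same mount root path on both nodes; the user and host below are placeholders:

Code Block
# Copy mc.key from the first node (placeholder user/host), assuming JFMC_MOUNT_ROOT is identical on both nodes
scp <user>@<first_node_host>:${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key
# Restore ownership for the default Mission Control UID and GID
chown 1050:1050 ${JFMC_MOUNT_ROOT}/jfmc/data/security/mc.key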
Load the environment variables.
Code Block
source ./setenv.sh
Launch Mission Control.
Code Block
docker-compose -f ./jfmc-compose-ha.json -p jfmc up -d