When installing Mission Control, you must run the installation as a root user or provide sudo access to a non-root user.
You will need to have admin permissions on the installation machine in the following cases:
Use a dedicated server for Mission Control with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.
| Debian | CentOS | RHEL | Ubuntu | Windows Server | Helm Charts | SLES |
|---|---|---|---|---|---|---|
| 8.x, 9.x, 10.x | 7.x, 8.x | 7.x, 8.x | 16.04, 18.04, 20.04 | | 2.x, 3.x | 12 SP 5 |
Version 4.0 to 4.7.x
| Processor | Memory | Storage | External Network Port | Internal Network Ports (default) | Databases/Third Party Applications |
|---|---|---|---|---|---|
| 4 cores | 12 GB | 100 GB | | | Required: PostgreSQL. Elasticsearch 6.6.x; 7.6.1; 7.8.0 and 7.8.1 (for Mission Control 4.6.0); 7.10.2 (for Mission Control 4.7.0 to 4.7.7); 7.12.1 (for Mission Control 4.7.8); 7.14.1 (for Mission Control 4.7.15) |
To learn about the JFrog Platform Deployment, refer to System Architecture.
Before installing Mission Control 4.x, you must first install JFrog Artifactory 7.x.
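If you want to confirm the Artifactory version before proceeding, you can query the Artifactory REST API. This is a minimal sketch; the host and port below are placeholders for your environment.

# Query the Artifactory version (replace the host and port with your own values)
curl -s http://<artifactory-host>:8082/artifactory/api/system/version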
The following installation methods are supported:
All install types are supported, including: Docker Compose, Linux Archive, RPM, and Debian.
The installer script provides you an interactive way to install Mission Control and its dependencies. All install types are supported. This installer should be used for Docker Compose.
Extract the contents of the compressed archive and go to the extracted folder.
tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-mc-<version>-<compose|rpm|deb>
When running Mission Control, the installation script creates a user called jfmc by default, which must have run and execute permissions on the installation directory. It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.
mv jfrog-mc-<version>-linux.tar.gz /opt/
cd /opt
tar -xf jfrog-mc-<version>-linux.tar.gz
mv jfrog-mc-<version>-linux mc
cd mc
This .env file is used by docker-compose and is updated during installations and upgrades. Note that some operating systems do not display dot files by default. If you have made any changes to the file, remember to back it up before an upgrade.
Run the installer script.
The script prompts you for a series of mandatory inputs.
./config.sh
./install.sh
Refer to the prerequisites for Mission Control in Linux Archive before running the install script.
./install.sh --user <user name> --group <group name>
-h | --help  : [optional] display usage
-u | --user  : [optional] (default: jfmc) user which will be used to run the product; it will be created if unavailable
-g | --group : [optional] (default: jfmc) group which will be used to run the product; it will be created if unavailable
Start and manage the Mission Control service.
systemctl start|stop mc.service
service mc start|stop
cd jfrog-mc-<version>-compose
docker-compose -p mc up -d
docker-compose -p mc ps
docker-compose -p mc down
You can install and manage Mission Control as a service in a Linux archive installation. Refer to the Start Mission Control section under Linux Archive Manual Installation for more details.
mc/app/bin/mc.sh start|stop
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
Extract the contents of the compressed archive under JFROG_HOME and move it into the mc directory.
tar -xvf jfrog-mc-<version>-linux.tar.gz
mv jfrog-mc-<version>-linux mc
Install PostgreSQL by following the steps detailed in Installing PostgreSQL.
PostgreSQL is required and must be installed before continuing with the next installation steps. Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
Prepare for the Elasticsearch installation by increasing the map count. For more information, see the Elastic Search documentation .
sudo sysctl -w vm.max_map_count=262144 |
To make this change permanent, remember to add the vm.max_map_count setting to /etc/sysctl.conf, as shown below.
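A minimal way to persist the setting, assuming a standard /etc/sysctl.conf:

# Persist the map count setting across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# Reload kernel parameters from the file
sysctl -p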
Install Elasticsearch. Instructions to install Elasticsearch are available here.
You can install the package available at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.
Install Search Guard. The Search Guard package is located in the extracted contents at <JFROG_HOME>/mc/app/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.
You must install the Search Guard plugin to ensure secure communication with Elasticsearch. |
Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.
<JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
# This will output a hashed password (<hash_password>); make a copy of it
Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.
<username>:
  hash: "<hashed_password>"
  backend_roles:
    - "admin"
  description: "Insight Elastic admin user"
Add the snippet to the sg_internal_users.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.
Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
Enable anonymous auth in the sg_config.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true  # set this to true
Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  backend_roles:
    - sg_anonymous_backendrole
Add the following snippet to the end of the sg_roles.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-<version>/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  cluster_permissions:
    - cluster:monitor/health
Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.
shared:
  elasticsearch:
    external: true
    url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
    username: <USERNAME_SET_IN_SEARCHGUARD>
    password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
You must set the value of external as true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.
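As a quick sanity check once anonymous access to _cluster/health is in place, you can query the endpoint directly. The host, port, and scheme below are assumptions; if Search Guard enforces TLS on the HTTP layer, use https and the root CA (or -k) instead.

# Anonymous health check (adjust host, port, and scheme to your setup)
curl -s http://localhost:9200/_cluster/health?pretty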
If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.
If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials. |
Start and manage the Mission Control service as the user who extracted the tar.
As a process
<JFROG_HOME>/mc/app/bin/mc.sh start |
Manage the process.
<JFROG_HOME>/mc/app/bin/mc.sh start|stop|status|restart |
As a service, Mission Control is packaged as an archive file and an install script that can be used to install it as a service running under a custom user. This is currently supported on Linux systems.
When running Mission Control as a service, the installation script creates a user called jfmc (by default) which must have run and execute permissions on the installation directory. It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users.
To install Mission Control as a service, execute the following command as root:
User and group can be passed as arguments to the installService.sh script, as shown below.
<JFROG_HOME>/mc/app/bin/installService.sh --user <enter user, default value is jfmc> --group <enter group, default value is jfmc>
-u | --user  : [optional] (default: jfmc) user which will be used to run the product; it will be created if unavailable
-g | --group : [optional] (default: jfmc) group which will be used to run the product; it will be created if unavailable
The user and group will be stored in the <JFROG_HOME>/mc/var/etc/system.yaml file at the end of the installation.
To manage the service, use the systemd
or init.d
commands depending on your system.
systemctl <start|stop|status> mc.service |
service mc <start|stop|status> |
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
The RPM installation bundles Mission Control and all its dependencies. It is provided as native RPM packages, where Mission Control and its dependencies must be installed separately. Use this if you are automating installations.
Extract the contents of the compressed archive, and go to the extracted folder:
tar -xvf jfrog-mc-<version>-rpm.tar.gz
cd jfrog-mc-<version>-rpm
Install Mission Control. You must run this as the root user.
rpm -Uvh --replacepkgs ./mc/mc.rpm |
Install PostgreSQL and start the PostgreSQL service.
PostgreSQL is required and must be installed before continuing with the next installation steps. Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
Install Elasticsearch. Instructions to install Elasticsearch are available here.
You can install the package available at jfrog-mc-<version>-rpm/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.
When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations of the $JFROG_HOME/mc/var/etc/system.yaml file.
shared:
  elasticsearch:
    external: true
Install Search Guard. The Search Guard package is located in the extracted contents at jfrog-mc-<version>-rpm/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.
You must install the Search Guard plugin to ensure secure communication with Elasticsearch. |
Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.
/etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
# This will output a hashed password (<hash_password>); make a copy of it
Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.
<username>:
  hash: "<hashed_password>"
  backend_roles:
    - "admin"
  description: "Insight Elastic admin user"
Add the snippet to the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true  # set this to true
Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  backend_roles:
    - sg_anonymous_backendrole
Add the following snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  cluster_permissions:
    - cluster:monitor/health
Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.
shared:
  elasticsearch:
    external: true
    url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
    username: <USERNAME_SET_IN_SEARCHGUARD>
    password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
You must set the value of external as true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.
If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.
If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials. |
Customize the product configuration.
Start and manage the Mission Control service.
systemctl start|stop mc.service |
service mc start|stop|status|restart |
http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
The Debian installation bundles Mission Control and all its dependencies. It is provided as native Debian packages, where Mission Control and its dependencies must be installed separately. Use this if you are automating installations.
Extract the contents of the compressed archive, and go to the extracted folder:
tar -xvf jfrog-mc-<version>-deb.tar.gz
cd jfrog-mc-<version>-deb
Install Mission Control. You must run this as the root user.
dpkg -i ./mc/mc.deb |
Install PostgreSQL and start the PostgreSQL service.
PostgreSQL is required and must be installed before continuing with the next installation steps. Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
Install Elasticsearch. Instructions to install Elasticsearch are available here.
You can install the package available at jfrog-mc-<version>-deb/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz or you can download a compatible version of Elasticsearch from this page.
Install Search Guard. The Search Guard package is located in the extracted contents at jfrog-mc-<version>-deb/third-party/elasticsearch/search-guard-<version>.tar.gz. For installation steps, refer to the Search Guard documentation.
You must install the Search Guard plugin to ensure secure communication with Elasticsearch. |
Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.
/usr/share/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
# This will output a hashed password (<hash_password>); make a copy of it
Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.
<username>:
  hash: "<hashed_password>"
  backend_roles:
    - "admin"
  description: "Insight Elastic admin user"
Add the snippet to the sg_internal_users.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.
Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
Enable anonymous auth in the sg_config.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true  # set this to true
Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  backend_roles:
    - sg_anonymous_backendrole
Add the following snippet to the end of the sg_roles.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.
sg_anonymous:
  cluster_permissions:
    - cluster:monitor/health
Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.
shared:
  elasticsearch:
    external: true
    url: <URL_TO_ELASTICSEARCH_INSTANCE>:<ELASTICSEARCH_PORT>
    username: <USERNAME_SET_IN_SEARCHGUARD>
    password: <CLEAR_TEXT_PASSWORD_FOR_THE_ABOVE_USERNAME>
You must set the value of external as true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.
If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.
If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials. |
Customize the product configuration.
Start and manage the Mission Control service.
systemctl start|stop mc.service |
service mc start|stop|status|restart |
http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory:
Add the ChartCenter Helm repository to your Helm client.
helm repo add jfrog https://charts.jfrog.io |
Update the repository.
helm repo update |
Initiate installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.
helm upgrade --install mission-control \
  --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
  --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> \
  --namespace mission-control jfrog/mission-control
Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.
# Create a secret containing the key:
kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
# Pass the created secret to helm
helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control
In either case, make sure to pass the same join key on all future upgrades.
Customize the product configuration (optional) including database, Java Opts, and filestore.
Unlike other installations, Helm Chart configurations are made to the chart's values.yaml file. Follow these steps to apply the configuration changes.
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI.
Check the status of your deployed Helm releases.
helm status mission-control |
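In addition to helm status, you can confirm the release resources came up by listing them in the namespace used above; these commands are a sketch for the namespace and release names from this guide.

# List the Mission Control pods and services in the release namespace
kubectl get pods -n mission-control
kubectl get svc -n mission-control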
The following describes how to set up a Mission Control HA cluster with more than one node. For more information about HA, see System Architecture .
All nodes within the same Mission Control HA installation must be running the same Mission Control version.
For a Mission Control HA cluster to work correctly, you must have at least three nodes in the cluster. |
Mission Control HA requires an external PostgreSQL database. Make sure to install it before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.
All the Mission Control HA components (Mission Control cluster nodes, database server and Elasticsearch) must be within the same fast LAN.
All the HA nodes must communicate with each other through dedicated TCP ports.
The following installation methods are supported:
All install types are supported, including: Docker Compose, Linux Archive, RPM, and Debian.
The installer script provides you an interactive way to install Mission Control and its dependencies. All install types are supported. This installer should be used for Docker Compose.
Install the first node. The installation is identical to the single node installation.
Do not start the Mission Control service. |
Start the Mission Control service.
systemctl start mc.service |
service mc start |
cd jfrog-mc-<version>-compose
docker-compose -p mc up -d
You can install and manage Mission Control as a service in a Linux archive installation. Refer to the Start Mission Control section under Linux Archive Manual Installation for more details.
mc/app/bin/mc.sh start |
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI.
Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
docker-compose -p mc logs |
For a node to join a cluster, the node must have the same database configuration and the master key.
If you installed Search Guard along with Elasticsearch, you must copy the client and node certificates from Elasticsearch's configuration folder on the primary node to all the additional nodes.
If you want to use the bundled Elasticsearch installation with Mission Control in RPM and Debian installations, copy the client and node certificates from Elasticsearch's configuration folder on the master node to a new directory named sg-certs under the extracted folder on the additional node.
Create the sg-certs folder and copy localhost.key, localhost.pem, and root-ca.pem into it from the Elasticsearch source folder, as shown in the sketch below.
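A minimal sketch of copying the certificates, assuming SSH access between the nodes and that the certificates live under the Elasticsearch certs folder shown elsewhere in this guide; adjust the user, host, and paths to your layout.

# On the additional node: create the target folder under the extracted installer directory
mkdir -p <extracted-folder>/sg-certs
# Copy the client and node certificates from the primary node (source path is an assumption)
scp <user>@<primary-node>:<JFROG_HOME>/mc/var/data/elasticsearch/certs/localhost.key <extracted-folder>/sg-certs/
scp <user>@<primary-node>:<JFROG_HOME>/mc/var/data/elasticsearch/certs/localhost.pem <extracted-folder>/sg-certs/
scp <user>@<primary-node>:<JFROG_HOME>/mc/var/data/elasticsearch/certs/root-ca.pem <extracted-folder>/sg-certs/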
Docker Compose installer uses pre-generated certificates for Search Guard. You do not need to manually copy the client and node certificates. |
Copy the master key from the first node to each additional node; by default it is located at $JFROG_HOME/mc/var/etc/security/master.key.
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
docker-compose -p mc logs |
Install the first node. The installation is identical to the single node installation.
Do not start the Mission Control service. |
Configure the system.yaml file with the database and first node configuration details. For example:
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://<ip:port>/mission_control?sslmode=disable
    username: <username>
    password: <password>
  jfrogUrl: <JFrog URL>
  security:
    joinKey: <Artifactory Join Key>
Start and manage the Mission Control service.
systemctl start|stop mc.service |
service mc start|stop |
http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
For a node to join a cluster, the node must have the same database configuration and the master key. Install all additional nodes using the same steps described above, with the additional steps below:
Configure the system.yaml file for the additional node with the master key, database, and active node configurations. For example:
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://<ip:port>/mission_control?sslmode=disable
    username: <username>
    password: <password>
  jfrogUrl: <JFrog URL>
  security:
    joinKey: <Artifactory Join Key>
  # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
  elasticsearch:
    clusterSetup: "YES"
    unicastFile: "$JFROG_HOME/mc/data/elasticsearch/config/unicast_hosts.txt"
Copy the master.key from the first node to the additional node; place it at $JFROG_HOME/mc/var/etc/security/master.key.
Apply the above configuration in the additional node's $JFROG_HOME/mc/var/etc/system.yaml file. If you installed Search Guard along with Elasticsearch, copy the client and node certificates from Elasticsearch's config folder on the primary node to a new directory, sg-certs, under the extracted folder on the additional node.
Start the additional node.
http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI. Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log |
Currently, it is not possible to connect a JFrog product (e.g., Mission Control) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster. |
The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory:
For high availability of Mission Control, set the replicaCount in the values.yaml file to >1 (the recommended value is 3).
Add the ChartCenter Helm repository to your Helm client.
helm repo add jfrog https://charts.jfrog.io |
Update the repository.
helm repo update |
Initiate installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.
helm upgrade --install mission-control \
  --set missionControl.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
  --set missionControl.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> \
  --namespace mission-control jfrog/mission-control
Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.
# Create a secret containing the key:
kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
# Pass the created secret to helm
helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control
In either case, make sure to pass the same join key on all future upgrades.
Unlike other installations, Helm Chart configurations are made to the chart's values.yaml file. Follow these steps to apply the configuration changes.
Access Mission Control from your browser at: http://<jfrogUrl>/ui/
and go to the Dashboard tab in the Application module in the UI
Check the status of your deployed Helm releases.
helm status mission-control |
After installing and before running Mission Control, you may set the following configurations.
You can configure all your system settings using the system.yaml file. If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml. For the Helm charts, the system.yaml configuration is managed through the chart's values.yaml file.
Mission Control requires a working Artifactory server and a suitable license. The Mission Control connection to Artifactory requires two parameters:
jfrogUrl - the URL of your JFrog Platform instance. Set it in the Shared Configurations section of the $JFROG_HOME/mc/etc/system.yaml file.
joinKey (join key) - retrieve the joinKey from the JPD UI in the Administration module | User Management | Settings | Join Key, and set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/mc/etc/system.yaml file.
file.Mission Control comes bundled with a PostgreSQL Database out-of-the-box, which comes pre-configured with default credentials.
These commands are indicative and assume some familiarity with PostgreSQL. Please do not copy and paste them. For docker-compose, you will need to ssh into the PostgreSQL container before you run them |
To change the default credentials:
#1. Change password for the Mission Control user
# Access PostgreSQL as the jfmc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfmc -W
# Securely change the password for user "jfmc". Enter and then retype the password at the prompt.
\password jfmc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfmc -W

#2. Change password for the scheduler user
# Access PostgreSQL as the jfisc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisc -W
# Securely change the password for user "jfisc". Enter and then retype the password at the prompt.
\password jfisc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisc -W

#3. Change password for the insight server user
# Access PostgreSQL as the jfisv user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisv -W
# Securely change the password for user "jfisv". Enter and then retype the password at the prompt.
\password jfisv
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisv -W
The Search Guard tool is used to manage authentication. To change the password for the default user, provide a hashed password in the Search Guard configuration.
Generate the hashed password by providing the clear-text password as input.
$ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p <password_in_text_format>
The output from the previous step should be updated in the configuration for the default user.
vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
# Scroll in the file to find the entry for the username of the default user
# Update the value for "hash" with the hash obtained from the previous step
<default_username>:
  hash: <hash_output_from_previous_step>
Run the command to initialize Search Guard.
cd $JFROG_HOME/mc/var/etc/security/keys/trusted
# Copy the certificates to this location and restart MC services
Set your PostgreSQL and Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.
If you prefer to use custom certificates when Search Guard is enabled with TLS in Elasticsearch, you can use the search-guard-tlstool to generate Search Guard certificates.
The tool to generate Search Guard certificates is available in $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>.tar.gz. For more information about generating certificates, see Search Guard TLS Tool.
Run the tool to generate the certificates.
tar -xvf $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>.tar.gz
cp $JFROG_HOME/app/third-party/elasticsearch/config/tlsconfig.yml $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>/config
cd $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-<version>/tools
./sgtlstool.sh -c ../config/tlsconfig.yml -ca -crt
# A folder named "out" will be created with all the required certificates
cd out
Copy the generated certificates (localhost.key, localhost.pem, root-ca.pem, sgadmin.key, sgadmin.pem) to the target location based on the installer type.
cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem /etc/elasticsearch/certs/ |
cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem $JFROG_HOME/mc/var/data/elasticsearch/certs |
The Search Guard tool is used to manage authentication. By default, an admin user is required to authenticate with Elasticsearch. As an alternative, you can configure a new user to authenticate with Elasticsearch by assigning a custom role with the permissions the application needs to work.
Add the following snippet to define a new role with custom permissions:
vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles.yml
# Add the following snippet to define a new role with custom permissions
<role_name>:
  cluster_permissions:
    - cluster:monitor/health
    - cluster:monitor/main
    - cluster:monitor/state
    - "indices:admin/template/get"
    - "indices:admin/template/delete"
    - "indices:admin/template/put"
    - "indices:admin/aliases"
    - "indices:admin/create"
  index_permissions:
    - index_patterns:
        - "active_*"
      allowed_actions:
        - "indices:monitor/health"
        - "indices:monitor/stats"
        - "indices:monitor/settings/get"
        - "indices:admin/aliases/get"
        - "indices:admin/get"
        - "indices:admin/aliases"
        - "indices:admin/create"
        - "indices:admin/delete"
        - "indices:admin/rollover"
        - SGS_CRUD
Add the following snippet to add a new user:
vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
# Add the following snippet to add a new user
<user_name>:
  hash: <Hash_password>
  backend_roles:
    - "<role_name>"   # role_name defined in previous step
  description: "<description>"
Run the following command to generate a hash password:
$ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p <clear_text_password> |
Add the following snippet to map the new username to the role defined in the previous step:
vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles_mapping.yml
# Add the following snippet to map the new username to the role defined in the previous step
<role_name>:
  users:
    - "<user_name>"
Initialize Search Guard to upload the above changes made in the configuration.
export JAVA_HOME=<JFROG_HOME>/mc/app/third-party/java
cd $ELASTICSEARCH_HOME/plugins/search-guard-7/tools
bash ../tools/sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/
Set the new credentials in the $JFROG_HOME/mc/etc/system.yaml file:
shared:
  elasticsearch:
    username: <user_name>
    password: <clear_text_password>
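To verify the new user before restarting Mission Control, you can authenticate against Elasticsearch directly. The host, port, and scheme below are assumptions for a default single-node setup; use https with the root CA (or -k) if TLS is enabled on the HTTP layer.

# Check cluster health with the newly created credentials
curl -u <user_name>:<clear_text_password> http://localhost:9200/_cluster/health?pretty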
Do not use a password for PostgreSQL that has special characters; Mission Control may not work if you configure a password that contains special characters.
Install PostgreSQL.
# Run the following commands from the extracted jfrog-mc-<version>-rpm directory.
# Note: Use PostgreSQL rpms with el6 when installing on CentOS 6 and RHEL 6, and use postgresql13-13.2-1 packages
# Note: Use PostgreSQL rpms with el8 when installing on CentOS 8 and RHEL 8
mkdir -p /var/opt/postgres/data
rpm -ivh --replacepkgs ./third-party/postgresql/libicu-50.2-3.el7.x86_64.rpm   # (only AWS instance)
rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-libs-13.2-1PGDG.rhel7.x86_64.rpm
rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-13.2-1PGDG.rhel7.x86_64.rpm
rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-server-13.2-1PGDG.rhel7.x86_64.rpm
chown -R postgres:postgres /var/opt/postgres
export PGDATA="/var/opt/postgres/data"
export PGSETUP_INITDB_OPTIONS="-D /var/opt/postgres/data"
# For CentOS 7 & 8 / RHEL 7 & 8
sed -i "s~^Environment=PGDATA=.*~Environment=PGDATA=/var/opt/postgres/data~" /lib/systemd/system/postgresql-13.service
systemctl daemon-reload
/usr/pgsql-13/bin/postgresql-13-setup initdb
# For CentOS 6 / RHEL 6
sed -i "s~^PGDATA=.*~PGDATA=/var/opt/postgres/data~" /etc/init.d/postgresql-13
service postgresql-13 initdb
# Replace "ident" and "peer" with "trust" in the postgres hba configuration file, i.e., /var/opt/postgres/data/pg_hba.conf
Configure PostgreSQL to allow external IP connections.
By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file.
To grant all IPs access, you may add the line below under the IPv4 local connections section.
host all all 0.0.0.0/0 trust
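If you prefer not to trust all addresses, a more restrictive pg_hba.conf entry can limit access to the Mission Control host's network; the subnet and authentication method below are examples, not values taken from this guide.

# Allow only the jfmc user to reach the mission_control database from a specific subnet, with password authentication
host    mission_control    jfmc    10.150.0.0/16    md5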
Add the following lines to /var/opt/postgres/data/postgresql.conf.
listen_addresses='*'
port=5432
Start PostgreSQL.
systemctl start postgresql-13.service
# or
service postgresql-13 start
Set up the database and user.
## Run the script to seed the tables and schemas needed by Mission Control
cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
source /etc/locale.conf
cd /tmp && su postgres -c "POSTGRES_PATH=/usr/pgsql-13/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"
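To verify that the seeding script created the expected database objects, a quick psql check can help; this is a sketch and assumes the jfmc user and mission_control database created by the script.

# List the schemas in the mission_control database
psql -d mission_control -U jfmc -W -c '\dn'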
It is recommended to ensure your apt-get libraries are up-to-date, using the following commands.
apt-get update
apt-get install -f -y
apt-get update
# Create the file repository configuration to pull postgresql dependencies
cp -f /etc/apt/sources.list /etc/apt/sources.list.origfile
sh -c 'echo "deb http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'
sh -c 'echo "deb-src http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'
cp -f /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.origfile
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
wget --no-check-certificate --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
Install PostgreSQL.
Run the following commands from the extracted jfrog-mc-<version>-deb directory.
mkdir -p /var/opt/postgres/data |
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg16.04+1_amd64.deb |
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg18.04+1_amd64.deb |
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg20.04+1_amd64.deb |
## Before installing Postgres dependencies
mv /etc/apt/sources.list.d/backports.list /etc/apt >/dev/null
apt-get update
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg80+1_amd64.deb
# After installing Postgres dependencies
mv /etc/apt/backports.list /etc/apt/sources.list.d/backports.list >/dev/null
apt-get update
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg90+1_amd64.deb |
apt update -y
apt-get install wget sudo -y
apt-get install -y gnupg gnupg1 gnupg2
dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg100+1_amd64.deb
Stop the PostgreSQL service.
systemctl stop postgresql.service |
Change permissions for the postgres folder.
chown -R postgres:postgres /var/opt/postgres
sed -i "s~^data_directory =.*~data_directory = '/var/opt/postgres/data'~" "/etc/postgresql/13/main/postgresql.conf"
sed -i "s~^hba_file =.*~hba_file = '/var/opt/postgres/data/pg_hba.conf'~" "/etc/postgresql/13/main/postgresql.conf"
sed -i "s~^ident_file =.*~ident_file = '/var/opt/postgres/data/pg_ident.conf'~" "/etc/postgresql/13/main/postgresql.conf"
su postgres -c "/usr/lib/postgresql/13/bin/initdb --pgdata=/var/opt/postgres/data"
Configure PostgreSQL to allow external IP connections.
By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file.
To grant all IPs access, you may add the line below under the IPv4 local connections section:
host all all 0.0.0.0/0 trust
Add the following line to /etc/postgresql/13/main/postgresql.conf.
listen_addresses='*'
Start PostgreSQL.
systemctl start postgresql.service
# or
service postgresql start
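To confirm that PostgreSQL is now reachable from other machines, you can check the listening socket and attempt a remote connection. This is a sketch; the jfmc user and server IP are placeholders for your environment.

# Confirm PostgreSQL is listening on port 5432
ss -ltn | grep 5432
# From another machine, test a remote connection (replace <server-ip>)
psql -h <server-ip> -U jfmc -d mission_control -W -c 'SELECT 1;'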
Set up the database and user.
## Run the script to seed the tables and schemas needed by Mission Control
cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
source /etc/default/locale
cd /tmp && su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/13/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"
Put back the original pgdg.list.
mv /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.tmp && cp -f /etc/apt/sources.list.d/pgdg.list.origfile /etc/apt/sources.list.d/pgdg.list |
Remove backup files.
rm -f /etc/apt/sources.list.d/pgdg.list.tmp
rm -f /etc/apt/sources.list.d/pgdg.list.origfile
Put back the original sources.list.
mv /etc/apt/sources.list /etc/apt/sources.list.tmp && cp -f /etc/apt/sources.list.origfile /etc/apt/sources.list |
Remove the backup files.
rm -f /etc/apt/sources.list.tmp && rm -f /etc/apt/sources.list.origfile |
Postgres binaries are no longer bundled with the Linux Archive installer for Mission Control. Remember to install Postgres manually.
# Create the psql database (the script "mc/app/third-party/postgresql/createPostgresUsers.sh", responsible for seeding Postgres, assumes this database exists)
<pgsql bin path>/psql template1
<postgres prompt>: CREATE DATABASE <user_name>;
<postgres prompt>: \q
## Run the script to seed the tables and schemas needed by Mission Control
POSTGRES_PATH=<pgsql bin path> mc/app/third-party/postgresql/createPostgresUsers.sh
Database and schema names can only be changed for a new installation. Changing the names during an upgrade will result in the loss of existing data. |
Create a single user with permission to all schemas. Use this user's credentials during your Helm installation on this page. |
Log in to the PostgreSQL database as an admin and execute the following commands.
CREATE DATABASE mission_control WITH ENCODING='UTF8' TABLESPACE=pg_default;
# Exit from the current login
\q
# Log in to the mission_control database using the admin user (by default, postgres)
psql -U postgres mission_control
CREATE USER jfmc WITH PASSWORD 'password';
GRANT ALL ON DATABASE mission_control TO jfmc;
CREATE SCHEMA IF NOT EXISTS jfmc_server AUTHORIZATION jfmc;
GRANT ALL ON SCHEMA jfmc_server TO jfmc;
CREATE SCHEMA IF NOT EXISTS insight_server AUTHORIZATION jfmc;
GRANT ALL ON SCHEMA insight_server TO jfmc;
CREATE SCHEMA IF NOT EXISTS insight_scheduler AUTHORIZATION jfmc;
GRANT ALL ON SCHEMA insight_scheduler TO jfmc;
Configure the system.yaml file with the database configuration details according to the information above. For example:
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://localhost:5432/mission_control
    username: jfmc
    password: password
Extract the contents of the compressed archive and go to the extracted folder.
tar -xvf jfrog-mc-<version>-compose.tar.gz |
This .env file is used by docker-compose and is updated during installations and upgrades. Notice that some operating systems do not display dot files by default. If you've made any changes to the file, remember to backup before an upgrade. |
Create the following folder structure under $JFROG_HOME/mc (see the sketch after this listing).
-- [1050 1050] var
-- [1050 1050] data
-- [1000 1000] data/elasticsearch
-- [999  999 ] postgres
-- [1050 1050] etc
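A sketch of creating this structure with the listed ownership, assuming the data and etc directories live under var and the postgres directory sits alongside data/elasticsearch:

cd $JFROG_HOME/mc
# Create the directory tree (layout assumed from the listing above)
mkdir -p var/data/elasticsearch var/data/postgres var/etc
# Apply the ownership shown in the listing
chown -R 1050:1050 var
chown -R 1000:1000 var/data/elasticsearch
chown -R 999:999 var/data/postgres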
Copy the appropriate docker-compose template from the templates folder to the extracted folder and rename it docker-compose.yaml.
The commands below assume you are using the template:
| Requirement | Template |
|---|---|
| Mission Control with Elasticsearch | docker-compose.yaml |
| PostgreSQL | docker-compose-postgres.yaml |
When you use Docker Compose on Mac, you can remove the following line from the selected template:
Update the .env file.
## The installation directory for Mission Control. If not entered, the script will prompt you for this input. Default [$HOME/.jfrog/mc]
ROOT_DATA_DIR=
## Public IP of this machine
HOST_IP=
## Configuration on the first bootstrap of the cluster. Set this only for the first node.
ES_MASTER_NODE_SETTINGS="cluster.initial_master_nodes=<node-ip>"
Set any additional configurations (for example: ports, node id) using Mission Control System YAML.
Verify that the host's ID and IP are added to the system.yaml file.
For Elasticsearch to work correctly, increase the map count, as shown below. For additional information, see the Elasticsearch documentation.
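For reference, the same value used earlier in this guide can be applied on the Docker host; this assumes you have root or sudo access.

# Increase the map count on the Docker host (required by Elasticsearch)
sudo sysctl -w vm.max_map_count=262144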
Start the PostgreSQL container.
docker-compose -p mc-postgres -f docker-compose-postgres.yaml up -d |
Copy the script into the PostgreSQL container.
docker cp ./third-party/postgresql/createPostgresUsers.sh mc_postgres:/ |
Exec into the container and execute the script. This will create the database tables and users.
docker exec -t mc_postgres bash -c "chmod +x /createPostgresUsers.sh && gosu postgres /createPostgresUsers.sh" |
docker exec -t mc_postgres bash -c "export DB_PASSWORD=password1 && chmod +x /createPostgresUsers.sh && su-exec postgres /createPostgresUsers.sh" |
Run the following commands.
mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/sgconfig
mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/config
touch ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
chown -R 1000:1000 ${ROOT_DATA_DIR}/var/data/elasticsearch
chmod 777 ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
Start Mission Control using docker-compose commands.
docker-compose -p mc up -d
docker-compose -p mc ps
docker-compose -p mc logs
docker-compose -p mc down
Access Mission Control from your browser at: http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.
Check the Mission Control log.
docker-compose -p mc logs |
Log rotation of the console log is configured by the installer; this is not done for manual Docker Compose installations. Learn more about how to configure the log rotation.
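For a manual Docker Compose installation, a minimal logrotate sketch for the console log might look like the following; the path and rotation policy are assumptions, not values from this guide. Replace <ROOT_DATA_DIR> with the literal installation directory.

# Write an example logrotate policy for the Mission Control console log
cat <<'EOF' | sudo tee /etc/logrotate.d/mc-console
<ROOT_DATA_DIR>/var/log/console.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF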