


This page provides tips to solve common problems that users have encountered.

JFrog Platform

 How are JFrog product installers packaged?

The following structure is common across all JFrog products. The installer package contains:

  • Helper scripts for the installer.
  • Third-party software.
  • Product-specific bundles (for non-Docker Compose installers).
  • Docker Compose templates (for Docker Compose installers only).
  • The main installer script (for non-Docker Compose installers).
  • The main configure script (for Docker Compose installers only).
  • A readme file providing the package details.
 How can I change the default data directory path for JFrog products?

Depending on the installer type:

  • RPM / Debian Installers: Set the JF_PRODUCT_VAR system environment variable to point to the custom data folder and start the services. Set the variable in your system's environment variable files so that it persists. See Ubuntu System environment variables.

  • Archive Installer: By default, the data directory is set to the unzip-location/var. You can symlink this directory to any folder you want.

  • Docker Compose Installer: Set the JF_ROOT_DATA_DIR variable in the .env file that comes packaged with the installer.
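The Archive installer's symlink option above can be sketched as follows. The paths here are temporary stand-ins created for illustration, not real JFrog locations:

```shell
# Sketch: relocate the data directory and symlink it back to the expected path.
JF_HOME=$(mktemp -d)      # stand-in for the unzip location
DATA_DIR=$(mktemp -d)     # stand-in for the custom data location
mkdir -p "$JF_HOME/var"
mv "$JF_HOME/var" "$DATA_DIR/var"      # move the default data directory
ln -s "$DATA_DIR/var" "$JF_HOME/var"   # symlink back to the expected location
readlink "$JF_HOME/var"                # confirms where the link points
```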

 How do I monitor the health of my Platform's products?

It is recommended to run a health check against the Router of the specific JFrog product node, since the Router is connected to all of the node's microservices. This provides the latest health information for the node.

For example, Artifactory's Health Check REST API.

GET /router/api/v1/system/health
 What log should I use to debug my environment?

Each microservice has its own service log. However, it is recommended to start your debugging process with the console.log, which aggregates the service logs of all products on a node.

 How do I customize my JAVA_OPTS?

JFrog Artifactory, Mission Control and Distribution are bundled with Java 11. To customize the Java runtime, configure shared.extraJavaOpts in the system.yaml file.
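For illustration, a minimal system.yaml fragment; the heap values below are placeholders, not recommendations:

```yaml
shared:
  extraJavaOpts: "-Xms512m -Xmx4g"
```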

 How do I customize my Application ports?

The default ports used by each JFrog Product can be modified in the Product system.yaml file.
For example, to set Artifactory to run on a different port (and not on the default 8081 port), perform the following:

  1. Open the Artifactory $JFROG_HOME/artifactory/var/etc/system.yaml file. 
  2. Add or edit the new port key under the artifactory section.

      artifactory:
          port: <your new port, for example 8089>


Examples for all the different configuration values, including application ports are available in the $JFROG_HOME/<product>/var/etc/system.full-template.yaml file.


 What session management options does Artifactory offer?

From version 6.2, Artifactory offers several alternatives for managing sessions between Artifactory HA members when accessing one of the members.

The way sessions are managed is controlled via a property (in the $JFROG_HOME/artifactory/var/etc/artifactory/ file) which can take one of the following values:

  • Sessions are managed by the database
  • Sessions are managed by Hazelcast
  • Sessions are managed by the JVM

If sessions are configured to be managed by the database, Artifactory also schedules a cron job to clean up old sessions.

The cron expression that triggers the session cleanup can be configured using the artifactory.db.session.cleanup.cron property in the $JFROG_HOME/artifactory/var/etc/artifactory/ file.

During installation, this cron expression is set with a default value that triggers the cleanup at a set minute (determined randomly) past each hour.
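For illustration only, a hypothetical entry (the target file name is truncated above; the Quartz-style expression below fires at minute 17 of every hour):

```
artifactory.db.session.cleanup.cron = 0 17 * * * ?
```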

 How is connection pool for database locking done in Artifactory?

From version 6.0.0, the new database locking mechanism adds its own connection pool (by default, sized the same as the main database connection pool).

However, you may need to adjust your database connection limit to accept more connections. For example, if your database is set to accept up to 100 connections from each node, consider increasing the limit to 200 concurrent connections per node to accommodate full utilization of the locking connection pool. Your database should accept the number of configured connections per node multiplied by the number of nodes in the cluster.
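The sizing rule above can be sketched with illustrative numbers (100 connections per node, 3 nodes); substitute your own pool size and node count:

```shell
# Illustrative sizing only; these are not recommended values.
POOL_PER_NODE=100                 # main database pool size per node
NODES=3                           # nodes in the HA cluster
PER_NODE=$((POOL_PER_NODE * 2))   # main pool plus equally sized locking pool
TOTAL=$((PER_NODE * NODES))       # connections the database must accept
echo "$TOTAL"
```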

Access Service

 An exception is thrown for "java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch"

During startup, Artifactory fails to start and an error is thrown:

java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch.

Artifactory validates and compares the access key fingerprints that reside in Artifactory's database and on the local file system. If the keys do not match, the exception above is thrown along with the mismatching fingerprint IDs.
This can occur during an upgrade or installation of Artifactory.


Follow the steps below to make sure that all instances in your circle of trust have the same private key and root certificate: 

Key rotation will invalidate any issued access tokens

The procedure below will create new key pairs which in turn will invalidate any existing Access Tokens.

    1. Create an empty marker file called bootstrap.reset_root_keys under $ARTIFACTORY_HOME/access/etc/
    2. Restart Artifactory.
    3. Verify that the $ARTIFACTORY_HOME/logs/artifactory.log or $ARTIFACTORY_HOME/access/logs/access.log file shows the following entry:
    *** Skipping verification of the root private fingerprint ***
    *** Private key fingerprint will be overwritten ****************
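Step 1 above can be sketched as follows; a temporary directory stands in for $ARTIFACTORY_HOME here, so replace it with your real installation root:

```shell
# Stand-in for the real installation root; replace with your $ARTIFACTORY_HOME.
ARTIFACTORY_HOME=$(mktemp -d)
mkdir -p "$ARTIFACTORY_HOME/access/etc"
# Empty marker file that tells Access to reset the root keys on the next start.
touch "$ARTIFACTORY_HOME/access/etc/bootstrap.reset_root_keys"
ls "$ARTIFACTORY_HOME/access/etc"
```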

Access Tokens 

 Why is the access token I generated not working?
Symptoms: Authentication with an access token fails with the error "Token validation failed".
Cause: The implementation of access tokens changed in Artifactory 5.4. The change is backwards compatible, so tokens created with earlier versions of Artifactory can be authenticated by the new version; however, the reverse is not true. Tokens created in version 5.4 or later cannot be authenticated by versions earlier than 5.4.
Resolution: Either upgrade your older Artifactory instances, or make sure you only create access tokens with the older instances.

High Availability


 How do I correct the active node name/IP if I've entered it wrong?

To adjust the active node name and IP on a secondary node after an HA installation, it is recommended to re-run the installation wrapper script. Alternatively, manually modify the following files:

RPM/Debian Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
Docker Compose Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
  2. <installation folder>/.env
  3. $JFROG_HOME/xray/app/third-party/rabbitmq/rabbitmq.conf

Mission Control


 Flood stage disk watermark [95%] exceeded in elasticsearch log OR index read-only / allow delete in insight-executor log
The disk storing the Elasticsearch data has exceeded 95% usage.

1. Stop the services

2. Clear space on disk used to store elasticsearch data

3. Start the services

4. Change the Elasticsearch indices setting back to RW (read-write):

curl -u<username>:<password> -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

The default username and password for the internal Elasticsearch are both admin.

Debug Log configuration

 Enable logback debug logging for the JFMC server
From version 4.x, debug logging is enabled in logback.xml in a different way.

To configure the Mission Control log for debug logging:
In the $JFROG_HOME/var/opt/jfrog/mc/etc/mc/logback.xml file, modify the logger name line as follows:

<logger name="" level="DEBUG"/>

Changes made to the logging configuration are reloaded within several seconds without requiring a restart.

Insight Trends Not Displaying

 Insight trends are not displayed and a 500 error is shown
Incorrect Elasticsearch indices are used.
  1. Log in to the Mission Control container.

  2. Disable AUTO_CREATE.

    curl -H 'Content-Type:application/json' -XPUT localhost:8082/elasticsearch/_cluster/settings -d'{"persistent":{"action.auto_create_index":"false"}}' -uadmin:admin
  3. Delete index in Elasticsearch by issuing: 

    curl -XDELETE http://localhost:8082/elasticsearch/active_request_data -uadmin:admin 
  4. Delete index in Elasticsearch by issuing:

    curl -XDELETE http://localhost:8082/elasticsearch/active_metrics_data -uadmin:admin
  5. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/request_logs_template_7 -uadmin:admin
  6. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/metrics_insight_template_7 -uadmin:admin
  7. Stop Mission Control.

  8. Start Mission Control.



 Cannot connect to the Docker daemon during install

When running Pipelines install, you receive the following message:

# Setting platform config


Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

The Docker service is not running. You can verify this by running docker info.
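The check above can be sketched as a small shell snippet (it also reports "not running" when the docker binary is missing entirely):

```shell
# Reports whether the Docker daemon is reachable on this machine.
docker_status() {
  if docker info >/dev/null 2>&1; then
    echo "running"
  else
    echo "not running"
  fi
}
docker_status
```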


Restart the Docker service. On systemd-based systems:

$ systemctl restart docker

or, to stop and start it explicitly:

$ systemctl stop docker
$ systemctl start docker

On SysV init systems:

$ service docker restart

Node initialization

 Windows Containers must be enabled.
check_win_containers_enabled : Windows Containers must be enabled. Please install the feature, restart this machine
and run this script again.

The node does not have containers enabled.


Enable containers for Windows. Run the following in PowerShell with elevated privileges and then restart the machine.

> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
> Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
 The command "node" is not recognized

When initializing a new node, an error in the output states that the node command is not found. Initialization then fails.


NodeJS is installed, but misconfigured. The error most likely occurred because it was not found in the path.


Uninstall NodeJS and allow the build node initialization to reinstall.

If NodeJS was originally installed as part of node initialization, the following commands should work.

On Ubuntu, CentOS, or RHEL
$ sudo rm -rf /usr/local/bin/node
$ sudo rm -rf /usr/local/lib/node_modules/npm/
On Windows
> choco uninstall nodejs
Copyright © 2021 JFrog Ltd.