Overview

This page provides tips to solve common problems that users have encountered.

JFrog Platform

 How are JFrog product installers packaged?

The following structure is common across all JFrog products.

  • bin: Contains helper scripts for the installer.
  • third-party: Contains third-party software.
  • <product>: Product-specific bundles (for non-Docker Compose installers).
  • templates: Docker Compose templates (only for Docker Compose installers).
  • install.sh: Main installer script (for non-Docker Compose installers).
  • config.sh: Main configuration script (only for Docker Compose installers).
  • readme.md: Readme file providing the package details.
 How can I change the default data directory path for JFrog products?

Depending on the installer type (an illustrative sketch follows this list):

  • RPM / Debian Installers: Set the JF_PRODUCT_VAR system environment variable to point to the custom data folder (for example, in your system's environment variables file) and start the services. See Ubuntu System environment variables.

  • Archive Installer: By default, the data directory is set to the unzip-location/var. You can symlink this directory to any folder you want.

  • Docker Compose Installer: Set the JF_ROOT_DATA_DIR variable in the .env file that comes packaged with the installer.
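The following is a minimal sketch of each approach; all paths and the /custom/jfrog/data directory are illustrative, and JF_PRODUCT_VAR stands in for your product's variable name.

# RPM / Debian: point the product data variable at a custom folder (illustrative)
echo 'JF_PRODUCT_VAR=/custom/jfrog/data' | sudo tee -a /etc/environment

# Archive: symlink the default var directory to a custom location
mv unzip-location/var /custom/jfrog/data
ln -s /custom/jfrog/data unzip-location/var

# Docker Compose: set the root data directory in the bundled .env file
sed -i 's|^JF_ROOT_DATA_DIR=.*|JF_ROOT_DATA_DIR=/custom/jfrog/data|' .env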

 How do I monitor the health of my Platform's products?

It is recommended to run a health check against the Router of the specific JFrog product node, since the Router is connected to all of the node's microservices. This provides the latest health information for the node.

For example, Artifactory's Health Check REST API.

GET /router/api/v1/system/health
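A minimal example call, assuming the default platform port 8082 and an admin-scoped access token (both are placeholders to adjust for your environment):

curl -H "Authorization: Bearer <ACCESS_TOKEN>" http://localhost:8082/router/api/v1/system/health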
 What log should I use to debug my environment?

Each microservice has its own service log. However, it is recommended to start your debugging process with the console.log, which aggregates the service logs of all products on a node. Learn More >
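For example, on an Artifactory node you could follow the aggregated log like this (the path assumes a default product home):

tail -f $JFROG_HOME/artifactory/var/log/console.log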

 How do I customize my JAVA_OPTS?

JFrog Artifactory, Mission Control, and Distribution are bundled with Java 11. To customize the Java runtime, configure shared.extraJavaOpts in the system.yaml file.
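A minimal sketch for an Artifactory node, assuming a default home directory and illustrative heap sizes; JVM options only take effect after a restart:

# Stanza to add under shared in $JFROG_HOME/artifactory/var/etc/system.yaml (illustrative values):
#   shared:
#     extraJavaOpts: "-Xms2g -Xmx8g"
# Then restart the service (RPM/Debian service installations):
systemctl restart artifactory.service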


Artifactory

 What session management options does Artifactory offer?

From version 6.2, Artifactory offers several alternatives for managing sessions between Artifactory HA members when one of the members is accessed.

Session management is controlled via the artifactory.map.provider.type property (in the $JFROG_HOME/artifactory/var/etc/artifactory/artifactory.system.properties file), which can take the following values:

  • db (default): Sessions are managed by the database.
  • distributed: Sessions are managed by Hazelcast.
  • jvm: Sessions are managed by the JVM.

If sessions are configured to be managed by the database, Artifactory also schedules a cron job to clean up old sessions.

The cron expression that triggers the session cleanup can be configured using the artifactory.db.session.cleanup.cron property in the same $JFROG_HOME/artifactory/var/etc/artifactory/artifactory.system.properties file.

During installation, this cron expression is set with a default value that triggers the cleanup at a set minute (determined randomly) past each hour.
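For example, the two properties might look like this (values are illustrative; Artifactory cron expressions use the Quartz format, so the example fires at minute 17 of every hour):

grep -E 'map\.provider\.type|db\.session\.cleanup\.cron' \
  $JFROG_HOME/artifactory/var/etc/artifactory/artifactory.system.properties
# Example output:
#   artifactory.map.provider.type=db
#   artifactory.db.session.cleanup.cron=0 17 * * * ?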

 How is the connection pool for database locking handled in Artifactory?

From version 6.0.0, the database locking mechanism adds its own connection pool (its size defaults to the value of pool.max.active).

However, you may need to adjust your database connection limit to accept more connections. For example, if your database is set to accept up to 100 connections from each node, consider increasing the limit to 200 concurrent connections per node to accommodate full utilization of the locking connection pool. In general, your database should accept the number of configured connections per node multiplied by the number of nodes in the cluster.
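As a rough worked example (all numbers are assumptions, not recommendations):

# 3-node cluster, 200 configured connections per node (regular pool + locking pool)
NODES=3
CONNECTIONS_PER_NODE=200
echo "Database must accept at least $((NODES * CONNECTIONS_PER_NODE)) connections"   # 600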


Access Service

 An exception is thrown for "java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch"
Symptoms

During startup, Artifactory fails and the following error is thrown:

java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch.
Cause

Artifactory validates and compares the access keys' fingerprints that reside in Artifactory's database and on the local file system. If the keys do not match, the exception above is thrown along with the mismatching fingerprint IDs.
This can occur during an attempted upgrade or installation of Artifactory.

Resolution

Follow the steps below to make sure that all instances in your circle of trust have the same private key and root certificate (a condensed command-line sketch follows the steps):

Key rotation will invalidate any issued access tokens

The procedure below will create new key pairs which in turn will invalidate any existing Access Tokens.

    1. Create an empty marker file called bootstrap.reset_root_keys under $ARTIFACTORY_HOME/access/etc/
    2. Restart Artifactory.
    3. Verify that the $ARTIFACTORY_HOME/logs/artifactory.log or $ARTIFACTORY_HOME/access/logs/access.log file shows the following entry:
    ****************************************************************
    *** Skipping verification of of the root private fingerprint ***
    ****************************************************************
    *** Private key fingerprint will be overwritten ****************
    ****************************************************************
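A condensed sketch of the same procedure from the command line (paths assume a default Artifactory home, and the restart command assumes a service installation):

touch $ARTIFACTORY_HOME/access/etc/bootstrap.reset_root_keys
systemctl restart artifactory.service
grep -i "private key fingerprint will be overwritten" $ARTIFACTORY_HOME/access/logs/access.log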

Access Tokens 

 Why is the access token I generated not working?
Symptoms
Authentication with an access token fails with the error "Token validation failed".
Cause
The implementation of access tokens changed in Artifactory 5.4. The change is backwards compatible, so tokens created with earlier versions of Artifactory can be authenticated by the new version; however, the reverse is not true: tokens created in version 5.4 or later cannot be authenticated by versions earlier than 5.4.
Resolution
Either upgrade your older Artifactory instances, or make sure you only create access tokens with the older instances.

High Availability

Xray

 How do I correct the active node name/IP if I've entered it incorrectly?

To adjust the active node name and IP on the secondary node after an HA installation, it is recommended to re-run the installation wrapper script. Alternatively, manually modify the following files (an illustrative check follows the list):

RPM/Debian Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
Docker Compose Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
  2. <installation folder>/.env
  3. $JFROG_HOME/xray/app/third-party/rabbitmq/rabbitmq.conf
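After editing, a quick sanity check might look like this (the grep patterns are illustrative; the exact key names depend on your version, so confirm against your files):

grep -in 'node' $JFROG_HOME/xray/var/etc/system.yaml
grep -in 'node' .env    # run from the installation folder (Docker Compose installations only)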




Mission Control

Installation

 Flood stage disk watermark [95%] exceeded in elasticsearch log OR index read-only / allow delete in insight-executor log
Cause
The disk storing the Elasticsearch data has exceeded 95% usage.
Resolution

1. Stop the services.

2. Clear space on the disk used to store the Elasticsearch data.

3. Start the services.

4. Change the Elasticsearch indices setting back to read-write (RW):

curl -u<username>:<password> -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

The default username and password for the internal Elasticsearch instance are both admin.
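As an optional follow-up (illustrative; adjust the credentials), confirm that Elasticsearch reports a healthy cluster again:

curl -u<username>:<password> http://localhost:9200/_cluster/health?pretty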

Debug Log configuration

 Enable logback debug logging for the JFMC server
Cause
From version 4.x, logback.xml uses a different mechanism to enable debug logging.
Resolution

To configure the Mission Control log for debug logging:
In the $JFROG_HOME/var/opt/jfrog/mc/etc/mc/logback.xml file, modify the logger name line as follows:

<logger name="org.jfrog.mc" level="DEBUG"/>

Changes made to the logging configuration are reloaded within several seconds without requiring a restart.
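One way to confirm the change took effect (the path is an assumption based on a default Mission Control home) is to watch the console log for DEBUG entries:

tail -f $JFROG_HOME/mc/var/log/console.log | grep DEBUG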


Pipelines

Installation

 Cannot connect to the Docker daemon during install
Symptoms

When running Pipelines install, you receive the following message:

# Setting platform config

##################################################

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cause

The Docker service is not running. This can be verified by running docker info.

Resolution

Restart the Docker service:

$ systemctl stop docker
$ systemctl start docker
OR
$ systemctl restart docker
OR
$ service docker restart
OR
$ service docker stop
$ service docker start
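If the daemon stops again after a reboot, enabling it at boot may help (requires systemd; illustrative):

$ sudo systemctl enable --now docker
$ docker info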