Overview

This page provides tips to solve common problems that users have encountered.

JFrog Platform

 How are JFrog product installers packaged?

The following structure is common across all JFrog products.

Folder/File Name | Description
bin | Contains helper scripts for the installer.
third-party | Contains third-party software.
<product> | Product-specific bundles (for non-Docker Compose installers).
templates | Docker Compose templates (only for Docker Compose installers).
install.sh | Main installer script (for non-Docker Compose installers).
config.sh | Main configuration script (only for Docker Compose installers).
readme.md | Readme file providing the package details.
 How can I change the default data directory path for JFrog products?

Depending on the installer type:

  • RPM / Debian Installers: Set the JF_PRODUCT_VAR environment variable to the custom data folder and start the services. Define this variable in your system's environment variable files so that it points to the custom location. See Ubuntu System environment variables.

  • Archive Installer: By default, the data directory is set to the unzip-location/var. You can symlink this directory to any folder you want.

  • Docker Compose Installer: Set the JF_ROOT_DATA_DIR variable in the .env file that comes packaged with the installer (see the example below).
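
A minimal sketch of the archive and Docker Compose options (all paths here are illustrative; substitute your own data location):

# Archive installer: relocate the data, then symlink the default var directory to it
$ mv /opt/jfrog/artifactory/var /data/jfrog-artifactory-var
$ ln -s /data/jfrog-artifactory-var /opt/jfrog/artifactory/var

# Docker Compose installer: set the variable in the bundled .env file
JF_ROOT_DATA_DIR=/data/jfrog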

 How do I monitor the health of my Platform's products?

It is recommended to run a health check against the Router node of the specific JFrog product, which is connected to all of the node's microservices. This provides the latest health information for the node.

For example, Artifactory's Health Check REST API.

GET /router/api/v1/system/health
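
For example, a quick check from the node itself (localhost and the default router port 8082 are assumptions; adjust for your setup):

# Query the router's health endpoint; the response reports the node's microservices
$ curl -s http://localhost:8082/router/api/v1/system/health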
 What log should I use to debug my environment?

Each microservice has its own service log. However, it is recommended to start your debugging process by using the console.log, which is a collection of all service logs of all products in a node. Learn More >
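
For example, to follow the aggregated log on an Artifactory node (the path follows the standard $JFROG_HOME layout; other products use the same pattern under their own folder):

$ tail -f $JFROG_HOME/artifactory/var/log/console.log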

 How do I customize my JAVA_OPTS?

JFrog Artifactory, Mission Control and Distribution are bundled with Java 11. To customize the Java run time, configure the shared.extraJavaOpts in the system.yaml.
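
For example, a minimal system.yaml sketch that sets the JVM heap size (the values are illustrative, not tuning recommendations); a service restart is required for new JVM options to take effect:

shared:
  extraJavaOpts: "-Xms512m -Xmx4g"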

 How do I customize my Application ports?

The default ports used by each JFrog Product can be modified in the Product system.yaml file.
For example, to set Artifactory to run on a different port (and not on the default 8081 port), perform the following:

  1. Open the Artifactory $JFROG_HOME/artifactory/var/etc/system.yaml file. 
  2. Add or edit the new port key under the artifactory section.

    artifactory:
      port: <your new port, for ex: 8089>
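
After restarting Artifactory, you can verify that it answers on the new port with the ping endpoint (8089 follows the snippet above); a healthy instance responds with OK:

$ curl http://localhost:8089/artifactory/api/system/ping
OK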

system.full-template.yaml

Examples for all the different configuration values, including application ports are available in the $JFROG_HOME/<product>/var/etc/system.full-template.yaml file.


Access Service

 An exception is thrown for "java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch"
Symptoms

During startup, Artifactory fails to start and an error is thrown:

java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch.
Cause

Artifactory validates and compares the fingerprints of the access keys that reside in Artifactory's database and on the local file system. If the keys do not match, the exception above is thrown along with the mismatching fingerprint IDs.
This can occur during an upgrade or installation of Artifactory.

Resolution

Follow the steps below to make sure that all instances in your circle of trust have the same private key and root certificate: 

Key rotation will invalidate any issued access tokens

The procedure below will create new key pairs which in turn will invalidate any existing Access Tokens.

    1. Create an empty marker file called bootstrap.reset_root_keys under $ARTIFACTORY_HOME/access/etc/
    2. Restart Artifactory.
    3. Verify that the $ARTIFACTORY_HOME/logs/artifactory.log or $ARTIFACTORY_HOME/access/logs/access.log file shows the following entry:
    ****************************************************************
    *** Skipping verification of of the root private fingerprint ***
    ****************************************************************
    *** Private key fingerprint will be overwritten ****************
    ****************************************************************
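
A minimal sketch of steps 1-3 on a Linux service installation (the restart command depends on your installer type, and the message may land in either log file mentioned in step 3):

# Step 1: create the empty marker file
$ touch $ARTIFACTORY_HOME/access/etc/bootstrap.reset_root_keys
# Step 2: restart Artifactory (service installation shown)
$ systemctl restart artifactory
# Step 3: confirm the fingerprint reset was picked up
$ grep -i "private key fingerprint" $ARTIFACTORY_HOME/logs/artifactory.log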

Access Tokens 

 Why is the access token I generated not working?
Symptoms
Authentication with an access token fails with an error that says "Token validation failed".
Cause
The implementation of access tokens changed in Artifactory 5.4. The change is backward compatible, so tokens created with earlier versions of Artifactory can be authenticated in the new version; however, the reverse is not true: tokens created in version 5.4 or later cannot be authenticated by versions earlier than 5.4.
Resolution
Either upgrade your older Artifactory instances, or make sure you only create access tokens with the older instances.

High Availability

Xray

 How do I correct the active node name/ip if I've entered it wrong?

To adjust the active node name and IP on the secondary node after an HA installation, it is recommended to re-run the installation wrapper script. Alternatively, manually modify the following files; a sketch of the relevant system.yaml section follows the list:

RPM/Debian Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
Docker Compose Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml 
  2. <installation folder>/.env
  3. $JFROG_HOME/xray/app/third-party/rabbitmq/rabbitmq.conf
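
In system.yaml, the node details typically live under the shared.node section. A hedged sketch (key names assumed from the standard JFrog system.yaml schema; verify against your system.full-template.yaml before editing):

shared:
  node:
    ip: <correct node IP>
    name: <correct node name>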

Mission Control

Installation

 Flood stage disk watermark [95%] exceeded in elasticsearch log OR index read-only / allow delete in insight-executor log
Cause
The disk storing the Elasticsearch data has exceeded 95% usage.
Resolution

1. Stop the services.

2. Clear space on the disk used to store the Elasticsearch data.

3. Start the services.

4. Change the Elasticsearch indices setting back to read-write (RW):

curl -u<username>:<password> -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

The default username and password for the internal Elasticsearch are both admin.
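
To confirm the read-only block was lifted, you can read the setting back; setting it to null removes it entirely, so no output from the grep means the block is gone:

curl -u<username>:<password> "http://localhost:9200/_all/_settings?pretty" | grep read_only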

Debug Log Configuration

 Enable logback debug logging for the JFMC server
Cause
From version 4.x, the logback.xml file uses a different mechanism to enable debug logging.
Resolution

To configure the Mission Control log for debug logging:
In the $JFROG_HOME/var/opt/jfrog/mc/etc/mc/logback.xml file, modify the logger name line as follows:

<logger name="org.jfrog.mc" level="DEBUG"/>

Changes made to the logging configuration are reloaded within several seconds without requiring a restart.



Insight Trends Not Displaying

 Insight trends are not displayed and a 500 error is shown
Cause
Incorrect Elasticsearch indices used.
Resolution
  1. Log in to the Mission Control container.

  2. Disable AUTO_CREATE.

    curl -H 'Content-Type:application/json' -XPUT localhost:8082/elasticsearch/_cluster/settings -d'{"persistent":{"action.auto_create_index":"false"}}' -uadmin:admin
  3. Delete index in Elasticsearch by issuing: 

    curl -XDELETE http://localhost:8082/elasticsearch/active_request_data -uadmin:admin 
  4. Delete index in Elasticsearch by issuing:

    curl -XDELETE http://localhost:8082/elasticsearch/active_metrics_data -uadmin:admin
  5. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/request_logs_template_7 -uadmin:admin
  6. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/metrics_insight_template_7 -uadmin:admin
  7. Stop Mission Control.

  8. Start Mission Control.
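
After the restart, you can confirm the deleted indices are gone and are being recreated with the correct settings (same admin credentials as in the steps above):

curl -uadmin:admin http://localhost:8082/elasticsearch/_cat/indices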

Pipelines

Installation

 Cannot connect to the Docker daemon during install
Symptoms

When running Pipelines install, you receive the following message:

# Setting platform config

##################################################

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cause

The Docker service is not running. You can verify this by running docker info.

Resolution

Restart the Docker service:

$ systemctl stop docker
$ systemctl start docker
OR
$ systemctl restart docker
OR
$ service docker restart
OR
$ service docker stop
$ service docker start
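
Once Docker is up again, a quick sanity check, plus optionally enabling the service so it starts on boot:

$ docker info                    # should now print server details instead of the connection error
$ sudo systemctl enable docker   # optional: start Docker automatically after reboots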

Node initialization

 Windows Containers must be enabled.
Symptoms
check_win_containers_enabled : Windows Containers must be enabled. Please install the feature, restart this machine
and run this script again.
Cause

The node does not have containers enabled.

Resolution

Enable containers for Windows. Run the following in PowerShell with elevated privileges and then restart the machine.

> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
> Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
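
After the restart, you can confirm both features are enabled before re-running the script (standard PowerShell cmdlets; run from an elevated session; both should report State : Enabled):

> Get-WindowsOptionalFeature -Online -FeatureName Containers
> Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V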
 The command "node" is not recognized
Symptoms

When initializing a new node, an error in the output states that node is not found. Initialization then fails.

Cause

NodeJS is installed, but misconfigured. The error most likely occurred because the node executable was not found in the PATH.

Resolution

Uninstall NodeJS and allow the build node initialization to reinstall.

If NodeJS was originally installed as part of node initialization, the following commands should work.

On Ubuntu, CentOS, or RHEL
$ sudo rm -rf /usr/local/bin/node
$ sudo rm -rf /usr/local/lib/node_modules/npm/
On Windows
> choco uninstall nodejs
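
After the build node initialization reinstalls NodeJS, a quick check that the executable resolves again:

$ which node        # should print the path to the node executable
$ node --version    # should print the installed version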

Pipelines Error Messages

This section lists commonly encountered Pipelines error messages, their possible causes, and suggestions for resolving them. If you have trouble fixing any of these errors, submit a request to Support for further investigation.

Error: All resource versions are not fetched

Error
reqKick|executeStep|step|prepData|jFrogPipelinesSessionId:28be9c21-4ad6-4e3d-9411-7b9988535fd1|_getResourceVersions, 
All resource versions are not fetched. Requested resource versions: 16; received resource versions: []
Cause

After the run was triggered, but before it started running, one or more resources in the pipeline were reset. Hence, while fetching the resources associated with the run, the resource version was returned as an empty array.

Resolution

Re-run the pipeline. 

When a resource is reset, it wipes out the resource version history and resets it to a single version, which is now considered the latest. This version is used for the new run.

Error: fatal: reference is not a tree

Error
fatal: reference is not a tree: 679e2fc3c2590f7dbaf64534a325ac60b4dc8689
Cause

This could be a result of using git push --force or git rebase, which deletes the commit and prevents the pipeline from running.

Resolution

Either:

  • Reset the resource and then trigger the pipeline again. Note that if there are several GitRepo resources in the pipeline, this needs to be done for all of them.

or

  • Push another commit so that all the resources are updated automatically.

Error: Failed to create pvc for node

Error
Failed to create pvc for node
Cause

Either the Kubernetes configuration does not have access to create a Persistent Volume Claim (PVC) resource or Pipelines cannot connect to the provided Kubernetes host server.

Resolution

Review the Kubernetes configurations and verify that the Kube Config provided while creating the Kubernetes Integration has adequate permissions.
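
One quick way to test the permissions is to query Kubernetes directly with the same kubeconfig that was provided to the integration (the kubeconfig path and namespace are placeholders):

$ kubectl --kubeconfig <path to integration kubeconfig> -n <namespace> auth can-i create persistentvolumeclaims

A yes response confirms the PVC permission; a connection error points to the host-reachability issue instead.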

Error: SCM provider credentials do not have permissions

Error
The credentials provided for the integration "<integration_name>" do not have enough permissions. Ensure that the credentials exist and have the correct permissions for the provider: github.
Cause

The credentials (username and/or token) provided while creating the integration are incorrect.

Resolution

Ensure that the credentials provided for the SCM provider are correct.

Error: SCM provider URL is invalid

Error
The URL provided for the integration "<integration_name>" is invalid. Provide a valid URL for the SCM provider and try again.
Cause

The SCM URL provided while creating the integration is incorrect.

Resolution

Ensure that the URL provided for the SCM provider is correct.

Error: SCM provider repository path is invalid

Error
The repository path "<repo_path>" is either invalid or does not exist. Ensure that the repository path exists and has the correct permissions for the integration: <integration_name>.
Cause

The repository path provided for the SCM provider while creating the integration is incorrect or does not exist.

Resolution

Ensure that the repository path provided for the SCM provider is correct.
