The following structure is common across all JFrog products.
- Helper scripts for the installer.
- Third-party software.
- Product-specific bundles (for non-Docker Compose installers).
- Docker Compose templates (Docker Compose installers only).
- Main installer script (for non-Docker Compose installers).
- Main configure script (Docker Compose installers only).
- A readme file providing the package details.
Depending on the installer type:
- RPM/Debian installers: Set the JF_PRODUCT_VAR variable to the customized data folder and start the services. Set the system environment variable to point to a custom location in your system's environment variable files. See Ubuntu System environment variables.
- Archive installer: By default, the data directory is set to unzip-location/var. You can symlink this directory to any folder you want.
- Docker Compose installer: Set the JF_ROOT_DATA_DIR variable in the .env file that comes packaged with the installer.
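As an illustration, the archive and RPM/Debian options might look like the following sketch. All paths here are placeholders, not the actual package layout; substitute your real locations.

```shell
# Placeholder custom data location for illustration only.
CUSTOM_DATA=/tmp/jfrog-custom-data
mkdir -p "$CUSTOM_DATA"

# Archive installer: symlink the unzipped var directory to the custom folder.
rm -rf ./var
ln -s "$CUSTOM_DATA" ./var

# RPM/Debian installers: point JF_PRODUCT_VAR at the custom folder instead,
# then start the services.
export JF_PRODUCT_VAR="$CUSTOM_DATA"
```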
It is recommended to run a health check on the specific JFrog product's Router node, which is connected to all of the node's microservices. This provides the latest health information for the node.
For example, Artifactory's Health Check REST API.
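A minimal sketch of such a check, assuming the default router port 8082 and curl available on the node:

```shell
#!/bin/sh
# Query the router's health endpoint (port 8082 is the assumed default).
check_router() {
  # $1: base URL of the node, e.g. http://localhost:8082
  curl -sf "$1/router/api/v1/system/health"
}

if check_router "http://localhost:8082"; then
  echo "router healthy"
else
  echo "router unreachable or unhealthy"
fi
```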
Each microservice has its own service log. However, it is recommended to start your debugging with the console.log, which aggregates the service logs of all products on a node.
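For example, on a typical installation the aggregated log can be inspected with a command like the following; the JFROG_HOME-based path is an assumption, so adjust it for your installation.

```shell
# Assumed default location of the aggregated console log.
LOG="${JFROG_HOME:-/opt/jfrog}/artifactory/var/log/console.log"

if [ -f "$LOG" ]; then
  tail -n 100 "$LOG"          # show the most recent entries
else
  echo "console.log not found at $LOG"
fi
```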
JFrog Artifactory, Mission Control, and Distribution are bundled with Java 11. To customize the Java runtime, configure shared.extraJavaOpts in the product's system.yaml file.
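For example, a system.yaml fragment setting extra JVM options might look like this; the memory values are illustrative only, not recommendations:

```yaml
shared:
  extraJavaOpts: "-Xms512m -Xmx4g"
```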
The default ports used by each JFrog product can be modified in the product's system.yaml file.
For example, to set Artifactory to run on a port other than the default 8081, perform the following:
- Open the Artifactory system.yaml file.
- Add or edit the port key under the artifactory section.
Examples of all the configuration values, including application ports, are available in the system.yaml template that ships with the installer.
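As a hedged sketch, the relevant section of the product configuration might look like this, assuming the standard system.yaml layout; 8082 is just an example value:

```yaml
artifactory:
  port: 8082
```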
During startup, Artifactory fails to start and an error is thrown:
java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch.
Artifactory validates and compares the access key fingerprints stored in Artifactory's database and on the local file system. If the keys do not match, the exception above is thrown along with the mismatching fingerprint IDs.
Follow the steps below to make sure that all instances in your circle of trust have the same private key and root certificate:
Key rotation will invalidate any issued access tokens
The procedure below creates new key pairs, which in turn invalidates any existing access tokens.
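To compare keys across nodes, you can print a digest of each node's private key and check that the digests match. The path below assumes the default Artifactory layout under JFROG_HOME; adjust it for your installation.

```shell
# Assumed default key location; adjust JFROG_HOME for your installation.
KEYS_DIR="${JFROG_HOME:-/opt/jfrog}/artifactory/var/etc/access/keys"

if [ -f "$KEYS_DIR/private.key" ]; then
  # Derive the public key and hash it; matching nodes print the same digest.
  openssl pkey -in "$KEYS_DIR/private.key" -pubout -outform DER | sha256sum
else
  echo "no private key found at $KEYS_DIR"
fi
```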
|Symptoms|Authentication with an access token fails with the error "Token validation failed".|
|Cause|The implementation of access tokens changed in Artifactory 5.4. The change is backward compatible, so tokens created with earlier versions of Artifactory can be authenticated by the new version; however, the reverse is not true. Tokens created in version 5.4 or later cannot be authenticated by versions earlier than 5.4.|
|Resolution|Either upgrade your older Artifactory instances, or make sure you create access tokens only with the older instances.|
To adjust the active node name and IP on the secondary node after an HA installation, it is recommended to re-run the installation wrapper script. Alternatively, manually modify the following files:
|Docker Compose Installation|
|The disk storing the Elasticsearch data has exceeded 95% usage|
1. Stop the services.
2. Clear space on the disk used to store the Elasticsearch data.
3. Start the services.
4. Change the Elasticsearch indices setting back to RW (read-write).
The default username and password for the internal Elasticsearch instance are both admin.
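Step 4 can be performed through the Elasticsearch settings API, which clears the read-only block that Elasticsearch applies when the flood-stage disk watermark is exceeded. The address and the admin/admin credentials below are the assumed defaults:

```shell
# Assumed internal Elasticsearch address and default credentials.
ES_URL="http://localhost:9200"

# Clear the read-only-allow-delete block on all indices.
curl -su admin:admin -XPUT "$ES_URL/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}' \
  || echo "could not reach Elasticsearch at $ES_URL"
```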
Debug Log Configuration
|From version 4.x, the logback.xml provides a different way to enable debug logging.|
To configure the Mission Control log for debug logging:
Changes made to the logging configuration are reloaded within several seconds without requiring a restart.
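As an illustration, a generic logback.xml logger entry raised to debug level might look like this; the logger name below is a placeholder, so match it to the logger names present in your Mission Control logback.xml:

```xml
<!-- Placeholder logger name; use the names found in your logback.xml -->
<logger name="com.jfrog.missioncontrol" level="debug"/>
```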
Insight Trends Not Displaying
|Incorrect Elasticsearch indices used.|
When running Pipelines install, you receive the following message:
The Docker service is not running. This can be verified by running
Restart the Docker service:
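On a systemd-based Linux host (an assumption; adapt for other init systems), the check and suggested restart might look like:

```shell
# Check whether the Docker service is active; suggest a restart if not.
if systemctl is-active --quiet docker 2>/dev/null; then
  echo "docker is running"
else
  echo "docker is not running; try: sudo systemctl restart docker"
fi
```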
The node does not have containers enabled.
Enable containers for Windows by running the following in PowerShell with elevated privileges, and then restart the machine.
When initializing a new node, an error in the output states that NodeJS is installed but misconfigured. The error most likely occurred because NodeJS was not found in the path.
Uninstall NodeJS and allow the build node initialization to reinstall it. If NodeJS was originally installed as part of node initialization, the following commands should work.
Pipelines Error Messages
This section lists commonly encountered Pipelines error messages, their possible causes, and suggestions for resolving them. If you have trouble fixing any of these errors, submit a request to Support for further investigation.
Error: All resource versions are not fetched
After the run was triggered but before it started running, one or more resources in the pipeline were reset. As a result, when the resources associated with the run were fetched, the resource version was returned as an empty array.
Re-run the pipeline.
When a resource is reset, it wipes out the resource version history and resets it to a single version, which is now considered the latest. This version is used for the new run.
Error: fatal: reference is not a tree
This could be a result of using
Error: Failed to create pvc for node
Either the Kubernetes configuration does not have access to create a Persistent Volume Claim (PVC) resource or Pipelines cannot connect to the provided Kubernetes host server.
Review the Kubernetes configurations and verify that the
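One way to check the PVC permission, assuming kubectl is installed and configured against the cluster Pipelines uses:

```shell
# Ask the cluster whether the current identity may create PVCs.
if command -v kubectl >/dev/null 2>&1; then
  kubectl auth can-i create persistentvolumeclaims || true  # prints yes/no
else
  echo "kubectl not found on this host"
fi
```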
Error: SCM provider credentials do not have permissions
The credentials (username and/or token) provided while creating the integration are incorrect.
Ensure that the credentials provided for the SCM provider are correct.
Error: SCM provider URL is invalid
The SCM URL provided while creating the integration is incorrect.
Ensure that the URL provided for the SCM provider is correct.
Error: SCM provider repository path is invalid
The repository path provided for the SCM provider while creating the integration is incorrect or does not exist.
Ensure that the repository name provided for the SCM provider is correct.