Overview

The JFrog Platform supports Open Metrics for Self-hosted customers (the functionality is not supported for JFrog Cloud customers).


Credentials for Accessing Open Metrics

From Artifactory version 7.21.1, an admin user can create a scoped access token with the system:metrics:r scope and use it as the credentials for retrieving service metrics. The admin can create a read-only access token for metrics, which enables anyone holding that token to read metrics. To learn more, see Access Token Structure.
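For example, such a token can be created through the Access Tokens REST API. The following is a minimal sketch in Python; the base URL, the form-style parameters, and the description value are assumptions to adapt to your setup (verify against the Access Tokens REST API documentation for your version):

import requests

JFROG_URL = "http://localhost:8082"       # assumption: your JFrog Platform URL
ADMIN_TOKEN = "<ADMIN_ACCESS_TOKEN>"      # an existing admin-level token

# Ask the Access service for a new token restricted to reading metrics.
resp = requests.post(
    f"{JFROG_URL}/access/api/v1/tokens",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    data={
        "scope": "system:metrics:r",      # read-only metrics scope
        "description": "metrics reader",  # hypothetical description
    },
)
resp.raise_for_status()
metrics_token = resp.json()["access_token"]
print(metrics_token)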


To enable metrics in Artifactory, make the following configuration changes to the Artifactory System YAML.

artifactory:
    metrics:
        enabled: true
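Once metrics are enabled, they can be retrieved over REST with the read-only token described above. A minimal sketch in Python (the endpoint below is the same one used in the Prometheus example at the end of this page; JFROG_URL and the token value are placeholders):

import requests

JFROG_URL = "http://localhost:8082"        # placeholder: your JFrog Platform URL
METRICS_TOKEN = "<METRICS_ACCESS_TOKEN>"   # token scoped to system:metrics:r

# Fetch the Artifactory service metrics in Open Metrics format.
resp = requests.get(
    f"{JFROG_URL}/artifactory/api/v1/metrics",
    headers={"Authorization": f"Bearer {METRICS_TOKEN}"},
)
resp.raise_for_status()
print(resp.text)  # line-oriented Open Metrics text: HELP/TYPE comments plus samples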

Metrics are enabled by default in Xray. If they have been disabled, make the following configuration changes to the Xray System YAML to re-enable them.

xray:
    metrics:
        enabled: true

To enable metrics in Insight, make the following configuration changes to the Insight System YAML.

## Insight scheduler template
insight-scheduler:
    metrics:
        enabled: true
## Insight server template
insight-server:
    metrics:
        enabled: true

To enable metrics in Private Distribution Network (PDN) (available from Artifactory 7.38.8 and Distribution 2.12.3), make the following configuration changes to the PDN Server system.yaml.

pdnserver:
  metrics:
    enabled: true
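    # interval: how often metrics are collected (unit assumed to be seconds)
    # exclude: metric-name prefixes to omit (prefix_1/prefix_2 are placeholders)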
    interval: 5
    exclude:
      - prefix_1
      - prefix_2

Next, to enable metrics in the PDN Node, make the following configuration changes to the PDN Node system.yaml.

pdnNode:
  metrics:
    enabled: true
    interval: 5
    exclude:
      - prefix_1
      - prefix_2
    basicAuthUsername: admin
    basicAuthPassword: password


Supported Open Metrics

Artifactory Metrics

The Get the Open Metrics for Artifactory REST API returns the following metrics in Open Metrics format:

  • app_disk_total_bytes: Total disk space used by the application (home directory)
  • app_disk_free_bytes: Total disk space free
  • jfrt_artifacts_gc_duration_seconds: Time taken by a GC run
  • jfrt_artifacts_gc_binaries_total: Number of binaries removed by a GC run
  • jfrt_artifacts_gc_size_cleaned_bytes: Space reclaimed by a GC run
  • jfrt_artifacts_gc_current_size_bytes: Space occupied by binaries after a GC run (FULL GC runs only)
  • jfrt_runtime_heap_freememory_bytes: Free memory available to the JVM
  • jfrt_runtime_heap_maxmemory_bytes: Maximum memory configured for the JVM
  • jfrt_runtime_heap_totalmemory_bytes: Total memory configured for the JVM
  • jfrt_runtime_heap_processors_total: Total number of processors available to the JVM
  • jfrt_db_connections_active_total: Total number of active DB connections
  • jfrt_db_connections_idle_total: Total number of idle DB connections
  • jfrt_db_connections_max_active_total: Maximum number of active DB connections
  • jfrt_db_connections_min_idle_total: Minimum number of idle DB connections
  • jfrt_http_connections_available_total: Total number of available outbound HTTP connections
  • jfrt_http_connections_leased_total: Total number of leased outbound HTTP connections
  • jfrt_http_connections_pending_total: Total number of pending outbound HTTP connections
  • jfrt_http_connections_max_total: Maximum number of outbound HTTP connections
  • jfrt_slow_queries_duration_seconds: Duration of slow queries, in seconds
  • jfrt_slow_queries_count_total: Total number of slow queries
  • jfrt_storage_current_total_size_bytes: Total size of current storage, in bytes
  • jfrt_projects_active_total: Total number of active projects
  • jfrt_artifacts_gc_next_run_seconds: Number of seconds until the next artifacts garbage collection run

The jfrt_http_connections_* metrics collect outbound HTTP connection statistics for repositories, sorted by available pool count. By default, this information is collected for 10 repositories. To collect it for more repositories, set the artifactory.httpconnections.metrics.max.total.repositories flag in the artifactory.system.properties file (available at $JFROG_HOME/var/etc/artifactory/). You can set the value to any integer; the default and recommended value is 10.
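For example, to collect outbound HTTP connection metrics for the top 25 repositories instead of the default 10 (25 is purely an illustrative value), add the following line to artifactory.system.properties:

artifactory.httpconnections.metrics.max.total.repositories=25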

Xray Metrics

The Xray Metrics REST API returns the following metrics:

  • jfxr_db_sync_started_before_seconds: Seconds that passed since the last Xray DB sync started running
  • jfxr_db_sync_running_total: Total DB sync running time
  • jfxr_db_sync_ended_persist_before_seconds: Seconds that passed since the DB sync completed persisting new updates to the database
  • jfxr_db_sync_ended_analyze_before_seconds: Seconds that passed since the DB sync completed sending all impact analysis messages
  • jfxr_data_artifacts_total: Total number of artifacts scanned by Xray, per package type (the package type is reported in the package_type label)
  • jfxr_data_components_total: Total number of components scanned by Xray, per package type (the package type is reported in the package_type label)
  • jfxr_performance_server_up_time_seconds: Seconds that passed since the Xray server started on the particular node
  • app_disk_used_bytes: Disk usage, in bytes
  • app_disk_free_bytes: Free space on disk, in bytes
  • app_io_counters_read_bytes: Number of bytes read by the application
  • app_io_counters_write_bytes: Number of bytes written by the application
  • app_self_metrics_calc_seconds: Number of seconds taken to calculate these metrics
  • app_self_metrics_total: Total number of self metrics
  • cleanup_job_data_deleted_artifacts_in_last_batch_total: Number of artifacts deleted in the last batch of the cleanup job
  • cleanup_job_data_processed_artifacts_total: Number of artifacts processed by the cleanup job
  • cleanup_job_data_processed_artifacts_in_last_batch_total: Number of artifacts processed in the last batch of the cleanup job
  • cleanup_job_data_start_time_seconds: Start time of the last cleanup job
  • cleanup_job_data_end_time_seconds: End time of the last cleanup job
  • cleanup_job_data_time_taken_by_last_job_seconds: Time taken to complete the last cleanup job
  • cleanup_job_data_deleted_artifacts_total: Total number of artifacts deleted by the cleanup job
  • db_connection_pool_in_use_total: Number of connections in use in the DB connection pool
  • db_connection_pool_idle_total: Number of idle connections in the DB connection pool
  • db_connection_pool_max_open_total: Maximum number of open connections in the DB connection pool
  • go_memstats_heap_in_use_bytes: Memory (in bytes) in use by the Go heap
  • go_memstats_heap_allocated_bytes: Memory (in bytes) allocated to the Go heap
  • go_memstats_heap_idle_bytes: Idle memory (in bytes) allocated to the Go heap
  • go_memstats_heap_objects_total: Total number of objects in the Go heap
  • go_memstats_heap_reserved_bytes: Memory (in bytes) reserved for the Go heap
  • go_memstats_gc_cpu_fraction_ratio: Fraction of CPU time used by the Go garbage collector
  • go_routines_total: Total number of goroutines
  • jfxr_jira_no_of_integrations_total: Total number of Jira integrations
  • jfxr_jira_no_of_profiles_total: Total number of Jira profiles
  • jfxr_jira_no_of_tickets_created_in_last_one_hour_total: Total number of Jira tickets created in the last hour
  • jfxr_jira_last_ticket_creation_time_seconds: Time at which the last Jira ticket was created
  • jfxr_jira_no_of_errors_in_last_hour_total: Number of Jira errors in the last hour
  • jfxr_jira_last_error_time_seconds: Time at which the last Jira error occurred
  • queue_messages_total: Total number of messages in the queue
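Since the output is standard Open Metrics text, it can be consumed programmatically as well as scraped. A minimal sketch in Python using the prometheus_client parser (assumptions: the library is installed with pip install prometheus-client, and /xray/api/v1/metrics is the Xray metrics endpoint on your JFrog Platform URL):

import requests
from prometheus_client.parser import text_string_to_metric_families

JFROG_URL = "http://localhost:8082"        # placeholder: your JFrog Platform URL
METRICS_TOKEN = "<METRICS_ACCESS_TOKEN>"   # token scoped to system:metrics:r

resp = requests.get(
    f"{JFROG_URL}/xray/api/v1/metrics",    # assumed Xray metrics endpoint
    headers={"Authorization": f"Bearer {METRICS_TOKEN}"},
)
resp.raise_for_status()

# Walk the parsed metric families and print every sample with its labels,
# e.g. jfxr_data_artifacts_total broken down by the package_type label.
for family in text_string_to_metric_families(resp.text):
    for sample in family.samples:
        print(sample.name, sample.labels, sample.value)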

Logs

The artifactory_metrics.log will contain system metrics such as: 

  • Total disk space used
  • Total disk space free
  • CPU time used by the process
  • JVM available memory
  • JVM number of processors
  • DB number of active, idle, max and min connections
  • HTTP number of available, leased, pending and max connections
  • Xray DB sync running time
  • Xray total number of scanned artifacts and components
  • Xray server start time on a node

The artifactory_metrics_events.log will contain deduplicated metrics related to an event such as a GC run.

PDN Metrics

Metrics Log Files

The following are the two metric log files created for PDN:

  • PDN Server: $JF_PRODUCT_HOME/var/log/tracker-metrics.log
  • PDN Node: $JF_PRODUCT_HOME/var/log/distribution-node-metrics.log

The PDN Server Metrics REST API returns the following metrics in Open Metrics format:

  • app_disk_used_bytes: Used bytes on the app home directory disk device
  • app_disk_free_bytes: Free bytes on the app home directory disk device
  • app_io_counters_error: Error in the app I/O counter
  • app_self_metrics_calc_seconds: Total time to collect all metrics
  • app_self_metrics_total: Count of collected metrics
  • go_memstats_heap_in_use_bytes: Process Go heap bytes in use
  • go_memstats_heap_allocated_bytes: Process Go heap allocated bytes
  • go_memstats_heap_idle_bytes: Process Go heap idle bytes
  • go_memstats_heap_objects_total: Number of objects in the process Go heap
  • go_memstats_heap_reserved_bytes: Process Go heap reserved bytes
  • go_memstats_gc_cpu_fraction_ratio: Fraction of process CPU used by the Go GC (value between 0 and 1)
  • go_routines_total: Number of goroutines that currently exist
  • jftrk_cache_topology_metrics_peers_total_free_cache_size_bytes: Peers' total free cache size
  • jftrk_cache_topology_metrics_peers_average_cache_used_ratio: Peers' average cache used
  • jftrk_cache_topology_metrics_peers_average_cache_free_ratio: Peers' average cache free
  • jftrk_cache_topology_metrics_peers_average_max_total_cache_size_ratio: Peers' average maximum total cache size
  • jftrk_cache_topology_metrics_number_of_peers_total: Number of peers
  • jftrk_cache_topology_metrics_number_of_groups_total: Number of groups
  • jftrk_cache_topology_metrics_peers_total_cache_used_bytes: Peers' total cache used
  • jftrk_cache_topology_metrics_peers_total_max_cache_size_bytes: Peers' total maximum cache size
  • jftrk_downloads_files_fetched_total: Total number of files downloaded in PDN
  • jftrk_downloads_bytes_served_total: Total number of bytes served to clients
  • jftrk_downloads_bytes_fetched_total: Total number of bytes downloaded in PDN
  • jftrk_downloads_release_bundles_total: Total number of release bundles downloaded
  • jftrk_downloads_file_providers_avg_ratio: Average number of peers to download from per file
  • jftrk_downloads_speed_kbps_avg_ratio: Average download speed in PDN (Kbps)
  • jftrk_downloads_errors_total: Total download errors
  • jftrk_downloads_files_served_total: Total number of files served to clients
  • sys_load_15: Host load average in the last 15 minutes
  • sys_load_1: Host load average in the last minute
  • sys_load_5: Host load average in the last 5 minutes

The PDN Node Metrics REST API returns the following metrics in Open Metrics format:

  • app_disk_used_bytes: Used bytes on the app home directory disk device
  • app_disk_free_bytes: Free bytes on the app home directory disk device
  • app_io_counters_error: Error in the app I/O counter
  • app_self_metrics_calc_seconds: Total time to collect all metrics
  • app_self_metrics_total: Count of collected metrics
  • go_memstats_heap_in_use_bytes: Process Go heap bytes in use
  • go_memstats_heap_allocated_bytes: Process Go heap allocated bytes
  • go_memstats_heap_idle_bytes: Process Go heap idle bytes
  • go_memstats_heap_objects_total: Number of objects in the process Go heap
  • go_memstats_heap_reserved_bytes: Process Go heap reserved bytes
  • go_memstats_gc_cpu_fraction_ratio: Fraction of process CPU used by the Go GC (value between 0 and 1)
  • go_routines_total: Number of goroutines that currently exist
  • jfpdn_cache_metrics_cache_used_bytes: Cache used bytes
  • jfpdn_cache_metrics_cache_maximum_files_total: Cache maximum files
  • jfpdn_cache_metrics_cache_maximum_bytes: Cache maximum bytes
  • jfpdn_cache_metrics_cache_used_files_total: Cache used files
  • jfpdn_downloads_speed_kbps_avg_ratio: Average download speed in PDN (Kbps)
  • jfpdn_downloads_errors_total: Total download errors
  • jfpdn_downloads_files_served_total: Total number of files served to clients
  • jfpdn_downloads_files_fetched_total: Total number of files downloaded in PDN
  • jfpdn_downloads_bytes_served_total: Total number of bytes served to clients
  • jfpdn_downloads_bytes_fetched_total: Total number of bytes downloaded in PDN
  • jfpdn_downloads_release_bundles_total: Total number of release bundles downloaded
  • jfpdn_downloads_file_providers_avg_ratio: Average number of peers to download from per file
  • sys_load_15: Host load average in the last 15 minutes
  • sys_load_1: Host load average in the last minute
  • sys_load_5: Host load average in the last 5 minutes
  • sys_memory_used_bytes: Host used virtual memory
  • sys_memory_free_bytes: Host free virtual memory

Pipelines Metrics

The following are the three metric log files created for Pipelines:

  • Open Metrics Format
    • Pipeline API Metrics: $JF_PRODUCT_HOME/var/log/api-metrics.log
  • Non-Open Metrics Format
    • Pipeline Reqsealer Event Metrics: $JF_PRODUCT_HOME/var/log/reqsealer-activity-event.log
    • Pipeline Sync Event Metrics: $JF_PRODUCT_HOME/var/log/pipelinesync-activity-event.log

Open Metrics Format

The Get Pipelines Metrics Data REST API returns the following metrics in Open Metrics format.

  • sys_cpu_user_seconds: User CPU usage time for the pipeline process, in seconds
  • sys_cpu_system_seconds: System CPU usage time for the pipeline process, in seconds
  • sys_cpu_total_seconds: Total CPU usage time for the pipeline process, in seconds
  • nodejs_heap_read_only_space_total: Total size allocated for the V8 heap segment "read_only_space"
  • nodejs_heap_read_only_space_used_total: Used size for the V8 heap segment "read_only_space"
  • nodejs_heap_new_space_total: Total size allocated for the V8 heap segment "new_space"
  • nodejs_heap_new_space_used_total: Used size for the V8 heap segment "new_space"
  • nodejs_heap_old_space_total: Total size allocated for the V8 heap segment "old_space"
  • nodejs_heap_old_space_used_total: Used size for the V8 heap segment "old_space"
  • nodejs_heap_code_space_total: Total size allocated for the V8 heap segment "code_space"
  • nodejs_heap_code_space_used_total: Used size for the V8 heap segment "code_space"
  • nodejs_heap_map_space_total: Total size allocated for the V8 heap segment "map_space"
  • nodejs_heap_map_space_used_total: Used size for the V8 heap segment "map_space"
  • nodejs_heap_large_object_space_total: Total size allocated for the V8 heap segment "large_object_space"
  • nodejs_heap_large_object_space_used_total: Used size for the V8 heap segment "large_object_space"
  • nodejs_heap_code_large_object_space_total: Total size allocated for the V8 heap segment "code_large_object_space"
  • nodejs_heap_code_large_object_space_used_total: Used size for the V8 heap segment "code_large_object_space"
  • nodejs_heap_new_large_object_space_total: Total size allocated for the V8 heap segment "new_large_object_space"
  • nodejs_heap_new_large_object_space_used_total: Used size for the V8 heap segment "new_large_object_space"
  • sys_memory_free_bytes: Host free virtual memory
  • sys_memory_total_bytes: Host total virtual memory
  • jfpip_pipelines_per_project_count: Number of pipelines per project (in Pipelines 1.24 and earlier, this metric is called jfpip_pipelines_per_project_count_count)
  • jfpip_pipelines_count: Total number of pipelines (in Pipelines 1.24 and earlier, this metric is called jfpip_pipelines_count_count)
  • jfpip_queue_messages_total_count: Message count for the queue
  • jfpip_nodepool_provisionstatus_success_count: Number of nodes with SUCCESS provisioned status
  • jfpip_nodepool_provisionstatus_cached_count: Number of nodes with CACHED provisioned status
  • jfpip_nodepool_provisionstatus_processing_count: Number of nodes with PROCESSING provisioned status
  • jfpip_nodepool_provisionstatus_failure_count: Number of nodes with FAILURE provisioned status
  • jfpip_nodepool_provisionstatus_waiting_count: Number of nodes with WAITING provisioned status
  • jfpip_concurrent_active_builds_count: Active concurrent build count
  • jfpip_concurrent_allowed_builds_count: Allowed concurrent build count
  • jfpip_concurrent_available_builds_count: Available concurrent build count

All Node.js heap size statistics are captured using the v8.getHeapSpaceStatistics() API.

Logs

The api-metrics.log will contain system metrics such as: 

  • Total disk space used
  • Total disk space free
  • CPU time used by the process
  • Node JS Heap related information

Non-Open Metrics Format

In addition to the metrics mentioned above, Pipelines supports the following custom activity-based event metrics:

  • Pipeline Run & Step Events: 
    For every pipeline run, two types of metrics can be found in reqsealer-activity-event.log: one entry for each step status and one entry for the overall pipeline status.

    {"timestamp":"2022-04-05T08:30:10.088Z","startedAt":"2022-04-05T08:30:03.986Z","queuedAt":"2022-04-05T08:30:03.010Z","domain":"step","pipelineName":"my_pipeline_2","triggeredBy":"admin","branchName":"master","stepName":"p2_s1","runNumber":2,"status":"success","durationMillis":6102,"outputArtifactsCount":0,"outputResourcesCount":0} 
    
    {"timestamp":"2022-04-05T08:30:10.088Z","startedAt":"2022-04-05T08:30:03.986Z","domain":"run","pipelineName":"my_pipeline_2","triggeredBy":"admin","branchName":"master","runNumber":2,"status":"success","durationMillis":6102}
  • Pipeline Sync Events:
    For every pipeline sync activity, the following metrics can be found in pipelinesync-activity-event.log.

    {"timestamp":"2022-04-06T10:00:45.673Z","domain":"pipelineSync","pipelineSourceName":"Sample","repositoryName":"a-0908/myFirstRepo","branch":"master","status":"success","durationMillis":10498}
  • Webhook Events (Pipelines 1.25 and above):
    For every webhook activity for the supported SCMs, you will find the following metrics in hookhandler-activity-event.log.

    {"timestamp":"2022-06-10T16:29:29.894Z","domain":"webhook","status":"success","durationMillis":533,"webhookId":"11819184-2d88-4180-92da-aa13092d0ca4","integration":"my_bitbucket","source":"gitrepo","eventType":"branchCreated","branchName":"kt4","repositoryName":"krishnakadiyam/jfrog-pipelines-second"}
    {"timestamp":"2022-06-10T16:29:40.845Z","domain":"webhook","status":"success","durationMillis":323,"webhookId":"6d098e3a-7b4b-427c-ba53-b1174baeeabd","integration":"my_bitbucket","source":"gitrepo","eventType":"branchDeleted","branchName":"kt4","repositoryName":"krishnakadiyam/jfrog-pipelines-second"}
    {"timestamp":"2022-06-13T05:29:55.062Z","domain":"webhook","status":"success","durationMillis":234,"webhookId":"2d4d698b-b083-42fd-a28e-670d9cec4c1a","integration":"glRepo","source":"gitrepo","eventType":"tag","repositoryName":"jfrog-pipelines-second","tagName":"refs/tags/kt4"}
  • Pipelines Integrations Events (Pipelines 1.29 and above):
    For every integrations activity, you will find the following metrics in api-activity-event.log.

    {"timestamp":"2022-11-10T10:36:50.004Z","domain":"projectIntegrations","eventType":"create","status":"success","integrationName":"iwh","integrationId":1,"integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":188}
    {"timestamp":"2022-11-10T10:37:43.423Z","domain":"projectIntegrations","eventType":"update","status":"success","integrationName":"iwh","integrationId":1,"integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":38}
    {"timestamp":"2022-11-10T10:37:55.901Z","domain":"projectIntegrations","eventType":"delete","status":"success","integrationName":"iwh","integrationId":"1","integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":85}

Usage Example - Prometheus

Update the prometheus.yml file to add a scrape job, setting the following configuration values:

  • job_name: Use a name that is unique among your scrape jobs. All metrics collected through this job automatically get a 'job' label with this value.
  • credentials: The access token used for authentication, such as a token with the system:metrics:r scope (see Credentials for Accessing Open Metrics above).
  • targets: The URL and port of the Artifactory node.

- job_name: 'artifactory'
  # Configures the protocol scheme used for requests; defaults to http.
  scheme: http
  # Sets the `Authorization` header on every scrape request with
  # the configured credentials.
  authorization:
    type: Bearer
    credentials: <ACCESS_TOKEN>
  # metrics_path defaults to '/metrics'
  metrics_path: '/artifactory/api/v1/metrics'
  static_configs:
    - targets: ['<JFROG_URL>:<PORT>']

For more information about Prometheus scrape job configuration, see the Prometheus documentation.
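Once Prometheus is scraping the endpoint, you can confirm that metrics are flowing by querying one of them through the Prometheus HTTP API. A minimal sketch in Python (the Prometheus address is an assumption; adjust it to your deployment):

import requests

PROMETHEUS_URL = "http://localhost:9090"   # assumption: your Prometheus address

# Ask Prometheus for the latest value of one Artifactory metric.
resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "jfrt_db_connections_active_total"},
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    # Each result carries the label set (including the 'job' label added
    # by the scrape job) and a [timestamp, value] pair.
    print(result["metric"], result["value"])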
