Overview

JFrog Enterprise+ can be installed on a Kubernetes cluster using E+ Helm charts.

Requirements


Helm

Enterprise Plus will be deployed with Helm.

Initialise Helm and Tiller with the following command:

helm init
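
On a cluster with RBAC enabled (recommended below), Tiller usually needs a service account with sufficient permissions before it is initialised. A minimal sketch, assuming that binding the account to the cluster-admin role is acceptable in your environment:

# Create a service account for Tiller and bind it to cluster-admin
# (assumption: cluster-admin is acceptable for your environment)
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Initialise Helm with that service account
$ helm init --service-account tiller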

Enterprise Plus Helm charts

Tested on GKE with Dynamic Provisioning

The JFrog Enterprise+ Helm charts have been tested on a managed Kubernetes cluster (GKE) with dynamic provisioning enabled.

Getting the Charts

You can get the Helm charts from the public JFrog charts repository (https://charts.jfrog.io).

Configure the JFrog chart repository with the Helm client:

# Command to add jfrog repo
$ helm repo add jfrog https://charts.jfrog.io/
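
After adding the repository, refresh the local chart index so the latest chart versions are picked up:

# Update the local cache of charts from all configured repositories
$ helm repo update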

Run one of the following commands according to the Helm chart you need:

Helm chart version vs. package version

Don't confuse the Helm chart version with the version of the product it deploys; the two are different.
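
To see which product version a given chart version deploys, you can list all available versions of a chart. A sketch using the Helm 2 client (the -l flag lists every version; the output includes both CHART VERSION and APP VERSION columns):

# List all available versions of the artifactory-ha chart
$ helm search jfrog/artifactory-ha -l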

RBAC

All Helm charts create the needed RBAC objects. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/ for more details.

Disabling RBAC can be done by passing a parameter to the helm install command:

$ helm install ... --set rbac.create=false ...
Artifactory (for Edge Node)
# README URL: https://github.com/jfrog/charts/tree/master/stable/artifactory
$ helm fetch jfrog/artifactory
Artifactory HA
# README URL: https://github.com/jfrog/charts/tree/master/stable/artifactory-ha
$ helm fetch jfrog/artifactory-ha
Mission Control
# README URL: https://github.com/jfrog/charts/tree/master/stable/mission-control
$ helm fetch jfrog/mission-control
Distribution
# README URL: https://github.com/jfrog/charts/tree/master/stable/distribution
$ helm fetch jfrog/distribution
Xray
# README URL: https://github.com/jfrog/charts/tree/master/stable/xray
$ helm fetch jfrog/xray

 


Deploy the applications with Helm

Once fetched, follow the installation instructions in each chart's README.md file.
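
For reference, a typical install command looks like the sketch below; the release name (artifactory-ha) and namespace (jfrog) are placeholders, and the exact options to pass are documented in each chart's README.md:

# Install the Artifactory HA chart; release name and namespace are examples only
$ helm install --name artifactory-ha --namespace jfrog jfrog/artifactory-ha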

 

System requirements

The system requirements for running JFrog Enterprise+ on Kubernetes vary depending on the size of your deployment.

Kubernetes cluster

A Kubernetes cluster with enough resources to run all the deployed services is required.

  • Version 1.8.8 or greater (recommended 1.9.7 with RBAC enabled)
  • Cluster nodes with at least 8 CPU and 16GB RAM each
    • Depending on which services you deploy, plan for at least one node per service
  • Dynamic storage provisioning enabled (recommended)
    • A default StorageClass set so that services can use it for persistent storage (see the example after this list)
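
A minimal sketch for checking and, if needed, setting the default StorageClass; the class name standard is only an example (it is the default on GKE), so substitute the class available in your cluster:

# List storage classes; the default one is marked "(default)"
$ kubectl get storageclass

# Mark a class as the default ("standard" is an example name)
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'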

E+ Applications

The summary table below lists the total memory (GB) and CPU requests and limits needed by the E+ applications.

Derived from the System Requirements (https://www.jfrog.com/confluence/display/EP/System+Requirements), there are 4 deployment sizes:

  • Small

  • Medium

  • Large

  • XLarge - contact JFrog support, as this size has to be tailored to your specific solution

CPU and Memory

Each application must declare its resource requests and limits so that Kubernetes can schedule it on suitable nodes and enforce those limits.

It is recommended to keep a production copy of each Helm chart's `values.yaml` as `values-production.yaml`, containing all the required settings such as replicas, memory and CPU.

The following table is a summary of the total resources needed by the JFrog applications so you can plan your Kubernetes cluster scale beforehand.

A drill-down per application, with the exact settings, is provided in the Application specific settings section below.

Summary

                   Small                    Medium                   Large
                   RAM GB      CPU          RAM GB      CPU          RAM GB      CPU
                   Req/Limit   Req/Limit    Req/Limit   Req/Limit    Req/Limit   Req/Limit
Artifactory HA     8/12        4/8          12/24       6/18         24/40       16/32
Artifactory Edge   4/6         2/4          4/8         2/6          6/10        4/8
Distribution       6/11        7/7          10/29       10/17        21/59       13/43
Mission Control    11/18       3/6          16/29       3/8          20/33       6/12
Xray               16/25       6/12         25/39       9/18         33/55       22/45
Total              45/72       22/37        67/129      30/67        104/197     61/140
Total w/o Edge     41/66       20/33        63/121      28/61        98/187      57/132

  • The biggest single resource request is the Xray MongoDB. You must have a node that can fulfil its requested memory.

  • All Java applications need the Xms and Xmx memory parameters set explicitly. See the templates below.

  • It’s assumed the Artifactory Edge is not in the main cluster, so the total is shown with and without Edge.

  • Totals reflect using the internal databases (MongoDB and PostgreSQL) in Mission Control, Xray and Distribution.

  • Artifactory databases are NOT included.

  • Artifactory HA

    • Incoming UI and API calls are routed to member nodes only; the primary is left for indexing and maintenance tasks. To route UI and API calls to the primary as well, pass --set artifactory.service.pool=all (see the example after this list).
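
Passing this option at install time follows the same pattern as the RBAC flag shown earlier:

$ helm install ... --set artifactory.service.pool=all ...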

Storage

Refer to the System Requirements for storage requirements.
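
Persistent volume sizes are configured per chart; as an illustration only, the Artifactory charts expose a persistence size value, but you should verify the exact key in each chart's values.yaml before relying on it:

# Example only: set the Artifactory persistent volume size at install time
# (verify the key name in the chart's values.yaml before using it)
$ helm install --set artifactory.persistence.size=200Gi jfrog/artifactory-ha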

Application specific settings

The following tables provide the application-specific settings in YAML format.

It’s recommended to create a values-production.yaml based on the default values.yaml, with the resources and configuration customised as needed (see the example below).
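
A sketch of applying the customised file at install or upgrade time; the release name and namespace are placeholders:

# Install (or upgrade) the release with the customised production values
$ helm upgrade --install artifactory-ha --namespace jfrog -f values-production.yaml jfrog/artifactory-ha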

Artifactory HA

  • The 2GB gap between `xmx` and memory limit is for the replicator.
  • On a Small deployment, both the primary and the single member node accept UI and API calls (artifactory.service.pool=all)

 

 

Small

artifactory:
  primary:
    resources: 
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "6Gi"
        cpu: "4"
    javaOpts: 
      xms: "4g"
      xmx: "4g"
  node: 
    replicaCount: 1
    resources: 
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "6Gi"
        cpu: "4"
    javaOpts: 
      xms: "4g"
      xmx: "4g"
  ## To add all nodes
  ## to LoadBalancer pool
  service:
    pool: all

Medium

artifactory:
  primary:
    resources: 
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "8Gi"
        cpu: "6"
    javaOpts: 
      xms: "4g"
      xmx: "6g"
  node: 
    replicaCount: 2
    resources: 
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "8Gi"
        cpu: "6"
    javaOpts: 
      xms: "4g"
      xmx: "6g"
  ## To add all nodes
  ## to LoadBalancer pool
  service:
    pool: all

Large

artifactory:
  primary:
    resources: 
      requests:
        memory: "6Gi"
        cpu: "4"
      limits:
        memory: "10Gi"
        cpu: "8"
    javaOpts: 
      xms: "6g"
      xmx: "8g"
  node: 
    replicaCount: 3
    resources: 
      requests:
        memory: "6Gi"
        cpu: "4"
      limits:
        memory: "10Gi"
        cpu: "8"
    javaOpts: 
      xms: "6g"
      xmx: "8g"
  ## To add all nodes
  ## to LoadBalancer pool
  service:
    pool: all

 

Artifactory Edge

  • The 2GB gap between `xmx` and memory limit is for the replicator.

 

 

Small

artifactory:
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
    limits:
      memory: "6Gi"
      cpu: "4"
  javaOpts: 
    xms: "4g"
    xmx: "4g"

Medium

artifactory:
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
    limits:
      memory: "8Gi"
      cpu: "6"
  javaOpts: 
    xms: "4g"
    xmx: "6g"

Large

artifactory:
  resources: 
    requests:
      memory: "6Gi"
      cpu: "4"
    limits:
      memory: "10Gi"
      cpu: "8"
  javaOpts: 
    xms: "6g"
    xmx: "8g"

 

Distribution

 

Small

replicaCount: 2
redis:
  resources:
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "1Gi"
      cpu: "1"
mongodb:
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "1Gi"
      cpu: "1"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=0.5"
distribution:
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "1"
  javaOpts: 
    xms: "1g"
    xmx: "2g"
distributor:
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "1"
  javaOpts: 
    xms: "1g"
    xmx: "2g"

Medium

replicaCount: 3
redis:
  resources:
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "1Gi"
      cpu: "1"
mongodb:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=1"
distribution:
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  javaOpts: 
    xms: "1g"
    xmx: "4g"
distributor:
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  javaOpts: 
    xms: "1g"
    xmx: "4g"

Large

replicaCount: 4
redis:
  resources:
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
mongodb:
  resources:
    requests:
      memory: "3Gi"
      cpu: "1"
    limits:
      memory: "3Gi"
      cpu: "3"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=1.5"
distribution:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "6Gi"
      cpu: "4"
  javaOpts: 
    xms: "2g"
    xmx: "6g"
distributor:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "6Gi"
      cpu: "4"
  javaOpts: 
    xms: "2g"
    xmx: "6g"

Mission Control

 

Small

elasticsearch:
  resources:
    requests:
      memory: "4Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "1"
  ## ElasticSearch xms and xmx should be same!
  javaOpts:
    xms: "3g"
    xmx: "3g"
mongodb:
  resources:
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=1"
missionControl:
  resources: 
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "1"
  javaOpts: 
    xms: "2g"
    xmx: "4g"
insightServer:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
insightExecutor:
  resources: 
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "1"
  javaOpts: 
    xms: "2g"
    xmx: "4g"
insightScheduler:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
  javaOpts: 
    xms: "512m"
    xmx: "2g"

Medium

elasticsearch:
  resources:
    requests:
      memory: "6Gi"
      cpu: "500m"
    limits:
      memory: "6Gi"
      cpu: "2"
  ## ElasticSearch xms and xmx should be same!
  javaOpts:
    xms: "5g"
    xmx: "5g"
mongodb:
  resources:
    requests:
      memory: "4Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "1"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=2"
missionControl:
  resources: 
    requests:
      memory: "3Gi"
      cpu: "500m"
    limits:
      memory: "6Gi"
      cpu: "2"
  javaOpts: 
    xms: "3g"
    xmx: "6g"
insightServer:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "1"
insightExecutor:
  resources: 
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "6Gi"
      cpu: "1"
  javaOpts: 
    xms: "2g"
    xmx: "6g"
insightScheduler:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "3Gi"
      cpu: "1"
  javaOpts: 
    xms: "512m"
    xmx: "3g"

Large

elasticsearch:
  resources:
    requests:
      memory: "8Gi"
      cpu: "1"
    limits:
      memory: "8Gi"
      cpu: "2"
  ## ElasticSearch xms and xmx should be same!
  javaOpts:
    xms: "7g"
    xmx: "7g"
mongodb:
  resources:
    requests:
      memory: "6Gi"
      cpu: "1"
    limits:
      memory: "6Gi"
      cpu: "2"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=3"
missionControl:
  resources: 
    requests:
      memory: "3Gi"
      cpu: "1"
    limits:
      memory: "6Gi"
      cpu: "2"
  javaOpts: 
    xms: "3g"
    xmx: "6g"
insightServer:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
insightExecutor:
  resources: 
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "6Gi"
      cpu: "2"
  javaOpts: 
    xms: "2g"
    xmx: "6g"
insightScheduler:
  resources: 
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "3Gi"
      cpu: "2"
  javaOpts: 
    xms: "512m"
    xmx: "3g"

Xray

 

Small

rabbitmq-ha:
  replicaCount: 2
  rabbitmqMemoryHighWatermark: 1000MB
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "1Gi"
      cpu: "1"
mongodb:
  resources:
    requests:
      memory: "6Gi"
      cpu: "500m"
    limits:
      memory: "6Gi"
      cpu: "1"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=3"
postgresql:
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "1Gi"
      cpu: "1"
server:
  replicaCount: 2
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
analysis:
  replicaCount: 2
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
persist:
  replicaCount: 2
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
indexer:
  replicaCount: 2
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"

Medium

rabbitmq-ha:
  replicaCount: 3
  rabbitmqMemoryHighWatermark: 1000MB
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "1Gi"
      cpu: "1"
mongodb:
  resources:
    requests:
      memory: "10Gi"
      cpu: "500m"
    limits:
      memory: "10Gi"
      cpu: "2"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=5"
postgresql:
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
server:
  replicaCount: 3
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
analysis:
  replicaCount: 3
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
persist:
  replicaCount: 3
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
indexer:
  replicaCount: 3
  resources: 
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"

Large

rabbitmq-ha:
  replicaCount: 4
  rabbitmqMemoryHighWatermark: 2000MB
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
mongodb:
  resources:
    requests:
      memory: "12Gi"
      cpu: "1"
    limits:
      memory: "12Gi"
      cpu: "3"
  ## Make sure the --wiredTigerCacheSizeGB is
  ## no more than half the memory limit!
  ## This is critical to protect against
  ## OOMKill by Kubernetes!
  mongodbExtraFlags:
  - "--wiredTigerCacheSizeGB=6"
postgresql:
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "3Gi"
      cpu: "2"
server:
  replicaCount: 4
  resources: 
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
analysis:
  replicaCount: 4
  resources: 
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
persist:
  replicaCount: 4
  resources: 
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"
indexer:
  replicaCount: 4
  resources: 
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "2Gi"
      cpu: "2"