Overview

The JFrog Helm Charts installation provides a wide range of advanced functionality in addition to the basic installers. While you can install JFrog products using the basic installations, this page details the additional options available to you as an advanced user. These options are divided into the categories below.


Advanced Installation Categories

All Products - Customization

  • Establishing TLS and Adding Certificates

  • Adding Custom Init Containers

  • Adding Custom Sidecars

  • Adding Custom Volumes

  • Auto-generated Passwords
  • Using Custom Secrets (for Mission Control and Artifactory)

Artifactory High Availability

  • Artifactory HA Architecture
  • Artifactory Storage
  • Using an Existing Volume Claim
  • Using an Existing Shared Volume Claim
  • Adding Licenses
  • Scaling the Artifactory Cluster

Artifactory

  • Adding Licenses
  • Security-related Issues 

  • Advanced Storage Options 

  • Adding Extensions

  • Infrastructure Customization

  • Using ConfigMaps to Store Non-confidential Data

  • Using an External Database
  • Bootstrapping Using Helm Charts

  • Monitoring and Logging

  • Installing Artifactory and Artifactory HA with Nginx and Terminating SSL in the Nginx Service (LoadBalancer)

Mission Control

  • Deploying PostgreSQL

  • Deploying Elasticsearch

Pipelines

  • Installing a Pipelines Chart with Ingress
  • Using an External Secret for the Pipelines Password
  • Setting up a Build Plane

JFrog Platform

Advanced JFrog Platform Installation

Uninstall and Deletion

  • Uninstalling JFrog Platform

  • Uninstalling Artifactory

  • Deleting Artifactory

  • Deleting Xray

All Products - Customization

The following advanced customizations are relevant to all JFrog products.

Establishing TLS and Adding Certificates

Establishing TLS and Adding Certificates for Artifactory

In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS). By default, TLS between JFrog Platform nodes is disabled. When TLS is enabled, JFrog Access acts as the Certificate Authority (CA) that signs the TLS certificates used by all the different JFrog Platform nodes.

To establish TLS between JFrog Platform nodes, enable TLS by changing the tls entry (under the security section) in the access.config.yaml file. For additional information, see Managing TLS Certificates.

  1. To enable TLS in the charts, set tls to true under access in the values.yaml. By default, it is set to false.

    access:
      accessConfig:
        security:
          tls: true
  2. To add custom TLS certificates, create a TLS secret from the certificate files.

    kubectl create secret tls <tls-secret-name> --cert=ca.crt --key=ca.private.key
  3. To reset the Access certificates, set resetAccessCAKeys to true under the access section in the values.yaml and perform a Helm upgrade.

    Once the Helm upgrade is completed, set resetAccessCAKeys to false for subsequent upgrades (to avoid resetting the access certificates on every Helm upgrade).

    access:
      accessConfig:
        security:
          tls: true
      customCertificatesSecretName: <tls-secret-name>
      resetAccessCAKeys: true
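
Once these values are in place, apply them with a standard upgrade; for example (a sketch, assuming a release named artifactory in the artifactory namespace):

helm upgrade --install artifactory -f values.yaml --namespace artifactory center/jfrog/artifactory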

Establishing TLS and Adding Certificates for Xray, Mission Control and Distribution

Create trust between the nodes by copying the ca.crt file from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to the nodes you would like to set trust with, under $JFROG_HOME/var/etc/security/keys/trusted. For more information, see Managing TLS Certificates.

To add this certificate to Xray, Mission Control, or Distribution:

  1. Create a configmaps.yaml file with the following content.

    Xray
    common:
      configMaps: |
        ca.crt: |
          -----BEGIN CERTIFICATE-----
            <certificate content>
          -----END CERTIFICATE-----
    
      customVolumeMounts: |
        - name: xray-configmaps
          mountPath: /tmp/ca.crt
          subPath: ca.crt
    
    server:
      preStartCommand: "mkdir -p {{ .Values.xray.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.xray.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
    router:
      tlsEnabled: true  
    Mission Control
    common:
      configMaps: |
        ca.crt: |
          -----BEGIN CERTIFICATE-----
            <certificate content>
          -----END CERTIFICATE-----
      customVolumeMounts: |
        - name: mission-control-configmaps
          mountPath: /tmp/ca.crt
          subPath: ca.crt
    missionControl:
      preStartCommand: "mkdir -p {{ .Values.missionControl.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.missionControl.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
    router:
      tlsEnabled: true 
    Distribution
    common:
      configMaps: |
        ca.crt: |
          -----BEGIN CERTIFICATE-----
            <certificate content>
          -----END CERTIFICATE-----
    
      customVolumeMounts: |
        - name: distribution-configmaps
          mountPath: /tmp/ca.crt
          subPath: ca.crt
    
    distribution:
      preStartCommand: "mkdir -p {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
    router:
      tlsEnabled: true  
  2. Run the Helm install/upgrade.

    Xray
    helm upgrade --install xray -f configmaps.yaml --namespace xray center/jfrog/xray
    Mission Control
    helm upgrade --install mission-control -f configmaps.yaml --namespace mission-control center/jfrog/mission-control
    Distribution
    helm upgrade --install distribution -f configmaps.yaml --namespace distribution center/jfrog/distribution
The Helm installation creates a configMap with the files you specified above. This will, in turn:

    • Create a volume pointing to the configMap with the name xray-configmaps.

    • Mount this configMap onto /tmp using customVolumeMounts.

    • Copy the ca.crt file to the Xray trusted keys folder /etc/security/keys/trusted/ca.crt using the preStartCommand.

router.tlsEnabled is set to true to add the HTTPS scheme to the liveness and readiness probes.

Adding Custom Init Containers

Init containers are containers that run before the main container of your containerized application starts. In some cases, you will need to use a specialized, unsupported init process, for example, to check something in the file system or to test something before spinning up the main container. If you need to add a custom init container, use the section for defining a custom init container in the values.yaml file (by default, this section is commented out). A concrete sketch follows the templates below.

Artifactory
artifactory:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
Xray
common:
  ## Add custom init containers executed before predefined init containers
  customInitContainersBegin: |
    ## Init containers template goes here ##

    ## Add custom init containers executed after predefined init containers
  customInitContainers: |
    ## Init containers template goes here ##
Mission Control
common:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
Distribution
distribution:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
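
For example, here is a minimal sketch of a custom init container for Artifactory that delays startup until a database service's DNS entry resolves. The service name artifactory-postgresql and the busybox image are assumptions, not part of the chart; adjust them to your release.

artifactory:
  customInitContainers: |
    - name: "wait-for-db"
      image: "busybox:1.36"
      command:
        - 'sh'
        - '-c'
        ## The service name is an assumption; replace it with <release-name>-postgresql
        - 'until nslookup artifactory-postgresql; do echo "waiting for db service"; sleep 2; done'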

Adding Custom Sidecar Containers

A sidecar is a utility container in a pod that is loosely coupled to the main application container. In some cases, you may need to use an extra sidecar container, for example, for monitoring agents or for log collection. If you need to add a custom sidecar container, use the section for defining a custom sidecar container in the values.yaml file (by default, this section is commented out). A concrete sketch follows the templates below.

Artifactory
artifactory:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##
Xray
common:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##
Mission Control
common: 
  ## Add custom sidecar containers 
  customSidecarContainers: | 
    ## Sidecar containers template goes here ##
Distribution
common: 
  ## Add custom sidecar containers 
  customSidecarContainers: | 
    ## Sidecar containers template goes here ##
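
As an illustration, here is a minimal sketch of a sidecar for Artifactory that writes a heartbeat file to a shared scratch volume, the kind of hook a monitoring or log-collection agent might use. The volume, image, and paths are assumptions, not part of the chart.

artifactory:
  ## A scratch volume shared with any other container that mounts it
  customVolumes: |
    - name: shared-scratch
      emptyDir: {}
  customSidecarContainers: |
    - name: "heartbeat-sidecar"
      image: "busybox:1.36"
      command:
        - 'sh'
        - '-c'
        ## Illustrative only: writes a timestamp every 60 seconds
        - 'while true; do date > /scratch/heartbeat; sleep 60; done'
      volumeMounts:
        - name: shared-scratch
          mountPath: /scratch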

Adding Custom Volumes

A Kubernetes volume is essentially a directory that is accessible to all containers running in a pod. If you need to use a custom volume in a custom init or sidecar container, use the sections for defining a custom init or a custom sidecar container in the values.yaml file (by default, these sections are commented out).

Artifactory
artifactory:
  ## Add custom volumes
  customVolumes: |
    ## Custom volume comes here ##
Xray
server:
  ## Add custom volumes
  customVolumes: |
    ## Custom volume comes here ##

Mission Control and Distribution Custom Volumes

To add custom files for your init container, or to make changes to the file system that the Mission Control/Distribution container will see, use the following section for defining custom volumes in the values.yaml. By default, these values are left empty.

Mission Control
common:
  ## Add custom volumes
  customVolumes: |
  #  - name: custom-script
  #    configMap:
  #      name: custom-script

  ## Add custom volumeMounts
  customVolumeMounts: |
  #  - name: custom-script
  #    mountPath: "/scripts/script.sh"
  #    subPath: script.sh
Distribution
common:
  ## Add custom volumes
  customVolumes: |
  #  - name: custom-script
  #    configMap:
  #      name: custom-script

distribution:
  ## Add custom volumeMounts
  customVolumeMounts: |
  #  - name: custom-script
  #    mountPath: "/scripts/script.sh"
  #    subPath: script.sh

distributor:
  ## Add custom volumeMounts
  customVolumeMounts: |
  #  - name: custom-script
  #    mountPath: "/scripts/script.sh"
  #    subPath: script.sh

Overriding the Default System YAML File

There are some advanced use cases where users wish to provide their own system.yaml file to configure the JFrog service. Using this option will override the existing system.yaml in the values.yaml file. There are two ways to override the system.yaml: by using a Custom Init Container and by using an external system.yaml with an existingSecret.

The order of preference would then be as follows.

  1. Custom Init Container

  2. External system.yaml

  3. Default system.yaml in values.yaml

For the Pipelines chart, from chart version 2.2.0, .Values.existingSecret has been replaced by .Values.systemYaml.existingSecret and .Values.systemYaml.dataKey.
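
A sketch of what this looks like in a Pipelines values file (the secret name and data key are assumptions):

systemYaml:
  existingSecret: my-system-yaml-secret
  dataKey: system.yaml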

Using a Custom Init Container

The Custom Init Container uses external sources such as vaults, external repositories, etc. to override the system.yaml file.

The following example is for the Xray chart.

Custom Init Container Example
customInitContainers: |
   - name: "custom-systemyaml-setup"
     image: "{{ .Values.initContainerImage }}"
     imagePullPolicy: "{{ .Values.imagePullPolicy }}"
     command:
       - 'sh'
       - '-c'
       - 'wget -O {{ .Values.xray.persistence.mountPath }}/etc/system.yaml https://<repo-url>/systemyaml'
     volumeMounts:
       - mountPath: "{{ .Values.xray.persistence.mountPath }}"
         name: data-volume

Using an External System YAML File

  1. Create an external system.yaml file for one of the services, for example, Xray, and give it the filename xray-cus-sy.yaml.

    configVersion: 1
    shared:
        logging:
            consoleLog:
                enabled: true
        jfrogUrl: "http://artifactory-artifactory.rt:8082"
        database:
            type: "postgresql"
            driver: "org.postgresql.Driver"
            username: "xray"
            url: "postgres://xray-postgresql:5432/xraydb?sslmode=disable"
    server:
        mailServer: ""
        indexAllBuilds: "true"
  2. Create a Kubernetes secret.

    kubectl create secret generic sy --from-file ./xray-cus-sy.yaml
  3. Now, use that secret in the relevant section.

    systemYamlOverride:
      existingSecret: sy
      dataKey: xray-cus-sy.yaml
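
Then run the install/upgrade with the values file containing this section; for example (a sketch, assuming the override was saved to system-yaml-values.yaml):

helm upgrade --install xray -f system-yaml-values.yaml --namespace xray center/jfrog/xray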

Auto-generated Passwords (Internal PostgreSQL)

The internal PostgreSQL requires one variable to be available during installation or upgrade. If it is not set by the user, a random 10-character alphanumeric string is generated instead; therefore, it is recommended that you set this explicitly during installation and upgrade.

--set postgresql.postgresqlPassword=<value> \

The value should remain the same between upgrades. If it was auto-generated during helm install, the same password will have to be passed on future upgrades.

To read the current password, use the following command (for more information on reading a secret value, see Kubernetes: Decoding a Secret).

POSTGRES_PASSWORD=$(kubectl get secret <release_name>-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

The following parameter can be set during upgrade.

--set postgresql.postgresqlPassword=${POSTGRES_PASSWORD} \
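
Putting it together, a sketch of an upgrade that reuses the auto-generated password (the release name artifactory and the chart reference are assumptions):

POSTGRES_PASSWORD=$(kubectl get secret artifactory-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
helm upgrade --install artifactory --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD} --namespace artifactory center/jfrog/artifactory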

Using Custom Secrets

For Artifactory and Mission Control only

Secrets are Kubernetes objects that are used for storing sensitive data, such as user names and passwords. If you need to add a custom secret in a custom init or sidecar container, use the section for defining custom secrets in the values.yaml file (by default, this section is commented out).

Artifactory
artifactory:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data
Mission Control
common:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data


To use a custom secret, you will need to define a custom volume.

Artifactory
artifactory:
  ## Add custom volumes
  customVolumes: |
    - name: custom-secret
      secret:
        secretName: custom-secret
Mission Control
common:
  ## Add custom volumes
  customVolumes: |
    - name: custom-secret
      secret:
        secretName: custom-secret

To use a volume, you will need to define a volume mount as part of a custom init or sidecar container.

Artifactory
artifactory:
  customSidecarContainers:
    - name: side-car-container
      volumeMounts:
      - name: custom-secret
        mountPath: /opt/custom-secret.yaml
        subPath: custom-secret.yaml
        readOnly: true
Mission Control
common:
  customVolumeMounts:
    - name: custom-secret
      mountPath: /opt/custom-secret.yaml
      subPath: custom-secret.yaml
      readOnly: true


You can configure the sidecar to run as a custom user by setting the following in the container template.

  # Example of running container as root (id 0)
  securityContext:
    runAsUser: 0
    fsGroup: 0

Artifactory High Availability

Artifactory HA Architecture

The Artifactory HA cluster in this chart is made up of the following:

  • A single primary node
  • Two member nodes, which can be resized as needed; load balancing is done to the member nodes only. This leaves the primary node free to handle jobs and tasks and to not be interrupted by inbound traffic.

This can be controlled by the parameter artifactory.service.pool.

  1. If you are using an Artifactory Pro license (which supports a single node only), set artifactory.node.replicaCount=0 in the values.yaml file.
  2. To scale from a single node to multiple nodes (>1), use an Enterprise(+) license and then perform a Helm upgrade (each node requires a separate license).

Artifactory Storage

Artifactory HA supports a wide range of storage back ends (for more information, see Artifactory HA storage options).

In this chart, you will set the type of storage you want using artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is the file-system replication, where the data is replicated to all nodes.

All storage configurations (except NFS) come with a default artifactory.persistence.redundancy parameter. This is used to set the number of replicas of a binary that should be stored on the cluster's nodes. Once this value is set on initial deployment, you cannot update it using Helm. It is recommended to set this to a number greater than half of your cluster's size, and to never scale your cluster down to a size smaller than this number.

Alert: Use a PVC when Using an External Blob Storage

When using external blob storage (for example, AWS S3, Azure blob storage, or Google storage), there is still a need to persist temporary eventual storage in a PVC (Persistent Volume Claims) in cases of loss of connection to the external storage or if the Artifactory pod crashes.

Avoiding the usage of a PVC can lead to data loss in case of unplanned pod termination.

Deploying Artifactory on an OpenShift Cluster and Using the Azure PostgreSQL Database Service

When deploying Artifactory on an OpenShift Cluster while using the Azure PostgreSQL database service, the service requires a TLS encrypted database connection. To learn more, see Metadata Service Troubleshooting.

Using an Existing Volume Claim

Using an Existing Volume Claim for the Primary Node

To use an existing volume claim for the Artifactory primary node storage, you will need to do the following.

  1. Create a persistent volume claim by the name volume-<release-name>-artifactory-ha-primary-0, e.g., volume-myrelease-artifactory-ha-primary-0.

  2. Pass a parameter to helm install and helm upgrade.

    ... --set artifactory.primary.persistence.existingClaim=true
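
For reference, a minimal sketch of a matching PersistentVolumeClaim manifest (the access mode, storage class behavior, and size are assumptions; match them to your environment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-myrelease-artifactory-ha-primary-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi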

Using an Existing Volume Claim for the Member Nodes

To use an existing volume claim for the Artifactory member nodes storage, you will need to do the following.

  1. Create persistent volume claims according to the number of replicas defined at artifactory.node.replicaCount, by the names volume-<release-name>-artifactory-ha-member-<ordinal-number>, e.g., volume-myrelease-artifactory-ha-member-0 and volume-myrelease-artifactory-ha-member-1.

  2. Pass a parameter to helm install and helm upgrade.

    ... --set artifactory.node.persistence.existingClaim=true

Using an Existing Shared Volume Claim

To use an existing volume claim (for data and backup) that is to be shared across all nodes, you will need to do the following.

  1. Create PVCs with ReadWriteMany that match the naming conventions.

    {{ template "artifactory-ha.fullname" . }}-data-pvc-<claim-ordinal>
    {{ template "artifactory-ha.fullname" . }}-backup-pvc-<claim-ordinal>

    Here is an example that shows 2 existing volume claims being used.

    myexample-artifactory-ha-data-pvc-0
    myexample-artifactory-ha-backup-pvc-0
    myexample-artifactory-ha-data-pvc-1
    myexample-artifactory-ha-backup-pvc-1
  2. Set artifactory.persistence.fileSystem.existingSharedClaim.enabled to true, either in the values.yaml file or by passing the following parameters to helm install and helm upgrade.

    --set artifactory.persistence.fileSystem.existingSharedClaim.enabled=true
    --set artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims=2

Adding Licenses

To activate Artifactory HA, you must install an appropriate license as part of the installation. There are three ways to manage the license: via Artifactory UI, REST API, or a Kubernetes Secret.

Specifying multiple licenses

Whether in the Artifactory UI, using the REST API or in the artifactory.cluster.license file, make sure that the licenses are separated by a newline.

The easiest and recommended way is by using the Artifactory UI. Using the Kubernetes Secret or REST API is for advanced users and is better suited for automation.

You should use only one of these methods. Switching between them while a cluster is running might disable your Artifactory HA cluster.


Option A: Using REST API

You can add licenses via the REST API. Note that the REST API uses "\n" for the newlines in the licenses (this is the currently recommended method).

Option B: Using the Artifactory UI

Once the primary cluster is running, open the Artifactory UI and insert the license(s) in the UI. See HA installation and setup for more details.

Enter all of the licenses at once, with each license separated by a newline. If you add the licenses one at a time, you may get redirected to a node without a license and the UI will not load for that node.

Option C: Using a Kubernetes Secret

Important

This method is relevant for initial deployment only. Once Artifactory is deployed, you should not keep passing these parameters, since the license is already persisted into Artifactory's storage (and they will be ignored). Updating the license should be done via Artifactory UI or REST API.


You can deploy the Artifactory license(s) as a Kubernetes Secret. Prepare a text file with the license(s) written in it. If adding multiple licenses, add them all in the same file, and remember to add two new lines between each license block.

  1. Create the Kubernetes secret (assuming the local license file is 'art.lic').

    kubectl create secret generic artifactory-cluster-license --from-file=./art.lic
    
  2. Create a license-values.yaml.

    artifactory:
      license:
        secret: artifactory-cluster-license
        dataKey: art.lic
  3. Install with the license-values.yaml.

    helm upgrade --install jfrog-platform --namespace jfrog-platform center/jfrog/jfrog-platform -f license-values.yaml
Create the Kubernetes Secret as Part of the Helm Release
  1. Create a license-values.yaml.

    artifactory:
      license:
        licenseKey: |-
          <LICENSE_KEY1>
    
    
          <LICENSE_KEY2>
    
    
          <LICENSE_KEY3>
  2. Install with the license-values.yaml.

    helm upgrade --install jfrog-platform --namespace jfrog-platform center/jfrog/jfrog-platform -f license-values.yaml


If you want to keep managing the Artifactory license using the same method, you can use the copyOnEveryStartup example shown in the values.yaml file, sketched below.
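
A minimal sketch of that approach, assuming the license file was bootstrapped to /artifactory_bootstrap (the source path follows the pattern used elsewhere in this chart; verify it against the example in your values.yaml):

artifactory:
  copyOnEveryStartup:
    ## Assumed bootstrap location of the license file
    - source: /artifactory_bootstrap/artifactory.cluster.license
      target: etc/artifactory/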

Scaling the Artifactory Cluster

A key feature in Artifactory HA is the ability to set an initial cluster size using --set artifactory.node.replicaCount=${CLUSTER_SIZE} and, if needed, to resize the cluster.

Before Scaling

When scaling, you need to explicitly pass the database password if the password is an automatically generated one (this is the default with the enclosed PostgreSQL Helm chart).

To get the current database password use the following.

export DB_PASSWORD=$(kubectl get $(kubectl get secret -o name | grep postgresql) -o jsonpath="{.data.postgresql-password}" | base64 --decode)

Important

Use --set postgresql.postgresqlPassword=${DB_PASSWORD} with every scale action to prevent a misconfigured cluster.

To Scale Up:

Assuming that you have a cluster with 2 member nodes, and you want to scale up to 3 member nodes (to a total of 4 nodes), use the following.

# Scale to 4 nodes (1 primary and 3 member nodes)
helm upgrade --install artifactory-ha --set artifactory.node.replicaCount=3 --set postgresql.postgresqlPassword=${DB_PASSWORD} --namespace artifactory-ha center/jfrog/artifactory-ha

To Scale Down:

# Scale down to 2 member nodes
helm upgrade --install artifactory-ha --set artifactory.node.replicaCount=2 --set postgresql.postgresqlPassword=${DB_PASSWORD} --namespace artifactory-ha center/jfrog/artifactory-ha

Because Artifactory runs as a Kubernetes StatefulSet, removing a node will not remove its persistent volume claim. You need to remove it explicitly as follows.

# List PVCs
kubectl get pvc

# Remove the PVC with the highest ordinal!
# In this example, the highest node ordinal was 2, so we need to remove its storage.
kubectl delete pvc volume-artifactory-node-2



Artifactory Advanced Options

Adding Licenses Using Secrets

There are two ways to add licenses using secrets: using an existing Kubernetes secret, or creating the secret as part of the Helm release.

These methods are relevant for initial deployment only. Once Artifactory is deployed, you should not keep passing these parameters, since the license is already persisted into Artifactory's storage (they will be ignored). Updating the license should be done via the Artifactory UI or REST API. If you want to keep managing the Artifactory license using the same method, you can use the copyOnEveryStartup example shown in the values.yaml file.

Creating a License Using an Existing Kubernetes Secret

You can deploy the Artifactory license as a Kubernetes secret by preparing a text file with the license written in it and creating a Kubernetes secret from it.

# Create the Kubernetes secret (assuming the local license file is 'art.lic')
kubectl create secret generic -n artifactory artifactory-license --from-file=./art.lic

# Pass the license to helm
helm upgrade --install artifactory --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic --namespace artifactory center/jfrog/artifactory

Creating a Secret as Part of the Helm Release

To create a secret as part of the Helm release, update the values.yaml and then run the installer.

artifactory:
  license:
    licenseKey: |-
      <LICENSE_KEY>

helm upgrade --install artifactory -f values.yaml --namespace artifactory center/jfrog/artifactory

Security-related Issues

The following section addresses security-related issues in the Helm Charts installation, such as managing subscriptions and secrets, network policy, and more.

Customizing the Database Password

You can override the specified database password (set in values.yaml) by passing it as a parameter in the install command line.

helm upgrade --install artifactory --namespace artifactory --set postgresql.postgresqlPassword=12_hX34qwerQ2 center/jfrog/artifactory

You can customize other parameters in the same way, by passing them in the helm install command line.

Creating an Ingress Object

To get Helm to create an ingress object with a hostname, add these lines to the artifactory-ingress-values.yaml file and use it with your helm install or upgrade.

ingress:
  enabled: true
  hosts:
    - artifactory.company.com
artifactory:
  service:
    type: NodePort
nginx:
  enabled: false

helm upgrade --install artifactory -f artifactory-ingress-values.yaml --namespace artifactory center/jfrog/artifactory

If your cluster allows automatic creation/retrieval of TLS certificates (for example, by using cert-manager), refer to the documentation for that mechanism. To configure TLS manually, do the following.

  1. Create/retrieve a key and certificate pair for the address(es) you wish to protect.

  2. Next, create a TLS secret in the namespace.

    kubectl create secret tls artifactory-tls --cert=path/to/tls.cert --key=path/to/tls.key
  3. Include the secret's name, along with the desired hostnames, in the Artifactory Ingress TLS section of your custom values.yaml file.

    ingress:
        ## If true, Artifactory Ingress will be created
        ##
        enabled: true
    
        ## Artifactory Ingress hostnames
        ## Must be provided if Ingress is enabled
        ##
        hosts:
          - artifactory.domain.com
        annotations:
          kubernetes.io/tls-acme: "true"
        ## Artifactory Ingress TLS configuration
        ## Secrets must be manually created in the namespace
        ##
        tls:
          - secretName: artifactory-tls
            hosts:
              - artifactory.domain.com
Using Ingress Annotations

The following is an example of an Ingress annotation that enables Artifactory to work as a Docker Registry using the Repository Path method. For more information, see Artifactory as Docker Registry.

ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts:
    - myhost.example.com
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: "0"
    ingress.kubernetes.io/proxy-read-timeout: "600"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/(v2)/token /artifactory/api/docker/null/v2/token;
      rewrite ^/(v2)/([^\/]*)/(.*) /artifactory/api/docker/$2/$1/$3;
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  tls:
    - hosts:
      - "myhost.example.com"

If you are using Artifactory as an SSO provider (e.g., with Xray), you will need to use the following annotations, changing the domain to your own.

..
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_pass_header   Server;
        proxy_set_header    X-JFrog-Override-Base-Url https://<artifactory-domain>;
Adding Additional Ingress Rules

You also have the option of adding additional Ingress rules to the Artifactory Ingress. An example for this use case would be to route the /xray path to Xray. To do that, simply add the following to the artifactory-values.yaml file and run the upgrade.

ingress:
  enabled: true
  defaultBackend:
    enabled: false
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite "(?i)/xray(/|$)(.*)" /$2 break;

  additionalRules: |
    - host: <MY_HOSTNAME>
      http:
        paths:
          - path: /
            backend:
              serviceName: <XRAY_SERVER_SERVICE_NAME>
              servicePort: <XRAY_SERVER_SERVICE_PORT>
          - path: /xray
            backend:
              serviceName: <XRAY_SERVER_SERVICE_NAME>
              servicePort: <XRAY_SERVER_SERVICE_PORT>
          - path: /artifactory
            backend:
              serviceName: {{ template "artifactory.nginx.fullname" . }}
              servicePort: {{ .Values.nginx.externalPortHttp }}

helm upgrade --install artifactory center/jfrog/artifactory -f artifactory-values.yaml
Using a Dedicated Ingress Object for the Replicator Service

You also have the option of adding an additional Ingress object to the Replicator service. An example for this use case could be routing the /replicator/ path to Artifactory. To do that, simply add the following to the artifactory-values.yaml file.

artifactory:
  replicator:
    enabled: true
    ingress:
      name: <MY_INGRESS_NAME>
      hosts:
        - myhost.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/proxy-buffering: "off"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          chunked_transfer_encoding on;
      tls:
        - hosts:
          - "myhost.example.com"
          secretName: <CUSTOM_SECRET>
Running Ingress Behind Another Load Balancer

If you are running a load balancer in front of the Nginx Ingress Controller that is used to offload TLS, or if you are setting X-Forwarded-* headers, you might want to enable the use-forwarded-headers=true option. Otherwise, Nginx will fill those headers with the request information it receives from the external load balancer.

To enable this option with the helm installation, use the following command.

helm upgrade --install nginx-ingress --namespace nginx-ingress stable/nginx-ingress --set-string controller.config.use-forwarded-headers=true

To enable this option with the helm upgrade, use the following command.

helm upgrade nginx-ingress --set-string controller.config.use-forwarded-headers=true stable/nginx-ingress

Alternatively, create a values.yaml file with the following content, then install nginx-ingress with the values file you created.

controller:
  config:
    use-forwarded-headers: "true"

helm upgrade --install nginx-ingress --namespace nginx-ingress center/kubernetes-ingress-nginx/ingress-nginx -f values.yaml

Log Analytics

FluentD, Prometheus and Grafana

To configure Prometheus and Grafana to gather metrics from Artifactory through the use of FluentD, refer to the log analytics repository. The repository contains a file artifactory-values.yaml that can be used to deploy Prometheus, Service Monitor, and Grafana with this chart.

Configuring the NetworkPolicy

The NetworkPolicy specifies which Ingress and Egress are allowed in this namespace. It is encouraged to be more specific whenever possible to increase system security.

In the networkpolicy section of the values.yaml file you can specify a list of NetworkPolicy objects.

  • For podSelector, ingress, and egress, if nothing is provided, a default of {} is applied, which allows everything.
  • A full (but very wide open) example that results in 2 NetworkPolicy objects being created:

    networkpolicy:
      # Allows all Ingress and Egress to/from Artifactory.
      - name: artifactory
        podSelector:
          matchLabels:
            app: artifactory
        egress:
        - {}
        ingress:
        - {}
      # Allows connectivity from artifactory pods to postgresql pods, but no traffic leaving postgresql pod.
      - name: postgres
        podSelector:
          matchLabels:
            app: postgresql
        ingress:
        - from:
          - podSelector:
              matchLabels:
                app: artifactory

Advanced Storage Options

The filestore is where binaries are physically stored, and it is one of the two stores essential for Artifactory's storage and management resources. Artifactory supports a wide range of storage back ends. This section details some of the advanced options for Artifactory storage; for more information, see Artifactory Filestore options.

Setting the Artifactory Persistence Storage Type

In the Helm chart, set the type of storage you want with artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is the file-system replication, where the data is replicated to all nodes.

Important

All storage configurations, except Network File System (NFS), come with a default artifactory.persistence.redundancy parameter. This is used to set how many replicas of a binary should be stored on the cluster's nodes. Once this value is set on initial deployment, you cannot update it using Helm. It is recommended to set this to a number greater than half of your cluster's size, and to never scale your cluster down to a size smaller than this number.

To use your selected bucket as the HA's filestore, pass the filestore's parameters to the Helm installation/upgrade.

Setting up the Network File System (NFS) Storage

To use an NFS server as your cluster's storage, you will need to do the following.

  1. Set up an NFS server and get its IP as NFS_IP.

  2. Create data and backup directories on the NFS exported directory with write permissions to all.

  3. Pass NFS parameters to the Helm installation/upgrade as follows.

    artifactory:
     persistence:
       type: nfs
       nfs:
         ip: ${NFS_IP}
Configuring the NFS Persistence Type

In some cases, it is not possible for the Helm chart to set up your NFS mounts automatically for Artifactory. In these cases (for example, with AWS EFS), use artifactory.persistence.type=file-system, even though your underlying persistence is actually a network file system.

The same applies when using a slow storage device (such as cheap disks) as your main storage solution for Artifactory. Serving highly used files from a network file system or slow storage can take time, which is why you would want a cache filesystem stored locally on fast disks (such as SSDs).

  1. Create a values.yaml file.

  2. Set up your volume mount to your fast storage device as follows.

    artifactory:
      ## Set up your volume mount to your fast storage device
      customVolumes: |
        - name: my-cache-fast-storage
          persistentVolumeClaim:
            claimName: my-cache-fast-storage-pvc
      ## Mount the fast storage device into the container
      customVolumeMounts: |
        - name: my-cache-fast-storage
          mountPath: /my-fast-cache-mount
      ## Enable caching and configure the cache directory
      persistence:
        cacheProviderDir: /my-fast-cache-mount
        fileSystem:
          cache:
            enabled: true
  3. Install Artifactory with the values file you created.

    Artifactory
    helm upgrade --install artifactory center/jfrog/artifactory --namespace artifactory -f values.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha center/jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml

Google Storage

You can use a Google Storage bucket as the cluster's filestore by passing the Google Storage parameters below to helm install and helm upgrade. For more information, see Google Storage Binary Provider.

artifactory:
 persistence:
   type: google-storage
   googleStorage:
     identity: ${GCP_ID}
     credential: ${GCP_KEY}

To use a GCP service account, Artifactory requires a gcp.credentials.json file in the same directory as the binarystore.xml file. This can be generated by running the following.

gcloud iam service-accounts keys create <file_name> --iam-account <service_account_name>

This will produce the following, which can be saved to a file or copied into your values.yaml.

{
   "type": "service_account",
   "project_id": "<project_id>",
   "private_key_id": "?????",
   "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
   "client_email": "???@j<project_id>.iam.gserviceaccount.com",
   "client_id": "???????",
   "auth_uri": "https://accounts.google.com/o/oauth2/auth",
   "token_uri": "https://oauth2.googleapis.com/token",
   "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
   "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
}

One option is to create your own secret and pass it to your helm install command in a custom values.yaml.

# Create the Kubernetes secret from the file you created earlier.
# IMPORTANT: The file must be called "gcp.credentials.json" because this is used later as the secret key!
kubectl create secret generic artifactory-gcp-creds --from-file=./gcp.credentials.json

Set this secret in your custom values.yaml.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        customSecretName: artifactory-gcp-creds

Another option is to put your generated config directly in your custom values.yaml and then a secret will be created from that.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        config: |
          {
             "type": "service_account",
             "project_id": "<project_id>",
             "private_key_id": "?????",
             "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
             "client_email": "???@j<project_id>.iam.gserviceaccount.com",
             "client_id": "???????",
             "auth_uri": "https://accounts.google.com/o/oauth2/auth",
             "token_uri": "https://oauth2.googleapis.com/token",
             "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
             "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
          }

AWS S3

You can use an AWS S3 bucket as the cluster's filestore by passing the AWS S3 parameters below to helm install and helm upgrade. For more information, see AWS S3 Binary Provider.

When using the AWS S3 persistence type, you will not be able to provide an IAM role at the pod level. To grant permissions to Artifactory using an IAM role, you will have to attach the IAM role to the machine(s) on which Artifactory is running. This is because the AWS S3 template uses the JetS3t library to interact with AWS. If you want to grant an IAM role at the pod level, see the AWS S3 V3 section below.

# Using explicit credentials:
artifactory:
 persistence:
   type: aws-s3
   awsS3:
     identity: ${AWS_ACCESS_KEY_ID}
     credential: ${AWS_SECRET_ACCESS_KEY}
     region: ${AWS_REGION}
     endpoint: ${AWS_S3_ENDPOINT}
 
# Using an existing IAM role
artifactory:
 persistence:
   type: aws-s3
   awsS3:
     roleName: ${AWS_ROLE_NAME}
     region: ${AWS_REGION}
     endpoint: ${AWS_S3_ENDPOINT}

Verify that the S3 endpoint and region match. For more information, see AWS documentation on endpoints.

AWS S3 V3

To use an AWS S3 bucket as the cluster's filestore and access it with the official AWS SDK, see the S3 Official SDK Binary Provider. This filestore template uses the official AWS SDK, unlike the aws-s3 implementation, which uses the JetS3t library. Use this template if you want to attach an IAM role to the Artifactory pod directly (as opposed to attaching it to the machine(s) that Artifactory runs on).

You will need to combine this with a Kubernetes mechanism for attaching IAM roles to pods, such as kube2iam.

Pass the AWS S3 V3 parameters and the annotation pointing to the IAM role (when using an IAM role; this annotation is kube2iam-specific and may vary depending on the implementation) to helm install and helm upgrade.

# Using explicit credentials:
artifactory:
 persistence:
   type: aws-s3-v3
   awsS3V3:
     region: ${AWS_REGION}
     bucketName: ${AWS_S3_BUCKET_NAME}
     identity: ${AWS_ACCESS_KEY_ID}
     credential: ${AWS_SECRET_ACCESS_KEY}
     useInstanceCredentials: false

# Using an existing IAM role
artifactory:
  annotations:
    iam.amazonaws.com/role: ${AWS_IAM_ROLE_ARN}
  persistence:
    type: aws-s3-v3
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}

To enable Direct Cloud Storage Download, use the following.

artifactory:
 persistence:
   awsS3V3:
     enableSignedUrlRedirect: true

Microsoft Azure Blob Storage

You can use Azure Blob Storage as the cluster's filestore by passing the Azure Blob Storage parameters to helm install and helm upgrade. For more information, see Azure Blob Storage Binary Provider.

artifactory:
 persistence:
   type: azure-blob
   azureBlob:
     accountName: ${AZURE_ACCOUNT_NAME}
     accountKey: ${AZURE_ACCOUNT_KEY}
     endpoint: ${AZURE_ENDPOINT}
     containerName: ${AZURE_CONTAINER_NAME}

To use a persistent volume claim as cache dir together with Azure Blob Storage, pass the following parameters as well to helm install and helm upgrade (verify that mountPath and cacheProviderDir point to the same location).

artifactory:
 persistence:
   existingClaim: ${YOUR_CLAIM}
   mountPath: /opt/cache-dir
   cacheProviderDir: /opt/cache-dir


Custom binarystore.xml

There are two options for providing a custom binarystore.xml.

  1. Edit it directly in the values.yaml.

    artifactory:
      persistence:
        binarystoreXml: |
          <!-- The custom XML snippet -->
          <config version="v1">
              <chain template="file-system"/>
          </config>
  2. Create your own secret and pass it to your helm install command.

    # Prepare your custom Secret file (custom-binarystore.yaml)
    kind: Secret
    apiVersion: v1
    metadata:
      name: custom-binarystore
      labels:
        app: artifactory
        chart: artifactory
    stringData:
      binarystore.xml: |-
          <!-- The custom XML snippet -->
          <config version="v1">
              <chain template="file-system"/>
          </config>
  3. Next, create a secret from the file.

    kubectl apply -n artifactory -f ./custom-binarystore.yaml
  4. Pass the secret to your helm install command.

    Artifactory
    helm upgrade --install artifactory --namespace artifactory --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore center/jfrog/artifactory
    Artifactory HA
    helm upgrade --install artifactory-ha --namespace artifactory-ha --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore center/jfrog/artifactory-ha

Adding Extensions

Extensions (also known as plugins) are software components that extend and integrate with your system. Most cluster administrators will use a hosted or distribution instance of Kubernetes. In this section we have included some of the extensions you can use with Artifactory using Helm Charts.

Using Logger Sidecars

Logger sidecars enable you to tail various logs from Artifactory (see the available values in the values.yaml file).

To get a list of containers in the pod do the following.

kubectl get pods -n <NAMESPACE> <POD_NAME> -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'

To view specific logs, use the following.

kubectl logs -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>
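
The loggers themselves are enabled in the values file. A minimal sketch (the exact log file names available are listed in the chart's values.yaml):

artifactory:
  loggers:
    - artifactory-service.log
    - access-service.log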

Adding User Plugins

User plugins enable you to extend Artifactory's behavior. With the Helm chart, you deploy them by packaging the plugin files as Kubernetes secrets.

  1. Create a secret with Artifactory User Plugins using the following command.

    # Secret with a single user plugin
    kubectl create secret generic archive-old-artifacts --from-file=archiveOldArtifacts.groovy --namespace=artifactory

    # Secret with a single user plugin and its configuration file
    kubectl create secret generic webhook --from-file=webhook.groovy --from-file=webhook.config.json.sample --namespace=artifactory
  2. Add the plugin secret names to the plugins.yaml file as follows.

    artifactory:
      userPluginSecrets:
        - archive-old-artifacts
        - webhook
  3. You can now pass the plugins.yaml file you created to the Helm install command to deploy Artifactory with user plugins as follows.

    Artifactory
    helm upgrade --install artifactory center/jfrog/artifactory --namespace artifactory -f plugins.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha center/jfrog/artifactory-ha --namespace artifactory-ha -f plugins.yaml

Alternatively, you may be in a situation in which you would like to create a secret in a Helm chart that depends on this chart. In this scenario, the name of the secret is likely generated dynamically via template functions, so passing a statically named secret is not possible.

In this case, the Helm chart supports evaluating strings as templates via the tpl function; simply pass the raw string containing the templating language used to name your secret as a value instead, by adding the following to your chart's values.yaml file.

artifactory: # Name of the artifactory dependency
  artifactory:
    userPluginSecrets:
      - '{{ template "my-chart.fullname" . }}'

Using ConfigMaps to Store Non-confidential Data

A configMap is an API object that is used to store non-confidential data in key-value pairs. If you want to mount a custom file to Artifactory, either an init shell script or a custom configuration file (such as logback.xml), you can use this option.

Creating Custom configMaps for Artifactory

Create a configmaps.yaml file as per the example below, then use it with your Helm installation/upgrade. This will, in turn, do the following:

  1. Create a volume pointing to the configMap with the name artifactory-configmaps.

  2. Mount this configMap onto /tmp/my-config-map using customVolumeMounts.

  3. Set the shell script we mounted as the postStartCommand.

  4. Copy the logback.xml file to its proper location in the $ARTIFACTORY_HOME/etc directory.

    artifactory:
      configMaps: |
        logback.xml: |
          <configuration debug="false">
              <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
                  <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
                      <layout class="org.artifactory.logging.layout.BackTracePatternLayout">
                          <pattern>%date [%-5level] \(%-20c{3}:%L\) %message%n</pattern>
                      </layout>
                  </encoder>
              </appender>
    
              <logger name="/artifactory">
                  <level value="INFO"/>
                  <appender-ref ref="CONSOLE"/>
              </logger>
              <logger name="org.eclipse.jetty">
                  <level value="WARN"/>
                  <appender-ref ref="CONSOLE"/>
              </logger>
          </configuration>
    
        my-custom-post-start-hook.sh: |
          echo "This is my custom post start hook"
    
      customVolumeMounts: |
        - name: artifactory-configmaps
          mountPath: /tmp/my-config-map
    
      postStartCommand: |
        chmod +x /tmp/my-config-map/my-custom-post-start-hook.sh;
        /tmp/my-config-map/my-custom-post-start-hook.sh;
    
      copyOnEveryStartup:
        - source: /tmp/my-config-map/logback.xml
          target: etc/
    Artifactory
    helm upgrade --install artifactory -f configmaps.yaml --namespace artifactory center/jfrog/artifactory
    Artifactory HA
    helm upgrade --install artifactory-ha -f configmaps.yaml --namespace artifactory-ha center/jfrog/artifactory-ha

Creating a Custom nginx.conf Using Nginx

  1. Create a configMap from your nginx.conf file:

    kubectl create configmap nginx-config --from-file=nginx.conf
  2. Pass the configMap to the Helm installation:

    Artifactory
    helm upgrade --install artifactory --set nginx.customConfigMap=nginx-config --namespace artifactory center/jfrog/artifactory
    Artifactory HA
    helm upgrade --install artifactory-ha --set nginx.customConfigMap=nginx-config --namespace artifactory-ha center/jfrog/artifactory-ha

Using an External Database

For production-grade installations, it is recommended to use an external PostgreSQL with a static password.

PostgreSQL

There are cases where you will want to use an external PostgreSQL with a different database name, e.g., my-artifactory-db; in this case, you will need to set a custom PostgreSQL connection URL, where my-artifactory-db is the name of the database.

This can be done with the following parameters.

postgresql:
 enabled: false
database:
 type: postgresql
 driver: org.postgresql.Driver
 url: 'jdbc:postgresql://${DB_HOST}:${DB_PORT}/my-artifactory-db'
 user: <DB_USER>
 password: <DB_PASSWORD>

You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.

Other Database Types

There are cases where you will want to use a different database and not the enclosed PostgreSQL. For more information, see configuring the database.

The official Artifactory Docker images include the PostgreSQL database driver. For other database types, you will have to add the relevant database driver to Artifactory's tomcat/lib.

This can be done with the following parameters.

# Make sure your Artifactory Docker image has the MySQL database driver in it
postgresql:
 enabled: false
database:
 type: mysql
 driver: com.mysql.jdbc.Driver
 url: <DB_URL>
 user: <DB_USER>
 password: <DB_PASSWORD>
artifactory:
 preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && wget -O /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://jcenter.bintray.com/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"

You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.

Configuring Artifactory with an External Oracle Database

To use Artifactory with an Oracle database, the required Oracle Instant Client library files and libaio must be copied into the Tomcat lib directory. In addition, you will need to set the LD_LIBRARY_PATH environment variable.

  1. Create a value file with the configuration.

    postgresql:
      enabled: false
    database:
      type: oracle
      driver: oracle.jdbc.OracleDriver
      url: <DB_URL>
      user: <DB_USER>
      password: <DB_PASSWORD>
    artifactory:
      preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && wget -O instantclient-basic-linux.x64-19.6.0.0.0dbru.zip https://download.oracle.com/otn_software/linux/instantclient/19600/instantclient-basic-linux.x64-19.6.0.0.0dbru.zip && unzip -jn instantclient-basic-linux.x64-19.6.0.0.0dbru.zip && wget -O libaio1_0.3.110-3_amd64.deb http://ftp.br.debian.org/debian/pool/main/liba/libaio/libaio1_0.3.110-3_amd64.deb &&  dpkg-deb -x libaio1_0.3.110-3_amd64.deb . && cp lib/x86_64-linux-gnu/* ."  
      extraEnvironmentVariables:
      - name: LD_LIBRARY_PATH
        value: /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib
  2. Install Artifactory with the values file you created.

    Artifactory
    helm upgrade --install artifactory center/jfrog/artifactory --namespace artifactory -f values-oracle.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha center/jfrog/artifactory-ha --namespace artifactory-ha -f values-oracle.yaml
  3. If this is an upgrade from 6.x to 7.x, add the same preStartCommand under artifactory.migration.preStartCommand.

Using a Pre-existing Kubernetes Secret

If you store your database credentials in a pre-existing Kubernetes Secret, you can specify them via database.secrets instead of database.user and database.password.

# Create a secret containing the database credentials
postgresql:
 enabled: false
database:
 secrets:
  user:
    name: "my-secret"
    key: "user"
  password:
    name: "my-secret"
    key: "password"
  url:
    name: "my-secret"
    key: "url"

Infrastructure Customization

Artifactory Memory and CPU Resources

The Artifactory Helm chart comes with support for configured resource requests and limits to Artifactory, Nginx and PostgreSQL. By default, these settings are commented out. It is highly recommended to set these so you have full control of the allocated resources and limits. Artifactory java memory parameters can (and should) also be set to match the allocated resources with artifactory.javaOpts.xms and artifactory.javaOpts.xmx.

Artifactory
# Example of setting resource requests and limits to all pods (including passing java memory settings to Artifactory)
artifactory:
 javaOpts:
   xms: "1g"
   xmx: "4g"
 resources:
   requests:
     memory: "1Gi"
     cpu: "500m"
   limits:
     memory: "4Gi"
     cpu: "2"
nginx:
 resources:
  requests:
    memory: "250Mi"
    cpu: "100m"
  limits:
    memory: "500Mi"
    cpu: "250m"
Artifactory HA
# Example of setting resource requests and limits to all pods (including passing java memory settings to Artifactory)
artifactory:
 primary:
   resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "2"
 node:
   resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "2"
   javaOpts:
     xms: "1g"
     xmx: "4g"
initContainers:
 resources:
  requests:
    memory: "64Mi"
    cpu: "10m"
  limits:
    memory: "128Mi"
    cpu: "250m"
postgresql:
 resources:
  requests:
    memory: "512Mi"
    cpu: "200m"
  limits:
    memory: "1Gi"
    cpu: "1"
nginx:
 resources:
  requests:
    memory: "250Mi"
    cpu: "100m"
  limits:
    memory: "500Mi"
    cpu: "250m"

Although it is possible to set resource limits and requests this way, it is recommended to use the pre-built values files for small, medium, and large installations and change them according to your needs, if necessary.

Custom Docker Registry

If you need to pull your Docker images from a private registry, you will need to create a Kubernetes Docker registry secret and pass it to Helm during installation/upgrade.

# Create a Docker registry secret called 'regsecret'
kubectl create secret docker-registry regsecret --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

helm upgrade --install artifactory --set imagePullSecrets=regsecret --namespace artifactory center/jfrog/artifactory

Bootstrapping Artifactory 

You can bootstrap the Artifactory admin password and the Artifactory configuration when using Helm Charts. 

Bootstrapping the Artifactory Admin Password

You can bootstrap the admin user password as described in Recreating the Default Admin User.

  1. Create admin-creds-values.yaml and provide the IP (by default 127.0.0.1) and password.

    artifactory:
      admin:
        ip: "<IP_RANGE>" # Example: "*" to allow access from anywhere
        username: "admin"
        password: "<PASSWD>"
  2. Apply the admin-creds-values.yaml file.

    Artifactory
    helm upgrade --install artifactory --namespace artifactory center/jfrog/artifactory -f admin-creds-values.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha --namespace artifactory-ha center/jfrog/artifactory-ha -f admin-creds-values.yaml
  3. Restart the Artifactory pod (kubectl delete pod <pod_name>).

Bootstrapping the Artifactory Configuration

You can use Helm Charts to bootstrap the Artifactory global and security configuration. To do so, you will need an Artifactory subscription.

  1. Create a bootstrap-config.yaml with an artifactory.config.import.xml and a security.import.xml as shown below.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-release-bootstrap-config
    data:
      artifactory.config.import.xml: |
        <config contents>
      security.import.xml: |
        <config contents>
  2. Create a configMap in Kubernetes.

    kubectl apply -f bootstrap-config.yaml
  3. Pass the configMap to Helm using one of the following options.

    Artifactory
    helm upgrade --install artifactory --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory center/jfrog/artifactory
    Artifactory HA
    helm upgrade --install artifactory-ha --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory-ha center/jfrog/artifactory-ha

    or

    Artifactory
    helm upgrade --install artifactory --set artifactory.license.licenseKey=<LICENSE_KEY>,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory center/jfrog/artifactory
    
    Artifactory HA
    helm upgrade --install artifactory-ha --set artifactory.license.licenseKey=<LICENSE_KEY>,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory-ha center/jfrog/artifactory-ha


    For more information, see Bootstrapping the Artifactory Global Configuration and Bootstrapping the Artifactory Security Configuration.
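For the first option above, the referenced license secret can be created from a license file, as in the following sketch (creating the secret from a file named art.lic produces the art.lic data key that the command expects):

kubectl create secret generic artifactory-license --from-file=art.lic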

Copying Configuration Files for Every Startup 

Files stored in the /artifactory-extra-conf directory are only copied to the ARTIFACTORY_HOME/etc directory upon the first startup. In some cases, you might want your configuration files to be copied to the ARTIFACTORY_HOME/etc directory on every startup.

For example:

The binarystore.xml file: If you use the default behavior, your binarystore.xml configuration will only be copied on the first startup, which means that any changes you make to the binarystore.xml configuration over time will not be applied.

  1. To make sure your changes are applied on every startup, create a YAML block with the following values:

    artifactory:
      copyOnEveryStartup:
        - source: /artifactory_bootstrap/binarystore.xml
          target: etc/artifactory/
  2. Install the Helm chart with the values file you created:

    Artifactory
    helm upgrade --install artifactory --namespace artifactory center/jfrog/artifactory -f values.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha --namespace artifactory-ha center/jfrog/artifactory-ha -f values.yaml

The same applies to any custom configuration file you use to configure Artifactory, such as logback.xml:

  1. Create a configMap with your logback.xml configuration (a sketch of such a configMap appears after these steps).

  2. Next, create a values.yaml file with the following values:

    artifactory:
      ## Create a volume pointing to the config map with your configuration file
      customVolumes: |
        - name: logback-xml-configmap
          configMap:
            name: logback-xml-configmap
      customVolumeMounts: |
        - name: logback-xml-configmap
          mountPath: /tmp/artifactory-logback/
      copyOnEveryStartup:
        - source: /tmp/artifactory-logback/*
          target: etc/
  3. Install the Helm chart with the values file you created:

    Artifactory
    helm upgrade --install artifactory --namespace artifactory center/jfrog/artifactory -f values.yaml
    Artifactory HA
    helm upgrade --install artifactory-ha --namespace artifactory-ha center/jfrog/artifactory-ha -f values.yaml
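For step 1, a minimal sketch of the configMap (the name logback-xml-configmap matches the customVolumes example above; the logback.xml contents are your own):

apiVersion: v1
kind: ConfigMap
metadata:
  name: logback-xml-configmap
data:
  logback.xml: |
    <your logback.xml contents>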

Monitoring and Logging

Artifactory JMX Configuration

Artifactory exposes MBeans under the org.jfrog.artifactory domain, which enables you to monitor repositories, executor pools, storage and HTTP connection pools. To learn more, see Artifactory JMX MBeans.

To enable JMX in your deployment, use the following configuration.

Artifactory
artifactory:
  javaOpts:
    jmx:
      enabled: true
Artifactory HA
artifactory:
  primary:
    javaOpts:
      jmx:
        enabled: true
  node:
    javaOpts:
      jmx:
        enabled: true

This enables access to Artifactory with JMX on the default port 9010. To change the port, use the setting artifactory.javaOpts.jmx.port.
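For example, a sketch of enabling JMX on a custom port (9012 is an arbitrary choice):

artifactory:
  javaOpts:
    jmx:
      enabled: true
      port: 9012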

To connect to Artifactory using JMX with jconsole (or any similar tool) installed on your computer, follow these steps.

  1. Enable JMX as described above and change the Artifactory service to be of type LoadBalancer.

    Artifactory
    artifactory:
      service:
        type: LoadBalancer
      javaOpts:
        jmx:
          enabled: true
    Artifactory HA
    artifactory:
      service:
        type: LoadBalancer
      primary:
        javaOpts:
          jmx:
            enabled: true
      node:
        javaOpts:
          jmx:
            enabled: true
    
  2. The default setting for java.rmi.server.hostname is the service name (this is also configurable using artifactory.javaOpts.jmx.host). To connect to Artifactory with jconsole, map the Artifactory Kubernetes service IP to the service name in your hosts file, as per the example below.

    Artifactory
    <artifactory-service-ip> artifactory-<release-name>
    Artifactory HA
    <artifactory-primary-service-ip>    artifactory-ha-<release-name>-primary
    <artifactory-node-service-ip>       <release-name>
  3. Launch jconsole with the service address and port.

    Artifactory
    jconsole artifactory-<release-name>:<jmx-port>
    Artifactory HA
    jconsole artifactory-ha-<release-name>-primary:<primary-jmx-port>
    jconsole <release-name>:<node-jmx-port>

Artifactory Filebeat

If you want to collect logs from your Artifactory installation and send them to a central log collection solution like ELK, you can use this option.

Create a filebeat.yaml file with the following content.

filebeat:
  enabled: true
  logstashUrl: <YOUR_LOGSTASH_URL>
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"

Optionally, you can customize the filebeat.yaml to send output to a different location, and then use it with your Helm installation/upgrade.

filebeat:
  enabled: true
  filebeatYml: |
    <YOUR_CUSTOM_FILEBEAT_YML>

helm upgrade --install artifactory -f filebeat.yaml --namespace artifactory center/jfrog/artifactory
helm upgrade --install artifactory-ha -f filebeat.yaml --namespace artifactory-ha center/jfrog/artifactory-ha


This will begin sending your Artifactory logs to the log aggregator of your choice, based on your configuration in the filebeatYml.
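As a sketch of what a custom filebeatYml might look like (the log path and the Logstash endpoint are assumptions; adjust them to your Artifactory home directory and log collector):

filebeat:
  enabled: true
  filebeatYml: |
    filebeat.inputs:
      - type: log
        paths:
          # Assumed Artifactory log location inside the container
          - /opt/jfrog/artifactory/var/log/*.log
    output.logstash:
      hosts: ["my-logstash:5044"]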

Installing Artifactory and Artifactory HA with Nginx and Terminate SSL in Nginx Service (LoadBalancer)

You can install the Helm chart while performing SSL offload in the LoadBalancer layer of Nginx, for example, by using AWS ACM certificates to do SSL offload in the load balancer layer. Simply add the following to an artifactory-ssl-values.yaml file, and then use it with your Helm installation/upgrade.

nginx:
    https:
      enabled: false
    service:
      ssloffload: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xx-xxxx:xxxxxxxx:certificate/xxxxxxxxxxxxx"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" 
helm upgrade --install artifactory -f artifactory-ssl-values.yaml --namespace artifactory center/jfrog/artifactory
helm upgrade --install artifactory-ha -f artifactory-ssl-values.yaml --namespace artifactory-ha center/jfrog/artifactory-ha


Advanced JFrog Platform Installation

In cases where you have already installed Artifactory on your system and wish to add some or all of the JFrog products, you can use the JFrog Platform Installer to perform an advanced installation.

  1. Create a custom-values.yaml, which you will then pass during installation.
  2. In this file, disable the Artifactory installation and enable other products to install them on your system in the following way.

    artifactory:
      enabled: false
    xray:
      enabled: true
    mission-control:
      enabled: true
  3. Run the installation.

    helm upgrade --install jfrog-platform --namespace jfrog-platform center/jfrog/jfrog-platform -f custom-values.yaml

For more information, see Artifactory Helm Chart Installation - Additional Options.



Advanced Database Options for Mission Control

Deploying PostgreSQL

There are cases where you will want to use an external PostgreSQL and not the bundled PostgreSQL. For more information, see Creating the Mission Control PostgreSQL Database.

This can be done with the following parameters.

...
--set postgresql.enabled=false \
--set database.url=${DB_URL} \
--set database.user=${DB_USER} \
--set database.password=${DB_PASSWORD} \
...

You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.
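Putting it together, a sketch of a full install command (the release name, namespace, and connection variables are assumptions):

helm upgrade --install mission-control --namespace mission-control center/jfrog/mission-control \
  --set postgresql.enabled=false \
  --set database.url=${DB_URL} \
  --set database.user=${DB_USER} \
  --set database.password=${DB_PASSWORD}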

Using Existing Secrets for PostgreSQL Connection Details

You can use existing secrets for managing the database connection details. Pass them to the install command with the following parameters.

export POSTGRES_USERNAME_SECRET_NAME=
export POSTGRES_USERNAME_SECRET_KEY=
export POSTGRES_PASSWORD_SECRET_NAME=
export POSTGRES_PASSWORD_SECRET_KEY=
...
    --set database.secrets.user.name=${POSTGRES_USERNAME_SECRET_NAME} \
    --set database.secrets.user.key=${POSTGRES_USERNAME_SECRET_KEY} \
    --set database.secrets.password.name=${POSTGRES_PASSWORD_SECRET_NAME} \
    --set database.secrets.password.key=${POSTGRES_PASSWORD_SECRET_KEY} \
...
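If such secrets do not exist yet, one way to create them is shown in the following sketch (the secret name postgres-creds and the key names are arbitrary, and must match the parameters above):

kubectl create secret generic postgres-creds \
  --from-literal=username=<db-user> \
  --from-literal=password=<db-password>

export POSTGRES_USERNAME_SECRET_NAME=postgres-creds
export POSTGRES_USERNAME_SECRET_KEY=username
export POSTGRES_PASSWORD_SECRET_NAME=postgres-creds
export POSTGRES_PASSWORD_SECRET_KEY=password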

Deploying Elasticsearch

By default, the Mission Control Helm Chart deploys an Elasticsearch pod. It also configures Docker host kernel parameters using a privileged init container. In some installations, you may not be allowed to run privileged containers, in which case you can disable the Docker host configuration by configuring the following parameter.

--set elasticsearch.configureDockerHost=false

There are cases where you will want to use an external Elasticsearch and not the bundled Elasticsearch.

This can be done with the following parameters.

--set elasticsearch.enabled=false \
--set elasticsearch.url=${ES_URL} \
--set elasticsearch.username=${ES_USERNAME} \
--set elasticsearch.password=${ES_PASSWORD} \

Advanced Options for Pipelines

Installing the Pipelines Chart with Ingress

Prerequisites

Before deploying Pipelines with Ingress, you will need to have the following in place:

  • A running Kubernetes cluster
  • An Artifactory or Artifactory HA installation with an Enterprise+ license
    • A pre-created Generic repository named jfrogpipelines in Artifactory, using the maven-2-default layout
  • A deployed Nginx-ingress controller
  • [Optional] A deployed Cert-manager for automatic management of TLS certificates with Let's Encrypt
  • [Optional] A TLS secret, required for HTTPS access

Prepare the Configurations

  1. Fetch the JFrog Pipelines Helm chart to get the required configuration files.

    helm fetch center/jfrog/pipelines --untar

    Next, edit the local copies of the values-ingress.yaml and values-ingress-passwords.yaml with the required configuration values.

  2. In the values-ingress.yaml file, edit the following:

    • Artifactory URL

    • Ingress hosts

    • Ingress tls secrets

  3. In the values-ingress-passwords.yaml file, set the uiUserPassword, postgresqlPassword, and auth.password passwords, and also set the masterKey and joinKey.

Install JFrog Pipelines

Run the install command.

kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml


Using an External Secret for the Pipelines Password

The best practice for passwords is to use external secrets instead of storing passwords in values.yaml files.

  1. Fill in the passwords, masterKey and joinKey in values-ingress-passwords.yaml and then create and install the external secret.

    ## Generate pipelines-system-yaml secret
    helm template --name-template pipelines pipelines/ -s templates/pipelines-system-yaml.yaml \
        -f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
    
    ## Generate pipelines-database secret
    helm template --name-template pipelines pipelines/ -s templates/database-secret.yaml \
        -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
    
    ## Generate pipelines-rabbitmq-secret secret
    helm template --name-template pipelines pipelines/ -s templates/rabbitmq-secret.yaml \
        -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
  2. Install Pipelines.

    helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values-ingress-external-secret.yaml

Setting up a Build Plane

To use Pipelines, you will need to set up a Build Plane. For more information, see the Build Plane documentation.


Uninstall and Deletion

This section details the procedures for uninstalling the complete JFrog Platform and Artifactory. In addition, you'll find instructions for deleting Artifactory and Xray.

Uninstalling JFrog Platform

Removing the Helm release does not remove the persistent volumes; you need to remove them explicitly.

Uninstall is supported only on Helm v3 and later.

To uninstall JFrog Platform, run the following command.

helm uninstall jfrog-platform --namespace jfrog-platform
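As noted above, the persistent volume claims must then be removed explicitly. A minimal sketch (this deletes every PVC in the namespace, so first verify that the namespace contains only the JFrog Platform release):

kubectl delete pvc --all --namespace jfrog-platform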

Uninstalling Artifactory

Uninstall is supported only on Helm v3 and later.

  1. Uninstall Artifactory using the following command.

    helm uninstall artifactory && sleep 90 && kubectl delete pvc -l app=artifactory
  2. Next, delete the storage bucket and SQL database.

    gsutil rm -r gs://artifactory
    gcloud sql instances delete artifactory

Deleting Artifactory

You do not need to uninstall Artifactory before deleting it.

Deleting Artifactory using the commands below will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. 

To delete Artifactory use the following command.

Helm v3
helm delete artifactory --namespace artifactory

This will completely delete your Artifactory deployment (Pro or HA cluster). 

Deleting Xray

Deleting Xray using the commands below will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. 

To remove Xray services and data tools, use the following command.

Helm v3
helm delete xray --namespace xray

# Remove the data disks
kubectl delete pvc -l release=xray

If Xray was installed without providing a value for rabbitmq.rabbitmqPassword (or rabbitmq-ha.rabbitmqPassword), meaning the password was auto-generated, follow these instructions.

  1. Get the current password by running this command.

    RABBITMQ_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
  2. Upgrade the release by passing the previously auto-generated secret.

    helm upgrade <myrelease> center/jfrog/xray --set rabbitmq.rabbitmqPassword=${RABBITMQ_PASSWORD}
    # Or, if your release uses rabbitmq-ha:
    helm upgrade <myrelease> center/jfrog/xray --set rabbitmq-ha.rabbitmqPassword=${RABBITMQ_PASSWORD}

If Xray was installed with all of the default values (e.g., with no user-provided values for RabbitMQ/PostgreSQL), follow these steps.

  1. Retrieve all current passwords (RabbitMQ/PostgreSQL) as explained in the above section.
  2. Upgrade the release by passing the previously auto-generated secrets.

    helm upgrade --install xray --namespace xray center/jfrog/xray --set rabbitmq-ha.rabbitmqPassword=<rabbit-password> --set postgresql.postgresqlPassword=<postgres-password>