All Products - Customization
The following advanced customizations are relevant to all JFrog products.
Establishing TLS and Adding Certificates
Establishing TLS and Adding Certificates for Artifactory
In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS). By default, TLS between JFrog Platform nodes is disabled. When TLS is enabled, JFrog Access acts as the Certificate Authority (CA) that signs the TLS certificates used by all the different JFrog Platform nodes.
To establish TLS between JFrog Platform nodes, enable TLS by changing the tls entry (under the security section) in the access.config.yaml file. For additional information, see Managing TLS Certificates.
To enable TLS in the charts, set tls to true under access in the values.yaml file. By default, it is set to false.

access:
  accessConfig:
    security:
      tls: true
To add custom TLS certificates, create a TLS secret from the certificate files.
kubectl create secret tls <tls-secret-name> --cert=ca.crt --key=ca.private.key
To reset the access certificates, set resetAccessCAKeys to true under the access section in the values.yaml file and perform a Helm upgrade. Once the Helm upgrade is completed, set resetAccessCAKeys back to false for subsequent upgrades (to avoid resetting the access certificates on every Helm upgrade).

access:
  accessConfig:
    security:
      tls: true
  customCertificatesSecretName: <tls-secret-name>
  resetAccessCAKeys: true
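Once the values are in place, apply them with a Helm upgrade. A minimal sketch, assuming an Artifactory release named artifactory and a values.yaml file holding the settings above:

helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml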
Establishing TLS and Adding Certificates for Xray, Mission Control and Distribution
Create trust between the nodes by copying the ca.crt file from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to the nodes you would like to set trust with, under $JFROG_HOME/var/etc/security/keys/trusted. For more information, see Managing TLS Certificates.
To add this certificate to Xray:
Create a configmaps.yaml file with the following content.

Xray
common:
  configMaps: |
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      <certificate content>
      -----END CERTIFICATE-----
  customVolumeMounts: |
    - name: xray-configmaps
      mountPath: /tmp/ca.crt
      subPath: ca.crt
server:
  preStartCommand: "mkdir -p {{ .Values.xray.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.xray.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
router:
  tlsEnabled: true

Mission Control
common:
  configMaps: |
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      <certificate content>
      -----END CERTIFICATE-----
  customVolumeMounts: |
    - name: mission-control-configmaps
      mountPath: /tmp/ca.crt
      subPath: ca.crt
missionControl:
  preStartCommand: "mkdir -p {{ .Values.missionControl.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.missionControl.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
router:
  tlsEnabled: true

Distribution
common:
  configMaps: |
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      <certificate content>
      -----END CERTIFICATE-----
  customVolumeMounts: |
    - name: distribution-configmaps
      mountPath: /tmp/ca.crt
      subPath: ca.crt
distribution:
  preStartCommand: "mkdir -p {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
router:
  tlsEnabled: true
Run the Helm install/upgrade.

Xray
helm upgrade --install xray -f configmaps.yaml --namespace xray jfrog/xray

Mission Control
helm upgrade --install mission-control -f configmaps.yaml --namespace mission-control jfrog/mission-control

Distribution
helm upgrade --install distribution -f configmaps.yaml --namespace distribution jfrog/distribution
This creates a configMap with the files you specified above, which will, in turn:
Create a volume pointing to the configMap with the name xray-configmaps.
Mount this configMap onto /tmp using customVolumeMounts.
Using the preStartCommand, copy the ca.crt file to the Xray trusted keys folder /etc/security/keys/trusted/ca.crt.
Set router.tlsEnabled to true to add the HTTPS scheme to the liveness and readiness probes.
Establishing TLS and Adding Certificates for Pipelines
You can create trust between the nodes by copying the ca.crt file from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to the nodes you would like to set trust with, under $JFROG_HOME/pipelines/var/etc/security/keys/trusted. For more information, see Managing TLS Certificates.
More than one certificate can be present in the trusted directory. For example, you can configure the Pipelines API URL behind a load balancer that is set up with custom certificates. You need to add those certificates to the trusted folder, because build nodes will be talking to the Pipelines API over the load balancer endpoint.
Add the NODE_EXTRA_CA_CERTS environment variable when you use custom certificates. Pipelines looks through all the certificates available in the trusted folder and concatenates them into a single file called pipeline_custom_certs.crt, which is then passed as the NODE_EXTRA_CA_CERTS environment variable.
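Conceptually, the concatenation Pipelines performs resembles the following shell sketch (illustrative only; Pipelines does this internally, and the paths shown are assumptions):

# Bundle every trusted certificate into a single file
cat $JFROG_HOME/pipelines/var/etc/security/keys/trusted/*.crt > pipeline_custom_certs.crt
# Node.js processes then trust the bundled CAs
export NODE_EXTRA_CA_CERTS=/path/to/pipeline_custom_certs.crt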
You can add TLS certificates through a Kubernetes secret. You need to create the secret outside of this chart and provide it using the value pipelines.customCertificates.certificateSecretName.
The following example shows how you can create the secret.
kubectl create secret generic ca-cert --from-file=ca.crt=ca.crt
You can pass the secret to the Helm installation by updating the values.yaml file.
pipelines:
  customCertificates:
    enabled: true
    certificateSecretName: ca-cert
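You can then apply the updated values with a Helm upgrade. The release name and namespace below are examples; adjust them to your installation:

helm upgrade --install pipelines jfrog/pipelines --namespace pipelines -f values.yaml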
Add circle of trust certificates
Create a secret containing the certificate (root certificate from the source machine).
The following example shows how to create a secret that contains the root certificate.
kubectl create secret generic edge-root-crt --from-file=./edge-root.crt
Update the values.yaml file to pass the secret to the Helm installation.
artifactory:
  circleOfTrustCertificatesSecret: edge-root-crt
Run the following command to pass the secret to the Helm installation.
helm upgrade --install artifactory jfrog/artifactory -f values.yaml
Adding Custom Init Containers
Init containers are containers that run before the main container with your containerized application starts. In some cases, you will need to use a specialized, unsupported init process, for example, to check something in the file system or to test something before spinning up the main container. If you need to add a custom init container, use the section for defining a custom init container in the values.yaml file (by default, this section is commented out).
artifactory:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##

common:
  ## Add custom init containers executed before predefined init containers
  customInitContainersBegin: |
    ## Init containers template goes here ##
  ## Add custom init containers executed after predefined init containers
  customInitContainers: |
    ## Init containers template goes here ##

common:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##

distribution:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
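As a minimal sketch of what such a template might contain, the following init container waits for a database to become reachable before the main container starts; the image, host, and port are hypothetical placeholders:

artifactory:
  customInitContainers: |
    - name: "wait-for-db"
      image: "busybox:1.36"
      command:
        - 'sh'
        - '-c'
        # Block startup until the (hypothetical) database answers on port 5432
        - 'until nc -z my-db-host 5432; do echo "waiting for db"; sleep 2; done'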
Adding Custom Sidecar Containers
A sidecar is a utility container in a pod that is loosely coupled to the main application container. In some cases you may need to use an extra sidecar container, for example, for monitoring agents or for log collection. If you need to add a custom sidecar container, use the section for defining a custom sidecar container in the values.yaml
file (by default this section is commented out).
artifactory:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##

common:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##

common:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##

common:
  ## Add custom sidecar containers
  customSidecarContainers: |
    ## Sidecar containers template goes here ##
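For illustration only, a sidecar that tails an Artifactory log might look like the sketch below; the volume name and log path are assumptions and must match your deployment:

artifactory:
  customSidecarContainers: |
    - name: "tail-request-log"
      image: "busybox:1.36"
      command:
        - 'sh'
        - '-c'
        # Stream the request log to the sidecar's stdout for collection
        - 'tail -F /var/opt/jfrog/artifactory/log/artifactory-request.log'
      volumeMounts:
        - name: volume # assumed name of the Artifactory data volume
          mountPath: /var/opt/jfrog/artifactory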
Adding Custom Volumes
A Kubernetes volume is essentially a directory that is accessible to all containers running in a pod. If you need to use a custom volume in a custom init or sidecar container, use the sections for defining a custom init or a custom sidecar container in the values.yaml file (by default, these sections are commented out).
artifactory:
  ## Add custom volumes
  customVolumes: |
    ## Custom volume comes here ##

server:
  ## Add custom volumes
  customVolumes: |
    ## Custom volume comes here ##
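As a sketch, a custom volume backed by a persistent volume claim could be defined as follows; the claim name my-custom-pvc is hypothetical:

artifactory:
  customVolumes: |
    - name: my-custom-volume
      persistentVolumeClaim:
        claimName: my-custom-pvc # pre-created PVC, assumed to exist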
Mission Control and Distribution Custom Volumes
To add custom files for your init container, or to make changes to the file system that the Mission Control/Distribution container will see, use the following section for defining custom volumes in the values.yaml file. By default, these values are left empty.
common:
  ## Add custom volumes
  customVolumes: |
    # - name: custom-script
    #   configMap:
    #     name: custom-script
  ## Add custom volumeMounts
  customVolumeMounts: |
    # - name: custom-script
    #   mountPath: "/scripts/script.sh"
    #   subPath: script.sh

common:
  ## Add custom volumes
  customVolumes: |
    # - name: custom-script
    #   configMap:
    #     name: custom-script
distribution:
  ## Add custom volumeMounts
  customVolumeMounts: |
    # - name: custom-script
    #   mountPath: "/scripts/script.sh"
    #   subPath: script.sh
distributor:
  ## Add custom volumeMounts
  customVolumeMounts: |
    # - name: custom-script
    #   mountPath: "/scripts/script.sh"
    #   subPath: script.sh
Overriding the Default System YAML File
There are some advanced use cases where users wish to provide their own system.yaml file to configure the JFrog service. Using this option will override the existing system.yaml in the values.yaml file. There are two ways to override the system.yaml: by using a custom init container, or by using an external system.yaml with an existingSecret.
The order of preference would then be as follows.
Custom Init Container
External system.yaml
Default system.yaml in values.yaml
For the Pipelines chart, from chart version 2.2.0 and above, .Values.existingSecret has been changed to .Values.systemYaml.existingSecret and .Values.systemYaml.dataKey.
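For example, with Pipelines chart version 2.2.0 and above, referencing an external system.yaml secret would look roughly like the following sketch (the secret name my-system-yaml and the data key are assumptions):

systemYaml:
  existingSecret: my-system-yaml # Kubernetes secret holding your system.yaml
  dataKey: system.yaml           # key inside the secret that contains the file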
Using a Custom Init Container
The custom init container uses external sources, such as vaults or external repositories, to override the system.yaml file.
The following example is for the Xray chart.
customInitContainers: |
  - name: "custom-systemyaml-setup"
    image: "{{ .Values.initContainerImage }}"
    imagePullPolicy: "{{ .Values.imagePullPolicy }}"
    command:
      - 'sh'
      - '-c'
      - 'wget -O {{ .Values.xray.persistence.mountPath }}/etc/system.yaml https://<repo-url>/systemyaml'
    volumeMounts:
      - mountPath: "{{ .Values.xray.persistence.mountPath }}"
        name: data-volume
Using an External System YAML File
Create an external system.yaml file for one of the services, for example, Xray, and give it the filename xray-cus-sy.yaml.

configVersion: 1
shared:
  logging:
    consoleLog:
      enabled: true
  jfrogUrl: "http://artifactory-artifactory.rt:8082"
  database:
    type: "postgresql"
    driver: "org.postgresql.Driver"
    username: "xray"
    url: "postgres://xray-postgresql:5432/xraydb?sslmode=disable"
server:
  mailServer: ""
  indexAllBuilds: "true"
Create a Kubernetes secret.
kubectl create secret generic sy --from-file ./xray-cus-sy.yaml
Now, use that secret in the relevant section.
systemYamlOverride:
  existingSecret: sy
  dataKey: xray-cus-sy.yaml
Auto-generated Passwords (Internal PostgreSQL)
An internal PostgreSQL requires one variable to be available during installation or upgrade. If it is not set by the user, a random 10-character alphanumeric string is generated instead; therefore, it is recommended to set this explicitly during installation and upgrade.
--set postgresql.postgresqlPassword=<value> \
The value must remain the same between upgrades. If it was auto-generated during helm install, the same password will have to be passed on future upgrades.
To read the current password, use the following command (for more information on reading a secret value, see Kubernetes: Decoding a Secret).
POSTGRES_PASSWORD=$(kubectl get secret <release-name>-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
The following parameter can be set during upgrade.
--set postgresql.postgresqlPassword=${POSTGRES_PASSWORD} \
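Putting it together, an upgrade that reuses the current auto-generated password might look like the following; the release name and namespace are examples:

helm upgrade --install artifactory jfrog/artifactory --namespace artifactory --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD}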
Using Custom Secrets
Secrets are Kubernetes objects that are used for storing sensitive data, such as usernames and passwords, with encryption. If you need to add a custom secret in a custom init or sidecar container, use the section for defining custom secrets in the values.yaml file (by default, this section is commented out).
artifactory:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data

common:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data

common:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data

distribution:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data

pipelines:
  # Add custom secrets - secret per file
  customSecrets:
    - name: custom-secret
      key: custom-secret.yaml
      data: >
        secret data
To use a custom secret, you need to define a custom volume.
The following example shows how to define a custom volume in Artifactory.
artifactory:
  ## Add custom volumes
  customVolumes: |
    - name: custom-secret
      secret:
        secretName: custom-secret # or {{ template "artifactory.name" . }}-unified-secret when unifiedSecretInstallation is true
To use a volume, you will need to define a volume mount as part of a custom init or sidecar container.
The following example shows how to define a volume mount as part of a sidecar container in Artifactory.
artifactory:
  customSidecarContainers:
    - name: side-car-container
      volumeMounts:
        - name: custom-secret
          mountPath: /opt/custom-secret.yaml
          subPath: custom-secret.yaml
          readOnly: true
You can configure the sidecar to run as a custom user by setting the following in the container template.
# Example of running container as root (id 0)
securityContext:
  runAsUser: 0
  fsGroup: 0
Using Unified Secret
Set the flag <product chart name>.unifiedSecretInstallation to true if you want to install a unified secret that combines all the secrets, including custom secrets. By default, the flag is set to false and secrets are not unified.
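For example, in the Artifactory chart this would be set as follows:

artifactory:
  unifiedSecretInstallation: true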
If you set the flag to true, update secretName in your custom volumes to {{ template "artifactory-ha.name" . }}-unified-secret, and then run the installation.
The following example shows how to define a custom volume in Artifactory when you want to use unified secrets.
artifactory:
  ## Add custom volumes
  customVolumes: |
    - name: custom-secret
      secret:
        secretName: {{ template "artifactory.name" . }}-unified-secret
As a best practice, we recommend using an easily identifiable name for the unified secret.
Artifactory High Availability
Artifactory Storage
Artifactory HA supports a wide range of storage back ends (for more information, see Artifactory HA storage options).
In this chart, you will set the type of storage you want using artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is file-system replication, where the data is replicated to all nodes.
All storage configurations (except NFS) come with a default artifactory.persistence.redundancy parameter. This is used to set the number of replicas of a binary that should be stored in the cluster's nodes. Once this value is set on initial deployment, you cannot update it using Helm. It is recommended to set this to a number greater than half of your cluster's size, and to never scale your cluster down to a size smaller than this number.
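For example, on a five-node cluster a redundancy of 3 could be set at initial deployment (a sketch; choose a value appropriate to your own cluster size):

artifactory:
  persistence:
    redundancy: 3 # greater than half of a five-node cluster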
Alert: Use a PVC when Using an External Blob Storage
When using external blob storage (for example, AWS S3, Azure Blob Storage, or Google Cloud Storage), there is still a need to persist temporary eventual storage in a PVC (Persistent Volume Claim) in case of loss of connection to the external storage or if the Artifactory pod crashes.
Avoiding the usage of a PVC can lead to data loss in case of unplanned pod termination.
Deploying Artifactory on an OpenShift Cluster and Using the Azure PostgreSQL Database Service
When deploying Artifactory on an OpenShift Cluster while using the Azure PostgreSQL database service, the service requires a TLS encrypted database connection. To learn more, see Metadata Service Troubleshooting.
Using an Existing Volume Claim
Using an Existing Volume Claim for the Primary Node
To use an existing volume claim for the Artifactory primary node storage, you will need to do the following.
Create a persistent volume claim by the name volume-<release-name>-artifactory-ha-primary-0, e.g., volume-myrelease-artifactory-ha-primary-0.
Pass a parameter to helm install and helm upgrade.

... --set artifactory.primary.persistence.existingClaim=true
Using an Existing Volume Claim for the Member Nodes
To use an existing volume claim for the Artifactory member nodes storage, you will need to do the following.
Create persistent volume claims according to the number of replicas defined at artifactory.node.replicaCount, by the names volume-<release-name>-artifactory-ha-member-<ordinal-number>, e.g., volume-myrelease-artifactory-ha-member-0 and volume-myrelease-artifactory-ha-member-1.
Pass a parameter to helm install and helm upgrade.

... --set artifactory.node.persistence.existingClaim=true
Using an Existing Shared Volume Claim
To use an existing volume claim (for data and backup) that is to be shared across all nodes, you will need to do the following.
Create PVCs with ReadWriteMany that match the naming conventions.

{{ template "artifactory-ha.fullname" . }}-data-pvc-<claim-ordinal>
{{ template "artifactory-ha.fullname" . }}-backup-pvc

Here is an example that shows 2 existing volume claims being used.

myexample-artifactory-ha-data-pvc-0
myexample-artifactory-ha-data-pvc-1
myexample-artifactory-ha-backup-pvc

Set artifactory.persistence.fileSystem.existingSharedClaim.enabled in the values.yaml file to true.

--set artifactory.persistence.fileSystem.existingSharedClaim.enabled=true
--set artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims=2
Adding Licenses
To activate Artifactory HA, you must install an appropriate license as part of the installation. There are three ways to manage the license: via the Artifactory UI, the REST API, or a Kubernetes Secret. The easiest and recommended way is using the Artifactory UI. Using the Kubernetes Secret or the REST API is for advanced users and is better suited for automation. You should use only one of these methods; switching between them while a cluster is running might disable your Artifactory HA cluster.
Specifying multiple licenses: Whether in the Artifactory UI, using the REST API, or in the artifactory.cluster.license file, make sure that the licenses are separated by a newline.
Option A: Using the REST API
You can add licenses via the REST API. Note that the REST API uses "\n" for the newlines in the licenses (this is the currently recommended method).
Option B: Using the Artifactory UI
Once the primary cluster is running, open the Artifactory UI and insert the license(s) in the UI. Enter all of the licenses at once, with each license separated by a newline. If you add the licenses one at a time, you may get redirected to a node without a license and the UI will not load for that node. See HA installation and setup for more details.
Option C: Using a Kubernetes Secret
You can deploy the Artifactory license(s) as a Kubernetes Secret. Prepare a text file with the license(s) written in it. If adding multiple licenses, add them all to the same file, and remember to add two new lines between each license block.
Important
This method is relevant for initial deployment only. Once Artifactory is deployed, you should not keep passing these parameters, since the license is already persisted into Artifactory's storage (and they will be ignored). Updating the license should be done via the Artifactory UI or REST API. If you want to keep managing the Artifactory license using the same method, you can use the copyOnEveryStartup example shown in the values.yaml file.
Create the Kubernetes secret (assuming the local license file is 'art.lic').

kubectl create secret generic artifactory-cluster-license --from-file=./art.lic

Create a license-values.yaml file.

artifactory:
  license:
    secret: artifactory-cluster-license
    dataKey: art.lic

Install with the license-values.yaml file. Run the following command for JFrog Platform.

helm upgrade --install jfrog-platform --namespace jfrog-platform jfrog/jfrog-platform -f license-values.yaml

Run the following command for Artifactory.

helm upgrade --install artifactory --set artifactory.license.secret=artifactory-cluster-license,artifactory.license.dataKey=art.lic --namespace artifactory jfrog/artifactory

Create the Kubernetes Secret as Part of the Helm Release
Create a license-values.yaml file.

artifactory:
  license:
    licenseKey: |-
      <LICENSE_KEY1>
      <LICENSE_KEY2>
      <LICENSE_KEY3>

Install with the license-values.yaml file.

helm upgrade --install jfrog-platform --namespace jfrog-platform jfrog/jfrog-platform -f license-values.yaml
Scaling the Artifactory Cluster
A key feature in Artifactory HA is the ability to set an initial cluster size using --set artifactory.node.replicaCount=${CLUSTER_SIZE} and, if needed, to resize the cluster.
Before Scaling
When scaling, you need to explicitly pass the database password if the password is an automatically generated one (this is the default with the enclosed PostgreSQL Helm chart).
To get the current database password use the following.
export DB_PASSWORD=$(kubectl get $(kubectl get secret -o name | grep postgresql) -o jsonpath="{.data.postgresql-password}" | base64 --decode)
Important
Use --set postgresql.postgresqlPassword=${DB_PASSWORD}
with every scale action to prevent a misconfigured cluster.
To Scale Up:
Assuming that you have a cluster with 2 member nodes, and you want to scale up to 3 member nodes (to a total of 4 nodes), use the following.
# Scale to 4 nodes (1 primary and 3 member nodes)
helm upgrade --install artifactory-ha --set artifactory.node.replicaCount=3 --set postgresql.postgresqlPassword=${DB_PASSWORD} --namespace artifactory-ha jfrog/artifactory-ha
To Scale Down:
# Scale down to 2 member nodes
helm upgrade --install artifactory-ha --set artifactory.node.replicaCount=2 --set postgresql.postgresqlPassword=${DB_PASSWORD} --namespace artifactory-ha jfrog/artifactory-ha
Because Artifactory runs as a Kubernetes StatefulSet, removing a node does not remove its persistent volume. You need to remove it explicitly as follows.

# List PVCs
kubectl get pvc
# Remove the PVC with the highest ordinal!
# In this example, the highest node ordinal was 2, so we need to remove its storage.
kubectl delete pvc volume-artifactory-node-2
Artifactory Advanced Options
Adding Licenses Using Secrets
There are two ways to add licenses using secrets: using an existing Kubernetes secret, or creating a secret as part of the Helm release.
These methods are relevant for initial deployment only. Once Artifactory is deployed, you should not keep passing these parameters, since the license is already persisted into Artifactory's storage (they will be ignored). Updating the license should be done via the Artifactory UI or REST API. If you want to keep managing the Artifactory license using the same method, you can use the copyOnEveryStartup example shown in the values.yaml file.
Creating a License Using an Existing Kubernetes Secret
You can deploy the Artifactory license as a Kubernetes secret, by preparing a text file with the license written in it and creating a Kubernetes secret from it.
# Create the Kubernetes secret (assuming the local license file is 'art.lic')
kubectl create secret generic -n artifactory artifactory-license --from-file=./art.lic
# Pass the license to helm
helm upgrade --install artifactory --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic --namespace artifactory jfrog/artifactory
Creating a Secret as Part of the Helm Release
To create a secret as part of the Helm release, update the values.yaml file and then run the installer.

artifactory:
  license:
    licenseKey: |-
      <LICENSE_KEY>

helm upgrade --install artifactory -f values.yaml --namespace artifactory jfrog/artifactory
Security-related Issues
The following section addresses security-related issues in the Helm Charts installation, such as managing subscriptions and secrets, network policy, and more.
Customizing the Database Password
You can override the specified database password (set in values.yaml) by passing it as a parameter in the install command line.
helm upgrade --install artifactory --namespace artifactory --set postgresql.postgresqlPassword=12_hX34qwerQ2 jfrog/artifactory
You can customize other parameters in the same way, by passing them in the helm install command line.
Creating an Ingress Object
To get Helm to create an ingress object with a hostname, add these lines to the artifactory-ingress-values.yaml file and use it with your Helm install or upgrade.

ingress:
  enabled: true
  hosts:
    - artifactory.company.com
artifactory:
  service:
    type: NodePort
nginx:
  enabled: false

helm upgrade --install artifactory -f artifactory-ingress-values.yaml --namespace artifactory jfrog/artifactory
If your cluster allows for automatic creation/retrieval of TLS certificates (for example, by using cert-manager), create the ingress object as follows.
To configure TLS manually, first create/retrieve a key and certificate pair for the address(es) you wish to protect.
Next, create a TLS secret in the namespace.
kubectl create secret tls artifactory-tls --cert=path/to/tls.cert --key=path/to/tls.key
Include the secret's name, along with the desired hostnames, in the Artifactory Ingress TLS section of your custom values.yaml file.

ingress:
  ## If true, Artifactory Ingress will be created
  ##
  enabled: true
  ## Artifactory Ingress hostnames
  ## Must be provided if Ingress is enabled
  ##
  hosts:
    - artifactory.domain.com
  annotations:
    kubernetes.io/tls-acme: "true"
  ## Artifactory Ingress TLS configuration
  ## Secrets must be manually created in the namespace
  ##
  tls:
    - secretName: artifactory-tls
      hosts:
        - artifactory.domain.com
Using Ingress Annotations
The following is an example of an Ingress annotation that enables Artifactory to work as a Docker Registry using the Repository Path method. For more information, see Artifactory as Docker Registry.
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts:
    - myhost.example.com
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: "0"
    ingress.kubernetes.io/proxy-read-timeout: "600"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/(v2)/token /artifactory/api/docker/null/v2/token;
      rewrite ^/(v2)/([^\/]*)/(.*) /artifactory/api/docker/$2/$1/$3;
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  tls:
    - hosts:
        - "myhost.example.com"
If you are using Artifactory as an SSO provider (e.g., with Xray), you will need to use the following annotations, and change to your domain.
..
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_pass_header Server;
    proxy_set_header X-JFrog-Override-Base-Url https://<artifactory-domain>;
Adding Additional Ingress Rules
You also have the option of adding additional Ingress rules to the Artifactory Ingress. An example of this use case would be to route the /xray path to Xray. To do that, simply add the following to the artifactory-values.yaml file and run the upgrade.

ingress:
  enabled: true
  defaultBackend:
    enabled: false
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite "(?i)/xray(/|$)(.*)" /$2 break;
  additionalRules: |
    - host: <MY_HOSTNAME>
      http:
        paths:
          - path: /
            backend:
              serviceName: <XRAY_SERVER_SERVICE_NAME>
              servicePort: <XRAY_SERVER_SERVICE_PORT>
          - path: /xray
            backend:
              serviceName: <XRAY_SERVER_SERVICE_NAME>
              servicePort: <XRAY_SERVER_SERVICE_PORT>
          - path: /artifactory
            backend:
              serviceName: {{ template "artifactory.nginx.fullname" . }}
              servicePort: {{ .Values.nginx.externalPortHttp }}

helm upgrade --install artifactory jfrog/artifactory -f artifactory-values.yaml
Using a Dedicated Ingress Object for the Replicator Service
You also have the option of adding an additional Ingress object to the Replicator service. An example of this use case could be routing the /replicator/ path to Artifactory. To do that, simply add the following to the artifactory-values.yaml file.

artifactory:
  replicator:
    enabled: true
    ingress:
      name: <MY_INGRESS_NAME>
      hosts:
        - myhost.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/proxy-buffering: "off"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          chunked_transfer_encoding on;
      tls:
        - hosts:
            - "myhost.example.com"
          secretName: <CUSTOM_SECRET>
Running Ingress Behind Another Load Balancer
If you are running a load balancer in front of the Nginx Ingress Controller that is used to offload TLS, or if you are setting X-Forwarded-* headers, you might want to enable the use-forwarded-headers=true option. Otherwise, Nginx will fill those headers with the request information it receives from the external load balancer.
Run the following commands to enable the option.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm upgrade --install ingress-nginx --namespace ingress-nginx ingress-nginx/ingress-nginx --set-string controller.config.use-forwarded-headers=true
Alternatively, create a values.yaml file with the following content, then install nginx-ingress with the values file you created.

controller:
  config:
    use-forwarded-headers: "true"
Run the following commands after you create the values.yaml file.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm upgrade --install ingress-nginx --namespace ingress-nginx ingress-nginx/ingress-nginx -f values.yaml
Log Analytics
FluentD, Prometheus and Grafana
To configure Prometheus and Grafana to gather metrics from Artifactory through the use of FluentD, refer to the log analytics repository. The repository contains a file artifactory-values.yaml
that can be used to deploy Prometheus, Service Monitor, and Grafana with this chart.
Configuring the NetworkPolicy
The NetworkPolicy specifies which Ingress and Egress are allowed in this namespace. It is encouraged to be more specific whenever possible to increase system security.
In the networkpolicy section of the values.yaml file, you can specify a list of NetworkPolicy objects.
For podSelector, Ingress, and Egress, if nothing is provided, then a default - {} is applied, which allows everything.
A full (but very wide open) example that results in 2 NetworkPolicy objects being created:
networkpolicy:
  # Allows all Ingress and Egress to/from Artifactory.
  - name: artifactory
    podSelector:
      matchLabels:
        app: artifactory
    egress:
      - {}
    ingress:
      - {}
  # Allows connectivity from artifactory pods to postgresql pods, but no traffic leaving postgresql pod.
  - name: postgres
    podSelector:
      matchLabels:
        app: postgresql
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: artifactory
Advanced Storage Options
The filestore is where binaries are physically stored, and it is one of the two stores essential for Artifactory's storage and management resources. Artifactory supports a wide range of storage back ends; in this section, we have detailed some of the advanced options for Artifactory storage; for more information, see Artifactory Filestore options.
Setting the Artifactory Persistency Storage Type
In the Helm chart, set the type of storage you want with artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is file-system replication, where the data is replicated to all nodes.
Important
All storage configurations, except Network File System (NFS), come with a default artifactory.persistence.redundancy parameter. This is used to set how many replicas of a binary should be stored in the cluster's nodes. Once this value is set on initial deployment, you cannot update it using Helm. It is recommended to set this to a number greater than half of your cluster's size, and to never scale your cluster down to a size smaller than this number.
To use your selected bucket as the HA's filestore, pass the filestore's parameters to the Helm installation/upgrade.
Setting up the Network File System (NFS) Storage
To use an NFS server as your cluster's storage, you will need to do the following.
Set up an NFS server and get its IP as NFS_IP.
Create data and backup directories on the NFS exported directory, with write permissions to all.
Pass NFS parameters to the Helm installation/upgrade as follows.

artifactory:
  persistence:
    type: nfs
    nfs:
      ip: ${NFS_IP}
Configuring the NFS Persistence Type
In some cases, it is not possible for the Helm chart to set up your NFS mounts automatically for Artifactory. In these cases (for example, AWS EFS), you will use artifactory.persistence.type=file-system, even though your underlying persistence is actually a network file system.
The same applies when using a slow storage device (such as cheap disks) as your main storage solution for Artifactory. Serving highly-used files from the network file system/slow storage can take time, which is why you would want a cache filesystem that is stored locally on disk (on fast disks such as SSD).
Create a values.yaml file. Set up your volume mount to your fast storage device as follows.

artifactory:
  ## Set up your volume mount to your fast storage device
  customVolumes: |
    - name: my-cache-fast-storage
      persistentVolumeClaim:
        claimName: my-cache-fast-storage-pvc
  ## Enable caching and configure the cache directory
  customVolumeMounts: |
    - name: my-cache-fast-storage
      mountPath: /my-fast-cache-mount
  persistence:
    cacheProviderDir: /my-fast-cache-mount
    fileSystem:
      cache:
        enabled: true
Install Artifactory with the values file you created.
Artifactory
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml

Artifactory HA
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml
Google Storage
You can use a Google Storage bucket as the cluster's filestore by passing the Google Storage parameters below to helm install and helm upgrade. For more information, see Google Cloud Storage.

artifactory:
  persistence:
    type: google-storage-v2
Artifactory HA
To use a GCP service account, Artifactory requires a gcp.credentials.json file in the same directory as the binarystore.xml file.
This can be generated by running the following.
gcloud iam service-accounts keys create <file_name> --iam-account <service_account_name>
This will produce the following, which can be saved to a file or copied into your values.yaml.

{
  "type": "service_account",
  "project_id": "<project_id>",
  "private_key_id": "?????",
  "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
  "client_email": "???@j<project_id>.iam.gserviceaccount.com",
  "client_id": "???????",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
}
One option is to create your own secret and pass it to your helm install in a custom values.yaml.

# Create the Kubernetes secret from the file you created earlier.
# IMPORTANT: The file must be called "gcp.credentials.json" because this is used later as the secret key!
kubectl create secret generic artifactory-gcp-creds --from-file=./gcp.credentials.json
Set this secret in your custom values.yaml.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        customSecretName: artifactory-gcp-creds
Another option is to put your generated config directly in your custom values.yaml, and then a secret will be created from it.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        config: |
          {
            "type": "service_account",
            "project_id": "<project_id>",
            "private_key_id": "?????",
            "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
            "client_email": "???@j<project_id>.iam.gserviceaccount.com",
            "client_id": "???????",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
          }
AWS S3 V3
To use an AWS S3 bucket as the cluster's filestore and access it with the official AWS SDK, see the S3 Official SDK Binary Provider. Use this template if you want to attach an IAM role to the Artifactory pod directly (as opposed to attaching it to the machine/s that Artifactory runs on).
You should combine this with a Kubernetes mechanism for attaching IAM roles to pods, such as kube2iam.
Pass the AWS S3 V3 parameters and the annotation pointing to the IAM role (when using an IAM role; this is kube2iam-specific and may vary depending on the implementation) to helm install and helm upgrade.

# Using explicit credentials:
artifactory:
  persistence:
    type: aws-s3-v3
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
      identity: ${AWS_ACCESS_KEY_ID}
      credential: ${AWS_SECRET_ACCESS_KEY}
      useInstanceCredentials: false

# Using an existing IAM role
artifactory:
  annotations:
    iam.amazonaws.com/role: ${AWS_IAM_ROLE_ARN}
  persistence:
    type: aws-s3-v3
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
To enable Direct Cloud Storage Download, use the following.
artifactory:
  persistence:
    awsS3V3:
      enableSignedUrlRedirect: true
Microsoft Azure Blob Storage
You can use Azure Blob Storage as the cluster's filestore by passing the Azure Blob Storage parameters to helm install and helm upgrade. For more information, see Azure Blob Storage Binary Provider.

artifactory:
  persistence:
    type: azure-blob
    azureBlob:
      accountName: ${AZURE_ACCOUNT_NAME}
      accountKey: ${AZURE_ACCOUNT_KEY}
      endpoint: ${AZURE_ENDPOINT}
      containerName: ${AZURE_CONTAINER_NAME}
To use a persistent volume claim as the cache dir together with Azure Blob Storage, pass the following parameters as well to helm install and helm upgrade (verify that mountPath and cacheProviderDir point to the same location).

artifactory:
  persistence:
    existingClaim: ${YOUR_CLAIM}
    mountPath: /opt/cache-dir
    cacheProviderDir: /opt/cache-dir
Custom binarystore.xml
There are two options for providing a custom binarystore.xml.
Edit directly in the values.yaml file.

artifactory:
  persistence:
    binarystoreXml: |
      <!-- The custom XML snippet -->
      <config version="v1">
        <chain template="file-system"/>
      </config>
Create your own secret and pass it to your helm install command.

# Prepare your custom Secret file (custom-binarystore.yaml)
kind: Secret
apiVersion: v1
metadata:
  name: custom-binarystore
  labels:
    app: artifactory
    chart: artifactory
stringData:
  binarystore.xml: |-
    <!-- The custom XML snippet -->
    <config version="v1">
      <chain template="file-system"/>
    </config>
Next, create a secret from the file.
kubectl apply -n artifactory -f ./custom-binarystore.yaml
Pass the secret to your helm install command.

Artifactory
helm upgrade --install artifactory --namespace artifactory --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory

Artifactory HA
helm upgrade --install artifactory-ha --namespace artifactory-ha --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory-ha
Adding Extensions
Extensions (also known as plugins) are software components that extend and integrate with your system. Most cluster administrators will use a hosted or distribution instance of Kubernetes. In this section we have included some of the extensions you can use with Artifactory using Helm Charts.
Using Logger Sidecars
Logger sidecars enable you to tail various logs from Artifactory (see the available values in the values.yaml
file).
To get a list of containers in the pod do the following.
kubectl get pods -n <NAMESPACE> <POD_NAME> -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'
To view specific logs, use the following.
kubectl logs -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>
Adding User Plugins
User plugins enable you to extend Artifactory's behavior. With Helm, you deploy them by creating Kubernetes secrets.
Create a secret with Artifactory User Plugins using the following command.
# Secret with single user plugin
kubectl create secret generic archive-old-artifacts --from-file=archiveOldArtifacts.groovy --namespace=artifactory

# Secret with single user plugin with configuration file
kubectl create secret generic webhook --from-file=webhook.groovy --from-file=webhook.config.json.sample --namespace=artifactory
Create a plugin-values.yaml file that contains the plugin secret names and the startup copy configuration.

Artifactory Chart
artifactory:
  ## List of secrets for Artifactory user plugins.
  ## One secret per plugin's files.
  userPluginSecrets:
    - archive-old-artifacts
    - webhook
    - cleanup
  copyOnEveryStartup:
    - source: /artifactory_bootstrap/plugins/*
      target: etc/artifactory/plugins/

artifactory.copyOnEveryStartup is used to copy and overwrite the files from /artifactory_bootstrap/plugins to /opt/jfrog/artifactory/var/etc/artifactory/plugins every time the pod is restarted.

Artifactory HA Chart
artifactory:
  ## List of secrets for Artifactory user plugins.
  ## One secret per plugin's files.
  userPluginSecrets:
    - archive-old-artifacts
    - webhook
    - cleanup
  primary:
    preStartCommand: "mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/plugins/ && cp -Lrf /artifactory_bootstrap/plugins/* {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/plugins/"

artifactory.primary.preStartCommand is used to copy and overwrite the files from /artifactory_bootstrap/plugins to /opt/jfrog/artifactory/var/etc/artifactory/plugins every time the pod is restarted.
You can now pass the plugin-values.yaml file you created to the Helm install command to deploy Artifactory with user plugins, as follows.

Artifactory
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f plugin-values.yaml

Artifactory HA
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f plugin-values.yaml
Alternatively, you may be in a situation in which you would like to create a secret in a Helm chart that depends on this chart. In this scenario, the name of the secret is likely generated dynamically via template functions, so passing a statically named secret is not possible.
In this case, the Helm chart supports evaluating strings as templates via the tpl function. Simply pass the raw string containing the templating language used to name your secret as a value instead, by adding the following to your chart's values.yaml file.

artifactory: # Name of the artifactory dependency
  artifactory:
    userPluginSecrets:
      - '{{ template "my-chart.fullname" . }}'
Using ConfigMaps to Store Non-confidential Data
A configMap is an API object that is used to store non-confidential data in key-value pairs. If you want to mount a custom file to Artifactory, either an init shell script or a custom configuration file (such as logback.xml), you can use this option.
Creating Custom configMaps for Artifactory
Create a configmaps.yaml file as per the example below, then use it with your Helm installation/upgrade. This will, in turn, do the following:
Create a volume pointing to the configMap with the name artifactory-configmaps.
Mount this configMap onto /tmp/my-config-map using customVolumeMounts.
Set the shell script we mounted as the postStartCommand.
Copy the logback.xml file to the $ARTIFACTORY_HOME/etc/artifactory directory.

artifactory:
  configMaps: |
    logback.xml: |
      <configuration debug="false">
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
          <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.artifactory.logging.layout.BackTracePatternLayout">
              <pattern>%date [%-5level] \(%-20c{3}:%L\) %message%n</pattern>
            </layout>
          </encoder>
        </appender>
        <logger name="/artifactory">
          <level value="INFO"/>
          <appender-ref ref="CONSOLE"/>
        </logger>
        <logger name="org.eclipse.jetty">
          <level value="WARN"/>
          <appender-ref ref="CONSOLE"/>
        </logger>
      </configuration>
    my-custom-post-start-hook.sh: |
      echo "This is my custom post start hook"
  customVolumeMounts: |
    - name: artifactory-configmaps
      mountPath: /tmp/my-config-map
  postStartCommand: "post_hook_temp=/tmp/post-hook-temp; mkdir ${post_hook_temp}; cp -fv /tmp/my-config-map/my-custom-post-start-hook.sh ${post_hook_temp}; chmod +x ${post_hook_temp}/my-custom-post-start-hook.sh; bash ${post_hook_temp}/my-custom-post-start-hook.sh > ${post_hook_temp}/my-custom-post-start-hook.log 2>&1"
  copyOnEveryStartup:
    - source: /tmp/my-config-map/logback.xml
      target: etc/artifactory
Artifactory
helm upgrade --install artifactory -f configmaps.yaml --namespace artifactory jfrog/artifactory

Artifactory HA
helm upgrade --install artifactory-ha -f configmaps.yaml --namespace artifactory-ha jfrog/artifactory-ha
Creating a Custom nginx.conf Using Nginx
Create the nginx.conf file and a configMap from it:

kubectl create configmap nginx-config --from-file=nginx.conf

Pass the configMap to the Helm installation:

Artifactory
helm upgrade --install artifactory --set nginx.customConfigMap=nginx-config --namespace artifactory jfrog/artifactory

Artifactory HA
helm upgrade --install artifactory-ha --set nginx.customConfigMap=nginx-config --namespace artifactory-ha jfrog/artifactory-ha
Using an External Database
For production grade installations, it is recommended to use an external PostgreSQL with a static password.
PostgreSQL
There are cases where you will want to use an external PostgreSQL with a different database name, e.g., my-artifactory-db; in this case, you will need to set a custom PostgreSQL connection URL, where my-artifactory-db is the name of the database.
This can be done with the following parameters.
postgresql:
  enabled: false
database:
  type: postgresql
  driver: org.postgresql.Driver
  url: 'jdbc:postgresql://${DB_HOST}:${DB_PORT}/my-artifactory-db'
  user: <DB_USER>
  password: <DB_PASSWORD>
You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.
Other Database Types
There are cases where you will want to use a different database and not the enclosed PostgreSQL. For more information, see configuring the database.
The official Artifactory Docker images include the PostgreSQL database driver. For other database types, you will have to add the relevant database driver to Artifactory's tomcat/lib.
This can be done with the following parameters.
# Make sure your Artifactory Docker image has the MySQL database driver in it
postgresql:
  enabled: false
database:
  type: mysql
  driver: com.mysql.jdbc.Driver
  url: <DB_URL>
  user: <DB_USER>
  password: <DB_PASSWORD>
artifactory:
  preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar"
You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.
Configuring Artifactory with an External Oracle Database
To use Artifactory with an Oracle database, the required instant client library files (libaio) must be copied to the Tomcat lib directory. In addition, you will need to set the LD_LIBRARY_PATH environment variable.
Create a values file with the configuration.
postgresql:
  enabled: false
database:
  type: oracle
  driver: oracle.jdbc.OracleDriver
  url: <DB_URL>
  user: <DB_USER>
  password: <DB_PASSWORD>
artifactory:
  preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl https://download.oracle.com/otn_software/linux/instantclient/19600/instantclient-basic-linux.x64-19.6.0.0.0dbru.zip -o instantclient-basic-linux.x64-19.6.0.0.0dbru.zip && unzip instantclient-basic-linux.x64-19.6.0.0.0dbru.zip && cp instantclient_19_6/ojdbc8.jar . && rm -rf instantclient-basic-linux.x64-19.6.0.0.0dbru.zip instantclient_19_6"
  extraEnvironmentVariables:
    - name: LD_LIBRARY_PATH
      value: /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib
Install Artifactory with the values file you created.
Artifactory
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values-oracle.yaml

Artifactory HA
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values-oracle.yaml
If this is an upgrade from 6.x to 7.x, add the same preStartCommand under artifactory.migration.preStartCommand.
Using a Pre-existing Kubernetes Secret
If you store your database credentials in a pre-existing Kubernetes Secret, you can specify them via database.secrets instead of database.user and database.password.
# Use a secret containing the database credentials
postgresql:
  enabled: false
database:
  secrets:
    user:
      name: "my-secret"
      key: "user"
    password:
      name: "my-secret"
      key: "password"
    url:
      name: "my-secret"
      key: "url"
Infrastructure Customization
Artifactory Memory and CPU Resources
The Artifactory Helm chart comes with support for configuring resource requests and limits for Artifactory, Nginx, and PostgreSQL. By default, these settings are commented out. It is highly recommended to set these so you have full control of the allocated resources and limits. The Artifactory Java memory parameters can (and should) also be set to match the allocated resources, using artifactory.javaOpts.xms and artifactory.javaOpts.xmx.
# Example of setting resource requests and limits to all pods (including passing java memory settings to Artifactory)
artifactory:
  javaOpts:
    xms: "1g"
    xmx: "4g"
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "2"
nginx:
  resources:
    requests:
      memory: "250Mi"
      cpu: "100m"
    limits:
      memory: "500Mi"
      cpu: "250m"
# Example of setting resource requests and limits to all pods (including passing java memory settings to Artifactory)
artifactory:
  primary:
    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "4Gi"
        cpu: "2"
  node:
    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "4Gi"
        cpu: "2"
  javaOpts:
    xms: "1g"
    xmx: "4g"
initContainers:
  resources:
    requests:
      memory: "64Mi"
      cpu: "10m"
    limits:
      memory: "128Mi"
      cpu: "250m"
postgresql:
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "1Gi"
      cpu: "1"
nginx:
  resources:
    requests:
      memory: "250Mi"
      cpu: "100m"
    limits:
      memory: "500Mi"
      cpu: "250m"
Although it is possible to set resource limits and requests this way, it is recommended to use the pre-built values files for small, medium, and large installations, and change them according to your needs if necessary.
Custom Docker Registry
If you need to pull your Docker images from a private registry, you will need to create a Kubernetes Docker registry secret and pass it to Helm during installation/upgrade.
# Create a Docker registry secret called 'regsecret'
kubectl create secret docker-registry regsecret --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

helm upgrade --install artifactory --set imagePullSecrets=regsecret --namespace artifactory jfrog/artifactory
Bootstrapping Artifactory
You can bootstrap the Artifactory admin password and the Artifactory configuration when using Helm Charts.
Bootstrapping the Artifactory Admin Password
You can bootstrap the admin user password as described in Recreating the Default Admin User.
Create admin-creds-values.yaml and provide the IP (by default 127.0.0.1) and password.

artifactory:
  admin:
    ip: "<IP_RANGE>" # Example: "*" to allow access from anywhere
    username: "admin"
    password: "<PASSWD>"
Apply the admin-creds-values.yaml file.

Artifactory
helm upgrade --install artifactory --namespace artifactory jfrog/artifactory -f admin-creds-values.yaml

Artifactory HA
helm upgrade --install artifactory-ha --namespace artifactory-ha jfrog/artifactory-ha -f admin-creds-values.yaml
Restart the Artifactory pod (kubectl delete pod <pod_name>).
Bootstrapping the Artifactory Configuration
You can use Helm Charts to bootstrap the Artifactory global and security configuration. To do so, you will need an Artifactory subscription.
Create a bootstrap-config.yaml file with an artifactory.config.import.xml and a security.import.xml, as shown below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-bootstrap-config
data:
  artifactory.config.import.xml: |
    <config contents>
  security.import.xml: |
    <config contents>
Create a configMap in Kubernetes.
kubectl apply -f bootstrap-config.yaml
Pass the configMap to Helm using one of the following options.
Artifactory
helm upgrade --install artifactory --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory jfrog/artifactory

Artifactory HA
helm upgrade --install artifactory-ha --set artifactory.license.secret=artifactory-license,artifactory.license.dataKey=art.lic,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory-ha jfrog/artifactory-ha

or

Artifactory
helm upgrade --install artifactory --set artifactory.license.licenseKey=<LICENSE_KEY>,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory jfrog/artifactory

Artifactory HA
helm upgrade --install artifactory-ha --set artifactory.license.licenseKey=<LICENSE_KEY>,artifactory.configMapName=my-release-bootstrap-config --namespace artifactory-ha jfrog/artifactory-ha
For more information, see Bootstrapping the Artifactory Global Configuration and Bootstrapping the Artifactory Security Configuration.
Copying Configuration Files for Every Startup
Files stored in the /artifactory-extra-conf directory are only copied to the ARTIFACTORY_HOME/etc directory upon the first startup. In some cases, you might want your configuration files to be copied to the ARTIFACTORY_HOME/etc directory on every startup.
For example:
The binarystore.xml file: With the default behavior, your binarystore.xml configuration is only copied on the first startup, which means that changes you make over time to the binarystore.xml configuration will not be applied.
To make sure your changes are applied on every startup, create a YAML block with the following values:
artifactory:
  copyOnEveryStartup:
    - source: /artifactory_bootstrap/binarystore.xml
      target: etc/artifactory
From Artifactory version 7.46.x (Helm chart version 107.46.x), binarystore.xml is copied to etc/artifactory in the copy-system-configurations init container. Therefore, you do not need to use copyOnEveryStartup for configuring binarystore.xml.
Install the Helm chart with the values file you created:

Artifactory
helm upgrade --install artifactory --namespace artifactory jfrog/artifactory -f values.yaml

Artifactory HA
helm upgrade --install artifactory-ha --namespace artifactory-ha jfrog/artifactory-ha -f values.yaml
Any custom configuration file you use to configure Artifactory, such as logback.xml:
Create a configMap with your logback.xml configuration.
Next, create a values.yaml file with the following values:

artifactory:
  ## Create a volume pointing to the config map with your configuration file
  customVolumes: |
    - name: logback-xml-configmap
      configMap:
        name: logback-xml-configmap
  customVolumeMounts: |
    - name: logback-xml-configmap
      mountPath: /tmp/artifactory-logback/
  copyOnEveryStartup:
    - source: /tmp/artifactory-logback/*
      target: etc/artifactory
Install the Helm chart with the values file you created:

Artifactory
helm upgrade --install artifactory --namespace artifactory jfrog/artifactory -f values.yaml

Artifactory HA
helm upgrade --install artifactory-ha --namespace artifactory-ha jfrog/artifactory-ha -f values.yaml
Monitoring and Logging
Artifactory JMX Configuration
Artifactory exposes MBeans under the org.jfrog.artifactory
domain, which enables you to monitor repositories, executor pools, storage and HTTP connection pools. To learn more, see Artifactory JMX MBeans.
To enable JMX in your deployment, add the following to your values.yaml.
Artifactory
artifactory:
  javaOpts:
    jmx:
      enabled: true
Artifactory HA
artifactory:
  primary:
    javaOpts:
      jmx:
        enabled: true
  node:
    javaOpts:
      jmx:
        enabled: true
This will enable access to Artifactory with JMX on the default port 9010; to change the port to your port of choice, use the setting artifactory.javaOpts.jmx.port.
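For example, to enable JMX on port 9012 instead of the default (the port value here is purely illustrative):
artifactory:
  javaOpts:
    jmx:
      enabled: true
      port: 9012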
To connect to Artifactory using JMX with jconsole (or any similar tool) installed on your computer, follow these steps.
Enable JMX as described above and change the Artifactory service to be of type LoadBalancer.
Artifactory
artifactory:
  service:
    type: LoadBalancer
  javaOpts:
    jmx:
      enabled: true
Artifactory HA
artifactory:
  service:
    type: LoadBalancer
  primary:
    javaOpts:
      jmx:
        enabled: true
  node:
    javaOpts:
      jmx:
        enabled: true
The default setting for java.rmi.server.hostname is the service name (this is also configurable using artifactory.javaOpts.jmx.host). To connect to Artifactory with jconsole, map the Artifactory Kubernetes service IP to the service name using your hosts file, as per the example below.
Artifactory
<artifactory-service-ip> artifactory-<release-name>
Artifactory HA
<artifactory-primary-service-ip> artifactory-ha-<release-name>-primary
<artifactory-node-service-ip> <release-name>
Launch jconsole with the service address and port.
Artifactory
jconsole artifactory-<release-name>:<jmx-port>
Artifactory HA
jconsole artifactory-ha-<release-name>-primary:<primary-jmx-port>
jconsole <release-name>:<node-jmx-port>
Artifactory Filebeat
If you want to collect logs from your Artifactory installation and send them to a central log collection solution like ELK, you can use this option.
Create a filebeat.yaml
file with the following content.
filebeat:
  enabled: true
  logstashUrl: <YOUR_LOGSTASH_URL>
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"
Optionally, you can customize the filebeat.yaml to send its output to a different location, and then use it with your Helm installation/upgrade.
filebeat:
  enabled: true
  filebeatYml: |
    <YOUR_CUSTOM_FILEBEAT_YML>
helm upgrade --install artifactory -f filebeat.yaml --namespace artifactory jfrog/artifactory
helm upgrade --install artifactory-ha -f filebeat.yaml --namespace artifactory-ha jfrog/artifactory-ha
This will begin sending your Artifactory logs to the log aggregator of your choice, based on your configuration in the filebeatYml.
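As a rough illustration, a custom filebeatYml might look like the following sketch; the log path and Logstash host are assumptions and must be adapted to your environment:
filebeat:
  enabled: true
  filebeatYml: |
    filebeat.inputs:
      - type: log
        paths:
          # Assumed Artifactory log location inside the container
          - /opt/jfrog/artifactory/var/log/*.log
    output.logstash:
      hosts: ["my-logstash:5044"]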
Installing Artifactory and Artifactory HA with Nginx and Terminating SSL in the Nginx Service (LoadBalancer)
You can install the Helm chart while performing SSL offload in the LoadBalancer layer of Nginx, for example, by using AWS ACM certificates to offload SSL at the load balancer. Simply add the following to an artifactory-ssl-values.yaml file, and then use it with your Helm installation/upgrade.
nginx:
  https:
    enabled: false
  service:
    ssloffload: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xx-xxxx:xxxxxxxx:certificate/xxxxxxxxxxxxx"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
helm upgrade --install artifactory -f artifactory-ssl-values.yaml --namespace artifactory jfrog/artifactory
helm upgrade --install artifactory-ha -f artifactory-ssl-values.yaml --namespace artifactory-ha jfrog/artifactory-ha
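After the upgrade completes, you can check the external address assigned to the Nginx LoadBalancer service with a standard lookup:
kubectl get svc --namespace artifactory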
Advanced Database Options for Insight
Deploying PostgreSQL
There are cases where you will want to use an external PostgreSQL rather than the bundled one. For more information, see Creating the Insight PostgreSQL Database.
This can be done with the following parameters.
...
--set postgresql.enabled=false \
--set database.url=${DB_URL} \
--set database.user=${DB_USER} \
--set database.password=${DB_PASSWORD} \
...
You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.
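Putting it together, a complete invocation might look like the following sketch (the chart name, release name, and namespace are assumptions; adjust them to your installation):
helm upgrade --install insight --namespace insight jfrog/insight \
  --set postgresql.enabled=false \
  --set database.url=${DB_URL} \
  --set database.user=${DB_USER} \
  --set database.password=${DB_PASSWORD}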
Using Existing Secrets for PostgreSQL Connection Details
You can use existing secrets for managing the database connection details. Pass them to the install command with the following parameters.
export POSTGRES_USERNAME_SECRET_NAME=
export POSTGRES_USERNAME_SECRET_KEY=
export POSTGRES_PASSWORD_SECRET_NAME=
export POSTGRES_PASSWORD_SECRET_KEY=
...
--set database.secrets.user.name=${POSTGRES_USERNAME_SECRET_NAME} \
--set database.secrets.user.key=${POSTGRES_USERNAME_SECRET_KEY} \
--set database.secrets.password.name=${POSTGRES_PASSWORD_SECRET_NAME} \
--set database.secrets.password.key=${POSTGRES_PASSWORD_SECRET_KEY} \
...
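If these secrets do not exist yet, they can be created ahead of the install; the secret name and keys below are placeholders:
kubectl create secret generic postgres-creds \
  --from-literal=username=${DB_USER} \
  --from-literal=password=${DB_PASSWORD}
export POSTGRES_USERNAME_SECRET_NAME=postgres-creds
export POSTGRES_USERNAME_SECRET_KEY=username
export POSTGRES_PASSWORD_SECRET_NAME=postgres-creds
export POSTGRES_PASSWORD_SECRET_KEY=password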
Deploying Elasticsearch
By default, the Insight Helm Chart deploys an Elasticsearch pod. It also configures Docker host kernel parameters using a privileged init container. In some installations, you may not be allowed to run privileged containers, in which case you can disable the Docker host configuration by configuring the following parameter.
--set elasticsearch.configureDockerHost=false
There are cases where you will want to use an external Elasticsearch rather than the bundled one.
This can be done with the following parameters.
--set elasticsearch.enabled=false \
--set elasticsearch.url=${ES_URL} \
--set elasticsearch.username=${ES_USERNAME} \
--set elasticsearch.password=${ES_PASSWORD} \
Advanced Database Options for Mission Control
Deploying PostgreSQL
There are cases where you will want to use an external PostgreSQL rather than the bundled one. For more information, see Creating the Mission Control PostgreSQL Database.
This can be done with the following parameters.
...
--set postgresql.enabled=false \
--set database.url=${DB_URL} \
--set database.user=${DB_USER} \
--set database.password=${DB_PASSWORD} \
...
You must set postgresql.enabled=false for the chart to use the database.* parameters. Without it, they will be ignored.
Using Existing Secrets for PostgreSQL Connection Details
You can use existing secrets for managing the database connection details. Pass them to the install command with the following parameters.
export POSTGRES_USERNAME_SECRET_NAME=
export POSTGRES_USERNAME_SECRET_KEY=
export POSTGRES_PASSWORD_SECRET_NAME=
export POSTGRES_PASSWORD_SECRET_KEY=
...
--set database.secrets.user.name=${POSTGRES_USERNAME_SECRET_NAME} \
--set database.secrets.user.key=${POSTGRES_USERNAME_SECRET_KEY} \
--set database.secrets.password.name=${POSTGRES_PASSWORD_SECRET_NAME} \
--set database.secrets.password.key=${POSTGRES_PASSWORD_SECRET_KEY} \
...
Deploying Elasticsearch
By default, the Mission Control Helm Chart deploys an Elasticsearch pod. It also configures Docker host kernel parameters using a privileged init container. In some installations, you may not be allowed to run privileged containers, in which case you can disable the Docker host configuration by configuring the following parameter.
--set elasticsearch.configureDockerHost=false
There are cases where you will want to use an external Elasticsearch rather than the bundled one.
This can be done with the following parameters.
--set elasticsearch.enabled=false \
--set elasticsearch.url=${ES_URL} \
--set elasticsearch.username=${ES_USERNAME} \
--set elasticsearch.password=${ES_PASSWORD} \
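For example, a complete upgrade command pointing Mission Control at an external Elasticsearch might look like this sketch (release name and namespace follow the earlier examples):
helm upgrade --install mission-control --namespace mission-control jfrog/mission-control \
  --set elasticsearch.enabled=false \
  --set elasticsearch.url=${ES_URL} \
  --set elasticsearch.username=${ES_USERNAME} \
  --set elasticsearch.password=${ES_PASSWORD}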
Advanced Options for Pipelines
Installing the Pipelines Chart with Ingress
Prerequisites
Before deploying Pipelines with Ingress, you will need to have the following in place:
- A running Kubernetes cluster
- An Artifactory or Artifactory HA with Enterprise+ License
- A precreated Generic repository named jfrogpipelines in Artifactory, with the maven-2-default layout
- A deployed Nginx-ingress controller
- [Optional] A deployed Cert-manager for automatic management of TLS certificates with Let's Encrypt
- [Optional] A TLS secret required for HTTPS access
Prepare the Configurations
Fetch the JFrog Pipelines Helm chart to obtain the required configuration files.
helm fetch jfrog/pipelines --untar
Next, edit the local copies of the values-ingress.yaml and values-ingress-passwords.yaml files with the required configuration values.
In the values-ingress.yaml file, edit the following (a sketch follows below):
- Artifactory URL
- Ingress hosts
- Ingress TLS secrets
In the values-ingress-passwords.yaml file, set the passwords uiUserPassword, postgresqlPassword, and auth.password, and also set masterKey and joinKey.
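As a rough, hypothetical sketch of the relevant values-ingress.yaml sections (the exact key names depend on your chart version, and the host name and secret name are placeholders; check the comments in the fetched file):
ingress:
  enabled: true
  hosts:
    - pipelines.example.com
  tls:
    - secretName: pipelines-tls
      hosts:
        - pipelines.example.com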
Install JFrog Pipelines
Run the install command.
kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
Using an External Secret for the Pipelines Password
The best practice for passwords is to use external secrets instead of storing passwords in values.yaml files.
Fill in the passwords, masterKey and joinKey, in values-ingress-passwords.yaml, and then create and install the external secrets.
## Generate pipelines-system-yaml secret
helm template --name-template pipelines pipelines/ -s templates/pipelines-system-yaml.yaml \
  -f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-database secret
helm template --name-template pipelines pipelines/ -s templates/database-secret.yaml \
  -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-rabbitmq-secret secret
helm template --name-template pipelines pipelines/ -s templates/rabbitmq-secret.yaml \
  -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
Install Pipelines.
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-ingress-external-secret.yaml
Setting up a Build Plane
To use Pipelines, you will need to set up a Build Plane. For more information, see the following:
- For Static VMs Node-pool setup, see Managing Node Pools
- For Dynamic VMs Node-pool setup, see Managing Dynamic Node Pools
- For Kubernetes Node-pool setup, see Managing Dynamic Node Pools
Using an External PostgreSQL
If you want to use an external PostgreSQL, set postgresql.enabled=false and create a values-external-postgresql.yaml file with the YAML configuration below.
global:
  postgresql:
    user: db_username
    password: db_user_password
    host: db_host
    port: 5432
    database: db_name
    ssl: false # or true
# Internal PostgreSQL must be set to false
postgresql:
  enabled: false
Verify that the user db_username and the database db_name exist before running the Helm install/upgrade, for example as shown below.
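If they do not yet exist, they can be created with psql, for example (the host, credentials, and names are the placeholders used above):
psql -h db_host -U postgres -c "CREATE USER db_username WITH PASSWORD 'db_user_password';"
psql -h db_host -U postgres -c "CREATE DATABASE db_name WITH OWNER db_username;"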
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-postgresql.yaml
Using an External Vault
If you want to use an external Vault, set vault.enabled=false and create a values-external-vault.yaml file with the YAML configuration below.
vault:
  enabled: false
global:
  vault:
    ## Vault url examples
    # external one: https://vault.example.com
    # internal one running in the same Kubernetes cluster: http://vault-active:8200
    url: vault_url
    token: vault_token
    ## Set Vault token using existing secret
    # existingSecret: vault-secret
If you store an external Vault token in a pre-existing Kubernetes Secret, you can specify it via existingSecret.
To create a secret containing the Vault token:
kubectl create secret generic vault-secret --from-literal=token=${VAULT_TOKEN}
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-vault.yaml
Using an External system.yaml with an Existing Secret
This is for advanced use cases where users want to provide their own system.yaml to configure Pipelines. This will override the existing system.yaml in the values.yaml.
systemYamlOverride:
  ## You can use a pre-existing secret by specifying existingSecret
  existingSecret:
  ## The dataKey should be the name of the secret data key created.
  dataKey:
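The referenced secret must exist before the install. As a minimal sketch, assuming a local system.yaml (the secret name and data key are placeholders):
kubectl create secret generic pipelines-system-yaml --from-file=system.yaml=./system.yaml --namespace pipelines
systemYamlOverride:
  existingSecret: pipelines-system-yaml
  dataKey: system.yaml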
From chart version 2.2.0 and above, .Values.existingSecret is changed to .Values.systemYaml.existingSecret and .Values.systemYaml.dataKey.
From chart version 2.3.7 and above, .Values.systemYaml is changed to .Values.systemYamlOverride.
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-systemyaml.yaml
Using Vault in Production Environments
To use Vault securely, you must set the disablemlock setting in the values.yaml to false, as per the HashiCorp Vault recommendation.
For non-production environments it is acceptable to leave this value set to true. However, this opens a potential security issue where in-memory credentials could be swapped out to unencrypted disk. For this reason, we recommend you always set this value to false to ensure mlock is enabled.
Non-production
vault:
  disablemlock: true
Production (recommended)
vault:
  disablemlock: false
Uninstall and Deletion
This section details the procedures for uninstalling Artifactory and Xray.
Uninstalling Artifactory
Uninstall is supported only on Helm v3 and above.
Uninstall Artifactory using the following command.
helm uninstall artifactory && sleep 90 && kubectl delete pvc -l app=artifactory
Next, delete the storage bucket and SQL database.
gsutil rm -r gs://artifactory
gcloud sql instances delete artifactory
Deleting Artifactory
You do not need to uninstall Artifactory before deleting it.
Deleting Artifactory using the commands below will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion.
To delete Artifactory use the following command.
helm delete artifactory --namespace artifactory
This will completely delete your Artifactory deployment (Pro or HA cluster).
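If any persistent volume claims remain after the deletion, they can be removed explicitly with the same label selector used in the uninstall flow above (this permanently deletes the data):
kubectl delete pvc -l app=artifactory --namespace artifactory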
Deleting Xray
Deleting Xray using the commands below will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion.
To remove the Xray services and delete the data disks, use the following commands.
helm delete xray --namespace xray
# Remove the data disks
kubectl delete pvc -l release=xray
If Xray was installed without providing a value to rabbitmq.rabbitmqPassword/rabbitmq-ha.rabbitmqPassword (a password was autogenerated), follow these instructions.
Get the current password by running this command.
RABBITMQ_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
Upgrade the release by passing the previously auto-generated secret.
helm upgrade <myrelease> jfrog/xray --set rabbitmq.rabbitmqPassword=${RABBITMQ_PASSWORD}
# or, if using rabbitmq-ha:
helm upgrade <myrelease> jfrog/xray --set rabbitmq-ha.rabbitmqPassword=${RABBITMQ_PASSWORD}
If Xray was installed with all of the default values (e.g., with no user-provided values for RabbitMQ/PostgreSQL), follow these steps.
- Retrieve all current passwords (RabbitMQ/PostgreSQL) as explained in the above section.
Upgrade the release by passing the previously auto-generated secrets.
helm upgrade --install xray --namespace xray jfrog/xray --set rabbitmq-ha.rabbitmqPassword=<rabbit-password> --set postgresql.pos