Overview

This tutorial explains how to specify a kubeconfig for a Kubernetes Integration to authenticate to a self-hosted Kubernetes cluster for a dynamic node pool. You can use a cloud provider solution like EKS, GKE, or AKS, or an on-prem Kubernetes solution.

This tutorial assumes that you have a working knowledge of Docker and Kubernetes.

Configure a Kubernetes Service Account

You must configure a service account in Kubernetes to provide an identity for the build node processes that Pipelines will dynamically control.

This procedure will use your personal account to create the service account. Make sure your personal account has permissions to do this.

Verify Access to the Cluster

First, make sure you can authenticate to the cluster. This means you have a kubeconfig file that uses your personal account. You can verify this by running the following command on your local machine; you should see the config file listed:

ls -al $HOME/.kube
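
To confirm that this kubeconfig actually grants access, you can also ask kubectl which context it is using and query the cluster directly. These are standard kubectl commands; the context name and cluster details will differ for your cluster.

kubectl config current-context
kubectl cluster-info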

Author a service account spec

To create a service account on Kubernetes, you can use kubectl and a service account spec. Create a YAML file similar to the one below:

pipelines_k8s_sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipelines-k8s-pool   # <-- any name you'd like
  namespace: jfrog           # <-- the cluster namespace

Create the service account

You can create a service account by running the following command:

kubectl apply -f pipelines_k8s_sa.yml
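
To confirm that the account was created in the expected namespace, you can list it with kubectl; the name and namespace below match the spec above.

kubectl get serviceaccount pipelines-k8s-pool -n jfrog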

Get Tokens and IP from Kubernetes

Once the service account has been created, you will need to retrieve some key information from Kubernetes in order to configure it through a kubeconfig.

Fetch the name of the secrets used by the service account

This can be found by running the following command:

kubectl describe serviceaccount pipelines-k8s-pool -n jfrog
output
Name:               pipelines-k8s-pool
Namespace:          jfrog
Labels:             <none>
Annotations:        <none>

Image pull secrets: <none>
Mountable secrets:  pipelines-k8s-pool-token-h6pdj
Tokens:             pipelines-k8s-pool-token-h6pdj

Note the Mountable secrets string. This is the name of the secret that holds the token, and will be used in the next step.

Fetch the token from the secret

Using the Mountable secrets string, you can get the token used by the service account. Run the following command to extract this information:

kubectl describe secret pipelines-k8s-pool-token-h6pdj -n jfrog
output
Name:           pipelines-k8s-pool-token-h6pdj
Namespace:      jfrog
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=pipelines-k8s-pool
        kubernetes.io/service-account.uid=c2117d8e-3c2d-11e8-9ccd-42010a8a012f

Type:   kubernetes.io/service-account-token

Data
====
ca.crt:     1115 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNoaXBwYWJsZS1kZXBsb3ktdG9rZW4tN3Nwc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoic2hpcHBhYmxlLWRlcGxveSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMyMTE3ZDhlLTNjMmQtMTFlOC05Y2NkLTQyMDEwYThhMDEyZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnNoaXBwYWJsZS1kZXBsb3kifQ.ZWKrKdpK7aukTRKnB5SJwwov6PjaADT-FqSO9ZgJEg6uUVXuPa03jmqyRB20HmsTvuDabVoK7Ky7Uug7V8J9yK4oOOK5d0aRRdgHXzxZd2yO8C4ggqsr1KQsfdlU4xRWglaZGI4S31ohCApJ0MUHaVnP5WkbC4FiTZAQ5fO_LcCokapzCLQyIuD5Ksdnj5Ad2ymiLQQ71TUNccN7BMX5aM4RHmztpEHOVbElCWXwyhWr3NR1Z1ar9s5ec6iHBqfkp_s8TvxPBLyUdy9OjCWy3iLQ4Lt4qpxsjwE4NE7KioDPX2Snb6NWFK7lvldjYX4tdkpWdQHBNmqaD8CuVCRdEQ

Copy and save the token value. This will be used in your kubeconfig file.
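
If you prefer not to copy the token out of the describe output, one option is to read it from the secret with a jsonpath query and decode it. The secret name below is the Mountable secrets value from the previous step.

kubectl get secret pipelines-k8s-pool-token-h6pdj -n jfrog -o jsonpath='{.data.token}' | base64 --decode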

Get the certificate info for the cluster

Every cluster has a certificate authority (CA) certificate that clients use to verify and encrypt traffic to the API server. Fetch the certificate and write it to a file (for example, cluster-cert.txt) by running this command:

kubectl config view --flatten --minify > cluster-cert.txt
cat cluster-cert.txt
output
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lRZmo4VVMxNXpuaGRVbG15a3AvSVFqekFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSaVl6RTBOelV5WXkwMk9UTTFMVFExWldFdE9HTmlPUzFrWmpSak5tUXlZemd4TVRndwpIaGNOTVRnd05EQTVNVGd6TVRReVdoY05Nak13TkRBNE1Ua3pNVFF5V2pBdk1TMHdLd1lEVlFRREV5UmlZekUwCk56VXlZeTAyT1RNMUxUUTFaV0V0T0dOaU9TMWtaalJqTm1ReVl6Z3hNVGd3Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURIVHFPV0ZXL09odDFTbDBjeUZXOGl5WUZPZHFON1lrRVFHa3E3enkzMApPUEQydUZyNjRpRXRPOTdVR0Z0SVFyMkpxcGQ2UWdtQVNPMHlNUklkb3c4eUowTE5YcmljT2tvOUtMVy96UTdUClI0ZWp1VDl1cUNwUGR4b0Z1TnRtWGVuQ3g5dFdHNXdBV0JvU05reForTC9RN2ZpSUtWU01SSnhsQVJsWll4TFQKZ1hMamlHMnp3WGVFem5lL0tsdEl4NU5neGs3U1NUQkRvRzhYR1NVRzhpUWZDNGYzTk4zUEt3Wk92SEtRc0MyZAo0ajVyc3IwazNuT1lwWDFwWnBYUmp0cTBRZTF0RzNMVE9nVVlmZjJHQ1BNZ1htVndtejJzd2xPb24wcldlRERKCmpQNGVqdjNrbDRRMXA2WXJBYnQ1RXYzeFVMK1BTT2ROSlhadTFGWWREZHZyQWdNQkFBR2pJekFoTUE0R0ExVWQKRHdFQi93UUVBd0lDQkRBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCQwpHWWd0R043SHJpV2JLOUZtZFFGWFIxdjNLb0ZMd2o0NmxlTmtMVEphQ0ZUT3dzaVdJcXlIejUrZ2xIa0gwZ1B2ClBDMlF2RmtDMXhieThBUWtlQy9PM2xXOC9IRmpMQVZQS3BtNnFoQytwK0J5R0pFSlBVTzVPbDB0UkRDNjR2K0cKUXdMcTNNYnVPMDdmYVVLbzNMUWxFcXlWUFBiMWYzRUM3QytUamFlM0FZd2VDUDNOdHJMdVBZV2NtU2VSK3F4TQpoaVRTalNpVXdleEY4cVV2SmM3dS9UWTFVVDNUd0hRR1dIQ0J2YktDWHZvaU9VTjBKa0dHZXJ3VmJGd2tKOHdxCkdsZW40Q2RjOXJVU1J1dmlhVGVCaklIYUZZdmIxejMyVWJDVjRTWUowa3dpbHE5RGJxNmNDUEI3NjlwY0o1KzkKb2cxbHVYYXZzQnYySWdNa1EwL24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://35.203.181.169
  name: gke_jfrog-200320_us-west1-a_cluster
contexts:
- context:
    cluster: gke_jfrog-200320_us-west1-a_cluster
    user: gke_jfrog-200320_us-west1-a_cluster
  name: gke_jfrog-200320_us-west1-a_cluster
current-context: gke_jfrog-200320_us-west1-a_cluster
kind: Config
preferences: {}
users:
- name: gke_jfrog-200320_us-west1-a_cluster
  user:
    auth-provider:
      config:
        access-token: ya29.Gl2YBba5duRR8Zb6DekAdjPtPGepx9Em3gX1LAhJuYzq1G4XpYwXTS_wF4cieZ8qztMhB35lFJC-DJR6xcB02oXXkiZvWk5hH4YAw1FPrfsZWG57x43xCrl6cvHAp40
        cmd-args: config config-helper --format=json
        cmd-path: /Users/ambarish/google-cloud-sdk/bin/gcloud
        expiry: 2018-04-09T20:35:02Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Copy and save two pieces of information from here:

  • certificate-authority-data  
  • server
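
As an alternative to copying these values out of the file by hand, you can print just the two fields with jsonpath queries against the same flattened view (these assume a single cluster entry, which is what --minify produces):

kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.server}'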



Configuring Permissions in Kubernetes

Kubernetes includes a number of resources, such as roles and role bindings, that can be used to divide your cluster into namespaces and limit access to namespaced resources to specific accounts.

This section describes how to define permissions in Kubernetes using roles and role bindings.

Creating a Role

A Role grants permissions within a particular namespace, which must be specified when the Role is created. Each Role has a rules section that defines the resources the rules apply to and the operations that are allowed; the service account needs these permissions to run builds within Kubernetes.

For example, the following Role in the jfrog namespace allows read/write access to the listed resources in that namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: jfrog
  name: pipelines-builder-role
rules:
- apiGroups: ["*"] # "*" matches all API groups; "" would indicate only the core API group
  resources: ["pods", "secrets", "configmaps", "deployments", "services"]
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
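
As with the service account spec, the Role is applied with kubectl. The filename below is only an example; use whatever name you saved the manifest under.

kubectl apply -f pipelines_builder_role.yml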

Creating a Service Account

Kubernetes uses service accounts to authenticate and authorize requests made by pods. Newly created pods automatically use the 'default' service account of their namespace unless another account is specified. You can, however, create your own service account.

The following example creates a service account called pipelines-k8s-pool in the jfrog namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipelines-k8s-pool
  namespace: jfrog

Creating a Role Binding

The service account that was created in the previous section can now be given the Role that was created earlier using a RoleBinding in the jfrog namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jfrog-builder-rb
  namespace: jfrog
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipelines-builder-role
subjects:
  - kind: ServiceAccount
    name: pipelines-k8s-pool
    namespace: jfrog
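
After applying the RoleBinding (the filename below is only an example), you can check that the service account ends up with the expected access by impersonating it with kubectl auth can-i:

kubectl apply -f pipelines_builder_rb.yml
kubectl auth can-i create pods --namespace jfrog --as system:serviceaccount:jfrog:pipelines-k8s-pool
kubectl auth can-i list secrets --namespace jfrog --as system:serviceaccount:jfrog:pipelines-k8s-pool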

Add a Kubernetes Administration Integration

You must add a Kubernetes integration as an administration integration:

  • From the JFrog Platform Administration module, go to Pipelines | Integrations.
  • Click Add an Integration.
  • In the resulting Add New Integration display, click the Integration Type field and select Kubernetes from the dropdown list.
  • Enter a Name for the Kubernetes integration.
  • Paste in a kubeconfig specification as described below.
  • Click Create to finish adding the Kubernetes integration.

Specify a kubeconfig

From the steps in the prior sections, you should have the following pieces of information:

  • <token>
  • <certificate-authority-data>
  • <server>

The kubeconfig specification you paste into the Kube Config setting should follow this format:

apiVersion: v1
kind: Config
users:
- name: pipelines-k8s-pool                                               # <-- Your service account name
  user:
    token: <token>
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-data>
    server: <server>
  name: self-hosted-cluster
contexts:
- context:
    cluster: self-hosted-cluster
    user: pipelines-k8s-pool                                               # <-- Your service account name
    namespace: jfrog                                                       # <-- The namespace you defined
  name: pipelines_k8s_context
current-context: pipelines_k8s_context
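
Before pasting this into the integration, it can help to confirm that the assembled kubeconfig works on its own. One way to do this is to save it to a local file (the filename below is just an example) and point kubectl at it:

kubectl --kubeconfig ./pipelines_kubeconfig.yml get pods -n jfrog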


Create a Dynamic Node Pool

Once you have successfully added the Kubernetes administration integration, you can add a dynamic node pool that uses it.

