The JFrog Pipelines and Dynatrace reference design uses a series of built-in integrations and custom bash scripts within the JFrog Pipelines platform to build a Docker image and deploy it to a Kubernetes cluster instrumented with the Dynatrace OneAgent Operator. During the deployment, Dynatrace deployment events are sent to the associated monitored service in the runtime environment, with the important context and a hyperlink back to the JFrog pipeline job that performed the deployment.
The following Artifactory repositories are used in the reference design:
Repository Name | Type | Description
art-docker-dynatrace | Docker | Stores a collection of Docker images created by the pipeline.
art-helm-charts-dynatrace | Helm | Stores a collection of Helm charts created by the pipeline.
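The repositories can be created in the Artifactory UI, or scripted against the Artifactory repository REST API. Below is a minimal sketch, assuming an admin-permission access token and your Artifactory base URL (both are placeholders here):

# Create the local Docker repository
curl -X PUT "https://YOUR-ACCOUNT.jfrog.io/artifactory/api/repositories/art-docker-dynatrace" \
  -H "Authorization: Bearer <ADMIN_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"rclass": "local", "packageType": "docker"}'

# Create the local Helm repository
curl -X PUT "https://YOUR-ACCOUNT.jfrog.io/artifactory/api/repositories/art-helm-charts-dynatrace" \
  -H "Authorization: Bearer <ADMIN_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"rclass": "local", "packageType": "helm"}'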
Integrations connect Pipelines to an external service or tool. Each integration type defines the endpoint, credentials, and any other configuration details required for Pipelines to exchange information with the service. Below are the integrations used in this reference design:
Integration Name | Type | Description
artifactoryIntegration | Artifactory | Stores the Artifactory URL and token.
k8sIntegration | Kubernetes | Stores the kubeconfig YAML used to connect to the Kubernetes cluster.
githubIntegration | GitHub | Stores the GitHub URL and user access token for the GitHub account where the pipeline code is stored. The GitHub user must have admin rights for the Pipelines repository.
dynatraceIntegration | Generic | Stores the Dynatrace instance URL and API token. The API token is configured with the permissions for the APIs it calls; the permissions required for the reference design are defined later in this document.
The reference JFrog pipeline executes the pipeline steps shown in the picture below.
In preparation for setting up the pipelines and running them, you will first need to make a copy of the reference design GitHub code repository. This assumes you have a GitHub account.
A Kubernetes cluster is not strictly required for a JFrog and Dynatrace integration, but the reference pipeline deploys a containerized application with Helm, so this walkthrough uses one.
This demo uses Google Kubernetes Engine (GKE), provisioned with the default settings from the Google Cloud console.
Once the cluster is provisioned, use the Google Cloud web console to run these commands, which configure kubectl to connect to the cluster and list its nodes.
gcloud container clusters get-credentials <CLUSTER NAME> --zone <ZONE> --project <PROJECT>
kubectl get nodes
This step creates the service account that is needed in the next section as part of the k8s pipeline integration setup.
From the Cloud Shell, clone the demo repository. For example:
git clone https://github.com/dt-demos/jfrog-pipelines-dynatrace-example.git
Navigate into the setup folder.
cd jfrog-pipelines-dynatrace-example/setup
Run this script, which creates the Kubernetes service account, adds the namespace where the sample application will be deployed, and generates a kubeconfig file for the GKE cluster.
./createKubernetesServiceAccounts.sh
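If you would like to review what the script will do before running it, its core steps look roughly like the sketch below. The namespace matches this guide, but the account name and binding here are illustrative, not a copy of the actual script:

# Create the namespace the sample application deploys into
kubectl create namespace dev
# Create a service account for the JFrog Pipelines deployment steps
kubectl -n dev create serviceaccount jfrog-deploy
# Bind it to a role so it can deploy workloads (illustrative binding)
kubectl create clusterrolebinding jfrog-deploy-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=dev:jfrog-deploy
# Extract the service account token used to build the kubeconfig file
TOKEN_SECRET=$(kubectl -n dev get serviceaccount jfrog-deploy -o jsonpath='{.secrets[0].name}')
kubectl -n dev get secret "$TOKEN_SECRET" -o jsonpath='{.data.token}' | base64 --decode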
This step creates the Kubernetes role that is needed by Dynatrace to call the Kubernetes API.
Assuming you are still in the “jfrog-pipelines-dynatrace-example/setup” folder within the Google Cloud Shell, run this command to add the role.
kubectl create -f dynatrace-oneagent-metadata-viewer.yaml
You can verify that the “dynatrace-oneagent-metadata-viewer” role was created using this command.
kubectl -n dev get role
See the Dynatrace Documentation for more details and the ways this role can be configured.
This step stores the JFrog Docker repository credentials as a Kubernetes secret. This secret is needed by Helm as it pulls the Docker image during Helm deployments.
From the Google Cloud Shell, run this command to log in to your Artifactory Docker repository.
docker login YOUR-ACCOUNT.jfrog.io
To export the saved credentials into a Kubernetes secret, run this command.
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR-ACCOUNT.jfrog.io> \
  --docker-username=<username> \
  --docker-password=<Artifactory_API_key> \
  --docker-email=<email>
See the Kubernetes Documentation for more details.
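To confirm the secret was stored correctly, you can decode it back out of the cluster. This assumes the secret was created in the current namespace, as in the command above:

kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode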
One key Dynatrace advantage is ease of activation. For Kubernetes, the Dynatrace Operator is designed specifically to handle the lifecycle of the Dynatrace OneAgent, Kubernetes API monitoring, OneAgent traffic routing, and all future containerized components such as the forthcoming extension framework.
Operator setup is typically a one-time activity during Kubernetes cluster setup, and the quickest way to deploy the Dynatrace Operator is the deployment wizard within the Dynatrace UI.
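The wizard generates the exact commands for your environment; they follow a pattern similar to this sketch (the manifest URL, secret contents, and DynaKube file name here are illustrative, so use what the wizard gives you):

# Create the namespace and deploy the Dynatrace Operator
kubectl create namespace dynatrace
kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes.yaml
# Store the API and PaaS tokens from your Dynatrace environment
kubectl -n dynatrace create secret generic dynakube \
  --from-literal="apiToken=<API_TOKEN>" \
  --from-literal="paasToken=<PAAS_TOKEN>"
# Apply the DynaKube custom resource downloaded from the wizard
kubectl apply -f dynakube.yaml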
After processing completes, run this command from the Google Cloud web shell. Verify that all pods are running, as shown below, before moving to the next section.
kubectl -n dynatrace get pods
NAME                                 READY   STATUS    RESTARTS   AGE
dynakube-classic-gkt9f               1/1     Running   0          4d3h
dynakube-classic-rnndv               1/1     Running   0          4d3h
dynakube-classic-s7v4l               1/1     Running   0          4d3h
dynakube-kubemon-0                   1/1     Running   0          4d3h
dynakube-routing-0                   1/1     Running   0          4d3h
dynatrace-operator-8b89765d5-znzd6   1/1     Running   10         5d3h
Log in to Dynatrace and review each of these pages to verify that the Kubernetes cluster is being monitored.
Within Dynatrace, create an API Token.
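This token is what the pipeline uses to push deployment events. For reference, a deployment event call against the Dynatrace Events API v1 looks roughly like the sketch below; the tag rule values are assumptions for the demo app, not a copy of the pipeline's actual script:

curl -X POST "https://<YOUR_ENVIRONMENT_URL>/api/v1/events" \
  -H "Authorization: Api-Token <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": "demoapp deployment",
        "deploymentVersion": "1.0.0",
        "source": "JFrog Pipelines",
        "customProperties": { "JFrog pipeline": "<PIPELINE_RUN_URL>" },
        "attachRules": {
          "tagRule": [{
            "meTypes": ["SERVICE"],
            "tags": [{ "context": "CONTEXTLESS", "key": "app", "value": "demoapp" }]
          }]
        }
      }'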
To complete the setup of the pipeline, the JFrog integrations, repositories, and a pipeline source need to be added.
The reference pipeline within the repo, pipelines.yml, assumes the integration and repository names match those from the tables in the Design Reference section, as described below. You are welcome to use different names, but you must adjust pipelines.yml accordingly or the pipeline will fail.
From the Administration page, select Integrations and then click the Add Integration button. For the k8sIntegration, use the contents of the kubeconfig file for your cloud provider generated in the previous setup section. The new integrations should look like this:
The new repositories should look like this:
Create a new GitHub Pipeline source with your new repository using the GitHub integration created in the previous step.
Once the setup is complete, the pipeline can be run manually from within the JFrog console under the My Pipelines tab within the Applications panel. The pipeline executes all the steps sequentially, and the completed pipeline will look like this:
You can expand each step to review the details, but here are a few other things to review.
Within JFrog, expand the Artifactory repository and review the updated charts and custom properties as shown below.
Within JFrog, expand the Artifactory repository and review the published image.
First obtain the public IP address from the Kubernetes service using this command:
kubectl -n dev get service
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)          AGE
demoapp   LoadBalancer   10.84.2.18   34.133.103.200   8080:32602/TCP   5d
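If you prefer to capture the address in a script, the external IP can also be pulled directly with jsonpath; this sketch assumes the service name and namespace shown above:

IP=$(kubectl -n dev get service demoapp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://$IP:8080"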
Using the example above, open the application on port 8080 at http://34.133.103.200:8080. The application will look like this:
In order for Dynatrace to fully monitor services and applications, there need to be transactions flowing through the application. To make this easy, a simple script is provided in the repo that sends curl requests to the various application URLs in a loop. To run this script, open the Google web shell and run these commands:
cd ~/jfrog-pipelines-dynatrace-example/scripts
./sendSomeTraffic.sh
sendSomeTraffic.sh determines the public IP for the application and outputs the loop status as shown below. To stop the script, press Ctrl-C. A sketch of the script's core loop follows the sample output.
Calling http://111.111.111.111:8080...
loop 1
200
200
200
200
loop 2
200
200
200
200
loop 3
200
200
...
...
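If you are curious what the script is doing, its core loop is roughly the sketch below. The endpoint paths here are illustrative assumptions; the actual script in the repo is the source of truth:

# Look up the application's external IP, then curl it in a loop
IP=$(kubectl -n dev get service demoapp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Calling http://$IP:8080..."
count=1
while true; do
  echo "loop $count"
  # -w prints just the HTTP status code, matching the output above
  for path in "/" "/api/invoke" "/api/echo" "/api/version"; do
    curl -s -o /dev/null -w "%{http_code}\n" "http://$IP:8080$path"
  done
  count=$((count+1))
  sleep 2
done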
Within Dynatrace, navigate to the Releases menu to open the release inventory dashboard. On this dashboard is the demo application, its auto-detected version, and its deployment event. Below are the expanded details for an example deployment event with its associated metadata, including the URL back to the JFrog pipeline that performed the deployment.
Within Dynatrace, navigate to the Kubernetes menu and drill into the workload for the demo application. As shown below, the demoapp workload dashboard shows overall utilization metrics with links to drill into the specific process.
Within Dynatrace, navigate to the Services menu and open up the DemoNodeJsApp as shown below. This dashboard shows everything you need to know about the service including the time series metrics for the requests coming from the sendSomeTraffic.sh script.
The sample app comes with built-in "feature" behaviors: if you launch the app and tell it to run as feature 1, 2, or 3, it shows slightly different behavior. A feature is set using a Docker build argument that sets an environment variable the application code looks for. You can read more about this in the demoapp README file.
To change feature numbers, adjust the FEATURE_NUMBER environment variable value in the pipelines.yml file and commit the code change. JFrog will automatically re-run the pipeline after any code commit, so just monitor the progress of the pipeline and then go back to review the Dynatrace release inventory, events, and JFrog artifacts. Below is an example of what the change should look like:
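As a concrete sketch, the edit and commit could be done from the Cloud Shell like this (the sed pattern assumes FEATURE_NUMBER is set as a quoted value in pipelines.yml; adjust it to match the actual format in the file):

cd ~/jfrog-pipelines-dynatrace-example
# Change the feature flag value in the pipeline definition
sed -i 's/FEATURE_NUMBER: "1"/FEATURE_NUMBER: "2"/' pipelines.yml
# Commit and push to trigger an automatic pipeline run
git add pipelines.yml
git commit -m "Switch demo app to feature 2"
git push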
If you need help with this integration, contact partner_support@jfrog.com.