How it Works
The JFrog Pipelines and Dynatrace reference design leverages a series of built-in integrations and custom bash scripts available within the JFrog Pipelines platform that enable users to build a Docker image and deploy it to a Kubernetes cluster instrumented with the Dynatrace OneAgent Operator. As part of the deployment, Dynatrace deployment events are sent to the associated service being monitored in the runtime environment, with context and a hyperlink back to the JFrog pipeline job that performed the deployment.
Reference design
JFrog Repositories
The following set of Artifactory Repositories are used in the reference design:
Repository Name | Type | Description
art-docker-dynatrace | Docker (local) | Stores a collection of Docker images created by the pipeline.
art-helm-charts-dynatrace | Helm (local) | Stores a collection of Helm charts created by the pipeline.
Integrations
Integrations connect Pipelines to an external service/tool. Each integration type defines the endpoint, credentials and any other configuration detail required for Pipelines to exchange information with the service. Below are the integrations used in this reference design:
Integration Name | Type | Description
artifactoryIntegration | Artifactory | Stores the Artifactory URL and token.
k8sIntegration | Kubernetes | Stores the kubeconfig YAML used to connect to the Kubernetes cluster.
githubIntegration | GitHub | Stores the GitHub URL and user access token for the GitHub account where the pipeline code is stored. The GitHub user must have admin rights for the Pipelines repository.
dynatraceIntegration | Generic Integration | Stores the Dynatrace instance URL and API token. The API token must be configured with the permissions for the APIs it calls; the reference design API permissions are defined later in this document.
Pipeline Steps
The reference JFrog pipeline executes the following steps, as shown in the JFrog Pipelines view below.
- DockerBuild: Native JFrog Pipelines step that builds a Docker image from a Dockerfile and source code within the githubIntegration repo. This step uses the artifactoryIntegration to connect to the Artifactory instance so that the built image is available to the DockerPush step.
- DockerPush: Native JFrog Pipelines step that publishes the Docker image to an Artifactory local Docker repository. Uses the artifactoryIntegration to push the image to the Artifactory instance.
- HelmPublish: Native JFrog Pipelines step that uses the Helm chart in the githubIntegration repo and modifies it with the desired Docker image and tag to deploy. Pushes the updated Helm Chart to the Artifactory local Helm repository.
- HelmDeploy: Native JFrog Pipelines step that installs the published Helm chart to the Kubernetes cluster defined in the k8sIntegration. This step will also output the status of the deployed pods.
- AppUpCheck: Using the JFrog Pipelines Bash step type, this step calls a custom bash script that checks whether the passed-in URL is reachable. It tries a few times with a delay between attempts and returns with 'exit 1' if the URL is not reachable.
- ConfigureDynatrace: Using the JFrog Pipelines Bash step type, this step calls a custom bash script that will run a Docker image with the Dynatrace monitoring as code CLI. The githubIntegration repo is mounted as a volume so that the monitoring-as-code CLI can access the YAML and JSON configuration files that it will process. Dynatrace credentials are read from the dynatraceIntegration.
- ModifyHelmChartProperties: Using the JFrog Pipelines Bash step type, this step calls a custom bash script that calls the JFrog API to add custom properties to the Helm chart in the repository. This step uses data from the previous step and adds it to the chart file properties. Artifactory credentials are read from the artifactoryIntegration.
- PostDynatraceEvent: Using the JFrog Pipelines Bash step type, this step calls a custom bash script that sends a custom deployment event to the Dynatrace instance using the Dynatrace API. Dynatrace credentials are read from the dynatraceIntegration. A sketch of such a call appears after this list.
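For illustration, the deployment event call in PostDynatraceEvent looks roughly like the hedged sketch below. The tag rule, version, and CI back-link values are placeholders; the script in the reference repo is authoritative.

# Hedged sketch of a CUSTOM_DEPLOYMENT event post (Dynatrace Events API v1)
curl -s -X POST "${DT_URL}/api/v1/events" \
  -H "Authorization: Api-Token ${DT_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "eventType": "CUSTOM_DEPLOYMENT",
    "source": "JFrog Pipelines",
    "deploymentName": "demoapp deployment",
    "deploymentVersion": "1.0.1",
    "ciBackLink": "<URL of the JFrog pipeline run>",
    "attachRules": {
      "tagRule": [{
        "meTypes": ["SERVICE"],
        "tags": [{ "context": "CONTEXTLESS", "key": "app", "value": "demoapp" }]
      }]
    }
  }'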
Fork GitHub Repo and Create Access Token
In preparation for setting up the pipelines and running them, you will first need to make a copy of the reference design GitHub code repository. This assumes you have a GitHub account.
- Fork the sample JFrog Pipeline-Dynatrace repository to your own repository.
- Adjust the values.yaml file with values for your environment (a shell sketch follows this list). You must adjust:
- gitRepo.path - replace dt-demos with your GitHub username or org name
- image.ImageName - replace dynatracedemos with your Artifactory account name
- Create a GitHub Personal Access Token in the GitHub account for the GitHub pipeline repo according to the JFrog GitHub integration document. Copy the generated token to a safe location. This token is needed in the upcoming section for the pipeline GitHub integration setup.
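If you prefer making the values.yaml changes from a shell, the workflow might look like this hedged sketch (the clone URL reflects your fork; the location of values.yaml within the repo may differ):

git clone https://github.com/<your-github-account>/JFrog-pipelines-dynatrace-example.git
cd JFrog-pipelines-dynatrace-example
# Edit values.yaml: set gitRepo.path to your GitHub user/org and
# image.ImageName to your Artifactory account name, then push the change.
git commit -am "Adjust values.yaml for my environment"
git push origin main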
Preparing the Kubernetes Cluster and Dynatrace
Step 1 - Create and Connect to Cluster
A specific Kubernetes provider is not required for the integration; the reference pipeline deploys a containerized application with Helm, so any cluster will work.
This demo uses Google GKE with the default settings from the Google console.
- cluster-version 1.20.8
- 3 nodes (12 GB, 6 vCPU total), machine-type "e2-medium"
Once the cluster is provisioned, use the Google Cloud web console to run these commands, which configure kubectl to connect to the cluster and list its nodes.
gcloud container clusters get-credentials <CLUSTER NAME> --zone <ZONE> --project <PROJECT>
kubectl get nodes
Step 2 - Create a JFrog Service Account
This step creates the service account that is needed in the next section as part of the k8s pipeline integration setup.
From the cloud shell, clone the demo repository. For example:
git clone https://github.com/dt-demos/JFrog-pipelines-dynatrace-example.git
Navigate into the setup folder.
cd JFrog-pipelines-dynatrace-example/setup
Run this script, which creates the k8s service account, adds the namespace where the sample application will be deployed, and generates a kubeconfig file for a GKE cluster. A rough sketch of what the script does appears at the end of this step.
./createKubernetesServiceAccounts.sh
- Copy the contents of the generated kubeconfig file to a safe location. They are needed in the upcoming section for the k8s pipeline integration setup.
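For orientation, a script like this typically does something along the lines of the hedged sketch below. The service account name here is an assumption; createKubernetesServiceAccounts.sh in the repo is authoritative.

# Hedged sketch only; see createKubernetesServiceAccounts.sh for the real steps.
kubectl create namespace dev                         # namespace for the sample app
kubectl create serviceaccount jfrog-deploy -n dev    # account name is an assumption
kubectl create clusterrolebinding jfrog-deploy-admin \
  --clusterrole=cluster-admin --serviceaccount=dev:jfrog-deploy
# On Kubernetes 1.20, a token secret is auto-created for the service account;
# its token and the cluster CA are then written into a standalone kubeconfig file.
SECRET=$(kubectl -n dev get serviceaccount jfrog-deploy -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n dev get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)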
Step 3 - Create a Dynatrace Viewer Role
This step creates the Kubernetes role that is needed by Dynatrace to call the Kubernetes API.
Assuming you are still in the “JFrog-pipelines-dynatrace-example/setup” folder within the Google cloud shell, run this command to add the role.
kubectl create -f dynatrace-oneagent-metadata-viewer.yaml
You can verify that the “dynatrace-oneagent-metadata-viewer” role was created using this command.
kubectl -n dev get role
See the Dynatrace Documentation for more details and the ways this role can be configured.
Step 4 - Create a JFrog Docker Credentials Secret
This step creates the JFrog Docker repository credentials as a Kubernetes secret. This secret is needed by Helm to pull the Docker image during Helm deployments.
From the Google cloud shell, run this command to log in to your Artifactory Docker repository.
docker login YOUR-ACCOUNT.jfrog.io
To export the saved credentials into a Kubernetes secret, run this command.
kubectl create secret docker-registry regcred \
  --docker-server=<YOUR-ACCOUNT.jfrog.io> \
  --docker-username=<username> \
  --docker-password=<Artifactory_API_key> \
  --docker-email=<email>
See the Kubernetes Documentation for more details.
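Before the first Helm deploy, you can sanity-check the secret with standard kubectl commands:

kubectl get secret regcred                        # confirm the secret exists
# inspect the stored (base64-encoded) Docker credentials
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d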
Step 5 - Install Dynatrace Operator
One key Dynatrace advantage is ease of activation. For Kubernetes, the Dynatrace Operator is designed specifically to handle the lifecycle of the Dynatrace OneAgent, Kubernetes API monitoring, OneAgent traffic routing, and future containerized components such as the forthcoming extension framework.
Operator setup is typically a one-time activity during Kubernetes cluster setup, and the quickest way to deploy the Dynatrace Operator is the deployment wizard within the Dynatrace UI.
- Log in to Dynatrace.
- In the Dynatrace menu, select Manage.
- Go to Deploy Dynatrace | Start Installation | Kubernetes
- On the Kubernetes page, fill in the following.
- Name = same as your cluster name. This name is used by various Dynatrace settings, including Kubernetes cluster name, Network Zone, ActiveGate Group, and Host Group.
- Choose ‘Create Token’
- Depending on your setup, choose ‘Skip SSL’
- Depending on your setup, ‘Enable volume storage’
- Use the Copy function to copy the generated command from the Dynatrace wizard, then paste it into the Google cloud shell and run it.
After processing, run this command from the Google cloud web shell. Make sure all pods are Running, as shown below, before moving to the next section.
kubectl -n dynatrace get pods
NAME                                 READY   STATUS    RESTARTS   AGE
dynakube-classic-gkt9f               1/1     Running   0          4d3h
dynakube-classic-rnndv               1/1     Running   0          4d3h
dynakube-classic-s7v4l               1/1     Running   0          4d3h
dynakube-kubemon-0                   1/1     Running   0          4d3h
dynakube-routing-0                   1/1     Running   0          4d3h
dynatrace-operator-8b89765d5-znzd6   1/1     Running   10         5d3
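If you would rather block until everything is ready than poll manually, standard kubectl offers a wait command (the 300-second timeout is an arbitrary choice):

kubectl -n dynatrace wait --for=condition=Ready pods --all --timeout=300s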
Step 6 - Verify the Cluster within Dynatrace
Log in to Dynatrace and review each of these pages to verify that the Kubernetes cluster is being monitored.
- In the Dynatrace menu, select Hosts. You should see several hosts as shown below:
- In the Dynatrace menu, select Kubernetes. You should see the cluster as shown below:
Step 7 - Create an API Token
Within Dynatrace, create an API Token.
- Log in to Dynatrace.
- In the Dynatrace menu, select Settings.
- Go to Integration | Dynatrace API.
- Select Generate token.
- Enter a name for your token such as JFrog.
- Select the required permissions for the token as shown below.
- Click the Generate button.
- Copy the generated token to a safe location. This token is required for the next section for the Dynatrace pipeline integration setup.
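Before wiring the token into the pipeline, you can smoke-test it from the cloud shell. This hedged example reads the event feed via the Events API v1, which assumes the token was granted the event-related permissions selected above:

DT_URL=https://abc.live.dynatrace.com     # your Dynatrace instance URL
DT_API_TOKEN=<paste the generated token>
# Expect HTTP 200; a 401 means a bad token, a 403 means missing permissions.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Api-Token ${DT_API_TOKEN}" \
  "${DT_URL}/api/v1/events?relativeTime=hour"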
Setting up the Pipeline
To complete the setup of the pipeline, JFrog integrations, repositories, and a pipeline source need to be added.
Important
The reference pipeline within the repo, pipelines.yml, assumes that the integration and repository names match those in the tables in the Reference design section and described below. You are welcome to use different names, but you must then adjust pipelines.yml as well, or the pipeline will fail.
Step 1: Add Integrations
- Add the GitHub integration from the JFrog Administration panel. From the Administration page, select Integrations and then click the ‘Add Integrations’ button. On the form, fill in these values as shown below.
- Name = githubIntegration
- Type = pick ‘GitHub’ from the drop down list
- Token = Your GitHub Personal Access Token.
- Create the Dynatrace integration using the generic integration available within JFrog Pipelines. From the Administration page, select Integrations and then click the ‘Add Integrations’ button. On the form, fill in these values as shown below.
- Name = dynatraceIntegration
- Type = pick ‘Generic Integration’ from the drop down list
- Add the following as the custom environment variables:
- DT_URL - The URL of the Dynatrace instance. For example: https://abc.live.dynatrace.com
- DT_API_TOKEN - The API token for the Dynatrace instance
- Create the Kubernetes integration using the static kubeconfig file for your cloud provider generated in the previous setup section. From the Administration page, select Integrations and then click the ‘Add Integrations’ button. On the form, fill in these values as shown below.
- Name = k8sIntegration
- Type = Select ‘Kubernetes’ from the drop down list
- KubeConfig = Paste the static kubeconfig contents for your cloud provider generated in the previous setup section
- Create the Artifactory integration. From the Administration page, select Integrations and then click the ‘Add Integrations’ button. On the form, fill in these values as shown below:
- Name = artifactoryIntegration
- Type = pick ‘Artifactory’ from the drop down list
- Artifactory URL = leave as default
- Artifactory User = leave as default
- API Key = Click the ‘Get API Key’ button
- The new integrations should look like this:
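For reference, Bash-type pipeline steps read integration values through environment variables named int_<integrationName>_<key> (verify the exact convention against your JFrog Pipelines version's documentation). For the generic Dynatrace integration above, that looks roughly like this:

# Hedged example of reading the dynatraceIntegration values in a Bash step
echo "Dynatrace URL: ${int_dynatraceIntegration_DT_URL}"
echo "Token length: ${#int_dynatraceIntegration_DT_API_TOKEN}"   # avoid echoing the token itself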
Step 2 - Add Repositories
- Create the Docker repository.
- From the Administration page, select Repositories and then click the ‘Add Repository’ button. Choose the ‘local repository’ option
- On the popup window, select Docker
- On the form, set the ‘Repository Key’ = art-docker-dynatrace
- Leave the other fields with the default settings
- Click the ‘Save and Finish’ button
- Create the Helm Chart repository.
- From the Administration page, select Repositories and then click the ‘Add Repository’ button. Choose the ‘local repository’ option
- On the popup window, select Helm
- On the form, set the ‘Repository Key’ = art-helm-charts-dynatrace
- Leave the other fields with the default settings
- Click ‘Save and Finish’ button
- The new repositories should look like this:
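Optionally, confirm both repositories through the Artifactory REST API; this assumes your base URL in ART_URL and an access token in ART_TOKEN:

curl -s -H "Authorization: Bearer ${ART_TOKEN}" "${ART_URL}/artifactory/api/repositories" \
  | grep -E '"key" *: *"art-(docker|helm-charts)-dynatrace"'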
Step 3 - Add a Pipeline Source
Create a new GitHub Pipeline source with your new repository using the GitHub integration created in the previous step.
- From the Administration panel, select Pipeline Sources from the Pipelines drop down menu.
- Then click Add Pipeline Source and select From YAML from the drop down menu.
- On the form, fill in with these values as shown below:
- SCM Provider Integration = githubIntegration
- Repository Full Name = From the dropdown pick the forked repository
- Branch = main
- Pipeline Config file Filter = pipelines.yml
- The new pipeline source should look like this:
Running the Pipeline
Once the setup is complete, the pipeline can be run manually from the JFrog console under the My Pipelines tab within the Applications panel. The pipeline executes all the steps sequentially, and the completed pipeline will look like this:
You can expand each step to review the details, but here are a few other things to review.
- Review the published Helm Chart
Within JFrog, expand the Artifactory repository and review the updated charts and custom properties as shown below.
- Review the published Docker image
Within JFrog, expand the Artifactory repository and review the published image.
- Review the sample application
First obtain the public IP address from the Kubernetes service using this command:
kubectl -n dynatrace get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp LoadBalancer 10.84.2.18 34.133.103.200 8080:32602/TCP 5d
Using the example above, open the application on port 8080 at http://34.133.103.200:8080. The application will look like this:
- Send some traffic to the sample application
In order for Dynatrace to fully monitor services and applications, some transactions need to flow through the application. To make this easy, the repo provides a simple script that sends curl requests to the various application URLs in a loop. To run this script, open the Google web shell and run these commands:
cd ~/JFrog-pipelines-dynatrace-example/scripts
./sendSomeTraffic.sh
sendSomeTraffic.sh determines the public IP for the application and outputs the loop status as shown below. To stop the script, press Ctrl-C.
Calling http://111.111.111.111:8080...
loop 1
200
200
200
200
loop 2
200
200
200
200
loop 3
200
200
...
...
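For reference, the script is roughly equivalent to this hedged sketch (the service name and request count are assumptions based on the output above; the script in the repo is authoritative):

# Hedged sketch of sendSomeTraffic.sh
IP=$(kubectl -n dynatrace get service demoapp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Calling http://${IP}:8080..."
loop=1
while true; do
  echo "loop ${loop}"
  for i in 1 2 3 4; do
    curl -s -o /dev/null -w "%{http_code}\n" "http://${IP}:8080/"
  done
  loop=$((loop + 1))
  sleep 2
done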
- Review the Dynatrace deployment event
Within Dynatrace, navigate to the Releases menu to open the release inventory dashboard. This dashboard shows the demo application, its auto-detected version, and its deployment event. Below are the expanded details of an example deployment event with its associated metadata, including the URL back to the JFrog pipeline that performed the deployment.
- Review the Dynatrace Kubernetes dashboard
Within Dynatrace, navigate to the Kubernetes menu and drill into the workload for the demo application. As shown below, the demoapp workload dashboard shows overall utilization metrics with links to drill into the specific process.
- Review the service
Within Dynatrace, navigate to the Services menu and open up the DemoNodeJsApp as shown below. This dashboard shows everything you need to know about the service including the time series metrics for the requests coming from the sendSomeTraffic.sh script.
- Deploy another version of the demo application
The sample app comes with built-in "feature" behaviors: if you launch the app and tell it to run as feature 1, 2, or 3, it shows slightly different behavior. A feature is set using a Docker build argument that sets an environment variable that the application code looks for. You can read more about this in the demoapp README file.
To change feature numbers, adjust the value of the FEATURE_NUMBER environment variable in the pipelines.yml file and commit the code change. JFrog will automatically re-run the pipeline after any code commit, so monitor the progress of the pipeline and then go back to review the Dynatrace release inventory, events, and JFrog artifacts. Below is an example of what the change should look like:
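From a shell in your fork, the commit workflow might look like this (hedged; the exact key name and quoting inside pipelines.yml may differ, so edit the file by hand if unsure):

# Edit FEATURE_NUMBER in pipelines.yml, then commit and push
git add pipelines.yml
git commit -m "Deploy demo app as feature 2"
git push origin main
# JFrog Pipelines detects the commit and re-runs the pipeline automatically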
Support
If you need help with this integration, contact partner_support@JFrog.com.