Breaking Your Pipelines into Steps

JFrog Pipelines Documentation


One of the most important decisions you will need to make in designing your pipeline is how to map your DevOps processes to the JFrog Pipelines concepts of steps and resources.

A Step is a component of a pipeline that executes commands to accomplish an activity, such as building a binary, pushing a binary to a repository, deploying a service, provisioning a VPC or cluster, etc.

The native steps of JFrog Pipelines help you to build your pipeline as a sequence of smaller actions. For example, you can use the native step DockerBuild to build a Docker image, then use DockerPush to push the resulting image to a Docker registry.
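As a minimal sketch of this chaining (the resource, integration, registry, and repository names below are placeholders, not values from this guide):

```yaml
pipelines:
  - name: demo_pipeline
    steps:
      - name: build_image
        type: DockerBuild
        configuration:
          dockerFileLocation: .
          dockerFileName: Dockerfile
          dockerImageName: art.example.com/demo/myapp   # placeholder registry/image
          dockerImageTag: ${run_number}
          inputResources:
            - name: myGitRepo                           # assumed GitRepo resource
          integrations:
            - name: myArtifactory                       # assumed Artifactory integration
      - name: push_image
        type: DockerPush
        configuration:
          targetRepository: docker-local                # placeholder target repository
          integrations:
            - name: myArtifactory
          inputSteps:
            - name: build_image                         # runs only after the build succeeds
```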

This "building block" approach is essential to how Pipelines constructs workflows. It is therefore a vital best practice to ensure that any step you create with the generic Bash step encapsulates a single discrete action, just as a native step does.
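A Bash step that follows this practice wraps exactly one action. For example (a sketch; the script path and resource name are hypothetical):

```yaml
- name: provision_cluster
  type: Bash
  configuration:
    inputResources:
      - name: myGitRepo                    # assumed GitRepo resource holding the script
  execution:
    onExecute:
      - ./scripts/provision_cluster.sh     # hypothetical script; one discrete action only
```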

Some guidelines for determining the discreteness of a step are:

  • A step should perform one task, so that you can get discrete status for that task.

  • A step can be triggered independently, and can in turn trigger a downstream pipeline, so step boundaries should be designed with that in mind.

  • Steps can be run in parallel to speed up pipeline execution, so if you have a large test suite, you can divide that into several steps and run in parallel.

  • Steps can run on nodes of different sizes, so if part of your workflow needs a bigger or smaller node, separate it into its own step.

  • A step should have a continuous workflow, with no stops in between. For example, if a task needs manual input, it's best to split that work into a separate step with an approval gate in between. This prevents your steps from blocking a build node while waiting for input.

  • For branch-based pipelines, it's best to separate steps per branch (or branch type), so that you can easily identify status of each branch.
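Two of the guidelines above, approval gates and per-step node sizing, map directly to step configuration. A hedged sketch (the pool name, upstream step, and script are placeholders):

```yaml
- name: deploy_prod
  type: Bash
  configuration:
    requiresApproval: true         # pauses for manual approval without holding a build node
    nodePool: large_pool           # placeholder pool; runs this step on bigger nodes
    inputSteps:
      - name: run_tests            # placeholder upstream step
  execution:
    onExecute:
      - ./scripts/deploy.sh prod   # hypothetical deployment script
```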

Example Pipelines

Let's take a simple example of a typical pipeline. For every merge to the master branch of your GitRepo, you need to:

  1. Build a Docker image

  2. Push it to a Docker registry

  3. Deploy it to a Kubernetes development environment

  4. Run tests against the deployed app and send a notification of the result

Single step approach

Technically, you can bundle all of these tasks into a single Bash step. Your pipeline diagram will then show one step, triggered by a change to your GitRepo resource:
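Such a single-step pipeline might look like the following sketch (registry, image, manifest path, and script names are placeholders):

```yaml
pipelines:
  - name: single_step_pipeline
    steps:
      - name: do_everything
        type: Bash
        configuration:
          inputResources:
            - name: myGitRepo                  # the GitRepo resource that triggers the run
        execution:
          onExecute:
            - docker build -t art.example.com/demo/myapp:${run_number} .
            - docker push art.example.com/demo/myapp:${run_number}
            - kubectl apply -f k8s/dev/        # deploy to the dev environment
            - ./scripts/functional_tests.sh    # hypothetical test script
```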

[Image: Workflow Steps Alt 1.png — single-step pipeline diagram]

The approach above has the following characteristics:

  • There is no discrete status. If this step fails, you cannot tell at a glance whether the build, the deployment, or the functional tests failed.

  • The entire pipeline runs as one unit, so there is no way to separate out parts for special treatment. For example, you cannot run the functional test suite on a node with a custom runtime with supporting components.

Multiple steps approach

Let's look at an alternative approach to the same pipeline using a combination of native steps and Bash steps:

[Image: Workflow Steps Alt 2.png — multi-step pipeline diagram]

As you can see, the pipeline has a much more discrete structure, and each step can send its own status notifications.
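The same workflow, sketched as discrete steps (all names are placeholders; the native DockerBuild and DockerPush steps replace the hand-written docker commands):

```yaml
steps:
  - name: build_image
    type: DockerBuild
    configuration:
      dockerFileLocation: .
      dockerFileName: Dockerfile
      dockerImageName: art.example.com/demo/myapp   # placeholder registry/image
      dockerImageTag: ${run_number}
      inputResources:
        - name: myGitRepo                           # assumed GitRepo resource
      integrations:
        - name: myArtifactory                       # assumed Artifactory integration
  - name: push_image
    type: DockerPush
    configuration:
      targetRepository: docker-local
      integrations:
        - name: myArtifactory
      inputSteps:
        - name: build_image
  - name: deploy_dev
    type: Bash
    configuration:
      inputSteps:
        - name: push_image
    execution:
      onExecute:
        - kubectl apply -f k8s/dev/                 # placeholder manifest path
  - name: run_tests
    type: Bash
    configuration:
      inputSteps:
        - name: deploy_dev
    execution:
      onExecute:
        - ./scripts/functional_tests.sh             # hypothetical test script
```

A failure in any step is reported against that step, which is what provides the discrete status described above.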

You can also run the "run tests" step on a node with a custom runtime, or even split it up into parallel steps if needed.

[Image: Workflow Steps Alt 2a.png — multi-step pipeline with parallel test steps]
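Splitting the test stage might look like the following sketch: both test steps list only the deployment step (here called deploy_dev, a placeholder) as their input, so they run in parallel, and one of them is pinned to a pool with the custom runtime (pool and script names are placeholders):

```yaml
  - name: unit_tests
    type: Bash
    configuration:
      inputSteps:
        - name: deploy_dev
    execution:
      onExecute:
        - ./scripts/tests.sh unit          # hypothetical test runner
  - name: integration_tests
    type: Bash
    configuration:
      nodePool: large_pool                 # placeholder pool with the custom runtime
      inputSteps:
        - name: deploy_dev
    execution:
      onExecute:
        - ./scripts/tests.sh integration
```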

This is the recommended approach: define steps that are sufficiently discrete to support interim actions such as identifying status, sending notifications, and evaluating results.