


This page provides a high-level overview of the structure of a pipeline configuration file.

JFrog Pipelines uses its own declarative language based on YAML syntax, Pipelines DSL, to describe workflows. You create pipelines in text files written in Pipelines DSL, which we refer to as a pipeline config. You can create these files in any text editor of your choice.

You must store your pipeline config file(s) in a source code repository (for example, GitHub). When Pipelines is configured to use this repository as a pipeline source, the config files are automatically read, and the workflows they define are loaded into Pipelines and run.


Pipeline Config Structure

There are two top-level sections that can be defined in a pipeline config:

  • A resources section that specifies the Resources used by the automated pipeline.
  • A pipelines section that specifies the pipeline execution environment and the Steps to execute.

For ease of illustration, we'll describe how this looks in a single pipeline config file (e.g., pipelines.yml).

Resources Section

Resources provide the information that steps need in order to execute, or to store information generated by a step. For example, a resource might point to a source code repository, a Docker image, or a Helm chart. A list of all supported resources is available in Resources overview.

The basic format of each resources declaration is:

  • name -- A globally unique friendly name for the resource
  • type -- A predefined string that specifies the type of resource
  • configuration -- Begins the section of settings required by the resource type. This typically includes the name of the integration that connects the resource to the external service.

Resource definitions are global and can be used by all pipelines in a Project that share at least one environment. This means that resource names must be unique across all pipeline config files in a Project.

For example, here is a resources section that defines two resources, a GitRepo and a Docker Image:

  resources:
    - name: my_Git_Repository
      type: GitRepo
      configuration:
        gitProvider: my_GitHub_Integration
        path: ~johndoe/demo
        branches:
          include: master

    - name: my_Docker_Image
      type: Image
      configuration:
        registry: my_Docker_Registry_Integration
        imageName: johndoe/demo_image
        imageTag: latest

Dive Deeper

See the Pipelines Resources section to learn what resource types are available and how they can be used in your pipelines.

Pipelines Section

The pipelines section defines the workflow, consisting of steps and the dependencies between them.

The basic format of each pipelines declaration is:

  • name -- A friendly name for the pipeline, unique within the project.
  • configuration -- An optional section to specify environment variables and/or a runtime image for the pipeline to execute in.
  • A collection of step sections that specifies the steps to execute.

The name of the pipeline will be available in the environment variable $pipeline_name, which can be used to construct the base name for builds.
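For illustration, here is a minimal pipelines declaration; the pipeline and step names (demo_pipeline, say_hello) are placeholders:

  pipelines:
    - name: demo_pipeline              # exposed to steps as $pipeline_name
      steps:
        - name: say_hello
          type: Bash
          execution:
            onExecute:
              - echo "Running $pipeline_name"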

Pipeline Configuration

The optional configuration section can specify an execution environment for all steps in the pipeline. While this configuration can be defined per step, it is sometimes more convenient to define it at a pipeline level if it's going to be the same for all steps in the pipeline.

The basic format of each configuration section is:

  • environmentVariables -- Any variables defined here are available to every step in the pipeline. These variables are read-only; they cannot be redefined in a step. 

    If the following variables are set, they will be used:

    • JFROG_CLI_BUILD_NAME: If set, the pipeline uses this value instead of the default pipeline name for the build info collected.
    • JFROG_CLI_BUILD_NUMBER: If set, the pipeline uses this value instead of the default run number for the build info collected.
    • USE_LOCAL_JFROG_CLI: If set as true, the local JFrog CLI on the host or in the image (depending on runtime configuration) is used instead of the version packaged with JFrog Pipelines. This is not recommended and native steps may not be able to run with the local JFrog CLI version.
  • nodePool -- Optionally specify a specific node pool where your steps will execute. If not specified, the node pool set as the default is used.
  • runtime -- This section allows you to specify the default runtime environment for steps in the pipeline. The options are:
    • Run steps directly on the host machine
    • Run steps inside the node pool's default Docker container or one of its language-specific variants
    • Run steps inside a custom Docker container of your choice
  • chronological -- If set to true, a new run of the pipeline will not start while another run of the same pipeline is processing. The default is false, which allows runs to execute in parallel when nodes are available.
  • dependencyMode -- Specifies when the pipeline may run relative to other pipelines connected to it by resources. If any of these three settings is true, a new run is not created for resources updated by other pipelines when a waiting run with the same resources and steps already exists. For example, if a parent pipeline runs twice consecutively and the following pipeline has waitOnParentComplete set to true, the following pipeline runs only once. When the pipelines do run, they use the latest resource versions. The optional settings are:
    • waitOnParentComplete: If true, the pipeline will not start running when a pipeline that outputs a resource that is an input to this pipeline has a waiting or processing run.
    • waitOnParentSuccess: If true, the pipeline will not start running when a pipeline that outputs a resource that is an input to this pipeline has a processing run or the last complete run was not successful.
    • waitOnChildComplete: If true, the pipeline will not start running when a pipeline that has an input resource that is output of this pipeline has a waiting or processing run unless that child pipeline is waiting for this pipeline to complete.
  • retentionPolicy -- Optionally specifies whether pipeline run data should be deleted after a specific number of days, and lets you keep a minimum number of runs:
    • maxAgeDays: The number of days after which pipeline run data is deleted (cannot exceed the system-level setting). Setting this value to 0 means infinite retention.
    • minRuns: The minimum number of pipeline runs to keep, regardless of age (cannot exceed the system-level setting).
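Putting these options together, a pipeline-level configuration section might look like the following sketch; the variable, node pool, and image names are placeholders:

  pipelines:
    - name: demo_pipeline
      configuration:
        environmentVariables:
          readOnly:
            my_var: "hello"            # read-only in every step
        nodePool: my_node_pool
        runtime:
          type: image
          image:
            custom:
              name: myrepo/my_build_image
              tag: latest
        chronological: true            # runs execute one at a time
        dependencyMode:
          waitOnParentComplete: true
        retentionPolicy:
          maxAgeDays: 30               # delete run data after 30 days
          minRuns: 10                  # but always keep the last 10 runs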

Any step can override the pipeline's default runtime configuration if needed to configure its own runtime selection.

Pipeline Steps

Each named pipeline declares a collection of named step blocks that the pipeline will execute.

The basic format of each step declaration is:

  • name -- A friendly name for the step that may be referenced in other steps. Step names must be unique within the same pipeline.
  • type -- A predefined string that specifies the type of step
  • configuration -- Begins the section of settings required by the step type. This may include:
    • Environment variables local to the step
    • Any runtime configuration for the step
    • Any triggering input steps or resources
    • Any resources output by the step
    • Any integrations used by the step
    • All settings required by the step type
  • execution -- Specifies the actions to perform for each execution phase of the step.

For example, here is a simple sequence of two steps. Each uses the generic Bash step to output text to the console:

    - name: step_1
      type: Bash
      configuration:
        inputResources:
          - name: my_Git_Repository     # Trigger execution on code commit
      execution:
        onExecute:
          - echo "Hello World!"
    - name: step_2
      type: Bash
      configuration:
        inputSteps:
          - name: step_1                # Execute this step after the prior step
      execution:
        onExecute:
          - echo "Goodbye World!"

Dive Deeper

See Pipelines Steps to learn more about using steps in your pipelines.

Pipeline Config File Strategies

A pipeline config file can have one or more pipelines defined in it, but the definition of a single pipeline cannot be fragmented across multiple files. Pipeline config filenames can take any form you choose, although the convention for a single file is pipelines.yml.

Some things to note about pipelines:

  • You can have as many pipeline config files as you want. For example, our customers manage config in the following different ways:
    • Maintain a central DevOps repository and keep all pipeline config files for all projects in that repository.
    • Keep pipeline config files that build each microservice with the source code for that microservice. 
    • Separate pipeline steps and resources into different config files (for example, pipelines.steps.yml and pipelines.resources.yml, respectively).
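For example, the split-file approach might look like the following sketch, with resources in one file and the pipeline that uses them in another; all names here (app_repo, app_build, and so on) are placeholders:

  # pipelines.resources.yml
  resources:
    - name: app_repo
      type: GitRepo
      configuration:
        gitProvider: my_github_integration
        path: myorg/app
        branches:
          include: master

  # pipelines.steps.yml
  pipelines:
    - name: app_build
      steps:
        - name: build
          type: Bash
          configuration:
            inputResources:
              - name: app_repo          # defined in pipelines.resources.yml
          execution:
            onExecute:
              - echo "Building $pipeline_name"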

Copyright © 2021 JFrog Ltd.