Overview

Each step in a pipeline may execute on a different node. For this reason, changes to the environment made within one step are not guaranteed to persist to subsequent steps.

Stateful pipelines remember information generated by steps and make it available to dependent steps or to successive runs of the same pipeline. This is a crucial component for achieving end-to-end Continuous Delivery.

Some example use cases:

  • A step creates information about the commitSha and image/file version that was built, which is then consumed by another step to deploy the correct version into the test environment.
  • A step creates a VPC for the Staging environment. It stores information like VPC info, subnets, security groups, etc. which is required by another step that deploys to the Staging environment.
  • You have a provisioning step that uses Terraform to create the Test environment. At a later stage in your pipeline, you have a deprovisioning step that destroys the test environment. Both these steps need to read and update the Terraform state file so they are aware of the current state of the Test environment.
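The first use case can be sketched with the run-state utilities described below. All pipeline, step, and variable names here (example_build_then_deploy, builtVersion, and so on) are illustrative, not taken from a real configuration:

```yaml
pipelines:
  - name: example_build_then_deploy    # hypothetical pipeline
    steps:
      - name: build
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo          # Triggering resource (assumed to exist)
        execution:
          onExecute:
            # Record the commit and version that were built
            - add_run_variables builtCommitSha=$res_myAppRepo_commitSha
            - add_run_variables builtVersion=${run_number}

      - name: deploy_to_test
        type: Bash
        configuration:
          inputSteps:
            - name: build
        execution:
          onExecute:
            # Deploy exactly what the build step recorded
            - echo "Deploying version ${builtVersion} (commit ${builtCommitSha})"
```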


Types of State

JFrog Pipelines supports three types of state: run state, pipeline state, and resource-based state.

Each of these types is characterized by the scope of its information persistence.

Run State

A pipeline's run state is persistent only between steps within the same run. Information stored in one step is available to subsequent steps in the run of that pipeline. After the run is complete, that state information is no longer available.


To preserve state across steps, use the utility functions for run state management.


Pipelines supports two types of run state information that can be preserved between steps.

Key-Value Pairs

Using the add_run_variables utility function, you can store a key-value pair to the run state. That key-value pair will automatically be available to all subsequent steps in the run as an environment variable.

Example YAML:

pipelines:
  - name: example_run_state_pipeline
    steps:
      - name: step_1
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo            # Triggering resource
        execution:
          onExecute:
            - add_run_variables first_stepid=$step_id
            - add_run_variables ${first_stepid}_buildNumber=${run_number}

      . . . 

      - name: step_5
        type: Bash
        configuration:
          inputSteps:
            - name: step_4
        execution:
          onExecute:
            - echo "Hello world"
            - echo $first_stepid
            - buildNumberVar=${first_stepid}_buildNumber
            - echo ${!buildNumberVar}    # Indirect expansion reads the dynamically named variable


Files

Using the add_run_files utility function, a step can store a file to the run state. Any subsequent step can then use the restore_run_files function to retrieve the file from the run state.

Example YAML:

pipelines:
  - name: example_run_file_pipeline
    steps:
      - name: step_1
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo            # Triggering resource
        execution:
          onExecute:
            - echo "Hello world"
          onComplete:
            - add_run_files myfile cachefile.txt

      . . . 

      - name: step_5
        type: Bash
        configuration:
          inputSteps:
            - name: step_4
        execution:
          onStart:
            - restore_run_files myfile cachefile.txt
          onExecute:
            - echo "Hello world"

Pipeline State

A pipeline's pipeline state is persistent across all runs of the same pipeline. Information stored by a step during a pipeline's run is available to subsequent runs of that pipeline.

To preserve state across runs, use the utility functions for pipeline state management.

Pipelines supports two types of pipeline state information that can be preserved between runs.

Key-Value Pairs

Using the add_pipeline_variables utility function, you can store a key-value pair to the pipeline state. That key-value pair will automatically be available to all subsequent runs as an environment variable.

Example YAML:

pipelines:
  - name: example_pipeline_state_pipeline
    steps:
      - name: step_1
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo            # Triggering resource
        execution:
          onExecute:
            - echo "Hello world"
            - echo $previous_buildNumber
      . . . 

      - name: final_step
        type: Bash
        configuration:
          inputSteps:
            - name: prior_step
        execution:
          onExecute:
            # Preserve the build number for the next run of the pipeline
            - add_pipeline_variables previous_buildNumber=${run_number}

Files

Using the add_pipeline_files utility function, a step can store a file to the pipeline state. Any step, in the same run or in a subsequent run, can then use the restore_pipeline_files function to retrieve the file from the pipeline state.

Example YAML:

pipelines:
  - name: example_pipeline_file_pipeline
    steps:
      - name: step_1
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo            # Triggering resource
        execution:
          onStart:
            # Restore the file from the previous run of the pipeline
            - restore_pipeline_files myfile cachefile.txt
          onExecute:
            - echo "Hello world"

      . . . 

      - name: final_step
        type: Bash
        configuration:
          inputSteps:
            - name: prior_step
        execution:
          onExecute:
            - echo "Hello world"
          onComplete:
            # Preserve file for the next run of the pipeline
            - add_pipeline_files myfile cachefile.txt

Resource-based State

Using the write_output utility function, key-values can be stored as a property in any output resource. Every step that has the resource as an input can access the key-value information in its scripts as an environment variable.

The environment variable for the value is of the form res_<resource name>_<key name>.  

Resource-based state information is persistent across pipelines, so it can be used as a mechanism for passing information from one pipeline to the next.

Example YAML:
pipelines:
  - name: example_resource_state_pipeline
    steps:
      - name: step_1
        type: Bash
        configuration:
          inputResources:
            - name: myAppRepo           # Triggering resource
          outputResources:
            - name: myImage             # Image resource
        execution:
          onExecute:
            - echo $res_myAppRepo_commitSha
            - write_output myImage "imageTag=master" "sha=$res_myAppRepo_commitSha" "description=\"hello world\""

      - name: step_2
        type: Bash
        configuration:
          inputResources:
            - name: myImage
        execution:
          onExecute:
            - echo "Hello world"
            - echo $res_myImage_imageTag
            - echo $res_myImage_sha            
            - echo $res_myImage_description
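Because resource-based state persists across pipelines, a second pipeline that takes the same resource as an input can read the values written by write_output in the example above. A minimal sketch (the consumer pipeline and step names are illustrative):

```yaml
pipelines:
  - name: example_consumer_pipeline    # hypothetical downstream pipeline
    steps:
      - name: deploy
        type: Bash
        configuration:
          inputResources:
            - name: myImage            # Same resource written to by write_output
        execution:
          onExecute:
            # Properties stored on the resource arrive as environment variables
            - echo "Deploying tag $res_myImage_imageTag at commit $res_myImage_sha"
```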
Copyright © 2021 JFrog Ltd.