Before learning how to use Pipelines, here are some fundamental concepts you will need to be familiar with.
Integrations connect Pipelines to information and services that are not part of the JFrog Platform Deployment, but are accessible elsewhere on the network.
An Integration connects Pipelines to an external service/tool. Each integration type defines the endpoint, credentials and any other configuration detail required for Pipelines to exchange information with the service. All credential information is encrypted and held in secure storage, in conformance with best security practices.
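As a sketch of how this looks in practice, a step can reference an integration by name in its configuration, and Pipelines injects the stored credentials at runtime. The integration name `myGitHub` below is an assumed example; it must match an integration already added through the JFrog Platform UI:

```yaml
# Fragment of a pipeline config: a step referencing an integration.
# "myGitHub" is an illustrative name, not a built-in value.
steps:
  - name: notify
    type: Bash
    configuration:
      integrations:
        - name: myGitHub   # credentials are made available to the step at runtime
    execution:
      onExecute:
        - echo "Running with the myGitHub integration"
```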
For more information, and a list of all available integration types, see the Pipelines Integrations reference.
A Pipeline Source is a location in an external repository (such as GitHub or Bitbucket) where pipeline configuration files can be found. A pipeline source connects to the repository through an integration.
A pipeline is an event-driven workflow that you construct using Pipelines DSL, which is based on YAML. The YAML file containing the DSL is called a pipeline configuration (config).
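To give a sense of the DSL's shape, a minimal pipeline config might look like the following sketch (the pipeline and step names are illustrative):

```yaml
# Minimal pipeline config: one pipeline with a single Bash step.
pipelines:
  - name: hello_pipeline      # assumed example name
    steps:
      - name: say_hello
        type: Bash
        execution:
          onExecute:
            - echo "Hello from Pipelines"
```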
Resources are one of the key building blocks of all pipelines. They are information entities used for storing and exchanging information across steps and pipelines. Resources are versioned and each version is immutable. They are also global: depending on the scope defined for the pipeline source, they can be available across pipelines, which enables you to connect multiple pipelines together to create a pipeline of pipelines.
For more information, and a list of all available resource types, see the Pipelines Resources reference.
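As an illustration, a resource is declared in its own `resources` section of the config and connects through an integration. The resource and integration names below are assumed examples:

```yaml
# Fragment of a pipeline config: a GitRepo resource.
# "app_repo" and "myGitHubIntegration" are illustrative names.
resources:
  - name: app_repo
    type: GitRepo
    configuration:
      gitProvider: myGitHubIntegration   # an integration added in the UI
      path: myorg/myapp                  # repository path on the provider
```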
A Step is a unit of execution in a pipeline. It is triggered by some event and uses resources to perform an action as part of the pipeline.
For more information, and a list of all available step types, see the Pipelines Steps reference.
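For example, a step can declare a resource as an input, so that a change to the resource triggers the step. A sketch, assuming a resource named `app_repo` has been declared elsewhere in the config:

```yaml
# Fragment of a pipeline config: a step consuming a resource.
steps:
  - name: build_app
    type: Bash
    configuration:
      inputResources:
        - name: app_repo   # a new version of this resource triggers the step
    execution:
      onExecute:
        - echo "Building the application"
```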
A run is an instance of execution of a pipeline. Pipelines maintains an ordered history of all runs of each pipeline, with an execution log that can be examined through the JFrog Platform.
Every step in your pipeline executes on a build node that has been provisioned with a runtime environment. Through Pipelines DSL, you can control which runtimes your steps execute in.
For more information, see Managing Runtimes.
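As a sketch of runtime selection in the DSL, a step can request a runtime image in its configuration; the exact images and versions available depend on your installation, so the values below are illustrative:

```yaml
# Fragment of a pipeline config: selecting a runtime for a step.
steps:
  - name: run_tests
    type: Bash
    configuration:
      runtime:
        type: image
        image:
          auto:
            language: node   # assumed example: request a Node.js runtime image
            versions:
              - "14"
    execution:
      onExecute:
        - node --version
```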
To run any step in a pipeline, you need a build node (virtual machine) that will receive the runtime container where the step will execute.
You must provide nodes and attach them to your JFrog Pipelines project. A node can be on any infrastructure that you choose to use, whether it is from a cloud provider (such as AWS, GCP, or Azure), or on your own infrastructure if your security policies require your operations to remain behind your own firewall.
Nodes can be either static, which are available all the time, or dynamic, which are spun up on-demand through a cloud service.
Node Pools are a convenient way to logically group nodes. This enables you to run steps simultaneously in a pipeline, maintain nodes of different architectures and operating systems, pin steps to run on specific node types, and more.
A node pool is assigned a default runtime image. This default is automatically provisioned to its nodes unless a step overrides this behavior by specifying a different runtime.
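For instance, a step can be pinned to a specific node pool through its configuration. The pool name below is an assumed example and must match a node pool defined in the JFrog Platform:

```yaml
# Fragment of a pipeline config: pinning a step to a node pool.
steps:
  - name: build_on_arm
    type: Bash
    configuration:
      nodePool: arm64_pool   # illustrative pool name, defined in the UI
    execution:
      onExecute:
        - uname -m
```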
A runtime image is a preconfigured Docker container that includes the necessary OS, software tools, packages, and configurations that a step needs to execute.
The JFrog Platform Deployment provides a standard set of runtime images that can be used for most applications. This set includes baseline runtimes with variants to support many commonly used languages. You can also create your own runtime images for specialized needs.