Onboarding Pipelines Best Practices
JFrog Pipelines is the next-generation CI/CD solution for DevOps automation. It is part of the JFrog Platform, bringing together Artifactory, Xray, and Distribution into a unified end-to-end system for one-stop DevOps. Through a single pane of glass, administrators can manage user and group permissions to control who sees what.
Pipelines comes with several out-of-the-box integrations for the DevOps tools you are likely to use most, enabling you to quickly add connections to services such as version control systems, cloud providers, Docker and Kubernetes.
You code your pipeline in the YAML-based Pipelines DSL, and store it in your Git-compatible source code repository, such as GitHub. In this DSL, you declare the resources and steps of your pipeline.
Pipelines native steps provide you with a streamlined way to specify your most common DevOps tasks, such as building and pushing a Docker image, publishing buildinfo, and promoting between artifact repositories.
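As a rough sketch, a pipeline DSL file that declares resources and uses native steps to build and push a Docker image and publish build info could look like the following. All names here (resource names, integration names such as my_github and my_artifactory, the org/repo path, and the target repository) are placeholders you would replace with your own:

```yaml
resources:
  - name: app_repo                 # Git repository holding the application source (placeholder)
    type: GitRepo
    configuration:
      gitProvider: my_github       # name of a GitHub integration configured in Pipelines (assumed)
      path: myorg/my-app           # placeholder org/repo
      branches:
        include: main

  - name: app_image                # Docker image produced by the build (placeholder)
    type: Image
    configuration:
      registry: my_artifactory     # name of an Artifactory integration (assumed)
      imageName: myorg/my-app
      imageTag: latest

  - name: app_build                # build info to publish to Artifactory (placeholder)
    type: BuildInfo
    configuration:
      sourceArtifactory: my_artifactory

pipelines:
  - name: demo_pipeline
    steps:
      - name: build_image
        type: DockerBuild          # native step: builds a Docker image
        configuration:
          dockerFileLocation: .
          dockerFileName: Dockerfile
          dockerImageName: myorg/my-app
          dockerImageTag: latest
          inputResources:
            - name: app_repo       # a new commit to this repo triggers the step
          integrations:
            - name: my_artifactory

      - name: push_image
        type: DockerPush           # native step: pushes the built image to a registry
        configuration:
          targetRepository: docker-local
          inputSteps:
            - name: build_image    # runs only after build_image succeeds
          outputResources:
            - name: app_image

      - name: publish_build
        type: PublishBuildInfo     # native step: publishes build info to Artifactory
        configuration:
          inputSteps:
            - name: push_image
          outputResources:
            - name: app_build
```

The inputSteps entries chain the steps into a sequence, while inputResources and outputResources wire the steps to the declared resources.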
You tell Pipelines where your DSL file is stored by adding the repository as a Pipeline Source, connected through a Pipelines integration. Once you do, Pipelines automatically syncs with the source to load and process your YAML DSL file.
Every time you commit a change to your YAML DSL file, Pipelines will sync from that source to reload it.
Viewing and Running Pipelines
Viewing your pipeline presents you with an interactive diagram that visualizes your workflow sequence.
A CI build can trigger on a change to your source code, such as a new commit affecting any or selected files. Or you can trigger your pipeline manually from the diagram and watch it execute.
Your pipelines can be as complex as you need, and can trigger from multiple sources, enabling them to flow through different paths.
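As a sketch of multi-source triggering, a step can declare more than one input resource, so a commit to either repository starts a run. The resource names below (app_repo, config_repo) are placeholders that assume two GitRepo resources have been declared elsewhere in the DSL:

```yaml
pipelines:
  - name: multi_trigger_pipeline
    steps:
      - name: run_tests
        type: Bash                 # generic native step that runs shell commands
        configuration:
          inputResources:
            - name: app_repo       # triggers on commits to the application repo (placeholder)
            - name: config_repo    # ...or on commits to a separate config repo (placeholder)
        execution:
          onExecute:
            - echo "Triggered by a change in either repository"
```

Inside the step you can branch on which resource changed, letting one pipeline flow through different paths depending on the trigger.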
Every time your pipeline is run, Pipelines logs a record of its execution. You can see when every run occurred, and whether it was successful.
You can view the detailed log of every run, even for runs currently executing. In the run log, you can examine the actions and results of every step in your pipeline. This enables you to diagnose any errors in a failed step, or to watch the output of a currently executing step in real time.
Your pipelines run on build nodes that you provide, grouped into node pools. The nodes in a pool all run the same host OS, and a pool with multiple nodes lets pipeline steps execute in parallel.
You can create pools of static nodes, which are always-available machines on-premises or in the cloud, or dynamic pools, where nodes are spun up and down in a cloud service as needed, helping you save costs.
You can start exploring Pipelines through our quick starts and the JFrog Platform documentation.