Overview

This page provides a guide for the different ways you can install and configure JFrog Pipelines, both on a single node and for high availability. For more information, see High Availability.


Before You Begin

System Requirements

Before you install, refer to System Requirements for information on supported platforms, supported browsers, other requirements, and the system architecture.

The current version of JFrog Pipelines has been validated to operate on the following platforms:

  • Ubuntu: 20.04 LTS, 18.04 LTS, 16.04 LTS
  • CentOS: 8, 7
  • RHEL: 8, 7

JFrog Pipelines requires a JFrog Artifactory instance with an Enterprise+ license.

When installing Pipelines, you must run the installation as a root user or provide sudo access to a non-root user. 

The Debian installer for Pipelines is used to install Pipelines on an Ubuntu operating system.

External Connections

Review the External Connections details and conform your port assignments and public IP addresses to the recommended configuration.

Installing Pipelines

Before installing Pipelines 1.x, you must first install JFrog Artifactory 7.x.

Installation Steps

The installation procedure involves the following main steps:

  1. Download Pipelines as per your required installer type (Linux Archive, RPM, Debian, Helm).
  2. Install Pipelines either as a single node installation, or high availability cluster.
    1. Install third party dependencies (PostgreSQL database and RabbitMQ messaging, included in the archive)
    2. Install Pipelines
  3. Configure the service
    1. Connection to Artifactory (joinKey and jfrogUrl)
    2. Additional optional configuration including changing default credentials for databases
  4. Start the Service using the start scripts or OS service management.
  5. Check the Service Log to check the status of the service.
  6. Configure Pipelines.

Default Home Directory / $JFROG_HOME

The default JFrog home directory is defined according to the installation type. For more information, see the Product Directory Structure page.

This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product, the home directory for all JFrog products.

JFrog Subscription Levels: SELF-HOSTED | ENTERPRISE+


Single Node Installation

Choose one of the following methods for installing Pipelines on a single node:

Install Using the Pipelines Command Line Installer

Pipelines can be installed through the Pipelines command line installer for Docker and Docker Compose, which automates most of the installation procedure. The command line installer is also used to change configuration settings, restart Pipelines, and upgrade Pipelines.

The command line installer performs the following procedures on your behalf:

  1. Install the following.
    1. Third-party dependencies (PostgreSQL database, RabbitMQ messaging, included in the archive)
    2. Pipelines
  2. Connect to Artifactory (using joinKey and jfrogUrl).
  3. Configure the Pipelines service, including:
    1. network IP/URL assignments for build node access
    2. network IP/URL assignments for REST APIs and supplemental UI
    3. connection to optional external database
    4. registry of default build images
  4. Start the service.

Prerequisites to Installation

The Pipelines command line installer invokes Docker Compose to orchestrate installation of the multiple Docker containers that comprise Pipelines.

To facilitate use of Docker Compose by the command line installer, you must install:

  • Docker
  • Docker Compose
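A minimal sketch of installing both prerequisites on Ubuntu from the distribution packages (the package names docker.io and docker-compose are an assumption; the exact packages and installation method may differ on your platform):

# Install Docker and Docker Compose (assumed Ubuntu package names)
$ sudo apt-get update
$ sudo apt-get install -y docker.io docker-compose

# Verify that both tools are available
$ docker --version
$ docker-compose --version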

Installation Steps

  1. Download one of the JFrog Pipelines installers (Linux Archive, RPM, or Debian).


  2. Extract the installer from the downloaded .rpm, .deb, or .tar.gz  file (see System Directories for the value of the JFrog root directory $JFROG_HOME).

    Linux Archive (tar.gz)
    $ mkdir -p installer && tar -C installer -xvzf pipelines-<version>.tar.gz 
    $ cd installer/pipelines-<version>
    $ ./pipelines --help # prints all the available command line installer options
    RPM
    $ sudo rpm -Uvh pipelines-<version>.rpm
    $ cd $JFROG_HOME/pipelines/installer
    Debian
    $ sudo dpkg -i pipelines-<version>.deb
    $ cd $JFROG_HOME/pipelines/installer

    Installing Pipelines using RPM or Debian will make the Pipelines command line installer command accessible from any directory.

    Installing via the generic Linux archive (.tar.gz) involves extracting the Pipelines files and then running the extracted pipelines executable (./pipelines) from the installer directory.

  3. Run the installer. 

    $ sudo pipelines install \
        --base-url-ui <jfrog-url> \
        --artifactory-joinkey <join-key> \
        --installer-ip <new-instance-ip> \
        --api-url http://<external-ip>:8082/pipelines/api \
        --www-url http://<external-ip>:30001 \
        --rabbitmq-url amqp://<external-ip>:30200
    

    For details on install options, external connections, and using an external database, see Product Configuration.

  4. You may now perform a health check on the node to verify that it is operating properly.

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'

Helm Installation

Prerequisites

Before deploying Pipelines using Helm Chart, you will need to have the following in place:

  • An installed Artifactory
  • A pre-created repository jfrogpipelines in Artifactory, of type Generic with a maven-2-default layout
  • A deployed NGINX ingress controller

For more information, see Helm Charts for Advanced Users.

  1. Add the JFrog Helm Charts repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io
    
  2. Update the repository.

    helm repo update
  3. To connect Pipelines to your Artifactory installation, you will need to use a Join Key. To provide a Join Key, jfrogUrl, and jfrogUrlUI to your Pipelines installation, retrieve the connection details of your Artifactory installation from the UI as shown below. For more information, see Viewing the Join Key.

    pipelines:
      ## Artifactory URL - Mandatory
      ## If Artifactory and Pipelines are in the same namespace, jfrogUrl is the Artifactory service name; otherwise, it is the external URL of Artifactory
      jfrogUrl: ""
      ## Artifactory UI URL - Mandatory
      ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
      jfrogUrlUI: ""
    
      ## Join Key to connect to Artifactory
      ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
      joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
    
      ## Pipelines requires a unique master key
      ## You can generate one with the command: "openssl rand -hex 32"
      ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
      masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  4. Fetch the Pipelines Helm chart to get the required configuration files.

    helm fetch jfrog/pipelines --untar
  5. Configure the installation by editing the local copies of the values-ingress.yaml and values-ingress-passwords.yaml with the required configuration values.

    1. Edit the URLs in the values-ingress.yaml file (Artifactory URL, Ingress hosts, Ingress TLS secrets).

    2. Set the passwords uiUserPassword, postgresqlPassword, and auth.password in the local copies.

    3. Set the masterKey and joinKey in the values-ingress-passwords.yaml file.

      Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.

      Follow these steps to apply the configuration changes.

      1. Make the changes to values.yaml. 
      2. Run the command.

        helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values.yaml

  6. Install Pipelines.

    kubectl create ns pipelines
    helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
  7. Access Pipelines from your browser at: http://<jfrogUrl>/ui/, then go to the Pipelines tab in the Application module in the UI.

  8. Check the status of your deployed helm releases.

    helm status pipelines

For advanced installation options, see Helm Charts Installers for Advanced Users.

HA Installation

The following describes how to set up a Pipelines HA cluster with two or more nodes. For more information, see the System Architecture.

Choose one of the following methods for installing Pipelines as a high availability cluster:

Prerequisites

All nodes within the same Pipelines HA installation must run the same Pipelines version.

Licensing

Pipelines HA is supported with an Enterprise Plus License. Each node in the cluster must be activated with a different license.

Database

Pipelines HA requires an external PostgreSQL database. Make sure you have completed setting up your external database before proceeding to install the first node. The database connection details are used for each node installation.

There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.

RabbitMQ

RabbitMQ is installed as part of the Pipelines installation for every node. In HA architecture, it uses queue mirroring between the different RabbitMQ nodes.

Network

  • All the Pipelines HA components (cluster nodes, database server and RabbitMQ) must be within the same fast LAN.
  • All the HA nodes must communicate with each other through dedicated TCP ports.
  • Network communications between the cluster nodes must be enabled for each of the cluster nodes (a sketch of opening the default ports follows this list).
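A minimal sketch of opening the default Pipelines ports on each node, assuming the default port assignments listed under External Connections and a ufw-managed firewall (adjust for your firewall tooling and any custom ports):

# Open the default Pipelines ports for cluster and build node traffic
$ sudo ufw allow 8082/tcp    # REST APIs (--api-url)
$ sudo ufw allow 30001/tcp   # supplemental UI (--www-url)
$ sudo ufw allow 30200/tcp   # RabbitMQ messaging (--rabbitmq-url)
$ sudo ufw allow 30201/tcp   # RabbitMQ admin UI (--rabbitmq-admin-url)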

Install HA Using the Pipelines Command Line Installer

Install the First Node

  1. Extract the installer from the downloaded .rpm, .deb, or .tar.gz  file, as shown for the single node installation.
  2. Perform the install procedure in the first node using the Pipelines command line installer.
    Note: You will need to fetch your jfrogURL (custom base URL) and join key to link your Pipelines installation to the Platform.

    $ sudo pipelines install \
        --base-url-ui <jfrog-url> \
        --artifactory-joinkey <join-key> \
        --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db> \
        --installer-ip <new-instance-ip> \
        --api-url http://<new-instance-ip>:8082/pipelines/api \
        --www-url http://<new-instance-ip>:30001 \
        --rabbitmq-url amqp://<new-instance-ip>:30200
    
  3. You may perform a health check on the node to confirm it is operating properly.

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'

Install Additional Nodes

Repeat the following procedure for each additional node.

  1. In the new node instance, extract the installer from the downloaded .rpm, .deb, or .tar.gz  file, as performed for the first node.

  2. Copy from the first node instance the file $JFROG_HOME/pipelines/var/etc/system.yaml to the same location in the new instance.

  3. Perform the install procedure in the new node using the Pipelines command line installer.

    $ sudo pipelines install --installer-ip <new-instance-ip>
    
  4. You may perform a health check on the node to confirm it is operating properly.

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'

Configure the Load Balancer

Once all additional nodes have been installed with an identical version of Pipelines, the load balancer must be configured to distribute requests made through a common base URI.

For example, if you want Pipelines to be accessible as mypipelines.jfrog.io over HTTPS, then the port mapping should be configured as follows: 

URI | LB (nginx/ELB) | Backend Instance(s)
https://mypipelines.jfrog.io | [Port: 30001] [TCP] [SSL termination] | [Port: 30001]
https://mypipelines-api.jfrog.io | [Port: 8082] [HTTP] [SSL termination] | [Port: 8082]
https://mypipelines-msg.jfrog.io | [Port: 30200] [TCP] [SSL termination] | [Port: 30200]
https://mypipelines-msg.jfrog.io | [Port: 30201] [TCP] [SSL termination] | [Port: 30201]

Update Nodes

On each node (including the first), run the Pipelines command line installer again to update your installation for the load balanced URI:

$ sudo pipelines install \
    --www-url https://mypipelines.jfrog.io \
    --api-url https://mypipelines-api.jfrog.io/pipelines/api \
    --rabbitmq-url amqps://mypipelines-msg.jfrog.io

Pipelines should now be available in your JFrog Platform at https://myartifactory.jfrog.io

Helm HA Installation

Prerequisites

Before deploying Pipelines using Helm Chart, you will need to have the following in place:

  • An installed Artifactory
  • A pre-created repository jfrogpipelines in Artifactory, of type Generic with a maven-2-default layout
  • A deployed NGINX ingress controller

For more information, see Helm Charts for Advanced Users.

Important

Currently, it is not possible to connect a JFrog product (e.g., Pipelines) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.

High Availability

For an HA Pipelines installation, set the replicaCount in the values.yaml file to >1 (the recommended value is 3). It is highly recommended to also configure the RabbitMQ and Redis subcharts to run in high availability mode. Start Pipelines with 3 replicas per service and 3 replicas for RabbitMQ:

helm upgrade --install pipelines --namespace pipelines --set replicaCount=3 jfrog/pipelines

  1. Add the JFrog Helm Charts repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io 
    
  2. Update the repository.

    helm repo update
  3. Next, create a unique master key. Pipelines requires a unique master key to be used by all microservices in the same cluster. By default, the chart has one set in pipelines.masterKey in the values.yaml file (unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml).

    For production grade installations, it is strongly recommended to use a custom master key. If you initially use the default master key, it will be very hard to change it at a later stage. The default key is for demo purposes only and should not be used in a production environment.

  4. Generate a unique key and pass it to the template during installation/upgrade.

    # Create a key
    export MASTER_KEY=$(openssl rand -hex 32)
    echo ${MASTER_KEY}
    
    # Pass the created master key to Helm
    helm upgrade --install pipelines --set pipelines.masterKey=${MASTER_KEY} --namespace pipelines jfrog/pipelines

    Alternatively, you can create a secret containing the master key manually and pass it to the template during installation/upgrade.

    # Create a key
    export MASTER_KEY=$(openssl rand -hex 32)
    echo ${MASTER_KEY}
    
    # Create a secret containing the key. The key in the secret must be named master-key
    kubectl create secret generic my-secret --from-literal=master-key=${MASTER_KEY}
    
    # Pass the created secret to Helm
    helm upgrade --install pipelines --set pipelines.masterKeySecretName=my-secret --namespace pipelines jfrog/pipelines

    In either case, make sure to pass the same master key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set pipelines.masterKey=${MASTER_KEY}. In the second, this means always passing --set pipelines.masterKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  5. To connect Pipelines to your Artifactory installation, you will need to use a Join Key. To provide a Join Key, jfrogUrl, and jfrogUrlUI to your Pipelines installation, retrieve the connection details of your Artifactory installation from the UI in the following way (for more information see Viewing the Join Key).

    pipelines:
      ## Artifactory URL - Mandatory
      ## If Artifactory and Pipelines are in the same namespace, jfrogUrl is the Artifactory service name; otherwise, it is the external URL of Artifactory
      jfrogUrl: ""
      ## Artifactory UI URL - Mandatory
      ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
      jfrogUrlUI: ""
    
      ## Join Key to connect to Artifactory
      ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
      joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
    
      ## Pipelines requires a unique master key
      ## You can generate one with the command: "openssl rand -hex 32"
      ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
      masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  6. Fetch the Pipelines Helm chart to get the required configuration files.

    helm fetch jfrog/pipelines --untar
  7. Configure the installation by editing the local copies of the values-ingress.yaml and values-ingress-passwords.yaml with the required configuration values.

    1. Edit the URLs in the values-ingress.yaml file (Artifactory URL, Ingress hosts, Ingress TLS secrets).

    2. Set the passwords uiUserPassword, postgresqlPassword, and auth.password in the local copies.

    3. Set the masterKey and joinKey in the values-ingress-passwords.yaml file.

      Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.

      Follow these steps to apply the configuration changes.

      1. Make the changes to values.yaml. 
      2. Run the command.

        helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values.yaml

  8. Install Pipelines.

    kubectl create ns pipelines
    helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
  9. Access Pipelines from your browser at: http://<jfrogUrl>/ui/, then go to the Pipelines tab in the Application module in the UI.

  10. Check the status of your deployed helm releases.

    helm status pipelines

For advanced installation options, see Helm Charts Installers for Advanced Users.


Product Configuration

The command-line options of the Pipelines command line installer can be used to orchestrate a custom configuration of Pipelines. These options can be set during the install process using pipelines install, and you can run pipelines install again later to change configuration settings:

$ sudo pipelines install [flags]


All available options can be listed using the following command:

$ pipelines help
 
  Usage:
    ./pipelines <command> [flags]
 
  Examples:
    ./pipelines install
 
  Commands:
    install                 Run Pipelines installation
      --installer-ip                       Internal IP of the host [mandatory]
      --base-url-ui                        Unified UI URL  [mandatory]
      --artifactory-joinkey                Join key to connect with Artifactory [mandatory]
      --www-url                            Use provided url for WWW [mandatory]
      --api-url                            Use provided url for API [mandatory]
      --rabbitmq-url                       URL to connect to rabbitmq with basic auth e.g. amqp://myMsg.com [mandatory]
      --rabbitmq-admin-url                 URL to connect to rabbitmq admin UI e.g. http://myMsg.com
      --base-url                           Internal Artifactory URL
      --global-password                    Set one password for all services (db, rabbitmq, pipelines). Can be changed later
      --install-user                       User (and Group) that owns the installer generated files and folders (vagrant by default) e.g. obie, obie:obie, 1001:obie, 1002:1002
      --artifactory-proxy                  Proxy server to use for connecting to Artifactory
      --artifactory-proxy-username         User for the proxy server
      --artifactory-proxy-password         Password for proxy server
      --artifactory-service-id             Service Id to register with Artifactory. Format 'jft@<id>'
      --image-registry-url                 Docker registry url for Pipelines component images
      --image-registry-creds               Path to a file containing Docker credentials for the image registry as an alternative to --image-registry-url
      --build-image-registry-url           Docker registry url for images used by default in Pipelines steps
      --state-bucket                       Root bucket name for storing state
      --skip-image-pull                    Do not pull images
      --db-connection-string               A connection string to use an existing Postgres database
      --vault-url                          URL to connect to an existing Vault
      --vault-root-token                   Root token of the existing Vault specified with --vault-url
      --no-verify-ssl                      If true, pass -no-verify-ssl flag to services
      --global-http-proxy                  HTTP proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
      --global-https-proxy                 HTTPS proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
      --global-no-proxy                    No proxy settings to be used in Pipelines in place of any proxy information fetched from Artifactory
      --access-control-allow-origins       API will return these as allowed origins. A comma-separated list of origins should be provided.
      --disable-call-home                  Disable call home functionality
      --enable-call-home                   Enables call home functionality if previously disabled
      --rabbitmq-health-check-interval     RabbitMQ health check interval in mins
      --artifactory-health-check-interval  Artifactory health check interval in mins
      --db-health-check-interval           Database health check interval in mins
      --config-backup-count                Number of upgrades for which to keep backup configurations
    upgrade                 Upgrade current installation
    restart                 Restart Pipelines
    clean                   Remove Pipelines components and files
    info                    Print information about current installation on console
      -j | --json                    Print info as json
    version                 Print current installation control and build plane versions
      -j | --json                    Print info as json
    help                    Print this message


Custom Installation Directory

The environment variable JFROG_HOME is used to determine the location of all the configuration files and data stored for the installer. In any installation command, export this variable to the location of your choice.

$ JFROG_HOME=/my/dir/location pipelines install ....
$ sudo JFROG_HOME=/my/dir/location pipelines install ....

If the installation is not being run by a user with sudo permissions, grant the user full read/write permissions on the JFROG_HOME directory.

All installer commands (upgrade, clean, etc.) need the JFROG_HOME environment variable to run successfully.

$ JFROG_HOME=/my/dir/location pipelines upgrade

To avoid exporting the variable for each command, add it to the global environment file /etc/environment or to a user-specific environment file such as $HOME/.bashrc or $HOME/.bash_profile, as shown below.
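A minimal sketch, reusing the placeholder location /my/dir/location from the examples above:

# System-wide: append to /etc/environment (no "export" keyword in this file)
$ echo 'JFROG_HOME=/my/dir/location' | sudo tee -a /etc/environment

# Per user: append to the user's shell profile
$ echo 'export JFROG_HOME=/my/dir/location' >> $HOME/.bashrc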

Artifactory Connection Details

Pipelines requires a working Artifactory server and a suitable license. The Pipelines connection to Artifactory requires two parameters:

  • baseUrlUI - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082
    Optionally, you may also set an internal URL baseUrl for connecting to Artifactory. You may need to do this if you have set up your JFrog Platform Deployment with a load balancer.
    Use the --base-url-ui option to set both the baseUrlUI and the internal baseUrl to the same URL, or use both --base-url and --base-url-ui to set them to individual URLs (see the sketch after this list).

    The --base-url-ui is the user-accessible URL for Artifactory, whereas the --base-url is the "internal" route to Artifactory. These URLs will be the same if the entire JFrog Platform is on a private network and users access Artifactory via a private IP. In most cases, the --base-url-ui will be a URL, and the --base-url will be an internal IP, internal domain name, or load-balancer IP.

    In the Helm installer, the --base-url should be the internal service URL of Artifactory. This is more efficient and reduces the number of network hops, decreasing the probability of breaking the connection during large file uploads and downloads.

  • joinKey - This is the "secret" key required by Artifactory for registering and authenticating the Pipelines server.
    You can fetch the Artifactory joinKey (join key) from the JPD UI in the Administration module | Security | Settings | Join Key.
    Use the --artifactory-joinkey option to set the joinKey through the command line installer.
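A minimal sketch of a combined invocation, assuming a user-facing Artifactory URL of https://jfrog.mycompany.com and an internal Artifactory address of http://10.20.30.40:8082 (both values, and the other placeholders, are examples only):

$ sudo pipelines install \
    --base-url-ui https://jfrog.mycompany.com \
    --base-url http://10.20.30.40:8082 \
    --artifactory-joinkey <join-key> \
    --installer-ip <instance-ip> \
    --api-url http://<external-ip>:8082/pipelines/api \
    --www-url http://<external-ip>:30001 \
    --rabbitmq-url amqp://<external-ip>:30200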

External Connections 

Pipelines requires network configuration that enables the exchange of messages between the Pipelines CI server (in the controlplane) and all possible build nodes (in the buildplane). Build nodes must be able to command the controlplane through Pipelines REST APIs and send status messages through RabbitMQ. Since build nodes may run outside the private network or VPC where Pipelines is installed (for example, in a public cloud), the ports for these channels should be exposed as public IP addresses. This exposure is mitigated by secure protocols and authentication.

These recommended port settings and exposures ensure full functionality of all documented features and usage of all supported build node types and sources. Custom configurations may support a limited subset of Pipelines functionality.

Please contact JFrog support for assistance in implementing a custom install.

Installer option | Port | Protocol | Default | Description
--installer-ip | none | - | none | REQUIRED: Base IP of the Pipelines instance.
--api-url | 8082 ² | http/https | none | IP or URL for REST APIs. For example: http://34.217.93.187:8082/pipelines/api
--www-url | 30001 ¹ | http/https | none | IP or URL for supplemental UI pages (Run History, Run Log)
--rabbitmq-url | 30200 ² | amqp/amqps | none | IP or URL for messaging between controlplane and buildplane
--rabbitmq-admin-url | 30201 | http/https | <installer-ip>:30201 | Accessed only from within Pipelines

¹ Must be accessible to users (same accessibility as base-url-ui)
² Must be accessible from build nodes (external for cloud VMs)

Example

Artifactory: jfrog.mycompany.com
Instance IP (internal): 10.128.0.16
Pipelines external IP: 34.217.93.187

$ sudo pipelines install \
    --base-url-ui http://jfrog.mycompany.com  \
    --installer-ip 10.128.0.16 \
    --api-url http://34.217.93.187:8082/pipelines/api \
    --www-url http://34.217.93.187:30001 \
    --rabbitmq-url amqp://34.217.93.187:30200 \
    --artifactory-joinkey <join-key>

UI External URL (--www-url)

The www external URL provides access to the Pipelines user interface from outside the installation host. This URL must be defined on first install using the --www-url option; there is no default assignment by the command line installer.

The preferred form of this URL is an IP address with the port. For example:

--www-url http://34.217.93.187:30001

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--www-url http://mypipelines.mycompany.com:30001

API External URL (--api-url)

The API external URL provides access to the Pipelines REST APIs from outside the installation host. This URL must be defined on first install using the --api-url option; there is no default assignment by the command line installer.

The preferred form of this URL is an IP address with the port, followed by the path /pipelines/api. For example:

--api-url http://34.217.93.187:8082/pipelines/api

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--api-url http://mypipelines-api.mycompany.com/pipelines/api

RabbitMQ External URL (--rabbitmq-url)

Build nodes need to connect to the RabbitMQ service running on the installation host to successfully register themselves and signal completion. This URL must be accessible to all build nodes and defined on first install using the --rabbitmq-url option; there is no default assignment by the command line installer.

The preferred form of this URL is an IP address with the port. For example:

--rabbitmq-url amqp://34.217.93.187:30200

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--rabbitmq-url amqp://mypipelines.mycompany.com:30200

You can also set the internal administration URL using the --rabbitmq-admin-url option. If this option is not specified, it will default to http://<installer-ip>:30201.

You can also use these options to specify RabbitMQ authentication credentials in the URLs:

$ sudo pipelines install --rabbitmq-url amqp[s]://user:pass@1.2.3.4:30200 \
         --rabbitmq-admin-url http[s]://adminUser:adminPass@1.2.3.4:30201

External Database

By default, Pipelines installs and connects to a PostgreSQL database that runs in the same instance as the Pipelines service. However, it can be configured to use an external PostgreSQL database if needed. This is required for a high availability installation, so that all HA nodes of Pipelines reference a common, shared database.

Use the commands below to create a Pipelines user and database with appropriate permissions before proceeding to install Pipelines. Modify the relevant values to match your specific environment:

CREATE USER pipelines WITH PASSWORD 'password';
CREATE DATABASE pipelinesdb WITH OWNER=pipelines ENCODING='UTF8';
GRANT ALL PRIVILEGES ON DATABASE pipelinesdb TO pipelines;

After you have verified that the script is correct, run it to create the user and database, and then proceed with configuring the database connection, as sketched below.
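A minimal sketch of running the statements, assuming they are saved to a file named create-pipelines-db.sql and that the external PostgreSQL server listens on the default port (the file name, host, and superuser are placeholders):

# Run the script against the external database as a PostgreSQL superuser
$ psql -h <db-host> -p 5432 -U postgres -f create-pipelines-db.sql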

When installing Pipelines, you must specify the connection string using the following argument in the command line installer. The database connection details are used for each node installation.

$ sudo pipelines install --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db>

External Vault

By default, Pipelines installs and connects to a vault that runs in the same instance as the Pipelines service. However, it can be configured to use an external vault if needed. 

When installing Pipelines, you must specify the Vault URL and the root token using the following arguments in the command line installer.

$ sudo pipelines install --vault-url <external-vault-url> --vault-root-token <external-vault-root-token>

Using Vault in Production Environments

To use Vault securely, you must set the disablemlock setting in the values.yaml to false (see the HashiCorp Vault recommendations).

vault:
  disablemlock: false

Proxy Setup

The Pipelines installer accepts a proxy configuration for connecting to the JFrog Artifactory instance through a proxy server, using the following arguments in the command line installer:

$ sudo pipelines install --artifactory-proxy <proxy-server> \
    --artifactory-proxy-username <proxy-username> \
    --artifactory-proxy-password <proxy-password>

The installer also fetches proxy configurations from the connected JFrog Artifactory instance and injects them into all microservices and execution nodes. This is done to ensure that any outgoing connections use the same proxy settings as are being used by the parent Artifactory instance.
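If you need Pipelines to use proxy settings other than those fetched from Artifactory, the --global-http-proxy, --global-https-proxy, and --global-no-proxy options listed under Product Configuration override the fetched values. A minimal sketch with placeholder proxy addresses:

$ sudo pipelines install \
    --global-http-proxy http://proxy.mycompany.com:8080 \
    --global-https-proxy http://proxy.mycompany.com:8080 \
    --global-no-proxy localhost,127.0.0.1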

State

The installer allows users to set up state providers using the command line installer. State is used by Pipelines to store:

  • Cache
  • Test and coverage reports
  • Step artifacts
  • Step outputs
  • Run outputs

Users can also use state indirectly to download console logs and artifacts from the UI.

Use --state-bucket <bucket name> to configure the Artifactory repository to use for storing state. If this setting is left blank, a name is automatically generated.
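A minimal sketch, assuming a hypothetical repository name pipelines-state:

$ sudo pipelines install --state-bucket pipelines-state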

Non-Root User

The --install-user <username>:<groupname> argument of the command line installer controls the user and group settings for the files created by the installer. By default, the installation runs as the currently logged-in user, as defined by the $HOME environment variable. A host setup that satisfies the prerequisites below is sketched after the list.

Prerequisites

  • The user and group provided as the arguments must exist before running the installation
  • An SSH key pair for the user must exist in the $USER_HOME/.ssh directory
  • The public key should be in the file $USER_HOME/.ssh/id_rsa.pub
  • The private key should be in the file $USER_HOME/.ssh/id_rsa with permissions set to 600
  • The user must have permissions on the JFROG_HOME directory (/opt/jfrog by default)
  • The user must be part of the docker group on the host to execute Docker commands
  • The following dependencies must be installed: Python, jq, yq, curl, nc, psql, and Docker Compose
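A minimal sketch of preparing such a user, reusing the example user name obie from the --install-user description above (the user name, home path, and ownership choices are placeholders; adjust for your environment):

# Create the user and add it to the docker group
$ sudo useradd -m obie
$ sudo usermod -aG docker obie

# Create an SSH key pair in the user's .ssh directory with the expected permissions
$ sudo -u obie ssh-keygen -t rsa -N "" -f /home/obie/.ssh/id_rsa
$ sudo chmod 600 /home/obie/.ssh/id_rsa

# Grant the user permissions on the JFrog home directory (default /opt/jfrog)
$ sudo chown -R obie:obie /opt/jfrog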

Install Image Registry

The --image-registry-url argument of the command line installer specifies the endpoint where the Docker images for the Pipelines services installed by the command line installer are stored. By default, the command line installer will install Pipelines from the JFrog distribution registry at releases-docker.jfrog.io. This should not be changed without instruction from JFrog.

To change the registry for runtime build images, use --build-image-registry-url as described below.

Changing the Default Build Image Registry

The standard set of runtime build images are stored at releases-docker.jfrog.io and the Pipelines command line installer sets this registry location by default.

You may want to copy the build images to a local Docker registry, either to improve image pull times or to avoid requiring access to a remote registry. After copying the images to the new location, you'll need to update Pipelines to use this location. This can be done during installation or as part of an upgrade. Assuming that you have simply moved all of the default images, this just requires setting the --build-image-registry-url option to the new registry when running either pipelines upgrade or pipelines install.

$ sudo pipelines upgrade --build-image-registry-url my.docker.registry.io

While setting --build-image-registry-url to the new registry, ensure that the new registry allows anonymous access for pulling.

Alternatively, if you want to use multiple registries or change the names of the default images, you can edit the Pipelines System YAML file and then run pipelines upgrade without the --build-image-registry-url option to start using the new image settings.


Accessing Pipelines

Once the installation is complete, Pipelines can be accessed as part of the JFrog Platform Deployment.

  1. Access the JFrog Platform from your browser. For example, at: http://<jfrogUrl>/ui/.
  2. For Pipelines functions, go to the Pipelines tab in the Application module.

Once the installation is complete, start configuring Pipelines to create build node pools, add integrations, and add pipeline sources.


Restarting Pipelines

It may be necessary to restart Pipelines on a node. For example, if the VM is restarted, Pipelines will need to be restarted for it to start running again.

If Pipelines was installed with sudo and the default $JFROG_HOME, run sudo pipelines restart. Otherwise, run pipelines restart as the user that installed Pipelines and/or with the same $JFROG_HOME environment variable. For a Pipelines HA installation, restart Pipelines on each node.
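A minimal sketch of both cases (the custom $JFROG_HOME path is a placeholder):

# Installed with sudo and the default $JFROG_HOME
$ sudo pipelines restart

# Installed as a non-root user with a custom $JFROG_HOME
$ JFROG_HOME=/my/dir/location pipelines restart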

