Overview

This page describes the different ways you can install and configure JFrog Pipelines, both on a single node and for high availability. Additional information on high availability can be found here.

System Requirements

To install Pipelines 1.x, you must first install JFrog Artifactory 7.x.

Before you install Pipelines, refer to System Requirements for information on supported platforms, supported browsers, other requirements, and the system architecture.

You should also review the external connections details to confirm that your port assignments and public IP addresses conform to the recommended configuration.

The current version of Pipelines has been validated to operate on:

  • Ubuntu 18.04 LTS
  • Ubuntu 16.04 LTS
  • CentOS 7
  • RHEL 7 

Installation Steps

Pipelines is installed through the Pipelines Command Line Installer (CLI), which automates most of the installation procedure.

In addition to installation, the CLI is also used to change configuration settings, to restart, and to upgrade Pipelines.

The installation procedure involves the following main steps:

  1. Download the package(s) to install the Pipelines Command Line Installer for your distribution type (Linux Archive, RPM, or Debian).
  2. Install and run the Pipelines CLI to install Pipelines, either as a single node installation or as a high availability cluster.
    The Pipelines CLI performs the following procedures on your behalf:
    1. Installation of
      1. third-party dependencies (PostgreSQL database and RabbitMQ messaging, included in the archive)
      2. Pipelines
    2. Connection to Artifactory (using joinKey and jfrogUrl)
    3. Configuration of the Pipelines service, including
      1. network IP/URL assignments for build node access
      2. network IP/URL assignments for REST APIs and the supplemental UI
      3. connection to an optional external database
      4. registration of default build images
    4. Starting the service
  3. Post-install steps, including configuration of Pipelines.

Default Home Directory / $JFROG_HOME

The default Artifactory home directory is defined according to the installation type. For additional details see the Product Directory Structure page.

Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product, the home directory for all JFrog products.

Single Node Installation

JFrog Pipelines is installed using the Pipelines Command Line Installer (CLI), which must first be downloaded. The following describes how to install Pipelines on your system.

Prerequisites to Installation

The Pipelines CLI invokes Docker Compose to orchestrate installation of the multiple Docker containers that compose JFrog Pipelines.

To facilitate the CLI's use of Docker Compose, you must install Docker and Docker Compose on the installation host before running the installer.

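As a quick sanity check before running the installer, you can confirm that both are present and that the Docker daemon is running (a minimal sketch; consult System Requirements for the exact supported versions):

    $ docker --version                  # Docker Engine must be installed
    $ docker-compose --version          # Docker Compose is invoked by the Pipelines CLI
    $ sudo systemctl is-active docker   # the Docker daemon should report "active"
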
Installation

  1. Extract the installer from the downloaded .rpm, .deb, or .tar.gz file. (See System Directories for the value of the JFrog root directory $JFROG_HOME.)

    Linux Archive (tar.gz)
    $ mkdir -p installer && tar -C installer -xvzf pipelines-<version>.tar.gz
    $ cd installer/pipelines-<version>
    $ ./pipelines --help # prints all the available CLI options

    RPM
    $ sudo rpm -Uvh pipelines-<version>.rpm
    $ cd $JFROG_HOME/pipelines/installer

    Debian
    $ sudo dpkg -i pipelines-<version>.deb
    $ cd $JFROG_HOME/pipelines/installer



  2. Run the installer.
    Note: You will need to fetch your jfrogUrl (custom base URL) and join key to link your Pipelines installation to the Platform.

    For a standard single node installation:

    $ sudo pipelines install \
        --base-url-ui <jfrog-url> \
        --artifactory-joinkey <join-key> \
        --installer-ip <new-instance-ip> \
        --api-url http://<external-ip>:8082/pipelines/api \
        --www-url http://<external-ip>:30001 \
        --rabbitmq-url amqp://<external-ip>:30200
    

    For details on install options, external connections, and using an external database, see the Product Configuration section.


  3. You may perform a health check on the node to confirm it is operating properly:

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'



HA Installation

The following describes how to set up a Pipelines HA cluster with two or more nodes. For more information, see the System Architecture.

Prerequisites

All nodes within the same Pipelines HA installation must run the same Pipelines version.

Licensing

Pipelines HA is supported with an Enterprise Plus License. Each node in the cluster must be activated with a different license.

Database

Pipelines HA requires an external PostgreSQL database.

Make sure you have completed setting up your external database before proceeding to install the first node. The database connection details are used for each node installation.

There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.
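
As an illustration only, the following sketches creating a dedicated database and user on an external PostgreSQL server; the names pipelinesdb and pipelines are hypothetical and should match the values you later pass in --db-connection-string:

    $ psql -h <db-host> -U postgres
    postgres=# CREATE USER pipelines WITH PASSWORD '<password>';
    postgres=# CREATE DATABASE pipelinesdb WITH OWNER pipelines ENCODING 'UTF8';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE pipelinesdb TO pipelines;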

RabbitMQ

RabbitMQ is installed as part of the Pipelines installation for every node. In an HA architecture, it uses queue mirroring between the different RabbitMQ nodes.

Network

All the Pipelines HA components (cluster nodes, database server and RabbitMQ) must be within the same fast LAN.

All the HA nodes must communicate with each other through dedicated TCP ports.

Network communication must be enabled between each of the cluster nodes.

Install the First Node

  1. Extract the installer from the downloaded .rpm, .deb, or .tar.gz  file, as shown for the single node installation.

  2. Perform the install procedure in the first node using the Pipelines CLI.
    Note: You will need to fetch your jfrogUrl (custom base URL) and join key to link your Pipelines installation to the Platform.

    $ sudo pipelines install \
        --base-url-ui <jfrog-url> \
        --artifactory-joinkey <join-key> \
        --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db> \
        --installer-ip <new-instance-ip> \
        --api-url http://<new-instance-ip>:8082/pipelines/api \
        --www-url http://<new-instance-ip>:30001 \
        --rabbitmq-url amqp://<new-instance-ip>:30200
    
  3. You may perform a health check on the node to confirm it is operating properly:

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'

Install Additional Nodes

Repeat the following procedure for each additional node:

  1. In the new node instance, extract the installer from the downloaded .rpm, .deb, or .tar.gz  file, as performed for the first node.

  2. Copy the file $JFROG_HOME/pipelines/var/etc/system.yaml from the first node instance to the same location on the new instance (see the example after this list).

  3. Perform the install procedure in the new node using the Pipelines CLI:

    $ sudo pipelines install --installer-ip <new-instance-ip>
    
  4. You may perform a health check on the node to confirm it is operating properly:

    $ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'
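
For step 2 above, one way to copy the file is over scp, assuming SSH access between the instances; the user, IP placeholder, and default directory shown are illustrative:

    # run on the new node instance; /opt/jfrog is the default $JFROG_HOME
    $ scp <user>@<first-node-ip>:/opt/jfrog/pipelines/var/etc/system.yaml /tmp/system.yaml
    $ sudo cp /tmp/system.yaml /opt/jfrog/pipelines/var/etc/system.yaml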

Configure the Load Balancer

Once all additional nodes have been installed with an identical version of Pipelines, the load balancer must be configured to distribute requests made through a common base URI.

For example, if you want Pipelines to be accessible as mypipelines.jfrog.io over HTTPS, then the port mapping should be configured as follows: 

URI                                 LB (nginx/ELB)                         Backend Instance(s)
https://mypipelines.jfrog.io        [Port: 30001][TCP][SSL termination]    [Port: 30001]
https://mypipelines-api.jfrog.io    [Port: 8082][HTTP][SSL termination]    [Port: 8082]
https://mypipelines-msg.jfrog.io    [Port: 30200][TCP][SSL termination]    [Port: 30200]
https://mypipelines-msg.jfrog.io    [Port: 30201][TCP][SSL termination]    [Port: 30201]

Update Nodes

On each node (including the first), run the Pipelines CLI again to update your installation for the load balanced URI:

$ sudo pipelines install \
    --www-url https://mypipelines.jfrog.io \
    --api-url https://mypipelines-api.jfrog.io/pipelines/api \
    --rabbitmq-url amqps://mypipelines-msg.jfrog.io


Pipelines should now be available in your JPD at https://myartifactory.jfrog.io



Product Configuration

The command-line options of the Pipelines CLI can be used to apply a custom configuration of JFrog Pipelines. These options can be passed during the install process with pipelines install, and you can also rerun pipelines install later to change configuration settings:

$ sudo pipelines install [flags]


All the available options can be listed using the following command:

$ pipelines help

Usage:
    ./pipelines <command> [flags]

  Examples:
    ./pipelines install

  Commands:
    install                 Run Pipelines installation
      --installer-ip               Internal IP of the host
      --base-url-ui                Unified UI URL
      --base-url                   Internal Artifactory URL
      --artifactory-joinkey        Join key to connect with Artifactory
      --global-password            Set one password for all services (db, rabbitmq, pipelines internal service user token). Can be changed later
      --install-user               User (and Group) that owns the installer generated files and folders ($USER by default) 
                                     e.g. obie, obie:obie, 1001:obie, 1002:1002
      --artifactory-proxy          Proxy server to use for connecting to Artifactory
      --artifactory-proxy-username User for the proxy server
      --artifactory-proxy-password Password for proxy server
      --artifactory-service-id     Service Id to register with Artifactory. Format 'jfrt@<id>'
      --www-url                    Use provided url for WWW instead of host IP
      --api-url                    Use provided url for API instead of host IP
      --rabbitmq-url               URL to connect to rabbitmq e.g. amqp://myMsg.com
      --rabbitmq-admin-url         URL to connect to rabbitmq admin UI e.g., http://myMsg.com
      --image-registry-url         Image registry endpoint (default: docker.bintray.io)
      --build-image-registry-url   Docker registry url for images used by default in Pipelines steps
      --state-bucket               Root bucket name for storing state
      --skip-image-pull            Do not pull images
      --db-connection-string       A connection string to use an existing Postgres database
      --no-verify-ssl              Pass -no-verify-ssl flag to services
      --disable-call-home          Disable call home functionality
      --vault-url                  An existing vault URL
      --vault-root-token           Root token of the vault
      --rabbitmq-health-check-interval 		RabbitMQ health check interval in mins
      --artifactory-health-check-interval 	Artifactory health check interval in mins
    upgrade                 Upgrade current installation
    restart                 Restart Pipelines
    clean                   Remove Pipelines components and files
    info                    Print information about current installation on console
      -j | --json                  Print info as json
    version                 Print current installation control and build plane versions
      -j | --json                  Print info as json
    help                    Print this message


Custom Installation Directory

The environment variable JFROG_HOME is used to determine the location of all the configuration files and data stored for the installer. In any installation command, export this variable to the location of your choice.

$ JFROG_HOME=/my/dir/location pipelines install ....
$ sudo JFROG_HOME=/my/dir/location pipelines install ....

If the installation is not being run by a user with sudo permissions, grant the user full read/write permissions on the JFROG_HOME directory.

All installer commands (upgrade, clean, and so on) need the JFROG_HOME environment variable to run successfully.

$ JFROG_HOME=/my/dir/location pipelines upgrade

To avoid exporting the variable for each command, add it to the global environment file in /etc/environment or to user-specific environment files such as $HOME/.bashrc or $HOME/.bash_profile.
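
For example, a minimal sketch that persists the variable system-wide and grants the install user full access to the directory (the path is illustrative):

    $ echo 'JFROG_HOME=/my/dir/location' | sudo tee -a /etc/environment
    $ sudo mkdir -p /my/dir/location
    $ sudo chown -R $USER:$USER /my/dir/location    # full read/write for the install user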

Artifactory Connection Details

Pipelines requires a working Artifactory server and a suitable license. The Pipelines connection to Artifactory requires two parameters:

  • baseUrlUI - URL to the machine where Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082
    Optionally, you may also set an internal URL baseUrl for connecting to Artifactory. You may need to do this if you have set up your JFrog Platform Deployment with a load balancer.
    Use the --base-url-ui option to set both the baseUrlUI and the internal baseUrl to the same URL, or use both --base-url and --base-url-ui to set them to different URLs.

  • joinKey - This is the "secret" key required by Artifactory for registering and authenticating the Pipelines server.
    You can fetch the Artifactory joinKey (join Key) from the JPD UI in the Administration module | Security | Settings | Join Key
    Use the --artifactory-joinkey option to set the joinKey through the CLI.
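
Putting these parameters together, a hedged example that sets a separate internal Artifactory URL (reusing the example URLs above) could look like this:

    $ sudo pipelines install \
        --base-url-ui http://jfrog.acme.com \
        --base-url http://10.20.30.40:8082 \
        --artifactory-joinkey <join-key>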

External Connections 

Pipelines requires network configuration that enables the exchange of messages between the Pipelines CI server (in the controlplane) and all possible build nodes (in the buildplane). Build nodes must be able to command the controlplane through Pipelines REST APIs and send status messages through RabbitMQ. Since build nodes may run outside the private network or VPC where Pipelines is installed (for example, in a public cloud), the ports for these channels should be exposed as public IP addresses. This exposure is mitigated by secure protocols and authentication.

These recommended port settings and exposures ensure full functionality of all documented features and usage of all supported build node types and sources. Custom configurations may support a limited subset of Pipelines functionality.

Please contact JFrog support for assistance in implementing a custom install.

Installer option        Port        Protocol      Default                 Description
--installer-ip          none        -             none                    REQUIRED: Base IP of the Pipelines instance.
--api-url               8082 (2)    http/https    none                    IP or URL for REST APIs. For example: http://34.217.93.187:8082/pipelines/api
--www-url               30001 (1)   http/https    none                    IP or URL for supplemental UI pages (Run History, Run Log)
--rabbitmq-url          30200 (2)   amqp/amqps    none                    IP or URL for messaging between controlplane and buildplane
--rabbitmq-admin-url    30201       http/https    <installer-ip>:30201    Accessed only from within Pipelines

(1) Must be accessible to users (same accessibility as base-url-ui)
(2) Must be accessible from build nodes (external for cloud VMs)

Example

Artifactory: jfrog.mycompany.com
Instance IP (internal): 10.128.0.16
Pipelines external IP: 34.217.93.187

$ sudo pipelines install \
    --base-url-ui http://jfrog.mycompany.com  \
    --installer-ip 10.128.0.16 \
    --api-url http://34.217.93.187:8082/pipelines/api \
    --www-url http://34.217.93.187:30001 \
    --rabbitmq-url amqp://34.217.93.187:30200 \
    --artifactory-joinkey <join-key>
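
With the example values above, you can verify from a build node that the published endpoints are reachable; a minimal check using nc (already one of the installer's dependencies):

    $ nc -zv 34.217.93.187 8082     # REST API port (--api-url)
    $ nc -zv 34.217.93.187 30001    # supplemental UI port (--www-url)
    $ nc -zv 34.217.93.187 30200    # RabbitMQ port (--rabbitmq-url)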

UI External URL (--www-url)

The www external URL provides access to the Pipelines user interface from outside the installation host. This URL must be defined on first install using the --www-url option; there is no default assignment by the CLI.

The preferred form of this URL is an IP address with the port. For example:

--www-url http://34.217.93.187:30001

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--www-url http://mypipelines.mycompany.com:30001

API External URL (--api-url)

The API external URL provides access to the Pipelines REST APIs from outside the installation host. This URL must be defined on first install using the --api-url option; there is no default assignment by the CLI.

The preferred form of this URL is an IP address with the port, followed by the path /pipelines/api. For example:

--api-url http://34.217.93.187:8082/pipelines/api

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--api-url http://mypipelines-api.mycompany.com/pipelines/api

RabbitMQ External URL (--rabbitmq-url)

Build nodes need to connect to the RabbitMQ service running on the installation host to successfully register themselves and signal completion. This URL must be accessible to all build nodes and defined on first install using the --rabbitmq-url option; there is no default assignment by the CLI.

The preferred form of this URL is an IP address with the port. For example:

--rabbitmq-url amqp://34.217.93.187:30200

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

--rabbitmq-url amqp://mypipelines.mycompany.com:30200

You can also set the internal administration URL using the --rabbitmq-admin-url option. If this option is not specified, it will default to http://<installer-ip>:30201.

You can also use these options to specify RabbitMQ authentication credentials in the URLs:

$ sudo pipelines install --rabbitmq-url amqp[s]://user:pass@1.2.3.4:30200 \
         --rabbitmq-admin-url http[s]://adminUser:adminPass@1.2.3.4:30201

External Database

By default, Pipelines installs and connects to a PostgreSQL database that runs in the same instance as the Pipelines service. However, it can be configured to use an external PostgreSQL database if needed. This is required for a high availability installation, so that all HA nodes of Pipelines reference a common, shared database.

Make sure you have completed setting up your external database before proceeding to install Pipelines. The database connection details are used for each node installation.

When installing Pipelines, you must specify the connection string using the following argument in CLI:

$ sudo pipelines install --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db>
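
Optionally, before installing, you can verify the connection string from the Pipelines host with psql (one of the installer's listed dependencies):

    $ psql "postgres://<user>:<pass>@<ip>:<port>/<db>" -c 'SELECT version();'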

External Vault

By default, Pipelines installs and connects to a vault that runs in the same instance as the Pipelines service. However, it can be configured to use an external vault if needed. 

When installing Pipelines, you must specify the vault URL and the root token using the following arguments in the CLI:

$ sudo pipelines install --vault-url <external-vault-url> --vault-root-token <external-vault-root-token>


Proxy Setup

The Pipelines installer accepts a proxy configuration for connecting to the Artifactory instance through a proxy server, using the following CLI arguments:

$ sudo pipelines install --artifactory-proxy <proxy-server> \
    --artifactory-proxy-username <proxy-username> \
    --artifactory-proxy-password <proxy-password>

The installer also fetches proxy configurations from the connected Artifactory instance and injects them into all microservices and execution nodes. This is done to ensure that any outgoing connections use the same proxy settings as are being used by the parent Artifactory instance.

State

The installer allows users to set up state providers using the CLI. State is used by Pipelines to store:

  • Cache
  • Test and coverage reports
  • Step artifacts
  • Step outputs
  • Run outputs

Users can also use state indirectly to download console logs and artifacts from the UI.

Use --state-bucket <bucket name>  to configure the Artifactory repository to use for storing state. If this setting is left blank, a name is automatically generated.
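
For example, to set the state repository name explicitly when installing or reconfiguring (the repository name shown is hypothetical):

    $ sudo pipelines install --state-bucket my-pipelines-state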

Non-Root User

The --install-user <username>:<groupname> argument of the CLI controls the user and group settings for the files created by the installer. By default, the installation runs as the currently logged-in user, as defined by the $HOME environment variable.

Prerequisites

  • The user and group provided as the arguments must exist before running the installation
  • An SSH key pair for the user must exist in the $USER_HOME/.ssh directory
  • The public key should be in the file $USER_HOME/.ssh/id_rsa.pub
  • The private key should be in the file $USER_HOME/.ssh/id_rsa with permissions set to 600
  • The user must have permissions on the JFROG_HOME directory (/opt/jfrog by default)
  • The user must be part of the docker group on the host, to be able to execute Docker commands
  • The following dependencies must be installed: Python, jq, yq, curl, nc, psql, and Docker Compose
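
A minimal sketch of preparing such a user to satisfy the prerequisites above, assuming a hypothetical user named obie and the default JFROG_HOME of /opt/jfrog:

    $ sudo useradd -m obie                                              # the user and group must exist beforehand
    $ sudo usermod -aG docker obie                                      # required to execute Docker commands
    $ sudo -u obie mkdir -p /home/obie/.ssh
    $ sudo -u obie ssh-keygen -t rsa -N '' -f /home/obie/.ssh/id_rsa    # creates id_rsa (mode 600) and id_rsa.pub
    $ sudo chown -R obie:obie /opt/jfrog                                # permissions on the JFROG_HOME directory
    $ sudo pipelines install --install-user obie:obie ...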

Install Image Registry

The --image-registry-url argument of the CLI specifies the registry endpoint from which the Pipelines CLI pulls the Docker images for the Pipelines services. By default, the CLI installs Pipelines from the JFrog distribution registry at docker.bintray.io. This should not be changed without instruction from JFrog.

To change the registry for runtime build images, use --build-image-registry-url as described below.

Changing the Default Build Image Registry

The standard set of runtime build images are stored at docker.bintray.io and the Pipelines CLI sets this registry location by default.

You may want to copy the build images to a local Docker registry, either to improve image pull times or to avoid requiring access to a remote registry. After copying the images to the new location, update Pipelines to use that location. This can be done during installation or as part of an upgrade: assuming you have simply moved all of the default images, set the --build-image-registry-url option to the new registry when running either pipelines upgrade or pipelines install, and all the default images for Pipelines will be updated.


$ sudo pipelines upgrade --build-image-registry-url my.docker.registry.io


Alternatively, if you want to use multiple registries or change the names of the default images, you can edit the Pipelines System YAML file and then run pipelines upgrade without the --build-image-registry-url option to start using the new image settings.



Post-Install Steps

Once the installation is complete, Pipelines can be accessed as part of the JFrog Platform Deployment.

  1. Access the JFrog Platform from your browser. For example, at: http://<jfrogUrl>/ui/.
  2. For Pipelines functions, go to the Pipelines tab in the Application module.

Once the installation is complete, start configuring Pipelines to create build node pools, add integrations, and add pipeline sources.


Restarting Pipelines

It may be necessary to restart Pipelines on a node. For example, if the VM is restarted, Pipelines must be restarted for it to start running again.

If Pipelines was installed with sudo and the default $JFROG_HOME, run sudo pipelines restart. Otherwise, run pipelines restart as the user that installed Pipelines and with the same $JFROG_HOME environment variable.
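
For example (the custom directory shown is the same illustrative path used under Custom Installation Directory):

    $ sudo pipelines restart                            # default installation with sudo
    $ JFROG_HOME=/my/dir/location pipelines restart     # non-root installation in a custom directory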
