Single Node Installation
Choose one of the following methods for installing Pipelines in Single Node:
Install Using the Pipelines Command Line Installer
Pipelines can be installed through the Pipelines command line installer for Docker and Docker Compose, which automates most installation procedures. The command line installer is also used to change configuration settings, to restart, and to upgrade Pipelines.
The command line installer performs the following procedures on your behalf:
- Installs:
  - Third-party dependencies (PostgreSQL database and RabbitMQ messaging, included in the archive)
  - Pipelines
- Connects to Artifactory (using `joinKey` and `jfrogUrl`).
- Configures the Pipelines service, including:
  - network IP/URL assignments for build node access
  - network IP/URL assignments for REST APIs and the supplemental UI
  - connection to an optional external database
  - registry of default build images
- Starts the service.
Prerequisites to Installation
The Pipelines command line installer invokes Docker Compose to orchestrate installation of the multiple Docker containers that comprise Pipelines.
To facilitate use of Docker Compose by the command line installer, you must install:
- Docker version 18.09 or above
- Docker Compose version 1.24.1 or above
- Python version 2.7 or above
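On a typical Linux host, these prerequisites can be confirmed with a small shell check before running the installer. This is an illustrative sketch, not part of the installer itself; the `ver_ge` helper relies on GNU `sort -V` for version comparison.

```shell
# Best-effort prerequisite check; version strings are extracted with grep.
ver_ge() {
  # true if version $1 >= version $2 (relies on GNU sort -V)
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check() {
  # check <command> <required-version>
  if command -v "$1" >/dev/null 2>&1; then
    have=$("$1" --version 2>&1 | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
    if ver_ge "$have" "$2"; then
      echo "$1 $have OK"
    else
      echo "$1 $have is older than required $2"
    fi
  else
    echo "$1 is not installed"
  fi
}

check docker 18.09
check docker-compose 1.24.1
check python 2.7
```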
Installation Steps
Download one of the JFrog Pipelines installers.
Extract the installer from the downloaded `.rpm`, `.deb`, or `.tar.gz` file (see System Directories for the value of the JFrog root directory `$JFROG_HOME`).

Linux Archive (tar.gz)

```shell
$ mkdir -p installer && tar -C installer -xvzf pipelines-<version>.tar.gz
$ cd installer/pipelines-<version>
$ ./pipelines --help  # prints all the available command line installer options
```

RPM

```shell
$ sudo rpm -Uvh pipelines-<version>.rpm
$ cd $JFROG_HOME/pipelines/installer
```

Debian

```shell
$ sudo dpkg -i pipelines-<version>.deb
$ cd $JFROG_HOME/pipelines/installer
```

Installing Pipelines using RPM or Debian makes the `pipelines` command line installer accessible from any directory. Installing via the generic Linux archive (.tar.gz) extracts the Pipelines files, including the `pipelines` executable, which is then run from the extracted directory as `./pipelines`.
Run the installer.
```shell
$ sudo pipelines install \
    --base-url-ui <jfrog-url> \
    --artifactory-joinkey <join-key> \
    --installer-ip <new-instance-ip> \
    --api-url http://<external-ip>:8082/pipelines/api \
    --www-url http://<external-ip>:30001 \
    --rabbitmq-url amqp://<external-ip>:30200
```
For details on install options, external connections, and using an external database, see Product Configuration.
You may now perform a health check on the node to verify that it is operating properly:

```shell
$ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'
```
Helm Installation
Prerequisites
Before deploying Pipelines using the Helm chart, you will need to have the following in place:

- An installed Artifactory
- A pre-created Generic repository named `jfrogpipelines` in Artifactory, with a `maven-2-default` layout
- A deployed Nginx ingress controller

For more information, see Helm Charts for Advanced Users.
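If the `jfrogpipelines` repository does not exist yet, it can be created ahead of time through Artifactory's repository configuration REST API. This is a sketch: the Artifactory URL and credentials are placeholders, and the command is only printed here so it can be reviewed before being run against a real server.

```shell
# Placeholders: replace ART_URL and the credentials with your own values.
ART_URL="https://artifactory.example.com/artifactory"
PAYLOAD='{"key":"jfrogpipelines","rclass":"local","packageType":"generic","repoLayoutRef":"maven-2-default"}'

# Print the command rather than running it, so it can be reviewed first.
echo "curl -u admin:<password> -X PUT $ART_URL/api/repositories/jfrogpipelines -H 'Content-Type: application/json' -d '$PAYLOAD'"
```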
Add the ChartCenter Helm repository to your Helm client:

```shell
helm repo add center https://repo.chartcenter.io
```

Update the repository:

```shell
helm repo update
```
To connect Pipelines to your Artifactory installation, you will need to use a Join Key. To provide a Join Key, jfrogUrl, and jfrogUrlUI to your Pipelines installation, retrieve the connection details of your Artifactory installation from the UI as shown below. For more information, see Viewing the Join Key.
```yaml
pipelines:
  ## Artifactory URL - Mandatory
  ## If Artifactory and Pipelines are in the same namespace, jfrogUrl is the Artifactory service name; otherwise it is the external URL of Artifactory
  jfrogUrl: ""
  ## Artifactory UI URL - Mandatory
  ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
  jfrogUrlUI: ""
  ## Join Key to connect to Artifactory
  ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
  joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
  ## Pipelines requires a unique master key
  ## You can generate one with the command: "openssl rand -hex 32"
  ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
  masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
```
Get the Pipelines Helm chart, which contains the required configuration files:

```shell
helm fetch center/jfrog/pipelines --untar
```
Configure the installation by editing the local copies of `values-ingress.yaml` and `values-ingress-passwords.yaml` with the required configuration values:

- Edit the URLs in the `values-ingress.yaml` file (Artifactory URL, Ingress hosts, Ingress TLS secrets).
- Set the passwords `uiUserPassword`, `postgresqlPassword`, and `auth.password` in the local copies.
- Set the `masterKey` and `joinKey` in `values-ingress-passwords.yaml`.

Unlike other installations, Helm chart configurations are made in the `values.yaml` and are then applied to the `system.yaml`.

Follow these steps to apply the configuration changes.
- Make the changes to `values.yaml`.
- Run the command:

```shell
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values.yaml
```

Install Pipelines:

```shell
kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
```
Access Pipelines from your browser at `http://<jfrogUrl>/ui/`, then go to the Pipelines tab in the Application module in the UI.

Check the status of your deployed Helm releases:

```shell
helm status pipelines
```
For advanced installation options, see Helm Charts Installers for Advanced Users.
HA Installation
The following describes how to set up a Pipelines HA cluster with two or more nodes. For more information, see the System Architecture.
Choose one of the following methods for installing Pipelines in an HA configuration:
Prerequisites
All nodes within the same Pipelines HA installation must run the same Pipelines version.
Licensing
Pipelines HA is supported with an Enterprise Plus License. Each node in the cluster must be activated with a different license.
Database
Pipelines HA requires an external PostgreSQL database. Make sure you have completed setting up your external database before proceeding to install the first node. The database connection details are used for each node installation.
There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.
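As a sketch, provisioning the external database typically amounts to creating a dedicated role and database for Pipelines. The names and password below are placeholder examples, not values the installer requires:

```sql
-- Run as a PostgreSQL superuser; names and password are examples only.
CREATE USER pipelines WITH PASSWORD 'changeme';
CREATE DATABASE pipelinesdb OWNER pipelines;
GRANT ALL PRIVILEGES ON DATABASE pipelinesdb TO pipelines;
```

The resulting connection string passed to the installer would then take the form `postgres://pipelines:changeme@<db-host>:5432/pipelinesdb`.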
RabbitMQ
RabbitMQ is installed as part of the Pipelines installation for every node. In HA architecture, it uses queue mirroring between the different RabbitMQ nodes.
Network
- All the Pipelines HA components (cluster nodes, database server and RabbitMQ) must be within the same fast LAN.
- All the HA nodes must communicate with each other through dedicated TCP ports.
- Network communications between the cluster nodes must be enabled for each of the cluster nodes.
Install HA Using the Pipelines Command Line Installer
Install the First Node
- Extract the installer from the downloaded `.rpm`, `.deb`, or `.tar.gz` file, as shown for the single node installation.
- Perform the install procedure on the first node using the Pipelines command line installer.

Note: You will need to fetch your jfrogUrl (custom base URL) and join key to link your Pipelines installation to the Platform.

```shell
$ sudo pipelines install \
    --base-url-ui <jfrog-url> \
    --artifactory-joinkey <join-key> \
    --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db> \
    --installer-ip <new-instance-ip> \
    --api-url http://<new-instance-ip>:8082/pipelines/api \
    --www-url http://<new-instance-ip>:30001 \
    --rabbitmq-url amqp://<new-instance-ip>:30200
```
You may perform a health check on the node to confirm it is operating properly:

```shell
$ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'
```
Install Additional Nodes
Repeat the following procedure for each additional node.
- On the new node instance, extract the installer from the downloaded `.rpm`, `.deb`, or `.tar.gz` file, as performed for the first node.
- Copy the file `$JFROG_HOME/pipelines/var/etc/system.yaml` from the first node instance to the same location on the new instance.
- Perform the install procedure on the new node using the Pipelines command line installer.

```shell
$ sudo pipelines install --installer-ip <new-instance-ip>
```

You may perform a health check on the node to confirm it is operating properly:

```shell
$ curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'
```
Configure the Load Balancer
Once all additional nodes have been installed with an identical version of Pipelines, the load balancer must be configured to distribute requests made through a common base URI.
For example, if you want Pipelines to be accessible as mypipelines.jfrog.io over HTTPS, then the port mapping should be configured as follows:
| URI | LB (nginx/ELB) | Backend Instance(s) |
|---|---|---|
| https://mypipelines.jfrog.io | [Port: 30001][TCP][SSL termination] | [Port: 30001] |
| https://mypipelines-api.jfrog.io | [Port: 8082][HTTP][SSL termination] | [Port: 8082] |
| amqps://mypipelines-msg.jfrog.io | [Port: 30200][TCP][SSL termination] | [Port: 30200] |
| | [Port: 30201][TCP][SSL termination] | [Port: 30201] |
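An illustrative nginx configuration matching the port mapping above might look like the following. The upstream IPs, certificate paths, and hostnames are assumptions; an ELB or another load balancer would be configured equivalently.

```nginx
# TCP load balancing with TLS termination for the Pipelines UI and RabbitMQ ports.
stream {
    upstream pipelines_www { server 10.0.0.11:30001; server 10.0.0.12:30001; }
    upstream pipelines_msg { server 10.0.0.11:30200; server 10.0.0.12:30200; }

    server {
        listen 30001 ssl;
        ssl_certificate     /etc/nginx/ssl/mypipelines.crt;
        ssl_certificate_key /etc/nginx/ssl/mypipelines.key;
        proxy_pass pipelines_www;
    }
    server {
        listen 30200 ssl;
        ssl_certificate     /etc/nginx/ssl/mypipelines.crt;
        ssl_certificate_key /etc/nginx/ssl/mypipelines.key;
        proxy_pass pipelines_msg;
    }
}

# HTTP load balancing with TLS termination for the Pipelines API.
http {
    upstream pipelines_api { server 10.0.0.11:8082; server 10.0.0.12:8082; }

    server {
        listen 8082 ssl;
        server_name mypipelines-api.jfrog.io;
        ssl_certificate     /etc/nginx/ssl/mypipelines.crt;
        ssl_certificate_key /etc/nginx/ssl/mypipelines.key;
        location / { proxy_pass http://pipelines_api; }
    }
}
```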
Update Nodes
On each node (including the first), run the Pipelines command line installer again to update your installation for the load balanced URI:
```shell
$ sudo pipelines install \
    --www-url https://mypipelines.jfrog.io \
    --api-url https://mypipelines-api.jfrog.io/pipelines/api \
    --rabbitmq-url amqps://mypipelines-msg.jfrog.io
```

Pipelines should now be available in your JFrog Platform at https://myartifactory.jfrog.io.
Helm HA Installation
Prerequisites
Before deploying Pipelines using the Helm chart, you will need to have the following in place:

- An installed Artifactory
- A pre-created Generic repository named `jfrogpipelines` in Artifactory, with a `maven-2-default` layout
- A deployed Nginx ingress controller

For more information, see Helm Charts for Advanced Users.
Important
Currently, it is not possible to connect a JFrog product (e.g., Pipelines) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.
High Availability
For an HA Pipelines installation, set the replicaCount in the values.yaml file to a value greater than 1 (3 is recommended). It is highly recommended to also configure the RabbitMQ and Redis subcharts to run in their high availability modes. For example, to start Pipelines with 3 replicas per service and 3 replicas for RabbitMQ:

```shell
helm upgrade --install pipelines --namespace pipelines --set replicaCount=3 center/jfrog/pipelines
```
Add the ChartCenter Helm repository to your Helm client:

```shell
helm repo add center https://repo.chartcenter.io
```

Update the repository:

```shell
helm repo update
```
Next, create a unique master key; Pipelines requires a unique master key to be used by all microservices in the same cluster. By default, the chart has one set: `pipelines.masterKey` in the `values.yaml` file. (Unlike other installations, Helm chart configurations are made in the `values.yaml` and are then applied to the `system.yaml`.)

For production-grade installations, it is strongly recommended to use a custom master key. The default key is for demonstration purposes and should not be used in a production environment; if you initially deploy with the default master key, changing it at a later stage will be very hard.
Generate a unique key and pass it to the template during installation/upgrade.
```shell
# Create a key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}

# Pass the created master key to Helm
helm upgrade --install pipelines --set pipelines.masterKey=${MASTER_KEY} --namespace pipelines center/jfrog/pipelines
```
Alternatively, you can create a secret containing the master key manually and pass it to the template during installation/upgrade.
```shell
# Create a key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}

# Create a secret containing the key. The key in the secret must be named master-key
kubectl create secret generic my-secret --from-literal=master-key=${MASTER_KEY}

# Pass the created secret to Helm
helm upgrade --install pipelines --set pipelines.masterKeySecretName=my-secret --namespace pipelines center/jfrog/pipelines
```
In either case, make sure to pass the same master key on all future calls to `helm install` and `helm upgrade`. In the first case, this means always passing `--set pipelines.masterKey=${MASTER_KEY}`. In the second, this means always passing `--set pipelines.masterKeySecretName=my-secret` and ensuring the contents of the secret remain unchanged.

To connect Pipelines to your Artifactory installation, you will need to use a Join Key. To provide a Join Key, jfrogUrl, and jfrogUrlUI to your Pipelines installation, retrieve the connection details of your Artifactory installation from the UI (for more information, see Viewing the Join Key).
```yaml
pipelines:
  ## Artifactory URL - Mandatory
  ## If Artifactory and Pipelines are in the same namespace, jfrogUrl is the Artifactory service name; otherwise it is the external URL of Artifactory
  jfrogUrl: ""
  ## Artifactory UI URL - Mandatory
  ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
  jfrogUrlUI: ""
  ## Join Key to connect to Artifactory
  ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
  joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
  ## Pipelines requires a unique master key
  ## You can generate one with the command: "openssl rand -hex 32"
  ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
  masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
```
Get the Pipelines Helm chart, which contains the required configuration files:

```shell
helm fetch center/jfrog/pipelines --untar
```
Configure the installation by editing the local copies of `values-ingress.yaml` and `values-ingress-passwords.yaml` with the required configuration values:

- Edit the URLs in the `values-ingress.yaml` file (Artifactory URL, Ingress hosts, Ingress TLS secrets).
- Set the passwords `uiUserPassword`, `postgresqlPassword`, and `auth.password` in the local copies.
- Set the `masterKey` and `joinKey` in `values-ingress-passwords.yaml`.

Unlike other installations, Helm chart configurations are made in the `values.yaml` and are then applied to the `system.yaml`.

Follow these steps to apply the configuration changes.
- Make the changes to `values.yaml`.
- Run the command:

```shell
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values.yaml
```

Install Pipelines:

```shell
kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
```
Access Pipelines from your browser at `http://<jfrogUrl>/ui/`, then go to the Pipelines tab in the Application module in the UI.

Check the status of your deployed Helm releases:

```shell
helm status pipelines
```
For advanced installation options, see Helm Charts Installers for Advanced Users.
Product Configuration
The command-line options of the Pipelines command line installer can be used to orchestrate a custom configuration of Pipelines. These options can be set during the install process using `pipelines install`, and you can also run `pipelines install` again later to change configuration settings:

```shell
$ sudo pipelines install [flags]
```
All available options can be listed using the following command:

```shell
$ pipelines help

Usage:
  ./pipelines <command> [flags]

Examples:
  ./pipelines install

Commands:
  install                                Run Pipelines installation
    --installer-ip                       Internal IP of the host [mandatory]
    --base-url-ui                        Unified UI URL [mandatory]
    --artifactory-joinkey                Join key to connect with Artifactory [mandatory]
    --www-url                            Use provided url for WWW [mandatory]
    --api-url                            Use provided url for API [mandatory]
    --rabbitmq-url                       URL to connect to rabbitmq with basic auth e.g. amqp://myMsg.com [mandatory]
    --rabbitmq-admin-url                 URL to connect to rabbitmq admin UI e.g. http://myMsg.com
    --base-url                           Internal Artifactory URL
    --global-password                    Set one password for all services (db, rabbitmq, pipelines). Can be changed later
    --install-user                       User (and Group) that owns the installer generated files and folders (vagrant by default) e.g. obie, obie:obie, 1001:obie, 1002:1002
    --artifactory-proxy                  Proxy server to use for connecting to Artifactory
    --artifactory-proxy-username         User for the proxy server
    --artifactory-proxy-password         Password for proxy server
    --artifactory-service-id             Service Id to register with Artifactory. Format 'jft@<id>'
    --image-registry-url                 Docker registry url for Pipelines component images
    --image-registry-creds               Path to a file containing Docker credentials for the image registry as an alternative to --image-registry-url
    --build-image-registry-url           Docker registry url for images used by default in Pipelines steps
    --state-bucket                       Root bucket name for storing state
    --skip-image-pull                    Do not pull images
    --db-connection-string               A connection string to use an existing Postgres database
    --vault-url                          URL to connect to an existing Vault
    --vault-root-token                   Root token of the existing Vault specified with --vault-url
    --no-verify-ssl                      If true, pass -no-verify-ssl flag to services
    --global-http-proxy                  HTTP proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
    --global-https-proxy                 HTTPS proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
    --global-no-proxy                    No proxy settings to be used in Pipelines in place of any proxy information fetched from Artifactory
    --access-control-allow-origins       API will return these as allowed origins. A comma-separated list of origins should be provided.
    --disable-call-home                  Disable call home functionality
    --enable-call-home                   Enables call home functionality if previously disabled
    --rabbitmq-health-check-interval     RabbitMQ health check interval in mins
    --artifactory-health-check-interval  Artifactory health check interval in mins
    --db-health-check-interval           Database health check interval in mins
    --config-backup-count                Number of upgrades for which to keep backup configurations
  upgrade                                Upgrade current installation
  restart                                Restart Pipelines
  clean                                  Remove Pipelines components and files
  info                                   Print information about current installation on console
    -j | --json                          Print info as json
  version                                Print current installation control and build plane versions
    -j | --json                          Print info as json
  help                                   Print this message
```
Custom Installation Directory
The environment variable `JFROG_HOME` is used to determine the location of all the configuration files and data stored for the installer. In any installation command, export this variable to the location of your choice.

```shell
$ JFROG_HOME=/my/dir/location pipelines install ...
$ sudo JFROG_HOME=/my/dir/location pipelines install ...
```

If the installation is not being run by a user with sudo permissions, grant the user full read/write permissions on the `JFROG_HOME` directory.
All installer commands (`upgrade`, `clean`, etc.) need the `JFROG_HOME` environment variable to run successfully.

```shell
$ JFROG_HOME=/my/dir/location pipelines upgrade
```
To avoid exporting the variable for each command, it can be added to the global environment file in `/etc/environment` or to user-specific environment files such as `$HOME/.bashrc` or `$HOME/.bash_profile`.
Artifactory Connection Details
Pipelines requires a working Artifactory server and a suitable license. The Pipelines connection to Artifactory requires two parameters:
- baseUrlUI - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: `http://jfrog.acme.com` or `http://10.20.30.40:8082`
Optionally, you may also set an internal URL baseUrl for connecting to Artifactory. You may need to do this if you have set up your JFrog Platform Deployment with a load balancer. Use the `--base-url-ui` option to set both the baseUrlUI and the internal baseUrl to the same URL, or use both `--base-url` and `--base-url-ui` to set them to individual URLs.

The `--base-url-ui` is the user-accessible URL for Artifactory, whereas the `--base-url` is the "internal" route to Artifactory. These URLs will be the same if the entire JFrog Platform is on a private network and users access Artifactory via a private IP. In most cases, the `--base-url-ui` will be a URL, and the `--base-url` will be an internal IP, internal domain name, or load-balancer IP.

In the Helm installer, the `--base-url` should be the internal service URL of Artifactory. This is more efficient and reduces the number of network hops, thus decreasing the probability of breaking the connection during large file uploads/downloads.

- joinKey - This is the "secret" key required by Artifactory for registering and authenticating the Pipelines server. You can fetch the Artifactory `joinKey` (join key) from the JPD UI in the Administration module | Security | Settings | Join Key.

Use the `--artifactory-joinkey` option to set the joinKey through the command line installer.
External Connections
Pipelines requires network configuration that enables the exchange of messages between the Pipelines CI server (in the controlplane) and all possible build nodes (in the buildplane). Build nodes must be able to command the controlplane through Pipelines REST APIs and send status messages through RabbitMQ. Since build nodes may run outside the private network or VPC where Pipelines is installed (for example, in a public cloud), the ports for these channels should be exposed as public IP addresses. This exposure is mitigated by secure protocols and authentication.
These recommended port settings and exposures ensure full functionality of all documented features and usage of all supported build node types and sources. Custom configurations may support a limited subset of Pipelines functionality.
Please contact JFrog support for assistance in implementing a custom install.
| Installer option | Port | Protocol | Default | Description |
|---|---|---|---|---|
| --installer-ip | none | none | none | REQUIRED: Base IP of the Pipelines instance. |
| --api-url | 8082 ² | http/https | none | IP or URL for REST APIs |
| --www-url | 30001 ¹ | http/https | none | IP or URL for supplemental UI pages (Run History, Run Log) |
| --rabbitmq-url | 30200 ² | amqp/amqps | none | IP or URL for messaging between controlplane and buildplane |
| --rabbitmq-admin-url | 30201 | http/https | <installer-ip>:30201 | Accessed only from within Pipelines |

¹ Must be accessible to users (same accessibility as base-url-ui)

² Must be accessible from build nodes (external for cloud VMs)
Example

- Artifactory: jfrog.mycompany.com
- Instance IP (internal): 10.128.0.16
- Pipelines external IP: 34.217.93.187
UI External URL (--www-url)
The www external URL provides access to the Pipelines user interface from outside the installation host. This URL must be defined on first install using the `--www-url` option; there is no default assignment by the command line installer.

The preferred form of this URL is an IP address with the port. For example:

```shell
--www-url http://34.217.93.187:30001
```

Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:

```shell
--www-url http://mypipelines.mycompany.com:30001
```
API External URL (--api-url)
The API external URL provides access to the Pipelines REST APIs from outside the installation host. This URL must be defined on first install using the --api-url option; there is no default assignment by the command line installer.
The preferred form of this URL is an IP address with the port, followed by the path /pipelines/api. For example:
```shell
--api-url http://34.217.93.187:8082/pipelines/api
```
Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:
```shell
--api-url http://mypipelines-api.mycompany.com/pipelines/api
```
RabbitMQ External URL (--rabbitmq-url)
Build nodes need to connect to the RabbitMQ service running on the installation host to successfully register themselves and signal completion. This URL must be accessible to all build nodes and defined on first install using the --rabbitmq-url option; there is no default assignment by the command line installer.
The preferred form of this URL is an IP address with the port. For example:
```shell
--rabbitmq-url amqp://34.217.93.187:30200
```
Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:
```shell
--rabbitmq-url amqp://mypipelines.mycompany.com:30200
```
You can also set the internal administration URL using the `--rabbitmq-admin-url` option. If this option is not specified, it defaults to `http://<installer-ip>:30201`.
You can also use these options to specify RabbitMQ authentication credentials in the URLs:
```shell
$ sudo pipelines install --rabbitmq-url amqp[s]://user:pass@1.2.3.4:30200 \
    --rabbitmq-admin-url http[s]://adminUser:adminPass@1.2.3.4:30201
```
External Database
By default, Pipelines installs and connects to a PostgreSQL database that runs in the same instance as the Pipelines service. However, it can be configured to use an external PostgreSQL database if needed. This is required for a high availability installation, so that all HA nodes of Pipelines reference a common, shared database.
Make sure you have completed setting up your external database before proceeding to install Pipelines. The database connection details are used for each node installation.
When installing Pipelines, you must specify the connection string using the following argument in the command line installer:

```shell
$ sudo pipelines install --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db>
```
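As a small sketch, the connection string can be assembled from its parts before running the installer (all values below are placeholders); verifying connectivity with `psql` first can save a failed install:

```shell
# Placeholders for the external database connection details.
DB_USER=pipelines
DB_PASS=changeme
DB_HOST=10.20.30.40
DB_PORT=5432
DB_NAME=pipelinesdb

DB_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DB_URL"

# Optional: confirm the database is reachable before installing
# (psql is already a dependency of the installer).
# psql "$DB_URL" -c 'SELECT 1;'
```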
External Vault
By default, Pipelines installs and connects to a vault that runs in the same instance as the Pipelines service. However, it can be configured to use an external vault if needed.
When installing Pipelines, you must specify the `--vault-url` string and the root token using the following arguments in the command line installer:

```shell
$ sudo pipelines install --vault-url <external-vault-url> --vault-root-token <external-vault-root-token>
```
Using Vault in Production Environments
To use Vault securely, you must set the `disablemlock` setting in the `values.yaml` to `false` (see the HashiCorp Vault recommendations).

```yaml
vault:
  disablemlock: false
```
Proxy Setup
The Pipelines installer accepts a proxy configuration for connecting to the JFrog Artifactory instance through a proxy server, using the following arguments in the command line installer:

```shell
$ sudo pipelines install --artifactory-proxy <proxy-server> \
    --artifactory-proxy-username <proxy-username> \
    --artifactory-proxy-password <proxy-password>
```
The installer also fetches proxy configurations from the connected JFrog Artifactory instance and injects them into all microservices and execution nodes. This is done to ensure that any outgoing connections use the same proxy settings as are being used by the parent Artifactory instance.
State
The installer allows users to set up state providers using the command line installer. State is used by Pipelines to store:
- Cache
- Test and coverage reports
- Step artifacts
- Step outputs
- Run outputs
Users can also use state indirectly to download console logs and artifacts from the UI.
Use `--state-bucket <bucket name>` to configure the Artifactory repository to use for storing state. If this setting is left blank, a name is automatically generated.
Non-Root User
The `--install-user <username>:<groupname>` argument of the command line installer controls the user and group settings for the files created by the installer. By default, the installation runs as the currently logged-in user, as defined by the `$HOME` environment variable.
Prerequisites
- The user and group provided as the arguments must exist before running the installation
- An ssh keypair for the user must exist in the `$USER_HOME/.ssh` directory:
  - The public key should be in the file `$USER_HOME/.ssh/id_rsa.pub`
  - The private key should be in the file `$USER_HOME/.ssh/id_rsa`, with permissions set to 600
- The user must have permissions on the `JFROG_HOME` directory (`/opt/jfrog` by default)
- The user must be part of the `docker` group on the host to execute Docker commands
- The following dependencies must be installed: Python, jq, yq, curl, nc, psql, and Docker Compose
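A sketch of preparing such a user's home directory follows. The directory here is a temporary stand-in for the real user's home, and the actual user/group creation and `docker` group membership require root:

```shell
# Stand-in for the install user's home directory in this sketch.
USER_HOME=$(mktemp -d)

# Create the .ssh directory and an RSA keypair with the expected file names.
mkdir -p "$USER_HOME/.ssh"
ssh-keygen -t rsa -N "" -f "$USER_HOME/.ssh/id_rsa" -q

# The private key must have 600 permissions.
chmod 600 "$USER_HOME/.ssh/id_rsa"

# On the real host (as root), the install user would also need Docker access:
# usermod -aG docker <username>

ls "$USER_HOME/.ssh"
```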
Install Image Registry
The `--image-registry-url` argument of the command line installer specifies the endpoint where the Docker images for the Pipelines services to be installed are stored. By default, the command line installer installs Pipelines from the JFrog distribution registry at releases-docker.jfrog.io.

This should not be changed without instruction from JFrog.

To change the registry for runtime build images, use `--build-image-registry-url` as described below.
Changing the Default Build Image Registry
The standard set of runtime build images is stored at `releases-docker.jfrog.io`, and the Pipelines command line installer sets this registry location by default.
You may want to copy the build images to a local Docker registry, either to improve image pull times or to avoid requiring access to a remote registry. After copying the images to the new location, you'll need to update Pipelines to use this location. This can be done during installation or as part of an upgrade. Assuming that you have simply moved all of the default images, this just requires setting the `--build-image-registry-url` option to the new registry when running either `pipelines upgrade` or `pipelines install`:

```shell
$ sudo pipelines upgrade --build-image-registry-url my.docker.registry.io
```
Alternatively, if you want to use multiple registries or change the names of the default images, you can edit the Pipelines System YAML file and then run `pipelines upgrade` without the `--build-image-registry-url` option to start using the new image settings.
Accessing Pipelines
Once the installation is complete, Pipelines can be accessed as part of the JFrog Platform Deployment.
- Access the JFrog Platform from your browser. For example, at: `http://<jfrogUrl>/ui/`.
- For Pipelines functions, go to the Pipelines tab in the Application module.
Once the installation is complete, start configuring Pipelines to create build node pools, add integrations, and add pipeline sources.
Restarting Pipelines
It may be necessary to restart Pipelines on a node. For example, if the VM is restarted, Pipelines will need to be restarted for it to start running again.
If Pipelines was installed with sudo and the default `$JFROG_HOME`, run `sudo pipelines restart`. Otherwise, run `pipelines restart` as the user that installed Pipelines and/or with the same `$JFROG_HOME` environment variable.