Overview

Pipeline jobs simplify building continuous delivery workflows with Jenkins by creating a script that defines the steps of your build. For those not familiar with Jenkins Pipeline, please refer to the Pipeline Tutorial or the Getting Started With Pipeline documentation.

The Jenkins Artifactory Plugin adds pipeline APIs for Artifactory operations, giving you the option of downloading dependencies, uploading artifacts, and publishing build-info to Artifactory from a pipeline script.

This page describes how to use declarative pipeline syntax with Artifactory. Declarative syntax is available from version 3.0.0 of the Jenkins Artifactory Plugin.
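For context, the rt* steps described on this page run inside the steps block of a declarative pipeline stage. Here's a minimal sketch of a Jenkinsfile combining steps shown later on this page (the stage names are placeholders; the server details and File Spec match the examples below):

```groovy
pipeline {
    agent any
    stages {
        stage ('Artifactory configuration') {
            steps {
                // Define the Artifactory server to be used by later steps:
                rtServer (
                    id: 'Artifactory-1',
                    url: 'http://my-artifactory-domain/artifactory',
                    credentialsId: 'ccrreeddeennttiiaall'
                )
            }
        }
        stage ('Upload') {
            steps {
                // Upload files, recording them as build artifacts:
                rtUpload (
                    serverId: 'Artifactory-1',
                    spec: '''{
                        "files": [{
                            "pattern": "bazinga/*froggy*.zip",
                            "target": "bazinga-repo/froggy-files/"
                        }]
                    }'''
                )
            }
        }
        stage ('Publish build info') {
            steps {
                rtPublishBuildInfo (
                    serverId: 'Artifactory-1'
                )
            }
        }
    }
}
```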


Scripted syntax is also supported. Read more about it here.

Examples

The Jenkins Pipeline Examples can help get you started creating your pipeline jobs with Artifactory.




Creating an Artifactory Server Instance

There are two ways to tell the pipeline script which Artifactory server to use. You can either define the server details as part of the pipeline script, or define the server details in Manage | Configure System.

If you choose to define the Artifactory server in the pipeline, add the following to the script:

rtServer (
    id: 'Artifactory-1',
    url: 'http://my-artifactory-domain/artifactory',
    // If you're using username and password:
    username: 'user',
    password: 'password',
    // If you're using Credentials ID:
    credentialsId: 'ccrreeddeennttiiaall',
    // If Jenkins is configured to use an http proxy, you can bypass the proxy when using this Artifactory server:
    bypassProxy: true,
    // Configure the connection timeout (in seconds).
    // The default value (if not configured) is 300 seconds:
    timeout: 300
)


You can also use a Jenkins Credential ID instead of the username and password:
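For example, here's a sketch of the same server definition that authenticates with a Credentials ID only (the credential with the ID used below is assumed to already be configured in Jenkins):

```groovy
rtServer (
    id: 'Artifactory-1',
    url: 'http://my-artifactory-domain/artifactory',
    // Reference a credential stored in Jenkins instead of an inline username and password:
    credentialsId: 'ccrreeddeennttiiaall'
)
```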

The id property (Artifactory-1 in the above examples) is a unique identifier for this server, allowing us to reference this server later in the script. If you prefer to define the server in Manage | Configure System, you don't need to add the rtServer definition shown above. Instead, you can reference the server using its configured Server ID.


Uploading and Downloading Files

To download files, add the following closure to the pipeline script:

rtDownload (
    serverId: 'Artifactory-1',
    spec: '''{
          "files": [
            {
              "pattern": "bazinga-repo/froggy-files/",
              "target": "bazinga/"
            }
          ]
    }''',

    // Optional - Associate the downloaded files with the following custom build name and build number,
    // as build dependencies.
    // If not set, the files will be associated with the default build name and build number (i.e. the
    // Jenkins job name and number).
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

In the above example, files are downloaded from the Artifactory server referenced by the Artifactory-1 server ID.

The above closure also includes a File Spec, which specifies which files should be downloaded. In this example, all files under the bazinga-repo/froggy-files/ path in Artifactory are downloaded into the bazinga directory on your Jenkins agent's file system.

Uploading files is very similar. The following example uploads all ZIP files which include froggy in their names into the froggy-files folder in the bazinga-repo Artifactory repository.

rtUpload (
    serverId: 'Artifactory-1',
    spec: '''{
          "files": [
            {
              "pattern": "bazinga/*froggy*.zip",
              "target": "bazinga-repo/froggy-files/"
            }
         ]
    }''',

    // Optional - Associate the uploaded files with the following custom build name and build number,
    // as build artifacts.
    // If not set, the files will be associated with the default build name and build number (i.e. the
    // Jenkins job name and number).
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

You can manage File Specs in separate files, instead of embedding them in the rtUpload and rtDownload closures. This allows keeping the File Specs in source control, possibly alongside the project sources. Here's how you reference a File Spec file in the rtUpload closure (the configuration is similar for rtDownload):

rtUpload (
    serverId: 'Artifactory-1',
    specPath: 'path/to/spec/relative/to/workspace/spec.json',

    // Optional - Associate the uploaded files with the following custom build name and build number.
    // If not set, the files will be associated with the default build name and build number (i.e. the
    // Jenkins job name and number).
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

You can read about using File Specs for downloading and uploading files here.

If you'd like to fail the build in case no files are uploaded or downloaded, add the failNoOp property to the rtUpload or rtDownload closures as follows:

rtUpload (
    serverId: 'Artifactory-1',
    specPath: 'path/to/spec/relative/to/workspace/spec.json',
    failNoOp: true,

    // Optional - Associate the uploaded files with the following custom build name and build number.
    // If not set, the files will be associated with the default build name and build number (i.e. the
    // Jenkins job name and number).
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

Setting and Deleting Properties on Files in Artifactory

When uploading files to Artifactory using the rtUpload closure, you have the option of setting properties on the files. These properties can later be used to filter and download those files.
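For example, here's a sketch of an rtUpload closure that sets two properties on the uploaded files through the props field of the File Spec (the property names and values below are placeholders):

```groovy
rtUpload (
    serverId: 'Artifactory-1',
    spec: '''{
        "files": [{
            "pattern": "bazinga/*froggy*.zip",
            "target": "bazinga-repo/froggy-files/",
            "props": "p1=v1;p2=v2"
        }]
    }'''
)
```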

In some cases, you may want to set properties on files that are already in Artifactory. The way to do this is very similar to the way you define which files to download or upload: a File Spec is used to filter the files on which the properties should be set. The properties themselves are sent outside the File Spec. Here's an example:

rtSetProps (
	serverId: 'Artifactory-1',
	specPath: 'path/to/spec/relative/to/workspace/spec.json',
	props: 'p1=v1;p2=v2',
	failNoOp: true
)

In the above example:

  1. The serverId property is used to reference a pre-configured Artifactory server instance, as described in the Creating an Artifactory Server Instance section.
  2. The specPath property includes a path to a File Spec, which has a similar structure to the File Spec used for downloading files.
  3. The props property defines the properties we'd like to set. In the above example, we're setting two properties, p1 and p2, with the values v1 and v2 respectively.
  4. The failNoOp property is optional. Setting it to true will cause the job to fail if no properties have been set.

You also have the option of specifying the File Spec directly inside the rtSetProps closure as follows.

rtSetProps (
	serverId: 'Artifactory-1',
	props: 'p1=v1;p2=v2',      
	spec: '''{
	    "files": [{
	    	"pattern": "my-froggy-local-repo",
	    	"props": "filter-by-this-prop=yes"
        }]}'''
)

The rtDeleteProps closure is used to delete properties from files in Artifactory. The syntax is similar to the rtSetProps closure. The only difference is that in rtDeleteProps, we specify only the names of the properties to delete, comma separated; the property values should not be specified. Here's an example:

rtDeleteProps (
	serverId: 'Artifactory-1',
	specPath: 'path/to/spec/relative/to/workspace/spec.json',
	props: 'p1,p2,p3',
	failNoOp: true
)

Similarly to the rtSetProps closure, the File Spec can be defined directly inside the closure, as shown here:

rtDeleteProps (
	serverId: 'Artifactory-1',
	props: 'p1,p2,p3',      
	spec: '''{
	    "files": [{
    		"pattern": "my-froggy-local-repo",
    		"props": "filter-by-this-prop=yes"
    	}]}'''
)



Publishing Build-Info to Artifactory

If you're not yet familiar with the build-info entity, please read about it here.

Files which are downloaded by the rtDownload closure are automatically registered as the current build's dependencies, while files that are uploaded by the rtUpload closure are registered as the build artifacts. The dependencies and artifacts are recorded locally and can later be published as build-info to Artifactory.

Here's how you publish the build-info to Artifactory:

rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    // The buildName and buildNumber below are optional. If you do not set them, the Jenkins job name is used
    // as the build name. The same goes for the build number.
    // If you choose to set a custom build name and build number by adding the following buildName and
    // buildNumber properties, you should make sure that previous build steps (for example rtDownload
    // and rtUpload) have the same buildName and buildNumber set. If they don't, these steps will not
    // be included in the build-info.
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)


If you set a custom build name and number as shown above, please make sure to set the same build name and number in the rtUpload or rtDownload closures as shown below. If you don't, Artifactory will not be able to associate these files with the build, and the files will therefore not be displayed in Artifactory.

rtDownload (
    serverId: 'Artifactory-1',
    // Build name and build number for the build-info:
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',
    // You also have the option of customising the build-info module name:
    module: 'my-custom-build-info-module-name',
    specPath: 'path/to/spec/relative/to/workspace/spec.json'
)

rtUpload (
    serverId: 'Artifactory-1',
    // Build name and build number for the build-info:
    buildName: 'holyFrog',
    buildNumber: '42',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',
    // You also have the option of customising the build-info module name:
    module: 'my-custom-build-info-module-name',
    specPath: 'path/to/spec/relative/to/workspace/spec.json'
)

Capturing Environment Variables

To set the Build-Info object to automatically capture environment variables while downloading and uploading files, add the following to your script.

It is important to place the rtBuildInfo closure before any steps associated with this build (for example, rtDownload and rtUpload), so that its configured functionality (for example, environment variables collection) will be invoked as part of these steps.

rtBuildInfo (
    captureEnv: true,

    // Optional - Build name and build number. If not set, the Jenkins job's build name and build number are used.
    buildName: 'my-build',
    buildNumber: '20',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

By default, environment variable names which include "password", "psw", "secret", "token", or "key" (case insensitive) are excluded and will not be published to Artifactory.

You can add more include/exclude patterns with wildcards as follows:

rtBuildInfo (
    captureEnv: true,
    includeEnvPatterns: ['*abc*', '*bcd*'],
    excludeEnvPatterns: ['*private*', 'internal-*'],

    // Optional - Build name and build number. If not set, the Jenkins job's build name and build number are used.
    buildName: 'my-build',
    buildNumber: '20',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)


Triggering Build Retention

Build retention can be triggered when publishing build-info to Artifactory using the rtPublishBuildInfo closure. Build retention should therefore be configured before publishing the build, by placing the following rtBuildInfo closure in the script before the rtPublishBuildInfo closure:

rtBuildInfo (
    // Optional - Maximum builds to keep in Artifactory.
    maxBuilds: 1,
    // Optional - Maximum days to keep the builds in Artifactory.
    maxDays: 2,
    // Optional - List of build numbers to keep in Artifactory.
    doNotDiscardBuilds: ['3'],
    // Optional (the default is false) - Also delete the build artifacts when deleting a build.
    deleteBuildArtifacts: true,

    // Optional - Build name and build number. If not set, the Jenkins job's build name and build number are used.
    buildName: 'my-build',
    buildNumber: '20',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)



Collecting Build Issues

The build-info can include the issues which were handled as part of the build. The list of issues is automatically collected by Jenkins from the git commit messages. This requires the project developers to use a consistent commit message format, which includes the issue ID and issue summary, for example:

HAP-1364 - Replace tabs with spaces

The list of issues can then be viewed in the Builds UI in Artifactory, along with a link to the issue in the issue tracking system.

The information required for collecting the issues is provided through a JSON configuration. This configuration can be provided as a file or as a JSON string. Here's an example of an issues collection configuration:

{
    "version": 1,
    "issues": {
        "trackerName": "JIRA",
        "regexp": "(.+-[0-9]+)\\s-\\s(.+)",
        "keyGroupIndex": 1,
        "summaryGroupIndex": 2,
        "trackerUrl": "http://my-jira.com/issues",
        "aggregate": "true",
        "aggregationStatus": "RELEASED"
    }
}

Configuration file properties:

  - version: The schema version is intended for internal use. Do not change!
  - trackerName: The name (type) of the issue tracking system. For example, JIRA. This property can take any value.
  - trackerUrl: The issue tracking URL. This value is used for constructing a direct link to the issues in the Artifactory build UI.
  - keyGroupIndex: The capturing group index in the regular expression used for retrieving the issue key. In the example above, setting the index to "1" retrieves HAP-1364 from this commit message: HAP-1364 - Replace tabs with spaces
  - summaryGroupIndex: The capturing group index in the regular expression used for retrieving the issue summary. In the example above, setting the index to "2" retrieves "Replace tabs with spaces" from this commit message: HAP-1364 - Replace tabs with spaces
  - aggregate: Set to true if you wish all builds to include issues from previous builds.
  - aggregationStatus: If aggregate is set to true, this property indicates how far back in time issues should be aggregated. In the above example, issues will be aggregated from previous builds until a build with a RELEASED status is found. Build statuses are set when a build is promoted using the jfrog rt build-promote command.
  - regexp: A regular expression used for matching the git commit messages. The expression should include two capturing groups, for the issue key (ID) and the issue summary. In the example above, the regular expression matches commit messages such as: HAP-1364 - Replace tabs with spaces

Here's how you configure issues collection in the pipeline script:

rtCollectIssues (
    serverId: 'Artifactory-1',
    config: '''{
        "version": 1,
        "issues": {
            "trackerName": "JIRA",
            "regexp": "(.+-[0-9]+)\\s-\\s(.+)",
            "keyGroupIndex": 1,
            "summaryGroupIndex": 2,
            "trackerUrl": "http://my-jira.com/issues",
            "aggregate": "true",
            "aggregationStatus": "RELEASED"
        }
    }'''
)

In the above example, the issues config is embedded inside the rtCollectIssues closure. You also have the option of providing a file which includes the issues configuration. Here's how you do this:

rtCollectIssues (
    serverId: 'Artifactory-1',
    configPath: '/path/to/config.json'
)

If you'd like to add the issues information to a specific build-info, you can also provide the build name and build number as follows:

rtCollectIssues (
    serverId: 'Artifactory-1',
    configPath: '/path/to/config',
    buildName: 'my-build',
    buildNumber: '20',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

To help you get started, we recommend using the GitHub Examples.



Aggregating Builds

The build-info published to Artifactory can include multiple modules representing different build steps. As shown earlier in this section, you just need to pass the same buildName and buildNumber to all the steps that need it (rtUpload for example).

What happens, however, if your build process runs on multiple machines or is spread across different time periods? How do you aggregate all the build steps into one build-info?

In that case, you have the option of creating and publishing a separate build-info for each segment of the build process, and then aggregating all those published builds into one build-info. The end result is one build-info which references other, previously published build-infos.

In the following example, our pipeline script publishes two build-info instances to Artifactory:

rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    buildName: 'my-app-linux',
    buildNumber: '1'
)

rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    buildName: 'my-app-windows',
    buildNumber: '1'
)

At this point, we have two build-infos stored in Artifactory. Now let's create our final build-info, which references the previous two:

rtBuildAppend(
    // Mandatory:
    serverId: 'Artifactory-1',
    appendBuildName: 'my-app-linux',
    appendBuildNumber: '1',
 
    // The buildName and buildNumber below are optional. If you do not set them, the Jenkins job name is used
    // as the build name. The same goes for the build number.
    // If you choose to set a custom build name and build number by adding the following buildName and
    // buildNumber properties, you should make sure that previous build steps (for example rtDownload
    // and rtUpload) have the same buildName and buildNumber set. If they don't, these steps will not
    // be included in the build-info.
    buildName: 'final',
    buildNumber: '1'
)
 
rtBuildAppend(
    // Mandatory:
    serverId: 'Artifactory-1',
    appendBuildName: 'my-app-windows',
    appendBuildNumber: '1',
    buildName: 'final',
    buildNumber: '1'
)

// Publish the aggregated build-info to Artifactory. 
rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    buildName: 'final',
    buildNumber: '1'
)

If the published builds in Artifactory are associated with a project, you should add the project key to the rtBuildAppend and rtPublishBuildInfo steps as follows.

rtBuildAppend(
    // Mandatory:
    serverId: 'Artifactory-1',
    appendBuildName: 'my-app-linux',
    appendBuildNumber: '1',
    buildName: 'final',
    buildNumber: '1',
    project: 'my-project-key'
)
 
rtBuildAppend(
    // Mandatory:
    serverId: 'Artifactory-1',
    appendBuildName: 'my-app-windows',
    appendBuildNumber: '1',
    buildName: 'final',
    buildNumber: '1',
    project: 'my-project-key'
)

// Publish the aggregated build-info to Artifactory. 
rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    buildName: 'final',
    buildNumber: '1',
    project: 'my-project-key'
)

Build Promotion and Build Scanning with Xray do not currently support aggregated builds.

Promoting Builds in Artifactory

To promote a build between repositories in Artifactory, define the promotion parameters in the rtPromote closure. For example:

rtPromote (
    // Mandatory parameters

    buildName: 'MK',
    buildNumber: '48',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',
    // Artifactory server ID from Jenkins configuration, or from configuration in the pipeline script
    serverId: 'Artifactory-1',
    // Name of target repository in Artifactory 
    targetRepo: 'libs-release-local',

    // Optional parameters

    // Comment and Status to be displayed in the Build History tab in Artifactory
    comment: 'this is the promotion comment',
    status: 'Released',
    // Specifies the source repository for build artifacts. 
    sourceRepo: 'libs-snapshot-local',
    // Indicates whether to promote the build dependencies, in addition to the artifacts. False by default.
    includeDependencies: true,
    // Indicates whether to fail the promotion process in case of failing to move or copy one of the files. False by default
    failFast: true,
    // Indicates whether to copy the files. Move is the default.
    copy: true
)



Allowing Interactive Promotion for Published Builds

The Promoting Builds in Artifactory section describes how your Pipeline script can promote builds in Artifactory. In some cases, however, you'd like the build promotion to be performed after the build finishes. You can configure your Pipeline job to expose some or all of the builds it publishes to Artifactory, so that they can later be promoted interactively using a GUI. Here's what the Interactive Promotion window looks like:


When the build finishes, the promotion window will be accessible by clicking the promotion icon next to the build run. To enable interactive promotion for a published build, add the rtAddInteractivePromotion closure as shown below.

rtAddInteractivePromotion (
    // Mandatory parameters

    // Artifactory server ID from Jenkins configuration, or from configuration in the pipeline script
    serverId: 'Artifactory-1',
    buildName: 'MK',
    buildNumber: '48',
    // Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',

    // Optional parameters

    // If set, the promotion window will display this label instead of the build name and number.
    displayName: 'Promote me please',
    // Name of target repository in Artifactory 
    targetRepo: 'libs-release-local',
    // Comment and Status to be displayed in the Build History tab in Artifactory
    comment: 'this is the promotion comment',
    status: 'Released',
    // Specifies the source repository for build artifacts. 
    sourceRepo: 'libs-snapshot-local',
    // Indicates whether to promote the build dependencies, in addition to the artifacts. False by default.
    includeDependencies: true,
    // Indicates whether to fail the promotion process in case of failing to move or copy one of the files. False by default
    failFast: true,
    // Indicates whether to copy the files. Move is the default.
    copy: true
)

You can add multiple rtAddInteractivePromotion closures, to include multiple builds in the promotion window.


Maven Builds with Artifactory

Maven builds can resolve dependencies, deploy artifacts and publish build-info to Artifactory.

The Jenkins Artifactory Plugin integrates with Maven through the "install" goal. It is therefore mandatory to include "install" as one of the goals when using the Artifactory API for Maven. There's no need to add the "deploy" goal, since the deployment to Artifactory does not use it.

To run Maven builds with Artifactory from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtMavenResolver closure, which defines the dependencies resolution details, and an rtMavenDeployer closure, which defines the artifacts deployment details. Here's an example:

rtMavenResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    releaseRepo: 'libs-release',
    snapshotRepo: 'libs-snapshot'
)   

rtMavenDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
    releaseRepo: 'libs-release-local',
    snapshotRepo: 'libs-snapshot-local',
    // By default, 3 threads are used to upload the artifacts to Artifactory. You can override this default by setting:
    threads: 6,
    // Attach custom properties to the published artifacts:
    properties: ['key1=value1', 'key2=value2']
)

As you can see in the example above, the resolver and deployer should have a unique ID, so that they can be referenced later in the script. In addition, they include an Artifactory server ID and the names of release and snapshot Maven repositories.

Now we can run the maven build, referencing the resolver and deployer we defined:

rtMavenRun (
    // Tool name from Jenkins configuration.
    tool: MAVEN_TOOL,
    // Set to true if you'd like the build to use the Maven Wrapper.
    useWrapper: true,
    pom: 'maven-example/pom.xml',
    goals: 'clean install',
    // Maven options.
    opts: '-Xms1024m -Xmx4096m',
    resolverId: 'resolver-unique-id',
    deployerId: 'deployer-unique-id',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

Instead of setting the tool in the rtMavenRun closure, you can set the path to the Maven installation directory using the MAVEN_HOME environment variable as follows:

environment {
    MAVEN_HOME = '/tools/apache-maven-3.3.9'
}

In case you'd like Maven to use a different JDK than your build agent's default, no problem.
Simply set the JAVA_HOME environment variable to the desired JDK path (the path to the directory above the bin directory, which includes the java executable).

environment {
    JAVA_HOME = '/full/path/to/JDK'
}


The last thing you might want to do is publish the build-info for this build. See the Publishing Build-Info to Artifactory section for how to do it.


Gradle Builds with Artifactory

Gradle builds can resolve dependencies, deploy artifacts and publish build-info to Artifactory.

Gradle Compatibility

The minimum supported Gradle version is 4.10.

To run Gradle builds with Artifactory from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtGradleResolver closure, which defines the dependencies resolution details, and an rtGradleDeployer closure, which defines the artifacts deployment details. Here's an example:

rtGradleResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    repo: 'jcenter-remote'
)
     
rtGradleDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-snapshot-local',
    // Optional - By default, 3 threads are used to upload the artifacts to Artifactory. You can override this default by setting:
    threads: 6,
    // Optional - Attach custom properties to the published artifacts:
    properties: ['key1=value1', 'key2=value2'],
    // Optional - Gradle allows customizing the list of deployed artifacts by defining publications as part of the Gradle build script.
    // Gradle publications are used to group artifacts together. You have the option of defining which of the defined publications Jenkins should use. Only the artifacts grouped by these publications will be deployed to Artifactory.
    // If you do not define the publications, a default publication, which includes the list of artifacts produced by a Java project, will be used.
    // Here's how you define the list of publications:
    publications: ["mavenJava", "ivyJava"]
    // If you'd like to deploy the artifacts from all the publications defined in the Gradle script, you can set the "ALL_PUBLICATIONS" string as follows:
    // publications: ["ALL_PUBLICATIONS"]
)

As you can see in the example above, the resolver and deployer should have a unique ID, so that they can be referenced later in the script. In addition, they include an Artifactory server ID and the names of the repositories.

If you're using Gradle to build a project which produces Maven artifacts, you also have the option of defining two deployment repositories as part of the rtGradleDeployer closure: one repository for snapshot artifacts and one for release artifacts. Here's how you define it:

rtGradleDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
	releaseRepo: 'libs-release',
	snapshotRepo: 'libs-snapshot'
)   


Now we can run the Gradle build, referencing the resolver and deployer we defined:

rtGradleRun (
    // Set to true if the Artifactory Plugin is already defined in the build script.
    usesPlugin: true,
    // Tool name from Jenkins configuration.
    tool: GRADLE_TOOL,
    // Set to true if you'd like the build to use the Gradle Wrapper.
    useWrapper: true,
    rootDir: 'gradle-examples/gradle-example/',
    buildFile: 'build.gradle',
    tasks: 'clean artifactoryPublish',
    resolverId: 'resolver-unique-id',
    deployerId: 'deployer-unique-id',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

In case you'd like Gradle to use a different JDK than your build agent's default, no problem.
Simply set the JAVA_HOME environment variable to the desired JDK path (the path to the directory above the bin directory, which includes the java executable).
Here's how you do it:

environment {
    JAVA_HOME = '/full/path/to/JDK'
}


The last thing you might want to do is publish the build-info for this build. See the Publishing Build-Info to Artifactory section for how to do it.

You also have the option of defining default values in the gradle build script. Read more about it here.



Python Builds with Artifactory

Python builds can resolve dependencies, deploy artifacts and publish build-info to Artifactory. To run Python builds with Artifactory, start by following these steps to make sure your Jenkins agent is ready:

  1. Make sure Python is installed on the build agent and that the python command is in the PATH.
  2. Install pip. You can use the Pip Documentation and also Installing packages using pip and virtual environments.
  3. Make sure wheel and setuptools are installed. You can use the Installing Packages Documentation.
  4. Validate that the build agent is ready by running the following commands from the terminal:
Output Python version:
> python --version
 
Output pip version:
> pip --version
 
Verify wheel is installed:
> wheel -h
 
Verify setuptools is installed:
> pip show setuptools
 
Verify that virtual-environment is activated:
> echo $VIRTUAL_ENV

To run Python builds from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtPipResolver, which defines the dependencies resolution details. Here's an example:

rtPipResolver (
    id: "resolver-unique-id",
    serverId: "Artifactory-1",
    repo: "pip-virtual"
)

As you can see in the example above, the resolver should have a unique ID, so that it can be referenced later in the script. In addition, it includes an Artifactory server ID and the name of the repository.

Now we can use the rtPipInstall closure to resolve the pip dependencies. Notice that the closure references the resolver we defined above.

rtPipInstall (
    resolverId: "resolver-unique-id",
    args: "-r python-example/requirements.txt",
    envActivation: virtual_env_activation,
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

Notice the envActivation property in the example above. It is an optional property. Since it is generally recommended to run pip commands inside a virtual environment, to achieve isolation for the pip build, you have the option of using envActivation to pass a shell script as its value, which sets up the virtual environment.
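
For example, here's a minimal sketch of how the envActivation value might be defined earlier in the script. The script content and the variable name are illustrative assumptions, not part of the plugin API:

```groovy
// Hypothetical shell snippet that creates and activates a virtual environment.
// It is passed to rtPipInstall through the envActivation property.
virtual_env_activation = """
    python3 -m venv my-build-venv
    . my-build-venv/bin/activate
"""
```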

In most cases, your build also produces artifacts. The artifacts produced can be deployed to Artifactory using the rtUpload closure, as described in the Uploading and Downloading Files section in this article.

The last thing you might want to do is publish the build-info for this build. See the Publishing Build Info to Artifactory section for how to do it.

More about build-info: You also have the option of customising the build-info module names. You do this by adding the module property to the rtPipInstall closure as follows:

rtPipInstall (
    resolverId: "resolver-unique-id",
    args: "-r python-example/requirements.txt",
    envActivation: virtual_env_activation,
	module: 'my-custom-build-info-module-name'
)
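
Putting it all together, here's a minimal sketch of a declarative pipeline stage that runs the steps above. The server URL, credentials ID and repository names are placeholders, not values the plugin requires:

```groovy
pipeline {
    agent any
    stages {
        stage('Python Build') {
            steps {
                // Define the Artifactory server to use.
                rtServer (
                    id: 'Artifactory-1',
                    url: 'http://my-artifactory-domain/artifactory',
                    credentialsId: 'my-credentials-id'
                )
                // Define where pip dependencies are resolved from.
                rtPipResolver (
                    id: 'resolver-unique-id',
                    serverId: 'Artifactory-1',
                    repo: 'pip-virtual'
                )
                // Resolve the dependencies.
                rtPipInstall (
                    resolverId: 'resolver-unique-id',
                    args: '-r python-example/requirements.txt'
                )
                // Publish the collected build-info.
                rtPublishBuildInfo (
                    serverId: 'Artifactory-1'
                )
            }
        }
    }
}
```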


NuGet and .NET Core Builds with Artifactory

The Artifactory Plugin's integration with the NuGet and .NET Core clients allows builds to resolve dependencies, deploy artifacts and publish build-info to Artifactory.

  • Depending on the client you'd like to use, please make sure either the nuget or dotnet clients are included in the build agent's PATH.
  • If you're using the dotnet client, please note that the minimum version supported is .NET Core 3.1.200 SDK. 

To run NuGet / DotNet Core builds with Artifactory from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtNugetResolver or rtDotnetResolver (depending on whether you're using NuGet or .NET Core), which defines the dependency resolution details. Here's an example:

rtNugetResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-nuget'
)

// OR

rtDotnetResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-nuget'
)

As you can see in the example above, the resolver should have a unique ID, so that it can be referenced later in the script. In addition, it includes an Artifactory server ID and the name of the repository.

Now we can use the rtNugetRun or rtDotnetRun closure to resolve the NuGet dependencies. Notice that the closure references the resolver we defined above.

rtNugetRun (
    resolverId: "resolver-unique-id",
    args: "restore ./Examples.sln",
	// Optional - Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005",
    // Optional - By default, the build uses NuGet API protocol v2. If you'd like to use v3, set it on the build instance as follows.
    apiProtocol: "v3",
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

// OR

rtDotnetRun (
    resolverId: "resolver-unique-id",
    args: "restore ./Examples.sln",
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005",
    // Optional - By default, the build uses NuGet API protocol v2. If you'd like to use v3, set it on the build instance as follows.
    apiProtocol: "v3",
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

In most cases, your build also produces artifacts. The artifacts can be NuGet packages, DLL files or any other type of artifact. The artifacts produced can be deployed to Artifactory using the rtUpload closure, as described in the Uploading and Downloading Files section in this article.

The last thing you might want to do is publish the build-info for this build. See the Publishing Build Info to Artifactory section for how to do it.

More about build-info: You also have the option of customising the build-info module names. You do this by adding the module property to the rtNugetRun or rtDotnetRun closures as follows:

rtNugetRun (
    resolverId: "resolver-unique-id",
    args: "restore ./Examples.sln",
	module: 'my-custom-build-info-module-name'
)

// OR

rtDotnetRun (
    resolverId: "resolver-unique-id",
    args: "restore ./Examples.sln",
	module: 'my-custom-build-info-module-name'
)



NPM Builds with Artifactory

NPM builds can resolve dependencies, deploy artifacts and publish build-info to Artifactory. To run NPM builds with Artifactory from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtNpmResolver closure, which defines the dependency resolution details, and an rtNpmDeployer closure, which defines the artifact deployment details. Here's an example:

rtNpmResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-npm'
)
      
rtNpmDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-npm-local',
    // Attach custom properties to the published artifacts:
    properties: ['key1=value1', 'key2=value2']
)


As you can see in the example above, the resolver and deployer should have a unique ID, so that they can be referenced later in the script. In addition, they include an Artifactory server ID and the name of the repository.

Now we can use the rtNpmInstall or rtNpmCi closures to resolve the NPM dependencies. Notice that the closure references the resolver we defined above.

rtNpmInstall (
    // Optional tool name from Jenkins configuration
    tool: NPM_TOOL,
    // Optional path to the project root. If not set, the root of the workspace is assumed as the root project path.
    path: 'npm-example',
    // Optional npm flags or arguments.
    args: '--verbose',
    resolverId: 'resolver-unique-id',
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

The rtNpmInstall step invokes the npm install command behind the scenes. If you'd like to use the npm ci command instead, simply replace the step name with rtNpmCi.
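
For example, here's a minimal sketch using rtNpmCi, assuming the resolver defined above. The properties are the same as for rtNpmInstall:

```groovy
rtNpmCi (
    // Optional path to the project root. If not set, the root of the workspace is assumed as the root project path.
    path: 'npm-example',
    resolverId: 'resolver-unique-id'
)
```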


And to pack and publish the npm package our project creates, we use the rtNpmPublish closure with a reference to the deployer we defined.

rtNpmPublish (
    // Optional tool name from Jenkins configuration
    tool: 'npm-tool-name',
    // Optional path to the project root. If not set, the root of the workspace is assumed as the root project path.
    path: 'npm-example',
    deployerId: 'deployer-unique-id',
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

The build uses the npm executable to install (download the dependencies) and also to pack the resulting npm package before publishing it. By default, Jenkins uses the npm executable, which is present in the agent’s PATH. You can also reference a tool defined in Jenkins configuration. Here's how:

environment {
    // Path to the NodeJS home directory (not to the npm executable)
    NODEJS_HOME = 'path/to/the/nodeJS/home'
}
// or
environment {
    // If a tool named 'nodejs-tool-name' is defined in Jenkins configuration.
    NODEJS_HOME = "${tool 'nodejs-tool-name'}"
}
// or
nodejs(nodeJSInstallationName: 'nodejs-tool-name') {
    // Only in this code scope, the npm defined by 'nodejs-tool-name' is used.
}

If the npm installation is not set, the npm executable which is found in the agent's PATH is used.

The last thing you might want to do is publish the build-info for this build. See the Publishing Build Info to Artifactory section for how to do it.

More about build-info: You also have the option of customising the build-info module names. You do this by adding the module property to the rtNpmInstall or rtNpmPublish closures as follows:


rtNpmInstall (
    tool: 'npm-tool-name',
    path: 'npm-example',
    resolverId: 'resolver-unique-id',
    module: 'my-custom-build-info-module-name'
)

rtNpmPublish (
    tool: 'npm-tool-name',
    path: 'npm-example',
    deployerId: 'deployer-unique-id',
    module: 'my-custom-build-info-module-name'
)


Go Builds with Artifactory

While building your Go projects, Jenkins can resolve dependencies, deploy artifacts and publish build-info to Artifactory.

Please make sure that the go client is included in the build agent's PATH.


To run Go builds with Artifactory from your Pipeline script, you first need to create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

The next step is to define an rtGoResolver closure, which defines the dependency resolution details, and an rtGoDeployer closure, which defines the artifact deployment details. Here's an example:


rtGoResolver (
    id: 'resolver-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-go'
)
       
rtGoDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
    repo: 'libs-go-local',
    // Attach custom properties to the published artifacts:
    properties: ['key1=value1', 'key2=value2']
)

As you can see in the example above, the resolver and deployer should have a unique ID, so that they can be referenced later in the script. In addition, they include an Artifactory server ID and the name of the repository.

Now we can use the rtGoRun closure to run the build. Notice that the closure references the resolver we defined above.

rtGoRun (
    path: 'path/to/the/project/root',
    resolverId: 'resolver-unique-id',    
    args: 'build',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)

Now that the project is built, you can pack and publish it to Artifactory as a Go package. We use the rtGoPublish closure with a reference to the deployer we defined.

rtGoPublish (
    path: 'path/to/the/project/root',
    deployerId: 'deployer-unique-id',
    version: '1.0.0'
)

The last thing you might want to do is publish the build-info for this build. See the Publishing Build Info to Artifactory section for how to do it.

More about build-info: You also have the option of customising the build-info module names. You do this by adding the module property to the rtGoRun or rtGoPublish closures as follows:

rtGoRun (
    path: 'path/to/the/project/root',
    resolverId: 'resolver-unique-id',    
    args: 'build',
    module: 'my-custom-build-info-module-name'
)

rtGoPublish (
    path: 'path/to/the/project/root',
    deployerId: 'deployer-unique-id',
    version: '1.0.0',
    module: 'my-custom-build-info-module-name'
)


Conan Builds with Artifactory

Conan is a C/C++ Package Manager. The Artifactory Pipeline DSL includes APIs that make it easy for you to run Conan builds, using the Conan Client installed on your build agents. Here's what you need to do before you create your first Conan build job with Jenkins:

1. Install the latest Conan Client on your Jenkins build agent. Please refer to the Conan documentation for installation instructions.

2. Add the Conan Client executable to the PATH environment variable on your build agent, to make sure Jenkins is able to use the client.

3. Create a Conan repository in Artifactory as described in the Conan Repositories Artifactory documentation.

OK. Let's start coding your first Conan Pipeline script.

Let's start by creating a Conan Client instance:

rtConanClient (
    id: "myConanClient"
)

When creating the Conan client, you can also specify the Conan user home directory as shown below:

rtConanClient (
    id: "myConanClient",
	userHome: "conan/my-conan-user-home"
)

We can now configure our new conan client by adding an Artifactory repository to it. In our example, we're adding the 'conan-local' repository, located in the Artifactory server, referenced by the pre-configured server ID:

rtConanRemote (
    name: "myRemoteName",
    serverId: "Artifactory-1",
    repo: "conan-local",
    clientId: "myConanClient",
	// Optional - Adding this argument will make the conan client not raise an error if a remote with the provided name already exists.
	force: true,
	// Optional - Adding this argument will make the conan client skip the validation of SSL certificates.
	verifySSL: false
)

OK. We're ready to start running Conan commands. To run them, you'll need to be familiar with the Conan command syntax, exposed by the Conan Client. You can read about the command syntax in the Conan documentation.

Let's run the first command:

rtConanRun (
    clientId: "myConanClient",
    command: "install . --build missing"
)

The next thing we want to do is to use the conan remote we created. For example, let's upload our artifacts to the conan remote. Notice how we use the ID of the remote we created earlier, which is myRemoteName:

rtConanRun (
    clientId: "myConanClient",
    command: "upload * --all -r myRemoteName --confirm"
)

We can now publish the build-info to Artifactory, as described in the Publishing Build-Info to Artifactory section:

rtPublishBuildInfo (
    serverId: "Artifactory-1"
)


Docker Builds with Artifactory

General

The Jenkins Artifactory Plugin supports a Pipeline DSL that allows pulling and pushing docker images from and to Artifactory, while collecting and publishing build-info to Artifactory. To set up your Jenkins build agents to collect build-info for your Docker builds, please refer to the setup instructions.

Working with Docker Daemon Directly

The Jenkins Artifactory Plugin supports working with the docker daemon directly through its REST API. Please make sure to set up Jenkins to work with Docker and Artifactory as described in the previous section.

Next, let's create an Artifactory server instance as shown below, or configure it through Manage | Configure System.

rtServer (
    id: 'Artifactory-1',
    url: 'http://my-artifactory-domain/artifactory',
    credentialsId: 'my-credentials-id'
)

Next, here's how you pull a docker image from Artifactory.

Pulling Docker Images from Artifactory
rtDockerPull(
    serverId: 'Artifactory-1',
    image: ARTIFACTORY_DOCKER_REGISTRY + '/hello-world:latest',
    // Host:
    // On OSX: "tcp://127.0.0.1:1234"
    // On Linux can be omitted or null
    host: HOST_NAME,
    sourceRepo: 'docker-remote',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' 
)

Here's how you push an image to Artifactory:

Pushing Docker Images to Artifactory
rtDockerPush(
    serverId: 'Artifactory-1',
    image: ARTIFACTORY_DOCKER_REGISTRY + '/hello-world:latest',
    // Host:
    // On OSX: 'tcp://127.0.0.1:1234'
    // On Linux can be omitted or null
    host: HOST_NAME,
    targetRepo: 'docker-local',
    // Attach custom properties to the published artifacts:
    properties: 'project-name=docker1;status=stable',
    // If the build name and build number are not set here, the current job name and number will be used:
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',
	// Jenkins spawns a new java process during this step's execution.
	// You have the option of passing any java args to this new process.
	javaArgs: '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' 
)

And finally, you have the option of publishing the build-info to Artifactory as follows.

rtPublishBuildInfo (
    serverId: 'Artifactory-1',
    // If the build name and build number are not set here, the current job name and number will be used. Make sure to use the same value used in the rtDockerPull and/or rtDockerPush steps.
    buildName: 'my-build-name',
    buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key'
)



Using Kaniko

The rtCreateDockerBuild step allows collecting build-info for docker images that were published to Artifactory using Kaniko. See our kaniko project example on GitHub to learn how to do this.

Using Jib

The rtCreateDockerBuild step allows collecting build-info for docker images that were published to Artifactory using the JIB Maven Plugin. See our maven-jib-example on GitHub to learn how to do this. Since this example also runs maven using the Artifactory pipeline APIs, we also recommend referring to the Maven Builds with Artifactory section included in this documentation page.

Scanning Builds with JFrog Xray

The Jenkins Artifactory Plugin is integrated with JFrog Xray through JFrog Artifactory allowing you to have build artifacts scanned for vulnerabilities and other issues. If issues or vulnerabilities are found, you may choose to fail a build. This integration requires JFrog Artifactory v4.16 and above and JFrog Xray v1.6 and above. 

You may scan any build that has been published to Artifactory. It does not matter when the build was published, as long as it was published before triggering the scan by JFrog Xray.

The following instructions show you how to configure your Pipeline script to have a build scanned.

rtServer (
    id: 'Artifactory-1',
    url: 'http://my-artifactory-domain/artifactory',
    credentialsId: 'my-credentials-id'
)

xrayScan (
    serverId: 'Artifactory-1',
	// If the build name and build number are not set here, the current job name and number will be used:
	buildName: 'my-build-name',
	buildNumber: '17',
    // Optional - Only if this build is associated with a project in Artifactory, set the project key as follows.
    project: 'my-project-key',    
	// If the build is found vulnerable, the job will fail by default. If you do not wish it to fail:
	failBuild: false
)


Managing Release Bundles

General

The Jenkins Artifactory Plugin exposes a set of pipeline APIs for managing and distributing Release Bundles. These APIs require version 2.0 or higher of JFrog Distribution. They work with JFrog Distribution's REST endpoint, not with the Artifactory REST endpoint. It is therefore recommended to verify that JFrog Distribution is accessible from Jenkins through Jenkins | Manage | Configure System. The serverId value in all the examples in this section should be replaced with the JFrog Platform ID you configured.

To make it easier to get started using the JFrog Distribution pipeline APIs, you can use the jfrog-distribution-example available here.

Creating or Updating Unsigned Release Bundles

The dsCreateReleaseBundle and dsUpdateReleaseBundle steps create and update a release bundle on JFrog Distribution. The steps accept the configured JFrog Platform ID as well as the release bundle name and release bundle version to be created. The steps also accept a File Spec, which defines the files in Artifactory to be bundled into the release bundle. 

Create a release bundle
dsCreateReleaseBundle(
    serverId: "jfrog-instance-1",
    name: "example-release-bundle",
    version: "1",
    spec: """{
        "files": [{
            "pattern": "libs-release-local/ArtifactoryPipeline.zip"
        }]
    }""",
	// Optional. The syntax for the release notes. Can be one of 'markdown', 'asciidoc' or 'plain_text'. The default is 'plain_text'.
    releaseNotesSyntax: "markdown",
	// Optional. If set to true, automatically signs the release bundle version.
    signImmediately: true,
    // Optional. Path to a file describing the release notes for the release bundle version.
	releaseNotesPath: "path/to/release-notes",
    // Optional. The passphrase for the signing key.
	gpgPassphrase: "abc",
    // Optional. A repository name at the source Artifactory instance, to store release bundle artifacts in. If not provided, Artifactory will use the default one.
	storingRepo: "release-bundles-1",
    // Optional.
	description: "Some desc",
    // Optional. Path to a file with the File Spec content.
	specPath: "path/to/filespec.json",
    // Optional. Set to true to disable communication with JFrog Distribution.
	dryRun: true
)
Update a release bundle
dsUpdateReleaseBundle(
    serverId: "jfrog-instance-1",
    name: "example-release-bundle",
    version: "1",
    spec: """{
        "files": [{
            "pattern": "libs-release-local/ArtifactoryPipeline.zip"
        }]
    }""",
	// Optional. The syntax for the release notes. Can be one of 'markdown', 'asciidoc' or 'plain_text'. The default is 'plain_text'.
    releaseNotesSyntax: "",
	// Optional. If set to true, automatically signs the release bundle version.
    signImmediately: true,
    // Optional. Path to a file describing the release notes for the release bundle version.
	releaseNotesPath: "path/to/release-notes",
    // Optional. The passphrase for the signing key.
	gpgPassphrase: "abc",
    // Optional. A repository name at the source Artifactory instance, to store release bundle artifacts in. If not provided, Artifactory will use the default one.
	storingRepo: "release-bundles-1",
    // Optional.
	description: "Some desc",
    // Optional. Path to a file with the File Spec content.
	specPath: "path/to/filespec.json",
    // Optional. Set to true to disable communication with JFrog Distribution.
	dryRun: true
)


Signing Release Bundles

Release bundles must be signed before they can be distributed. Here's how you sign a release bundle.

Sign a release bundle
dsSignReleaseBundle(
    serverId: "jfrog-instance-1",
    name: "example-release-bundle",
    version: "1",
	// Optional GPG passphrase
	gpgPassphrase: "abc",
	// Optional repository name at the source Artifactory instance, to store release bundle artifacts in. If not provided, Artifactory will use the default one.
	storingRepo: "release-bundles-1"	
)


Distributing Release Bundles

Here's how you distribute a signed release bundle.

Distribute a release bundle
dsDistributeReleaseBundle(
    serverId: "jfrog-instance-1",
    name: "example-release-bundle",
    version: "1",
	// Optional distribution rules
    distRules: """{
            "distribution_rules": [
            {
                "site_name": "*",
                "city_name": "*",
                "country_codes": ["*"]
            }
            ]
        }""",
	// Optional country codes. Cannot be used together with 'distRules'
	countryCodes: ["001", "002"],
	// Optional site name. Cannot be used together with 'distRules'
	siteName: "my-site",
	// Optional city name. Cannot be used together with 'distRules'
	cityName: "New York",
	// Optional. If set to true, the response will be returned only after the distribution is completed.
    sync: true,
	// Optional. Set to true to disable communication with JFrog Distribution.
	dryRun: true
)

Deleting Release Bundles

Here's how you delete a release bundle.

dsDeleteReleaseBundle(
    serverId: "jfrog-instance-1",
    name: "example-release-bundle",
    version: "1",
	// Optional distribution rules
    distRules: """{
            "distribution_rules": [
            {
                "site_name": "*",
                "city_name": "*",
                "country_codes": ["*"]
            }
            ]
        }""",
	// Optional country codes. Cannot be used together with 'distRules'
	countryCodes: ["001", "002"],
	// Optional site name. Cannot be used together with 'distRules'
	siteName: "my-site",
	// Optional city name. Cannot be used together with 'distRules'
	cityName: "New York",
	// Optional. If set to true, the response will be returned only after the deletion is completed.
    sync: true,
	// Optional. If set to true, the release bundle will also be deleted on the source Artifactory instance, and not only on the edge node.
    deleteFromDist: true,
	// Optional. Set to true to disable communication with JFrog Distribution.
	dryRun: true
)


Build Triggers

The Artifactory Trigger allows a Jenkins job to be automatically triggered when files are added or modified in a specific Artifactory path. The trigger periodically polls Artifactory to check if the job should be triggered. You can read more about it here.

You have the option of defining the Artifactory Trigger from within your pipeline. Here's how you do it:

First, create an Artifactory server instance, as described in the Creating an Artifactory Server Instance section.

Next, define the trigger as shown here:

rtBuildTrigger(
    serverId: "ARTIFACTORY_SERVER",
    spec: "*/10 * * * *",
    paths: "generic-libs-local/builds/starship"
)

When a job is triggered following deployments to Artifactory, you can store the URL of the file in Artifactory which triggered the job in an environment variable. Here's how you do it:

environment {
    RT_TRIGGER_URL = "${currentBuild.getBuildCauses('org.jfrog.hudson.trigger.ArtifactoryCause')[0]?.url}"
}
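
For example, here's a minimal sketch of a later step that reads the variable. The echo message is illustrative:

```groovy
steps {
    // Prints the URL of the file in Artifactory that triggered this build.
    // The variable is empty if the build was started by something other than the trigger.
    echo "Triggered by: ${env.RT_TRIGGER_URL}"
}
```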
Copyright © 2021 JFrog Ltd.