Make sure you have reviewed the overall installation process
Before you proceed with the instructions on this page, make sure you have reviewed the whole installation procedure as described in Installing Artifactory.
Artifactory Docker images can be pulled from Bintray and run as a Docker container.
To do this, you need to have the Docker client properly installed and configured on your machine. For details about installing and using Docker, please refer to the Docker documentation.
Running with Docker for Artifactory 4.x
Artifactory as a Docker container was completely redesigned in version 5.0. If you are running a previous version of Artifactory, please refer to Running with Docker in the Artifactory 4.x User Guide.
We recommend running Artifactory on Docker by orchestrating your setup with Docker Compose. This ensures that all the required services are specified in a single YAML file with pre-configured parameters.
Using Docker Compose
To set up an Artifactory environment made up of multiple containers (for example, a database, an NGINX load balancer, and Artifactory, each running in a different container), you can use docker-compose.
Artifactory OSS, Artifactory Pro, and Artifactory HA can all be run using Docker Compose. For detailed documentation and sample Compose files showing a variety of ways to set up Artifactory with Docker Compose, please refer to the artifactory-docker-examples repository on GitHub.
Since the Artifactory instance running in a Docker container is mutable, all data and configuration files will be lost once the container is removed. If you want your data to persist (for example, when upgrading to a new version), you should also follow the steps under Managing Data Persistence below.
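As an illustration of the approach, a Compose file along these lines pairs Artifactory with a database. All service names, image tags, ports, and credentials here are placeholder assumptions; use the maintained files in the examples repository for real deployments:

```yaml
# Hypothetical minimal docker-compose.yml pairing Artifactory Pro with
# PostgreSQL. Tags, credentials, and ports are illustrative placeholders.
version: '2'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_DB=artifactory
      - POSTGRES_USER=artifactory
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-pro:latest
    depends_on:
      - postgres
    ports:
      - "8081:8081"
    volumes:
      - artifactory_data:/var/opt/jfrog/artifactory
volumes:
  postgres_data:
  artifactory_data:
```

Keeping the named volumes at the bottom of the file is what allows the data to survive container removal, as described above.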
Add another connector to Tomcat, for example, to support SSL.
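For illustration, an additional HTTPS connector in Tomcat's server.xml might look like the following sketch. The port, keystore path, and password are placeholder assumptions:

```xml
<!-- Hypothetical additional HTTPS connector for Tomcat's server.xml.
     Keystore path and password are placeholders. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"/>
```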
HA-Specific Environment Variables
Passing Environment Variables to the entrypoint script
The entrypoint script of the Artifactory Pro Docker image accepts various environment variables. These are documented below and can be used to manipulate various HA-specific settings. Setting these variables is particularly useful when using an orchestration tool such as Kubernetes or Docker Compose to spin up new Artifactory nodes. For more details on configuring the ha-node.properties file, please refer to Setting Up Your Storage Configuration.
- Primary/member role: Determines whether the node is set as a primary node or as a member node in the cluster.
- Node ID: The value of the 'node.id' parameter in the ha-node.properties file.
- HA_HOST_IP: The IP of the container, determined by running 'hostname -i'. This variable is used to compose a full context.url only when the $HA_CONTEXT_URL variable is not set.
- HA_CONTEXT_URL: The value of the 'context.url' parameter in the generated ha-node.properties file. This is the node URL exposed to cluster members. If not set, the $HA_HOST_IP variable is used to derive the full context.url.
- Membership port: The Hazelcast membership port of the node.
- Primary node URL: Set this on a member node only if the nodes will not be part of the same Docker network (so that they cannot reach each other by container name), or if the name of the primary node container is not "artifactory-node1". The entrypoint script sends an HTTP request to this URL to wait for the primary node to start up.
- Data directory: The value for the 'artifactory.ha.data.dir' parameter in the ha-node.properties file.
- Backup directory: The value for the 'artifactory.ha.backup.dir' parameter in the ha-node.properties file.
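For example, the $HA_CONTEXT_URL variable can be passed to a member node like any other Docker environment variable. The container name, URL, and image tag below are illustrative assumptions:

```shell
# Sketch: start an HA member node, overriding the context URL exposed
# to other cluster members. Names, URL, and tag are placeholders.
docker run -d --name artifactory-node2 \
  -e HA_CONTEXT_URL=http://10.0.0.12:8081/artifactory \
  -p 8081:8081 \
  docker.bintray.io/jfrog/artifactory-pro:latest
```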
Managing Data Persistence
The "artifactory" user
Previously, the Artifactory Docker container started as user root, but was run by user artifactory. From version 6.2, user artifactory is used to both start and run the Docker container. Note that:
- The artifactory user's default ID is 1030.
- The artifactory user must have write privileges to any persistent storage mounted on the Artifactory container.
For your data and configuration to remain once the Artifactory Docker container is removed, you need to store them on an external volume mounted to the Docker container. There are two ways to do this:
Using Host Directories
Using a Docker Named Volume
Using Host Directories
The external volume is a directory in your host's file system (such as /var/opt/jfrog/artifactory). When you pass this to the docker run command, the Artifactory process will use it to read configuration and store its data.
To mount the above example, you would use the following command:
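A representative invocation is sketched below; the image name is an assumption based on JFrog's public Bintray registry, and the port mapping is illustrative:

```shell
# Mount a host directory as the Artifactory home inside the container.
docker run -d --name artifactory \
  -v /var/opt/jfrog/artifactory:/var/opt/jfrog/artifactory \
  -p 8081:8081 \
  docker.bintray.io/jfrog/artifactory-pro:latest
```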
This mounts the /var/opt/jfrog/artifactory directory on your host machine to the container's /var/opt/jfrog/artifactory directory, which Artifactory then uses for configuration and data.
Using a Docker Named Volume
In this case, you create a Docker named volume and pass it to the container. By default, the named volume is a local directory under /var/lib/docker/volumes/<name>, but it can be set to work with other locations. For more details, please refer to the Docker documentation on volumes.
The example below creates a Docker named volume called artifactory_data and mounts it to the Artifactory container under /var/opt/jfrog/artifactory:
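A sketch of these two steps (the image name and port mapping are assumptions, as above):

```shell
# Create the named volume, then mount it into the container.
docker volume create --name artifactory_data
docker run -d --name artifactory \
  -v artifactory_data:/var/opt/jfrog/artifactory \
  -p 8081:8081 \
  docker.bintray.io/jfrog/artifactory-pro:latest
```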
In this case, even if the container is stopped and removed, the volume persists and can be attached to a new running container using the above docker run command.
Extra Configuration Directory
You can mount extra configuration files, such as binarystore.xml, artifactory.lic or db.properties, that are needed for your Artifactory installation. To do this, you need to mount the file or directory on the host into the Artifactory Docker container's /artifactory_extra_conf folder. When the Artifactory Docker container starts, it will copy the files from /artifactory_extra_conf to ARTIFACTORY_HOME/etc (usually /var/opt/jfrog/artifactory/etc).
The files mounted into /artifactory_extra_conf will be copied over to ARTIFACTORY_HOME/etc every time the container starts, so you should avoid modifying the files in ARTIFACTORY_HOME/etc.
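For example, a host directory holding a license file and a custom binarystore.xml could be mounted as follows; the host paths and image name are illustrative assumptions:

```shell
# Files in /opt/extra_conf are copied into ARTIFACTORY_HOME/etc at startup.
docker run -d --name artifactory \
  -v /opt/extra_conf:/artifactory_extra_conf \
  -v /var/opt/jfrog/artifactory:/var/opt/jfrog/artifactory \
  -p 8081:8081 \
  docker.bintray.io/jfrog/artifactory-pro:latest
```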
The Artifactory Docker image can be run with an NGINX Docker image that can be used to manage SSL, reverse proxying, and other web server features. For configuration details, please refer to Configuring NGINX.
A custom Docker image that is already set up for Artifactory with NGINX is available at: `docker.bintray.io/jfrog/nginx-artifactory-pro`
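A hypothetical way to pair the two containers is over a shared Docker network, so that NGINX can reach Artifactory by container name. The network name, container names, and tags below are assumptions:

```shell
# Sketch: run Artifactory behind the NGINX image on a shared network.
docker network create artifactory_net
docker run -d --name artifactory --network artifactory_net \
  docker.bintray.io/jfrog/artifactory-pro:latest
docker run -d --name nginx --network artifactory_net \
  -p 80:80 -p 443:443 \
  docker.bintray.io/jfrog/nginx-artifactory-pro:latest
```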
By default, Artifactory runs with an embedded Derby database. However, Artifactory supports additional databases; to switch to one of the other supported databases, please refer to Changing the Database.
Building Artifactory OSS From Sources
The Artifactory OSS Docker image sources are available for download allowing you to build the image yourself. For details, please refer to Building Artifactory OSS.
Once the Artifactory container is up and running, you access Artifactory in the usual way by browsing to the server URL on the port you mapped (inside the container, Artifactory listens on port 8081).
Docker for Windows limitation
There is a known limitation when running Artifactory with Docker on Windows.
The limitation is described in the following JIRA issue, together with an optional workaround; however, the workaround is not recommended for production deployments.