


Artifactory comes with a built-in Derby database that can be reliably used to store data for production-level repositories up to hundreds of gigabytes in size.

However, Artifactory's storage layer supports pluggable storage implementations allowing you to change the default storage and use other popular databases.

Artifactory currently supports a number of popular databases, each covered in its own documentation page.

Modes of Operation

Artifactory supports two modes of operation:

  • Metadata in the database and binaries stored on the file system (This is the default and recommended configuration).
  • Metadata and binaries stored as BLOBs in the database

"Once-And-Only-Once" Content Storage

Artifactory stores binary files only once.

When a file about to be stored in Artifactory has the same checksum as a previously stored file (i.e., its content is identical to content already in storage), Artifactory does not store the new content again. Instead, it records a link to the existing content in the metadata of the newly deployed file.

This principle applies regardless of the repository or path to which artifacts are deployed: you can deploy the same file to many different coordinates, and as long as identical content already exists in storage, it is reused.
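The "once-and-only-once" behavior can be sketched as a checksum-keyed store. This is a simplified illustration; the flat layout and the `deploy` helper are hypothetical, not Artifactory's actual filestore format:

```shell
# Content is keyed by its SHA-1 checksum, so identical payloads are
# stored a single time, whatever coordinates they are deployed to.
store="$(mktemp -d)"
deploy() {
  sum="$(sha1sum "$1" | cut -d' ' -f1)"
  if [ -e "$store/$sum" ]; then
    echo "reused $sum for $1"       # identical content: only a metadata link
  else
    cp "$1" "$store/$sum"
    echo "stored $sum for $1"
  fi
}
printf 'same payload' > a-1.0.jar
printf 'same payload' > b-1.0.jar   # different coordinates, same content
deploy a-1.0.jar
deploy b-1.0.jar
```

Deploying the second file stores nothing new: the store still holds exactly one blob.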

Before You Start


Changing the storage type does not automatically transfer your data to the new storage. Please follow the steps below to backup your data so that you can restore it after the change.

Backup Your Current Installation

When changing the storage type for an existing installation you must import your Artifactory content and configuration from a backup.

Make sure to backup your current Artifactory system before updating to a new storage type.

Remove the Old Data Folder

If you have previously run Artifactory with a different storage type you must remove (or move) the existing $ARTIFACTORY_HOME/data folder.

If you do not, Artifactory continues to use some of the previous storage definitions and will fail to start up, producing a NotFoundException in several places during the startup sequence. Removing (or emptying) the $ARTIFACTORY_HOME/data folder avoids these errors.
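Moving the folder aside (rather than deleting it) keeps the old storage recoverable. A sketch, using a temporary directory as a stand-in for your real $ARTIFACTORY_HOME:

```shell
# Stand-in for the real installation root; on a live system, stop
# Artifactory first and point this at your actual $ARTIFACTORY_HOME.
ARTIFACTORY_HOME="$(mktemp -d)"
mkdir -p "$ARTIFACTORY_HOME/data/filestore"
# Move rather than delete, so the previous storage can still be restored:
mv "$ARTIFACTORY_HOME/data" "$ARTIFACTORY_HOME/data.old"
```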

Set Up the New Storage

To set up your new storage you need to create a database instance, create an Artifactory user for the database, install the appropriate JDBC driver, copy the relevant database configuration file, and edit it to match your setup.

This is fully detailed in the specific documentation page for each of the supported databases listed in the Overview section.
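The configuration-file step might look like the following sketch. The `mysql.properties` file name and the `etc/storage.properties` destination are assumptions to be checked against the documentation page for your database; the stubbed directory tree stands in for a real installation:

```shell
# Stand-in install tree; on a real system these directories already exist.
ARTIFACTORY_HOME="$(mktemp -d)"
mkdir -p "$ARTIFACTORY_HOME/misc/db" "$ARTIFACTORY_HOME/etc"
printf 'type=mysql\n' > "$ARTIFACTORY_HOME/misc/db/mysql.properties"  # stub template
# Copy the bundled template into place as the active storage configuration,
# then edit it with your database URL and credentials:
cp "$ARTIFACTORY_HOME/misc/db/mysql.properties" "$ARTIFACTORY_HOME/etc/storage.properties"
```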

The Bundled Storage Configurations

For each of the supported databases you can find the corresponding properties file inside $ARTIFACTORY_HOME/misc/db.

Each file contains the mandatory parameters and definitions that should be configured to work with your database as follows:


binary.provider.type

  • filesystem (default): Metadata is stored in the database, but binaries are stored on the file system. The default location is $ARTIFACTORY_HOME/data/filestore, however this can be modified.
  • fullDb: All the metadata and the binaries are stored as BLOBs in the database.
  • cachedFS: Works the same way as filesystem but adds a binary LRU (Least Recently Used) cache for upload/download requests. This improves performance of instances with high IOPS (I/O operations) or slow NFS access.
  • S3: The setting used for S3 Object Storage.

pool.max.active

The maximum number of pooled database connections (default: 100).

pool.max.idle

The maximum number of pooled idle database connections (default: 10).

binary.provider.cache.maxSize

If binary.provider.type is set to fullDb, this value specifies the maximum cache size (in bytes) to allocate on the system for caching BLOBs.

binary.provider.filesystem.dir

If binary.provider.type is set to filesystem, this value specifies the location of the binaries (default: $ARTIFACTORY_HOME/data/filestore).

binary.provider.cache.dir

The location of the cache. This should be set directly under your $ARTIFACTORY_HOME directory (not on the NFS).
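Putting the parameters together, a filesystem-mode configuration might look like the following fragment. It is illustrative only; check the property names and values against the bundled file for your database under $ARTIFACTORY_HOME/misc/db:

```
binary.provider.type=filesystem
pool.max.active=100
pool.max.idle=10
binary.provider.filesystem.dir=$ARTIFACTORY_HOME/data/filestore
```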

Backing up $ARTIFACTORY_HOME/data/filestore

If binary.provider.type is set to filesystem, then raw Artifactory data must be backed up.

To do this, back up the $ARTIFACTORY_HOME/data/filestore folder together with a database dump, since both are required to restore the system. The database must be dumped first.

This does not impact Artifactory's own backup system which is storage-agnostic.
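The ordering above can be sketched as a two-step script: dump the database first, then archive the filestore. The pg_dump line is a commented PostgreSQL example, and all paths are stand-ins for your real locations:

```shell
# Stand-in directories; substitute your real $ARTIFACTORY_HOME and backup target.
ARTIFACTORY_HOME="$(mktemp -d)"
backup_dir="$(mktemp -d)"
mkdir -p "$ARTIFACTORY_HOME/data/filestore"
printf 'binary content' > "$ARTIFACTORY_HOME/data/filestore/blob"  # stand-in blob
# 1. Dump the database first (PostgreSQL example; adjust for your database):
#    pg_dump -U artifactory artifactory > "$backup_dir/artifactory.sql"
# 2. Then archive the filestore, so it is never older than the dump:
tar -czf "$backup_dir/filestore.tgz" -C "$ARTIFACTORY_HOME/data" filestore
```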

Accessing a Remote Database

To avoid network latency issues when reading and writing artifact data, we highly recommend that you create the database either on the same machine on which Artifactory is running or on a fast SAN disk.

This is critical when binary.provider.type is set to fullDb (whereby files are served from database BLOBs) and the file system cache is small.



