[RTFACT-8041] Support HA setup without NFS Created: 28/Aug/15  Updated: 14/Nov/17  Resolved: 30/Jan/17

Status: Resolved
Project: Artifactory Binary Repository
Component/s: Configuration, High Availability
Affects Version/s: 3.4.2, 4.2.2, 4.7.0, 4.7.3
Fix Version/s: 5.0.0

Type: New Feature Priority: Normal
Reporter: Aaron Rhodes Assignee: Dan Feldman (Inactive)
Resolution: Fixed Votes: 18
Labels: None

Issue Links:
RTFACT-12743 Storage.properties is not properly co... Sub-task Resolved Dan Feldman  


Users migrating to HA find that they still need NFS just to store the configuration files. There should be a way to do this without a shared NFS mount, such as separate etc folders containing the same config files.

Comment by suchisubhra [ 28/Aug/15 ]

We saw this issue and would really like to know what JFrog is thinking...

Comment by Gard Rimestad [ 15/Mar/16 ]

We would like this very much as well. It does not feel right to set up NFS when we live in AWS and have S3 for storage.

Comment by Alex Lake [ 05/Apr/16 ]

Being able to use S3/RDS and not have NFS at all would be a huge win.

Comment by Johan Raffin [ 09/Jun/16 ]

True, avoiding the use of NFS, particularly in an AWS environment, would be helpful. Even storing the data in a memcached/Redis application would help (so we could leverage AWS ElastiCache offerings or the like).

Comment by Jean-Jacques [ 13/Jun/16 ]

Actually, the current HA configuration with sharded storage has a SPOF in the shared configuration file.
We are highly interested in this improvement as well, as we are thinking about updating our landscape with this feature while keeping each part of the landscape in HA mode.

Comment by Romain [ 29/Jun/16 ]

We would also really like this feature. Our AWS setup would become much simpler if Artifactory could use S3 for storage (given it already has credentials), or ElastiCache as Johan mentioned above. In the meantime we had to set up a highly available GlusterFS cluster just to mount a few config files, which is not trivial and actually requires quite a bit of maintenance.

Comment by Martin Migasiewicz [ 12/Jul/16 ]

Right now the ha-data directory also contains an essential directory used by S3:

The 'eventual' binary provider exists to overcome the potential latency of S3.
By default, the eventual store is located under the $ARTIFACTORY_HOME/data folder (or the $CLUSTER_HOME/data folder in an HA setup). Under it there are three folders:
_pre, _add, _delete.
The _pre folder is used by the persistence mechanism to make sure files are valid.
The _add and _delete folders handle the upload and deletion of files from S3.
For example, when a file is first uploaded to Artifactory, it is written to eventual/_pre and, once confirmed as valid, is moved to the _add folder.
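To make the staging flow concrete, here is a minimal Python sketch of the _pre/_add promotion described above. The folder names mirror the ones Artifactory creates, but the function names and the SHA-1 validation step are illustrative assumptions, not Artifactory's actual implementation:

```python
import hashlib
import os
import shutil

def is_valid(path, expected_sha1):
    """Check that the staged file's content matches its expected checksum."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest() == expected_sha1

def stage_upload(eventual_root, src_path, sha1):
    """Write an incoming file to _pre, validate it, then promote it to _add.

    A background worker would later push everything in _add to S3 and
    remove it locally once the remote write is confirmed.
    """
    pre_dir = os.path.join(eventual_root, "_pre")
    add_dir = os.path.join(eventual_root, "_add")
    os.makedirs(pre_dir, exist_ok=True)
    os.makedirs(add_dir, exist_ok=True)

    pre_path = os.path.join(pre_dir, sha1)
    shutil.copyfile(src_path, pre_path)        # 1. land in _pre

    if not is_valid(pre_path, sha1):           # 2. persistence check
        os.remove(pre_path)
        raise ValueError("checksum mismatch, upload rejected")

    add_path = os.path.join(add_dir, sha1)
    shutil.move(pre_path, add_path)            # 3. promote to _add
    return add_path
```

Deletions would follow the same pattern in reverse: a marker dropped into _delete, consumed asynchronously against S3.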

How are you going to solve this problem?

Comment by Martin Migasiewicz [ 15/May/17 ]

Could someone please give some insight into how the feature works now? Basically, I would like to hear an answer to my earlier question.

Generated at Fri Jun 05 22:19:40 UTC 2020 using Jira 8.5.3#805003-sha1:b4933e02eaff29a49114274fe59e1f99d9d963d7.