How to set up an Artifactory HA cluster in AWS?

 

If you are planning to set up Artifactory in AWS, you could consider using our SaaS service, which is offered on AWS, GCP, and Microsoft Azure. If you choose to set up Artifactory yourself, below is an example of an Artifactory HA cluster setup in AWS:


Artifactory installation and setup:

The following page on our wiki has instructions for installing Artifactory, and to configure the Artifactory nodes as an HA cluster, please read the HA setup instructions on our wiki. We recommend using Amazon RDS for the database; Artifactory has been tested with the following database types: PostgreSQL, MySQL, MSSQL, and Oracle. Here are the instructions for setting up the database to be used by Artifactory. S3 will be set up as the filestore, and our wiki has examples of the S3 configuration. Please note that you can currently connect to S3 either by providing the identity and credentials in the binarystore.xml file or by using the IAM role method.
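As an illustration, assuming a PostgreSQL RDS instance, a minimal db.properties (typically placed under $ARTIFACTORY_HOME/etc/) could look like the sketch below; the endpoint, database name, and credentials are placeholders, and you should follow the wiki instructions for your specific database type and JDBC driver:

# Placeholder values - replace with your RDS endpoint and credentials
type=postgresql
driver=org.postgresql.Driver
url=jdbc:postgresql://mydb-instance.xxxxxxxx.us-east-1.rds.amazonaws.com:5432/artifactory
username=artifactory
password=CHANGEME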
 

Setting up a reverse proxy:

Install a reverse proxy server for each of the Artifactory nodes; this is required if you are planning to use Docker repositories in Artifactory. The ELB forwards requests to the reverse proxy server in front of each Artifactory node, which has rewrite rules to handle both Docker client requests and Artifactory web UI requests. The configuration for the reverse proxy server can be generated using Artifactory's reverse proxy configuration generator; you can generate either an Apache or an NGINX configuration, depending on what you have installed. In order to generate the configuration, please access the Artifactory node directly, bypassing the load balancer. Since the ELB will be handling the SSL, the reverse proxy can be configured as an HTTP endpoint. Below is an example NGINX configuration for an Artifactory node that will be accessed via the ELB:

In the example below, we have hard-coded some values for the headers (the proxy_set_header entries at the end of the location block); replace the ELB_DOMAIN_NAME placeholder with the DNS name that resolves to your ELB. Since the server_name in the NGINX configuration will not be the same as the ELB domain name, and NGINX is listening on port 80, we hard-coded these headers so that the response URL returned to the ELB from NGINX is the same as the request URL. Please note that in this example we have assumed that the ELB is handling the HTTPS and is forwarding requests to NGINX, which is listening on port 80.

 

###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################

 

## server configuration
server {
   
   listen 80;
   
   server_name myart.server.com;
   if ($http_x_forwarded_proto = '') {
       set $http_x_forwarded_proto  $scheme;
   }
   ## Application specific logs
   ## access_log /var/log/nginx/myart.server.com-access.log timing;
   ## error_log /var/log/nginx/myart.server.com-error.log;
   rewrite ^/$ /artifactory/webapp/ redirect;
   rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
   rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/docker-local/$1/$2;
   chunked_transfer_encoding on;
   client_max_body_size 0;
   location /artifactory/ {
       proxy_read_timeout  900;
       proxy_pass_header   Server;
       proxy_cookie_path   ~*^/.* /;
       if ( $request_uri ~ ^/artifactory/(.*)$ ) {
           proxy_pass      http://localhost:8081/artifactory/$1;
       }
       proxy_pass          http://localhost:8081/artifactory/;
       ## Hard-coded header values: replace ELB_DOMAIN_NAME with the DNS name that resolves to your ELB
       proxy_set_header    X-Artifactory-Override-Base-Url https://ELB_DOMAIN_NAME/artifactory;
       proxy_set_header    X-Forwarded-Port  443;
       proxy_set_header    X-Forwarded-Proto https;
       proxy_set_header    Host              ELB_DOMAIN_NAME;
       proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
   }
}
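
Once NGINX is configured on a node, you can verify the proxying directly on that node (bypassing the ELB) with a quick check such as the one below; /artifactory/api/system/ping is Artifactory's health check endpoint and should return OK:

# Run on the Artifactory node itself; expects the reverse proxy listening on port 80
curl -s http://localhost/artifactory/api/system/ping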

AWS ELB:

The load balancer will require sticky sessions to persist user access to the Artifactory UI. The following article explains the method that can be used for configuring sticky sessions in the AWS ELB. The requirement for sticky sessions will be removed in a newer version of Artifactory, which is scheduled to be released sometime in Q2 2017. In order to configure a health check for the Artifactory nodes in the ELB, please take a look at the following article. The following page on AWS has instructions for setting up a domain name for the ELB.
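
As a rough illustration only (assuming a Classic ELB; my-artifactory-elb and artifactory-sticky are placeholder names, and an ALB is configured differently), the health check and duration-based stickiness could be set up with the AWS CLI along these lines:

# Point the ELB health check at Artifactory's ping API behind the reverse proxy
aws elb configure-health-check \
    --load-balancer-name my-artifactory-elb \
    --health-check Target=HTTP:80/artifactory/api/system/ping,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Enable duration-based sticky sessions on the HTTPS listener
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-artifactory-elb \
    --policy-name artifactory-sticky \
    --cookie-expiration-period 3600

aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-artifactory-elb \
    --load-balancer-port 443 \
    --policy-names artifactory-sticky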

The custom base URL in Artifactory needs to be set to https://DNS_NAME/artifactory, where DNS_NAME is the custom domain name that resolves to the AWS ELB. Please note that the "artifactory" context in the custom base URL is not required if you have configured Artifactory to run as ROOT.

Example binarystore.xml using an S3 bucket for an Artifactory HA cluster:

<!-- The S3 binary provider configuration -->
<config version="2">
   <chain>
       <provider id="cache-fs-eventual-s3" type="cache-fs">
           <provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
               <sub-provider id="eventual-cluster-s3" type="eventual-cluster">
                   <provider id="retry-s3" type="retry">
                       <provider id="s3" type="s3"/>
                   </provider>
               </sub-provider>
               <dynamic-provider id="remote-s3" type="remote"/>
           </provider>
       </provider>
   </chain>

   <provider id="cache-fs-eventual-s3" type="cache-fs">
       <maxCacheSize>100000000000</maxCacheSize>    <!-- The maximum size of the cache in bytes:  100 gig -->
       <fileStoreDir>cache</fileStoreDir>
   </provider>

   <provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
       <readBehavior>crossNetworkStrategy</readBehavior>
       <writeBehavior>crossNetworkStrategy</writeBehavior>
       <redundancy>1</redundancy>
       <property name="zones" value="local,remote"/>
   </provider>

   <provider id="eventual-cluster-s3" type="eventual-cluster">
       <zone>local</zone>
   </provider>

   <provider id="retry-s3" type="retry">
       <maxTrys>10</maxTrys>                                              
   </provider>
 
<provider id="s3" type="s3">
       <credential>something</credential>
       <identity>something</identity>
       <endpoint>s3.amazonaws.com</endpoint>
       <bucketName>mybucket</bucketName>
       <httpsOnly>true</httpsOnly>
       <property name="s3service.disable-dns-buckets" value="true"></property>                      
       <property name="httpclient.max-connections" value="300"></property>                          
   </provider>

<provider id="remote-s3" type="remote">
   <zone>remote</zone>
</provider>

</config>
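
If you prefer the IAM role method mentioned earlier instead of embedding the identity and credential, the s3 provider section can be adjusted along the lines of the sketch below; my-artifactory-role is a placeholder for the IAM role attached to the instances, and the exact parameter names should be verified against the filestore documentation for your Artifactory version:

<!-- Illustrative s3 provider using an IAM role instead of access keys -->
<provider id="s3" type="s3">
    <roleName>my-artifactory-role</roleName>
    <refreshCredentials>true</refreshCredentials>
    <endpoint>s3.amazonaws.com</endpoint>
    <bucketName>mybucket</bucketName>
    <httpsOnly>true</httpsOnly>
    <property name="s3service.disable-dns-buckets" value="true"></property>
    <property name="httpclient.max-connections" value="300"></property>
</provider>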