Configuring a Sharding Binary Provider
A sharding binary provider is a binary provider as described in Configuring the Filestore. Basic sharding configuration is used to configure a sharding binary provider for an Artifactory instance.
Basic Sharding Configuration
The following parameters are available for a basic sharding configuration:
Example 1
The code snippet below is a sample configuration for the following setup:
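The snippet below is a sketch only: it assumes a cache-fs provider wrapping a sharding provider with three state-aware mounts and a redundancy of 2. The provider IDs, mount directories, and read/write behavior values shown are illustrative and should be adapted to your environment.
Code Block
<config version="4">
    <chain>
        <!-- A local cache in front of the sharded filestore -->
        <provider id="cache-fs" type="cache-fs">
            <!-- Sharding provider with three state-aware mounts -->
            <provider id="sharding" type="sharding">
                <sub-provider id="shard1" type="state-aware"/>
                <sub-provider id="shard2" type="state-aware"/>
                <sub-provider id="shard3" type="state-aware"/>
            </provider>
        </provider>
    </chain>
    <!-- Each artifact is written to two of the three mounts -->
    <provider id="sharding" type="sharding">
        <redundancy>2</redundancy>
        <readBehavior>roundRobin</readBehavior>
        <writeBehavior>percentageFreeSpace</writeBehavior>
    </provider>
    <provider id="shard1" type="state-aware">
        <fileStoreDir>shard1</fileStoreDir>
    </provider>
    <provider id="shard2" type="state-aware">
        <fileStoreDir>shard2</fileStoreDir>
    </provider>
    <provider id="shard3" type="state-aware">
        <fileStoreDir>shard3</fileStoreDir>
    </provider>
</config>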
Example 2
The following code snippet shows the "double-shards" template which can be used as is for your binary store configuration.
The double-shards template uses a cached provider with two mounts and a redundancy of 1, i.e. only one copy of each artifact is stored.
To modify the parameters of the template, you can change the values of the elements in the template definition. For example, to increase the redundancy of the configuration to 2, you only need to modify the redundancy value, as shown in the sketch below.
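The sketch below shows one way this could look: the template is referenced as is, and a provider element with the template's sharding provider id overrides the default redundancy. The element names follow the same pattern as the cross-zone example further down this page; treat the exact values as illustrative.
Code Block
<config version="4">
    <!-- double-shards: a cache-fs provider over a sharding provider with two state-aware mounts -->
    <chain template="double-shards"/>
    <!-- Override the template's default redundancy of 1 -->
    <provider id="sharding" type="sharding">
        <redundancy>2</redundancy>
    </provider>
</config>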
Cross-Zone Sharding Configuration
Sharding across multiple zones in an HA Artifactory cluster allows you to create zones or regions of sharded data to provide additional redundancy in case one of your zones becomes unavailable. You can determine the order in which the data is written between the zones and you can set the method for establishing the free space when writing to the mounts in the neighboring zones. The following parameters are available for a cross-zone sharding configuration in the binarystore.xml file:
Parameter | Description
---|---
node.id | Unique descriptive name of this server.
node.crossZoneOrder | Sets the zone order in which the data is written to the mounts. For example, with crossZoneOrder: "us-east-1,us-east-2", sharding writes to the US-EAST-1 zone and then to the US-EAST-2 zone.
Info: You can dynamically add nodes to an existing sharding cluster using the Artifactory System YAML file. To do so, your cluster must already be configured with sharding; by adding the 'crossZoneOrder: us-east-1,us-east-2' property, the new node can write to the existing cluster nodes without changing the binarystore.xml file.
Example:
This example displays a cross-zone sharding scenario in which the Artifactory cluster is configured with a redundancy of 2 and includes the following steps:
- The developer first deploys the package to the closest Artifactory node.
- The package is then automatically deployed in the "US-EAST-1" zone to the shard with the highest percentage of free space, the "S1" shard (with 51% free space).
- The package is deployed using the same method to the "S3" shard, which has the highest percentage of free space in the "US-EAST-2" zone.
The code snippet below is a sample configuration of our cross-zone setup:
- 1 Artifactory cluster across 2 zones: "us-east-1" and "us-east-2" in this order.
- 4 HA nodes, 2 nodes in each zone.
- 4 mounts (shards), 2 mounts in each zone.
- The write strategy for the provider is zonePercentageFreeSpace.
Example: Cross-zone sharding configuration in Artifactory System YAML
Code Block
node:
    id: "west-node-1"
    crossZoneOrder: "us-east-1,us-east-2"
Example: Cross-zone sharding configuration in the binarystore.xml
Code Block
<config version="4">
    <chain>
        <provider id="sharding" type="sharding">
            <sub-provider id="shard1" type="state-aware"/>
            <sub-provider id="shard2" type="state-aware"/>
            <sub-provider id="shard3" type="state-aware"/>
            <sub-provider id="shard4" type="state-aware"/>
        </provider>
    </chain>
    <provider id="sharding" type="sharding">
        <redundancy>2</redundancy>
        <readBehavior>zone</readBehavior>
        <writeBehavior>zonePercentageFreeSpace</writeBehavior>
    </provider>
    <provider id="shard1" type="state-aware">
        <fileStoreDir>mount1</fileStoreDir>
        <zone>us-east-1</zone>
    </provider>
    <provider id="shard2" type="state-aware">
        <fileStoreDir>mount2</fileStoreDir>
        <zone>us-east-1</zone>
    </provider>
    <provider id="shard3" type="state-aware">
        <fileStoreDir>mount3</fileStoreDir>
        <zone>us-east-2</zone>
    </provider>
    <provider id="shard4" type="state-aware">
        <fileStoreDir>mount4</fileStoreDir>
        <zone>us-east-2</zone>
    </provider>
</config>
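In this configuration, the chain section declares the provider hierarchy, while the standalone provider elements configure the providers referenced by their id: the redundancy and read/write behaviors for the sharding provider, and the mount directory and zone assignment for each state-aware shard.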
Using Balancing to Recover from Mount Failure
In case of a mount failure, the actual redundancy in your system will be reduced accordingly. In the meantime, binaries continue to be written to the remaining active mounts. Once the malfunctioning mount has been restored, the system needs to rebalance the binaries written to the remaining active mounts to fully restore (i.e. balance) the redundancy configured in the system. Depending on how long the failed mount was inactive, this may involve a significant volume of binaries that now need to be written to the restored mount, which may take a significant amount of time. Since restoring the full redundancy is a resource-intensive operation, the balancing operation is run in a series of distinct sessions until complete. These are automatically invoked after a Garbage Collection process has been run in the system.
Restoring Balance in Unbalanced Redundant Storage Units
In the case of voluntary actions that cause an imbalance in the system redundancy, such as when doing a filestore migration, you may manually invoke rebalancing of redundancy using the Optimize System Storage REST API endpoint. Calling this endpoint raises a flag for Artifactory to run rebalancing following the next Garbage Collection. Note that, to expedite rebalancing, you can invoke garbage collection manually from the Artifactory UI.
Optimizing System Storage
The Artifactory REST API provides an endpoint that allows you to raise a flag to indicate that Artifactory should invoke balancing between redundant storage units of a sharded filestore after the next garbage collection. For details, please refer to Optimize System Storage.
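For example, raising the flag is a single REST call along these lines; the host, port, and credentials are placeholders for your own installation, and the Optimize System Storage documentation remains the authoritative reference for the exact syntax:
Code Block
curl -u <admin-user> -X POST "http://<artifactory-host>:8081/artifactory/api/system/storage/optimize"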