[RTFACT-20637] UI Performance For version 7.0 Created: 17/Nov/19  Updated: 17/Nov/19

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Omri Ziv Assignee: Omri Ziv
Resolution: Unresolved Votes: 0
Labels: UGA

Issue Links:
Relationship

 Description   

UI Performance For version 7.0 - Cherry picks from 6.14






[RTFACT-20634] Docker allows overwriting same image with no delete/overwrite permissions Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Matthew Wang Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, if a user deploys an image and then pushes the same image again (overwriting it) without Delete/Overwrite permissions, the Docker client reports success. If the user tries to overwrite the image/tag with a different image, an error is returned.

Steps to reproduce:

  1. Set permissions for user to Read, Annotate, Deploy
  2. Tag (docker tag busybox mill.jfrog.info:12019/docker-local/busybox) and push docker image to Artifactory for the first time with a version tag (say v0.3) – Works fine
  3. Set permissions for user to Read, Annotate, Deploy, Delete/Overwrite
  4. Push the same image/version again – Works as well (as expected)
  5. Revoke the Delete/Overwrite permissions
  6. Push the same image/version again – Still able to push the image! This is not expected.
  7. Try to tag a different image (docker tag hello-world mill.jfrog.info:12019/docker-local/busybox) and push. See the docker client has an error this time

The Docker client should return an error when a user tries to overwrite an image without Delete/Overwrite permissions.
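For reference, a minimal reproduction sketch of the Docker client commands implied by the steps above (registry host and repository follow the examples in steps 2 and 7; the user's permissions are assumed to be toggled in Artifactory between pushes):

docker login mill.jfrog.info:12019
docker pull busybox
docker tag busybox mill.jfrog.info:12019/docker-local/busybox:v0.3
docker push mill.jfrog.info:12019/docker-local/busybox:v0.3    # first push with Deploy permission - succeeds
# ... revoke the user's Delete/Overwrite permission, then push the identical image again:
docker push mill.jfrog.info:12019/docker-local/busybox:v0.3    # still succeeds - this is the unexpected part
# ... overwrite the same tag with a different image:
docker tag hello-world mill.jfrog.info:12019/docker-local/busybox:v0.3
docker push mill.jfrog.info:12019/docker-local/busybox:v0.3    # fails with an error, as expected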






[RTFACT-20633] Access tokens - Notification System Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Hamza Zaoui Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory should send a notification when an access token is about to expire.






[RTFACT-20632] Access tokens - Hide tokens generated by Artifactory Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Hamza Zaoui Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Option to filter/exclude tokens in the access token management UI.
Ability to show only admin-based or user-based access tokens.






[RTFACT-20631] Fetch Properties for all artifacts in particular remote repos Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Timofey Shklyarov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Right now our deployments use Property Search downloads for non-unique SNAPSHOT artifacts.

We switched one Artifactory instance (6.14) to an Edge node, and we cannot benefit from the caching feature of remote repositories because it is not possible to resolve the artifacts this way.

For example, we use a URL like:

'http://URL:8081/artifactory/repo/com/inq/etl/4.0-SNAPSHOT/etl-4.0-SNAPSHOT.jar;build.number=6780;+build.name=RT - C-RT - develop;+bamboo.timestamp=1572635502'

We are requesting the ability to fetch properties (metadata) for all artifacts in a particular remote repository.

Client: Nuance
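For context, a hedged sketch of how properties can already be fetched for a single cached artifact via the item properties REST API (host, repository key and path reuse the example above and are illustrative); the request here is for Artifactory to populate and expose such metadata for all artifacts of a remote repository:

curl -u user:password \
  "http://URL:8081/artifactory/api/storage/repo-cache/com/inq/etl/4.0-SNAPSHOT/etl-4.0-SNAPSHOT.jar?properties"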






[RTFACT-20630] make builds unique Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Build Info
Affects Version/s: 6.13.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Christian Schyma Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently it is possible to create builds with the same name and number more than once. When requesting the build info via https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-BuildInfo, the latest build is returned. That is not obvious and also not documented.

Solution: prevent the creation of builds with the same name and number.
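For illustration, a hedged example of the Build Info REST call in question (build name and number are made up); when several builds were published with the same name and number, only the latest one is returned:

curl -u user:password "http://localhost:8081/artifactory/api/build/my-build/42"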






[RTFACT-20629] While using docker pull the execution time is 1.6 secs with DISTINCT in db query. Execution time reduces drastically without DISTINCT in the DB query Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.10.8
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Swarnendu Kayal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The customer is using Enterprise+ 6.10.8 with Postgres. On this and previous versions they have been seeing sporadic but persistent read timeouts for pulls from their Docker registry (v2 registry – 7 million artifacts). They have also been seeing CPU usage spikes in their database coinciding with the timeouts, as well as a latency of 1.8 seconds on responses from Artifactory coincident with these timeouts. The machines in question are adequately provisioned.

The offending query is as follows:

select distinct
    n.repo as itemRepo,
    n.node_path as itemPath,
    n.node_name as itemName,
    n.created as itemCreated,
    n.modified as itemModified,
    n.updated as itemUpdated,
    n.created_by as itemCreatedBy,
    n.modified_by as itemModifiedBy,
    n.node_type as itemType,
    n.bin_length as itemSize,
    n.node_id as itemId,
    n.depth as itemDepth,
    n.sha1_actual as itemActualSha1,
    n.sha1_original as itemOriginalSha1,
    n.md5_actual as itemActualMd5,
    n.md5_original as itemOriginalMd5,
    n.sha256 as itemSha2
from nodes n
where ( n.node_name like $1 escape '^' and n.repo = $2) and n.node_type = $3

Analysis on the query is as follows:

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Limit (cost=417490.41..417490.86 rows=10 width=408) (actual time=16088.515..16088.569 rows=10 loops=1)
  -> Unique (cost=417490.41..452121.47 rows=769579 width=408) (actual time=16088.514..16088.565 rows=10 loops=1)
    -> Sort (cost=417490.41..419414.36 rows=769579 width=408) (actual time=16088.513..16088.556 rows=10 loops=1)
      Sort Key: repo, node_path, node_name, created, modified, updated, created_by, modified_by, node_type, bin_length, node_id, depth, sha1_actual, sha1_original, md5_actual, md5_original, sha256
      Sort Method: external merge Disk: 318648kB
      -> Seq Scan on nodes n (cost=0.00..58161.79 rows=769579 width=408) (actual time=0.004..273.049 rows=769419 loops=1)
Planning time: 0.125 ms
Execution time: 16161.358 ms
(8 rows)

This is an execution time of over 16 seconds, which is slow. They identified the use of DISTINCT in the query as the problem. The customer states: “distinct on non-indexed fields needs to compare all the values of those fields and since that didn't fit in the allowed sort memory buffer for the db connection, postgres had no other option then to sort them on disk”

See the line in the previously quoted plan: “Sort Method: external merge Disk: 318648kB”

Without the DISTINCT, the execution time drops from over 16 seconds to 0.025 ms:

QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.76 rows=10 width=408) (actual time=0.005..0.008 rows=10 loops=1)
-> Seq Scan on nodes n (cost=0.00..58161.79 rows=769579 width=408) (actual time=0.005..0.008 rows=10 loops=1)
Planning time: 0.052 ms
Execution time: 0.025 ms

The customer wants to remove this latency and these sporadic timeouts for the docker pull.

The customer has the below questions:

1. What causes this query to be generated? It looks like it might be generated from a simple GET for a specific artifact.
2. Is the use of DISTINCT actually necessary for this call, as a starting point?
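As a sketch of how the comparison above can be reproduced directly against the database with psql (the column list is abbreviated, and the literal values for the name pattern, repository and node type are illustrative assumptions, not taken from the customer's instance):

psql artifactory -c "EXPLAIN ANALYZE SELECT DISTINCT n.repo, n.node_path, n.node_name, n.node_id FROM nodes n WHERE n.node_name LIKE 'manifest.json' ESCAPE '^' AND n.repo = 'docker-repo' AND n.node_type = 1 LIMIT 10;"
# same statement without DISTINCT, for comparison:
psql artifactory -c "EXPLAIN ANALYZE SELECT n.repo, n.node_path, n.node_name, n.node_id FROM nodes n WHERE n.node_name LIKE 'manifest.json' ESCAPE '^' AND n.repo = 'docker-repo' AND n.node_type = 1 LIMIT 10;"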






[RTFACT-20628] separate queues and worker pool for _add and _delete operations Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None

Issue Links:
Relationship
relates to RTFACT-19152 Segregate folders inside eventual/_queue Open

 Description   

Currently Artifactory lumps cluster-s3 adds and deletes into a single queue. If users delete a lot of files, then during a GC run the queue will be filled with delete operations, which backlogs the adds until the deletes are complete. Artifactory should:

1. prioritize adds over deletes - this may include temporarily suspending delete operations if the system is busy, e.g. running low on available connections, DB pool, etc.

2. allow for separate worker pools.






[RTFACT-20627] monitoring for eventual and remote related tasks Created: 15/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None


 Description   

Currently, there does not appear to be an easy way to see what the eventual worker states are (outside of taking a thread dump) - whether they're processing uploads or deletes, or idling. It would be nice to have something to monitor this (e.g. an MBean).

It would also be nice to be able to monitor the number of active S3 connections that Artifactory is using.






[RTFACT-20626] Allow Replicator to be used for Artifactory Replication Created: 14/Nov/19  Updated: 14/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Replication, Replicator
Affects Version/s: 6.14.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

For E+, allowing the Replicator to be used for Artifactory replication can help speed up replication.






[RTFACT-20622] Revert INST-74 Created: 14/Nov/19  Updated: 14/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Gal Ben Ami Assignee: Gal Ben Ami
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Revert INST-74






[RTFACT-20618] slow support bundle for large instances Created: 14/Nov/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Support Zone
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Large instances can take a very long time to gather support bundles. See this 5 node cluster:
$ grep "art-exec-4264" artifactory*
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,142",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.ArtifactorySupportBundleCollector:89",message="Initiating support content collection '/var/opt/jfrog/artifactory/data/tmp/work/...'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,153",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:59",message="Awaiting for tasks collecting content ..."
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,153",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectThreadDump'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,155",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectSecurityConfig'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,157",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectConfigDescriptor'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,157",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectConfigurationFiles'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,158",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectStorageSummary'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,159",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectSystemInfo'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:32:10,160",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectSystemLogs'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:42:56,333",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.BundleCollector:69",message="All collecting tasks were accomplished!"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:42:56,337",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.ArtifactorySupportBundleCollector:202",message="Flushing node manifest info NodeManifest(serviceType=jfrt, microserviceName=artifactory, microserviceVersion=6.12.2, serviceId=jfrt@01bnzqp8ex0eny1z3tb0zv0848, nodeId=artifact001, bundleInfo=NodeManifestBundleInfo(id=2019-11-13_High_DB_load-1573669916295, name=2019-11-13_High_DB_load, description=High Postgres DB load, created=2019-11-13T18:31:56.285Z[UTC], status=in progress)) was successfully accomplished"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:42:56,337",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.ArtifactorySupportBundleCollector:194",message="Compressing collected content ..."
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:44:12,248",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.ArtifactorySupportBundleCollector:166",message="Deployed bundle archive "..." under '...'"
artifactory.2019-11-13_1847.log:timestamp="2019-11-13 18:44:12,250",thread="art-exec-4264",level="INFO ",class="o.a.a.s.c.ArtifactorySupportBundleCollector:106",message="Support request content collection is done!, - [...]"

Then it moves on to the next node, which takes another ~15 minutes. The last recorded timestamp in one of the nodes:
artifactory.log:timestamp="2019-11-13 19:12:47,132",thread="art-exec-13253",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectSystemInfo'"
artifactory.log:timestamp="2019-11-13 19:12:47,133",thread="art-exec-13253",level="INFO ",class="o.a.a.s.c.BundleCollector:151",message="Scheduling task 'collectSystemLogs'"
[The customer decided to just tar the logs themselves at this point - the collection had been running for over 40 minutes and would have taken over an hour to complete]

Eventually the customer just decided to tar up the logs manually, which took 3 minutes, compared to our job taking 10 minutes to collect logs and another 2 minutes to compress - multiplied by the number of nodes (5 in this case).






[RTFACT-20615] Use GetRouterBaseUrl() for deriving the router port Created: 14/Nov/19  Updated: 14/Nov/19

Status: Will Not Implement
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Syamk Sakthidharan Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

See the discussion here: https://git.jfrog.info/projects/JFROG/repos/artifactory/pull-requests/502/overview

We need to use the sysenv library to derive the router port for us. Also remove `metadata.router.port` as it's not required anymore.



 Comments   
Comment by Syamk Sakthidharan [ 14/Nov/19 ]

Wrong project. Ignore this issue.





[RTFACT-20610] Deleting Debian repository after copying the content to different Debian repository results with N/A repository/package type in Storage Summary Created: 13/Nov/19  Updated: 17/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.14.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: David Pinhas Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Problem statement and Impact

This behavior occurs when creating a Debian repository, copying its deployed content to a different Debian repository, then triggering “Recalculate Index” on the first repository and deleting it with the “Delete Repository” REST API.

Steps to reproduce:

  1. Create two Debian repositories “temp” and “main”
  2. Deploy a Debian package to the “temp” repository
  3. “Copy Content” from “temp” repository to “main” repository
  4. Trigger “Recalculate Index” on “temp” repository (wait for it to finish)
  5. Delete the “temp” repository using Delete Repository REST API
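A hedged example of the Delete Repository REST API call used in step 5 (host and credentials are illustrative):

curl -u admin:password -X DELETE "http://localhost:8081/artifactory/api/repositories/temp"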

 

Expected results

The “temp” repository should be deleted from the “Artifact Repository Browser” and the “Storage Summary”

 

Actual results

The “temp” repository is deleted from the “Artifact Repository Browser”, but displayed in the “Storage Summary” page in Artifactory UI

 

As a workaround, you may run the following queries. Before proceeding with this workaround, please take a snapshot of the database:

 

DELETE FROM stats WHERE node_id IN (SELECT node_id FROM nodes WHERE repo= 'repo-name');

DELETE FROM watches WHERE node_id IN (SELECT node_id FROM nodes WHERE repo = 'repo-name');

DELETE FROM node_meta_infos WHERE node_id IN (SELECT node_id FROM nodes WHERE repo = 'repo-name');

DELETE FROM node_props WHERE node_id IN (SELECT node_id FROM nodes WHERE repo = 'repo-name');

DELETE FROM nodes WHERE repo = 'repo-name';



 Comments   
Comment by Kevin Cheng [ 17/Nov/19 ]

For us, we store all Debian artifacts in a couple of large repositories.

However, we do create temporary deb repositories by pulling the necessary artifacts based on our release needs, and then copy the content (including the repo index and structure) to a smaller release repo (not the large repos above).

We do this because we only update the content all at once, on a periodic schedule. Moreover, we choose to use the copy operation to shorten the time of the update.





[RTFACT-20609] Path pattern not recognizing RPM file name elements Created: 13/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Repo Layout
Affects Version/s: 5.6.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ovidiu-Florin BOGDAN Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I have created a repository layout with the following path pattern:

[org]/[CentOS_version<[0-9.]+>]/[Channel<app|configs|deps>]/Packages/[module]-[baseRev](-[patchLevel<[.~0-9a-zA-Z]+>])(-[fileItegRev])(.[archTag<\w+>]).[ext]

when I Test Artifact Path Resolution using the following test path:

MyOrg/7/app/Packages/app-appliance-1.0.2-latest-JIRA-12315.x86_64.rpm

Artifactory does not match the elements and gives the following match:

Organization: MyOrg 
Module: app-appliance-1.0.2-latest-JIRA 
Base Revision: 12315 
Folder Integration Revision: 
File Integration Revision: 
Classifier: 
Extension: rpm 
Type: 
CentOS_version: 7 
Channel: app 
patchLevel: 
archTag: x86_64

When using this regular expression:

(\w+)\/([0-9]+)\/(app|configs|deps)\/(Packages)\/([\w-.]+)-([0-9.]+)-([\w~]*)-([\w-]+).(\w+).(\w+)

on the online tool https://regex101.com I get a group match for each element.

I'm not sure what the difference in implementation is in Artifactory, or whether this is a bug resolved in a later version of Artifactory.






[RTFACT-20607] Downloading artifacts through search page not encoding the special characters like #, % as %23 Created: 13/Nov/19  Updated: 14/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.14.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Sankar Kumar Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Downloading artifacts whose path contains special characters such as "#" or "%" fails from the Search page because the characters are not escaped (e.g. "#" as %23), whereas it works fine from the Tree browser.

 

The download link on the Search page (which doesn't work - returns 404) forwards you to:

http://localhost:8081/artifactory/generic-local/hello#/R-3.5.3.pkg

 

The download link on the Tree browser (which does work) forwards you to:

http://localhost:8081/artifactory/generic-local/hello%23/R-3.5.3.pkg

 

The Tree browser correctly encodes the special character as "%23", whereas the Search page does not, causing a 404 error when trying to download the artifact.
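For reference, a hedged sketch of why only the encoded link works: anything after an unescaped "#" is treated by the client as a URL fragment and never sent to Artifactory, so only the percent-encoded form reaches the artifact (host and credentials are illustrative):

# link generated by the Tree browser - "#" is percent-encoded, the artifact downloads
curl -u admin:password -O "http://localhost:8081/artifactory/generic-local/hello%23/R-3.5.3.pkg"
# link generated by the Search page - the client drops "#/R-3.5.3.pkg" as a fragment,
# so the request is made for .../generic-local/hello and does not resolve to the artifact
curl -u admin:password -I "http://localhost:8081/artifactory/generic-local/hello#/R-3.5.3.pkg"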

 



 Comments   
Comment by Chris Zardis [ 13/Nov/19 ]

Thanks for raising this ticket Sankar

The Tree browser looks like it was fixed in https://www.jfrog.com/jira/browse/RTFACT-7916, but it seems that fix was not extended to every location you can download an artifact from.





[RTFACT-20602] Conan's remote repository implementation is broken Created: 13/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Conan
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ankush Chadha Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: CPE, ConanCenter

Attachments: PNG File Screen Shot 2019-11-12 at 10.31.26 PM.png    

 Description   

Use default remote repository (bintray)

And try to fetch a package from a remote repository.

(Alternatively, one can try to pull from a virtual that only includes this remote repository.)

Actual Behavior: Files and properties (set in the conaninfo.txt file) are missing from remote Conan repositories. The archive file is missing for one of the package revisions (screenshot attached).

 

Expected Behavior: Archive files and properties should exist.

If we switch to a local repository, then both scenarios work fine.

 

 






[RTFACT-20601] The Remote Stats Download should not be tagged to an IP as the Download Count Metrics Created: 12/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Manoj Tuguru Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

In an immutable infrastructure where Artifactory is deployed, the instances are often recycled, which makes the IPs dynamic. This affects the remote_stats table in the database, which has the following columns:

  node_id | origin | download_count | last_downloaded | last_downloaded_by | path

The origin value is populated with the IP address of the Artifactory instance, which becomes irrelevant where the IPs are dynamic.

Instead of having the IP as the origin, a more static value such as the server name or the license hash would be more useful, and would keep the historical data in the remote_stats table relevant.

 






[RTFACT-20599] CRAN repository - Github integration Created: 12/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Hamza Zaoui Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Add GitHub as an alternative source in the CRAN remote repository configuration, so that GitHub R source packages can be installed from Artifactory using "install.packages" instead of the "install_github" function, which bypasses Artifactory (https://www.rdocumentation.org/packages/devtools/versions/1.13.6/topics/install_github).
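For context, a hedged sketch of how packages are resolved from a CRAN remote repository through Artifactory today (host and repository key are illustrative assumptions); the request is for the same install.packages flow to also cover packages whose only source is GitHub:

Rscript -e 'install.packages("devtools", repos = "http://localhost:8081/artifactory/api/cran/cran-remote")'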






[RTFACT-20595] Race condition in Helm repositories Created: 12/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.10.8
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Aviv Blonder Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None

Issue Links:
Relationship
is related to RTFACT-18179 Debian metadata calculation can fail ... Open

 Description   

There is a possibility of a race condition in Helm repositories, with the same behavior as the linked JIRA.

When two deployments happen concurrently to the same local Helm repo, the following can occur:
1. The first deploy triggered indexing.
2. The second deploy started and added itself to the event queue, but didn't commit to the DB yet.
3. In the same millisecond, the indexing started going over the queue. The event was there but had not been committed yet, so it failed.

This is the error:

2019-11-11 12:24:35,017 [art-exec-2344321] [ERROR] (o.j.r.h.HelmMetadataExtractor:53) - Error while extracting metadata from chart: Failed to retrieve resource helm-local:test-chart-2-0.0.1.tgz: Could not get resource stream. Path 'test-chart-2-0.0.1.tgz' not found in helm-local





[RTFACT-20590] compress debian metadata types in parallel rather than serially Created: 12/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, each archive is written serially rather than in parallel, which can lead to longer indexing times for larger environments - see bz2 taking 1 minute and 40 seconds below. This can add up to many minutes the more nodes there are:

2019-10-28 23:39:51,867 [art-exec-427924] [DEBUG] (o.j.r.d.i.a.DebianAutomaticRepoMetadataIndexer:117) - Writing Packages to path dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages for coordinates xenial/main/amd64
2019-10-28 23:40:03,455 [art-exec-427924] [DEBUG] (o.a.a.d.i.DebianLocalInterceptor:94) - Adding sha256 to Packages file at debian-local/dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages
2019-10-28 23:40:03,470 [art-exec-427924] [DEBUG] (o.j.r.d.i.a.DebianAutomaticRepoMetadataIndexer:117) - Writing Packages.gz to path dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages.gz for coordinates xenial/main/amd64
2019-10-28 23:40:13,292 [art-exec-427924] [DEBUG] (o.a.a.d.i.DebianLocalInterceptor:94) - Adding sha256 to Packages file at debian-local/dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages.gz
2019-10-28 23:40:13,302 [art-exec-427924] [DEBUG] (o.j.r.d.i.a.DebianAutomaticRepoMetadataIndexer:117) - Writing Packages.bz2 to path dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages.bz2 for coordinates xenial/main/amd64
2019-10-28 23:41:53,148 [art-exec-427924] [DEBUG] (o.a.a.d.i.DebianLocalInterceptor:94) - Adding sha256 to Packages file at debian-local/dists/xenial/temp-1572305948526/xenial/main/binary-amd64/Packages.bz2

We should attempt to do these in parallel to reduce indexing times.






[RTFACT-20589] work queue implementation for debian indexing can result in slow indexing Created: 11/Nov/19  Updated: 11/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

It appears that Artifactory indexes Debian repositories using a per-distribution locking mechanism; there is a scenario where a Debian package deployed to a large HA environment takes a while to be indexed:

Deploy a deb of distribution xenial to Artifactory, and the load balancer chooses node 1. The distribution lock for xenial is held by node 2, as there are other, previous deployments. Node 2 takes around 2 minutes to index its packages. The lock then rotates around to the other nodes, and it takes a while before it is given back to node 1, resulting in a multi-minute delay between package deployment and its availability in the metadata (and to the apt-get client).






[RTFACT-20588] Artifactory does not clean up stale debian metadata when switching optional compression formats Created: 11/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory does not delete optional compression formats when a format is removed in the UI, resulting in stale metadata within the repository. It seems that it does not update the Release file either.

Say you select bz2 to be created and index the repository. Later, you decide to uncheck it. Artifactory will not remove the old bz2 files, and the Release file will continue to contain entries that reference the bz2.






[RTFACT-20587] upgrading Artifactory does not add <optionalIndexCompressionFormats> tags to debian repositories Created: 11/Nov/19  Updated: 13/Nov/19

Status: Will Not Implement
Project: Artifactory Binary Repository
Component/s: Debian
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Start up an Artifactory on 5.x (I did 5.5.2)
Create debian-local repository
see that it does not have any optional compression formats (expected).

Upgrade to a version that DOES have this flag, like 6.8.16
See that the descriptor does not contain the optionalIndexCompressionFormats tag:
<localRepository>
<key>debian-local</key>
<type>debian</type>
<includesPattern>*/</includesPattern>
<repoLayoutRef>simple-default</repoLayoutRef>
<dockerApiVersion>V2</dockerApiVersion>
<forceNugetAuthentication>false</forceNugetAuthentication>
<blackedOut>false</blackedOut>
<handleReleases>true</handleReleases>
<handleSnapshots>true</handleSnapshots>
<maxUniqueSnapshots>0</maxUniqueSnapshots>
<maxUniqueTags>0</maxUniqueTags>
<suppressPomConsistencyChecks>true</suppressPomConsistencyChecks>
<propertySets>
<propertySetRef>artifactory</propertySetRef>
</propertySets>
<archiveBrowsingEnabled>false</archiveBrowsingEnabled>
<snapshotVersionBehavior>unique</snapshotVersionBehavior>
<localRepoChecksumPolicyType>client-checksums</localRepoChecksumPolicyType>
<calculateYumMetadata>false</calculateYumMetadata>
<yumRootDepth>0</yumRootDepth>
<debianTrivialLayout>false</debianTrivialLayout>
<enableFileListsIndexing>false</enableFileListsIndexing> <------------ no optionalIndexCompressionFormats tag below this line
</localRepository>

Workaround:
Edit the repository in the admin view and add bz2. Hit save, then edit again and remove bz2. You will now see <optionalIndexCompressionFormats/> in the descriptor.

This will help prevent Artifactory from creating bz2 metadata when it is not needed, since bz2 is very slow to generate compared to gz; if you don't do this, Artifactory will compute bz2 by default.
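A hedged way to verify the result of the workaround against the saved configuration descriptor, assuming a default 6.x install layout:

grep optionalIndexCompressionFormats $ARTIFACTORY_HOME/etc/artifactory.config.latest.xml
# expected after the save/remove cycle: an empty <optionalIndexCompressionFormats/> element
# inside the <localRepository> block for debian-local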






[RTFACT-20584] Artifactory restarts should be more robust in HA mode Created: 11/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Server, High Availability, startup
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Giancarlo Martinez Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Artifactory sometimes fails to start (especially if another node is being restarted) due to access server failures, with an error like this:

Caused by: org.jfrog.access.client.AccessClientException: Couldn't grant a token, response code: 500, body: {
"errors" : [

{ "code" : "INTERNAL_SERVER_ERROR", "message" : "Could not propagate changes to another access server ServerImpl(id=<id>, created=1520978883103, modified=1573155376310, uniqueName=<id>, version=4.6.6, privateKeyFingerprint=<fingerprint>, privateKeyLastModified=1573155376309, lastHeartbeat=1573489493167, baseUrl=http://<ip>:8040/access, grpcInfo=<ip>:8045)" }

]
}

 

It doesn't seem to be very robust and will give up if it has any problems contacting another Access server, instead of retrying (perhaps even on a different cluster node). An easy way to reproduce this is to restart more than one node at a time, though even a connection failure can cause this.






[RTFACT-20583] Publish multiple modules in a Generic Build Created: 11/Nov/19  Updated: 11/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Tim Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

CentOS 7



 Description   

We are currently unable to publish multiple modules using a Generic build. We can publish multiple modules with maven or gradle builds and would like to request this functionality with generic builds.

We have a support ticket #114874 that explains this issue in more detail and tracks the efforts of JFrog support to resolve this issue.






[RTFACT-20582] using GAVC search results in incorrect results if the group ID has 4 components or more Created: 11/Nov/19  Updated: 11/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship
relates to RTFACT-9503 GAVC search: inconsistent results wh... Open

 Description   

Trying to search:

$ curl "localhost:8081/artifactory/api/search/gavc?a=two&repos=mvn-release-local"
{
 "results" : [ {
   "uri" : "http://localhost:8081/artifactory/api/storage/mvn-release-local/com/mycompany/app/two/my-app/2.0/my-app-2.0.pom"
 }, {
   "uri" : "http://localhost:8081/artifactory/api/storage/mvn-release-local/com/mycompany/app/two/my-app/2.0/my-app-2.0.jar"
 } ]

My module ID: com.mycompany.app.two:my-app:2.0

pom:

<modelVersion>4.0.0</modelVersion>
	<groupId>com.mycompany.app.two</groupId>
	<artifactId>my-app</artifactId>
	<packaging>jar</packaging>
	<version>2.0</version>
	<name>my-app</name>
	<url>http://maven.apache.org</url>

It looks like this is happening because Artifactory is treating the 4th folder as the artifact ID.
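For comparison, a hedged example of the same search with the group ID passed explicitly, which should scope the results to the intended module (host and repository as in the example above):

curl "localhost:8081/artifactory/api/search/gavc?g=com.mycompany.app.two&a=my-app&repos=mvn-release-local"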






[RTFACT-20562] Artifactory returns a 404 error for NPM instead of 403 if the user does not have permission to the repository Created: 10/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Adi Vizgan Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Steps to reproduce:

  1. Create NPM virtual, remote and local repositories.
  2. Create a user and remove the default "Readers" group for the user.
  3. Make sure there is no group or permission that gives that user read access to the NPM repositories.
  4. Make sure that the "Hide Existence of Unauthorized Resources" is NOT enabled.
  5. Perform the "npm config set registry.." and "npm login" commands as usual.
  6. Try to download a random package (in my example - byte@1.0.0) and see that you get a 404 error instead of 403.
  7. Look at the request.log and see that Artifactory returns 404 instead of 403.

Sample request and output:

npm install byte@1.0.0

npm ERR! code E404

request.log:

20191110125801|23|REQUEST|82.81.195.5|adiv|GET|/api/npm/npm/byte|HTTP/1.1|404|0






[RTFACT-20560] Cache FS to populate based on downloads only, not uploads Created: 10/Nov/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Filestore
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ariel Kabov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The current Cache FS implementation is an LRU cache, but it is populated based on both uploads and downloads.

This request is to add a flag to the Cache FS binary provider to configure it to populate based on downloads only.






[RTFACT-20558] Improve build time Created: 10/Nov/19  Updated: 14/Nov/19

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Gal Ben Ami Assignee: Gal Ben Ami
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Improve build time and consistency by upgrading maven and controlling the number of concurrent maven builds.






[RTFACT-20557] Nuget.org response times out causing a 404 instead of pulling artifact from the cache Created: 07/Nov/19  Updated: 11/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet, Remote Repository
Affects Version/s: 6.13.1, 6.14.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Scott Mosher Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship
is related to RTFACT-15405 Add the ability to resolve cached art... Resolved

 Description   

If an artifact is cached within the remote repository and a user tries to pull the artifact from the virtual or remote repository, and there is a timeout to the remote site, the request does not fall back to the cache; instead, an error is returned in the logs and the client states it cannot find the package.

 

REPRODUCE

1) Create the default virtual for nuget repo

2) Configure nuget client and pull a package

3) Set the remote repo socket timeout to 5 ms (something very small)

4) Try the same pull request.  You will see a connection timeout error:

2019-11-07 19:03:32,762 [http-nio-8081-exec-4] [ERROR] (o.a.a.n.NuGetServiceImpl:697) - Error occurred while performing a remote NuGet query on 'https://www.nuget.org/api/v2/FindPackagesById()?includePrerelease=false&$top=80&id='FluentValidation'&includeAllVersions=false': Connect to www.nuget.org:443 [www.nuget.org/13.66.39.44] failed: connect timed out

You will also see from the nuget client:

  http://localhost:8081/artifactory/api/nuget/nuget

    GET http://localhost:8081/artifactory/api/nuget/nuget/FindPackagesById()?id='FluentValidation'

  OK http://localhost:8081/artifactory/api/nuget/nuget/FindPackagesById()?id='FluentValidation' 24ms

Unable to find package 'FluentValidation'

5) You can also pull the artifact with the same settings if pointing directly to the cache repository, or if you set the remote repository to offline.

 

EXPECTED

This is a simple way to reproduce the behavior, but it is not the exact issue being seen. Where this comes into play is a read timeout from nuget.org - possibly a slow network or rate limiting from nuget.org. Artifactory should treat this as if the endpoint is unreachable and offline: it should fall back to the remote cache instead of failing the request.






[RTFACT-20552] Artifactory failed to start due to IndexOutOfBoundsException if the artifactory key is corrupted Created: 27/Oct/19  Updated: 12/Nov/19

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Alex Dvorkin Assignee: Nadav Yogev
Resolution: Unresolved Votes: 0
Labels: UGA
Environment:

6.14.0 (Zip install on Ubuntu 18.04 VM)


Attachments: PNG File Screen Shot 2019-10-27 at 16.51.19.png     PNG File Screen Shot 2019-10-27 at 16.52.06.png    

 Description   

This happens due to an excessive read of the artifactory key. We only need this key for migration, if the master key does not exist, but we read it on every startup regardless.

In my test, I hard-restarted the Ubuntu VM. As a result, the artifactory key file ended up empty (it was not empty before the boot - I had a snapshot to verify that).

As a result:

This is the keys folder:

Perhaps this critical crash could have been avoided if the file had not been checked.






[RTFACT-20549] Xray security valuation: FasterXML jackson-databind Multiple Gadgets Insecure Deserialization Unspecified Remote Weakness Created: 07/Nov/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yaniv Shani Assignee: Nadav Yogev
Resolution: Unresolved Votes: 0
Labels: UGA, XrayVulnerabilityToBeFixed


 Description   

 

https://entplus-xray.jfrog.io/web/#/component/details/build:~2F~2Fartifactory-pro-docker-master/8654






[RTFACT-20548] ability to customize "docker image clean up by max unique tag" feature Created: 07/Nov/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship
relates to RTFACT-19151 Artifactory Cleanup Policy Open

 Description   

Currently, the "clean up by max unique tags" feature works in only one way: whenever the number of Docker tags goes over a set number, the oldest published tag is removed from Artifactory.

It would be nice to be able to customize this feature, such as preserving (not deleting) 'latest' or certain tags containing given prefixes/suffixes,

or cleaning up by last downloaded time rather than publish time.

The Artifactory cleanup plugin does not work well with Docker images.






[RTFACT-20546] artifacts directory path should be same through github-remote repo Created: 06/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.14.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: David Shin Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Consider these two URLs:
 
From Artifactory
(1) https://david.vm:8081/artifactory/api/vcs/downloadRelease/github-remote/ktossell/libuvc/77b43d618e71698f52f7beb88c4b99b5f018259b?ext=tar.gz
 
From github
(2) https://github.com/ktossell/libuvc/archive/77b43d618e71698f52f7beb88c4b99b5f018259b.tar.gz

They should be exactly the same file (same checksum). 

However, downloading the files locally, I see that the directory names end up different:

$ tar tvfz _artifactory.tgz | head
drwxrwxr-x root/root         0 2017-10-01 20:31 libuvc-libuvc-77b43d6/
rw-rw-r- root/root      5322 2017-10-01 20:31 libuvc-libuvc-77b43d6/CMakeLists.txt
rw-rw-r- root/root      1522 2017-10-01 20:31 libuvc-libuvc-77b43d6/LICENSE.txt
rw-rw-r- root/root      1279 2017-10-01 20:31 libuvc-libuvc-77b43d6/README.md
drwxrwxr-x root/root         0 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/
rw-rw-r- root/root      8874 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/isight_imac.txt
rw-rw-r- root/root      8874 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/isight_macbook.txt
rw-rw-r- root/root     77622 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/logitech_hd_pro_920.txt
rw-rw-r- root/root     31029 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/ms_lifecam_show.txt
rw-rw-r- root/root     64478 2017-10-01 20:31 libuvc-libuvc-77b43d6/cameras/quickcampro9000.txt

$ tar tvfz _github.tgz | head
drwxrwxr-x root/root         0 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/
rw-rw-r- root/root      5322 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/CMakeLists.txt
rw-rw-r- root/root      1522 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/LICENSE.txt
rw-rw-r- root/root      1279 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/README.md
drwxrwxr-x root/root         0 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/
rw-rw-r- root/root      8874 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/isight_imac.txt
rw-rw-r- root/root      8874 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/isight_macbook.txt
rw-rw-r- root/root     77622 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/logitech_hd_pro_920.txt
rw-rw-r- root/root     31029 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/ms_lifecam_show.txt
rw-rw-r- root/root     64478 2017-10-01 20:31 libuvc-77b43d618e71698f52f7beb88c4b99b5f018259b/cameras/quickcampro9000.txt

For some reason, Artifactory is using the short git hash in the directory name instead of the full one.
 
Steps to reproduce
1. Package type : VCS
2. Repository name : github-remote
3. URL : https://github.com
4. Download a file through Artifactory.

curl -uadmin:password 'http://mill.jfrog.info:12312/artifactory/api/vcs/downloadRelease/github-remote/ktossell/libuvc/77b43d618e71698f52f7beb88c4b99b5f018259b?ext=tar.gz' -O
 
tar tvfz 77b43d618e71698f52f7beb88c4b99b5f018259b?ext=tar.gz | head

drwxrwxr-x  0 root   root        0 Oct  1  2017 libuvc-libuvc-77b43d6/

rw-rw-r-  0 root   root     5322 Oct  1  2017 libuvc-libuvc-77b43d6/CMakeLists.txt

rw-rw-r-  0 root   root     1522 Oct  1  2017 libuvc-libuvc-77b43d6/LICENSE.txt

rw-rw-r-  0 root   root     1279 Oct  1  2017 libuvc-libuvc-77b43d6/README.md

drwxrwxr-x  0 root   root        0 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/

rw-rw-r-  0 root   root     8874 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/isight_imac.txt

rw-rw-r-  0 root   root     8874 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/isight_macbook.txt

rw-rw-r-  0 root   root    77622 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/logitech_hd_pro_920.txt

rw-rw-r-  0 root   root    31029 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/ms_lifecam_show.txt

rw-rw-r-  0 root   root    64478 Oct  1  2017 libuvc-libuvc-77b43d6/cameras/quickcampro9000.txt

 

 

Is this intentional behavior because the directory path from GitHub would otherwise be too long?

 



 Comments   
Comment by Patrick Russell [ 12/Nov/19 ]

So I found that instead of using the "github.com/[...]/archive/[SHA_SUM]" URL, Artifactory is using an "api.github.com/repos" URL:

 

#Artifactory Download path

#Note: There is no *.tar.gz extension as compared with the GitHub example and is using "repos"

https://api.github.com/repos/ktossell/libuvc/tarball/77b43d618e71698f52f7beb88c4b99b5f018259b

 

"Status: 301 Moved Permanently"

"Location: https://api.github.com/repositories/1345732/tarball/77b43d618e71698f52f7beb88c4b99b5f018259b"

 

"Status: 302 Found"

"Location: https://codeload.github.com/libuvc/libuvc/legacy.tar.gz/77b43d618e71698f52f7beb88c4b99b5f018259b"





[RTFACT-20545] Need an option to specify * for checking expired docker tags Created: 06/Nov/19  Updated: 08/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.0
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Prasanna Narayana Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Trigger
was triggered by RTFACT-20140 Static docker manifests should not be... Resolved

 Description   

As a follow-up on RTFACT-20140, we need an option to specify * in the system property artifactory.docker.expired.tags, so that users can ensure Artifactory checks all the upstream tags.

 






[RTFACT-20542] Add username to artifactory o.a.e.UploadServiceImpl log entry Created: 06/Nov/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.8.10
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Manuel Flamerich Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently Artifactory logs each upload in two files: request.log and artifactory.log.

The file request.log contains the UI user but the file artifactory.log does not.

For audit purposes, I would like to add the username to artifactory.log.

The reason is to be able to quickly find out which user deployed which file into which repository. The current workflow is to review artifactory.log, search for "UploadServiceImpl", review the repositories and files deployed, then open a browser and check the UI for the user that deployed each artifact. This workflow is not efficient, cannot scale, and would not give me a report of user upload activity for engineering management.
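To illustrate the current workflow, a hedged sketch of correlating the two logs by hand (log names as above; the path filter is a placeholder); adding the username to the UploadServiceImpl entry would make the second step unnecessary:

# artifactory.log shows the deploy target but not the user
grep "UploadServiceImpl" artifactory.log
# request.log shows the user, so each deploy must be matched manually by method, timestamp and path
grep "|PUT|" request.log | grep "<path-from-previous-step>"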

 






[RTFACT-20537] Deadlock when two snapshots of the same artifact are deployed at the same time Created: 06/Nov/19  Updated: 06/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Josh Watson Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

One of our developers committed to his feature branch whilst merging into the delivery branch. This led to two builds being executed simultaneously by Jenkins, one a fraction of a second after the other, with the same version. Artifactory returned an internal server error and failed one of the builds:

2019-11-06 13:50:38,773 [http-nio-8081-exec-881] [INFO ] (o.a.e.UploadServiceImpl:399) - Deploy to 'libs-snapshot:com/sysmech/zen/blah/blah/1.0.0-SNAPSHOT/blah-1.0.0-20191106.135040-12.jar' Content-Length: 193949
2019-11-06 13:50:59,669 [http-nio-8081-exec-885] [ERROR] (o.a.r.d.DbStoringRepoMixin:290) - Couldn't save resource libs-snapshot:com/sysmech/zen/blah/blah/1.0.0-SNAPSHOT/blah/x.y.z-20191106.135039-12.jar, reason:
java.lang.reflect.UndeclaredThrowableException: null
    at com.sun.proxy.$Proxy75.next(Unknown Source)
    at org.artifactory.storage.db.fs.dao.NodesDao.getChildren(NodesDao.java:250)
    at org.artifactory.storage.db.fs.service.FileServiceImpl.loadChildren(FileServiceImpl.java:272)
    at org.artifactory.storage.fs.tree.FolderNode.getChildrenItemNode(FolderNode.java:74)
    at org.artifactory.storage.fs.tree.FolderNode.getChildren(FolderNode.java:56)
    at org.artifactory.maven.MavenMetadataCalculator.folderContainsPoms(MavenMetadataCalculator.java:422)
    at org.artifactory.maven.MavenMetadataCalculator.createSnapshotsMetadata(MavenMetadataCalculator.java:210)
    at org.artifactory.maven.MavenMetadataCalculator.calculateAndSet(MavenMetadataCalculator.java:179)
    at org.artifactory.maven.MavenMetadataCalculator.calculate(MavenMetadataCalculator.java:154)
    at org.artifactory.maven.MavenMetadataServiceImpl.calculateMavenMetadata(MavenMetadataServiceImpl.java:81)
    at sun.reflect.GeneratedMethodAccessor452.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy179.calculateMavenMetadata(Unknown Source)
    at org.artifactory.repo.interceptor.MavenMetadataCalculationInterceptor.afterCreate(MavenMetadataCalculationInterceptor.java:73)
    at org.artifactory.repo.interceptor.storage.StorageInterceptorsImpl.afterCreate(StorageInterceptorsImpl.java:69)
    at sun.reflect.GeneratedMethodAccessor288.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy184.afterCreate(Unknown Source)
    at org.artifactory.repo.db.DbStoringRepoMixin.invokeAfterCreateInterceptors(DbStoringRepoMixin.java:405)
    at org.artifactory.repo.db.DbStoringRepoMixin.saveResource(DbStoringRepoMixin.java:244)
    at org.artifactory.repo.db.DbLocalRepo.saveResource(DbLocalRepo.java:154)
    at org.artifactory.repo.service.RepositoryServiceImpl.saveResourceInTransaction(RepositoryServiceImpl.java:1876)
    at sun.reflect.GeneratedMethodAccessor446.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:295)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy178.saveResourceInTransaction(Unknown Source)
    at org.artifactory.repo.service.RepositoryServiceImpl.saveResource(RepositoryServiceImpl.java:1864)
    at sun.reflect.GeneratedMethodAccessor448.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy178.saveResource(Unknown Source)
    at org.artifactory.engine.UploadServiceImpl.uploadItemWithContent(UploadServiceImpl.java:569)
    at org.artifactory.engine.UploadServiceImpl.uploadItemWithProvidedContent(UploadServiceImpl.java:552)
    at org.artifactory.engine.UploadServiceImpl.uploadItem(UploadServiceImpl.java:429)
    at org.artifactory.engine.UploadServiceImpl.uploadFile(UploadServiceImpl.java:419)
    at org.artifactory.engine.UploadServiceImpl.uploadArtifact(UploadServiceImpl.java:400)
    at org.artifactory.engine.UploadServiceImpl.adjustResponseAndUpload(UploadServiceImpl.java:221)
    at org.artifactory.engine.UploadServiceImpl.validateRequestAndUpload(UploadServiceImpl.java:187)
    at org.artifactory.engine.UploadServiceImpl.upload(UploadServiceImpl.java:130)
    at sun.reflect.GeneratedMethodAccessor291.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.artifactory.request.aop.RequestAdvice.invoke(RequestAdvice.java:67)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy200.upload(Unknown Source)
    at org.artifactory.webapp.servlet.RepoFilter.doUpload(RepoFilter.java:254)
    at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:172)
    at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.lambda$doFilter$1(ArtifactoryAuthenticationFilterChain.java:134)
    at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:215)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.artifactory.webapp.servlet.authentication.ArtifactoryBasicAuthenticationFilter.doFilter(ArtifactoryBasicAuthenticationFilter.java:96)
    at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.doFilter(ArtifactoryAuthenticationFilterChain.java:170)
    at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:311)
    at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:208)
    at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
    at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
    at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
    at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException: null
    at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jfrog.storage.wrapper.ResultSetWrapper.invoke(ResultSetWrapper.java:77)
    ... 110 common frames omitted


 Comments   
Comment by Josh Watson [ 06/Nov/19 ]

I have the full stack trace if you want it, but can't add it to the description as it's > 32KB





[RTFACT-20532] Remove the "Artifactory is happily serving # artifacts" message on front page Created: 06/Nov/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: None
Fix Version/s: None

Type: Cosmetic Priority: Normal
Reporter: Stefan Gangefors Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I have tried to find an option to disable the HUGE irrelevant message always present on the front page, namely "Artifactory is happily serving # artifacts".

I've checked artifactory.system.properties and I've searched the docs and the web but I can't seem to find an option to do that.

How can I disable this message? It adds no value to most users.

To display this text, a count is made on the nodes table, which can be extremely costly depending on the database used and how big the nodes table is. Having the option to disable this text would also remove an unnecessary and potentially expensive query.

SELECT COUNT(*) FROM nodes WHERE node_type = 1;





[RTFACT-20528] Repository level download count Created: 06/Nov/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Ganesh Kumar Pandithurai Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: Artifactory


 Description   

Artifactory already provides the number of downloads for each artifact in a repository. We have a few repositories with many dynamic files/directories, so it is impractical to walk every artifact to arrive at a total download count at the repository level. A repository-level count would be a key input for our business decisions. We would therefore like the Artifactory GUI to show a "Download count" at the repository level.
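
As an illustration of a possible interim workaround (not an existing repository-level feature), the per-artifact download counts can be summed over AQL. This is a minimal sketch assuming the standard /api/search/aql endpoint and a user allowed to run AQL; the base URL, repository key and credentials are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: approximate a repository-level download count by summing the per-artifact
// "stat.downloads" values returned by an AQL search. Base URL, repo key and credentials
// are placeholders.
public class RepoDownloadCount {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8081/artifactory";
        String repo = "libs-release-local";
        String aql = "items.find({\"repo\":\"" + repo + "\"}).include(\"name\",\"stat.downloads\")";

        HttpURLConnection con = (HttpURLConnection) new URL(base + "/api/search/aql").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "text/plain");
        con.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("admin:password".getBytes(StandardCharsets.UTF_8)));
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(aql.getBytes(StandardCharsets.UTF_8));
        }

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // Crude scrape of the "downloads" fields; a real client would parse the JSON properly.
        Matcher m = Pattern.compile("\"downloads\"\\s*:\\s*(\\d+)").matcher(body);
        long total = 0;
        while (m.find()) {
            total += Long.parseLong(m.group(1));
        }
        System.out.println("Approximate total downloads for " + repo + ": " + total);
    }
}

On very large repositories this query is itself expensive, which is exactly why a built-in repository-level counter is being requested.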






[RTFACT-20527] Enhance query processing in Artifactory with multi-transactional execution Created: 06/Nov/19  Updated: 17/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.8.13
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Nimer Bsoul Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship

 Description   

Currently query "SELECT n.* FROM nodes n JOIN node_props p ON n.node_id = p.node_id WHERE repo = ? AND p.prop_key = ?" which is used for searching maven plugin pom files in local repos with property "artifactory.maven.mavenPlugin" = true
org.artifactory.maven.MavenPluginsMetadataCalculator#calculate. 

The issue here is not the time it takes for the above query to run, but rather the time Artifactory takes to process all the results, while still keeping the DB transaction open.

The change should be to make this a multi-transactional process. So the enhancement is to make this multi-transactional.
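
A minimal sketch of the requested pattern, assuming Spring's TransactionTemplate (which Artifactory already uses); the NodeDao and MetadataCalculator interfaces and their method names are hypothetical stand-ins, not the real internal API:

import java.util.List;

import org.springframework.transaction.support.TransactionTemplate;

// Sketch of splitting the long-running calculation into short transactions:
// resolve the matching node ids once, then process them in fixed-size batches,
// each batch inside its own transaction. NodeDao and MetadataCalculator are
// hypothetical stand-ins for the real internals.
public class MavenPluginMetadataBatcher {

    private static final int BATCH_SIZE = 500;

    private final TransactionTemplate txTemplate;
    private final NodeDao nodeDao;
    private final MetadataCalculator calculator;

    public MavenPluginMetadataBatcher(TransactionTemplate txTemplate, NodeDao nodeDao,
            MetadataCalculator calculator) {
        this.txTemplate = txTemplate;
        this.nodeDao = nodeDao;
        this.calculator = calculator;
    }

    public void calculate(String repoKey) {
        // Transaction 1: fetch only the ids, so the result set is consumed immediately.
        List<Long> ids = txTemplate.execute(status ->
                nodeDao.findMavenPluginPomIds(repoKey, "artifactory.maven.mavenPlugin"));

        // One short transaction per batch instead of a single transaction held open
        // for the whole calculation.
        for (int from = 0; from < ids.size(); from += BATCH_SIZE) {
            List<Long> batch = ids.subList(from, Math.min(from + BATCH_SIZE, ids.size()));
            txTemplate.execute(status -> {
                calculator.process(batch);
                return null;
            });
        }
    }

    interface NodeDao {
        List<Long> findMavenPluginPomIds(String repoKey, String propKey);
    }

    interface MetadataCalculator {
        void process(List<Long> nodeIds);
    }
}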






[RTFACT-20525] Add comments in the DB queries with business functionality Created: 06/Nov/19  Updated: 06/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Database, Logging
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

 

Artifactory uses the same queries in different places, so it is difficult to identify which business functionality a given query belongs to. Please add comments to the JDBC/SQL queries indicating the business functionality, so that identical queries are translated into different IDs in, for example, Oracle AWR reports.
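
One possible shape for this (a sketch only, not the current implementation): prefix each statement with a comment naming the business flow that issues it, so that identical SQL text from different callers hashes to different statement IDs in tools such as Oracle AWR. The helper below is hypothetical.

// Hypothetical helper that tags a SQL statement with the business flow issuing it,
// e.g. "/* flow=MavenPluginsMetadataCalculator.calculate */ SELECT n.* FROM nodes n ...".
public final class SqlTag {

    private SqlTag() {
    }

    public static String tag(String businessFlow, String sql) {
        return "/* flow=" + businessFlow + " */ " + sql;
    }
}

Usage would then look like SqlTag.tag("MavenPluginsMetadataCalculator.calculate", baseQuery) at each call site.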






[RTFACT-20523] Artifactory is not expiring non SemVer info metadata Created: 05/Nov/19  Updated: 07/Nov/19

Status: In Progress
Project: Artifactory Binary Repository
Component/s: Go
Affects Version/s: 6.14.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Elio Marcolino Assignee: Barak Hacham
Resolution: Unresolved Votes: 0
Labels: CPE, GoCenter, gomodules


 Description   

GoCenter supports serving dynamic non-SemVer versions like branch names and commit hashes. However, Artifactory does not seem to handle the metadata (.info) for those versions as mutable information that needs to be checked for changes upstream.

How to reproduce

  • Create remote Go repo pointing to GoCenter
  • Set the remote Metadata Retrieval Cache Period to 0 to disable caching the metadata locally
  • Create a virtual Go repo containing the remote repo
  • Create a Go project on GitHub
  • Point your GOPROXY environment variable to the virtual repo
  • Fetch the module master branch by running
    go get github.com/<org>/<repo>@<master>
    
  • Push a change to the module master branch on Github
  • Wait 10 min for the GoCenter cache to expire
  • Fetch the module master branch again using the same go get command

Expected result

The go.mod file should contain a pseudoversion pointing to the latest commit hash pushed during the steps above

Actual results

The go.mod file contains a pseudoversion pointing to the previous commit hash on the master branch, since Artifactory keeps serving the cached master.info content






[RTFACT-20518] Docker repo type: Set registry port through REST api Created: 05/Nov/19  Updated: 05/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker, REST API, Web REST API
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Gokul Evuri Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

A REST API call is needed to change the ‘Registry Port’ of a specified Docker repository.

 

Right now we manually go to the ‘Advanced’ tab of the Docker repository in the UI and change the port.

 






[RTFACT-20517] Verify enough disk space is available for backup option evaluates space available in wrong location Created: 05/Nov/19  Updated: 05/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Home, Backup
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ceri Hopkins Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

A weekly backup defined for the default location $ARTIFACTORY_HOME/backup/<backup-tag>  is not able to determine free space in that location correctly.

Scenario:

200GB volume for $ARTIFACTORY_HOME  >  50% used space

Default weekly backup is in $ARTIFACTORY_HOME/backup/<backup-tag>

$ARTIFACTORY_HOME/backup is a bind mount into the container with 22TB free space

 

Result:

Backup fails as BackupSizeCalculator calculates free space in $ARTIFACTORY_HOME and not $ARTIFACTORY_HOME/backup.

Docker commands exec'd in the container demonstrate that the corresponding shell commands give a sensible result, so it is not clear how BackupSizeCalculator arrives at its figure.

Supporting info:

2019-11-04 13:39:41,722 [http-nio-8081-exec-4] [INFO ] (o.a.b.BackupSizeCalculator:106) - Free space available for backup: 102842097664

[root@owlservlx25 artifactory]# docker-compose exec artifactory df /var/opt/jfrog/artifactory
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/dockervg-artifactory_base
209612800 109181004 100431796 52% /var/opt/jfrog/artifactory

(100431796*1024 is the figure reported by BackupSizeCalculator above)

However space free in /var/opt/jfrog/artifactory/backup is nearly 22TB.

[root@owlservlx25 artifactory]# docker-compose exec artifactory df -h /var/opt/jfrog/artifactory/backup
Filesystem Size Used Available Use% Mounted on
owlcask1.owlstone.local:/Linux_Backup/artifactory_backup
63.7T 41.9T 21.8T 66% /var/opt/jfrog/artifactory/backup
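
For reference, a free-space check done on the backup directory itself (rather than on $ARTIFACTORY_HOME) reports the expected figure even from plain Java; a minimal sketch with the paths from this scenario as placeholders:

import java.io.File;

// Sketch: free-space checks on the two paths from the scenario above. On a bind-mounted
// backup directory the two figures can differ by orders of magnitude.
public class BackupSpaceCheck {
    public static void main(String[] args) {
        File artifactoryHome = new File("/var/opt/jfrog/artifactory");      // ~100 GB free here
        File backupDir = new File("/var/opt/jfrog/artifactory/backup");     // ~22 TB free here

        System.out.println("Usable space under ARTIFACTORY_HOME: " + artifactoryHome.getUsableSpace());
        System.out.println("Usable space under the backup dir:   " + backupDir.getUsableSpace());
    }
}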

 

 

 






[RTFACT-20511] Artifactory failed to find Nuget package Created: 05/Nov/19  Updated: 05/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ruslan Ponimarenko Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

After upgrading from 6.11.3 to 6.12.2 and then to 6.13.1 we are facing the following issue:

From time to time (roughly 3 times out of 10) a job fails when trying to find a NuGet package in a virtual or remote NuGet repository. The issue is accompanied by these errors:

16:17:00 Errors in packages.config projects
16:17:00 Unable to find version '1.0.29' of package 'xxxxx'.
16:17:00 https://api.nuget.org/v3/index.json: Package 'xxx.1.0.29' is not found on source 'https://api.nuget.org/v3/index.json'.
16:17:00 https://xxxx: Failed to fetch results from V2 feed at 'xxxxxxxx(Id='core.sbtech',Version='1.0.29')' with following message : An error occurred while sending the request.
16:17:00 An error occurred while sending the request.
16:17:00 Unable to connect to the remote server
16:17:00 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xxx:443
16:17:00 https://xxxx: Failed to fetch results from V2 feed at 'xxxxx(Id='xxxx',Version='1.0.29')' with following message : An error occurred while sending the request.
16:17:00 An error occurred while sending the request.
16:17:00 Unable to connect to the remote server
16:17:00 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xxx:443
"The HTTP request to 'GET xxxx/api/nuget/nuget.org/FindPackagesById()?id='Serilog'&semVerLevel=2.0.0' has timed out after 100000ms."

Sensitive info has been replaced with xxxx.

When we tried to "wget" the last URL, we found it took too long to respond (a few minutes), but the next request to the same URL responded very quickly.

During the periods when the errors occurred, there was no lack of system resources at all.

What we have tried:

 



 Comments   
Comment by Ruslan Ponimarenko [ 05/Nov/19 ]

On the Tomcat side, we have found these errors:

=> catalina.2019-11-05.log <==
05-Nov-2019 07:45:35.189 SEVERE [http-nio-8081-exec-656] org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:91)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:163)
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1135)
at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:662)
at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:395)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:385)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:280)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:191)
at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:427)
at org.artifactory.webapp.servlet.AccessFilter.useAnonymousIfPossible(AccessFilter.java:392)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:210)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:75)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:364)
at org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:833)
at org.apache.catalina.connector.OutputBuffer.append(OutputBuffer.java:738)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:399)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:377)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:96)
at org.springframework.session.web.http.OnCommittedResponseWrapper$SaveContextServletOutputStream.write(OnCommittedResponseWrapper.java:563)
at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.write(ResponseWriter.java:325)
at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:224)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:300)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2315)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)




[RTFACT-20509] Checksum issue with npm modules, likely due to incorrect handling of scoped modules Created: 04/Nov/19  Updated: 14/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NPM
Affects Version/s: 6.9.5
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Alasdair McLeay Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When doing an npm install from our Artifactory server we get checksum errors, e.g.

npm ERR! code EINTEGRITY

npm ERR! sha-XXXX integrity checksum failed when using sha1: wanted sha1-XXXX but got sha512-YYYY

I am pretty sure this is due to the following:

  • user uploads a package such as 'vfile-message@1.0.1' to artifactory by uploading a file called vfile-message.tar.gz via the Artifactory Web UI
  • at a later date, a user uploads a different module, with the same name and the same version number but in a different scope, e.g.  '@types/vfile-message@1.0.1'. The file that is uploaded is also named vfile-message.tar.gz
  • when doing an npm install for vfile-message we download the wrong package and get the shasum for @types/vfile-message from Artifactory

I would happily provide more accurate steps to reproduce if I could get a copy of Artifactory with npm support to test against, ideally as a Docker container. Unfortunately I don't think this can be done with the OSS Docker image.



 Comments   
Comment by Alasdair McLeay [ 04/Nov/19 ]

This may be a duplicate of RTFACT-7668

Comment by Alasdair McLeay [ 04/Nov/19 ]

(and RTFACT-7440)

Comment by Alasdair McLeay [ 14/Nov/19 ]

Related RTFACT-10424





[RTFACT-20505] Build browsing in UI is very slow for build names with a lot of build runs. Created: 03/Nov/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Marek Cwynar Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We have cases where a single build name has over 2,000 build runs. If you choose such a build name to view the individual build runs, the page takes over 30 seconds to refresh.
This makes viewing build run data very difficult. Please introduce paging on the build overview screen, just as was done for artifact search and for the presentation of artifacts related to a given build.






[RTFACT-20503] Maybe can improve search blobs query for instances without sha2 migration Created: 03/Nov/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Inbar Tal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Here

 org.artifactory.addon.docker.repomd.DockerPackageWorkContext#createLimitedBlobsQueryBySha256 

if we don't have the sha2 migration, we create a query that searches for the sha2 in the props table. This query uses a redundant join of the nodes and props tables IF the sha2 is always the name of the file (because that would mean we can use only the nodes table and search by node.name).

This still needs to be investigated.
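
If the investigation confirms the assumption above (that the blob file name always equals its sha256), the props join could be dropped; a sketch of the alternative statement, under that assumption only:

// Sketch only, valid solely under the assumption stated above (the blob file name always
// equals its sha256). In that case the props-table join could be dropped entirely.
public class LimitedBlobsQuerySketch {

    static final String BY_NODE_NAME =
            "SELECT n.node_id, n.repo, n.node_path, n.node_name, n.bin_length "
            + "FROM nodes n "
            + "WHERE n.node_name = ? AND n.node_type = 1";
}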






[RTFACT-20502] Blobs cache shouldn't check read permission Created: 03/Nov/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Inbar Tal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The blobs cache eviction period is configured by the system property "artifactory.docker.blobs.ttl.minutes" (default: 5 mins). We assume that within this period the user still has read access to the blob, so we basically shouldn't check it again. The problem is that if the user's permissions changed within this period, the copy operation that comes later will not succeed. We would like to fix this by not checking read access after finding the blob in the cache and executing the copy as the system user.






[RTFACT-20501] IsBlobExist endpoint returns the whole blob info instead of only the content length Created: 03/Nov/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Inbar Tal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When the docker client sends the "is blob exist" request (during docker push, for example), we eventually go to the database and execute a query that returns all fields in order to build an artifact, but in fact we only take the content length and return it to the client.

The query used for the global blob lookup:

  select distinct  n.repo as itemRepo,n.node_path as itemPath,n.node_name as itemName,n.created as itemCreated,n.modified as itemModified,n.updated as itemUpdated,n.created_by as itemCreatedBy,n.modified_by as itemModifiedBy,n.node_type as itemType,n.bin_length as itemSize,n.node_id as itemId,n.depth as itemDepth,n.sha1_actual as itemActualSha1,n.sha1_original as itemOriginalSha1,n.md5_actual as itemActualMd5,n.md5_original as itemOriginalMd5,n.sha256 as itemSha2  from  nodes n  where (( n.sha256 = 'e617a56c238ed06a0215366a122d19fab0b94b28c1413e2171bbe2f883686e6b' and n.node_type = 1) and(n.repo != 'auto-trashcan' or n.repo is null)) and(n.repo != 'jfrog-support-bundle' or n.repo is null) 

There are more queries like this (I think the heuristics search query as well).

We would like to return only the content length and repo fields in order to improve performance.
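
A sketch of the narrower projection being asked for: the same WHERE clause as the query above, returning only the two fields the "is blob exist" flow actually uses:

// Sketch: same WHERE clause as the query above, projecting only the two fields the
// "is blob exist" flow actually needs (repo and content length).
public class IsBlobExistQuerySketch {

    static final String CONTENT_LENGTH_ONLY =
            "SELECT DISTINCT n.repo AS itemRepo, n.bin_length AS itemSize "
            + "FROM nodes n "
            + "WHERE n.sha256 = ? AND n.node_type = 1 "
            + "AND (n.repo != 'auto-trashcan' OR n.repo IS NULL) "
            + "AND (n.repo != 'jfrog-support-bundle' OR n.repo IS NULL)";
}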






[RTFACT-20495] Aql adds a null check to "not equal" queries by default causing performance issues Created: 02/Nov/19  Updated: 02/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: AQL
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Inbar Tal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

One example is the global blob search during a docker push -

We get this query:
select distinct n.repo as itemRepo,n.node_path as itemPath,n.node_name as itemName,n.created as itemCreated,n.modified as itemModified,n.updated as itemUpdated,n.created_by as itemCreatedBy,n.modified_by as itemModifiedBy,n.node_type as itemType,n.bin_length as itemSize,n.node_id as itemId,n.depth as itemDepth,n.sha1_actual as itemActualSha1,n.sha1_original as itemOriginalSha1,n.md5_actual as itemActualMd5,n.md5_original as itemOriginalMd5,n.sha256 as itemSha2 from nodes n where (( n.sha256 = 'e617a56c238ed06a0215366a122d19fab0b94b28c1413e2171bbe2f883686e6b' and n.node_type = 1) and(n.repo != 'auto-trashcan' or n.repo is null)) and(n.repo != 'jfrog-support-bundle' or n.repo is null)

the "n.repo is null" part is unnecessary here since the repo field can't be null by definition.
We see a 10% improvement when removing this part.

Steps to reproduce:
1. env: 2,500,000 rows in node table, 400,000 shared layers of a docker image with 4 layers.
2. create a user that has read permission on a docker repo (NOT a docker repo that contains the shared layer)
3. execute docker push of this image with this user

The code inserting the null check -
org.artifactory.storage.db.aql.sql.builder.query.aql.Criterion#generateNotEqualsQuery
org.artifactory.storage.db.aql.sql.builder.query.aql.Criterion#isNullSql(java.lang.String, org.artifactory.aql.model.AqlField).
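
A sketch of the kind of guard described above, assuming the field model can report whether a column is nullable; the AqlField accessors here are hypothetical stand-ins for the real classes listed above:

// Sketch: only append the "<column> IS NULL" clause for columns that can actually be null.
// The AqlField interface below is a stand-in for the real field model; isNullable() is a
// hypothetical accessor used purely for illustration.
public class NotEqualsClauseSketch {

    interface AqlField {
        String columnName();
        boolean isNullable();
    }

    static String notEqualsClause(String tableAlias, AqlField field, String placeholder) {
        String column = tableAlias + "." + field.columnName();
        String notEquals = column + " != " + placeholder;
        if (field.isNullable()) {
            return "(" + notEquals + " OR " + column + " IS NULL)";
        }
        // e.g. nodes.repo is NOT NULL by definition, so the redundant null check is skipped.
        return notEquals;
    }
}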






[RTFACT-20494] Support custom sourceCategory when integrating Artifactory with sumo logic Created: 01/Nov/19  Updated: 01/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: SumoLogic
Affects Version/s: 6.12.2
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Divija Kandukoori Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The customer wants to have a custom _sourceCategory when querying for the logs in sumo logic after the integration.
At the moment we get the logs in sumo when querying _sourceCategory="/artifactory/request/" but the customer wants to have it modified as _sourceCategory=" XXX/XXX/XXX/artifactory/request/".

Currently, we take "category" from the logback.xml and prepend "Artifactory" to it, but also require that the category in logback.xml be one of four values reflecting the log in question.






[RTFACT-20493] Artifactory does not automatically update licenses in use if there are more expired licenses than nodes Created: 01/Nov/19  Updated: 04/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Summary:
When using JFMC to update licenses, Artifactory does not automatically update the "in use" licenses correctly in the following scenario:
Have X number of nodes
Have X+1 expired licenses attached to Artifactory <- this is important - it looks like if you have the same number of licenses as nodes (X), it doesn't reproduce

In JFMC, attach new licenses from the bucket to Artifactory (I went to bucket management -> attach -> attached new licenses to the Artifactory instance) - see that the "in use" licenses do not update to the new ones. The new licenses are added to Artifactory, but since "in use" is still wrong, Artifactory will continue to think that its license is invalid.

Tested using Art 6.5.x, and JFMC 3.5.4. Customer saw this happen on latest Artifactory (6.13). I tested this with a 3 node cluster, with 3 expired Enterprise Plus licenses, and 3 expired Enterprise Plus trial licenses. New license bucket had 3 valid licenses (enterprise plus trial)






[RTFACT-20491] ability to globally disable hover over tool tips Created: 01/Nov/19  Updated: 01/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2019-11-01 at 11.43.21 AM.png    

 Description   

Certain browsers like Safari display their own tooltips, which makes Artifactory's redundant, as both show up (screenshot attached). It would be nice to be able to disable Artifactory's tooltips globally. I didn't see an option to disable them in the properties or the UI, and disabling help mode didn't help.






[RTFACT-20490] Better performance is needed for api/storageinfo Created: 01/Nov/19  Updated: 06/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: None
Fix Version/s: None

Type: Performance Priority: Normal
Reporter: Stefan Gangefors Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None


 Description   

When trying to view the Admin>Advanced>Storage or the REST API /api/storageinfo there is a risk that the request will time out for Artifactory instances with a lot of repos and artifacts.

For our instance with 126 local repos and about 26M artifacts we get a timeout for the API after 600 seconds (due to how we have configured our haproxy).

But the reality is that a REST API endpoint should not take 10 minutes before it returns any information.

In this case it would be fair to make sure that the database query doesn't contain any SUM() calls that can take forever to return if one has a larger amount of artifacts.

A solution for this could be that a thread in the application adds or subtracts from a repo size summary field on every create/delete artifact event. Or a scheduled job could sum up the totals and update the summary fields asynchronously (a sketch of the first approach follows below). Sure, the sum can differ from reality by a fraction if not all events are processed yet, but considering that the current API can take more than 10 minutes to return, that wouldn't be an issue. I don't think anyone expects a byte-exact number anyway (also, the storage sizes are turned into kB/MB/GB in the UI anyway).
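
A minimal sketch of that event-driven summary idea (class and method names are hypothetical; persistence of the counters and rebuilding them after a restart are deliberately left out):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: keep a per-repo summary that is adjusted on every create/delete event, so
// /api/storageinfo can answer without a full-table SUM(). Persistence and recovery
// after a restart are ignored in this sketch.
public class RepoStorageSummary {

    private final Map<String, LongAdder> fileCount = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> byteCount = new ConcurrentHashMap<>();

    public void onArtifactCreated(String repoKey, long sizeBytes) {
        fileCount.computeIfAbsent(repoKey, k -> new LongAdder()).increment();
        byteCount.computeIfAbsent(repoKey, k -> new LongAdder()).add(sizeBytes);
    }

    public void onArtifactDeleted(String repoKey, long sizeBytes) {
        fileCount.computeIfAbsent(repoKey, k -> new LongAdder()).decrement();
        byteCount.computeIfAbsent(repoKey, k -> new LongAdder()).add(-sizeBytes);
    }

    public long files(String repoKey) {
        LongAdder adder = fileCount.get(repoKey);
        return adder == null ? 0 : adder.sum();
    }

    public long bytes(String repoKey) {
        LongAdder adder = byteCount.get(repoKey);
        return adder == null ? 0 : adder.sum();
    }
}

The figures can lag slightly behind reality, which the paragraph above explicitly accepts as a fair trade-off.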

 

Please fix this since currently the functionality of this is inaccessible to us.

 

https://localhost:8081/api/storageinfo
{
  "errors" : [ {
    "status" : 500,
    "message" : "Repository storage summary failed with exception: Communications link failure\n\nThe last packet successfully received from the server was 600,004 milliseconds ago.  The last packet sent successfully to the server was 600,005 milliseconds ago."
  } ]
}

 

Update:

I've done some analyzing on a locally deployed instance and debugged what queries are actually executed on the database.

There are two queries:

SELECT repo, SUM(CASE WHEN node_type = 0 THEN 1 ELSE 0 END) as folders, SUM(CASE WHEN node_type = 1 THEN 1 ELSE 0 END) as files, SUM(bin_length) FROM nodes GROUP BY repo;

and

SELECT count(b.sha1), sum(b.bin_length) FROM binaries b WHERE b.sha1 NOT LIKE '##%';

These queries take an insanely long time to complete because of the SUM statements and because they scan the whole `nodes` table without limiting the number of rows they are going to summarize.

Just as an example: executing the top query but replacing the `GROUP BY repo` statement with `WHERE repo = "reponame"` for one of our mid-sized repositories takes almost a minute if the query wasn't cached.

repo                 folders   files     SUM(bin_length)
firmware-releases    144144    495623    9532328981425

You can just imagine how long that operation would take for the repos we have that hold 10M artifacts. I don't know how long it actually takes because the DB connection times out after 10 minutes.

 

This performance issue is also visible on the front page where a SELECT COUNT( * ) FROM nodes WHERE node_type = 1; query is executed to get the number of artifacts just to be able to present it. The data shown in that message is mostly just a bragging statement and has no value to normal users visiting Artifactory (RTFACT-20532).

This performance issue is also visible when one clicks the "Show" link for getting the "Artifact Count / Size" on the General page of a repo (as laid out in RTFACT-17669).

 

Is it we who have a badly configured database or is it just that Artifactory isn't really optimized for the amount of artifacts we have in our instance?






[RTFACT-20485] Can't install pre-release npm package Created: 31/Oct/19  Updated: 31/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Aleksei Chernov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

artifactory.version: 6.11.0



 Description   

When I try to install a pre-release npm package from a private registry I get a "No matching version found" error.

But it is possible to install the same package using the "canary" npm tag.

This command doesn't work: npm i @scope/package@0.0.373-develop.5
while this does: npm i @scope/package@canary
and successfully installs @scope/package@0.0.373-develop.5






[RTFACT-20484] Artifactory Nuget GetUpdates() request doesn't honor includeAllVersions parameter Created: 31/Oct/19  Updated: 31/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.10.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Igor Stojković Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When we target our local nuget repository with a GetUpdates() request and includeAllVersions parameter set to false we still get all the versions instead of just the latest one. For example this URL:

https://artifactory.local.com/artifactory/api/nuget/company-nuget-local/GetUpdates()?packageIds=%27company.coreutil%27&versions=%271.0.36%27&includePrerelease=false&includeAllVersions=false

gives us results with versions 1.0.37, 1.0.38 and 1.0.39 included when we only expect the latest 1.0.39 to be included in the response.

 






[RTFACT-20481] Add indication/suggestion when uploading a large file through the UI to try it through the CLI Created: 31/Oct/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Shai Ben-Zvi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Upon uploading a large file through the UI there is a high possibility that the upload will fail due to a timeout by the browser.
It would be nice and helpful to show some kind of info/warning/tip message when a user attempts to upload such a file (or even any file), saying that the upload can fail due to the browser timeout and suggesting uploading through a CLI tool like curl as an alternative.

For example:
let's say I upload a file with size of 1GB through the UI, Artifactory will continue to process the upload but will display the message:
"Please note that it is recommended to upload large file through the CLI due to browser timeouts".

This can save the user a lot of time in understanding what might have caused the upload to fail, instead of checking the log and finding a "client closed the connection" error.






[RTFACT-20480] Upload by checksum returns confusing 404 when checksum doesn't match Created: 31/Oct/19  Updated: 31/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Richard Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

As per https://www.jfrog.com/confluence/plugins/servlet/mobile?contentId=46107948#ArtifactoryRESTAPI-DeployArtifactbyChecksum, if the checksum doesn't match, 404 is returned. I find this very confusing as 404 usually indicates that a file is missing (which implies that retrieval was attempted), which is not an expected response when trying to upload a file.

From the HTTP spec (https://tools.ietf.org/html/rfc7231#section-6.5), I disagree that this scenario matches the expected use case for 404:

The server has not found anything matching the effective request URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.

From this, it sounds as though the artifact doesn't exist and the web server either has no idea where it is or it doesn't want to reveal where it exists, neither of which would match my expected behaviour when uploading a file.

Instead, I believe that 409 is a more appropriate response code, based on the HTTP spec:

Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the representation being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can't complete the request. In this case, the response representation would likely contain a list of the differences between the two versions.
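
For context, a minimal deploy-by-checksum call looks roughly like the sketch below (instance URL, target path, sha1 and credentials are placeholders). Today a digest that matches no known binary yields the 404 discussed above; the proposal is that 409 would describe the conflict better.

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a deploy-by-checksum request; instance URL, target path, sha1 and
// credentials are placeholders.
public class DeployByChecksum {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8081/artifactory/libs-release-local/org/acme/acme-1.0.jar");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("PUT");
        con.setRequestProperty("X-Checksum-Deploy", "true");
        con.setRequestProperty("X-Checksum-Sha1", "da39a3ee5e6b4b0d3255bfef95601890afd80709");
        con.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("admin:password".getBytes(StandardCharsets.UTF_8)));
        con.setDoOutput(true);
        con.getOutputStream().close(); // empty body: the binary is matched by checksum only

        // Currently 404 when the checksum matches no existing binary; the report argues for 409.
        System.out.println("HTTP " + con.getResponseCode());
    }
}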





[RTFACT-20478] UI is bad to point of being broken Created: 31/Oct/19  Updated: 31/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Anders Eurenius Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The front page offers a search, and I can find my repositories, but clicking on one just gives me a blurb about docker.

Expected: Navigation to the repository (whatever that means: a separate page, or scrolled to in the tree-list)

Actual: Non-specific blurb about how to set up docker

screenshot: https://ibb.co/PGwMq7t






[RTFACT-20475] Enforce Special characters in the Password Strings Created: 30/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Client
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Manoj Tuguru Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, Artifactory's password policy has no option to enforce special characters in the password string in order to require strong passwords.
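
For illustration only, the kind of check being requested; the exact character classes and minimum length here are assumptions, not an existing Artifactory policy:

import java.util.regex.Pattern;

// Sketch of a password-strength check that also enforces a special character.
// The character classes and minimum length are illustrative assumptions only.
public class PasswordPolicy {

    private static final Pattern STRONG = Pattern.compile(
            "^(?=.*[a-z])(?=.*[A-Z])(?=.*\\d)(?=.*[^A-Za-z0-9]).{8,}$");

    public static boolean isStrong(String candidate) {
        return candidate != null && STRONG.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isStrong("Passw0rd"));   // false - no special character
        System.out.println(isStrong("Passw0rd!"));  // true
    }
}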






[RTFACT-20474] Improve error message for creating token REST API Created: 30/Oct/19  Updated: 17/Nov/19

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Prasanna Narayana Assignee: Barak Hacham
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When using the REST API to create a token, the error message returned should use a gender-neutral pronoun.

Here's the current message :

jfrog rt curl -XPOST /api/security/token -d "username=sample-service" -d "scope=member-of-groups:security-group"
{
"error" : "invalid_request",
"error_description" : "User prasanna can only create user token for himself (requested: sample-service)"

 

Need to change it to :
User prasanna can only create user token(s) for themselves






[RTFACT-20473] Missing REST API for uploading SSH Server Keys Created: 30/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ronald Blum Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

artifactory.version | 6.13.1
artifactory.revision | 61301900
artifactory.buildNumber | 2793



 Description   

Can you please document how to upload the "SSH Server Configuration" - "Server Keys" (private and public) through the REST API (not the GUI)?

If really missing yet, can you please give information about if such an API extension will become available on future versions of Artifactory?

 

 






[RTFACT-20470] View window non-resizable Created: 30/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Girish Nehte Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi Team, 

This request is related to the sizing of the view window in Artifactory. From the repository, if we open any log file or a text file, it opens in a pop-up window. This window is non-resizable. This is a request to make it resizable. If the file is a bigger one the view is not very good. Hence it would be good if the window can be made resizable, That will help a lot.

 
Regards,

Girish






[RTFACT-20469] Support v3 endpoint to work with "paket" dependency manager Created: 30/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: 6.11.3, 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Jayanth Suresh Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None


 Description   

Problem statement: Unable to resolve NuGet packages via Artifactory using the NuGet v3 endpoint with the "paket" dependency manager.

Steps to reproduce the issue :

1. Download the paket tool from https://github.com/fsprojects/Paket/releases.
2. Create a NuGet repository (local, remote and virtual).
3. Add the source URL and the dependencies to the paket.dependencies file.
4. Run paket install to analyze the dependencies and create the paket.lock file.
5. Run paket update to update all the packages.

I tried to replicate the scenario with the below commands and example.

Paket version 5.227.0

paket.dependencies file

source http://localhost:8081/artifactory/api/nuget/v3/nuget

// NuGet packages
nuget Microsoft.NETCore.App == 2.2.7

Command :

>paket.exe install

Paket version 5.227.0

Skipping resolver for group Main since it is already up-to-date

paket.lock is already up-to-date

Installing into projects:

 - Creating model and downloading packages.

Performance:

 - Runtime: 1 second

Paket failed with

-> The NuGet source http://localhost:8081/artifactory/api/nuget/nuget for packag

e Microsoft.NETCore.DotNetHostPolicy was not found in the paket.dependencies fil

e with sources [NuGetV3

{Url = "http://localhost:8081/artifactory/api/nuget/v3/n uget-remote";              Authentication = Paket.NetUtils+AuthProviderModule+ofFunction@43;}

]

>paket.exe update

Paket version 5.227.0

Resolving packages for group Main:

Performance:

 - Resolver: 565 milliseconds (1 runs)

    - Runtime: 103 milliseconds

    - Blocked (retrieving package versions): 462 milliseconds (1 times)

 - Average Request Time: 121 milliseconds

 - Number of Requests: 1

 - Runtime: 1 second

Paket failed with

-> Unable to retrieve package versions for 'Microsoft.NETCore.App'

   -- CLOSED –

 

   -- OPEN ----

      Microsoft.NETCore.App == 2.2.7 (from C:\Users\test\Downloads\project-e

xamples-master\project-examples-master\nuget-example\MyLogger\MyLogger\paket.dep

endencies)

-> could not find an AllVersionsAPI endpoint for http://localhost:8081/artifacto

ry/api/nuget/v3/nuget-remote

Workaround: If we use the normal v2 endpoint we will be able to resolve the Nuget packages.

paket.dependencies file

source http://localhost:8081/artifactory/api/nuget/nuget

// NuGet packages
nuget Microsoft.NETCore.App == 2.2.7

Command :

>paket.exe install

Paket version 5.227.0

Resolving packages for group Main:

At least one 'next' link (index 0) returned a empty result (noticed on 'http://l

ocalhost:8081/artifactory/api/nuget/nuget/FindPackagesById()?semVerLevel=2.0.0&i

d='Microsoft.NETCore.App'&$orderby=Published desc'): ['http://localhost:8081/art

ifactory/api/nuget/nuget/FindPackagesById()?semVerLevel=2.0.0&id='Microsoft.NETC

ore.App'&$orderby=Published desc&$skip=80']

 - Microsoft.NETCore.App 2.2.7

 - Microsoft.NETCore.DotNetHostPolicy 3.0.0

 - Microsoft.NETCore.Platforms 3.0.0

 - NETStandard.Library 2.0.3

 - Microsoft.NETCore.Targets 3.0.0

 - Microsoft.NETCore.DotNetHostResolver 3.0.0

 - Microsoft.NETCore.DotNetAppHost 3.0.0

Locked version resolution written to C:\Users\test\Downloads\project-example

s-master\project-examples-master\nuget-example\MyLogger\MyLogger\paket.lock

Installing into projects:

 - Creating model and downloading packages.

Downloading Microsoft.NETCore.Platforms 3.0.0

Downloading Microsoft.NETCore.App 2.2.7

Downloading Microsoft.NETCore.DotNetHostPolicy 3.0.0

Downloading Microsoft.NETCore.Targets 3.0.0

Download of Microsoft.NETCore.Platforms 3.0.0 done in 64 milliseconds. (3932 kbi

t/s, 0 MB)

Download of Microsoft.NETCore.Targets 3.0.0 done in 2 seconds. (106 kbit/s, 0 MB

)

Download of Microsoft.NETCore.App 2.2.7 done in 2 seconds. (12452 kbit/s, 3 MB)

Download of Microsoft.NETCore.DotNetHostPolicy 3.0.0 done in 2 seconds. (80 kbit

/s, 0 MB)

Downloading Microsoft.NETCore.DotNetAppHost 3.0.0

Downloading Microsoft.NETCore.DotNetHostResolver 3.0.0

Download of Microsoft.NETCore.DotNetAppHost 3.0.0 done in 1 second. (138 kbit/s,

 0 MB)

Download of Microsoft.NETCore.DotNetHostResolver 3.0.0 done in 2 seconds. (97 kb

it/s, 0 MB)

 - Installing for projects

Garbage collecting Antlr

Garbage collecting Microsoft.CSharp

Garbage collecting System.Diagnostics.Debug

Garbage collecting System.Diagnostics.DiagnosticSource

Garbage collecting System.Diagnostics.Tools

Garbage collecting System.Diagnostics.Tracing

Garbage collecting System.Dynamic.Runtime

Garbage collecting System.Globalization

Garbage collecting System.Globalization.Calendars

Garbage collecting System.Globalization.Extensions

Garbage collecting System.IO

Garbage collecting System.IO.Compression

Garbage collecting System.IO.Compression.ZipFile

Garbage collecting System.IO.FileSystem

Garbage collecting System.IO.FileSystem.Primitives

Garbage collecting System.Linq

Garbage collecting System.ObjectModel

Garbage collecting System.Reflection

Garbage collecting System.Reflection.Emit

Garbage collecting System.Reflection.Emit.ILGeneration

Garbage collecting System.Security.Cryptography.Algorithms

Garbage collecting System.Security.Cryptography.Cng

Performance:

 - Resolver: 14 seconds (1 runs)

    - Runtime: 372 milliseconds

    - Blocked (retrieving package details): 5 seconds (4 times)

    - Blocked (retrieving package versions): 9 seconds (4 times)

    - Not Blocked (retrieving package details): 3 times

 - Disk IO: 1 second

 - Average Download Time: 861 milliseconds

 - Number of downloads: 6

 - Average Request Time: 259 milliseconds

 - Number of Requests: 55

 - Runtime: 21 seconds

Paket omitted 6 warnings similar to the ones above. You can see them in verbose

mode.

>paket.exe update
Paket version 5.227.0
Resolving packages for group Main:
At least one 'next' link (index 0) returned a empty result (noticed on 'http://l
ocalhost:8081/artifactory/api/nuget/nuget/FindPackagesById()?semVerLevel=2.0.0&i
d='Microsoft.NETCore.App'&$orderby=Published desc'): ['http://localhost:8081/art
ifactory/api/nuget/nuget/FindPackagesById()?semVerLevel=2.0.0&id='Microsoft.NETC
ore.App'&$orderby=Published desc&$skip=80']

  • Microsoft.NETCore.App is locked to 2.2.7
  • Microsoft.NETCore.DotNetHostPolicy 3.0.0
  • Microsoft.NETCore.Platforms 3.0.0
  • NETStandard.Library 2.0.3
  • Microsoft.NETCore.Targets 3.0.0
  • Microsoft.NETCore.DotNetHostResolver 3.0.0
  • Microsoft.NETCore.DotNetAppHost 3.0.0
    paket.lock is already up-to-date
    Installing into projects:
  • Creating model and downloading packages.
  • Installing for projects
    Performance:
  • Resolver: 9 seconds (1 runs)
  • Runtime: 427 milliseconds
  • Blocked (retrieving package versions): 8 seconds (5 times)
  • Blocked (retrieving package details): 357 milliseconds (2 times)
  • Not Blocked (retrieving package details): 5 times
  • Not Blocked (retrieving package versions): 2 times
  • Disk IO: 15 milliseconds
  • Average Request Time: 630 milliseconds
  • Number of Requests: 14
  • Runtime: 10 seconds
    Paket omitted 6 warnings similar to the ones above. You can see them in verbose
    mode.

 






[RTFACT-20468] Add support for InSpec Supermarket Created: 30/Oct/19  Updated: 03/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Maayan Amrani Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The request is to support inspec supermarket profiles. Similar to Chef's capabilities https://www.inspec.io/docs/reference/profiles/

 

 






[RTFACT-20467] Change MySQL character encoding from UTF8MB3 to UTF8MB4 Created: 30/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Database
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yossi Shaul Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency
depends on RTFACT-16750 Artifactory does not support MYSQL 8 ... Open

 Description   

When creating the schema in MySQL we get a warning

CREATE DATABASE artdb CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected, 2 warnings (0.01 sec)
3719 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.

 

UTF8MB3 is deprecated in MySQL 8 and planned to be removed.

According to the MySQL team blog, UTF8MB4 has better performance in v8.0 (but might be worse in previous versions?).

When certifying MySQL 8 we should also test this change and consider switching the default.
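
For reference, the unambiguous form the warning suggests, run here through plain JDBC (connection URL and credentials are placeholders; this is a sketch, not an official migration step):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: create the Artifactory schema with an explicit utf8mb4 character set, as the
// MySQL warning suggests. Connection URL and credentials are placeholders; the MySQL
// JDBC driver is assumed to be on the classpath.
public class CreateArtifactoryDb {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/?useSSL=false", "root", "password");
             Statement st = con.createStatement()) {
            st.executeUpdate("CREATE DATABASE artdb CHARACTER SET utf8mb4 COLLATE utf8mb4_bin");
        }
    }
}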






[RTFACT-20466] Allow displaying a Description on multiple lines Created: 30/Oct/19  Updated: 05/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Home
Affects Version/s: 6.1.5
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Hubert Joos Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

All



 Description   

Hello,

We would be interested in being able to display a "Public Description" on multiple lines. Ideally the field would accept HTML, for example:

Project: myproject<br>
<b>Owner</b>: me

But even just supporting "\n" would already be a start.

This request could be generalized to all descriptions.






[RTFACT-20463] Nuget packages unable to resolve from remote cache when Smart remote repo is down Created: 29/Oct/19  Updated: 06/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: 6.10.1
Fix Version/s: 6.13.1

Type: Bug Priority: Normal
Reporter: Santhosh Pesari Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Issue: From PowerShell, if we resolve NuGet artifacts through a virtual repository that contains a smart remote repository, and the Artifactory instance the smart remote points to is down, the artifacts cannot be resolved even though they are present in the remote cache.

If we point to the remote repository directly from PowerShell, the artifacts resolve successfully.

Steps to reproduce:

1. Consider an Artifactory instance facing the internet and create a remote repository pointing to "https://www.powershellgallery.com/".

2. In a second Artifactory instance, create a remote repository pointing to the first instance's remote repository (a smart remote).

3. From PowerShell, register repositories pointing to the virtual "nuget" repository and to the remote "original" repository of the secondary instance.

4. Request the Posh-SSH artifact via the virtual repository:

Install-Module -Name Posh-SSH  -Verbose -Repository Artifactory4

5. Now shut down the internet-facing Artifactory and enable Global Offline Mode in the secondary Artifactory instance.
6. Uninstall the Posh-SSH module from PowerShell.
7. Request the package again from PowerShell: resolving through the virtual repository returns 404, while resolving directly from the remote returns 200.

Errors: 

2019-10-29 12:09:50,059 [https-jsse-nio-8443-exec-94] [DEBUG] (o.a.a.n.r.NuGetVirtualRepoHandler:148) - Could not download NuGet package: 'Posh-SSH' version:'2.2.0' from repository: 'powershell-remote', powershell-remote: is offline, 'powershell-remote:Posh-SSH.2.2.0.nupkg' is not found at 'Posh-SSH.2.2.0.nupkg'.

20191029120950|147|REQUEST|10.98.22.96|anonymous|GET|/api/nuget/chocolatey/Download/Posh-SSH/2.2.0|HTTP/1.1|404|0
20191029120950|137|REQUEST|10.98.22.96|anonymous|GET|/api/nuget/chocolatey/Download/Posh-SSH/2.2.0|HTTP/1.1|404|0

 

This issue is seen in version 6.10.1; it was also tested in version 6.13.1, where it appears to be resolved.






[RTFACT-20460] Debian package with no control file breaks metadata Created: 29/Oct/19  Updated: 04/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Debian
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Angello Maggio Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Regression:
Yes

 Description   

Steps to reproduce:

$ wget -q http://localhost:80081/artifactory/debian-local/pool/p/my-candidate/my-candidate_1.123_all.deb
$ candidate_name=my-candidate
$ release_deb=my-release_1.123_all.deb
$ release_name=my-release
$ ar x my-candidate_1.123_all.deb control.tar.gz
$ tar -zxf ./control.tar.gz ./control
$ sed -E -e 's/^(Package|Source|Provides|Description): my-candidate/\1: my-release/g' control > control.new
$ cp control.new control
$ gunzip control.tar.gz
$ tar --delete -f ./control.tar ./control
>> Skip this step to reproduce $ tar -rf ./control.tar --owner root --group root ./control
$ gzip control.tar
$ cp my-candidate_1.123_all.deb my-release_1.123_all.deb
$ ar r my-release_1.123_all.deb control.tar.gz

 

Results

Metadata will fail on merge and cause issues:

apt-get update
...
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/repo.test.myorg.com_artifactory_debian-local_dists_jfrog_main_binary-amd64_Packages
E: The package lists or status file could not be parsed or opened.






[RTFACT-20455] Cocoapods CDN support Created: 29/Oct/19  Updated: 29/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: CocoaPods
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Cal Moody Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory, cocoapods, remote


 Description   

CocoaPods now has a CDN (introduced a few months ago) that is extremely fast for updates and package installations. For a time comparison:

 

Using Artifactory Cocoapods remote

$ pod repo-art cocoapod-remote update

43m 12s

$ pod install (2 small packages)

5.790s

 

Using Cocoapods CDN

$ pod repo update

1.212s

$ pod install

1.014s

 

More info on the Cocoapods CDN: http://blog.cocoapods.org/CocoaPods-1.7.2/

 

Whatever CocoaPods implemented for their CDN is blazingly fast compared to the way CocoaPods used to work. Are there any plans to implement features that take advantage of the CocoaPods CDN for remote packages?






[RTFACT-20453] Consider replacing the ' sign to " in the Set-Me-Up Created: 29/Oct/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Ariel Seftel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2019-10-29 at 14.04.40.png    

 Description   

Currently, the Set-Me-Up snippets in the UI use the ' sign to wrap the API key.

On Windows this does not work; the deploy only succeeds after replacing the ' with ".

See the attached screenshot.
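
For illustration, a deploy command of the kind Set-Me-Up generates, quoted so that it works in cmd.exe and PowerShell on Windows (host, repository and file names are placeholders). With single quotes, cmd.exe passes the quote characters literally, so the header and URL are broken:

curl -H "X-JFrog-Art-Api: <API_KEY>" -T my-file.zip "https://<artifactory-host>/artifactory/generic-local/my-file.zip"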






[RTFACT-20450] Set admin initial password in the artifactory Created: 29/Oct/19  Updated: 29/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Ori Yitzhaki Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Set admin initial password in the artifactory

 Description   

Like many other systems, Artifactory starts with a default administrator account.
Many organizations ask to force an update of the default password.






[RTFACT-20448] Access Token authentication will not work on RHEL6 machines using YUM client Created: 29/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Tokens, YUM
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shai Ben-Zvi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship
is related to RTFACT-15996 Access Token authentication will not ... Resolved

 Description   

When creating an access token (for an existing or non-existing user, for example) and using it for deployment operations on RHEL7, everything works properly.

However, when trying to use the same access token on a RHEL6 machine with YUM client 3.2.29, the request fails with status 401 and "Bad credentials":

{
  "errors" : [
    { "status" : 401, "message" : "Bad credentials" }
  ]
}

[root@sai-centos-6 6]# yum install libfa1
Loaded plugins: fastestmirror, security
Setting up Install Process
Loading mirror speeds from cached hostfile
https://<username>:<access_token>@jfrog.io/artifactory/rpm-local/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Artifactory. Please verify its path and try again

Steps to reproduce are simple:
1. Create a user (can be an admin, or a user with just read permissions on rpm-local).
2. Create an access token using curl on the RHEL6 machine and apply that token in your yum config (see the sketch below).
3. Resolve an artifact that exists in the repository using the yum client (3.2.29).
4. On the RHEL6 machine, you will get a 401 error.
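
A minimal sketch of step 2, using the documented token-creation endpoint; the host, credentials and group name are placeholders, and the baseurl form matches the one in the error output above:

$ curl -u admin:<password> -X POST "https://<artifactory-host>/artifactory/api/security/token" \
      -d "username=rpm-reader" -d "scope=member-of-groups:readers"

# /etc/yum.repos.d/artifactory.repo
[Artifactory]
name=Artifactory
baseurl=https://<username>:<access_token>@<artifactory-host>/artifactory/rpm-local
enabled=1
gpgcheck=0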






[RTFACT-20447] Allow viewing of plaintext MIME type in Virtual Repositories Created: 28/Oct/19  Updated: 28/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Virtual Repositories
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Michael Schmitt Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Current functionality lets someone view plaintext content in the Web UI if it's defined in mimetypes.xml but ONLY if it's viewed via the local repository URL.  Please make this work in virtual repos as well since multi-site HA topology relies on virtual repos to present a unified endpoint/interface.  Of note: XML type files are viewable in both local and virtual repos just fine.  With that already in place, I would expect that this shouldn't be complicated functionality to add.






[RTFACT-20446] Go 1.13 compatibility: enable gosumdb Created: 28/Oct/19  Updated: 17/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Go
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ankush Chadha Assignee: Nadav Yogev
Resolution: Unresolved Votes: 2
Labels: CPE, gomodules

Issue Links:
Relationship
is related to RTFACT-20405 Artifactory to support go client v1.1... Open

 Description   

Since the Go 1.13.x update, with GOPROXY set to Artifactory, not all Go-related requests go to Artifactory: the gosumdb requests (supported, lookup, tile) go directly to Google's gosumdb.

One possible solution is to enable gosumdb support in Artifactory and, in addition, perhaps to suggest GOPRIVATE patterns (for private modules).
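
For context, a client-side sketch of how this is typically worked around today (host, repository key and module-path patterns are placeholders):

$ export GOPROXY=https://<artifactory-host>/artifactory/api/go/<go-virtual-repo>
$ export GOSUMDB=off                  # disable checksum-database lookups entirely, or
$ export GOPRIVATE=*.mycompany.com    # skip the sum DB (and proxy) for matching module paths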

 

Why is this important?

Artifactory should be the source of truth in both online and offline setup not only for packages but also the checksum metadata related to the packages.

 






[RTFACT-20437] Artifactory /api/search/versions does not return expected versions. Created: 27/Oct/19  Updated: 04/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Atlassian Build Team Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi there,

We are using the Artifactory REST endpoint /api/search/versions to query a specific version pattern, 1000.*.0. However, the endpoint does not return all available versions. We have more than 4k versions matching the pattern. If we use /api/search/versions?g=groupId&a=artifactId&v=1000.*.0&repos=my-repo, Artifactory returns 2k+ versions. If we use /api/search/versions?g=groupId&a=artifactId&v=1000.*.0 (without the repos=my-repo param), Artifactory returns 4k+ versions, but still not all of them. Note that all versions of the artifact exist in my-repo.

We have confirmed that increasing artifactory.search.userQueryLimit helps, but that should not be considered a proper workaround, as the property affects other AQL results as well.
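
For reference, a sketch of how that workaround is applied (the value is only an example, and the file location assumes a default installation):

# $ARTIFACTORY_HOME/etc/artifactory.system.properties (restart required)
artifactory.search.userQueryLimit=20000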

Ideally the endpoint should respect artifactory.search.userQueryLimit and return results with pagination if there are too many.

Cheers,






[RTFACT-20429] TLS 1.2 - Disabling old protocols and allowing cipher-suite selection Created: 25/Oct/19  Updated: 25/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Home
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: G. Klok Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We are looking for Artifactory to support the following capabilities without the need of a reverse proxy:

  1. Ability to enable TLS 1.2,
  2. Ability to specify cipher suites for TLS 1.2 use, and
  3. Ability to disable SSL 2.0, SSL 3.0, TLS 1.0 and TLS 1.1





[RTFACT-20425] Dark mode UI Created: 24/Oct/19  Updated: 01/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Loren Yeung Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

request for dark mode in Artifactory in the UI






[RTFACT-20423] Please provide a REST service to copy / move files in bulk Created: 24/Oct/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Francois Ritaly Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

 Description   

This JIRA is an enhancement request to the Artifactory REST API.

The current Artifactory REST API features a service named Copy Item to copy a given item (either a file or a folder) to another location in Artifactory.
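
For reference, a sketch of the existing Copy Item call (host, repository keys and paths are placeholders):

$ curl -u admin:<password> -X POST "https://<artifactory-host>/artifactory/api/copy/repo/dir1/dir2/file.txt?to=/another-repo/dir1/dir2/file.txt"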

When using the service to copy a folder, the request will only return after the copy completes which makes sense from a user point-of-view. This time can vary based on the number of items in the folder being copied.

I'm currently implementing an automation to create a snapshot of a repository which implies to copy the full repository content somewhere else in Artifactory. Some of the repositories I need to snapshot can contain up to 26 000 files.

The sheer number of files to copy means that I'm hitting the limits of the current Copy Item REST service. I ran a test: the copy eventually completes after 18 minutes (thanks to 6 parallel copy threads) but, as expected, those 26 000 copy requests sent to Artifactory induce a high DB load and huge network traffic. In terms of performance, this is a worst-case scenario.

I tried improving things by having Artifactory perform whole-folder copies: instead of issuing 26 000 file copy requests, the snapshot would send 5 000 file copy requests and 25 folder copy requests. I'm not going to elaborate on how this works, but it turns out this solution doesn't really work because of the metadata files: when performing a folder copy, if the folder contains metadata files (e.g. "repodata") and the user doesn't have permission to overwrite them in the destination, Artifactory throws an error because the user lacks delete/overwrite permissions. One solution would be to tell Artifactory "copy this folder over there but skip all the metadata", which is not possible.

My need could be addressed by featuring a new REST service to perform a set of copy / move operations in a transactional way. This would provide a scalable version of the "Copy Item" / "Move Item" current service.

The new service could be invoked by sending a POST request to "/api/batch" with a JSON payload with the following structure:

{
    "operations": [
        {
            "from": "/repo/dir1/dir2/file.txt",
            "to": "/another-repo/dir1/dir2/file.txt",
            "type": "copy"
        },
        {       
            "from": "/yet-another-repo/file3.txt",
            "to": "/another-repo/dir1/dir2/file4.txt",
            "type": "copy"
        }
    ]     
}

where:

  • "from" denotes the source location of the item to copy / move
  • "to" denotes the target location where to copy / move the item
  • "type" denotes the type of operation to perform. Expected values: "copy" or "move".

The items should be copied / moved in the same DB transaction to ensure that the process is transactional and performant.

The service should return a HTTP 200 (OK) if the operation succeeds or a HTTP 4xx if the update fails.

Note: I'm thinking that the service could also be extended to cover item deletions as in the following example.

{
    "operations": [
        {
            "from": "/repo/dir1/dir2/file.txt",
            "to": "/another-repo/dir1/dir2/file.txt",
            "type": "copy"
        },
        {       
            "from": "/yet-another-repo/file3.txt",
            "to": "/another-repo/dir1/dir2/file4.txt",
            "type": "copy"
        },
        {       
            "location": "/yet-another-repo/file.txt",
            "type": "delete"
        }
    ]     
}

Having this new service would remove the limitations of the current Copy Item service.

Just to be clear, I don't intend the service to be able to handle 26 000 items at once. It would be nice if it could support up to 500 operations at a time.



 Comments   
Comment by Ariel Kabov [ 07/Nov/19 ]

A possible approach to handle massive move/copy between repositories is:
1. get via AQL/CLI list of all artifacts you wish to move.
2. Run a script to move/copy, such as:

#!/bin/sh 

FILE=list.txt 
TOTAL=`cat $FILE | wc -l`

while read LINE
 do  
   echo $TOTAL left
   jfrog rt mv $LINE npm-promote-local
   sleep 0.1
   TOTAL=`expr $TOTAL - 1`
done < $FILE
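
A hedged example of producing list.txt for step 1 with the JFrog CLI and jq (the source repository path is a placeholder, and jq is assumed to be installed):

$ jfrog rt s "source-repo/path/*" | jq -r '.[].path' > list.txt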




[RTFACT-20419] Incorrect documentation for direct cloud download Created: 24/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Krzysztof Malinowski Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Configuring filestore for direct cloud download is inconsistent with Configuring S3 Filestore:

  • signedUrlExpirySeconds is spelled with URL in params, but Url in example,
  • signatureExpirySecond in Direct Cloud Storage is spelled signatureExpirySeconds (note last s) in Filestore Configuration,
  • useSignature is nowhere documented in Filestore Configuration and it also seems wrong - actual tag is named enableSignedUrlRedirect (which is also not documented in Filestore Configuration).

Please update documentation to reflect actual parameters in use.






[RTFACT-20418] Deny Permission - On repositories Created: 24/Oct/19  Updated: 25/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Rhys Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory, repository


 Description   

Hi

 

I would like to request the ability to Deny specific users / groups access to specific repositories.

This would allow global permissions to be applied, but then restricted down where needed on a repo-by-repo basis with minimal changes to the global permissions.

 

E.g. if I need to stop anonymous access to 2 out of 100 repos, it would be simpler to just deny access to those specific repos for that user. Otherwise we need to change the default "Anything" permission target and remember to add every new repo to that group, etc.

 

This would also cater for abuse scenarios where a specific user could be locked out of a specific repo for a period of time.

 

Thanks

 






[RTFACT-20416] Artifactory fails to fetch metadata for RPM remote repository when metadata is corrupted Created: 24/Oct/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.0, 6.11.3
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Muhammed Kashif Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Contains(WBSGantt)
is contained in RTFACT-18095 corrupt cached npm metadata in a remo... In Progress

 Description   

Whenever the metadata in an RPM remote repository's cache is corrupted, Artifactory fails to update the metadata from the remote endpoint and the yum client fails with the error "Metadata file does not match checksum". We tried setting the "Metadata Retrieval Cache Period" to 0 and expected that, with this parameter set to 0, Artifactory would fetch the metadata on every request; however, the request is not handled until we delete the contents of the cache and perform a Zap Cache.

We were able to reproduce the same behaviour by corrupting the metadata manually.

These are the steps we followed,

  1. Create a remote repository "rpm-elastic" in Artifactory instance Art-1 which connects to https://artifacts.elastic.co/packages/
  2. Create a smart remote repository "rpm-smart-elastic" in other Artifactory instance Art-2 which points to "rpm-elastic" repository of Art-1.
  3. Configure the yum client to work with "rpm-smart-elastic" and we would be able to install and list the elasticsearch packages.

Now to reproduce the issue,

1. Created one local repository "rpm-local" in Art-2 and added the corrupted metadata files.
2. Replicated the metadata files from "rpm-local" to "rpm-smart-elastic" repository, now the "rpm-smart-elastic" contains the corrupted metadata file in its cache.
3. Once the files are replicated, we changed the remote endpoint of the smart remote repository "rpm-smart-elastic" to point to the "rpm-elastic" repository of Art-1.
4. When we try to list or install the elasticsearch package using the yum client, it throws "Metadata file does not match checksum" instead of fetching the new metadata from Art-1's "rpm-elastic" repository.

To work around the issue, we have to manually clear the cache contents and perform a Zap Cache so that Artifactory fetches new metadata from the remote endpoint; alternatively, the expirePackagesMetadata plugin can be used.






[RTFACT-20411] Support AES256 encryption for Amazon S3 Official SDK Template Created: 23/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifact Storage
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Jayanth Suresh Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently Amazon S3 SDK Binary provider (cluster-s3-storage-v3/s3-storage-v3 template) does not support AES256 encryption.






[RTFACT-20405] Artifactory to support go client v1.13 checksum verification when sum.golang.org is not accessible. Created: 22/Oct/19  Updated: 04/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Go
Affects Version/s: 6.12.2
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Stanley Fong Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: gomodules

Issue Links:
Relationship
relates to RTFACT-20446 Go 1.13 compatibility: enable gosumdb Open

 Description   

This is from stackoverflow - 

At work we can't do downloads directly from the internet but must go through a local proxy based on Artifactory. I have specified GOPROXY (GOPROXY=https://repo.mycompany.se/artifactory/api/go/gocenter) to a proxy set up in our local Artifactory. When running "go get" the download goes OK from what I can see, but the checksum verification fails because go tries to use sum.golang.org directly instead of getting the checksum through the proxy.

Response 

C:\Users\x\go\src\hello2>go get rsc.io/quote@v1.5.2
go: finding rsc.io v1.5.2
go: downloading rsc.io/quote v1.5.2
verifying rsc.io/quote@v1.5.2: rsc.io/quote@v1.5.2: Get https://sum.golang.org/lookup/rsc.io/quote@v1.5.2: dial tcp: lookup sum.golang.org: no such host

Link to the stackoverflow - https://stackoverflow.com/questions/58410493/local-artifactory-golang-proxy-and-checksum-verification

 

 

 






[RTFACT-20396] Initiate housekeeping of Max Unique Versions / Tags on demand Created: 21/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: kevin.kirkham Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Artifactory already has the capability to limit the number of versions for certain package types (Maven, Gradle, Docker, etc.) based on a number provided via the UI. That housekeeping only affects new artefacts added after the setting has been set or altered. It would be really useful to be able to run that same housekeeping on demand, against ALL existing artefacts in the repository.






[RTFACT-20395] ADMIN UI - RBAC Created: 21/Oct/19  Updated: 21/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Rhys Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi,

 

It would be nice to have RBAC implemented at the ADMIN UI level.

I.e. we need some devs to manage group-membership access to their repositories, but at present we need to give them "full" access to Artifactory to do this.

 

Note: we can't use any external authentication sources at present.

 

Thanks






[RTFACT-20394] Dynamic certificate validation while configuring circle of trust Created: 21/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.1, 6.13.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Lakshmi Prasad Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Background:

As part of the circle of trust configuration, we copy the certificate files to the other Artifactory's $ARTIFACTORY_HOME/access/etc/keys/trusted location, which doesn't require a restart.

Issue: 

When we copy an invalid certificate (for example, one with an extra character in it) to this location, Artifactory should validate whether the certificate is valid. Currently, that validation happens only during Artifactory start-up.
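
Until such validation exists, a certificate can be sanity-checked manually before it is copied into the trusted directory (the file name is a placeholder); the command fails to parse an invalid certificate:

$ openssl x509 -in peer-cert.pem -noout -text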






[RTFACT-20392] Go repo: Artifactory doesn't support dynamic versions Created: 19/Oct/19  Updated: 21/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Virtual Repositories
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ankush Chadha Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Steps to reproduce:

  1. Set GOPROXY to point to artifactory
  2. go get github.com/0xAX/notificator@master

Step 2 will fail since Artifactory doesn't support dynamic versions such as master. This is supported if GOPROXY is set to proxy.golang.org or athens or direct.






[RTFACT-20390] PyPi uploads using twine cause duplicate entries Created: 18/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: PyPI
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Patrick Russell Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Symptoms: Duplicate entries appear in the "simple.html" PyPi index file after uploading a twine package

Steps to reproduce:

1. Create the project folder:

mkdir twine-test

mkdir twine-test/example_pkg

cd twine-test/

2. Add a basic setup.py - Include underscores in the package name

import setuptools

setuptools.setup(
    name="infra_pypipkg_test",
    version="0.0.1",
    author="jfrog",
    author_email="",
    description="A small example package",
    url="",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.6',
)

 

3. Add a dummy "hello world" script

echo 'print "hello-world!"' > example_pkg/hello.py

 

4. Bundle it up using python - Do not upload

python3 setup.py sdist bdist_wheel

 

5. Twine upload the file:

twine upload --repository local dist/*

Uploading distributions to http://artifactory.com:8081/artifactory/api/pypi/pypi

Uploading infra_pypipkg_test-0.0.1-py3-none-any.whl

100%|███████████████████████████████████████████████████████████████| 4.70k/4.70k [00:00<00:00, 64.1kB/s]

Uploading infra_pypipkg_test-0.0.1.tar.gz

100%|███████████████████████████████████████████████████████████████| 3.51k/3.51k [00:00<00:00, 61.7kB/s]

 

After indexing, the package appears twice in the simple.html metadata:

<a data-requires-python=">=3.6" href="infra-pypipkg-test" rel="internal" >infra-pypipkg-test</a><br/>

<a data-requires-python=">=3.6" href="infra_pypipkg_test" rel="internal" >infra_pypipkg_test</a><br/>

 

Both link to the Underscore packages. From what I can tell, it might be caused by the folder name:

pypi-local/infra-pypipkg-test/0.0.1/infra_pypipkg_test-0.0.1-py3-none-any.whl

 

Only one POST was recorded for me:

20191018204116|298|REQUEST|127.0.0.1|admin|POST|/api/pypi/pypi|HTTP/1.1|200|3389

 






[RTFACT-20388] Build Info permission Created: 18/Oct/19  Updated: 15/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Build Info Repository, permissions
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Yann Chaysinh Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

As an Artifactory admin, I want to restrict visibility of Build Info based on the target repos.

Another solution would be to organize Build Info into folders by specifying a target folder when publishing builds, and to grant access to builds by specifying a regex on that folder.






[RTFACT-20387] Helm virtual aggregating an empty smart remote gives null when we try to get index.yaml Created: 18/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Balaji Satish Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

A Helm virtual repository aggregating an empty smart remote repository gives an error with [null]:

2019-10-18 14:18:53,333 [art-exec-349] [ERROR] (o.a.a.h.r.m.HelmVirtualMerger:213) - Couldn't read index file in remote repository testrepo-helm : null

Steps to reproduce:

1) Create a local Helm repository on an Artifactory instance.

2) Create a remote repository pointing to the above local repository on another Artifactory instance.

3) Create a virtual repo aggregating the empty smart remote repo.

4) Try to get the index.yaml through the virtual repo and observe that it returns [null]:

curl http://mill.jfrog.info:12036/artifactory/helm/index.yaml 

It should not return null values or an error message.






[RTFACT-20380] go (v1.13+) go get from github returns a bad pre release version names Created: 17/Oct/19  Updated: 17/Nov/19

Status: In Progress
Project: Artifactory Binary Repository
Component/s: Go
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Nadav Yogev Assignee: Nadav Yogev
Resolution: Unresolved Votes: 0
Labels: gomodules

Issue Links:
Dependency
Gantt Start to Start
has to be started together with RTFACT-20326 go (v1.13+) get github.com/coreos/etc... In Progress

 Description   

Starting from Go 1.13, validation was added for the version names returned when requesting info files.

Getting a Go pre-release version now fails in cases where the info file was created from GitHub, because the requested info file contains only the commit hash as the Version property.
The correct syntax is v0.0.0-{timestamp}-{commit-hash}; the returned syntax is {commit-hash}.

go list -m github.com/alecthomas/template@v0.0.0-20160405071501-a0175ee3bccc
go: finding github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc
go list -m: github.com/alecthomas/template@v0.0.0-20160405071501-a0175ee3bccc: proxy returned info for version a0175ee3bccc instead of requested version

This happens because the info file is wrong:

curl https://entplus.jfrog.io/artifactory/api/go/go/github.com/alecthomas/template/@v/v0.0.0-20160405071501-a0175ee3bccc.info 

{"name":"a0175ee3bccc567396460bf5acd36800cb10c49c","shortName":"a0175ee3bccc","version":"a0175ee3bccc","time":"2016-04-05T07:15:01Z"}

Going to the default Go proxy (or not using GOPROXY) will return this version with:
{"Version":"v0.0.0-20161220082320-a0175ee3bccc","Time":"2016-12-20T08:23:20Z"}

(name and shortName are omitted)






[RTFACT-20372] Add Cache-Control header, configurable for docker image Created: 17/Oct/19  Updated: 11/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: yamatsum Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When fetching a Docker image from an Artifactory Docker-type repository, there is no Cache-Control header in the response, so the blobs of the Docker image cannot be cached. Is it possible for Artifactory to enable cache settings?



 Comments   
Comment by Shu Kutsuzawa [ 11/Nov/19 ]

docker/distribution sets the header on the response in its blob server, as below:

distribution/blobserver.go at 749f6afb4572201e3c37325d0ffedb6f32be8950 · docker/distribution





[RTFACT-20371] Docker Image Fails to start with Oracle DB Created: 16/Oct/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Database, Docker, Docker Image
Affects Version/s: 6.13.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Tony Squier Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The entrypoint-artifactory.sh file does a waitForDB check to access the database by parsing DB_URL.

 

jdbc:oracle:thin:@<tns_entry>?TNS_ADMIN=<directory of the wallet> 

For a URL that uses a TNS entry this doesn't work, presumably because the hostname is in the tnsnames.ora and not the URL. To get around this one has to modify the entry point and remove the check.

Artifactory standalone in fact works fine with this DB configuration.

A possible solution would be to include a property that tells the startup to ignore this check (DB_CHECK_IGNORE) or find some other way to verify the DB is up. 

 






[RTFACT-20355] Ubuntu 14.04 is also supported according to this article Created: 16/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Nik Polovenko Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

These two articles mention that we support different versions of Ubuntu to install Xray.

One says Ubuntu 14 and another one says Ubuntu 16:

https://www.jfrog.com/confluence/display/EP/System+Requirements

https://www.jfrog.com/confluence/display/XRAY/Installing+Xray

 

Should we list both Ubuntu 14 and 16 in both articles? 

 






[RTFACT-20340] [Nuget] Authentication with API Key only Created: 16/Oct/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Yann Chaysinh Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

As a C# dev, I want to authenticate against Artifactory via my NuGet client by specifying my API key only.

https://docs.microsoft.com/en-us/nuget/reference/cli-reference/cli-ref-setapikey

Right now, the authentication only works with the "username/password" or "username/apiKey" pairs.

https://www.jfrog.com/confluence/display/RTF/NuGet+Repositories#NuGetRepositories-NuGetAPIKeyAuthentication
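
For illustration, the desired flow versus what currently has to be configured (the server URL and repository name are placeholders; the setapikey form is the one described in the linked NuGet CLI reference):

# Desired: API key only
nuget setapikey <API_KEY> -Source Artifactory

# Current: a username must always be supplied alongside the password/API key
nuget sources add -Name Artifactory -Source "https://<artifactory-host>/artifactory/api/nuget/<repo>" -UserName <user> -Password <api_key>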

 






[RTFACT-20337] Artifactory timing out exception Created: 16/Oct/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Server, Artifactory Home
Affects Version/s: None
Fix Version/s: 4.0.0

Type: Bug Priority: Normal
Reporter: Leela Padmaja Assignee: Yuval Reches
Resolution: Unresolved Votes: 0
Labels: UGA

Attachments: HTML File Artifactory Logs     HTML File routerlogs    

 Description   

We see that Artifactory times out while connecting to the Access API at start-up and fails to come up. This happens inconsistently when bringing the services down (docker-compose down) and starting them back up (docker-compose up). Attaching the logs here.

 

routerlogs

Artifactory Logs






[RTFACT-20336] daily backups occasionally fail Created: 16/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Backup
Affects Version/s: 6.12.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Tom Robinson Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Linux svl-artifactory.juniper.net 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  1. df -h /volume/ssd-storage04/

Filesystem                                  Size  Used Avail Use% Mounted on

10.160.0.154:/ssd_store/ssd_storage04/sspt   17T   13T  4.9T  72% /volume/ssd-storage04



 Description   

We get occasional backup failure emails like the following:

The following errors have occurred:

Export error: from: atom-npm:shadowfax to: /volume/ssd-storage04/sspt/tools/artifactory/backup/backup-daily/current/repositories/atom-npm reason: File /volume/ssd-storage04/sspt/tools/artifactory/backup/backup-daily/current/repositories/atom-npm/shadowfax exists and is not a directory. Unable to create directory.: File /volume/ssd-storage04/sspt/tools/artifactory/backup/backup-daily/current/repositories/atom-npm/shadowfax exists and is not a directory. Unable to create directory.
 
In this case, the file in question does in fact exist and is a regular file:
 

# ls -l /volume/ssd-storage04/sspt/tools/artifactory/backup/backup-daily/current/repositories/atom-npm
total 920
-rw-r----- 1 root root 909513 Oct  4 21:35 shadowfax
drwxr-x--- 2 root root   4096 Oct 14 16:42 shadowfax.artifactory-metadata
-rw-r----- 1 root root  14082 Oct  4 21:42 shadowfax-cli
drwxr-x--- 2 root root   4096 Oct 14 16:42 shadowfax-cli.artifactory-metadata

The artifacts look like this in Artifactory (the embedded screenshot image-2019-10-15-16-51-45-034.png could not be rendered; see the comment below for the layout).

What can I do to prevent these types of errors?



 Comments   
Comment by Tom Robinson [ 16/Oct/19 ]

The screenshot did not get transferred. Looks like:
v

>shadowfax-cli/
v shadowfax/
->shadowfax-2.1.3.tgz
->shadowfax-2.1.6.tgz
->shadowfax-2.1.7.tgz
->shadowfax-2.1.8.tgz
->shadowfax-2.1.9.tgz




[RTFACT-20335] Add option to change the username Artifactory takes from SAML SSO (NameID) Created: 15/Oct/19  Updated: 15/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: SAML SSO
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Patrick Russell Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, Artifactory is designed to use the SAML SSO NameID parameter for a user's username. There should be a way to update this behavior so another field can be used instead.

Artifactory already supports a custom field for the user's email, another block can be used for the username format.

By default, the NameID is the correct SAML field to use for a username. Some organizations have other use cases:

  • They have an AD service, and want to use the AD username that isn't passed as a nameID
  • A "unique" string is used instead of a human-readable username





[RTFACT-20334] Chart with invalid version number will be indexed but helm will not be able to add repo Created: 15/Oct/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Paul Pan Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None


 Description   

If a chart has an invalid version or appVersion value, Artifactory will still index it. However, a virtual repo containing such a chart cannot be added by helm.

Steps:

1. Create a chart with an appVersion or version in an invalid form:
appVersion: 9.745796e+09
version: 0.1.1+9745796176

If you check, however, 0.1.1+9745796176 is actually a valid semver 2.0 version.

2. run helm lint to validate the chart and helm package to package the chart.
3. Deploy the chart to artifactory helm local repo.
4. Include the local repo in a virtual.
5. run helm repo add to add the virtual helm repo to helm, you will get this error:

Error: Looks like "https://supportusw.jfrog.io/supportusw/bhelm" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal number into Go struct field ChartVersion.appVersion of type string

If the repository was previously added, helm repo update will not reproduce this. But if you remove and re-add the repo, you will see this error.



 Comments   
Comment by Shannon Carey [ 30/Oct/19 ]

This can also happen when a Chart happens to have a version that looks for example like "319e199". This is apparently a hex string. When Artifactory serializes this chart metadata into an index.yaml file, it does not explicitly tag or quote the value.

As a specific example, our index.yaml contained:

  - created: 2019-10-06T01:07:11.776Z
    description: Simple ranger setup. Standalone, with mariadb. Non-persistant
    digest: 6d04ca94542326a1403c20c293eec6965f9669a1be18ebd885188788fb2a265a
    home: https://github.com/planetf1/ranger-docker
    keywords:
    - apache-ranger, ranger, security
    maintainers:
    - email: nigel.l.jones+gh@gmail.com
      name: Nigel Jones
    name: egdp-ranger
    sources:
    - https://github.com/planetf1/ranger-docker
    urls:
    - https://ourserver.example.com/ourrepo/egdp-ranger-504e199.tgz
    version: 504e199

The YAML 1.2 spec indicates that the "[+-]" character after the "e" in a float is optional. Therefore, when the version is read it will be interpreted as a floating point number in scientific format. Helm however expects the version to be a string: https://github.com/helm/helm/blob/master/pkg/chart/metadata.go#L37

It seems like when Artifactory creates the YAML, it should either explicitly tag the field as a string, or quote it so that it can be parsed more reliably.
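
For example, either of the following forms for the version field in the generated index.yaml would force a string interpretation (a sketch of the suggested output, not what Artifactory currently writes):

    version: "504e199"
    version: !!str 504e199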

Related discussions:

https://github.com/terraform-providers/terraform-provider-helm/issues/368

https://github.com/go-yaml/yaml/issues/290





[RTFACT-20330] Updating contentSynchronisation property values via REST API using PATCH request Created: 15/Oct/19  Updated: 22/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Tatarao Vana Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

 

When updating nested properties (using PUT/POST) relating to smart remote repositories via the REST API, if you omit any of the nested properties from the payload, they will get reset to false.

Implement a PATCH request to update a single property that doesn't override the other parameters with default values.

Ex:

{
  "contentSynchronisation": {
    "properties": {
      "enabled": false
    }
  }
}
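
A sketch of how the proposed call could look against the repository configuration endpoint (PATCH support is the request here, not an existing API; the host and repository key are placeholders):

$ curl -u admin:<password> -X PATCH -H "Content-Type: application/json" \
      "https://<artifactory-host>/artifactory/api/repositories/<smart-remote-repo>" \
      -d '{"contentSynchronisation":{"properties":{"enabled":false}}}'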






[RTFACT-20329] Artifact Repository Browser Paging Created: 15/Oct/19  Updated: 07/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.0, 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Johannes Hublitz Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Hey,

we have over 873 repositories and we are pretty unhappy with the new "feature" where the tree browser loads repositories and their content paginated. If we search for a repository that isn't loaded yet, we need to scroll down all the way to extend the list of repositories (and the reloading takes some time).

It would be nice if the repository list extended automatically while typing the name of the repository, or if there were an option to disable the paging feature.

 



 Comments   
Comment by Chris Zardis [ 07/Nov/19 ]

I agree - it is odd behaviour to hide unloaded repositories from the search functionality. Though it makes sense technically given JFrog's choice of implementation, it is far from logical from an end user's perspective. I would encourage a reconsideration of this approach.

In the meantime, the following setting may assist (we've been advised that setting it to an appropriately high level effectively disables it)

artifactory.ui.continue.paging.limit=10000





[RTFACT-20328] Increased precision for logs Created: 15/Oct/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Logging
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Joe Henshaw Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory's logs currently have millisecond precision. This is fine for analysing logs in a single-node setup, but it is not ideal for a busy HA setup where a load balancer distributes requests across the nodes on a per-request basis (as is the case in Artifactory's generated HTTPd config) and the logs are sent to another platform, e.g. Elasticsearch, for further analysis. Millisecond precision in this case means that the requests can appear in a random order.

Can we therefore increase the precision to be nanosecond?

It might be worth extending this to the other application logs, too - access, app, event.






[RTFACT-20327] Artifactory allows Non Ascii characters in filenames being uploaded to the DB. Created: 14/Oct/19  Updated: 23/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.6.1, 6.11.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Sai Undurthi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory, ascii, non-ascii
Environment:

MySQL DB

Collation utf8_unicode_ci and utf8_bin


Attachments: PNG File Screen Shot 2019-10-14 at 2.50.31 PM.png     PNG File Screen Shot 2019-10-14 at 2.51.24 PM.png    

 Description   

A file with non-ASCII Chinese characters in its name was added to an Artifactory generic repository. This creates a DB entry with the same Chinese-character name, as shown in the attached screenshots.

 

Include/Exclude patterns won't work either.

How do we block these files from being uploaded to artifactory?






[RTFACT-20325] ShardingBinaryProvider/ClusterShardingBinaryProvider might abort the write transaction on optimization check error while the binary was already added Created: 14/Oct/19  Updated: 13/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Binary Provider , Binarystore
Affects Version/s: 6.9.5
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shay Bagants Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship
relates to RTFACT-19527 Performance degradation when redundan... Open

 Description   

When deploying an artifact with a new binary to Artifactory (a user artifact deployment, or a system artifact deployment such as a package index file) while using a sharding storage configuration, immediately after the binary is added to the sub storage providers Artifactory checks whether the binary exists in more providers than the redundancy configuration requires; if it does, an internal flag is set to indicate that storage optimization is required.
At this optimization-check stage the binary has already been added to the relevant providers, but in some scenarios the optimization check itself may fail against the sub providers (e.g. a socket timeout against a remote binary provider).
Currently, on such a failure an exception is thrown and the transaction is aborted. See example:

2019-09-09 14:00:00,000 [http-nio-8080-exec-1111] [ERROR] (o.a.r.c.e.m.GlobalExceptionMapper:48) - 503 : java.net.SocketTimeoutException: Read timed out
org.jfrog.storage.binstore.exceptions.BinaryStorageException: 503 : java.net.SocketTimeoutException: Read timed out
        at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.addStream(ShardingBinaryProviderImpl.java:238)
        at org.jfrog.storage.binstore.providers.FileCacheBinaryProviderImpl.addStream(FileCacheBinaryProviderImpl.java:126)
        at org.artifactory.storage.db.binstore.service.BinaryServiceImpl.addBinary(BinaryServiceImpl.java:387)
        at sun.reflect.GeneratedMethodAccessor589.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
        at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
        at com.sun.proxy.$Proxy195.addBinary(Unknown Source)
        at org.artifactory.repo.service.RepositoryServiceImpl.saveResource(RepositoryServiceImpl.java:1808)
        at org.artifactory.repo.service.RepositoryServiceImpl.saveFileInternal(RepositoryServiceImpl.java:629)
        at sun.reflect.GeneratedMethodAccessor600.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:201)
        at com.sun.proxy.$Proxy176.saveFileInternal(Unknown Source)
        at org.artifactory.addon.gems.handlers.GemsRemoteRequestHandler.saveDependencyToCache(GemsRemoteRequestHandler.java:414)
        at org.artifactory.addon.gems.handlers.GemsRemoteRequestHandler.lambda$createAndSaveDependencies$3(GemsRemoteRequestHandler.java:389)
        at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
        at org.artifactory.addon.gems.handlers.GemsRemoteRequestHandler.createAndSaveDependencies(GemsRemoteRequestHandler.java:388)
        at org.artifactory.addon.gems.handlers.GemsRemoteRequestHandler.downloadAllDependencies(GemsRemoteRequestHandler.java:346)
        at org.artifactory.addon.gems.handlers.GemsRemoteRequestHandler.handleDependencies(GemsRemoteRequestHandler.java:106)
        at org.artifactory.addon.gems.handlers.GemsVirtualRequestHandler.handleDependencies(GemsVirtualRequestHandler.java:115)
        at org.artifactory.addon.gems.GemsResource.dependencies(GemsResource.java:135)
        at sun.reflect.GeneratedMethodAccessor608.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
        at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
        at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
        at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:186)
        at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:96)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:427)
        at org.artifactory.webapp.servlet.AccessFilter.useAnonymousIfPossible(AccessFilter.java:392)
        at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:210)
        at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:74)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
        at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
        at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
        at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:685)
        at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:279)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:800)
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:800)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1471)
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out
        at org.jfrog.storage.binstore.client.RemoteBinaryProvider.exists(RemoteBinaryProvider.java:102)
        at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.findNumberOfProvidersWithFile(ShardingBinaryProviderImpl.java:290)
        at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.cleanAndVerify(ShardingBinaryProviderImpl.java:279)
        at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.addStream(ShardingBinaryProviderImpl.java:233)
        ... 99 common frames omitted
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
        at org.jfrog.client.http.CloseableHttpClientDecorator.doExecute(CloseableHttpClientDecorator.java:107)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
        at org.jfrog.storage.binstore.client.RemoteBinaryProvider.execute(RemoteBinaryProvider.java:427)
        at org.jfrog.storage.binstore.client.RemoteBinaryProvider.execute(RemoteBinaryProvider.java:421)
        at org.jfrog.storage.binstore.client.RemoteBinaryProvider.exists(RemoteBinaryProvider.java:95)
        ... 102 common frames omitted

Artifactory should not abort storage transactions on optimization check failures.






[RTFACT-20323] Automate support bundle upload through Artifactory Created: 11/Oct/19  Updated: 11/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Home
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ronen Lewit Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

<This request came from Apple CA>

The customer asked to have Artifactory build and send the release bundle instead of having an external service handle it.






[RTFACT-20322] Limit users' AQL resource usage Created: 11/Oct/19  Updated: 11/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: AQL
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ronen Lewit Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

This request came from Apple CA.

They manage a shared Artifactory service and would like to prevent users from running inefficient AQL queries on the shared service, which would have a service-level impact on the rest of the users.

Today they have an external cron job that monitors long-running AQL queries and kills them.
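
For illustration, a minimal sketch (host and credentials are hypothetical) of the kind of unbounded query such a limit would need to catch; a bare wildcard match forces a scan over every item in every repository:

    $ curl -u someuser:password \
        -H "Content-Type: text/plain" \
        -X POST "http://artifactory.example.com/artifactory/api/search/aql" \
        -d 'items.find({"name":{"$match":"*"}}).include("repo","path","name")'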

 






[RTFACT-20320] HA Configuration won't start due to "No valid installed license found." Created: 11/Oct/19  Updated: 11/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: High Availability
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Max-Florian Bidlingmaier Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We have an Artifactory setup with two nodes running in Docker. The primary node is configured for HA and is up. Two enterprise licenses are added, where one is used by this primary node and one is free. On startup, the secondary node finds the configured database but stops with:

 

artifactory | 10:34:42.883 [localhost-startStop-1] WARN org.artifactory.addon.ConverterBlockerImpl - No valid installed license found. Blocking conversion
artifactory | 10:34:42.885 [localhost-startStop-1] ERROR org.artifactory.converter.ConvertersManagerImpl - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: Converter can't run since no matching license found, please add new license

ha-node.properties looks like this on the secondary node (on the primary node, primary is true and any reference points to the primary node):

node.id=lxart501p.vkbads.de
context.url=http://lxart501p.vkbads.de:8081/artifactory
access.context.url=http://lxart501p.vkbads.de:8081/access
membership.port=0
primary=False
artifactory.ha.data.dir=/var/opt/jfrog/artifactory/data
artifactory.ha.backup.dir=/backup
hazelcast.interface=10.66.17.*
artifactory.context.path=/

 

 

There is no reference in any log that the secondary node tried to get a license from the primary node.



 Comments   
Comment by Max-Florian Bidlingmaier [ 11/Oct/19 ]

Please close this issue, resolved by updating primary node to 6.13.1 from 6.13.0

thank you





[RTFACT-20318] Support more remote Repository Types Created: 11/Oct/19  Updated: 22/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Remote Repository
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Rhys Evans Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: repository


 Description   

Hi

 

At present you support HTTP- and HTTPS-based remote repositories. However, we are seeing more repositories hosted on native cloud storage, e.g. Amazon S3 and Google GCS.

 

Can we get these supported?

 

Thanks






[RTFACT-20311] Tokens are not visible after creation Created: 10/Oct/19  Updated: 14/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Tokens, REST API, Web UI
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: manu nicolas Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None
Environment:

Artifactory Pro running in Docker using official image



 Description   

Access tokens are created through the API. They work correctly and can be used to interact with said API, with the correct permissions.

However, the tokens are not visible in the UI ( /webapp/#/admin/security/access_tokens ) and an API call to /api/security/token returns "tokens" : [ ]

The tokens are visible in the access_tokens table.
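
For reference, a minimal sketch of the two calls involved (host, credentials and scope are placeholders): the token is created via POST /api/security/token and works for authentication, but then does not show up in the listing call:

    $ curl -u admin:password -X POST "http://artifactory.example.com/artifactory/api/security/token" \
        -d "username=ci-user" -d "scope=member-of-groups:readers"
    $ curl -u admin:password "http://artifactory.example.com/artifactory/api/security/token"
    { "tokens" : [ ] }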






[RTFACT-20309] Artifactory AQL/REST Get Artifacts and Versions Created: 10/Oct/19  Updated: 10/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.10.2, 6.12.2, 6.13.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Marcel Gredler Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: AQL, Artifactory, REST


 Description   

Currently it is possible to retrieve artifacts via AQL or the REST API.

  • Retrieving them via AQL allows retrieving the meta-artifacts, within which the artifact+version information is available, or retrieving the information from properties (e.g. as is the case for Docker images)
  • Retrieving them via REST returns the actual artifact path, which may not contain the meta-information (artifact name, version, all versions, etc.) or be usable to retrieve version information

All versions for an artifact can be retrieved through the meta-artifacts (e.g. maven-metadata.xml) or through REST (the artifact name is required, which is not returned by the REST artifact search).

  • This search returns the version associated with an artifact, but does not include the path to the artifacts behind this version.

Because of all this, if you have a use case that requires a list of all artifacts, their versions, and the paths to those versions, you may have to combine multiple AQL and REST calls, and even more of them should a DELETE based on artifact+version be added.

Therefore it is currently easier to query the meta-artifacts of all artifacts within a repository and parse them to build your own context. Be warned that this can be tedious if you use many different package types, as some use XML files, some require querying properties, and some use JSON.
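
For context, a minimal sketch of the AQL half of such a combination today (repository name, host and credentials are hypothetical): it lists the meta-artifact paths, which then still have to be parsed and followed by per-version REST calls:

    $ curl -u reader:password \
        -H "Content-Type: text/plain" \
        -X POST "http://artifactory.example.com/artifactory/api/search/aql" \
        -d 'items.find({"repo":"libs-release-local","name":"maven-metadata.xml"}).include("repo","path","name")'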


Instead of all of this it would be better if there were a REST endpoint that returns the following meta-information of all artifacts (or artifacts beneath a certain path) within a repository:

  • Artifact-Name
  • Path to meta-artifact
  • Artifact-Versions
  • For each version
    • The path to the version-folder / version-artifact

With this it would be easy to get an overview of the artifacts and versions and easily run additional GET queries or a DELETE.






[RTFACT-20308] JFrog Xray doesn't have "(Optional)" Created: 09/Oct/19  Updated: 22/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Nik Polovenko Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I believe Xray with a Pro X license is also optional, so the documentation should mark it as "(Optional)".






[RTFACT-20305] Build Promotion without overwrite permission fails with unique pattern in PyPi repository Created: 09/Oct/19  Updated: 12/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.7.0, 6.12.2, 6.13.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Vignesh S Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

The second build promotion fails for PyPI (local) repositories for a non-admin user who does not have delete/overwrite permissions on the repositories, even though the published artifact names are different.

Step 1: Create two Pypi local repositories named pypi-local and pypi-release

Step 2: Assign Read, annotate, deploy and delete permissions to pypi-local repository for a user.

Step 3: Assign Read, annotate, deploy permissions to pypi-release repository for a user and configure JFrog CLI with the user.

Step 4: Uploaded the package pokemonscli-1.0.2.tar.gz with build number 1 to pypi-local repository
             jfrog rt bce pokemon 1
             jfrog rt upload --build-name pokemon --build-number 1 pokemonscli-1.0.2.tar.gz pypi-local/pokemon/

Step 5: Published the build info for package pokemonscli-1.0.2.tar.gz with build number 1
             jfrog rt build-publish pokemon 1

Step 6: Promoted the package from pypi-local to pypi-release repository. Build promoted successfully (build number 1).
             jfrog rt build-promote pokemon 1 pypi-release

Step 7: Uploaded the package pokemonscli-1.0.3.tar.gz to local repository with build number 2 to pypi-local repository.
             jfrog rt bce pokemon 2
             jfrog rt upload --build-name pokemon --build-number 2 pokemonscli-1.0.3.tar.gz pypi-local/pokemon/

Step 8: Published the build info for package pokemonscli-1.0.3.tar.gz with build number 2.
             jfrog rt build-publish pokemon 2

Step 9: Tried Promoting the package from pypi-local to pypi-release repository with build number 2.
             jfrog rt build-promote pokemon 2 pypi-release
Build promotion failed with “User doesn't have permissions to override 'pypi-release:proj1'. Needs delete permissions.”

Error Snippet from Artifactory:

2019-10-09 09:57:39,006 [http-nio-8081-exec-10] [ERROR] (o.a.r.s.m.BaseRepoPathMover:457) - User doesn't have permissions to override 'pypi-release:proj'. Needs delete permissions.

2019-10-09 09:57:39,031 [http-nio-8081-exec-10] [INFO ] (o.a.b.BuildPromotionHelper:214) - Skipping promotion status update: item promotion was completed with errors and warnings.

Step 10: Tried to Upload, Publish and Promote different packages mentioned below
cran-0.1.14.tar.gz - build number 3
cran-0.1.13.tar.gz - build number 4
fastcluster-1.1.25.tar.gz - build number 5
fastcluster-1.1.24.tar.gz - build number 6
All the build promotions are successful (build number 3,4,5,6)

Step 11: Take the same package pokemonscli-1.0.3.tar.gz which was tried to promote with build number 2 and upload it with a different build number 7
Step 12: Publish the build info for pokemonscli-1.0.3.tar.gz with build number 7.
Step 13: Tried Promoting the package from pypi-local to pypi-release repository with build number 7. Build promotion failed with

“User doesn't have permissions to override 'pypi-release:proj1'. Needs delete permissions.”

Step 14: Not able to promote the package pokemonscli-1.0.3.tar.gz even with a different build number (as mentioned in step 13).

Step 15: Could observe that only the second build promotion is failing for a non-admin user who does not have permissions to delete/overwrite in the pypi-release repository.

Step 16: Observed the same behaviour with older artifactory versions.

 



 Comments   
Comment by Speechmatics ES [ 09/Oct/19 ]

We are the initial reporter of this issue and have been able to reproduce it on the original repository used in the initial report, using the same packages: cran-0.1.13.tar.gz and cran-0.1.14.tar.gz





[RTFACT-20304] Impossible to upgrade an Artifactory that doesn't have a license Created: 09/Oct/19  Updated: 09/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.0.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Tomio Tetzlaff Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

It is impossible to upgrade an Artifactory that doesn't have a license:

2019-10-09T07:52:49.105Z [jfrt ] [WARN ] [8b5aecb645a84772] [o.a.a.ConverterBlockerImpl:68 ] [ocalhost-startStop-2] - No valid installed license found. Blocking conversion
2019-10-09T07:52:49.108Z [jfrt ] [ERROR] [8b5aecb645a84772] [.a.c.ConvertersManagerImpl:214] [ocalhost-startStop-2] - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: Converter can't run since no matching license found, please add new license

Artifactory fails to start.






[RTFACT-20303] MS SQL: Wrong collation check on startup Created: 09/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Database
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Stefan Felkel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When starting Artifactory, a log message appears

2019-10-09 07:08:32,319 [art-init] [ERROR] (o.a.s.d.v.GeneralValidator:45) - DATABASE SCHEME BAD COLLATION -> latin1_general_ci_as

 

This log message is wrong, because the database collation is correct:

 

SELECT CONVERT (varchar(256), DATABASEPROPERTYEX('artifactory','collation'));

Result: Latin1_General_CS_AI

 

Did you query the SQL Server's default collation instead?

SELECT CONVERT (varchar(256), SERVERPROPERTY('collation'));

Result: Latin1_General_CI_AS

 

Please change that check, because such logging is highly confusing.

 

 

 



 Comments   
Comment by Joe Henshaw [ 16/Oct/19 ]

Duplicates RTFACT-20297.





[RTFACT-20302] copying artifacts from generic to non-generic repos by UI Created: 09/Oct/19  Updated: 09/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: David Shin Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

It is impossible to copy artifacts from generic to non-generic repositories through the UI, although the REST API works.
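
For reference, a minimal sketch of the REST call that does work, using the Copy Item API (host, repository names and path are hypothetical):

    $ curl -u admin:password -X POST \
        "http://artifactory.example.com/artifactory/api/copy/generic-local/my/package.tgz?to=/npm-local/my/package.tgz"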






[RTFACT-20299] Display part of the Group description in the list of groups so that SAML users can distinguish between them Created: 08/Oct/19  Updated: 08/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Gabriel Kohen Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory
Environment:

Artifactory SaaS



 Description   

In order to use a SAML group, you have to add it as a group in Artifactory. The exact name in SAML and Artifactory has to match. The problem is that the name is a concatenation of hex characters. Example: 2b62b405-1352-9ed3-9984-b3415091d251.

In order to make it clear what the actual group name is, i.e. "dc-developers", we populate the description field of the group. It's quite cumbersome to drill into each group to find out what the actual group code (aka name) stands for.

It would be useful to have the description (or part of it) in the groups list UI.
 
 






[RTFACT-20298] Getting a list or count of running plugins. Created: 08/Oct/19  Updated: 23/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Dimitar Sakarov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We need a way of finding out whether any plugins are currently being executed in Artifactory, and which ones.

Ideally we would get the list of currently executing plugins, but at least a count of plugins being executed would also help.

Additionally, is there a way to tell how many internal tasks are being executed, and whether the instance is busy or idle?

We also need a way to find out if the server is idle, so that we know when to start resource-consuming tasks from a plugin.
We want to start them when no plugins are running and, ideally, no backup is in progress, but we don't have a way to get information about currently running backups either.

Thank you a lot in advance!






[RTFACT-20295] LDAP / AD: Add and use "displayName" in user administration Created: 08/Oct/19  Updated: 09/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: LDAP
Affects Version/s: 6.12.2
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Stefan Felkel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

LDAP authentication works, but currently, only the user name (sAMAccountName) is displayed in the user list.

When adding users to groups, only this user name is displayed and I have no link to the full name or display name of the user.

Example:

  (Embedded screenshot image-2019-10-08-10-36-57-484.png could not be rendered in this export.)

Please support retrieving the "displayName" via LDAP for the user together with displaying the user with this string in all according user lists.

 

 






[RTFACT-20294] API Artifact Version Search only returns 1000 results Created: 08/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ohad Levy Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The Artifact Version Search REST API only returns 1000 results.

 

Configuring the following system properties does not resolve the issue:

 

artifactory.search.UserQueryLimit=6000

artifactory.search.userSqlQueryLimit=6000

artifactory.search.maxResults=6000

 

Steps to reproduce:

  1. Set up Artifactory version 6.12.1
  2. Upload 4k+ Maven packages (that follow the Maven layout convention)
  3. Run the following REST API example:
         $ curl -uadmin:password "http://localhost:8082/artifactory/api/search/versions?g=com.st-js&a=jquery&v=*&repos=maven"






[RTFACT-20291] Accurate Response for Build Upload Created: 07/Oct/19  Updated: 10/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Angello Maggio Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Currently the build upload REST API returns a 204 No Content response regardless of the actual result.

This makes it difficult in an automation pipeline to identify whether there were errors that prevented the deployment.

Doing a direct upload to the build info repository returns 201 as expected; the API should do the same.
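
For context, a minimal sketch of the call in question (host and file name are hypothetical): the Build Upload API (PUT /api/build) answers 204 No Content even when the build info is not accepted, whereas a direct deployment of the same JSON into a build-info repository answers 201 on success:

    $ curl -u ci-user:password -X PUT \
        -H "Content-Type: application/json" \
        -d @build-info.json \
        "http://artifactory.example.com/artifactory/api/build"
    (returns 204 No Content regardless of the actual outcome)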






[RTFACT-20290] When editing a user and trying to change the password to fewer than 8 chars, a wrong message is shown Created: 07/Oct/19  Updated: 17/Nov/19

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: 6.13.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ariel Seftel Assignee: Barak Hacham
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency
Trigger

 Description   

Steps to reproduce:

  1. Create a user in Artifactory with a valid 8-char password.
  2. After the user was created successfully, edit the user.
  3. Enter 3 chars in the password field and move to another box ('confirm pass' for example).
  4. You will see an error message that you need at least 4 chars (THIS IS WRONG, you need 8 chars).
  5. Enter 4 chars in the 'password' and the 'confirm password' fields.
  6. Press save.
  7. You will get the error message "User X already exist".
  8. This is the second bug in this section, as the error should be related to the password policy.
  9. Change the password to 8 chars and hit save; it will work.





[RTFACT-20289] RPM Weak and Very Weak Dependency Functionality Created: 07/Oct/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Dependencies, RPM
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Ryne Williams Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory automatically picks up on an RPM's "Requires:" tags for upstream dependencies and "Provides:" tags for downstream dependencies such that when an RPM is deployed to Artifactory, the tags are reflected as sections of the Artifact's "RPM Info" tab.

RPM has, since v4.12, supported weak and very weak upstream/downstream dependency tagging through the use of the "Recommends:", "Suggests:", "Supplements:", and "Enhances:" tags. Our use case would benefit from Artifactory reflecting these tags, and we are requesting their inclusion in the RPM Info tab alongside the regular dependencies. Specifically our system use case would benefit from this feature for reasons including the following:

  1. We need to be able to query (using AQL) for all upstream providers of a weak dependency, so we can install them into a full system configuration (if they exist).  But unlike hard dependencies, a missing weak dependency isn’t considered an error.
  2. After selecting a set of packages (using AQL) to assemble into a full system configuration, we need to be able to query for unsatisfied weak dependencies of that system.  This is used to produce a report for human consumption.
  3. We need to be able to query for all downstream consumers of a weak dependency, so we can determine impacts of changes to the upstream source.
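
For illustration, a sketch of how these tags surface on an RPM package (file name is hypothetical); rpm 4.12+ exposes each weak-dependency class through its own query selector:

    $ rpm -qp --recommends  my-package-1.0-1.el7.x86_64.rpm   # weak forward dependencies
    $ rpm -qp --suggests    my-package-1.0-1.el7.x86_64.rpm   # very weak forward dependencies
    $ rpm -qp --supplements my-package-1.0-1.el7.x86_64.rpm   # weak reverse dependencies
    $ rpm -qp --enhances    my-package-1.0-1.el7.x86_64.rpm   # very weak reverse dependencies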

 

 






[RTFACT-20287] Cannot delete repository Created: 07/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.13.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Stefan Felkel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When trying to delete a repository (e.g. "example-repo-local"), an error message appears:

"Deleting repo 'example-repo-local' failed: Could not check if Repo path 'example-repo-local:' is related to a Release Bundle"

and an exception is logged in the logfile :

 

2019-10-07 16:15:13,790 [http-nio-8081-exec-4] [ERROR] (o.a.u.r.s.a.c.r.DeleteRepositoryConfigService:103) - Deleting repo 'example-repo-local' failed: Could not check if Repo path 'example-repo-local:' is related to a Release Bundle
org.jfrog.storage.StorageException: Could not check if Repo path 'example-repo-local:' is related to a Release Bundle
    at org.artifactory.release.bundle.ReleaseBundleServiceImpl.isRepoPathRelatedToBundle(ReleaseBundleServiceImpl.java:184)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy254.isRepoPathRelatedToBundle(Unknown Source)
    at org.artifactory.addon.release.bundle.interceptor.ReleaseBundleInterceptor.assertPathNotRelatedToBundle(ReleaseBundleInterceptor.java:53)
    at org.artifactory.addon.release.bundle.interceptor.ReleaseBundleInterceptor.assertDeleteRepoAllowed(ReleaseBundleInterceptor.java:44)
    at org.artifactory.repo.interceptor.storage.StorageInterceptorsImpl.assertDeleteRepoAllowed(StorageInterceptorsImpl.java:90)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy169.assertDeleteRepoAllowed(Unknown Source)
    at org.artifactory.repo.service.RepositoryServiceImpl.removeRepository(RepositoryServiceImpl.java:2895)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
    at com.sun.proxy.$Proxy155.removeRepository(Unknown Source)
    at org.artifactory.ui.rest.service.admin.configuration.repositories.DeleteRepositoryConfigService.execute(DeleteRepositoryConfigService.java:95)
    at org.artifactory.rest.common.service.ServiceExecutor.process(ServiceExecutor.java:38)
    at org.artifactory.rest.common.resource.BaseResource.runService(BaseResource.java:127)
    at org.artifactory.ui.rest.resource.admin.configuration.repositories.RepoConfigResource.deleteRepository(RepoConfigResource.java:172)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:191)
    at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:427)
    at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:214)
    at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
    at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
    at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
    at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: No column name was specified for column 1 of 'aa'.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1621)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:592)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:522)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2935)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:223)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:444)
    at org.jfrog.storage.JdbcHelper.executeSelect(JdbcHelper.java:184)
    at org.jfrog.storage.JdbcHelper.executeSelect(JdbcHelper.java:152)
    at org.artifactory.storage.db.bundle.dao.ArtifactBundlesDao.isDirectoryRelatedToBundle(ArtifactBundlesDao.java:347)
    at org.artifactory.release.bundle.ReleaseBundleServiceImpl.isRepoPathRelatedToBundle(ReleaseBundleServiceImpl.java:178)
    ... 97 common frames omitted


 Comments   
Comment by Stefan Felkel [ 07/Oct/19 ]

Additional info: stepping back to 6.12.2 works.

Comment by Joe Henshaw [ 16/Oct/19 ]

Guessing you're using an MSSQL database too? See RTFACT-20296.

Comment by Stefan Felkel [ 16/Oct/19 ]

Yes, I raised that problem with your helpdesk and created this issue; a second issue was then created.

You can close and link this issue.





[RTFACT-20286] Some artifactory logs do not have OpenTracing trace ID Created: 07/Oct/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Logging
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Alex Dvorkin Assignee: Uriah Levy
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Trigger
was triggered by RTFACT-19582 OpenTracing support in Artifactory Resolved

 Description   

From the sanity run, these are the logs lacking trace ID:

2019-10-03T09:11:34.830Z [jfrt ] [INFO ] [                ] [o.j.c.w.FileWatchingManager:75] [Thread-4            ] - Starting watch of folder configurations
2019-10-03T09:12:04.941Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  create for config 'artifactory.security.artifactory.key'
2019-10-03T09:14:05.947Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.md5sum.groovy was successful
2019-10-03T09:14:05.966Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.md5sum.groovy was successful
2019-10-03T09:14:05.980Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.md5sum.groovy was successful
2019-10-03T09:14:38.449Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.md5sum.groovy was successful
2019-10-03T09:14:38.462Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.md5sum.groovy was successful
2019-10-03T09:14:55.676Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:14:55.692Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:14:55.981Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:14:55.995Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:15:12.116Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.plugin.altAllResponses.groovy'
2019-10-03T09:15:12.147Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:15:12.159Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.altAllResponses.groovy was successful
2019-10-03T09:19:10.005Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.plugin.promotions_test.groovy'
2019-10-03T09:19:10.020Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.promotions_test.groovy was successful
2019-10-03T09:19:10.030Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.promotions_test.groovy was successful
2019-10-03T09:19:10.039Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.promotions_test.groovy was successful
2019-10-03T09:21:03.055Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-03T09:21:03.089Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.9703.201910030921002912 was successful
2019-10-03T09:21:03.098Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.9703.201910030921002912 was successful
2019-10-03T09:21:03.107Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.9703.201910030921002912 was successful
2019-10-03T09:21:04.262Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.security.artifactory.key'
2019-10-03T09:21:04.278Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:04.287Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:04.296Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:07.073Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-03T09:21:07.108Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4597.201910030921006939 was successful
2019-10-03T09:21:07.116Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4597.201910030921006939 was successful
2019-10-03T09:21:07.123Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4597.201910030921006939 was successful
2019-10-03T09:21:49.854Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:49.863Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:49.878Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:52.547Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-03T09:21:52.566Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key was successful
2019-10-03T09:21:52.567Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.security.artifactory.key.4821.201910030921052542'
2019-10-03T09:21:52.582Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4821.201910030921052542 was successful
2019-10-03T09:21:52.589Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4821.201910030921052542 was successful
2019-10-03T09:21:52.598Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.key.4821.201910030921052542 was successful
2019-10-03T09:27:02.343Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.security.artifactory.gpg.public'
2019-10-03T09:27:02.375Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.public was successful
2019-10-03T09:27:02.384Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.public was successful
2019-10-03T09:27:02.391Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.public was successful
2019-10-03T09:27:02.413Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.public was successful
2019-10-03T09:27:02.474Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.private was successful
2019-10-03T09:27:02.482Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.private was successful
2019-10-03T09:27:02.498Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.security.artifactory.gpg.private was successful
2019-10-03T09:28:19.622Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.isOfficial.groovy was successful
2019-10-03T09:28:19.630Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.plugin.isOfficial.groovy was successful
2019-10-03T09:30:49.347Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.cluster.license'
2019-10-03T09:30:49.955Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.cluster.license was successful
2019-10-03T09:30:51.888Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.cluster.license'
2019-10-03T09:30:52.084Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.cluster.license was successful
2019-10-03T09:30:53.007Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.cluster.license'
2019-10-03T09:30:53.613Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.cluster.license was successful
2019-10-03T09:30:54.888Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 5c259e558337]  detected local  modify for config 'artifactory.cluster.license'
2019-10-03T09:30:55.411Z [jfrt ] [INFO ] [                ] [p.HaPropagationServiceImpl:504] [Thread-4            ] - Propagation of artifactory.cluster.license was successful
2019-10-07T12:26:38.682Z [jfrt ] [INFO ] [                ] [o.j.c.w.FileWatchingManager:75] [Thread-4            ] - Starting watch of folder configurations
2019-10-07T12:27:15.797Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  create for config 'artifactory.security.artifactory.key'
2019-10-07T12:28:23.969Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.md5sum.groovy'
2019-10-07T12:28:25.568Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.md5sum.groovy'
2019-10-07T12:28:26.653Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.altAllResponses.groovy'
2019-10-07T12:28:29.105Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:29.231Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-07T12:28:29.234Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.9771.201910071228029221'
2019-10-07T12:28:29.242Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key'
2019-10-07T12:28:29.254Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:29.263Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:29.269Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:29.873Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.README.md'
2019-10-07T12:28:29.879Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.altAllResponses.groovy'
2019-10-07T12:28:29.884Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.md5sum.groovy'
2019-10-07T12:28:39.826Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:39.861Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-07T12:28:39.881Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.9771.201910071228029221'
2019-10-07T12:28:39.890Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:39.969Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:28:40.409Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.README.md'
2019-10-07T12:28:40.416Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.altAllResponses.groovy'
2019-10-07T12:28:40.421Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.md5sum.groovy'
2019-10-07T12:29:14.641Z [jfrt ] [INFO ] [                ] [ifactoryApplicationContext:518] [ocalhost-startStop-4] - Artifactory application context set to NOT READY by destroy
2019-10-07T12:29:14.643Z [jfrt ] [INFO ] [                ] [.j.c.w.FileWatchingManager:127] [Thread-4            ] - Watch service ended on destroy
2019-10-07T12:29:14.643Z [jfrt ] [INFO ] [                ] [.j.c.w.FileWatchingManager:133] [Thread-4            ] - End watch of folder configurations
2019-10-07T12:29:14.649Z [jfrt ] [INFO ] [                ] [ifactoryApplicationContext:356] [ocalhost-startStop-4] - Destroying 57 Artifactory Spring Beans
2019-10-07T12:29:14.650Z [jfrt ] [INFO ] [                ] [askServiceDescriptorHandler:43] [ocalhost-startStop-4] - Removing all job Indexer from task service handler.
2019-10-07T12:29:14.659Z [jfrt ] [INFO ] [                ] [askServiceDescriptorHandler:43] [ocalhost-startStop-4] - Removing all job Garbage Collector from task service handler.
2019-10-07T12:29:14.662Z [jfrt ] [INFO ] [                ] [askServiceDescriptorHandler:43] [ocalhost-startStop-4] - Removing all job Artifact Cleanup from task service handler.
2019-10-07T12:29:14.665Z [jfrt ] [INFO ] [                ] [askServiceDescriptorHandler:43] [ocalhost-startStop-4] - Removing all job Backup from task service handler.
2019-10-07T12:29:14.678Z [jfrt ] [INFO ] [                ] [askServiceDescriptorHandler:43] [ocalhost-startStop-4] - Removing all job Replication from task service handler.
2019-10-07T12:29:14.757Z [jfrt ] [INFO ] [                ] [actorySchedulerFactoryBean:844] [ocalhost-startStop-4] - Shutting down Quartz Scheduler
2019-10-07T12:29:57.230Z [jfrt ] [INFO ] [                ] [o.j.c.w.FileWatchingManager:75] [Thread-4            ] - Starting watch of folder configurations
2019-10-07T12:30:18.847Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:30:36.867Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-07T12:30:36.874Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.8855.201910071230036863'
2019-10-07T12:30:38.149Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key'
2019-10-07T12:30:40.966Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-07T12:30:40.975Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.2995.201910071230040965'
2019-10-07T12:31:08.170Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key'
2019-10-07T12:31:10.676Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.artifactory.key'
2019-10-07T12:31:10.682Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.5681.201910071231010676'
2019-10-07T12:31:23.633Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:31:23.679Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.5681.201910071231010676'
2019-10-07T12:31:23.685Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  delete for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:31:23.689Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.8855.201910071230036863'
2019-10-07T12:31:23.694Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.2995.201910071230040965'
2019-10-07T12:31:23.699Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.9793.201910071228039861'
2019-10-07T12:31:23.704Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.key.9771.201910071228029221'
2019-10-07T12:31:23.742Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.access/access.admin.token'
2019-10-07T12:31:24.108Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.README.md'
2019-10-07T12:31:24.114Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.promotions_test.groovy'
2019-10-07T12:31:24.118Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.altAllResponses.groovy'
2019-10-07T12:31:24.123Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.plugin.md5sum.groovy'
2019-10-07T12:31:36.642Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.gpg.private'
2019-10-07T12:33:16.281Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.gpg.public'
2019-10-07T12:33:16.305Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.gpg.private'
2019-10-07T12:33:40.597Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.gpg.public'
2019-10-07T12:33:40.615Z [jfrt ] [INFO ] [                ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-4            ] - [Node ID: 83b7004d5217]  detected local  modify for config 'artifactory.security.artifactory.gpg.private'





[RTFACT-20263] Docker buildkit support Created: 06/Oct/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Shahar Levy Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None


 Description   

When building a new container image using the buildkit feature, e.g.:

DOCKER_BUILDKIT=1 docker build .

The built image gets stuck in the _uploads folder in Artifactory.

The docker config layer cannot be moved to the image folder, since one of the members in its content's "history" section does not have the "created" key, as follows:
"history": [
        

{             "created": "2019-08-20T20:19:55.062606894Z",             "created_by": "/bin/sh -c #(nop) ADD file:fe64057fbb83dccb960efabbf1cd8777920ef279a7fa8dbca0a8801c651bdf7c in / "         }

,
        

{             "created": "2019-08-20T20:19:55.211423266Z",             "created_by": "/bin/sh -c #(nop)  CMD [\"/bin/sh\"]",             "empty_layer": true         }

,
{
            "created_by": "CMD [\"sh\"]",
            "comment": "buildkit.dockerfile.v0",
            "empty_layer": true
        }
    ]
 
When pushing the image, the following error will be thrown in Artifactory:
 
2019-10-06 14:14:02,243 [http-nio-8081-exec-2] [INFO ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:257) - Deploying docker manifest for repo 'buildkit' and tag 'yes' into repo 'docker-local'
2019-10-06 14:14:02,244 [http-nio-8081-exec-2] [DEBUG] (o.j.r.d.u.DockerUtils:134) - Searching manifest config blob in: 'buildkit/yes/sha256__fc587b796e67f3e4713fbe7752d27a7cf65958da3d80126a747919c8c49f01a5'
2019-10-06 14:14:02,245 [http-nio-8081-exec-2] [DEBUG] (o.j.r.d.u.DockerUtils:153) - Searching blob in 'buildkit/uploads/sha256_fc587b796e67f3e4713fbe7752d27a7cf65958da3d80126a747919c8c49f01a5'
2019-10-06 14:14:02,245 [http-nio-8081-exec-2] [DEBUG] (o.j.r.d.u.DockerUtils:155) - Blob found in: 'buildkit/uploads/sha256_fc587b796e67f3e4713fbe7752d27a7cf65958da3d80126a747919c8c49f01a5'
2019-10-06 14:14:02,252 [http-nio-8081-exec-2] [ERROR] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:783) - Error uploading manifest: 'null'
2019-10-06 14:14:02,252 [http-nio-8081-exec-2] [DEBUG] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:784) - Error uploading manifest:
java.lang.NullPointerException: null
at org.jfrog.repomd.docker.manifest.ManifestSchema2Deserializer.applyAttributesFromContent(ManifestSchema2Deserializer.java:95)
at org.jfrog.repomd.docker.manifest.ManifestSchema2Deserializer.deserialize(ManifestSchema2Deserializer.java:42)
at org.jfrog.repomd.docker.manifest.ManifestDeserializer.deserialize(ManifestDeserializer.java:32)
at org.jfrog.repomd.docker.v2.rest.handler.DockerV2LocalRepoHandler.processUploadedManifestType(DockerV2LocalRepoHandler.java:294)
at org.jfrog.repomd.docker.v2.rest.handler.DockerV2LocalRepoHandler.uploadManifest(DockerV2LocalRepoHandler.java:268)
at org.artifactory.addon.docker.rest.v2.repo.virtual.DockerV2VirtualRepoHandler.lambda$uploadManifest$5(DockerV2VirtualRepoHandler.java:110)
at org.artifactory.addon.docker.rest.v2.repo.virtual.DockerV2VirtualRepoHandler.delegateToLocalIfPossible(DockerV2VirtualRepoHandler.java:176)
at org.artifactory.addon.docker.rest.v2.repo.virtual.DockerV2VirtualRepoHandler.uploadManifest(DockerV2VirtualRepoHandler.java:110)
at org.jfrog.repomd.docker.v2.rest.DockerV2Resource.uploadManifest(DockerV2Resource.java:81)
at sun.reflect.GeneratedMethodAccessor504.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:191)
at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:427)
at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:305)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:208)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
 
When monitoring the push request using Charles, it seems that the push using the v2 schema fails and the client then falls back to the v1 schema, which succeeds.
 
Steps to reproduce (see the minimal command sketch below):
1) Build a docker image using buildkit.
2) Push the image to Artifactory.
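A minimal reproduction sketch (the registry host and repository name here are placeholders, not taken from the original report):

DOCKER_BUILDKIT=1 docker build -t artifactory.example.com/docker-local/buildkit-test:latest .
docker push artifactory.example.com/docker-local/buildkit-test:latest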






[RTFACT-20261] Include credentials for remote PyPI repository in "Set Me Up" tool Created: 06/Oct/19  Updated: 17/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: David Pinhas Assignee: Tamir Hadad
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When configuring a remote PyPI repository and using the "Set Me Up" tool, the credentials are not displayed; however, the "Set Me Up" tool for a local PyPI repository does display the credentials.

When setting up the remote PyPI repository for resolution in the ~/.pip/pip.conf file, the user is prompted for Artifactory credentials on every "pip install <package>" request made with the pip client, while other package managers provide the credentials in the "Set Me Up" tool, which enables resolution without entering the credentials per request.
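For illustration, a pip.conf entry that embeds the credentials in the index URL (so that pip does not prompt per request) would look roughly like the following; the host, repository key, and credentials are placeholders:

[global]
index-url = https://<user>:<password-or-api-key>@artifactory.example.com/artifactory/api/pypi/pypi-remote/simple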






[RTFACT-20260] Have ability to blacklist/whitelist of key exchange protocol Created: 06/Oct/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Security
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Ariel Seftel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, there is no option to blacklist/whitelist the "key exchange" algorithms.

It would be very useful to have this ability, so that users can configure exceptions on their end based on their security team's guidance.






[RTFACT-20259] Permission with / in it should return decoded Created: 05/Oct/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: permissions, REST API
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Scott Mosher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Create a permission whose name contains a slash. Fetching the list of permissions from the API returns a URI where the slash is not encoded.

 

Code snippet:

$user = "admin"
$pass = "XXX"
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
$permissions = Invoke-RestMethod -Uri 'http://art.com6/artifactory/api/security/permissions' -Headers @{ Authorization = "Basic $encodedCreds" }
foreach ($permission in $permissions) { Write-Host $permission.uri }

 

Returns:

http://lin2dv2do36:80/artifactory/api/security/permissions/Any%20Remote

http://lin2dv2do36:80/artifactory/api/security/permissions/A/B

http://lin2dv2do36:80/artifactory/api/security/permissions/Anything

 

Should return:

http://lin2dv2do36/artifactory/api/security/permissions/Any+Remote

http://lin2dv2do36/artifactory/api/security/permissions/A%2fB

http://lin2dv2do36/artifactory/api/security/permissions/Anything

 

The impact of this bug is that I cannot follow the returned URI to get more details.
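For reference, a small Python sketch of the encoding the reporter expects (the permission name 'A/B' is taken from the example above; this is illustrative only, not existing Artifactory behaviour):

```
from urllib.parse import quote

name = "A/B"
# quote() with safe='' percent-encodes the slash as %2F,
# so the permission name no longer splits the URI path.
encoded = quote(name, safe='')
print("http://lin2dv2do36/artifactory/api/security/permissions/" + encoded)
# -> http://lin2dv2do36/artifactory/api/security/permissions/A%2FB
```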






[RTFACT-20254] UI for Anonymous access Created: 03/Oct/19  Updated: 03/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Jesse Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Would like to make some repos accessible for anonymous access. When I enable anonymous access, the default behaviour is to show users Artifactory as the anonymous user and not prompt for login (I am using LDAP for user authentication). This is confusing for users. I tried disabling UI access for the anonymous user via the REST API, but this does not disable the UI for anonymous access. It would be good to either permit disabling the anonymous user's UI access or allow us to default to prompting users for login instead of giving them anonymous access to the UI first.






[RTFACT-20239] XRay Permissions page hangs on Save Changes Created: 02/Oct/19  Updated: 02/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Stefan Kraus Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   
Steps to reproduce:

1. Go to Xray as an admin user, log in, and go to the Admin > Security > Permissions page
2. Edit an existing permission and click "Save Changes"

I get a loading spinner that hangs until I close the page. If I do close the page or go back, the permission is updated.

In the console, I see:
```
Request URL:
https://workivaeast-xray.jfrog.io/ui/permissions/developer
```
This is the request that hangs indefinitely.

In the console, I see a number of errors, the last of which is 
```
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
```






[RTFACT-20237] Unable to logout from Artifactory after login through the native browser Created: 02/Oct/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shai Ben-Zvi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2019-10-02 at 10.00.04.png     PNG File Screen Shot 2019-10-02 at 10.01.15.png    
Issue Links:
Duplicate
is duplicated by RTFACT-20236 Unable to logout from Artifactory thr... Resolved

 Description   

Steps to reproduce:
1. Create a repository and upload an artifact (so we can access the native UI of the repository).
2. Log out or open a new incognito window.
3. Log in to Artifactory by accessing 'http://localhost:8081/artifactory/test-repository' (through the native browser).
4. Try to log out.

It seems that the issue is related to the session not being saved correctly.
Please see the gif link below showing how I reproduced it, and the attached screenshots of the logout response output in the browser developer console.

https://recordit.co/H8Mtb9LBFT






[RTFACT-20234] Change Npm to npm in the different search pages Created: 21/Sep/19  Updated: 07/Nov/19

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Shlomi Kriheli Assignee: Barak Hacham
Resolution: Unresolved Votes: 0
Labels: UGA

Attachments: PNG File npm.png    

 Description   

See the attached screenshot - the requirement is to change "Npm" to "npm".

This is relevant in all places in the search module (Package and Artifacts, as far as I can tell).






[RTFACT-20225] API REST for remote repository test connection Created: 01/Oct/19  Updated: 01/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: gboue Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

As a developer, I want to create a script that checks the configuration of all the remote repositories of my Artifactory instance via the REST API.
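A minimal sketch of such a script, using the existing repository-listing and repository-configuration REST endpoints (the base URL and credentials are placeholders; since no test-connection endpoint currently exists, a plain HEAD request against each remote URL is used here as a rough connectivity check):

```
import requests

BASE = "http://artifactory.example.com/artifactory"  # placeholder
AUTH = ("admin", "password")                          # placeholder

# List all remote repositories, then fetch each one's full configuration.
repos = requests.get(BASE + "/api/repositories", params={"type": "remote"}, auth=AUTH).json()
for repo in repos:
    cfg = requests.get(BASE + "/api/repositories/" + repo["key"], auth=AUTH).json()
    url = cfg.get("url", "")
    try:
        # Rough connectivity check against the configured remote URL.
        status = requests.head(url, timeout=10).status_code
    except requests.RequestException as exc:
        status = "unreachable ({})".format(exc)
    print("{}: {} -> {}".format(repo["key"], url, status))
```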






[RTFACT-20219] Improving Systemd configuration Created: 27/Sep/19  Updated: 10/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Divija Kandukoori Assignee: Unassigned
Resolution: Unresolved Votes: 3
Labels: None


 Description   

The customer was experiencing an issue with systemd after upgrading to RHEL 7.7 (related to the existing Jira RTFACT-19850) and proposed these changes to our implementation:

1) Alias=artifactory.service

[root@uls-ot-artifa3 logs]# cd /etc/systemd/system
[root@uls-ot-artifa3 system]# ls -l
total 4
lrwxrwxrwx. 1 root root 43 Sep 20 13:13 artifactory.service -> /usr/lib/systemd/system/artifactory.service

This symlink, apparently deployed by your RPM, is not required. The file in /etc is used to override the file in /usr/lib/systemd/system/ provided by the RPM. Linking them makes no sense and has no impact on the service state. It only makes it more confusing and difficult if somebody wants to override the default service file.

2) /usr/lib/systemd/system/artifactory.service should not be executable; this causes warnings in the logs while starting.

3) Your systemd unit seems to point to an old-school SysV init script in /opt/jfrog/artifactory/bin/artifactoryManage.sh. This is really strange and seriously complicates debugging. I propose replacing all of that and taking inspiration from other Tomcat applications, for example Atlassian Jira, which works fine with RHEL 7.7. (A possible unit-file sketch follows below.)






[RTFACT-20216] Conan smart remote repository pull replication doesn't work when configured with /api/conan Created: 27/Sep/19  Updated: 16/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3, 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Muhammed Kashif Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Conan smart remote repository pull replication doesn't work when the URL is configured with /api/conan. When pull replication is scheduled or manually triggered, the log entry below is seen in artifactory.log; the replication is reported as successful with 0 deployed files, although nothing is actually replicated:

2019-09-25 07:28:13,155 [art-exec-5717] [INFO ] (o.a.a.c.BasicStatusHolder:218) - Completed remote folder replication for conan-smart-repo/ with 0 deployed files, 0 deleted files, 0 properties change, 0 statistics change, 0 mkDirs... average events per second 0

 

When /api/conan is removed from the remote URL, pull replication actually completes and replicates all the artifacts, as shown in the log entries below from artifactory.log:

2019-09-25 07:28:58,882 [http-nio-8081-exec-1] [INFO ] (o.a.a.r.c.ReplicationAddonImpl:638) - Activating manual remote repository replication for 'conan-smart-repo'
2019-09-25 07:28:58,883 [http-nio-8081-exec-1] [INFO ] (o.a.a.r.c.ReplicationDescriptorHandler:175) - Replication activated manually for repository 'conan-smart-repo'
2019-09-25 07:28:58,883 [art-exec-5703] [INFO ] (o.a.a.c.BasicStatusHolder:218) - Starting remote folder replication for 'conan-smart-repo'.
2019-09-25 07:28:58,910 [art-exec-5703] [INFO ] (o.a.a.r.c.BaseReplicationProducer:218) - Executing file list request: 'http://mill.jfrog.info:12352/artifactory/api/storage/conan-local/?list&deep=1&listFolders=1&mdTimestamps=1&statsTimestamps=1&includeRootPath=1'
2019-09-25 07:28:58,954 [replication-consumer-1569396538883-0] [INFO ] (o.a.r.HttpRepo :432) - conan-smart-repo downloading http://mill.jfrog.info:12352/artifactory/conan-local/conan-package.tgz 4.29 MB
2019-09-25 07:28:59,066 [replication-consumer-1569396538883-0] [INFO ] (o.a.r.HttpRepo :445) - conan-smart-repo downloaded http://mill.jfrog.info:12352/artifactory/conan-local/mysql-connector-java-8.0.16.zip 4.29 MB at 39,175.01 KB/sec
2019-09-25 07:28:59,075 [replication-consumer-1569396538883-0] [INFO ] (o.a.a.c.BasicStatusHolder:218) - Removing the properties of 'conan-smart-repo-cache:mysql-connector-java-8.0.16.zip'.
2019-09-25 07:28:59,083 [art-exec-5703] [INFO ] (o.a.a.c.BasicStatusHolder:218) - Completed remote folder replication for conan-smart-repo/ with 1 deployed files, 0 deleted files, 1 properties change, 0 statistics change, 0 mkDirs... average events per second 10.05

 

Steps to reproduce:

  1. Create a conan-local repository in a distant Artifactory.
  2. Create a smart remote repository that points to the conan-local repository with /api/conan endpoint.
  3. Trigger manual replication, replication is successful but nothing is cached under the conan cache repository.
  4. Remove the /api/conan endpoint from the URL and trigger the replication
  5. Replication is successful and the artifacts are downloaded and cached in the conan cache repository
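For illustration, using the host from the log entries above: a remote URL of the form http://mill.jfrog.info:12352/artifactory/api/conan/conan-local reproduces the empty replication, while http://mill.jfrog.info:12352/artifactory/conan-local replicates and caches the artifacts as expected (the /api/conan path form is given here as an example, not quoted from the report).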





[RTFACT-20215] Add new index on REPO, NODE_PATH and DEPTH Created: 27/Sep/19  Updated: 27/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.8.13
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None
Environment:

Oracle DB



 Description   

We suspect the depth query is causing the load issue. Add another index on REPO, NODE_PATH, and DEPTH, because no existing index fully satisfies the WHERE clause below:

SELECT /*+ index(NODES NODES_REPO_PATH_NAME_IDX) */ * FROM nodes WHERE repo = :1 AND node_path = :2 AND depth = :3

One of the variables in the WHERE clause is depth, and there is no index on depth, so the query uses the available index on repo, path, and name and has to scan this bigger index. There is also a fair amount of skew in that index.

The suggestion is to add another index on (path, repo, depth); note that path is leading, so the index is more evenly distributed. This still has to be verified.
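An illustrative DDL sketch of that suggestion (the index name is a placeholder):

CREATE INDEX nodes_path_repo_depth_idx ON nodes (node_path, repo, depth);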






[RTFACT-20212] Update NuGet repo to support new -SkipDuplicate Created: 26/Sep/19  Updated: 26/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Shoval Arad Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: NuGet, artifactory, repository


 Description   

In NuGet 5.1 they added the -SkipDuplicate flag, and with it a different status code in the case of a duplicate nupkg. It would be great if you could support it.
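For reference, client-side usage would look like the following (the feed URL is a placeholder):

dotnet nuget push mypackage.1.0.0.nupkg --source https://artifactory.example.com/artifactory/api/nuget/nuget-local --skip-duplicate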






[RTFACT-20201] Validate Artifactory Support on Red Hat (RHEL) 8 Created: 25/Sep/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Jeff Peters Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: artifactory
Environment:

Red Hat Enterprise Linux version 8


Issue Links:
Relationship
relates to RTFACT-19367 RPM/yum repo support for RHEL 8 AppSt... Open

 Description   

Ensure that Artifactory can function in an environment with RHEL 8 systems, both:

  1. For RHEL 8 clients, so that they can execute commands like docker, yum, curl etc. from those systems.
  2. (Ideally) for running on RHEL 8 hosts. Our corporate network is about to start adopting RHEL 8, and it will eventually become a standard default.

This RTFACT is related to RPM/yum repo support for RHEL 8 AppStream . Thank you.






[RTFACT-20200] wrong listRemoteFolderItems returned for smart remote Helm Created: 25/Sep/19  Updated: 07/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3, 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ariel Kabov Assignee: Eli Mishael
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Steps to reproduce:

1. Create a HELM smart remote repository to another HELM repository in another Artifactory server.
2. Make sure "List remote folder items" is NOT marked.
3. Open the repository configuration in the UI again and see that it is still not marked.

4. In the Config Descriptor you'll see:

            <listRemoteFolderItems>true</listRemoteFolderItems>

Alternatively, you can also see it via the REST API:

curl -uadmin:password http://mill.jfrog.info:12302/artifactory/api/repositories/helm-remote -vvL  
*   Trying 104.196.245.50...
* TCP_NODELAY set
* Connected to mill.jfrog.info (104.196.245.50) port 12302 (#0)
* Server auth using Basic with user 'admin'
> GET /artifactory/api/repositories/helm-remote HTTP/1.1
> Host: mill.jfrog.info:12302
> Authorization: Basic YWRtaW46cGFzc3dvcmQ=
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Artifactory/6.12.2
< X-Artifactory-Id: 30a4b08d958ffe3a:-3e1e090e:16d6929f59f:-8000
< Cache-Control: no-store
< Content-Type: application/vnd.org.jfrog.artifactory.repositories.RemoteRepositoryConfiguration+json
< Transfer-Encoding: chunked
< Date: Wed, 25 Sep 2019 16:09:25 GMT
<
{
  "key" : "helm-remote",
  "packageType" : "helm",
  "description" : "",
  "notes" : "",
  "includesPattern" : "**/*",
  "excludesPattern" : "",
  "repoLayoutRef" : "simple-default",
  "enableComposerSupport" : false,
  "enableNuGetSupport" : false,
  "enableGemsSupport" : false,
  "enableNpmSupport" : false,
  "enableBowerSupport" : false,
  "enableCocoaPodsSupport" : false,
  "enableConanSupport" : false,
  "enableDebianSupport" : false,
  "debianTrivialLayout" : false,
  "enablePypiSupport" : false,
  "enablePuppetSupport" : false,
  "enableDockerSupport" : false,
  "dockerApiVersion" : "V2",
  "forceNugetAuthentication" : false,
  "enableVagrantSupport" : false,
  "enableGitLfsSupport" : false,
  "enableDistRepoSupport" : false,
  "url" : "http://mill.jfrog.info:12318/artifactory/helm-local",
  "username" : "admin",
  "password" : "AM.22NJj.AES128.2dqFUqy6csgSNJZKAmPBtspHYHXzYLfvnnencAYE57mS3QJ1",
  "handleReleases" : true,
  "handleSnapshots" : true,
  "suppressPomConsistencyChecks" : true,
  "remoteRepoChecksumPolicyType" : "generate-if-absent",
  "hardFail" : false,
  "offline" : false,
  "blackedOut" : false,
  "storeArtifactsLocally" : true,
  "socketTimeoutMillis" : 15000,
  "localAddress" : "",
  "retrievalCachePeriodSecs" : 7200,
  "assumedOfflinePeriodSecs" : 300,
  "missedRetrievalCachePeriodSecs" : 1800,
  "unusedArtifactsCleanupPeriodHours" : 0,
  "fetchJarsEagerly" : false,
  "fetchSourcesEagerly" : false,
  "shareConfiguration" : false,
  "synchronizeProperties" : false,
  "maxUniqueSnapshots" : 0,
  "maxUniqueTags" : 0,
  "propertySets" : [ ],
  "archiveBrowsingEnabled" : false,
  "listRemoteFolderItems" : true,
  "rejectInvalidJars" : false,
  "allowAnyHostAuth" : false,
  "enableCookieManagement" : false,
  "enableTokenAuthentication" : false,
  "propagateQueryParams" : false,
  "blockMismatchingMimeTypes" : true,
  "mismatchingMimeTypesOverrideList" : "",
  "bypassHeadRequests" : false,
  "contentSynchronisation" : {
    "enabled" : true,
    "statistics" : {
      "enabled" : false
    },
    "properties" : {
      "enabled" : false
    },
    "source" : {
      "originAbsenceDetection" : false
    }
  },
  "externalDependenciesEnabled" : false,
  "xrayIndex" : false,
  "downloadRedirect" : false,
  "enabledChefSupport" : false,
  "rclass" : "remote"
* Connection #0 to host mill.jfrog.info left intact
}%





[RTFACT-20199] Apache style FancyIndexing for http directory listings Created: 25/Sep/19  Updated: 25/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.12.2
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ken Martindale Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Many users find it helpful to be able to sort http directory listings on Last modified time, Name, or Size by clicking the desired column heading when browsing a directory on an Apache or Nginx web server.

They would like to see functionality in Artifactory similar to the FancyIndexing option of the mod_autoindex module for Apache, or the fancyindex module for Nginx.  This would allow them to easily pick the newest item when browsing a repository manually.



 Comments   
Comment by Ken Martindale [ 25/Sep/19 ]

One potential command-line workaround might look something like:

export URL=https://artifactory.domain.com/artifactory/myrepo/folder/
curl -s $URL|sed 's/<[^>]*>//g;/:[0-9]/!d'|sort -k2.8b,2.11bnr -k2.4b,2.6Mbr -k2b,2bnr -k3br

However, a command like the above is somewhat fragile and may not work in all cases.  Users have become accustomed to Fancy Index style browser listings that offer the ability to sort ascending/descending by column and would find it very helpful if Artifactory also offered that functionality.






[RTFACT-20190] Add Cache-Control headers for static resources Created: 24/Sep/19  Updated: 25/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Danny Thomas Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Static UI resources served from /webapp/ already contain the build number to cache break between releases but do not set a Cache-Control header to allow the browser to cache these files.

That causes revalidations for these resources on initial load, or on click paths that cause a reload of the page. Given these are static, an indefinite (i.e. 1y) max-age and an immutable directive would seem appropriate - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#Revalidation_and_reloading.
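For example, a response header along the lines suggested above would be:

Cache-Control: public, max-age=31536000, immutable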






[RTFACT-20189] PAPI allowing a user plugin to load its config files Created: 24/Sep/19  Updated: 30/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: User Plugins
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Travis Foster Assignee: Travis Foster
Resolution: Unresolved Votes: 0
Labels: CPE

Issue Links:
Duplicate

 Description   

User plugins historically load configuration files via the internal API ctx.artifactoryHome.haAwareEtcDir. This API is no longer available as of Artifactory 7, and something new needs to be used instead. Since configuration files are so integral to the design of most plugins, it makes sense to add explicit support for them to the PAPI, rather than incorporating another hack using the internal API.






[RTFACT-20188] SumoLogic custom URL Created: 24/Sep/19  Updated: 02/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: SumoLogic
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Mark Galpin Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

For the Sumologic integration, today we only allow customers to create a "new" integration with a new Sumologic account. For customers with existing Sumologic accounts, we don't have a way to send data to the existing account. In addition to new account creation and the id/key terminology, we should also allow a customer to provide the URL directly (no credentials are required in this use case), which could then be inserted directly into logback.xml.






[RTFACT-20187] Provide examples on how to optimize the migration job. Created: 24/Sep/19  Updated: 24/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Claude Shubov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: CapitalOne
Environment:

BETA


Issue Links:
Duplicate

 Description   

Capital One requested this on Day2 of the onsite visit.

Provide guidance on how to optimize the migration job (how many workers, relevant resources, how long it will take, etc.).

Action item: provide a few sample data points (e.g. 10 workers with 8 CPUs, 16xms, 200 DB connections => 20% CPU in Artifactory, 35% CPU in the DB, etc. at peak, xxxx on average => took 10 hrs to finish the migration of 20M artifacts).






[RTFACT-20185] User stats gathering capabilities Created: 24/Sep/19  Updated: 24/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Opossum Team Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: statistics, users


 Description   

We would like to suggest new user stats gathering capabilities.

Some of the stats that we are interested in are:

  • User count (and daily/weekly/monthly user counts).
  • Repositories usage - we would like to track the number of times that a repository was used (e.g the number of times that users performed "npm install" from an Artifactory's npm repository)
  • track which users (or how many unique users) used each repository (e.g which users performed "npm install" from an Artifactory's npm repository)





[RTFACT-20183] ArtifactoryServer.publishBuildInfo now requires to run on a node Created: 24/Sep/19  Updated: 24/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Home, Build Info, CI
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: François Genois Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: jenkins

Regression:
Yes

 Description   

Hello,

This is regarding the jenkinsci/artifactory-plugin.

I used to be able to call `ArtifactoryServer.publishBuildInfo` without mentioning a node, or on a node with an empty workspace. I updated our Artifactory Jenkins plugin, but now we have the following error popping up:
```
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node at [...]
org.jfrog.hudson.pipeline.common.types.ArtifactoryServer.publishBuildInfo(ArtifactoryServer.java:227)
```

I think this new behavior may have originated with the following commit: https://github.com/jenkinsci/artifactory-plugin/commit/5bd26c2cac41ebdcc07d5aa4eca2ac5883c2ab69#diff-e1a12dbfddc617d9c9d5d39b4b2a5039

Is this intended behavior, or a regression?






[RTFACT-20182] Label and persist the "artifactory_extra_conf" Docker volume even if it is empty Created: 24/Sep/19  Updated: 27/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Patrick Russell Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Currently, the Artifactory Docker image contains a separate volume for the artifactory_extra_conf folder. If a user doesn't explicitly set the volume, the Docker CLI still reserves the space as a Volume:

 

$ docker volume ls

local 38f7c0ba8d2f8dca7c0013306266566f20ed7899695290e5c68a05345f18102f

local 205afe465683cdc02b469129a291c261ab96a8898e15f98277ffde710612aa8e

local 9475de101d2ae6252fb11e5b3bab323193b578b4f8a32e805c01220afa30ae5f

 

Another unlabeled volume is created whenever the container is created or restarted. This volume should be labeled so it is persisted.
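A possible workaround sketch until the volume is labeled: mount the extra-conf directory as an explicitly named volume when starting the container. The image name and mount points below are assumptions based on documentation of the time, not taken from this report:

docker run --name artifactory -d -p 8081:8081 \
  -v artifactory_data:/var/opt/jfrog/artifactory \
  -v artifactory_extra_conf:/artifactory_extra_conf \
  docker.bintray.io/jfrog/artifactory-pro:latest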



 Comments   
Comment by David Shapiro [ 27/Sep/19 ]

Thanks for creating this, Patrick Russell.

 

The concern is mostly around the perspective of someone who is managing the host machine or a DevOps engineer doing a `volume ls`.

A large list with GUID's as names presents the following issues:

  • It isn't possible to understand what system the volume associates to by simply looking at the name.
  • It creates so many of these volumes that the noise makes browsing the volume list much harder. For example, if I'm looking for volume "foo" in a huge list of volumes, it becomes much more difficult to find.




[RTFACT-20181] Make the Artifactory user discoverable in HTTP requests Created: 23/Sep/19  Updated: 23/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Headers
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Michael Galati Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

In order to better support authenticated requests at the caching layer we have in front of Artifactory, we'd like a method for determining which Artifactory user initiated the request. It would be easiest for us if this could piggyback on the HTTP request headers. That way, if the cache produces a cache hit, we could serve from cache without involving Artifactory in the request (thus helping us scale better). Ideally, the user information would be exposed in a consistent way across different repo types.

Also note, we'd need something like https://www.jfrog.com/jira/browse/RTFACT-19753 to use in conjunction with this feature.
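For illustration only, a hypothetical response header of the kind requested (the header name is invented here, not an existing Artifactory feature):

X-Artifactory-Requesting-User: jdoe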






[RTFACT-20180] Support authentication via mutual TLS Created: 23/Sep/19  Updated: 29/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Giancarlo Martinez Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Duplicate
duplicates RTFACT-20142 Support Certificate Based Authentication Open

 Description   

It would be nice to have Artifactory support mutual TLS (aka client authentication via TLS) as another authentication provider, to replace the use of the usual username+API key combo.

We can do mTLS in nginx, but this doesn't provide SSO directly in Artifactory. Even if configured in Tomcat, Artifactory itself doesn't really understand any of the information that could be passed from the certificate, so it requires server-side parsing to pass something in at the moment.






[RTFACT-20174] Enable setting of custom UI message through YAML file Created: 23/Sep/19  Updated: 23/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: 6.12.2, 6.13.1
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Momcilo Majic Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory
Environment:

We use ansible playbooks and jfrog cli to upload the configuration settings.



 Description   

As an IT administrator, I would like to be able to set the custom UI message through the web API and the YAML configuration file.

A similar feature already exists for setting LDAP connections, the mail server, etc.

Since we use Ansible and will use geo DNS, it is important for us to indicate in the web interface which instance the user is connected to.

https://www.jfrog.com/confluence/display/RTF/YAML+Configuration+File
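For illustration only, a hypothetical sketch of what such an entry in the YAML configuration file could look like (these keys do not exist today; this is just the shape of the request):

customUiMessage:
  enabled: true
  title: "EU-1"
  message: "You are connected to the EU-1 Artifactory instance"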

 

 






[RTFACT-20172] Artifactory Pro is taking 25% CPU while seemingly doing nothing Created: 23/Sep/19  Updated: 15/Oct/19

Status: Will Not Implement
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Vishal Agrawal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I'm looking for steps to capture debugging information. With the distroless image, I'm unsure what tools need to be used and what steps need to be executed to capture meaningful debugging information.



 Comments   
Comment by Vishal Agrawal [ 15/Oct/19 ]

Nimer Bsoul, could you provide the reasons why this would not be looked at or implemented?

First, I'm asking what steps I need to carry out to get more information. Artifactory is hogging 25% CPU on this machine, and we need to get to the bottom of this issue.

Why was the issue closed? What more information was JFrog looking for from me?





[RTFACT-20171] Unable to run top in artifactory container Created: 23/Sep/19  Updated: 23/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Vishal Agrawal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We run Artifactory Pro 6.12.2 in a docker container, following the steps at:

https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker

When we bash into the container, we're unable to run the top command inside it.

artifactory@9c4e618c83a3:/opt/jfrog/artifactory/logs$ top
top: error while loading shared libraries: libncurses.so.6: cannot open shared object file: No such file or directory
artifactory@9c4e618c83a3:/opt/jfrog/artifactory/logs$

Info from artifactory.log -

 Artifactory Info
 ========================
   artifactory.runMode.test                                              | false
   artifactory.runMode.qa                                                | false
   artifactory.runMode.dev                                               | false
   artifactory.runMode.devHa                                             | false
   artifactory.version                                                   | 6.12.2
   artifactory.revision                                                  | 61202900
   artifactory.buildNumber                                               | 2507

 

I'm trying to run top as our instance is consuming 25+% CPU consistently.






[RTFACT-20165] Rpm info should show if rpm is signed Created: 20/Sep/19  Updated: 20/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: RPM
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Giancarlo Martinez Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The RPM Info view in Artifactory should show whether an RPM is signed, as it does for much of the other information.

Optionally, it could also print the name and hash of the signature.

 

Bonus points if a true/false property is set to indicate the package is (un)signed so we can easily search for unsigned RPMs.
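For comparison, a rough command-line way to check whether a downloaded package is signed (run locally, outside Artifactory):

rpm -qpi mypackage.rpm | grep -i signature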






[RTFACT-20162] Optimize Concurrent Docker Requests for Smart Remote Repository Created: 20/Sep/19  Updated: 06/Nov/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.4.3, 6.12.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Angello Maggio Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Environment:

Fresh installations (local or Docker). Tested in mill as well.



 Description   

When concurrently downloading Docker layers, Artifactory will request the same layer multiple times, and will continue to do so until the layer is cached.

Let's say we have two clusters: cluster A and cluster B.

B hasn't cached the image yet, and A is the remote, which may or may not have it cached.

Concurrent downloads in cluster B send out multiple download requests to cluster A for the layers, and only when cluster B finishes caching a layer does cluster A stop receiving new requests for that layer.

For example, an image has 2 layers: layer 1 (1 MB) and layer 2 (100 GB).

Since layer 1 is small, cluster B caches it quickly, so cluster B only sends out a handful of requests (fewer than 10 out of 100) to cluster A.

Layer 2 is big, so it takes a while to finish caching. Cluster B keeps receiving new requests for layer 2, so it continues to reach out to cluster A to download layer 2 until the caching completes.

This results in more than 50 requests being sent out to cluster A.

 






[RTFACT-20161] .conda files missing from repository Created: 20/Sep/19  Updated: 20/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Conda
Affects Version/s: 6.10.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Chris Wilkes Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: conda
Environment:

RHEL 8, Artifactory Pro 6.10.2, Conda type remote repository set up with default settings.



 Description   

Files like this one: https://repo.continuum.io/pkgs/main/linux-64/jinja2-2.10.1-py27_0.conda

do not show up in the index unless explicitly requested manually, with a Conda remote repository set up with default settings.

 

The following link talks about the changes within the conda ecosystem away from .tar to the .conda files






[RTFACT-20158] Artifactory does not expands the jar/war file if executable tag is set to true in pom.xml Created: 20/Sep/19  Updated: 24/Oct/19

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.9.0, 6.12.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Muhammed Kashif Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File executable tag.png     XML File pom.xml    

 Description   

In pom.xml, when <executable> is set to true and the jar is deployed to Artifactory, Artifactory does not allow us to expand the jar. However, when executable is set to false, the jar/war files can be expanded.

Steps to reproduce:

  1. Build a jar/war file by setting the <executable> tag to true in pom.xml
  2. Deploy the jar/war to Artifactory
  3. Artifactory does not allow us to expand the jar/war file

 

 Due to this behaviour, Xray is not able to scan the dependencies present in the jar/war.
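For context, the <executable> flag referenced above is presumably the Spring Boot Maven plugin setting that prepends a launch script to the jar; a minimal pom.xml sketch under that assumption:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>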






[RTFACT-20148] RPM metadata files getting deleted from both source and target Artifactory following "sync delete" replication Created: 19/Sep/19  Updated: 24/Sep/19

Status: Open
Project: Artifactory Binary Repository
Component/s: Replication, RPM
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: