[RTFACT-22383] Docker remote repository for neuvector registry Created: 05/Jun/20  Updated: 05/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Shilpa Kallaganad Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File image-2020-06-05-16-51-19-278.png     PNG File image-2020-06-05-16-56-32-015.png     PNG File image-2020-06-05-17-55-58-967.png    

 Description   

When trying to use the https://hub.docker.com/u/neuvector repository to pull NeuVector packages, the configuration shown in the attached screenshots is used.

However, when running the docker pull command shown in the attachments, the pull fails with a "manifest unknown" error.

Configuring https://hub.docker.com as the repository URL in Artifactory produces the same error, and providing authentication details in the Advanced section makes no difference.

Pulling the same package directly from the default Docker registry works without error.

Can changes be made so that the NeuVector repository can be used through Artifactory?
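A minimal reproduction sketch of the commands involved, assuming the repository-path access method and a remote repository key of docker-neuvector-remote (registry host, repository key, image and tag are placeholders; the actual configuration is in the attached screenshots):

docker login artifactory.example.com
docker pull artifactory.example.com/docker-neuvector-remote/neuvector/manager:latest   # fails with a "manifest unknown" error through Artifactory
docker pull neuvector/manager:latest                                                   # pulling directly from Docker Hub works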






[RTFACT-22382] Onboarding wizard appeared on working Artifactory Created: 05/Jun/20  Updated: 05/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory, UI
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yuriy Tabolin Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi. We have an on-prem Artifactory Pro 7.4.3 instance, which was upgraded from 6.18.1 a few weeks ago. Artifactory runs in Docker behind an nginx reverse proxy.

Artifactory is working fine and has been up for several weeks, but during this time the onboarding wizard (initial setup) has appeared twice. The wizard appeared when opening the root of the Artifactory site, https://repo.somedomain.com/, which nginx redirects to /ui/. The last occurrence was yesterday.

This is probably a bug or some misconfiguration that makes Artifactory think it is a new installation and that the onboarding wizard is needed.






[RTFACT-22381] Docker log is too verbose Created: 05/Jun/20  Updated: 05/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Change Request Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Please change the log level of the messages below (o.j.r.d.v.r.h.DockerV2LocalRepoHandler) from INFO to DEBUG.

 

2020-06-04 10:55:16,842 [http-nio-8081-exec-1059] [INFO ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:606) - Fetching docker manifest for repo 'a/b/c' and tag '0.0.5' in repo 'docker-local'
 2020-06-04 10:55:16,880 [http-nio-8081-exec-282] [INFO ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:116) - Fetching docker blob 'sha256:ca62daa13e49fbf5adb2fb344dcdcb7a7328a2beec3cda4d46e820c780b8b750' from repo 'docker-local'





[RTFACT-22379] Artifactory should support resolution of major version packages (V2/beyond) Created: 04/Jun/20  Updated: 05/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Divija Kandukoori Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: JPEG File mytest2.jpg     JPEG File test1.jpg    

 Description   

While resolving Go modules at v2 or higher that do not have the directory structure described in the Go wiki (https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher), we get a 404 Not Found when resolving through Artifactory, while resolving directly from GoCenter works. Screenshots are attached.

I tested this with a virtual repository that includes a remote repository proxying GitHub.
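A rough reproduction sketch, assuming the virtual repository is exposed through Artifactory's Go registry API (the host, repository key, module path and version are placeholders):

export GOPROXY=https://artifactory.example.com/artifactory/api/go/go-virtual
go get github.com/example/mymodule/v2@v2.1.0   # returns 404 Not Found through Artifactory
# The same module resolves successfully when GOPROXY points directly at GoCenter.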






[RTFACT-22378] Duplicate replication configs Created: 04/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Prathibha Ayyappan Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory accepts duplicate replication configs using this API:

https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-CreateorReplaceLocalMulti-pushReplication

The configuration then fails to reload on the site with this error:

Could not merge and save new descriptor [org.jfrog.common.ExecutionFailed: Last retry failed: code exception. Not trying again (Failed to reload configuration: Duplicate key

Artifactory should check the existing replication config and reject duplicate configs.
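A hedged sketch of the call that reproduces this, using the endpoint from the linked documentation (host, repository key and credentials are placeholders; multipush-replication.json stands for a multi-push replication payload containing two entries with the same target URL):

curl -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d @multipush-replication.json \
     "https://artifactory.example.com/artifactory/api/replications/multiple/libs-release-local"
# The duplicate entries are accepted instead of rejected, and the configuration later
# fails to reload with the "Duplicate key" error above.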






Refactor Affinity API (config descriptor -> system.yaml) (RTFACT-22146)

[RTFACT-22376] Refactor AffinityService Created: 04/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Normal
Reporter: Aviv Anidjar Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Get list of affinity nodes from db instead of descriptor. 






Refactor Affinity API (config descriptor -> system.yaml) (RTFACT-22146)

[RTFACT-22375] Modify RoleManager Created: 04/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Normal
Reporter: Aviv Anidjar Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The Role needs to be changed according to system.yaml:
if "taskAffinity": "any" or taskAffinity is <EMPTY>,
set the role to taskAffinity.

 

Remove isPrimary from code






[RTFACT-22373] [Nuget] Sorting packages by popularity Created: 04/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Yann Chaysinh Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When adding Artifactory as a NuGet feed in Visual Studio, the components are listed in alphabetical order.

When using the default NuGet feed (nuget.org), the components are sorted by popularity (even after filtering on the package name).

As a developer, I want the same sorting when using an Artifactory repository as my NuGet feed.
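For illustration, a sketch of adding an Artifactory repository as a NuGet source using the dotnet CLI (the host and repository key are placeholders):

dotnet nuget add source "https://artifactory.example.com/artifactory/api/nuget/nuget-virtual" --name Artifactory
# Search results from this source are listed alphabetically in Visual Studio,
# while results from nuget.org are sorted by popularity.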






[RTFACT-22367] Modify or restrict the column NODE_PATH in NODES table to a particular value. Created: 03/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.17.0, 6.20.0
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Muddana Jyothi VS Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When you deploy an artifact whose path is longer than 1024 characters, the deployment is accepted, but the error below is thrown when you try to remove the artifact. Even removing the entire repository is not allowed. This is observed with an Oracle database on 6.x versions and could not be reproduced with Derby.

ERROR:

java.sql.SQLException: ORA-12899: value too large for column "ART_STAGE"."NODES"."NODE_PATH" (actual: 1025, maximum: 1024)

Steps to Reproduce:

  1. Create a generic local repo (test-generic-repo)
  2. Deploy a text file from the UI with the target path below:

"abcdefhttps:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/console (1).log"

  3. Deploy a second file from the UI with the target path below:

"efbcedhttps:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/https:/artifactory-staging.te.testing.abc:443/artifactory/generic-abc-release-local/server_xml.txt"

  4. Now try to remove the content or any folder inside the repo; it fails with the error below:

java.sql.SQLException: ORA-12899: value too large for column "SYSTEM"."NODES"."NODE_PATH" (actual: 1027, maximum: 1024)

  5. From the Artifactory logs you will observe the following:

2020-06-03 17:30:57,994 [http-nio-8081-exec-8] [ERROR] (o.a.r.c.e.m.GlobalExceptionMapper:48) - java.sql.SQLException: ORA-12899: value too large for column "SYSTEM"."NODES"."NODE_PATH" (actual: 1027, maximum: 1024)
org.artifactory.storage.fs.VfsException: java.sql.SQLException: ORA-12899: value too large for column "SYSTEM"."NODES"."NODE_PATH" (actual: 1027, maximum: 1024)
at org.artifactory.storage.db.fs.service.FileServiceImpl.createFile(FileServiceImpl.java:174) at org.artifactory.storage.db.fs.model.DbMutableFile.doCreateNode(DbMutableFile.java:251) at org.artifactory.storage.db.fs.model.DbMutableItem.save(DbMutableItem.java:273)
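A condensed reproduction sketch, assuming any target path longer than 1024 characters triggers the same behavior (host, credentials and file name are placeholders):

LONG_PATH=$(printf 'a%.0s' {1..1100})
curl -u admin:password -T console.log \
     "https://artifactory.example.com/artifactory/test-generic-repo/${LONG_PATH}/console.log"
# Per the description above, the deployment is accepted, but deleting the artifact
# (or the repository) afterwards fails with the ORA-12899 error.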






[RTFACT-22366] Artifactory returns PyPi "yanked" release as the latest version Created: 03/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Omer Haglili Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File image-2020-06-03-18-23-03-144.png    

 Description   

Problem description: Artifactory returns a PyPI "yanked" release as the latest version.

 

What is the expected behavior? A yanked release is a release that is always ignored by an installer, unless it is the only release that matches a version specifier (using either == or ===). See PEP 592 (https://www.python.org/dev/peps/pep-0592/) for more information.

 

Steps to reproduce:

 

Execute the following command with pip version 19.2 or above:

pip install gssapi --no-cache-dir

We expect version 1.6.5 as the result of the above command, but we get version 1.6.6 instead.
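For clarity, the expected resolution behavior per PEP 592, using the versions from this report (1.6.6 being the yanked release):

pip install gssapi --no-cache-dir            # expected to resolve to 1.6.5, the latest non-yanked release
pip install "gssapi==1.6.6" --no-cache-dir   # a yanked release should only be selected when pinned explicitly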

 






[RTFACT-22361] Artifactory sometimes doesn't send notifications to repo owners Created: 03/Jun/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.6
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Sven-Eric Evers Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

A user does not get notifications about new uploads in his repository. The user has logged in via SAML 2.0. When clicking on "check external status", the user is marked as "inactive".

The missing notifications may be related to another effect: the user can log in using either Crowd or SAML-based authentication. He has different email addresses in the two realms, so the email address in Artifactory changes depending on which login method was used.






Long-term Metadata retries: Adaptations to global errors table (RTFACT-22147)

[RTFACT-22350] Convert task_type in replication_errors to EventType codes [converter] Created: 03/Jun/20  Updated: 04/Jun/20

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Normal
Reporter: Uriah Levy Assignee: Mor Merhav
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The task type codes currently used in the errors table almost fully match the native EventType enum codes. This task is for creating a converter that normalizes the error types currently in the DB according to the EventType enum.






[RTFACT-22349] Artifactory generates InRelease file with the wrong line endings in Windows Created: 02/Jun/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Nir Ovadia Assignee: Unassigned
Resolution: Unresolved Votes: 2
Labels: None
Environment:

Windows 2016 server

Debian 9



 Description   

In the recent Artifactory 6.19 update, support for InRelease metadata was added: Artifactory now produces an InRelease metadata file in the repository when GPG signing is configured. In Windows environments, the file is generated with DOS line endings ("\r\n"). Clients downloading Debian packages now fetch the InRelease file instead of the Release/Release.gpg files, since InRelease takes precedence. Accessing the Debian packages from Artifactory in a Debian environment then fails, because apt cannot handle the DOS line endings in the InRelease file it retrieves.

 

Changing the DOS line endings to Unix line endings ("\n") in the InRelease file (for example using dos2unix) and reuploading the file to the repo fixes this issue.
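A sketch of the workaround described above (the repository path and distribution name are placeholders):

dos2unix InRelease
curl -u admin:password -T InRelease \
     "https://artifactory.example.com/artifactory/debian-local/dists/stable/InRelease"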






[RTFACT-22348] Cannot download from jcenter search results Created: 02/Jun/20  Updated: 02/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Search, Web UI
Affects Version/s: 7.0.0, 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Aaron Rhodes Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

In Artifactory 7.x, after searching for files in the jcenter repository, clicking the download link in the search results returns a 404 using this URL:

http://artifactoryserver:12488/ui/api/v1/download?repoKey=jcenter&path=undefined

By comparison, the same download option in a 6.x version looks like this:

http://artifactoryserver:12301/artifactory/jcenter/org/jenkins-ci/plugins/junit/1.0/junit-1.0.jar






[RTFACT-22347] Request for Artifactory to serve the latest published file in case of same name and version number Created: 02/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Change Request Priority: Normal
Reporter: Peter Nguyen Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

CentOS 7.5



 Description   

We tried testing the artifactory.request.searchLatestReleaseByDateCreated=true property on v6.11.6, but weren't seeing the expected results of the latest published file being served.
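A minimal sketch of how the property was applied during the test, assuming the default 6.x location of artifactory.system.properties (a restart is required for the change to take effect):

echo "artifactory.request.searchLatestReleaseByDateCreated=true" >> $ARTIFACTORY_HOME/etc/artifactory.system.properties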

Our setup:

 Site A:

 - "Virtual Repo A"

    - "Local Repo A" with event-driven push replication to "Local Repo with Data Replicated from Site A"

    - "Local Repo with Data Replicated from Site B" 

 Site B:

 - "Virtual Repo B"

    - "Local Repo B" with event-driven push replication to "Local Repo with Data Replicated from Site B"

    - "Local Repo with Data Replicated from Site A"

 
Our test:

  1. Upload myrepo-1-1.x86_64.rpm to Virtual Repo A (stored in Local Repo A and push-replicated to "Local Repo with Data Replicated from Site A" in Virtual Repo B)
  2. Modify myrepo-1-1.x86_64.rpm
  3. Upload the modified myrepo-1-1.x86_64.rpm to Virtual Repo B (stored in Local Repo B and push-replicated to "Local Repo with Data Replicated from Site B" in Virtual Repo A)
  4. Pull from Site A
  5. We get the unmodified version (the old version from step 1) instead of the expected modified version from step 3 

Our Enterprise Solution Architect, Pradnya, was also able to independently confirm our results. 

Our ask:

We would like a property (somewhat similar to artifactory.request.searchLatestReleaseByDateCreated) or some other functionality that lets us guarantee to our users that the latest published version of a file is served, regardless of package type, when two teams or users working in parallel inadvertently upload files that have different contents but exactly the same name and version number.

We recognize that npm, Docker, and Maven each have mechanisms that already allow this type of behavior; we would like it for all other package types as well.






[RTFACT-22341] NPM ping returns 500 server error Created: 02/Jun/20  Updated: 02/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Jackie Murphy Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When hitting the URL http://xx.xx.xx.xx:8081/artifactory/api/npm/npm/-/ping (as used by the npm ping command, https://docs.npmjs.com/cli/ping.html), Artifactory returns a 500 response and logs the following stack trace:

2020-05-28 18:02:49,223 [http-nio-8081-exec-10] [ERROR] (o.a.r.c.e.m.GlobalExceptionMapper:48) - null
java.lang.NullPointerException: null
    at org.artifactory.addon.npm.repo.NpmRemoteRepoHandler.replaceTarballUrl(NpmRemoteRepoHandler.java:398)
    at org.artifactory.addon.npm.repo.NpmRemoteRepoHandler.getVersionMetadata(NpmRemoteRepoHandler.java:219)
    at org.artifactory.addon.npm.repo.merge.NpmVersionMetadataMerger.merge(NpmVersionMetadataMerger.java:65)
    at org.artifactory.addon.npm.repo.merge.NpmMetadataMerger.getMergedResult(NpmMetadataMerger.java:67)
    at org.artifactory.addon.npm.repo.NpmVirtualRepoHandler.getVersionMetadata(NpmVirtualRepoHandler.java:83)
    at org.jfrog.repomd.npm.rest.NpmSubResource.packageVersion(NpmSubResource.java:79)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:195)
    at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:427)
    at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:214)
    at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:75)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
    at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
    at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:543)
    at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:305)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:609)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:818)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1623)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748) 

 

Other URLs "nearby" to .../-/ping return 404, so it looks like npm ping support must have some code behind it, but it doesn't seem to be working as intended.

We ran into this issue when using the https://github.com/release-it/release-it tool, which recently added an npm ping health check before attempting to interact with a repository. We've disabled these checks as a workaround, but at the moment that tool (and possibly others; I am not very familiar with the Node ecosystem) is not usable out of the box with Artifactory. For reference, here is the relevant issue in release-it: https://github.com/release-it/release-it/issues/637
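A reproduction sketch using the URL from the report (the repository key 'npm' is taken from that URL):

npm ping --registry http://xx.xx.xx.xx:8081/artifactory/api/npm/npm/
# or the equivalent direct request:
curl -i http://xx.xx.xx.xx:8081/artifactory/api/npm/npm/-/ping   # returns HTTP 500 with the NullPointerException above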






[RTFACT-22340] ClientAbortException during upload results in 503 response when sharding Created: 02/Jun/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.5.0, 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ariel Kabov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When using a sharding binary provider, if an upload request is terminated mid-upload:
1. Artifactory responds with HTTP status code 503, while I would expect a 499:

2020-06-02T10:54:35.408Z|307c5fa7bf4b424|82.81.195.5|admin|PUT|/generic-local/1589834820101.zip|503|1153074080|0|19528|curl/7.64.1

2. Very ugly exceptions and ERRORs appear in the Artifactory logs:

2020-06-02T10:54:15.883Z [jfrt ] [INFO ] [307c5fa7bf4b424 ] [o.a.e.UploadServiceImpl:399   ] [http-nio-8081-exec-2] - Deploy to 'generic-local:1589834820101.zip' Content-Length: 1153074080
2020-06-02T10:54:35.365Z [jfrt ] [WARN ] [307c5fa7bf4b424 ] [.a.a.f.e.c.EventualStorage:158] [pool-33-thread-2    ] - Caught exception while saving incoming stream to file /opt/jfrog/artifactory/var/data/artifactory/eventual/_pre/dbRecord412617330208283998.bin : Failed to read stream: Failed to read stream from inputStream to buffer. deleting corrupted file
2020-06-02T10:54:35.366Z [jfrt ] [ERROR] [307c5fa7bf4b424 ] [s.b.c.RemoteBinaryProvider:464] [pool-33-thread-1    ] - Failed to query remote Binary Provider node art2 at http://localhost:8046/artifactory/binarystore: null
2020-06-02T10:54:35.386Z [jfrt ] [ERROR] [307c5fa7bf4b424 ] [ShardingBinaryProviderImpl:327] [http-nio-8081-exec-2] - Failed to stream binary to sub provider: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream from inputStream to buffer
java.util.concurrent.ExecutionException: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream from inputStream to buffer
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.waitForAllProviders(ShardingBinaryProviderImpl.java:323)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.writeToSubProviders(ShardingBinaryProviderImpl.java:254)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.addStream(ShardingBinaryProviderImpl.java:225)
	at org.jfrog.storage.binstore.providers.FileCacheBinaryProviderImpl.addStream(FileCacheBinaryProviderImpl.java:145)
	at org.artifactory.storage.db.binstore.service.BinaryServiceImpl.addBinary(BinaryServiceImpl.java:392)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:295)
	at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy219.addBinary(Unknown Source)
	at org.artifactory.repo.service.RepositoryServiceImpl.saveResource(RepositoryServiceImpl.java:1895)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
	at com.sun.proxy.$Proxy176.saveResource(Unknown Source)
	at org.artifactory.engine.UploadServiceImpl.uploadItemWithContent(UploadServiceImpl.java:570)
	at org.artifactory.engine.UploadServiceImpl.uploadItemWithProvidedContent(UploadServiceImpl.java:552)
	at org.artifactory.engine.UploadServiceImpl.uploadItem(UploadServiceImpl.java:427)
	at org.artifactory.engine.UploadServiceImpl.uploadFile(UploadServiceImpl.java:418)
	at org.artifactory.engine.UploadServiceImpl.uploadArtifact(UploadServiceImpl.java:400)
	at org.artifactory.engine.UploadServiceImpl.adjustResponseAndUpload(UploadServiceImpl.java:222)
	at org.artifactory.engine.UploadServiceImpl.validateRequestAndUpload(UploadServiceImpl.java:188)
	at org.artifactory.engine.UploadServiceImpl.upload(UploadServiceImpl.java:131)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.artifactory.request.aop.RequestAdvice.invoke(RequestAdvice.java:67)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy221.upload(Unknown Source)
	at org.artifactory.webapp.servlet.RepoFilter.doUpload(RepoFilter.java:284)
	at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:176)
	at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:513)
	at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:379)
	at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:249)
	at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:78)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:27)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
	at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream from inputStream to buffer
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:36)
	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2314)
	at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)
	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2291)
	at org.apache.commons.io.IOUtils.copy(IOUtils.java:2246)
	at org.apache.commons.io.FileUtils.copyToFile(FileUtils.java:1530)
	at org.apache.commons.io.FileUtils.copyInputStreamToFile(FileUtils.java:1506)
	at org.jfrog.storage.binstore.providers.tools.FilePersistenceHelper.saveStreamToTempFile(FilePersistenceHelper.java:55)
	at org.artifactory.addon.filestore.eventual.cluster.EventualStorage.addStream(EventualStorage.java:152)
	at org.artifactory.addon.filestore.eventual.cluster.EventualClusterBinaryProvider.addStream(EventualClusterBinaryProvider.java:157)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.lambda$addStreamsInParallel$4(ShardingBinaryProviderImpl.java:362)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at org.artifactory.opentracing.TraceableRunnableDecorator.run(TraceableRunnableDecorator.java:25)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	... 1 common frames omitted
Caused by: org.jfrog.storage.binstore.concurrent.ConcurrentBufferInputStreamException: Failed to read stream from inputStream to buffer
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryStreamLeg.read(ConcurrentBufferInputBinaryStreamLeg.java:77)
	at java.base/java.io.InputStream.read(InputStream.java:205)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:34)
	... 15 common frames omitted
Caused by: org.jfrog.storage.binstore.concurrent.ConcurrentBufferInputStreamException: Failed to read 32768 bytes stream from inputStream to buffer
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryProviderStream.reloadBuffer(ConcurrentBufferInputBinaryProviderStream.java:63)
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryStreamLeg.read(ConcurrentBufferInputBinaryStreamLeg.java:65)
	... 17 common frames omitted
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream: java.io.EOFException: Unexpected EOF read on the socket
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:44)
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryProviderStream.reloadBuffer(ConcurrentBufferInputBinaryProviderStream.java:55)
	... 18 common frames omitted
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: java.io.EOFException: Unexpected EOF read on the socket
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:44)
	at org.jfrog.storage.binstore.providers.SavedToFileInputStream.read(SavedToFileInputStream.java:83)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:42)
	... 19 common frames omitted
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:348)
	at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:663)
	at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:370)
	at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:183)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:290)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351)
	at org.jfrog.storage.binstore.common.Sha1Sha2Md5ChecksumInputStream.read(Sha1Sha2Md5ChecksumInputStream.java:134)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:42)
	... 21 common frames omitted
Caused by: java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:742)
	at org.apache.coyote.http11.Http11InputBuffer.access$300(Http11InputBuffer.java:40)
	at org.apache.coyote.http11.Http11InputBuffer$SocketInputBuffer.doRead(Http11InputBuffer.java:1092)
	at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:140)
	at org.apache.coyote.http11.Http11InputBuffer.doRead(Http11InputBuffer.java:263)
	at org.apache.coyote.Request.doRead(Request.java:581)
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:344)
	... 28 common frames omitted
2020-06-02T10:54:35.390Z [jfrt ] [ERROR] [307c5fa7bf4b424 ] [ShardingBinaryProviderImpl:327] [http-nio-8081-exec-2] - Failed to stream binary to sub provider: org.apache.http.client.ClientProtocolException
java.util.concurrent.ExecutionException: org.apache.http.client.ClientProtocolException
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.waitForAllProviders(ShardingBinaryProviderImpl.java:323)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.writeToSubProviders(ShardingBinaryProviderImpl.java:254)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.addStream(ShardingBinaryProviderImpl.java:225)
	at org.jfrog.storage.binstore.providers.FileCacheBinaryProviderImpl.addStream(FileCacheBinaryProviderImpl.java:145)
	at org.artifactory.storage.db.binstore.service.BinaryServiceImpl.addBinary(BinaryServiceImpl.java:392)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:295)
	at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy219.addBinary(Unknown Source)
	at org.artifactory.repo.service.RepositoryServiceImpl.saveResource(RepositoryServiceImpl.java:1895)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
	at com.sun.proxy.$Proxy176.saveResource(Unknown Source)
	at org.artifactory.engine.UploadServiceImpl.uploadItemWithContent(UploadServiceImpl.java:570)
	at org.artifactory.engine.UploadServiceImpl.uploadItemWithProvidedContent(UploadServiceImpl.java:552)
	at org.artifactory.engine.UploadServiceImpl.uploadItem(UploadServiceImpl.java:427)
	at org.artifactory.engine.UploadServiceImpl.uploadFile(UploadServiceImpl.java:418)
	at org.artifactory.engine.UploadServiceImpl.uploadArtifact(UploadServiceImpl.java:400)
	at org.artifactory.engine.UploadServiceImpl.adjustResponseAndUpload(UploadServiceImpl.java:222)
	at org.artifactory.engine.UploadServiceImpl.validateRequestAndUpload(UploadServiceImpl.java:188)
	at org.artifactory.engine.UploadServiceImpl.upload(UploadServiceImpl.java:131)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.artifactory.request.aop.RequestAdvice.invoke(RequestAdvice.java:67)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at com.sun.proxy.$Proxy221.upload(Unknown Source)
	at org.artifactory.webapp.servlet.RepoFilter.doUpload(RepoFilter.java:284)
	at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:176)
	at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:513)
	at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:379)
	at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:249)
	at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:78)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:27)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
	at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.http.client.ClientProtocolException: null
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
	at org.jfrog.client.http.CloseableHttpClientDecorator.doExecute(CloseableHttpClientDecorator.java:107)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
	at org.jfrog.storage.binstore.client.RemoteBinaryProvider.execute(RemoteBinaryProvider.java:438)
	at org.jfrog.storage.binstore.client.RemoteBinaryProvider.addStreamInternal(RemoteBinaryProvider.java:163)
	at org.jfrog.storage.binstore.client.RemoteBinaryProvider.addStream(RemoteBinaryProvider.java:145)
	at org.artifactory.addon.filestore.multiple.ShardingBinaryProviderImpl.lambda$addStreamsInParallel$4(ShardingBinaryProviderImpl.java:362)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at org.artifactory.opentracing.TraceableRunnableDecorator.run(TraceableRunnableDecorator.java:25)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	... 1 common frames omitted
Caused by: org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity
	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:108)
	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	... 13 common frames omitted
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream from inputStream to buffer
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:36)
	at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:133)
	at org.apache.http.impl.execchain.RequestEntityProxy.writeTo(RequestEntityProxy.java:121)
	at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)
	at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:152)
	at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)
	at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
	... 15 common frames omitted
Caused by: org.jfrog.storage.binstore.concurrent.ConcurrentBufferInputStreamException: Failed to read stream from inputStream to buffer
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryStreamLeg.read(ConcurrentBufferInputBinaryStreamLeg.java:77)
	at java.base/java.io.InputStream.read(InputStream.java:205)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:34)
	... 24 common frames omitted
Caused by: org.jfrog.storage.binstore.concurrent.ConcurrentBufferInputStreamException: Failed to read 32768 bytes stream from inputStream to buffer
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryProviderStream.reloadBuffer(ConcurrentBufferInputBinaryProviderStream.java:63)
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryStreamLeg.read(ConcurrentBufferInputBinaryStreamLeg.java:65)
	... 26 common frames omitted
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream: java.io.EOFException: Unexpected EOF read on the socket
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:44)
	at org.artifactory.addon.filestore.multiple.concurrent.blocking.ConcurrentBufferInputBinaryProviderStream.reloadBuffer(ConcurrentBufferInputBinaryProviderStream.java:55)
	... 27 common frames omitted
Caused by: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: java.io.EOFException: Unexpected EOF read on the socket
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:44)
	at org.jfrog.storage.binstore.providers.SavedToFileInputStream.read(SavedToFileInputStream.java:83)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:42)
	... 28 common frames omitted
Caused by: org.apache.catalina.connector.ClientAbortException: java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:348)
	at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:663)
	at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:370)
	at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:183)
	at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:290)
	at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351)
	at org.jfrog.storage.binstore.common.Sha1Sha2Md5ChecksumInputStream.read(Sha1Sha2Md5ChecksumInputStream.java:134)
	at org.jfrog.storage.binstore.ifc.ClientStream.read(ClientStream.java:42)
	... 30 common frames omitted
Caused by: java.io.EOFException: Unexpected EOF read on the socket
	at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:742)
	at org.apache.coyote.http11.Http11InputBuffer.access$300(Http11InputBuffer.java:40)
	at org.apache.coyote.http11.Http11InputBuffer$SocketInputBuffer.doRead(Http11InputBuffer.java:1092)
	at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:140)
	at org.apache.coyote.http11.Http11InputBuffer.doRead(Http11InputBuffer.java:263)
	at org.apache.coyote.Request.doRead(Request.java:581)
	at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:344)
	... 37 common frames omitted
2020-06-02T10:54:35.406Z [jfrt ] [WARN ] [307c5fa7bf4b424 ] [.r.ArtifactoryResponseBase:125] [http-nio-8081-exec-2] - Sending HTTP error code 503: 503 : Failed to stream binary to sub provider: org.jfrog.storage.binstore.ifc.ClientInputStreamException: Failed to read stream: Failed to read stream from inputStream to buffer

Steps to reproduce:
1. Set up Artifactory 7.4.3 HA with the following binarystore.xml:

<config version="v1">
    <chain template="cluster-s3"/>
    <provider type="s3" id="s3">
        <bucketName>NAME</bucketName>
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>IDENTITY</identity>
        <credential>CREDS</credential>
        <path>PATH</path>
    </provider>
</config>

2. Start deploying a large file using cURL and terminate the upload in the middle using CTRL+C.
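A sketch of step 2, matching the request log line above (host, credentials and file name are placeholders):

curl -u admin:password -T ./large-file.zip \
     "http://localhost:8081/artifactory/generic-local/1589834820101.zip"
# Press Ctrl+C while the transfer is in progress; Artifactory then responds with 503
# and logs the errors shown above.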






[RTFACT-22336] new Permission Role - Read All Resources (for Xray reports and APIs) Created: 02/Jun/20  Updated: 05/Jun/20

Status: Ready for Code Review
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Dganit Arnon Assignee: Igor Usenko [EXT]
Resolution: Unresolved Votes: 0
Labels: None


 Description   

A new role is required to access and generate the reports, and it will be used for other search component capabilities in the future. The new role name should be "Read All Resources".
Users with this role should have the ability to access the report sections and to generate reports on all resources (All Repositories, Builds, Release Bundles).
This role can be granted at the User/Group level (same as the ManageWatch and Manage Policy roles).






[RTFACT-22335] Running test profiles on multiple machines - Rewrite Jenkins Pipeline Created: 02/Jun/20  Updated: 03/Jun/20

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Omri Ziv Assignee: Omri Ziv
Resolution: Unresolved Votes: 0
Labels: None


 Description   

.






[RTFACT-22331] Old Eventual Binary Provider task-manager: implement as a background job Created: 02/Jun/20  Updated: 02/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: 7.6.0

Type: Story Priority: Normal
Reporter: Uriah Levy Assignee: Aviv Anidjar
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The new CNHA mechanism for the old eventual provider uses an approach that acquires the manager role once (the first node to start up, regardless of which), unlike the original mechanism that ran only on the primary node.

This can create confusion as to which node runs the manager: there is no easy way to tell which one holds the lock or which one should be restarted if it crashes, and it takes up to ~30 minutes for an orphan lock of this type to be cleaned up.

A better implementation would be a background job that runs on all nodes and eagerly tries to acquire the task manager role in case the owner shuts down during runtime. This would eliminate the need for a restart and the need to know which node holds the role.






[RTFACT-22325] Artifactory shows 500 error when we click on the Test button by adding http://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/  URL in the remote repository Created: 02/Jun/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.5.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Santhosh Pesari Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory shows a 500 error when we create a new RPM remote repository, add the URL
http://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/ and click the Test button.

If we save without testing, then it works fine and packages can be pulled successfully.
Please go through this screen recording: https://recordit.co/gR51CuvjQa
Request.log
2020-05-28T21:40:52.974Z|60d673bf6865a8a5|52.8.67.255|santhosh|POST|/api/admin/repositories/testremote|500|1185|0|1020|Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36

Artifactory server.log:-
2020-05-28T21:40:52.973Z [jfrt ] [ERROR] [60d673bf6865a8a5] [c.e.m.GlobalExceptionMapper:48] [ttp-nio-8081-exec-10] - null
java.lang.NullPointerException: null

 

The remote repository test connection sends a HEAD request to the remote URL. The path http://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/ does not support HEAD requests.

You can verify it with

curl http://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/

vs

curl -I http://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/ (that returns 404).

Finally, instead of throwing a NullPointerException, the remote repository test connection should fall back to a GET request when the HEAD request fails, and return an informative error to the user.






[RTFACT-22323] The exclude pattern is not respected when REST API is triggered Created: 01/Jun/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.17.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shilpa Kallaganad Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship
relates to RTFACT-21995 Seeing so many unnecessary calls to r... Open

 Description   

When an exclude pattern is configured for the remote repository (for example: /com/jfrog/**), the proxy is used, and the List Docker Tags REST API command is triggered by the client, the request goes directly to the external remote registry and the exclude pattern is not respected.

2020-05-27 19:30:51,637 [http-nio-9050-exec-1] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://quay.io/v2/com/jfrog/org/tools/app/tags/list?': HTTP/1.1 404 NOT FOUND

2020-05-27 19:30:51,051 [http-nio-9050-exec-1] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry.access.redhat.com/v2/com/jfrog/org/tools/app/tags/list?': HTTP/1.1 404 Not Found

2020-05-27 19:30:50,588 [http-nio-9050-exec-1] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry.redhat.io/v2/com/jfrog/org/tools/app/tags/list?': HTTP/1.1 404 Not Found

2020-05-27 19:30:49,743 [http-nio-9050-exec-1] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry-1.docker.io/v2/com/jfrog/org/tools/app/tags/list?': HTTP/1.1 401 Unauthorized
 

This floods the logs with errors and likely has a performance impact for users.
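
For reference, the List Docker Tags call that triggers these upstream requests has the following form (a sketch; host, credentials and the remote repository key are placeholders, the image path is taken from the logs above):

curl -u user:password "http://artifactory.example.com/artifactory/api/docker/<remote-repo-key>/v2/com/jfrog/org/tools/app/tags/list"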






[RTFACT-22321] Copying "Yum Groups" from one repo to another loses group information Created: 01/Jun/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: YUM
Affects Version/s: 6.11.6
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Rami Bechara Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi,
 
Running: Artifactory Professional 6.11.6 rev 61106900
 
Problem:
I have an Artifactory repository (REPO1) containing a "Yum Group" (YUMGROUP). When I copy this YUMGROUP from REPO1 to another Artifactory repository (REPO2), the "Yum Group" definition is lost.
I copy the "Yum Group" using the Artifactory web interface by right-clicking on YUMGROUP and selecting "Copy".
 
Configuration:
Each repository (REPO1, REPO2) is configured with:
RPM Group File Names: comps.xml,groups.xml
The "Yum Group" file name is comps.xml and was pushed to Artifactory within a tar file with the following structure:
repodata/comps.xml
YUMGROUP/file1.rpm
YUMGROUP/file2.rpm
...
 
Tests on REPO1 to make sure the "Yum Group" is well defined and working:

  • Using "yum grouplist YUMGROUP", yum lists "YUMGROUP" as a known Yum Group.
  • On Artifactory, in the repodata directory I can see the file fd3b0bd11229e31279ade1fdf5f61d21f9945748-comps.xml.

Tests on the destination repository (REPO2):

  • Using "yum grouplist YUMGROUP", yum returns the following message:
    Warning: no environments/groups match: intelerad-cluster
  • On Artifactory, in the repodata directory I cannot find any file matching *comps.xml.

Also, when I distribute REPO1/YUMGROUP to Bintray, the Yum Group definition isn't present in Bintray.

If you need any further information I'll be pleased to provide it.

Regards,





[RTFACT-22318] Investigate large Debian repository move operation issues Created: 01/Jun/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shay Bagants Assignee: Alexei Vainshtein
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Investigate large Debian repository move operation issues






[RTFACT-22316] test Created: 01/Jun/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Xray
Affects Version/s: 3.4.0
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Lina Daher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

xxx






[RTFACT-22313] Artifactory failed to start Created: 01/Jun/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Din Wiesenfeld Assignee: Shay Bagants
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Two customers' instances failed to start after redeployment. We found the following exceptions in the Artifactory log:

2020-06-01T06:43:32.691Z [jfrt ] [ERROR] [2de8a9f868306daf] [GenericDBPrivilegesVerifier:43] [ocalhost-startStop-2] - Could not determine sufficient privileges
org.postgresql.util.PSQLException: ERROR: relation "t1artifactory" already exists
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:303)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:289)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:266)
	at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:246)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
	at com.sun.proxy.$Proxy62.executeUpdate(Unknown Source)
	at org.jfrog.storage.priviledges.GenericDBPrivilegesVerifier.isSufficientPrivileges(GenericDBPrivilegesVerifier.java:39)
	at org.jfrog.storage.priviledges.postgres.PostgresDBPrivilegesVerifier.isSufficientPrivileges(PostgresDBPrivilegesVerifier.java:13)
	at org.artifactory.storage.db.init.DbInitializationManager.enforceDBPrivileges(DbInitializationManager.java:200)
	at org.jfrog.storage.util.DbUtils.doWithConnection(DbUtils.java:496)
	at org.artifactory.storage.db.init.DbInitializationManager.runEnforceDBPrivilegesConversion(DbInitializationManager.java:189)
	at org.artifactory.storage.db.init.DbInitializationManager.init(DbInitializationManager.java:54)
	at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initArtifactoryInstallation(BasicConfigurationManager.java:140)
	at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initialize(BasicConfigurationManager.java:125)
	at org.artifactory.lifecycle.webapp.servlet.ArtifactoryHomeConfigListener.contextInitialized(ArtifactoryHomeConfigListener.java:57)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4701)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5167)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
	at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:614)
	at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1823)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
2020-06-01T06:43:32.692Z [jfrt ] [ERROR] [2de8a9f868306daf] [.i.DbInitializationManager:205] [ocalhost-startStop-2] - Error while verifying DB privileges. Not starting migration.
2020-06-01T06:43:32.692Z [jfrt ] [ERROR] [2de8a9f868306daf] [d.i.DbInitializationManager:58] [ocalhost-startStop-2] - DB Schema initialization manager failed to init.
java.lang.IllegalStateException: java.lang.IllegalStateException: Error while verifying DB privileges
	at org.artifactory.storage.db.init.DbInitializationManager.runEnforceDBPrivilegesConversion(DbInitializationManager.java:191)
	at org.artifactory.storage.db.init.DbInitializationManager.init(DbInitializationManager.java:54)
	at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initArtifactoryInstallation(BasicConfigurationManager.java:140)
	at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initialize(BasicConfigurationManager.java:125)
	at org.artifactory.lifecycle.webapp.servlet.ArtifactoryHomeConfigListener.contextInitialized(ArtifactoryHomeConfigListener.java:57)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4701)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5167)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
	at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:614)
	at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1823)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalStateException: Error while verifying DB privileges
	at org.artifactory.storage.db.init.DbInitializationManager.enforceDBPrivileges(DbInitializationManager.java:206)
	at org.jfrog.storage.util.DbUtils.doWithConnection(DbUtils.java:496)
	at org.artifactory.storage.db.init.DbInitializationManager.runEnforceDBPrivilegesConversion(DbInitializationManager.java:189)
	... 17 common frames omitted
Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: relation "t1artifactory" already exists
	at org.jfrog.storage.priviledges.GenericDBPrivilegesVerifier.isSufficientPrivileges(GenericDBPrivilegesVerifier.java:44)
	at org.jfrog.storage.priviledges.postgres.PostgresDBPrivilegesVerifier.isSufficientPrivileges(PostgresDBPrivilegesVerifier.java:13)
	at org.artifactory.storage.db.init.DbInitializationManager.enforceDBPrivileges(DbInitializationManager.java:200)
	... 19 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "t1artifactory" already exists
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:303)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:289)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:266)
	at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:246)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
	at com.sun.proxy.$Proxy62.executeUpdate(Unknown Source)
	at org.jfrog.storage.priviledges.GenericDBPrivilegesVerifier.isSufficientPrivileges(GenericDBPrivilegesVerifier.java:39)
	... 21 common frames omitted

Following Shayb's guidance, we dropped the 't1artifactory' table and restarted Artifactory.
Artifactory successfully started.
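
A minimal sketch of that workaround, assuming direct psql access to the Artifactory database (connection details are placeholders):

psql -h db-host -U artifactory -d artifactory -c 'DROP TABLE t1artifactory;'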

Artifactory version: 7.5.2






[RTFACT-22312] REST search by vcsRevision Created: 01/Jun/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Barry Lapthorn Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi, not entirely sure where to log this - I've searched the docs and I can't find an answer to this, so raising this as a bug.

There's a `vcsRevision` field (property?) on most artifacts which contains the git `sha1` for that build.

I want to search by that and get back all "build info" objects.

How do I use the REST API to search for all builds for a given git commit?

I looked here:

https://www.jfrog.com/confluence/display/RTF6X/Artifactory+REST+API#ArtifactoryRESTAPI-SEARCHES

and tried a few; I thought property search would work (but it doesn't). I've tried most of the others without any success.
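
For reference, the property search that was attempted presumably looked something like the following (a sketch; host, credentials, repository and the property name are placeholders, assuming the revision is stored as an artifact property):

curl -u user:password "https://artifactory.example.com/artifactory/api/search/prop?vcs.revision=<sha1>&repos=libs-release-local"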

Thanks,

Barry.






[RTFACT-22307] Connection leak when client abort connection before TLS handshake complete Created: 31/May/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.8.15, 6.20.0, 6.19.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yu Feng Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When a TLS connection is aborted on the client side before the TLS handshake is completed, Artifactory doesn't close the connection on the server side and leaves it in the CLOSE-WAIT state. One way to reproduce this is:

  1. Configure Tomcat to use an incorrect Java keystore password (so the TLS handshake can never be established, giving us time to abort the connection on the client side)
    Example config
        <Connector port="8094" protocol="HTTP/1.1"
          SSLEnabled="true"
          scheme="https"
          secure="true"
          clientAuth="false"
          sslProtocol="TLS"
          sslEnabledProtocols="SSLv2Hello,TLSv1,TLSv1.1,TLSv1.2"
          keystoreFile="conf/ssl.keystore"
          keystorePass="incorrect"
          maxThreads="1000"
          minSpareThreads="200"
          enableLookups="false"
          disableUploadTimeout="true"
          acceptCount="1000"
          connectionTimeout="40000"
          URIEncoding="UTF-8"
          maxConnections="-1"
          maxParameterCount="10000"
          sendReasonPhrase="true"
          relaxedPathChars='[]'
          relaxedQueryChars='[]'
        />
    
  2. Restart Artifactory and list connections using ss
    ss -a 'sport = :8094'
    Netid State      Recv-Q Send-Q                                         Local Address:Port                                                          Peer Address:Port
    tcp   LISTEN     0      128                                                        *:8094                                                                     *:*
    
  3. Use openssl to start a TLS connection
    openssl s_client -connect localhost:8094 -showcerts
    

    openssl will hang because it's waiting for a handshake response from the Artifactory server, which doesn't respond because of the incorrect keystore.
    ss shows one established connection, which is expected:

    ss -a 'sport = :8094'
    Netid State      Recv-Q Send-Q                                         Local Address:Port                                                          Peer Address:Port
    tcp   LISTEN     1      128                                                        *:8094                                                                     *:*
    tcp   ESTAB      301    0                                                  127.0.0.1:8094                                                             127.0.0.1:39581
    
  4. Kill openssl on the client side using "pkill openssl"
    [yufeng@yufeng-ld2 ~]$ openssl s_client -connect localhost:8094 -showcerts
    CONNECTED(00000003)
    Terminated
    
  5. Run ss again on the server side; there is a connection left in CLOSE-WAIT
    ss -a 'sport = :8094'
    Netid State      Recv-Q Send-Q                                         Local Address:Port                                                          Peer Address:Port
    tcp   LISTEN     1      128                                                        *:8094                                                                     *:*
    tcp   CLOSE-WAIT 302    0                                                  127.0.0.1:8094                                                             127.0.0.1:39581
    

This issue happens on Artifactory v6.19 as well as v6.8 (other versions were not tested).






[RTFACT-22298] JIRA feature A doc test Created: 31/May/20  Updated: 31/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Lina Daher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Finish-to-Finish link (WBSGantt)
can't finish until the linked issue is done. RTFACT-22296 JiraDocTest Open
Epic Name: JIRA feature A doc test

 Description   

test






JiraDocTest (RTFACT-22296)

[RTFACT-22297] TestJiradoc2 Created: 31/May/20  Updated: 31/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Sub-task Priority: Normal
Reporter: Lina Daher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

test






[RTFACT-22296] JiraDocTest Created: 31/May/20  Updated: 31/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Lina Daher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Finish-to-Finish link (WBSGantt)
Linked one can't finish until this issue is done. RTFACT-22298 JIRA feature A doc test Open
Sub-Tasks:
Key
Summary
Type
Status
Assignee
RTFACT-22297 TestJiradoc2 Sub-task Open  

 Description   

test






[RTFACT-22292] Ability to modify DB configs during Artifactory upgrades Created: 31/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Documentation
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Prathibha Ayyappan Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Cisco DBAs have moved all Artifactory indexes to another tablespace for performance purposes. This means all new indexes created during upgrades have to be moved after the upgrade, which may sometimes require a short downtime. We would like the ability to specify the tablespace for new indexes before the upgrade, so they are created in the correct tablespace from the start.
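
For illustration, the post-upgrade index move described above could look like this (a sketch assuming a PostgreSQL backend; connection details, index and tablespace names are placeholders):

psql -h db-host -U artifactory -d artifactory -c 'ALTER INDEX <index_name> SET TABLESPACE <tablespace_name>;'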






[RTFACT-22290] Traffic log - failure to parse entry with no IP Created: 31/May/20  Updated: 03/Jun/20

Status: Done
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.5.0
Fix Version/s: 7.6.0, 7.5.6

Type: Bug Priority: Normal
Reporter: Rotem Kfir Assignee: Rotem Kfir
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Trigger
Regression:
Yes

 Description   

It seems that during event-based pull replication it is possible for entries with no IP and with content length 0 to be generated in the Traffic log, e.g.:

20200527120002|4c5608e5df3c1326|572|UPLOAD||event-test:a|0

The exception thrown is:

2020-05-28T08:56:14.206Z [jfrt ] [ERROR] [374b794dc778fcb9] [o.a.t.r.TrafficStreamParser:84] [http-nio-8081-exec-1] - Failed ton parse entry 20200527120002|4c5608e5df3c1326|572|UPLOAD||event-test:a|0 - skipping...
java.lang.IllegalArgumentException: No enum constant org.artifactory.traffic.TrafficAction.572
	at java.base/java.lang.Enum.valueOf(Enum.java:240)
	at org.artifactory.traffic.TrafficAction.valueOf(TrafficAction.java:28)
	at org.artifactory.traffic.entry.TokenizedTrafficEntryFactory.newTrafficEntry(TokenizedTrafficEntryFactory.java:55)
	at org.artifactory.traffic.read.TrafficStreamParser.parse(TrafficStreamParser.java:81)
	at org.artifactory.traffic.read.TrafficReader.getEntries(TrafficReader.java:117)
	at org.artifactory.traffic.read.TrafficReader.getEntries(TrafficReader.java:95)
	at org.artifactory.traffic.read.TrafficReader.getEntries(TrafficReader.java:67)
	at org.artifactory.traffic.TrafficServiceImpl.getEntryList(TrafficServiceImpl.java:119)
	at org.artifactory.traffic.TrafficServiceImpl.getTrafficUsageWithFilterCurrentNode(TrafficServiceImpl.java:154)
	at jdk.internal.reflect.GeneratedMethodAccessor741.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
	at com.sun.proxy.$Proxy255.getTrafficUsageWithFilterCurrentNode(Unknown Source)
	at org.artifactory.rest.resource.traffic.TrafficResource.getTransferUsageCurrentNode(TrafficResource.java:68)
	at jdk.internal.reflect.GeneratedMethodAccessor740.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:243)
	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:195)
	at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.lambda$doFilter$1(ArtifactoryAuthenticationFilterChain.java:134)
	at org.artifactory.addon.ha.rest.HaRestAuthenticationFilter.doFilter(HaRestAuthenticationFilter.java:76)
	at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.doFilter(ArtifactoryAuthenticationFilterChain.java:171)
	at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:307)
	at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:176)
	at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:127)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:78)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:75)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:27)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:126)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:543)
	at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:305)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:615)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:818)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1627)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.base/java.lang.Thread.run(Thread.java:834)





[RTFACT-22289] System/Repository Import shows 'null' Created: 07/Apr/20  Updated: 31/May/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: John Wright Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2020-05-31 at 10.08.42.png     PNG File Screen Shot 2020-05-31 at 10.08.45.png     PNG File Screen Shot 2020-05-31 at 10.09.48.png    

 Description   

When running an import on a directory with invalid permissions (read/execute and/or ownership), the UI shows 'null'.

An example of a more informative UI error if nothing is found at the directory: "No files or folders found. Please make sure the path is correct and the files can be accessed by Artifactory."

 

Steps to reproduce:
Run Artifactory as a non-root user. Take a system export, then remove read and execute permissions or change ownership to a non-Artifactory user. Attempt to import (you will need to specify the path rather than use navigation in the UI). 'null' appears when clicking Import.
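
For illustration, the permission change in the steps above could be done like this (a sketch; the export path is a placeholder):

chmod -R a-rx /backup/artifactory-export
# or, alternatively, change ownership to a non-Artifactory user:
chown -R root:root /backup/artifactory-export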



 Comments   
Comment by Aviran Barda [ 31/May/20 ]

It is a backend response - moving to the Artifactory project.





[RTFACT-22288] Integration with Event Client v0.0.6 Created: 31/May/20  Updated: 03/Jun/20

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Yevdo Abramov Assignee: Yevdo Abramov
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Integration with Event Client v0.0.6

The API has changed to support header forwarding and traceId, as well as a configurable router hostname.






[RTFACT-22287] Update last_login_ip and last_login_time for user in access_users table Created: 30/May/20  Updated: 30/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Prathibha Ayyappan Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Right now the only way to find users that have recently used Artifactory is by parsing access or request logs. Most companies will have multiple environments and HA instances, so aggregating this data can become tedious. Consider updating the access_users table with last_login_time and last_login_ip so that Artifactory administrators can either use the "Get User Details" REST API call or query the database directly to get this information.
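
For reference, the "Get User Details" call mentioned above has the following form (a sketch; host, admin credentials and user name are placeholders); the request is that its response also expose the new last_login_time and last_login_ip fields:

curl -u admin:password "https://artifactory.example.com/artifactory/api/security/users/<username>"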






[RTFACT-22284] JFrog Artifactory intermittently rejects authentication with 403 forbidden Created: 29/May/20  Updated: 29/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: RPM
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Lester Guerzon Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory, permissions, repository
Environment:

JFrog Platform 7.4.3 running in Kubernetes



 Description   

Good day,

We would like your assistance to check on a possible bug with JFrog Artifactory 7.4.

We are having a hard time configuring zypper clients (SLES and openSUSE) to work with our RPM repository due to some "authentication" issues. At first we thought it was just an issue with zypper and openSUSE, so we initially focused our efforts on making things work with zypper.

Error:

opensuse:~ # zypper refresh myapp
Retrieving repository 'myapp' metadata ....................................................................................................................................[error]
Repository 'myapp' is invalid.
[myapp|https://myuser@rpm.example.com/myapp-release-rpm/stable/myapp/2019/3/] Valid metadata not found at specified URL
History:
 - [myapp|https://myuser@rpm.example.com/myapp-release-rpm/stable/myapp/2019/3/] Repository type can't be determined.

Please check if the URIs defined for this repository are pointing to a valid repository.
Skipping repository 'myapp' because of the above error.
Could not refresh the repositories because of errors.
opensuse:~ #

But these authentication-related errors started to come up every now and then with yum clients as well (centOS), although with YUM it is very rare.

In a StackOverflow post, I described the issue in more detail.


So I tried a man-in-the-middle capture to see what's happening under the hood, and this is the sequence with zypper:

Scenario 1 - successful authentication

The following is the sequence of a zypper refresh --repo myrepo:

(1) zypper sends an HTTP HEAD request with the base64-encoded username::

HEAD /myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml HTTP/1.1
Host: rpm.example.com
Authorization: Basic dXNlcm5hbWU6
User-Agent: ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
Accept: */*
Connection: close

(2) jfrog responds with HTTP 401 Unauthorized with the WWW-Authenticate header:

HTTP/1.1 401 Unauthorized
Date: Thu, 28 May 2020 08:20:04 GMT
Content-Type: application/json;charset=ISO-8859-1
Connection: close
Server: Artifactory/7.4.3 70403900
X-Artifactory-Id: 2148103ba10eacbb:-16f1c4c1:172093a231a:-8000
X-Artifactory-Node-Id: artifactory-server
WWW-Authenticate: Basic realm="Artifactory Realm"

(3) zypper sends another HTTP HEAD request, this time with the base64-encoded username:password:

HEAD /myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml HTTP/1.1
Host: rpm.example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
User-Agent: ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
Accept: */*
Connection: close

(4) jfrog finally responds with an HTTP 200.

HTTP/1.1 200 OK
Date: Thu, 28 May 2020 08:20:04 GMT
Content-Type: application/xml
Content-Length: 1394
Connection: close
Server: Artifactory/7.4.3 70403900
X-Artifactory-Id: 2148103ba10eacbb:-16f1c4c1:172093a231a:-8000
X-Artifactory-Node-Id: artifactory-server
Last-Modified: Fri, 08 May 2020 10:25:19 GMT
Accept-Ranges: bytes
X-Artifactory-Filename: repomd.xml
Cache-Control: no-store

These are logged by Artifactory:

artifactory-request.log:

2020-05-28T08:20:34.566Z [5f78297c2aeabaa8] [DENIED LOGIN]   for client : username / 213.1.1.1. 
2020-05-28T08:20:34.870Z [570978212a5318e3] [ACCEPTED DOWNLOAD] myapp-release-rpm-cache:stable/myapp/2019/3/repodata/repomd.xml  for client : username / 213.1.1.1.

artifactory-access.log:

2020-05-28T08:20:34.566Z|5f78297c2aeabaa8|213.2.2.2|non_authenticated_user|HEAD|/myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml|401|-1|0|8|ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
2020-05-28T08:20:34.721Z|8018b7cbc9c424e8|213.2.2.2|username|HEAD|/myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml|200|-1|1394|3|ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
2020-05-28T08:20:34.870Z|570978212a5318e3|213.2.2.2|username|GET|/myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml|200|-1|1394|2|ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
...

So basically, zypper comes in with a HEAD request, JFrog says "you're not authenticated", zypper responds back and tries to authenticate, and finally JFrog authenticates zypper. Makes sense so far.

Scenario 2 - 403 forbidden

Do some work, then run the same zypper refresh --repo myrepo command after a few minutes or so, and here is the result:

(1) zypper sends an HTTP HEAD request with the base64-encoded username::

HEAD /myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml HTTP/1.1
Host: rpm.example.com
Authorization: Basic dXNlcm5hbWU6
User-Agent: ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
Accept: */*
Connection: close

(2) jfrog responds with HTTP 401 Unauthorized with the WWW-Authenticate header:

HTTP/1.1 401 Unauthorized
Date: Thu, 28 May 2020 08:30:44 GMT
Content-Type: application/json;charset=ISO-8859-1
Connection: close
Server: Artifactory/7.4.3 70403900
X-Artifactory-Id: 2148103ba10eacbb:-16f1c4c1:172093a231a:-8000
X-Artifactory-Node-Id: artifactory-server
WWW-Authenticate: Basic realm="Artifactory Realm"

(3) zypper sends another HTTP HEAD request, this time with the base64-encoded username:password:

HEAD /myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml HTTP/1.1
Host: rpm.example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
User-Agent: ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
Accept: */*
Connection: close

(4) this time, jfrog responds with 403 Forbidden instead of 200 OK.

HTTP/1.1 403 Forbidden
Date: Thu, 28 May 2020 08:30:44 GMT
Content-Type: application/json;charset=ISO-8859-1
Connection: close
Server: Artifactory/7.4.3 70403900
X-Artifactory-Id: 2148103ba10eacbb:-16f1c4c1:172093a231a:-8000
X-Artifactory-Node-Id: artifactory-server
WWW-Authenticate: Basic realm="Artifactory Realm"

artifactory-request.log:

2020-05-28T08:30:44.496Z [46c81a2450623166] [DENIED LOGIN]   for client : username / 213.1.1.1.
2020-05-28T08:30:44.630Z [769ed41c652daa7a] [DENIED LOGIN]   for client : username / 213.1.1.1.

artifactory-access.log:

2020-05-28T08:30:44.496Z|46c81a2450623166|213.2.2.2|non_authenticated_user|HEAD|/myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml|401|-1|0|9|ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64
2020-05-28T08:30:44.630Z|769ed41c652daa7a|213.2.2.2|non_authenticated_user|HEAD|/myapp-release-rpm/stable/myapp/2019/3/repodata/repomd.xml|403|-1|0|1|ZYpp 17.19.0 (curl 7.60.0) openSUSE-Leap-15.1-x86_64

Notice that zypper sends the same Authorization header value when asked to authenticate, but in the second scenario JFrog fails to authenticate the request.

Did anybody have this same issue with JFrog before? We are guessing this is an issue with JFrog 7 since ours was just recently upgraded, but there is no way for us to verify this. And unfortunately for us, we are on the paid plan which doesn't even have a support license.

Any suggestions and comments will be very much appreciated.






[RTFACT-22282] Anonymous user has read access via REST API for nonexistent/empty locations that it shouldn't Created: 28/May/20  Updated: 28/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matthew Wang Assignee: Matthew Wang
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Duplicate
duplicates RTFACT-19252 Empty repositories are getting listed... Open

 Description   

Steps to reproduce:
-enable anonymous access
-remove the "Anything" permission target, so that the anonymous user doesn't have read access to any repo
-anonymously view a path for a repo with no content, like http://mill.jfrog.info:12116/artifactory/example-repo-local/. See that the anonymous user can view the repo (even though there is no content)
-see same behavior for nonexistent path, like http://mill.jfrog.info:12116/artifactory/example-repo-local/.svn/
When the path has content in it though (for example if you deploy an artifact to example-repo-local), then the anonymous user receives a 401 Unauthorized as expected.
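
For reference, the behavior can be checked with an unauthenticated request against the empty path mentioned above (a sketch; -w prints only the returned status code):

curl -s -o /dev/null -w "%{http_code}\n" "http://mill.jfrog.info:12116/artifactory/example-repo-local/"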

Expected behavior:
The anonymous user receives a 401 Unauthorized when viewing the nonexistent/empty location.

This behavior is a security concern.






[RTFACT-22275] Azure access token renewal Created: 28/May/20  Updated: 28/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Tokens, Azure
Affects Version/s: 6.16.0
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Maria-Luiza Koleva Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Azure DevOps



 Description   

We would like to request a new product feature that allows access tokens to be easily renewed from inside Azure service connections. In the absence of a proper SSO solution, it seems we have only two options for service connections in Azure: basic auth or token-based.

Assuming we wanted to use token-based access with expiring tokens, we need an easy way for developers to refresh or renew their tokens that goes beyond a basic API call (https://www.jfrog.com/confluence/display/JFROG/Access+Tokens#AccessTokens-GeneratingRefreshableTokens). Would it be possible to add a refresh button to the Azure service connection, in addition to bubbling up the expiration time for a given token? If we expired tokens now, for example, there is no indication on the service connection that the token has expired, nor does it offer a way to easily regenerate or renew a token.

Can you please help prioritize either a renew button (in addition to the current verify button) or some other mechanism that would enable Azure workflows to continue working seamlessly while supporting short-lived token access without requiring manual intervention?
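
For reference, the basic API call referenced above (the one we would like to avoid exposing to developers directly) looks roughly like the following; a sketch assuming a refreshable token was originally issued, with host and token values as placeholders:

curl -X POST "https://artifactory.example.com/artifactory/api/security/token" \
  -d "grant_type=refresh_token" \
  -d "refresh_token=<refresh-token-value>" \
  -d "access_token=<expiring-access-token>"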






[RTFACT-22274] Optimize the copy API: overwrite folder1 instead of copying to the next level of the target path folder1 Created: 28/May/20  Updated: 28/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: fanjinhui Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When we use this interface to copy images, we get duplicate copies and subfolders. I want to be able to overwrite the original file or folder instead of generating subfolders.
 POST /api/copy/{srcRepoKey}/{srcFilePath}?to=/{targetRepoKey}/{targetPath}
 

Troubleshooting Steps:

 1. Copy folder1 to the target path; we get {targetRepoKey}/folder1

POST /api/copy/{srcRepoKey}/folder1?to=/{targetRepoKey}/folder1

2. Copy folder1 again to the same target path, but we get {targetRepoKey}/folder1/folder1

POST /api/copy/{srcRepoKey}/folder1?to=/{targetRepoKey}/folder1






[RTFACT-22273] Crowd SSO not working after changing SSO cookie name Created: 28/May/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Thomas Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Atlassian Crowd 3.7.0



 Description   

When configuring the Crowd "SSO cookie name" to have a custom value instead of the default "crowd.token_key", it is no longer possible to log in to Artifactory via Crowd SSO.

According to RTFACT-9598 this value should be read from Crowd (probably via REST). It seems this no longer works; maybe the REST API on the Crowd side has changed.






[RTFACT-22270] Virtual Repository Fails To List Packages When "List Remote Folder Items" Box Checked In Remote Repository Created: 28/May/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.0, 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Tim Telman Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

The virtual repository fails to list packages when the "List Remote Folder Items" box is checked in a remote repository.

 

Steps to Reproduce:

 

  1. Create a Generic remote repository (Remote1) pointing to: https://repo.continuum.io/miniconda/
  2. Create one more Generic remote repository (Remote2); in my case it points to: http://archive.ubuntu.com/ubuntu/
  3. Create a Generic virtual repository and aggregate the two remote repositories above in this virtual repository.
  4. Go to the Repository Browser and expand the newly created Generic virtual repository; you will see no packages listed.
  5. Right-click on the newly created virtual repository and select "Native Browser"; you will get the following error on this page:

{ "errors" : [ { "status" : 404, "message" : "

{\"error\":\"Failed to build dom document\"}

" } ] }

 

      6. If you go back to the settings of the remote repository (Remote1) and uncheck the "List Remote Folder Items" box, then go back to the Repository Browser and expand the virtual repository again, you will notice that the items are now listed.

      7. In addition, if you right-click and select "Native Browser", it no longer throws that error.

 

Here is the ERROR when the issue is reproduced:

2020-05-27 17:38:17,024 [http-nio-8081-exec-7] [ERROR] (o.a.r.c.e.m.GlobalExceptionMapper:48) - Failed to build dom document
java.lang.RuntimeException: Failed to build dom document
	at org.artifactory.util.XmlUtils.parse(XmlUtils.java:56)
...
Caused by: org.jdom2.input.JDOMParseException: Error on line 2: Attribute name "async" associated with an element type "script" must be followed by the ' = ' character.
...
Caused by: org.xml.sax.SAXParseException: Attribute name "async" associated with an element type "script" must be followed by the ' = ' character.

 

 

It seems the issue is related to the first remote repository (Remote1), which points to https://repo.continuum.io/miniconda/. Checking and unchecking the same "List Remote Folder Items" box in the second repository (Remote2) does not change the behavior. This was reproduced on v6.19 and v7.4.3.

 

Workaround:

 

As listed in step 6, the only way to fix this is by unchecking the "List Remote Folder Items" box for Remote1.






[RTFACT-22269] Artifactory UI should remove "buildInfo.env" prefix when displaying environment variables Created: 27/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Jason Gloege Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: artifactory, ui


 Description   

When deploying to Artifactory via the TeamCity Artifactory plugin (version 2.8), entries for properties included from a given properties file are passed along into Artifactory with the "buildInfo.env" prefix; the belief is that this prefix should be removed. Whether this should happen before storing or before displaying is unknown.

 






[RTFACT-22268] Unable to Sync Groups from openLDAP Created: 27/May/20  Updated: 27/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yogomaya Maharana Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Artifactory Pro



 Description   

Getting the error: Could not find DN for user 'username'. I have set artifactory.security.ldap.forceGroupMemberAttFullDN=true in the system properties file and restarted Artifactory.

We are unable to proceed further with the product evaluation. Any help is appreciated.






[RTFACT-22267] Failed to pull docker image from gcr.io using digest through Artifactory Created: 27/May/20  Updated: 29/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: 7.2.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Elio Marcolino Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency

 Description   

We are not able to pull images by digest from gcr.io through an Artifactory remote repo.

The pull command works when executed directly against gcr.io:

→ docker pull gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71
sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71: Pulling from knative-releases/knative.dev/serving/cmd/autoscaler
Digest: sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71
Status: Image is up to date for gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71
gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71

However, when we try the same through an Artifactory remote repo, we get the following error:

→ docker pull jfrog.local:8082/gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71
Error response from daemon: manifest for jfrog.local:8082/gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71 not found: manifest unknown: The named manifest is not known to the registry.

Using a regular tag reference instead of a digest works through Artifactory

→ docker pull jfrog.local:8082/gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler:latest
latest: Pulling from gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler
24f0c933cbef: Already exists
3c2cba919283: Already exists
ce2135b5db7f: Pull complete
8ce1d00c7e1d: Pull complete
Digest: sha256:61fc208b9c7923228275f8792288b3e356b2e80432655f237baafcf8ab7c3449
Status: Downloaded newer image for jfrog.local:8082/gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler:latest
jfrog.local:8082/gcr-io/knative-releases/knative.dev/serving/cmd/autoscaler:latest

In Artifactory logs I can see this entry:

2020-05-27T23:00:11.853Z [jfrt ] [ERROR] [79e8977b54a9f6c8] [.DockerV2RemoteRepoHandler:448] [http-nio-8081-exec-7] - Missing Manifest from gcr-io 'v2/knative-releases/knative.dev/serving/cmd/autoscaler/manifests/sha256:bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71' not found at gcr-io:knative-releases/knative.dev/serving/cmd/autoscaler/sha256__bd125e90fffb44b843a183aa00f481cddee2317c0cfde9151c2482c5c2a8ed71/manifest.json


 Comments   
Comment by Elio Marcolino [ 29/May/20 ]

We are having the same issue for other images as well:

  • gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:90562a10f5e37965f4f3332b0412afec1cf3dd1c06caed530213ca0603e52082
  • gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:3b530bbcf892aff098444ae529a9d4150dfd0cd35c97babebd90eedae34ad8af
  • gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:71f7c9f101e7e30e82a86d203fb98d6fa607c8d6ac2fcb73fd1defd365795223




[RTFACT-22266] Remove SEVERE level memory leak messages on shutdown Created: 27/May/20  Updated: 27/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Matthew Wang Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When shutting down Artifactory, catalina logs will often output messages like the below:
SEVERE: The web application [access] created a ThreadLocal with key of type [org.jfrog.access.server.service.auth.AuthenticationServiceImpl$1] (value [org.jfrog.access.server.service.auth.AuthenticationServiceImpl$1@47d83a89]) and a value of type [org.jfrog.access.server.service.auth.model.AnonymousPrincipal] (value [anonymous]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

SEVERE: The web application [artifactory] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@6dde7e88]) and a value of type [org.springframework.security.core.context.SecurityContextImpl] (value [org.springframework.security.core.context.SecurityContextImpl@ffffffff: Null authentication]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

Although these messages are not a cause for concern, since Tomcat is completely shut down when the application is undeployed (during the shutdown process) and the "leak" has no implications because the process exits completely, these messages should be mitigated or removed as they can cause confusion.






[RTFACT-22261] Support CDN in RTF with a server with CNAME configured Created: 27/May/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Nessim Ifergan Assignee: Ido Klotz
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship

 Description   

Please make sure that the redirects to the CDN still work for the supported repositories.
See the details in JFSAAS-576: https://www.jfrog.com/jira/browse/JFSAAS-576






[RTFACT-22256] Naming convention in Artifactory configuration descriptor for OAuth settings Created: 26/May/20  Updated: 26/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.1, 7.4.3
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Pavan Gonugunta Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File LDAP_settings.png     PNG File oauth_Settings.png    

 Description   

Feature description: The Artifactory configuration descriptor nests elements with the same tag name, i.e. oauthProvidersSettings. This behavior is observed when Artifactory is configured with multiple OAuth providers.

Steps to reproduce:
-> Navigate to Admin -> OAuth SSO
-> Create/configure Artifactory with multiple OAuth Providers
-> Trigger the General Configuration REST API (https://www.jfrog.com/confluence/display/RTF6X/Artifactory+REST+API#ArtifactoryRESTAPI-GeneralConfiguration), which retrieves the configuration descriptor:
http://Artifactory-URL/artifactory/api/system/configuration
-> In the returned configuration, we see that under the oauthProvidersSettings tag there is another oauthProvidersSettings tag (with the same name) which holds all of the OAuth provider configuration (see the sketch below).

I have attached the screenshots for reference.

Expected behavior:
The outer oauthProvidersSettings tag should contain an oauthProvidersSetting tag, which holds the configuration of each OAuth provider.
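A minimal sketch of how the nesting can be verified programmatically, assuming admin credentials and the General Configuration endpoint referenced in the steps above (URL and credentials are placeholders):

import xml.etree.ElementTree as ET
import requests

# Placeholder URL and credentials; the endpoint is the General Configuration API above.
resp = requests.get(
    "http://artifactory-url/artifactory/api/system/configuration",
    auth=("admin", "password"),
)
resp.raise_for_status()

root = ET.fromstring(resp.text)
# The descriptor uses a default XML namespace, so match on the local part of the tag name.
nested = [elem.tag for elem in root.iter() if elem.tag.endswith("oauthProvidersSettings")]
# With multiple OAuth providers configured, this currently reports the tag more than once,
# i.e. an oauthProvidersSettings element nested inside another oauthProvidersSettings element.
print(len(nested), nested)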






[RTFACT-22253] Crowd authenticated users are not affected by 'remember me' Created: 26/May/20  Updated: 27/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.11.3, 6.19.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Joey Naor Assignee: Unassigned
Resolution: Unresolved Votes: 3
Labels: None
Environment:

6.11.3 ZIP & 6.19.1 Docker



 Description   

Description:

The "remember me" feature does not maintain sessions for Artifactory users authenticated with Crowd, but works as intended for regular Artifactory users.

Expected behavior:

The "remember me" feature should maintain sessions for all Artifactory users, including the ones authenticating via Crowd.

Steps to reproduce (Artifactory 6.19.1):

  1. Connect a Crowd instance to Artifactory
  2. Log in to Artifactory as a Crowd user while checking the "remember me" box
  3. Quit the browser using command+Q
  4. Navigate to Artifactory once more, which prompts the login page (session is not maintained)
  5. Login to Artifactory as an Artifactory user (admin for example)
  6. Quit the browser
  7. Navigate to Artifactory, which will redirect you without asking for another login (session maintained)





[RTFACT-22236] Improve error message when uploading Docker manifest.json while config layer is not found Created: 25/May/20  Updated: 25/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.1, 7.4.4
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Aviv Blonder Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, we expect the manifest.json to be pushed after the config layer is in its right place, or at least exists in the repository. If it's not, we get a JSON parsing error, which is very generic and doesn't help with understanding the root cause.

To reproduce:

  1. Push hello-world to Artifactory.
  2. Remove its config layer (sha256__fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e) from Artifactory manually.
  3. Download the manifest.json and remove it from Artifactory
  4. Re-upload the manifest.json using the same request the client executes:
$ curl -uadmin:password -H 'content-type: application/vnd.docker.distribution.manifest.v2+json' 'http://localhost:8081/v2/docker-local/hello-world/manifests/latest' -T manifest.json 

The response:

{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{"description":"java.io.EOFException: No content to map to Object due to end of input"}}]}% 

The error in Artifactory:

2020-05-25 11:34:36,972 [http-nio-8081-exec-20] [ERROR] (o.j.r.d.m.ManifestSchema2Deserializer:44) - Unable to deserialize the manifest.json file: No content to map to Object due to end of input2020-05-25 11:34:36,972 [http-nio-8081-exec-20] [ERROR] (o.j.r.d.m.ManifestSchema2Deserializer:44) - Unable to deserialize the manifest.json file: No content to map to Object due to end of inputjava.io.EOFException: No content to map to Object due to end of input at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2775) at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2718) at org.codehaus.jackson.map.ObjectMapper.readTree(ObjectMapper.java:1558) at org.jfrog.repomd.docker.util.JsonUtil.readTree(JsonUtil.java:29) at org.jfrog.repomd.docker.manifest.ManifestSchema2Deserializer.applyAttributesFromContent(ManifestSchema2Deserializer.java:52) at org.jfrog.repomd.docker.manifest.ManifestSchema2Deserializer.deserialize(ManifestSchema2Deserializer.java:42) at org.jfrog.repomd.docker.manifest.ManifestDeserializer.deserialize(ManifestDeserializer.java:32) at org.jfrog.repomd.docker.v2.rest.handler.DockerV2LocalRepoHandler.processUploadedManifestType(DockerV2LocalRepoHandler.java:301) at org.jfrog.repomd.docker.v2.rest.handler.DockerV2LocalRepoHandler.uploadManifest(DockerV2LocalRepoHandler.java:275) at org.jfrog.repomd.docker.v2.rest.DockerV2Resource.uploadManifest(DockerV2Resource.java:81) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268) at org.glassfish.jersey.internal.Errors.process(Errors.java:316) at org.glassfish.jersey.internal.Errors.process(Errors.java:298) at org.glassfish.jersey.internal.Errors.process(Errors.java:268) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416) at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:195) at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.lambda$doFilter$1(ArtifactoryAuthenticationFilterChain.java:134) at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:215) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.artifactory.webapp.servlet.authentication.ArtifactoryBasicAuthenticationFilter.doFilter(ArtifactoryBasicAuthenticationFilter.java:96) at org.artifactory.addon.docker.rest.DockerV2AuthenticationFilter.doFilter(DockerV2AuthenticationFilter.java:200) at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.doFilter(ArtifactoryAuthenticationFilterChain.java:152) at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:311) at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:208) at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:77) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164) at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80) at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:728) at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:470) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:395) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:316) at org.artifactory.util.DockerInternalRewrite.redirect(DockerInternalRewrite.java:62) at org.artifactory.webapp.servlet.ArtifactoryFilter.redirectIfNeeded(ArtifactoryFilter.java:153) at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:109) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493) at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:564) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.base/java.lang.Thread.run(Thread.java:834) 

 

The suggested fix: before parsing the JSON, check that the content is not empty, and throw an informative error if it is.
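A minimal sketch of the suggested guard, shown in Python purely for illustration (the actual handler is Java); the point is only to reject an empty body with an informative error before handing it to the deserializer:

import json

class ManifestInvalidError(Exception):
    """Raised when an uploaded manifest.json cannot be processed."""

def parse_manifest(body: bytes) -> dict:
    # The guard suggested above: fail fast with an informative message instead of letting
    # the deserializer throw a generic "No content to map to Object" error.
    if body is None or not body.strip():
        raise ManifestInvalidError(
            "Uploaded manifest is empty - check that the manifest content "
            "and its config layer were actually sent before deserializing"
        )
    try:
        return json.loads(body)
    except json.JSONDecodeError as exc:
        raise ManifestInvalidError(f"Uploaded manifest is not valid JSON: {exc}") from exc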






[RTFACT-22228] Fix MySQL and MSSQL migration scripts (missing NOT NULL) for Access and Artifactory Created: 24/May/20  Updated: 24/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Database
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Avishay Halpren Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Trigger
was triggered by RTFACT-22125 Increase the database columns length ... In Progress

 Description   

When creating a migration script which modifies a column that is NOT NULL, the ALTER TABLE statement must include NOT NULL again, or else the column becomes NULLABLE (this happens in both MySQL and MSSQL).

 For example:

in file: mysql.sql

CREATE TABLE artifactory_servers (
  server_id                VARCHAR(128) NOT NULL,

... 

in file mysql_v211_server_id.sql: 
ALTER TABLE artifactory_servers MODIFY server_id VARCHAR(128);  -- causes the server_id to be NULLABLE (in mysql and mssql) 

I identified a few scripts which didn't do that, and therefore the migration and initial scripts are not aligned.

server_id(artifactory_servers)
tasks_context(tasks)
version(artifact_bundles) 
data(access_config)

 

 






[RTFACT-22222] Deleting an "Offline" node via the UI fails Created: 23/May/20  Updated: 23/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: HA
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Patrick Russell Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2020-05-22 at 4.58.08 PM.png     PNG File Screen Shot 2020-05-22 at 4.58.28 PM.png    

 Description   

Problem description: The "X" button in the Artifactory Unified UI (Administration -> Monitoring -> Service Status) currently does not work. Using a browser's "Inspect" mode, it looks like the DELETE operation returns a 400 error and an internal NPE error is printed in the artifactory-service.log file.

What is the expected behavior? In Artifactory 6.X, clicking the "X" button removes the HA node from the UI and from the database. It is useful as otherwise an Admin must go onto the database and remove the HA node from the access_nodes table manually.

Steps to reproduce: 

  1. Set up an Artifactory 7.X HA cluster
  2. Shut down the secondary HA node
  3. From the Primary node's UI, attempt to delete the node. Note that this action fails with a 400 error and an error in the logs

Possible workaround: 

Go directly onto the database and run the following commands:

#List the nodes and their statuses
select * from access_topology;

#Delete the node from both access_topology and access_nodes tables
delete from access_topology where node_id = 'art2';

delete from access_nodes where node_id = 'art2';

 

Full error stacktrace when this is tried:

2020-05-22T23:58:38.681Z [jfrt ] [ERROR] [bf2ba0b37dd7581 ] [s.a.c.h.RemoveServerService:73] [http-nio-8081-exec-3] - Exception occurred while removing 'art2'2020-05-22T23:58:38.681Z [jfrt ] [ERROR] [bf2ba0b37dd7581 ] [s.a.c.h.RemoveServerService:73] [http-nio-8081-exec-3] - Exception occurred while removing 'art2'
java.lang.NullPointerException: null at org.artifactory.storage.db.servers.service.ArtifactoryServersCommonService.lambda$static$5(ArtifactoryServersCommonService.java:130) at org.artifactory.addon.ha.HaAddonImpl.artifactoryServerHasHeartbeat(HaAddonImpl.java:842) at org.artifactory.addon.ha.HaAddonImpl.deleteArtifactoryServer(HaAddonImpl.java:833) at jdk.internal.reflect.GeneratedMethodAccessor375.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205) at com.sun.proxy.$Proxy352.deleteArtifactoryServer(Unknown Source) at org.artifactory.ui.rest.service.admin.configuration.ha.RemoveServerService.removeServer(RemoveServerService.java:64) at org.artifactory.ui.rest.service.admin.configuration.ha.RemoveServerService.execute(RemoveServerService.java:50) at org.artifactory.rest.common.service.ServiceExecutor.process(ServiceExecutor.java:38) at org.artifactory.rest.common.resource.BaseResource.runService(BaseResource.java:127) at org.artifactory.ui.rest.resource.admin.configuration.servers.ServersStatusResource.removeServer(ServersStatusResource.java:62) at jdk.internal.reflect.GeneratedMethodAccessor374.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268) at org.glassfish.jersey.internal.Errors.process(Errors.java:316) at org.glassfish.jersey.internal.Errors.process(Errors.java:298) at org.glassfish.jersey.internal.Errors.process(Errors.java:268) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389) at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:195) at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.lambda$doFilter$1(ArtifactoryAuthenticationFilterChain.java:134) at org.artifactory.webapp.servlet.authentication.PropsAuthenticationFilter.doFilter(PropsAuthenticationFilter.java:126) at org.artifactory.webapp.servlet.authentication.ArtifactoryAuthenticationFilterChain.doFilter(ArtifactoryAuthenticationFilterChain.java:171) at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:385) at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:249) at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:193) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:78) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:86) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164) at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80) at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:27) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493) at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:304) at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.base/java.lang.Thread.run(Thread.java:834)

 






[RTFACT-22221] Fine grained replication type configuration per target on multi-push replication Created: 22/May/20  Updated: 22/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Narasimha Pai Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Ability to set scheduled replication or event-based replication per target for a given repo in multi-push replication.

For example, repo A is replicated to repo B and repo C on different servers. Replication between repo A and repo B should be event-based, while replication between A and C should be schedule-based. Configuration of the cron task should be per target in the replication configuration.






[RTFACT-22211] Support Windows Package Manager, winget Created: 21/May/20  Updated: 25/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Scott Mosher Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Add support for the new Windows Package Manager, winget.

 

https://docs.microsoft.com/en-us/windows/package-manager/winget/






[RTFACT-22209] Create an endpoint for subscription domain Created: 21/May/20  Updated: 03/Jun/20

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Yonatan Arbel Assignee: Yevdo Abramov
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Create an endpoint for fetching the subscription domains.

At this point the data is static and should be as listed below (a consumption sketch follows the listing):

The endpoint is: GET http://artifactory-url:8082/artifactory/ui/events/domains

Important: the REST call must be authenticated

[
    {
        id: 'artifact',
        name: 'Artifact',
        service: 'artifactory',
        event_types: [
            {
                id: 'deployed',
                name: 'Artifact was deployed',
                description: 'The webhook is triggered when an artifact is deployed to a repository. You can select the repositories and repository paths on which the webhook will apply.',
            }
        ],
    },
    {
        id: 'docker',
        name: 'Docker',
        micro_service: 'artifactory',
        event_types: [
            {
                id: 'pushed',
                name: 'Docker tag was pushed',
                description: 'The webhook is triggered when a new tag of a Docker image is pushed to a Docker repository. You can select the Docker repositories and repository paths on which the webhook will apply.',
            },
            {
                id: 'deleted',
                name: 'Docker tag was deleted',
                description: 'The webhook is triggered when a tag of a Docker image is deleted from a Docker repository. You can select the Docker repositories and repository paths on which the webhook will apply.',
            },
            {
                id: 'promoted',
                name: 'Docker tag was promoted',
                description: 'The webhook is triggered when a tag of a Docker image is promoted. You can select the Docker repositories and repository paths on which the webhook apply. The webhook will apply on the Docker repositories from which the Docker tag was promoted.',
            },
        ],
    },
    {
        id: 'build',
        name: 'Builds',
        micro_service: 'artifactory',
        event_types: [
            {
                id: 'uploaded',
                name: 'Build was uploaded',
                description: 'The webhook is triggered when a new build is uploaded. You can select the build names or build patterns on which the webhook will apply.',
            },
            {
                id: 'deleted',
                name: 'Build was deleted',
                description: 'The webhook is triggered when a build is deleted. You can select the build names or build patterns on which the webhook will apply.',
            },
            {
                id: 'promoted',
                name: 'Build was promoted',
                description: 'The webhook is triggered when a build is promoted. You can select the build names or build patterns on which the webhook will apply.',
            },
        ],
    },
];
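A minimal sketch of how a client could consume this endpoint, assuming the URL above and basic authentication (credentials are placeholders, since the call must be authenticated):

import requests

# Placeholder credentials; the call must be authenticated as noted above.
resp = requests.get(
    "http://artifactory-url:8082/artifactory/ui/events/domains",
    auth=("admin", "password"),
)
resp.raise_for_status()

# Print each domain id together with the ids of its event types.
for domain in resp.json():
    event_ids = [event["id"] for event in domain["event_types"]]
    print(domain["id"], event_ids)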





[RTFACT-22207] Permissions API soft-fails on case mismatch in username Created: 21/May/20  Updated: 21/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: 6.18.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Grzegorz Skołyszewski Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Assume there is a user named user1 in the user base.

When creating/updating a permission target with an application/vnd.org.jfrog.artifactory.security.PermissionTarget+json payload, if UsEr1 (or any other case-mismatched variation of the username) is used in $.principals.users, the call succeeds but the user is not added to the permission target (see the sketch below).

This is confusing behavior, since the requested state does not match the final state, even though no error is thrown in the process.

 

The same may apply to $.principals.groups or even $.repositories contents, but I have not checked that.
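A minimal sketch of the reproduction via the REST API, assuming an existing user user1, admin credentials and a test repository (URL, credentials and repository name are placeholders):

import requests

ARTIFACTORY = "http://artifactory-url/artifactory"   # placeholder base URL
AUTH = ("admin", "password")                          # placeholder credentials
HEADERS = {"Content-Type": "application/vnd.org.jfrog.artifactory.security.PermissionTarget+json"}

# Create a permission target using a case-mismatched username ("UsEr1" instead of "user1").
payload = {
    "name": "case-test",
    "repositories": ["example-repo-local"],           # placeholder repository
    "principals": {"users": {"UsEr1": ["r", "w"]}},
}
create = requests.put(f"{ARTIFACTORY}/api/security/permissions/case-test",
                      json=payload, headers=HEADERS, auth=AUTH)
print(create.status_code)   # the call succeeds despite the case mismatch

# Reading the target back shows that the user was silently dropped from principals.users.
read = requests.get(f"{ARTIFACTORY}/api/security/permissions/case-test", auth=AUTH)
print(read.json().get("principals", {}).get("users"))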






[RTFACT-22202] Indexed Build Resources not clear when using a pattern Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Xray
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Dusten Harrison Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When you use a pattern in the configuration of Indexed Resources for builds, the UI does not make it clear whether there are builds associated with that pattern.

The UI just shows the defined include and exclude patterns and not the builds that meet those patterns.

Add the capability to see builds that are associated with a pattern.

Add the capability to index builds that were added before the pattern was created.






[RTFACT-22200] ability to filter artifacts Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Cloud
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Jason Duff Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

It would be very helpful if, in the "Artifact Repository Browser", we had the ability to filter on specific artifacts such that all other artifacts would disappear. Currently, the filtering simply highlights a match in what could be a very, very large tree, and the user has to scroll to find it.

Thanks.

jason






[RTFACT-22199] Artifactory Cloud - repo browser - case insensitivity Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Cloud
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Jason Duff Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

In the "Artifact Repository Browser" for Artifactory Cloud, the artifacts should be listed alphabetically.  As it is, they are listed by what appears to be ASCII code which separates the uppercase letters from the lowercase letters, for instance.  This makes search/browsing very difficult.  

At the very least, provide the option to sort by either method.  This is very common sorting functionality.

Thanks.

jason






[RTFACT-22198] Artifactory Cloud/SaaS - case insensitive search Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Cloud
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Jason Duff Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When searching for an artifact using the "Quick Search" from the home page in Artifactory Cloud, the search is case sensitive. This makes searching very difficult and even dangerous, as the case is not always known, especially for local/internal repos.

At the very least, provide the option to match case exactly or not. This is very basic search functionality.

Thanks.

jason






[RTFACT-22192] Add WebUI only settings to the REST API Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Stefan Gangefors Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I can't currently find any endpoint in the REST API that allows me to update, for example, the Storage Quota settings on the Maintenance Configuration page.

The REST API should allow control of ALL settings available through the WebUI.






[RTFACT-22186] Multiple Debian package deploy UI does not provide metadata fields Created: 20/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Debian, Deploy UI, Upload
Affects Version/s: 6.18.0, 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Daniel Werdermann Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: debian, deploy, ui


 Description   

If you deploy Debian packages via the UI and select the "Multi" mode, you are not provided with the fields for the mandatory metadata.

As a result, the uploaded files sit in the package pool unused. This renders the "Multi" upload function rather pointless and confusing for end users.






[RTFACT-22185] Artifactory multiple files deploy - progress bar disappeared Created: 20/May/20  Updated: 03/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yuriy Tabolin Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Regression:
Yes

 Description   

Hi. We've recently upgraded our on-prem artifactory-pro from 6.18.1 to 7.4.3; Artifactory runs in Docker behind an nginx reverse proxy.
The problem: when I start deploying multiple files via the UI, I see a "Deploy in progress…" progress bar whose percentage starts to increase, but after 5-20 seconds the progress bar disappears. In its last seconds it shows about 10-15% (probably depending on the files being deployed). If I refresh the page, I see the files appearing one after another, so the deploy process is clearly still running in the background. But without the progress bar I can't tell when the deploy will finish or how much time remains.
I don't see any errors in the logs, but I do see that the files take more than a couple of minutes to deploy.
I couldn't find any related settings in Artifactory, so it looks like a bug in the UI.






[RTFACT-22180] remove encryption of values in system.yaml based on master.key Created: 19/May/20  Updated: 19/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.3
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Victor Chavez Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We run Artifactory on EC2 with Puppet as the Configuration Management tool.

When upgrading from 6.10.x to 7.x, a system.yaml is created in the migration process. The database password is inserted, along with many other values. When we captured that file and its values in Puppet the first time, it was a surprise to find that when the Artifactory service was restarted, the DB password had been encrypted, which required recapturing the values again. On top of that, since we have an HA setup, the password is now different on each node. This is totally unhelpful for us. If someone is able to compromise access to Artifactory, we have other major issues. Operationally, with Puppet it is more difficult to sustain the management of n passwords, where n is the number of HA nodes in our cluster. We already have systems in place to keep the DB passwords encrypted and placed on the EC2 instances with Puppet. We don't need them obfuscated and different on each HA node. We would have a similar system in place in Kubernetes if we were ever to migrate our Artifactory implementation there as well.

Please allow a configurable option for the password to be stored in plaintext in the system.yaml.






[RTFACT-22176] Support Bundle Capturing Data Older than 24 Hours Created: 19/May/20  Updated: 25/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.12.2, 6.19.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Swarnendu Kayal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

While creating a support bundle from Artifactory, even though the date span has been selected as "Last 24 hours", files older than 24 hours are being captured in the support bundle. "Date Span" considers the files according to the timestamp present in the file name and not by their contents. However, even files whose names do carry an old timestamp, such as the one listed below, are being captured in the support bundle:

-rw-r--r-- 1 xxxxxx xxxxxx 70214 May 19 20:29 catalina.out-20191214

Also, files which do not have a timestamp in the file name are being captured in the support bundle. The support bundle was captured on May 19, 2020, yet the following old logs were captured as well:

Jan 22 10:33 xray_traffic.1579669432143.log
Jan 22 10:33 traffic.1579669432140.log
Jan 22 10:33 sha256_migration.log
Jan 22 10:33 request_trace.log
Jan 22 10:33 path_checksum_migration.log
Jan 22 10:33 import.export.log
Jan 22 10:33 build_info_migration.log
Jan 22 10:33 binarystore.log
Jan 22 10:34 conan_v2_migration.log
Mar 18 12:47 jdbc.log
Apr 3 15:49 user-plugin.log
Apr 14 23:57 server.key
Apr 15 00:00 key.pem
Apr 15 00:00 certificate.pem
Apr 24 20:50 requestTime.py
May 6 17:16 http.13.log
May 6 17:16 http.12.log
May 6 17:16 http.11.log
May 6 17:16 http.10.log
May 6 17:16 http.9.log
May 7 19:55 http.8.log
May 11 12:50 http.7.log
May 11 20:15 http.6.log
May 13 21:39 http.5.log
May 14 23:13 http.4.log
May 14 23:16 http.3.log
May 16 13:26 http.2.log
May 16 22:06 http.1.log

Expected Behaviour: The support bundle should only capture files that are at most 24 hours old when "Last 24 hours" is selected. Instead of selecting files by the timestamp embedded in the file name, files should be selected using the timestamp on the server (see the sketch below).

Severity Level: 4 (Whenever the customer wants to share the latest logs, the support bundle includes old logs, which increases its size; this is unexpected and weakens the customer's security measures, as irrelevant, old data is sent to the vendor.)
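A minimal sketch of the expected selection logic, assuming selection by file modification time on the server rather than by the timestamp embedded in the file name (the log directory path is a placeholder):

import os
import time

LOG_DIR = "/var/opt/jfrog/artifactory/log"   # placeholder log directory
WINDOW_SECONDS = 24 * 60 * 60                # "Last 24 hours"

now = time.time()
recent_files = [
    os.path.join(LOG_DIR, name)
    for name in os.listdir(LOG_DIR)
    if os.path.isfile(os.path.join(LOG_DIR, name))
    # Select by the file's modification time on the server, not by any timestamp in its name.
    and now - os.path.getmtime(os.path.join(LOG_DIR, name)) <= WINDOW_SECONDS
]
print(recent_files)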






[RTFACT-22168] Disabling Network ITests Created: 19/May/20  Updated: 21/May/20

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Omri Naor Assignee: Omri Naor
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Disabled tests:

1. org.artifactory.itest.addon.filestore.type.jclods.JCloudsBaseBinaryProviderTestBase (s3Old)

2. org.artifactory.itest.addon.filestore.type.jclods.gcs.JCloudsGCSBinaryProviderTest

3. org.artifactory.itest.addon.filestore.type.jclods.s3.JCloudsS3BinaryProviderTest

4. org.artifactory.itest.bintray.BintrayRestTest#failTest (security.xml)

5. org.artifactory.itest.bintray.BintrayRestTest#testOverrideParams

6. org.artifactory.itest.bintray.BintrayRestTest#testPushBuildToBintray

7. org.artifactory.itest.addon.vcs.VcsBitbucketTest#getTagFile (API v2)

8. org.artifactory.itest.addon.vcs.VcsBitbucketTest#getBranchFile

9. org.artifactory.itest.distribution.DistributorTest (cookie)






[RTFACT-22167] PostgreSQL index improvement comparison measuring Created: 19/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shay Bagants Assignee: Alexei Vainshtein
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship

 Description   

As a part of the PostgreSQL index improvement, we should prepare a document comparing performance before and after the index changes.






[RTFACT-22163] Remove existing config-based Affinity Settings from codebase Created: 19/May/20  Updated: 04/Jun/20

Status: Ready for Code Review
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Uriah Levy Assignee: Aviv Anidjar
Resolution: Unresolved Votes: 0
Labels: None


 Description   

This task is for removing the Affinity settings from the configuration descriptor. 

 

Note: some tests mutate affinity settings during runtime, so we will need to adapt them to work with a new system.yaml based system. 






[RTFACT-22161] Improve Artifactory Package Viewer UI to show latest 500 package Created: 18/May/20  Updated: 18/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: zhenming shen Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We ran into this issue in the Artifactory Package Viewer UI, where the package viewer detail page only returns 500 items, and those 500 items are the oldest versions rather than the latest.

The search page, however, says there are 1800+ versions.

Is there a way to return the latest 500 items instead of the oldest 500?

And is there a way to increase the 500-item limit that doesn't affect performance?






[RTFACT-22155] Rpm generated primary.xml.gz does not contain 'pre' attribute Created: 18/May/20  Updated: 25/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.14.1, 6.18.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shai Ben-Zvi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: XML File 49a93158fe53eeccbd0afba59c6d8645293f7bfd08c18d3f3885b9044a2a1d18-primary.xml     XML File 5af8c51b20c53910ce26749795f1696d6959c17d-primary.xml    

 Description   

The 'pre' attribute tells the rpm client that a package is a prerequisite for another package upon installation.
For example, if package a depends on package b, package b must be installed before a.
Therefore, the dependency entry will have a 'pre' attribute with the value 1 to let rpm know that the package is required to be installed first.

Example file which contain this attribute:
https://github.com/openSUSE/libzypp/blob/master/zypp/parser/yum/schema/rpm-inc.rng

Reference to the createrepo code where the pre is taken from the primary dump:

https://github.com/rpm-software-management/createrepo_c/blob/master/src/xml_dump_primary.c#L124

Reference to the changelog when this feature was added in createrepo:
https://github.com/rpm-software-management/createrepo/blob/master/ChangeLog

Reproduction steps:
1. Download the following binary from the CentOS repository.
2. Download the matching repodata primary.xml.gz from here.
3. Extract the xml file from the primary.xml.gz and notice that this xml contains the 'pre' attribute with the value '1'.
4. Deploy the .rpm file to a local repository in Artifactory and wait for the metadata to be generated.
5. Once the metadata has been generated, download the resulting primary.xml.gz and extract the xml file; you will notice that the 'pre' attribute is missing (a verification sketch follows below).

The impact is that the installation sometimes fails because, due to the missing metadata, the zypper client tries to install the packages in the wrong order.

I attached the original primary xml file and the one generated by Artifactory for comparison.
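Step 5 can be verified programmatically; the following is a minimal sketch, assuming a primary.xml.gz downloaded from the repository's repodata folder (the file name is a placeholder):

import gzip
import xml.etree.ElementTree as ET

RPM_NS = "{http://linux.duke.edu/metadata/rpm}"

# Placeholder path: a primary.xml.gz taken from the repository's repodata folder.
with gzip.open("primary.xml.gz") as fh:
    root = ET.parse(fh).getroot()

# Count dependency entries that carry the 'pre' attribute; for metadata generated by
# Artifactory this currently comes out as zero even when the source metadata had pre="1".
pre_entries = [
    entry for entry in root.iter(RPM_NS + "entry")
    if entry.get("pre") == "1"
]
print(f'entries with pre="1": {len(pre_entries)}')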






[RTFACT-22154] MFA - Add 2 fields on the internal user Created: 18/May/20  Updated: 18/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Change Request Priority: Normal
Reporter: Asaf Novak Assignee: Nadav Yogev
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency

 Description   

In order to support MFA (Multi-Factor Authentication), there is a need to save 2 additional fields on the internal user (non-transient).

See additional tech spec:

https://docs.google.com/document/d/1spmF1eaw4IV_OLcdxoYQktDyw25ngSbLlKc8UU-rVjQ/edit?usp=sharing

https://docs.google.com/document/d/1sYTfIPBPtzxTpw7SdotOat171UKtILsNw_OBxxeKMMA/edit#

https://docs.google.com/document/d/1O-_WJpLwy8O4sTtceV-KcXZuU5Dnn4EOep_TzFPlK-I/edit#

 

Add 2 fields on the user (possibly on getCurrentUser)
mfaEnabled - Boolean
nextMfaVerify - Date
These fields should be saved persistently on internal users, and the front end server should be able to set and get them.






[RTFACT-22152] MARKERS/PRE_INIT/HOME_FILES converters may not run on certain upgrades Created: 17/May/20  Updated: 17/May/20

Status: Will Not Implement
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Uriah Levy Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The mentioned converter types usually contain converter classes that assume the "source" version of the upgrade is the one found inside etc/artifactory.properties (see ConverterManagerImpl#getOriginalVersion).

This assumption is risky because the artifactory.properties file is not reliable for determining the source version in an upgrade scenario where a new node is spun up already on the target version, with an artifactory.properties file that reflects the target version.

Other converter types such as DATABASE, HOME_SYNC_FILES and POST_INIT use the original DB version (from artifactory_servers) to determine the source, which is always applicable for this upgrade scenario.  






[RTFACT-22151] Long-term Metadata retries: MDS event pipelines: read and write from errors table Created: 17/May/20  Updated: 02/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Uriah Levy Assignee: Mor Merhav
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency
is a precondition for RTFACT-18274 Event pipe - long-term retries mechanism Open

 Description   

Write to the global event errors table on failure to execute a metadata event (after exhaustion of immediate retries) 






[RTFACT-22150] Align application logs with jfrog format Created: 17/May/20  Updated: 24/May/20

Status: Pending QA
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Bar Haim Assignee: Bar Haim
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Align application logs with jfrog format.

Currently all logs are printed in JSON format because the logger we are using is not the common JFrog logger; we need to change it.






[RTFACT-22147] Long-term Metadata retries: Adaptations to global errors table Created: 17/May/20  Updated: 04/Jun/20

Status: Development
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Uriah Levy Assignee: Mor Merhav
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Dependency
is a precondition for RTFACT-18274 Event pipe - long-term retries mechanism Open
Sub-Tasks:
Key
Summary
Type
Status
Assignee
RTFACT-22350 Convert task_type in replication_erro... Sub-task In Progress Mor Merhav  

 Description   

Replication currently saves errors in a dedicated replication errors table. This task is for making adaptations so that this infrastructure and schema are suitable for use in the Metadata event pipeline.






[RTFACT-22146] Refactor Affinity API (config descriptor -> system.yaml) Created: 17/May/20  Updated: 04/Jun/20

Status: In Progress
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Uriah Levy Assignee: Aviv Anidjar
Resolution: Unresolved Votes: 0
Labels: None

Sub-Tasks:
Key
Summary
Type
Status
Assignee
RTFACT-22375 Modify RoleManager Sub-task Open  
RTFACT-22376 Refactor AffinityService Sub-task Open  

 Description   

Affinity task settings are currently managed in the configuration descriptor. This is for refactoring the Affinity mechanism and settings to move from the config descriptor to the system.yaml.






[RTFACT-22144] HA Upgrade from 6.9.1 to 6.16.2 failed Created: 15/May/20  Updated: 15/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matias Dell Amerlina Rios Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: artifactory, upgrade


 Description   

While performing an upgrade from 6.9.1 to 6.16.2, we noticed that our HA cluster was reporting 500s for a couple of minutes (7).

We followed these instructions for the upgrade.
However, after upgrading the master and putting it back into rotation, we noticed the 500s.

By looking at the logs the first error that we see is:

2020-05-15 00:57:24,325 [art-init] [ERROR] (o.a.c.ConvertersManagerImpl:216) - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: unstable environment: Found one or more servers with different version Config Reload denied.

It's worth mentioning that the preceding log line shows information that seems to be relevant to that error.

2020-05-15 00:57:24,026 [art-init] [INFO ] (o.a.d.r.CentralConfigReader:71) - Converting artifactory.config.xml version from 'v220' to 'v224'

Right after the first error comes this exception, which seems to be normal according to the document.

2020-05-15 00:57:24,328 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:96) - Application could not be initialized: unstable environment: Found one or more servers with different version Config Reload denied.
java.lang.reflect.InvocationTargetException: null
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:211)
	at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.access$200(ArtifactoryContextConfigListener.java:67)
	at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:92)
Caused by: java.lang.RuntimeException: unstable environment: Found one or more servers with different version Config Reload denied.
	at org.artifactory.converter.ConvertersManagerImpl.handleException(ConvertersManagerImpl.java:223)
	at org.artifactory.converter.ConvertersManagerImpl.serviceConvert(ConvertersManagerImpl.java:171)
	at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:271)
	at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:153)
	... 7 common frames omitted
Caused by: java.lang.RuntimeException: unstable environment: Found one or more servers with different version Config Reload denied.
	at org.artifactory.config.CentralConfigServiceImpl.assertSaveDescriptorAllowed(CentralConfigServiceImpl.java:672)
	at org.artifactory.config.CentralConfigServiceImpl.preSaveDescriptor(CentralConfigServiceImpl.java:327)
	at org.artifactory.config.CentralConfigServiceImpl.forceSaveDescriptorInternal(CentralConfigServiceImpl.java:387)
	at org.artifactory.config.CentralConfigServiceImpl.initCacheAndGetCurrent(CentralConfigServiceImpl.java:173)
	at org.artifactory.config.CentralConfigServiceImpl.convert(CentralConfigServiceImpl.java:206)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206)
	at com.sun.proxy.$Proxy178.convert(Unknown Source)
	at org.artifactory.converter.ConvertersManagerImpl.serviceConvert(ConvertersManagerImpl.java:167)
	... 9 common frames omitted
-

Once we upgraded the secondary node (we only have 2) and put it back in the LB, the cluster started to run again.

Couple of questions:

  • Do you notice anything out of the ordinary given our logs?
  • The upgrade doc seems to imply that there should not be any downtime. However, it also mentions that only uploads and downloads will be allowed during the upgrade. Is it right to assume that the UI will continue working given those limitations?


 Comments   
Comment by Matias Dell Amerlina Rios [ 15/May/20 ]

Adding the logs sorted by time, in case the description was confusing.

"1589504244328","05/14/2020 17:57:24.328 -0700","2020-05-15 00:57:24,328 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:96) - Application could not be initialized: unstable environment: Found one or more servers with different version Config Reload denied."1589504244328","05/14/2020 17:57:24.328 -0700","2020-05-15 00:57:24,328 [art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:96) - Application could not be initialized: unstable environment: Found one or more servers with different version Config Reload denied.java.lang.reflect.InvocationTargetException: null at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:211) at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.access$200(ArtifactoryContextConfigListener.java:67) at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:92)Caused by: java.lang.RuntimeException: unstable environment: Found one or more servers with different version Config Reload denied. at org.artifactory.converter.ConvertersManagerImpl.handleException(ConvertersManagerImpl.java:223) at org.artifactory.converter.ConvertersManagerImpl.serviceConvert(ConvertersManagerImpl.java:171) at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:271) at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:153) ... 7 common frames omittedCaused by: java.lang.RuntimeException: unstable environment: Found one or more servers with different version Config Reload denied. at org.artifactory.config.CentralConfigServiceImpl.assertSaveDescriptorAllowed(CentralConfigServiceImpl.java:672) at org.artifactory.config.CentralConfigServiceImpl.preSaveDescriptor(CentralConfigServiceImpl.java:327) at org.artifactory.config.CentralConfigServiceImpl.forceSaveDescriptorInternal(CentralConfigServiceImpl.java:387) at org.artifactory.config.CentralConfigServiceImpl.initCacheAndGetCurrent(CentralConfigServiceImpl.java:173) at org.artifactory.config.CentralConfigServiceImpl.convert(CentralConfigServiceImpl.java:206) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:206) at com.sun.proxy.$Proxy178.convert(Unknown Source) at org.artifactory.converter.ConvertersManagerImpl.serviceConvert(ConvertersManagerImpl.java:167) ... 9 common frames omitted""1589504244325","05/14/2020 17:57:24.325 -0700","2020-05-15 00:57:24,325 [art-init] [ERROR] (o.a.c.ConvertersManagerImpl:216) - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: unstable environment: Found one or more servers with different version Config Reload denied." 




[RTFACT-22143] Custom message in artifactory only allows one link Created: 15/May/20  Updated: 18/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ezekiel Knox Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Description: If you have a custom message in Artifactory 7.x that contains more than one link, only the first link is usable in the UI. This feature works properly in older versions of Artifactory, so it should work in 7.x as well.

Expected behavior: If you have multiple links in your custom message, the message should be rendered with two or more hyperlinks, as in 6.x. Instead, in 7.x only one link is generated.

Steps to reproduce: Set up a 7.x instance (any version) and navigate to General Settings, where you can create a custom message. To see the issue, add two links to your message.



 Comments   
Comment by Sebastian Lang [ 18/May/20 ]

This function is important for us, as we link a description and a self-service for internal use.





[RTFACT-22142] Visibility of the lower level repository data Created: 15/May/20  Updated: 15/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Amaarah Johnson Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

On the storage page we have the size of all the local repositories. To see the size/path of all artifacts in a repo, there is currently an API call available. However, we would like this capability in the Artifactory UI as well, to view the size and path of all artifacts in a repository.






[RTFACT-22138] Allow a customizable size of a repository Created: 14/May/20  Updated: 14/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Amaarah Johnson Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Provide a way to divide the total available storage and allocate portions to specific repositories. The remaining space will be shared among the remaining repositories. 






[RTFACT-22137] List text type for each element in system.yaml Created: 14/May/20  Updated: 14/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Customers ask whether a given parameter in system.yaml is a list or not, which is important for formatting the yaml file and also when creating automation using Ansible and other tools.

Please add a table that describes each element in system.yaml and its type, similar to this (requested by Cerner): https://docs.ansible.com/ansible/latest/modules/docker_container_module.html






[RTFACT-22135] Add the ability to control the files in the cache dir Created: 14/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Batel Tova Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Feature description: Today we can't control the size or the type of the files inside the cache directory; we can only control the total directory size.

Expected behavior: Add the ability to control which files go into the cache directory, for example the option to cache files by a specific extension, or a rule setting a maximum size per file. This would let us keep a smaller cache directory even when large files are in common use. We are trying to create a small cache for specific files.

Severity level: 2, nice to have






[RTFACT-22134] User list table's sort by last login should sort sequentially rather than by date/time string Created: 14/May/20  Updated: 14/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: 4.2.2
Fix Version/s: 4.4.0

Type: Change Request Priority: Normal
Reporter: Aaron Rhodes Assignee: Danny Reiser (Inactive)
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Cloners
clones RTFACT-8569 User list table's sort by last login ... Resolved

 Description   

In the users list, sorting by last login sorts on the displayed date/time string, which is not chronological and therefore not useful.



 Comments   
Comment by Udacity Engineering [ 14/May/20 ]

Cloning and reopening this old bug, which now affects Artifactory 7.4 (see screenshot). Note how the months are listed sequentially even though the years are different! This implies alphabetical sorting rather than date sorting.
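For illustration, a minimal sketch (Python; the date format and values are made up) of why sorting the displayed date/time strings differs from sorting the actual timestamps:

from datetime import datetime

# Hypothetical "last login" strings as a UI might display them (MM-DD-YY).
last_logins = ["04-01-19 10:15:00", "03-12-20 08:30:00", "05-06-18 14:45:00"]

# Alphabetical sort compares month/day first and ignores the year.
print(sorted(last_logins))

# Chronological sort keys on the parsed timestamp instead.
print(sorted(last_logins, key=lambda s: datetime.strptime(s, "%m-%d-%y %H:%M:%S")))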





[RTFACT-22132] Hybrid Checksum policy Created: 14/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Oron Chalaf Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The checksum policies in Artifactory are either "verify against client" or "trust server checksum". If "verify against client" is used and the client does not publish a checksum, you have to manually fix the checksum from the UI, or a 404 error will be encountered when trying to download the artifact.

The requirement is a new policy that verifies the checksum against the client when one is provided. When the client does not provide one, the checksum calculated by Artifactory is trusted, allowing the file to be downloaded without manual intervention.






[RTFACT-22131] Is it possible to link hash directory that stored binary files with symbolic links? Created: 14/May/20  Updated: 14/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifact Storage, Filestore, Upload
Affects Version/s: 6.10.2
Fix Version/s: None

Type: Task Priority: Normal
Reporter: bnr-support Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

CentOS Linux 7.4



 Description   

Hi team.
We are using the Artifactory pro 6.10.2 on on-premise server Env. with a single storage volume.

Because we can't extend our main storage volume anymore, we are going to try the temporary avoidance using a new storage volume with directory symbolic link.

but we found the error in deploy action.
Please advise if there are any workarounds.

 

We tested it as below.
1) Add (attach) a new storage volume

2) Move a hash directory (with its files) from the Artifactory filestore to the new storage volume (main storage volume -> new storage volume):
$ mv (data dir.)/filestore/03 /new_storage/03

3) Make a symbolic link:
(data dir.)/filestore/03 ----link---> /new_storage/03

Test Result & Issue

  • Download from the hash directory "03": Success

  • Deploy to hash directory "03": ERROR
      . sample file name (hash): 03ac7f....

I think the conflict occurs because the symbolic link name and the directory name to be created are the same.

 


Due to our internal policy, migration to public cloud storage such as s3 is not possible now....
Please advise if there are any workarounds.






[RTFACT-22125] Increase the database columns length holding the product versions Created: 13/May/20  Updated: 24/May/20

Status: In Progress
Project: Artifactory Binary Repository
Component/s: Database
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Yossi Shaul Assignee: Avishay Halpren
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Trigger
triggered RTFACT-22228 Fix MySQL and MSSQL migration scripts... Open

 Description   

Artifactory cannot start if the version length is longer than 30 characters.
The version for dev branches is determined by the branch name and can easily surpass 30 characters. For instance, a branch named "feature/RTFACT-17370-hikari" on a 7.x-SNAPSHOT branch will produce a version named "7.x.feature.RTFACT.17370.hikari".

I identified 4 tables in Arti and Access (although there might be more):

db_properties(artifactory_version)
artifactory_servers(artifactory_version)
access_servers(version)
access_nodes(version)

Change the column size to 128 characters
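For reference, a quick check (Python) showing that the example version above already exceeds the current limit:

version = "7.x.feature.RTFACT.17370.hikari"  # derived from branch "feature/RTFACT-17370-hikari"
print(len(version))  # 31 -> longer than the current 30-character columns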

 






[RTFACT-22120] Concurrent Authentication requests to automatically associate a LDAP user to an imported LDAP group can cause duplicate_key errors in Postgres Created: 13/May/20  Updated: 02/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matthew Wang Assignee: Tamir Hadad
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship

 Description   

If you have concurrent authentication requests that will automatically associate a LDAP user to an imported LDAP group, you may run into postgres duplicate_key errors.

Steps to reproduce:
-setup LDAP with Arti. Import LDAP group into Artifactory that a test user is part of in the LDAP Groups settings page
-run the following API (reproduces with API key or password):

for i in {1..30}; do curl -u tuser:AKCp5emRKuxu6F75UJFPahfQSzRjkSVmcxNfGeT2dEp7yjfpQmt8rXkTf5v97dQipnvdomF4M http://mill.jfrog.info:12050/artifactory/api/system/version & done

-notice in access request.log that there are PATCH requests for the user, and that there is a duplicate_key error in the access server's access.log.

This issue is causing builds to fail for the customer, since builds authenticate, and any authentication request has a chance to PATCH the user.

Workaround: Seems like using an access token instead of API Key/password will avoid updating the user on authentication

Relevant logs:

2020-05-13 00:08:12,594 [http-nio-8040-exec-7] [ERROR] (o.j.a.s.r.e.m.DefaultExceptionMapper:25) - General exception mapper caught:Could not save user
org.jfrog.access.server.exception.AccessStorageException: Could not save user
...
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "access_users_groups_pk"
Detail: Key (user_id, group_id)=(1001, 1002) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:135)
at sun.reflect.GeneratedMethodAccessor103.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy88.executeUpdate(Unknown Source)
at org.jfrog.storage.JdbcHelper.executeUpdate(JdbcHelper.java:229)
at org.jfrog.storage.SqlDaoHelper.updatePagination(SqlDaoHelper.java:184)
at org.jfrog.storage.SqlDaoHelper.paginationCreate(SqlDaoHelper.java:137)
at org.jfrog.access.server.db.dao.UsersGroupsDao.createAssociation(UsersGroupsDao.java:101)
at org.jfrog.access.server.db.dao.UsersGroupsDao.createAssociations(UsersGroupsDao.java:97)

Access server request.log:
2020-05-13T00:21:02.008+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|500|1275|167|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.015+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|200|1275|364|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.020+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|167|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.024+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|118|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.050+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|19|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.057+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|500|1275|271|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.066+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|POST|http://localhost:8040/access/api/v1/auth/authenticate|401|42|47|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.162+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|90|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.167+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|200|1275|374|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.171+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|111|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.172+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|POST|http://localhost:8040/access/api/v1/auth/authenticate|401|42|54|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.182+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|13|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.190+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|29|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.193+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|15|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.235+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|GET|http://localhost:8040/access/api/v1/users/tuser|200|0|58|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.245+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|200|1275|226|JFrog Access Java Client/4.11.0
2020-05-13T00:21:02.264+0000|127.0.0.1|jfrt@01e85cs0xaysq901fp0c870363|PATCH|http://localhost:8040/access/api/v1/users/tuser|200|1275|296|JFrog Access Java Client/4.11.0






[RTFACT-22118] How to enable cdn like feature inside onprem for artifactory Created: 12/May/20  Updated: 13/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Rajasekaran Palaniswamy Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory with S3/CloudFront enables a CDN-like feature.

We need a way to enable a similar feature when running on-prem.






[RTFACT-22112] Moving files to 1024+ chars path is allowed, but not supported by DB Created: 12/May/20  Updated: 12/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.19.0, 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Nitzan Benshimol Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2020-05-07 at 16.02.19.png    

 Description   

Steps to reproduce:

1 - Deploy some file

2 - Invoke "move" to a custom path (a big one)

Sometimes it creates an empty directory but does not move the file, and then the error appears when trying to delete it.

 






[RTFACT-22104] Upgrade from 6.19.1 to 7.4.3 fails Created: 12/May/20  Updated: 28/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Ramin Mirsharifi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I have tried to upgrade artifactory 6.19.1 to 7.4.3 using docker but it fails

I see this error:

 

[main                ] - Error: Error starting application Failed pinging artifactory for 180Request failed with status code 503
    at createError (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/adapters/http.js:237:11)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

There are some warnings in the migration logs; not sure if they are related:

[.a.m.s.p.GoMetadataProvider:67] [tion-event-sweeper-1] - Module Info cannot be extracted from go-default layout - version files will not be resolved

 

 

 



 Comments   
Comment by Ramin Mirsharifi [ 28/May/20 ]

It turned out that I had a Docker proxy set up, and Artifactory version 7 and above fails with a Docker proxy configuration.





[RTFACT-22100] Repository becomes half visible when Compress Empty Folders box checked Created: 12/May/20  Updated: 12/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: UI, Web UI
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Tim Telman Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Attachments: JPEG File 1.jpg     JPEG File 2.jpg     JPEG File 3.jpg    

 Description   

Repository becomes half-visible when "Compress Empty Folders" box checked

 

Steps to reproduce:

Chrome: Version 81.0.4044.138 (Official Build) (64-bit)

  1. Select and expand the repository where subfolders will be visible
  2. Click on "Sort and Filter" icon
  3. Check "Compress Empty Folders" box
  4. You will immediately notice that the repository that was selected and expanded scrolled up, and can't be scrolled down.

 

Workaround:

Check and uncheck the "Compress Empty Folders" box several times until the repository scrolls back to normal.






[RTFACT-22097] X-Ray IDE improvements (Eclipse/VSC plugins) Created: 11/May/20  Updated: 11/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Xray
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Justin Babuscio Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Our use case is very simple:

  • X-Ray/Artifactory web views
    • We define Policies and Watches on custom artifact/package repositories (npm and maven)
    • Security folks review list of vulnerabilities/findings and choose to "Ignore" if acceptable
      • Maybe the 3rd party developers haven't fixed yet (i.e. open bug)
      • Maybe the attack vector and blast radius don't touch our software
      • Maybe we just don't mind the risk
  • Developers
    • Using Eclipse and Visual Studio Code plugins, we proactively monitor the X-Ray health of software applications while developing
      • Re-scan on new modules/components
      • Double check that we didn't introduce something new
    • Developers can then make updates (e.g. version upgrade) to see if it mitigates a vulnerability

 

Improvement/request:

  • It would be nice if we could bind our local source code to a watch policy
  • If a finding (e.g. npm module or maven JAR dependency) was flagged as "High" but later ignored under the policy, the IDE should reflect that status based on the policy that the local project is bound to.
  • If unbounded, it'll always show the raw results of scans (i.e. not interrupted by an X-Ray user or policy)

 

Without this feature, I'm afraid these plugins have little use for us in proactive monitoring as we sometimes have to just "ignore"  Highs.  They do have some use when we're trying to resolve a dependency so we can continue to work with these tools.

 

 






[RTFACT-22091] CNHA Rollout process Created: 11/May/20  Updated: 03/Jun/20

Status: Done
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Uriah Levy Assignee: Uriah Levy
Resolution: Unresolved Votes: 0
Labels: None


 Description   

https://docs.google.com/spreadsheets/d/1bD0CFgyQZcMVaSmMxLkcfPjyE3YfBHfsP1FpAY069YA/edit#gid=0






[RTFACT-22073] Signed URL Download: Validate HTTP method Created: 05/May/20  Updated: 10/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Ran Mor Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Story

  • As JFrog, when a customer performs a download using an RTF signed URL, I'd like to be able to validate that they are using the GET HTTP method, so that they are not able to perform a security exploit by using the POST method

Scope

  • Serve signed URL download only if method is GET

Background

  • Artifactory supports generating a signed URL using the api/signed/url endpoint. The following story refers to the scenario where the customer has already generated a signed URL using this API and now calls the signed URL to download the content. While the API for generating a signed URL is called with the POST method, the download itself should occur using the GET method (i.e. call the signed URL with a GET). If the customer is able to use the signed URL with POST (i.e. if we don't validate the method when the signed URL is called), this may be exploited by attackers.

Acceptance Criteria

  • In case a customer attempts to perform a download using a JFrog signed URL while using an HTTP method other than GET, the download should fail (see the sketch below).
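A minimal sketch of the intended check (Python; this is not Artifactory's actual code, and the function and parameter names are made up):

from http import HTTPStatus

def serve_signed_url_download(http_method: str, signature_valid: bool) -> HTTPStatus:
    # Signed URLs are download-only: anything other than GET is rejected up front.
    if http_method != "GET":
        return HTTPStatus.METHOD_NOT_ALLOWED
    if not signature_valid:
        return HTTPStatus.FORBIDDEN
    return HTTPStatus.OK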





[RTFACT-22071] Artifactory destroys system.yaml format after start up Created: 08/May/20  Updated: 29/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Arturo Aparicio Assignee: Maxim Yurkovsky
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Artifactory destroys system.yaml format after start up.

 

This makes it very error prone to try to edit the file. 

 

To reproduce:

  1. Install Artifactory with Debian using all defaults  but don't start it
  2. Cat the system.yaml (cat /var/opt/jfrog/artifactory/etc/system.yaml)
  3. Notice it is properly formatted
  4. Start the installation (service artifactory start)
  5. Wait for Artifactory to fully come up
  6. Cat the system.yaml (cat /var/opt/jfrog/artifactory/etc/system.yaml)
  7. Notice the format has been destroyed

Badly formatted system.yaml

## @formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1
## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
    ## Extra Java options to pass to the JVM. These values add to or override the defaults.
    #extraJavaOpts: "-Xms512m -Xmx2g"    ## Security Configuration
    security:
    ## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
    #joinKey: "<Your joinKey>"    ## Join key file location
    #joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"    ## Master key file location
    ## Generated by the product on first startup if not provided
    #masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"    ## Maximum time to wait for key files (master.key and join.key)
    #bootstrapKeysReadTimeoutSecs: 120    ## Node Settings
    node:
    ## A unique id to identify this node.
    ## Default: auto generated at startup.
    #id: "art1"    ## Default: auto resolved by startup script
    #ip:    ## Sets this node as primary in HA installation
    #primary: true    ## Sets this node as part of HA installation
    #haEnabled: true    ## Database Configuration
    database:
    ## One of: mysql, oracle, mssql, postgresql, mariadb
    ## Default: Embedded derby## Example for postgresql
#type: postgresql
#driver: org.postgresql.Driver
#url: jdbc:postgresql://<your db url, for example: localhost:5432>/artifactory
#username: artifactory
#password: password 


 Comments   
Comment by Prasanna Raghavendra [ 09/May/20 ]

Maxim Yurkovsky The router updates the system.yaml during start-up. I am guessing the router's parsing is breaking it. Can you check?

Arturo Aparicio Was the system.yaml created by you, or was it created by the installation?

Comment by Brian Krische [ 29/May/20 ]

I would just like to add that this is annoying when using software like Chef or Puppet to manage the system.yaml file.

The actual content of the file could be perfectly fine, but if Artifactory just modifies the formatting (indentation, etc.) at startup, then Chef/Puppet will think the file has been modified and want to reset it to the desired state.

Could, at the very least, some kind of option/property be set to disable modification of the system.yaml by Artifactory?





[RTFACT-22070] Unable to retrieve images from Nvidia docker public registry Created: 08/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.3.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yuvarajan J Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Problem Statement:

Using Artifactory version 7.3.2, I have configured the Nvidia Docker registry's endpoint as the remote URL in a Docker remote repository: https://nvcr.io/
As per the configuration, while pulling Docker images, the images would be fetched from the following location: https://ngc.nvidia.com/catalog/containers

But while trying to pull the images from the remote repository, the following error message is returned.

❯ docker pull <ARTIFACTORY_HOST>/nvidia/tensorrtserver:20.02-py3-clientsdk
Error response from daemon: manifest for <ARTIFACTORY_HOST>/nvidia/tensorrtserver:20.02-py3-clientsdk not found: manifest unknown: The named manifest is not known to the registry.

artifactory-service.log
2020-05-08T22:15:05.153Z [jfrt ] [ERROR] [5fc9c9dc2a804c98] [.DockerV2RemoteRepoHandler:464] [ttp-nio-8081-exec-50] - Missing Manifest from <REMOTE-REPO> 'v2/nvidia/tensorrtserver/manifests/20.02-py3-clientsdk' not found at <REMOTE-REPO>:nvidia/tensorrtserver/20.02-py3-clientsdk/list.manifest.json

artifactory-request.log
2020-05-08T22:15:05.155Z|5fc9c9dc2a804c98|<IP_Masked>|admin|GET|/api/docker/<REMOTE-REPO>/v2/nvidia/tensorrtserver/manifests/20.02-py3-clientsdk|404|-1|0|2110|docker/19.03.8 go/go1.12.17 git-commit/afacb8b kernel/4.19.76-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.8 (darwin))

This problem started occurring only recently; previously we were able to successfully pull the images with nvcr.io configured as the remote endpoint. The URL https://nvcr.io/ used to be publicly available; now it returns a 401 Unauthorized error when accessed from a browser. However, pulling the image directly using the below command works fine.
docker pull nvcr.io/nvidia/tensorrtserver:20.02-py3-clientsdk
Pulling directly from the public registry works fine, but not via Artifactory.

Steps to reproduce the issue:
1. Configure a docker remote repository and update the remote URL as https://nvcr.io/
2. Use the commands from the 'Set Me Up' page to perform a docker login and docker pull for any image available under https://ngc.nvidia.com/catalog/containers registry.
3. Observe that the docker pull command returns the above-reported error.
4. Now, please try to use the direct download command available for the same image. For example: docker pull nvcr.io/nvidia/tensorrtserver:20.02-py3-clientsdk
5. Observe that the image download is successful.






[RTFACT-22064] Error handling inconsistent for MySQL jdbc connection Created: 08/May/20  Updated: 08/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Mikael Emanuelsson Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Background in case: 132062 

 

Problem: error handling hides the underlying issue for three services (jfrt, jffe and jfrou), and the fourth (jfmd) does not report the issue at all.

Switching from Derby in a Docker container to MySQL: the root cause of the issue was an incorrect specification of the JDBC connection. The port (3306) was missing, i.e.

jdbc:mysql://<host>:/artdb?....

should have said:

jdbc:mysql://<host>:3306/artdb?....

With the incorrect JDBC URL (missing port), the log files state the following, and three of the services manage to set up their schemas. Only jfmd fails - but does not report its misery in the log files.

  • The logs show that the container can connect to MySQL:
    2020-05-07T06:59:45.053Z [jfac ] [INFO ] [5f492c2f8aac689e] [s.d.u.AccessJdbcHelperImpl:138] [ocalhost-startStop-1] - Database: MySQL 5.6.37-82.2. Driver: MySQL Connector/J mysql-connector-java-8.0.19 (Revision: a0ca826f5cdf51a98356fdfb1bf251eb042f80bf)
    2020-05-07T06:59:45.053Z [jfac ] [INFO ] [5f492c2f8aac689e] [s.d.u.AccessJdbcHelperImpl:141] [ocalhost-startStop-1] - Connection URL: jdbc:mysql://mgxhot1dbp01.mgx.ppm.nu:/artdb?characterEncoding=UTF-8&elideSetAutoCommits=true&useSSL=false&serverTimezone=UTC
    2020-05-07T06:59:45.075Z [jfac ] [INFO ] [5f492c2f8aac689e] [s.d.u.AccessJdbcHelperImpl:150] [ocalhost-startStop-1] - ***Creating database schema***
  • The jfmd module does not protest in the logs:
    2020-05-07T06:59:55.302Z [jfmd ] [INFO ] [4a968f341a181cd ] [database_bearer.go:71 ] [main                ] - Connecting to (db config: {mysql artifactory:***@tcp(mgxhot1dbp01.mgx.ppm.nu/artdb?charset=utf8&clientFoundRows=true&parseTime=true&tls=false}) [database]

 

  1. Preferred result: an ERROR ("can't connect") for all four services
  2. jfmd handles the missing port in the same manner as the other services

 

Running with debug logging for jfmd in the system.yaml helps identify the issue:

metadata:
    logging:
        application:
            level: debug

 

The Artifactory host is running:

  • OS: Red Hat Enterprise Linux Server 7.8
  • Docker version 19.03.8, build afacb8b
  • Docker-compose version 1.25.5, build 8a1c60f6
  • MySQL: 5.6
  • Artifactory 7.4.3

 

 

 






[RTFACT-22063] allow get item modified REST API to work with wildcards in the path Created: 07/May/20  Updated: 07/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Matthew Wang Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, the Item Last Modified API (https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-ItemLastModified) only works with a single artifact. It would be nice if it could work for a path with a wildcard, like example-repo-local/test*, and return a list of results.
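In the meantime, a comparable result can be approximated with an AQL search; a rough sketch in Python, assuming the api/search/aql endpoint described in the public AQL docs (host, credentials and repository name are placeholders):

import requests

# Find items matching a wildcard name and return their 'modified' timestamps via AQL.
aql = 'items.find({"repo":"example-repo-local","name":{"$match":"test*"}}).include("name","path","modified")'
resp = requests.post(
    "https://artifactory.example.com/artifactory/api/search/aql",
    data=aql,
    headers={"Content-Type": "text/plain"},
    auth=("user", "password"),
)
resp.raise_for_status()
for item in resp.json().get("results", []):
    print(item["path"], item["name"], item["modified"])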






[RTFACT-22062] Removing plugins need artifactory restart to clear cache Created: 07/May/20  Updated: 07/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Senthil Arumugam Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Production


Issue Links:
Relationship

 Description   

When removing Artifactory plugins, the Artifactory pods need to be restarted to clear the cache. The reload plugins API did not help.

 






[RTFACT-22059] Aql invalid with leading white space Created: 07/May/20  Updated: 07/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: AQL
Affects Version/s: 6.16.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Daniel Daugherty Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

AQL submitted to the REST API with leading white space gives a parse error. Trimming the content before submitting resolves the issue, but trimming on the Artifactory server side would save users the headache.






[RTFACT-22051] Inconsistencies between tree-presentation and maven-metadata Created: 07/May/20  Updated: 07/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.10.4
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Rebner, Dr. Gabor Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We observe an inconsistency between the tree file representation and the maven-metadata.

We use two virtual repositories.

The first repository (a) consists of a list of Maven artifacts and a blacklisted folder. The tree file view and the maven-metadata.xml are correct (the folder is blacklisted).
The second repository (b) consists of a list of Maven artifacts (including snapshot artifacts which are blacklisted in a) and repository a itself.

We expect the resulting file list not to contain any file from the blacklist, but the blacklisted files are kept in the tree view while the maven-metadata.xml is still correct.

 






[RTFACT-22045] Improve the Storage summary filtering mechanism Created: 06/May/20  Updated: 11/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Batel Tova Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Attachments: GIF File Au6HAR94WY.gif     PNG File image-2020-05-06-18-55-56-450.png     PNG File image-2020-05-06-18-56-30-944.png    

 Description   

Today, when using the Storage Summary UI and filtering by repository key, the repositories list is filtered, but the totals and the trash can information are not updated for this specific list, as you can see in the following video and screenshots:

I believe we need to show the summary information according to the filtered list.






[RTFACT-22044] Re-enable last API activity timestamp in GetUserDetails Created: 06/May/20  Updated: 26/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Change Request Priority: Normal
Reporter: Peter Nguyen Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi Team,

Previously the Artifactory API allowed admins to determine the latest timestamp of when a given user interacted with Artifactory via the API. Apparently this changed to just be the latest timestamp of when a given user interacted with Artifactory through the UI. 

We would argue that the vast majority of Artifactory interactions comes from the backend due to automation and command line interfacing. We also see value in being able to track when a user logs into the UI. We would like to request that the https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-GetUserDetails call would return both values i.e. the timestamp of when a user last accessed API and UI presumably via authentication events. 

This would be vital to our efforts of being able to accurately determine our active user base as well as cull inactive accounts. Many of our users never interact with the UI so this is why API interaction timestamps would enable us to be more precise in maintaining our instance and its analytics.






[RTFACT-22043] search page in UI returns error Created: 06/May/20  Updated: 06/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.18.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Mari Yamaguchi Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

The search page in the UI returns the error, "An unexpected error has occurred, please check artifactory logs for further details." 

 

The simple search should not error out. 

request.log: request.log:20200428072048|1|REQUEST|172.23.242.120|admin|GET|/ui/artifactsearch/pkg/conan|HTTP/1.1|200|0

artifactory log: 2020-04-28 07:20:55,199 [http-nio-127.0.0.1-8081-exec-2] [ERROR] (o.a.r.c.e.m.GlobalExceptionMapper:48) - null
java.lang.NullPointerException: null






[RTFACT-22040] X-Frame-Options or Content-Security-Policy: frame-ancestors HTTP Headers missing on port 8081 Created: 05/May/20  Updated: 05/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: pradeep Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hello,

I am using artifactory-oss-6.11.6 on Windows Server 2016. Qualys is showing the following vulnerability:

X-Frame-Options or Content-Security-Policy: frame-ancestors HTTP headers missing on port 8081.

How can we mitigate the issue?

Thanks
Pradeep
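For illustration, a quick way to check which of these headers the endpoint currently returns; a minimal sketch (Python; the host name is a placeholder):

import requests

# Inspect the response headers served on port 8081.
resp = requests.get("http://artifactory.example.com:8081/artifactory/")
for header in ("X-Frame-Options", "Content-Security-Policy"):
    print(header, "->", resp.headers.get(header, "<missing>"))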






[RTFACT-22039] URL encoding causes Remote Repository pull to fail Created: 05/May/20  Updated: 13/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Remote Repository
Affects Version/s: 6.18.1, 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Derek Pang Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship
relates to RTFACT-9044 unable to download a file that was de... Open

 Description   

A pull to a remote repository will fail if the curl URL contains URL encoding.

For example:

A generic remote repository is pointing to https://java-buildpack.cloudfoundry.org.

If a pull is attempted to obtain the package https://java-buildpack.cloudfoundry.org/openjdk/bionic/x86_64/bellsoft-jre8u252%2B9-linux-amd64.tar.gz it will fail because the "%2B" is translated to "+" by Artifactory. (i.e. curl -uXX:XXX "http://localhost:8081/artifactory/genericremoterepo/openjdk/bionic/x86_64/bellsoft-jre8u252%2B9-linux-amd64.tar.gz" will not retrieve the package.)

The remote repository host (in this case cloudfoundry) cannot handle the "+" sign and so returns a 404.

 

A request for a package that does not contain any URL encoding from the same repository will succeed (i.e. openjdk-jre-1.8.0_232-bionic.tar.gz).
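For illustration, the encoding round-trip described above can be reproduced with the Python standard library (file name taken from the example above):

from urllib.parse import quote, unquote

requested = "bellsoft-jre8u252%2B9-linux-amd64.tar.gz"
decoded = unquote(requested)              # 'bellsoft-jre8u252+9-linux-amd64.tar.gz'
# Forwarding the decoded form sends a literal '+' upstream, which the remote host answers with a 404.
# Re-encoding the path segment before forwarding preserves the original '%2B':
reencoded = quote(decoded, safe="-._~")
print(decoded)
print(reencoded)                          # 'bellsoft-jre8u252%2B9-linux-amd64.tar.gz'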

 

 






[RTFACT-22032] Repository Statistics Created: 05/May/20  Updated: 05/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: REST API
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Vignesh S Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Would like to have a REST API to gather repository statistics.

Some of the stats that customers would like to see:

Repository usage:

  1. The number of times the repository is used to deploy and retrieve artifacts, including the deployment and download times
  2. The most downloaded artifact in a repository and the list of users who downloaded it
  3. The storage info for the repository against which the REST API is executed





[RTFACT-22024] Add 'check binary existence' option for replication setting in the UI Created: 05/May/20  Updated: 05/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

This feature is only configurable via the REST API:

https://www.jfrog.com/confluence/display/JFROG/Repository+Replication#RepositoryReplication-OptimizingRepositoryReplicationwithChecksum-BasedStorage

 

Please add it to the UI






[RTFACT-22023] Support Bundle fails to include logs Created: 05/May/20  Updated: 04/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.1, 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Joshua Han Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship
Regression:
Yes

 Description   

Issue:

The Support bundle fails to include logs, even though logs have been updated

The issue is seen in the standard Docker and Zip (Mac) installers

Log Messages:

There are no errors or failures in the logs with debugging at org.jfrog.support

Here are log messages from the time the bundle without logs was created.

2020-04-23T12:39:09.538Z [jfrt ] [DEBUG] [70ef497b2ac1be27] [o.j.s.c.c.c.LogsCollector:85 ] [pool-15-thread-7  ] - Initiating collect eligibility check for file 'artifactory-service.log'

2020-04-23T12:39:09.538Z [jfrt ] [DEBUG] [70ef497b2ac1be27] [s.c.c.c.DefaultFilesMatcher:28] [pool-15-thread-7  ] - Matching file /opt/jfrog/artifactory/var/log/artifactory-service.log

2020-04-23T12:39:09.538Z [jfrt ] [DEBUG] [70ef497b2ac1be27] [s.c.c.c.DefaultFilesMatcher:58] [pool-15-thread-7  ] - File's last modified time: 2020-04-23T12:39:09.538267Z

 

Note how the logs do not show the step where it copies the file [(o.j.s.c.c.AbstractSpecificContentCollector:46) - Initiating copy of file 'artifactory-service.log']

Steps to reproduce the issue:

  1. Download a Mac version of Artifactory 7.4.1 Pro
  2. Create a Support bundle with default settings
  3. Notice how logs are not included





[RTFACT-22022] Issues with HA Installation documentation (for 6.x) Created: 04/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: HA
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Andrew Lillie Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

It appears that the main HA installation documentation is here:
https://www.jfrog.com/confluence/display/RTF6X/HA+Installation+and+Setup

In the section of this page on setting up a new secondary, it says:
Go through a new Artifactory Pro installation as described in Installing Artifactory with a link to:
https://www.jfrog.com/confluence/display/RTF6X/Installing+Artifactory

But if you read that page, it has a banner saying "There are different instructions for installing Artifactory HA. If you are installing an Artifactory HA cluster, please refer to HA Installation and Setup", with a link right back to the HA+Installation+and+Setup page. This is an unhelpful circle.

Additionally, it does mention "If you follow the instructions on this page for an installation of Artifactory HA, your HA cluster will not work", but not why it won't work.

I think it would be helpful if the main HA+Installation+and+Setup page explained how to properly install a secondary (perhaps including mentioning which files to _not_ customize / add yourself).






[RTFACT-22021] Support Elastic for log analysis Created: 04/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Narasimha Pai Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Support Elasticsearch and Kibana for JFrog Platform log analysis. They are among the most popular log analytics tools used by our large enterprise customers.






[RTFACT-22020] ohad-product-backlog-do not delete Created: 04/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Ohad Aseo Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

ohad-product-backlog-do not delete






[RTFACT-22019] Not able to proxy Helm repository running on Azure Container Registry Created: 04/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Helm
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Elio Marcolino Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

For HelmCenter, we are trying to proxy and cache helm repositories running on Azure Container Registry.

Example: https://promitor.azurecr.io/helm/v1/repo

Using curl, I'm able to fetch the index file and the charts from that repo.

Inspecting the requests, I can see that every request is redirected to a signed URL like this:

→ curl -v https://promitor.azurecr.io/helm/v1/repo/index.yaml
*   Trying 13.69.64.95...
* TCP_NODELAY set
* Connected to promitor.azurecr.io (13.69.64.95) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.azurecr.io
*  start date: May  2 16:59:03 2020 GMT
*  expire date: May 16 16:59:03 2020 GMT
*  subjectAltName: host "promitor.azurecr.io" matched cert's "*.azurecr.io"
*  issuer: C=US; ST=California; O=Zscaler Inc.; OU=Zscaler Inc.; CN=Zscaler Intermediate Root CA (zscaler.net) (t)
*  SSL certificate verify ok.
> GET /helm/v1/repo/index.yaml HTTP/1.1
> Host: promitor.azurecr.io
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< Server: openresty
< Date: Mon, 04 May 2020 20:30:06 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 417
< Connection: keep-alive
< Access-Control-Expose-Headers: Docker-Content-Digest
< Access-Control-Expose-Headers: WWW-Authenticate
< Access-Control-Expose-Headers: Link
< Access-Control-Expose-Headers: X-Ms-Correlation-Request-Id
< Docker-Distribution-Api-Version: registry/2.0
< Location: https://weumanaged19.blob.core.windows.net/2d0d994199874e139a3800c72a530625-artifact-nkxe60asba//docker/registry/v2/blobs/sha256/02/025f506bfc4065f9d6814973a83b27e47ee564480e18495fedb69ff385ac003c/data?se=2020-05-04T20%3A50%3A06Z&sig=4%2FKK1kP5vKJzA8BWywuX9KGHzmVPNZfhkSN%2FqGQKNGs%3D&sp=r&sr=b&sv=2016-05-31&regid=2d0d994199874e139a3800c72a530625&anon=true
< Strict-Transport-Security: max-age=31536000; includeSubDomains
< X-Content-Type-Options: nosniff
< X-Ms-Correlation-Request-Id: 8e24d753-aa4a-4fc8-ae94-bf278f4f392f
< Strict-Transport-Security: max-age=31536000; includeSubDomains
<
<a href="https://weumanaged19.blob.core.windows.net/2d0d994199874e139a3800c72a530625-artifact-nkxe60asba//docker/registry/v2/blobs/sha256/02/025f506bfc4065f9d6814973a83b27e47ee564480e18495fedb69ff385ac003c/data?se=2020-05-04T20%3A50%3A06Z&amp;sig=4%2FKK1kP5vKJzA8BWywuX9KGHzmVPNZfhkSN%2FqGQKNGs%3D&amp;sp=r&amp;sr=b&amp;sv=2016-05-31&amp;regid=2d0d994199874e139a3800c72a530625&amp;anon=true">Temporary Redirect</a>.

* Connection #0 to host promitor.azurecr.io left intact

When I try to add this repo as a remote helm repository in Artifactory and resolve the same index file, I get an authentication error:

→ curl https://helmcenterstg.jfrog.io/artifactory/promitor-remote/index.yaml
{
  "errors" : [ {
    "status" : 404,
    "message" : "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature."
  } ]
}%

I can't see any useful information in the logs.






[RTFACT-22018] ohad-test-backlog-Do Not Delete!!!!!!! Created: 04/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Ohad Aseo Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

ohad-test-backlog






[RTFACT-22017] Moving Docker images to a version-less path is allowed Created: 04/May/20  Updated: 12/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Docker
Affects Version/s: 6.19.0, 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Nitzan Benshimol Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2020-05-04 at 19.32.34.png     PNG File Screen Shot 2020-05-04 at 19.41.09.png    

 Description   

See the attached images.

Trying to pull such an image will fail because the default tag is "latest".

These images will not be successfully indexed by Xray.

This option should be blocked.






[RTFACT-22015] NuGet not installing latest version from virtual repositories Created: 04/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: 6.19.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Stefan Gangefors Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We have one local and one remote nuget repo included in a virtual repo.

When using the virtual repo, nuget always fetches an old version while using the local repo it fetches the latest version.

 

Using the virtual repository as source yields an unexpected result. It returns v3.3.0 of the selected package. 

root@6d5c653c3ab7:~/foo# dotnet add package Axis.PostgreSQL
  Writing /tmp/tmpRcQMFD.tmp
info : Adding PackageReference for package 'Axis.PostgreSQL' into project '/root/foo/foo.csproj'.
info : Restoring packages for /root/foo/foo.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/axis.postgresql/index.json
info :   CACHE https://artifacts.se.axis.com/artifactory/api/nuget/virtual-nuget/FindPackagesById()?id='Axis.PostgreSQL'&semVerLevel=2.0.0
info :   NotFound https://api.nuget.org/v3-flatcontainer/axis.postgresql/index.json 744ms
info : Package 'Axis.PostgreSQL' is compatible with all the specified frameworks in project '/root/foo/foo.csproj'.
info : PackageReference for package 'Axis.PostgreSQL' version '3.3.0' updated in file '/root/foo/foo.csproj'.
info : Committing restore...
info : Writing assets file to disk. Path: /root/foo/obj/project.assets.json
log  : Restore completed in 1.18 sec for /root/foo/foo.csproj.

 

Compare this when using the local repo. Here v7.0.0 is fetched, which is the latest version.

root@6d5c653c3ab7:~/foo# dotnet add package Axis.PostgreSQL
  Writing /tmp/tmpByWBDx.tmp
info : Adding PackageReference for package 'Axis.PostgreSQL' into project '/root/foo/foo.csproj'.
info : Restoring packages for /root/foo/foo.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/axis.postgresql/index.json
info :   CACHE https://artifacts.se.axis.com/artifactory/api/nuget/local-nuget/FindPackagesById()?id='Axis.PostgreSQL'&semVerLevel=2.0.0
info :   CACHE https://artifacts.se.axis.com/artifactory/api/nuget/local-nuget/FindPackagesById()?semVerLevel=2.0.0&id=%27Axis.PostgreSQL%27&%24skip=80
info :   NotFound https://api.nuget.org/v3-flatcontainer/axis.postgresql/index.json 715ms
info : Package 'Axis.PostgreSQL' is compatible with all the specified frameworks in project '/root/foo/foo.csproj'.
info : PackageReference for package 'Axis.PostgreSQL' version '7.0.0' updated in file '/root/foo/foo.csproj'.
info : Committing restore...
info : Writing assets file to disk. Path: /root/foo/obj/project.assets.json
log  : Restore completed in 1.05 sec for /root/foo/foo.csproj.

 

Any idea why this happens?

And is there a workaround to get the virtual repos to return the correct version?

 



 Comments   
Comment by Stefan Gangefors [ 04/May/20 ]

Duplicate of https://www.jfrog.com/jira/browse/RTFACT-21843

Sorry for not finding it before creating the ticket. When reading the logs I posted again, I noticed the number 80 and it felt familiar, so I searched again and of course found an existing ticket.

This is fixed in 6.19.1 for anyone finding this issue.





[RTFACT-22014] Allow to change LDAP referrals strategy and socket timeout through the UI Created: 04/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: UI
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Shani Attias Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently, the only way to change the LDAP referrals strategy or the socket timeout is by adding and adjusting the below properties in artifactory.system.yaml

artifactory.security.ldap.referralStrategy (The value can be either 'follow', 'ignore' or 'throw'. The default value is ‘follow’.)
artifactory.security.ldap.socket.timeoutMillis

It would be more accessible and easier to change the above properties from the LDAP page in the UI.






[RTFACT-21996] Garbage Collector for Inactive High Availability Nodes Created: 01/May/20  Updated: 01/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: High Availability, Kubernetes
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Tyler Denmon Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

We're running Artifactory as a high availability cluster inside of our EKS-hosted Kubernetes cluster. While doing some routine maintenance we discovered that Artifactory does not ever clear out HA nodes that have failed their heartbeats, even after long periods of time.

I assume this behavior is because Artifactory was originally designed to run on bare metal/VMs that were expected to be durable, but on Kubernetes our nodes are containers that are constantly getting restarted, shifted around, terminated, etc and so there is a large buildup of nodes that have failed their heartbeats and are never going to come back, which leaves us with a large amount of cruft.

Speaking with JFrog support, it appears that the only way to clean these up is by manually deleting the old nodes via the UI, which is not an enjoyable experience (I'm currently staring at 19 pages worth of these dead nodes on our production cluster, for reference). Furthermore the documentation I found regarding deleting HA nodes mentions the possibility of these old nodes causing problems in the cluster overall: https://www.jfrog.com/confluence/display/RTF6X/Managing+the+HA+Cluster#ManagingtheHACluster-RemovinganUnusedNode

My ask is to have a garbage collector added to Artifactory that can be configured to clean up nodes that have failed their heartbeat over a certain threshold. As an example, I could enable this new garbage collector on our Artifactory cluster to clean up any HA nodes that have failed their heartbeats for more than 24 hours and have the collector run once per day on a schedule.






[RTFACT-21995] Seeing so many unnecessary calls to remote registry Created: 01/May/20  Updated: 01/Jun/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: B Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Relationship
is related to RTFACT-22323 The exclude pattern is not respected ... Open

 Description   

We have a virtual registry set up that includes local and remote Docker registries. I do not understand why Artifactory is calling the remote repo's tags list even though it finds the tag in the local registry. As far as I understand the resolution order, if it finds the image and its info in the local registry it should not call the remote one. Can you please help me understand why I am seeing so many calls like the ones below?

2020-05-01 17:56:21,955 [http-nio-8081-exec-185] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:320) - Unable to fetch catalog from 'https://registry-1.docker.io/v2/_catalog?n=1000': HTTP/1.1 401 Unauthorized
2020-05-01 17:56:21,966 [http-nio-8081-exec-132] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:320) - Unable to fetch catalog from 'https://registry-1.docker.io/v2/_catalog?n=1000': HTTP/1.1 401 Unauthorized
2020-05-01 17:56:22,049 [http-nio-8081-exec-13] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry-1.docker.io/v2/com.shoprunner.data.sink/tags/list?': HTTP/1.1 401 Unauthorized
2020-05-01 17:56:22,054 [http-nio-8081-exec-38] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry-1.docker.io/v2/revelio-launcher/tags/list?': HTTP/1.1 401 Unauthorized
2020-05-01 17:56:22,066 [http-nio-8081-exec-47] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://registry-1.docker.io/v2/reverseproxy-datadog-agent-wip/tags/list?': HTTP/1.1 401 Unauthorized
2020-05-01 17:56:22,245 [http-nio-8081-exec-141] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:281) - Unable to fetch tags from 'https://docker.bintray.io/v2/reverseproxy-datadog-agent-prd/tags/list?': HTTP/1.1 404 Not Found
2020-05-01 17:56:23,137 [http-nio-8081-exec-100] [ERROR] (o.a.a.d.r.v.DockerV2RemoteRepoHandler:320) - Unable to fetch catalog from 'https://registry-1.docker.io/v2/_catalog?n=1000': HTTP/1.1 401 Unauthorized


 Comments   
Comment by Stefan Gangefors [ 06/May/20 ]

We are seeing these errors too. Since they are most likely not actual errors, it's a nuisance to get alerted by errors in the log only to find that there is nothing to act on.





[RTFACT-21994] Artifactory should support Windows line endings (CR LF) on metadata calculation Created: 01/May/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.17.0, 6.19.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Sankar Kumar Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Metadata calculation fails when a YAML file contains Windows line endings (CR LF).
Example:
The Helm chart kafka-router-1.0.2.tgz was not being indexed in index.yaml, causing the below errors in the Artifactory logs:

2020-04-27 14:32:56,225 [art-exec-16949363] [ERROR] (o.j.r.h.HelmMetadataExtractor:72) - Could not extract metadata from chart kafka-router-1.0.2.tgz 

Upon investigation, we determined that some of the files in kafka-router-1.0.2.tgz had Windows (CR LF) line endings instead of Unix (LF) line endings.

Specifically:

kafka-router-1.0.2.tgz has templates/configmap.yaml with Windows line endings (CR LF), whereas kafka-router-1.0.3.tgz has templates/configmap.yaml with Unix line endings (LF).

The same was true of other files such as values.yaml, templates/deployment.yaml and .helmignore (though Chart.yaml did have LF line endings).

In the Web UI tree browser, kafka-router-1.0.2.tgz appears correctly alongside other versions with Unix line endings, such as kafka-router-1.0.1.tgz and kafka-router-1.0.3.tgz; however, when you look at its Chart Info it is blank. Indeed, when you look at the Chart Info for kafka-router-1.0.3.tgz and then move to kafka-router-1.0.2.tgz, the Chart Info for kafka-router-1.0.3.tgz is still displayed. Each time you attempt to load the Chart Info for kafka-router-1.0.2.tgz, the following Error 500 is generated and can be seen in the Chrome Developer Console:

 

/artifactory/ui/views/helm:1 Failed to load resource: the server responded with a status of 500 (Internal Server Error)

Customer ASK:
1) Artifactory should support Windows line endings.
2) If Helm Chart information cannot be displayed, instead of a hidden Error 500 that won't be seen by users, could more information be provided? It feels wrong that Chart Info for another chart is still displayed when in actual fact an error has occurred.
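
Until such support exists, a possible client-side workaround (a sketch only, assuming the chart can be repackaged and that the archive unpacks to a kafka-router/ directory; GNU sed assumed) is to normalize the line endings and republish the chart:

# unpack the problematic chart
tar -xzf kafka-router-1.0.2.tgz
# strip carriage returns from the YAML files and .helmignore inside the chart
find kafka-router -type f \( -name '*.yaml' -o -name '.helmignore' \) -exec sed -i 's/\r$//' {} +
# repackage, then re-deploy the regenerated kafka-router-1.0.2.tgz to the Helm repository
helm package kafka-router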






[RTFACT-21993] Support multiple S3 buckets Created: 01/May/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: S3
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Ariel Kabov Assignee: Unassigned
Resolution: Unresolved Votes: 3
Labels: None

Issue Links:
Relationship
is related to RTFACT-16985 Allow using S3 with Storage Sharding Open

 Description   

This feature request is to support configuring Artifactory with two or more S3 buckets.

For large-scale environments, huge buckets are more difficult to manage.

Providing an option to configure "redundancy" across several S3 buckets would be ideal.






[RTFACT-21971] Archive indexer job consumes too much memory Created: 30/Apr/20  Updated: 30/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Performance Priority: Normal
Reporter: Yossi Shaul Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The archive indexer pulls all the available tasks into memory without restriction. If the list is very long, this can consume a lot of memory.






[RTFACT-21967] Artifactory 7 doesn't load in Microsoft Edge Created: 30/Apr/20  Updated: 14/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: 7.3.2, 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Brian Krische Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

 
Operating System:

  • Windows 10 1909

Browser:

  • Microsoft Edge 44.18362.449.0
  • Microsoft EdgeHTML 18.18363

Artifactory Versions:

  • 7.3.2
  • 7.4.3

Issue Links:
Duplicate

 Description   

I am unable to load the Artifactory 7 Web UI with the Microsoft Edge browser; it remains stuck at the splash screen.

I noticed that if I look at the console log in Edge, I see the following error:

SCRIPT1028: SCRIPT1028: Expected identifier, string or number chunk-vendors.21570a23.js (39,23720)





[RTFACT-21961] Remote repository - repeated query params not propagated correctly Created: 30/Apr/20  Updated: 30/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Remote Repository
Affects Version/s: 7.4.3
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Waldek Herka Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hello,
I believe there is a bug in the "Propagate Query Params" feature of the remote repository configuration.

Parameter propagation can indeed be turned on/off. However, when it comes to propagating list-type parameters, only the comma-delimited form is handled.

Other widely accepted forms, like simply repeating the keys or repeating them with [], which are supported by all modern web servers and frameworks, are not handled correctly.

 

See details of the standard.

 

A gist of the current behaviour is provided below.

 

The query as follows:

?key=/val1&key=/val2&key=/val3

is transformed into:

?key=/val1

Only the first occurrence of the parameter makes it to the final query propagated to the remote handler.
 
Some evidence:

- wget:
wget -d "http://arhost.net:8082/artifactory/bms-nas-generic-prod-bms/com/amadeus/obe/mdw/DeviceCache/api/15.0.0.2/api-15.0.0.2-lib.tar.gz?repository_path=/remote/intdeliv/components&repository_path=/remote/projects/ngdcs/ngddelde&repository_path=/remote/projects/ngddelde/deliver&repository_path_2=/remote/projects/ngddelde/deliver"- artifactory:2020-04-30T09:56:56.309Z [jfrt ] [INFO ] [73e0ae216754417f] [o.a.r.HttpRepo:422            ] [http-nio-8081-exec-2] - bms-nas-generic-prod-bms downloading http://remote.handler.net:6080/com/amadeus/obe/mdw/DeviceCache/api/15.0.0.2/api-15.0.0.2-lib.tar.gz?repository_path=%2Fremote%2Fintdeliv%2Fcomponents&repository_path_2=%2Fremote%2Fprojects%2Fngddelde%2Fdeliver 708.96 KB
2020-04-30T09:56:56.383Z [jfrt ] [INFO ] [73e0ae216754417f] [o.a.r.HttpRepo:435            ] [http-nio-8081-exec-2] - bms-nas-generic-prod-bms downloaded  http://remote.handler.net:6080/com/amadeus/obe/mdw/DeviceCache/api/15.0.0.2/api-15.0.0.2-lib.tar.gz?repository_path=%2Fremote%2Fintdeliv%2Fcomponents&repository_path_2=%2Fremote%2Fprojects%2Fngddelde%2Fdeliver 708.96 KB at 9,843.06 KB/sec- remote:
172.29.1.124 - - [30/Apr/2020:09:56:56 +0000] "GET /com/amadeus/obe/mdw/DeviceCache/api/15.0.0.2/api-15.0.0.2-lib.tar.gz?repository_path=%2Fremote%2Fintdeliv%2Fcomponents&repository_path_2=%2Fremote%2Fprojects%2Fngddelde%2Fdeliver HTTP/1.1" 200 725974 "-" "Artifactory/7.4.3 70403900" "-"
172.29.1.124 - - [30/Apr/2020:09:56:56 +0000] "GET /com/amadeus/obe/mdw/DeviceCache/api/15.0.0.2/api-15.0.0.2-lib.tar.gz?repository_path=%2Fremote%2Fintdeliv%2Fcomponents&repository_path_2=%2Fremote%2Fprojects%2Fngddelde%2Fdeliver HTTP/1.1" 200 725974 "-" "Artifactory/7.4.3 70403900" "-"

Kind regards,

Waldek






[RTFACT-21956] Add Log Scrubbing Option to UI Support Zone 'Create Access Bundle' Action Created: 29/Apr/20  Updated: 20/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Support Zone
Affects Version/s: 6.12.2
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Timothy Golden Assignee: Ariel Kabov
Resolution: Unresolved Votes: 0
Labels: None
Environment:

On premises JFrog Artifactory production application cluster.



 Description   

We recently received a demonstration from Ariel Seftel of a script he developed that scrubs classified data (IP addresses & hostnames) from the logs/files in the JFrog Support Bundle prior to their upload to supportlogs.jfrog.com.  This script is designed to be run after Support Log bundle generation requiring OS level access to the server running the Artifactory instance or access to a native Linux/UNIX shell environment.  Most of our technicians do not enjoy OS level access to production servers.  Moreover, our technician's standard technical workstation runs the Windows OS without sufficient support for Linux/UNIX shells.  Therefore, we'd like to request that the capability provided by this script be embedded within the JFrog Artifactory application itself.  Meaning, when a technician uses the Support Zone 'Create Support Bundle' action they would be offered the option (checkbox?) to request that the logs/files be sanitized as part of the Support Bundle generation process itself.  This would remove the need for our technicians to have access to a specific operating environment (with sufficient access rights) to perform what is currently a post-Support Bundle generation atomic action.






[RTFACT-21954] Backup failure notification email will not trigger when a backup job fails due to Build info file does not exist error Created: 29/Apr/20  Updated: 11/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Backup
Affects Version/s: 6.19.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Pavan Gonugunta Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: JPEG File backup-config.jpg    

 Description   

Issue Description:
When an Artifactory backup job fails while getting the build-info object for a specific build, Artifactory admin users will not receive a failed backup notification email.

 

Steps to reproduce:

  1. Configure a mail server for the Artifactory instance and check the connectivity between the mail server and Artifactory using Send Test Mail.
  2. Deploy a build "test-build" to Artifactory.
  3. Delete the build info from the nodes table in the DB. Get the node_id of the current build using the query "SELECT * FROM nodes WHERE node_type=1 AND node_path='test-build'", then delete the deployed build using "DELETE FROM nodes WHERE node_id=<node-ID>", which removes the build-info.json entry from the nodes table.
  4. Create a new Backup job and enable the "Verify enough disk space is available for backup" and "Send Mail to Admins if there are Backup Errors" options, which should trigger an email to admin users when the backup fails (backup configuration attached for reference).
  5. Run the backup; it fails with the UI error "Build info file does not exist: artifactory-build-info:test-build/49-1588005273557.json".





[RTFACT-21953] CLI should be able to upload artifacts according to layout Created: 29/Apr/20  Updated: 29/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: CLI
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Vasiliy Gorokhov-Apelsinov Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Deploying according to a layout is currently possible only from the UI.

The CLI should be able to upload artifacts according to a layout too.

 

Otherwise all these layouts become essentially useless.

  1. In order to upload a file, the user has to know what the repo layout is in order to construct the correct path.
  2. If the server admin decides to change the layout, all release scripts, CI pipelines, etc. have to be changed.

It would be great to just pass `orgPath`, `module`, and so on to the CLI and let it (or the Artifactory server) do the work.






[RTFACT-21951] ArtifactoryConfigurationManagerTest takes 2 minutes to complete Created: 28/Apr/20  Updated: 04/Jun/20

Status: Ready For Merge
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Uriah Levy Assignee: Mor Merhav
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The test was excluded from running as part of our db-tests suite (testng-db.xml). I re-added it because the exclusion seemed like a mistake. However, the test takes ~2m to run on my laptop, which I assume is related to the new conf-mgr v2 quiet period mechanism.

It might be possible to make it shorter by lowering the conf-mgr quiet period in the test.






[RTFACT-21947] support the PyPI JSON api Created: 28/Apr/20  Updated: 28/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Thomas Grainger Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

PyPI has a JSON API that allows clients to download hashes directly without downloading the entire package and hashing it: https://warehouse.pypa.io/api-reference/json/

This API is used by pip-compile from pip-tools: https://github.com/jazzband/pip-tools/pull/1109
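
For reference, a minimal example of the upstream endpoint in question (the project name is only an illustration); the response includes per-file "digests" (md5, sha256) that clients such as pip-compile can use without downloading the archives:

curl -s "https://pypi.org/pypi/requests/json" | python -m json.tool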






[RTFACT-21946] About artifactory Question Created: 28/Apr/20  Updated: 28/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Story Priority: Normal
Reporter: Jintae Son Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Hi.
I am currently an Artifactory Professional license user.
Can I ask questions about Artifactory here and get answers?






[RTFACT-21936] ERROR is thrown by artifactory during DB statement Created: 27/Apr/20  Updated: 27/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifactory Cloud, Database
Affects Version/s: 7.3.1
Fix Version/s: None

Type: Task Priority: Normal
Reporter: Jacob Saleh Assignee: Yaniv Shani
Resolution: Unresolved Votes: 0
Labels: None


 Description   

I see a lot of ERRORs related to Artifactory being logged by the production Postgres DB, across regions; I've provided two examples which look similar.

2020-04-27 07:00:50 UTC:192.168.130.219(59076):inditex_art@inditex_art:[12683]:ERROR: duplicate key value violates unique constraint "locks_pk"
2020-04-27 07:00:50 UTC:192.168.130.219(59076):inditex_art@inditex_art:[12683]:DETAIL: Key (category, lock_key)=(node-event-task-manager, node-npmjsorg-remote-cache/run-async/-/run-async-2.4.1.tgz) already exists.
2020-04-27 07:00:50 UTC:192.168.130.219(59076):inditex_art@inditex_art:[12683]:STATEMENT: insert into distributed_locks values($1,$2,$3,$4,$5,$6)

2020-04-27 05:01:46 UTC:192.168.191.29(42566):tr1_art@tr1_art:[88756]:ERROR: duplicate key value violates unique constraint "locks_pk"
2020-04-27 05:01:46 UTC:192.168.191.29(42566):tr1_art@tr1_art:[88756]:DETAIL: Key (category, lock_key)=(node-event-task-manager, libs-snapshot-local/com/thomsonreuters/anzpathway/cobalt-pathway-nz-comm/1.0.0-SNAPSHOT/cobalt-pathway-nz-comm-1.0.0-20180802.154603-28.pom) already exists.
2020-04-27 05:01:46 UTC:192.168.191.29(42566):tr1_art@tr1_art:[88756]:STATEMENT: insert into distributed_locks values($1,$2,$3,$4,$5,$6)

Please check and determine whether this should be handled or removed, as these production instances should not emit ERROR-level logs that are not addressed accordingly.






[RTFACT-21935] Quality and Maintenance Q2-20: Sprint 5 Created: 27/Apr/20  Updated: 10/May/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Quality and Maintenance Q2-20: Sprint 5

 Description   

.






[RTFACT-21934] Quality and Maintenance Q2-20: Sprint 4 Created: 27/Apr/20  Updated: 10/May/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Quality and Maintenance Q2-20: Sprint 4

 Description   

.






[RTFACT-21933] Quality and Maintenance Q2-20: Sprint 3 Created: 27/Apr/20  Updated: 13/May/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Quality and Maintenance Q2-20: Sprint 3

 Description   

.






[RTFACT-21932] Quality and Maintenance Q2-20: Sprint 2 Created: 27/Apr/20  Updated: 10/May/20

Status: Ready For Dev
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Quality and Maintenance Q2-20: Sprint 2

 Description   

.






[RTFACT-21931] Quality and Maintenance Q2-20: Sprint 1 Created: 27/Apr/20  Updated: 04/May/20

Status: Development
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Quality and Maintenance Q2-20: Sprint 1

 Description   

.






[RTFACT-21930] Customer Commitments Q2-20: Sprint 6 Created: 27/Apr/20  Updated: 28/Apr/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Customer Commitments Q2-20: Sprint 6

 Description   

.






[RTFACT-21929] Customer Commitments Q2-20: Sprint 5 Created: 27/Apr/20  Updated: 10/May/20

Status: Backlog
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Epic Priority: Normal
Reporter: Shlomi Kriheli Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Epic Name: Customer Commitments Q2-20: Sprint 5

 Description   

.






[RTFACT-21922] Improved Storage Quota control Created: 27/Apr/20  Updated: 27/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Stefan Gangefors Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

As it stands today, Artifactory's Storage Quota control is a blunt instrument for making sure that we don't run out of disk space.

The upper limit at which it starts blocking uploads is when the disk is 99% used. For a 30TB disk, that still leaves 300GB of free space, which translates to quite a lot of artifacts.

Also, there are no checks for available inodes, which can run out before the available space does.

The Storage Quota setting should be improved by changing from a percentage to actual bytes. It should also be expanded to check inodes, and the number of free inodes should likewise be specified as a number rather than a percentage.

I would suggest that you support a format where one could specify the following strings.

1000000000

1000000k

1000M

1G

etc.

 






[RTFACT-21921] Optimize NPM Virtual Metadata Caching and Aggregation Created: 27/Apr/20  Updated: 13/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Angello Maggio Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Currently NPM merges virtual metadata on the fly, as every call to get package metadata on a virtual repository has to perform the merge and serialize the JSON.
There seems to be no optimization for caching the intermediate metadata, which makes these repositories significantly less efficient than virtual repositories of other package types.






[RTFACT-21916] Show Xray version from within the Artifactory UI Created: 26/Apr/20  Updated: 26/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 7.4.1
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Sheldon Daigle Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Since the release of Artifactory 7.4, Xray has been integrated into the Artifactory UI and the separate Xray UI has gone away. There's no way to see which version of Xray an Artifactory server is configured with, short of an API call. It would be nice if this were included as part of the "ui/admin/monitoring/service-status" page, for example.






[RTFACT-21914] Exception isn't caught correctly when authenticating with an API key with an LDAP user Created: 26/Apr/20  Updated: 25/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shai Ben-Zvi Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Relationship
is related to RTFACT-21570 Using API Key with LDAP user leads to... Open

 Description   

When we authenticate with an API key as an LDAP user, we call the method refreshUserFromLdap.
We perform a query to LDAP to search for the username and get its groups.

The issue happens if the LDAP queries within findSettingsForActiveUser succeed, but the LDAP queries within searchUserInLdap fail.
When that happens, dirContextOperations is returned as null and null pointer exceptions are thrown within createSimpleUser, so a 500 error is returned because the exception is not caught; the request log entry is skipped as well because of the uncaught exception.

For example:

FE (Httpd):

[17/Mar/2020:13:24:40 +0100] redacted "GET /artifactory/some-repo/path-to-package/some-file.tgz HTTP/1.1" 65 500 2151467 redacted

BE request.log:

No log is displayed for that particular request.

BE artifactory.log:

artifactory.2020-03-17.5.log.zip:2020-03-17 13:24:42,460 [http-nio-8081-exec-2549] [WARN ] (o.a.s.l.LdapServiceImpl:179) - Unexpected exception in LDAP query:for user redacted vid LDAP: Uncategorized exception occured during LDAP processing; nested exception is javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.

Please note that other possibly impacted methods are getStringAttribute and populateGroups (org/artifactory/security/ldap/LdapUtils.java:221) when going through createSimpleUser.






[RTFACT-21913] After changing Artifactory's port, repositories "Set Me Up" URL directs to port 8040 instead of the new port Created: 26/Apr/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Web UI
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shani Attias Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

After changing Artifactory's port, repositories "Set Me Up" URL directs to port 8040 instead of the new port.

Steps to reproduce:
1. Change Artifactory's port by adding the following at the bottom of the system.yaml file (e.g. change to 1234):

artifactory:
    port: 1234

2. Go to Artifactory --> choose a repository
3. Click on "Set Me Up"
4. Note that the URL is directed to the Access port instead of the newly configured port:
(<artifactory_url>:8040/artifactory....)

Was reported for Artifactory 7.3.2 and tested on 7.4.1






[RTFACT-21912] URL redirect fails after changing Artifactory port Created: 26/Apr/20  Updated: 27/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Shani Attias Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

After changing Artifactory's port from 8081 to another port in the system.yaml file, the redirect fails.
Hitting <Artifactory_ip>:<new_port> should redirect to <Artifactory_ip>:8082/ui.
Instead, hitting <Artifactory_ip>:<new_port> will result in 404.

steps to reproduce:
1. Access the following URL:
HTTP://<Artifactory_ip>:8081
2. Notice you are redirected to HTTP://<Artifactory_ip>:8082/ui
3. Change Artifactory's port in the system.yaml file by adding the following at the bottom of the file (e.g. change to 1234):

artifactory:
    port: 1234

4. Try to access the following URL:
HTTP://<Artifactory_ip>:1234
5. Notice step 4 results in a 404 page

The expected behavior is, instead of getting a 404 page, to be redirected to HTTP://<Artifactory_ip>:8082/ui as before the port was changed.

Tested in Artifactory 7.4.1.






[RTFACT-21911] The upload REST API does not support the specified character set Created: 26/Apr/20  Updated: 26/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.17.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Yunzong Guo Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

When I use the REST API to upload a file whose name contains Chinese characters, the returned "uri" is garbled, and the Content-Type is 'application/vnd.org.jfrog.artifactory.storage.ItemCreated+json; charset=ISO-8859-1'.

For example:
1. Command is:
curl -vvv -H 'Accept':'application/json; charset=utf-8' -H 'Content-Type':'application/json; charset=utf-8' -H 'X-JFrog-Art-Api:AKCp5ekmj8byn2L8XSANG2hRKa1v5iqRN1bcMFyEfgcApR2ewzSmceW2bawkNj2Z1cQcdd5am' -T hello测试.docx "http://192.168.230.155:8081/artifactory/generic-test/path/"

2. Response is:
...
< Location: http://192.168.230.155:8081/artifactory/generic-test/path/hello??.docx
< Content-Type: application/vnd.org.jfrog.artifactory.storage.ItemCreated+json;charset=ISO-8859-1
< Transfer-Encoding: chunked
< Date: Sun, 26 Apr 2020 01:16:04 GMT
<
{
"repo" : "generic-test",
"path" : "/path/hello??.docx",
"created" : "2020-04-26T09:13:25.427+08:00",
"createdBy" : "admin",
"downloadUri" : "http://192.168.230.155:8081/artifactory/generic-test/path/hello??.docx",
"mimeType" : "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"size" : "14576",
"checksums" :

{ "sha1" : "77c0b50acc8362232ca0ab67c9ab7c1a197808d0", "md5" : "b60c12ba3221b321eac809404a9fb55a", "sha256" : "63bfd572be152ea668386945011aeccc417016f55ff9ef44065371c23dcbe245" }

,
"originalChecksums" :

{ "sha256" : "63bfd572be152ea668386945011aeccc417016f55ff9ef44065371c23dcbe245" }

,
"uri" : "http://192.168.230.155:8081/artifactory/generic-test/path/hello??.docx"
...

3. The correct file name is hello测试.docx, but it returns hello??.docx, and the Content-Type is: application/vnd.org.jfrog.artifactory.storage.ItemCreated+json;charset=ISO-8859-1
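
A possible client-side workaround (an assumption, not a confirmed fix) is to percent-encode the non-ASCII file name in the target URL so the request path stays ASCII; %E6%B5%8B%E8%AF%95 below is the UTF-8 percent-encoding of 测试:

curl -H 'X-JFrog-Art-Api:<api-key>' -T hello测试.docx "http://192.168.230.155:8081/artifactory/generic-test/path/hello%E6%B5%8B%E8%AF%95.docx"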






[RTFACT-21908] Conda default remote repository does not include channels Created: 24/Apr/20  Updated: 13/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Conda
Affects Version/s: 6.19.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Arturo Aparicio Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None

Issue Links:
Duplicate
is duplicated by RTFACT-21909 Conda packages are not resolved in re... Resolved

 Description   

The Conda default remote repository does not include user channels. This means that channel-specific downloads will fail when working against Artifactory, while they succeed against the default Conda remote site.

To reproduce:
1. Run this against the default conda client/remote (non Artifactory)

conda install -c rapidsai -c nvidia -c conda-forge -c defaults cudf=0.12 python=3.6 cudatoolkit=10.1


2. Notice it downloads all packages as expected
3. Create the Artifactory default Conda repositories and point the conda client to them
4. Run the same command and notice the failure

PackagesNotFoundError: The following packages are not available from current channels:

  - cudf=0.12

Current channels:

  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/rapidsai/linux-64
  - http://adminkey@mill.jfrog.team:12250/artifactory/api/conda/conda/rapidsai/noarch
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/nvidia/linux-64
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/nvidia/noarch
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/conda-forge/linux-64
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/conda-forge/noarch
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/linux-64
  - http://admin:key@mill.jfrog.team:12250/artifactory/api/conda/conda/noarch

The problem seems to be that we only include the default packages (conda-remote: https://repo.anaconda.com/pkgs/main). The channel packages can be found at https://conda.anaconda.org/.

A potential solution on the Artifactory side is to create two remote repositories and aggregate them under the default conda virtual.

Workaround:

1. Manually create a second conda remote repository with the URL https://conda.anaconda.org/
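
A sketch of that workaround, assuming a hypothetical remote repository key of conda-anaconda-org and using the repository configuration REST API (host, credentials, and repository key below are placeholders, not values from this ticket):

# create a second conda remote repository that proxies the channel site
curl -u admin:<password> -X PUT "http://mill.jfrog.team:12250/artifactory/api/repositories/conda-anaconda-org" \
  -H "Content-Type: application/json" \
  -d '{"key": "conda-anaconda-org", "rclass": "remote", "packageType": "conda", "url": "https://conda.anaconda.org/"}'
# then resolve channel-specific packages through it, e.g.
conda install -c http://admin:<key>@mill.jfrog.team:12250/artifactory/api/conda/conda-anaconda-org/rapidsai cudf=0.12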






[RTFACT-21907] Conda remote repo cannot resolve some of the packages Created: 24/Apr/20  Updated: 27/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Conda
Affects Version/s: 6.19.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Sowjanya Kamatam Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Artifactory is unable to pull conda packages from the remote repository. The Conda client is able to successfully pull the packages directly.

$ conda info

 

active environment : base
active env location : /opt/conda
shell level : 1
user config file : /root/.condarc
populated config files : /root/.condarc
   conda version : 4.8.3
conda-build version : not installed
python version : 3.6.10.final.0
       virtual packages : __glibc=2.28
       base environment : /opt/conda  (writable)
channel URLs : http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/linux-64       http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/noarch
      package cache : /opt/conda/pkgs
                          /root/.conda/pkgs
      envs directories : /opt/conda/envs
                          /root/.conda/envs
       platform : linux-64
user-agent : conda/4.8.3 requests/2.23.0 CPython/3.6.10 Linux/4.19.76-linuxkit debian/10 glibc/2.28
        UID:GID : 0:0
        netrc file : None
        offline mode : False

 

Example: rapids=0.13 is failing to be installed 

 

$ conda install -c rapidsai -c nvidia -c conda-forge  -c defaults rapids=0.13 python=3.6
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
 - rapids=0.13
Current channels:
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/rapidsai/linux-64
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/rapidsai/noarch
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/nvidia/linux-64
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/nvidia/noarch
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/conda-forge/linux-64
http://admin:{token}A@mill.jfrog.info:10000/artifactory/api/conda/conda/conda-forge/noarch
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/linux-64
http://admin:{token}@mill.jfrog.info:10000/artifactory/api/conda/conda/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to https://anaconda.org
and use the search bar at the top of the page

Debug mode:

 

DEBUG conda.core.subdir_data:_load(260): 304 NOT MODIFIED for 'http://mill.jfrog.info:12006/artifactory/api/conda/conda/linux-64/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(99): touching path /opt/conda/pkgs/cache/136d2f8c.json
DEBUG conda.core.subdir_data:_read_pickled(324): found pickle file /opt/conda/pkgs/cache/136d2f8c.q
DEBUG conda.resolve:__init__(110): restricting to unmanageable packages: __glibc
done
Solving environment: ...working... DEBUG conda.resolve:get_reduced_index(572): Retrieving packages for: 
  - python=3.6
  - rapids=0.13
failed with initial frozen solve. Retrying with flexible solve.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/conda/cli/install.py", line 265, in install
    should_retry_solve=(_should_retry_unfrozen or repodata_fn != repodata_fns[-1]),
  File "/opt/conda/lib/python3.6/site-packages/conda/core/solve.py", line 117, in solve_for_transaction should_retry_solve)
  File "/opt/conda/lib/python3.6/site-packages/conda/core/solve.py", line 158, in solve_for_diff
    force_remove, should_retry_solve)
  File "/opt/conda/lib/python3.6/site-packages/conda/core/solve.py", line 275, in solve_final_state
    ssc = self._add_specs(ssc)
  File "/opt/conda/lib/python3.6/site-packages/conda/core/solve.py", line 555, in _add_specs
    explicit_pool = ssc.r._get_package_pool(self.specs_to_add)
  File "/opt/conda/lib/python3.6/site-packages/conda/resolve.py", line 553, in _get_package_pool
    pool = self.get_reduced_index(specs)
  File "/opt/conda/lib/python3.6/site-packages/conda/common/io.py", line 88, in decorated
    return f(*args, **kwds)
  File "/opt/conda/lib/python3.6/site-packages/conda/resolve.py", line 574, in get_reduced_index    explicit_specs, features = self.verify_specs(explicit_specs)
  File "/opt/conda/lib/python3.6/site-packages/conda/resolve.py", line 288, in verify_specs
    raise ResolvePackageNotFound(bad_deps)
conda.exceptions.ResolvePackageNotFound: 
  - rapids=0.13

 

 

 

 






[RTFACT-21904] Integrate Mission Control import/export Created: 24/Apr/20  Updated: 27/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Import/export, Mission-Control
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Normal
Reporter: Remi Bourgarel Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The work for import/export has been done on Mission Control. It needs to be integrated into the mothership import/export of all services when MC is part of it (see the epic link). Example calls are sketched after the API details below.

Export :

  • API : 
    • URL /api/v1/system/backup/export
    • Method : PUT
    • No parameter
    • Synchronous
    • Code 204 if ok, 500 if error
  • File created : JFHOME/var/backup/mc/export.json 
    • Not valid json as it's encrypted with masterkey

Import

  • API
    • URL /api/v1/system/backup/import
    • Method : PUT
    • No parameter
    • Synchronous
    • Code 204 if ok, 400 if file in bad format, 500 if other error
  • File input is expected at JFHOME/var/backup/mc/import.json 
    • This should be in the same format as export (encrypted with masterkey)
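
A minimal sketch of the two calls described above (host and credentials are placeholders; both calls are synchronous and take no parameters):

# export - writes JFHOME/var/backup/mc/export.json, returns 204 on success, 500 on error
curl -u <user>:<password> -X PUT "http://<mission-control-host>/api/v1/system/backup/export"
# import - expects JFHOME/var/backup/mc/import.json, returns 204 on success, 400 on bad format
curl -u <user>:<password> -X PUT "http://<mission-control-host>/api/v1/system/backup/import"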

 






[RTFACT-21901] server.xml with duplicate address parameters Created: 23/Apr/20  Updated: 23/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifact Storage
Affects Version/s: 7.4.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Daniel Werdermann Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

After upgrading from 7.3.1 to 7.4.1, Artifactory does not start, with the following error.
 
Start error:
/opt/jfrog/artifactory/var/log/tomcat/tomcat-catalina-2020-04-23.log

23-Apr-2020 14:30:30.935 SEVERE [main] org.apache.tomcat.util.digester.Digester.fatalError Parse Fatal Error at line 25 column 112: Attribute "address" was already specified for element "Connector".        org.xml.sax.SAXParseException; systemId: file:/opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml; lineNumber: 25; columnNumber: 112; Attribute "address" was already specified for element "Connector".                at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)                at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:178)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1471)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanAttribute(XMLDocumentFragmentScannerImpl.java:1524)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1353)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2710)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:605)                at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:534)                at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:888)                at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:824)                at java.xml/com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)                at java.xml/com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1216)                at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:635)                at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1518)                at org.apache.catalina.startup.Catalina.load(Catalina.java:611)                at org.apache.catalina.startup.Catalina.load(Catalina.java:662)                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)                at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)                at java.base/java.lang.reflect.Method.invoke(Method.java:566)                at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:309)                at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:492) 

/opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml

<Connector port="8040" sendReasonPhrase="true" maxThreads="500" address="localhost" address="127.0.0.1" enableLookups="false" disableUploadTimeout="true"     minSpareThreads="20"/> 

 

When I delete the duplicate "address" parameter and try to start the service via

systemctl start artifactory 

it fails again, and the faulty line reappears in the file after a few minutes.






[RTFACT-21899] Nuget Virtual Repo Not Showing Next Link Created: 23/Apr/20  Updated: 23/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: NuGet
Affects Version/s: 7.4.1
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matt Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

We updated to Artifactory 7.4.1 recently and are seeing an issue with our virtual NuGet repos. We have a virtual repo set up which mirrors a couple of other repositories. We have noticed that since the update, when running FindPackagesById() on the repo and there are more than 80 versions of the package, it returns 80 packages and no "Next" link. In the local source repos the behaviour is correct and there is a Next link.

Steps to Reproduce:

  1.  - Create a local nuget repo
  2.  - Create a virtual repo which mirrors/aggregates it
  3.  - Add more than 80 versions of a package to the repo
  4.  - Use artifactory/api/nuget/{virtualRepoKey}/FindPackagesById()?id=%27Package.Name%27&semVerLevel=2.0.0 - and artifactory/api/nuget/{localRepoKey}/FindPackagesById()?id=%27Package.Name%27&semVerLevel=2.0.0
  5.  - Observe that when running against the virtual repo you get no tag like below but for the local repo you do
    <link rel="next" href="..."/>

     






[RTFACT-21898] Missing information on the artifactory-access.log Created: 23/Apr/20  Updated: 23/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Documentation Priority: Normal
Reporter: Eran Blumenthal Assignee: Elana Bakst Salomon
Resolution: Unresolved Votes: 0
Labels: None


 Description   

The "Logging" section in the JFrog documentation is missing the artifactory-access.log.

Although this page does exist:
https://www.jfrog.com/confluence/display/JFROG/Access+Log






[RTFACT-21897] Unable to download expired resource from the UI Created: 06/Mar/20  Updated: 04/May/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matan Katz Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Attachments: PNG File Screen Shot 2020-03-06 at 15.41.57.png    

 Description   

Unable to download an expired resource in a remote repository from the UI.

The cause: the artifact path contains the remote-cache repository and not the remote repository itself (in the attached screenshot, npmjs is the remote repository, while the repository it is downloading from is npmjs-cache).

When an artifact is expired (mostly metadata files, which require checking with the remote again to see if the file was changed), it cannot be downloaded from the cache and needs to be downloaded from the remote.

In conclusion (using an example):

When navigating in the tree to an artifact in npmjs (the remote repository), the download should be performed using the npmjs repo key.

Only when navigating to an artifact in npmjs-cache (the local cache repository) should the download be performed using the cache repo key.

 






[RTFACT-21891] Couldn't save resource reason: org.artifactory.concurrent.LockingException: Lock on LockEntryId Created: 22/Apr/20  Updated: 22/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Artifact Storage, Database, Debian, HA
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Idan Marciano Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

7.4.0

7.x.infra.RTFACT.18693-38 (masterless ha (7.5))

 


Attachments: Zip Archive PIT debian fail on 18693-38 run 1.zip     Zip Archive PIT debian fail on 18693-38 run 2.zip    

 Description   

The problem occurred when I tested the primaryless feature with the Package Indexer Tester on the Debian PM.

The test that failed on PIT is:

"DebianSpec.16. Search and install kostya in virtual repo debian3"

The failure in the artifactory service log:

2020-04-21T06:58:29.435Z [jfrt ] [ERROR] [58cfc3afc0b07f52] [o.a.r.d.DbStoringRepoMixin:290] [art-exec-38 ] - Couldn't save resource debian3:dists/kostya/InRelease, reason: org.artifactory.concurrent.LockingException: Lock on LockEntryId debian3:dists/kostya/InRelease not acquired in 120 seconds. Lock info: org.artifactory.storage.db.locks.provider.DbMapLockWrapper@1a36320d. at org.artifactory.storage.fs.lock.SessionLockEntry.acquire(SessionLockEntry.java:117)

artifactory request log:

2020-04-21T06:58:29.704Z|a85c41da155ff0c5|172.17.0.1|admin|POST|/api/deb/reindex/debian3|500|0|0|120764|artifactory-client-java/2.7.0

This log is taken from the failed test's log:

06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> POST /artifactory/api/deb/reindex/debian3?async=0 HTTP/1.1
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Content-type: */*
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Content-Length: 0
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Host: localhost:32795
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Connection: Keep-Alive
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> User-Agent: artifactory-client-java/2.7.0
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Accept-Encoding: gzip,deflate
06:56:28.939 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 >> Authorization: Basic YWRtaW46cGFzc3dvcmQ=
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << HTTP/1.1 500 Internal Server Error
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << Date: Tue, 21 Apr 2020 06:58:29 GMT
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << Content-Type: application/json
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << Transfer-Encoding: chunked
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << Connection: keep-alive
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << Server: Artifactory/7.x.infra.RTFACT.18693 2147483647
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << X-Artifactory-Id: 9bcb34876e630768da947df53ef23b65dcd1596e
06:58:29.704 [Test worker] DEBUG org.apache.http.headers - http-outgoing-29 << X-Artifactory-Node-Id: art1

The full logs are included in this ticket (2 runs, same failure, 2 log batches).

This failure is flaky.






[RTFACT-21889] NPE when running npm search and one of the results has maintainers field as string, and not JSON object Created: 22/Apr/20  Updated: 30/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: 6.17.0, 7.4.0
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Aviv Blonder Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

If the "maintainers" field is a string and not an object, then Artifactory fails to parse it and throw NPE when running 'npm search'. The search command fails with a timeout.

 

Usually, the "maintainers" field is a JSON object, for example:

"maintainers": [{"name""test","email""my@test.com"}],

 

Steps to reproduce:

  1. Create an npm package called artifactorytestpkg, with this package.json:
{
  "name": "artifactorytestpkg",
  "version": "1.0.0",
  "main": "index.js",
  "maintainers": [
    "cool"
  ],
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": ""
} 

2. Publish it to Artifactory

3. Run:

npm search artifactorytestpkg 

4. Output:

npm search artifactorytestpkg --verbose
npm info it worked if it ends with ok
npm verb cli [
npm verb cli   '/usr/local/Cellar/node/13.7.0/bin/node',
npm verb cli   '/usr/local/bin/npm',
npm verb cli   'search',
npm verb cli   'artifactorytestpkg',
npm verb cli   '--verbose'
npm verb cli ]
npm info using npm@6.13.6
npm info using node@v13.7.0
npm verb npm-session 097c7f9b8f793000
npm http fetch GET 500 http://localhost:8081/artifactory/api/npm/npm/-/v1/search?text=artifactorytestpkg&size=20&from=0&quality=0.65&popularity=0.98&maintenance=0.5 70127ms attempt #3
npm WARN search fast search endpoint errored. Using old search.
npm verb all-package-metadata creating entry stream from local cache
npm verb all-package-metadata /Users/avivb/.npm/localhost_8081/-/all/.cache.json
npm verb all-package-metadata creating remote entry stream
npm verb all-package-metadata Cached data present with timestamp: 99999 requesting partial index update
npm http fetch GET 500 http://localhost:8081/artifactory/api/npm/npm/-/all/since?stale=update_after&startkey=99999 70062ms attempt #3
npm WARN Search data request failed, search might be stale
No matches found for "artifactorytestpkg"
npm verb exit [ 0, true ]
npm timing npm Completed in 140704ms
npm info ok 

 

Stacktrace in artifactory-service.log:

2020-04-22T11:45:03.612Z [jfrt ] [ERROR] [1671635cd059544f] [c.e.m.GlobalExceptionMapper:48] [27.0.0.1-8091-exec-2] - Index 0 out of bounds for length 02020-04-22T11:45:03.612Z [jfrt ] [ERROR] [1671635cd059544f] [c.e.m.GlobalExceptionMapper:48] [27.0.0.1-8091-exec-2] - Index 0 out of bounds for length 0java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0 at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64) at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70) at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248) at java.base/java.util.Objects.checkIndex(Objects.java:372) at java.base/java.util.ArrayList.get(ArrayList.java:458) at org.jfrog.repomd.npm.search.SearchResultsHelper.maintainerOf(SearchResultsHelper.java:294) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.jfrog.repomd.npm.search.SearchResultsHelper.normalizeMaintainers(SearchResultsHelper.java:329) at org.jfrog.repomd.npm.search.SearchResultsHelper.access$200(SearchResultsHelper.java:31) at org.jfrog.repomd.npm.search.SearchResultsHelper$NewSearch.fromNpmMetadata(SearchResultsHelper.java:463) at org.jfrog.repomd.npm.search.SearchResultsHelper.lambda$npmLocalTextSearchAsNewSearchFormat$13(SearchResultsHelper.java:238) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.base/java.util.stream.SliceOps$1$1.accept(SliceOps.java:199) at java.base/java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1631) at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:127) at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:502) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:488) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at 
java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1746)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.jfrog.repomd.npm.search.SearchResultsHelper.latestVersionsOnce(SearchResultsHelper.java:210)
at org.jfrog.repomd.npm.rest.handler.NpmLocalRepoHandler.search(NpmLocalRepoHandler.java:190)
at org.artifactory.addon.npm.repo.NpmVirtualRepoHandler.lambda$getLocalResults$3(NpmVirtualRepoHandler.java:165)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.jfrog.repomd.npm.search.SearchResultsHelper.latestVersionsOnce(SearchResultsHelper.java:210)
at org.artifactory.addon.npm.repo.NpmVirtualRepoHandler.search(NpmVirtualRepoHandler.java:152)
at org.jfrog.repomd.npm.rest.NpmSubResource.searchRemote(NpmSubResource.java:191)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RepoFilter.execute(RepoFilter.java:195)
at org.artifactory.webapp.servlet.RepoFilter.doFilter(RepoFilter.java:97)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.AccessFilter.useAuthentication(AccessFilter.java:519)
at org.artifactory.webapp.servlet.AccessFilter.authenticateAndExecute(AccessFilter.java:385)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:249)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:193)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:75)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:62)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:27)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:543)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:305)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:571)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:609)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:810)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1623)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)





[RTFACT-21887] Download is forbidden for user with token authorisation Created: 22/Apr/20  Updated: 22/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Access Tokens
Affects Version/s: 7.3.2
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Roman Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Linux artifactory 5.3.0-46-generic #38~18.04.1-Ubuntu SMP Tue Mar 31 04:17:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux



 Description   

Reproduce steps:

1) Create a token for an Artifactory local user, following the article https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-create-tokenCreateToken

I used the command:

curl -urmantsurov:<pass> -XPOST "https://artifactory/artifactory/api/security/token" -d "username=coma-ansible-winagents" -d "scope=api:*"

2) For the Artifactory local user, add read permission for a generic repository with binary files

3) Try to download a binary file from the generic repo as the local user, using the token:

curl -H "Authorization: Bearer eyJ2ZXIiOiIyIiwidHlwIjoiSldUIiwiYWxnIjoiUlMyNTYiLCJraWQiOiIxN1N5cVBRcnFhT2ZtVHVsdC1ySVg3aloxUFB1QTVyRmZmdUFpQU9UaM4In0.eyJzdWIiOiJqZnJ0QDAxZTNtcWVxZWd3ZjkyMHQ2Njc5ODAwYnM1XC91c2Vyc1wvcm1hbnRzdXJvdiIsInNjcCI6ImFwaToqIGFwcGxpZWQtcGVybWlzc2lvbnNcL2dyb3VwczpcIml3dGQgY29tYSxpd3RkIGxpbnV4IGRldmVsb3BlcnNcIiIsImF1ZCI6ImpmcnRAMDFlM21xZXFlZ3dmOTIwdDY2Nzk4MDBiczUiLCJpc3MiOiJqZnJ0QDAxZTNtcWVxZWd3ZjkyMHQ2Njc5ODAwYnM1XC91c2Vyc1wvcm1hbnRzdXJvdiIsImV4cCI6MTU4NzQ2MTUyOSwiaWF0IjoxNTg3NDU3OTI5LCJqdGkiOiJlODVmMWYzNC1iMzlhLTRlMjktYTY2YS1kMWFmMGQyMzQ0NzgifQ.veqdmllR7kDJipFT2hKB9--4joSIZgsmun6dxVfbOXiCzN5-Ky6dP1Mtvta7y1jv7yCkPCBSkFUX6YYfUXRo3dLKvVZKWuNT0STpSkNsxqyDMk8hYTpOXrl-p7SCMIVZ2O54OYVyHDHohtLGoaBS_ESNxCFcaqAmHapRf5IftoJdyfEbADlw-fzDZ61qpUXY2DGEBpT8_pVcPE-yECr2RMi31Cljmc4Z_EgdVPJXkyRIVW6wqBJPU46mhcOlERnk-2-sobBfoO9NBFomGcxOpgtwcoMExt4vyxQRjeSnmIGTCeKhDvATMNmkBGR3or0am1zj25LAKjOLQewmJkCb0A" "https://artifactory/artifactory/test/common/7z1900-x64.exe"
{  "errors" : [ {    
"status" : 403,   
 "message" : "Download request for repo:path 'test:common/7z1900-x64.exe' is forbidden for user 'token:coma-ansible-winagents'."  } ]

If I instead generate a token for a local group with read permission on the same repo, I can download the same file through curl successfully. Please help to fix this problem.
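For comparison, the working group-token flow can be sketched roughly as follows (a minimal sketch only; the group name "binary-readers" is a placeholder, and the "member-of-groups" scope is the one documented for the Create Token REST API):

# Create a token whose permissions come from a group that has read access to the repo
curl -urmantsurov:<pass> -XPOST "https://artifactory/artifactory/api/security/token" -d "username=coma-ansible-winagents" -d "scope=member-of-groups:binary-readers"

# Use the returned access_token to download the same file
curl -H "Authorization: Bearer <access_token>" "https://artifactory/artifactory/test/common/7z1900-x64.exe"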






[RTFACT-21884] URL signing for Artifactory Created: 21/Apr/20  Updated: 21/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Shubha Gururaja Rao Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None


 Description   

Customer EA is asking if we are planning to support this in the on-prem version:

URL signing as a part of the Artifactory product instead of just pointing to Bintray.

 






[RTFACT-21883] authentication prompt appears when navigating from /ui to /artifactory context Created: 21/Apr/20  Updated: 27/Apr/20

Status: Will Not Implement
Project: Artifactory Binary Repository
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Normal
Reporter: Matthew Wang Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Environment:

Artifactory 7.x



 Description   

Steps to reproduce:
1. Log into the Artifactory UI
2. Navigate to an artifactory-context-specific URL, such as https://<URL>/artifactory/api/npm/npm-remote/npm
3. A credentials prompt appears at the top of the window

There should not be any authentication prompt when navigating from /ui to /artifactory.
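One way to check what is driving the prompt (an assumption, not verified in this environment) is to confirm whether the endpoint answers an unauthenticated request with a 401 and a WWW-Authenticate: Basic challenge, which browsers render as a credentials dialog:

# Inspect the status line and auth challenge returned to an anonymous request (placeholder URL)
curl -sI "https://<URL>/artifactory/api/npm/npm-remote/npm" | grep -i -e '^HTTP' -e 'www-authenticate'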






[RTFACT-21882] Create a "watch" the whole watch --> scan a subset of something Created: 21/Apr/20  Updated: 22/Apr/20

Status: Open
Project: Artifactory Binary Repository
Component/s: Xray
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Normal
Reporter: Harsh Mota Assignee: Unassigned
Resolution: Unresolved Votes: 1
Labels: None


 Description   

Currently, the way a watch with a property filter works is: if a certain property is present in a resource for which the watch was defined, it will generate a violation. However, we want a way to trigger the scan of a resource only if that resource has an associated watch with a certain property.

Basically, we would like to "watch" the whole watch, similar to the earlier feature request to index a subset of an index, which is currently not possible.

Here we want to apply a policy for a scan if a resource has a specified property, rather than generate a violation whenever that resource has that property.

 



 Comments   
Comment by Dustin Frank [ 22/Apr/20 ]

This is correct, the use case is to enact a policy on a resource property match.

Comment by Dustin Frank [ 22/Apr/20 ]

For clarification: if, for example, an artifact has a property called "public", we would want to check it with a specific policy that has notification-heavy rules.
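For reference, the kind of property described here could be attached to an artifact with Artifactory's Set Item Properties REST call (a sketch; the repository key and artifact path are placeholders):

# Tag an artifact with the "public" property that the watch/policy would match on
curl -u<user>:<pass> -XPUT "https://artifactory/artifactory/api/storage/docker-local/my-app/1.0.0?properties=public=true"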