Configuring Federated Catalog Crawling in MVD #426
-
The Dataspace Protocol (DSP) defines a catalog endpoint, which is the only place where catalogs can be obtained.
The Federated Catalog authenticates against other catalog servers using whatever authentication mechanism is configured for DSP, for example the Decentralized Claims Protocol (DCP).
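Concretely, a catalog request under DSP is an HTTP POST to the counterparty's DSP endpoint (typically a path like `<dsp-base>/catalog/request`), with a body roughly of the following shape. The context URL and version below follow the DSP specification but should be treated as illustrative; check the spec version your connectors implement:

```json
{
  "@context": "https://w3id.org/dspace/v0.8/context.json",
  "@type": "dspace:CatalogRequestMessage"
}
```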
-
Thank you for your response. I am trying to add a TargetNode to the Federated Catalog that I am setting up with the Minimum Viable Dataspace (MVD). To do this, I used a Federated Catalog (FC) example from the Eclipse EDC Samples repository, and I used fixed-node-resolver to implement the TargetNode.getAll() method. However, I am facing an issue regarding the structure of the TargetNode in a DID environment like MVD. I attempted to use the following TargetNode:
In this case, http://localhost:8092 is the address of the provider's catalog in the MVD setup, which I want to add as a node to my Federated Catalog (FC). Unfortunately, this configuration did not work as expected: I received a 400 Bad Request error from the Federated Catalog, and on the provider catalog side I encountered the error 'The given ID must conform to did:method:identifier but did not.' Even after changing the ID to a DID structure, it still didn't work. Could you provide any insights into the correct structure for defining a TargetNode in this context (DID)? Am I missing any required configurations or dependencies? Any help would be greatly appreciated!
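For illustration, a TargetNode in the EDC Federated Catalog carries a name, an id, a target URL, and the supported protocols. In a DID-based setup such as MVD, its serialized form might look roughly like this, where the id is the provider participant's DID and the targetUrl is the provider's DSP endpoint. The DID and URL below are illustrative placeholders, not confirmed values from this MVD deployment:

```json
{
  "name": "provider",
  "id": "did:web:provider-identityhub%3A7083:provider",
  "targetUrl": "http://localhost:8092/api/dsp",
  "supportedProtocols": ["dataspace-protocol-http"]
}
```

The 'must conform to did:method:identifier' error suggests the id field is being validated as a DID, so a plain URL or arbitrary string there would be rejected.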
-
Dear All,
I observed that in this Minimum Viable Dataspace (MVD), there is a provider catalog (not a federated catalog that aggregates assets from other catalogs), and the assets have been injected using seeding.
I plan to use the Eclipse EDC Federated Catalog and seek guidance on how to declare other catalogs in the federated catalog so they can be crawled. In Samples/federated-catalog, the target nodes are added statically, either via a node-resolver extension or via files that list participants rather than catalog servers. Therefore, I'm asking:
Which API should I use to configure and register other catalogs for crawling while the Federated Catalog is already running? In a production setup, adding a new catalog or node wouldn't be done by adding static nodes, rebuilding, and rerunning.
How is the Federated Catalog authenticated for other servers' catalogs?
Any insights would be greatly appreciated.
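One way to approach the runtime-registration question above is a custom node directory that can be mutated while the catalog runs. The sketch below is a simplified, self-contained stand-in, not the actual EDC API: the record mirrors the fields of `org.eclipse.edc.crawler.spi.TargetNode`, but the class, the `register` method, and the DID validation are hypothetical, and in a real extension you would expose this behind your own management endpoint.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory registry illustrating runtime registration of crawl
// targets. Names mirror EDC's TargetNode SPI but this is an illustrative
// stand-in, not the EDC implementation.
public class TargetNodeRegistry {

    // Simplified mirror of org.eclipse.edc.crawler.spi.TargetNode's fields.
    record TargetNode(String name, String id, String targetUrl, List<String> supportedProtocols) {
    }

    private final Map<String, TargetNode> nodes = new ConcurrentHashMap<>();

    // Reject non-DID ids up front, matching the error reported in this thread.
    public void register(TargetNode node) {
        if (!node.id().startsWith("did:")) {
            throw new IllegalArgumentException("The given ID must conform to did:method:identifier");
        }
        nodes.put(node.id(), node);
    }

    // Equivalent of TargetNodeDirectory.getAll(): the crawler reads this list.
    public List<TargetNode> getAll() {
        return List.copyOf(nodes.values());
    }

    public static void main(String[] args) {
        var registry = new TargetNodeRegistry();
        // Example values only; adjust the DID and DSP endpoint to your setup.
        registry.register(new TargetNode(
                "provider",
                "did:web:provider-identityhub%3A7083:provider",
                "http://localhost:8092/api/dsp",
                List.of("dataspace-protocol-http")));
        System.out.println(registry.getAll().size()); // prints 1
    }
}
```

A real extension would implement EDC's `TargetNodeDirectory` SPI and register it in the runtime, so that the crawler's periodic execution picks up newly added nodes without a rebuild.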