# OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

---

## Implement resumable file transfer (upload and download)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/32 (Sacha Brants, 2024-01-11)

Given the size of data in Seismic DMS, users want to be able to resume a file transfer (upload/download).
It should make sure that there are no integrity issues.
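The resume-plus-integrity idea can be sketched locally (a hypothetical illustration in Python; sdutil itself transfers to and from cloud object storage, where the resume offset would come from a Range/offset query rather than a local file size):

```python
import hashlib
import os

def resume_copy(src: str, dst: str, chunk_size: int = 1 << 20) -> str:
    """Resume copying src to dst from dst's current size; return src's SHA-256
    so the caller can verify integrity of the completed transfer."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)  # skip the bytes already transferred
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
    # Integrity check: hash the full source for comparison against dst.
    h = hashlib.sha256()
    with open(src, "rb") as fin:
        for chunk in iter(lambda: fin.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The same shape works against a remote object: persist the byte offset and a running checksum, and re-verify the digest after the final chunk lands.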
---

## Add step of "show content schema" for each analysis type (e.g. NMR, Rel-Perm, ...)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/325 (Debasis Chatterjee, 2024-01-11)

![image](/uploads/04bf6698a7e1cf7f6c049f3baa156e29/image.png)
Currently, the sequence is -
1. Create catalog record
2. Add content data
3. Get content data
Suggest adding one more step, "show content schema", prior to "add content data":
1. Create catalog record
2. Show content schema
3. Add content data
4. Get content data
cc @Siarhei_Khaletski, @michael_jones_epam

---

## Add RCA (advanced) request in Postman Collection to show "add (content) data" step for multiple plugs in one go
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/324 (Debasis Chatterjee, 2024-01-11)

This is something we discussed in the call today.
See Page 453 of KKS Core Analysis Report.
![RCA-add-data-for-many-plugs](/uploads/b097d733c6fc998af339f692280ef064/RCA-add-data-for-many-plugs.PNG)
It will suffice if you showcase the step for 3 plugs from 3 consecutive depth points.
It is well understood that we'll need to create a Sample record for each plug.
The only thing I am looking for is "add content data" and "read content data" as single steps each.
Let me know if this needs more clarification.
Thank you
cc @bdawson, @michael_jones_epam, @Siarhei_Khaletski, @bev005

---

## Discrepancy in Storage API for create/update record operation
https://community.opengroup.org/osdu/platform/system/storage/-/issues/213 (Neha Khandelwal, 2024-01-31)

For the Storage create/update record API, if a record ID ends in a dot (.), the data block for the record is not properly uploaded to the Microsoft storage account. When records in a create/update-multiple-records operation have IDs that differ only by a trailing dot (e.g. M and M.), the data block will be the same for both records. The issue is that Microsoft storage accounts do not support directory names ending with a dot (.), a forward slash (/), or a backslash (\\), nor path segments ending with a dot ([Azure naming rules](https://learn.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata)). When uploading the block blob with the record data to the storage container, the BlobStore.class tries to use a path with the record ID as a folder, such as
`<kind>/<partition>:reference-data--ExternalUnitOfMeasure:LIS-LAS::M./1704916580557751`
but `<partition>:reference-data--ExternalUnitOfMeasure:LIS-LAS::M.` is not a valid directory name, so the dot at the end is ignored and the block blob is uploaded to `<partition>:reference-data--ExternalUnitOfMeasure:LIS-LAS::M` instead. This was also confirmed manually by trying to upload a blob to a folder with a name ending in a dot.
It is a corner case, but this issue has impacted RDD values for M and M. on all partitions:
* For example, on "prod-weu-des-prod-testing-eu", the records prod-weu-des-prod-testing-eu:reference-data--ExternalUnitOfMeasure:LIS-LAS::M. and prod-weu-des-prod-testing-eu:reference-data--ExternalUnitOfMeasure:LIS-LAS::M have the same "M." values in the ID, Code, and Symbol fields. These were created using the RDD script/pipeline.
* The impact is on all partitions for the `<partition>:reference-data--ExternalUnitOfMeasure:LIS-LAS::M.` and `<partition>:reference-data--ExternalUnitOfMeasure:LIS-LAS::M` values.
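A guard against such IDs could be applied at validation time. The sketch below is hypothetical and in Python (the Storage service itself is Java/Spring); it simply rejects record IDs ending in the characters Azure disallows at the end of a path segment:

```python
import re

# Characters that Azure Blob Storage disallows at the end of a path segment:
# dot, forward slash, backslash (per the Azure naming rules cited above).
_INVALID_TAIL = re.compile(r"[./\\]$")

def validate_record_id(record_id: str) -> None:
    """Raise ValueError (which the service would map to HTTP 400 Bad Request)
    for record IDs that would produce an invalid blob directory name."""
    if _INVALID_TAIL.search(record_id):
        raise ValueError(
            f"Record ID '{record_id}' ends in an unsupported character (., /, \\)"
        )
```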
The proposed solution is to reject record IDs that end in these unsupported characters (i.e. return 400 Bad Request when such record IDs are used).

---

## GeoJson validation
https://community.opengroup.org/osdu/platform/system/storage/-/issues/212 (Adam Cheng, 2024-03-15)

This is a linked issue between the Storage API and Search API.
When I ingest a new object with invalid GeoJSON (e.g. a polygon that is not closed), it passes the Storage API, which mainly checks types, but it silently fails indexing and never shows up in the Search API.
A related issue: currently it takes up to 30 seconds before a newly ingested object shows up in the Search API, which makes things challenging for a near-real-time application.
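The "polygon is not closed" case can be caught before ingestion with a ring-closure pre-check (a sketch; RFC 7946 requires the first and last coordinates of every linear ring to be identical):

```python
def closed_rings(geometry: dict) -> bool:
    """True if every linear ring in a GeoJSON Polygon/MultiPolygon is closed
    (first coordinate equals last, as RFC 7946 requires)."""
    gtype = geometry.get("type")
    if gtype == "Polygon":
        polygons = [geometry["coordinates"]]
    elif gtype == "MultiPolygon":
        polygons = geometry["coordinates"]
    else:
        return True  # only ring-based geometries need this check
    return all(ring[0] == ring[-1] for poly in polygons for ring in poly)
```

A check like this on the Storage side (or in the ingestion client) would turn the silent indexing failure into an immediate validation error.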
Possible solution:
An additional query param on the PUT `/records` endpoint. If the param is set, the operation will only succeed once indexing has finished.
It would be ideal for ingestion and indexing/discovery operations to be atomic.

---

## Different behavior on delete endpoint
https://community.opengroup.org/osdu/platform/system/storage/-/issues/211 (Adam Cheng, 2024-03-15)

There are currently two endpoints for deleting an object:
1. Deleting multiple objects: /records/delete
2. Deleting single object /records/<Object_id>:delete
When attempting to delete an object that has already been deleted:
Endpoint 1 will return 204, while endpoint 2 will return 404.
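The divergence can be stated as a sketch (hypothetical handlers in Python; the real service is Java):

```python
def delete_single(store: set, record_id: str) -> int:
    """Endpoint 2 semantics: 404 when the record is already gone."""
    if record_id not in store:
        return 404
    store.discard(record_id)
    return 204

def delete_bulk(store: set, record_ids: list) -> int:
    """Endpoint 1 semantics today: 204 even when nothing was deleted."""
    for rid in record_ids:
        store.discard(rid)
    return 204
```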
It would be desirable if endpoint 1 returned some sort of error when one or more of the requested objects has already been deleted or does not exist.

---

## Update deployments to support PodDisruptionBudget & TopologySpreadConstraints for airflow
https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/issues/32 (Ritesh Koul, 2024-01-10)

Need to update the airflow-8.5.2.tgz file to support **PodDisruptionBudget** & **TopologySpreadConstraints** features.
More details regarding implementation can be found in the comments of an existing MR:
**PodDisruptionBudget** - https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/merge_requests/749#note_274646
**TopologySpreadConstraints** - https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/merge_requests/749#note_275017

---

## Removal of CSPs Modules and Main Class Reassignment to the Core
https://community.opengroup.org/osdu/platform/system/reference/unit-service/-/issues/57 (Rustam Lotsmanenko (EPAM), 2024-01-10)

# ADR: Remove provider modules from the Unit service.
Simplify the development and maintenance of the Unit service by removing CSP modules.
## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The Unit service operates independently of cloud-specific technologies, conducting all calculations at runtime and storing necessary data within bundled service files. However, within the OSDU Community, four distinct artifacts are generated, one per CSP, each tested, maintained, and patched for vulnerabilities separately.
## Decision
- Delete provider modules.
- Move the main class to the Unit Core.
- Merge and move Spring Security Configurations to the Unit Core. These configurations are used for service request handling and are independent of cloud technologies. Despite minimal differences, these configurations are dispersed across CSPs, leading to inconsistency in handling and increasing the risk of service misconfiguration.
- Merge and move properties files to the Unit Core.
- Determine the necessity of incorporating CSP libraries as pluggable utilities. These libraries could serve as background utilities for tasks such as log formatting and trace capture. If utilized within the Unit service, establish a method to integrate them independently. This approach could subsequently be adopted for the Community Implementation.
## Rationale
The existing setup of the Unit service multiplies the effort needed for maintenance and release processes without visible benefits. This service contains minimal cloud-specific code, primarily limited to occasional utilities from libraries. By excluding CSP modules, the OSDU Community can offer sustainable, thoroughly tested artifacts for the Unit service, significantly reducing the necessary effort.
## Consequences
* Deletion of provider modules.
* Minor CI/CD refactoring to transition to a single artifact (JAR file) from four different artifacts.
* (Optional, pending agreement) Implement a solution for abstracting utility libraries used by CSPs, which could be beneficial in the future.
## Tradeoff Analysis
Beyond defining an abstraction mechanism for CSP utility libraries, the proposal aims to decrease the effort needed for support, releases, and vulnerability management. If an abstraction for the libraries is needed, it should not introduce more complexity, as that would contradict the main goal of this ADR.
## Alternatives and implications
- Introducing the Core Plus module during the Community Implementation phase. Similar to existing CSP modules, it would be a shallow module. However, introducing custom utilities might pose complexities. On the other hand, it won't be hard to create the same shallow modules elsewhere, but Community OSDU is moving towards maintaining only a single version of the platform.
- Alternatively, we could include the main class, properties, and security configs in the Core, making those components optional without disrupting existing CSP providers.

---

## Removal of CSPs Modules and Main Class Reassignment to the Core
https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/91 (Rustam Lotsmanenko (EPAM), 2024-03-25)

# ADR: Remove provider modules from the CRS Conversion.
Simplify the development and maintenance of the CRS Conversion service by removing CSP modules.
## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The CRS-Conversion service operates independently of cloud-specific technologies, conducting all calculations at runtime and storing necessary data within bundled service files. However, within the OSDU Community, four distinct artifacts are generated, each designated per CSP, tested, maintained, and patched for vulnerabilities separately.
## Decision
- Delete provider modules.
- Move the main class to the CRS Conversion Core.
- Merge and move Spring Security Configurations to the CRS Conversion Core. These configurations are used for service request handling and are independent of cloud technologies. Despite minimal differences, these configurations are dispersed across CSPs, leading to inconsistency in handling and increasing the risk of service misconfiguration.
- Merge and move properties files to the CRS Conversion Core.
- Determine the necessity of incorporating CSP libraries as pluggable utilities. These libraries could serve as background utilities for tasks such as log formatting and trace capture. If utilized within the CRS Conversion service, establish a method to integrate them independently. This approach could subsequently be adopted for the Community Implementation.
## Rationale
The existing setup of the CRS Conversion multiplies the effort needed for maintenance and release processes without visible benefits. This service contains minimal cloud-specific code, primarily limited to occasional utilities from libraries. By excluding CSP modules, the OSDU Community can offer sustainable, thoroughly tested artifacts for the CRS Conversion service, significantly reducing the necessary effort.
## Consequences
* Deletion of provider modules.
* Minor CI/CD refactoring to transition to a single artifact (JAR file) from four different artifacts.
* (Optional, pending agreement) Implement a solution for abstracting utility libraries used by CSPs, which could be beneficial in the future.
## Tradeoff Analysis
Beyond defining an abstraction mechanism for CSP utility libraries, the proposal aims to decrease the effort needed for support, releases, and vulnerability management. If an abstraction for the libraries is needed, it should not introduce more complexity, as that would contradict the main goal of this ADR.
## Alternatives and implications
- Introducing the Core Plus module during the Community Implementation phase. Similar to existing CSP modules, it would be a shallow module. However, introducing custom utilities might pose complexities. On the other hand, it won't be hard to create the same shallow modules elsewhere, but Community OSDU is moving towards maintaining only a single version of the platform.
- Alternatively, we could include the main class, properties, and security configs in the Core, making those components optional without disrupting existing CSP providers.

---

## Removal of CSPs Modules and Main Class Reassignment to the Core
https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/issues/79 (Rustam Lotsmanenko (EPAM), 2024-03-25)

# ADR: Remove provider modules from the CRS Catalog.
Simplify the development and maintenance of the CRS Catalog service by removing CSP modules.
## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The CRS-Catalog service operates independently of cloud-specific technologies, conducting all calculations at runtime and storing necessary data within bundled service files. However, within the OSDU Community, four distinct artifacts are generated, each designated per CSP, tested, maintained, and patched for vulnerabilities separately.
## Decision
- Delete provider modules.
- Move the main class to the CRS Catalog Core. ([Azure](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-azure/crs-catalog-aks/src/main/java/org/opengroup/osdu/crs/CrsAksApplication.java), [AWS](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-aws/src/main/java/org/opengroup/osdu/crs/CrsApplicationAWS.java), [GC](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-gc/crs-catalog-gke/src/main/java/org/opengroup/osdu/crs/CRSGKEApplication.java), [IBM](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-ibm/crs-catalog-ocp/src/main/java/org/opengroup/osdu/crs/CrsOcpApplication.java))
- Merge and move Spring Security Configurations to the CRS Catalog Core. These configurations are used for service request handling and are independent of cloud technologies. Despite minimal differences, these configurations are dispersed across CSPs, leading to inconsistency in handling and increasing the risk of service misconfiguration. ([Azure](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-azure/crs-catalog-aks/src/main/java/org/opengroup/osdu/crs/security/SecurityConfig.java),[AWS](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-aws/src/main/java/org/opengroup/osdu/crs/security/AuthSecurityConfig.java),[GC](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-gc/crs-catalog-gke/src/main/java/org/opengroup/osdu/crs/security/AuthSecurityConfig.java),[IBM](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/provider/crs-catalog-ibm/crs-catalog-ocp/src/main/java/org/opengroup/osdu/crs/security/SecurityConfig.java))
- Merge and move properties files to the CRS Catalog Core.
- Determine the necessity of incorporating CSP libraries as pluggable utilities. These libraries could serve as background utilities for tasks such as log formatting and trace capture. If utilized within the CRS Catalog, establish a method to independently integrate them. This approach could subsequently be adopted for the Community Implementation.
## Rationale
The existing setup of the CRS Catalog multiplies the effort needed for maintenance and release processes without visible benefits. This service contains minimal cloud-specific code, primarily limited to occasional utilities from libraries. By excluding CSP modules, the OSDU Community can offer sustainable, thoroughly tested artifacts for the CRS Catalog, significantly reducing the necessary effort.
## Consequences
* Deletion of provider modules.
* Minor CI/CD refactoring to transition to a single artifact (JAR file) from four different artifacts.
* (Optional, pending agreement) Implement a solution for abstracting utility libraries used by CSPs, which could be beneficial in the future.
## Tradeoff Analysis
Beyond defining an abstraction mechanism for CSP utility libraries, the proposal aims to decrease the effort needed for support, releases, and vulnerability management. If an abstraction for the libraries is needed, it should not introduce more complexity, as that would contradict the main goal of this ADR.
## Alternatives and implications
- Introducing the Core Plus module during the Community Implementation phase. Similar to existing CSP modules, it would be a shallow module. However, introducing custom utilities might pose complexities. On the other hand, it won't be hard to create the same shallow modules elsewhere, but Community OSDU is moving towards maintaining only a single version of the platform.
- Alternatively, we could include the main class, properties, and security configs in the Core, making those components optional without disrupting existing CSP providers.

---

## Static Dataset API doc is outdated
https://community.opengroup.org/osdu/platform/system/dataset/-/issues/78 (Chad Leong, 2024-01-08)

Fix the static API doc as part of https://community.opengroup.org/groups/osdu/platform/-/epics/19

---

## String array becomes String after index
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/650 (Chad Leong, 2024-01-08)

The String array becomes a String after it is indexed. The bug appears to have been introduced by [MR 649](https://community.opengroup.org/osdu/platform/system/indexer-service/-/merge_requests/649).
See the issue created under Indexer: https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/137

---

## ADR: Ability to get all the records of a given Persisted Collection from search
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/152 (Juilee Paluskar, 2024-01-11)

## Status
* [x] Proposed
* [ ] Trialing
* [ ] Under review
* [ ] Approved
* [ ] Retired
## Context & Scope
A persisted collection can aggregate objects of different natures, including master data, work-product-components, and reference data. It can contain a collection of records of heterogeneous kinds. At any given point, the MemberIDs field of a PersistedCollection maintains the list of objects which are part of the collection.
More can be read from this [schema](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Generated/work-product-component/PersistedCollection.1.2.0.json) .
Problem:
Today, there is no way to get all the records which belong to a particular Persisted Collection; the user has to perform at least two search queries to get the records of a Persisted Collection.
1st query - get the Persisted Collection record and retrieve the record IDs from the MemberIDs field.
2nd query - get the actual records from the record IDs retrieved in the 1st query.
For the 2nd query, to get multiple records in one search query, the user has to form a query with the **OR** operator, e.g.
{
  "Query": "recordId-1 **OR** recordId-2 **OR** recordId-3 .. recordId-1000"
}
Elasticsearch limits the maximum number of **OR** conditions allowed in one query.
So if a PersistedCollection contains more than 1000 records, the user has to invoke multiple search queries to get all the records.
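Until a better mechanism exists, a client has to batch the MemberIDs itself. A sketch (the `id:"…"` query syntax and the batch size of 1000 are assumptions for illustration):

```python
def batched_id_queries(member_ids: list, batch_size: int = 1000) -> list:
    """Split a PersistedCollection's MemberIDs into Search query strings that
    each stay under the per-query OR-clause limit."""
    return [
        " OR ".join(f'id:"{rid}"' for rid in member_ids[i:i + batch_size])
        for i in range(0, len(member_ids), batch_size)
    ]
```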
## Possible Solution
One possible solution to address this requirement could be adding the Persisted Collection record ID to each member record's data. Whenever a record is added to the Persisted Collection, the record's data would be updated with the Persisted Collection ID.
This could be done by listening to record change events for the PersistedCollection kind.
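Such a listener might be sketched as follows (hypothetical: the event shape and the `ParentCollectionID` field name are assumptions, not part of the PersistedCollection schema):

```python
def collection_patches(event: dict) -> list:
    """Given a (hypothetical) record-change event for a PersistedCollection,
    build patch payloads that stamp the collection ID onto each member record."""
    collection_id = event["id"]
    member_ids = event.get("data", {}).get("MemberIDs", [])
    return [
        {"id": rid, "data": {"ParentCollectionID": collection_id}}
        for rid in member_ids
    ]
```

A single search on the stamped field would then return all members of a collection in one query.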
## Consequences
* This will help users get the records of a Persisted Collection in a single go.
* This will help users get the records to which they have access.
* This will help users form queries to get the desired records from a PersistedCollection, such as "give all the records of a persisted collection where data.\<someproperty\> is \<xyz\>", in one go.
* This will help users make filters based on the different objects in the collection.

---

## Preship postman collection is hardcoded to Azure ACLs
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/322 (Bryan Dawson, 2024-01-03)

The postman collection for preshipping hardcodes the ACLs to a `contoso.com` domain.
![image.png](/uploads/e693f39caae11c3f4fa043628a023e2a/image.png)
Instead, this should use a variable for the whole part after the `@` so that AWS and GC preshipping can reuse the same collection.

---

## Inconsistent variable usage in reference value postman collection
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/321 (Bryan Dawson, 2024-01-03)

In the postman collection for loading the reference data, the variable `WORKFLOW_URL` is used for most of the requests:
![image.png](/uploads/1cc45ca96af511b64707de142f8418fe/image.png)
However, some of the newer requests use a different variable, `osdu_endpoint`:
![image.png](/uploads/eb3ef72ada79756b33fe130b0f5fbb7f/image.png)
We should be consistent and use `WORKFLOW_URL` for all.

---

## Inconsistent variable usage in schema registration postman collection
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/320 (Bryan Dawson, 2024-01-03)

Most of the requests in the [schema collection](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/blob/main/deployments/rafsddms_schemas_mvp.postman_collection.json?ref_type=heads) use the variable `SCHEMA_HOST` for the URL:
![image.png](/uploads/e488c000e08eac35ea04c155c873acae/image.png)
but a few do not, and require setting up a separate variable, `OSDU_BASE_HOST`:
![image.png](/uploads/0ae7881531b0a26d1c78e955a8d0f7e3/image.png)
We should change the collection to be consistent.

---

## ETP-8 Unexpected Dataspace schema: Got v9; want v13: \<path\>
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/109 (Dzmitry Malkevich (EPAM), 2024-01-03)

The GC instance for release M22 was upgraded from M21 with all data retained. When making the call `Get Resources List`
```
curl --location --request GET 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/reservoir-ddms/v2/dataspaces/Dani%2FVolve-Grid/resources' \
--header 'Authorization: Bearer <token>' \
--header 'data-partition-id: m19'
```
to any of the existing dataspaces, the user gets a 500 error, and we see the following error in the logs:
<details><summary>ETP-8 Unexpected Dataspace schema: Got v9; want v13: Dani/Volve-Grid</summary>
```json
{
"textPayload": "\tETP-8 Unexpected Dataspace schema: Got v9; want v13: Dani/Volve-Grid",
"insertId": "ahrd0nhq1me4l9sl",
"resource": {
"type": "k8s_container",
"labels": {
"namespace_name": "default",
"project_id": "osdu-service-prod",
"location": "us-central1-c",
"pod_name": "oetp-server-799d96f6b6-r4g5k",
"cluster_name": "asm-primary",
"container_name": "oetp-server"
}
},
"timestamp": "2024-01-03T15:02:58.479257813Z",
"severity": "INFO",
"labels": {
"k8s-pod/service_istio_io/canonical-name": "oetp-server",
"compute.googleapis.com/resource_name": "gke-asm-primary-asm-primary-pool-e828b97e-ml8z",
"k8s-pod/service_istio_io/canonical-revision": "latest",
"k8s-pod/pod-template-hash": "799d96f6b6",
"k8s-pod/security_istio_io/tlsMode": "istio",
"k8s-pod/app": "oetp-server"
},
"logName": "projects/osdu-service-prod/logs/stdout",
"receiveTimestamp": "2024-01-03T15:02:58.554538520Z"
}
```
</details>
From what we understand, the server is trying to update the DB schema but fails, so we need your input on fixing this.
We've re-uploaded the data to demo/Volve and are now able to successfully retrieve resources for this dataspace.

---

## Non-existing records are showing as "invalidRecords"
https://community.opengroup.org/osdu/ui/data-loading/osdu-cli/-/issues/24 (Jan Mortensen, 2024-01-03)

If we try to get a record that does not exist, it shows up as "invalidRecords". If the same storage request is made in e.g. Postman, the response is 404 Record not found.
We should distinguish between
```json
{
  "code": 400,
  "reason": "Validation error.",
  "message": "{\"errors\":[\"Not a valid record id. Found: <my record> \"]}"
}
```
and
```json
{
  "code": 404,
  "reason": "Record not found",
  "message": "The record '<my record>' was not found"
}
```

---

## Preship Postman collection is missing v2 master data requests
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/319 (Bryan Dawson, 2024-01-02)

The preship testing postman collection (https://community.opengroup.org/osdu/qa/-/blob/main/Dev/48_CICD_Setup_RAFSDDMSAPI/RAFSDDMS_API_CI-CD_v1.0.postman_collection.json?ref_type=heads) is missing requests in the v2 folder to create the referenced master data (like Sample, etc).
![image](/uploads/59e8b38ed56bd619f94b8dbcd8a68da0/image.png)
Results in failed tests:
```
{
"code": 422,
"reason": "Data validation failed.",
"errors": {
"Missing records in storage": [
"osdu:master-data--Sample:88Y"
]
}
}
}
```