OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/100
Add correlation-id to response headers (Shane Hutchins · M19 - Release 0.22)

Today the policy service accepts a correlation-id in the request headers and generates one if none is provided. This unique identifier shows up in the logs and in every request made to other OSDU services as part of handling the original request. However, the correlation-id is not returned in the response headers. Returning it would aid debugging and troubleshooting in the future.
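A minimal sketch of one way to echo the header back, assuming a FastAPI-style middleware (the policy service is Python-based, but this is illustrative only, not its actual code):

```python
# Illustrative sketch only; assumes a FastAPI application.
import uuid

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def echo_correlation_id(request: Request, call_next):
    # Reuse the caller's correlation-id, or generate one so the request stays traceable.
    correlation_id = request.headers.get("correlation-id") or str(uuid.uuid4())
    response = await call_next(request)
    # Return the id in the response so clients can quote it when troubleshooting.
    response.headers["correlation-id"] = correlation_id
    return response
```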
https://community.opengroup.org/osdu/platform/system/notification/-/issues/53
Storage Integration Test Fails in Azure (Yifan Ye · M19 - Release 0.22)

In Azure, the notification service only retrieves new subscriptions every 10 minutes, while the integration test only waits for 1 minute. The test is therefore unable to find the subscription it created, which causes it to fail.
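One way the test could cope with the slow refresh is to poll with a deadline longer than the refresh interval rather than a fixed one-minute wait; a hedged sketch, where `find_subscription` stands in for whatever lookup the test already performs:

```python
import time

def wait_for_subscription(find_subscription, timeout_s=660, poll_interval_s=30):
    """Poll until the subscription is visible or the deadline (> 10 min refresh) passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        subscription = find_subscription()
        if subscription is not None:
            return subscription
        time.sleep(poll_interval_s)
    raise TimeoutError("subscription was not picked up by the notification service in time")
```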
https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/91
Use specific topic instead of the storage record change topic to send the re-index events (Zhibin Mai · M19 - Release 0.22)

In the current implementation of the Azure indexer, re-index events share the same topic as the storage record change events. This creates several kinds of problems:
1. It creates unnecessary load on the storage service, as many other services monitor the storage change events and react to them, e.g. data synchronization with external datastores.
2. It could affect index/re-index performance if the storage service is busy.
3. It creates unnecessary duplicate copies of the data, e.g. multiple copies/versions of wks records with exactly the same content could be created.
4. Events generated from re-index or index extension could block storage record change events, which could impact SLO requirements in terms of index update latency.
We should use a dedicated topic to send and receive the re-index events.

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/99
OPA http requests calling own policy service apis are blocked (Dadong Zhou · M19 - Release 0.22 · Shane Hutchins)
While testing a policy, it was found that the http request in a Rego policy fails when calling the Storage API to retrieve a data record.
Further investigation with a test Rego policy using different URLs suggests that http requests calling the policy service's own APIs are blocked.
Here are the tests performed (for test cases 1-4, the test policy is deployed and evaluated in the Shell OSDU Sandbox; for cases 5 & 6, the test policy is deployed and evaluated in a Policy instance on my local machine):
Policy rego file:
```
package osdu.partition["osdu"].test
import input
headers = {
    "Content-Type": "application/json",
    "data-partition-id": input.datapartitionid,
    "Authorization": sprintf("Bearer %v", [input.token]),
    "Accept": "application/json"
}
url := input.url
response := http.send({
    "method": "GET",
    "url": url,
    "headers": headers,
    "force_cache": true,
    "force_cache_duration_seconds": 1,
    "raise_error": false
})
```
Policy evaluation case 1:
Input - call Storage Info api:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/storage/v2/info"
}
}
```
Evaluation output with expected results (status code 200) - http call is working:
```
"response": {
"body": {
"artifactId": "storage-aws",
"branch": "refs/heads/release/r3-m15",
"buildTime": "2023-01-05T21:49:57.391Z",
"commitId": "343b1cd6109bb2c329dfa2d6c01efca241bb6688",
"commitMessage": "Merge branch 'cherry-pick-for-539' into 'release/0.18'",
"connectedOuterServices": [],
"groupId": "org.opengroup.osdu",
"version": "0.18.0-SNAPSHOT"
},
"headers": {
"access-control-allow-credentials": [
"true"
],
"access-control-allow-headers": [
"access-control-allow-origin, origin, content-type, accept, authorization, data-partition-id, correlation-id, appkey"
],
"access-control-allow-methods": [
"GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH"
],
"access-control-allow-origin": [
"*"
],
"access-control-max-age": [
"3600"
],
"cache-control": [
"no-cache, no-store, must-revalidate"
],
"content-security-policy": [
"default-src 'self'"
],
"content-type": [
"application/json"
],
"correlation-id": [
"da2ac137-8bfe-48ba-9856-dd659e1639be"
],
"date": [
"Thu, 01 Jun 2023 18:29:01 GMT"
],
"expires": [
"0"
],
"strict-transport-security": [
"max-age=31536000; includeSubDomains"
],
"x-content-type-options": [
"nosniff"
],
"x-envoy-upstream-service-time": [
"3"
],
"x-frame-options": [
"DENY"
],
"x-xss-protection": [
"1; mode=block"
]
},
"raw_body": "{\"groupId\":\"org.opengroup.osdu\",\"artifactId\":\"storage-aws\",\"version\":\"0.18.0-SNAPSHOT\",\"buildTime\":\"2023-01-05T21:49:57.391Z\",\"branch\":\"refs/heads/release/r3-m15\",\"commitId\":\"343b1cd6109bb2c329dfa2d6c01efca241bb6688\",\"commitMessage\":\"Merge branch 'cherry-pick-for-539' into 'release/0.18'\",\"connectedOuterServices\":[]}",
"status": "200 OK",
"status_code": 200
}
```
Policy evaluation case 2:
Input - call Storage Get api with a valid data record id:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/storage/v2/records/osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER"
}
}
```
Evaluation output with unexpected results (status code 0) - http call is failing:
```
"response": {
"error": {
"code": "eval_http_send_network_error",
"message": "Get \"https://sandbox.osdu.shell.com/api/storage/v2/records/osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
},
"status_code": 0
}
```
Policy evaluation case 3:
Input - call Storage Get api with an invalid data record id:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/storage/v2/records/osdu:dataset--File.Generic:PolicyTest:InvalidID"
}
}
```
Evaluation output with expected results (status code 404) - http call is working:
```
"response": {
"body": {
"code": 404,
"message": "The record 'osdu:dataset--File.Generic:PolicyTest:InvalidID' was not found",
"reason": "Record not found"
},
"headers": {
"access-control-allow-credentials": [
"true"
],
"access-control-allow-headers": [
"access-control-allow-origin, origin, content-type, accept, authorization, data-partition-id, correlation-id, appkey"
],
"access-control-allow-methods": [
"GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH"
],
"access-control-allow-origin": [
"*"
],
"access-control-max-age": [
"3600"
],
"cache-control": [
"no-cache, no-store, must-revalidate"
],
"content-disposition": [
"inline;filename=f.txt"
],
"content-security-policy": [
"default-src 'self'"
],
"content-type": [
"application/json"
],
"correlation-id": [
"22839a83-3d4c-4a49-8ad1-f135e26a4080"
],
"date": [
"Thu, 01 Jun 2023 18:32:18 GMT"
],
"expires": [
"0"
],
"strict-transport-security": [
"max-age=31536000; includeSubDomains"
],
"x-content-type-options": [
"nosniff"
],
"x-envoy-upstream-service-time": [
"18"
],
"x-frame-options": [
"DENY"
],
"x-xss-protection": [
"1; mode=block"
]
},
"raw_body": "{\"code\":404,\"reason\":\"Record not found\",\"message\":\"The record 'osdu:dataset--File.Generic:PolicyTest:InvalidID' was not found\"}",
"status": "404 Not Found",
"status_code": 404
}
```
Policy evaluation case 4:
Input - Call Policy Health api:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/policy/v1/health"
}
}
```
Evaluation output with unexpected results (status code 0) - http call is failing:
```
"response": {
"error": {
"code": "eval_http_send_network_error",
"message": "Get \"https://sandbox.osdu.shell.com/api/policy/v1/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
},
"status_code": 0
}
```
Policy evaluation case 5 - policy rego file is deployed and evaluated in my local machine Policy instance:
Input - same input as in case 2 - call Storage Get api with a valid data record id:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/storage/v2/records/osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER"
}
}
```
Evaluation output with expected results (status code 200) - http call is working:
```
"response": {
"body": {
"acl": {
"owners": [
"data.policytest.owners@osdu.shell.com"
],
"viewers": [
"data.policytest.no.viewers@osdu.shell.com"
]
},
"createTime": "2023-05-31T22:09:47.734Z",
"createUser": "osduusdevdpinformatica@shell.com",
"data": {
"DatasetProperties": {
"FileSourceInfo": {
"FileSource": "s3://osdudptfue1-shared-813258989325-us-east-1-file/osdu/uxHRxl9M8JdUf12AKuTkAZQai3LTZw6W/test.txt"
}
},
"ResourceSecurityClassification": "osdu:reference-data--ResourceSecurityClassification:RESTRICTED:"
},
"id": "osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER",
"kind": "osdu:wks:dataset--File.Generic:1.0.0",
"legal": {
"legaltags": [
"osdu-Case-A1-allow-Affiliate"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"meta": [],
"modifyTime": "2023-06-01T14:37:46.994Z",
"modifyUser": "osduusdevdpinformatica@shell.com",
"version": 1685570987718344
},
"headers": {
"access-control-allow-credentials": [
"true"
],
"access-control-allow-headers": [
"access-control-allow-origin, origin, content-type, accept, authorization, data-partition-id, correlation-id, appkey"
],
"access-control-allow-methods": [
"GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH"
],
"access-control-allow-origin": [
"*"
],
"access-control-max-age": [
"3600"
],
"cache-control": [
"no-cache, no-store, must-revalidate"
],
"content-disposition": [
"inline;filename=f.txt"
],
"content-length": [
"806"
],
"content-security-policy": [
"default-src 'self'"
],
"content-type": [
"application/json"
],
"correlation-id": [
"872bccf2-1ce8-4b16-b5e9-1d3cdeba7c4f"
],
"date": [
"Thu, 01 Jun 2023 19:31:07 GMT"
],
"expires": [
"0"
],
"strict-transport-security": [
"max-age=31536000; includeSubDomains"
],
"x-content-type-options": [
"nosniff"
],
"x-envoy-upstream-service-time": [
"827"
],
"x-frame-options": [
"DENY"
],
"x-xss-protection": [
"1; mode=block"
]
},
"raw_body": "{\"data\":{\"DatasetProperties\":{\"FileSourceInfo\":{\"FileSource\":\"s3://osdudptfue1-shared-813258989325-us-east-1-file/osdu/uxHRxl9M8JdUf12AKuTkAZQai3LTZw6W/test.txt\"}},\"ResourceSecurityClassification\":\"osdu:reference-data--ResourceSecurityClassification:RESTRICTED:\"},\"meta\":[],\"id\":\"osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER\",\"version\":1685570987718344,\"kind\":\"osdu:wks:dataset--File.Generic:1.0.0\",\"acl\":{\"viewers\":[\"data.policytest.no.viewers@osdu.shell.com\"],\"owners\":[\"data.policytest.owners@osdu.shell.com\"]},\"legal\":{\"legaltags\":[\"osdu-Case-A1-allow-Affiliate\"],\"otherRelevantDataCountries\":[\"US\"],\"status\":\"compliant\"},\"createUser\":\"osduusdevdpinformatica@shell.com\",\"createTime\":\"2023-05-31T22:09:47.734Z\",\"modifyUser\":\"osduusdevdpinformatica@shell.com\",\"modifyTime\":\"2023-06-01T14:37:46.994Z\"}",
"status": "200 OK",
"status_code": 200
}
```
Policy evaluation case 6 - policy rego file is deployed and evaluated in my local machine Policy instance:
Input - same input as in case 4 - Call Policy Health api:
```
{
"input": {
"url": "https://sandbox.osdu.shell.com/api/policy/v1/health"
}
}
```
Evaluation output with expected results (status code 200) - http call is working:
```
"response": {
"body": {
"message": "Healthy"
},
"headers": {
"content-length": [
"21"
],
"content-type": [
"application/json"
],
"date": [
"Thu, 01 Jun 2023 19:30:02 GMT"
],
"x-envoy-upstream-service-time": [
"8"
]
},
"raw_body": "{\"message\":\"Healthy\"}",
"status": "200 OK",
"status_code": 200
}
```
In cases 1 & 3, the http calls do not target the Policy service's own APIs, and they work.
In cases 2 & 4, the http calls target the Policy service's own APIs, directly or indirectly, and they are blocked and fail.
In cases 5 & 6, with the same inputs as cases 2 & 4, the http calls target the Policy APIs of a different OSDU instance, and they work.
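For reference, the local evaluations in cases 5 & 6 can be reproduced against a plain OPA instance through its Data API; the host/port and token below are assumptions for illustration, not the Shell deployment's configuration:

```python
# Assumes an OPA instance with the test policy loaded is reachable on localhost:8181.
# The package osdu.partition["osdu"].test maps to the Data API path used below.
import requests

payload = {
    "input": {
        "url": "https://sandbox.osdu.shell.com/api/storage/v2/records/osdu:dataset--File.Generic:PolicyTest:LT_1_OWNER",
        "datapartitionid": "osdu",
        "token": "<access token>",
    }
}
resp = requests.post(
    "http://localhost:8181/v1/data/osdu/partition/osdu/test",
    json=payload,
    timeout=30,
)
# The evaluated document contains the http.send "response" object shown in the cases above.
print(resp.json())
```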
cc @hmarkovic @hutchins @MonicaJohns @chad

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/70
Google. Job Failed #1980341 (Yan Sushchynski (EPAM) · M19 - Release 0.22 · Yannick)

Could you have a look at the CI/CD job [#1980341](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/jobs/1980341) and help with it? The tests are complex and we don't actually know where to start investigating the issue.
Thanks.

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/98
Audit logs for policy service (Hrvoje Markovic · M19 - Release 0.22 · Shane Hutchins)

Create audit logs for the policy service, similar to what is produced by the storage service.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/90
ADR: new reindex API to reindex the given records (Mingyang Zhu · M19 - Release 0.22)
## Status
- [ ] Proposed
- [ ] Trialing
- [ ] Under review
- [X] Approved
- [ ] Retired
## Context
As of now, the indexer has a reindex API that reindexes a whole given kind. The API is useful in scenarios where index data needs to be migrated because of bug fixes, new indexer features, etc. Sometimes it may not be necessary to reindex the entire kind if we know the exact impact, so it would be good to have a reindex API that reindexes only the given records.
The use cases of the new API could be:
1. If there is an indexer bug or a new indexer feature deployed, and we know exactly which records are impacted, we could use such an API to reindex only those records
2. When a user ingests data and the records are successfully created in storage but fail to be indexed for any reason, the application could use such an API to manually fix the impacted records instead of reindexing the whole kind
## API spec
```yaml
paths:
  "/api/indexer/v2/reindex/records":
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ReindexRecordsRequest'
schemas:
  ReindexRecordsRequest:
    type: object
    properties:
      recordIds:
        type: array
        items:
          type: string
        example: ["recordId1", "recordId2"]
```
## Limit
We will initially limit the number of records per request to 1000.
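A hedged example of how a client might call the proposed endpoint once it exists; the host, headers and token are placeholders, and only the path and body shape come from the spec above:

```python
import requests

OSDU_HOST = "https://osdu.example.com"  # placeholder host

body = {"recordIds": ["recordId1", "recordId2"]}  # up to 1000 ids per the proposed limit
resp = requests.post(
    f"{OSDU_HOST}/api/indexer/v2/reindex/records",
    json=body,
    headers={
        "Authorization": "Bearer <access token>",
        "data-partition-id": "osdu",
        "Content-Type": "application/json",
    },
    timeout=30,
)
resp.raise_for_status()
```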
https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/97
Job Failed #1974787 (Andrei Skorkin [EPAM / GCP] · M19 - Release 0.22 · Shane Hutchins)
Job [#1974787](https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/jobs/1974787) failed for ced4568b60edb33db7122b8f9a802a9970315715:
The 'compile-and-unit-test' job does not work in the pipeline. As a result, the subsequent jobs are skipped.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/9
Required fields that should be optional (Bryan Dawson · RAFS DDMS Sprint 11 · Ernesto Gutierrez)
When trying to POST a new RockSampleAnalysis I had to put `legal.status` and `data.Parameters` into my JSON even though both of those properties are optional from a schema perspective. You could maybe make an argument that `data.Parameters` should be there from a business-rules perspective, but `legal.status` should definitely not be required, as the system will add that property in the storage service.
Example of the JSON I had to send ...
```json
[{
"id": "osdu-dev:work-product-component--RockSampleAnalysis:MyTestSample2",
"kind": "osdu:wks:work-product-component--RockSampleAnalysis:1.1.0",
"acl": {
"owners": [
"data.default.owners@osdu-dev.exxonmobil.com"
],
"viewers": [
"data.default.viewers@osdu-dev.exxonmobil.com"
]
},
"legal": {
"legaltags": [
"osdu-dev-default-legal"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"data": {
"Name": "Test RSA2",
"Description": "Testing the DDMS Endpoint",
"TopDepth": 12345.6,
"BottomDepth": 12345.6,
"Parameters": []
}
}]
```

https://community.opengroup.org/osdu/platform/system/file/-/issues/83
Checksum values do not match up - value prior to upload, value as auto-populated by File service and persisted in Dataset record (Debasis Chatterjee · M19 - Release 0.22 · Chad Leong)

Please see my recent test in Azure/M17/Preship:
https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M17/Test_Plan_Results_M17/Core_Services/M17-Azuere-Core-File-and-Dataset-steps-Debasis.zip
Prior to uploading the file, I obtained the checksum value from the Linux operating system.
After the file is uploaded and the Dataset record is created, I compare that value with the one auto-populated by the File service.
The values do not match.
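When comparing, it may be worth ruling out a representation difference first; the sketch below is only an assumption about one common source of apparent mismatches (hex vs. base64 encodings of the same digest), not a confirmed diagnosis:

```python
import base64
import hashlib

def md5_representations(path):
    # Compute the file's MD5 once and show it in two common encodings.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).digest()
    return {
        "hex": digest.hex(),                          # what md5sum prints on Linux
        "base64": base64.b64encode(digest).decode(),  # another encoding sometimes stored
    }

print(md5_representations("test.txt"))
```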
Please check this.

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/495
EDS DMS testing fails in Pre-ship GC environment R3 M17, when trying to get the retrieval Instructions using the dataset--ConnectedSource.Generic object (Kamlesh Todai · M19 - Release 0.22)

<details><summary>Try fetching the document using the fetched dataset--ConnectedSource.Generic record</summary>
curl --location 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/dataset/v1/retrievalInstructions' \
--header 'Data-Partition-Id: odesprod' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ya29.a0AWY7Ckl...WpA0169' \
--data '{
"datasetRegistryIds": [
"odesprod:dataset--ConnectedSource.Generic:KamMay082023"
]
}'
Response 400 Bad Request
{
"code": 400,
"reason": "Bad Request",
"message": "No DMS handler for kindSubType 'dataset--ConnectedSource.Generic' is registered"
}
</details>
This is the CSRE record: odesprod:master-data--ConnectedSourceRegistryEntry:AWSPreship-KTMAY092023_2
This is the CSDJ record: odesprod:master-data--ConnectedSourceDataJob:AWSPreshipKTMAY092023_2
This is the WPC record odesprod:work-product-component--Document:KamMay082023
Additional info can be found in the test results uploaded in the pre-ship testing folder
https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M17/Test_Plan_Results_M17/EDS/GC_M17_EDS_DMSTesting.txt
@AshishSaxenaAccenture @dzmitry_malkevich @debasisc @chad

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/123
ADR: remove entitlements implicit quota of how many data groups can be created within a data partition (Mingyang Zhu · M19 - Release 0.22)

## Status
- [X] Proposed
- [X] Trialing
- [X] Under review
- [X] Approved
- [ ] Retired
## Context
The Entitlements service implements a quota for the number of memberships one entity can have. The quota regulates consumption behavior and protects the service. An entity could be a user, a service account or an entitlements group.
There is a bootstrap group called "users.data.root", and there is a specific implementation that automatically adds this group as a child of all created data groups. The requirement behind this implementation is that any user or service account belonging to the group "users.data.root" should have access to all data groups within the data partition. It also means that such a user or service account has full permission to access all data within the data partition, since storage uses entitlements data groups as the record ACL. We will use the term "full data permission" below to refer to this requirement.
A quota itself is a good thing to have; however, because of the membership design of the "users.data.root" group, it introduces an implicit quota which limits how many data groups in total can be created in one data partition.
Red is the current Authz; green is proposed:
![Entitlements_ADR__123.drawio](/uploads/627ac1ea959b5537d3d6d62319568751/Entitlements_ADR__123.drawio.png)
## Tradeoff Analysis
The advantage of adding the "users.data.root" group to all data groups is that it hides the implementation details of the "full data permission" requirement from other services. Therefore, originally only the entitlements code needed to be changed to support this requirement, and all other services that require data authorization automatically get this feature.
However, it introduces an implicit quota which limits how many data groups can be created within the data partition. Such an implicit quota constrains group scalability and usability for applications. E.g., assume the membership quota is 5000; due to the implicit quota, only 5000 data groups can be created within the data partition. When this quota is met, applications can't create any new group, which blocks application functionality. And when this quota is met, the average membership of an individual user or service account may still be a small number, so the service is not fully utilized.
> **ⓘ** Additional tradeoffs include that requiring each service to check for a group itself rather than relying on the entitlements service breaks
>
>- the [non-functional agility requirement](https://gitlab.opengroup.org/osdu/r3-program-activities/docs/-/raw/master/R3%20Document%20Snapshot/07-osdu-reference-architecture.pdf) since any change to this dependency requires touching all services
>- transparency, since the entitlements service is no longer authoritative
>- the [single-responsibility principle](https://en.wikipedia.org/wiki/Single-responsibility_principle), since a service is now responsible for authorization in addition to its main function.
## Decision
It is not a good idea to support "full data permission" through group hierarchy and ACL-based entitlement. Such a requirement should be implemented with role-based or policy-based entitlement. We'd like to propose a new design for this:
1. Entitlements service drops the implementation of adding "users.data.root" to all data groups. Therefore, it removes the undesired implicit quota of how many data groups can be created within a data partition.
2. Any downstream service which does data authorization with ACL checking should implement new logic to check whether the caller belongs to "users.data.root"; if so, the service should bypass the ACL check and grant full data permission (see the sketch below). Since all services already use the entitlements service for API authorization, the caller's groups are already available for the request, so no extra performance overhead is added to the downstream services. Eventually, this logic should be converted to an instance policy when the downstream services integrate with the policy service.
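A minimal sketch of the proposed downstream check, assuming the caller's group list has already been fetched from the entitlements service during API authorization; the group email format is an assumption for illustration:

```python
# Assumed group email shape: users.data.root@<data-partition>.<dns-domain>
DATA_ROOT_GROUP_PREFIX = "users.data.root@"

def is_record_accessible(caller_groups, acl_viewers, acl_owners):
    # Members of users.data.root get full data permission and bypass the ACL check.
    if any(group.startswith(DATA_ROOT_GROUP_PREFIX) for group in caller_groups):
        return True
    # Otherwise fall back to the normal ACL intersection check.
    record_acl = set(acl_viewers) | set(acl_owners)
    return bool(record_acl.intersection(caller_groups))
```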
## Consequences
All downstream services which do data authorization by ACL checking need to be reviewed to determine whether they require the code change to support the "full data permission" requirement.
> **ⓘ** This ADR grows the technical debt (growing the data authz logic in the search and storage service).
>
>The Policy service could address many of tradeoffs described above as well as the technical debt by abstracting these checks out of the services; however
>
>- The Policy service is non-performant for as few as 50 groups, so many will use the hard-coded approach initially.
>- It is difficult to unwind implementation of a temporary solution. For instance, our understanding is that the Storage service bypasses the Policy service entirely and calls OPA directly (also for performance reasons).
>
>For these reasons, Google Cloud suggests careful and persistent documentation of the technical debt which will need to be unwound in future.
As an agreement to balance business feature development and technical debt, this ADR will add the data manager authorization logic both to the downstream services and to OPA as a new policy. The reason we still need to add this hard-coded logic to all downstream services is that the policy service is not 100% released yet, and the hard-coded data authorization logic is still used in production. Since the new data manager policy will also be implemented in OPA, this ADR won't add any technical debt or logic discrepancy to the policy service. The above technical debt can only be resolved when the policy service is released to production; within its own tasks, it should then remove all the hard-coded data authz logic from the downstream services.
### Identified impact services
Entitlements (MR: https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/merge_requests/477)
Storage (MR: https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/694)
Search (has been already implemented the logic in the MR: https://community.opengroup.org/osdu/platform/system/search-service/-/merge_requests/298)
Seismic DMS
*To Be Added*

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/home/-/issues/2
ADR: Creation of Rock and Fluid Sample DDMS (Bryan Dawson · M19 - Release 0.22)

# ADR: Creation of Rock and Fluid Sample DDMS
Creation of Rock and Fluid Sample DDMS to support storage and analysis of rock and fluid sample data.
## Status
- [X] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
## Context & Scope
Rock and fluid samples are physical specimens extracted either directly from the subsurface or from collection points at the surface. These samples can be associated with a wellbore, but sometimes are not tied to a specific wellbore. Information and analysis of these samples are used in static reservoir modeling, dynamic reservoir modeling, facility design, flowline design, drilling design, and more.
Personas that typically use rock and fluid samples can include:
- Data Managers/Stewards
- Petrophysicists
- Petrologists
- Petrographers
- Geochemists
- Geologists
- Reservoir Modelers
- Reservoir Engineers
- Flow Assurance Engineers
- Facilities Engineers
- Lab Inventory Professionals
- Lab Analysis Professionals
Typically, the lifecycle of sample data follows this general flow: a) putting together a plan to collect samples, b) working with vendors to collect samples, c) shipping the samples to labs, d) analyzing the samples at one or more labs, and e) placing them in long-term storage, as shown in Figure 1 below.
![image](/uploads/62132761478b82d916376dc06272f242/image.png)
*Figure 1. Example of Rock and Fluid Sample Business Process Flow*
Like most subsurface data, analysis of the samples is a highly iterative process, and multiple data sets, and versions of them, are generated. Storage of this data is highly variable, as there is no industry-standard storage format. Data can be stored in a multitude of file formats with internal structures varying from vendor to vendor.
In order to analyze and best govern fluid sample data across this diverse data storage, there is a need for consistent data definitions, data structure, and common abstractions (e.g., APIs). Otherwise, analyzing this dataset is a highly burdensome process requiring the data consumers to exert a high degree of manual effort in formatting and prepping the data for analysis.
## Decision
Enhance the OSDU Platform to provide a standard, optimized method to store and consume Rock and Fluid Sample data, and in particular to support RCA and PVT.
In doing so, create a new DDMS in lieu of enhancing an existing DDMS.
The initial release of this DDMS aims to satisfy these use cases:
> - **Provide optimized bulk content storage:**
> - Enable data consumers to store bulk data extracted from reports in a way that is consistent and easy to access later
> - DDMS supporting this by having endpoints for bulk data ingestion/storage into well-defined DDMS data models
> - **Support query, filter, and deliver bulk data within one dataset:**
> - Enable data consumers to find wells that have a specific rock and fluid attribute (E.g., permeability values > 18 mD in RCA) for a specific country
> - DDMS supporting this by:
> - Supporting query by any catalog data/metadata (e.g. geocontexts, lab vendor, dates, experimental method, technical assurance, etc.) or DDMS bulk data
> - Supporting filtering for value ranges and other criteria (e.g., depths, pressures, temperatures, etc.)
>- **Enable trace bulk data back to OSDU Catalog records:**
> - Enable data consumers to find the original RCA report after working with bulk data
> - DDMS supporting this by having endpoints that allow search against the Core Search service, for WKSs relevant to Rock and Fluid data, linked to DDMS Datasets
## Rationale
A DDMS does not exist today that provides optimized storage and consumption of rock and fluid sample data. This particular data set does not neatly fit into an existing DDMS (e.g., wellbore, reservoir, seismic, etc.). It has a close affinity to the Wellbore Data Domain and therefore the Wellbore DDMS, but the data is not always associated with a wellbore and can exist without one. For that reason, and following best practices of domain-driven design, it was decided to create a new DDMS that can support an array of optimized storage and consumption specific to the Rock and Fluid Sample data domain.
Note, the Data Definition Data Domain teams are currently aligning OSDU Data Definitions by Data Domains. The decision to leverage domain-driven design is in line with this effort. The current proposal can be found [here](https://gitlab.opengroup.org/osdu/subcommittees/data-def/docs/-/blob/master/Design%20Documents/WIP/DomainDefinitions/Domains-and-datatypes-Registry.xls).
## Consequences
Rock and Fluid Sample DDMS would be added as an experimental feature of the platform. The initial MVP would provide capabilities to store and consume RCA and PVT data. This would be both the metadata records associated with those data types (e.g. Coring, RockSample, RockSampleAnalysis, etc.) and the content. For the metadata records, we will utilize the core Storage Service, and not store separate instances. However, validation will occur before storing the JSON to the core storage service. Content will initially be stored in a single parquet file per analysis and will utilize the core file service for storage/management of the file.
The initial donation will only include an Azure SPI implementation. The consequence is that we'll need to work with the other CSPs post-donation on the creation of the other SPI layers.
No content parsers will be provided as part of the donation. This is due to both lack of an industry standard of sample analysis report content and the commercial nature of the products in this space (likely ML will need to be applied to extract the data from PDFs and other documents).
## When to revisit
Future considerations to revisit this DDMS when:
- DDMS architecture fundamentally shifts approach away from being domain-driven
- OSDU community aligns to refactor scope of this DDMS with another yet to be built DDMS
---
# Tradeoff Analysis - Input to decision
## Alternatives and implications
The main alternative considered was to add rock and fluid sample endpoints to the Wellbore DDMS, but as has been stated above, not all samples are tied directly to wellbores (e.g. sea surface captured fluid samples). The implication of having this be a separate service is there is potentially some duplication in how the service handles storage of the data (in the case of the initial MVP that is parquet).
## Decision criteria and tradeoffs
The team evaluated ties to other domains covered by existing DDMSs, and we did not find a good match. Following the domain driven design approach by creating this as a separate DDMS allows it to evolve to meet the needs of the consumers of rock and fluid samples and the unique needs that domain would have. We expect this will evolve quite a bit as we expand the service to accommodate more analytics use cases that span multiple sample reports. The benefit of allowing it to independently evolve would outweigh the potential code duplication elimination that would come from shoehorning this domain into the Wellbore DDMS.
Architecturally, the DDMS has been designed to conform with the 11 OSDU DDMS Key Principles and design best practices:
- https://community.opengroup.org/osdu/documentation/-/wikis/OSDU-(C)/Design-and-Implementation/Domain-&-Data-Management-Services
- https://community.opengroup.org/osdu/documentation/-/wikis/OSDU-(C)/Reference-Architecture/Functional-Architecture/Data
- https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/home
Specific to this DDMS, these considerations were factored into the design and architecture:
- Domain-driven API
- Contextual boundaries
- Microservices-based
- Modular
- Vendor & technology agnostic
- Balanced between analytic performance and cloud resource cost
- Flexible component composition for specific use cases and requirements
![image](/uploads/7eda7c314a29785b6baabae266d3e605/image.png)
*Figure 2. Target DDMS Architecture*
## Decision timeline
A prototype was developed for the Rock and Fluid Sample DDMS in Q1 2023 by ExxonMobil and EPAM. We are targeting the inclusion of an MVP of the RAFS DDMS in either M18 or M19.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/48
Maximum connected users to Open ETP Server is limited to 99 (Pavel Kisliak · M19 - Release 0.22)

### Problem description:
The current architecture of the Open ETP server has a limit of 99 open user sessions; this is related to the maximum number of connections to PostgreSQL (100 by default). Of course, the number of PostgreSQL connections can be increased, but this may require more resources from the DB host machine ([why it's a bad idea](https://stackoverflow.com/a/32584211/5265572)).
In the current implementation, each new user session opens a new DB connection; when the limit is exceeded, the server returns a generic error.
This can be a real bottleneck, because the ETP protocol, which is based on WebSockets, assumes that a connection established with a user can live for a long time.
**Update 5/02/2023:** Currently, a single client can make the server unavailable by making requests to 99 different dataspaces in the same session (fixed in !140).
### Suggested solution:
The most commonly used approach is to use shared DB connections, where each user request allocates a DB connection from a pool and releases it on finish. The diagram below shows the current and proposed solutions:
![POWERPNT_2023-04-03_16-17-48](/uploads/3096b2f8cadcb9434aac338f66f14ea1/POWERPNT_2023-04-03_16-17-48.png)
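The pooling pattern can be sketched as follows; this is an illustration in Python with psycopg2 only (the Open ETP server itself is C++), showing the shape of the proposed approach rather than its implementation:

```python
from psycopg2 import pool

# One shared pool per server process instead of one dedicated connection per user session.
db_pool = pool.ThreadedConnectionPool(minconn=2, maxconn=20, dsn="dbname=etp user=etp")

def handle_request(run_query):
    conn = db_pool.getconn()   # borrow a connection for this request only
    try:
        return run_query(conn)
    finally:
        db_pool.putconn(conn)  # release it so other sessions can reuse it
```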
An additional benefit of the suggested approach is better performance for new user requests, since a reused DB connection already has prepared queries (statements).

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/122
ADR: New entitlements membership change events (Thiago Senador · M19 - Release 0.22)

## Status
- [X] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
**Context & Scope**
Many OSDU applications need to react to entitlements membership changes, a feature that is [long overdue](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/61) in the entitlements service. In addition, since almost every other OSDU service relies on the entitlements service and caches its data, this notification mechanism could be used to prevent dirty cache scenarios as described [here](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/121).
The scope of this ADR is the addition of asynchronous pubsub events signaling the successful completion of the following entitlements service APIs:
```
DELETE /groups/{group_email}
POST /groups/{group_email}/members
DELETE /groups/{group_email}/members/{member_email}
```
**Trade-off Analysis**
The addition of the requested pubsub notification mechanism does not represent a breaking change for any involved API, nor consequently for the consuming applications. It should not introduce any performance degradation either, since the event triggering is done asynchronously. Only interested consuming applications would use this new feature, while it remains completely transparent to others.
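For illustration, a consuming service that caches entitlements data might react to the events proposed in the Decision below roughly as follows; the cache interface and the JSON rendering of the payload are assumptions, not part of the ADR:

```python
import json

def on_entitlements_message(message_body, group_cache):
    # group_cache is a hypothetical cache keyed by group name.
    event = json.loads(message_body)["entitlementsChangeEvent"]
    if event["kind"] in ("groupChanged", "groupDeleted"):
        # Drop the cached membership for the affected group so it is re-read
        # from the entitlements service on next use.
        group_cache.evict(event["group"])
```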
**Decision**
Only at the end of a successful operation, trigger the following events for the given entitlements’ APIs:
```
DELETE /groups/{group_email}
“entitlementsChangeEvent”: {
“kind”: “groupDeleted”
“group”: “<groupName>”
“user”: “”
“action”: “”
“modifiedBy”: “<user identity>”
“modifiedOn”: “<timestamp>”
}
POST /groups/{group_email}/members
“entitlementsChangeEvent”: {
“kind”: “groupChanged”
“group”: “<groupName>”
“user”: “<user>”
“action”: “add”
“modifiedBy”: “<user identity>”
“modifiedOn”: “<timestamp>”
}
DELETE /groups/{group_email}/members/{member_email}
“entitlementsChangeEvent”: {
“kind”: “groupChanged”
“group”: “<groupName>”
“user”: “<user>”
“action”: “remove”
“modifiedBy”: “<user identity>”
“modifiedOn”: “<timestamp>”
}
```

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/260
Feature - Security rules for OSDU Infrastructure - Network (ServiceBus PE) (Vasyl Leskiv [SLB] · M19 - Release 0.22 · Arturo Hernandez [EPAM], Srinivasan Narayanan, shivani karipe)

Currently the connection from AKS to Service Bus is established through a public endpoint. This has the following impact on a highly loaded production environment:
- Security (public internet traffic)
- Load & auto scaling (AKS SNAT outgoing port limitation)
- Performance (latency)
Switching to private endpoints should resolve the items above.

https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/66
convertTrajectory enhancements for scale and interpolation (Bert Kampes · M19 - Release 0.22)
CC @kiranallamsety @joshtownsend - placeholder (I cannot seem to tag Phil or Roy)
Following changes are proposed to API `convertTrajectory v3`, which may result in a v4 change to request and response, with objectives:
- [ ] 1) Return point scale factor and grid convergence at each station (TBD: depth correction factor).
- [ ] 2) Add a new "method": "GNL" as alternative to "AzimuthalEquidistant" and "LMP", which does not apply scaling.
- [ ] 3) Add an input "MD_i" to the request body on which to interpolate.
Note: bug #52 should be fixed first (or at the same time as addressing these enhancements).
Related issue #26 with this service can be closed when item 1 is completed.
**Ad 1)**
Current response has a property "stations" per [api specification](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/api_spec/crs_converter_openapi.json). Each "station" contains for example the azimuthTN and azimuthGN. The objective of this item is to add an additional output field for "point scale factor" (psf) and grid convergence at each station.
Note: There are various ways in which psf can be computed, for example we could call the Apache SIS engine at all computed trajectory (N,E) locations in the projected CRS. That would get the correctly computed actual psf. However, that approach would require an additional specific call to SIS.
To keep this simple and generic, the math/method that is already implemented is used instead, and the scale factor and convergence are computed from the response [as described here](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#53-unscaling-the-calculated-wellbore-path).
- requires a projected CRS for the calculated trajectory (which is there)
- For `method` "azimuthalEquidistant" and "LMP" the psf can be computed as described and simply added to the output as extra property.
- For the new `method` "GNL" (grid north local) the psf is added by the same trick. i.e., "GNL" is implemented by calling "azimuthalEquidistant" and then "unscaling" the output, see Ad 2) below.
Example desired response (note the additional "scalefactor" and "convergence" properties):
```json
"stations": [
{
"md": 0.0,
"inclination": 10.0,
"azimuthTN": 100.51354989131318,
"azimuthGN": 100.0,
"dxTN": 0.0,
"dyTN": 0.0,
"scalefactor": 1.000241,
"convergence": 0.51355,
"point": {
"x": 1999999.99999999,
"y": 9999999.99999969,
"z": 0.0
},
"wgs84Longitude": -85.88980921169528,
"wgs84Latitude": 27.553258329196616,
"dls": 0.0,
"original": true,
"dz": 0.0
},
...
```
* `scalefactor` rounded to 6 decimal places (1 mm per km).
* `convergence` rounded to 5 decimal places (0.2mm per km), computed as GC=TN-GN, and wrapped to the interval (-180,+180) by "if convergence<-180 then convergence+=360; if convergence>180 then convergence-=360" (see the sketch after this list).
- TBD if depth correction factor should be returned. This is a simple correction factor computed as depth/radius (see doc). For now the decision is that this is not needed (because it is easily calculated if needed).
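The wrapping and rounding rule for `convergence` from the bullet above, written out as a small illustrative helper (not service code):

```python
def grid_convergence(azimuth_tn, azimuth_gn):
    """GC = TN - GN, wrapped to (-180, +180) and rounded to 5 decimal places."""
    convergence = azimuth_tn - azimuth_gn
    if convergence < -180:
        convergence += 360
    if convergence > 180:
        convergence -= 360
    return round(convergence, 5)

# Example from the response above: 100.51354989131318 - 100.0 -> 0.51355
print(grid_convergence(100.51354989131318, 100.0))
```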
**Update 2023-05-10 BK**: A mistake in the above is that a divide by zero occurs for the vertical part of the wellbore path. Hence we will use the trick of computing scale and convergence using a dummy survey (see tutorial) only for the first point and the last point, and then output them as follows in a new property for v4 of the API:
```json
"scaleConvergence": [
{
"scalefactor": 1.000241,
"convergence": 0.51355,
"point": {
"x": 1999999.99999999,
"y": 9999999.99999969,
"z": 0.0
}
},
{
"scalefactor": 1.000243,
"convergence": 0.51354,
"point": {
"x": 2000206.0812087534,
"y": 9999956.440082304,
"z": -1181.1945438868763
}
}
]
```
**Ad 2)**
This method is documented in section 3.3 of the wellbore doc which is linked in the [CRS Convert tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#5-computing-a-wellbore-trajectory-from-directional-survey-data).
The "GNL" method requires the input survey observables to be grid north referenced, i.e.,
- `method`: "GNL"
- `inputKind`: "MD_Incl_Azim"
- `azimuthReference`: "GridNorth" (or "GN" - however this is coded, probably detected by the first letter is g or G). (TBD if this is required. It may be possible that TN is given if the AzimuthalEquidistant method may work with it and a projected CRS).
It is a very simple method that notionally works as follows. First, as always, the minimum curvature local offsets are computed (which are "true to scale"). Then these are simply added to the 3D surface location in the same projected CRS. By not scaling these local offsets back to the map projection, the difference between cubical coordinates and curved coordinates is ignored.
However, it is desirable to output the scale factor, and that would not be possible if the minimum curvature offsets are simply added to the starting location. Hence, the implementation of GNL is to internally call "AzimuthalEquidistant", to compute scale factor, and "unscale" the results:
1. Check inputKind and azimuthReference and interpolate options.
2. If `method`: "GNL" then actually do "azimuthalEquidistant" internally.
3. Then calculate the psf at each station as per the above (ad 1).
4. **Then "unscale" the XY trajectory** [as described here](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#53-unscaling-the-calculated-wellbore-path).
Acceptance criteria:
- [ ] documentation in tutorial and [api specification](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/api_spec/crs_converter_openapi.json).
- [ ] implementation (passed tests)
**Ad 3)**
- [ ] Add optional input `MD_i` and output `Stations_i`
- [ ] Implement case to deal with `interpolation_interval` = Number (e.g., 10)
- [ ] Implemented, tested, accepted.
- [ ] Example in [tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#6-wellbore-interpolation-on-md)
Minimum curvature interpolation is done at given MD_i. Math is described in section 2.3 of "OSDU_wellbore_calculations.docx" which is linked in the [CRS Convert tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#5-computing-a-wellbore-trajectory-from-directional-survey-data).
The algorithm is summarized as:
1. First compute the minimum curvature offsets as normal with the stations that have MD,INC,AZI observables.
2. In a second pass, for each desired MD_i[i],
- a. Find the station before and after MD_i[i].
- b. Interpolate the Dog Leg with the equations provided at the desired MD_i.
- c. Compute the interpolated INC_i and AZI_i.
- d. Compute the local offsets dx,dy,dz.
- e. Add those offsets to the previous (real) station.
3. Output calculated (incl. interpolated) stations in an array `stations_i`.
To trigger interpolation at MD, require (check that):
- `inputKind`=="MD_Incl_Azim"
- `MD_i[]`: either
- a constant interpolation interval
- an array with "md_i" values as input with depths at which to interpolate the path.
Option 1 to interpolate the trajectory every 1 [unitZ] for MD_i as interval:
```json
{
"MD_i": {
"md_interval": 1.0
}
}
```
Option 2 for specified MD_i values:
```json
{
"MD_i": {
"md_i": [
200,
400,
600,
800
]
}
}
```
* A bad request error with message should be thrown if both an interval and individual MD_i are in the request.
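A small validation sketch of that rule, based on the two request shapes shown above (a hypothetical helper, not the service's actual code):

```python
def validate_md_i(md_i):
    """Reject requests that specify both an interpolation interval and explicit depths."""
    if "md_interval" in md_i and "md_i" in md_i:
        raise ValueError("Bad request: provide either 'md_interval' or 'md_i', not both")

validate_md_i({"md_interval": 1.0})            # ok
validate_md_i({"md_i": [200, 400, 600, 800]})  # ok
```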
For directional survey:
```json
"inputStations": [
{
"md": 0,
"inclination": 10,
"azimuth": 100
},
{
"md": 1000,
"inclination": 20,
"azimuth": 110
},
...
```
The interpolated stations/densified path is returned as `Stations_i` as follows.
(note the algorithm should set the "original" flag to true for MDs exactly at an inputStation, as shown for md=0 in the below example. For interpolated stations, the INC_i and AZI_i are returned as computed at the MD_i as shown).
```json
"stations_i": [
{
"md": 0.0,
"inclination": 10.0,
"azimuthTN": 100.51354989131318,
"azimuthGN": 100.0,
"dxTN": 0.0,
"dyTN": 0.0,
"scalefactor": 1.000241,
"convergence": 0.51355,
"point": {
"x": 1999999.99999999,
"y": 9999999.99999969,
"z": 0.0
},
"wgs84Longitude": -85.88980921169528,
"wgs84Latitude": 27.553258329196616,
"dls": 0.0,
"original": true,
"dz": 0.0
},
{
"md": 200.0,
"inclination": 12.000031,
"azimuthTN": 102.51354989131318,
"azimuthGN": 102.0023012,
...
"original": false,
...
```
Note:
- Consider, if we have MD and INC but not AZI, whether this interpolation can also be used to compute paths for INC-only stations. In that algorithm one may want to interpolate the AZI.
- Consider if this can be used to project out if MD_i is past the last MD (keep the same INC and AZI essentially as last station) in that special case.
- Consider what to do if MD_i is given and `interpolate`==True. We propose keeping the algorithm's current behavior and not changing it. One could consider a flag to output interleaved interpolated stations in `stations`.

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/123
Incorrect status is being returned upon creating the schema that already exists in the system (Kamlesh Todai · M19 - Release 0.22)

When one tries to create a schema that already exists in the system, one gets the return error code **400 - Bad request**. As per the API documentation this is correct, but I think the error code is misleading. The message returned is also misleading: it returns "message": "Update/Create failed because schema id is present in another tenant", which is not true because the schema is present in the same tenant.
This is what one would expect: the return error code should be **409 Conflict**, indicating that the schema is already present, and the message should be "schema is present".

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/9
Keep only DAG files in the dags folder (Yan Sushchynski (EPAM) · M19 - Release 0.22 · Denis Karpenok (EPAM), Chad Leong, Ashish Saxena, Priyanka Bhongade)

It would be great if the `dags` folder contained only the following files: `src/dags/eds_scheduler/eds_scheduler_dag.py` and `src/dags/eds_ingest/src_dags_fetch_ingest_scheduler_dag.py`.
Other files with utilities and custom Airflow operators are supposed to be stored in either `plugins`-folder or in a separate Python-package.
There are two reasons to do so:
1. The Airflow scheduler has to parse non-DAG files each time, which takes extra time
2. Possible import issues
You might find this link useful: https://airflow.apache.org/docs/apache-airflow/2.2.5/plugins.html

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/14
EPC and HDF5 files must have the same name (Philippe Verney · M19 - Release 0.22 · Laurent Deny)

Hello,
If an EPC file does not have the same name as its corresponding HDF5 file, it does not appear to be supported for import into the open etp server.
The RESQML standard allows, even if it is not recommended, an HDF5 file to be named differently, since its name is given in the rel file of the obj_ExternalPartReference.
![image](/uploads/6c18ba5512d4bf11b7cc07d05d207b13/image.png)