# OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues · updated 2021-10-20

## NoHostAvailableException Issue
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/76
Author: Yifan Ye · Assignee: Yifan Ye · Milestone: M9 - Release 0.12 · Updated: 2021-10-20

There is no handler/logger for NoHostAvailableException, so this exception cannot be traced when it occurs.

## M8 Data Definitions content
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/65
Author: Thomas Gehrmann [slb] · Assignee: Thomas Gehrmann [slb] · Milestone: M9 - Release 0.12 · Updated: 2021-09-22

- [ ] Publish the new types and (decorative) changes from the Data Definitions member GitLab to the bootstrap resources.

## offset parameter is not working
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/64
Author: Abhishek Kumar (SLB) · Assignee: Abhishek Kumar (SLB) · Milestone: M9 - Release 0.12 · Updated: 2021-12-14

The offset parameter is not working when searching the schemaInfo repository. If I specify the data-partition-id opendes, set the limit to 1, and run two queries, one with an offset of 0 and one with an offset of 99, I get the same record.
To summarize, changing the offset value has no impact on the result list. Ideally, if there are multiple schemas in the list, the behaviour should be:
S1,S2,S3,S4
Offset=0,Limit=1 ==> Should return S1
Offset=1,Limit=1 ==> Should return S2
Offset=2,Limit=1 ==> Should return S3
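For illustration, a minimal reproduction sketch against the documented endpoint (the path prefix, auth placeholders, and response field names are assumptions, not verified against this deployment):

```python
import requests

base_url = "https://<osdu-host>"            # placeholder
headers = {
    "Authorization": "Bearer <token>",      # placeholder
    "data-partition-id": "opendes",
}

# Expected: each offset returns a different record (S1, S2, S3, ...).
# Observed: every offset returns the same record.
for offset in (0, 1, 2):
    resp = requests.get(
        f"{base_url}/api/schema-service/v1/schema",   # path prefix assumed
        headers=headers,
        params={"limit": 1, "offset": offset},
    )
    # response field names assumed
    print(offset, [s["schemaIdentity"]["id"] for s in resp.json()["schemaInfos"]])
```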
## Review documentation
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/19
Author: Gisele Souza · Assignee: Gisele Souza · Milestone: M9 - Release 0.12 · Updated: 2021-08-31

Review the provided documentation:
- Update the subcommittee page
- Update the Glossary
- Update the API doc

## Alignment with EA's DDMS rules
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/18
Author: Gisele Souza · Assignee: Gisele Souza · Milestone: M9 - Release 0.12 · Updated: 2021-08-31

Review the EA guidelines and provide a development plan.

## Wellbore DDMS v3 APIs
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/17
Author: Gisele Souza · Milestone: M9 - Release 0.12 · Updated: 2021-10-20
As part of the development of the v3 APIs, bulk data access has been added. The Wellbore APIs have been developed; however, CSPs need to implement specific code to support the new APIs.
**Copy from the [slack post](https://opensdu.slack.com/archives/C015T1RC0Q6/p1625047059076300):**
Vincent Rondot [SLB] 11:57 AM
Hi,
to follow-up on a topic raised some weeks ago [slack post 1](https://opensdu.slack.com/archives/C015T1RC0Q6/p1621240194082100) & [slack post 2](https://opensdu.slack.com/archives/C015T1RC0Q6/p1624519142062400),
I am kindly reminding that **we need the help of the different CSPs** to implement a small CSP specific logic required for the new Bulk Data APIs (chunking APIs).
So far, the Azure and GCP implementations have been taken care of; the IBM and AWS ones are missing.
The changes required are located in:
- Cloud specific Libraries:
You need to create a factory method which returns a DaskStorageParameter object.
(https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/lib/wellbore-core/wellbore-core-lib/-/blob/master/osdu/core/api/storage/dask_storage_parameters.py)
- Cloud specific Injectors:
You need to enrich the Injector to create a DaskBulkStorage consuming the DaskStorageParameters of the specific CSP.
Placeholders have been added and need to be properly implemented:
AWS: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/blob/master/app/injector/aws_injector.py#L36
IBM: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/blob/master/app/injector/ibm_injector.py#L28
Feel free to refer to the Azure or GCP implementation as a guideline:
Azure:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/lib/wellbore-cloud/wellbore-azure-lib/-/blob/master/osdu_az/storage/dask_storage_parameters.py
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/blob/master/app/injector/az_injector.py#L36
GCP:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/lib/wellbore-cloud/wellbore-gcp-lib/-/blob/master/osdu_gcp/storage/dask_storage_parameters.py
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/blob/master/app/injector/gcp_injector.py
You may refer to Dask documentation for further info: https://docs.dask.org/en/latest/remote-data-services.html
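For orientation, a hypothetical AWS-flavoured sketch of the factory described above (class and argument names are assumptions based on the linked dask_storage_parameters.py; the Azure/GCP implementations linked above are the authoritative guide):

```python
# Hypothetical sketch only; field names are assumptions based on
# osdu/core/api/storage/dask_storage_parameters.py in wellbore-core.
from osdu.core.api.storage.dask_storage_parameters import DaskStorageParameters

async def get_dask_storage_parameters(bucket_name: str) -> DaskStorageParameters:
    # 'protocol' selects the fsspec/Dask filesystem ("s3" for AWS);
    # 'storage_options' carries the CSP-specific credentials for fsspec.
    return DaskStorageParameters(
        protocol="s3",                 # assumption: AWS S3 backend
        base_directory=bucket_name,    # assumption: per-tenant bucket
        storage_options={"anon": False},
    )
```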
Also don't hesitate to contact us for any help and guidance for those changes.
Thanks,
Regards,
Vincent
Link to video with code walk-through: https://opensdu.slack.com/archives/C015T1RC0Q6/p1628693191059100

## As part of GSM feature, Implement status publishing method in GCP
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/45
Author: Mahesh Daksha · Assignee: Riabokon Stanislav (EPAM) [GCP] · Milestone: M9 - Release 0.12 · Updated: 2021-10-06

This is as per the GSM requirement to be implemented by each CSP. This issue was created for the GCP team to implement the publish method that publishes status events to the message queue.

## As part of GSM feature, Implement status publishing method in IBM
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/43
Author: Mahesh Daksha · Milestone: M9 - Release 0.12 · Updated: 2022-04-08

This is as per the GSM requirement to be implemented by each CSP. This issue was created for the IBM team to implement the publish method that publishes status events to the message queue.

## Implement CSP specific changes for File DMS APIs support
https://community.opengroup.org/osdu/platform/system/file/-/issues/39
Author: Krishna Nikhil Vedurumudi · Milestone: M9 - Release 0.12 · Updated: 2021-10-19
Follow up for https://community.opengroup.org/osdu/platform/system/file/-/issues/29, where the Core and Azure changes for File DMS were completed.
Implementation Updates:
* Core Common changes with skeleton interfaces and model objects - https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/merge_requests/99
* File service changes with Core and Azure implementation for File DMS APIs - https://community.opengroup.org/osdu/platform/system/file/-/merge_requests/128
* Dataset Service changes consuming the APIs - https://community.opengroup.org/osdu/platform/system/dataset/-/merge_requests/109
* Entitlements changes - New role for dataset - https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/merge_requests/90
Work for Other CSPs:
* Using the feature flag in Dataset Service, use the latest DMS API endpoints.
* Create new role in Entitlements -> https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/merge_requests/90
* File Service changes - Implement `final IStorageService storageService;` for interactions with CSP-specific blob store providers.

## Manifest ingestion does not show any updates in airflow when backslash character used in json body
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/82
Author: Naufal Mohamed Noori · Milestone: M9 - Release 0.12 · Updated: 2021-10-19

**Description**:
Using the manifest ingestion (DAG) workflow service, when a user inserts a backslash (\) into the JSON manifest body, the workflow run gets stuck in SUBMITTED status. There is also no trace of the runId in the Airflow log.
**Steps to reproduce:**
a) Insert this JSON body into the DAG workflow request: [With_Backslash_BodyData.json](/uploads/c2f2e8e8241df526830a73cc9ba2336a/With_Backslash_BodyData.json)
b) When the JSON body is submitted to base_url/api/workflow/v1/workflow/Osdu_ingest/workflowRun, the workflow is submitted successfully with the following response:
```
{
  "workflowId": "dev:Osdu_ingest",
  "runId": "4327f575-e7b3-490f-a1ee-b1e2e950c2a4",
  "startTimeStamp": 1627041278115,
  "status": "submitted",
  "submittedBy": "naufal.noori@katalystdm.com"
}
```
c) After a while, check the DAG run status: the workflow still shows the run in submitted status, and there is no trace of the run ID in the Airflow log (this follow-up check was done after 24 hours):
_Endpoint_: base_url/api/workflow/v1/workflow/Osdu_ingest/workflowRun/4327f575-e7b3-490f-a1ee-b1e2e950c2a4
_Response_:
```
{
  "workflowId": "dev:Osdu_ingest",
  "runId": "4327f575-e7b3-490f-a1ee-b1e2e950c2a4",
  "startTimeStamp": 1627041278115,
  "status": "submitted",
  "submittedBy": "naufal.noori@katalystdm.com"
}
```
d) In a second trial run with the backslash character replaced by an empty character, the workflow ran perfectly and its run shows up in the Airflow log. [With_NO_Backslash_BodyData.json](/uploads/9fdbc2a59a930444feeb6bfacd1e1200/With_NO_Backslash_BodyData.json)
**Expectation**:
We expect the workflow run to fail the request with a clear and meaningful error message, e.g. "Request failed. There are disallowed special characters from line #something to line #something in your JSON body."
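For illustration only (a minimal sketch, not the workflow service's actual validation), the run submission could reject such payloads up front; a stray backslash makes the body invalid JSON, and `json.JSONDecodeError` carries the line/column needed for the kind of message described above:

```python
import json

def validate_manifest_body(raw_body: str) -> None:
    """Reject a manifest whose escape sequences are not valid JSON,
    instead of silently accepting it and stalling in SUBMITTED."""
    try:
        json.loads(raw_body)
    except json.JSONDecodeError as err:
        # err.lineno/err.colno point at the offending character, enabling
        # the meaningful error message the issue asks for.
        raise ValueError(
            f"Request failed: invalid character in JSON body "
            f"at line {err.lineno}, column {err.colno}: {err.msg}"
        ) from err
```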
**Reason**
It is confusing for users to have a run successfully submitted but then stuck in processing without any log trace whatsoever.
cc @debasisc

## Support for version upgrade during schema-validation
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/61
Author: Abhishek Kumar (SLB) · Assignee: Abhishek Kumar (SLB) · Milestone: M9 - Release 0.12 · Updated: 2022-02-25

There is intensive use of `$ref` to schema fragments, which are incoming as schema IDs, e.g. `"$ref": "osdu:wks:AbstractSpatialLocation:1.0.0"`. These fragments use semantic versioning as well.
As a consequence, these IDs should also be validated during schema validation (a sketch of the check follows the examples below):
- `$ref` target versions for patch increments can only move to higher patch versions; if they refer to a higher minor (or even major) version, validation must fail.
<br>**Example:**
| Sl No| Type | Initial Version | New Version | Status |
| ------ | ------ | ------ | ------ |------ |
| 1| Base Schema | osdu:wks:AbstractSpatialLocation:1.0.0 | osdu:wks:AbstractSpatialLocation:1.0.1 | |
| 2| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.1.2 | valid |
| 3| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.2.1 | invalid |
| 4| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.0.1 | invalid |
| 5| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.1.0 | invalid |
| 6| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:3.1.1 | invalid |
| 7| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources-Changed:2.1.1 | invalid |
- Similarly, `$ref` target versions for higher minor versions can only refer to higher minor (or patch) versions; higher major version references must be rejected by the validation.
<br>**Example:**
| Sl No| Type | Initial Version | New Version | Status |
| ------ | ------ | ------ | ------ |------ |
| 1| Base Schema | osdu:wks:AbstractSpatialLocation:1.0.0 | osdu:wks:AbstractSpatialLocation:1.1.0 | |
| 2| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.1.2 | valid |
| 3| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.2.1 | valid |
| 4| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.0.1 | invalid |
| 5| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:2.1.0 | invalid |
| 6| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources:3.1.1 | invalid |
| 7| $ref | osdu:wks:AbstractCommonResources:2.1.1 | osdu:wks:AbstractCommonResources-Changed:2.1.1 | invalid |
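A minimal sketch of the proposed check (illustrative only, not the schema service's implementation), encoding the two rules and the entity-name case from the tables above:

```python
# Schema IDs follow "authority:source:EntityName:Major.Minor.Patch".
def parse_id(schema_id: str):
    authority, source, entity, version = schema_id.rsplit(":", 3)
    major, minor, patch = (int(v) for v in version.split("."))
    return (authority, source, entity), (major, minor, patch)

def ref_upgrade_is_valid(base_bump: str, old_ref: str, new_ref: str) -> bool:
    """base_bump is 'patch' or 'minor': the increment of the base schema.
    Unchanged refs are treated as allowed."""
    old_name, (old_major, old_minor, old_patch) = parse_id(old_ref)
    new_name, (new_major, new_minor, new_patch) = parse_id(new_ref)
    if old_name != new_name:          # e.g. ...-Changed:2.1.1 -> invalid (row 7)
        return False
    if base_bump == "patch":
        # Only the patch component may increase (rows 2-6, table 1).
        return (new_major, new_minor) == (old_major, old_minor) and new_patch >= old_patch
    if base_bump == "minor":
        # Same major; minor/patch may increase (rows 2-6, table 2).
        return new_major == old_major and (new_minor, new_patch) >= (old_minor, old_patch)
    return False

# ref_upgrade_is_valid("patch", "osdu:wks:AbstractCommonResources:2.1.1",
#                      "osdu:wks:AbstractCommonResources:2.2.1")  -> False (row 3)
```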
## Cleanup for existing folder structure for airflowdags in CSV Parser repo
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/39
Author: harshit aggarwal · Assignee: harshit aggarwal · Milestone: M9 - Release 0.12 · Updated: 2021-09-09

This issue is related to https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/38 and this [ADR](https://community.opengroup.org/osdu/platform/data-flow/home/-/issues/47).
The following MR \[[Link](https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/merge_requests/118)\] introduces changes to the DAG repo structure to support packaged DAGs.
Hence, cleanup of the existing `airflowdags/dags` and `airflowdags/plugins` folders is required.

## Adopting new Dag Repo Structure for supporting Packaged Dags
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/38
Author: harshit aggarwal · Assignee: harshit aggarwal · Milestone: M9 - Release 0.12 · Updated: 2021-09-09
This issue is in line with the approved [ADR](https://community.opengroup.org/osdu/platform/data-flow/home/-/issues/47).
The following MR [[Link](https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/merge_requests/118)] introduces changes to the DAG repo structure to support packaged DAGs.
This is the new proposed structure for the CSV parser:
```
├── airflowdags
├── osdu_csv_parser
| ├── __init__.py
| └── xyz.py
|
|___csv_ingestion_all_steps.py
```
As of now, only Azure has changed its pipeline to honor this new structure; other CSPs should update their pipelines to support it as well.
Once all CSP pipelines have moved to the new structure, the existing `airflowdags/dags` and `airflowdags/plugins` folders will be cleaned up.

## File should get download with actual filename and extension after hitting downloadUrl
https://community.opengroup.org/osdu/platform/system/file/-/issues/35
Author: sachin Gupta · Assignee: Paresh Behede · Milestone: M9 - Release 0.12 · Updated: 2023-04-13
### Problem Statement
Currently, the file service stores files in the persistent location under a random name and without an extension. When a file is downloaded via the signed download URL, it arrives without an extension, and an extension must be added explicitly to view its content.
### Solution
We can overcome this problem by adding Content-Disposition and Content-Type headers at the time the download signed URL is created. The file name can be taken from the file metadata payload and the content type derived from the file extension; both can then be used when creating the download URL (see the sketch below).
Note: the name field in the metadata payload is optional; if it is not present, the above solution won't apply and the current implementation of download URL creation is followed.
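For illustration, a minimal sketch of the header construction (a hypothetical helper, not the file service's actual code), including the fallback described in the note above:

```python
import mimetypes

def build_download_headers(metadata: dict) -> dict:
    """Headers to attach when creating the download signed URL."""
    name = metadata.get("name")          # optional field in the metadata payload
    if not name:
        return {}                        # fall back to the current behaviour
    content_type, _ = mimetypes.guess_type(name)
    return {
        "Content-Disposition": f'attachment; filename="{name}"',
        "Content-Type": content_type or "application/octet-stream",
    }
```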
## Fix Search return fields Issue (Blocker)
https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/20
Author: Ash Sathyaseelan · Assignee: Hrvoje Markovic · Milestone: M9 - Release 0.12 · Updated: 2022-02-01

## Policy service fails to get legaltags' descriptions
https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/17
Author: Hanna Kavalionak · Milestone: M9 - Release 0.12 · Updated: 2022-02-01

The Policy service requests the descriptions of all legaltags assigned to records.
The Legal service legaltags:batchRetrieve request is limited to 25 legaltags as input. The Search service can request policy evaluation for thousands of records. So, we can't use legaltag description info for writing policy rules.
Example:
Let's say there are 100 file records (the opendes partition is used) that we are going to search for:
query:
```
{
  "kind": "opendes:wks:dataset--File.Generic:1.0.0",
  "limit": 100
}
```
Set of records: [test_data.txt](/uploads/9ff2e68215b35db4b6aaaf35f83b194a/test_data.txt)
The Legal service log:
```
AppException(error=AppError(code=400, reason=Validation failed., message=Validation failed., errors=null, debuggingInfo=null, originalException=org.springframework.web.bind.MethodArgumentNotValidException: Validation failed for argument [0] in public org.springframework.http.ResponseEntity<org.opengroup.osdu.legal.tags.dto.LegalTagDtos> org.opengroup.osdu.legal.api.LegalTagApi.getLegalTags(org.opengroup.osdu.legal.tags.dto.RequestLegalTags): [Field error in object 'requestLegalTags' on field 'names': rejected value [[opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam, opendes-public-usa-dataset-epam]]; codes [Size.requestLegalTags.names,Size.names,Size.java.util.List,Size]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [requestLegalTags.names,names]; arguments []; default message [names],25,1]; default message [size must be between 1 and 25]] ), originalException=org.springframework.web.bind.MethodArgumentNotValidException: Validation failed for argument [0] in public org.springframework.http.ResponseEntity<org.opengroup.osdu.legal.tags.dto.LegalTagDtos>
```
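For illustration, a minimal caller-side sketch (not Policy service code; `batch_retrieve` is a hypothetical wrapper around the Legal service's `legaltags:batchRetrieve` call) that deduplicates the tag names and chunks them to respect the limit of 25:

```python
def retrieve_descriptions(legal_tags, batch_retrieve, batch_size=25):
    """Fetch legaltag descriptions in batches of <= 25, the limit
    enforced by the Legal service's batchRetrieve validation."""
    unique_names = sorted(set(legal_tags))   # the failing request sent 50 duplicates
    descriptions = {}
    for i in range(0, len(unique_names), batch_size):
        chunk = unique_names[i:i + batch_size]
        for tag in batch_retrieve(chunk):    # hypothetical wrapper, one HTTP call
            descriptions[tag["name"]] = tag.get("description")
    return descriptions
```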
## IBM support to move to airflow 2.0
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/66
Author: jingdong sun · Assignees: Anuj Gupta, Shaon · Milestone: M9 - Release 0.12 · Updated: 2021-11-18

## ADR : New API to handle System workflows
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/118
Author: preeti singh[Microsoft] · Assignee: preeti singh[Microsoft] · Milestone: M9 - Release 0.12 · Updated: 2023-12-15

**Context:**
===
System workflows are workflows that are available to all data partitions. Any system workflow can be triggered and retrieved by any tenant, but it can be created, updated, or deleted only by a user with a special privilege (let's say a system role).
This is more with respect to the workflows or DAGs that OSDU provides.
**How it's done today:**
===
- There is no concept of system workflows.
- The workflow metadata is stored in a partition-specific Cosmos collection.
**Issue with current design:**
===
- The behavior of the create API endpoint would change and can confuse users if the same endpoint is used for both system and private workflows. Users might end up unknowingly creating a system workflow by passing the data-partition-id of the special partition.
- Updates for changes coming from the OSDU community are difficult to manage if the information is copied or replicated across all the customer partitions.
**Proposal:**
===
There are two types of workflows in the system, System workflows and Private workflows. The proposal is to create a new API endpoint to register System workflows.
- The new API shall be termed as `workflow/system`
- To **create/update/delete** System workflows - `/workflow/system` endpoint shall be used
- To **Get/Trigger** System workflows, existing workflow service endpoint must be used.
- The authorization of new end point shall be different from existing groups. We'll use service principal based authorization.
- The new API shall not accept data-partition-id as a header. Service would be aware where the System workflows are located.
- This API should interact *only* with System workflows. It should not have access to other workflows. (An illustrative call is sketched below.)
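For illustration only (the payload shape and path prefix are assumptions from this proposal, not a finalized contract), registering a system workflow could look like:

```python
import requests

base_url = "https://<osdu-host>"         # placeholder
token = "<service-principal-token>"      # placeholder; special privilege required

# Hypothetical call against the proposed endpoint. Note: no
# data-partition-id header, per the proposal above.
resp = requests.post(
    f"{base_url}/api/workflow/v1/workflow/system",
    headers={"Authorization": f"Bearer {token}"},
    json={                               # assumed field names
        "workflowName": "Osdu_ingest",
        "description": "Community-provided manifest ingestion DAG",
        "registrationInstructions": {"dagName": "Osdu_ingest"},
    },
)
resp.raise_for_status()
```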
**Sequence Diagram for createWorkflow**
![createWorkflow](/uploads/da01a45cf14062aad0a5cfc48bd51c3d/createWorkflow.png)
**Sequence Diagram for getallWorkflows**
![getallWorkflows](/uploads/f767cc09ea1b27baeb6137e6dbdd9959/getallWorkflows.png)
**Sequence Diagram for getWorkflow option 1** (this option was finalized)
![getWorkflowOption1](/uploads/55dcd0f4ece4c0df83f5bef057c1cfbb/getWorkflowOption1.png)
**Sequence Diagram for getWorkflow option 2**
![getWorkflowOption2](/uploads/5ee667cb7226a62a05a03a8effa27988/getWorkflowOption2.png)

## ADR: New API to handle system schemas
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/51
Author: Aman Verma · Assignee: Aman Verma · Milestone: M9 - Release 0.12 · Updated: 2023-02-22

## Status
* [ ] Proposed
* [ ] Trialing
* [ ] Under review
* [x] Approved
* [ ] Retired
Context:
===
Public schemas are schemas which are available to all tenants out of the box. Today the public schemas include schemas from OSDU or SLB and are located in the Schema service repository here: https://community.opengroup.org/osdu/platform/system/schema-service/-/tree/master/deployments/shared-schemas
How it's done today:
===
- Deployment expects the existence of a special partition (generally termed `common`) which is used to load public schemas.
- Schema service has only one endpoint today `/schema`. Based on the `data-partition-id` passed in the request headers, it's decided whether the schema is public or private.
- If the data-partition-id passed is the "special partition", then the created schema is public; otherwise it's private. This is indicated by a field named `SCOPE` in the created schema.
Issue with current design:
===
1. The API behavior of the `/schema` endpoint changes based on the data-partition-id header, which can confuse users. Users might end up unknowingly creating public schemas by passing the data-partition-id of the special partition.
2. A special partition is expected to be provisioned solely for provisioning public schemas.
Proposal:
===
There are two types of schemas in the system, public (or system) schemas and private schemas. The proposal is to create a dedicated API to create/update system schemas. Hence:
- The new API shall be termed **`/schemas/system`**
- **To create/update public schemas** - the `/schemas/system` endpoint shall be used
- **To read public schemas** - the existing `GET /schema` endpoint shall be used (same as the current behaviour)
- The new API shall not accept `data-partition-id` as a header. The service would be aware of where the public schemas are located.
- The authorization of the new endpoint shall be different from the existing schema.editor/viewer role. (An illustrative call is sketched below.)
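For illustration only (the path prefix is an assumption, and the request body is assumed to mirror the existing `PUT /schema` contract), creating a public schema could look like:

```python
import requests

base_url = "https://<osdu-host>"         # placeholder
token = "<system-privileged-token>"      # placeholder

# Hypothetical call against the proposed endpoint. Note: no
# data-partition-id header; the service knows where public schemas live.
resp = requests.put(
    f"{base_url}/api/schema-service/v1/schemas/system",   # path prefix assumed
    headers={"Authorization": f"Bearer {token}"},
    json={
        "schemaInfo": {                  # assumed: same shape as PUT /schema
            "schemaIdentity": {
                "authority": "osdu",
                "source": "wks",
                "entityType": "AbstractSpatialLocation",
                "schemaVersionMajor": 1,
                "schemaVersionMinor": 0,
                "schemaVersionPatch": 1,
            },
            "status": "PUBLISHED",
        },
        "schema": {"$schema": "http://json-schema.org/draft-07/schema#",
                   "type": "object"},
    },
)
resp.raise_for_status()
```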
**Sequence diagram for creating public schemas**
![image](/uploads/d90a62b4d9a1aae2c6c7a18db4cb66e4/image.png)
**Sequence diagram for reading public schema**
![image](/uploads/53d3e00c194f5ed3e0889682f8184a8f/image.png)