OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues, updated 2023-07-20)

https://community.opengroup.org/osdu/platform/system/storage/-/issues/177
Integration test coverage for users.data.root (2023-07-20, Rustam Lotsmanenko (EPAM) <rustam_lotsmanenko@epam.com>)

Changes to data authentication were recently introduced with merge request https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/694. However, we currently lack integration test cases to cover these modifications.
It is essential to ensure that these changes won't disrupt the current flow and that `users.data.root` will consistently have access to ingested data.
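As a starting point, the rule the tests should lock in can be sketched in isolation. A minimal Python sketch, assuming a hypothetical `has_data_access` helper (the real enforcement lives inside the Storage service's entitlements checks):

```python
# Sketch of the access rule the new integration tests should pin down:
# members of users.data.root bypass record ACL checks and always have
# access to ingested data. has_data_access is a hypothetical helper,
# not the actual Storage service code.
def has_data_access(caller_groups, acl_viewers, acl_owners):
    """Return True when the caller may read the record."""
    if "users.data.root" in caller_groups:
        return True  # root data group always has access
    allowed = set(acl_viewers) | set(acl_owners)
    return bool(allowed & set(caller_groups))
```

An integration test would then assert the same behavior end-to-end: ingest a record as a regular user and read it back with a `users.data.root` member.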
To address this, we need to implement integration test cases to cover the new data authentication mechanisms. (Milestone: M20 - Release 0.23)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/32
Versal Spatial Data Ingestion (While Ingesting the data, getting Spatial Coordinate block as Empty) (2023-10-05, Selva Kumar Senathipathy)

As part of the Versal OSDU integration, spatial coordinate blocks are inserted as empty blocks into the OSDU target system.
While going through the Airflow code, we made the following observation: in FetchAndIngest, the record-cleaning process removes coordinates whenever they contain nested lists. The cleaning process appears to support only Point-type geometry, whereas Versal has MultiLineString and MultiPolygon geometry with nested lists of coordinates.
For a nested list, e.g. [[-0.7484, 61.4182], [-0.9396, 61.4893]], the _iterate_list method returns an empty list. Please find below snapshots of the methods where we think coordinate values are removed when the coordinates are in the form of a nested list.
**Repo Link**: https://community.opengroup.org/osdu/platform/data-flow/ingestion/osdu-airflow-lib/-/blob/master/osdu_airflow/eds/eds_ingest/clean_records.py
![Air_Flow_clean_process](/uploads/8f41d0527d37d578f1395d4af5d1993b/Air_Flow_clean_process.jpg)
![Air_Flow_List_Iterate](/uploads/aec8664b4a47290f61f7425025d8dab4/Air_Flow_List_Iterate.jpg)
(Milestone: M20 - Release 0.23)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/31
Data-partition-id is hardcoded (2024-02-20, Yan Sushchynski (EPAM))

Hello,
It seems that we currently need to hardcode the `data-partition-id` value. However, this approach does not work with multi-partition setups, where services use the data-partition value a user passed.
Could we ask you to add the multipartition support?
Thanks.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/101
Long running query (in Azure) is not cancelled (2023-08-07, Laura Damian)

Steps to reproduce:
- try to retrieve datasets in a folder with more than 1M datasets
- cancel the request or wait for timeout in the typescript code
- check the "Total Request Units" in CosmosDB metrics. These do not decrease.
Suggestions:
- Add an explicit timeout in the typescript code for the calls to the sidecar.
- Add a cancellation token in the sidecar's endpoints and propagate it to CosmosDB.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/issues/25
windows build for sgysdk (2023-07-16, Qiang Fu)

Is the Windows build for segysdk available in DLL format?

https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/70
Invalid data-partition-id will create 500 code with no Authorize message. (2023-10-10, Bruce Jin)

While making calls to OSDU services such as the `secret` and `storage` services, testers discovered that putting invalid symbols in `data_partition_id` yields a 500 code, but with a reason of Access Denied.
After investigation, we realized the Partition service did not consider the situation where a user puts invalid URI symbols such as `@#$%` in the data partition id, which makes the `normalizeStringUrl` function throw a java.lang exception in [UrlNormalizationUtil.java](https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/util/UrlNormalizationUtil.java).
```
Caused by: java.lang.IllegalArgumentException: Malformed escape pair at index 57: http://os-partition:8080/api/partition/v1/partitions/osdu%
at java.net.URI.create(URI.java:852)
at org.opengroup.osdu.core.common.util.UrlNormalizationUtil.normalizeStringUrl(UrlNormalizationUtil.java:27)
```
This generates a 500 code in the Entitlements service, since the service treats this error as a general error in [SpringExceptionMapper.java](handleGeneralException) instead of returning a 400 code.
Also, in Entitlements the error message is processed within [AuthorizationServiceImpl.java](https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/entitlements/AuthorizationServiceImpl.java), so it carries the `"Access denied", "The user is not authorized to perform this action"` error message.
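A sketch of the intended behavior: validate the partition id up front and map malformed ids to a 400 client error instead of letting URI normalization raise and surface as a 500. This is a Python stand-in for the Java code path, and the allowed character set below is an assumption, not the OSDU specification:

```python
# Hypothetical validator: reject partition ids containing characters that
# are not safe in a URI path segment before any URL normalization runs.
import re

_PARTITION_ID_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def validate_partition_id(partition_id):
    """Return (ok, http_status) for a proposed data-partition-id."""
    if not partition_id or not _PARTITION_ID_RE.match(partition_id):
        return False, 400  # malformed input is a client error, not a 500
    return True, 200
```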
Here is an MR that will handle the 500 code produced from `java.lang.IllegalArgumentException`: https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/merge_requests/219

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/137
Documentation: Request to expose similar documentation for content schema that we do for WKS schemas (2023-10-03, Bryan Dawson)

@debasisc brought up that it would be nice to have an easier-to-consume version of the content schemas, in a similar vein to what is done for the WKS schemas at https://community.opengroup.org/osdu/data/data-definitions/-/tree/master/E-R

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/275
Transformer - Improve Trajectory Workflow (2023-08-23, Noel Okanya)

As a GCZ Developer, I want to improve the trajectory ingestion workflow, so that maintenance and performance are most optimal.
Acceptance Criteria
- Trajectory ingestion workflow refactored to use new OSDU helper functions.
- Perform additional testing on trajectory ingestion on the current data set

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/273
Transformer - Monitor Ignite for Errors and Inform Stakeholders (2023-07-12, Noel Okanya)

As a GCZ Developer, I want to monitor Ignite for errors and inform stakeholders, so that we can communicate the current status of GCZ services in a timely manner.
Acceptance Criteria
- Stakeholders are informed of GCZ Services in a timely manner

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/272
Transformer - Enhance Logging for Exception Handling (2023-07-12, Noel Okanya)

As a GCZ Developer, I want to enhance logging for exception handling, so that the actual cause is logged in the Transformer logs.
Triggers:
- Observe invalid auth token
- Empty records
Acceptance Criteria:
- Enhanced Logging for Exception Handling
- Get count exception

https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/82
Update postman collection path for CSV parsers IT (2023-09-01, Chad Leong)

Failure in https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/merge_requests/390 shows that the postman collection path needs to be updated to reflect the latest test directory and collections in https://community.opengroup.org/osdu/qa/-/tree/main/Postman%20Collection/31_CICD_Setup_CSVIngestion
| CSP | Change path to |
| ------ | ------ |
| AWS | https://community.opengroup.org/osdu/qa/-/blob/main/Postman%20Collection/31_CICD_Setup_CSVIngestion/CSVWorkflow_AWS_CI-CD_v2.0..postman_collection_NotRun.json |
| IBM | https://community.opengroup.org/osdu/qa/-/blob/main/Postman%20Collection/31_CICD_Setup_CSVIngestion/CSVWorkflow_CI-CD_v2.0.postman_collection.json |
| GC | https://community.opengroup.org/osdu/qa/-/blob/main/Postman%20Collection/31_CICD_Setup_CSVIngestion/CSVWorkflow_CI-CD_v2.0.postman_collection.json |
- [X] Azure - Done
- [X] AWS - https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/merge_requests/391
- [ ] IBM - Pending
- [ ] GC - Pending (Milestone: M20 - Release 0.23)

https://community.opengroup.org/osdu/platform/deployment-and-operations/base-containers-azure/alpine-zulu17/-/issues/1
Use distroless java mariner instead of alpine (2023-07-12, Arturo Hernandez [EPAM])

This image uses Alpine as its base; it would be nice if we could match the `mariner`-based image.
This is the one used by ADME as far as I recall, adding @lucynliu for confirmation.
I would suggest starting to use [Microsoft-openjdk](https://hub.docker.com/_/microsoft-openjdk-jdk) instead of Alpine, unless there is a strong reason not to, such as a bug that has not been fixed.
Additionally, I would suggest starting to adopt distroless-based images, which enhance security for all the Java-based services.

https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/issues/25
Istio version upgrade + health check (2023-07-12, Arturo Hernandez [EPAM])

Istio will not support newer Kubernetes versions: Kubernetes `1.25` is the latest version supported by the Istio version we currently recommend installing, `1.15`.
[Istio Releases](https://istio.io/latest/docs/releases/supported-releases/)
I would recommend starting to think about this upgrade; it is normally better to do it sooner rather than later.
The recommended Istio version would be **`1.18.x`**
Additionally, we are still using a default pod to measure the API gateway health check. The MSFT team suggested getting rid of that, as it is a single point of failure for App Gateway: all services become unavailable if the default pod is unavailable.
The recommended approach would be to redirect to the Istio gateway health check. The Istio gateway is configured to scale automatically, so if this health check fails, Istio itself has most likely failed, making it an accurate health check.
cc. @lucynliu @nursheikh
Let me know if we can start working on this and donate it to the community.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/123
Prepare consistent service bootstrapping testing dataset (2023-10-12, Siarhei Khaletski (EPAM))

## Context
Nearly all OSDU services require bootstrapping datasets to verify service correctness after deployment.
Some require uploading schemas to the Schema service; others require master data and reference data uploads, etc.
## Scope
The team has already prepared testing Postman Collection (https://community.opengroup.org/osdu/qa/-/tree/main/Dev/48_CICD_Setup_RAFSDDMSAPI)
It requires a careful review by an SME to ensure that the dataset is consistent:
1. Source files (XLS, PDF) are valid and publicly available, or (given authorization) are sanitized of any identifying information to ensure they do not violate any IP rights
2. Dataset records (metadata) describe their respective source files
3. Derived meta-data from the source files are inserted into the proper WPC (within the testing Postman Collection)
4. Derived bulk-data from the source files are inserted into the /data endpoints.
The goal is to have a clear and consistent chain of `source file` -> `source file dataset` -> `WPC` -> `bulk Data`.
Note: preliminary steps for registering source file datasets must be added to the project README file. (Milestone: RAFS DDMS Sprint 19)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/75
Log Recognition / Family call fails with 503 error (2023-07-19, Mariel Herzog)

The Wellbore Log Recognition / Family call fails with a 503 error intermittently.
1) Need to understand and resolve the error.
2) Additionally, it is critical to understand if during this failure there is any data loss or lack of data mapping.
3) Ask to enhance documentation from the domain side about how the call works today.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/122
Water Analysis - request showcasing of implementation by using flexible model (2023-10-03, Debasis Chatterjee)

The current example uses a custom schema entity:
rafsddms:wks:work-product-component--WaterAnalysisTest:1.0.0
Presumably, in the future, we should be able to handle this kind of need (yet another test with its specific results or measured properties) through the flexible model, SampleAnalysis, simply by adding suitable reference data. With that approach, there will still be a need to develop a Domain API for the new kind of test to allow reading/writing of data.
First - can you build and showcase one such simple example? I kept it flat, single dimension results.
[Water-analysis-using-flexible-Model.xlsx](/uploads/0aed0e589579b22e2fdc1668a0c3c300/Water-analysis-using-flexible-Model.xlsx)
Second - how will **extensibility** be supported in RAFS DDMS?
Ex: A community member wants to add support for a special type of test (on a Rock or Fluid Sample). Using suitable documentation, they can add suitable reference data to define the special test and its results, using SampleAnalysisType reference data. But how will they add a read/write Domain API to write data into Apache Parquet and read it from "optimized content", without help from the RAFS DDMS development team?
Please check and advise.
Thank you.

https://community.opengroup.org/osdu/platform/system/dataset/-/issues/56
The dataset responds with a 500 server error instead of a DMS Service status code. (2023-07-03, Rustam Lotsmanenko (EPAM))

- The underlying DMS service could respond with varying errors and status codes.
- Especially after the EDS-DMS service introduction.
- But the Dataset service will not respect those responses and will try to parse it as an OK response.
This causes parsing errors and makes the Dataset response confusing:
~~~
{
"code": 500,
"reason": "Internal Server Error",
"message": "Unrecognized field \"code\" (class org.opengroup.osdu.core.common.dms.model.RetrievalInstructionsResponse), not marked as ignorable (one known property: \"datasets\"])_ at [Source: (String)\"{\"code\":401,\"reason\":\"Access denied\",\"message\":\"The user is not authorized to perform this action\"}\"; line: 1, column: 12] (through reference chain: org.opengroup.osdu.core.common.dms.model.RetrievalInstructionsResponse[\"code\"])"
}
~~~
Solution:
- Check the DMS response code; if it is not OK, do not try to parse the body. Instead, return it to the user, highlighting that the error occurred in the DMS.
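The proposed check might look roughly like this (a Python stand-in for illustration; the Dataset service itself is Java, and `handle_dms_response` is a hypothetical name):

```python
# Parse the DMS body only on 2xx; otherwise propagate the DMS status and
# message to the caller instead of failing with a parsing 500.
import json

def handle_dms_response(status_code, body, dms_url):
    if 200 <= status_code < 300:
        return json.loads(body)  # normal retrieval-instructions path
    return {
        "code": status_code,
        "reason": f"Non-OK response received from DMS service: {dms_url}",
        "message": body,
    }
```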
Example:
~~~
{
"code": 403,
"reason": "Non-OK response received from DMS service: https://community.gcp.gnrg-osdu.projects.epam.com/api/file/v2/files/storageInstructions",
"message": "RBAC: access denied"
}
~~~
(Milestone: M19 - Release 0.22)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/121
Sample Analysis - SCAL (Capillary Pressure) - Observations from Postman Collection (2023-10-03, Debasis Chatterjee)

Please provide the proper report that is being used to build the test case using SCAL Capillary Pressure measurements.
**Folder "Samples Analyses Report"**
**Step-01** - Create Record - wpc SamplesAnalysesReport
Row-35 refers to Master data sample. Where is that getting created?
Rows 38/39 - 1 Sample ID and two identifiers?
Rows 42/43 - Does the test case involve two tests executed on the same sample?
Rows 57/58 and 61/62 - How are they aligned? Also are these Labs linked to Organisation master data?
Also, should there not be separate reports coming from two Labs?
Rows 65-67 - One report linked to 3 different files?
**Folder Sample Analysis (Cap Pressure)**
**Step-01** - Create record - wpc SamplesAnalysis
Where is the link to wpc SampleAnalysesReport?
Rows 27/28 - Where are these Sample (master) records being created?
Row-35 - Linked to specific log curve or Well Log WPC overall?
Rows 37/38 and rows 41/42 - How are these aligned? Also are these Labs linked to Organisation master data?
Duplicate information between wpc SamplesAnalysesReport and wpc SamplesAnalysis?
Row-57 - where is suitable Reference value added to support this entry?
Rows 66-71 - is this information also not coming from Reference values of a specific type of test?
**Step-03** - Add data POST https://{{RAFS_DDMS_HOST}}/capillarypressuretests/{{cp_record_id}}/data
Which step created `cp_record_id`?
Check units of measure?
Can you also share sample parquet file after some numerical data has been ingested?
We can compare it with the content schema provided here:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/tree/main/app/models/data_schemas/jsonschema

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/120
Water Analysis - observations - Postman collection and data samples (2023-10-03, Debasis Chatterjee)

**Step-01** creates a record of wpc WaterAnalysisTest (custom schema).
This refers to parent record FluidSample. I do not see mention of this step (create record) elsewhere in the Collection.
This also refers to PVT record (ID="stable").
Earlier in the collection, a different PVT record was created with ID="test".
Also, PVT is commonly used for Gas analysis.
**Source of data**
Can you provide actual report which was used to prepare this sample data?
Such as Kentish report (Page 453) was used for RCA data.
**Step-03-Add data** -
Please check units of measure. Right now it is degF for all fields.
Line-17 - refers to FluidSample ID=1. But wa_record_id refers to Fluid Sample record ID=20207905.
**Step-03** - Can you please share the parquet file resulting from a successful "Add data" step?
Thank you.

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/543
Azure M18 - search results not returning all fields (2023-08-09, Michael)

When doing a search on some records, the "data" field/node is not returned.
For instance, the following search request returns a well without the data node:
```
curl --location 'https://osdu-ship.msft-osdu-test.org/api/search/v2/query' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer ...' \
--data '{
"kind": "osdu:wks:master-data--Well:*",
"query": "id:\"opendes:master-data--Well:perforationwell\""
}'
```
Response:
```
{
"results": [
{
"createTime": "2022-12-07T15:04:14.096Z",
"kind": "osdu:wks:master-data--Well:1.0.0",
"authority": "osdu",
"namespace": "osdu:wks",
"legal": {
"legaltags": [
"opendes-public-usa-dataset-open-test-data"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "preshipping@azureglobal1.onmicrosoft.com",
"source": "wks",
"acl": {
"viewers": [
"data.default.viewers@opendes.contoso.com"
],
"owners": [
"data.default.owners@opendes.contoso.com"
]
},
"id": "opendes:master-data--Well:perforationwell",
"type": "master-data--Well",
"version": 1670425452534671,
"tags": {
"normalizedKind": "osdu:wks:master-data--Well:1"
}
}
],
"aggregations": null,
"totalCount": 1
}
```
If you request the storage record, the data node is present.
```
curl --location 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records/opendes:master-data--Well:perforationwell' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer ...'
```
Response:
```
{
"data": {
"DefaultVerticalMeasurementID": "RotaryTable",
"FacilityEvents": [
{
"EffectiveDateTime": "1983-10-28T00:00:00",
"FacilityEventTypeID": "opendes:reference-data--FacilityEventType:Spud:"
},
{
"EffectiveDateTime": "1983-12-19T00:00:00",
"FacilityEventTypeID": "opendes:reference-data--FacilityEventType:TDReached:"
}
],
"FacilityName": "Perforation Well",
"FacilityOperators": [
{
"FacilityOperatorOrganisationID": "opendes:master-data--Organisation:Mobil:"
}
],
"FacilityStates": [
{
"FacilityStateTypeID": "opendes:reference-data--FacilityStateType:Abandoned:"
}
],
"FacilityTypeID": "opendes:reference-data--FacilityType:Well:",
"GeoContexts": [
{
"GeoPoliticalEntityID": "opendes:master-data--GeoPoliticalEntity:Netherlands_Country:",
"GeoTypeID": "opendes:reference-data--GeoPoliticalEntityType:Country:"
},
{
"GeoPoliticalEntityID": "opendes:master-data--GeoPoliticalEntity:B14_BlockID:",
"GeoTypeID": "opendes:reference-data--GeoPoliticalEntityType:BlockID:"
}
],
"NameAliases": [
{
"AliasName": "Perforation Well",
"AliasNameTypeID": "opendes:reference-data--AliasNameType:Well:"
},
{
"AliasName": "7534",
"AliasNameTypeID": "opendes:reference-data--AliasNameType:UWI:"
}
],
"OperatingEnvironmentID": "opendes:reference-data--OperatingEnvironment:Off:",
"Source": "TNO",
"SpatialLocation": {
"Wgs84Coordinates": {
"features": [
{
"geometry": {
"coordinates": [
4.33958866,
55.27975681
],
"type": "Point"
},
"properties": {},
"type": "Feature"
}
],
"type": "FeatureCollection"
}
},
"VerticalMeasurements": [
{
"VerticalCRSID": "opendes:reference-data--CoordinateReferenceSystem:MSL:",
"VerticalMeasurement": 38.8,
"VerticalMeasurementID": "RotaryTable",
"VerticalMeasurementPathID": "opendes:reference-data--VerticalMeasurementPath:Elevation:",
"VerticalMeasurementTypeID": "opendes:reference-data--VerticalMeasurementType:RotaryTable:"
}
]
},
"meta": [
{
"kind": "Unit",
"name": "m",
"persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":1.0,\"c\":1.0,\"d\":0.0},\"symbol\":\"m\",\"baseMeasurement\":{\"ancestry\":\"L\",\"type\":\"UM\"},\"type\":\"UAD\"}",
"propertyNames": [
"VerticalMeasurements[].VerticalMeasurement"
],
"unitOfMeasureID": "opendes:reference-data--UnitOfMeasure:m:"
}
],
"id": "opendes:master-data--Well:perforationwell",
"version": 1670425452534671,
"kind": "osdu:wks:master-data--Well:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@opendes.contoso.com"
],
"owners": [
"data.default.owners@opendes.contoso.com"
]
},
"legal": {
"legaltags": [
"opendes-public-usa-dataset-open-test-data"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "preshipping@azureglobal1.onmicrosoft.com",
"createTime": "2022-12-07T15:04:14.096Z"
}
```
(Milestone: M20 - Release 0.23)