OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues
2023-08-01T15:49:41Z

https://community.opengroup.org/osdu/platform/security-and-compliance/secret/-/issues/2
To get multiple secrets from AWS, Azure and GCP and disable listing all secrets in Azure
2023-08-01T15:49:41Z (Jeyakumar Devarajulu)

The current secret service will either accept one key and fetch the value for that key from the Azure key vault, or get the complete list from the key vault (Azure).
Challenge:
Any service request with multiple secrets has to hit the secret service with multiple requests.
Proposed Solution:
Enhance the secret service, as described in the ADR, to accept multiple keys in one request and return multiple key-value pairs for Azure, AWS and GCP.
Disable: the provision to list all the secrets from the vault, since it would expose all the secrets.
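The proposed multi-key flow can be sketched from the client's perspective. Everything below is an assumption about shape rather than the real Secret service API: `get_secrets_batch` is a hypothetical helper, and the in-memory `store` stands in for the vault.

```python
def get_secrets_batch(fetch_one, names):
    """Hypothetical batch helper: aggregate single-key lookups into one
    key-value mapping, the shape the proposed endpoint would return.
    fetch_one is any callable mapping a secret name to its value."""
    return {name: fetch_one(name) for name in names}

# In-memory stand-in for the vault (assumption, not a real client).
store = {"db-password": "s3cr3t", "api-key": "abc123"}
secrets = get_secrets_batch(store.__getitem__, ["db-password", "api-key"])
# secrets == {"db-password": "s3cr3t", "api-key": "abc123"}
```

With a real batch endpoint, the N single-key round trips collapse into one request carrying the list of keys.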
From ADR
* **List**: return the list of keys that are known (JK: as per my understanding, passing the list of known keys will provide the respective values)
ADR
https://community.opengroup.org/osdu/platform/system/home/-/issues/75#functional-requirements
M17 - Release 0.20 (Jeyakumar Devarajulu)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/153
Cannot create IJKCoordinateTransformer for 2D dataset (Python)
2022-10-31T14:31:13Z (Alexander Jaust)

## Description
I am currently playing around with the creation of VDS files. I am especially interested in working via the Python interface and the different coordinate systems. I set up a small [Python script](/uploads/8d79d553af1f908973720a1e03b21e06/write_2d_vds_data_testing.py) that creates a simple 2D dataset from a NumPy array with random content. Parts of the script are based on the `npz_to_vds.py` script from the examples. I would like to convert between inline/crossline coordinates, voxel coordinates and world coordinates.
In my script, the creation of the VDS file is successful. I also see that the file is recognized as a 2D file by OpenVDS during writing, since the chunks written to the page buffer are 4*brick_size. However, when I want to obtain the `IJKCoordinateTransformer` for this file, I run into the following exception:
```text
Exception:
Dimension -1 is not a valid dimension. Dimensionality_Max is 6.
```
When I create a 3D file with only one coordinate in the z direction, obtaining the transformer seems to be successful.
## Expectation
I obtain the coordinate transformer which allows me to transform between [different coordinate](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/cppdoc/struct/structOpenVDS_1_1IJKCoordinateTransformer.html) systems (ijk, inline/crossline etc.).
## Questions
- Is this behavior expected? I assumed that I could still work with the IJK transformer.
- Do I create the file as a 2D file in a wrong way?
- If the behavior is expected, would it be possible to extend the transformer to work with 2D data and/or as a quick fix to make the error message more expressive?
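As a stopgap along the lines of the 3D observation above, the 2D samples can be padded into a 3D volume with a single sample along the third axis before writing. This sketch shows only the reshaping step in plain Python, under the assumption that a depth-1 third dimension is acceptable; it deliberately avoids guessing at OpenVDS calls:

```python
def pad_2d_to_3d(samples_2d):
    """Wrap each (i, j) sample in a length-1 k axis: shape (ni, nj) -> (ni, nj, 1)."""
    return [[[value] for value in row] for row in samples_2d]

data_2d = [[1.0, 2.0], [3.0, 4.0]]
data_3d = pad_2d_to_3d(data_2d)
# data_3d == [[[1.0], [2.0]], [[3.0], [4.0]]]
```

The padded volume can then be written as an ordinary 3D VDS, for which the transformer reportedly works.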
## System
- Arm64 MacOS 12.6
- VDS 3.0.3 with Python interface

https://community.opengroup.org/osdu/platform/system/storage/-/issues/146
POST /query/records:batch with normalization stops converting after 1 conversion failure
2022-10-28T08:04:37Z (An Ngo)

An attribute was defined as a number in the schema:
```
"depthA": {
"title": "depthA",
"type": "number"
}
```
The meta specified is to convert the values in depthA from ft to meter.
```
"meta": [
{
"kind": "Unit",
"name": "ft",
"persistableReference": "{\"scaleOffset\":{\"scale\":0.3048,\"offset\":0.0},\"symbol\":\"ft\",\"baseMeasurement\":{\"ancestry\":\"Length\",\"type\":\"UM\"},\"type\":\"USO\"}",
"propertyNames": [
"depthA",
"depthB"
],
```
The record was ingested/created with an empty string assigned to depthA.
```
"data": {
"depthA": "",
"depthB": 123,
"depthC": 456
},
```
Upon record creation, the fetch API was called to normalize the record before indexing.
The conversion of depthA failed and an error was logged. The fetch API returned a 200, but with a conversion error.
![image](/uploads/28575874041594004a487f3ee009f1f9/image.png)
After this error, the API skipped conversion for the other attributes.
The Indexer saw this error and returned a 400 status. The index trace returns:
```
"statusCode": 400,
"trace": [
"Unit conversion: illegal value for property depthA"
]
```
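The `persistableReference` above encodes a `scaleOffset` conversion (scale 0.3048 for ft to m). A minimal sketch of the behaviour requested in this issue, with a hypothetical `convert_properties` helper that records an error for an illegal value and keeps converting the remaining properties instead of stopping:

```python
import json

def convert_properties(data, meta):
    """Apply Unit conversions from a Storage-style meta block to data in place.
    Collects one error per illegal value and continues with the rest."""
    errors = []
    for item in meta:
        if item.get("kind") != "Unit":
            continue
        ref = json.loads(item["persistableReference"])
        scale = ref["scaleOffset"]["scale"]
        offset = ref["scaleOffset"]["offset"]
        for prop in item.get("propertyNames", []):
            value = data.get(prop)
            if isinstance(value, bool) or not isinstance(value, (int, float)):
                errors.append(f"Unit conversion: illegal value for property {prop}")
                continue  # do not abort: keep converting the other properties
            data[prop] = value * scale + offset
    return errors

record = {"depthA": "", "depthB": 123, "depthC": 456}
meta = [{
    "kind": "Unit",
    "name": "ft",
    "persistableReference":
        '{"scaleOffset":{"scale":0.3048,"offset":0.0},"symbol":"ft",'
        '"baseMeasurement":{"ancestry":"Length","type":"UM"},"type":"USO"}',
    "propertyNames": ["depthA", "depthB"],
}]
errors = convert_properties(record, meta)
# depthA is reported but not fatal; depthB is still converted to metres
```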
**Action:** The API should continue to convert all specified attributes, and log the conversion errors for those that failed.

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/79
Add Open Telemetry (OTEL) to Policy Service
2023-10-31T22:06:15Z (Shane Hutchins)

Add Open Telemetry (OTEL) to Policy Service:
- Focus on Trace/Span support
- In a later issue, add metric and log support

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/69
Subproject creation Bad Request
2023-03-24T16:02:13Z (Denis Karpenok (EPAM))

GCP preshipping environment.
Tenant was created:
```
{
  "name": "autotesttenantid436502",
  "esd": "odesprod.osdu-gcp.go3-nrg.projects.epam.com",
  "gcpid": "osdu-data-prod",
  "default_acls": "users.datalake.admins@odesprod.osdu-gcp.go3-nrg.projects.epam.com"
}
```
Trying to create subproject.
Request:
```
curl --location --request POST 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/seismic-store/v3/subproject/tenant/autotesttenantid436502/subproject/subprojectodi725168' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: odesprod' \
--header 'ltag: odesprod-SeismicDMS-Legal-Tag-Test7116874' \
--header 'Authorization: Bearer ID_TOKEN' \
--data-raw '{
  "admin": "admin@odesprod.osdu-gcp.go3-nrg.projects.epam.com",
  "storage_class": "MULTI_REGIONAL",
  "storage_location": "US",
  "legal": {
    "legaltags": [
      "odesprod-SeismicDMS-Legal-Tag-Test7116874"
    ],
    "otherRelevantDataCountries": [
      "US"
    ]
  }
}'
```
Response:
```
[seismic-store-service] Bad Request
```
Seismic-store logs:
```
2022-10-21 15:40:40.798 EET {"error":{"code":400,"message":"[seismic-store-service] Bad Request","status":"BAD_REQUEST"}}
```
Sacha Brants

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/68
Utility LS endpoint doesn't work for directories
2023-03-24T19:11:22Z (Konstantin Khottchenkov)

A new test scenario was added for the UTILITY LS endpoint. The feature of filtering the output for only datasets, only folders, or both datasets and folders was added and tested.
The test results show that using the "wmode" parameter with the values "dirs" and "all" (which filter the response to return only directory names, or both datasets and directories, respectively) fails for AWS and Anthos. We couldn't check whether Google is also affected because the Google environment is currently broken. These tests were therefore disabled for the affected CSPs.
[Pipeline run](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1458328)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/commons/-/issues/8
Move dataset API function into commons
2022-11-22T12:39:26Z (Augustin Pilard Zen)

get_file_and_location_from_dataset_registry
get_type -> generalize
get_type_from_file ?
get_type_from_bytes?
create_manifest -> not sure
Augustin Pilard Zen

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/174
Data - SPIKE: Consider Design Options for Supporting Entitlements
2023-12-19T17:30:47Z (Brian)

As a GCZ developer, I want to consider design options for supporting entitlements, so that we can scope out enhancement tasks to support entitlements.
Create a community group (Shell, Exxon) to discuss requirements and expectations.
Acceptance Criteria:
1. Application users have their service entitlements
2. Need to ensure we can pull entitlements information along as attributes on the layers. We have been told this is possible, so we need to confirm/test it; this should allow users to arrange access control on the client side based on these entitlements.

https://community.opengroup.org/osdu/platform/system/dataset/-/issues/45
Registered dataset records output with no “createUser”, “createTime”, “modifyUser”, “modifyTime” properties
2023-01-12T12:23:25Z (Rustam Lotsmanenko (EPAM), rustam_lotsmanenko@epam.com)

The response for a created dataset registry, or one requested via the **GET** endpoint `/getDatasetRegistry`, doesn't contain the “createUser”, “createTime”, “modifyUser”, “modifyTime” properties.
~~~
{
"datasetRegistries": [
{
"id": "osdu:dataset--File.Generic:579c89e204bd4e3da1f9025d9a542579",
"version": 1666268695566567,
"kind": "osdu:wks:dataset--File.Generic:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@osdu.osdu-gcp.go3-nrg.projects.epam.com"
],
"owners": [
"data.default.owners@osdu.osdu-gcp.go3-nrg.projects.epam.com"
]
},
"legal": {
"legaltags": [
"osdu-demo-legaltag"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"data": {
"ResourceId": "srn:osdu:file:dc556e0e3a554105a80cfcb19372a62d:",
"ResourceTypeID": "srn:type:file/json:",
"ResourceSecurityClassification": "srn:reference-data/ResourceSecurityClassification:RESTRICTED:",
"ResourceSource": "Some Company App",
"ResourceName": "trajectories - 1000.json",
"ResourceDescription": "Trajectory For Wellbore xyz",
"DatasetProperties": {
"FileSourceInfo": {
"FileSource": "/ef27ad6f-dbc1-458d-8541-1446e3b0685a/05b8dd43f2724532b59e6fc9d724c5d5"
}
}
},
"meta": [
{
"additionalProp1": {},
"additionalProp2": {},
"additionalProp3": {}
}
],
"tags": {}
}
]
}
~~~
If a record is requested directly from the Storage service (by ID) via `/records:batch` endpoint, these properties exist.
~~~
{
"records": [
{
"data": {
"ResourceId": "srn:osdu:file:dc556e0e3a554105a80cfcb19372a62d:",
"ResourceTypeID": "srn:type:file/json:",
"ResourceSecurityClassification": "srn:reference-data/ResourceSecurityClassification:RESTRICTED:",
"ResourceSource": "Some Company App",
"ResourceName": "trajectories - 1000.json",
"ResourceDescription": "Trajectory For Wellbore xyz",
"DatasetProperties": {
"FileSourceInfo": {
"FileSource": "/ef27ad6f-dbc1-458d-8541-1446e3b0685a/05b8dd43f2724532b59e6fc9d724c5d5"
}
}
},
"meta": [
{
"additionalProp1": {},
"additionalProp2": {},
"additionalProp3": {}
}
],
"id": "osdu:dataset--File.Generic:579c89e204bd4e3da1f9025d9a542579",
"version": 1666268695566567,
"kind": "osdu:wks:dataset--File.Generic:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@osdu.osdu-gcp.go3-nrg.projects.epam.com"
],
"owners": [
"data.default.owners@osdu.osdu-gcp.go3-nrg.projects.epam.com"
]
},
"legal": {
"legaltags": [
"osdu-demo-legaltag"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "rustam_lotsmanenko@osdu-gcp.go3-nrg.projects.epam.com",
"createTime": "2022-10-20T12:24:57.379Z"
}
],
"notFound": [],
"conversionStatuses": []
}
~~~
But the Dataset service doesn't use the Storage `/records:batch` endpoint to fetch records after dataset registration or for fetching records in general; instead, the `/records` endpoint is used, which does not provide these properties in the response.
Where a registry is created and where a registry is requested via the **GET** endpoint:
https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/dataset-core/src/main/java/org/opengroup/osdu/dataset/service/DatasetRegistryServiceImpl.java#L165
Core common method used for fetching records:
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/storage/StorageService.java#L73

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/prodml-parser/-/issues/10
Add location on manifest
2022-10-21T09:33:40Z (Augustin Pilard Zen)

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/173
Transformer - Document process to make a new data area in OSDU for GCZ
2022-10-19T16:33:39Z (Joel Romero)

Per Brian, this needs to be a living document that we create/improve as we add additional data types to GCZ, written in a way that someone not on our team could build a new transformer connection/mapping for a new (or private) data area of OSDU over time.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/prodml-parser/-/issues/9
Add dockerfile with step in CI
2022-10-19T09:32:44Z (etienne peysson)

Take a look at good practices.
Update the readme for Docker use.
Augustin Pilard Zen

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/issues/5
Facing issue to connect etp-server from etp-client and issue while deploying etp-client in openshift
2023-05-05T01:03:05Z (Brindaban Das)

**Issue 1:**
We were not able to connect to the etp-server from the etp-client. The current etp-client code seems to treat both bearer tokens and basic auth as a JWT bearer token, so when basic auth is used to connect to the etp-server from the etp-client, the code does not work properly. We have made the code changes below in a feature branch to address this issue.
1. [commit1](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/commit/783b5a1b8051b373980a8f2a287725e4fadfe179)
2. [commit2](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/commit/104ef2e63deb940cc303caf5248574e9abf580cb)
Could anyone look into the changes and suggest whether we can merge these changes into the main branch?
**Issue 2:**
We are also facing an issue while deploying the etp-client code into OpenShift, so the changes below were made to package.json to address it.
1. [commit1](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/commit/f013d2ddc9e7afe7e27b6202222225f023e9b245)
2. [commit2](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/commit/21cbe428e430f8f575d4624ad92d8ec2d711ddb3)
Please suggest whether we can make these changes in the package.json file in open-etp-client.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/5
EDS Ingest : Provide default date and time to LastSuccessfulRunDateUTC in ConnectedSourceDataRegistry
2022-10-13T12:43:58Z (Priyanka Bhongade)

In case of a missing value for "LastSuccessfulRunDateUTC" in the connected source data job, provide a default value in code so that the DAG run does not fail.
M15 - Release 0.18

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/115
Description for new entitlements groups cannot contain dash anymore
2022-10-12T23:00:13Z (Thiago Senador)

The new [regex](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/merge_requests/345/diffs#diff-content-04683d25644d98f633453bc0c92dd78012c42ecd) to validate the `description` field during entitlements group creation does not accept `-` anymore.
This is a breaking change for one of our applications. Is there a requirement preventing `-` from being used in the description?

https://community.opengroup.org/osdu/platform/system/search-service/-/issues/101
Sorting on nested array attributes
2023-06-08T12:55:46Z (Mandar Kulkarni)

I have read this [documentation](https://community.opengroup.org/osdu/platform/system/search-service/-/blob/master/docs/tutorial/ArrayOfObjects.md#sort) and this [ADR](https://community.opengroup.org/osdu/platform/system/search-service/-/issues/38) related to sorting.
The question I have is: do we support sorting records based on particular elements in an array, with a filter condition?
We have a use case to sort Wellbore ("osdu:wks:master-data--Wellbore:1.0.0") records based on the VerticalMeasurement value of type KB.
VerticalMeasurements is an array in the data block of Wellbore records; each record can have an array of VerticalMeasurements.
So if I have 5 records which contain values like below for VerticalMeasurements:
```
"id":"wellbore1",
"VerticalMeasurements": [
{
"VerticalMeasurementID": "well header elevation KB",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 10,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:KB:"
},
{
"VerticalMeasurementID": "well header elevation GL",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 50,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:GR:",
}]
"id":"wellbore2",
"VerticalMeasurements": [
{
"VerticalMeasurementID": "well header elevation KB",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 20,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:KB:"
},
{
"VerticalMeasurementID": "well header elevation GL",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 40,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:GR:",
}]
"id":"wellbore3",
"VerticalMeasurements": [
{
"VerticalMeasurementID": "well header elevation KB",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 30,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:KB:"
},
{
"VerticalMeasurementID": "well header elevation GL",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 30,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:GR:",
}]
"id":"wellbore4",
"VerticalMeasurements": [
{
"VerticalMeasurementID": "well header elevation KB",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 40,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:KB:"
},
{
"VerticalMeasurementID": "well header elevation GL",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 20,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:GR:",
}]
"id":"wellbore5",
"VerticalMeasurements": [
{
"VerticalMeasurementID": "well header elevation KB",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 50,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:KB:"
},
{
"VerticalMeasurementID": "well header elevation GL",
"VerticalMeasurementPathID": "tenant1:reference-data--VerticalMeasurementPath:ELEV:",
"VerticalMeasurement": 10,
"VerticalMeasurementTypeID": "tenant1:reference-data--VerticalMeasurementType:GR:",
}]
```
then we want to sort the records based on the value of data.VerticalMeasurements.VerticalMeasurement where data.VerticalMeasurements.VerticalMeasurementTypeID is tenant1:reference-data--VerticalMeasurementType:KB.
So the **EXPECTED** result of the search query
```
{
"kind": "osdu:wks:master-data--Wellbore:1.0.0",
"query":"nested(data.VerticalMeasurements, (VerticalMeasurementTypeID:\"tenant1:reference-data--VerticalMeasurementType:KB\"))",
"sort": {
"field": [
"nested(data.VerticalMeasurements, VerticalMeasurement, min)"
],
"order": [
"DESC"
]
}
}
```
should be like below
```
{
"id":"wellbore5",
"id":"wellbore4",
"id":"wellbore3",
"id":"wellbore2",
"id":"wellbore1"
}
```
This doesn't seem possible today.
Instead, the records get sorted on the minimum values from the VerticalMeasurements array, and the **ACTUAL** response is
```
{
"id":"wellbore1",
"id":"wellbore2",
"id":"wellbore3",
"id":"wellbore4",
"id":"wellbore5"
}
```
This is because the minimum values in the VerticalMeasurements array belong to VerticalMeasurementType:GR, and sorting happens on the mode specified in the query, which is **'min'**.
The query below, which specifies a filter within the sort clause, also gives a similar response.
```
{
"kind": "osdu:wks:master-data--Wellbore:1.0.0",
"query":"nested(data.VerticalMeasurements, (VerticalMeasurementTypeID:\"tenant1:reference-data--VerticalMeasurementType:KB\"))",
"sort": {
"field": [
"nested(data.VerticalMeasurements, VerticalMeasurement, min)"
],
"order": [
"ASC"
],
"filter":["nested(data.VerticalMeasurements, VerticalMeasurementTypeID:\"tenant1:reference-data--VerticalMeasurementType:KB\", match)"]
}
}
```
Can sorting be supported via the Search service such that records are sorted on the field specified inside the query or inside the sort clause?
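As a workaround until the service supports this, the type-filtered sort can be done client-side on the search results. The record shapes below are abbreviated from the examples above; `kb_measurement` and `sort_by_kb` are hypothetical helpers, not Search service features:

```python
KB = "tenant1:reference-data--VerticalMeasurementType:KB:"

def kb_measurement(record):
    """Return the VerticalMeasurement of the KB-typed entry, ignoring GR etc."""
    for m in record["data"]["VerticalMeasurements"]:
        if m.get("VerticalMeasurementTypeID") == KB:
            return m["VerticalMeasurement"]
    return float("-inf")  # records without a KB entry sort last in DESC order

def sort_by_kb(records, descending=True):
    return sorted(records, key=kb_measurement, reverse=descending)

records = [
    {"id": "wellbore1", "data": {"VerticalMeasurements": [
        {"VerticalMeasurementTypeID": KB, "VerticalMeasurement": 10}]}},
    {"id": "wellbore5", "data": {"VerticalMeasurements": [
        {"VerticalMeasurementTypeID": KB, "VerticalMeasurement": 50}]}},
]
ordered = [r["id"] for r in sort_by_kb(records)]
# ordered == ["wellbore5", "wellbore1"]
```

This only works when the result set is small enough to fetch fully, which is why server-side support is still the real ask.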
Let me know if this needs more clarification and requires an ADR, or if this is something that can be fixed as an issue.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/77
Remove number of retry attempts for schema not found 404 and remove call to deprecated storage API in indexer service
2022-10-28T11:23:49Z (Harshika Dhoot)

The Indexer service makes a number of retry attempts for schemas that are not in the Schema service and return 404, and it also calls the deprecated Storage service after its Schema service attempts.
To fix this issue we have this already-merged MR: [indexer-service/-/merge_requests/384](https://community.opengroup.org/osdu/platform/system/indexer-service/-/merge_requests/384)

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/114
Is there any API to get all the partitions related to a user
2022-10-09T09:08:30Z (Shuai Li)

I have a use case: after a user logs in to the OSDU system UI, they should be able to see all the partitions they have permissions on, then select one specific partition to do some operations.
As there is no direct relationship between a user and partitions, it is not possible to get this user-partition relationship from the Partition service. The only way to find this relationship is to get all the groups related to the user, then derive the partitions from the groups. But currently all the Entitlements APIs need a data-partition-id header, so it is not possible to get all the groups related to a user regardless of partition. It seems like a circular dependency (Entitlements APIs need partitions as input, but without the Entitlements API we cannot get the partitions related to a user).

https://community.opengroup.org/osdu/data/open-test-data/-/issues/88
Enhance sample load manifests for Marker data
2022-10-08T11:28:56Z (Debasis Chatterjee)

Refer to this example:
https://community.opengroup.org/osdu/platform/data-flow/data-loading/open-test-data/-/blob/master/rc--3.0.0/4-instances/TNO/work-products/markers_1_1_0/load_top_1.1.0_1002_csv.json
Schema version 1.1.0
Also see an example of a populated schema (1.2.1):
https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/work-product-component/WellboreMarkerSet.1.2.1.json
The comments here are not about the schema difference between 1.1.0 and 1.2.1, but about the lack of information shown for each marker in the sample load manifest.
See the marker array in the sample load manifest (it shows only two properties filled for each marker in the array).
```
"WellboreID": "osdu:master-data--Wellbore:1002:",
"Markers": [
{
"MarkerName": "Maassluis Formation",
"MarkerMeasuredDepth": 0.0
},
{
"MarkerName": "Oosterhout Formation",
"MarkerMeasuredDepth": 242.5
},
....
```
This does not showcase the use of a detailed description for each marker of Wellbore "1002".
```
"AvailableMarkerProperties": [
{
"MarkerPropertyTypeID": "partition-id:reference-data--MarkerPropertyType:MissingThickness:",
"MarkerPropertyUnitID": "partition-id:reference-data--UnitOfMeasure:ft:",
"Name": "MissingThickness"
}
],
"Markers": [
{
"MarkerName": "Example MarkerName",
"MarkerID": "Example Marker ID",
"InterpretationID": "namespace:work-product-component--GeobodyBoundaryInterpretation:GeobodyBoundaryInterpretation-911bb71f-06ab-4deb-8e68-b8c9229dc76b:",
"MarkerMeasuredDepth": 12345.6,
"MarkerSubSeaVerticalDepth": 12345.6,
"MarkerDate": "2020-02-13T09:13:15.55Z",
"MarkerObservationNumber": 12345.6,
"MarkerInterpreter": "Example MarkerInterpreter",
"MarkerTypeID": "namespace:reference-data--MarkerType:BioStratigraphy:",
"FeatureTypeID": "namespace:reference-data--FeatureType:Base:",
"FeatureName": "Example FeatureName",
"PositiveVerticalDelta": 12345.6,
"NegativeVerticalDelta": 12345.6,
"SurfaceDipAngle": 12345.6,
"SurfaceDipAzimuth": 12345.6,
"Missing": "Example Missing",
"GeologicalAge": "Example GeologicalAge"
}
],
"StratigraphicColumnID": "namespace:work-product-component--StratigraphicColumn:StratigraphicColumn-911bb71f-06ab-4deb-8e68-b8c9229dc76b:",
"StratigraphicColumnRankInterpretationID": "namespace:work-product-component--StratigraphicColumnRankInterpretation:StratigraphicColumnRankInterpretation-911bb71f-06ab-4deb-8e68-b8c9229dc76b:",
```
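A quick way to quantify the gap between the two manifests above is to count which marker properties are actually populated; `marker_property_coverage` is a hypothetical checker for illustration, not part of any OSDU tooling:

```python
from collections import Counter

def marker_property_coverage(markers):
    """Count how often each property is populated across all markers."""
    counts = Counter()
    for marker in markers:
        counts.update(key for key, value in marker.items()
                      if value not in (None, ""))
    return counts

# The minimal TNO-style markers carry only two properties each.
markers = [
    {"MarkerName": "Maassluis Formation", "MarkerMeasuredDepth": 0.0},
    {"MarkerName": "Oosterhout Formation", "MarkerMeasuredDepth": 242.5},
]
coverage = marker_property_coverage(markers)
# coverage == Counter({"MarkerName": 2, "MarkerMeasuredDepth": 2})
```

Run against an enriched manifest, the counter would also report properties such as MarkerTypeID or FeatureTypeID, making the added detail visible at a glance.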
This CSV file may be helpful.
https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/Examples/WorkedExamples/WellboreMarkerSet/dataset_and_wpc/MarkerSet-b8fd398a-5d74-45fa-8ecb-03b1ad927026.csv
cc @Keith_Wall

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/4
EDS fetch - show in Airflow log quick test of a service in order to test success with authentication
2022-10-07T11:35:09Z (Debasis Chatterjee)

As per my recent discussion with @jeyakumar-jk:
This kind of test will iron out whether the user's specification of source authentication information (e.g. a client secret supplied via the Secret service) is causing any problem.
Execute a search, for example using a common reference entity such as ExistenceKind.
Limit this kind of check to proper OSDU-compliant data sources.
External non-OSDU sources may not always support queries on reference entities.
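The suggested quick check can be sketched as a small probe run at the start of the DAG. The payload shape and the `search` callable are assumptions for illustration; in a real fetch DAG the callable would perform the authenticated request against the source:

```python
def check_source_auth(search, partition):
    """Probe the source with a cheap one-record query for a common reference
    entity (ExistenceKind) and log the outcome, so that credential problems
    (e.g. a wrong client secret from the Secret service) surface early."""
    payload = {"kind": f"{partition}:wks:reference-data--ExistenceKind:*",
               "limit": 1}
    status, body = search(payload)
    ok = status == 200
    print(f"EDS auth check: status={status} ok={ok}")  # visible in the Airflow log
    return ok

# Stand-in for an authenticated search call against an OSDU-compliant source.
fake_search = lambda payload: (200, {"results": [{"id": "ExistenceKind:Actual"}]})
# check_source_auth(fake_search, "osdu") -> True, and logs the status
```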