OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

# Behavior for files with irregular inlines/crosslines
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/157
2022-11-15T17:29:04Z · Alena Chaikouskaya

_(sorry, I will try to divide the issue into parts, as for some reason it keeps getting marked as spam)_
While playing with OpenVDS we accidentally created synthetic SEG-Y files that are imported into VDS incorrectly (the round trip breaks).
We are not sure how likely such files are to appear in reality, but our domain knowledge source tells us that it is possible in theory.
1. File [broken1.segy](/uploads/1e35d0885342952d3f3c5ec69273e02c/broken1.segy) with ilines `[1, 6, 11, 15]`
(Stride is 5, last element is at distance 4)
2. File [broken2.segy](/uploads/f2e69b64faf656cdb771ac2215264221/broken2.segy) with ilines `[1, 6, 11, 16, 21, 26, 27]`
(Stride is 5, last element is at distance 1)
Some rows just get lost, never to be found in the VDS.
The main observation is that OpenVDS, for some reason, deliberately ignores the distance between the last two values, so we need to supply a different distance to the last element to reproduce this behavior.
How short or long the distance to the last element is might also matter, as changing [this](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/tools/SEGYImport/SEGYImport.cpp#L2185) suspicious piece of code (it seems that `a + (a-b)%c - b` is not divisible by `c`) fixed only one of those cases for me.
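A quick arithmetic check makes the suspicion concrete; the values below are hypothetical stand-ins derived from broken2.segy's inline range (first inline, last inline, stride), not values read from the code:

```python
# Hypothetical stand-ins: last inline a, first inline b, stride c from broken2.segy.
a, b, c = 27, 1, 5
value = a + (a - b) % c - b
print(value, value % c)  # 27 2 -> not divisible by c
```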

# Subproject deletion bad request
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/70
2023-03-24T16:00:19Z · Yan Sushchynski (EPAM)

We found a bug connected with deleting subprojects. It seems that when we delete them, the call to the Entitlements service has a wrong URL.
We can see from the logs that Seismic sends a request to the following URL:
https://entitlements/api/entitlements/v2/groups/data/<groupname>
The __data__ segment is extra here.
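For clarity, a minimal sketch of the observed vs. expected URL (the expected form is an assumption based on the extra segment; `<groupname>` is a placeholder):

```python
# Actual URL observed in the logs vs. the assumed correct form without "data".
actual = "https://entitlements/api/entitlements/v2/groups/data/<groupname>"
expected = "https://entitlements/api/entitlements/v2/groups/<groupname>"
```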
The bug is here: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/cloud/providers/google/config.ts#L131

M15 - Release 0.18 · Diego Molteni · Sacha Brants

# EDS Ingest M14 known Bug for XCOM summary
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/7
2022-11-23T11:29:56Z · Priyanka Bhongade

XCOM summary in eds-ingest DAG displays incorrect data when data-partition-id is different for the source and target environment.
- [x] Do the related code changes
- [x] Testing in the GLAB environment
**Acceptance Criteria**
The Failed Records list will show records that failed during ingestion, and the successfully ingested records list will show records that were successfully ingested on the Data Operator side.

M15 - Release 0.18 · Priyanka Bhongade

# Search - Policy Integration "400 Request Header Or Cookie Too Large"
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/103
2023-07-04T11:01:18Z · Thulasi Dass Subramanian

**Background**
We are observing an intermittent issue after enabling Policy in the Search Service, resulting in the following error response:
```
{
"code": 400,
"reason": "Bad Request",
"message": "Failed to derive xcontent"
}
```
**Analysis:**
Based on our localhost analysis ([logs](/uploads/8c93c067644655091d5e8125885a7fdb/Policy-Translate-Header-issue.txt)), the current user (preshipping@azureglobal1.onmicrosoft.com) is a member of _more than 2000 data groups_.
When Search calls the Policy translate API, the '**X-Data-Groups**' request header carries **more than 2000 groups** for the user, which results in '**400 Request Header Or Cookie Too Large**'.
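A back-of-the-envelope estimate shows why the gateway rejects the request; the 60-character average group name is an assumption based on the group emails seen in similar records, not a measured value:

```python
# Rough size of an X-Data-Groups header with 2000+ entries.
groups = 2000
avg_name_len = 60                           # assumed average length of a group email
header_bytes = groups * (avg_name_len + 1)  # +1 per separator
print(header_bytes)                         # ~122,000 bytes, well beyond the header
                                            # limits typical of gateways (often 8-64 KB)
```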
```
HttpResponse(headers={null=[HTTP/1.1 400 Bad Request], Server=[Microsoft-Azure-Application-Gateway/v2], Connection=[close], Content-Length=[259], Date=[Wed, 02 Nov 2022 07:06:04 GMT], Content-Type=[text/html]}, body=<html><head><title>400 Request Header Or Cookie Too Large</title></head><body><center><h1>400 Bad Request</h1></center><center>Request Header Or Cookie Too Large</center><hr><center>Microsoft-Azure-Application-Gateway/v2</center></body></html>, contentType=text/html, responseCode=400, exception=null, request=https://osdu-ship.msft-osdu-test.org/api/policy/v1/translate, httpMethod=POST, latency=1623
```
This error body is then passed as the input query to Elasticsearch, which results in an Elasticsearch exception:
```
{
"code": 400,
"reason": "Bad Request",
"message": "Failed to derive xcontent"
}
```
**Workaround:**
- We have **deleted** the stale/test groups present in X-Data-Groups for the user via the Entitlements API.
**Need Inputs:**
The above workaround is not an ideal/permanent solution. Hence we are looking for any inputs to remediate this issue across all environments.

M15 - Release 0.18 · Shane Hutchins · Thulasi Dass Subramanian

# Return successfully created IDs from airflow when checking status of runId
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/80
2022-11-07T16:59:52Z · Zachary Keirn

Part of the testing is to validate that the records created in the workflow exist and are correct. Currently, in order to do this, one has to go to the Airflow console and copy/paste the successfully created IDs. It would be helpful if the workflowRun API would return the successfully created IDs, or if there were another API that would do this based on the runID.

# DDMS to third party non-AWS S3 endpoints
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/21
2023-03-30T16:44:43Z · Brian Pruitt

Unable to connect DDMS to third-party non-AWS S3 endpoints. Requesting an endpoint parameter input for the “get_s3_client” function.

# PersistedCollection cannot scale to large values, an upper limit for records is needed
https://community.opengroup.org/osdu/platform/system/storage/-/issues/150
2022-11-08T09:28:01Z · Gary Murphy

Persisted Collections have been seen lately in various environments that are getting somewhat pathological, meaning they are straining the limits of what the consuming services (mainly Storage and Search) can handle. As the number of items in a Persisted Collection rises, they increase the size of the Storage record beyond practical limits as well as put a heavy load on Indexing and Search as they are updated.
An exact limit is a bit tricky to specify, but experience has shown increased 500 return codes from Storage and Search when counts are in the neighborhood of 100K records.
Based on the above behavior (and the upcoming introduction of Collaboration Spaces, which provide a scalable solution with transactions and promotion capabilities), it is proposed to introduce a practical limit on the size of Persisted Collections. A straw-man number could be on the order of 50K records mentioned in the Persisted Collection. Counts higher than that would trigger an error on Storage PUT with meaningful response text.
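A minimal sketch of what such a guard on Storage PUT could look like; the 50K number is the straw man above, while the field holding the member IDs is a hypothetical stand-in for the PersistedCollection schema:

```python
# Straw-man limit from the proposal above.
MAX_COLLECTION_RECORDS = 50_000

def validate_persisted_collection(record: dict) -> None:
    # "RecordIDs" is a hypothetical field name for the collection members.
    members = record.get("data", {}).get("RecordIDs", [])
    if len(members) > MAX_COLLECTION_RECORDS:
        raise ValueError(
            f"PersistedCollection contains {len(members)} records; "
            f"the maximum supported is {MAX_COLLECTION_RECORDS}."
        )
```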
Collaboration Spaces will hopefully be the correct home for controlled collections of massive size (1M records is considered reasonable) since updates can be done via distributed transactions and no single Storage record has to scale to contain the contents of the collection. In the meantime, a limit for Persisted Collections is needed.
[Collaboration Spaces](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/issues/48)
[PersistedCollection](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/work-product-component/PersistedCollection.1.0.0.md)

# ADR: Static code analysis for Python libraries
https://community.opengroup.org/osdu/platform/system/sdks/common-python-sdk/-/issues/15
2023-10-30T14:43:02Z · Yan Sushchynski (EPAM)

## Context
Python is a dynamically typed language, so developers don't need to worry about types. This works well while a project is small and only a few developers work on it.
However, once the project gets bigger and involves a lot of engineers, understanding how the code works becomes the cornerstone of further development. Python has type annotations designed to help developers understand code. Right now, the type annotations in our Python libraries are merely hints for developers and their IDEs; following them is not mandatory, and they can simply be ignored.
As a result, we face issues where some methods are called with arguments of the wrong types. These bugs show up unexpectedly at runtime under certain conditions.
It is not so rare to get the following runtime error: `AttributeError: 'dict' object has no attribute 'to_JSON'`
However, these bugs could easily be caught by any static analyzer.
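As a minimal illustration (the class and function names here are made up, not taken from the SDK), this is the kind of call a static analyzer flags before it ever runs:

```python
# Hypothetical example of the bug class described above.
from dataclasses import dataclass

@dataclass
class Record:
    id: str

    def to_JSON(self) -> str:
        return f'{{"id": "{self.id}"}}'

def store(record: Record) -> str:
    # Fails at runtime with AttributeError if a plain dict is passed in.
    return record.to_JSON()

# mypy reports something like:
#   error: Argument 1 to "store" has incompatible type "dict[str, str]"; expected "Record"
store({"id": "osdu:dataset--File.Generic:123"})
```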
## Decision
Add a static analysis step for type checking to CI/CD pipelines, right before unit tests. The step will run in a container with preinstalled tools for Python static analysis (e.g., [pytype](https://github.com/google/pytype) or [mypy](https://github.com/python/mypy)).
At first, we are going to add static analysis to the following libraries:
1. https://community.opengroup.org/osdu/platform/system/sdks/common-python-sdk/-/tree/master/osdu_api - excluding CSP-specific code from `osdu_api/providers`;
1. https://community.opengroup.org/osdu/platform/data-flow/ingestion/osdu-ingestion-lib;
1. https://community.opengroup.org/osdu/platform/data-flow/ingestion/osdu-airflow-lib.
Further, we can cover other Python libraries with static analysis.
## Consequences
Pros:
1. It will be much easier to catch subtle bugs without writing extra unit-tests;
2. Developers will be forced to follow type annotations, which will make code more readable and understandable.
Cons:
1. The existing code should be refactored to pass static analysis validations;
1. Some developers might find obeying these new rules too strict.

M16 - Release 0.19

# Error diagnostics - need to improve significantly
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/79
2022-12-13T00:31:21Z · Debasis Chatterjee

You may start off by checking here.
https://community.opengroup.org/osdu/platform/pre-shipping/-/tree/main/R3-M14/AWS-M14/Ingestion%20DAG%20CSV
For each of these problems, I did not get a suitable clue from the error log:
1. Problem in the data: ELEVATION has a non-numeric value.
2. Problem in the schema: TVD, Latitude, Longitude are missing "type=string".
3. At times, when the file is missing (incorrect sequence in the collection), it gives a fatal error instead of saying clearly "Unable to get the CSV file".
This caused a situation where the record gets created and we can see all properties from the Storage service, but none from the Search service.
This is nearly impossible to figure out for the average Data Loader (user).
Next, imagine we are ingesting 1000 rows from a source CSV and a problem occurs in row 253 and row 455.
The user's expectation is that the CSV ingestion program should pinpoint and clearly indicate the row number and the type of problem that caused the failure.
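A minimal sketch of the kind of row-level diagnostics being asked for; `validate_row` and the report format are hypothetical, not part of the CSV parser:

```python
import csv

def ingest_with_diagnostics(path: str, validate_row) -> list[tuple[int, str]]:
    """Collect (row number, problem) pairs instead of failing opaquely."""
    failures = []
    with open(path, newline="") as f:
        # Data starts at row 2; row 1 is the header.
        for rownum, row in enumerate(csv.DictReader(f), start=2):
            problem = validate_row(row)  # e.g. "ELEVATION has a non-numeric value"
            if problem:
                failures.append((rownum, problem))
    return failures  # e.g. [(253, "..."), (455, "...")]
```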
cc @chad, @tdixon

# To get multiple secrets from AWS, Azure and GCP and disable listing all secrets in Azure
https://community.opengroup.org/osdu/platform/security-and-compliance/secret/-/issues/2
2023-08-01T15:49:41Z · Jeyakumar Devarajulu

The current secret service will either accept one key and fetch the value for that key from the Azure key vault, or get the complete list from the key vault (Azure).
Challenge:
Any service request with multiple secrets has to hit the secret service with multiple requests.
Proposed Solution:
Enhance the secret service, as per the ADR, to accept multiple keys in one go and return multiple key-value pairs on Azure, AWS and GCP.
Disable: the provision to list all the secrets from the vault, since it would expose all the secrets.
From the ADR:
* **List**: return the list of keys that are known (JK: as per my understanding, passing the list of known keys will provide the respective values)
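A sketch of what the batched read could look like from a client's perspective; the endpoint path and payload shape are hypothetical, since the ADR only states the requirement:

```python
import requests

# Hypothetical batched endpoint; today the service reads one key at a time.
SECRET_BATCH_URL = "https://<host>/api/secret/v1/secrets:batch"

def get_secrets(keys: list[str], token: str, partition: str) -> dict:
    resp = requests.post(
        SECRET_BATCH_URL,
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
        },
        json={"keys": keys},  # only known keys are passed; no listing
    )
    resp.raise_for_status()
    return resp.json()  # expected shape: {key: value, ...}
```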
ADR:
https://community.opengroup.org/osdu/platform/system/home/-/issues/75#functional-requirements

M17 - Release 0.20 · Jeyakumar Devarajulu

# Cannot create IJKCoordinateTransformer for 2D dataset (Python)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/153
2022-10-31T14:31:13Z · Alexander Jaust

## Description
I am currently playing around with the creation of VDS files. I am especially interested in working via the Python interface and with the different coordinate systems. I set up a small [Python script](/uploads/8d79d553af1f908973720a1e03b21e06/write_2d_vds_data_testing.py) that creates a simple 2D dataset from a NumPy array with random content. Parts of the script are based on the `npz_to_vds.py` script from the examples. I would like to convert between inline/crossline coordinates, voxel coordinates and world coordinates.
In my script, the creation of the VDS file is successful. I can also see that the file is recognized as a 2D file by OpenVDS during writing, since the chunks written to the page buffer are 4*brick_size. However, when I want to obtain the `IJKCoordinateTransformer` for this file, I run into the following exception:
```text
Exception:
Dimension -1 is not a valid dimension. Dimensionality_Max is 6.
```
When I create a 3D file with only one coordinate in the z direction, obtaining the transformer seems to be successful.
## Expectation
I obtain the coordinate transformer, which allows me to transform between [different coordinate systems](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/cppdoc/struct/structOpenVDS_1_1IJKCoordinateTransformer.html) (IJK, inline/crossline, etc.).
## Questions
- Is this behavior expected? I assumed that I could still work with the IJK transformer.
- Do I create the file as a 2D file in a wrong way?
- If the behavior is expected, would it be possible to extend the transformer to work with 2D data and/or, as a quick fix, to make the error message more expressive?
## System
- Arm64 MacOS 12.6
- VDS 3.0.3 with Python interface

# POST /query/records:batch with normalization stops converting after 1 conversion failure
https://community.opengroup.org/osdu/platform/system/storage/-/issues/146
2022-10-28T08:04:37Z · An Ngo

An attribute was defined as a number in the schema:
```
"depthA": {
"title": "depthA",
"type": "number"
}
```
The specified meta is to convert the values in depthA from ft to meters.
```
"meta": [
{
"kind": "Unit",
"name": "ft",
"persistableReference": "{\"scaleOffset\":{\"scale\":0.3048,\"offset\":0.0},\"symbol\":\"ft\",\"baseMeasurement\":{\"ancestry\":\"Length\",\"type\":\"UM\"},\"type\":\"USO\"}",
"propertyNames": [
"depthA",
"depthB"
],
```
The record was ingested/created with an empty string assigned to depthA.
```
"data": {
"depthA": "",
"depthB": 123,
"depthC": 456
},
```
Upon record creation, the fetch API was called to normalize the record before indexing.
The conversion of depthA failed and an error was logged. The fetch API returned a 200, but with a conversion error.
![image](/uploads/28575874041594004a487f3ee009f1f9/image.png)
After this error, the API skipped conversion for the other attributes.
The Indexer saw this error and returned a 400 status. The index trace returns:
```
"statusCode": 400,
"trace": [
"Unit conversion: illegal value for property depthA"
]
```
**Action:** The API should continue to convert all specified attributes, and log conversion errors for those that failed.
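A minimal sketch of the requested "continue on error" behavior, using the ft-to-meter scaleOffset from the meta above (value_m = value_ft * 0.3048 + 0.0); the function and record shapes are illustrative, not the Storage implementation:

```python
def normalize(data: dict, props: list[str], scale: float = 0.3048, offset: float = 0.0):
    """Convert each property independently, collecting errors instead of aborting."""
    errors = []
    for name in props:
        value = data.get(name)
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            data[name] = value * scale + offset
        else:
            errors.append(f"Unit conversion: illegal value for property {name}")
    return data, errors

record = {"depthA": "", "depthB": 123, "depthC": 456}
print(normalize(record, ["depthA", "depthB"]))
# ({'depthA': '', 'depthB': 37.4904, 'depthC': 456},
#  ['Unit conversion: illegal value for property depthA'])
```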

# Add Open Telemetry (OTEL) to Policy Service
https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/79
2023-10-31T22:06:15Z · Shane Hutchins

Add Open Telemetry (OTEL) to Policy Service:
- Focus on Trace/Span support
In a later issue, add metric and log support.

Shane Hutchins

# Subproject creation Bad Request
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/69
2023-03-24T16:02:13Z · Denis Karpenok (EPAM)

GCP preshipping environment.
Tenant was created:
```
{
  "name": "autotesttenantid436502",
  "esd": "odesprod.osdu-gcp.go3-nrg.projects.epam.com",
  "gcpid": "osdu-data-prod",
  "default_acls": "users.datalake.admins@odesprod.osdu-gcp.go3-nrg.projects.epam.com"
}
```
Trying to create a subproject.
Request:
```
curl --location --request POST 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/seismic-store/v3/subproject/tenant/autotesttenantid436502/subproject/subprojectodi725168' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: odesprod' \
--header 'ltag: odesprod-SeismicDMS-Legal-Tag-Test7116874' \
--header 'Authorization: Bearer ID_TOKEN' \
--data-raw '{
  "admin": "admin@odesprod.osdu-gcp.go3-nrg.projects.epam.com",
  "storage_class": "MULTI_REGIONAL",
  "storage_location": "US",
  "legal": {
    "legaltags": [
      "odesprod-SeismicDMS-Legal-Tag-Test7116874"
    ],
    "otherRelevantDataCountries": [
      "US"
    ]
  }
}'
```
Response:
`[seismic-store-service] Bad Request`
Seismic-store logs:
`2022-10-21 15:40:40.798 EET {"error":{"code":400,"message":"[seismic-store-service] Bad Request","status":"BAD_REQUEST"}}`

Sacha Brants

# Utility LS endpoint doesn't work for directories
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/68
2023-03-24T19:11:22Z · Konstantin Khottchenkov

A new test scenario was added for the UTILITY LS endpoint. The feature of filtering the output for only datasets, only folders, or both datasets and folders was added and tested.
The test results show that using the "wmode" parameter with the values "dirs" and "all" (which filter the response to return only directory names, or both datasets and directories, respectively) fails for AWS and ANTHOS. We couldn't check whether Google is also affected because the Google environment is broken altogether, so these tests were disabled for the affected CSPs.
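For reference, a sketch of the call shape being tested; the host and path follow the service's v3 API conventions but should be treated as assumptions:

```python
import requests

# Assumed UTILITY LS call; the "wmode" values "dirs" and "all" are from the issue.
resp = requests.get(
    "https://<host>/api/seismic-store/v3/utility/ls",
    params={"sdpath": "sd://<tenant>/<subproject>", "wmode": "dirs"},
    headers={
        "Authorization": "Bearer <token>",
        "data-partition-id": "<partition>",
    },
)
print(resp.json())  # expected: directory names only (or datasets + dirs for "all")
```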
[Pipeline run](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1458328)

# Move dataset API function into commons
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/commons/-/issues/8
2022-11-22T12:39:26Z · Augustin Pilard Zen

get_file_and_location_from_dataset_registry
get_type -> generalize
get_type_from_file ?
get_type_from_bytes?
create_manifest -> not sure

Augustin Pilard Zen

# Data - SPIKE: Consider Design Options for Supporting Entitlements
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/174
2023-12-19T17:30:47Z · Brian

As a GCZ developer, I want to consider Design Options for Supporting Entitlements, so that we can scope out enhancement tasks to support entitlements.
Create a community group (Shell, Exxon) to discuss requirements and expectations.
Acceptance Criteria:
1. Application users have their service entitlements
2. Need to ensure we can pull entitlements information along as attributes on the layers. We have been told this is possible, so we need to confirm/test; this should allow users to arrange access control on the client side based on these entitlements.

# Registered dataset records output with no “createUser”, “createTime”, “modifyUser”, “modifyTime” properties
https://community.opengroup.org/osdu/platform/system/dataset/-/issues/45
2023-01-12T12:23:25Z · Rustam Lotsmanenko (EPAM)

The response for a created dataset registry, or one requested via the **GET** endpoint `/getDatasetRegistry`, doesn't contain the “createUser”, “createTime”, “modifyUser”, “modifyTime” properties.
~~~
{
"datasetRegistries": [
{
"id": "osdu:dataset--File.Generic:579c89e204bd4e3da1f9025d9a542579",
"version": 1666268695566567,
"kind": "osdu:wks:dataset--File.Generic:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@osdu.osdu-gcp.go3-nrg.projects.epam.com"
],
"owners": [
"data.default.owners@osdu.osdu-gcp.go3-nrg.projects.epam.com"
]
},
"legal": {
"legaltags": [
"osdu-demo-legaltag"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"data": {
"ResourceId": "srn:osdu:file:dc556e0e3a554105a80cfcb19372a62d:",
"ResourceTypeID": "srn:type:file/json:",
"ResourceSecurityClassification": "srn:reference-data/ResourceSecurityClassification:RESTRICTED:",
"ResourceSource": "Some Company App",
"ResourceName": "trajectories - 1000.json",
"ResourceDescription": "Trajectory For Wellbore xyz",
"DatasetProperties": {
"FileSourceInfo": {
"FileSource": "/ef27ad6f-dbc1-458d-8541-1446e3b0685a/05b8dd43f2724532b59e6fc9d724c5d5"
}
}
},
"meta": [
{
"additionalProp1": {},
"additionalProp2": {},
"additionalProp3": {}
}
],
"tags": {}
}
]
}
~~~
If a record is requested directly from the Storage service (by ID) via the `/records:batch` endpoint, these properties exist.
~~~
{
"records": [
{
"data": {
"ResourceId": "srn:osdu:file:dc556e0e3a554105a80cfcb19372a62d:",
"ResourceTypeID": "srn:type:file/json:",
"ResourceSecurityClassification": "srn:reference-data/ResourceSecurityClassification:RESTRICTED:",
"ResourceSource": "Some Company App",
"ResourceName": "trajectories - 1000.json",
"ResourceDescription": "Trajectory For Wellbore xyz",
"DatasetProperties": {
"FileSourceInfo": {
"FileSource": "/ef27ad6f-dbc1-458d-8541-1446e3b0685a/05b8dd43f2724532b59e6fc9d724c5d5"
}
}
},
"meta": [
{
"additionalProp1": {},
"additionalProp2": {},
"additionalProp3": {}
}
],
"id": "osdu:dataset--File.Generic:579c89e204bd4e3da1f9025d9a542579",
"version": 1666268695566567,
"kind": "osdu:wks:dataset--File.Generic:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@osdu.osdu-gcp.go3-nrg.projects.epam.com"
],
"owners": [
"data.default.owners@osdu.osdu-gcp.go3-nrg.projects.epam.com"
]
},
"legal": {
"legaltags": [
"osdu-demo-legaltag"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "rustam_lotsmanenko@osdu-gcp.go3-nrg.projects.epam.com",
"createTime": "2022-10-20T12:24:57.379Z"
}
],
"notFound": [],
"conversionStatuses": []
}
~~~
But Dataset doesn't use the Storage `/records:batch` endpoint to fetch records after dataset registration; instead, the `/records` endpoint is used, which does not provide these properties in the response.
Where the registry is created and where it is requested via the **GET** endpoint:
https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/dataset-core/src/main/java/org/opengroup/osdu/dataset/service/DatasetRegistryServiceImpl.java#L165
Core common method used for fetching records:
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/storage/StorageService.java#L73

# add location on manifest
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/prodml-parser/-/issues/10
2022-10-21T09:33:40Z · Augustin Pilard Zen

# Transformer - Document process to make a new data area in OSDU for GCZ
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/173
2022-10-19T16:33:39Z · Joel Romero

Per Brian, this needs to be a living document that we create/improve as we add additional data types to GCZ - written in a way that someone not on our team could build a new transformer connection/mapping for a new (or private) data area of OSDU over time.