OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

## AWS : allow 'osdu_api.ini' file path configuration
https://community.opengroup.org/osdu/platform/system/sdks/common-python-sdk/-/issues/18 (Valentin Gauthier, 2024-01-17)

For now, it seems that it is not possible to modify the default file path of the *osdu_api.ini* file when using AWS.
As it is required for the AWS configuration, I suggest modifying this line: https://community.opengroup.org/osdu/platform/system/sdks/common-python-sdk/-/blob/master/osdu_api/providers/aws/service_principal_util.py#L63
The modification could be :
```python
config_file_name = os.environ.get("OSDU_API_CONFIG_INI") or "osdu_api.ini"
```
Thus, it will use the "OSDU_API_CONFIG_INI" environment variable, as the *DefaultConfigManager* class does (see https://community.opengroup.org/osdu/platform/system/sdks/common-python-sdk/-/blob/master/osdu_api/configuration/config_manager.py#L73)
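With that change in place, the config path could be overridden per environment. A hypothetical usage sketch (the .ini path below is an example, not a real default):

```shell
# Point the SDK at a custom config file instead of the bundled osdu_api.ini.
export OSDU_API_CONFIG_INI=/etc/osdu/custom_osdu_api.ini

# Any process started from this shell would now pick up the custom path;
# unset, the fallback stays the current default.
echo "Using config: ${OSDU_API_CONFIG_INI:-osdu_api.ini}"
```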
This modification should not break any existing configuration, unless a deployment uses two different *osdu_api.ini* files, one for the *DefaultConfigManager* class and one for the AWS configuration (which does not seem to be good practice anyway).

## sdutil cp - to show checksum comparison after completion of copying the file
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/25 (Debasis Chatterjee, 2023-05-18)

Please consider adding this feature to ensure integrity of data in Seismic Store.
Show checksum of source data file, and the same from the copied file.
Even add the same feature in "sdutil stat". "stat" may also report the size in bytes.
R3M16/Azure/Preship sdutil

"**cp**" command (copying the file):
```
(sdutilenv) C:\seismic-store-sdutil-master>python sdutil cp C:\TEMP\osdu-volve.segy sd://opendes/debasis/volve.segy
Uploading [========================================] **1104999800/1104999800** [100%] in 12:36.1 (1461432.29/s)
Transfer completed
(sdutilenv) C:\seismic-store-sdutil-master>
```
Source data in local disk:
```
(sdutilenv) C:\seismic-store-sdutil-master>dir C:\TEMP\osdu-volve.segy
 Volume in drive C is OS
 Volume Serial Number is 62E2-67ED
 Directory of C:\TEMP
04/24/2021  04:39 AM     1,104,999,800 osdu-volve.segy
               1 File(s)  1,104,999,800 bytes
               0 Dir(s)  25,783,111,680 bytes free
(sdutilenv) C:\seismic-store-sdutil-master>
```
"**stat**" command
```
(sdutilenv) C:\seismic-store-sdutil-master>python sdutil ls sd://opendes/debasis
volve.segy
(sdutilenv) C:\seismic-store-sdutil-master>python sdutil stat sd://opendes/debasis/volve.segy
- Name: sd://opendes/debasis/volve.segy
- Created By: 97pQgJtRFH99Y1KViwFV4GaADxKsIeRG9ZPJ-4PnMb0
- Created Date: Wed Mar 29 2023 00:00:57 GMT+0000 (Coordinated Universal Time)
- **Size: 1.0 GB**
- ReadOnly: False
(sdutilenv) C:\seismic-store-sdutil-master>
```

Milestone: M18 - Release 0.21 (Debasis Chatterjee)

## Remove StrEnum from the code
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/20 (Yan Sushchynski (EPAM), 2023-06-07)

Hello,
I think it is possible to delete `StrEnum` from the dependencies, and replace them with something like:
```python
class YourStrEnum(str, Enum):
pass
```
The enum above behaves the same as `StrEnum`, so it spares us installing an extra dependency.
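A quick sanity check of the drop-in behavior (the `Color` enum here is a made-up example):

```python
from enum import Enum

class YourStrEnum(str, Enum):
    """Stdlib-only replacement for the third-party StrEnum dependency."""
    pass

class Color(YourStrEnum):
    RED = "red"
    GREEN = "green"

# Members are real strings and compare equal to their values:
assert Color.RED == "red"
assert isinstance(Color.RED, str)
assert "I like " + Color.RED == "I like red"
```

(On Python 3.11+, the stdlib also provides `enum.StrEnum` directly.)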
More details here:
https://docs.python.org/3.8/library/enum.html#others

Milestone: M18 - Release 0.21 (Ashish Saxena, Nisha Thakran, Jeyakumar Devarajulu, Priyanka Bhongade)

## ADR: New entitlements membership change events
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/122 (Thiago Senador, 2023-07-14)

## Status
- [X] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
**Context & Scope**
Many OSDU applications need to react to entitlements membership changes, a feature that is [long overdue](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/61) in the entitlements service. In addition, since almost every other OSDU service relies on the entitlements service and caches its data, this notification mechanism could be used to prevent dirty-cache scenarios as described [here](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/121).
The scope of this ADR is the addition of asynchronous pub/sub events signaling the successful completion of the following entitlements service APIs:
```
DELETE /groups/{group_email}
POST /groups/{group_email}/members
DELETE /groups/{group_email}/members/{member_email}
```
**Trade-off Analysis**
The addition of the requested pubsub notification mechanism does not represent a breaking change for any involved API, consequently neither for the consuming applications. It should not introduce any performance degradation either since the event triggering is done asynchronously. Only concerned consuming applications would benefit from this new feature, while it remains completely transparent for others.
**Decision**
Only at the end of a successful operation, trigger the following events for the given entitlements’ APIs:
```
DELETE /groups/{group_email}
"entitlementsChangeEvent": {
  "kind": "groupDeleted",
  "group": "<groupName>",
  "user": "",
  "action": "",
  "modifiedBy": "<user identity>",
  "modifiedOn": "<timestamp>"
}

POST /groups/{group_email}/members
"entitlementsChangeEvent": {
  "kind": "groupChanged",
  "group": "<groupName>",
  "user": "<user>",
  "action": "add",
  "modifiedBy": "<user identity>",
  "modifiedOn": "<timestamp>"
}

DELETE /groups/{group_email}/members/{member_email}
"entitlementsChangeEvent": {
  "kind": "groupChanged",
  "group": "<groupName>",
  "user": "<user>",
  "action": "remove",
  "modifiedBy": "<user identity>",
  "modifiedOn": "<timestamp>"
}
```

Milestone: M19 - Release 0.22

## Integration - Azure Support for M18
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/234 (Levi Remington, 2023-05-11)

Azure leverages a standard for deploying OSDU services into their environment. The expectation is that Azure documentation and helm charts are written for GCZ specifically, and hosted in the [helm-charts-azure](https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure) repo.
Top priority: Get Kubernetes pipelines up and running.
What are helm charts?
- Way of defining resources required to deploy service to Kubernetes
- A set of instructions for building a singular package from a service and deploying that into Kubernetes
Background per CSP:
- Each CSP has different helm charts
- Azure hosts their helm charts in one repo, whereas AWS, for instance, hosts theirs in each service's respective repo
Resources:
- Bryan J Dawson - member of the Architecture SubCommittee - willing to provide guidance on OSDU architecture standards
- Chad Leong - relay any pain points of this process back to Chad so a process can be developed to aid new services into this integration in the future
Acceptance Criteria:
- GCZ Helm Charts created for Azure
- Process documented and relayed to Chad Leong

## Implementing DDMSDatasets[] standardize content data
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/97 (Chad Leong, 2023-03-30)

DDMS references to optimized content were found to be created ad-hoc and outside the work-product-component schemas.
Following the [original observation](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/7), an [ADR was created](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/10) which standardizes the optimized content references from work-product-component entity types. Over time, DDMSs are expected to implement optimized content references using the `data.DDMSDatasets[]` property and support migration.

## Implementing DDMSDatasets[] standardize content data
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/64 (Chad Leong, 2023-04-17)

DDMS references to optimized content were found to be created ad-hoc and outside the work-product-component schemas.
Following the [original observation](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/7), an [ADR was created](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/10) which standardizes the optimized content references from work-product-component entity types. Over time, DDMSs are expected to implement optimized content references using the `data.DDMSDatasets[]` property and support migration.

## EDS - Auto generate Refresh Token for RefreshTokenKeyName OAuth Flow type when expired
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/19 (Priyanka Bhongade, 2023-05-04)

- [x] Identify the changes
- [x] create a function/POC to handle Auto generation of Refresh Token for RefreshTokenKeyName OAuth Flow type when expired
- [x] create a unit test case
- [x] Test the functionality
- [x] code review

Milestone: M17 - Release 0.20 (Priyanka Bhongade)

## Invalidate derived data when parent record is deleted
https://community.opengroup.org/osdu/platform/system/storage/-/issues/170 (An Ngo, 2023-03-31)

Derived data (records with ancestry/parent) inherit the legal tags from the parent record(s).
So when at least one of the parent records is deleted, the child records are no longer valid. Without this step, records with invalid legal tags (or no legal tag) still exist in the system.

## Read Only Root File System for Seismic Pods Crashes
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/96 (Abhay Joshi, 2023-04-12)

When making a change to give the Os-Seismic-Store pods a read-only root filesystem, the pods seem to crash without any kubectl logs whatsoever. We suspect the application is writing to the pod filesystem, but we are unable to see where things are being written. We would like to fix this issue as it is a security concern.

## EDS - Adding Logger to give details about Osdu_ingest run id and Sample fetched data record
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/18 (Priyanka Bhongade, 2023-03-20)

- Add Logger to display Osdu_ingest run id in the below format:
Osdu_ingest runId=xxxx
- Correction in logger while displaying sample fetched data record:
currently the logger has "Record 1 :"
To make the message clearer, change the display message in the logs to "Displaying only one Sample Record".

Milestone: M17 - Release 0.20 (Nisha Thakran, Priyanka Bhongade)

## ADR: API to purge a batch of storage records
https://community.opengroup.org/osdu/platform/system/storage/-/issues/169 (Mandar Kulkarni, 2023-05-02)
New API in Storage service to purge a batch of records.
## Status
- [X] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The OSDU Storage service provides two ways to delete a record. One way is to logically delete the record, in which case a record with the same id can be revived later because its version history is maintained. The other way is to permanently delete the record (called purging), in which case the record's version history is deleted too. This operation cannot be undone, meaning purged records cannot be revived.
In both types of deletions, the record content cannot be accessed using storage or search service.
The storage service provides separate APIs for logical deletion (`POST /records/{id}:delete`) and purging of records (`DELETE /records/{id}`).
The storage service provides an API for logical deletion of a batch of records (`POST /records/delete`), but such an API is not available for purging records.
The proposal is to provide an API on the storage service to support purging a batch of records, with a maximum batch size of 500.
Only the record IDs passed in the request body will be deleted, not including any linked records or files if they exist. Cleaning up all the linked records, such as child records, records in the relationship block, and actual data (files ingested via the workflow service), would not be in the scope of this API; it would be the user's responsibility.
The new bulk API will work on active as well as non-active (soft-deleted) records, similar to the existing purge API.
Purging of records can be performed by the owner of the records and the owner should be part of users.datalake.admins group.
The API response would be similar to the response of the logical deletion API, that is `POST /records/{id}:delete`.
In case of partial success, the response code would be 207 and the not-deleted record IDs would be listed in the response.
## Tradeoff Analysis
In the absence of an API to purge a batch of records, users would have to call the DELETE API once for every record and it would increase the number of calls to the storage service.
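For illustration, the per-record workaround might look like the sketch below; the base URL and headers are placeholders, and the `session` parameter (any object with a requests-style `.delete(url, headers=...)`) exists only to keep the sketch self-contained:

```python
import math

def purge_records_one_by_one(base_url, headers, record_ids, session):
    """Purge records via the existing single-record API (DELETE /records/{id}).
    Returns the IDs that failed to purge."""
    failed = []
    for record_id in record_ids:
        resp = session.delete(f"{base_url}/records/{record_id}", headers=headers)
        if not resp.ok:
            failed.append(record_id)
    return failed

def calls_saved_by_batching(n_records, batch_size=500):
    """HTTP calls today (one per record) minus calls with the proposed batch API."""
    return n_records - math.ceil(n_records / batch_size)
```

For example, purging 1200 records would take 1200 calls today but only 3 calls with a batch size of 500.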
## Decision
Provide an admin-only API to purge a batch of records, with maximum batch size of 500 records.
The Open API specs for storage service with new API is here:
[storage_openapi_batchpurge.yaml](/uploads/1da3f68253419edd693a87d706049565/storage_openapi_batchpurge.yaml)
## Consequences
- New API on Storage service would be available.
- Documentation of Storage service should be modified with details for the new API.

## Stratigraphy and WellboreMarkerSet - questions and concerns
https://community.opengroup.org/osdu/platform/data-flow/data-loading/open-test-data/-/issues/89 (Debasis Chatterjee, 2023-10-25)

Refer to this excellent documentation (worked example): https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/Examples/WorkedExamples/Reservoir%20Data/Stratigraphy/README.md
For wellbore 15/3-7,
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
Looking at populated WellboreMarkerSet record in https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/work-product-component/WellboreMarkerSet.1.2.1.json
The populated load manifest of the sample data (TNO marker data) does not utilize an adequate number of properties from the Markers array: https://community.opengroup.org/osdu/platform/data-flow/data-loading/open-test-data/-/blob/master/rc--3.0.0/4-instances/TNO/work-products/markers_1_1_0/load_top_1.1.0_1001_csv.json
```
"Markers": [
{
"MarkerName": "QUATER. UNDIFF.",
"MarkerMeasuredDepth": 0.0
},
{
"MarkerName": "Breda Formation",
"MarkerMeasuredDepth": 282.0
},
{
"MarkerName": "Veldhoven Clay Member",
"MarkerMeasuredDepth": 501.0
},
```
It would be nice to get suitable sample data, JSON files/load-manifests (for related entities), that actually match this excellent documentation. Some hints are in the worked example, such as for the "Gudrun" Stratigraphic Column: https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/Examples/WorkedExamples/Reservoir%20Data/Stratigraphy/README.md#stratigraphic-column So it is necessary to convert this information into a complete (loadable) package to get a proper reference.
My concerns: Markers.MarkerName is "free text", open to human error. When a Data Loader populates data from many wells in this NPD field, they may use "Top of Draupne" for one well and "Top - Draupne" for another. Use case: to obtain a contour map of "Top of Draupne", it becomes necessary to get MD (or TVD-SS), X, Y from all wellbores.
Question -
1. FeatureTypeID and FeatureName. FeatureType can be "Top" or "Base"; those values are clear. FeatureName: why is this left as "free text" rather than a link to an existing record in some other parent entity?
https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/reference-data/FeatureType.1.0.0.md Description = Used to describe the type of features. Common values being Top, Base, OWC, Fault etc.
Is the property name unambiguous? Is this more of a "Contact type"? In any case, what would be a typical value of FeatureName in the NPD example when we have to build the Markers array for NPD Wellbore 15/3-7?
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
1. In the Markers array there are some properties, such as MarkerTypeID and Missing. Being an array, we can populate information for several markers within one specific Wellbore.
Now, there is also provision for a new property/block,
AvailableMarkerProperties, such as MissingThickness. It is not obvious how this will be used for the multiple elements present in the Markers array.
1. A WellboreMarkerSet is linked to one StratigraphicColumn. The link is from the overall record and not from individual array elements.
In any case, what would be a typical value of StratigraphicColumn in the NPD example when we have to build the Markers array for NPD Wellbore 15/3-7? Leave that as "Gudrun" for the column overall?
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
cc - @gehrmann and @keith_wall for information

## Storage should allow empty data block upon record creation/update
https://community.opengroup.org/osdu/platform/system/storage/-/issues/168 (An Ngo, 2023-03-22)

The Storage PUT API should allow an empty data block upon record creation/update if that is compliant with the schema being defined.
Currently, the data block is required:
`data: {}`
This is a breaking change since it changes the behavior of the API.
Indexer service needs to be checked to ensure empty data block is being handled correctly.

## Misleading message in Xcom summary when legal tag is missing
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/151 (Debasis Chatterjee, 2023-03-16)

I was running a simple test case in AWS/M16/Preship.
I was getting this message:
Now, I get failure. \[{'id': 'osdu:reference-data--FacilityEventType:DC13MAR', 'kind': 'osdu:wks:reference-data--FacilityEventType:1.0.0', 'reason': '400 Client Error: Bad Request for url: [http://os-storage.osdu-services:8080/api/storage/v2/records'}](http://os-storage.osdu-services:8080/api/storage/v2/records'%7D)\]
Turns out (thanks to AWS Support Nazeem Akbar Ali) that this is because the legal tag was not defined, and he found the reason by checking the relevant log file.
See details here -
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/470#note_207244

## (SpatialPoint/SpatialArea) What to do with big datasets?
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/resqml-parser/-/issues/1 (Valentin Gauthier, 2023-03-15)

For now, the entire points list must be loaded into a single list to compute the bounding box and the central point.
This could fail if the dataset is too large.
We should be able to handle that.
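One way to handle large point sets is to fold the bounding box and central point over a stream instead of materializing the whole list. A sketch in O(1) memory, treating the central point as the centroid (the bounding-box midpoint is an alternative reading):

```python
from typing import Iterable, Tuple

def bbox_and_center(points: Iterable[Tuple[float, float]]):
    """Single pass over a point stream: returns ((min_x, min_y, max_x, max_y),
    (centroid_x, centroid_y)) without ever holding all points in memory."""
    min_x = min_y = float("inf")
    max_x = max_y = float("-inf")
    sum_x = sum_y = 0.0
    n = 0
    for x, y in points:
        min_x, max_x = min(min_x, x), max(max_x, x)
        min_y, max_y = min(min_y, y), max(max_y, y)
        sum_x += x
        sum_y += y
        n += 1
    if n == 0:
        raise ValueError("empty point stream")
    return (min_x, min_y, max_x, max_y), (sum_x / n, sum_y / n)
```

The `points` argument can be a generator that reads from the HDF5/EPC arrays chunk by chunk, so the full coordinate list never needs to exist at once.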
## Improvements for execution context
https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/issues/17 (Sacha Brants, 2024-03-26)
Today, the execution context requires:
```json
{
"Payload": {
"AppKey": "",
"data-partition-id": "partition"
},
"id_token": "",
"persistent_id": "filename.vds",
"vds_url": "sd://partition/sub-project",
"work_product_id": "",
"file_record_id": ""
}
```
This could be simplified to require only work_product_id and file_record_id, as all the information needed is present in those records.
Suggested new execution context:
```json
{
"data-partition-id": "partition",
"work_product_id": "",
"file_record_id": ""
}
```
The persistent_id would be generated similarly to what the SEG-Y to ZGY DAG does: the output file path is generated from the input by inserting a GUID after the file name and replacing the .sgy extension with .vds.
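A sketch of that naming rule (the `-` separator before the GUID is an assumption; the real DAG may join the pieces differently):

```python
import uuid

def make_persistent_id(segy_url: str) -> str:
    """Insert a GUID after the file name and swap the .sgy extension for .vds."""
    if not segy_url.endswith(".sgy"):
        raise ValueError(f"expected a .sgy path, got: {segy_url}")
    stem = segy_url[: -len(".sgy")]
    return f"{stem}-{uuid.uuid4()}.vds"
```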
Note, this removes the need for id_token.

Assignee: Yan Sushchynski (EPAM)

## Well Log record type - does not link to existing Reference data for Curve Types
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/65 (Debasis Chatterjee, 2023-03-13)

I noticed that the Parser shows a synthetic/random number for Log curve IDs.
It does not attempt to match with existing entries in LogCurve.
"LogCurveTypeID": "namespace:reference-data--LogCurveType:BakerHughesInteq:A08A:",
What are your thoughts? Should this be considered a gap?
I can create issue for tracking.
Copying to others who tested this feature.
Bonus requirements - populate LogCurveFamilyID and LogCurveMainFamilyID.
```
"LogCurveBusinessValueID": "namespace:reference-data--LogCurveBusinessValue:High:",
"LogCurveMainFamilyID": "namespace:reference-data--LogCurveMainFamily:Acoustic:",
"LogCurveFamilyID": "namespace:reference-data--LogCurveFamily:Acoustic%20Amplitude:"
```
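For illustration, this is the kind of lookup the parser could perform instead of emitting synthetic IDs; the mapping table below is entirely hypothetical, and a real implementation would resolve against the ingested reference-data--LogCurveFamily records:

```python
# Hypothetical mnemonic -> LogCurveFamily mapping, for illustration only.
CURVE_FAMILY_BY_MNEMONIC = {
    "ROP": "namespace:reference-data--LogCurveFamily:Rate%20of%20Penetration:",
    "GR": "namespace:reference-data--LogCurveFamily:Gamma%20Ray:",
}

def resolve_log_curve_family(mnemonic: str):
    """Return a LogCurveFamily reference for a known mnemonic, else None,
    so the record carries either a real link or an explicit null."""
    return CURVE_FAMILY_BY_MNEMONIC.get(mnemonic.strip().upper())
```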
Excerpt from WellLog work-product component record created by the Parser.
```
{
"IsProcessed": true,
"LogCurveMainFamilyID": null,
"DateStamp": "2023-03-12T13:14:48.039219+0000",
"LogCurveFamilyID": null,
"CurveID": "92c731a9-ae27-49d8-a246-27ddff7a1ad1",
"TopDepth": 499.0,
"CurveVersion": null,
"InterpreterName": null,
"CurveQuality": null,
"NullValue": null,
"Interpolate": true,
"DepthUnit": "odesprod:reference-data--UnitOfMeasure:m:",
"DepthCoding": null,
"Mnemonic": "ROP",
"BaseDepth": 509.01,
"LogCurveTypeID": null,
"LogCurveBusinessValueID": null,
"CurveUnit": "odesprod:reference-data--UnitOfMeasure:m:"
},
```

## EDS DMS - Getting Bulk data from Katalyst Wrapper and choosing node of delivery
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/eds-dms/-/issues/12 (Priyanka Bhongade, 2023-03-13)

Katalyst is developing this service to support requests for bulk data (ex: DLIS, SEG-Y files) from an Operator's OSDU instance.
The critical thing for Katalyst is to get the ID (in this example, it is 10338269).
With that information, we are able to "place an order" for the requested file.
The requester will receive an Order ID (in this example it is 621295), which is good for tracking actual delivery of the requested data.
However, there are two more pieces which would be really useful when an OSDU instance sends a "request for bulk data" to Katalyst.
1. Email ID of requester.
2. Choice of Delivery node. There is often arrangement between Katalyst and its client to deliver data via SFTP or directly to a cloud location. So, a list (ex: 10 for delivery node, 20 for delivery node2…) can be delivered ahead of time. The requester simply will add the choice (delivery node=10) in the request payload.
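Put together, the request payload could then carry both pieces; the two new field names below are made up for illustration and would need to be agreed with Katalyst:

```json
{
  "datasetRegistryids": ["katalyst:dataset--File.Generic:10338269"],
  "requesterEmail": "requester@example.com",
  "deliveryNode": 10
}
```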
POST {{osdu_endpoint_url}}/osdu-eds/v1/dataset/getRetrievalInstructions

Request Body:
```
{
  "datasetRegistryids": ["katalyst:dataset--File.Generic:10338269"]
}
```
Response Body:
```
{
  "providerKey": "katalyst",
  "results": [
    {
      "datasetRegistryId": "katalyst:dataset--File.Generic:10338269",
      "retrievalProperties": {
        "outputFile": "196698-24_CR-GQK-121-depth.segy",
        "fileSize": 2.94,
        "orderId": "621295",
        "orderItemId": "7551725",
        "kdmItemId": "10338269",
        "orderStatus": "NEW",
        "fileSizeUOM": "MB",
        "outputNodeId": "53086",
        "priority": "Normal",
        "outputNodeDescription": "Normal retrievals (Calgary Test1)",
        "fileType": "SEGY"
      }
    }
  ]
}
```

Assignees: Thulasi Dass Subramanian, Srinivasan Narayanan

## OSDU Search Endpoint Response
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/124 (Rex Von Brixon Apa-ap, 2023-03-13)

If the ingested property is of type "string" and no record is ingested, search returns a response of **None** on that property.
However, if the ingested property is non-string and no record is ingested, search does not return anything for that property.
We are expecting the property to reflect with response of **None** even for non-strings.
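A small illustration of the reported asymmetry (hypothetical search hits; `Name` is a string property, `Depth` a number property):

```python
# Observed behavior: the string property comes back as None,
# while the number property is simply absent from the hit.
string_hit = {"data": {"Name": None}}
number_hit = {"data": {}}

# Expected behavior: both un-ingested properties present as None.
expected_hit = {"data": {"Name": None, "Depth": None}}

assert "Name" in string_hit["data"]       # string property present as None
assert "Depth" not in number_hit["data"]  # non-string property missing
assert expected_hit["data"]["Depth"] is None
```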
Documentation of the test: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M15/Test_Plan_Results_M15/Manifest_Ingestion/M15_AWS_Manifest_Ingestion_custom-schema_Rex.docx