OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

---

Input validation on the API
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/eds-dms/-/issues/13
2023-04-06 · Okoun-Ola Fabien Houeto

We need clear documentation of the approach to input validation. While there may be no documentation or guideline at the forum level, EDS could document its own approach to input validation. See https://community.opengroup.org/osdu/platform/system/storage/-/issues/51#note_39725 and https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/95#note_149265

---

Pre-shipping: AWS CloudWatch does not show correct Container Mapping and No Relevant Log From Any API runs
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/448
2023-04-06 · Naufal Mohamed Noori

I am currently testing the non-destructive operational procedure for AWS. The test involves monitoring logs obtained from the AWS console (CloudWatch) for API runs from Postman (i.e. the search, ingestion, or storage API).
I encountered two peculiar issues:
a) I don't see any relevant logs retrieved from CloudWatch --> Log Groups --> /aws/containerinsights/r3-m16-eks-main-cluster/application, either for os-search (search API calls made from Postman) or for the storage API. In the previous milestone release, I was able to see all related logs in CloudWatch when I ran the search API from Postman.
![image](/uploads/7ebf9d0969056dee6819257a1f73e7c8/image.png)
b) The container insights map does not show r3m16 resources, only r3m12. This is odd: in the log group I can see that the r3m16 resources were built, but the container map shows nothing related to the current release.
![image](/uploads/799f012d870a1126f1e5e0f187b68cf2/image.png)

Milestone: M16 - Release 0.19

---

sdutil cp doesn't show status on download
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/23
2023-04-05 · Bryan Dawson

When using the sdutil cp command to copy from the cloud to local disk, it does not show how far along the copy is (neither bytes transferred nor percent done).
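A minimal sketch of the kind of progress reporting being requested, assuming a download loop that yields byte chunks; the chunk generator and the tqdm dependency are assumptions, not sdutil's actual internals:

```
# Illustrative only: report bytes done and percent while writing chunks to disk.
from tqdm import tqdm

def download_with_progress(chunks, total_bytes, dest_path):
    """chunks: any iterable of byte strings; total_bytes: expected size."""
    with open(dest_path, "wb") as out, tqdm(
        total=total_bytes, unit="B", unit_scale=True, desc="Downloading"
    ) as bar:
        for chunk in chunks:
            out.write(chunk)
            bar.update(len(chunk))  # tqdm renders both byte count and percent
```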
---

Pagination not supported by IBM and AWS for DATASET LIST (POST) endpoint
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/98
2023-04-05 · Pratiksha Shedge

A new DATASET LIST (POST) endpoint has been added that supports pagination. It should return the list of datasets plus a nextPageCursor for fetching the next list of datasets. However, IBM and AWS do not support pagination for this endpoint, which causes the pagination tests to fail during pipeline runs.

Pipeline runs:
- IBM: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1823012
- AWS: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1842803
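For reference, a minimal sketch of the cursor contract the failing tests exercise; the URL path, request payload, and all field names other than nextPageCursor are assumptions:

```
# Hypothetical client-side paging loop for the DATASET LIST (POST) endpoint.
import requests

def list_all_datasets(base_url, tenant, subproject, token):
    """Follow nextPageCursor until the service stops returning one."""
    datasets, cursor = [], None
    while True:
        body = {"limit": 100}
        if cursor:
            body["cursor"] = cursor  # assumed request field name
        resp = requests.post(
            f"{base_url}/dataset/tenant/{tenant}/subproject/{subproject}/list",
            headers={"Authorization": f"Bearer {token}"},
            json=body,
        )
        resp.raise_for_status()
        page = resp.json()
        datasets.extend(page.get("datasets", []))
        cursor = page.get("nextPageCursor")
        if not cursor:  # a missing cursor ends the loop
            return datasets
```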
---

Project Vulnerability Scanning: osdu/platform/data-flow/data-loading/osdu-cli
https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/132
2023-04-05 · desman bolden

**Why did I receive this?**
In an effort to increase security on the OSDU platform, we must ensure that all projects containing source code are scanned on a regular basis. You are receiving this notification because you have been identified as an owner of a GitLab project that isn't being scanned for vulnerabilities.
**What do I need to do?**
Please include gitlab-ultimate.yml (https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/scanners/gitlab-ultimate.yml) in your project so it can be scanned for vulnerabilities.
**Project(s) in Scope:**
osdu/platform/data-flow/data-loading/osdu-cli

Milestone: M17 - Release 0.20 · Assignee: Chad Leong

---

Avoid proliferation of Storage service/record in this DDMS, instead leverage facility from Core Services
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/issues/9
2023-04-05 · Debasis Chatterjee

Noted these endpoints in the Well Delivery DDMS:
PUT {{welldeliveryURL}}/storage/v1/well for master record Well
PUT {{welldeliveryURL}}/storage/v1/wellboretrajectory for work-product component WellboreTrajectory
There are at least two problems with this approach.
1. Additional source code to maintain and manage over time.
2. It does not leverage rich features (e.g. integrity checking) offered by mainstream initiatives such as Manifest-based Ingestion; see the sketch after this list.
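For comparison, a minimal sketch of writing the same master record through the core Storage service instead of a DDMS-local endpoint. PUT /api/storage/v2/records is the core Storage record-ingestion API; the host, ACL, legal, and data values are illustrative assumptions:

```
# Hypothetical example of ingesting a Well record via core Storage.
import requests

record = {
    "kind": "osdu:wks:master-data--Well:1.0.0",
    "acl": {"viewers": ["data.default.viewers@opendes.example.com"],
            "owners": ["data.default.owners@opendes.example.com"]},
    "legal": {"legaltags": ["opendes-public-usa"],
              "otherRelevantDataCountries": ["US"]},
    "data": {"FacilityName": "Well-123"},
}
resp = requests.put(
    "https://osdu.example.com/api/storage/v2/records",
    headers={"Authorization": "Bearer <token>", "data-partition-id": "opendes"},
    json=[record],  # Storage accepts a batch of records in one call
)
resp.raise_for_status()
```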
cc @elandre for information

---

[Azure R3M16] sdutil: SEGY file appears to be present in SD-STORE, but the file transmission was unsuccessful
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/26
2023-04-04 · kenneth liew

I failed to transfer my SEGY file to seismic store because I supplied the wrong local file path, yet the file was still created in seismic store.
I ran the commands "sdutil stat" and "sdutil cp" for your reference. Below is the command session:
```
(sdutilenv) C:\Sdutil\AZURE_R3M16>python sdutil auth login
Successfully logged into Azure SDUTIL.
(sdutilenv) C:\Sdutil\AZURE_R3M16>python sdutil cp C:\Users\kuanl\Desktop\SegY\SampleSegy\UP000000001__UP123456__TST-SEGY-UPLOAD-TST__1000022.sgy sd://opendes/kennethv3/TestFailed2.sgy
Wrong Command: C:\Users\kuanl\Desktop\SegY\SampleSegy\UP000000001__UP123456__TST-SEGY-UPLOAD-TST__1000022.sgy is not a valid local file name or the local file does not exist.
For more information type "python sdutil cp" to open the command help menu.
(sdutilenv) C:\Sdutil\AZURE_R3M16>python sdutil ls sd://opendes/kennethv3
SegyTest.segy
Seismic_data.segy
TestFailed.sgy
TestFailed1.sgy
TestFailed2.sgy
UP000000001__UP123456__TST-SEGY-UPLOAD-TST__100001.sgy
UP000000001__UP123456__TST-SEGY-UPLOAD-TST__100001.sgy.sgy
UP000000002__UP123456__TST-SEGY-UPLOAD-TST__100002.sgy
UP000000002__UP123456__TST-SEGY-UPLOAD-TST__10001.sgy
(sdutilenv) C:\Sdutil\AZURE_R3M16>python sdutil stat sd://opendes/kennethv3/TestFailed2.sgy
- Name: sd://opendes/kennethv3/TestFailed2.sgy
- Created By: 97pQgJtRFH99Y1KViwFV4GaADxKsIeRG9ZPJ-4PnMb0
- Created Date: Tue Apr 04 2023 09:07:48 GMT+0000 (Coordinated Universal Time)
- ReadOnly: False
(sdutilenv) C:\Sdutil\AZURE_R3M16>python sdutil cp sd://opendes/kennethv3/TestFailed2.sgy C:\Users\kuanl\Desktop\SegY\SampleSegy\TestFailed2.sgy
[423] [seismic-store-service] opendes/kennethv3/TestFailed2.sgy is locked for write [RCODE:WL86400]
```

---

Refactor DAG related code
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/64
2023-04-04 · Yan Sushchynski (EPAM)

### Introduction
There is DAG-related code that is executed in the container during a DAG run. The code is [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/blob/master/energistics/src/witsml_parser/main.py) and [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/blob/master/energistics/src/witsml_parser/energistics/libs/create_energistics_manifest.py). This code looks messy and outdated, and requires refactoring.
### What should be done?
1. Update the code to make it work with the most recent `osdu-*` Python libs. The dependencies are here: https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/blob/master/build/requirements.txt
2. Delete deprecated functionality of processing files by `preload_file_path` [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/blob/master/energistics/src/witsml_parser/energistics/libs/create_energistics_manifest.py#L314).
3. Add a static-analysis step to the CI/CD pipeline.
4. Add the possibility to pass the user's access/ID token to the DAG (see the sketch after this list).
5. General refactoring, because the code is currently messy (a lot of "ifs" and too many lines of code in a single function).

Milestone: M17 - Release 0.20 · Participants: Vadzim Kulyba, harshit aggarwal, Walter Detienne peysson, Marc Burnie [AWS] · Assignee: Vadzim Kulyba
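For item 4, one possible shape (a sketch assuming an Airflow-style context dict and a conf key named `authorization_token`; neither is necessarily what the parser will adopt):

```
# Hypothetical helper: prefer a user token passed at DAG-trigger time over
# the service principal token. The conf key name is an assumption.
def get_user_token(context, fallback_sp_token):
    dag_run = context.get("dag_run")
    conf = getattr(dag_run, "conf", None) or {}
    return conf.get("authorization_token") or fallback_sp_token
```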
---

ADR: File Service design
https://community.opengroup.org/osdu/platform/system/home/-/issues/47
2023-03-31 · Stephen Whitley (Invited Expert)

# Decision Title
## Status
- [x] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
## Context & Scope
Files are one of the sources of data for ingestion. In OSDU R2 there is a file service (not tagged as part of the OSDU R2 release) that provides a signed location to upload files, and there is also a delivery service that provides signed URLs to download files.
What is missing today is a more holistic management of the file as an entity. Managing the file as an entity could include:
- Managing metadata related to files, meaning all needed CRUD operations on the metadata of a file.
- A type-safe way of managing the file metadata.
- Discoverability of files based on metadata.
- Enabling file-based downstream Ingestion and enrichment workflows.
- Secure, access-controlled downloading of a file identified by its metadata record ID.
- One service (consolidation) that handles all the functionality (uploading, downloading, discovery) for file-type data.
### In-Scope:
Retaining the existing upload functionality in file service.
Introduction of new APIs for posting metadata for a single file and retrieving metadata using the metadata record ID.
Rationalizing the file service by bringing the download capabilities of the delivery service into the file service.
### Out of Scope:
Supporting additional upload and download use cases, such as folders or batches of files, is out of scope for this ADR. These are all valid use cases, and the file service can be enhanced incrementally to support them.
## Decision
The decision is to consolidate, or rationalize, all the file management functionality that exists across various OSDU R2 services into a single service (File Service), and along with that to enable multi-partition support, access control, and compliance on the file data.
![File-Service-HLD](/uploads/a1c62f507c38f97305fb0c51349a85e6/File-Service-HLD.PNG)
## Rationale
The file service would be more complete in terms of managing the file as an entity. It would be formed by leveraging the capabilities of all existing services, which helps us re-use what exists and extend those capabilities.
## Consequences
A single service will be responsible for managing all aspects of a file. It will provide capabilities to upload, download, and manage metadata for files. Some modifications to existing functionality are proposed below:
| Functionality | API | Status | Capability |
|---------------------|-------------------------|--------|--------------------------------------------------|
| Download | /files/{id}/downloadURL | New | Get signed URL for download based on metadata Id |
| Upload | /files/uploadURL | New | Get signed upload location. Partition aware. |
| Metadata Management | /files/metadata         | New    | Post metadata for the file                       |
| | /files/{id}/metadata | New | Get Metadata record by id |
The metadata-posting workflow would be:
![FileServiceADRFlow](/uploads/37a0bfbd3a4971167c01bb0fb46030df/FileServiceADRFlow.png)
The workflow ensures movement of files from a landing zone to a persistent zone. These zones are partition-specific and can be seen as:
- Landing Zone - the area users can use to stage/land their files.
- Persistent Zone - the area under control of the data platform, enabling secure delivery of discoverable files.
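To make the consolidated flow concrete, a minimal sketch of the three calls from the table above; the host, headers, and request/response field names are illustrative assumptions:

```
# Hypothetical end-to-end use of the proposed File Service APIs.
import requests

BASE = "https://osdu.example.com/api/file/v2"  # assumed host and base path
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "opendes"}

# 1. Get a signed upload location (landing zone) and push the file bytes.
loc = requests.get(f"{BASE}/files/uploadURL", headers=HEADERS).json()
with open("well_log.csv", "rb") as f:
    requests.put(loc["SignedURL"], data=f)

# 2. Post a metadata record for the file; the service moves it from the
# landing zone to the persistent zone and returns the metadata record id.
meta = requests.post(f"{BASE}/files/metadata", headers=HEADERS,
                     json={"FileSource": loc["FileSource"]}).json()
record_id = meta["id"]

# 3. Later, resolve the metadata record id to a signed download URL.
download = requests.get(f"{BASE}/files/{record_id}/downloadURL", headers=HEADERS)
print(download.json()["SignedUrl"])
```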
### API Spec
The API spec of the enhanced file service can be found at https://community.opengroup.org/osdu/platform/system/file/-/blob/fileservice-api-enhancement/docs/file-service_openapi.yaml (fileservice-api-enhancement branch).
# Tradeoff Analysis - Input to decision
1. Need for a holistic file management service.
2. Existing functionality covers only upload and download, and is spread across different services.
3. A type-safe way of ingesting file metadata into the system.
4. Integration of file ingestion with other downstream workflows.

---

ACLs being overridden in CSV ingestor
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/73
2023-03-31 · Gauri Chitale

The IDs of generated records are predetermined using natural keys; see
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/blob/master/csv-parser-core/src/main/java/org/opengroup/osdu/csvparser/handler/handlers/IdHandler.java
Now there are scenarios where a user tries to update a record that was created by another user. The updating user may not have access to the ACL associated with the existing record, but because our ingestion jobs use a service principal token that has all ACL accesses, the update operation goes through. This is not the expected behavior: the next user can update the data as well as the ACL, which could result in total data loss for the original user. A sketch of why the IDs collide is below.

Milestone: M13 - Release 0.16
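Illustrative only: how natural-key-derived IDs collide across users. The hashing scheme below is an assumption for demonstration, not the parser's actual IdHandler logic:

```
# Two users ingesting the same natural keys get the same record id, so the
# second ingestion updates (and can re-ACL) the first user's record.
import hashlib

def record_id(partition, kind, natural_keys):
    digest = hashlib.sha256(
        "|".join(f"{k}={v}" for k, v in sorted(natural_keys.items())).encode()
    ).hexdigest()[:32]
    return f"{partition}:{kind}:{digest}"

a = record_id("opendes", "work-product-component--WellLog", {"WellboreID": "wb-1"})
b = record_id("opendes", "work-product-component--WellLog", {"WellboreID": "wb-1"})
assert a == b  # same id regardless of which user runs the ingestion
```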
---

Invalidate derived data when parent record is deleted
https://community.opengroup.org/osdu/platform/system/storage/-/issues/170
2023-03-31 · An Ngo

Derived data (records with ancestry/parent) inherit the legal tags from the parent record(s). So when at least one of the parent records is deleted, the child records are no longer valid. Without this step, records with invalid legal tags (or no legal tag at all) remain in the system.

---

e2e tests: setup step must create the subproject
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/1
2023-03-30 · Rucha Deshpande

The e2e tests assume that a subproject exists. Just as some files are uploaded in the 'setup' step, the subproject must also be created as part of the setup step here:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/master/test/e2e/conftest.py

Participants: Rucha Deshpande, Diego Molteni, Yunhua Koglin · Assignee: Rucha Deshpande
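A possible shape for the missing setup step; the fixture name, the subproject path, and the `sdutil mk` invocation are assumptions, not the repository's actual conftest.py:

```
# Hypothetical session-scoped fixture creating the subproject before tests.
import subprocess
import pytest

@pytest.fixture(scope="session", autouse=True)
def subproject():
    path = "sd://opendes/e2e-tests"  # assumed tenant/subproject
    subprocess.run(
        ["python", "sdutil", "mk", path, "--idtoken", "<token>"],
        check=False,  # tolerate "already exists" on reruns
    )
    return path
```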
---

AWS upload/download implementation does not respect the applied manifest
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/8
2023-03-30 · Diego Molteni

The upload/download storage method in the AWS implementation stores the object as path/dataset-name without respecting the applied "GENERIC" manifest, which expects blocks to be numbered from 0 to N. sdutil uploads the object as a single block, so it must be named "0" or consumer applications won't be able to read it back (according to the generic manifest).

Change `object_name = f"{s3_folder_name}/{dataset.name}"` to `object_name = f"{s3_folder_name}/0"` in both the upload and download methods.
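A sketch of the suggested fix in context, assuming a boto3-based upload helper; the surrounding function and variable names are illustrative, and only the object_name change comes from the issue:

```
# Hypothetical upload helper with the corrected block naming.
import boto3

def upload_block(bucket, s3_folder_name, local_path):
    """Store the single uploaded block under ".../0" so readers following
    the GENERIC manifest (blocks numbered 0..N) can find it."""
    s3 = boto3.client("s3")
    object_name = f"{s3_folder_name}/0"  # was: f"{s3_folder_name}/{dataset.name}"
    s3.upload_file(local_path, bucket, object_name)
```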
Assignee: Yunhua Koglin

---

sdutil cp (GCP) "from cloud to desktop" - shows error although copy is successful
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/9
2023-03-30 · Debasis Chatterjee

Originally reported in the Preship environment:
https://gitlab.opengroup.org/osdu/subcommittees/ea/projects/pre-shipping/home/-/issues/225
See error 400, although the copying step is successful.
`(sdutilenv) C:\seismic-store-sdutil-master>python sdutil cp sd://dc-test2/dc-proj2/test/osdu-volve2.3f5cdc2f-d6fc-4437-838e-6d3df5f10e00.zgy C:\temp\osdu-volve-19aug.zgy --idtoken=%ID_TOKEN%`
> - Downloading Data [ 0% | | 1.00M/1.44G - 00:00|08:01 - 3.21MB/s ]
> - Downloading Data [ 17% |████████▏ | 256M/1.44G - 00:12|00:58 - 21.8MB/s ]
> - Downloading Data [ 17% |████████▏ | 256M/1.44G - 00:12|01:00 - 21.2MB/s ]
> - Downloading Data [ 17% |████████▏ | 256M/1.44G - 00:15|01:15 - 17.0MB/s ]
> - Downloading Data [ 17% |████████▏ | 256M/1.44G - 00:13|01:04 - 19.8MB/s ]
> - Downloading Data [ 17% |████████▏ | 256M/1.44G - 00:12|01:00 - 21.1MB/s ]
> - Downloading Data [ 13% |██████▏ | 192M/1.44G - 00:09|01:05 - 20.5MB/s ]
> - Transfer completed: 18.498 [MB/s]
**[400] [seismic-store-service] The request body parameter has not been specified.**
(sdutilenv) C:\seismic-store-sdutil-master>
Enclosed is a log from @Yan_Sushchynski:
![SSDMS-400-error-logs](/uploads/6264c15f1d786d3c0988733ec0b364d9/SSDMS-400-error-logs.JPG)

---

[GCP] Can't download data from Seismic Store if it consists of more than one file
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/12
2023-03-30 · Yan Sushchynski (EPAM)

After uploading an oVDS dataset to Seismic Store with the SEGY->oVDS converter, I want to download the result to my local machine.
But I got this error:
![image](/uploads/ebb3058d89b9dd27ddf937bd259b8125/image.png)
As I understand it, `sdutil` uses the `gcsurl` of the dataset and attempts to download it directly from the bucket, but it can't do this if the dataset consists of more than one file. A sketch of a multi-object-aware download is below.
This is how the oVDS dataset looks in the bucket:
![image](/uploads/361a48c79afa88e2542dd269d4cb94b8/image.png)
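A sketch of what a multi-object-aware download would need to do: enumerate every object under the dataset's gcsurl prefix instead of assuming a single blob. The function and variable names are assumptions:

```
# Hypothetical multi-object download for a dataset stored under a GCS prefix.
from google.cloud import storage

def download_dataset(gcsurl, dest_dir):
    """gcsurl like gs://bucket/prefix; fetch every object under the prefix."""
    bucket_name, _, prefix = gcsurl.removeprefix("gs://").partition("/")
    client = storage.Client()
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        filename = blob.name.rsplit("/", 1)[-1]
        blob.download_to_filename(f"{dest_dir}/{filename}")
```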
---

sdutil to handle source SegY file that is already in cloud location
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/17
2023-03-30 · Debasis Chatterjee

Imagine the SEGY file is already in a cloud location. How do we get sdutil to use that as input instead of the user's own desktop and local disk? If we can achieve that, we can have a successful end-to-end workflow for Seismic.
Step 1: The data loader uploads SEGY to a cloud location and then creates the WP, WPC, Dataset, etc. using Manifest-based Ingestion. This is how people work today without the Seismic DDMS.
Step 2: Run sdutil directly off the SEGY file (which is already in the cloud). Next, run the converter to ZGY or VDS as needed.

---

Sources other than segy require an alternate ingest flow
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/14
2023-03-30 · Greg

Sources other than SEGY require an alternate ingest flow. Conversion within data platform containers may lead to scalability issues when very large seismic datasets must be converted, or large numbers of volumes must be converted in parallel, potentially also requiring an alternate ingest flow.
a. A seismic dataset that comprises a dataset--FileCollection.Bluware.OpenVDS:1.0.0 can be created by using the underlying Seismic DMS APIs, together with client-side use of Bluware libraries and utilities. This approach requires orchestration of the required steps by the client of the Seismic DMS, for which an orchestration utility is likely helpful.
b. The current data definition for dataset--FileCollection.Bluware.OpenVDS:1.0.0 provides for inclusion of each object within the FileCollection in the FileCollection's metadata, in the DatasetProperties.FileSourceInfos array. However, this approach leads to scalability issues for large FileCollections, which could include tens or hundreds of millions of objects, resulting in resource constraints when attempting to serialize and/or parse a single JSON object that lists them all.
Related to #13.

Assignee: Chris Zhang

---

Ingest of multi-object, cloud-optimized formats into Seismic DDMS
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/13
2023-03-30 · Greg
a. The sdutil utility can be used to create a seismic dataset within an associated, already created seismic project, but only if the seismic dataset includes exactly one object. It isn’t currently possible to use sdutil to create a seismic dataset comprising an object-store optimized dataset--FileCollection.Bluware.OpenVDS:1.0.0, even if the FileCollection already exists in another object store location or on a local file system.
b. An existing ingest flow provided in the R3M7 release can be used to generate and create a seismic dataset that comprises a dataset--FileCollection.Bluware.OpenVDS:1.0.0, but only if:
i. The ingested source is in SEGY format
ii. The conversion from SEGY to OpenVDS S3 format occurs in a container hosted within the data platform

Assignee: Chris Zhang

---

Consumption of seismic datasets
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/15
2023-03-30 · Greg

As described in seismic store sdutil issue #12, the sdutil utility can be used to retrieve a local copy of a seismic dataset, but only if the seismic dataset includes exactly one object. It isn't currently possible to use sdutil to retrieve a local copy of a seismic dataset that comprises an object-store optimized dataset--FileCollection.Bluware.OpenVDS:1.0.0.
A primary consumption use case for such object-store optimized seismic datasets is parallel, streaming-oriented access that avoids local copies of the dataset.
Related to #12.

Assignee: Chris Zhang

---

DDMS to third party non-AWS S3 endpoints
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/21
2023-03-30 · Brian Pruitt

Unable to connect the DDMS to third-party non-AWS S3 endpoints. Requesting an endpoint parameter input for the `get_s3_client` function.
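The requested change would look something like this; a sketch only, since the actual get_s3_client signature in the codebase may differ:

```
# Hypothetical: accept an optional endpoint so S3-compatible stores work.
import boto3

def get_s3_client(region, endpoint_url=None):
    """endpoint_url overrides AWS, e.g. for MinIO or other S3-compatible stores."""
    return boto3.client("s3", region_name=region, endpoint_url=endpoint_url)

# Usage against a hypothetical third-party endpoint:
s3 = get_s3_client("us-east-1", endpoint_url="https://minio.example.com:9000")
```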