# seismic-dms-service issues
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues

---
**Issue #107 · [ADR] Hierarchical deletion of datasets** (Maggie Salak, updated 2024-01-05)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/107

# Introduction
We need a way to delete millions of datasets (including metadata and files in blob storage) in Seismic DMS. A single delete operation can include up to 50 million datasets.
The purpose of this ADR is to define the approach to implementing a hierarchical delete feature in SDMS.
# Status
* [x] Initiated
* [x] Proposed
* [x] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
SDMS API currently exposes the following endpoints for deleting datasets:
- `DELETE /dataset/tenant/{tenantid}/subproject/{subprojectid}/dataset/{datasetid}`
Deletes a single dataset.
- `DELETE /subproject/tenant/{tenantid}/subproject/{subprojectid}`
Deletes a subproject.
The endpoint that deletes a subproject currently does not scale to the required number of datasets. The current implementation can also leave the metadata and the files in blob storage in an inconsistent state: if some of the files fail to be deleted, the deletion of the metadata associated with these datasets is not reverted.
SDMS also currently lacks the functionality to delete only selected datasets in a subproject, filtered by path, tags, labels, etc.
# Proposed Solution
In short:
- Create new API endpoints to support starting and tracking progress of the asynchronous deletion operation.
- Deploy a new service on k8s that would asynchronously delete datasets.
## Overview
We will introduce the bulk-delete feature as follows:
1. Implement and deploy a separate application to the same K8s cluster: the _deletion service_.
This service will accept the bulk deletion requests from SDMS API, perform the deletion and keep track of the progress of this long-running operation.
2. Add the new endpoint to SDMS API to delete all datasets in a specified path:
`PUT /operations/bulk-delete?sdpath={sdpath}`
Status: 202 Accepted
`sdpath` in the format `sd://tenant/subproject/path`
Response schema:
```json
{
"operationId": "{string}"
}
```
3. Add the new endpoint to SDMS API to view the status and progress of the delete operation:
`GET /operations/bulk-delete/status/{operationid}`
Status: 200 OK
Response schema:
```json
{
"OperationId": "{string}",
"CreatedAt": "{string}",
"CreatedBy": "{string}",
"LastUpdatedAt": "{string}",
"Status": "{string}",
"DatasetsCnt": "{int}",
"DeletedCnt": "{int}",
"FailedCnt": "{int}"
}
```
Headers will contain `data-partition-id` information to check if the user is registered in the partition before retrieving the operation status.
## Details
### Initiating a delete operation
- The new `PUT` endpoint will support the following cases for the dataset path, provided in the `sdpath` parameter:
- `path = /<path>/` - all datasets under the specified path should be deleted.
- path not specified - all datasets in the subproject should be deleted.
If deletion of the subproject itself (metadata and container) is desired as well, clients should call the delete-subproject endpoint after the datasets bulk delete operation completes; this keeps subproject deletion non-blocking even when the subproject is composed of many datasets.
- The endpoint triggers the deletion job and returns the ID of the initiated operation.
- The delete operation is initiated in SDMS by pushing a message onto a queue (Azure Storage queue in case of Azure implementation; a different queuing mechanism can be used by other CSPs); the message contains the `operationId` and the parameters from the original request (tenant, subproject, path).
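For illustration only, a minimal sketch (TypeScript, using `@azure/storage-queue`) of how the Azure implementation might enqueue this message; the queue name and exact message shape are assumptions, not part of this ADR:
```typescript
import { QueueClient } from "@azure/storage-queue";

// Assumed message shape: the operationId plus the original request parameters.
interface BulkDeleteMessage {
    operationId: string;
    tenant: string;
    subproject: string;
    path?: string;
}

// "sdms-bulk-delete" is an illustrative queue name. The payload is base64-encoded,
// the common convention for Azure Storage queue consumers.
export async function enqueueBulkDelete(connectionString: string, msg: BulkDeleteMessage): Promise<void> {
    const queue = new QueueClient(connectionString, "sdms-bulk-delete");
    await queue.createIfNotExists();
    await queue.sendMessage(Buffer.from(JSON.stringify(msg)).toString("base64"));
}
```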
### Deletion service
The deletion service is a separate component from the SDMS API, deployed to the same K8s cluster. The implementation details of the service can be decided by the individual CSPs. This section describes the proposed implementation for Azure.
The source code of the new component will be contributed to the Sidecar solution in the `seismic-store-service` repository.
The logic of the deletion service will work as follows:
- The service consumes the message from the Azure Storage queue and initiates the deletion process.
- All items (dataset IDs and the `gcsurl`, which determines the location in blob storage) matching the provided subproject and path are retrieved from the Cosmos database.
- For each dataset, the deletion service checks if it is locked.
- If yes, the item is discarded from the delete operation.
- If not, the deletion service locks the dataset. The lock value in this case will contain a string indicating that the dataset is locked for deletion (e.g. WDELETE). This will allow another delete operation to delete the dataset if the deletion failed previously. However, it will prevent deletion of datasets locked with a regular write lock which would indicate that it is being actively used.
- The retrieved items are added to storage which allows the deletion service to keep track of the datasets to delete. In the first version of the implementation, the deletion service will store the retrieved datasets in memory.
In a later phase we are planning to use persistent storage (e.g. a Service Bus queue) to store the items to be deleted. This will allow the service to resume deletion after a restart, as well as retry deletion for the datasets where it failed.
- The deletion service leverages existing Redis queues to keep track of the overall deletion operation status and progress.
- The deletion service retrieves and deletes the datasets by checking the store containing items to be deleted. In the first version of the implementation it simply iterates over items stored in memory.
- The datasets are processed in batches; for each batch we retrieve the associated blobs from the storage account using the `gcsurl` property of the metadata.
- The blobs from the current batch are deleted.
- We then delete the metadata documents from Cosmos DB, leaving in place those for which blob deletion was unsuccessful. Deletion is considered successful if the blobs were not found, as we assume they were deleted earlier.
- The deletion status is updated in Redis after processing every dataset.
- At the end, the status of a completed operation (with errors or without) is saved in Redis.
- The deletion status should not be deleted at this point so that users can query the operation status after completion.
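To make the ordering guarantees above concrete, here is a schematic sketch of the batch loop; the helpers (`deleteBlobsAt`, `deleteMetadata`, `updateStatus`) are hypothetical placeholders for the blob storage, Cosmos DB, and Redis calls:
```typescript
// Hypothetical dataset item as retrieved from the Cosmos database.
interface DatasetItem {
    id: string;      // dataset metadata document id
    gcsurl: string;  // location of the dataset's blobs in the storage account
}

async function processBatches(batches: AsyncIterable<DatasetItem[]>): Promise<void> {
    for await (const batch of batches) {
        for (const item of batch) {
            try {
                // Delete the blobs first; a blob that is not found counts as already deleted.
                await deleteBlobsAt(item.gcsurl);
                // Delete metadata only after blob deletion succeeded, so a failure never
                // leaves blobs behind with no metadata pointing at them.
                await deleteMetadata(item.id);
                await updateStatus({ deletedDelta: 1 });
            } catch {
                // Leave the metadata document in place and record the failure.
                await updateStatus({ failedDelta: 1 });
            }
        }
    }
}

// Placeholders for the real storage, Cosmos DB, and Redis operations.
declare function deleteBlobsAt(gcsurl: string): Promise<void>;
declare function deleteMetadata(id: string): Promise<void>;
declare function updateStatus(delta: { deletedDelta?: number; failedDelta?: number }): Promise<void>;
```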
### Sequence diagram for the deletion operation
![deletion_diagram_osdu](/uploads/b097c46896644e19a7374df96560aabd/deletion_diagram_updated.png)
### Deletion status
The status of delete operations will be saved in Redis.
It will be written by the deletion service (updated with the current progress) and read by SDMS API
(when users request the deletion status).
SDMS API and the deletion service will agree on the naming convention for the key in Redis,
e.g. `deletequeue:status:{operationId}`.
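As a sketch of this shared contract, both sides could treat the status as a Redis hash under that key (the field names mirror the response schema above; `ioredis` is assumed here for illustration):
```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Key naming convention agreed between SDMS API and the deletion service.
const statusKey = (operationId: string) => `deletequeue:status:${operationId}`;

// Deletion service side: write or refresh the current progress.
export async function writeStatus(operationId: string, fields: Record<string, string>): Promise<void> {
    await redis.hset(statusKey(operationId), { ...fields, LastUpdatedAt: new Date().toISOString() });
}

// SDMS API side: read the status back for the GET endpoint.
export async function readStatus(operationId: string): Promise<Record<string, string>> {
    return redis.hgetall(statusKey(operationId));
}
```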
The new `GET` endpoint allowing users to query the status of a delete operation will return the following information:
- **`OperationId`** - ID of the delete operation.
- **`Status`** - Current status of the delete operation; possible values are: 'Not started', 'Started', 'In progress', 'Completed', 'Completed with errors'.
- `CreatedAt` - Timestamp of the creation of the delete operation.
- `CreatedBy` - Entity initiating the delete operation.
- `LastUpdatedAt` - Timestamp of the last status update of the delete operation.
- `DatasetsCnt` - Total number of datasets to be deleted; initially not set, until the enumeration of datasets for deletion is completed.
- `DeletedCnt` - Number of deleted datasets; updated after each dataset processed by the deletion service, after both blobs and metadata are deleted.
- `FailedCnt` - Number of datasets for which the deletion failed; updated after each dataset processed by the deletion service if a failure occurs.
_(only the fields in **bold** are required)_
_(dataset counts will be empty if the status is "not started")_
### Sequence diagram for the deletion status
![deletion_status_diagram](/uploads/52b27cfb56a9942cf7628e81aeb41eec/deletion_status_diagram.png)
# Out of scope / limitations
- Detailed statistics about datasets which failed to be deleted. In the first phase of implementation the deletion status endpoint will provide aggregated statistics as mentioned in the `Deletion status` section. Users will need to refer to logs to find out which datasets failed to be deleted.
- The bulk-delete feature does not guarantee the operation can continue after a restart of the deletion service. It will be up to the different CSPs to determine if there is retry logic for failed datasets or recovery support built into the service.
- Deleting 'orphan' blobs with missing metadata. Files without metadata containing a matching `gcsurl` will not be deleted as part of the delete operation as metadata is the source of truth for which blobs need to be deleted.
- Identifying blobs belonging to a different dataset but located in the same virtual folder as files of another dataset. Since `gcsurl` carries information about the location of files to be deleted, the delete operation will not be able to detect 'unrelated' files erroneously uploaded with the same virtual folder.
# Consequences
The same bulk deletion API endpoints can be implemented by any CSPs besides Azure.
The status endpoint is not CSP-specific. As long as the bulk delete implementation saves
the job status with the same schema to Redis, the status endpoint will work for any other CSP out of the box.

Milestone: M22 - Release 0.25

---
**Issue #108 · [ADR] Hierarchical data distribution statistics based on path - API endpoint** (Izabela Kulakowska, updated 2023-12-21)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/108

# Introduction
We need a solution for retrieving dataset statistics currently consisting of only dataset sizes.
The purpose of this ADR is to define the approach for retrieving the hierarchical data distribution statistics based on a path.
# Status
* [x] Initiated
* [x] Proposed
* [x] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
The SDMS API currently exposes the following endpoints for managing dataset sizes:
- `POST /dataset/tenant/{tenantid}/subproject/{subprojectid}/dataset/{datasetid}/size` - computes the actual dataset size and updates the dataset metadata `computed_size` field.
- (deprecated) `GET /dataset/tenant/{tenantid}/subproject/{subprojectid}/sizes` - fetches the sizes of the datasets based on the metadata field `filemetadata.size`.
# Proposed solution
Create a new API endpoint for retrieving the total size of a dataset, a subfolder, or a subproject. The new endpoint will require the _viewer_ or _admin_ role.
## Overview
```bash
GET /dataset/tenant/{tenant}/subproject/{subproject}/size?path={path}&datasetid={datasetname}
```
Path parameters:
- **tenant** - tenant
- **subproject** - subproject
Query parameters:
- **path** - folder path for which the analytics are going to be retrieved [mandatory if query parameter `{datasetid}` is specified]
- **datasetid** - dataset name for which the analytics are going to be retrieved
Response:
HTTP 200
```json
{
"dataset_count": 9999,
"size_bytes": 1024
}
```
- **dataset_count** - number of datasets under a specific subproject/folder
- **size_bytes** - sum of sizes [B] of all datasets under a specific subproject/folder or for a specific dataset
### Examples:
- `GET /dataset/tenant/tenant1/subproject/subproject1/size` - fetch and sum sizes of all datasets in the `subproject1`
- `GET /dataset/tenant/tenant1/subproject/subproject1/size?path=folderA/folderB` - fetch and sum sizes of all datasets under the folder path `folderA/folderB` in subproject `subproject1`
- `GET /dataset/tenant/tenant1/subproject/subproject1/size?path=folderA/folderB&datasetid=file.txt` - fetch the size of the dataset named `file.txt` that resides under the folder path `folderA/folderB` in subproject `subproject1`
## Details
Currently, two fields in the dataset metadata record can store information about the dataset size: `filemetadata.size` and `computed_size`. `filemetadata.size` is used by the SDK on the client side, while `computed_size` is intended to be computed and ingested on the server side.
To make sure the chosen field can be a reliable source of truth, the API endpoint implementation will calculate the sum of dataset sizes based on the `computed_size` field.
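As a sketch only, the aggregation could be a single Cosmos DB query over the dataset metadata; the database/container names and the exact document layout are assumptions, not part of this proposal:
```typescript
import { CosmosClient } from "@azure/cosmos";

// Sum dataset sizes for a subproject (optionally restricted to a folder path prefix).
export async function getSize(endpoint: string, key: string, tenant: string, subproject: string, path?: string) {
    const container = new CosmosClient({ endpoint, key }).database("sdms").container("datasets");
    const query = {
        // COUNT/SUM aggregates and STARTSWITH are supported by the Cosmos DB SQL API.
        query: `SELECT COUNT(1) AS dataset_count, SUM(c.computed_size) AS size_bytes
                FROM c WHERE c.tenant = @tenant AND c.subproject = @subproject
                ${path ? "AND STARTSWITH(c.path, @path)" : ""}`,
        parameters: [
            { name: "@tenant", value: tenant },
            { name: "@subproject", value: subproject },
            ...(path ? [{ name: "@path", value: path }] : []),
        ],
    };
    const { resources } = await container.items.query(query).fetchAll();
    return resources[0]; // -> { dataset_count, size_bytes }
}
```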
# Out of scope / limitations
A challenge with using the `computed_size` field as a source of truth is that some datasets may not have this property calculated, as currently the only way to update this value is to manually call the `Compute Size` POST endpoint.
The solution to ensure the reliability of the value of the `computed_size` field will be the subject of a separate ADR.

Milestone: M22 - Release 0.25

---
**Issue #100 · SDMS commit "feat: GONRG-6259: Add osdu google" causes import error when running** (Rashaad Gray, updated 2023-07-12)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/100

While testing a problem, at run time the service gets confused because of the "Auth" import that was included in your commit [feat: GONRG-6259: Add osdu google](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/commit/43ea83caa7ff307e6014022244dfbaaa1734d1b2).
The file in question: [gc/index.ts](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/43ea83caa7ff307e6014022244dfbaaa1734d1b2/app/sdms/src/cloud/providers/gc/index.ts)
When our [Dataset Parser](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/services/dataset/parser.ts#L52) uses the `Auth.isImpersonationToken()` method, it resolves to that file instead of its proper location, the [Auth.ts](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/auth/auth.ts) class.
This is causing an issue: when running on Azure, the call goes to that method and returns false for all tokens it checks, even when the token is an actual impersonation token.
Because the token is NOT properly identified as an impersonation token, incorrect information is written into a new dataset's metadata.
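For illustration, one possible remedy on the parser side is to import the `Auth` class explicitly (or under an alias) so the gc provider's export can no longer shadow it; this is a sketch of the idea, not the agreed fix:
```typescript
// In app/sdms/src/services/dataset/parser.ts: bind Auth to the auth module explicitly,
// so Auth.isImpersonationToken() resolves to the implementation in auth/auth.ts.
import { Auth } from "../../auth/auth";

// If the gc provider's export were also needed in the same file, an alias keeps them apart:
// import { Auth as GcAuth } from "../../cloud/providers/gc";
```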
Please take some time to remedy this, as it affects more than just your added files.

---
**Issue #32 · [ADR] Domain API** (Sacha Brants, updated 2023-10-23)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/32

# Introduction
In order to natively support seismic datasets as defined by the OSDU authority, and to avoid applications duplicating the logic that converts seismic data from one schema version to another (and potentially implementing it differently), Seismic DMS should provide APIs that support and manage seismic datasets by validating their schema model and returning them at the latest version of the schema.
## Status
- [X] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
## Context & Scope
### (1) OSDU SCHEMAS ORGANIZATION
In the [OSDU schemas organization](https://community.opengroup.org/osdu/data/data-definitions/-/tree/master/E-R) schemas are organized into different categories. A "dataset" schema provides a piece of bulk data information along with its logical representation, while seismic records of the other categories must be linked to an existing (pre-ingested) dataset.
![image](/uploads/4f34b082ffc589ca6cc0a549994c1f0a/image.png)
SDMS will provide a set of domain-specific APIs to support these schema formats:
- FileCollection Datasets:
- [FileCollection.Generic.1.0.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/dataset/FileCollection.Generic.1.0.0.md)
- [FileCollection.SEGY.1.0.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/dataset/FileCollection.SEGY.1.0.0.md)
- [FileCollection.Slb.OpenZGY.1.0.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/dataset/FileCollection.Slb.OpenZGY.1.0.0.md)
- [FileCollection.Bluware.OpenVDS.1.0.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/dataset/FileCollection.Bluware.OpenVDS.1.0.0.md)
- Work Product Components:
- [SeismicTraceData.1.1.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/work-product-component/SeismicTraceData.1.1.0.md)
- [SeismicBinGrid.1.0.0](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/work-product-component/SeismicBinGrid.1.0.0.md)
- [SeismicLineGeometry.1.0.0.md](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/work-product-component/SeismicLineGeometry.1.0.0.md)
- Master Data:
- [SeismicAcquisitionSurvey.1.2.0.md](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/master-data/SeismicAcquisitionSurvey.1.2.0.md)
- [SeismicProcessingProject.1.1.0.md](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/master-data/SeismicProcessingProject.1.1.0.md)
References and Naming Convention:
- SCHEMA: The seismic schema model, for example, SeismicTraceData / SeismicBinGrid / FileCollection.SEGY /...
- SCHEMA VERSION: The schema model versions, for example, SeismicTraceData 1.0.0 / SeismicTraceData 1.6.2 /...
- RECORD: The seismic object schema recorded in the DE Storage Service
- RECORD-ID: the unique record ID, for example, ABC1234
- RECORD-VERSION: The record versions, for example, ABC1234 V1 / ABC1234 V2 /...
### (2) SDMS DOMAIN SPECIFIC APIs
SDMS will provide domain-specific APIs to handle ingestion, schema validation, and the underlying bulk management for seismic dataset and component SCHEMAs as defined by the OSDU authority.
For each supported SCHEMA, we will document the model with examples and provide APIs to manage both RECORDs and their VERSIONs.
![image](/uploads/b9b6d2e5f2371c680bf4ffebc7f741cb/image.png)
- An endpoint to ingest the seismic dataset:
- When an object is ingested using this endpoint, a new RECORD will be created if no RECORD-ID is specified with the request model. A new RECORD-ID and RECORD-VERSION will be generated and returned. In addition, for FileCollection dataset schemas only, a storage resource will be created to host the bulk.
- When an object is ingested using this endpoint, the RECORD will be updated if a RECORD-ID is specified with the request model. A new RECORD-VERSION will be generated and returned.
- An endpoint to list all datasets of a specific kind
- This endpoint will support paginated queries.
- An endpoint to retrieve the last version of the RECORD-ID
- An endpoint to retrieve a specific version of the RECORD-ID
- An endpoint to retrieve all versions for a RECORD-ID
- An endpoint to delete the RECORD with all associated versions
- This endpoint will perform a hard delete by removing all RECORD-VERSIONS and the associated bulk.
- ~~An endpoint to reindex dataset ingested with the V3 version of SDMS into the V4~~
The SDMS service will provide support to the highest Patch.Minor version of each Major and automatic conversion between versions. For example, if the client calls the v1 endpoint that supports the schema version 1.1.0 to request a record that was ingested with a schema version 1.0.0, SDMS will automatically convert the required record, from the ingested version 1.0.0 to the supported 1.1.0. In addition, we will support conversion between Major versions if conversion rules have been correctly specified (or an error will be thrown).
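A minimal sketch of how such automatic up-conversion could be wired, assuming a registry of per-step conversion rules (the kind strings and mapping below are illustrative only):
```typescript
// Hypothetical converter registry: each entry upgrades a record by one schema step.
type Converter = (record: any) => any;
const converters = new Map<string, Converter>([
    // e.g. SeismicTraceData 1.0.0 -> 1.1.0
    ["osdu:wks:work-product-component--SeismicTraceData:1.0.0", (r) => ({
        ...r,
        kind: "osdu:wks:work-product-component--SeismicTraceData:1.1.0",
        // ...field-level mapping rules would go here...
    })],
]);

// Upgrade a record until it reaches the highest supported version, or throw when
// no conversion rule exists (as required above for unsupported Major jumps).
export function toLatest(record: { kind: string }, latestKind: string): any {
    let current: any = record;
    while (current.kind !== latestKind) {
        const convert = converters.get(current.kind);
        if (!convert) {
            throw new Error(`no conversion rule from ${current.kind} towards ${latestKind}`);
        }
        current = convert(current);
    }
    return current;
}
```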
For each SCHEMA VERSION, the schema model will be documented (in the shared swagger) and examples will also be provided:
**Schema Definition**
![image](/uploads/fdd3145394948c65a09882281d9f91bc/image.png)
**Schema Example**
![image](/uploads/e9a9332bd5699d52ce113a19b3597213/image.png)
### (3) STORAGE ORGANIZATION AND CONNECTION STRINGS
Each time a FileCollection SCHEMA is ingested in SDMS, a new storage container is created in the CSP storage service. The container name will be automatically generated by SDMS by hashing the dataset name information specified in the request schema together with the generated ID, to guarantee the uniqueness of the storage resource in each partition. SDMS will provide specific endpoints to generate connection strings for a RECORD-ID and/or RECORD-VERSION to let the caller independently ingest the associated bulk. These storage resources are protected, and the connection strings are released only after the caller has been authorized by the service via the shared Entitlement Service (ACL check).
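As an illustration of this naming scheme, a deterministic container name could be derived as below; the hash algorithm and the `ss-` prefix are assumptions, not the service's actual rule:
```typescript
import { createHash } from "crypto";

// Derive a storage container name by hashing the dataset name together with the
// generated record id, so the name is unique within the partition.
export function containerNameFor(datasetName: string, recordId: string): string {
    const digest = createHash("sha256").update(`${datasetName}:${recordId}`).digest("hex");
    // Azure container names must be 3-63 lowercase alphanumeric/hyphen characters.
    return `ss-${digest.slice(0, 60)}`;
}
```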
These are the endpoints SDMS will expose for generating upload or download connection strings:
![image](/uploads/f63a7470d79b673e06a58c401471bb0e/image.png)
### (4) DATASET UPLOAD EXAMPLE WORKFLOW
**Dataset Registration**
![image](/uploads/e4dd914cbad85087991611aa6a4b651c/image.png)
**Dataset Ingestion**
![image](/uploads/6ff4caa1beb7699a908694370d0f6ee4/image.png)
### (5) DATASET DOWNLOAD EXAMPLE WORKFLOW
**Dataset Registration**
![image](/uploads/e4dd914cbad85087991611aa6a4b651c/image.png)
**Dataset Consumption**
![image](/uploads/8cf6d9f3d2ffeb87376202c03dc21090/image.png)
### (6) Implementation
Check the [Merge Requests](#related-merge-requests) associated with this issue and the [OpenAPI](app/sdms-v4/docs/openapi.yaml) definition for details.
## Decision
## Rationale
## Consequences
## When to revisit

---
**Issue #112 · [ADR] Synching SDMS V3 datasets in SDMS V4** (Diego Molteni, updated 2024-02-28)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/112

# Introduction
We need a solution for making datasets ingested in SDMS V3 visible to, and consumable by, SDMS V4.
The purpose of this ADR is to describe how to enable a synchronization mechanism that allows users of SDMS V4 to consume, via client applications, seismic dataset entities ingested in SDMS V3, even though the two versions of the system have entirely different architectures.
# Status
* [x] Initiated
* [x] Proposed
* [ ] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
The Seismic Data Management Service V4 (SDMS V4) stores and manages data types as defined by the Open Subsurface Data Universe (OSDU) Authority. The APIs (Application Programming Interfaces) provide robust data type checks and are fully integrated with the OSDU policy service. The goal is to minimize ambiguity in the authorization model and facilitate straightforward adoption through a consistent usage pattern. In contrast, the V3 version of the service defines, saves, and manages proprietary metadata records, interacts directly with the entitlement service, and organizes records into collections/data-groups named subprojects.
<div align="center">
<br/><img src="/uploads/5e1a58219ca35be9da530b0eba2ed9fa/arch-diagram.png"
alt="sdms-architectural-diagram"
style="display: block; margin: 0 auto;"/><br/>
</div>
The key difference between the two versions of the service lies in how the cloud storage URI is generated: in SDMS V4 it is derived from the record-id value, while in SDMS V3 the generated URI is a random UUID.
# Proposed solution
Update SDMS V4 by adding the capability to correctly retrieve the storage location for the dataset's bulk data if the dataset was ingested via SDMS V3.
## Scenarios
When a dataset is ingested in SDMS V3 from a seismic application, the latter also creates an OSDU Bulk record linked to a Work Product Component, as shown in the following diagram:
<div align="center">
<br/><img src="/uploads/3d73191098963a80675c2ed6e96472cc/image.png"
alt="sdms-architectural-diagram"
style="display: block; margin: 0 auto; height: 30%; width: 30%" /><br/>
</div>
The seismic application saves the SDMS V3 URI (also known as the `sdpath`) in the `FileSourceInfo` property of the created OSDU bulk record. This is done to later facilitate communicating the URI to SDMS V3 when retrieving the storage connection string required to access the dataset's bulk data.
### Example of SDMS V3 dataset metadata
```json
{
"name": "test-data.zgy",
"tenant": "partition",
"subproject": "subproject",
"path": "/",
"ltag": "test-legal",
"created_by": "test-user@slb.com",
"last_modified_date": "Tue Sep 12 2023 11:04:29 GMT+0000 (Coordinated Universal Time)",
"created_date": "Tue Sep 12 10:54:10 GMT+0000 (Coordinated Universal Time)",
"gcsurl": "ss-weu-xkz32bjwg2425gn/bdf36c8a-3c62-3151-12b7-227af4727520",
"ctag": "sMTz0oWeId1nOnrx",
"readonly": true,
"sbit": null,
"sbit_count": 0,
"filemetadata": {
"type": "GENERIC",
"size": 1544552448,
"nobjects": 47
},
"seismicmeta_guid": "partition:work-product-component--SeismicTraceData:326bac9a-1fb2-5c73-9c64-6ca122c5025a",
"access_policy": "uniform"
}
```
### Example of OSDU storage associated Work Product Component
```json
{
"id": "partition:work-product-component--SeismicTraceData:326bac9a-1fb2-5c73-9c64-6ca122c5025",
"kind": "osdu:wks:work-product-component--SeismicTraceData:1.3.0",
"version": 1685099234631439,
"acl": {
"viewers": [
"data.test@domain.slb.com"
],
"owners": [
"data.test@domain.com"
]
},
"legal": {
"legaltags": [
"test-legal"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"data": {
"BinGridID": "partition:work-product-component--SeismicBinGrid:2a714f2b12aa346d16a08c5a2f4e157e:",
"Datasets": [
"partition:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55"
],
"DDMSDatasets": [
"urn:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55"
],
"Name": "test-data.zgy",
"Source": "osdu",
"SubmitterName": "test-user@domain.com"
},
"createUser": "test-user@domain.com",
"createTime": "2023-09-12T11:04:30.321Z",
"modifyUser": "test-user@domain.com",
"modifyTime": "2023-09-12T18:09:12.703Z"
}
```
### Example of OSDU storage associated File Collection
```json
{
"id": "partition:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55",
"version": "4426199321664216",
"kind": "osdu:wks:dataset--FileCollection.Slb.OpenZGY:1.0.0",
"acl": {
"viewers": [
"data.test@domain.slb.com"
],
"owners": [
"data.test@domain.com"
]
},
"legal": {
"legaltags": [
"test-legal"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "test-user@domain.com",
"createTime": "2023-09-12T11:04:02.705Z",
"data": {
"Endian": "BIG",
"SEGYRevision": "rev 1",
"TotalSize": "1544552448",
"Name": "test-data.zgy",
"DatasetProperties": {
"FileCollectionPath": "sd://tenant/subproject/",
"FileSourceInfos": [
{
"FileSource": "test-data.zgy",
"Name": "test-data.zgy",
"FileSize": "1544552448"
}
]
}
}
}
```
## Proposed Solution
To enable applications to access bulk datasets ingested in SDMS V3 through SDMS V4, we need to update the mechanism in SDMS V4 for retrieving the correct storage URI associated with the Bulk record. This update is necessary to generate a valid connection string for accessing the bulk data.
When a bulk record is created, the SDMS V3 URI (also known as the `sdpath`) is typically saved in the `FileCollectionPath` and `FileSource` properties. In the most common scenario, the `sd://tenant/subproject/path` portion of the URI is stored in the `FileCollectionPath` property, while the URI's name is stored in the `FileSource` property.
When a connection access string is requested for a bulk record through SDMS V4, the service should detect whether the record's file source refers to a V3 dataset's URI. If so, the service should then:
1. extract the `subproject` name from the `FileCollectionPath`
```python
subproject = record.data.DatasetProperties.FileCollectionPath.replace("sd://", "").split('/')[1]
```
2. extract the `path` from the `FileCollectionPath`
```python
path = "/" + "/".join(record.data.DatasetProperties.FileCollectionPath.replace("sd://", "").split("/")[2:])
path = path.replace("//", "/")
```
3. extract the `name` from the `FileSource`
```python
name = record.data.DatasetProperties.FileSourceInfos[0].FileSource
```
4. retrieve the storage URL from the V3 journal
```sql
SELECT c.data.gcsurl
FROM c
WHERE
c.data.subproject="{subproject}"
AND c.data.path="{path}"
AND c.data.name="{name}"
```
5. generate the connection string using the retrieved storage URL
```python
storage_client = StorageClient("{storage-url}")
return storage_client.getConnectionString()
```
#### Notes
Seismic applications use different approaches to save the SDMS V3 URI in the Bulk record, and all these cases should be considered:
1. The sd://tenant/subproject/path is saved in the `FileCollectionPath`, and the name is saved in `FileSource`.
2. The full sd://tenant/subproject/path/name URI is saved in both `FileCollectionPath` and `FileSource`.
3. The sd://tenant/subproject/path URI is saved in `FileCollectionPath`, and the name in `FileSource`, but the latter starts with the `./` prefix (which should be removed).
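For illustration, the three cases above could be normalized by a single resolution helper along these lines; the function and its edge-case handling are assumptions, not the agreed implementation:
```typescript
// Normalize the observed FileCollectionPath/FileSource layouts into their sdpath parts.
export function resolveSdPath(fileCollectionPath: string, fileSource: string) {
    // Case 3: strip a leading "./" from the FileSource value.
    let name = fileSource.startsWith("./") ? fileSource.slice(2) : fileSource;
    // Case 2: FileSource may itself be a full sd:// URI; keep only its last segment.
    if (name.startsWith("sd://")) {
        name = name.split("/").filter(Boolean).pop() as string;
    }
    const parts = fileCollectionPath.replace("sd://", "").split("/").filter(Boolean);
    const [tenant, subproject, ...rest] = parts;
    // Case 2 again: if FileCollectionPath also ends with the dataset name, drop it from the path.
    if (rest[rest.length - 1] === name) { rest.pop(); }
    const path = "/" + (rest.length ? rest.join("/") + "/" : "");
    return { tenant, subproject, path, name };
}
```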
### Limitations
Applications that do not match the described flow should be reviewed with the application owner before defining the right strategy to enable the synchronization of datasets ingested in SDMS V3 with SDMS V4.

Milestone: M22 - Release 0.25

---
**Issue #105 · Symbol(Id) in the documents should be removed** (Laura Damian, updated 2023-08-29)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/105

Right now each document in the database contains a Symbol(id) entry which holds information on the name and the partition key.
This information is duplicated in the document's name and its id.
Removing this information would have a few benefits:
- keep the cost down, as it adds to the cost of storage
- improve the performance (even if marginally) of both inserts and reads, which could add up at large scale
- simplify a potential future migration to a different partition key(s).

---
**Issue #98 · Pagination not supported by IBM and AWS for DATASET LIST (POST) endpoint** (Pratiksha Shedge, updated 2023-04-05)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/98

A new API has been added as the DATASET LIST (POST) endpoint, which supports pagination. This API should return the list of datasets and a nextPageCursor to get the next list of datasets. However, IBM and AWS do not support pagination for this endpoint, which causes the pagination tests to fail during pipeline runs.
Pipeline runs:
- IBM: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1823012
- AWS: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/1842803

---
**Issue #96 · Read Only Root File System for Seismic Pods Crashes** (Abhay Joshi, updated 2023-04-12)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/96

When making a change to have the Os-Seismic-Store pods use a read-only root file system, the pods seem to crash without any kubectl logs whatsoever. We suspect it is because the application is writing to the pods, but we are unable to see where things are being written. We would like to fix this issue as it is a security concern.

---
**Issue #94 · Info endpoint is missed** (Denis Karpenok (EPAM), updated 2023-06-13)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/94
```bash
curl --location --request GET 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/seismic-store/v3/info'
```
Response:
[seismic-store-service] Unauthenticated Access. Authorizations not found in the request.
With authentication, the response is:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /api/v3/info</pre>
</body>
</html>
```
Expected:
The version is returned without authentication.

---
**Issue #57 · Utilizing Standard Pipelines** (David Diederich, updated 2023-03-24)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/57

I'd like this project to consider merging your CI pipeline work with the osdu/platform/ci-cd-pipelines> project, and utilizing more jobs via includes rather than local CI config.
### Some Reasons to Consider
**Copy/paste code is hard to keep maintained**
Most of your CI logic appears to have started as a copy/paste from the main repository, anyway.
But keeping it local means that developers need to update changes in multiple places, and when they're working on the improvements they don't have your use case in mind.
This included some recent developments to get the dev2 environment going, but it also includes the changes to the FOSSA scanning -- you're still using an older, unmaintained image for the scanning.
And when I made those changes, I worked through test examples for maven and pip, the two supported build systems.
If npm had been there, I would have had it in mind.
**You miss new pipeline developments**
I'm moving pieces of the release management scripts into the pipeline to make more aspects of the tagging process happen automatically from branch creation.
For now, it's only dependency scanning data, but upgrades are planned to do more stages from there.
The GitLab Ultimate scanners check for security vulnerabilities, and the InfoSec team utilizes these results to plan their work.
These scanners aren't running on your project, but they would be if you included the appropriate CI configuration -- or at least, we'd see what needs to be improved for those scanners to function if they don't work out of the box.
**Your improvements aren't available to others**
Any improvements you make to the CI process after you've copied it remains in your local repository.
Others could benefit from having this available in a common location.
Supporting another language gives future OSDU projects more capabilities right at the start.
You'd even get to define the basic processes for these.
### Open to Discussion
I'd like to hear more about how the custom pipelines came to be, and if they are serving a need that can't be generalized.
For steps that are truly custom and unique to your project, it makes sense to have them as local CI config files.
If we do decide to start using more of the standard pipeline logic, I think we'll need to implement it slowly, a piece at a time.
Of course, if you think a big bang MR is better, I'd consider that, too.
Thank you in advance for your thoughts.

---
**Issue #40 · Domain API - provide read/write access to trace data** (Debasis Chatterjee, updated 2023-03-27)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/40
Neutral domain API to access Seismic trace data irrespective of content storage in oZgy or in oVDS.
Consider suitable protocol keeping in mind the large volume of data involved.
This will open up opportunity for interoperability for x-vendor applications.
cc - @pq for information

---
**Issue #39 · [ADR] seismic storage tiers support** (Diego Molteni, updated 2023-09-08)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/39

# Introduction
This ADR proposes how support for multiple storage tiers should be enabled in SDMS to better manage storage costs.
## Status
- [x] Initiated
- [ ] Proposed
- [ ] Under Review
- [ ] Approved
- [ ] Rejected
## SDMS dataset concepts and ingestion overview
A dataset resource in SDMS is identified by the following URI string:
**`sd://tenant/subproject/path/dataset`** where:
- **tenant**: is the unique data-partition-id.
- **subproject**: is the name of the data group.
- **path**: is a virtual path in the subproject (a folder tree).
- **dataset**: is the name of the dataset.
For example in the dataset **`sd://opendes/sandbox/processing/2023/result.zgy`**
- **tenant** = opendes
- **subproject** = sandbox
- **path** = /processing/2023/
- **dataset** = result.zgy
A dataset in SDMS is composed of a metadata descriptor, maintained in the SDMS db catalogue, and a set of objects saved in a cloud storage resource. A dataset accessed through SDMS is always seen as a single entity, even if its data content has been split into multiple objects.
In general, SDMS has a dedicated storage resource per partition, and all objects composing datasets are stored into it. The objects composing a dataset are saved into the storage account in different ways based on the storage policy applied at the data group (subproject) level:
- **subproject access policy = "uniform"**: access is granted at data group level. A subproject's writer/reader can write-read/read any dataset in the subproject. For each subproject a dedicated storage resource is created and all objects composing the dataset are saved under a virtual folder path. For example, in Azure, a dataset is saved into storage-account(per partition)\container(per subproject)\virtual-folder(per dataset)\object_0...object_N.
- **subproject access policy = "dataset"**: access is granted at dataset level. A dataset writer/reader can write-read/read only the datasets he has access too. For each dataset a dedicated storage resource is created and all objects composing the dataset are saved under a virtual folder path. For example, in Azure, a dataset is saved into storage-account(per partition)\container(per dataset)\virtual-folder(per dataset)\object_0...object_N.
When a dataset is uploaded to SDMS, the following ingestion flow is executed:
1. The client registers the dataset in SDMS. SDMS creates a dataset descriptor in the internal catalogue and reserves a storage area where the dataset's composing objects will be uploaded.
1. SDMS returns the descriptor metadata to the client.
1. The client requests from SDMS a connection string for the reserved storage resource.
1. SDMS returns the generated connection string to the client.
1. The client splits the dataset into multiple objects and uploads them to the reserved storage resource.
1. The client requests SDMS to close the dataset. (A schematic client-side sketch of this flow follows the diagram below.)
![image](/uploads/3efc31ddabf2b406b942c6cdce0b2594/image.png)
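For illustration, the client side of this flow might look like the sketch below; the endpoint paths, payloads, and the `uploadObject` helper are assumptions, not the actual SDMS API surface:
```typescript
// Schematic client-side ingestion flow (steps 1-6 above).
async function ingest(base: string, token: string, sdpath: string, objects: Buffer[]) {
    const headers = { Authorization: `Bearer ${token}` };
    // 1-2. Register the dataset and receive its descriptor metadata.
    const descriptor = await (await fetch(
        `${base}/dataset?path=${encodeURIComponent(sdpath)}`, { method: "POST", headers })).json();
    // 3-4. Request a connection string for the reserved storage resource.
    const { connectionString } = await (await fetch(
        `${base}/utility/upload-connection-string?sdpath=${encodeURIComponent(sdpath)}`, { headers })).json();
    // 5. Split into objects and upload them (uploadObject stands in for the CSP storage SDK).
    for (let i = 0; i < objects.length; i++) {
        await uploadObject(connectionString, `object_${i}`, objects[i]);
    }
    // 6. Close the dataset.
    await fetch(`${base}/dataset/close?sdpath=${encodeURIComponent(sdpath)}`, { method: "PUT", headers });
    return descriptor;
}

declare function uploadObject(connectionString: string, name: string, data: Buffer): Promise<void>;
```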
## Storage Tiers and SDKs support
To provide a cost-effective solution, SDMS must enable storage tier management features so that a dataset's composing objects can be saved to a specific storage tier class. For example, in Azure, the supported tier classes are **Hot**, **Cold**, and **Archive**. If data objects can be saved into a **Cold** tier instead of a **Hot** one, a cost saving is generated for clients.
An object's tier can be set or updated directly by calling cloud storage methods when an object is uploaded or manipulated.
![image](/uploads/26c2ecf612ac2ff9f0a48aa249489824/image.png)
These operations are executed from client applications through CSP-provided SDKs. The SDMS suite offers two client libraries: a C++ client library, **SDAPI**, and a Python command-line utility, **SDUTIL**.
This ADR presents how the SDMS service and the provided client libraries, **SDAPI** and **SDUTIL**, should be enhanced to support object tiering features.
## Set storage tier class
This feature enables consumers to set the desired storage tier class when objects are uploaded.
In SDAPI, we will add a storage tier class argument to the generic dataset opening method. This will be used as the default storage tier class when a dataset object is uploaded; if not set, the platform default (Hot for Azure) applies. In addition, the tiering argument will be added to both the dataset and utility upload methods provided to ingest a local dataset into SDMS as a single object.
```c++
// open a dataset specifying the storage tier class.
SDGenericDataset dataset(&manager, "sd://tenant/subproject/path/dataset");
dataset.open(SDDatasetDisposition::CREATE|OVERRIDE, {
{ api::json::Constants::tier, Tier::<tier-class>}});
// object will be uploaded with the dataset specified <tier-class>
dataset.write("object_name", data, size);
// save the storage tier information in the manifest
dataset.close();
// upload a dataset: generic dataset class
SDGenericDataset dataset(&manager, "sd://tenant/subproject/path/dataset");
dataset.upload("fileToUploadPath", Tier::Cold);
// save the storage tier information in the manifest
dataset.close();
// upload a dataset: utility class
SDUtils utils(&manager);
utils.upload("sd://tenant/subproject/path/dataset", "fileToUploadPath", Tier::Cold);
```
In case the dataset already exists and is opened with a READ_WRITE disposition, the tier should be set to the one specified in the manifest; if this is not present, the default one should be applied.
The SDUTIL utility does not provide methods to manipulate individual objects: datasets are uploaded to SDMS via the **cp** command, which automatically splits the dataset into multiple objects and uploads them to the storage resource. The tier class can be specified in the upload version of the **cp** command, and all of the dataset's composing objects will be uploaded with that tier class.
```python
sdutil cp data sd://tenant/subproject/path/data --tier="<tier_class>"
```
In both SDUTIL and SDAPI, when the dataset is closed (at the end of an upload operation), the storage tier class value must be set in the dataset manifest in order to make the tier information available to, and trusted by, ingestion and consuming applications.
```json
manifest: {
    type: "most of the times set to \"GENERIC\"",
    nobjects: "number of dataset's objects",
    size: "the dataset size",
    checksum: "the dataset checksum",
    tier_class: "the storage tier class"
}
```
Please note that SDMS is in control only of changes made through the SDAPI and SDUTIL applications. If other apps are used to change a dataset object's storage tier class, they must also update the dataset manifest (by calling the PATCH /dataset endpoint), as sketched below.
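As a sketch, an external tool could keep the manifest in sync by patching the dataset after changing the tier; the request body layout here is an assumption:
```typescript
// Update the manifest's tier information through the dataset PATCH endpoint.
async function patchTierClass(base: string, token: string, tenant: string,
                              subproject: string, dataset: string, tier: string): Promise<void> {
    const res = await fetch(
        `${base}/dataset/tenant/${tenant}/subproject/${subproject}/dataset/${dataset}`,
        {
            method: "PATCH",
            headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
            // Assumed body shape: only the manifest's tier_class field is updated.
            body: JSON.stringify({ filemetadata: { tier_class: tier } }),
        });
    if (!res.ok) { throw new Error(`manifest patch failed: ${res.status}`); }
}
```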
## Update storage tier class
This feature enables consumers to update the desired storage tier class of an already uploaded object.
In SDAPI, we will add a new method to the generic dataset and the utility classes to update the dataset object's storage tier class.
```c++
// update a dataset's storage tier class: generic dataset class
SDGenericDataset dataset(&manager, "sd://tenant/subproject/path/dataset");
dataset.open(SDDatasetDisposition::READ_WRITE);
// update the storage tier class of all dataset's objects
dataset.update(Tier::<tier-class>);
// save the storage tier information in the manifest
dataset.close();
// update a dataset's storage tier class: utility class
SDUtils utils(&manager);
utils.update("sd://tenant/subproject/path/dataset", Tier::Cold);
```
In **SDUTIL**, we will update the **patch** command to update the storage tier class of all of a dataset's objects:
```python
sdutil patch sd://tenant/subproject/path/dataset --tier=<tier_class>
```
## Retrieve storage tier class
To know a dataset's storage tier class, client applications can retrieve the dataset descriptor and read the associated value in the manifest. Both SDAPI and SDUTIL should be updated to expose the new value: in SDAPI the dataset model will be updated with the extra property, and in SDUTIL we will enhance the **stat** command by adding the tier class information to the detailed command output.
```
- Name: sd://test-partition/sandbox/cube.zgy
- Created By: dmolteni3@.com
- Created Date: Tue May 16 2023 11:16:08 GMT+0000 (Coordinated Universal Time)
- Size: 36.0 MB
- No of Objects: 2
- Legal Tag: test-partition-default-legal
- Storage Tier Class: Hot
```

---
**Issue #131 · GC and GC baremetal deploy fail** (Daniel Perez, updated 2024-03-19)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/131

---
**Issue #130 · IBM E2E tests fail** (Daniel Perez, updated 2024-03-19)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/130

E2E tests for IBM in SDMS V3 are failing with "no healthy upstream"; this seems to be an issue with the environment itself.

---
**Issue #129 · DATASET SELECT LS POST: invalid characters in select give response code 200 instead of 400** (Isha Kumari, updated 2024-02-29)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/129

DATASET SELECT LS POST: when putting invalid characters in `select`, it gives response code 200; it should give 400.

---
**Issue #128 · Subproject creation accepts non-existing groups in ACLs** (Yan Sushchynski (EPAM), updated 2024-02-26)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/128

## Description of the problem
It is possible to create a new subproject with non-existing groups in the `acls` field. After that, any action in the subproject, except deleting the subproject, throws a `403`.
## Steps to reproduce it
1. Create a new subproject with invalid acls:
```
curl --location --request POST 'https://<svc_url>/v3/subproject/tenant/osdu/subproject/test-123' \
--header 'x-api-key: {{SVC_API_KEY}}' \
--header 'Content-Type: application/json' \
--header 'ltag: osdu-demo-legaltag' \
--header 'appkey: {{DE_APP_KEY}}' \
--header 'Authorization: Bearer <token>' \
--data-raw '{
"storage_class": "REGIONAL",
"storage_location": "US-CENTRAL1",
"acls": {
"admins": [
"data.sdms.non-existing.admin@osdu.group"
],
"viewers": [
"data.sdms.non-existing.viewer@osdu.group"
]
}
}'
```
This request is executed without any error.
2. Try to upload any file to the subproject:
```shell
python sdutil cp somefile sd://osdu/test-123/somefile
```
Output:
```
[403] [seismic-store-service] User not authorized to perform this operation
```

---
**Issue #127 · Issue with Get Status API** (Jiman Kim, updated 2024-02-09)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/127

Hello we are running some authentication testing and are running into some behaviors that may or may not be a bug.
For this endpoint:
`/seistore-svc/api/v4/status`
we have 3 tests running:
1. Sends an invalid token
2. Sends a valid token but signed with a wrong secret
3. Sends the HTTP request without an authorization header.
Tests 1 and 2 return a 401, but test 3 returns 200.
Is this a bug or intended behavior?
Thank you!

Milestone: M21 - Release 0.24

---
**Issue #125 · Patch dataset name issue** (Yan Sushchynski (EPAM), updated 2024-01-22)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/125

We ran the [collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M22/GC-M22/GC_OSDU_Smoke_Tests.postman_collection.json?ref_type=heads), and this request
```bash
```bash
curl --location --request PATCH 'https://<host>/api/seismic-store/v3/dataset/tenant/m19/subproject/subprojectodi374308/dataset/AutoTest_dsetodi831125?path=autotest_path' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: m19' \
--header 'Authorization: Bearer token' \
--data '{
"dataset_new_name": "autotest_new",
"metadata": {
"f1": "v1",
"f2": "v2",
"f3": "v3"
},
"filemetadata": {
"f1": "v1",
"f2": "v2",
"f3": "v3"
},
"last_modified_date": "Thu Jul 16 2020 04:37:41 GMT+0000 (Coordinated Universal Time)",
"gtags": [
"tag01",
"tag02",
"tag03"
],
"ltag": "m19-SeismicDMS-Legal-Tag-Test7649172",
"readonly": false,
"seismicmeta": {
"kind": "m19:seistore:seismic2d:1.0.0",
"legal": {
"legaltags": [
"m19-SeismicDMS-Legal-Tag-Test7649172"
],
"otherRelevantDataCountries": [
"US"
]
},
"data": {
"msg": "Auto Test sample data patched"
}
}
}'
```
And we get the following error:
```bash
[seismic-store-service] The dataset sd://m19/subprojectodi374308/autotest_path/autotest_new already exists, even so there is no such a dataset in Seismic at the moment
```

---
**Issue #124 · [IBM] fix new cloudant driver/lib** (Anuj Gupta, updated 2024-03-05)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/124
Fix : Unit Test Required
- [x] if no db created. initDB
- [x] existing record get (no migration)
- [x] save new record and get
- [x] put and patch
- [x] delete
- [x] runQuery
Branch : [ibm-cloudant-result](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/commits/fix/ibm-cloudant-result)

Milestone: M23 - Release 0.26

---
**Issue #123 · [IBM] replace keycloak-admin with @keycloak/keycloak-admin-client** (Diego Molteni, updated 2024-03-05)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/123

Please replace the deprecated and vulnerable package [keycloak-admin](https://www.npmjs.com/package/keycloak-admin) with the new [@keycloak/keycloak-admin-client](https://www.npmjs.com/package/@keycloak/keycloak-admin-client).

Milestone: M23 - Release 0.26