OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/34
Auto Secret rotation policy for customer's secrets stored in secret service KV. (preeti singh [Microsoft], updated 2023-08-24; assignee: Ashish Saxena)

The potential impact of credential rotation on the customer end, leading to 4xx responses in EDS DMS and DAGs, is not a great customer experience: a secret getting rotated will break EDS workflows today. We need to improve on this before we graduate EDS.
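One possible mitigation on the EDS side is sketched below: on a 401/403 from the source, re-read the credential from the Secret service once and retry before failing the run. This is a minimal sketch; the Secret service path and the response field name are assumptions.

```python
import requests

SECRET_BASE = "https://{host}/api/secret/v1"  # Secret service base path (assumed)

def fetch_source_secret(secret_name: str, osdu_headers: dict) -> str:
    """Re-read the connected-source credential from the Secret service.
    The response field name is an assumption."""
    resp = requests.get(f"{SECRET_BASE}/secrets/{secret_name}", headers=osdu_headers)
    resp.raise_for_status()
    return resp.json()["value"]

def call_source(url: str, secret_name: str, osdu_headers: dict) -> requests.Response:
    """Call the external source; on 401/403 (a likely sign the customer rotated
    the credential), refresh the secret once and retry instead of failing the DAG run."""
    token = fetch_source_secret(secret_name, osdu_headers)
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    if resp.status_code in (401, 403):
        token = fetch_source_secret(secret_name, osdu_headers)  # may have been rotated
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    return resp
```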
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/33
Fetch-and-ingest should take into account the deletion of data at source (preeti singh [Microsoft], updated 2023-09-20; assignee: Ashish Saxena)

Currently Fetch-and-ingest takes into account newly created or modified (since the last successful run) "catalog" records at the source; it does not consider the deletion of data at the source.
The use case:
- Provider has file X which has metadata JSON file X’ (both on provider’s instance).
- Now EDS Fetch and Ingest runs and X’ is copied over to operator’s OSDU instance. Let us call it X’’ which has an external Connected Source pointing to X.
- Provider deletes X.
- Fetch-and-ingest runs but does not delete X'' since there is no timestamp change: a deleted file has no modified date, so keeping operator and provider files in sync would require a full diff.
- Therefore, the operator is still able to find X'' via search.
- The operator uses the EDS DMS API to access the external record. The record returns a signed URL for X, which no longer exists, and the request fails with an error.
In this case it would be great if the metadata on the operator side were deleted as well. A full diff between source and operator record IDs could drive this, as sketched below.
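A minimal sketch of that reconciliation, assuming a hypothetical `storage_client` wrapper and that both ID sets can be enumerated:

```python
def reconcile_deletions(source_ids: set, operator_ids: set, storage_client) -> None:
    """Soft-delete operator-side copies whose source records disappeared.

    source_ids: catalog record IDs currently present at the provider.
    operator_ids: IDs of the corresponding copies on the operator instance.
    storage_client: hypothetical wrapper exposing delete_record(record_id).
    """
    for record_id in operator_ids - source_ids:
        # X was removed at the source, so remove X'' so search stops returning it
        storage_client.delete_record(record_id)
```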
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/228
Reservoir Model (Dynamic) (Mykhailo Buriak, updated 2023-08-24)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/227
Reservoir Model (Static) (Mykhailo Buriak, updated 2023-08-24)

https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/46
Allow for configurability of security classification (Fabrice Hauy, updated 2023-09-29)

The AbstractCommonResources property "ResourceSecurityClassification" is now deprecated, as it was redundant with the Security Classification of the legal tag, where it belongs. Furthermore, as part of the legal tag, it allows for access policies to be defined against the security classification.
However, the legal tag "security classification" seems to be hardcoded. We would like to request that this value be configurable (or possibly rely on the reference-data of open governance).
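For context, a sketch of creating a legal tag via the Legal service; the property values are illustrative, and the point of this issue is that `securityClassification` accepts only a fixed set of values:

```python
import requests

LEGAL_BASE = "https://{host}/api/legal/v1"  # Legal service base path
headers = {"Authorization": "Bearer <token>", "data-partition-id": "opendes"}

tag = {
    "name": "opendes-demo-legaltag",
    "description": "demo tag",
    "properties": {
        "countryOfOrigin": ["US"],
        "contractId": "A1234",
        "expirationDate": "2026-12-31",
        "originator": "OSDU",
        "dataType": "Public Domain Data",
        # Accepted values are a fixed set today (e.g. "Private", "Public",
        # "Confidential"); any other value fails validation, which is the
        # configurability gap this issue raises.
        "securityClassification": "Private",
        "personalData": "No Personal Data",
        "exportClassification": "EAR99",
    },
}
requests.post(f"{LEGAL_BASE}/legaltags", json=tag, headers=headers)
```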
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/107
[ADR] Hierarchical deletion of datasets (Maggie Salak, updated 2024-01-05; milestone: M22 - Release 0.25)

# Introduction
We need a way to delete millions of datasets (including metadata and files in blob storage) in Seismic DMS. A single delete operation can include up to 50 million datasets.
The purpose of this ADR is to define the approach to implementing a hierarchical delete feature in SDMS.
# Status
* [x] Initiated
* [x] Proposed
* [x] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
SDMS API currently exposes the following endpoints for deleting datasets:
- `DELETE /dataset/tenant/{tenantid}/subproject/{subprojectid}/dataset/{datasetid}`
Deletes a single dataset.
- `DELETE /subproject/tenant/{tenantid}/subproject/{subprojectid}`
Deletes a subproject.
The endpoint deleting a subproject currently does not scale to the required number of datasets. The current implementation also leaves a possibility of an inconsistent state between the metadata and files in blob storage - in case some of the files fail to be deleted, the deletion of metadata associated with these datasets is not reverted.
SDMS currently does not have the functionality of deleting only selected datasets in a subproject, filtered by path, tags, labels, etc.
# Proposed Solution
In short:
- Create new API endpoints to support starting and tracking progress of the asynchronous deletion operation.
- Deploy a new service on k8s that would asynchronously delete datasets.
## Overview
We will introduce the bulk-delete feature as follows:
1. Implement and deploy a separate application to the same K8s cluster: the _deletion service_.
This service will accept the bulk deletion requests from SDMS API, perform the deletion and keep track of the progress of this long-running operation.
2. Add the new endpoint to SDMS API to delete all datasets in a specified path:
`PUT /operations/bulk-delete?sdpath={sdpath}`
Status: 202 Accepted
`sdpath` in the format `sd://tenant/subproject/path`
Response schema:
```json
{
"operationId": "{string}"
}
```
3. Add the new endpoint to SDMS API to view the status and progress of the delete operation:
`GET /operations/bulk-delete/status/{operationid}`
Status: 200 OK
Response schema:
```json
{
"OperationId": "{string}",
"CreatedAt": "{string}",
"CreatedBy": "{string}",
"LastUpdatedAt": "{string}",
"Status": "{string}",
"DatasetsCnt": "{int}",
"DeletedCnt": "{int}",
"FailedCnt": "{int}"
}
```
Headers will contain `data-partition-id` information to check that the user is registered in the partition before the operation status is retrieved. A usage sketch of both endpoints follows.
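A minimal client sketch of the two endpoints above; the SDMS base path, token, and partition are placeholders, not part of this ADR:

```python
import requests

BASE = "https://{host}/api/seismic-store/v3"  # SDMS base path; adjust per deployment
HEADERS = {
    "Authorization": "Bearer <token>",
    "data-partition-id": "opendes",  # example partition
}

# Start a bulk delete of every dataset under a path (202 Accepted).
start = requests.put(
    f"{BASE}/operations/bulk-delete",
    params={"sdpath": "sd://tenant/subproject/path"},
    headers=HEADERS,
)
operation_id = start.json()["operationId"]

# Poll the status endpoint for progress of the long-running operation.
status = requests.get(
    f"{BASE}/operations/bulk-delete/status/{operation_id}", headers=HEADERS
).json()
print(status["Status"], status.get("DeletedCnt"), status.get("FailedCnt"))
```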
## Details
### Initiating a delete operation
- The new `PUT` endpoint will support the following cases for the dataset path, provided in the `sdpath` parameter:
- `path = /<path>/` - all datasets under the specified path should be deleted.
- path not specified - all datasets in the subproject should be deleted.
If the deletion of the subproject (metadata and container) is desired as well, clients should call the delete-subproject endpoint after the bulk delete operation completes; this keeps the subproject deletion non-blocking when it contains many datasets.
- The endpoint triggers the deletion job and returns the ID of the initiated operation.
- The delete operation is initiated in SDMS by pushing a message onto a queue (Azure Storage queue in case of Azure implementation; a different queuing mechanism can be used by other CSPs); the message contains the `operationId` and the parameters from the original request (tenant, subproject, path).
### Deletion service
Deletion service is a separate component from SDMS API, deployed to the same K8s cluster. The implementation details of the service can be decided by the individual CSPs. This section describes the proposed implementation for Azure.
The source code of the new component will be contributed to the Sidecar solution in the `seismic-store-service` repository.
The logic of the deletion service will work as follows (a condensed sketch follows the list):
- The service consumes the message from the Azure Storage queue and initiates the deletion process.
- All items (dataset IDs and `gcsurl` which determines the location in blob storage) matching the provided subproject and path are retrieved from Cosmos database.
- For each dataset, the deletion service checks if it is locked.
- If yes, the item is discarded from the delete operation.
- If not, the deletion service locks the dataset. The lock value in this case will contain a string indicating that the dataset is locked for deletion (e.g. WDELETE). This allows another delete operation to delete the dataset if a previous deletion failed, while still preventing deletion of datasets holding a regular write lock, which indicates active use.
- The retrieved items are added to storage which allows the deletion service to keep track of the datasets to delete. In the first version of the implementation, the deletion service will store the retrieved datasets in memory.
In a later phase we are planning to use a persistent storage (e.g. Service Bus queue) to store the items to be deleted. This will allow the service to resume deletion after a restart as well as retry deletion for the datasets where it failed.
- The deletion service leverages existing Redis queues to keep track of the overall deletion operation status and progress.
- The deletion service retrieves and deletes the datasets by checking the store containing items to be deleted. In the first version of the implementation it simply iterates over items stored in memory.
- The datasets are processed in batches; for each batch we retrieve the associated blobs from the storage account using the `gcsurl` property of the metadata.
- The blobs from the current batch are deleted.
- We then delete the metadata documents from Cosmos DB, leaving the ones for which the blob deletion was unsuccessful. A blob that was not found counts as successfully deleted, on the assumption that it was deleted earlier.
- The deletion status is updated in Redis after processing every dataset.
- At the end, the status of a completed operation (with errors or without) is saved in Redis.
- The deletion status should not be deleted at this point so that users can query the operation status after completion.
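A condensed sketch of this loop, assuming hypothetical wrapper clients (`cosmos`, `blob_store`, `redis_client`) over the respective Azure SDKs; it mirrors the lock-then-delete ordering and the per-dataset Redis updates described above:

```python
def run_bulk_delete(message: dict, cosmos, blob_store, redis_client) -> None:
    """message is the queue payload: operationId, tenant, subproject, path."""
    key = f"deletequeue:status:{message['operationId']}"
    items = cosmos.query_datasets(message["tenant"], message["subproject"], message["path"])
    # Lock datasets with the WDELETE marker; skip ones that are already locked.
    work = [d for d in items if cosmos.try_lock(d["id"], "WDELETE")]
    redis_client.hset(key, mapping={
        "Status": "In progress", "DatasetsCnt": len(work),
        "DeletedCnt": 0, "FailedCnt": 0,
    })
    for batch in chunks(work, 100):  # batch size is arbitrary in this sketch
        for dataset in batch:
            try:
                blob_store.delete_prefix(dataset["gcsurl"])  # blobs first
                cosmos.delete_metadata(dataset["id"])        # then metadata
                redis_client.hincrby(key, "DeletedCnt", 1)
            except Exception:
                # keep the metadata so a later operation can retry this dataset
                redis_client.hincrby(key, "FailedCnt", 1)
    redis_client.hset(key, "Status", "Completed")

def chunks(seq, n):
    for i in range(0, len(seq), n):
        yield seq[i:i + n]
```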
### Sequence diagram for the deletion operation
![deletion_diagram_osdu](/uploads/b097c46896644e19a7374df96560aabd/deletion_diagram_updated.png)
### Deletion status
The status of delete operations will be saved in Redis.
It will be written by the deletion service (updated with the current progress) and read by SDMS API
(when users request the deletion status).
SDMS API and the deletion service will agree on the naming convention for the key in Redis,
e.g. `deletequeue:status:{operationId}`.
The new `GET` endpoint allowing users to query the status of a delete operation will return the following information:
- **`OperationId`** - ID of the delete operation.
- **`Status`** - Current status of the delete operation; possible values are: 'Not started', 'Started', 'In progress', 'Completed', 'Completed with errors'.
- `CreatedAt` - Timestamp of the creation of the delete operation.
- `CreatedBy` - Entity initiating the delete operation.
- `LastUpdatedAt` - Timestamp of the last status update of the delete operation.
- `DatasetsCnt` - Total number of datasets to be deleted; initially not set, until the enumeration of datasets for deletion is completed.
- `DeletedCnt` - Number of deleted datasets; updated after each dataset processed by the deletion service, after both blobs and metadata are deleted.
- `FailedCnt` - Number of datasets for which the deletion failed; updated after each dataset processed by the deletion service if a failure occurs.
_(only the fields in **bold** are required)_
_(dataset counts will be empty if the status is "not started"; a read-side sketch follows)_
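A read-side sketch, assuming the status is stored as a Redis hash under the key convention agreed above:

```python
import redis

r = redis.Redis(decode_responses=True)  # connection details assumed

def read_status(operation_id: str) -> dict:
    """Fetch the status hash written by the deletion service under the
    agreed key convention; an empty result means the ID is unknown."""
    status = r.hgetall(f"deletequeue:status:{operation_id}")
    if not status:
        raise KeyError(f"unknown operation: {operation_id}")
    return status
```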
### Sequence diagram for the deletion status
![deletion_status_diagram](/uploads/52b27cfb56a9942cf7628e81aeb41eec/deletion_status_diagram.png)
# Out of scope / limitations
- Detailed statistics about datasets which failed to be deleted. In the first phase of implementation the deletion status endpoint will provide aggregated statistics as mentioned in the `Deletion status` section. Users will need to refer to logs to find out which datasets failed to be deleted.
- The bulk-delete feature does not guarantee the operation can continue after a restart of the deletion service. It will be up to the different CSPs to determine if there is retry logic for failed datasets or recovery support built into the service.
- Deleting 'orphan' blobs with missing metadata. Files without metadata containing a matching `gcsurl` will not be deleted as part of the delete operation as metadata is the source of truth for which blobs need to be deleted.
- Identifying blobs belonging to a different dataset but located in the same virtual folder as files of another dataset. Since `gcsurl` carries the location of the files to be deleted, the delete operation cannot detect 'unrelated' files erroneously uploaded to the same virtual folder.
# Consequences
The same bulk deletion API endpoints can be implemented by any CSPs besides Azure.
The status endpoint is not CSP-specific. As long as the bulk delete implementation saves
the job status with the same schema to Redis, the status endpoint will work for any other CSP out of the box.

https://community.opengroup.org/osdu/platform/system/register/-/issues/47
Integration test core pom references core-lib-gc (Alok Joshi, updated 2023-08-21)

https://community.opengroup.org/osdu/platform/system/register/-/blob/master/testing/register-test-core/pom.xml
There is a dependency reference to core-lib-gc inside the core dependencies for the integration tests. This violates the principle of having only non-CSP logic in the core module.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/106
The getBlockSizes service takes > 30 seconds to complete for a 20 GB file (Michael, updated 2023-08-29)

We uploaded a 20 GB file to the Azure pre-shipping Seismic DDMS environment.
When calling the getBlockSizes() method from the sdapi library, the call took over 30 seconds to complete.
We are worried about how long it takes to execute this method since it affects the user experience when trying to visualize data from these files. We are also concerned because we want to support larger files (100 GB+ in size).
The file was uploaded to the following sd path: sd://opendes/michaelm19/ST0202R08_PZ_PrSDM_CIP_gathers_in_PP_Time.segy

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/28
No errors/warnings reported when uploading seismic file with corrupted blocks (Michael, updated 2023-08-18)

I uploaded a segy file to Seismic DDMS on the Azure pre-shipping environment using sdutil and no errors/warnings were reported.
Later, when trying to read the segy file using the sdapi library, there were unreadable blocks. I re-uploaded the file to a different sd path and got MD5 checksum errors when trying to read the blocks.
We tried to upload the file to the following locations:
- sd://opendes/michaelm19/cdp_stack.sgy: the last block of this file (22) was unreadable
- sd://opendes/michaelm19/cdp_stack_from_q.sgy: blocks 3, 6, 9, and 13 have checksum errors
The original file can be accessed here: https://osdutroubleshootingfiles.s3.us-east-2.amazonaws.com/cdp_stack.sgy
Our concern is that sdutil did not report any errors even though the result of the file upload is corrupted.
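Until sdutil verifies uploads itself, a client-side round-trip check is one way to catch this. A minimal sketch; the download back from seismic store (e.g. via `sdutil cp`) is assumed to have happened separately:

```python
import hashlib

def md5_of(path: str, chunk_size: int = 4 * 1024 * 1024) -> str:
    """MD5 of a local file, computed in block-sized reads."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Round-trip check: compare the source file against a copy downloaded back
# from seismic store; a mismatch means the upload was corrupted even though
# sdutil reported success.
if md5_of("cdp_stack.sgy") != md5_of("cdp_stack_downloaded.sgy"):
    raise RuntimeError("upload corrupted: MD5 mismatch")
```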
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/168
Reconsider the approach to displaying the list of supported analysis types (Siarhei Khaletski (EPAM), updated 2023-08-18)

TBU
Options:
1. Use an enum for models to be able to see on the Swagger page (sketched below)
2. Use documentation and release notes
3. ...
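A minimal sketch of option 1, assuming the service is FastAPI-based (names and values here are illustrative): typing a parameter with an `Enum` makes the supported values appear on the Swagger page automatically.

```python
from enum import Enum
from fastapi import FastAPI

class AnalysisType(str, Enum):
    """Illustrative values only; the real list would come from the service."""
    ROUTINE_CORE_ANALYSIS = "routinecoreanalysis"
    CONSTANT_COMPOSITION_EXPANSION = "constantcompositionexpansion"

app = FastAPI()

@app.get("/analysistypes/{analysis_type}")
def get_analysis_type(analysis_type: AnalysisType):
    # Because the path parameter is Enum-typed, OpenAPI/Swagger renders
    # the complete list of supported values in the generated docs.
    return {"analysis_type": analysis_type}
```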
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/issues/19
Standardization of endpoints (Chad Leong, updated 2023-11-09; milestone: M21 - Release 0.24)

# Summary
Based on the recent M19 pre-shipping testing, we identified a few discrepancies in the Reservoir DDMS endpoints for both ETP and REST servers. We need to follow the standard convention of OSDU.
## Actual behavior
These are the endpoints observed today.
ETP Server:
- [AWS] wss://osdu.r3m18.preshiptesting.osdu.aws/api/oreservoir-ddms-etp/v2/
- [Azure] wss://osdu-ship.msft-osdu-test.org/oetp/reservoir-ddms/
- [Google] wss://preship.gcp.gnrg-osdu.projects.epam.com/api/oetp-server/v2/
- [IBM] ?
REST Server:
- [AWS] https://osdu.r3m18.preshiptesting.osdu.aws/api/reservoir-ddms/v2
- [Azure] https://osdu-ship.msft-osdu-test.org/api/oetp-client/v2
- [Google] https://preship.gcp.gnrg-osdu.projects.epam.com/api/reservoir-ddms/v2
- [IBM] ?
## Intended behavior
We should follow the standard endpoint convention in OSDU.
ETP Server:
- `{baseurl}/api/reservoir-ddms-etp/v2/`
REST server:
- `{baseurl}/api/reservoir-ddms/v2/{methods}`
## Pending Implementation
- [X] AWS
- [ ] Azure
- [X] GC
- [ ] IBM

https://community.opengroup.org/osdu/platform/system/dataset/-/issues/58
Unable to Retrieve Loaded File from Glab (Azure) (Vsevolod Svirin, updated 2023-08-18; assignee: hanuman magar)

I encountered an issue while following the file-loading process through the Dataset service. Despite successfully uploading files and registering the Dataset on Glab (Azure), I am unable to retrieve the loaded file using the provided signed URL.
Steps Taken:
1. Get Upload storage instructions
2. Upload a text file
3. Register Dataset
4. Get Retrieval instructions
5. Download the text file using the signed URL - results in a 404 error labeled as "BlobNotFound."
Upon analyzing the signed URL, it seems to be targeting the "file-persistent-area" for file retrieval. However, this directory or file structure does not appear to exist. Notably, the files are present within the staging container.
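For reference, a condensed sketch of the five steps; the Dataset service paths and response field names here are assumptions, so consult the service OpenAPI for the exact names:

```python
import requests

DATASET = "https://{host}/api/dataset/v1"  # base path assumed
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "opendes"}

# 1-2. Get upload instructions and PUT the file to the returned signed URL.
up = requests.get(f"{DATASET}/getStorageInstructions",
                  params={"kindSubType": "dataset--File.Generic"},
                  headers=HEADERS).json()
upload_url = up["storageLocation"]["signedUrl"]  # response field names assumed
requests.put(upload_url, data=b"some text",
             headers={"x-ms-blob-type": "BlockBlob"})

# 3. Register the dataset (payload omitted), then:
# 4-5. Get retrieval instructions and download via the returned signed URL.
down = requests.get(f"{DATASET}/getRetrievalInstructions",
                    params={"id": "opendes:dataset--File.Generic:<registered-id>"},
                    headers=HEADERS).json()
download_url = down["datasets"][0]["retrievalProperties"]["signedUrl"]
print(requests.get(download_url).status_code)  # observed here: 404 BlobNotFound
```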
@Srinivasan_Narayanan

https://community.opengroup.org/osdu/platform/system/search-service/-/issues/132
Follow-up from "Add filter to nested sort" (Mark Chance, updated 2023-08-17; milestone: M20 - Release 0.23)

The following discussion from !535 should be addressed:
- [ ] @nthakur started a [discussion](https://community.opengroup.org/osdu/platform/system/search-service/-/merge_requests/535#note_241458): (+2 comments)
> Will filter context work for non-nested scenario? If it does, can you please update non-nested section as well? If it does not, then we should add this in limitation documentation.
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/80
purge data records generated from e2e (Yunhua Koglin, updated 2023-08-15)

Currently, running e2e creates deleted records in storage rather than purging them. e2e should clean up storage at the end.
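A teardown sketch, assuming the Storage purge endpoint (`DELETE /records/{id}`) as opposed to the logical delete (`POST /records/{id}:delete`) that leaves soft-deleted records behind; host, token, and IDs are placeholders:

```python
import requests

STORAGE = "https://{host}/api/storage/v2"  # Storage service base path
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "opendes"}

created_record_ids = ["opendes:work-product-component--WellLog:example"]  # collected during the run

def purge_record(record_id: str) -> None:
    """DELETE /records/{id} removes the record and its versions entirely,
    unlike the logical delete, which only marks it deleted."""
    requests.delete(f"{STORAGE}/records/{record_id}", headers=HEADERS).raise_for_status()

# e2e teardown: purge every record the test run created.
for record_id in created_record_ids:
    purge_record(record_id)
```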
https://community.opengroup.org/osdu/platform/system/storage/-/issues/181
GET: /records/{recordID}/{version} - ERROR 500 (Siarhei Khaletski (EPAM), updated 2024-01-01; milestone: M22 - Release 0.25)

**Context**
GET: /records/{recordID}/{version} fails with error 500 if an invalid version is provided (see the attachment).
We noticed an odd behavior of the service:
List of existing versions of the following record: `opendes:work-product-component--SamplesAnalysis:e9f02f48f43149a8b69606ff7597f391`
![image](/uploads/3d75fd80a57f5558c7d0eb00a4d795eb/image.png)
Requesting the nonexistent version `1` returns status 500:
![image](/uploads/d3dc228f70263bd24ff7d09975baa63c/image.png)
Meanwhile, requesting the nonexistent version `1234` returns status 404:
![image](/uploads/e82da89c3673b643aaa26845f0eb0c81/image.png)
**Azure GLab Logs**
![image](/uploads/8d54b1addcbc1835b4ea3c90135072b6/image.png)
**Expected Behavior**
404 - status code
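A sketch of the expected uniform handling, with a hypothetical lookup stub: an unparsable version and an unknown-but-valid version should both map to 404.

```python
def load_version(record_id: str, version: int) -> dict:
    """Stub standing in for the real version fetch."""
    return {"id": record_id, "version": version}

def get_record_version(record_id: str, version: str, existing_versions: list):
    """Validate the requested version before touching storage."""
    try:
        requested = int(version)
    except ValueError:
        return 404, {"reason": f"invalid version {version!r}"}
    if requested not in existing_versions:
        return 404, {"reason": f"version {requested} not found on {record_id}"}
    return 200, load_version(record_id, requested)
```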
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/156
listallworkflow API does not return Version of services/DAG deployed. (vinisha krishna, updated 2023-10-23)

The following API returns Version 1, which seems to be hardcoded. We need the version of the services/DAGs deployed; with ADME we do not have visibility of which version is deployed.
{base_url}/solutions/data-flow/apis/workflow-service#/Workflow/listAllWorkflow
![image](/uploads/c99bde240f7cd2276d899e3f05f7cd73/image.png)

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/589
Azure M19 - Audit and Metric (Platform KPI) - results in restricted site (Debasis Chatterjee, updated 2023-08-16; assignee: Mohd Asad Shaikh)

When testing this Platform KPI, I am shown a site where I do not have privileges, so I cannot check what results are generated by this API.
In addition, there is a hint of using a vendor proprietary system.
POST https://osdu-ship.msft-osdu-test.org/api/aam/azure/subscriptions/{{subscription_id}}/{{resource_group_id}}/{{vm_name}}
Body
```
{
"metric_id":"Percentage CPU",
"timespan":"2023-08-03T06:25:00Z/2023-08-07T18:41:00Z",
"aggregation":"Total",
"interval":"PT1H"
}
```
Response
```
{
"cost": 6495,
"interval": "PT1H",
"namespace": "Microsoft.Compute/virtualMachines",
"resourceregion": "eastus",
"timespan": "2023-08-03T06:25:00Z/2023-08-07T18:41:00Z",
"value": [
{
"displayDescription": "The percentage of allocated compute units that are currently in use by the Virtual Machine(s)",
"errorCode": "Success",
"id": "/subscriptions/7c052588-ead2-45c9-9346-5b156a157bd1/resourceGroups/osdu-gcz-naresh-rg/providers/Microsoft.Compute/virtualMachines/osdu-gcz-dev-vm/providers/Microsoft.Insights/metrics/Percentage CPU",
"name": {
"localizedValue": "Percentage CPU",
"value": "Percentage CPU"
},
"timeseries": [],
"type": "Microsoft.Insights/metrics",
"unit": "Percent"
}
]
}
```

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/583
M19: GC EDS Ingestion Error, some of the expected Reference Data is missing, while testing EDS and adding Connected Source Registry Entry (Ankit Goyal, updated 2024-01-08; assignee: Dzmitry Malkevich (EPAM))

DAG: Osdu_ingest R3 manifest processing with providing integrity
XCom:
- Key: `saved_record_ids`, Value: `{}`
- Key: `skipped_ids`, Value: `{'provide_manifest_integrity_task': [{'id': 'm19:master-data--ConnectedSourceRegistryEntry:AWSPreship-AGAUG102023', 'kind': 'osdu:wks:master-data--ConnectedSourceRegistryEntry:1.2.0', 'reason': 'Missing parents: {SRN: m19:master-data--Organisation:AWS-PRESHIP-AGAUG102023, SRN: m19:master-data--ActivityTemplate:CSRE-ActivityTemplate-AGAUG102023, SRN: m19:reference-data--OAuth2FlowType:RefreshTokenKeyName}'}]}`
@chad @debasisc @todaiks

https://community.opengroup.org/osdu/platform/system/storage/-/issues/180
Unable to nullify a non-system attribute from DateTime value to null or empty value using Storage service (Shubhankar Srivastava, updated 2023-08-22)

To support a business use case, a user needs to update an existing attribute (with data type DateTime) residing under the data { } section of a **work-product-component** schema from a valid DateTime value (e.g. 2023-08-10T00:00:00+0000) to "null" or an empty string (""). But when this transaction is attempted and executed via the STORAGE service, the value of the attribute remains unchanged even after a successful execution (HTTP status code 200). The STORAGE service should allow users to register an empty/null value for a DateTime attribute.
Please note that the attribute "DateSubmitted" does not belong to the list of System Properties like "createTime" or "modifyTime" and might not be used for auditing purposes.
1. "kind": "shell:wks:work-product-component--LQCWebSheet:1.0.0"
2. Example record:
```json
{
  "data": {
    "ApprovalStatusTypeID": "osdu:reference-data--LQCApprovalStatusType:Submitted:",
    "Source": "shell",
    "Name": null,
    "IsBonus": false,
    "LoggingInterpreter": null,
    "FinalDeliveryDuration": 1.0,
    "WebSheetName": "Test_LWD_Websheet_Edit_Approver_Request_v2",
    "LastUpdatedPPEmail": null,
    "ApproverEmail": "NewApprover1.Nayak@shell.com",
    "WellboreID": "osdu:master-data--Wellbore:BDLQCGOM2_1_WB2:",
    "OperationalComment": "Test_LWD_Websheet_Edit_Approver_v2_Operational_Comments",
    "ApproverComment": null,
    "SourceApplication": "Created in LQC WebSheets",
    "SubmitterName": "Sujith.Submitter@shell.com",
    "IsApprovalStatusReset": true,
    "DateSubmitted": "2023-06-05T07:56:19.914485+0000"
  },
  "kind": "shell:wks:work-product-component--LQCWebSheet:1.0.0",
  "source": "wks",
  "acl": {
    "viewers": ["data.default.viewers@osdu.shell.com"],
    "owners": ["data.default.owners@osdu.shell.com"]
  },
  "type": "work-product-component--LQCWebSheet",
  "version": 1686283555925808,
  "tags": {
    "normalizedKind": "shell:wks:work-product-component--LQCWebSheet:1"
  },
  "modifyUser": "Monalisa.Mohapatra@shell.com",
  "modifyTime": "2023-06-09T04:05:56.083Z",
  "createTime": "2022-12-15T11:26:58.940Z",
  "authority": "shell",
  "namespace": "shell:wks",
  "legal": {
    "legaltags": ["osdu-shell-lqc-dataset-testing"],
    "otherRelevantDataCountries": ["US"],
    "status": "compliant"
  },
  "createUser": "Labanyendu.Nayak@shell.com",
  "id": "osdu:work-product-component--LQCWebSheet:62008"
}
```
3. Target attribute - "data.DateSubmitted"
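A minimal reproduction sketch against the Storage v2 API (host and token are placeholders; note the PUT body is an array of records):

```python
import requests

STORAGE = "https://{host}/api/storage/v2"
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "shell"}

record_id = "osdu:work-product-component--LQCWebSheet:62008"
# Read-modify-write; system fields may need to be stripped before the PUT.
record = requests.get(f"{STORAGE}/records/{record_id}", headers=HEADERS).json()

record["data"]["DateSubmitted"] = None  # attempt to nullify the DateTime attribute
put = requests.put(f"{STORAGE}/records", json=[record], headers=HEADERS)
print(put.status_code)  # 200, yet a subsequent GET still returns the old value
```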
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/78
Anthos/Baremetal. (NoSuchKey) when calling "get welllog data" (Yan Sushchynski (EPAM), updated 2023-10-04; milestone: M19 - Release 0.22; assignee: Yannick)

Hello,
Postman Environment: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M19/QA_Artifacts_M19/envFilesAndCollections/envFiles/OSDU%20R3%20M19%20RI%20Pre-ship.postman_environment.json
Postman Collection: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M19/QA_Artifacts_M19/envFilesAndCollections/Wellbore%20DDMS%20CI-CD%20v3.0.postman_collection.json
Steps to reproduce:
1. Create a WellLog
2. Post the WellLog data
3. Get the WellLog data.
The logs show that when we post the well log data, a new folder and a parquet file are created:
```log
DEBUG:Sending http request: <AWSPreparedRequest stream_output=False, method=PUT, url=https://s3.ref.gcp.gnrg-osdu.projects.epam.com/wellbore/logstore-osdu/9ee8ed74df9b8efb695f376771eea3e707b66753/bulk/2c0429ad-b4a1-4a70-a17e-bb08cc245f3f/data/0_4_1691662228355.e70a959cea89c6147785c7fa57cde5be8b6dc250.parquet
```
And then, when we want to get the data, it attempts to get an absent `bulk_catalog.json`:
```log
DEBUG:Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=https://s3.ref.gcp.gnrg-osdu.projects.epam.com/wellbore/logstore-osdu/9ee8ed74df9b8efb695f376771eea3e707b66753/bulk/2c0429ad-b4a1-4a70-a17e-bb08cc245f3f/data/bulk_catalog.json
```
Linked issue: https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/568