OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

https://community.opengroup.org/osdu/platform/system/storage/-/issues/163
The request to get records of a particular kind using the limit is not working (2023-06-20, Kamlesh Todai)

The Storage API CI/CD v1.11 collection (from the Platform Validation project) was working on all platforms and passing with a 100% pass rate:
https://community.opengroup.org/osdu/platform/testing/-/blob/master/Postman%20Collection/12_CICD_Setup_StorageAPI/Storage%20API%20CI-CD%20v1.11.postman_collection.json
At present it is still passing with a 100% pass rate in AWS R3 M16 Platform Validation (the forum testing environment), but it is no longer passing with a 100% pass rate in the other CSPs' Platform Validation environments, nor in any CSP environment in pre-shipping.
In the referenced collection, request #8 is failing: "08 - Storage - Get all records for a kind with limit of 10 records".
=====================================================================
Example of the request passing in Platform Validation R3 M16 (forum testing):
```
curl --location 'https://r3m16.forumtesting.osdu.aws/api/storage/v2/query/records?limit=10&kind=osdu%3Awks%3AautoTest_955280%3A1.1.0' \
--header 'data-partition-id: osdu' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer eyJraWQiOi...4XnucQETfnB3biA' \
--header 'Cookie: session=eyJfZnJlc2giOmZhbHNlLCJfcGVybWFuZW50Ijp0cnVlfQ.Y_VNrw.SMJbZoZwlkMYCD7E9ge4ICPnqJY'
```
The templated request in the collection is:
https://{{STORAGE_HOST}}/query/records?limit=10&kind={{authority}}:{{schemaSource}}:{{entityType}}:{{schemaVerMajor}}.{{schemaVerMinor}}.{{schemaVerPatch}}
Response code: 200 OK
```
{
    "results": [
        "osdu:999611481173:999301114394"
    ]
}
```
===================================================================
Example of the same request failing (pre-shipping):
```
curl --location 'https://r3m16-ue1.preshiptesting.osdu.aws/api/storage/v2/query/records?limit=10&kind=osdu%3Awks%3AautoTest_20923%3A1.1.0' \
--header 'data-partition-id: osdu' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer eyJraWQiOi...tW7kPscDabFJ3sEPeNA'
```
Response code: 415 Unsupported Media Type
The response body is blank.
The same failure occurs on every CSP where the collection fails.
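As a purely hypothetical diagnostic (not a confirmed fix), one could re-send the failing request with an explicit Content-Type header to check whether the 415 comes from content negotiation in those environments; the token below is a placeholder.

```python
# Hypothetical diagnostic only: re-send the failing request with an explicit Content-Type
# header to see whether the 415 Unsupported Media Type comes from content negotiation.
import requests

resp = requests.get(
    "https://r3m16-ue1.preshiptesting.osdu.aws/api/storage/v2/query/records",
    params={"limit": 10, "kind": "osdu:wks:autoTest_20923:1.1.0"},
    headers={
        "data-partition-id": "osdu",
        "Accept": "application/json",
        "Content-Type": "application/json",  # explicitly set, unlike the failing request above
        "Authorization": "Bearer <token>",   # placeholder
    },
    timeout=30,
)
print(resp.status_code, resp.text)
```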
============================================================================
@chad @debasisc

Milestone: M16 - Release 0.19

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/458
For the AWS platform, the query to get all kinds is not returning any records (2023-06-13, Kamlesh Todai)

The query to retrieve all kinds is not returning any results (records):
```
curl --location 'https://r3m16-ue1.preshiptesting.osdu.aws/api/storage/v2/query/kinds' \
--header 'data-partition-id: osdu' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer eyJraWQiOiJ...7kPscDabFJ3sEPeNA'
```
The response is 200 OK, with empty results:
```
{
    "results": []
}
```
The collection used can be found at https://community.opengroup.org/osdu/platform/testing/-/blob/master/Postman%20Collection/12_CICD_Setup_StorageAPI/Storage%20API%20CI-CD%20v1.11.postman_collection.json
The request name is "01 Storage - Get all kinds success scenario"
@chad @debasisc

https://community.opengroup.org/osdu/platform/system/storage/-/issues/166
Need example of how to use the POST /query/records:batch (Fetch multiple records) (2023-04-20, Kamlesh Todai)

The Storage API documentation mentions
POST /query/records:batch (Fetch multiple records). We would like a sample of how this feature is expected to be used.
We also need clarification on:
- Account ID: the active OSDU account (the OSDU account or the customer's account) which the user chooses to use with the Search API.
- frame-of-reference: indicates whether normalization applies; should be either 'none' or 'units=SI;crs=wgs84;elevation=msl;azimuth=true north;dates=utc;'.
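For reference, a minimal sketch of how such a request might be issued, assuming the documented `POST /query/records:batch` path and the `frame-of-reference` header described above (host, token, and record IDs are placeholders; verify the exact path against the deployed Swagger):

```python
# Sketch only: fetch multiple records in one call via the batch endpoint.
import requests

STORAGE_HOST = "https://<baseUrl>/api/storage/v2"  # placeholder

resp = requests.post(
    f"{STORAGE_HOST}/query/records:batch",
    headers={
        "Authorization": "Bearer <token>",          # placeholder
        "data-partition-id": "osdu",
        "Content-Type": "application/json",
        # either 'none' or the normalization string quoted above
        "frame-of-reference": "units=SI;crs=wgs84;elevation=msl;azimuth=true north;dates=utc;",
    },
    json={"records": [
        "osdu:work-product-component--WellLog:record-1",  # illustrative record IDs
        "osdu:work-product-component--WellLog:record-2",
    ]},
    timeout=60,
)
print(resp.status_code)
print(resp.json())  # expected to list the fetched records plus any invalid/not-found IDs
```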
@chad @debasisc

Milestone: M17 - Release 0.20

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/17
Fetch-and-Ingest - logic for handling company name (Organisation Master data) (2023-07-28, Debasis Chatterjee)

A Provider sends the Master data SeismicAcquisitionSurvey.
One of the fields references Organisation with record ID = BP (meaning Owner/Operator = BP).
The Operator receiving the data from the Provider may already have this company recorded in Organisation,
but possibly under a synthetic ID, with name = BP stored as a field of the record.
For example, from https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/master-data/Organisation.1.1.0.json
```
"OrganisationID": "Example External Organisation Identifier",
"OrganisationName": "Example OrganisationName",
```
The question is: do we entertain this kind of matching/checking during the **ingest** step of **fetch and ingest**?
If yes, this should be considered a priority. What do you think?
Proposed solution (see the sketch below):
fetch-and-ingest may first search for the value "BP" directly in the ID, and then search for "BP" in OrganisationName within Organisation.
If it finds a match, it should alter the incoming JSON payload and substitute a suitable ID,
so that the incoming data can be ingested successfully.
If it finds multiple matches, the first record may be used.
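A rough sketch of that matching logic (not the actual fetch-and-ingest code; the `search_organisations` helper, its query syntax, and the `ProjectOperatorID` field name are assumptions for illustration):

```python
from typing import Callable, List, Optional

def resolve_organisation_id(incoming_value: str,
                            search_organisations: Callable[[str], List[str]]) -> Optional[str]:
    # 1. Try the incoming value directly as an Organisation record ID,
    #    e.g. "...master-data--Organisation:BP".
    hits = search_organisations(f'id:"{incoming_value}"')
    if not hits:
        # 2. Fall back to matching on the human-readable OrganisationName.
        hits = search_organisations(f'data.OrganisationName:"{incoming_value}"')
    return hits[0] if hits else None  # multiple matches: use the first record, per the proposal

def patch_payload(payload: dict, resolved_id: str) -> dict:
    # Replace the incoming Organisation reference with the Operator's own ID so ingestion succeeds.
    payload["data"]["ProjectOperatorID"] = f"{resolved_id}:"  # field name is illustrative
    return payload
```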
cc @AshishSaxenaAccenture, @jeyakumar-jk, @ekay

Milestone: M20 - Release 0.23; Assignees: Nisha Thakran, Jeyakumar Devarajulu

https://community.opengroup.org/osdu/ui/data-loading/wellbore-ddms-data-loader/-/issues/58
wbdutil - use associated LAS file that is already available in cloud (OSDU DP) for the case when we use an existing WellLog work-product-component (2023-03-06, Debasis Chatterjee)
C:\TEMP>wbdutil ingest data --welllog_id osdu:work-product-component--WellLog:3762da4efdbc44c093cca48c597fb3dc **-p "C:\TEMP\7556_l0102_1984_comp.las"** -t %OSDUTOKEN% -c %CONFIGPATH%
For the above use case, the WellLog WPC will already have an associated LAS file.
To be consistent, it would be useful to retrieve that file and use it as the source LAS file, removing the need for the "-p" file argument above (a sketch of how this could work follows).
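A rough sketch of how wbdutil could resolve the file, assuming the WellLog WPC's `Datasets[]` entry points at the LAS dataset and that a Dataset-service style `retrievalInstructions` call returns a signed URL (endpoint names and response shape are assumptions, not wbdutil's actual code):

```python
import requests

def fetch_associated_las(base_url: str, token: str, partition: str, welllog_id: str) -> bytes:
    headers = {"Authorization": f"Bearer {token}", "data-partition-id": partition}

    # 1. Read the WellLog work-product-component record.
    wpc = requests.get(f"{base_url}/api/storage/v2/records/{welllog_id}",
                       headers=headers, timeout=30).json()
    dataset_id = wpc["data"]["Datasets"][0]  # assumes the first dataset is the LAS file

    # 2. Ask the Dataset service how to retrieve it (endpoint and payload are assumed).
    instructions = requests.post(f"{base_url}/api/dataset/v1/retrievalInstructions",
                                 headers=headers,
                                 json={"datasetRegistryIds": [dataset_id]},
                                 timeout=30).json()
    signed_url = instructions["datasets"][0]["retrievalProperties"]["signedUrl"]  # shape is illustrative

    # 3. Download the LAS content and hand it to the existing ingest path instead of a local "-p" file.
    return requests.get(signed_url, timeout=60).content
```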
cc @chad

https://community.opengroup.org/osdu/platform/system/search-service/-/issues/124
OSDU Search Endpoint Response (2023-03-13, Rex Von Brixon Apa-ap)

If the ingested property is of type "string" and no record is ingested, search returns a response of **None** on that property.
However if the ingested property is non-string and no record is ingested, search does not return anything for that property.
We are expecting the property to reflect with response of **None** even for non-strings.
Documentation of the test: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M15/Test_Plan_Results_M15/Manifest_Ingestion/M15_AWS_Manifest_Ingestion_custom-schema_Rex.docx

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/448
Pre-shipping: AWS CloudWatch does not show correct Container Mapping and no relevant logs from any API runs (2023-04-06, Naufal Mohamed Noori)

I am currently testing the non-destructive operational procedure for AWS. The test involves monitoring logs obtained from the AWS console (CloudWatch) for API runs from Postman (i.e. Search, Ingestion, or Storage API).
I encountered 2 peculiar issues:
a) I don't see any relevant logs in CloudWatch --> Log Groups --> /aws/containerinsights/r3-m16-eks-main-cluster/application,
either from os-search (Search API calls made from Postman) or from the Storage API. In the previous milestone release, I was able to see all related logs in CloudWatch when I ran the Search API from Postman.
![image](/uploads/7ebf9d0969056dee6819257a1f73e7c8/image.png)
b) The Container Insights map does not show r3m16 resources, only r3m12. This is odd because the log group shows that the r3m16 resources were built, but the container map does not show anything related to the current release.
![image](/uploads/799f012d870a1126f1e5e0f187b68cf2/image.png)

Milestone: M16 - Release 0.19

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/132
Project Vulnerability Scanning: osdu/platform/data-flow/data-loading/osdu-cli (2023-04-05, desman bolden)

**Why did I receive this?**
In an effort to increase security on the OSDU platform, we must ensure all projects containing source code are scanned on a regular basis. You are receiving this notification because you have been identified as an owner of a GitLab project that isn't being scanned for vulnerabilities.
**What do I need to do?**
Please include gitlab-ultimate.yml (https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/scanners/gitlab-ultimate.yml) in your project so it can be scanned for vulnerabilities.
**Project(s) in Scope:**
osdu/platform/data-flow/data-loading/osdu-cli

Milestone: M17 - Release 0.20; Assignee: Chad Leong

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/79
Reindexing timing out (2023-06-21, Okoun-Ola Fabien Houeto)

In addition to the issues in #66, it seems that the reindexer is using the user token, and therefore reindexing will time out when the user token expires. The reindexer should use a mechanism that avoids timing out on the token.
For example, we are getting this error
"_The user is not authorized to perform this action, errors=null, debuggingInfo=account id: null | user email: admin@testing.com_"Chad LeongChad Leonghttps://community.opengroup.org/osdu/platform/system/search-service/-/issues/123Search service does not ignore unmapped fields (records without spatial attri...2023-03-13T11:02:45ZAn NgoSearch service does not ignore unmapped fields (records without spatial attributes are returned regardless)The following request returns all records in that kinds I can access, but none of them actually has SpatialLocation attribute.
```
curl --location '<baseUrl>/search/v2/query' \
--header 'data-partition-id: partitionID' \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"kind": "osdu:test:Hello:1.0.0",
"query": "*",
"spatialFilter": {
"field": "data.SpatialLocation.Wgs84Coordinates",
"byIntersection": {
"polygons": [
{
"points": [
{
"longitude": -180,
"latitude": 90
},
{
"longitude": 180,
"latitude": 90
},
{
"longitude": 180,
"latitude": -90
},
{
"longitude": -180,
"latitude": -90
},
{
"longitude": -180,
"latitude": 90
}
]
}
]
}
}
}'
```
However, the following request returns 0 records, which is expected.
```
curl --location '<baseUrl>/search/v2/query' \
--header 'data-partition-id: partitionID' \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"kind": "osdu:test:Hello:1.0.0",
"query": "_exists_:data.SpatialLocation"
}'
```
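A minimal sketch of what the fix noted below could look like, assuming the Search service translates the spatialFilter into an Elasticsearch geo_shape query: setting `ignore_unmapped` makes Elasticsearch skip kinds where the spatial field has no mapping instead of matching everything (index name and endpoint are placeholders, not the service's actual code):

```python
import requests

es_query = {
    "query": {
        "bool": {
            "must": [{"query_string": {"query": "*"}}],
            "filter": [{
                "geo_shape": {
                    "data.SpatialLocation.Wgs84Coordinates": {
                        "shape": {
                            "type": "polygon",
                            "coordinates": [[[-180, 90], [180, 90], [180, -90],
                                             [-180, -90], [-180, 90]]],
                        },
                        "relation": "intersects",
                    },
                    "ignore_unmapped": True,  # proposed fix: skip kinds without this mapping
                }
            }],
        }
    }
}

# Placeholder index name; the real index is derived from the kind by the Search service.
resp = requests.post("http://localhost:9200/osdu-test-hello-1.0.0/_search", json=es_query, timeout=30)
print(resp.json()["hits"]["total"])
```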
**Fix**: ignore unmapped fields in Elasticsearch.

Milestone: M17 - Release 0.20; Assignee: Neelesh Thakur

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/eds-dms/-/issues/12
EDS DMS - Getting bulk data from Katalyst Wrapper and choosing node of delivery (2023-03-13, Priyanka Bhongade)

Katalyst is developing this service to support requests for bulk data (e.g. DLIS or SEG-Y files) from an Operator's OSDU instance.
The critical thing for Katalyst is to get the ID (in this example, it is 10338269).
With that information, we are able to "place an order" for the requested file.
The requester will receive an Order ID (in this example, 621295), which can be used to track the actual delivery of the requested data.
However, there are two more pieces of information that would be really useful when the OSDU instance sends a "request for bulk data" to Katalyst:
1. Email ID of requester.
2. Choice of delivery node. There is often an arrangement between Katalyst and its client to deliver data via SFTP or directly to a cloud location, so a list (e.g. 10 for delivery node 1, 20 for delivery node 2, …) can be agreed ahead of time. The requester simply adds the choice (delivery node=10) to the request payload (see the sketch below).
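For illustration, the request payload extended with these two additions might look as follows (the field names `requesterEmail` and `deliveryNode` are hypothetical, not part of the current API; the existing request/response example follows below):

```python
# Hypothetical extension of the getRetrievalInstructions request payload.
extended_request = {
    "datasetRegistryids": ["katalyst:dataset--File.Generic:10338269"],
    "requesterEmail": "requester@operator.example",  # 1. email ID of the requester
    "deliveryNode": 10,                              # 2. pre-agreed delivery node code
}
```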
POST {{osdu_endpoint_url}}/osdu-eds/v1/dataset/getRetrievalInstructions
Request Body
```
{
"datasetRegistryids":["katalyst:dataset--File.Generic:10338269"]
}
```
Response Body
```
{
"providerKey": "katalyst",
"results": [
{
"datasetRegistryId": "katalyst:dataset--File.Generic:10338269",
"retrievalProperties": {
"outputFile": "196698-24_CR-GQK-121-depth.segy",
"fileSize": 2.94,
"orderId": "621295",
"orderItemId": "7551725",
"kdmItemId": "10338269",
"orderStatus": "NEW",
"fileSizeUOM": "MB",
"outputNodeId": "53086",
"priority": "Normal",
"outputNodeDescription": "Normal retrievals (Calgary Test1)",
"fileType": "SEGY"
}
}
]
}
```

Assignees: Thulasi Dass Subramanian, Srinivasan Narayanan

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/65
Well Log record type - does not link to existing Reference data for Curve Types (2023-03-13, Debasis Chatterjee)

I noticed that the Parser shows a synthetic/random number for Log curve IDs.
It does not attempt to match with existing entries in LogCurve.
"LogCurveTypeID": "namespace:reference-data--LogCurveType:BakerHughesInteq:A08A:",
What are your thoughts? Should this be considered a gap?
I can create an issue for tracking.
Copying others who tested this feature.
Bonus requirement: populate LogCurveFamilyID and LogCurveMainFamilyID, e.g.:
```
"LogCurveBusinessValueID": "namespace:reference-data--LogCurveBusinessValue:High:",
"LogCurveMainFamilyID": "namespace:reference-data--LogCurveMainFamily:Acoustic:",
"LogCurveFamilyID": "namespace:reference-data--LogCurveFamily:Acoustic%20Amplitude:"
```
Excerpt from WellLog work-product component record created by the Parser.
```
{
"IsProcessed": true,
"LogCurveMainFamilyID": null,
"DateStamp": "2023-03-12T13:14:48.039219+0000",
"LogCurveFamilyID": null,
"CurveID": "92c731a9-ae27-49d8-a246-27ddff7a1ad1",
"TopDepth": 499.0,
"CurveVersion": null,
"InterpreterName": null,
"CurveQuality": null,
"NullValue": null,
"Interpolate": true,
"DepthUnit": "odesprod:reference-data--UnitOfMeasure:m:",
"DepthCoding": null,
"Mnemonic": "ROP",
"BaseDepth": 509.01,
"LogCurveTypeID": null,
"LogCurveBusinessValueID": null,
"CurveUnit": "odesprod:reference-data--UnitOfMeasure:m:"
},
```

https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/issues/17
Improvements for execution context (2024-03-26, Sacha Brants)

Today, the execution context requires:
```json
{
"Payload": {
"AppKey": "",
"data-partition-id": "partition"
},
"id_token": "",
"persistent_id": "filename.vds",
"vds_url": "sd://partition/sub-project",
"work_product_id": "",
"file_record_id": ""
}
```
This could be simplified to only require work_product_id and file_record_id as all the information needed is present in those records.
Suggested new execution context:
```json
{
"data-partition-id": "partition",
"work_product_id": "",
"file_record_id": ""
}
```
The persistent_id would be generated similarly to what the SEG-Y to ZGY DAG does: the output file path is derived from the input by inserting a GUID after the file name and replacing the .sgy extension with .vds.
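A sketch of that naming rule (illustrative only; the actual DAG may differ in where it places the GUID):

```python
import uuid

def derive_persistent_id(input_url: str) -> str:
    # Insert a GUID after the file name and replace the .sgy extension with .vds.
    base, _, name = input_url.rpartition("/")
    stem = name[:-4] if name.lower().endswith(".sgy") else name
    new_name = f"{stem}-{uuid.uuid4()}.vds"
    return f"{base}/{new_name}" if base else new_name

# derive_persistent_id("sd://partition/sub-project/filename.sgy")
# -> "sd://partition/sub-project/filename-<guid>.vds"
```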
Note: this removes the need for id_token.

Assignee: Yan Sushchynski (EPAM)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/resqml-parser/-/issues/1
(SpatialPoint/SpatialArea) What to do with big datasets? (2023-03-15, Valentin Gauthier)

For now, the entire points list must be loaded into a single list to compute the bounding box and the central point.
It could fail if the data is too heavy; we should be able to handle that, for example by streaming the points and keeping running min/max values instead of materializing the whole list (see the sketch below).
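A minimal sketch (not the parser's actual code) of computing the bounding box and central point without holding every point in memory at once:

```python
from typing import Iterable, Tuple

Point = Tuple[float, float]  # (x, y)

def bounding_box_and_center(points: Iterable[Point]):
    min_x = min_y = float("inf")
    max_x = max_y = float("-inf")
    count = 0
    for x, y in points:  # points can be a generator streaming from the file
        min_x, max_x = min(min_x, x), max(max_x, x)
        min_y, max_y = min(min_y, y), max(max_y, y)
        count += 1
    if count == 0:
        raise ValueError("no points provided")
    center = ((min_x + max_x) / 2.0, (min_y + max_y) / 2.0)
    return (min_x, min_y, max_x, max_y), center
```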
Assignee: Valentin Gauthier

https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/151
Misleading message in Xcom summary when legal tag is missing (2023-03-16, Debasis Chatterjee)

I was running a simple test case in AWS/M16/Preship.
I was getting this message.
Now, I get failure. \[{'id': 'osdu:reference-data--FacilityEventType:DC13MAR', 'kind': 'osdu:wks:reference-data--FacilityEventType:1.0.0', 'reason': '400 Client Error: Bad Request for url: [http://os-storage.osdu-services:8080/api/storage/v2/records'}](http://os-storage.osdu-services:8080/api/storage/v2/records'%7D)\]
It turns out (thanks to AWS Support, Nazeem Akbar Ali) that this happened because the legal tag was not defined; he found the reason by checking the relevant log file.
See details here -
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/470#note_207244

https://community.opengroup.org/osdu/platform/system/storage/-/issues/168
Storage should allow empty data block upon record creation/update (2023-03-22, An Ngo)

The Storage PUT API should allow an empty data block upon record creation/update if that is compliant with the schema being defined.
Currently, the data block is required:
data: {}
This is a breaking change since it changes the behavior of the API.
The Indexer service needs to be checked to ensure an empty data block is handled correctly.

https://community.opengroup.org/osdu/data/open-test-data/-/issues/89
Stratigraphy and WellboreMarkerSet - questions and concerns (2023-10-25, Debasis Chatterjee)

Refer to this excellent documentation (worked example): https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/Examples/WorkedExamples/Reservoir%20Data/Stratigraphy/README.md
For wellbore 15/3-7,
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
Looking at populated WellboreMarkerSet record in https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/work-product-component/WellboreMarkerSet.1.2.1.json
The populated load manifest of the sample data (TNO marker data) does not utilize an adequate number of properties from the Markers array: https://community.opengroup.org/osdu/platform/data-flow/data-loading/open-test-data/-/blob/master/rc--3.0.0/4-instances/TNO/work-products/markers_1_1_0/load_top_1.1.0_1001_csv.json
```
"Markers": [
{
"MarkerName": "QUATER. UNDIFF.",
"MarkerMeasuredDepth": 0.0
},
{
"MarkerName": "Breda Formation",
"MarkerMeasuredDepth": 282.0
},
{
"MarkerName": "Veldhoven Clay Member",
"MarkerMeasuredDepth": 501.0
},
```
It would be nice to get suitable sample data and JSON files/load manifests (for related entities) that actually match this excellent documentation. Some hints are in the worked example, such as the "Gudrun" Stratigraphic Column: https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/Examples/WorkedExamples/Reservoir%20Data/Stratigraphy/README.md#stratigraphic-column It is necessary to convert this information into a complete (loadable) package so as to get a proper reference.
My concern: Markers.MarkerName is "free text", open to human error. When a Data Loader populates data from many wells in this NPD field, he/she may use "Top of Draupne" for one well and "Top - Draupne" for another. Use case: to obtain a contour map of "Top of Draupne", it becomes necessary to get MD (or TVD-SS), X, Y from all wellbores.
Questions:
1. FeatureTypeID and FeatureName. FeatureType can be "Top" or "Base"; those values are clear. FeatureName - why is this left as "free text" rather than a link to an existing record in some other parent entity?
https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/reference-data/FeatureType.1.0.0.md Description = Used to describe the type of features. Common values being Top, Base, OWC, Fault etc.
Is the property name unambiguous? Is this more of a "Contact type"? In any case, what would be a typical value of FeatureName in the NPD example when we have to build the Markers array for NPD wellbore 15/3-7?
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
1. In the Markers array there are properties such as MarkerTypeID and Missing; being an array, it lets us populate information for several markers within one specific wellbore.
There is also provision for a new property/block,
AvailableMarkerProperties, such as MissingThickness. It is not obvious how this will be used for the multiple elements present in the Markers array.
1. A WellboreMarkerSet is linked to one StratigraphicColumn. The link is from the overall record, not from individual array elements.
In any case, what would be a typical value of StratigraphicColumn in the NPD example when we have to build the Markers array for NPD wellbore 15/3-7? Leave that as "Gudrun" for the Column overall?
> Top of Viking (group, rank=1) = 4049.0 m Top of Draupne (formation, rank=2) is 4049.0 m. Top of Heather (formation, rank=2) is 4049.0 m.
cc @gehrmann and @keith_wall for information

https://community.opengroup.org/osdu/platform/system/storage/-/issues/169
ADR: API to purge a batch of storage records (2023-05-02, Mandar Kulkarni)

New API in the Storage service to purge a batch of records.
## Status
- [X] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The OSDU Storage service provides two ways to delete a record. One way is to logically delete the record, in which case a record with the same id can be revived later because its version history is maintained. The other way is to permanently delete the record (called purging), in which case the record's version history is deleted too. This operation cannot be undone, meaning purged records cannot be revived.
In both types of deletion, the record content can no longer be accessed using the Storage or Search service.
The storage service provides separate APIs for logical deletion (`POST /records/{id}:delete`) and purging of records (`DELETE /records/{id}`).
The storage service provides API for logical deletion of batch of records (`POST /records/delete`), but such an API is not available for purging of records.
The proposal is to provide an API on the Storage service to support purging a batch of records, with a maximum batch size of 500.
Only the records whose IDs are passed in the request body will be deleted, not any linked records or files if they exist. Cleaning up all linked records, such as child records, records in the relationship block, and the actual data (files ingested via the workflow service), would not be in the scope of this API; it would be the user's responsibility.
The new bulk API will work on active as well as non-active (soft-deleted) records, similar to the existing purge API.
Purging of records can be performed by the owner of the records, and the owner should be part of the users.datalake.admins group.
The API response would be similar to the response of the logical deletion API, that is, `POST /records/{id}:delete`.
In case of partial success, the response code would be 207 and the not-deleted-record-IDs would be listed in the response.
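For illustration only, a hypothetical sketch of calling the proposed batch-purge API; the actual path and payload are defined in the attached storage_openapi_batchpurge.yaml, so the endpoint and field names below are assumptions:

```python
import requests

STORAGE_HOST = "https://<baseUrl>/api/storage/v2"  # placeholder

def purge_records(record_ids, token, partition="osdu"):
    resp = requests.post(
        f"{STORAGE_HOST}/records/purge",      # hypothetical endpoint for the new bulk purge
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        },
        json={"recordIds": record_ids},       # hypothetical body; max 500 IDs per the ADR
        timeout=60,
    )
    # Per the ADR, 207 indicates partial success and the not-deleted record IDs are listed in the body.
    return resp.status_code, resp.json() if resp.content else {}
```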
## Tradeoff Analysis
In the absence of an API to purge a batch of records, users would have to call the DELETE API once for every record and it would increase the number of calls to the storage service.
## Decision
Provide an admin-only API to purge a batch of records, with maximum batch size of 500 records.
The Open API specs for storage service with new API is here:
[storage_openapi_batchpurge.yaml](/uploads/1da3f68253419edd693a87d706049565/storage_openapi_batchpurge.yaml)
## Consequences
- New API on Storage service would be available.
- Documentation of the Storage service should be modified with details for the new API.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/18
EDS - Adding logger to give details about Osdu_ingest run id and sample fetched data record (2023-03-20, Priyanka Bhongade)

- Add a logger to display the Osdu_ingest run id in the format below:
Osdu_ingest runId=xxxx
- Correction to the logger when displaying the sample fetched data record: currently the logger prints " Record 1 :"; to make the message clearer, change the display message in the logs to "Displaying only one Sample Record" (see the sketch below).
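A minimal sketch of the two requested log messages (assumed logging setup, not the actual EDS DAG code):

```python
import json
import logging

logger = logging.getLogger("eds_fetch_and_ingest")

def log_ingest_summary(run_id: str, sample_record: dict) -> None:
    logger.info("Osdu_ingest runId=%s", run_id)        # requested run-id format
    logger.info("Displaying only one Sample Record")   # clearer than " Record 1 :"
    logger.info(json.dumps(sample_record, indent=2))
```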
Milestone: M17 - Release 0.20; Assignees: Nisha Thakran, Priyanka Bhongade

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/96
Read Only Root File System for Seismic Pods Crashes (2023-04-12, Abhay Joshi)

When making a change to have the os-seismic-store pods use a read-only root filesystem, the pods seem to crash without any kubectl logs whatsoever. We suspect it is because the application is writing to the pods' filesystem, but we are unable to see where things are being written. We would like to fix this issue as it is a security concern.