# Storage issues
https://community.opengroup.org/osdu/platform/system/storage/-/issues

## Issue 192: RAFSDDMS unit conversion issue
https://community.opengroup.org/osdu/platform/system/storage/-/issues/192 · 2023-12-05 · Rustam Lotsmanenko (EPAM)

It was observed that a record from this collection fails unit conversion: https://community.opengroup.org/osdu/qa/-/tree/main/Dev/48_CICD_Setup_RAFSDDMSAPI?ref_type=heads
Requested with conversion headers:
```shell
curl --location 'https://community.gcp.gnrg-osdu.projects.epam.com/api/storage/v2/query/records:batch' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: osdu' \
--header 'accept: application/json' \
--header 'frame-of-reference: units=SI;crs=wgs84;elevation=msl;azimuth=true north;dates=utc;' \
--header 'Authorization: Bearer ' \
--data '{
"records": [
"osdu:work-product-component--RockSampleAnalysis:Test"
]
}'
```
This request causes an internal server error:
```plaintext
Caused by: java.lang.NullPointerException: Cannot invoke "com.google.gson.JsonArray.size()" because "elementArray" is null
at org.opengroup.osdu.core.common.util.JsonUtils.overrideNestedNumberPropertyOfJsonObject(JsonUtils.java:219)
at org.opengroup.osdu.core.common.util.JsonUtils.overrideNumberPropertyOfJsonObject(JsonUtils.java:146)
at org.opengroup.osdu.core.common.crs.UnitConversionImpl.convertRecordToSIUnits(UnitConversionImpl.java:166)
at org.opengroup.osdu.core.common.crs.UnitConversionImpl.convertUnitsToSI(UnitConversionImpl.java:56)
at org.opengroup.osdu.storage.conversion.DpsConversionService.doConversion(DpsConversionService.java:80)
at org.opengroup.osdu.storage.service.BatchServiceImpl.fetchMultipleRecords(BatchServiceImpl.java:228)
at org.opengroup.osdu.storage.api.QueryApi.fetchRecords(QueryApi.java:135)
```
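The stack trace suggests the nested lookup returns null when the property named by the record's meta block is absent from the record's `data`, and `.size()` is then invoked on that null. A minimal guard can be sketched with stdlib stand-ins (hypothetical helper names, not the actual `os-core-common` code):

```java
import java.util.List;
import java.util.Map;

public class UnitConversionGuard {
    // Hypothetical stand-in for the failing pattern in
    // JsonUtils.overrideNestedNumberPropertyOfJsonObject: the nested property
    // named by the record's meta block may be absent from the record's data,
    // so the lookup yields null and calling .size() on it throws the NPE.
    @SuppressWarnings("unchecked")
    static List<Number> nestedArray(Map<String, Object> data, String... path) {
        Object current = data;
        for (String key : path) {
            if (!(current instanceof Map)) {
                return null;
            }
            current = ((Map<String, Object>) current).get(key);
        }
        return (current instanceof List) ? (List<Number>) current : null;
    }

    // Guarded version: skip conversion when the property is missing.
    static int convertibleElements(Map<String, Object> data, String... path) {
        List<Number> elementArray = nestedArray(data, path);
        if (elementArray == null) {
            return 0; // nothing to convert -- no NullPointerException
        }
        return elementArray.size();
    }

    public static void main(String[] args) {
        Map<String, Object> data = Map.of("SampleDepths", List.of(100.0, 200.0));
        System.out.println(convertibleElements(data, "SampleDepths"));    // 2
        System.out.println(convertibleElements(data, "MissingProperty")); // 0
    }
}
```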
Further investigation is required to fix it.

## Issue 162: Record ACL should be case insensitive
https://community.opengroup.org/osdu/platform/system/storage/-/issues/162 · 2023-03-09 · An Ngo

Entitlements group creation always lowercases the group name, regardless of the input.
Storage, however, preserves the case of ACL group names, which makes ACL validation inconsistent.
**For example:**<br>
User creates a data group called: data.SomeGroup.viewers<br>
Upon this request, Entitlements creates a group called: data.somegroup.viewers
Upon creating a record, the user enters data.SomeGroup.viewers as the ACL.<br>
If the user tries to fetch the record, a 403 is returned since Entitlements only sees group data.somegroup.viewers.
**Fix:**<br>
**For existing records (addressing the ghosted records):** Storage fetch-record validation should compare the record's ACL groups case-insensitively against the list of groups returned from Entitlements.<br>
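The fetch-time comparison above could be sketched as follows (hypothetical helper and group names, not the actual Storage validation code):

```java
import java.util.List;
import java.util.Locale;

public class AclCaseCheck {
    // Sketch of the proposed fix: compare record ACL groups against the
    // caller's Entitlements groups case-insensitively, because Entitlements
    // lowercases every group name on creation.
    static boolean hasAccess(List<String> recordAcl, List<String> callerGroups) {
        List<String> normalized = callerGroups.stream()
                .map(g -> g.toLowerCase(Locale.ROOT))
                .toList();
        return recordAcl.stream()
                .map(g -> g.toLowerCase(Locale.ROOT))
                .anyMatch(normalized::contains);
    }

    public static void main(String[] args) {
        // The "ghosted record" case: mixed-case ACL stored on the record,
        // lowercased group returned by Entitlements.
        System.out.println(hasAccess(
                List.of("data.SomeGroup.viewers@osdu.example.com"),
                List.of("data.somegroup.viewers@osdu.example.com"))); // true
        System.out.println(hasAccess(
                List.of("data.SomeGroup.viewers@osdu.example.com"),
                List.of("data.other.viewers@osdu.example.com")));     // false
    }
}
```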
**Long-term solution:** The fix should be in record creation: the Storage PUT API should lowercase the ACL upon record creation, or alternatively fail the PUT request if an ACL group has mixed case. Note that there is no validation that an ACL group exists upon record creation.

## Issue 219: Records created with special characters are not discoverable
https://community.opengroup.org/osdu/platform/system/storage/-/issues/219 · 2024-03-15 · Abhishek Kumar (SLB)

The Storage service allows a user to create a record with an encoded special character in its id.
However, trying to GET the created record returns a 404.
**Actual ID:** winter-aker-bp-super-sprint-5:reference-data--UnitOfMeasure:m/h
<br>
**Encoded ID**: winter-aker-bp-super-sprint-5:reference-data--UnitOfMeasure:m%2fh
The Storage POST endpoint allows users to create storage records with encoded IDs:
![image](/uploads/6a923c3582dcb993eaf8d84e2ff32166/image.png)
The problem arises when the user tries to retrieve the record using the GET endpoint:
```json
{
  "code": 400,
  "reason": "Validation error.",
  "message": "{\"errors\":[\"Not a valid record id. Found: winter-aker-bp-super-sprint-5:reference-data--UnitOfMeasure:m%2fh\"]}"
}
```
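The mismatch between the two id forms can be illustrated with stdlib URL decoding; the record-id pattern below is an assumption for the sketch, not the exact regex used by the Storage service:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

public class EncodedIdCheck {
    // Illustrative record-id pattern (partition:type:name built from word
    // characters, dots and dashes). An assumption for this sketch only.
    static final Pattern RECORD_ID =
            Pattern.compile("[\\w.\\-]+:[\\w.\\-]+:[\\w.\\-]+");

    public static void main(String[] args) {
        String encoded = "winter-aker-bp-super-sprint-5:reference-data--UnitOfMeasure:m%2fh";
        String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8);
        System.out.println(decoded); // ...UnitOfMeasure:m/h
        // Neither form passes validation: '%' and '/' are both outside the
        // allowed character set, so the GET path rejects the id either way,
        // even though POST accepted the encoded form as opaque text.
        System.out.println(RECORD_ID.matcher(encoded).matches()); // false
        System.out.println(RECORD_ID.matcher(decoded).matches()); // false
    }
}
```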
The same record does appear in the search results:
![image](/uploads/566be3486c02de5956b2c96e10709d2e/image.png)

Assignee: Chad Leong

## Issue 94: Remove deprecated storage schema API feature flag and relevant integration tests
https://community.opengroup.org/osdu/platform/system/storage/-/issues/94 · 2022-11-21 · Larissa Pereira

The following endpoints are currently driven by the feature flag "schema_endpoints_disabled" (MR: https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/251):
* POST endpoint (**/api/storage/v2/schema**) in the Storage service
* GET endpoint (**/api/storage/v2/schema**) in the Storage service
* DELETE endpoint (**/api/storage/v2/schema**) in the Storage service
* GET endpoint (**/api/storage/v2/query/kinds**) in the Storage service
When the feature flag is removed, please also update the integration tests to fix or remove any tests that use these deprecated endpoints. These tests are currently ignored when the feature flag is true (i.e. schema_endpoints_disabled = true), as part of MR https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/257.
**Update** -- the GET /query/kinds endpoint ended up being taken out from under the feature flag and retained, as it is needed to answer which kinds are actually present in Storage vs. merely defined in the Schema Service.

## Issue 189: [SAST] Vue_DOM_XSS in file index.html
https://community.opengroup.org/osdu/platform/system/storage/-/issues/189 · 2023-11-15 · Yauhen Shaliou [EPAM/GCP]

**Description**
The method m-1"\> embeds untrusted data in the generated output via href, at line 36 of storage\\provider\\storage-azure\\src\\main\\resources\\static\\index.html. This untrusted data is embedded into the output without proper sanitization or encoding, enabling an attacker to inject malicious code into the generated web page.
**Location:**
<table>
<tr>
<th> </th>
<th>Source</th>
<th>Destination</th>
</tr>
<tr>
<th>File</th>
<td>storage/provider/storage-azure/src/main/resources/static/index.html</td>
<td>storage/provider/storage-azure/src/main/resources/static/index.html</td>
</tr>
<tr>
<th>Line number</th>
<td>92</td>
<td>36</td>
</tr>
<tr>
<th>Object</th>
<td>pathname</td>
<td>href</td>
</tr>
<tr>
<th>Code line</th>
<td>return location.protocol + '//' + location.host + location.pathname</td>
<td>
\<a :href="signInUrl" class="btn btn-primary" v-if="!token" class="col-2"\>Login\</a\>
</td>
</tr>
</table>

Milestone: M21 - Release 0.24

## Issue 67: Skipdupes flag fails to recognize identical records when data block contains integer-valued fields
https://community.opengroup.org/osdu/platform/system/storage/-/issues/67 · 2022-11-21 · Gary Murphy

**Summary** The "skipdupes" flag on PUT for a record does not work when a property value is an **integer**.
**Details**<br>
When a record is created, the "skipdupes" parameter can be set to "true" so that a duplicate record is not created and the skip is indicated in the response details. However, if the value of a "data" attribute ("dimension" in the example below) is an integer, skipdupes never recognizes that nothing has changed: the PUT request always creates a new record. Float and text values appear to work fine.
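A plausible, unconfirmed mechanism is default JSON number binding: libraries such as Gson bind every JSON number to Double when deserializing into a generic map, so an integer value does not survive a serialization round trip, and a textual or hash comparison of stored vs. incoming data then never matches. A stdlib sketch of that effect:

```java
public class SkipdupesNumberSketch {
    // Hypothetical illustration (not a confirmed root cause): Gson's default
    // Map binding reads NUMBER tokens via Double.parseDouble, so the integer
    // 1 is re-serialized as 1.0, while floats and strings survive unchanged.
    static String roundTripNumber(String jsonNumberToken) {
        double asDouble = Double.parseDouble(jsonNumberToken);
        return String.valueOf(asDouble);
    }

    public static void main(String[] args) {
        System.out.println(roundTripNumber("1"));   // 1.0 -> "dimension": 1.0, comparison fails
        System.out.println(roundTripNumber("1.5")); // 1.5 -> comparison holds
    }
}
```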
```json
"data": {
  "log": {
    "dataType": "number333",
    "dimension": 1,
    "family": "Bulk Density Correction",
    "familyType": "Density",
    "format": "float64",
    "longName": "DENSITY CORRECTION (DECR)",
    "mnemonic": "DRHO",
    "name": "DRHO",
    "unitKey": "G/C3",
    "bulkURI": "urn:uuid:d789e548-4dbf-4c76-b87a-77f7b29e94fe"
  }
}
```

## Issue 222: SLB Feature Request - Need capability to write policy based on data record properties
https://community.opengroup.org/osdu/platform/system/storage/-/issues/222 · 2024-03-15 · Dadong Zhou

From Fabrice HAÜY [SLB] on Slack:
> Hi Team, I'm looking for some updated information / roadmap. From our latest conversations at the OSDU F2F in London, I understood that the policy engine currently only knows about id, kind, legal tag, and ACL, making it impossible to create policy entitlements based on the value of a record property. I'm looking for information surrounding this limitation and when it will be lifted. Thank you in advance.
Copied from Policy repo: https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/95
cc @chad @hutchins @KellyZhou

## Issue 127: Soft-deleted record was skipped when re-ingested with same data
https://community.opengroup.org/osdu/platform/system/storage/-/issues/127 · 2022-08-23 · An Ngo

**Steps to reproduce the current behavior:**
1. Ingest a record
2. Soft-delete the record
3. Fetch the record to confirm it is now "inactive", "not found"
**Case 1:** Works as expected
4. Ingest the same record using the same id and DIFFERENT data, skipdupes=true
> Record was NOT skipped. Deleted record became active again. A new version of the record is created.
> Example response:
```json
{
"recordCount": 1,
"recordIds": [
"osdu:document:ee7e8869217541a8b31f4e2ea18f7e3a"
],
"skippedRecordIds": [],
"recordIdVersions": [
"osdu:document:ee7e8869217541a8b31f4e2ea18f7e3a:1654731042152281"
]
}
```
5. Soft-delete the record
6. Fetch the record to confirm it is now "inactive", "not found"
**Case 2:** Skips the record even though it was already deleted
7. Ingest the same record using the same id, SAME data, skipdupes=true
> Record was skipped. So the record remains "inactive", "not found". The PUT call did nothing to the record.
> Example response:
```json
{
"recordCount": 1,
"skippedRecordIds": [
"slb-osdu-dev-sis-internal-hq:document:ee7e8869217541a8b31f4e2ea18f7e3a"
]
}
```
**Expected behavior:**
If skipdupes is true:
- if the record doesn't exist at all, then create a new record.
- **if the record was soft-deleted, then make the record active again if the data is the same (last deleted version becomes the latest version), or create a new version if data is different.**
- if the record exists,
- if the data is the same, then skip it.
- if data is different, then create a new version
If skipdupes is false:
- if the record doesn't exist at all, then create a new record.
- **if the record was soft-deleted, then create a new version of the record**
- if the record exists, then a new version of the record is created, regardless of whether the data is the same or different.

## Issue 159: Storage adds null meta to record ingested without one
https://community.opengroup.org/osdu/platform/system/storage/-/issues/159 · 2023-03-22 · An Ngo

1. A record was ingested without specifying a "meta" block. The PUT API call was successful.
2. Fetch the ingested record. Notice that Storage added "meta": null to the record.
**Checking with Search.**
Search indexed successfully. Status code was 200.
Search result does not return the meta.
The current behavior is disputed: the meta block should not have been added at all, or if it is added, it should be empty rather than null.
So instead of adding `"meta": null`, Storage should add `"meta": []`.
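The requested normalization could be sketched as follows (hypothetical helper, not the actual Storage code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class MetaNormalizeSketch {
    // Hypothetical helper for the requested behavior: a record ingested
    // without a "meta" block (or with "meta": null) is stored and returned
    // with "meta": [] instead of "meta": null.
    static Object normalizedMeta(Map<String, Object> record) {
        Object meta = record.get("meta");
        return (meta == null) ? new ArrayList<>() : meta;
    }

    public static void main(String[] args) {
        Map<String, Object> record = new HashMap<>();
        record.put("kind", "osdu:wks:work-product-component--WellLog:1.1.0");
        System.out.println(normalizedMeta(record)); // [] rather than null
    }
}
```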
Upon creating or updating a record, providing an empty meta block should also be allowed.

## Issue 63: Storage API /query/kinds behaviour is different on GCP compared to other CSPs
https://community.opengroup.org/osdu/platform/system/storage/-/issues/63 · 2023-03-13 · Florent Fourcade

According to the Storage API documentation:
> a given **kind** can have zero or exactly one schema associated with it.
While testing Storage API record creation on Azure, I created a record with a kind not tied to any schema.
When requesting /query/kinds after record creation, the kind is absent from the returned results.
Doing the same test on a GCP instance, my kind (not tied to a schema) does appear when requesting /query/kinds.
Looking at the code, I saw that the GCP implementation retrieves all kinds from a RecordMetadata database, so all kinds are returned, even those not tied to a schema.
On other CSPs, kinds appear to be returned from a schema database, so only kinds tied to a schema are returned.

## Issue 100: Storage API /query/kinds is broken and breaks reindex functionality
https://community.opengroup.org/osdu/platform/system/storage/-/issues/100 · 2023-03-13 · Gary Murphy

**_Takeaway_**<br/>
The /query/kinds API has been broken in OSDU Storage for quite a while, and fixing it was not a priority as Schema Service endpoints were thought to be the successor solution. This is not the case, and /query/kinds needs to work as designed.<br/>
**_Summary_**
The context here is the issue to change the Indexer to use Schema Service schemas instead of the original Storage Schemas (https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/7). This has been done successfully; however, the original plan to retire the Storage Schema endpoints + /query/kinds entirely exposed a hole in functionality that needs to be addressed. Essentially, it was thought that fixing /query/kinds would not be needed with the Schema Service, but the use cases where Storage is the source of truth for *in use* kinds were not caught.<br/><br/>
**Key Use Case** -- reindexing all kinds<br/><br/>
Reindexing all kinds in an Elasticsearch cluster (Reindex All) is an infrequent but vital operation. Cases where it is required include:
disaster recovery after Storage Records are restored, application of changes to Elasticsearch analyzers, and correction of indices after changes to base OSDU schemas or client schemas.<br/><br/>
Disaster Recovery Scenario:
1. All records in Storage (including underlying CosmosDB or FireStore or whatever) are brought back to RPO state.
2. The Search index is not in sync yet with the restored Storage records, so Reindex All is executed.
3. Reindex All should *not* be using the Schema get all schemas endpoint as that will retrieve every schema that has been defined in the installation which includes unused schemas and obsolete schemas and those may number in the thousands. Instead, Reindex All needs to use /query/kinds from Storage which will retrieve only those kinds actually in use in Storage.
4. As Reindex All executes, the list of kinds is retrieved from Storage /query/kinds and iterated over, triggering a reindex of each individual kind known to Storage.

Milestone: M10 - Release 0.13

## Issue 179: Storage batch API returns 404 for unauthorized records
https://community.opengroup.org/osdu/platform/system/storage/-/issues/179 · 2024-03-07 · An Ngo

**Use-case:** The Reindex Kind API is called.
The logs showed 404s being returned, but fetching some of the impacted records individually returned 403s. Investigation showed the batch record fetch was returning 404s instead.
Issue identified from this workflow:
- Storage batch API responds unauthorized records (403) as not found (404)
### ADR: Storage batch API responds unauthorized records (403) as not found (404)
#### Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
#### Context & Scope
The current behavior of Storage batch API: if a record is not authorized, it is put in the _notFound_ field of the response body along with other not found records. The response body in this case looks like this:
```json
{
"records": [],
"notFound": [
"opendes:facet:unauthorizedrecord1",
"opendes:facet:unauthorizedrecord2",
//other not found records...
],
"conversionStatuses": []
}
```
#### Solution
To fix this behavior of the Storage batch API we can introduce a new field to the response body. The proposed solution is to add a new field (_unauthorized_) to the response body, so we can distinguish between unauthorized and actual not found records. Sample response body:
```json
{
"records": [],
"notFound": [
//not found records...
],
"unauthorized": [
"opendes:facet:unauthorizedrecord1",
"opendes:facet:unauthorizedrecord2"
],
"conversionStatuses": []
}
```
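The proposed split could be sketched as follows (illustrative types and field names, not the actual Storage response model):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BatchResponseSketch {
    enum Access { FOUND, NOT_FOUND, UNAUTHORIZED }

    // Sketch of the proposed contract: ids the caller may not read go into a
    // new "unauthorized" field instead of being folded into "notFound".
    static Map<String, List<String>> classify(Map<String, Access> lookup) {
        return Map.of(
                "notFound", ids(lookup, Access.NOT_FOUND),
                "unauthorized", ids(lookup, Access.UNAUTHORIZED));
    }

    private static List<String> ids(Map<String, Access> lookup, Access wanted) {
        return lookup.entrySet().stream()
                .filter(e -> e.getValue() == wanted)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Access> lookup = Map.of(
                "opendes:facet:unauthorizedrecord1", Access.UNAUTHORIZED,
                "opendes:facet:missing1", Access.NOT_FOUND);
        System.out.println(classify(lookup));
    }
}
```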
#### Consequences
This solution is a breaking change, as it alters the API contract. It requires a change in the core library, a change in Storage, and then a change in the Indexer service to handle the new batch API response.

Assignee: Chad Leong

## Issue 117: Storage fails to delete large number of records upon legal tag expiration
https://community.opengroup.org/osdu/platform/system/storage/-/issues/117 · 2024-03-21 · An Ngo

If a large number of records are associated with a legalTag that expires after the cron job runs, we see availability issues and inconsistent results in terms of record searchability.
**Observations:**
**LegalTag cron job update issue:**
**Scenario**: I have a large number of records (in the 6 digits) that are associated with a legalTag (i.e. the record metadata has a particular legalTag (let's call it lt1) in the legal.legaltags section). The legalTag lt1 is set to expire soon
**Event**: lt1 expires
**Action 1**: The cron job `updateLegalTagStatus` is triggered periodically. It grabs the legalTags that have changed state (valid to invalid or invalid to valid) and publishes this information to the SB topic 'legaltags' and the EG topic 'legaltagschangedtopic'. The legalTag also changes its state in CosmosDB.
'legaltagschangedtopic' has an event subscription to SB topic 'legaltagschangedtopiceg', which has a subscription 'eg_sb_legaltagssubscription'
**Action 2**: The Storage service pulls messages from 'eg_sb_legaltagssubscription' for legalTag update events and updates records associated with lt1. Storage updates the recordMetadata with active/inactive record status and publishes the change to SB and EG for indexer-queue to consume.
**Expected outcome:** All records associated with lt1 are now inactive. They are unsearchable from Storage and Search APIs.
**Actual outcome:** Some records associated with lt1 are now inactive. They are unsearchable from Storage and Search APIs. I can still search other records.
**Issue**: Not all records are getting pulled from Storage service at **Action 2** to be processed. Thus, many records simply don't change their state, although the legalTag is invalid now.
**Observed behavior/possible improvements:**
1. The direction of the legalTag change (active to inactive or inactive to active) is not considered by Storage when fetching records to update. Storage tries to fetch ALL records for that legalTag with the query
`SELECT * FROM c WHERE ARRAY_CONTAINS(c.metadata.legal.legaltags, lt1)`. With a large number of records this is a long operation, and we observed throttling on CosmosDB during it.
2. There is no way to retry. Because the Legal service updates the legalTag status in CosmosDB, running the `updateLegalTagStatus` job again will not pick up this legal tag. To retry, we must manually change the status of the legalTag and run the cron job again, at which point we hit the issue above where Storage tries to process ALL records again.
3. What happens when the Storage job is interrupted, possibly due to a pod restart (high CPU utilization), a network error, or a CosmosDB error? Retrying the whole job does not help much.

Assignee: Chad Leong

## Issue 123: Storage GET record returns 404 for records with optional version (record ID ending with colon)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/123 · 2023-06-06 · An Ngo

Storage GET /api/storage/v2/records/{id} returns a 404 error for records whose ID ends with a colon (i.e. the version component is empty).
For example, "osdu:master-data--Wellbore:nz-100000391126:"
An empty version component is allowed as part of [this change](https://community.opengroup.org/osdu/platform/system/storage/-/issues/26#summary-january-26-2021) in record ID validation.
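A sketch of the expected handling (illustrative names, not the actual Storage code):

```java
public class RecordIdVersionSketch {
    // An id ending with ':' has an empty version component, which is valid,
    // and should resolve to the latest version of the record rather than 404.
    static String normalizeId(String id) {
        return id.endsWith(":") ? id.substring(0, id.length() - 1) : id;
    }

    public static void main(String[] args) {
        System.out.println(normalizeId("osdu:master-data--Wellbore:nz-100000391126:"));
        // prints the id without the trailing colon; the service would then
        // look up and return the record's latest version
    }
}
```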
The expected behavior is to return the latest version of the record.

## Issue 74: Storage max record ID length of 1024 characters
https://community.opengroup.org/osdu/platform/system/storage/-/issues/74 · 2022-11-21 · Neelesh Thakur

The Azure Storage service supports record IDs up to 1024 characters. Other providers do not have this limitation.
Either this limit should be made consistent across all cloud providers, or the Azure-specific restriction should be documented.

## Issue 216: Storage PUT /records lost update
https://community.opengroup.org/osdu/platform/system/storage/-/issues/216 · 2024-02-28 · Mykyta Savchuk

The issue occurs in the Storage service when the same record (same id) is updated by multiple asynchronous requests at the same time. As a result, only one version is saved in the database and the others are lost.
For example, suppose we call the storage PUT API with three asynchronous requests for the same record. Even though the storage returns 201 with version for each of the requests, calling /records/{id}/{version} with the three created versions results in two 404s and only one 200. All three versions are saved in the blob storage, but "gcsVersionPaths" array of the record in the database has only one new version.
Looking at the code, it appears that this is a lost update problem. When updating a record, the storage fetches the record from the database, performs certain manipulations on it, and then saves it in the database. So when multiple threads are running at the same time, they simultaneously fetch the same record (with the same "gcsVersionPaths" array), add a new version to the array, and save the record in the database. And each thread overwrites the newly added version by the previous thread, resulting in only one version being saved by the last thread executed.
Possible solution: Implement optimistic locking for PUT API. To implement optimistic locking, we can add an additional field to the database record that is updated together with the record. So we fetch the record along with this field and when saving it, we check whether the value of the field has changed, if so, we abort the changes.
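The proposed optimistic locking can be sketched generically; here `AtomicReference#compareAndSet` stands in for the database's conditional write (for example against Cosmos DB's "_etag"), and the types are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockSketch {
    // Illustrative record metadata: an etag plus the version-path array that
    // the report says is being overwritten ("gcsVersionPaths").
    record Metadata(String etag, List<String> versionPaths) {}

    // Compare-and-swap sketch of the fix: re-read the record, append the new
    // version, and persist only if the etag observed at read time is still
    // current; otherwise retry with fresh state.
    static void appendVersion(AtomicReference<Metadata> db, String newPath) {
        while (true) {
            Metadata current = db.get();
            List<String> updated = new ArrayList<>(current.versionPaths());
            updated.add(newPath);
            Metadata next = new Metadata(current.etag() + "+", List.copyOf(updated));
            if (db.compareAndSet(current, next)) {
                return; // conditional write succeeded
            }
            // etag mismatch: a concurrent writer won the race -- retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<Metadata> db =
                new AtomicReference<>(new Metadata("v0", List.of()));
        Thread a = new Thread(() -> appendVersion(db, "blob/version-a"));
        Thread b = new Thread(() -> appendVersion(db, "blob/version-b"));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(db.get().versionPaths().size()); // 2 -- no lost update
    }
}
```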
I'm assuming all provider databases have this functionality built in. For example, in Azure CosmosDB every stored item has a system-defined "_etag" property, and optimistic locking can be enabled by passing parameters when saving the record.

## Issue 139: [STORAGE] PUT reports 201 success for a 50-record payload but actually fails
https://community.opengroup.org/osdu/platform/system/storage/-/issues/139 · 2023-02-13 · Ernesto Gutierrez

**Description**
While issuing the request [50_records_payload.json](/uploads/3d2ddceee544b9741af0a0b54fff9981/50_records_payload.json), the Storage service returns a 201 with records and versions [STORAGE_201_put_records.json](/uploads/48a60f0dfa71bb13852b7ca8cc12fd8b/STORAGE_201_put_records.json).
But when trying to fetch the records, they were not created/updated.
Looking at the logs [Storage_LOG_50_records.txt](/uploads/fdf868480d289199eb916f9d5d575b8f/Storage_LOG_50_records.txt), it seems the service is reaching this line https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/blob/1bddde80718274e34a36aee673092bf20526f5aa/src/main/java/org/opengroup/osdu/azure/cosmosdb/CosmosStoreBulkOperations.java#L124
**Expected behavior**
Two behaviors are expected
1. Payload with 50 records should not fail
2. If for any reason the request fails, the error should be propagated back and an error returned instead of a 201.

Milestone: M13 - Release 0.16 · Assignee: Krishna Nikhil Vedurumudi

## Issue 130: Storage PUT: setting a non-number value to a number attribute results in an empty 400 response (no error message)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/130 · 2022-08-23 · An Ngo

For example, the following payload was provided, containing:
` "value": Infinity`
```shell
curl --location --request PUT 'https://domain.com/api/storage/v2/records' \
--header 'accept: application/json' \
--header 'data-partition-id: osdu' \
--header 'Content-Type: application/json' \
--header 'Authorization: <token>' \
--data-raw '[
{
"acl": {
"owners": [
"data.default.owners@domain.com"
],
"viewers": [
"data.default.viewers@domain.com"
]
},
"data": {
"ExtensionProperties": {
"osdu": {
"curvesProperties": [
{
"curveID": "CTEM_GPITF",
"properties": [
{
"name": "MEASURE-POINT-OFFSET",
"value": Infinity
}
]
}
]
}
}
},
"kind": "osdu:wks:work-product-component--WellLog:1.1.0",
"legal": {
"legaltags": [
"osdu-default-legal"
],
"otherRelevantDataCountries": [
"US"
]
}
}
]'
```
Response: an empty 400 with no error message.
![image](/uploads/18749c6ea879c9c888a3c5c173288b23/image.png)

## Issue 113: Storage /records endpoint without Content-Type header throws 415 error
https://community.opengroup.org/osdu/platform/system/storage/-/issues/113 · 2023-03-01 · An Ngo

Calling the Storage /records endpoint without a Content-Type header returns a 415 error code.

## Issue 121: Storage Schema endpoints should be obsoleted
https://community.opengroup.org/osdu/platform/system/storage/-/issues/121 · 2022-08-24 · Gary Murphy

Remove code and config related to the Storage schema APIs from OSDU, as they are EOL.
The following APIs are to be removed:
- GET /Schema
- DELETE /Schema
- POST /schema