# Storage issues
https://community.opengroup.org/osdu/platform/system/storage/-/issues

## [SAST] Vue_DOM_XSS in file index.html
https://community.opengroup.org/osdu/platform/system/storage/-/issues/189 · 2023-11-15T10:54:25Z · Yauhen Shaliou [EPAM/GCP]

**Description**

The method `m-1">` embeds untrusted data in generated output with `href`, at line 36 of `storage\provider\storage-azure\src\main\resources\static\index.html`. This untrusted data is embedded into the output without proper sanitization or encoding, enabling an attacker to inject malicious code into the generated web page.
**Location:**
<table>
<tr>
<th> </th>
<th>Source</th>
<th>Destination</th>
</tr>
<tr>
<th>File</th>
<td>storage/provider/storage-azure/src/main/resources/static/index.html</td>
<td>storage/provider/storage-azure/src/main/resources/static/index.html</td>
</tr>
<tr>
<th>Line number</th>
<td>92</td>
<td>36</td>
</tr>
<tr>
<th>Object</th>
<td>pathname</td>
<td>href</td>
</tr>
<tr>
<th>Code line</th>
<td><code>return location.protocol + '//' + location.host + location.pathname</code></td>
<td><code>&lt;a :href="signInUrl" class="btn btn-primary" v-if="!token" class="col-2"&gt;Login&lt;/a&gt;</code></td>
</tr>
</table>

*Milestone: M21 - Release 0.24*
## GET: /records/{recordID}/{version} - ERROR 500
https://community.opengroup.org/osdu/platform/system/storage/-/issues/181 · 2024-01-01T08:47:32Z · Siarhei Khaletski (EPAM)

**Context**
GET: /records/{recordID}/{version} fails with error 500 if an invalid version is provided (see the attachment)
We noticed an odd behavior of the service:
List of existing versions of the following record: `opendes:work-product-component--SamplesAnalysis:e9f02f48f43149a8b69606ff7597f391`
![image](/uploads/3d75fd80a57f5558c7d0eb00a4d795eb/image.png)
Requesting the nonexistent version `1` returns status 500:
![image](/uploads/d3dc228f70263bd24ff7d09975baa63c/image.png)
Meanwhile, requesting the nonexistent version `1234` returns status 404:
![image](/uploads/e82da89c3673b643aaa26845f0eb0c81/image.png)
**Azure GLab Logs**
![image](/uploads/8d54b1addcbc1835b4ea3c90135072b6/image.png)
**Expected Behavior**
404 status code
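Presumably the fix is to translate both an unparsable and an unknown version into the service's 404 application error instead of letting the lookup surface a raw 500. A minimal, hypothetical sketch of that mapping — the store interface and exception type are illustrative, not the actual Storage code:

```java
import java.util.Optional;

// Illustrative only: map a missing or unparsable record version to a 404-style
// exception instead of letting the failure surface as a 500.
public final class RecordVersionLookup {

    /** Hypothetical metadata store; the real service reads versions from its DB. */
    interface VersionStore {
        Optional<String> findRecordVersion(String recordId, long version);
    }

    /** Stand-in for the service's application exception carrying an HTTP status. */
    static final class AppException extends RuntimeException {
        final int status;
        AppException(int status, String message) { super(message); this.status = status; }
    }

    static String getRecordVersion(VersionStore store, String recordId, String rawVersion) {
        long version;
        try {
            version = Long.parseLong(rawVersion);
        } catch (NumberFormatException e) {
            throw new AppException(404, "Version not found: " + rawVersion);
        }
        return store.findRecordVersion(recordId, version)
                .orElseThrow(() -> new AppException(404,
                        "The record '" + recordId + "' has no version " + version));
    }
}
```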
*Milestone: M22 - Release 0.25 · Siarhei Khaletski (EPAM) · Chad Leong*

## Storage x-collaboration header bug
https://community.opengroup.org/osdu/platform/system/storage/-/issues/176 · 2023-09-26T14:21:44Z · Shane Hutchins

Found this issue in /api/storage/v2/query/records and /api/storage/v2/query/records:batch.
Received a response with 5xx status code: 500
Run this curl command to reproduce this failure:
```
curl -X GET -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: osdu' -H 'x-collaboration: ^À' 'https://osdu.r3m18.preshiptesting.osdu.aws/api/storage/v2/query/records?kind='
curl -X POST -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: osdu' -H 'x-collaboration: ^À' -d '[]' https://osdu.r3m18.preshiptesting.osdu.aws/api/storage/v2/records/delete
```

PUT /api/storage/v2/records:

```
curl -X PUT -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: osdu' -H 'x-collaboration: €' -d '[]' https://osdu.r3m18.preshiptesting.osdu.aws/api/storage/v2/records
```

Azure PUT /api/storage/v2/records:

```
curl -X PUT -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: opendes' -H 'x-collaboration: €' -d '[]' https://osdu-ship.msft-osdu-test.org/api/storage/v2/records
```
Confirmed this bug in AWS and Azure.
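One plausible hardening is to validate the header value up front and answer 400 for undecodable or non-printable bytes instead of letting downstream parsing fail into a 500. A minimal sketch; the printable-ASCII rule and the `id=<guid>,application=<name>` directive format are assumptions, not confirmed service behavior:

```java
import java.util.regex.Pattern;

// Illustrative only: reject x-collaboration values containing control or
// non-ASCII bytes (such as the ^À and € payloads above) with a 400, not a 500.
public final class CollaborationHeaderValidator {

    // The header is assumed to carry printable-ASCII directives,
    // e.g. "id=<guid>,application=<name>".
    private static final Pattern PRINTABLE_ASCII = Pattern.compile("[\\x20-\\x7E]+");

    static void validate(String headerValue) {
        if (headerValue == null || headerValue.isEmpty()) {
            return; // the header is optional
        }
        if (!PRINTABLE_ASCII.matcher(headerValue).matches()) {
            // Map to 400 Bad Request in the exception handler / controller advice.
            throw new IllegalArgumentException("Invalid x-collaboration header");
        }
    }
}
```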
## The request to get records of particular kind using the limit is not working
https://community.opengroup.org/osdu/platform/system/storage/-/issues/163 · 2023-06-20T05:07:07Z · Kamlesh Todai

The Storage API CI/CD v1.11 collection (from the Platform Validation project) was working on all platforms and passing with a 100% pass rate:
https://community.opengroup.org/osdu/platform/testing/-/blob/master/Postman%20Collection/12_CICD_Setup_StorageAPI/Storage%20API%20CI-CD%20v1.11.postman_collection.json
At present it still passes with a 100% pass rate in the AWS R3 M16 Platform Validation (forum testing) environment, but it no longer passes with a 100% pass rate in the other Platform Validation CSP environments, nor in any CSP environment in pre-ship.
In the referenced collection, Request #8 is failing: `08 - Storage - Get all records for a kind with limit of 10 records`.
=====================================================================
Example of a passing request in Platform Validation R3 M16 (forum testing):

```
curl --location 'https://r3m16.forumtesting.osdu.aws/api/storage/v2/query/records?limit=10&kind=osdu%3Awks%3AautoTest_955280%3A1.1.0' \
--header 'data-partition-id: osdu' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer eyJraWQiOi...4XnucQETfnB3biA' \
--header 'Cookie: session=eyJfZnJlc2giOmZhbHNlLCJfcGVybWFuZW50Ijp0cnVlfQ.Y_VNrw.SMJbZoZwlkMYCD7E9ge4ICPnqJY'
```

Request template: `https://{{STORAGE_HOST}}/query/records?limit=10&kind={{authority}}:{{schemaSource}}:{{entityType}}:{{schemaVerMajor}}.{{schemaVerMinor}}.{{schemaVerPatch}}`

Response code: 200 OK

```json
{
    "results": [
        "osdu:999611481173:999301114394"
    ]
}
```
===================================================================
Example of a failing request:

```
curl --location 'https://r3m16-ue1.preshiptesting.osdu.aws/api/storage/v2/query/records?limit=10&kind=osdu%3Awks%3AautoTest_20923%3A1.1.0' \
--header 'data-partition-id: osdu' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer eyJraWQiOi...tW7kPscDabFJ3sEPeNA'
```

Response code: 415 Unsupported Media Type

The response body is blank, and the same failure appears in every CSP environment where the collection fails.
============================================================================
@chad @debasisc

*Milestone: M16 - Release 0.19*

## ADR - Clean OpenAPI 3.0 Documentation using 'Code First Approach'
https://community.opengroup.org/osdu/platform/system/storage/-/issues/160 · 2023-07-10T08:02:52Z · Om Prakash Gupta

### Status
- [X] Proposed
- [ ] Trialing
- [ ] Under review
- [x] Approved
- [ ] Retired
### Context & Scope
While adopting **OpenAPI 3.0** standards using `springdoc`, we end up adding a lot of documentation to the native controller of each API:
- the API contract is not clearly visible
- readability of the API is reduced
- business logic and documentation live in the same place
### Tradeoff Analysis
- Maintain clean API documentation
- Segregate the API from the controller
- Adopt future changes to the documentation or contract more easily
### Proposed Solution
- Introduce API/Controller layer segregation
- The API interface holds the contract, definitions, and OpenAPI documentation
- The controller implements the API contract with clean code
### References
1. ['Code First' API Documentation](https://reflectoring.io/spring-boot-springdoc/)
### Sample Refactor in Storage Patch API
- [Patch API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/az/td-codefirst/storage-core/src/main/java/org/opengroup/osdu/storage/api/PatchApi.java)
- [Patch Controller](https://community.opengroup.org/osdu/platform/system/storage/-/blob/az/td-codefirst/storage-core/src/main/java/org/opengroup/osdu/storage/api/PatchController.java)
### Sample Example Code
Let's consider a TODO API with standard CRUD operations.
First we write the interface and define the necessary annotations:
```java
import java.util.List;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;

import io.swagger.v3.oas.annotations.tags.Tag;

@RequestMapping("/api/todos")
@Tag(name = "Todo API", description = "euismod in pellentesque ...")
interface TodoApi {

    @GetMapping
    @ResponseStatus(code = HttpStatus.OK)
    List<Todo> findAll();

    @GetMapping("/{id}")
    @ResponseStatus(code = HttpStatus.OK)
    Todo findById(@PathVariable String id);

    @PostMapping
    @ResponseStatus(code = HttpStatus.CREATED)
    Todo save(@RequestBody Todo todo);

    @PutMapping("/{id}")
    @ResponseStatus(code = HttpStatus.OK)
    Todo update(@PathVariable String id, @RequestBody Todo todo);

    @DeleteMapping("/{id}")
    @ResponseStatus(code = HttpStatus.NO_CONTENT)
    void delete(@PathVariable String id);
}
```
Then we derive the existing controllers from the interface for the controller implementation:
```java
@RestController
class TodoController implements TodoApi {
    // method implementations
}
```
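For illustration, a fuller controller might look like the sketch below; the `Todo` type and the in-memory store are hypothetical stand-ins, not part of the ADR. Spring resolves the mapping and parameter annotations declared on the interface, so the implementation carries no documentation noise:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

@RestController
class TodoController implements TodoApi {

    // Hypothetical in-memory store, for illustration only.
    private final Map<String, Todo> todos = new ConcurrentHashMap<>();

    @Override
    public List<Todo> findAll() {
        return new ArrayList<>(todos.values());
    }

    @Override
    public Todo findById(String id) {
        Todo todo = todos.get(id);
        if (todo == null) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND);
        }
        return todo;
    }

    @Override
    public Todo save(Todo todo) {
        todos.put(todo.getId(), todo); // assumes Todo exposes getId()
        return todo;
    }

    @Override
    public Todo update(String id, Todo todo) {
        todos.put(id, todo);
        return todo;
    }

    @Override
    public void delete(String id) {
        todos.remove(id);
    }
}
```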
### Consequences
- Requires changes across services and code refactoring.
- No breaking functional changes.

*Milestone: M17 - Release 0.20 · Chad Leong · Om Prakash Gupta*

## [STORAGE] PUT. Reports 201 success with a 50 records payload but actually fails
https://community.opengroup.org/osdu/platform/system/storage/-/issues/139 · 2023-02-13T15:19:27Z · Ernesto Gutierrez

**Description**
While issuing the following request [50_records_payload.json](/uploads/3d2ddceee544b9741af0a0b54fff9981/50_records_payload.json), the storage service returns a 201 with records and versions [STORAGE_201_put_records.json](/uploads/48a60f0dfa71bb13852b7ca8cc12fd8b/STORAGE_201_put_records.json).
But when trying to fetch the records, they are not created/updated.
Looking at the logs [Storage_LOG_50_records.txt](/uploads/fdf868480d289199eb916f9d5d575b8f/Storage_LOG_50_records.txt), it seems the service is reaching this line: https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/blob/1bddde80718274e34a36aee673092bf20526f5aa/src/main/java/org/opengroup/osdu/azure/cosmosdb/CosmosStoreBulkOperations.java#L124
**Expected behavior**
Two behaviors are expected:
1. A payload with 50 records should not fail.
2. If the request fails for any reason, the error should be propagated back and an error status returned instead of 201 (a sketch follows).
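A minimal sketch of the second expectation, assuming a hypothetical per-record result from the bulk upsert (the actual Azure path goes through CosmosStoreBulkOperations, linked above): collect the failed records and raise instead of returning 201.

```java
import java.util.List;

// Illustrative only: surface bulk-persistence failures instead of
// acknowledging the request with a 201 and record versions.
public final class BulkUpsertCheck {

    /** Hypothetical per-record outcome of a bulk upsert. */
    record UpsertResult(String recordId, boolean succeeded, String error) {}

    static void failIfAnyRecordFailed(List<UpsertResult> results) {
        List<String> failed = results.stream()
                .filter(r -> !r.succeeded())
                .map(UpsertResult::recordId)
                .toList();
        if (!failed.isEmpty()) {
            // Propagate so the API layer returns an error status, not 201.
            throw new IllegalStateException("Bulk upsert failed for records: " + failed);
        }
    }
}
```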
*Milestone: M13 - Release 0.16 · Krishna Nikhil Vedurumudi*

## Storage fails to delete large number of records upon legal tag expiration
https://community.opengroup.org/osdu/platform/system/storage/-/issues/117 · 2024-03-21T15:19:58Z · An Ngo

If a large number of records are associated with a legalTag that expires when the cron job runs, we see availability issues and inconsistent results in terms of record searchability.
**Observations:**
**LegalTag cron job update issue:**
**Scenario**: I have a large number of records (in the 6 digits) that are associated with a legalTag (i.e. the record metadata has a particular legalTag (let's call it lt1) in the legal.legaltags section). The legalTag lt1 is set to expire soon
**Event**: lt1 expires
**Action 1** : Cron job `updateLegalTagStatus` is triggered on a periodic basis, which grabs the legalTags that have changed their state (valid to invalid and invalid to valid) and publishes this information onto SB topic 'legaltags' and EG topic 'legaltagschangedtopic'. The legalTag also changes its state in the CosmosDb
'legaltagschangedtopic' has an event subscription to SB topic 'legaltagschangedtopiceg', which has a subscription 'eg_sb_legaltagssubscription'
**Action 2**: Storage service pulls messages from 'eg_sb_legaltagssubscription' for LegalTag update events and updates records associated with lt1. Storage updates the recordMetadata with the active/inactive record status and publishes the change onto SB and EG for indexer-queue to consume.
**Expected outcome:** All records associated with lt1 are now inactive. They are unsearchable from Storage and Search APIs.
**Actual outcome:** Only some records associated with lt1 become inactive and unsearchable from the Storage and Search APIs; I can still find the remaining records.
**Issue**: Not all records are getting pulled from Storage service at **Action 2** to be processed. Thus, many records simply don't change their state, although the legalTag is invalid now.
**Observed behavior/possible improvements:**
1. The direction of the legalTag change (active to inactive, or inactive to active) is not considered by Storage when fetching records to update. Storage tries to fetch ALL records for that legalTag with the query `SELECT * FROM c WHERE ARRAY_CONTAINS(c.metadata.legal.legaltags, lt1)`. With a large number of records this is a long operation, and we observed throttling on Cosmos DB during this process (see the sketch after this list).
2. No way to retry. Because the Legal service updates the legalTag status in Cosmos DB, running the `updateLegalTagStatus` job again will not pick up this legal tag. To retry, we must manually change the status of the legalTag and run the cron job again; upon manual retries we face the issue above, where Storage tries to process ALL records again.
3. What happens when the Storage job is interrupted, possibly by a pod restart (high CPU utilization), a network error, or a Cosmos DB error? Retrying the whole job doesn't help much.
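A minimal sketch of one possible improvement for these points: page through the records with a persisted continuation token so the job can be resumed after throttling, a pod restart, or a network error instead of re-fetching every record. The paged store and checkpoint interfaces are hypothetical, not the actual Cosmos DB client API.

```java
import java.util.List;

// Illustrative only: resumable, paged processing of records for an expired legalTag.
public final class LegalTagUpdateJob {

    /** Hypothetical page of record IDs plus a resume token (null when exhausted). */
    record Page(List<String> recordIds, String continuationToken) {}

    interface RecordStore {
        // e.g. SELECT * FROM c WHERE ARRAY_CONTAINS(c.metadata.legal.legaltags, @tag)
        Page queryByLegalTag(String legalTag, String continuationToken, int pageSize);
        void markInactive(List<String> recordIds);
    }

    interface Checkpoints {
        String load(String legalTag);               // null on a fresh run
        void save(String legalTag, String token);   // survives pod restarts
    }

    static void deactivateRecords(RecordStore store, Checkpoints checkpoints, String legalTag) {
        String token = checkpoints.load(legalTag);
        do {
            Page page = store.queryByLegalTag(legalTag, token, 100);
            store.markInactive(page.recordIds());
            token = page.continuationToken();
            checkpoints.save(legalTag, token);
        } while (token != null);
    }
}
```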
*Chad Leong*

## Issue in Publisher Facade
https://community.opengroup.org/osdu/platform/system/storage/-/issues/110 · 2022-11-21T09:51:31Z · Abhishek Kumar (SLB)

The branch is not in a running state due to a bug in the Azure core library.
Please refer to this issue https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/17
**Branch:** UsageOfPublishFacade

*Nikhil Singh [MicroSoft]*

## Able to insert a record with invalid ACL in preship environment for Azure and IBM platforms
https://community.opengroup.org/osdu/platform/system/storage/-/issues/86 · 2022-11-21T10:10:35Z · Kamlesh Todai

**For Azure and IBM**
While testing the Dynamic policy, we ran into an issue of being able to create a record with an invalid ACL using the Storage API.
It appears that the Storage API is not validating the ACL: in the ACL for owners, data.**notdefault**.owner@... is not a valid group.
PUT https://{{STORAGE_endpoint}}/records

```json
[{
    "kind": "{{data-partition-id}}:{{schemaSource}}:master-data--Well:1.0.0",
    "legal": {
        "legaltags": [
            "{{tagName}}"
        ],
        "otherRelevantDataCountries": [
            "US"
        ]
    },
    "acl": {
        "owners": [
            "data.**notdefault**.owner@{{data-partition-id}}{{domain}}"
        ],
        "viewers": [
            "data.default.viewer@{{data-partition-id}}{{domain}}"
        ]
    },
    "id": "{{data-partition-id}}:master-data--Well:dynamic-policy-test-data-1-{{randomId}}",
    "data": {
        "description": "Dynamic policy test record 1"
    }
}]
```
**For AWS:**
It does not create the record and returns a Forbidden message.
**For GCP:**
It does not create the record and returns the message "Policy service is unavailable".
[DynamicPolicyTestingStatus.xlsx](/uploads/706b480e63e73f0d34dfa1873f7abbb2/DynamicPolicyTestingStatus.xlsx) [DynamicTestingM7.docx](/uploads/ccff71cfa867c66cbec9642901c883ef/DynamicTestingM7.docx)
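For Azure and IBM, the expected behavior is presumably the same rejection AWS and GCP produce. A minimal sketch of such a validation, assuming a hypothetical entitlements lookup (in the real platform the check would go through the Entitlements or Policy service):

```java
import java.util.Set;

// Illustrative only: reject a record whose ACL names groups that don't exist
// in the partition, instead of silently accepting it.
public final class AclValidator {

    /** Hypothetical view of the partition's entitlement groups. */
    interface EntitlementsClient {
        Set<String> listGroupEmails(String dataPartitionId);
    }

    static void validate(EntitlementsClient entitlements, String partitionId,
                         Set<String> owners, Set<String> viewers) {
        Set<String> known = entitlements.listGroupEmails(partitionId);
        for (String owner : owners) {
            if (!known.contains(owner)) {
                throw new IllegalArgumentException("Unknown ACL owner group: " + owner);
            }
        }
        for (String viewer : viewers) {
            if (!known.contains(viewer)) {
                throw new IllegalArgumentException("Unknown ACL viewer group: " + viewer);
            }
        }
    }
}
```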
## storage max record id length to 1024 character
https://community.opengroup.org/osdu/platform/system/storage/-/issues/74 · 2022-11-21T16:30:10Z · Neelesh Thakur

Azure Storage service supports record IDs up to 1024 characters. Other providers don't have this limitation.
Either we make this consistent across all cloud providers or document this Azure-specific restriction.

## [Intermittent] Record Metadata is available in Cosmos but the Blob store returns a 404
https://community.opengroup.org/osdu/platform/system/storage/-/issues/66 · 2022-09-27T11:10:14Z · Krishna Nikhil Vedurumudi

If record metadata exists but the actual record doesn't exist in the BlobStore, the FetchBatchRecords API returns a 500 with the following response:
```json
{
"code": 500,
"reason": "Unable to process parallel blob download",
"message": "AppException(error=AppError(code=404, reason=Specified blob was not found, message=Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\", errors=null, debuggingInfo=null, originalException=com.azure.storage.blob.models.BlobStorageException: Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\"), originalException=com.azure.storage.blob.models.BlobStorageException: Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\")"
}
```
A couple of issues to investigate / fix:
- The PersistentServiceImpl ensures that if the blob write fails, the Cosmos DB update does not happen. How did we run into this inconsistency?
- If one blob does not exist, the entire FetchBatchRecords call should not fail with a 500 (see the sketch below).
- Error messages for 5xx responses should always be standard, so a 500 in this case should read "Internal Server Error".
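A minimal sketch of the second and third points, assuming a hypothetical per-record blob reader: tolerate a missing blob, report it per record, and keep the 5xx message standard if everything genuinely fails.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative only: a batch fetch where one missing blob does not fail the
// whole FetchBatchRecords call with a non-standard 500.
public final class BatchFetch {

    /** Hypothetical blob reader; the real service downloads blobs in parallel. */
    interface BlobReader {
        Optional<String> read(String recordId); // empty when the blob is missing
    }

    record BatchResult(Map<String, String> records, List<String> notFound) {}

    static BatchResult fetch(BlobReader blobs, List<String> recordIds) {
        Map<String, String> found = new HashMap<>();
        List<String> missing = new ArrayList<>();
        for (String id : recordIds) {
            blobs.read(id).ifPresentOrElse(
                    body -> found.put(id, body),
                    () -> missing.add(id)); // report per record, don't throw
        }
        return new BatchResult(found, missing);
    }
}
```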