Storage issues
https://community.opengroup.org/osdu/platform/system/storage/-/issues

https://community.opengroup.org/osdu/platform/system/storage/-/issues/131
No update notification sent (Qiang Fu, 2022-08-23)

Steps to reproduce:
1) Set up a notification endpoint.
2) Subscribe to the "recordchange" topic.
3) Create a record using the api/storage/v2/records endpoint. Verify that a "create" notification is received.
4) Modify the payload used in step 3 and call api/storage/v2/records again. Another "create" notification is received.
5) Modify the payload used in step 4 and call api/storage/v2/records/?skipdupes=true. A "create" notification is received.
In steps 4 and 5 we expect an "update" notification.

https://community.opengroup.org/osdu/platform/system/storage/-/issues/129
Storage returns inconsistent and wrong responses if nested attribute filters which do not exist are specified (An Ngo, 2023-07-07)

Using /api/storage/v2/records/{id}
optional attribute filter:
![image](/uploads/05e101feb2102c16adba4575f31a4aa7/image.png)
Example, given this data:
```
"data": {
"relationships": {
"well": {
"id": "slb-osdu-tryme:well",
"name": "Card Creek 2"
},
"relatedItems": {
"ids": [
"Log1",
"Marker1"
],
"names": [
"Log Name1",
"Marker Name1"
]
}
    }
}
```
A few observations when filtering with a string attribute:
- data.something returns 200
- data.relationships.something returns 200
- data.relationships.well.something returns 200
- data.relationships.well.number returns 500 at first; after a few tries, it returns 200.
- data.relationships.something returns 200
- data.relationships.relatedItem.something returns 500 (no "s" on relatedItems)
- data.relationships.relatedItems.something returns 500 at first, then 200 after that.
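For reference, the observations above correspond to get-record calls with the optional filter passed as a query parameter (assuming the `attribute` parameter of the Storage v2 get-record API shown in the screenshot; host and id are placeholders):

```
GET /api/storage/v2/records/{id}?attribute=data.relationships.relatedItems.something
```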
The expected return code should be 400.

Milestone: M19 - Release 0.22

https://community.opengroup.org/osdu/platform/system/storage/-/issues/126
All versions of a record have the same modifyUser and modifyTime (An Ngo, 2023-05-30)

The concept is that one record should have one version of metadata.
However, the modifyUser and modifyTime attributes should be different for each version.
The current behavior, as implemented, does not match this concept.
With the current behavior, modifyTime and modifyUser are the same for every version of a record: they are overwritten across all versions on each modification of the record.
So a record with only one version looks like this:
|version1|
|:-------|
|createUser|
|createTime|
But when the record is modified and multiple versions are created, the metadata of the latest version is applied to all versions, including the first, so every version has values for the modifyUser and modifyTime attributes:
|version1|version2 |version3|
|:-------|:--------|:--------|
|createUser| createUser| createUser|
|createTime| createTime| createTime|
|modifyUser2 |modifyUser2|modifyUser2|
|modifyTime2 |modifyTime2|modifyTime2|
**Expected:**
Version 1 should only have createUser and createTime; modifyUser and modifyTime should not exist in the first version.
Versions 2+ should each have a different modifyUser and modifyTime:
|version1|version2 |version3|
|:-------|:--------|:--------|
|createUser| createUser| createUser|
|createTime| createTime| createTime|
| |modifyUser1|modifyUser2|
| |modifyTime1|modifyTime2|

https://community.opengroup.org/osdu/platform/system/storage/-/issues/125
Very high number of 429s on CosmosDb when there is a usage spike in Storage `query/records:batch` API (Alok Joshi, 2023-07-19)

In one of our client environments, we consistently see a very high number of 429 errors from CosmosDb. This causes latency spikes for Storage APIs.
From our investigation, this appears to be a performance/optimization issue in the query/records:batch API. We see a direct correlation between `query/records:batch` API spikes and CosmosDb 429 error spikes across multiple time windows. Please see the attached images for reference.
The first image shows a time window in which CosmosDb threw a lot of 429 errors. The second image shows the Storage API usage pattern: most calls are made to the `query/records:batch` API, which also affects latency numbers. The patterns in both images are very similar.
![ComosDb_usage](/uploads/898f423082ae4193bb7636b058905555/ComosDb_usage.PNG)![api_usage](/uploads/ad233ce903b1596a3dc1f6048548088f/api_usage.PNG)
We've tried increasing the RUs on CosmosDb during multiple incidents, but that doesn't help.
Further load tests showed that query/records:batch can be a root cause of the 429 errors.
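One standard client-side mitigation for CosmosDb 429s is honoring the server's retry-after hint (CosmosDb returns it in the `x-ms-retry-after-ms` response header) combined with exponential backoff. A generic sketch of the idea, not the OSDU code; `RateLimitError` is a hypothetical stand-in for the SDK's throttling exception:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for the SDK's 429 'request rate too large' error.
    retry_after is the server-suggested wait in seconds."""
    def __init__(self, retry_after=None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after

def call_with_backoff(fn, max_retries=5):
    """Call fn, retrying on 429s: honor the server's retry-after hint when
    present, otherwise back off exponentially with jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError as e:
            delay = e.retry_after if e.retry_after is not None else 2 ** attempt + random.random()
            time.sleep(delay)
    return fn()  # final attempt; let any remaining error propagate
```

The Cosmos SDKs ship built-in throttling retry options; the sketch only illustrates the principle of spreading out a spike instead of repeatedly raising RUs.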
As part of fixing this issue, it would be reasonable to implement some of the recommendations from https://docs.microsoft.com/en-us/azure/cosmos-db/sql/performance-tips-query-sdk?tabs=v3&pivots=programming-language-java

https://community.opengroup.org/osdu/platform/system/storage/-/issues/124
ADR: Supporting data block modification through Storage PATCH API call (Mandar Kulkarni, 2023-07-05)

Supporting data block modification through the Storage PATCH API call
## Status
- [X] Proposed
- [X] Trialing
- [X] Under review
- [X] Approved
- [ ] Retired
## Context & Scope
Only record tags, legal tags and ACLs can be updated through the PATCH API in the Storage service.
The PATCH API cannot be used to update the data block in record(s); to update the data block, the PUT API must be used.
## Tradeoff Analysis
Updating an individual attribute inside the data block currently requires two calls: a GET to fetch the record, then a PUT to update the record content with the new attribute value.
A PATCH API that can update attributes in the data block would support this operation in a single call to OSDU Storage.
## Decision
We can update the PATCH API to support modifications in data blocks. The API will continue to follow the [rfc6902 standard](https://www.rfc-editor.org/rfc/rfc6902.html).
Currently the PATCH API supports modifications of record tags, legal tags and ACLs only, via 3 operations: add, replace and remove.
The same operations would be supported for the data block:
- With the "add" operation, the values provided in the "value" field are appended to the property specified in the request.
- With the "replace" operation, the property specified in the request is fully replaced by the values provided in the "value" field.
- With the "remove" operation, the values provided in the "value" field are removed from the property specified in the request.
Users specify the complete path to the property they want to update in the "path" field, e.g. "/acl/viewers" indicates that the values of the acl viewers metadata would be updated.
Similarly, "/data/TechnicalAssuranceID" would indicate that the TechnicalAssuranceID attribute in the data block would be updated.
"/data/CurrentOperatorID" would indicate that the CurrentOperatorID attribute in the data block would be updated.
"/data/ExtensionProperties/Attribute1" would indicate that Attribute1 inside ExtensionProperties in the data block would be updated.
"/data/SpatialLocation/SpatialGeometryTypeID" would indicate that SpatialGeometryTypeID inside SpatialLocation in the data block would be updated.
The version of the record would be incremented on a data block update through the PATCH API, to keep behavior consistent with the PUT API.
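To illustrate, a PATCH request body in the rfc6902 style described above might look like this (the record id and values are hypothetical; the query/ops shape mirrors the existing tag-patch payloads):

```json
{
  "query": {
    "ids": [
      "tenant:work-product-component--Example:1234"
    ]
  },
  "ops": [
    {
      "op": "replace",
      "path": "/data/CurrentOperatorID",
      "value": "tenant:master-data--Organisation:NewOperator:"
    },
    {
      "op": "add",
      "path": "/data/ExtensionProperties/Attribute1",
      "value": "some-value"
    }
  ]
}
```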
## Consequences
- PATCH API behavior will be updated.
- Storage service documentation needs to be updated.

Milestone: M17 - Release 0.20

https://community.opengroup.org/osdu/platform/system/storage/-/issues/122
Storage Service Records Fetch Error (Samiullah Ghousudeen, 2022-08-24)

**Not able to retrieve records from Storage Service**
If a record id contains URL-encoded characters (%2F), the Storage service doesn't return the expected result, but the equivalent Search query does.
This issue is the same across all CSPs - Azure, GCP, IBM & AWS.
For example, the query below returns the expected result through the Search service:
```
{
  "kind": "*:wks:reference-data--UnitOfMeasure:1.0.0",
  "limit": 10,
  "aggregateBy": "kind",
  "query": "id:\"osdu:reference-data--UnitOfMeasure:V%2FB\" OR id:\"opendes:reference-data--UnitOfMeasure:v%2Fv\" OR id:\"odesprod:reference-data--UnitOfMeasure:H%2Fm\" OR id:\"opendes:reference-data--UnitOfMeasure:US%2FF\""
}
```
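Note that the ids contain an encoded slash (`%2F`) because the unit symbols themselves contain `/`. A standard-library sketch of the encoding involved (the 400 page below resembles a servlet container's default error page; containers such as Tomcat reject encoded slashes in the URL path by default):

```python
from urllib.parse import quote, unquote

# A unit-of-measure record id contains ':' and '/', both of which must be
# percent-encoded when the id is used as a URL path segment.
raw_id = "osdu:reference-data--UnitOfMeasure:V/B"
encoded = quote(raw_id, safe="")  # encode ':' and '/' as well

print(encoded)           # osdu%3Areference-data--UnitOfMeasure%3AV%2FB
print(unquote(encoded))  # osdu:reference-data--UnitOfMeasure:V/B
```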
However, the Storage service returns HTTP Status 400 – Bad Request:
```
<!doctype html>
<html lang="en">
  <head>
    <title>HTTP Status 400 – Bad Request</title>
  </head>
  <body>
    <h1>HTTP Status 400 – Bad Request</h1>
  </body>
</html>
```

Assignee: Marc Burnie [AWS]

https://community.opengroup.org/osdu/platform/system/storage/-/issues/118
[Azure] Deletion records with invalid legaltag without any logging (Yauheni Lesnikau, 2022-04-12)

There is a message handler which processes `LegalTagChanged` events; if a legal tag becomes invalid, the corresponding records should be marked as `Inactive` (soft deleted). The issue is that there is no logging which explicitly says that this deletion is performed.

Assignee: Yauheni Lesnikau

https://community.opengroup.org/osdu/platform/system/storage/-/issues/116
In Azure environment the end point to query the data with limit is not working (Kamlesh Todai, 2022-08-26)

In the Azure environment, the endpoint to query the data with a limit is not working.
e.g. https://osdu-ship.msft-osdu-test.org/api/storage/v2/query/kinds?limit=10
Response: 400 Bad Request
```
{
  "code": 400,
  "reason": "Limit not supported",
  "message": "The limit is invalid"
}
```
@debasisc @sehuboy @kumar_vaibav @ChrisZhang

Milestone: M10 Patch - Release 0.13 patch
Assignee: Krishna Nikhil Vedurumudi

https://community.opengroup.org/osdu/platform/system/storage/-/issues/114
No audit log for succeed PATCH updates (Yauheni Lesnikau, 2022-03-24)

When a PATCH update is performed, there is audit logging only for failed record ids. We need to add similar logging for the successful ones as well.

Milestone: M11 - Release 0.14
Assignee: Yauheni Lesnikau

https://community.opengroup.org/osdu/platform/system/storage/-/issues/109
storage get record with version api returns 500 (Neelesh Thakur, 2022-11-21)

Here is a curl where the record exists and I get 200:
```
curl --location --request GET 'https://evt.api.enterprisedata.cloud.slb-ds.com/api/storage/v2/records/opendes%3Awork-product-component--RegularHeightField%3A0a16c55a-aec0-4f21-a55b-abd0447d31f6/1637157881569884' \
--header 'accept: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer ***'
```
Failure case - here I changed the last digit of the version:
```
curl --location --request GET 'https://evt.api.enterprisedata.cloud.slb-ds.com/api/storage/v2/records/opendes%3Awork-product-component--RegularHeightField%3A0a16c55a-aec0-4f21-a55b-abd0447d31f6/1637157881569881' \
--header 'accept: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer ***'
```
response:
```
{
"code": 500,
"reason": "Version not found",
"message": "The version 1637157881569881 can't be found for record opendes:work-product-component--RegularHeightField:0a16c55a-aec0-4f21-a55b-abd0447d31f6"
}
```
Expected result: return code is 404
Actual result: return code is 500

https://community.opengroup.org/osdu/platform/system/storage/-/issues/107
Intermittent record not found errors in Storage batch API (An Ngo, 2022-11-21)

Errors have been reported on the Storage query/records:batch API where the user is sometimes unable to retrieve a few records; the same records can be fetched at a later time. The Storage service responds with a record-not-found error, impacting about 1% of requests.
**Job details to reproduce the error:**
- Number of records: 14K
- Storage batch size: 20
- Number of threads: 10
- Record size: a few KBs (wellhead information)

Assignee: Neelesh Thakur

https://community.opengroup.org/osdu/platform/system/storage/-/issues/104
Add flexible page size option in Storage getRecordsByKind API (Vibhuti Sharma [Microsoft], 2022-01-17)

The Get Records by Kind API in the Storage service returns results as multiple pages when there is a large number of records. The page size is constant: it equals the `limit` specified in the optional query parameter, or the default limit set in the config file.
# **Context**
The Reindex API in the Indexer service calls the getRecordsByKind Storage API to fetch multiple records. We need an option to query Storage without the constant page-size constraint, for performance-related enhancements on the Azure provider.
# **Proposed Solution**
Add an **optional** parameter to the Storage service API which, when set to true, makes the API return results with page size <= the configured limit. The default behavior remains to return results with page size == the configured limit.

Milestone: M10 - Release 0.13
Assignee: Vibhuti Sharma [Microsoft]

https://community.opengroup.org/osdu/platform/system/storage/-/issues/103
Upgrade to Log4J 2.17 (David Diederich, 2021-12-21)

The Apache Foundation released another Log4j2 update, version 2.17, which addresses a denial-of-service vulnerability.
This issue tracks progress on upgrading this dependency for this project.

https://community.opengroup.org/osdu/platform/system/storage/-/issues/102
Log4J Expedient Updates and Patches (David Diederich, 2021-12-17)

This issue associates MRs that were applied to this project quickly to get a patched version ready as soon as possible. The intent is to provide a reference point for later, more thoughtful, analysis.

Assignee: Spencer Sutton (suttonsp@amazon.com)

https://community.opengroup.org/osdu/platform/system/storage/-/issues/99
Datetime conversion not working (Mingyang Zhu, 2022-06-09)

The Storage batch API returns an error for Date/Datetime conversion. Because the value is not converted to UTC properly, the Indexer cannot process it.
Indexer error example:
```
{
  "results": [
    {
      "index": {
        "trace": [
          "datetime parsing error: unknown format for attribute: SPUD_DATE | value: 3/28/2012",
          "datetime parsing error: unknown format for attribute: STATUS_DATE | value: 4/30/2018"
        ],
        "statusCode": 400,
        "lastUpdateTime": "2021-11-12T01:40:48.603Z"
      },
      "id": "sandbox-weu-des-prod-testing-e:wellbore:PDD-MzMwNTMwMzY2NzAxMDA"
    }
  ],
  "aggregations": null,
  "totalCount": 1
}
```

Milestone: M10 - Release 0.13

https://community.opengroup.org/osdu/platform/system/storage/-/issues/98
Indexer failures with 5XX error codes should be searchable (Larissa Pereira, 2022-02-28)

Some records fail to index due to unknown (5xx) errors. As of today, these records are not searchable and are not identified as indexing errors. Records that failed with 5XX errors should be searchable with default indexing fields.
## Context
Currently, when the Schema or Storage service throws a 500 error, the indexer stops right away. After a few retries, the indexer ignores the record. Thus, records that failed with index status 500 are not searchable.
## Proposed solution
We can extend the record changed/updated event to also contain the acl and legal tag sections of the record. This will enable the indexer to index the failed record (assuming no schema exists) with the right metadata, thereby making the record searchable. The indexer will use the id, kind, acl and tags properties to index the record, and the trace will be viewable via the query "index.status = 500".

https://community.opengroup.org/osdu/platform/system/storage/-/issues/97
Storage createOrUpdateRecord api fails with 500 if too many versions of the same recordId are created (Alok Joshi, 2021-11-18)

We've been observing an issue where users create too many versions of the same recordId. The versions are stored as part of the record metadata in CosmosDb (the StorageRecord table). When the metadata reaches a size limit (around 2MB), adding more versions to the list fails.
This failure of the CosmosDb operation is returned as an internal server error (500) from Storage. It should instead be a 4xx (413 Request Entity Too Large).
Changes proposed:
- Catch RequestEntityTooLargeException explicitly in core-lib-azure https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/blob/master/src/main/java/org/opengroup/osdu/azure/cosmosdb/CosmosStore.java#L522
- Gracefully handle the exception from core-lib-azure in Storage, instead of always throwing a 500: https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/PersistenceServiceImpl.java#L132

Assignee: Alok Joshi

https://community.opengroup.org/osdu/platform/system/storage/-/issues/96
Incomplete Impersonalization: End user access depends on his rights to physical storage (Rostislav Dublin (EPAM), 2021-12-28)

OSDU offers a robust RBAC-based Authentication and Authorization model provided by the Entitlements service.
Entitlements allows assigning a user a limited set of roles, sufficient for a precise definition of their permissions within the system. The ACL mechanism at the Record level ensures that these permissions are accurately projected onto the data. And with the Policy Service connected and policies configured correctly, the verification of rights becomes even more sophisticated.
Obviously, any duplication of this mechanism is as redundant as it is harmful.
However, such duplication currently exists in the form of an additional verification of the end-user account's rights to buckets and objects in Blob storage. This is primarily relevant for the GCP CSP, where this behavior is found, at least, in the Storage service code. The flaws are:
1. Impersonalization (using a service account instead of the end-user account) is not always applied when accessing Blob storage buckets and objects; it depends on the "isEnableImpersonalization" boolean, which is "false" by default. In that case impersonalization is not performed and GCS requests are made on behalf of the end-user account.
The logic behind preserving this functionality is not entirely clear. It contradicts the very principle of microservice architecture, which implies encapsulating data-access procedures within the microservice. The end-user account has nothing to do with the physical-storage access capabilities of the microservice itself, which must be unconditionally granted to the microservice's service account.
2. The problem persists even when the "isEnableImpersonalization" boolean is set to "true" (although this boolean is not set to "true" anywhere in our test and production environments). In this case, requests to GCS are made on behalf of a service account (datafier or another dedicated account), which removes the problem of service access to data...
But integration tests that check user authorization for data manipulation start to fail. This happens because the hasAccess(...) method is incorrectly implemented in the GoogleCloudStorage repository code: it delegates some checks to the GCS level but does not perform the necessary checks at the ACL level.
As a result, tests that verify the lack of access of the test "no-data-access-tester" user to manipulate data observe a false authorization.
We should get rid of this with the following refactoring tasks:
- For developers:
  - stop using the "isEnableImpersonalization" boolean and remove every mention of it from the code, making impersonalization the only available mode, since all data requests will come only from the service account under which the service runs:
    - GoogleCloudStorage.class (and the new ObmStorage.class) - multiple places;
    - GoogleCloudStorageTest.class - one place;
    - provider/storage-gcp/application.properties - the "osdu.gcp.storage.gcs.enable-impersonalization" property definition
  - insert an Entitlements+ACL check of end-user rights to Record operations in those places in the code where it was unfairly omitted (in the hope of reliable verification of end-user rights against physical storage). Correct validations are easily added using the DataAuthorizationService methods (validateOwnerAccess, validateViewerOrOwnerAccess, hasAccess) and/or the private method GoogleCloudStorage#validateMetadata(), which are already used for this purpose, but not everywhere.
    At a minimum, checks need to be added in these methods:
    - GoogleCloudStorage#read(RecordMetadata record, Long version, boolean checkDataInconsistency);
    - GoogleCloudStorage#hasAccess(RecordMetadata... records)
- For DevOps:
- stop the practice of giving end-user accounts any rights to physical storage.
- check the configurations of business user accounts in IAM and revoke all privileges to cloud resources.

Milestone: M10 - Release 0.13
Assignee: Rostislav Dublin (EPAM)

https://community.opengroup.org/osdu/platform/system/storage/-/issues/95
Need to remove version processing support for patch api (Yauheni Lesnikau, 2022-03-24)

Due to the changes in the recordId format, we need to remove version-processing support from the PATCH API.
The old format required that a recordId contain at least 3 parts split by ':', and the fourth (optional) part was always the version. The format has changed, and the number of parts split by ':' is no longer defined.
An example of affected payload:
```
{
"query": {
"ids": [
"tenant:nam:productionWellHistory:42461358371234.2012-02-01.M-test"
]
},
"ops": [
{
"op": "add",
"path": "/tags",
"value": [
"testtag:testvalue"
]
}
]
}
```
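A sketch (not the service code) of why the old parsing heuristic misfires on the id from the payload above:

```python
# Old heuristic: a 4th ':'-separated part was assumed to be a version number.
record_id = "tenant:nam:productionWellHistory:42461358371234.2012-02-01.M-test"
parts = record_id.split(":")

print(len(parts))          # 4 parts, so the old logic treats parts[3] as a version
print(parts[3])            # 42461358371234.2012-02-01.M-test
print(parts[3].isdigit())  # False - it is part of the id, not a numeric version
```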
The id in the payload contains 4 parts split by ':'. Under the current implementation the id would be treated as an id with a version, which is not correct.

Milestone: M10 - Release 0.13
Assignee: Yauheni Lesnikau

https://community.opengroup.org/osdu/platform/system/storage/-/issues/93
Remove information about deprecated Schema features from Storage service tutorial (Debasis Chatterjee, 2022-11-21)

Please see below. Add a "Deprecated" warning here. Use of the Schema service is preferred.
https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/docs/tutorial/StorageService.md#schemas