Partition issues
https://community.opengroup.org/osdu/platform/system/partition/-/issues

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/2
**Partition Service Should Support Updating Existing Secrets in a Partition** (Matt Wise, 2020-12-02)

## Status
- [X] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The Partition Service currently supports 3 API Operations.
1. Create Partition (POST)
2. Get Partition (GET)
3. Delete Partition (DELETE)
Currently, the only way to add new secrets to the KV store for a partition, or to update existing ones, is to delete the partition and recreate it. This leads to extra API calls and risks failing to put back data that was not meant to be touched.
## Decision
If this is implemented, it will be simpler to update existing values in KV stores without having to delete existing data.
## Rationale
Partitions need to be updated with new secrets and with edits to existing ones. To minimize the complexity of this operation for external callers, a route supporting it will simplify partition editing.
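As an illustration, an update route could use simple merge semantics; a hypothetical sketch (names assumed, not the actual service code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical merge semantics for an "update partition" route: keys in the
// patch overwrite or extend the stored properties, while untouched keys are
// preserved; no delete/recreate round trip is required.
class PartitionPatch {
    static Map<String, Object> apply(Map<String, Object> stored, Map<String, Object> patch) {
        Map<String, Object> merged = new HashMap<>(stored);
        merged.putAll(patch);
        return merged;
    }
}
```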
## Consequences
A new API route in Partition has to be supported/tested by all implementors of the service.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/4
**Modify contract to capture sensitive flag for partition specific secrets config** (Neelesh Thakur, 2020-10-08)

Problem: Partition secret configurations available via the Partition service can pose the following security issues:
1. All secrets are exposed by default to any service, regardless of whether it needs them.
2. Secrets are held in memory cache both at the partition service and the service client library.
3. The potential for logging secret values to the central logger is increased because secrets are sent in microservice HTTP response objects.
   a. Trace logs are often used to dump HTTP request and response objects between services for debugging purposes.
Solution: Provide a mechanism to distinguish secret from non-secret partition configuration and delegate the responsibility of consuming secrets via cloud-native libraries to the service level.
**Current**
```
public class PartitionInfo {
@Builder.Default
Map<String, Object> properties = new HashMap<>();
}
```
e.g.
```
{
"properties": {
"complianceRuleSet": "shared",
"storageAccountKey": "test-storage-**secret**"
}
}
```
**Proposed**
```
public class PartitionInfo {
@Builder.Default
Map<String, Property> properties = new HashMap<>();
}
public class Property {
@Builder.Default
private boolean sensitive = false;
private Object value;
}
```
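A consumer of this proposed contract could use the `sensitive` flag to redact values before logging; a hypothetical sketch (the `Property` shape mirrors the proposal above, with the Lombok annotations replaced by a plain constructor):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Plain stand-in for the proposed Lombok-built Property class above.
class Property {
    boolean sensitive;
    Object value;
    Property(boolean sensitive, Object value) {
        this.sensitive = sensitive;
        this.value = value;
    }
}

// Hypothetical redaction helper: values flagged sensitive are masked
// before the map is handed to a logger or serialized into a trace.
class PropertyRedactor {
    static Map<String, Object> redact(Map<String, Property> properties) {
        Map<String, Object> safe = new LinkedHashMap<>();
        properties.forEach((name, p) ->
                safe.put(name, p.sensitive ? "***" : p.value));
        return safe;
    }
}
```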
e.g.
```
{
"properties": {
"complianceRuleSet": {
"sensitive": false,
"value": "shared"
},
"storageAccountKey": {
"sensitive": true,
"value": "test-storage-**key**"
}
}
}
```

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/49
**ADR - Formalize the System/Shared Tenant** (Rustam Lotsmanenko (EPAM), 2024-03-26)

# ADR: Formalize the System/Shared Tenant
## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context
- The OSDU Platform has global meta-data, which is vital for the platform's integrity and normal operation and is shared across all tenants. This meta-data is provisioned once the new OSDU instance is created and then remains immutable.
- It's important to consider that the shared tenant can function not only as a repository for global metadata but also as a standard private tenant, particularly in single-tenant environments. Therefore, any proposed system API changes should not disrupt the existing regular flow, ensuring that system tenants can maintain private tenant-specific configurations without interference.
## Problem Statement
In the current approach, we do not have a proper way to distinguish the system tenant from private tenants.
The system tenant is configured per service via environment variables, for example, https://community.opengroup.org/osdu/platform/system/schema-service/-/blob/master/provider/schema-gc/src/main/resources/application.properties?ref_type=heads#L25
By default, the first tenant created in the OSDU platform is used as the originating storage for shared data: default system schemas, reference data, and system DAGs.
All other tenants are forced to consume this data from the first created tenant's space. This breaks the tenant data isolation concept in multi-tenant environments and forces core service implementations to "be aware" of the special case of the first tenant and to implement workarounds.
It also does not account for corner cases such as removing the very first tenant from the system.
## Proposal
To make the system more consistent and to improve tenant data isolation, we propose to formalize a special system tenant in the Partition service API, which will contain all the immutable shared meta-data, such as system schemas, system DAGs, and default OPA Rego rules.
## Decision
- When a new OSDU instance is created, a system tenant is created for that instance by default along with the first client tenant. This should be configurable: it will be possible to use the first client tenant as the system tenant if there is no need to create a separate one.
![Untitled_Diagram.drawio_30_](/uploads/133dafba6e51bf176b3515b391daefe9/Untitled_Diagram.drawio_30_.png)
- Move the configuration of the system tenant from services environment variables to the Partition service.
- Update Partition API accordingly.
- The Partition service should not expose the system tenant in the full partition list. This should be configurable: if the system tenant is also used as a private tenant, it should be present in the partition list.
- Add new endpoints, that would provide only system (shared) configurations.
- Forbid system tenant management through regular endpoints.
- Forbid system tenant deletion.
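A consumer library following this decision might resolve the system tenant as sketched below (hypothetical names; the fallback to an environment variable reflects the deprecated flow mentioned under Consequences, and `system` is the documented default):

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical resolution order for the system tenant name: prefer the
// Partition service's new system API, fall back to the deprecated
// environment variable, then to the default name "system".
class SystemTenantResolver {
    static String resolve(Supplier<Optional<String>> partitionApi, String envValue) {
        return partitionApi.get()
                .or(() -> Optional.ofNullable(envValue))
                .orElse("system");
    }
}
```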
## Rationale
Formalizing the System tenant will get us closer to platform standardization.
With updates, it will be possible to reduce resources provided for the system tenant, for example, Elasticsearch, Database instances, etc.
Also, it will give us the ability to align feature flag configurations, which are currently scattered.
It will be possible to move global platform configurations from environment variables to the system tenant.
## Consequences
Required code changes:
- Updates in the Core-Common library to use the new API (Partition client), in a non-breaking manner, while marking the existing flow as deprecated.
- Updates in consumer services to utilize the new capabilities: Schema, Workflow, and the Feature Flag Service implementation in Core Common. Again, in a non-breaking manner, allowing the old approach to continue working.
- A clear approach to updating CSP tenant configurations used in dev environments. Since developers do not have access to CSP Partition configurations, it will be helpful to have an approach for such cases; having versioned bootstrap configurations is preferable.
To enable API:
- Update Partition configuration to enable system API.
- Configure system tenant name in Partition configuration, default is `system`.
- Configure services that are using system tenant to use API instead of env configuration to determine system tenant (Only after changes are implemented in service).
No changes are required if the API is disabled. We expect adoption to be straightforward, since the changes can be enabled sequentially without disrupting existing environments.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/30
**Is there a solution to easily delete all the data related to a partition?** (Shuai Li, 2023-08-07)

If we call the delete partition API, only the partition record is deleted from the Partition service. All the data related to this partition (e.g. files, storage records, user groups) remains in place.
Since a partition is a "data partition", deleting one should delete all the related data in that data partition. Today I would probably need to call many other APIs to clean up the data, and not all services provide APIs to easily delete all data related to a partition.
Is there any consideration of this requirement?

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/48
**ADR - Platform Feature Flag Management Standardization** (Rustam Lotsmanenko (EPAM), 2024-03-22)

# ADR: Platform Feature Management Standardization
## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context
- The OSDU Platform uses a Feature Flag pattern to control releasing software allowing for code to be continually deployed into production environments while optionally controlling whether the functional change the code enables is accessible. https://osdu.projects.opengroup.org/pmc/work-products/pmc-portal/pmc-policies/main/projects/feature-flag.html
- The usage of Feature Flags is inconsistent: some are configured via environment variables, which makes them global, likely not because that is a requirement but because of how they happened to be implemented, for example, Policy integration or Global Status Publishing. https://community.opengroup.org/groups/osdu/platform/-/epics/29
## Problem Statement
- Platform maintenance and configuration aren't centralized in one location; instead, they are scattered across environment variables, application properties files, and Partition service configurations. To update some feature flags, the environment settings need to be modified and the services redeployed.
- There is no proper way to configure a feature flag both globally and per tenant; it's either global or per tenant. For example, it's not possible to enable Policy integration for the platform as a whole but disable it for specific tenants.
- The current global configurations managed through environment variables are too low-level; updating config maps and redeploying services are necessary to make changes.
## Proposal
It is proposed to use Partition info to hold feature flags for specific services, reusing the existing implementation with some enhancements: to support several flags for different services, we need to cache the whole partition-info property set instead of a single property, as was implemented individually for the Policy service (note: that implementation is not used).
## Decision
System Partition info should be used to determine whether a feature is disabled globally or not.
The existing environment-variables approach will be deprecated: the system partition will hold all the global environment flags and variables. These system flags can be overridden by partition configurations, allowing granular feature control while still making it possible to define platform-wide behavior when granularity is not needed.
The implementation code may be moved to the common library for reuse in different services.
Naming conventions:
~~~
system feature flags: system.feature.[feature_name].enabled. Example: system.feature.policy.enabled
feature flags: [partition_id].feature.[feature_name].enabled. Example: opendes.feature.policy.enabled
~~~
Generic flow implementation is described in the diagram below:
1. Partition info is taken from the cache (if present) or obtained from a Partition service API call and cached afterward for performance, with a reasonable invalidation timeout (5-10 minutes, configurable)
2. Algorithm checks partition info for the corresponding feature flags
   - uses the system partition variable; if it is set to false, the corresponding feature is considered switched off
   - otherwise uses the partition-specific flag value
![image-2024-2-21_15-19-26](/uploads/00210facf2fdea895947e7f5af39f0fa/image-2024-2-21_15-19-26.png)
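The check in steps 1-2 can be sketched as follows (a minimal sketch: the cached partition-info maps are assumed to be already fetched, and key names follow the naming convention above; this is not the actual Core-Common code):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the feature-flag resolution described above:
// a system-level flag set to "false" switches the feature off globally;
// otherwise the partition-specific flag decides, falling back to the
// system value when no tenant-level override exists.
class FeatureFlagResolver {
    static boolean isEnabled(String feature, String partitionId,
                             Map<String, String> systemProps,
                             Map<String, String> partitionProps) {
        String systemVal = systemProps.get("system.feature." + feature + ".enabled");
        if ("false".equalsIgnoreCase(systemVal)) {
            return false; // switched off globally, per the flow above
        }
        String tenantVal = partitionProps.get(partitionId + ".feature." + feature + ".enabled");
        if (tenantVal != null) {
            return Boolean.parseBoolean(tenantVal); // partition-specific override
        }
        return Boolean.parseBoolean(systemVal); // fall back to the system value
    }
}
```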
## Rationale
https://community.opengroup.org/groups/osdu/platform/-/epics/29
- By having standardized feature management it will be possible to introduce new features gradually. For example, enabling them for a single tenant for test purposes.
- It will be possible to manage resources more efficiently, for example, it will be possible to configure GSM(global status messaging) per tenant, not per platform, enabling it only where it's needed.
- It will be possible to manage all features at runtime without restarts and redeployments.
- It will be possible to manage features exclusively via API, thus allowing an admin UI dashboard to be implemented.
## Consequences
- We will have to enforce the use of the new approach for new features managed by feature flags. For that, we may need a guide on how to update Partition configurations in dev environments for each CSP.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/37
**Exception encountered during context initialization** (Nikhil Patil, 2023-12-07)

BeanCreationException: Error creating bean with name 'springSecurityFilterChain' defined in class path resource [org/springframework/security/config/annotation/web/configuration/WebSecurityConfiguration.class]
Bean instantiation via factory method failed to instantiate [javax.servlet.Filter]: Factory method 'springSecurityFilterChain' threw exception java.lang.NoClassDefFoundError: org/springframework/security/core/context/DeferredSecurityContext

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/36
**ADR: Partition API Access authorization modification** (Himanshu Kumrawat, 2024-01-15)

# ADR
## Title
Restricted Partition APIs
## Context
Currently, the partition-service API permission check is identical for all operations: the same access permissions apply to **CREATE**/**UPDATE**/**DELETE** and **GET**/**LIST** operations.
![Screenshot 2024-01-15 190925.png](/uploads/368d9ed906cda698b6d263fa7df886f9/Screenshot_2024-01-15_190925.png)![create.png](/uploads/1d2b7d96eff81bcde0afff4d6f8a91d6/create.png)![delete.png](/uploads/74afe9faee169a4e710ee9f0df1d5d27/delete.png){width=872 height=67}
![patch.png](/uploads/ce7db7e9a8ac7271ca2bb8a62e240406/patch.png){width=771 height=99}
![list.png](/uploads/09aceb6c1cc4bf1272c40086ee5ac69c/list.png){width=566 height=57}
While checking authorization, no differentiation can be made based on which endpoint is under consideration.
## Decision
Therefore, it is proposed that the `hasPermissions` method used in the **PreAuthorize** annotation be given a `PartitionOperation` parameter to distinguish the different API endpoints while checking their permissions.
![operation.png](/uploads/8bd2906c3e3cb4032e8c9c3f2f562b79/operation.png)
e.g.
![create2.png](/uploads/08ad4d66c30d51869bbbb710a7a6db3f/create2.png)
To onboard this authorization change, a new application configuration variable, **enable.crud.based.authorization**, needs to be enabled (set to `true`) to activate the check.
![config.png](/uploads/2bbff4bdbdb6c5f34057038d597d8d43/config.png)
By default, the config is set to `false`.
The `partitionOperation` parameter can be passed to the overridden implementation of `isDomainAdminServiceAccount` and then used by different CSPs to grant access.
![implementataion.png](/uploads/2385a2fb9d997371231ea811cfce56cf/implementataion.png)
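Since the behavior is shown only in screenshots, here is a deliberately simplified model of the proposed contract (enum and method names assumed from the ADR; real access rules are CSP-specific and not reproduced here):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical operations enum matching the proposed PartitionOperation parameter.
enum PartitionOperation { CREATE, UPDATE, DELETE, GET, LIST }

// Simplified model: when enable.crud.based.authorization is on, write
// operations are restricted (the caller would receive 403 Forbidden);
// read operations keep the existing permission result.
class AuthorizationService {
    private static final Set<PartitionOperation> WRITE_OPS =
            EnumSet.of(PartitionOperation.CREATE, PartitionOperation.UPDATE, PartitionOperation.DELETE);

    private final boolean crudBasedAuthorization;

    AuthorizationService(boolean crudBasedAuthorization) {
        this.crudBasedAuthorization = crudBasedAuthorization;
    }

    boolean hasPermissions(boolean isDomainAdmin, PartitionOperation op) {
        if (crudBasedAuthorization && WRITE_OPS.contains(op)) {
            return false;
        }
        return isDomainAdmin;
    }
}
```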
## Conclusion
The default implementation for non-Azure CSPs has been modified accordingly by the Azure team to adopt this change at the code level, with no change in access logic.
For Azure, it has been decided that when the **enable.crud.based.authorization** flag is enabled, the **CREATE**/**UPDATE**/**DELETE** operations are restricted and the API returns **403 Forbidden**.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/35
**Add /liveness_check** (Riabokon Stanislav (EPAM) [GCP], 2023-11-21)

Need to add the endpoint `/liveness_check` in order to verify the operational status of the Partition Service.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/34
**Change response code for RequestRejectedException** (Neha Khandelwal, 2023-11-16)

Currently, when a request URL contains an unknown or potentially malicious string, Spring Security uses the HttpFirewall interface to reject the request with an _org.springframework.security.web.firewall.RequestRejectedException_. This exception returns a 500 Internal Server Error with the message "The request was rejected because the URL contained a potentially malicious String \[string\]." An example of such a string is "//".
Since this error is caused by a bad request from the user, the returned response should instead be a 400 Client Error. Furthermore, keeping the response as a 500 error can impact the SLIs/SLOs of both SDMS and the Partition Service.
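The status mapping itself is trivial; a hypothetical sketch (the exception here is a plain stand-in for Spring Security's type so the example is self-contained; note that Spring Security 5.4+ also ships an `HttpStatusRequestRejectedHandler` that can be registered for exactly this purpose):

```java
// Stand-in for org.springframework.security.web.firewall.RequestRejectedException.
class RequestRejectedException extends RuntimeException {
    RequestRejectedException(String message) { super(message); }
}

// Hypothetical mapping: a rejected (malformed/malicious) request becomes a
// 400 client error instead of the default 500 server error.
class RejectionStatusMapper {
    static int statusFor(Throwable t) {
        return (t instanceof RequestRejectedException) ? 400 : 500;
    }
}
```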
The proposed solution is to implement a RequestRejectedHandler to change the response code to 400 when there is a RequestRejectedException.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/33
**Use a Secret service for storing and fetching secrets and sensitive configurations** (Rustam Lotsmanenko (EPAM), 2023-07-25)

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/32
**GET /api/partition/v1/partitions/{partitionId}** (Shane Hutchins, 2023-11-27)

Received a response with 5xx status code: 500
{"timestamp":"2023-06-14T14:13:07.676+00:00","status":500,"error":"Internal Server Error","path":"/api/partition/v1/partitions/%0D"}
Really was expecting a 401 or 404 here.
Run this curl command to reproduce this failure:
curl -X GET -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: osdu' https://osdu.r3m18.preshiptesting.osdu.aws/api/partition/v1/partitions/%0D
GET /api/partition/v1/partitions/{partitionId} on Azure:
curl -X GET -H 'Authorization: Bearer TOKEN' -H 'data-partition-id: opendes' https://osdu-ship.msft-osdu-test.org/api/partition/v1/partitions/%0D

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/31
**upgrade azure-storage SDK** (Nur Sheikh, 2023-01-18)

In the Partition service we are using azure-storage SDK 8.6.5 from the com.microsoft.azure package, which is too old and no longer well supported. It is advisable to use the latest SDK from the com.azure package.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/27
**Partition Service Architecture Documentation** (Michael van der Haven, 2023-03-09)

The API for the Partition Service is very straightforward. But when looking at the effect of adding a new partition to the system on topics like ingestion, search, and security, it is actually a very complex topic to explain and manage for users/customers.
For example, adding a partition means carefully updating all users and groups in the Entitlements service: clearly mapping out which groups end up in which partition when viewed from a user base, or mapping what data is available to which groups when viewed from a partition base (i.e. it is not the first time that a group in an ACL for data in a partition turns out not to be mapped to any of the groups in the Entitlements service, or at least no users have access to that particular partition).
A clear document describing cause, effect, and expectations from an operations perspective is very much desired.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/25
**Upgrade to Log4J 2.17** (David Diederich, 2021-12-21)

The Apache Foundation released another Log4j2 update, version 2.17, which addresses a denial-of-service vulnerability.
This issue tracks progress to upgrade this dependency for this project.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/24
**Log4J Expedient Updates and Patches** (David Diederich, 2021-12-15)

This issue associates MRs that were applied to this project quickly to get a patched version ready as soon as possible. The intent is to provide a reference point for later, more thoughtful, analysis.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/23
**Apache log4j CVE-2021-44228** (Dmitrii Gerashchenko, 2021-12-14)

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44228
Zero-day vulnerability affects log4j and can lead to remote code execution. This is a critical issue and needs to be resolved as soon as possible.
---
Apache Log4j2 <=2.14.1 JNDI features used in configuration, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI-related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. In previous releases (>2.10) this behavior can be mitigated by setting the system property "log4j2.formatMsgNoLookups" to "true", or it can be mitigated in prior releases (<2.10) by removing the JndiLookup class from the classpath (example: zip -q -d log4j-core-\*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class).

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/22
**API spec for partition service can not be loaded in Gitlab** (Dmitrii Gerashchenko, 2021-11-25)

API spec for partition service can not be loaded in GitLab: https://community.opengroup.org/osdu/platform/system/partition/-/blob/master/docs/api/partition_openapi.yaml
![image](/uploads/7d888ca0cfd788037c134f302f68d10b/image.png)
Path "/actuator/health" is duplicated.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/21
**Use HPA for kubernetes service** (Rostislav Vatolin, 2021-11-05)

Implement the practices described here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/20
**Update the partitionListCache without rebuild on partition create, delete** (Dmitrii Gerashchenko, 2022-11-24)
To optimize this behavior the `partit...For now, the `partitionListCache` is purged on `createPartition` or `deletePartition` invocation that leads to the unnecessary request to storage on the next invocation of `getAllPartitions` method.
To optimize this, the `partitionListCache` could be updated on `createPartition` or `deletePartition` invocation without a request to storage.

---
https://community.opengroup.org/osdu/platform/system/partition/-/issues/19
**MS CloudTableClient has no timeouts** (Dmitrii Gerashchenko, 2021-10-01)

MS TableStorage's client, CloudTableClient, uses default timeout settings.
The client can try to connect to the MS server for up to 2 minutes: 3 retry attempts with 30 seconds delay between attempts.
The MaximumExecutionTime is null.
I created a dummy server and tested the case when MS TableStorage responds with latency. There is no timeout for a response in the client so the client could be blocked infinitely.
The client doesn't throw errors on long TableStorage latencies, which could be the cause of 504 errors for API consumers.
Also, it means that we can't see any exceptions even if MS TableStorage responds with high latency.
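Until the SDK's own limits are configured (in azure-storage 8.x, `TableRequestOptions` exposes timeout and maximum-execution-time settings; treat that as an assumption to verify against the SDK docs), a generic client-side deadline can guard against such unbounded waits; a minimal sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Minimal sketch: wrap a potentially slow storage call in a future with a
// hard deadline, so a hanging TableStorage response surfaces as a
// TimeoutException instead of blocking the caller indefinitely.
class DeadlineGuard {
    static <T> T callWithDeadline(Supplier<T> call, long timeoutMs)
            throws TimeoutException, ExecutionException, InterruptedException {
        return CompletableFuture.supplyAsync(call).get(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```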