OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

---

# M7 Manifest based ingestion - Load Testing
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/80
2022-01-18T20:06:59Z · Ben Lasscock

## Definitions
Load Testing - the number of records or manifests that can be processed at a time.
## Background
Throughout M7 there have been a number of performance improvements delivered by EPAM, as well as work on resolving configuration issues. We expect these changes have significantly improved the capacity of manifest-based ingestion, but we don't have a specific figure.
**The process of load testing should be repeatable, with the expectation it will be applied to the upcoming Airflow 2+ changes.**
## Requirements
We need the "5000 manifest test" (@debasisc @todaiks) to be re-run on the M7 release. The result should be a binary pass/fail and the wall time for executing the job. For completeness, Table 1 shows a set of recommended test cases that we believe should ultimately be automated and runnable through the QA group.
| Test | Issue | AWS | Azure | GCP | IBM |
| ----------- | ----------- | --- | ----- | --- | --- |
| the "5000 manifest" | Our current baseline | | | | |
| 1 Manifest with 5,000 records | | | | | |
| 1 Manifest with 20,000 records | | | | | |
| 1 Manifest with 50,000 records | Limit on the size of the request body | | | | |
| 50K manifests in multiple requests, not simultaneously | Airflow 1.X doesn’t allow sending multiple requests (fixed in Airflow 2.0) | | | | |
| chunks of 50, 1000 DAG runs | 1. max_active_runs (50) limitation 2. limitation of workflow service: java heap error (Issue 64) 3. Storage Service has a limitation of storing no more than 500 records/s | | | | |
| chunks of 1000 | see above | | | | |
| 50 DAG runs | | | | | |
| Launch several different DAGs simultaneously | | | | | |
| Ingest the Volve data | To promote adoption | | | | |
| Ingest the TNO data | To promote adoption | | | | |

Milestone: M7 - Release 0.10 · Chris Zhang

---

# Wellbore DMS - APIs for WellLog, WellTrajectory and WellboreMarkerSet
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/16
2021-07-21T10:31:07Z · Gisele Souza

- New APIs for bulk and meta data of the following WPC: WellLog, WellTrajectory and WellboreMarkerSet
- Unit tests/integration tests to validate the new APIs

Milestone: M7 - Release 0.10 · Gisele Souza · 2021-07-21

---

# AADSecurityConfigWithIstioEnabled performs authentication with istio enabled
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/60
2021-07-29T05:43:28Z · Abhishek Kumar (SLB)

There is no difference in behavior when `azure.istio.auth.enabled` is `true` or `false`.
Both `AADSecurityConfig` and `AADSecurityConfigWithIstioEnabled` perform the same function:
```java
http
.csrf().disable()
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.NEVER)
.and()
.authorizeRequests()
.antMatchers("/", "/index.html",
"/v2/api-docs",
"/configuration/ui",
"/swagger-resources/**",
"/configuration/security",
"/swagger",
"/swagger-ui.html",
"/webjars/**").permitAll()
.anyRequest().authenticated()
.and()
.addFilterBefore(appRoleAuthFilter, UsernamePasswordAuthenticationFilter.class);
```
vs.

```java
http.httpBasic().disable()
.csrf().disable();
```

Milestone: M7 - Release 0.10 · Aman Verma

---

# Version endpoint - Part 1 (Storage Service)
https://community.opengroup.org/osdu/platform/home/-/issues/37
2021-08-13T12:21:04Z · Kateryna Kurach (EPAM)

Issue that tracks M8 activities: https://community.opengroup.org/osdu/platform/home/-/issues/36
I'd like to propose a new endpoint for all services to retrieve the version information. I'm most interested in the tag version / upcoming tag version. My thinking is maven-centric, but I'd like to get the artifact version injected into the jar files and available via a simple GET endpoint.
I have several use cases in mind right now:
1. The end customer should have a way to query their environment to know what versions they are running. Then they would know that patches they see coming in on community have been applied to their environment. The Admin UI may be able to use this endpoint to make that query easier.
1. Application developers would be able to query versions of the services they are working with to determine compatibility.
1. The CI pipeline can query the running instances of dependent services, and issue a warning if the major/minor doesn't match the currently executing one. Having branch names or commit hashes would further improve this, but that isn't part of my initial thinking.
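As a hedged sketch of how such an endpoint could work (the `/api/version` path, the fallback string, and the use of the JDK's built-in `HttpServer` are illustrative assumptions, not a proposed implementation), a service could read the Maven-injected `Implementation-Version` from its jar manifest:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class VersionEndpoint {

    // Maven's jar plugin can write Implementation-Version into MANIFEST.MF;
    // getImplementationVersion() returns null when run outside a packaged jar.
    static String versionJson() {
        Package pkg = VersionEndpoint.class.getPackage();
        String version = (pkg == null) ? null : pkg.getImplementationVersion();
        if (version == null) {
            version = "0.0.0-SNAPSHOT"; // hypothetical fallback for local/IDE runs
        }
        return "{\"version\":\"" + version + "\"}";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/version", exchange -> {
            byte[] body = versionJson().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("GET /api/version on port " + server.getAddress().getPort()
                + " would return: " + versionJson());
        server.stop(0); // demo only; a real service keeps serving
    }
}
```

In a Maven build, one way to get the version injected is setting `addDefaultImplementationEntries` on the jar plugin's archive manifest configuration.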
What complexities and challenges do you see in trying to provide this information?

Milestone: M7 - Release 0.10

---

# Not able to ingest large file in IBM CSV ingestion
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/36
2021-07-20T12:07:14Z · Shrikant Garg

We are not able to ingest large CSV files with 100k records (>25 MB) in IBM CSV ingestion.
When we open a stream from the download URL and try to read the CSV file, the stream connection gets closed and only partial records get ingested (only those records which were read up to that point).
Error description:

```
Caused by: java.lang.IllegalStateException: ConnectionClosedException reading next record: org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 37,181,768; received: 18,350,080)
```

Milestone: M7 - Release 0.10 · Shrikant Garg

---

# Python SDK - Manifest-based ingestion updates [GONRG-2726] - Part 1
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/73
2021-07-20T15:28:02Z · Kateryna Kurach (EPAM)

[https://jiraeu.epam.com/browse/GONRG-2694](https://jiraeu.epam.com/browse/GONRG-2694)

Milestone: M7 - Release 0.10 · Siarhei Khaletski (EPAM), Kateryna Kurach (EPAM)

---

# Azure: Onboarding Well Delivery DDMS
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/issues/2
2021-08-10T20:10:38Z · Jason

**Service name**: `Well Delivery DDMS`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
## Required Documentation for Service Approval (link or provide info here)
- What entity types does this service manage: See slide 6 [here](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/blob/master/docs/presentation/OSDU_Well_Delivery_DDMS_Contribution.pptx).
- Functional Swagger Link: [here](https://osdu-glab.msft-osdu-test.org/api/well-delivery/swagger-ui.html#/)
- Instructions on how to run and test the service locally: [here](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/tree/master/provider/wd-azure)
- Instructions for creating ADO pipeline for the service: [here](https://community.opengroup.org/jsangiamo/sample-ddms-onboarding/-/blob/master/service_docs/well_delivery/ado_pipeline_guide.md)
- What are the entitlements required to call the different endpoints for this service:
To query data: `service.storage.viewer`, `service.storage.creator`, or `service.storage.admin`.
To create data: `service.storage.creator` or `service.storage.admin`.
To soft delete: `service.storage.creator`, or `service.storage.admin`.
To hard delete data or data version: `service.storage.admin`.
- What infrastructure is deployed for this service: A Cosmos database `well-delivery` in the data partition's Cosmos account, with containers for each data type.
- How is the infrastructure for this service deployed: Currently within the service code.
- What is the default tier/scaling for the service-specific infrastructure: 1,200 RUs/container × 11 containers = 13,200 RUs total, or approximately $700/month.
- Are the schemas included in WKS [YES/NO]. If not, how will customers load the schemas: The schemas are not currently in WKS but are projected to be there before release.
- Link to a postman collection or VS Code .http file where a customer can find an end-to-end workflow for the service (not required):
## Onboarding Steps:
**Infrastructure and Initial Requirements**
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run.
- [ ] Service is deployed into its own namespace (currently optional).
- [ ] Service has its own ingress (currently optional).
- [x] If there are new entitlements for this DDMS, add them to the list of groups used to bootstrap a data partition that can be found [here](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/master/provider/entitlements-v2-azure/src/main/resources/provisioning/groups/datalake_service_groups.json).
- [x] If there are new entitlements for this DDMS, add them to the list of groups used to bootstrap the opendes data partition found [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/entitlements_data_uploader/src/main/resources/bootstrap-data.txt).
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory for the service. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml).
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data).
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
**Developer Experience**
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service.
- [x] Create environment variable script to generate .yaml files (if the service is in Java) to be used with the IntelliJ [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables).
- [x] Community PR requirements are enforced for the repo by default.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create documentation on how to add this service to an existing ADO project.
**Functional Validations**
- [x] Service Swagger is reachable.
- [x] Service passes all integration tests.
- [x] Service is able to pass any end-to-end workflows that have been defined.
**Release**
- [x] Release helm charts created.
- [x] Release helm charts validated.
- [x] Release helm charts uploaded to Azure ACR (work with Manish Kumar for this).
- [x] Release documentation for [helm-charts-azure](https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure) written and verified.

Milestone: M7 - Release 0.10 · Jason

---

# New version of WITSML parser doesn't pass date-time validation
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/39
2021-07-14T12:52:54Z · Yan Sushchynski (EPAM)

Hello!
Now, date-time fields must follow RFC3339 standard. For more details: https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/merge_requests/50#updates-description
The error was when I tried to parse this file: https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/blob/master/energistics/witsml_data/Well.xml
```
ERROR - Schema validation error. Data field.
[2021-06-30 16:25:43,186] {base_task_runner.py:113} INFO - Job 79438: Subtask validate_manifest_schema_task [2021-06-30 16:25:43,185] {validate_schema.py:289} ERROR - Manifest kind: odesprod:wks:master-data--Well:1.0.0
[2021-06-30 16:25:43,187] {base_task_runner.py:113} INFO - Job 79438: Subtask validate_manifest_schema_task [2021-06-30 16:25:43,187] {validate_schema.py:290} ERROR - Error: '2021-06-30T16:24:43.178585' is not a 'date-time'
```
I think this date-time field lacks "Z" at the end.

Milestone: M7 - Release 0.10 · Laurent Deny

---

# OSDU GCP Migrate R2 services to Entitlement v2 (integrate services to use entitlement v2) [GONRG-2647]
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/66
2021-07-14T12:24:51Z · Sergey Krupenin (EPAM)

https://jiraeu.epam.com/browse/GONRG-2647

Milestone: M7 - Release 0.10 · Riabokon Stanislav (EPAM) [GCP]

---

# GCP OSDU Entitlement V2 API [GONRG-226]
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/65
2021-07-07T13:41:51Z · Sergey Krupenin (EPAM)

https://jiraeu.epam.com/browse/GONRG-226

Milestone: M7 - Release 0.10 · Riabokon Stanislav (EPAM) [GCP]

---

# Deployment framework [H1] [GONRG-619]
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/issues/3
2021-07-07T13:41:23Z · Sergey Krupenin (EPAM)

https://jiraeu.epam.com/browse/GONRG-619

Milestone: M7 - Release 0.10 · Oleksandr Kosse (EPAM)

---

# CSV Enhancement - Multithread optimization
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/26
2021-07-08T16:33:39Z · Fernando Nahu Cantera Rubio

## Multithread optimization
Each record is read and added as a task in an executor service, to be enriched and stored in parallel with other records.
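A minimal sketch of that pattern (the `enrich` step and class name are hypothetical stand-ins for the parser's real enrichment and Storage-service calls):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRecordIngestor {

    // Hypothetical stand-in for the parser's enrichment + storage step.
    static String enrich(String record) {
        return record + "|enriched";
    }

    static List<String> ingest(List<String> records, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // Each record becomes an independent task submitted to the pool.
            List<Future<String>> futures = new ArrayList<>();
            for (String record : records) {
                futures.add(pool.submit(() -> enrich(record)));
            }
            // Collect results in submission order, surfacing per-record failures.
            List<String> stored = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    stored.add(f.get());
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return stored;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(ingest(List.of("r1", "r2", "r3"), 4));
    }
}
```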
Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# CSV Enhancement - Id generation change
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/25
2021-07-08T16:33:30Z · Fernando Nahu Cantera Rubio

## Id generation change
Change the ID generation to follow the OSDU pattern ```<authority/data-partition-id>:<source>:<entity-type>:<base64-of-xosdu-natural-keys>```
* authority/data-partition-id is taken from the request triggering the workflow

Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# CSV Enhancement - Relationships
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/24
2021-07-08T16:33:26Z · Fernando Nahu Cantera Rubio

## Relationships
* CSV ingestion supports two kinds of relationships:
1. **Deterministic (Schema-driven)**
These relationships require that the entity be referred to in the record's targetKind schema under an attribute having ```x-osdu-relationship``` tag. Because they are present in the schema, they are represented directly as attributes in the ```data``` block of the record.
2. **Non-deterministic (Data-driven)**
These relationships do not require any mention in the schema. They are represented within the ```data.relationships``` block of the record.
* The ```ExtensionProperties``` block in the file metadata record is used to provide additional information for ingestion. We can use this block to provide relationship information. There are two ways of providing this information:
* In the ```relationships``` block, with the entity name and a list of parent record ID(s). The ID(s) provided here are directly used to establish relationships.
* In the ```relatedNaturalKey``` block, as an entity that requires a search of the targetKind using the natural keys provided to establish a relationship.
* _sourceColumn_: Column name of the CSV file which refers to the key parent attribute.
* _targetKind_: Schema ID of the parent record.
* _targetAttribute_: The key attribute of the parent record which is used to search the parent record.
* _**Pre-requisites**_: CSV file should have the key attributes of the parent records.
```json
{
"ExtensionProperties": {
"relationships": {
"project": {
"ids": [
"<recordId1>"
]
},
"well": {
"ids": [
"<recordId2>",
"<recordId3>"
]
}
},
"relatedNaturalKey": {
"wellbore": {
"targetKind":"<<authority>:<source>:<entityType>:<version>>",
"keys": [
{
"sourceColumn":"UWI",
"targetAttribute":"uwi"
}
]
}
}
}
}
```
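For the `relatedNaturalKey` path, the parser has to search the `targetKind` for a record whose `targetAttribute` matches the value found in `sourceColumn`; a hedged sketch of assembling such a query (the query syntax and class name are illustrative assumptions, not the parser's actual code):

```java
public class NaturalKeyLookup {

    // Builds a body that could be POSTed to the Search service's query endpoint
    // to find the parent record; the exact query syntax here is illustrative.
    static String searchQuery(String targetKind, String targetAttribute, String csvValue) {
        return "{\"kind\":\"" + targetKind + "\","
                + "\"query\":\"data." + targetAttribute + ":\\\"" + csvValue + "\\\"\"}";
    }

    public static void main(String[] args) {
        // e.g. resolve a wellbore parent from the UWI column of the current row
        System.out.println(searchQuery("osdu:wks:master-data--Wellbore:1.0.0", "uwi", "UWI-123"));
    }
}
```

The ID of the matched record would then be written into the `data.relationships` block, just as with the directly supplied `relationships` IDs.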
* The schema of the record should have information about attributes that contain deterministic relationships.
* The _EntityType_ field within the ```x-osdu-relationship``` block should contain the entity that needs to be matched from the ExtensionProperties block.
```json
{
"properties": {
"wellId": {
"type":"string",
"pattern":"^[\\w\\-\\.]+:\\-\\-well:[\\w\\-\\.\\:\\%]+:[0-9]*$",
"x-osdu-relationship": [
{
"GroupType":"master-data",
"EntityType":"well"
}
]
},
"wellboreId": {
"type":"string",
"pattern":"^[\\w\\-\\.]+:\\-\\-wellbore:[\\w\\-\\.\\:\\%]+:[0-9]*$",
"x-osdu-relationship": [
{
"GroupType":"master-data",
"EntityType":"wellbore"
}
]
}
}
}
```
* The final record will then have the relationships defined as below:
```json
{
"data": {
"relationships": {
"project": {
"ids": [
"<recordId1>"
]
}
},
"wellId":"<recordId2>",
"wellboreId":"<recordId5>"
}
}
```

Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# CSV Parser Enhancement - Nested Schema
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/23
2021-07-08T16:33:21Z · Fernando Nahu Cantera Rubio

## Nested Schema
* To support the ingestion of data into nested attributes, the headers of the uploaded CSV file should match the nested attributes of the target schemas, using the delimiter character defined in the file metadata record.
* The ```nestedFieldDelimiter``` attribute in file metadata is used to define which character is going to be used on the csv file header to describe the different levels of nested attributes while the ingestor parses the files.
* The delimiter character used to define nested structures on the csv file header must match the one defined by the ```nestedFieldDelimiter``` on the file metadata record, otherwise the attributes on the csv file will not be considered nested.
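A small sketch of the header expansion described above (the class name and map-based record shape are assumptions for illustration): a header such as `SpatialLocation.Latitude` is split on the configured delimiter to produce nested objects.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class NestedHeaderMapper {

    // Expands a CSV header like "SpatialLocation.Latitude" into nested maps,
    // splitting on the nestedFieldDelimiter from the file metadata record.
    @SuppressWarnings("unchecked")
    static void put(Map<String, Object> data, String header, String delimiter, Object value) {
        String[] parts = header.split(Pattern.quote(delimiter));
        Map<String, Object> current = data;
        for (int i = 0; i < parts.length - 1; i++) {
            current = (Map<String, Object>) current
                    .computeIfAbsent(parts[i], k -> new HashMap<String, Object>());
        }
        current.put(parts[parts.length - 1], value);
    }

    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        put(data, "SpatialLocation.Latitude", ".", 59.27);
        put(data, "SpatialLocation.Longitude", ".", 5.71);
        put(data, "FacilityName", ".", "Well-1");
        System.out.println(data);
    }
}
```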
```json
{
"ExtensionProperties": {
"FileContentsDetails": {
"TargetKind": "<<authority>:<source>:<entityType>:<version>>",
"nestedFieldDelimiter":".",
"FileType": "csv"
}
}
}
```

Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# CSV Parser Enhancement - Spatial data handler
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/22
2021-07-08T16:33:16Z · Fernando Nahu Cantera Rubio

## Spatial data handler
### Pre-requisites:
* Schema used to ingest the data has Spatial reference.
* CSV file has the Spatial data attributes.
* The ```ExtensionProperties``` block is used to provide content details of the file; the Workflow Service uses this same block to provide Spatial data information.
* SpatialMapping: This section is used to create the Spatial data block in the ingested records.
* type: This field refers to the type of the Spatial data; currently the Workflow Service only supports point.
* latitude: This field refers to the Latitude of the point.
* longitude: This field refers to the Longitude of the point.
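A hedged sketch of what this mapping step could look like for one parsed CSV row (the class name and output shape are illustrative assumptions, not the Workflow Service's actual code):

```java
import java.util.Map;

public class SpatialBlockBuilder {

    // Given the SpatialMapping column names and one parsed CSV row,
    // produce a point-type spatial block; the JSON shape is illustrative.
    static String spatialBlock(String latColumn, String lonColumn, Map<String, String> row) {
        double lat = Double.parseDouble(row.get(latColumn));
        double lon = Double.parseDouble(row.get(lonColumn));
        return "{\"type\":\"point\",\"latitude\":" + lat + ",\"longitude\":" + lon + "}";
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of("LAT", "59.27", "LON", "5.71");
        System.out.println(spatialBlock("LAT", "LON", row));
    }
}
```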
```json
{
"ExtensionProperties": {
"FileContentsDetails": {
"TargetKind": "<<authority>:<source>:<entityType>:<version>>",
"FileType": "csv",
"SpatialMapping":{
"type": "point",
"latitude": "Column name of the CSV which contains the LATITUDE value",
"longitude": "Column name of the CSV which contains the LONGITUDE value"
},
"FrameOfReference": [
{
"kind": "CRS",
"name": "GCS_WGS_1984",
"persistableReference": "{\"wkt\":\"GEOGCS[\\\"GCS_WGS_1984\\\",DATUM[\\\"D_WGS_1984\\\",SPHEROID[\\\"WGS_1984\\\",6378137.0,298.257223563]],PRIMEM[\\\"Greenwich\\\",0.0],UNIT[\\\"Degree\\\",0.0174532925199433],AUTHORITY[\\\"EPSG\\\",4326]]\",\"ver\":\"PE_10_3_1\",\"name\":\"GCS_WGS_1984\",\"authCode\":{\"auth\":\"EPSG\",\"code\":\"4326\"},\"type\":\"LBC\"}",
"propertyNames": [
"Column name of the CSV which contains the LATITUDE value",
"Column name of the CSV which contains the LONGITUDE value"
],
"propertyValues": [
"deg"
],
"uncertainty": 0
}
]
}
}
}
```

Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# CSV Parser Enhancement - Token generation for long-running jobs
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/21
2021-07-08T16:33:07Z · Fernando Nahu Cantera Rubio

## Token generation for long-running jobs
An interface `AuthJwtToken` was added for generating tokens. The following classes have dummy implementations of it; until they are reworked, the request token will be used:
- AwsServiceAccountAuthToken
- ServiceAccountAuthToken
- IBMServicePrincipalAuthToken

Milestone: M7 - Release 0.10 · Swapnil, Fernando Nahu Cantera Rubio

---

# Spotbugs Failing in some Services with Out of Memory Exception
https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/issues/23
2021-07-06T15:38:33Z · David Diederich

Some services, such as [Partition](https://community.opengroup.org/osdu/platform/system/partition/-/jobs/404507), fail in the spotbugs step. If re-run with `SECURE_LOG_LEVEL` set to `"debug"`, we see that the [failure](https://community.opengroup.org/osdu/platform/system/partition/-/jobs/404635#L878) is `java.lang.OutOfMemoryError`.
From that [same debug output](https://community.opengroup.org/osdu/platform/system/partition/-/jobs/404635#L868), spotbugs is run with `java -Xmx1900M`.

Milestone: M7 - Release 0.10 · David Diederich

---

# Impersonation token rework
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/31
2021-07-15T20:29:03Z · Sacha Brants

Rework to create a generic API that can be implemented by all providers.

Milestone: M7 - Release 0.10 · Diego Molteni

---

# Dataset Level Access
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/30
2021-07-15T20:39:47Z · Sacha Brants

Allow ACLs to be set at the dataset level.

Milestone: M7 - Release 0.10 · Varunkumar Manohar