OSDU Software issues: https://community.opengroup.org/groups/osdu/-/issues

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/109
Unsupported Feature in Dataset LS Get Endpoint Causing Test Failures on AWS and Anthos (2023-09-06, Pratiksha Shedge)

A new feature has been introduced for the dataset LS get endpoint, comprising the Search (to select a single SQL-like search parameter) and Select (to choose multiple fields for retrieval) query parameters. The API is expected to return a list of datasets based on the search and select query parameters. However, AWS and Anthos do not support this new feature for this endpoint, leading to test failures during pipeline runs.
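For illustration, a hypothetical call exercising the new query parameters (the host, endpoint path, and exact parameter names are assumptions based on the description above, not a confirmed API contract):

```python
# Hypothetical call to the dataset LS get endpoint with the new Search and
# Select query parameters; host, path, and parameter names are assumptions.
import requests

resp = requests.get(
    "https://<sdms-host>/api/seismic-store/v3/utility/ls",
    params={
        "sdpath": "sd://opendes/demo-subproject",
        "search": "name like 'survey%'",     # single SQL-like search parameter
        "select": "name,created_date,ctag",  # multiple fields to retrieve
    },
    headers={"Authorization": "Bearer <token>", "data-partition-id": "opendes"},
)
print(resp.json())  # expected: a list of datasets matching the search/select criteria
```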
Pipeline runs:
AWS: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/2200880
Anthos: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/jobs/2200882

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/81
Provide option to load Trajectory data from CSV source (2023-09-08, Debasis Chatterjee)

Often source data is available in CSV.
For now, Data Loader has an extra step to reformat existing data from CSV into JSON format.
It would be beneficial to provide this option for "Post data" (Wellbore Trajectory).
Typical use case.
Header row - indicating available fields such as Depth, Inclination, Azimuth.
Following rows contain actual trajectory data.
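A hypothetical CSV layout for such a trajectory file (column names are illustrative, not a proposed standard):

```
Depth,Inclination,Azimuth
0.0,0.00,0.0
152.4,0.25,118.3
304.8,1.10,121.7
```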
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/135
Schema bootstrap script failure (2023-09-05, Sachin Jaiswal)

Anyone trying to deploy M19 on a fresh environment will run into the issue below because of an incorrect sequence in the load_sequence file.

```
Error with kind osdu:wks:work-product-component--SeismicTraceData:1.5.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--SoilGasMonitoring:1.1.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--StorageFacility:1.2.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--StratigraphicColumn:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--StratigraphicColumnRankInterpretation:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--StratigraphicUnitInterpretation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--StructuralOrganizationInterpretation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--SubRepresentation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--SurveyProgram:1.2.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--TimeSeries:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--TubularAssembly:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--TubularComponent:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--TubularExternalComponent:1.1.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--TubularUmbilical:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--UnsealedSurfaceFramework:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--UnstructuredColumnLayerGridRepresentation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--UnstructuredGridRepresentation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--VelocityModeling:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--VoidageGroupInterpretation:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--Well:1.3.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellActivity:1.2.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellActivityProgram:1.1.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellBarrierElementTest:1.1.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--WellLog:1.4.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellPlanningWell:1.1.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellPlanningWellbore:1.1.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--Wellbore:1.4.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellboreArchitecture:1.1.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--WellboreIntervalSet:1.2.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--WellboreMarkerSet:1.4.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
Error with kind osdu:wks:master-data--WellboreOpening:1.2.0: Message: Invalid input, osdu:wks:AbstractMaster:1.2.0 not registered but provided as reference
Error with kind osdu:wks:work-product-component--WellboreTrajectory:1.3.0: Message: Invalid input, osdu:wks:AbstractWPCGroupType:1.2.0 not registered but provided as reference
##[error]Bash exited with code '1'.
```
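The fix would be to reorder the load_sequence file so that referenced abstract schemas register before the schemas that use them; an illustrative fragment (the exact file format is an assumption, not taken from the repository):

```json
[
  { "kind": "osdu:wks:AbstractMaster:1.2.0" },
  { "kind": "osdu:wks:AbstractWPCGroupType:1.2.0" },
  { "kind": "osdu:wks:master-data--Well:1.3.0" },
  { "kind": "osdu:wks:work-product-component--WellLog:1.4.0" }
]
```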
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/133
Elasticsearch licensing (2024-01-18, Chad Leong)

# Problem Statement
Currently, the Search service uses Elasticsearch [7.8.1](https://community.opengroup.org/osdu/platform/system/search-service/-/blob/master/pom.xml?ref_type=heads#L33). The version needs to be upgraded to provide stability, feature, and performance improvements.
Specifically, following the release of version 7.10.2 (https://mvnrepository.com/artifact/org.elasticsearch/elasticsearch-core/7.10.2), Elastic has transitioned its licensing from the Apache 2.0 license to the Server Side Public License (SSPL) for all future versions: https://mvnrepository.com/artifact/org.elasticsearch/elasticsearch-core.
OSDU software needs to be licensed using Apache 2.0. This change has raised concerns, particularly regarding compatibility issues with client bindings and providing future updates to Elasticsearch.
## Impact
There are two components to the search stack: the Elastic client bindings and the server.
- Client bindings are integral components of applications that facilitate seamless communication with our Elastic Search Service. These bindings have traditionally been Apache 2.0 compatible. The shift to SSPL raises compatibility concerns, potentially preventing the upgrade of client bindings.
- The Elastic server itself is used as a tool, so we don’t need to worry about Apache compatibility. Server-side upgrades are possible but may encounter a future technical barrier without client-binding upgrades.
## Objective
We need to address this licensing challenge and find an alternative that allows for a smooth transition. We are actively exploring options for an elastic alternative that can bridge the gap between client bindings and server upgrades.
**Option №1** is the OpenSearch Java client: https://opensearch.org/docs/latest/clients/java/
- https://aws.amazon.com/blogs/opensource/keeping-clients-of-opensearch-and-elasticsearch-compatible-with-open-source/
Pros:
- OpenSearch is an Elasticsearch fork, fully compatible with v7.10 (see https://opensearch.org/faq/#q1.8), so refactoring should be more or less straightforward.
- Easier to preserve existing features.
- It's possible to change clients in the services and keep Elasticsearch as the backend server.
Cons:
- Follow-up releases do not guarantee compatibility with the Elasticsearch API: https://opensearch.org/faq/#q1.9
Action items:
- Potentially bind CSPs to the Elasticsearch server v7.10 or force them to switch to the OpenSearch server.
- Switch Indexer and Search to use OpenSearch clients.
**Option №2** is an Elasticsearch client with an Apache license https://github.com/elastic/elasticsearch-java/
Pros:
- Possible to keep Elasticsearch as a backend.
- Later we could migrate to Elasticsearch 8.
Cons:
- Could require a bit more thorough migration for Search and Indexer, unlike OpenSearch. Since it's a different lib with different interfaces, we may need to rewrite a lot of code. In the meantime, OpenSearch has a fork of High-level-rest-client https://opensearch.org/docs/latest/clients/java-rest-high-level/ which could simplify migration to just swapping imports.
- Additionally, we should be aware that the Elasticsearch server's licensing could still pose an issue.
Action items:
- Migrate Indexer and Search to use Elasticsearch Apache client.
## Decisions
Option 2 seems to be a better long-term solution, with the possibility of keeping Elasticsearch as a backend. A separate migration strategy has been written here: https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/111

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/204
Add a mechanism to use an external application to get credentials (2023-09-08, Morten Ofstad)

In order to get credentials that require a user to log in, it will be useful to run a separate executable. This is e.g. how git works (global config sets credential.helper to point to an executable, can be configured per URL prefix, see Git - gitcredentials Documentation (git-scm.com)). Integrating this directly in OpenVDS makes it easy for other applications to take advantage of.
The suggested implementation will add new global keys (valid for all cloud providers), credential_helper and credential_helper_args, to the connection string format. If the credential_helper key is present, the executable it points to will be run with the args from credential_helper_args and the URL as arguments; the output will be parsed as a connection string and added to the remaining keys after removing the credential_helper and credential_helper_args keys. This allows other arguments like tolerance etc. to be passed on from the original connection string after using the credentials helper.
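A minimal sketch of the proposed flow (not the actual OpenVDS implementation; it assumes a simple `key=value;key=value` connection-string encoding):

```python
# Hypothetical credential-helper resolution, assuming a "key=value;key=value"
# connection-string format; helper behavior follows the description above.
import subprocess

def parse_connection_string(cs: str) -> dict:
    return dict(pair.split("=", 1) for pair in cs.split(";") if pair)

def resolve_credentials(connection_string: str, url: str) -> dict:
    keys = parse_connection_string(connection_string)
    helper = keys.pop("credential_helper", None)
    helper_args = keys.pop("credential_helper_args", "")
    if helper:
        # Run the helper with its configured args plus the URL; its stdout is
        # parsed as a connection string and merged with the remaining keys.
        output = subprocess.run(
            [helper, *helper_args.split(), url],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        keys.update(parse_connection_string(output))
    return keys
```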
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/239
RCA data - need efficient retrieval mechanism for all plugs of one wellbore (2024-01-12, Debasis Chatterjee)

![Kentish-Page453](/uploads/7431359d26bded4155a5df798e38fa57/Kentish-Page453.PNG)
Let's assume that somehow the Operator will build their own tool, or depend upon an ISV to provide one, to ingest data into multiple Rock Sample records (one per core plug) and multiple Rock Sample Analysis records (one per core plug), and also populate analysis data into Parquet by using the Domain API.
But the "ask" here is an efficient retrieval mechanism through one new API end point.
The response should array with each element showing plug number, depth, porosity, permeability and Grain density.
This will allow Operator and ISV's to efficiently use the RCA data in an application.
Such as display of core analysis data next to well log display. showing log porosity and core porosity side by side.
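A hypothetical shape for such a response (field names and units are illustrative, not an agreed RAFS DDMS contract):

```json
[
  {
    "PlugNumber": "12A",
    "Depth": 2450.5,
    "Porosity": 0.18,
    "Permeability": 125.0,
    "GrainDensity": 2.65
  }
]
```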
cc @esakkiprem

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/issues/26
Baremetal deployment - Minio is not deployed properly (2023-09-12, Do Dang)

I followed this [guide](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/blob/master/helm/osdu-infra-baremetal/README.md)
and used this command to install osdu baremetal `helm install -f custom-values.yaml osdu-baremetal oci://community.opengroup.org:5555/osdu/platform/deployment-and-operations/infra-gcp-provisioning/gc-helm/osdu-gc-baremetal`
custom-values.yaml
```
global:
  domain: "osdu.local"
  # Configuration parameter to switch between HTTP and HTTPS mode for external endpoint.
  # Default - HTTP. HTTPS requires additional configuration
  useHttps: false
keycloak:
  auth:
    # Fill in variable value, the value should contain only alphanumerical characters and should be at least 8 symbols
    adminPassword: "abc123456"
  # This value should be set to 'none' if https is not used (global.useHttps = false), otherwise the value needs to be changed to 'passthrough' if https is used (global.useHttps = true)
  proxy: none
minio:
  auth:
    # Fill in variable value
    rootPassword: "abc123456"
  persistence:
    size: 15Gi
  # This value should be set to 'true' when using self-signed certificates or installing on minikube and docker desktop
  #useInternalServerUrl: true
postgresql:
  global:
    postgresql:
      auth:
        # Fill in variable value
        postgresPassword: "abc123456"
  persistence:
    size: 8Gi
airflow:
  externalDatabase:
    # Fill in variable value
    password: "abc123456"
  auth:
    # Fill in variable value
    password: "abc123456"
elasticsearch:
  security:
    # Fill in variable value
    elasticPassword: "abc123456"
  master:
    persistence:
      size: 8Gi
  data:
    persistence:
      size: 8Gi
rabbitmq:
  auth:
    # Fill in variable value
    password: "abc123456"
```
However, it seems that Minio is not deployed properly.
When I log in to the Minio endpoint `http://minio.osdu.local/login`, it shows `Post "http://s3.osdu.local/": dial tcp: lookup s3.osdu.local on 10.96.0.10:53: no such host`
Does anyone know what is the issue?

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/317
Error: Plugin did not respond (2023-09-11, Yukuo Wang)

We captured several terraform plan failures recently.

```
│ Error: Plugin did not respond
│
│ with module.system_storage_account.azurerm_storage_account.main,
│ on ../../../modules/providers/azure/storage-account/main.tf line 19, in resource "azurerm_storage_account" "main":
│ 19: resource "azurerm_storage_account" "main" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more
│ details.
╷
│ Error: Request cancelled
│
│ with module.keyvault_policy.azurerm_key_vault_access_policy.keyvault[0],
│ on ../../../modules/providers/azure/keyvault-policy/main.tf line 15, in resource "azurerm_key_vault_access_policy" "keyvault":
│ 15: resource "azurerm_key_vault_access_policy" "keyvault" {
│
│ The plugin.(*GRPCProvider).UpgradeResourceState request was cancelled.
╵
```

Also with stack trace logs:

```
Stack trace from the terraform-provider-azurerm_v3.39.1_x5 plugin:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x4c12582]
goroutine 1950 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/containers.resourceKubernetesClusterRead(0xc001d94480, {0x5d01ea0?, 0xc000737000})
github.com/hashicorp/terraform-provider-azurerm/internal/services/containers/kubernetes_cluster_resource.go:2060 +0x9c2
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x6f8e340?, {0x6f8e340?, 0xc001fd32c0?}, 0xd?, {0x5d01ea0?, 0xc000737000?})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/resource.go:712 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc000b56b60, {0x6f8e340, 0xc001fd32c0}, 0xc001f90750, {0x5d01ea0, 0xc000737000})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/resource.go:1015 +0x585
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0xc00152f980, {0x6f8e340?, 0xc001fd2ea0?}, 0xc001c5a100)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/grpc_provider.go:613 +0x4a5
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0xc001930320, {0x6f8e340?, 0xc001fd2780?}, 0xc001127140)
github.com/hashicorp/terraform-plugin-go@v0.14.1/tfprotov5/tf5server/server.go:748 +0x4b1
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0x63c6d80?, 0xc001930320}, {0x6f8e340, 0xc001fd2780}, 0xc001347b20, 0x0)
github.com/hashicorp/terraform-plugin-go@v0.14.1/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:349 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00027a000, {0x6f9e380, 0xc000f9e000}, 0xc002595b00, 0xc001993530, 0xb246a90, 0x0)
google.golang.org/grpc@v1.50.1/server.go:1340 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc00027a000, {0x6f9e380, 0xc000f9e000}, 0xc002595b00, 0x0)
google.golang.org/grpc@v1.50.1/server.go:1713 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/grpc@v1.50.1/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/grpc@v1.50.1/server.go:963 +0x28a
Error: The terraform-provider-azurerm_v3.39.1_x5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
```

While troubleshooting this, we noticed that there is a bug fix:
Fix nil panic by correcting nil check expression: https://github.com/hashicorp/terraform-provider-azurerm/pull/21850
This fix is included in terraform-provider-azurerm v3.57.0 (May 19, 2023):
https://github.com/hashicorp/terraform-provider-azurerm/blob/v3.57.0/CHANGELOG.md
BUG FIXES:
data.azurerm_kubernetes_cluster - prevent a panic when some values returned are nil (#21850)

https://community.opengroup.org/osdu/platform/system/dataset/-/issues/60
health-check-api is missing (2023-11-06, Chad Leong)

In all [core services](https://community.opengroup.org/osdu/documentation/-/wikis/Core-Services-API-Docs), the [health check APIs](https://osdu-ship.msft-osdu-test.org/api/file/v2/swagger-ui/index.html#/health-check-api) are provided to do a quick check on the service health.
Two endpoints are missing today in the [dataset API](https://osdu-ship.msft-osdu-test.org/api/dataset/v1/swagger-ui/index.html); only `data-partition-id` is needed in the header:
```
/liveness_check
/readiness_check
```
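For comparison, a hypothetical probe of the two endpoints as they exist on the other core services (host, token, and partition are assumptions):

```python
# Hypothetical liveness/readiness probe; the dataset service would expose
# the same two endpoints the other core services already provide.
import requests

BASE = "https://<osdu-host>/api/dataset/v1"
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "opendes"}

for probe in ("liveness_check", "readiness_check"):
    r = requests.get(f"{BASE}/{probe}", headers=HEADERS)
    print(probe, r.status_code)
```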
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/111
[ADR] Synching SDMS V4 datasets in SDMS V3 (2023-09-29, Diego Molteni)

# Introduction
We need a solution to make datasets ingested in SDMS V4 visible to and consumable by SDMS V3.
The purpose of this ADR is to describe how to enable a synchronization mechanism that allows users of SDMS V3 to consume seismic dataset entities ingested in SDMS V4, even though the two versions of the system have entirely different architectures.
# Status
* [x] Initiated
* [x] Proposed
* [ ] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
The Seismic Data Management Service V4 (SDMS V4) stores and manages data types as defined by the Open Subsurface Data Universe (OSDU) Authority. The APIs (Application Programming Interfaces) provide robust data type checks and are fully integrated with the OSDU policy service. The goal is to minimize ambiguity in the authorization model and facilitate straightforward adoption through a consistent usage pattern. In contrast, the V3 version of the service defines, saves, and manages proprietary metadata records, interacts directly with the entitlement service, and organizes records into collections/data-groups named subprojects.
<div align="center">
<br/><img src="/uploads/5e1a58219ca35be9da530b0eba2ed9fa/arch-diagram.png"
alt="sdms-architectural-diagram"
style="display: block; margin: 0 auto" /><br/>
</div>
The key difference between the two versions of the service lies in the form of the record. In the case of the OSDU record adopted by SDMS V4, it is entirely managed by the storage service. However, the V3 metadata has its own format, and to locate a dataset ingested in SDMS V4 via V3, it is necessary to create a V3 proprietary record. The following section will describe how an OSDU record can be translated into a V3 record to enable the synchronization process between the systems.
# Proposed solution
Create a new service capable of detecting when a new dataset is registered in SDMS V4 and creating the corresponding record in SDMS V3.
## Overview
As previously noted, in SDMS V3, the dataset descriptor has a proprietary structure and is maintained in an internal catalog. However, in SDMS V4, the descriptor is a standard OSDU record managed by the storage service. To make a dataset ingested in SDMS V4 visible in SDMS V3, we must create a corresponding V3 metadata record. This section describes how an SDMS V3 record can be created, using the OSDU record details, to make a dataset ingested in V4 visible in V3.
### The SDMS V3 dataset descriptor
```json
{
"id": "the record id <used as key in the service journal catalogue>",
"data": {
"name": "the dataset name",
"tenant": "the tenant name",
"subproject": "the subproject name",
"path": "the dataset virtual folder path",
"acls": {
"admins": "list of entitlement groups with admin rights",
"viewers": "list of entitlement groups with viewer rights"
},
"ltag": "the associated legal tag",
"created-by": "the id of the user who ingested the dataset",
"created_date": "the date and time when the dataset was ingested",
"last_modified_date": "the date and time when the dataset was last modified",
"gcsurl": "the storage uri string where bulks are saved",
"ctag": "a coherency hash tag that changes every time this record is modified",
"readonly": "the access mode level",
"filemetadata": {
"nobjects": "the number of blobs composing the dataset",
"size": "the dataset bulk total size",
"type": "the type of the manifest",
"checksum": "the dataset bulk checksum",
"tier_class": "the dataset storage tier class"
},
"computed_size": "the computed dataset size",
"computed_size_date": "the date and time when the dataset size was computed",
"seismicmeta_guid": "the associated OSDU record id"
}
}
```
### The SDMS V4 record (simplified)
```json
{
"kind": "the osdu dataset kind",
"acl": {
"viewers": "list of entitlement groups with viewer rights",
"owners": "list of entitlement groups with admin rights",
},
"legal": {
"legaltags": "the list of legal tags",
"otherRelevantDataCountries": "the list of data countries",
"status": "the legal status"
},
"data": {
"Name": "the dataset name",
"Description": "the dataset description",
"TotalSize": "the dataset total size",
"DatasetProperties": {
"FileCollectionPath": "the dataset virtual folder path",
"FileSourceInfos": [
{
"FileSource": "the file component source",
"PreloadFilePath": "the file component origin",
"Name": "the file component name",
"FileSize": "the file component size",
"Checksum": "the file component checksum",
"ChecksumAlgorithm": "the checksum algorithm"
}
],
"Checksum": "the dataset checksum"
}
}
}
}
```
### ADR symbols definitions
To make it simpler for the reader to understand the examples in the following sections, we define the following symbols:
| Symbols | Description |
| --- | --- |
| RV3 | the SDMS V3 record |
| RV4 | the SDMS V4 record |
| RV4.DatasetProperties | the record_v4.data.DatasetProperties element |
| RV4.FileSourceInfos | the record_v4.data.DatasetProperties.FileSourceInfos element |
### The SDMS V3 record generation in detail
- `RV3.id`
The ID in SDMS V3 is autogenerated based on the values composing the SDMS V3 URI: `tenant`, `subproject`, `path` and `name`.
```python
hash_obj = hashlib.sha512()
hash_obj.update((RV3.data.path + RV3.data.name).encode('utf-8'))
hashed_value = hash_obj.hexdigest()
RV3.id = 'ds-' + RV3.data.tenant + '-' + RV3.data.subproject + '-' + hashed_value
```
- `RV3.data.name`
The dataset name.
```python
if 'Name' in RV4.data:
    RV3.data.name = RV4.data.Name
elif len(RV4.FileSourceInfos) == 1 and 'Name' in RV4.FileSourceInfos[0]:
    RV3.data.name = RV4.FileSourceInfos[0].Name
else:
    RV3.data.name = RV4.id
```
- `RV3.data.tenant`
The dataset tenant name matches the data-partition-id in the OSDU model. This specific information cannot be automatically detected in a V4 record but can be easily detected by the syncing process.
```python
RV3.data.tenant = data_partition_id
```
- `RV3.data.subproject`
The dataset resource group name (referred to as subproject in SDMS V3) must exist in SDMS V3 with the `access_policy` property set to `dataset`. Essentially, each partition in SDMS V3 should have a default data group where all SDMS V4 datasets can be collected. This required data group can be automatically created by the syncing process. The name of the data group will default to `syncv4`.
```python
RV3.data.subproject = "syncv4"
```
- `RV3.data.path`
The dataset virtual path represents the logical folder structure in the data group (subproject) where the dataset is stored.
```python
RV3.data.path = RV4.DatasetProperties.FileCollectionPath
```
- `RV3.data.acls`
The Access Control List (ACL) defines the list of users with admin and viewer rights. The only difference is that in the SDMS V3 record, the `owners` list is named `admins`, while the `viewers` list has matching names.
```python
RV3.data.acls.admins = RV4.acls.owners
RV3.data.acls.viewers = RV4.acls.viewers
```
- `RV3.data.ltag`
In SDMS V3, legal tag information is represented by a unique value, whereas in SDMS V4, it is represented as a list. To simplify the record composition, we select the first valid legal tag from the V4 record list. If no valid legal tags are found in the V4 record, we should always set an invalid legal tag in V3. If this is not set, V3 will inherit a valid legal tag from the data group, risking the possibility of a non-accessible record in V4 being addressable in V3.
```python
RV3.data.ltag = None
for tag in RV4.legal.legaltags:
    if isValid(tag):
        RV3.data.ltag = tag
        break
if RV3.data.ltag is None:
    RV3.data.ltag = RV4.legal.legaltags[0]
```
- `RV3.data.created-by`
The user who created/ingested the dataset.
```python
RV3.data['created-by'] = RV4.createUser
```
- `RV3.data.created_date`
The timestamp when the dataset was created/ingested.
```python
RV3.data.created_date = RV4.createTime
```
- `RV3.data.last_modified_date`
The timestamp when the dataset was last modified.
```python
RV3.data.last_modified_date = RV4.modifyTime
```
- `RV3.data.gcsurl`
The storage ID of the container/bucket where dataset bulk files are stored. This value is automatically generated based on the record ID value.
```python
hash_obj = hashlib.sha256()
hash_obj.update(RV4.id.encode('utf-8'))
RV3.data.gcsurl = hash_obj.hexdigest()[:-1]
```
- `RV3.data.ctag`
The Coherency Tag (ctag) is a hash code associated with the dataset descriptor that changes every time the metadata is updated. This property exists only in SDMS V3, and it is autogenerated.
```python
alphabet = string.ascii_letters + string.digits
RV3.data.ctag = ''.join(secrets.choice(alphabet) for _ in range(16))
```
- `RV3.data.readonly`
The `readonly` property defines the dataset's status regarding readability. If set to `false`, the dataset can be accessed in both read and write modes. If set to `true`, the dataset can only be accessed in read mode. In SDMS V4, a dataset cannot be marked as `readonly`, and for this reason, in the generated V3 record, the value will be defaulted to `false`.
```python
RV3.data.readonly = False
```
- `RV3.data.filemetadata`
The `filemetadata`, also known as the dataset manifest, is an object containing information about how the dataset's bulks are stored in the cloud storage resource. The only supported manifest in SDMS V3 is the `GENERIC` one, which requires that all objects composing the dataset be saved in sequential order using the `0` to `N-1` naming convention, where `N` is the number of objects. The fields composing the dataset manifest are:
- `nobjects`: the number of objects composing the dataset. This value can be computed by counting the objects composing the dataset.
- `size`: the dataset total size, computed by summing the sizes of all objects composing the dataset. Alternatively, if it exists, `RV4.data.TotalSize` can be used, but computing it will provide a better and clearer result.
- `type`: the manifest type, with `GENERIC` the only one supported.
- `checksum`: the dataset checksum.
- `tier_class`: the dataset storage tiering class.
```python
blob_list = getBlobClient(connectionString)
size = 0
tier_class = None
objects_num = 0
error = False
for blob in blob_list:
    # the GENERIC manifest requires sequential object names 0 .. N-1
    if blob.name != str(objects_num):
        error = True
    if tier_class is None:
        tier_class = blob.blob_tier
    objects_num = objects_num + 1
    size = size + blob.size
if not error:
    RV3.data.filemetadata.type = 'GENERIC'
    RV3.data.filemetadata.nobjects = objects_num
    RV3.data.filemetadata.size = size
    if 'Checksum' in RV4.DatasetProperties:
        RV3.data.filemetadata.checksum = RV4.DatasetProperties.Checksum
    RV3.data.filemetadata.tier_class = tier_class
else:
    RV3.data.filemetadata = None
```
- `RV3.data.computed_size`
The `computed_size` is generated by SDMS V3 when the `/size` endpoint is triggered. This endpoint calculates the size of the datasets by summing the sizes of all composing objects. This field has been introduced because the dataset filemetadata object is an optional field created by client applications, such as sdapi or sdutil, and can only be trusted by them.
```python
blob_list = getBlobClient(connectionString)
size = 0
for blob in blob_list:
    size = size + blob.size
RV3.data.computed_size = size
```
- `RV3.data.computed_size_date`
This is the timestamp of when the dataset size has been computed by SDMS V3.
```python
RV3.data.computed_size_date = str(datetime.datetime.now())
```
- `RV3.data.seismicmeta_guid`
The `seismicmeta_guid` is the ID of a record linked with the SDMS V3 dataset. This can be associated with the SDMS V4 record so all extra properties can be downloaded by consumer applications.
```python
RV3.data.seismicmeta_guid = RV4.id
```
### The Script to validate the proposed conversion
- The script [sync-script.py](/uploads/2421d4b04fe2a6fdd560f1df321e5d36/sync-script.py) is provided with this ADR (for testing purposes only) to demonstrate and validate the synching flow between SDMS V4 and V3:
- Create a random data file of 16MB and compute the checksum
- Fill an OSDU record and register it in SDMS V4
- Upload the 16MB file as 4 objects of 4MB each using the connection string generated via SDMS V4
- Generate a V3 metadata record and register it in SDMS V3
- Ensure the dataset in SDMS V3 can be located after ingestion
- Download all objects using the connection string generated via SDMS V3
- Compare the initial object with the downloaded one to ensure they match
#### Example of an SDMS V4 ingested record
```json
{
"id": "opendes:dataset--FileCollection.SEGY:7fe06451787641c4953a06a63e44967a",
"kind": "osdu:wks:dataset--FileCollection.SEGY: 1.1.0",
"version": 1694519237996696,
"acl": {
"viewers": [
"data.sdms.opendes.tdata.fe6730f9-bb3d-46a3-9f03-3d529e32360d.viewer@opendes.domain.com"
],
"owners": [
"data.sdms.opendes.tdata.fe6730f9-bb3d-46a3-9f03-3d529e32360d.admin@opendes.domain.com"
]
},
"legal": {
"legaltags": [
"ltag-seistore-test-01"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"modifyUser": "test-user@domain.com",
"modifyTime": "2023-09-07T11:47:18.625Z",
"createUser": "test-user@domain.com",
"createTime": "2023-09-07T07:17:58.443Z",
"data": {
"Name": "data-sync.segy",
"TotalSize": "16777216",
"Description": "SDMS synching test record",
"DatasetProperties": {
"FileCollectionPath": "/f1/f2/f3/",
"FileSourceInfos": [
{
"FileSource": "data-sync.segy",
"Name": "data-sync.segy",
"FileSize": "16777216",
"Checksum": "8ce2025f9b27e3017ab15f15b261d599",
"ChecksumAlgorithm": "MD5"
}
],
"Checksum": "8ce2025f9b27e3017ab15f15b261d599"
}
}
}
```
#### Example of a generated SDMS V3 metadata
```json
{
"id": "ds-opendes-syncv4-c0699ac77bc64a5772ac7f6f455ce5a251e3686d87d26e91df2ecc73e7bfdf4b0a16ac757c2ec227c1a6814d097a0b6b759a01dc52753754a0a18dfaea53c7d0",
"data": {
"name": "data-sync.segy",
"tenant": "opendes",
"subproject": "syncv4",
"path": "/f1/f2/f3/",
"acls": {
"admins": [
"data.sdms.opendes.tdata.fe6730f9-bb3d-46a3-9f03-3d529e32360d.admin@opendes.domain.com"
],
"viewers": [
"data.sdms.opendes.tdata.fe6730f9-bb3d-46a3-9f03-3d529e32360d.viewer@opendes.domain.com"
]
},
"ltag": "ltag-seistore-test-01",
"created-by": "test-user@domain.com",
"created_date": "2023-09-07T07:17:58.443Z",
"last_modified_date": "2023-09-07T11:47:18.625Z",
"gcsurl": "a5993feef91df715c176452fe1a26d04ca70e88d0ccff268e92cd74c76dde61",
"ctag": "9STTAfiKl4iukKbp",
"readonly": "false",
"filemetadata": {
"nobjects": 4,
"size": 16777216,
"type": "GENERIC",
"checksum": "8ce2025f9b27e3017ab15f15b261d599",
"tier_class": "Hot"
},
"computed_size": 16777216,
"computed_size_date": "2023-09-12 13:47:45.877142",
"seismicmeta_guid": "opendes:dataset--FileCollection.SEGY:7fe06451787641c4953a06a63e44967a"
}
}
```
### SDMS V4 to V3 Synching Automation
The preceding section explains the process of creating a metadata descriptor for SDMS V3 using an OSDU record. This metadata descriptor enables access to a dataset ingested in SDMS V4 through SDMS V3.
In order to automate the process, we will deploy a new service called the `sdms-sync-service`, which will be responsible for generating an SDMS V3 record every time a new dataset is registered in SDMS V4. When a dataset is registered in SDMS V4, a message will be pushed into a Redis queue `insert-synch-v4:{record-id}:{partition}:{other-required-params}`. The new service will consume the messages from the Redis queue and initiate the synching process:
- retrieve the OSDU record from the storage service
- generate the corresponding SDMS V3 metadata descriptor
- save the generated metadata in the SDMS V3 journal (see the sketch below)
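An illustrative consumer loop for the proposed `sdms-sync-service`; the queue name, message layout, and helper functions are assumptions sketched from this ADR, not the actual implementation:

```python
# Illustrative sync-service consumer; queue name, message layout, and
# helpers are assumptions, not the implementation.
import redis

def get_osdu_record(record_id, partition):
    raise NotImplementedError  # call the storage service records API

def to_v3_record(rv4, partition):
    raise NotImplementedError  # apply the field mapping described in this ADR

def journal_put(rv3):
    raise NotImplementedError  # upsert the descriptor in the SDMS V3 journal

r = redis.Redis()
while True:
    _, raw = r.blpop("sdms-sync-queue")             # queue name assumed
    msg = raw.decode()                              # "insert-synch-v4:{record-id}:{partition}"
    action, _, rest = msg.partition(":")
    record_id, _, partition = rest.rpartition(":")  # record ids contain ':', so split from the right
    if action == "insert-synch-v4":
        journal_put(to_v3_record(get_osdu_record(record_id, partition), partition))
```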
<div align="center">
<br/><img src="/uploads/b2d6eb24b28516feb0908e5ef7232a2e/sdms-sync-service.png"
alt="sdms-sync-service"
style="display: block; margin: 0 auto" /><br/>
</div>
### Details
- If a dataset is patched in SDMS V4, the service should push an `insert` message into the Redis queue:
- If the previous `insert` message is still in the queue (not yet consumed by the sync service), the existing entry will be overwritten in the queue, and the sync service will create the updated one.
- If the previous version was already synced, when the new message is consumed, the updated record will be created, and because the generated key is identical, it will overwrite the existing record in the journal.
- If a dataset is deleted in SDMS V4, the service should push a `delete` message into the Redis queue.
- When the delete message is consumed, the sync service will generate only the V3 record key and remove the entry from the journal.
- If the `insert` message has not yet been consumed from the queue, then when the sync service consumes it, it should check whether a `delete` message is also present for the same record. If one is found in the queue, the sync service will skip the sync process and remove both the `insert` and `delete` entries from the Redis queue.
### Limitations
When a dataset is registered in V4 via a client app, the record is created instantaneously, while uploading the bulk data into the storage resource takes longer. If the `insert` message is consumed before the bulk data is uploaded, the file manifest cannot be computed due to missing objects. To address this issue, we can enable a background process in the `sync-service` that loops over the created SDMS V3 records and updates the manifest in cases where it does not exist or when the last modified time in the corresponding SDMS V4 record is greater than the one reported in the V3 entry. This approach should be re-discussed with the community to find an optimal strategy to apply.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/111
ADR: Search backend (Elasticsearch) Upgrade (2024-02-05, Neelesh Thakur)

## Status
- [X] Proposed
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Background
Elasticsearch serves as the backend for the indexer and search services. To communicate with the Elasticsearch server (deployed and managed independently), these services use the Elasticsearch [Java high level rest API](https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html) client SDK. The officially supported OSDU Data Platform Elasticsearch version for server & client SDK is [v7.8.1](https://www.elastic.co/blog/elastic-stack-7-8-1-released). The current version is quite old, and already beyond [end of life](https://www.elastic.co/support/eol) support. A new major version (v8.x), released in April 2022, is also available. Furthermore, an upgrade to Elasticsearch will not only resolve issues and offer new features and capabilities, but will also save costs. Here are a few reasons the Elasticsearch client & server should be updated:
- [Log4J vulnerability](https://blog.qualys.com/vulnerabilities-threat-research/2021/12/10/apache-log4j2-zero-day-exploited-in-the-wild-log4shell) discovered in December 2021 forced all CSPs to update their Elasticsearch server version. At this time, all CSPs are on different server versions e.g. Azure v7.17.x, IBM v7.11.x etc. Even though Elasticsearch promises not to introduce any breaking changes within a major version, we have found issues in the past. Ideally all CSPs should be on the same client & server versions to avoid any potential issues.
Community effort on [Reference Implementation](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/architecture-decision-records/-/blob/main/0006-osdu-will-have-a-reference-implementation.md?ref_type=heads) gives us a good opportunity to upgrade and align Elasticsearch client and server version.
- Elasticsearch v7.8.1 reached end of life some time back. The [officially supported](https://www.elastic.co/support/eol) version for Elasticsearch v7 is v7.17.x or higher. If an issue is found with the client SDK or server, the fix is usually available in the most recent version.
- Elasticsearch has launched many new versions past v7.8.1 with several improvements & new features, some notable ones in v8.x are mentioned below:
- Elasticsearch v8.3.x has [removed](https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html#shard-count-recommendation) 1k shard count per node limitations. OSDU DD Definition Team has introduced several new schemas over the course of few milestone releases. On Elasticsearch, each schema index generates two shards. Currently, a single node in an elasticsearch instance can only hold up to 1K shards. A small or medium-sized Elasticsearch cluster can quickly run out of shard capacity with so many new schemas.
- Reduced resource requirements via [memory heap reductions](https://www.elastic.co/blog/significantly-decrease-your-elasticsearch-heap-memory-usage). This can result in lowered customers’ total cost of ownership. Added support for the [ARM architecture](https://www.elastic.co/blog/whats-new-elasticsearch-7-12-0-put-a-search-box-on-s3), it offers 20% better performance while being 10% cheaper than x86-64. Introduced novel ways to use less storage by decoupling compute from storage with a new [frozen tier and searchable snapshots](https://www.elastic.co/blog/whats-new-elasticsearch-7-10-0-searchable-snapshots-store-more-for-less).
- Improved indexing latency of several data types including [geo-points, geo-shapes](https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-highlights.html#_faster_indexing_of_geo_point_geo_shape_and_range_fields) etc. [Enhanced error messages](https://issues.apache.org/jira/browse/LUCENE-9538) on invalid geo-shape indexing. It can now provide more meaningful messages capturing issues with shape, rather a generic messages in current version. Several new geo queries (e.g. [geo-grid query](https://www.elastic.co/guide/en/elasticsearch/reference/8.3/release-highlights.html#new_geo_grid_query) etc.), aggregations (e.g. [cartesian-centroid](https://www.elastic.co/guide/en/elasticsearch/reference/8.6/release-highlights.html#support_cartesian_centroid_aggregation_over_points_shapes), [geo-hex](https://www.elastic.co/guide/en/elasticsearch/reference/8.7/release-highlights.html#geohex_aggregations_on_both_geo_point_geo_shape_fields) aggregation over points and shapes etc.) are also introduced.
- Introduced a new [health API](https://www.elastic.co/guide/en/elasticsearch/reference/8.7/release-highlights.html#health_api_generally_available) designed to report the health of the cluster. The new API offers a detailed report that can include a precise diagnosis and a solution, as well as a high level overview of the cluster health. The operational teams can benefit greatly from this API.
- Released a full suite of native [vector search](https://www.elastic.co/what-is/vector-search) via [kNN search](https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-highlights.html#_new_knn_search_api). It adds support for natural language processing (NLP) models directly into Elasticsearch. Users can now perform named entity recognition, sentiment analysis, text classification, and more directly in Elasticsearch — without requiring additional components or coding. Elasticsearch v8.x also includes native support for [approximate nearest neighbor (ANN)](http://www.elastic.co/blog/introducing-approximate-nearest-neighbor-search-in-elasticsearch-8-0) search — making it possible to compare vector-based queries with a vector-based document corpus with speed and at scale.
## Proposal
Any Elasticsearch upgrade will require coordination with the community and CSPs. This can be very time consuming. Instead of just upgrading Elasticsearch to the latest v7.17.x, we should upgrade to v8.10.0 (or the highest released v8.x) to minimize the disruption and avoid repeating this exercise very soon. Since the last major version of Elasticsearch (v8) was released 18 months ago, once v9 is released the entire v7 (v7.17.x) family will be deprecated, as stated in the [support documentation](https://www.elastic.co/support/eol).
We should breakdown upgrade into two parts:
#### Latest v7.17.x Upgrade
1. Take back up (snapshot) of the data. We cannot roll back to an earlier version unless we have snapshot.
1. Upgrade Elasticsearch server to latest v7.17.13 (or highest available v7.17.x).
1. Replace Indexer & Search services [Java high level rest API](https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html) client SDK with new [Java API client SDK](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html). Elasticsearch has [changed](https://www.elastic.co/pricing/faq/licensing) **Java high level rest API** client SDK's license in v7.10.2 from Apache 2.0 to [SSPL](https://www.mongodb.com/licensing/server-side-public-license). New license is not preffered license for OSDU Data Platform as explained in the [issue](https://community.opengroup.org/osdu/platform/system/search-service/-/issues/133).
[Java API client SDK](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html) with [Apache 2.0](https://github.com/elastic/elasticsearch-java/) license is available v7.15.0 onwards (including v.8.x). Along similar timeline [Java high level rest API](https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html) has been [deprecated](https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html) in favor of [Java API client SDK](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html).
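A quick way to verify the server version before and after each step (host and credentials are assumptions; the root endpoint reporting `version.number` is standard Elasticsearch behavior):

```python
# Sanity-check the running Elasticsearch server version; host and
# credentials are assumptions for this sketch.
import requests

info = requests.get("https://<es-host>:9200", auth=("elastic", "<password>")).json()
print(info["version"]["number"])  # expect 7.17.x after this phase, 8.10.x after the next
```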
#### Latest v8.10.x Upgrade
1. Complete client and server upgrade to latest v7.17.x as stated above.
1. Use the Kibana [Upgrade Assistant](https://www.elastic.co/guide/en/kibana/7.17/upgrade-assistant.html) to prepare for upgrade from v7.17 to v8.10.0. The Upgrade Assistant identifies deprecated settings and guides users through resolving issues.
1. Review the deprecation logs from the Upgrade Assistant.
1. Review breaking changes including breaking changes for each minor v8.x release up to v8.10.0.
1. Make the recommended changes to ensure that applications/APIs continue to operate as expected after the upgrade.
1. Take a current snapshot before server upgrade is started.
1. Upgrade Elasticsearch server to latest v8.10.0 (or highest available v8.x).
1. Upgrade Indexer and Search service [Java API client SDK](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html) to v8.10.0 (or highest available v8.x).
Elasticsearch upgrade recommendations from v7.x to v8.x can be found [here](https://www.elastic.co/guide/en/elastic-stack/8.10/upgrading-elastic-stack.html#prepare-to-upgrade).

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/lib/wellbore-cloud/wellbore-gcp-lib/-/issues/1
Add multipartition support (2023-09-13, Yan Sushchynski (EPAM))

There is no multipartition support for the `GC` implementation. It causes a few problems:
1. We are forced to specify a GC project id
2. We have to use default bucket names
3. We can't separate the data in different partitions

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/112
[ADR] Synching SDMS V3 datasets in SDMS V4 (2024-02-28, Diego Molteni)

# Introduction
We need a solution to make datasets ingested in SDMS V3 visible to and consumable by SDMS V4.
The purpose of this ADR is to describe how to enable a synchronization mechanism that allows users of SDMS V4 to consume seismic dataset entities ingested in SDMS V3 via client applications, even though the two versions of the system have entirely different architectures.
# Status
* [x] Initiated
* [x] Proposed
* [ ] Under Review
* [ ] Approved
* [ ] Rejected
# Problem statement
The Seismic Data Management Service V4 (SDMS V4) stores and manages data types as defined by the Open Subsurface Data Universe (OSDU) Authority. The APIs (Application Programming Interfaces) provide robust data type checks and are fully integrated with the OSDU policy service. The goal is to minimize ambiguity in the authorization model and facilitate straightforward adoption through a consistent usage pattern. In contrast, the V3 version of the service defines, saves, and manages proprietary metadata records, interacts directly with the entitlement service, and organizes records into collections/data-groups named subprojects.
<div align="center">
<br/><img src="/uploads/5e1a58219ca35be9da530b0eba2ed9fa/arch-diagram.png"
alt="sdms-architectural-diagram"
style="display: block; margin: 0 auto;"/><br/>
</div>
The key difference between the two versions of the service lies in how the cloud storage URI is generated. In SDMS V4 it is generated starting from the record-id value, while in SDMS V3 the generated URI is a random UUID.
# Proposed solution
Update SDMS V4 by adding the capability to correctly retrieve the storage location for the dataset's bulk data if the dataset was ingested via SDMS V3.
## Scenarios
When a dataset is ingested in SDMS V3 from a seismic application, the latter also creates an OSDU Bulk record linked to a Work Product Component, as shown in the following diagram:
<div align="center">
<br/><img src="/uploads/3d73191098963a80675c2ed6e96472cc/image.png"
alt="sdms-architectural-diagram"
style="display: block; margin: 0 auto; height: 30%; width: 30%" /><br/>
</div>
The seismic application also saves the SDMS V3 URI (also known as the `sdpath`) in the `FileSourceInfo` property of the created OSDU Bulk record. This is done to later facilitate communication of the URI to SDMS V3 for retrieving the storage connection string required to access the dataset's bulk data.
### Example of SDMS V3 dataset metadata
```json
{
"name": "test-data.zgy",
"tenant": "partition",
"subproject": "subproject",
"path": "/",
"ltag": "test-legal",
"created_by": "test-user@slb.com",
"last_modified_date": "Tue Sep 12 2023 11:04:29 GMT+0000 (Coordinated Universal Time)",
"created_date": "Tue Sep 12 10:54:10 GMT+0000 (Coordinated Universal Time)",
"gcsurl": "ss-weu-xkz32bjwg2425gn/bdf36c8a-3c62-3151-12b7-227af4727520",
"ctag": "sMTz0oWeId1nOnrx",
"readonly": true,
"sbit": null,
"sbit_count": 0,
"filemetadata": {
"type": "GENERIC",
"size": 1544552448,
"nobjects": 47
},
"seismicmeta_guid": "partition:work-product-component--SeismicTraceData:326bac9a-1fb2-5c73-9c64-6ca122c5025a",
"access_policy": "uniform"
}
```
### Example of OSDU storage associated Work Product Component
```json
{
"id": "partition:work-product-component--SeismicTraceData:326bac9a-1fb2-5c73-9c64-6ca122c5025",
"kind": "osdu:wks:work-product-component--SeismicTraceData:1.3.0",
"version": 1685099234631439,
"acl": {
"viewers": [
"data.test@domain.slb.com"
],
"owners": [
"data.test@domain.com"
]
},
"legal": {
"legaltags": [
"test-legal"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"data": {
"BinGridID": "partition:work-product-component--SeismicBinGrid:2a714f2b12aa346d16a08c5a2f4e157e:",
"Datasets": [
"partition:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55"
],
"DDMSDatasets": [
"urn:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55"
],
"Name": "test-data.zgy",
"Source": "osdu",
"SubmitterName": "test-user@domain.com"
},
"createUser": "test-user@domain.com",
"createTime": "2023-09-12T11:04:30.321Z",
"modifyUser": "test-user@domain.com",
"modifyTime": "2023-09-12T18:09:12.703Z"
}
```
### Example of OSDU storage associated File Collection
```json
{
"id": "partition:dataset--FileCollection.Slb.OpenZGY:1de532c2-4d1b-5316-ba4a-422342321d55",
"version": "4426199321664216",
"kind": "osdu:wks:dataset--FileCollection.Slb.OpenZGY:1.0.0",
"acl": {
"viewers": [
"data.test@domain.slb.com"
],
"owners": [
"data.test@domain.com"
]
},
"legal": {
"legaltags": [
"test-legal"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "test-user@domain.com",
"createTime": "2023-09-12T11:04:02.705Z",
"data": {
"Endian": "BIG",
"SEGYRevision": "rev 1",
"TotalSize": "1544552448",
"Name": "test-data.zgy",
"DatasetProperties": {
"FileCollectionPath": "sd://tenant/subproject/",
"FileSourceInfos": [
{
"FileSource": "test-data.zgy",
"Name": "test-data.zgy",
"FileSize": "1544552448",
}
]
}
}
}
```
## Proposed Solution
To enable applications to access bulk datasets ingested in SDMS V3 through SDMS V4, we need to update the mechanism in SDMS V4 for retrieving the correct storage URI associated with the Bulk record. This update is necessary to generate a valid connection string for accessing the bulk data.
When a Bulk record is created, the SDMS V3 URI (also known as the `sdpath`) is typically saved in the `FileCollectionPath` and `FileSource` properties. In the most common scenarios, the `sd://tenant/subproject/path` portion of the URI is stored in the `FileCollectionPath` property, while the URI's name is stored in the `FileSource` property.
When a connection access string is requested for a Bulk record through SDMS V4, the service should detect whether the record's file source refers to a V3 dataset's URI. If it does, the service should then:
1. extract the `subproject` name from the `FileCollectionPath`
```python
subproject = record.data.DatasetProperties.FileCollectionPath.replace("sd://", "").split('/')[1]
```
2. extract the `path` from the `FileCollectionPath`
```python
path = ("/" + "/".join(record.data.DatasetProperties.FileCollectionPath.replace("sd://", "").split('/')[2:])).replace("//", "/")
```
3. extract the `name` from the `FileSource`
```python
name = record.data.DatasetProperties.FileSourceInfos[0].FileSource
```
4. retrieve the storage URL from the V3 journal
```sql
SELECT c.data.gcsurl
FROM c
WHERE
c.data.subproject="{subproject}"
AND c.data.path="{path}"
AND c.data.name="{name}"
```
5. generate the connection string using the retrieved storage URL
```python
storage_client = StorageClient("{storage-url}")
return storage_client.getConnectionString()
```
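Putting the five steps together, a minimal sketch of the whole flow could look as follows, covering the common layout (case 1 in the notes below). `query_v3_journal` and `storage_client_factory` are hypothetical stand-ins for the service's journal access layer and storage client, and the bound-parameter query style is an assumption rather than the service's actual API:
```python
# Illustrative end-to-end resolution of a V3 reference to a connection string.
# query_v3_journal and storage_client_factory are hypothetical stand-ins for
# the service's journal access layer and storage client.
def resolve_v3_connection_string(record, query_v3_journal, storage_client_factory):
    parts = record.data.DatasetProperties.FileCollectionPath.replace("sd://", "").rstrip("/").split("/")
    subproject = parts[1]                                               # step 1
    path = "/" + "/".join(parts[2:])                                    # step 2
    name = record.data.DatasetProperties.FileSourceInfos[0].FileSource  # step 3

    # step 4: bound parameters avoid interpolating dataset names into the query
    storage_url = query_v3_journal(
        "SELECT c.data.gcsurl FROM c WHERE c.data.subproject = @subproject "
        "AND c.data.path = @path AND c.data.name = @name",
        {"@subproject": subproject, "@path": path, "@name": name},
    )

    # step 5: build the connection string from the retrieved storage URL
    return storage_client_factory(storage_url).getConnectionString()
```
Binding the parameters rather than formatting them into the query string also sidesteps injection issues with unusual dataset names.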
### Notes
Seismic applications use different approaches to save the SDMS V3 URI in the Bulk record, and all of the following cases should be handled (a normalization sketch follows this list):
1. The `sd://tenant/subproject/path` URI is saved in `FileCollectionPath`, and the name is saved in `FileSource`.
2. The full `sd://tenant/subproject/path/name` URI is saved in both `FileCollectionPath` and `FileSource`.
3. The `sd://tenant/subproject/path` URI is saved in `FileCollectionPath` and the name in `FileSource`, but the name starts with the `./` prefix, which should be removed.
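A minimal sketch of how these three layouts could be normalized into a single `(subproject, path, name)` triple before the journal lookup; the function name is illustrative, and layouts beyond these three cases still need the review described under Limitations:
```python
# Hypothetical helper normalizing the three observed layouts into
# (subproject, path, name). Illustrative only; other layouts still require
# review with the application owner.
def normalize_v3_reference(file_collection_path: str, file_source: str):
    if file_source.startswith("sd://"):
        # Case 2: the full sdpath is stored in both properties.
        parts = file_source.replace("sd://", "").split("/")
        return parts[1], "/" + "/".join(parts[2:-1]), parts[-1]
    # Cases 1 and 3: FileCollectionPath holds sd://tenant/subproject/path.
    parts = file_collection_path.replace("sd://", "").rstrip("/").split("/")
    name = file_source[2:] if file_source.startswith("./") else file_source  # strip ./ (case 3)
    return parts[1], "/" + "/".join(parts[2:]), name

print(normalize_v3_reference("sd://tenant/subproject/", "./test-data.zgy"))
# -> ('subproject', '/', 'test-data.zgy')
```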
### Limitations
Applications that do not match the described flow should be reviewed with the application owner before defining the right strategy to enable the synchronization of datasets ingested in SDMS V3 with SDMS V4.M22 - Release 0.25Sacha BrantsSneha PoddarSacha Brantshttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/issues/20WPC - MasterData GroupType Change2023-09-13T14:28:08ZChad LeongWPC - MasterData GroupType ChangeHi, we have this upcoming planned change - I suspect this will affect the Well Delivery DDMS. Please be informed.
#### M21 (v0.24.0) Change Warning
OSDU uses group-type classifications for the entities. The definitions for the group-ty...Hi, we have this upcoming planned change - I suspect this will affect the Well Delivery DDMS. Please be informed.
#### M21 (v0.24.0) Change Warning
OSDU uses group-type classifications for the entities. The definitions for the group-types are provided in
the [Schema Usage Guide](https://community.opengroup.org/osdu/data/data-definitions/-/blob/v0.22.0/Guides/Chapters/02-GroupType.md#2-group-type).
Over time some of the entities' group-type classifications have been challenged. The following types appear in the wrong
group-type:
1. Reports are seen as non-tangible state descriptions/snapshots:
   1. master-data--FluidsReport is proposed to migrate to → work-product-component--WellFluidsReport.
   2. master-data--OperationsReport is proposed to migrate to → work-product-component--WellOperationsReport.
2. Tubulars - when exchanged they are often transported using datasets/files, but the data themselves are tangible and associated with an investment, making them master-data:
   1. work-product-component--TubularAssembly is proposed to migrate to → master-data--TubularAssembly.
   2. work-product-component--TubularComponent is proposed to migrate to → master-data--TubularComponent.
   3. work-product-component--TubularExternalComponent is proposed to migrate to → master-data--TubularExternalComponent.
Obviously, this will impact operators who have ingested a large number of such data/records. This advance notice is intended to help operators prepare for the change and/or engage with data definitions, specifically the [Well Delivery work-stream](https://opensdu.slack.com/archives/CL7MK8KMW), to influence the implementation of the change.
There is a [schema preview including documentation provided](https://community.opengroup.org/osdu/data/data-definitions/-/blob/368b3a703581bb2d4a19210c935f922d9b3a4f4a/E-R/ChangeReport.md#snapshot-2023-08-18-towards-m21) in the Data Definitions community mirror.https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/294Azure Kubernetes with istio Helm Chart ignite (Gridgain) service deployment t...2023-09-22T05:23:58ZBenjamin LaGroneAzure Kubernetes with istio Helm Chart ignite (Gridgain) service deployment template seems to be missing02f2760524014f5d634077420102883d761d7a8b had helm template `docs/deployment/kubernetes/helm-charts/templates/service.yaml`, which was deleted in 47ad25719f658a3d71fd10601d4e51c8d1f27fd4
without this template, Ignite/Gridgain has no serv...02f2760524014f5d634077420102883d761d7a8b had helm template `docs/deployment/kubernetes/helm-charts/templates/service.yaml`, which was deleted in 47ad25719f658a3d71fd10601d4e51c8d1f27fd4
without this template, Ignite/Gridgain has no service attached to the pod
I restored the file in my local repo and was able to deploy successfully; however, I can see no internal communication between pods.
In our configuration we have the istio sidecar deployed in the clusterhttps://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/295kubernetes deployment docs osdu-istio does not configure GCZ deployment for i...2023-09-22T05:23:58ZBenjamin LaGronekubernetes deployment docs osdu-istio does not configure GCZ deployment for istio enabled environments, it appears to create new istio-system namespace insteadin docs folder geospatial/docs/deployment/kubernetes/osdu-istio
provided chart does not seem to configure GCZ deployment for istio enabled environments, it instead appears to create new istio-system namespace instead. I'm not sure what t...in docs folder geospatial/docs/deployment/kubernetes/osdu-istio
provided chart does not seem to configure GCZ deployment for istio enabled environments; it instead appears to create a new istio-system namespace. I'm not sure what the intent was, but this would likely fail to deploy where istio is already deployed/enabled.
Could we have some clarification on the intent?
It would be helpful to be able to deploy into an environment where istio is already createdhttps://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/296Transformer pod crashloopBackoff in Azure K8s2023-09-22T05:23:58ZBenjamin LaGroneTransformer pod crashloopBackoff in Azure K8skubectl logs gcz-transformer-7c4dbd8dcf-qrsz6 -n ignite
2023-09-13 18:34:19,443 main DEBUG Apache Log4j Core 2.17.2 initializing configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:1...kubectl logs gcz-transformer-7c4dbd8dcf-qrsz6 -n ignite
2023-09-13 18:34:19,443 main DEBUG Apache Log4j Core 2.17.2 initializing configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:19,445 main DEBUG PluginManager 'Core' found 127 plugins
2023-09-13 18:34:19,445 main DEBUG PluginManager 'Level' found 0 plugins
2023-09-13 18:34:19,446 main DEBUG Processing node for object appenders
2023-09-13 18:34:19,446 main DEBUG Processing node for object Console
2023-09-13 18:34:19,446 main DEBUG Node name is of type STRING
2023-09-13 18:34:19,447 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:19,447 main DEBUG Node Pattern is of type STRING
2023-09-13 18:34:19,447 main DEBUG Returning PatternLayout with parent Console of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:19,448 main DEBUG Returning Console with parent appenders of type appender:class org.apache.logging.log4j.core.appender.ConsoleAppender
2023-09-13 18:34:19,448 main DEBUG Processing node for array RollingFile
2023-09-13 18:34:19,449 main DEBUG Processing RollingFile[0]
2023-09-13 18:34:19,449 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:19,450 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:19,450 main DEBUG Returning PatternLayout with parent RollingFile of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:19,450 main DEBUG Processing node for object Policies
2023-09-13 18:34:19,450 main DEBUG Processing node for object SizeBasedTriggeringPolicy
2023-09-13 18:34:19,451 main DEBUG Node size is of type STRING
2023-09-13 18:34:19,451 main DEBUG Returning SizeBasedTriggeringPolicy with parent Policies of type SizeBasedTriggeringPolicy:class org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy
2023-09-13 18:34:19,457 main DEBUG Returning Policies with parent RollingFile of type Policies:class org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy
2023-09-13 18:34:19,457 main DEBUG Processing node for object DefaultRollOverStrategy
2023-09-13 18:34:19,460 main DEBUG Node max is of type NUMBER
2023-09-13 18:34:19,460 main DEBUG Returning DefaultRollOverStrategy with parent RollingFile of type DefaultRolloverStrategy:class org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy
2023-09-13 18:34:19,460 main DEBUG Processing node for array File
2023-09-13 18:34:19,461 main DEBUG Processing File[0]
2023-09-13 18:34:19,461 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:19,462 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:19,463 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:19,463 main DEBUG Processing File[1]
2023-09-13 18:34:19,463 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:19,464 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:19,464 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:19,465 main DEBUG Returning appenders with parent root of type appenders:class org.apache.logging.log4j.core.config.AppendersPlugin
2023-09-13 18:34:19,465 main DEBUG Processing node for object Loggers
2023-09-13 18:34:19,465 main DEBUG Processing node for array logger
2023-09-13 18:34:19,466 main DEBUG Processing logger[0]
2023-09-13 18:34:19,466 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:19,466 main DEBUG Node ref is of type STRING
2023-09-13 18:34:19,467 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:19,467 main DEBUG Processing logger[1]
2023-09-13 18:34:19,467 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:19,468 main DEBUG Node ref is of type STRING
2023-09-13 18:34:19,468 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:19,469 main DEBUG Processing logger[2]
2023-09-13 18:34:19,469 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:19,469 main DEBUG Node ref is of type STRING
2023-09-13 18:34:19,470 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:19,470 main DEBUG Processing node for object Root
2023-09-13 18:34:19,470 main DEBUG Node level is of type STRING
2023-09-13 18:34:19,471 main DEBUG Processing node for array AppenderRef
2023-09-13 18:34:19,471 main DEBUG Processing AppenderRef[0]
2023-09-13 18:34:19,471 main DEBUG Processing AppenderRef[1]
2023-09-13 18:34:19,472 main DEBUG Returning Root with parent Loggers of type root:class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger
2023-09-13 18:34:19,472 main DEBUG Returning Loggers with parent root of type loggers:class org.apache.logging.log4j.core.config.LoggersPlugin
2023-09-13 18:34:19,474 main DEBUG Completed parsing configuration
2023-09-13 18:34:19,477 main DEBUG PluginManager 'Lookup' found 16 plugins
2023-09-13 18:34:19,479 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:19,497 main DEBUG PluginManager 'TypeConverter' found 26 plugins
2023-09-13 18:34:19,516 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:19,516 main DEBUG PluginManager 'Converter' found 48 plugins
2023-09-13 18:34:19,530 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.ConsoleAppender].
2023-09-13 18:34:19,544 main DEBUG ConsoleAppender$Builder(target="null", follow="null", direct="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToConsole", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:19,547 main DEBUG Starting OutputStreamManager SYSTEM_OUT.false.false
2023-09-13 18:34:19,548 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:19,549 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:19,550 main DEBUG Building Plugin[name=SizeBasedTriggeringPolicy, class=org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy].
2023-09-13 18:34:19,556 main DEBUG createPolicy(size="10MB")
2023-09-13 18:34:19,558 main DEBUG Building Plugin[name=Policies, class=org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy].
2023-09-13 18:34:19,559 main DEBUG createPolicy(={SizeBasedTriggeringPolicy(size=10485760)})
2023-09-13 18:34:19,559 main DEBUG Building Plugin[name=DefaultRolloverStrategy, class=org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy].
2023-09-13 18:34:19,563 main DEBUG DefaultRolloverStrategy$Builder(max="10", min="null", fileIndex="null", compressionLevel="null", ={}, stopCustomActionsOnError="null", tempCompressedFilePattern="null", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml))
2023-09-13 18:34:19,564 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.RollingFileAppender].
2023-09-13 18:34:19,567 main DEBUG RollingFileAppender$Builder(fileName="logs/app.log", filePattern="logs/${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz", append="null", locking="null", Policies(CompositeTriggeringPolicy(policies=[SizeBasedTriggeringPolicy(size=10485760)])), DefaultRollOverStrategy(DefaultRolloverStrategy(min=1, max=10, useMax=true)), advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToRollingFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:19,575 main DEBUG Returning file creation time for /logs/app.log
2023-09-13 18:34:19,576 main DEBUG Starting RollingFileManager logs/app.log
2023-09-13 18:34:19,581 main DEBUG PluginManager 'FileConverter' found 2 plugins
2023-09-13 18:34:19,587 main DEBUG Setting prev file time to 2023-09-13T18:34:19.000+0000
2023-09-13 18:34:19,588 main DEBUG Initializing triggering policy CompositeTriggeringPolicy(policies=[SizeBasedTriggeringPolicy(size=10485760)])
2023-09-13 18:34:19,588 main DEBUG Initializing triggering policy SizeBasedTriggeringPolicy(size=10485760)
2023-09-13 18:34:19,589 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:19,590 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:19,592 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:19,596 main DEBUG FileAppender$Builder(fileName="logs/dataIngestionError_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="LogToGeoJsonSummaryFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:19,598 main DEBUG Starting FileManager logs/dataIngestionError_13-09-2023-06-34.log
2023-09-13 18:34:19,599 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:19,600 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:19,601 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:19,602 main DEBUG FileAppender$Builder(fileName="logs/trajectoryLog_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="TrajectoryLog", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:19,603 main DEBUG Starting FileManager logs/trajectoryLog_13-09-2023-06-34.log
2023-09-13 18:34:19,603 main DEBUG Building Plugin[name=appenders, class=org.apache.logging.log4j.core.config.AppendersPlugin].
2023-09-13 18:34:19,604 main DEBUG createAppenders(={LogToConsole, LogToRollingFile, LogToGeoJsonSummaryFile, TrajectoryLog})
2023-09-13 18:34:19,605 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:19,606 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:19,606 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:19,609 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="org.osdu.gcz.transformer", includeLocation="null", ={LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:19,611 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:19,612 main DEBUG createAppenderRef(ref="LogToGeoJsonSummaryFile", level="null", Filter=null)
2023-09-13 18:34:19,612 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:19,613 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="geoJsonSummaryLog", includeLocation="null", ={LogToGeoJsonSummaryFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:19,613 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:19,614 main DEBUG createAppenderRef(ref="TrajectoryLog", level="null", Filter=null)
2023-09-13 18:34:19,615 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:19,616 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="trajectoryLog", includeLocation="null", ={TrajectoryLog}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:19,616 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:19,617 main DEBUG createAppenderRef(ref="LogToConsole", level="null", Filter=null)
2023-09-13 18:34:19,617 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:19,618 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:19,618 main DEBUG Building Plugin[name=root, class=org.apache.logging.log4j.core.config.LoggerConfig$RootLogger].
2023-09-13 18:34:19,620 main DEBUG LoggerConfig$RootLogger$Builder(additivity="null", level="ERROR", levelAndRefs="null", includeLocation="null", ={LogToConsole, LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:19,621 main DEBUG Building Plugin[name=loggers, class=org.apache.logging.log4j.core.config.LoggersPlugin].
2023-09-13 18:34:19,622 main DEBUG createLoggers(={org.osdu.gcz.transformer, geoJsonSummaryLog, trajectoryLog, root})
2023-09-13 18:34:19,625 main DEBUG Configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] initialized
2023-09-13 18:34:19,625 main DEBUG Starting configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:19,625 main DEBUG Started configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] OK.
2023-09-13 18:34:19,626 main DEBUG Shutting down OutputStreamManager SYSTEM_OUT.false.false-1
2023-09-13 18:34:19,627 main DEBUG OutputStream closed
2023-09-13 18:34:19,627 main DEBUG Shut down OutputStreamManager SYSTEM_OUT.false.false-1, all resources released: true
2023-09-13 18:34:19,627 main DEBUG Appender DefaultConsole-1 stopped with status true
2023-09-13 18:34:19,628 main DEBUG Stopped org.apache.logging.log4j.core.config.DefaultConfiguration@7506e922 OK
2023-09-13 18:34:19,708 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2
2023-09-13 18:34:19,712 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=StatusLogger
2023-09-13 18:34:19,713 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=ContextSelector
2023-09-13 18:34:19,715 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=
2023-09-13 18:34:19,716 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=trajectoryLog
2023-09-13 18:34:19,717 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=org.osdu.gcz.transformer
2023-09-13 18:34:19,717 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=geoJsonSummaryLog
2023-09-13 18:34:19,718 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToConsole
2023-09-13 18:34:19,719 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToGeoJsonSummaryFile
2023-09-13 18:34:19,720 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToRollingFile
2023-09-13 18:34:19,720 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=TrajectoryLog
2023-09-13 18:34:19,724 main DEBUG org.apache.logging.log4j.core.util.SystemClock does not support precise timestamps.
2023-09-13 18:34:19,724 main DEBUG Reconfiguration complete for context[name=31221be2] at URI jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml (org.apache.logging.log4j.core.LoggerContext@449b2d27) with optional ClassLoader: null
2023-09-13 18:34:19,724 main DEBUG Shutdown hook enabled. Registering a new one.
2023-09-13 18:34:19,726 main DEBUG LoggerContext[name=31221be2, org.apache.logging.log4j.core.LoggerContext@449b2d27] started OK.
2023-09-13 18:34:20,593 main DEBUG Reconfiguration started for context[name=31221be2] at URI null (org.apache.logging.log4j.core.LoggerContext@449b2d27) with optional ClassLoader: null
2023-09-13 18:34:20,594 main DEBUG Using configurationFactory org.apache.logging.log4j.core.config.ConfigurationFactory$Factory@1c3a4799
2023-09-13 18:34:20,607 main DEBUG PluginManager 'Lookup' found 16 plugins
2023-09-13 18:34:20,612 main DEBUG Apache Log4j Core 2.17.2 initializing configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:20,612 main DEBUG PluginManager 'Core' found 127 plugins
2023-09-13 18:34:20,613 main DEBUG PluginManager 'Level' found 0 plugins
2023-09-13 18:34:20,613 main DEBUG Processing node for object appenders
2023-09-13 18:34:20,613 main DEBUG Processing node for object Console
2023-09-13 18:34:20,614 main DEBUG Node name is of type STRING
2023-09-13 18:34:20,614 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:20,614 main DEBUG Node Pattern is of type STRING
2023-09-13 18:34:20,615 main DEBUG Returning PatternLayout with parent Console of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:20,615 main DEBUG Returning Console with parent appenders of type appender:class org.apache.logging.log4j.core.appender.ConsoleAppender
2023-09-13 18:34:20,616 main DEBUG Processing node for array RollingFile
2023-09-13 18:34:20,616 main DEBUG Processing RollingFile[0]
2023-09-13 18:34:20,616 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:20,617 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:20,617 main DEBUG Returning PatternLayout with parent RollingFile of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:20,617 main DEBUG Processing node for object Policies
2023-09-13 18:34:20,618 main DEBUG Processing node for object SizeBasedTriggeringPolicy
2023-09-13 18:34:20,618 main DEBUG Node size is of type STRING
2023-09-13 18:34:20,619 main DEBUG Returning SizeBasedTriggeringPolicy with parent Policies of type SizeBasedTriggeringPolicy:class org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy
2023-09-13 18:34:20,619 main DEBUG Returning Policies with parent RollingFile of type Policies:class org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy
2023-09-13 18:34:20,620 main DEBUG Processing node for object DefaultRollOverStrategy
2023-09-13 18:34:20,620 main DEBUG Node max is of type NUMBER
2023-09-13 18:34:20,620 main DEBUG Returning DefaultRollOverStrategy with parent RollingFile of type DefaultRolloverStrategy:class org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy
2023-09-13 18:34:20,621 main DEBUG Processing node for array File
2023-09-13 18:34:20,621 main DEBUG Processing File[0]
2023-09-13 18:34:20,621 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:20,622 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:20,622 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:20,623 main DEBUG Processing File[1]
2023-09-13 18:34:20,623 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:20,623 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:20,624 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:20,624 main DEBUG Returning appenders with parent root of type appenders:class org.apache.logging.log4j.core.config.AppendersPlugin
2023-09-13 18:34:20,625 main DEBUG Processing node for object Loggers
2023-09-13 18:34:20,625 main DEBUG Processing node for array logger
2023-09-13 18:34:20,625 main DEBUG Processing logger[0]
2023-09-13 18:34:20,626 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:20,626 main DEBUG Node ref is of type STRING
2023-09-13 18:34:20,627 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:20,627 main DEBUG Processing logger[1]
2023-09-13 18:34:20,627 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:20,628 main DEBUG Node ref is of type STRING
2023-09-13 18:34:20,628 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:20,628 main DEBUG Processing logger[2]
2023-09-13 18:34:20,629 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:20,629 main DEBUG Node ref is of type STRING
2023-09-13 18:34:20,630 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:20,630 main DEBUG Processing node for object Root
2023-09-13 18:34:20,630 main DEBUG Node level is of type STRING
2023-09-13 18:34:20,631 main DEBUG Processing node for array AppenderRef
2023-09-13 18:34:20,631 main DEBUG Processing AppenderRef[0]
2023-09-13 18:34:20,631 main DEBUG Processing AppenderRef[1]
2023-09-13 18:34:20,632 main DEBUG Returning Root with parent Loggers of type root:class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger
2023-09-13 18:34:20,632 main DEBUG Returning Loggers with parent root of type loggers:class org.apache.logging.log4j.core.config.LoggersPlugin
2023-09-13 18:34:20,633 main DEBUG Completed parsing configuration
2023-09-13 18:34:20,633 main DEBUG PluginManager 'Lookup' found 16 plugins
2023-09-13 18:34:20,634 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:20,634 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:20,635 main DEBUG PluginManager 'Converter' found 48 plugins
2023-09-13 18:34:20,636 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.ConsoleAppender].
2023-09-13 18:34:20,637 main DEBUG ConsoleAppender$Builder(target="null", follow="null", direct="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToConsole", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:20,639 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:20,640 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:20,640 main DEBUG Building Plugin[name=SizeBasedTriggeringPolicy, class=org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy].
2023-09-13 18:34:20,641 main DEBUG createPolicy(size="10MB")
2023-09-13 18:34:20,641 main DEBUG Building Plugin[name=Policies, class=org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy].
2023-09-13 18:34:20,642 main DEBUG createPolicy(={SizeBasedTriggeringPolicy(size=10485760)})
2023-09-13 18:34:20,642 main DEBUG Building Plugin[name=DefaultRolloverStrategy, class=org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy].
2023-09-13 18:34:20,643 main DEBUG DefaultRolloverStrategy$Builder(max="10", min="null", fileIndex="null", compressionLevel="null", ={}, stopCustomActionsOnError="null", tempCompressedFilePattern="null", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml))
2023-09-13 18:34:20,643 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.RollingFileAppender].
2023-09-13 18:34:20,645 main DEBUG RollingFileAppender$Builder(fileName="logs/app.log", filePattern="logs/${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz", append="null", locking="null", Policies(CompositeTriggeringPolicy(policies=[SizeBasedTriggeringPolicy(size=10485760)])), DefaultRollOverStrategy(DefaultRolloverStrategy(min=1, max=10, useMax=true)), advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToRollingFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:20,645 main DEBUG PluginManager 'FileConverter' found 2 plugins
2023-09-13 18:34:20,646 main DEBUG Initializing triggering policy SizeBasedTriggeringPolicy(size=10485760)
2023-09-13 18:34:20,646 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:20,647 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:20,648 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:20,649 main DEBUG FileAppender$Builder(fileName="logs/dataIngestionError_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="LogToGeoJsonSummaryFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:20,649 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:20,650 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:20,651 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:20,652 main DEBUG FileAppender$Builder(fileName="logs/trajectoryLog_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="TrajectoryLog", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:20,652 main DEBUG Building Plugin[name=appenders, class=org.apache.logging.log4j.core.config.AppendersPlugin].
2023-09-13 18:34:20,652 main DEBUG createAppenders(={LogToConsole, LogToRollingFile, LogToGeoJsonSummaryFile, TrajectoryLog})
2023-09-13 18:34:20,653 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:20,654 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:20,654 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:20,655 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="org.osdu.gcz.transformer", includeLocation="null", ={LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:20,655 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:20,656 main DEBUG createAppenderRef(ref="LogToGeoJsonSummaryFile", level="null", Filter=null)
2023-09-13 18:34:20,656 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:20,657 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="geoJsonSummaryLog", includeLocation="null", ={LogToGeoJsonSummaryFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:20,657 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:20,658 main DEBUG createAppenderRef(ref="TrajectoryLog", level="null", Filter=null)
2023-09-13 18:34:20,658 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:20,659 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="trajectoryLog", includeLocation="null", ={TrajectoryLog}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:20,660 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:20,660 main DEBUG createAppenderRef(ref="LogToConsole", level="null", Filter=null)
2023-09-13 18:34:20,661 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:20,661 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:20,662 main DEBUG Building Plugin[name=root, class=org.apache.logging.log4j.core.config.LoggerConfig$RootLogger].
2023-09-13 18:34:20,662 main DEBUG LoggerConfig$RootLogger$Builder(additivity="null", level="ERROR", levelAndRefs="null", includeLocation="null", ={LogToConsole, LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:20,663 main DEBUG Building Plugin[name=loggers, class=org.apache.logging.log4j.core.config.LoggersPlugin].
2023-09-13 18:34:20,663 main DEBUG createLoggers(={org.osdu.gcz.transformer, geoJsonSummaryLog, trajectoryLog, root})
2023-09-13 18:34:20,664 main DEBUG Configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] initialized
2023-09-13 18:34:20,664 main DEBUG Starting configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:20,665 main DEBUG Started configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] OK.
2023-09-13 18:34:20,666 main DEBUG Appender TrajectoryLog stopped with status true
2023-09-13 18:34:20,666 main DEBUG Appender LogToRollingFile stopped with status true
2023-09-13 18:34:20,667 main DEBUG Appender LogToGeoJsonSummaryFile stopped with status true
2023-09-13 18:34:20,667 main DEBUG Appender LogToConsole stopped with status true
2023-09-13 18:34:20,668 main DEBUG Stopped YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] OK
2023-09-13 18:34:20,670 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2
2023-09-13 18:34:20,671 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=StatusLogger
2023-09-13 18:34:20,671 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=ContextSelector
2023-09-13 18:34:20,672 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=
2023-09-13 18:34:20,673 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=trajectoryLog
2023-09-13 18:34:20,673 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=org.osdu.gcz.transformer
2023-09-13 18:34:20,674 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=geoJsonSummaryLog
2023-09-13 18:34:20,674 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToConsole
2023-09-13 18:34:20,675 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToGeoJsonSummaryFile
2023-09-13 18:34:20,676 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToRollingFile
2023-09-13 18:34:20,676 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=TrajectoryLog
2023-09-13 18:34:20,677 main DEBUG Reconfiguration complete for context[name=31221be2] at URI jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml (org.apache.logging.log4j.core.LoggerContext@449b2d27) with optional ClassLoader: null
2023-09-13 18:34:21,283 main DEBUG Reconfiguration started for context[name=31221be2] at URI null (org.apache.logging.log4j.core.LoggerContext@449b2d27) with optional ClassLoader: null
2023-09-13 18:34:21,283 main DEBUG Using configurationFactory org.apache.logging.log4j.core.config.ConfigurationFactory$Factory@1c3a4799
2023-09-13 18:34:21,295 main DEBUG PluginManager 'Lookup' found 16 plugins
2023-09-13 18:34:21,300 main DEBUG Apache Log4j Core 2.17.2 initializing configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:21,300 main DEBUG PluginManager 'Core' found 127 plugins
2023-09-13 18:34:21,301 main DEBUG PluginManager 'Level' found 0 plugins
2023-09-13 18:34:21,301 main DEBUG Processing node for object appenders
2023-09-13 18:34:21,301 main DEBUG Processing node for object Console
2023-09-13 18:34:21,302 main DEBUG Node name is of type STRING
2023-09-13 18:34:21,302 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:21,303 main DEBUG Node Pattern is of type STRING
2023-09-13 18:34:21,303 main DEBUG Returning PatternLayout with parent Console of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:21,303 main DEBUG Returning Console with parent appenders of type appender:class org.apache.logging.log4j.core.appender.ConsoleAppender
2023-09-13 18:34:21,304 main DEBUG Processing node for array RollingFile
2023-09-13 18:34:21,304 main DEBUG Processing RollingFile[0]
2023-09-13 18:34:21,304 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:21,305 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:21,305 main DEBUG Returning PatternLayout with parent RollingFile of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:21,306 main DEBUG Processing node for object Policies
2023-09-13 18:34:21,306 main DEBUG Processing node for object SizeBasedTriggeringPolicy
2023-09-13 18:34:21,306 main DEBUG Node size is of type STRING
2023-09-13 18:34:21,307 main DEBUG Returning SizeBasedTriggeringPolicy with parent Policies of type SizeBasedTriggeringPolicy:class org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy
2023-09-13 18:34:21,307 main DEBUG Returning Policies with parent RollingFile of type Policies:class org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy
2023-09-13 18:34:21,307 main DEBUG Processing node for object DefaultRollOverStrategy
2023-09-13 18:34:21,307 main DEBUG Node max is of type NUMBER
2023-09-13 18:34:21,308 main DEBUG Returning DefaultRollOverStrategy with parent RollingFile of type DefaultRolloverStrategy:class org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy
2023-09-13 18:34:21,308 main DEBUG Processing node for array File
2023-09-13 18:34:21,308 main DEBUG Processing File[0]
2023-09-13 18:34:21,308 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:21,309 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:21,309 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:21,309 main DEBUG Processing File[1]
2023-09-13 18:34:21,309 main DEBUG Processing node for object PatternLayout
2023-09-13 18:34:21,310 main DEBUG Node pattern is of type STRING
2023-09-13 18:34:21,310 main DEBUG Returning PatternLayout with parent File of type layout:class org.apache.logging.log4j.core.layout.PatternLayout
2023-09-13 18:34:21,310 main DEBUG Returning appenders with parent root of type appenders:class org.apache.logging.log4j.core.config.AppendersPlugin
2023-09-13 18:34:21,311 main DEBUG Processing node for object Loggers
2023-09-13 18:34:21,311 main DEBUG Processing node for array logger
2023-09-13 18:34:21,311 main DEBUG Processing logger[0]
2023-09-13 18:34:21,312 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:21,312 main DEBUG Node ref is of type STRING
2023-09-13 18:34:21,312 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:21,313 main DEBUG Processing logger[1]
2023-09-13 18:34:21,313 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:21,314 main DEBUG Node ref is of type STRING
2023-09-13 18:34:21,314 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:21,314 main DEBUG Processing logger[2]
2023-09-13 18:34:21,314 main DEBUG Processing array for object AppenderRef
2023-09-13 18:34:21,315 main DEBUG Node ref is of type STRING
2023-09-13 18:34:21,315 main DEBUG Returning AppenderRef with parent logger of type AppenderRef:class org.apache.logging.log4j.core.config.AppenderRef
2023-09-13 18:34:21,316 main DEBUG Processing node for object Root
2023-09-13 18:34:21,316 main DEBUG Node level is of type STRING
2023-09-13 18:34:21,316 main DEBUG Processing node for array AppenderRef
2023-09-13 18:34:21,316 main DEBUG Processing AppenderRef[0]
2023-09-13 18:34:21,317 main DEBUG Processing AppenderRef[1]
2023-09-13 18:34:21,317 main DEBUG Returning Root with parent Loggers of type root:class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger
2023-09-13 18:34:21,317 main DEBUG Returning Loggers with parent root of type loggers:class org.apache.logging.log4j.core.config.LoggersPlugin
2023-09-13 18:34:21,318 main DEBUG Completed parsing configuration
2023-09-13 18:34:21,318 main DEBUG PluginManager 'Lookup' found 16 plugins
2023-09-13 18:34:21,319 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:21,320 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:21,320 main DEBUG PluginManager 'Converter' found 48 plugins
2023-09-13 18:34:21,321 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.ConsoleAppender].
2023-09-13 18:34:21,322 main DEBUG ConsoleAppender$Builder(target="null", follow="null", direct="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToConsole", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:21,324 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:21,325 main DEBUG PatternLayout$Builder(pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:21,326 main DEBUG Building Plugin[name=SizeBasedTriggeringPolicy, class=org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy].
2023-09-13 18:34:21,326 main DEBUG createPolicy(size="10MB")
2023-09-13 18:34:21,327 main DEBUG Building Plugin[name=Policies, class=org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy].
2023-09-13 18:34:21,327 main DEBUG createPolicy(={SizeBasedTriggeringPolicy(size=10485760)})
2023-09-13 18:34:21,328 main DEBUG Building Plugin[name=DefaultRolloverStrategy, class=org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy].
2023-09-13 18:34:21,328 main DEBUG DefaultRolloverStrategy$Builder(max="10", min="null", fileIndex="null", compressionLevel="null", ={}, stopCustomActionsOnError="null", tempCompressedFilePattern="null", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml))
2023-09-13 18:34:21,329 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.RollingFileAppender].
2023-09-13 18:34:21,330 main DEBUG RollingFileAppender$Builder(fileName="logs/app.log", filePattern="logs/${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz", append="null", locking="null", Policies(CompositeTriggeringPolicy(policies=[SizeBasedTriggeringPolicy(size=10485760)])), DefaultRollOverStrategy(DefaultRolloverStrategy(min=1, max=10, useMax=true)), advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout([%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n), name="LogToRollingFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:21,330 main DEBUG PluginManager 'FileConverter' found 2 plugins
2023-09-13 18:34:21,331 main DEBUG Initializing triggering policy SizeBasedTriggeringPolicy(size=10485760)
2023-09-13 18:34:21,331 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:21,332 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:21,332 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:21,333 main DEBUG FileAppender$Builder(fileName="logs/dataIngestionError_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="LogToGeoJsonSummaryFile", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:21,333 main DEBUG Building Plugin[name=layout, class=org.apache.logging.log4j.core.layout.PatternLayout].
2023-09-13 18:34:21,334 main DEBUG PatternLayout$Builder(pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n", PatternSelector=null, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Replace=null, charset="null", alwaysWriteExceptions="null", disableAnsi="null", noConsoleNoAnsi="null", header="null", footer="null")
2023-09-13 18:34:21,334 main DEBUG Building Plugin[name=appender, class=org.apache.logging.log4j.core.appender.FileAppender].
2023-09-13 18:34:21,335 main DEBUG FileAppender$Builder(fileName="logs/trajectoryLog_13-09-2023-06-34.log", append="false", locking="null", advertise="null", advertiseUri="null", createOnDemand="null", filePermissions="null", fileOwner="null", fileGroup="null", bufferedIo="null", bufferSize="null", immediateFlush="null", ignoreExceptions="null", PatternLayout(%d{yyyy-MM-dd HH:mm:ss.SSS} - %msg%n), name="TrajectoryLog", Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null, ={})
2023-09-13 18:34:21,336 main DEBUG Building Plugin[name=appenders, class=org.apache.logging.log4j.core.config.AppendersPlugin].
2023-09-13 18:34:21,336 main DEBUG createAppenders(={LogToConsole, LogToRollingFile, LogToGeoJsonSummaryFile, TrajectoryLog})
2023-09-13 18:34:21,336 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:21,337 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:21,338 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:21,338 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="org.osdu.gcz.transformer", includeLocation="null", ={LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:21,339 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:21,339 main DEBUG createAppenderRef(ref="LogToGeoJsonSummaryFile", level="null", Filter=null)
2023-09-13 18:34:21,340 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:21,340 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="geoJsonSummaryLog", includeLocation="null", ={LogToGeoJsonSummaryFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:21,340 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:21,341 main DEBUG createAppenderRef(ref="TrajectoryLog", level="null", Filter=null)
2023-09-13 18:34:21,341 main DEBUG Building Plugin[name=logger, class=org.apache.logging.log4j.core.config.LoggerConfig].
2023-09-13 18:34:21,342 main DEBUG LoggerConfig$Builder(additivity="false", level="INFO", levelAndRefs="null", name="trajectoryLog", includeLocation="null", ={TrajectoryLog}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:21,342 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:21,343 main DEBUG createAppenderRef(ref="LogToConsole", level="null", Filter=null)
2023-09-13 18:34:21,343 main DEBUG Building Plugin[name=AppenderRef, class=org.apache.logging.log4j.core.config.AppenderRef].
2023-09-13 18:34:21,344 main DEBUG createAppenderRef(ref="LogToRollingFile", level="null", Filter=null)
2023-09-13 18:34:21,344 main DEBUG Building Plugin[name=root, class=org.apache.logging.log4j.core.config.LoggerConfig$RootLogger].
2023-09-13 18:34:21,345 main DEBUG LoggerConfig$RootLogger$Builder(additivity="null", level="ERROR", levelAndRefs="null", includeLocation="null", ={LogToConsole, LogToRollingFile}, ={}, Configuration(jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml), Filter=null)
2023-09-13 18:34:21,346 main DEBUG Building Plugin[name=loggers, class=org.apache.logging.log4j.core.config.LoggersPlugin].
2023-09-13 18:34:21,346 main DEBUG createLoggers(={org.osdu.gcz.transformer, geoJsonSummaryLog, trajectoryLog, root})
2023-09-13 18:34:21,346 main DEBUG Configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] initialized
2023-09-13 18:34:21,347 main DEBUG Starting configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml]
2023-09-13 18:34:21,347 main DEBUG Started configuration YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] OK.
2023-09-13 18:34:21,348 main DEBUG Appender TrajectoryLog stopped with status true
2023-09-13 18:34:21,349 main DEBUG Appender LogToRollingFile stopped with status true
2023-09-13 18:34:21,349 main DEBUG Appender LogToGeoJsonSummaryFile stopped with status true
2023-09-13 18:34:21,350 main DEBUG Appender LogToConsole stopped with status true
2023-09-13 18:34:21,350 main DEBUG Stopped YamlConfiguration[location=jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml] OK
2023-09-13 18:34:21,352 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2
2023-09-13 18:34:21,353 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=StatusLogger
2023-09-13 18:34:21,353 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=ContextSelector
2023-09-13 18:34:21,354 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=
2023-09-13 18:34:21,355 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=trajectoryLog
2023-09-13 18:34:21,355 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=org.osdu.gcz.transformer
2023-09-13 18:34:21,356 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Loggers,name=geoJsonSummaryLog
2023-09-13 18:34:21,356 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToConsole
2023-09-13 18:34:21,357 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToGeoJsonSummaryFile
2023-09-13 18:34:21,357 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=LogToRollingFile
2023-09-13 18:34:21,358 main DEBUG Registering MBean org.apache.logging.log4j2:type=31221be2,component=Appenders,name=TrajectoryLog
2023-09-13 18:34:21,358 main DEBUG Reconfiguration complete for context[name=31221be2] at URI jar:file:/app.jar!/BOOT-INF/classes!/log4j2.yml (org.apache.logging.log4j.core.LoggerContext@449b2d27) with optional ClassLoader: null
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.10)
2023-09-13 18:34:21,705 main DEBUG AsyncLogger.ThreadNameStrategy=UNCACHED (user specified null, default is UNCACHED)
2023-09-13 18:34:21,705 main DEBUG org.apache.logging.log4j.core.util.SystemClock does not support precise timestamps.
[18:34:23] (wrn) Failed to resolve IGNITE_HOME automatically for class codebase [class=class o.a.i.i.util.IgniteUtils, e=URI is not hierarchical]
Console logging handler is not configured.
[18:34:23] __________ ________________
[18:34:23] / _/ ___/ |/ / _/_ __/ __/
[18:34:23] _/ // (7 7 // / / / / _/
[18:34:23] /___/\___/_/|_/___/ /_/ /___/
[18:34:23]
[18:34:23] ver. 8.8.13#20211223-sha1:80557a10
[18:34:23] 2021 Copyright(C) GridGain Systems, Inc. and Contributors
[18:34:23]
[18:34:23] Ignite documentation: http://gridgain.com
[18:34:23]
[18:34:23] Quiet mode.
[18:34:23] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[18:34:23] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[18:34:23]
[18:34:23] OS: Linux 5.4.0-1091-azure amd64
[18:34:23] VM information: OpenJDK Runtime Environment 1.8.0_212-b04 IcedTea OpenJDK 64-Bit Server VM 25.212-b04
[18:34:23] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
[18:34:23] Configured plugins:
[18:34:23] ^-- None
[18:34:23]
[18:34:23] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[18:34:24] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[18:34:24] Security status [authentication=off, tls/ssl=off]
[18:34:25] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.https://community.opengroup.org/osdu/platform/system/search-service/-/issues/134Search should not return 404 in case there are no matching data in Elasticsearch2023-11-08T14:07:37ZDenis Karpenok (EPAM)Search should not return 404 in case there are no matching data in Elasticsearch**The expected result:**
- When no data matches the query, the response is 200 OK with an empty list.
**Actual results are:**
- Inconsistent: sometimes it's 200 OK, sometimes it's 400.
**Reason:**
- Not all requests to Elasticsearch set the parameters needed to ignore user errors; usually these are preliminary requests that fetch details for further search queries, for example: https://community.opengroup.org/osdu/platform/system/search-service/-/blob/master/search-core/src/main/java/org/opengroup/osdu/search/service/FieldMappingTypeService.java#L49
**Solution:**
- Suppress all 400 errors from Elasticsearch and respond to the end user only with 200 OK, as illustrated below.
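For illustration, with this change a query that matches nothing would always produce a successful, empty response along these lines (a sketch only; the exact response envelope, including the `totalCount` field name, is an assumption about the search service's response shape):
```json
{
  "results": [],
  "totalCount": 0
}
```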
**Pros:**
- More consistent workflow for client applications.
- Reduced error handling for client applications.
More details are in the attached CSV files:
[test_results_2023-08-29_11-34-31.csv](/uploads/03bf18c852387f4da493aa13b97ad5d3/test_results_2023-08-29_11-34-31.csv)
[test_results_2023-08-29_11-51-20.csv](/uploads/6071b35ea688e57bdf24112198a9ddd7/test_results_2023-08-29_11-51-20.csv)https://community.opengroup.org/osdu/data/open-test-data/-/issues/92Create Seismic 2D Navigation sample JSON payloads - ready to support display ...2023-09-14T13:02:22ZDebasis ChatterjeeCreate Seismic 2D Navigation sample JSON payloads - ready to support display of SP labelsPlease see
https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/issues/348#note_69692
With that information, I think we may need to overhaul these (SEGP1) examples.
Existing JSON payloads here
https://community.opengroup.org/osdu/platform/data-flow/data-loading/open-test-data/-/tree/master/rc--3.0.0/4-instances/Volve/work-products/seismics_1_2_0
cc @Keith_Wallhttps://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/113ADR: Bag of Words2024-03-18T14:07:18ZMark ChanceADR: Bag of Words# ADR: Copy all text field to BagOfWords field
<a name="TOC"></a>
[[_TOC_]]
# Status
- [x] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
# Background
The application development stakeholders want to provide their users with a mechanism to search for words in a record regardless of where they appear in the record. Currently this does not work for nested fields, because the underlying mechanism relies on the `query_string` ES query, which does not allow searching through nested documents.
# Context & Scope
[Back to TOC](#TOC)
## Requirements
- The user is able to find resources by words stored in any field, using a query without explicit field names.
- The user is able to find resources that reference a given ID from an external system, provided this ID is part of the referencing OSDU ID (see the example after this list).
- (Additional) The list of all phrases is stored inside a single field, making it possible to implement simple autocompletion.
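As an illustration of the second requirement, a full-text query for the bare external identifier (the ID value here is hypothetical) should match records whose OSDU IDs embed it:
```json
{
  "kind": "osdu:*:*:*",
  "query": "43234324"
}
```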
[Back to TOC](#TOC)
# Tradeoff Analysis
## Option 1
All the fields are copied to the word bag using the `copy_to` mechanism. We propose `bagOfWords` as the internal field name for this use case. This enables the user to find wells through their alias names using a full-text query (name aliases are stored in a nested array, so currently this is not possible without explicitly specifying the field name).

Additionally, we would like to add the ID detail to `bagOfWords`, as such details are often IDs from external source systems (in "osdu:wks::master-data--Well-1.0.0:43234324" the detail may contain a UWI). So, when users know 43234324 (for example from the source system) but do not know the OSDU internal ID system, they are still able to find records referencing it (for example, find all DS related to a given wellbore).

Such a field is also valuable for implementing search-as-you-type autocompletion: we can create a simple but powerful version of it by just adding a subfield with ES completion indexing and exposing it for searching. A mapping sketch follows.
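A minimal Elasticsearch mapping sketch for Option 1 (the field names `FacilityName` and `FacilityNameAlias` are taken from the example in the proposed solution below; the `suggest` subfield name is an assumption for the completion-indexing idea):
```json
{
  "mappings": {
    "properties": {
      "bagOfWords": {
        "type": "text",
        "fields": {
          "suggest": { "type": "completion" }
        }
      },
      "FacilityName": { "type": "text", "copy_to": "bagOfWords" },
      "FacilityNameAlias": { "type": "text", "copy_to": "bagOfWords" }
    }
  }
}
```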
## Option 2
If for some reason alternative 1 is too broad, it is suggested to use the indexing hints added to the schema files, as described here: https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/issues/66. A tag such as x-osdu-indexing-copytowordbag could indicate that the associated field is to be added to the word-bag field:
"x-osdu-indexing-copytowordbag": "enabled"/"disabled"
for example (see the sketch below). However, such an approach would make schemas less portable, as every OSDU installation may have different needs.
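A hypothetical schema fragment carrying such a hint might look like this (the property name and surrounding structure are illustrative only, not a confirmed schema convention):
```json
{
  "properties": {
    "FacilityName": {
      "type": "string",
      "x-osdu-indexing-copytowordbag": "enabled"
    }
  }
}
```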
[Back to TOC](#TOC)
# Proposed solution
For each kind of resource, the index will contain a word-bag field whose value holds all (normalized) tokens from all other text fields in the mapping.
This will enable a query of the form:
```json
{
"kind": "osdu:*:*:*",
"query": "test"
}
```
which would return
```json
{
"results": [
{
"data": {
"FacilityName": "Example test"
},
"id": "osdu:master-data--Well:1012"
},
{
"data": {
"FacilityNameAlias": "Example test"
},
"id": "osdu:master-data--Well:30142"
}
]
}
```
The search service runs the query against the word-bag field, so the two wells would both be returned despite 'test' occurring in different fields.
[Back to TOC](#TOC)
## Accepted Limitations / things to work out
[Back to TOC](#TOC)
# Change Management
* Operators may need to execute a reindex with the force_clean=true action on the indices to enable this feature.
# Decision
# Consequences
* The indexer code changes should have no impact on automated applications, as they use field-specific queries, which are unchanged. Applications where the user controls the top-level query might show new additional results (for matches in nested objects and in ID details), but this is expected behavior.
[Back to TOC](#TOC)
#EOF.M22 - Release 0.25Mark Chance