OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/95
x-osdu-indexing changes are breaking
Thomas Gehrmann [slb], 2022-10-13

# Context:
Indexing hints in the OSDU schemas are considered decorations and are not taken into account when schema versions are validated for 'breaking changes'.
Downstream indexing changes from any state to any other state are considered breaking changes:
* Breaking changes for the indexer: changes from `flattened` to `nested` require the re-indexing of the kind in
question.
* Consuming applications must use a different query syntax.
# How it's done today:
The process depends on human interaction (assuming OSDU well-known schemas here, but this is no different for custom
schemas):
* Stakeholders ask for an indexing behavior change, OSDU Data Definition reacts by changing the `x-osdu-indexing`
extension tag values in the schema.
* OSDU Data Definition Release notes identify the kinds that are to be re-indexed.
* In M10 virtually all kinds had to be re-indexed
* In M11 type `reference-data--QualityDataRuleSet` requires re-indexing
* During deployment the records for the affected kinds must be re-indexed.
# Issue with current design:
Upon deployment of a new milestone (or custom schemas),
1. For all involved data-partitions, delete the index for the changed kind and trigger re-indexing. This can take - depending on the number of records per kind - a very long time and cause serious downtime.
2. Applications have no good way of understanding that the query syntax has changed. Applications may no longer find
data if they depended on queries into data structures affected by the change.
# Proposal:
## `PUBLISHED` Schema Status
1. For schemas with state `PUBLISHED` treat changes to the `x-osdu-indexing` extension tag values in the schema as **_breaking changes_**.
2. Breaking changes require an incremented major schema version number.
3. Schema Validation Changes during schema creation:
   * Changes to the `x-osdu-indexing` extension tag values in `PUBLISHED` schemas with the same major schema version numbers will be **_rejected_**, i.e., the attempted registration of such a schema will fail with an error.
## `DEVELOPMENT` Schema Status
1. The validation for `DEVELOPMENT` status schemas for incremental versions on top of or between existing minor or patch
versions follows the same rules as for `PUBLISHED` schemas. Attempts to change the `x-osdu-indexing` extension tag
values will be **_rejected_** by the Schema service.
2. For 'single' version schemas in `DEVELOPMENT`, the updates of the `x-osdu-indexing` extension tag values are
permitted.
* It is the responsibility of the schema authors to communicate the impact to deployment and consumers. This is
expected to be acceptable during the development phase.
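A minimal sketch of the proposed validation rule, assuming hypothetical helper and parameter names (this is illustrative Python, not the actual Schema service code):
```
def collect_indexing_hints(node, path=""):
    """Walk a schema document and collect every x-osdu-indexing value by property path."""
    hints = {}
    if isinstance(node, dict):
        if "x-osdu-indexing" in node:
            hints[path or "<root>"] = node["x-osdu-indexing"]
        for key, value in node.items():
            hints.update(collect_indexing_hints(value, f"{path}/{key}"))
    elif isinstance(node, list):
        for index, item in enumerate(node):
            hints.update(collect_indexing_hints(item, f"{path}[{index}]"))
    return hints

def validate_indexing_hints(existing_schema, candidate_schema, same_major_version, existing_is_published):
    """Raise if x-osdu-indexing hints changed where the proposal forbids it."""
    if not same_major_version:
        return  # an incremented major version makes the breaking change explicit and allowed
    if not existing_is_published:
        return  # 'single' version DEVELOPMENT schemas may still change their hints
    if collect_indexing_hints(existing_schema) != collect_indexing_hints(candidate_schema):
        raise ValueError(
            "x-osdu-indexing changed within the same major version of a PUBLISHED schema; "
            "register the change under an incremented major version instead.")
```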
CC @nthakur @ChrisZhang @chad @pbehede

Milestone: M12 - Release 0.15

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/55
Adjust WITSML Parser to suitable Reference value
Debasis Chatterjee, 2022-05-02

@epeysson - Please see related issue here.
https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/issues/343
@gehrmann is working on getting these entries added for official "Data Definition" release.
At that time, the parser will need adjustment to the value of version 2.0.
Thank you
cc - @chad, @jean_francois.rainaud for information

Milestone: M12 - Release 0.15
Assignee: etienne peysson

https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/62
Airflow log shows wrong "number of records"
Debasis Chatterjee, 2022-03-29

Excerpt from Airflow log -
[2021-11-15 12:35:49,137] {pod_launcher.py:149} INFO - 2021-11-15 12:35:49.068 INFO 1 --- [ main] o.o.o.c.p.i.service.IBMIngestionService : Total records in File are = 4
Although the CSV has 4 rows, one of them is the header row, so there are only 3 rows of actual data.
Suggest we change the message suitably.
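As a small illustration of the suggested message change (illustrative Python, not the parser's actual logging code), header and data rows can be reported separately so the header row is no longer counted as a record:
```
import csv

def count_csv_rows(path):
    """Return (header_rows, data_rows) for a CSV file with a single header row."""
    with open(path, newline="") as csv_file:
        rows = list(csv.reader(csv_file))
    header_rows = 1 if rows else 0
    data_rows = max(len(rows) - header_rows, 0)
    return header_rows, data_rows

header_rows, data_rows = count_csv_rows("IBM_sample_CSV-DC.csv")
print(f"Total rows in file = {header_rows + data_rows} "
      f"(header rows = {header_rows}, data records = {data_rows})")
```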
Airflow log -
[CSV-Ingestion-custom-Airflow-log-for-IBM-DC.txt](/uploads/333be1c85de2d89bfc12cf7e5de26c3b/CSV-Ingestion-custom-Airflow-log-for-IBM-DC.txt)
Data file used for the run in IBM, R3M9 Preship environment.
[IBM_sample_CSV-DC.csv](/uploads/1b8e0c74d10bc3fc47526d467b751f92/IBM_sample_CSV-DC.csv)

Milestone: M12 - Release 0.15

https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/44
As part of GSM feature, Implement status publishing method in AWS
Mahesh Daksha, 2022-07-20

This is as per the GSM requirement to be implemented in each CSP. This issue has been created for the AWS team to implement the publish method that publishes the status events to the message queue.

Milestone: M12 - Release 0.15

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/129
Data - Add more 2D Seismic - pending identification of where to put it
Joel Romero, 2023-05-31

Note:
May be addressed by New Zealand data that is being loaded. - 5/31/23

Milestone: GCZ Sprint 20
Assignee: Michael Wilhite

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/130
Data - Create test data set with at least one million rows of kind Well in 1.1.0
Joel Romero, 2022-11-30

As a GCZ developer, I want to add a large data set, so that I can perform load and performance tests.
**Acceptance Criteria**
- At least one million well records in OSDU
- Provide knowledge transfer of data loading script

Milestone: GCZ Sprint 20
Assignee: Michael Wilhite

https://community.opengroup.org/osdu/platform/system/storage/-/issues/139
[STORAGE] PUT. Reports 201 success with a 50 records payload but actually fails
Ernesto Gutierrez, 2023-02-13

**Description**
While issuing the following request [50_records_payload.json](/uploads/3d2ddceee544b9741af0a0b54fff9981/50_records_payload.json), the storage service returns a 201 with records and versions [STORAGE_201_put_records.json](/uploads/48a60f0dfa71bb13852b7ca8cc12fd8b/STORAGE_201_put_records.json).
But when trying to fetch the records, they are not created/updated.
Looking at the logs [Storage_LOG_50_records.txt](/uploads/fdf868480d289199eb916f9d5d575b8f/Storage_LOG_50_records.txt), it seems the service is reaching this line https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/blob/1bddde80718274e34a36aee673092bf20526f5aa/src/main/java/org/opengroup/osdu/azure/cosmosdb/CosmosStoreBulkOperations.java#L124
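As a hedged reproduction sketch (the endpoints and the `recordIds` response field are assumed from the standard OSDU Storage API; host, token, and partition are placeholders), one can PUT the 50-record payload and then verify that every returned record id is actually retrievable:
```
import json
import requests

BASE_URL = "https://<host>/api/storage/v2"   # placeholder host
HEADERS = {
    "Authorization": "Bearer <token>",        # placeholder token
    "data-partition-id": "<partition-id>",    # placeholder partition
    "Content-Type": "application/json",
}

with open("50_records_payload.json") as payload_file:
    records = json.load(payload_file)

# Step 1: PUT the batch of records; the issue reports this returns 201.
put_response = requests.put(f"{BASE_URL}/records", headers=HEADERS, json=records)
print("PUT status:", put_response.status_code)

# Step 2: try to fetch each id the service claims to have created.
record_ids = put_response.json().get("recordIds", [])  # response field name assumed
missing = [record_id for record_id in record_ids
           if requests.get(f"{BASE_URL}/records/{record_id}", headers=HEADERS).status_code != 200]
print("ids reported as created but not retrievable:", missing)
```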
**Expected behavior**
Two behaviors are expected
1. Payload with 50 records should not fail
2. If for any reason the request fails, the error should be propagated back and an error returned instead of 201.

Milestone: M13 - Release 0.16
Assignee: Krishna Nikhil Vedurumudi

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/issues/2
Match Semantic Release to the OSDU Release Strategy
Siarhei Khaletski (EPAM), 2023-05-04

## Overview
Basically, the REST client comes with [open-etp-client-publish-npm](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/jobs/1234864) step. The current configuration of `semantic-release` doesn't match the OSDU Release strategy.
The step needs to be reviewed and appropriately configured.
## Issue Scope
- [ ] Match the ETP REST client to the OSDU Release Strategy
- [ ] Add publishing to Gitlab Registry step (`semantic-release: npm` plugin)
- [ ] Set version management only for `release/*` branches and tags

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/60
Implement missing fields following Wellbore Trajectory schema update
etienne peysson, 2022-06-21

Following the Wellbore Trajectory schema update (1.0.0 -> 1.1.0), we need to add the mapping for the following fields:
```
"AppliedOperations": [
"Example AppliedOperations"
],
"CompanyID": "namespace:master-data--Organisation:SomeUniqueOrganisationID:",
```

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/73
ACLs being overridden in CSV ingestor
Gauri Chitale, 2023-03-31

The IDs of the records generated are predetermined by using Natural Keys; refer to
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/blob/master/csv-parser-core/src/main/java/org/opengroup/osdu/csvparser/handler/handlers/IdHandler.java
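As a rough illustration (illustrative Python, not the IdHandler implementation) of why this matters for the scenario below: ids derived from natural keys are deterministic, so two different users ingesting the same natural key target the same record id.
```
import hashlib

def natural_key_record_id(partition, entity_type, natural_keys):
    """Derive a deterministic record id from the record's natural-key values."""
    digest = hashlib.sha256("|".join(natural_keys).encode("utf-8")).hexdigest()
    return f"{partition}:{entity_type}:{digest}"

# Both calls yield the same id, regardless of which user performs the ingestion.
print(natural_key_record_id("opendes", "master-data--Well", ["WELL-123", "FIELD-A"]))
print(natural_key_record_id("opendes", "master-data--Well", ["WELL-123", "FIELD-A"]))
```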
Now there are scenarios where a user is trying to update the same record that was created by another user. The user who is trying to update the record may not have access to the ACL associated with the existing record, but because we use a service principal token for our ingestion jobs, which has all the ACL accesses, the update operation goes through. This is not expected behavior. The next user can update the data as well as the ACL, which could result in total data loss for the original user.

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/issues/11
Support for refresh tokens
Jørgen Lind (jorgen.lind@3lc.ai), 2022-07-13

The OpenVDS tools support refresh_tokens instead of sd_tokens. When using refresh_tokens, new sd_tokens will be generated when the current sd_token expires. This makes it possible to have short-lived sd_tokens while still having lengthy import routines.
Specifically, SEGYImport requires the connection string to not specify the sd_token but instead to specify 5 other arguments delimited by a semicolon (;).
The arguments are:
* AuthTokenUrl
* ClientId
* ClientSecret
* Scopes
* RefreshToken
OpenVDS will use the additional arguments to make an "application/x-www-form-urlencoded" REST call to the AuthTokenUrl, using the arguments as form parameters, to generate a new access_token that will be used as the sd_token.
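A hedged sketch of that refresh call in Python (the form field names follow the standard OAuth2 refresh_token grant and are assumptions here; the actual OpenVDS implementation is linked below):
```
import requests

def refresh_sd_token(auth_token_url, client_id, client_secret, scopes, refresh_token):
    """Exchange a refresh_token for a fresh access_token to be used as the sd_token."""
    response = requests.post(
        auth_token_url,
        data={  # sent as application/x-www-form-urlencoded
            "grant_type": "refresh_token",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scopes,
            "refresh_token": refresh_token,
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]
```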
The code performing this task can be seen here:
[IORefreshToken.cpp](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/src/OpenVDS/IO/IORefreshToken.cpp)
This is how VDSInfo would look if it used refresh_tokens:
`VDSInfo --url sd://opendes/release13/sgy4/ABC91357 --connection "sdauthorityurl=https://some_url/osdu-seismic/api/v3;sdapikey=xx;AuthTokenUrl=some_auth_token_url;ClientId=some_client_id;ClientSecret=some_client_secret;Scopes=some_space_delimited_scope;RefreshToken=a_refresh_token"`
In most applications the sdapikey is ignored.
It seems the current ingestion DAG only supports sd_tokens, making it difficult to import large datasets.

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/42
[GCP] Seismic store doesn't use Partition Service to get a GCP project-id of Google Cloud Project
Yan Sushchynski (EPAM), 2023-03-27

The main problems are the following:
- See no signs that SSDMS uses Partition Service at all, it accepts requests with no data-partition-id header
- When we create SSDMS tenant, we have to specify `gcpid`, the project where data will be stored if we use this tenant in our `sd-path`.
It causes two problems:
- users have to know the actual `gcpid`
- users can specify a `gcpid` that doesn't correspond to the `data-partition-id`
Example of create tenant request:
```
{
"gcpid": "{{gcp_project_id}}",
"esd": "{{data-partition-id}}.osdu-gcp.go3-nrg.projects.epam.com",
"default_acl": "data.default.owners@{{data-partition-id}}.osdu-gcp.go3-nrg.projects.epam.com"
}
```
The solution is to use the Partition Service to get the GCP project-id, so users don't need to specify `gcpid` manually and the GCP project-id is always chosen correctly.
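A hedged sketch of the proposed lookup (the endpoint follows the standard Partition Service API; the `projectId` property name is an assumption and may differ per deployment):
```
import requests

def resolve_gcp_project_id(partition_base_url, data_partition_id, token):
    """Read the partition's properties from the Partition Service and return the GCP project id."""
    response = requests.get(
        f"{partition_base_url}/partitions/{data_partition_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": data_partition_id,
        },
    )
    response.raise_for_status()
    properties = response.json()
    return properties["projectId"]["value"]  # assumed property name
```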
cc:
@Kateryna_Kurach @Siarhei_Khaletski

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/36
WITSML ingestion integration with Wellbore DDMS
Chris Zhang, 2022-08-23

To support the end-to-end Wellbore DDMS workflow:
Today the WITSML parser just stores the file; it doesn't extract trajectories and logs for use by applications through the Wellbore DDMS. This issue tracks the work to integrate WITSML ingestion with the Wellbore DDMS to enable the end-to-end workflow.

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/152
Investigate batch upload capability for OSDU API
Levi Remington, 2022-11-30

Milestone: GCZ Sprint 24
Assignee: Bryan Gunter

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/61
WITSML Parser - Well Trajectory Failure
Vadzim Kulyba, 2022-08-31

```
[2022-08-29, 20:47:17 UTC] {validate_schema.py:322} ERROR - Schema validation error. Data field.
[2022-08-29, 20:47:17 UTC] {validate_schema.py:323} ERROR - Manifest kind: opendes:wks:work-product-component--WellboreTrajectory:1.1.0
[2022-08-29, 20:47:17 UTC] {validate_schema.py:324} ERROR - Error: 'Azi' does not match '^[\\w\\-\\.]+:reference-data\\-\\-TrajectoryStationPropertyType:[\\w\\-\\.\\:\\%]+:[0-9]*$'
Failed validating 'pattern' in schema['properties']['data']['allOf'][3]['properties']['AvailableTrajectoryStationProperties']['items']['properties']['TrajectoryStationPropertyTypeID']:
{'description': 'The reference to a trajectory station property type - '
'of if interpreted as channels, the curve or channel '
'name type, identifying e.g. MD, Inclination, Azimuth. '
'This is a relationship to a '
'reference-data--TrajectoryStationPropertyType record '
'id.',
'example': 'partition-id:reference-data--TrajectoryStationPropertyType:AzimuthTN:',
'pattern': '^[\\w\\-\\.]+:reference-data\\-\\-TrajectoryStationPropertyType:[\\w\\-\\.\\:\\%]+:[0-9]*$',
'title': 'Trajectory Station Property Type ID',
'type': 'string',
'x-osdu-relationship': [{'EntityType': 'TrajectoryStationPropertyType',
'GroupType': 'reference-data'}]}
On instance['data']['AvailableTrajectoryStationProperties'][0]['TrajectoryStationPropertyTypeID']:
'Azi'
```
This is the error log from the Azure DEMO validate_manifest_schema_task, but it is a common code issue because it also reproduces on GCP (cc @Yan_Sushchynski).
I think the main issue is inside the parser, in this line:
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/blob/master/energistics/src/witsml_parser/energistics/libs/energistics_parsers/witsml_2_0/trajectory_parser.py#L117
Because `tagname` doesn't match this schema pattern (cc @epeysson).

Milestone: M14 - Release 0.17

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/45
Decouple different cloud providers' requirements/dependencies
Yan Sushchynski (EPAM), 2023-07-06

Now, for Wellbore DMS there is a single `requirements.in` file for all cloud providers. It works well unless cloud-specific libraries depend on the same third parties with **different** versions; then there is a problem with executing `pip-compile requirements.in`.
Example:
```
There are incompatible versions in the resolved dependencies:
osdu-api~=0.15.0.dev (from osdu-core-lib-python-anthos==1.0.1->-r requirements.in (line 44))
osdu-api==0.14.0 (from osdu-core-lib-python-aws==1.0.1->-r requirements.in (line 43))
```
In the example above the error can be fixed by synchronizing the `osdu-api` library; however, a similar issue with the same third parties can occur in the cloud SDKs, so we are not able to fix those as easily.
As a solution I propose splitting the Wellbore DMS image build into two steps:
1. Build a base image with the basic requirements installed.
2. Build separate images with **cloud specific** requirements and dependencies based on the previous one.
The working example of such a two-step build is implemented [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/tree/master/build). First, we build the [base image](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/blob/master/build/Dockerfile#L23), and then the providers build their own images based on the previous one (e.g., [GCP](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/blob/master/build/Dockerfile#L51)).

Milestone: M14 - Release 0.17

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/issues/3
[Azure] Set up common pipeline for e2e testing
Siarhei Khaletski (EPAM), 2023-05-05

## Issue scope
Need to implement a common pipeline for e2e testing in the Gitlab pipeline ([Azure step placeholder](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-client/-/jobs/1239907)).
Postman collection [here](https://community.opengroup.org/osdu/platform/testing/-/tree/master/Dev/45_CICD_Reservoir_DDMS).

Milestone: M14 - Release 0.17

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/2
Explore the OPC UA client and server
Ashutosh Kumar, 2022-07-06

1: Explore OPC UA Client and server architecture.
2: Check the communication methods between them.

Milestone: M14 - Release 0.17
Assignee: Ashutosh Kumar

https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/200
0.13.0 Getting NoSuchMethodError at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22
Tsvetelina Ivanova, 2022-08-30

When using version 0.13.0 we get the exception NoSuchMethodError at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22 when trying to read files from blob storage.
In version 0.13.0 a new version of azure-storage-blob, 12.13.0, is introduced. In class BlobAsyncClientBase, on line 1069, a call to FluxUtil.createRetriableDownloadFlux() is made.
In class FluxUtil the method createRetriableDownloadFlux() does not exist. (This class is in the azure-core library.)
**Stack Trace:**
java.lang.NoSuchMethodError: com.azure.core.util.FluxUtil.createRetriableDownloadFlux(Ljava/util/function/Supplier;Ljava/util/function/BiFunction;IJ)Lreactor/core/publisher/Flux;
at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22(BlobAsyncClientBase.java:1069)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:192)
at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxDelaySubscription$DelaySubscriptionMainSubscriber.onNext(FluxDelaySubscription.java:189)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onNext(FluxTimeout.java:180)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2400)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onSubscribeInner(MonoFlatMapMany.java:150)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:189)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:165)
at reactor.netty.http.client.HttpClientConnect$HttpIOHandlerObserver.onStateChange(HttpClientConnect.java:414)
at reactor.netty.ReactorNetty$CompositeConnectionObserver.onStateChange(ReactorNetty.java:671)
at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onStateChange(DefaultPooledConnectionProvider.java:201)
at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnection.onStateChange(DefaultPooledConnectionProvider.java:457)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:637)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1371)
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1245)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.kqueue.AbstractKQueueStreamChannel$KQueueStreamUnsafe.readReady(AbstractKQueueStreamChannel.java:544)
at io.netty.channel.kqueue.AbstractKQueueChannel$AbstractKQueueUnsafe.readReady(AbstractKQueueChannel.java:381)
at io.netty.channel.kqueue.KQueueEventLoop.processReady(KQueueEventLoop.java:211)
at io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:289)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

Milestone: M14 - Release 0.17

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/167
Transformer - Gcz transformer run enhancement
Ankita Srivastava, 2022-11-30

1. Create a batch script to compile and run gcz - transformer

Milestone: GCZ Sprint 26
Assignee: Ankita Srivastava