# OSDU Software issues
Issue feed for https://community.opengroup.org/groups/osdu/-/issues (retrieved 2021-07-28T13:30:51Z)

## AWS Implementation of Energistics Parsers
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/6
Author: Ash Sathyaseelan · Updated: 2021-07-28T13:30:51Z

Sample issue

Milestone: M7 - Release 0.10 · Assignee: Greg · Due: 2021-04-09

## IBM Implementation of Energistics Parsers
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/16
Author: Jana Schey · Updated: 2021-07-28T13:41:05Z

Sample issue

Milestone: M7 - Release 0.10 · Assignee: Jay Hollingsworth · Due: 2021-04-09

## CSV parser program uses File service but should use Dataset service
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/10
Author: Spencer Sutton (suttonsp@amazon.com) · Updated: 2021-07-01T03:03:41Z

It looks like this uses the File service to pull down a CSV before doing the parsing logic. The Dataset service should be used when interacting with any bulk data or files via OSDU.

**I'm planning on updating this code to use the Dataset service instead; would this be mergeable when done?**

Milestone: M7 - Release 0.10 · Participants: ethiraj krishnamanaidu, Dania Kodeih (Microsoft), Joe · Assignee: ethiraj krishnamanaidu

## Workflow property validation should be done at the API level
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/92
Author: Matt Wise · Updated: 2021-07-27T14:46:45Z

Currently, it is left up to each provider implementation to properly validate fields like `WorkflowName`. This should instead be an API-level validation in the core code.

Example: `WorkflowName` regex check in provider code
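Since the issue asks for the `WorkflowName` check to move into the core API layer, a core-level guard could be sketched like this (the pattern and function name are illustrative, not the service's actual rules):

```python
import re

# Hypothetical pattern; the real constraint lives in the Workflow service code.
WORKFLOW_NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]{1,64}$")

def validate_workflow_name(name: str) -> None:
    """Reject malformed workflow names before any provider implementation runs."""
    if not isinstance(name, str) or not WORKFLOW_NAME_PATTERN.match(name):
        raise ValueError(f"Invalid WorkflowName: {name!r}")
```

Putting the check in one shared place keeps the behavior identical across all cloud providers.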
![image](/uploads/af09db72f6882c102853ab8751a70873/image.png)

Milestone: M7 - Release 0.10 · Participants: ethiraj krishnamanaidu, Dania Kodeih (Microsoft), Wladmir Frazao, Joe, Chris Zhang, Dmitriy Rudko, Spencer Sutton (suttonsp@amazon.com), Matt Wise · Assignee: ethiraj krishnamanaidu

## Workflow API authorization failure should throw 401/403 not 404
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/95
Author: Matt Wise · Updated: 2021-07-21T14:40:45Z

When a user is not valid or a valid user does not have the entitlements group required to call /api/workflow/v1/workflow, they are returned a 404 error. This should follow the entitlements convention and return a 401/403 depending on the case.
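The convention requested here can be summarized in a small status-mapping helper (a sketch; the real services raise framework-specific exceptions rather than return enums):

```python
from http import HTTPStatus

def auth_failure_status(authenticated: bool, has_entitlement: bool) -> HTTPStatus:
    """401 for an invalid/unauthenticated caller, 403 for a valid user
    who lacks the required entitlements group."""
    if not authenticated:
        return HTTPStatus.UNAUTHORIZED  # 401, not 404
    if not has_entitlement:
        return HTTPStatus.FORBIDDEN     # 403, not 404
    raise ValueError("not an authorization failure")
```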
![image](/uploads/d414580f319bcadb0fe7575fb8763ca4/image.png)

Milestone: M7 - Release 0.10 · Participants: Dania Kodeih (Microsoft), Wladmir Frazao, Joe, Dmitriy Rudko, Matt Wise, Alan Henson · Assignee: Dania Kodeih (Microsoft)

## CSV parser not deleting the workflow that is registered after integration tests
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/14
Author: Kishore Battula · Updated: 2021-07-01T03:03:44Z

The CSV Parser on Azure registers the parser through the Workflow service's register-workflow API. After the integration tests, the registered workflow must be deleted; otherwise each run creates new CSV workflows in the system, which slows down Airflow as it loads a huge number of DAGs at runtime.

Milestone: M7 - Release 0.10 · Assignee: Swapnil

## Schema Version Update Issue
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/40
Author: Yunhua Koglin · Updated: 2021-07-08T06:20:43Z

To reproduce the issue:
1. Register a new schema
2. Update the schema by bumping the patch (or minor) version and adding a new data property
3. Call the update schema endpooint

The call fails with:

```json
{
  "error": {
    "code": 400,
    "message": "Breaking changes found, please change schema major version",
    "errors": [
      {
        "domain": "global",
        "reason": "badRequest",
        "message": "Breaking changes found, please change schema major version"
      }
    ]
  }
}
```

Milestone: M7 - Release 0.10 · Assignee: Abhishek Kumar (SLB)

## Onboard Well Delivery DDMS
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/155
Author: Jason · Updated: 2022-08-23T10:47:29Z

**Service name**: `Well Delivery DDMS`
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery

The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [ ] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [ ] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required.
- [ ] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [ ] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [ ] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [ ] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [ ] Verify demo pipeline is passing in ADO.
**User Documentation**
- [ ] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [ ] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [ ] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [ ] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [ ] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [ ] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)

Milestone: M7 - Release 0.10 · Participants: Dmitriy Rudko, Sumra Zafar, Jason · Assignee: Dmitriy Rudko

## Operational readiness for policy service - bootstrap default policies
https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/10
Author: Hrvoje Markovic · Updated: 2022-08-23T11:19:18Z

Include default policies in the bootstrap of the system that ensure the same behavior we have now.

Milestone: M7 - Release 0.10

## Implement Legal Tag Update Workflow
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/161
Author: Abhishek Chowdhry · Updated: 2021-07-27T21:19:00Z

We need a system in place which will periodically evaluate the status of each legal tag as "valid" or "invalid" and update storage records accordingly. This system should also expose the events such that they can be consumed by other subscribers within and outside of OSDU.
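The periodic evaluation step described for the legal tags could be sketched as pure logic (names and the expiry-based rule are illustrative; the real validity rules live in the Legal service):

```python
from datetime import date

def evaluate_legal_tags(tag_expiry: dict, today: date) -> dict:
    """Classify each legal tag as 'valid' or 'invalid' (here: by expiry date)."""
    return {tag: ("valid" if expiry >= today else "invalid")
            for tag, expiry in tag_expiry.items()}

def records_to_flag(record_tags: dict, statuses: dict) -> list:
    """Return ids of storage records whose associated legal tag is now invalid."""
    return [rid for rid, tag in record_tags.items()
            if statuses.get(tag) == "invalid"]
```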
The changed legal tags will be sent to a new topic in EventGrid, from where they will be forwarded to a topic on Service Bus. The Storage service will poll the legal tags from the Service Bus topic and update the status of all the records associated with those tags.

Milestone: M7 - Release 0.10 · Assignee: Abhishek Chowdhry

## Architecture change- service resources- Add cosmos db and Storage account
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/163
Author: Aman Verma · Updated: 2022-08-23T10:47:29Z

An additional Cosmos DB and Storage account are needed in the services resource group to support shared schemas. This database/storage account would be in addition to all the partition-specific Cosmos DBs/storage accounts.
---
__Design__
1. We already have a module for cosmos db. The same can be leveraged to create cosmos db in service resources.
2. We already have a module for Storage account. The same can be leveraged to create Storage Account in service resources.
_Module Requirements_
- Required modules are already present
_Template Requirements_
- Database will be named with the suffix of "system" to distinguish from table or db
- Database will be created as part of the service Resource Template
- Database will be locked
- Database location and replication location will be consistent in naming patterns to Data Partitions
- Database by default will use the same type of throughput settings as other CosmosDBs.
- Storage account will be named with the suffix of "system" to distinguish from other SAs
- Storage account will be created as part of the service Resource Template
- Storage account will be locked
- Storage account location and replication location will be consistent in naming patterns to Data Partitions
---
__Acceptance Criteria__
1. Architecture Diagram Change
2. Modify Central service to add the additional database/storage account.
3. Ensure all Module Unit Tests Pass
4. Ensure all Template Unit Tests and Integration Tests Pass
5. Update all required documentation
cc: @polavishnu, @manishk

Milestone: M7 - Release 0.10 · Assignee: Aman Verma

## Revised WellLog
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/47
Author: Thomas Gehrmann [slb] · Updated: 2021-07-16T19:28:28Z

Bootstrap a revised WellLog version,

- [x] `osdu:wks:work-product-component--WellLog:1.1.0`
- [x] with related reference data type `osdu:wks:reference-data--WellLogSamplingDomainType:1.0.0`

Milestone: M7 - Release 0.10 · Assignee: Thomas Gehrmann [slb]

## Storage ACL validation (for Azure implementation) is performed against Entitlements V1 database
https://community.opengroup.org/osdu/platform/system/storage/-/issues/73
Author: Alok Joshi · Updated: 2021-07-27T21:16:00Z

Azure's [CloudStorageImpl](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/provider/storage-azure/src/main/java/org/opengroup/osdu/storage/provider/azure/CloudStorageImpl.java) class's write method calls `validateRecordAcls` to perform ACL validation.
The existence of the ACL is checked against the Entitlements V1 database, which is the issue. The ACL validation should be performed against the Entitlements service, so that the responsibility falls on the Entitlements service. If such an API is not available, the ACL validation should be done against the new Entitlements V2 database.

Milestone: M7 - Release 0.10 · Assignee: Neelesh Thakur

## Need for DELETE endpoint
https://community.opengroup.org/osdu/platform/system/file/-/issues/30
Author: Paresh Behede · Updated: 2021-06-29T09:41:44Z

Currently there is no way for a user to delete an already uploaded file from the data platform; if a user uploads the wrong file by mistake, that file cannot be deleted.

We must give users the ability to delete a metadata record and the file associated with it, so that they can delete files they uploaded whenever necessary.

A new endpoint in the File service could be DELETE /v2/files/{id}/metadata

Milestone: M7 - Release 0.10 · Assignee: Paresh Behede

## Adding new in partition service for Workflow Ingestion Service (storage account)
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/171
Author: Aalekh Jain · Updated: 2021-06-14T12:19:20Z

In order to support multiple partitions for the storage account in the workflow ingestion service, we need to add the following properties to the Partition service:

1. `ingest-storage-account-name`
2. `ingest-storage-account-key`

MR is raised here: !317

Changes in core-lib-azure are introduced here: https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/110

Milestone: M7 - Release 0.10

## opendes hardcoded in http scripts
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/173
Author: Kishore Battula · Updated: 2021-06-23T09:23:50Z

opendes is hardcoded in the http scripts even though a variable exists at the top of the scripts. This results in unexpected behavior when changing the data-partition-id.
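One illustrative way to locate and replace such hardcoded partition ids (demonstrated on a scratch file; in the repo the target would be `tools/rest/*.http`, and the `{{DATA_PARTITION}}` variable name is hypothetical):

```shell
# Create a scratch file standing in for one of the .http scripts.
printf 'data-partition-id: opendes\n' > /tmp/check.http

# Locate hardcoded partition ids.
grep -n "opendes" /tmp/check.http

# Swap them for the variable defined at the top of the script.
sed -i 's/opendes/{{DATA_PARTITION}}/g' /tmp/check.http
cat /tmp/check.http
```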
One of the hardcoded locations: https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/check.http#L202

Milestone: M7 - Release 0.10

## CSV Parser Enhancement - Improvement of search client to escape special characters
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/20
Author: Swapnil · Updated: 2021-07-08T16:33:45Z
## Improvement of search client to escape special characters
Change in the Search Client to escape special character reserved by the Search Service when building queries.
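An escape helper for the reserved characters listed below might look like this (a sketch; the real client may need to quote some characters instead of backslash-escaping them):

```python
import re

# Reserved characters from the issue description.
RESERVED = r"~`!@#$%^*()-_+={}[]|\/:;'<>,.?"
_RESERVED_RE = re.compile("([" + re.escape(RESERVED) + "])")

def escape_query(value: str) -> str:
    """Backslash-escape every reserved character before building a search query."""
    return _RESERVED_RE.sub(r"\\\1", value)
```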
The special characters are: ~ ` ! @ # $ % ^ * ( ) - _ + = { } [ ] | \ / : ; ' < > , . ?

Milestone: M7 - Release 0.10 · Participants: Swapnil, Fernando Nahu Cantera Rubio · Assignee: Swapnil

## Add support for nested Unit references
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/43
Author: Sanjeev-SLB · Updated: 2021-07-14T19:54:45Z

OSDU R3 schemas have introduced the notion of unit references (either to meta or external records) that live in the same structures as the data in the data block. This means the frame of reference service needs to be able to deal with these references, and this is especially an issue for nested array data where a unique path using "." notation can't be defined. Array notation would generally be used, e.g. data.VerticalMeasurement[4], etc.
In this solution the Frame Of reference (FOR) is always embedded in the meta block.
Homogeneous Cases: (We have to update specified fields for all the items in the nested array)
GIVEN an embedded FOR for an array of objects
WHEN I can declare without an index one or many properties within the objects e.g.
```json
{
  "kind": "Unit",
  "name": "ft",
  "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":0.3048,\"c\":1.0,\"d\":0.0},\"symbol\":\"ft\",\"baseMeasurement\":{\"ancestry\":\"L\",\"type\":\"UM\"},\"type\":\"UAD\"}",
  "unitOfMeasureID": "partition-id:reference-data--UnitOfMeasure:ft:",
  "propertyNames": [
    "VerticalMeasurement.VerticalMeasurement",
    "Markers[].MarkerMeasuredDepth",
    "Markers[].PositiveVerticalDelta",
    "Markers[].NegativeVerticalDelta"
  ]
}
```
THEN all properties declared for that object within that array use the same FOR and should be converted the same way (see attachment for expected result in Work product component 'normalized' example)
GIVEN an embedded FOR for an array of objects
WHEN I can declare without an index one or many properties within the objects e.g.
```json
{
  "kind": "Unit",
  "name": "m",
  "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":1.0,\"c\":1.0,\"d\":0.0},\"symbol\":\"m\",\"baseMeasurement\":{\"ancestry\":\"L\",\"type\":\"UM\"},\"type\":\"UAD\"}",
  "unitOfMeasureID": "namespace:reference-data--UnitOfMeasure:m:",
  "propertyNames": [
    "Markers[].MarkerMeasuredDepth"
  ]
}
```
AND WHEN object 2 has an error converting
THEN the error message should declare the specific object with the error e.g. error converting Markers[1].MarkerMeasuredDepth because ....
----
Inhomogeneous Cases: (We have to update the specified fields only for specific items in the nested array)
GIVEN an embedded FOR for an array of objects
WHEN I can declare with an index to a specific object in the array e.g. data.VerticalMeasurement[2]
```json
[
  {
    "kind": "Unit",
    "name": "ft",
    "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":0.3048,\"c\":1.0,\"d\":0.0},\"symbol\":\"ft\",\"baseMeasurement\":{\"ancestry\":\"L\",\"type\":\"UM\"},\"type\":\"UAD\"}",
    "unitOfMeasureID": "partition-id:reference-data--UnitOfMeasure:ft:",
    "propertyNames": [
      "VerticalMeasurement.VerticalMeasurement",
      "Markers[0].MarkerMeasuredDepth",
      "Markers[2].MarkerMeasuredDepth",
      "Markers[].PositiveVerticalDelta",
      "Markers[].NegativeVerticalDelta"
    ]
  },
  {
    "kind": "Unit",
    "name": "yd",
    "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":0.9144,\"c\":1.0,\"d\":0.0},\"symbol\":\"yd\",\"baseMeasurement\":{\"ancestry\":\"L\",\"type\":\"UM\"},\"type\":\"UAD\"}",
    "unitOfMeasureID": "partition-id:reference-data--UnitOfMeasure:yd:",
    "propertyNames": [
      "Markers[1].MarkerMeasuredDepth"
    ]
  }
]
```
THEN it only converts the object indexes declared (see attachment for expected result in Work product component 'normalized' example)
GIVEN an embedded FOR for an array of objects
WHEN I can declare the same FOR multiple times for the same object e.g.
```json
[
  {
    "kind": "Unit",
    "name": "rad",
    "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":1.0,\"c\":1.0,\"d\":0.0},\"symbol\":\"rad\",\"baseMeasurement\":{\"ancestry\":\"A\",\"type\":\"UM\"},\"type\":\"UAD\"}",
    "unitOfMeasureID": "namespace:reference-data--UnitOfMeasure:rad:",
    "propertyNames": [
      "Markers[].SurfaceDipAngle"
    ]
  },
  {
    "kind": "Unit",
    "name": "rad",
    "persistableReference": "{\"abcd\":{\"a\":0.0,\"b\":1.0,\"c\":1.0,\"d\":0.0},\"symbol\":\"rad\",\"baseMeasurement\":{\"ancestry\":\"A\",\"type\":\"UM\"},\"type\":\"UAD\"}",
    "unitOfMeasureID": "namespace:reference-data--UnitOfMeasure:rad:",
    "propertyNames": [
      "Markers[2].SurfaceDipAngle"
    ]
  }
]
```
THEN the last conversion declared wins, meaning all objects covered by the first declaration get converted using the first FOR and object [2] gets converted using the second FOR
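The `persistableReference` in these examples carries Energistics `abcd` coefficients, i.e. conversion toward the base unit via y = (A + B*x)/(C + D*x). A rough sketch of applying such a FOR to `Markers[].X` (all items) and `Markers[i].X` (one item) paths (helper names are hypothetical; error handling omitted):

```python
import json

def to_base(x: float, persistable_reference: str) -> float:
    """Convert x to the base unit using the abcd rule y = (a + b*x)/(c + d*x)."""
    c = json.loads(persistable_reference)["abcd"]
    return (c["a"] + c["b"] * x) / (c["c"] + c["d"] * x)

def expand_property(items: list, prop: str) -> list:
    """Resolve 'Markers[].Field' (every item) or 'Markers[2].Field' (one item)."""
    array_part, field = prop.split("].")
    idx = array_part.split("[")[1]
    indices = range(len(items)) if idx == "" else [int(idx)]
    return [(i, field) for i in indices]

# Illustrative data: convert all MarkerMeasuredDepth values from ft to the base unit.
FT = '{"abcd":{"a":0.0,"b":0.3048,"c":1.0,"d":0.0},"symbol":"ft"}'
markers = [{"MarkerMeasuredDepth": 100.0}, {"MarkerMeasuredDepth": 200.0}]
for i, field in expand_property(markers, "Markers[].MarkerMeasuredDepth"):
    markers[i][field] = to_base(markers[i][field], FT)
```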
Milestone: M7 - Release 0.10 · Assignee: Sanjeev-SLB

## Implement code changes for Manifest-based ingestion DAG to be compatible with Airflow 2.0 [GONRG-2591]
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/60
Author: Kateryna Kurach (EPAM) · Updated: 2021-07-14T12:37:20Z

Update the manifest-based ingestion DAG to work with Airflow 2.0.
https://jiraeu.epam.com/browse/GONRG-2591

Milestone: M7 - Release 0.10 · Assignee: Kateryna Kurach (EPAM)

## Implement code changes for WITSML DAG to be compatible with Airflow 2.0 [GONRG-2729]
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/61
Author: Kateryna Kurach (EPAM) · Updated: 2021-07-14T14:10:31Z

https://jiraeu.epam.com/browse/GONRG-2729

Milestone: M7 - Release 0.10 · Assignee: Siarhei Khaletski (EPAM)