OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues
2022-08-17T15:11:13Z

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/144
SPIKE: Investigate Wellbore and Well Delivery DDMS current status
2022-08-17T15:11:13Z
Levi Remington

Last we checked, the Wellbore and Well Delivery DDMS was meant to release in Alpha in April. We have now received an updated priority list for data type support, and the more complex items require the Wellbore and Well Delivery DDMS before GCZ can implement them.
This issue is to investigate the current status of the Wellbore and Well Delivery DDMS and get an update on the timeline/ETA for the soonest we can start implementing against the DDMS to achieve these data types.
Data types in question:
Well top/bottom
Well Path as a measured 3D line
Log as a measured 3D line

Milestone: GCZ Sprint 22

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/143
Provider - Bug: initial extent in AGO not reflecting extent of features
2022-09-13T15:27:19Z
Levi Remington

When viewing a Koop-provided layer in AGO, it should automatically zoom to the extent of all visible features. This was previously working, but for some reason it is not working now.
Need to investigate the initial query made by AGO to see how Provider is handling a `returnExtentOnly` query.
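For context, a minimal sketch of what answering a `returnExtentOnly` query involves, assuming GeoJSON-like point features (the function name and sample data are illustrative, not Provider code):

```python
# Hypothetical sketch: compute the extent a provider could return for a
# `returnExtentOnly` query over GeoJSON-like point features.

def compute_extent(features):
    """Return [xmin, ymin, xmax, ymax] over all point features."""
    xs = [f["geometry"]["coordinates"][0] for f in features]
    ys = [f["geometry"]["coordinates"][1] for f in features]
    return [min(xs), min(ys), max(xs), max(ys)]

features = [
    {"geometry": {"type": "Point", "coordinates": [-103.2, 31.6]}},
    {"geometry": {"type": "Point", "coordinates": [-102.8, 32.1]}},
]
print(compute_extent(features))  # [-103.2, 31.6, -102.8, 32.1]
```

If the provider returns an empty or default extent for this query, AGO will not zoom to the features, which would match the observed behavior.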
Acceptance Criteria:
- Opening a Koop-provided layer in AGO initializes with the accurate extent of features

Milestone: GCZ Sprint 24
Levi Remington

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/142
Wiki - Add documentation to explain logs & log configuration
2022-08-17T15:11:14Z
Levi Remington

Acceptance Criteria:
- GCZ Wiki has a section explaining the logs, the differences between log modes, and how to change log modes

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/339
AZURE M12 - All search requests failing with 500 errors
2022-08-23T11:28:07Z
Michael

All search requests are failing with 500 errors. Below is an example request that is failing:
```
curl --location --request POST 'https://osdu-ship.msft-osdu-test.org/api/search/v2/query' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer ...' \
--data-raw '{
"kind": "osdu:wks:master-data--Well:*"
}'
```
Response:
```
{
"code": 500,
"reason": "Persistence error",
"message": "Error generating token"
}
```
Did something happen in the AZURE M12 pre-shipping environment that is causing these requests to fail?

Milestone: M12 - Release 0.15
Krishna Nikhil Vedurumudi

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/112
Azure - shared partition should not have same name as created partition
2022-12-20T11:39:58Z
Arturo Hernandez [EPAM]

As of now, the schema service uses "opendes" as the shared partition. In the Azure architecture this saves schema records in the shared Cosmos DB, which causes duplication of schemas and prevents updating schemas until the shared partition is properly defined in the properties:
```
shared.tenant.name=opendes
###
azure.system.sharedTenant=${shared.tenant.name}
```
If we use opendes, the records will be saved in the shared Cosmos DB; however, when we try to upload schemas, opendes authority schemas already exist there, and we will not be able to update them.
I would suggest using the value `osdu` for the shared tenant.name, as most of the shared schema authorities use osdu.

Milestone: M14 - Release 0.17
Arturo Hernandez [EPAM], Igor Zimovets (EPAM)

https://community.opengroup.org/osdu/data/data-definitions/-/issues/42
SeismicTraceData - property names are very similar to one another, hence may be confusing. Please consider renaming one
2022-08-02T09:13:04Z
Debasis Chatterjee

**CreateionDateTime** - Date that a resource (work product component here) is formed outside of OSDU before loading (e.g. publication date).
This refers to when the SegY file was actually produced, by an interpretation program, a processing company, or even during modern-day acquisition.
As you know, there are also the audit-trail fields
**createTime** and createUser, which indicate when the record was created in the OSDU Data Platform/instance and by whom.
As you can see, these properties look very similar, and some users may make mistakes unknowingly.
**CreateionDateTime** and **createTime**
Please consider changing the first name as that is very specific to this work-product component.
cc @Keith_Wall

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/134
Read/write speed in 'tests/python/basic_tests/createtest.py'
2022-08-08T16:44:06Z
Ali Vaziri

Hi,
I used the test in the script 'tests/python/basic_tests/createtest.py' to measure the read/write speed (lines 44 and 47). I compared the results (in seconds) with [OpenZGY](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-zgy) for three cubes with 1024, 2048, 3072 points/direction and float32 data:
| Points/direction | 1024 | 2048 | 3072 |
|---|---|---|---|
| ZGY (write) | 10 | 81 | 344 |
| VDS (write) | 19 | 105 | 686 |
| ZGY (read) | 4 | 32 | 151 |
| VDS (read) | 1 | 11 | 331 |
I see that writing is slower than with OpenZGY (which is sequential); if there's any interest, I can provide the small script I used for timing OpenZGY.
Is the 'createtest.py' the most efficient way for reading/writing without compression? Thank you in advance!
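For reference, a generic timing harness of the kind used for such read/write comparisons might look like this (a sketch, not the author's script; the function being timed stands in for an OpenVDS/OpenZGY read or write call):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Stand-in workload; in practice fn would be the volume read/write call.
result, seconds = timed(sum, range(1_000_000))
print(result, seconds)
```

Using `time.perf_counter` rather than `time.time` avoids clock-adjustment skew for short intervals.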
Sincerely,\
Ali
P.S. I used 'BrickSize_512' as greater sizes are not allowed due to the 0x7FFFFFFF number hardcoded in 'DataBlock.cpp'.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/133
When I am doing Vdscopy from local windows machine using python script using subprocess getting
2022-08-25T04:10:53Z
sangamesh hooli

'[Dimensions_012LOD0/50: sdapi 3.14.0 - : Encountered network error when sending http request]\nVolumeDataAccessManager destructor: there where upload errors\n'

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/7
[Partition Client] Error while calling getParitionList & getPartitionInfo
2023-07-26T20:20:31Z
Ernesto Gutierrez

While issuing a request to get the partition list and info, the ETP server yields the following error:
```sh
[2022-Jul-28 17:10:04] Error: openETPServer: Error: basic_string::_M_construct null not valid
```
The issue appears to be a nullptr passed in the BaseClient::makeRequest call [link](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/blob/cbea307206591b27d0d9bf37d8811bc49bc97257/src/lib/oes/osdu/api/PartitionClient.cpp#L63).
This causes the pipeline to break in the [azure_deploy](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/jobs/1251583#L93) step.

Milestone: M13 - Release 0.16
Ernesto Gutierrez

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/6
CI/CD Pipeline fails. Ubuntu 21.04/ Ubuntu 21.10 are end of life
2022-08-01T20:42:59Z
Siarhei Khaletski (EPAM)

## Issue Details
Ubuntu 21.04/ Ubuntu 21.10 are end of life
It is not possible to build a docker image. CI/CD pipeline fails.
https://fridge.ubuntu.com/2022/01/21/ubuntu-21-04-hirsute-hippo-end-of-life-reached-on-january-20-2022/
https://ubuntu-news.org/2022/07/19/ubuntu-21-10-impish-indri-end-of-life-reached-on-july-14-2022/
![image](/uploads/41abedde13be138ef7b742cd2d132055/image.png)
## Issue Scope
We need to either fix the build or update the base Ubuntu image version.

Milestone: M13 - Release 0.16

https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/25
Exception handling
2022-07-28T19:09:58Z
Arsen Grigoryan

When we call the method getIdToken and pass invalid parameters (especially "clientSecret"), it throws a NullPointerException. To resolve this issue, logic was added that checks the response status: if it is not 200, an AppException is returned.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/71
Indexer not creating new index in Elasticsearch when new schema is added
2022-09-12T21:55:55Z
Yifei Xu

It was noticed that Elasticsearch indexes are not created when we register a schema. Instead, they are created when we ingest data for the first time. Index mappings are created automatically based on the ingested record, not based on the schema. Due to this behavior, many attributes and data types are not properly indexed.
We want to understand if this is the intended behavior in the core code logic. This was at least observed on AWS.
Steps to Reproduce:
- Create a new OSDU environment with sample data (except "osdu:wks:dataset--FileCollection.Generic:1.0.0" data)
- Search for the FileCollection schema: `{{osdu_base_url}}/api/schema-service/v1/schema/osdu:wks:dataset--FileCollection.Generic:1.0.0`. This will return the schema structure.
- Log in to the Elasticsearch container
- Run curl to list indices matching FileCollection: `curl -u elastic:<pwd> https://localhost:9200/_cat/indices -k | grep -i file`
- There will not be any index for FileCollection
- Use the Dataset Service to add a record for FileCollection without Data.DatasetProperties.FileSourceInfos
- Log in to the Elasticsearch container and search for the index using `curl -u elastic:<pwd> https://localhost:9200/_cat/indices -k | grep -i file`
- Now a new index will be created for FileCollection based on the payload and not the schema structure.
- The index will not have any mapping for Data.DatasetProperties.FileSourceInfos
Here are some important questions:
1. Should an index be created after a new schema is created?
1. If not, how will the index be created when a record is added (for cases with and without schema already present in the system)
1. What should happen to the index when the schema is updated?
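To make the first question concrete, creating the index at schema-registration time would mean deriving an explicit mapping from the schema instead of letting Elasticsearch infer one from the first record. A sketch (the type table and schema snippet are illustrative, not Indexer code):

```python
# Hypothetical sketch: derive an explicit Elasticsearch mapping from a
# JSON-schema-like property tree, so fields exist before any record is ingested.

def mapping_from_schema(props):
    es_types = {"string": "text", "number": "double",
                "integer": "long", "boolean": "boolean"}
    out = {}
    for name, spec in props.items():
        if spec.get("type") == "object":
            out[name] = {"properties": mapping_from_schema(spec.get("properties", {}))}
        else:
            out[name] = {"type": es_types.get(spec.get("type"), "keyword")}
    return out

schema_props = {
    "DatasetProperties": {
        "type": "object",
        "properties": {"FileSourceInfos": {"type": "string"}},
    }
}
print(mapping_from_schema(schema_props))
```

With such a mapping registered up front, `Data.DatasetProperties.FileSourceInfos` would be indexed even if the first ingested record omits it.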
@fhoueto.amz @gustavurda @debasisc @chad
Milestone: M14 - Release 0.17
Yifei Xu, Gustavo Urdaneta

https://community.opengroup.org/osdu/platform/system/storage/-/issues/136
Schema Validation Failed - Storage Service
2022-11-21T11:11:21Z
Samiullah Ghousudeen

Data ingestion through the `Storage PUT service` does not validate schema, kind & attributes.
As in the request below, we are able to ingest the `"TestAttribute": "Test-Sami"` attribute/value, which is not defined in the ContractorType reference-data WKS schema.
<details><summary> Storage PUT Request </summary>
<pre><code>
curl --location --request PUT 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1Qi ' \
--data-raw '[
{
"id": "opendes:reference-data--ContractorType:test-sami01",
"kind": "osdu:wks:reference-data--ContractorType:1.0.0",
"acl": {
"owners": [
"data.default.owners@opendes.contoso.com"
],
"viewers": [
"data.default.viewers@opendes.contoso.com"
]
},
"legal": {
"legaltags": [
"opendes-public-usa-dataset-7643990"
],
"otherRelevantDataCountries": [
"US"
]
},
"data": {
"Name2": "Well",
"ID2": "Well",
"Code2": "Well",
"Source2": "Workbook Published/FacilityTypeType.1.0.0.xlsx; commit SHA 0b4db59a.",
"TestAttribute" : "Test-Sami"
}
}
]'
</code></pre>
</details>
<details><summary> Storage GET Request </summary>
<pre><code>
{
"data": {
"Name2": "Well",
"ID2": "Well",
"Code2": "Well",
"Source2": "Workbook Published/FacilityTypeType.1.0.0.xlsx; commit SHA 0b4db59a.",
"TestAttribute": "Test-Sami"
},
"meta": null,
"id": "opendes:reference-data--ContractorType:test-sami01",
"version": 1658769507968280,
"kind": "osdu:wks:reference-data--ContractorType:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@opendes.contoso.com"
],
"owners": [
"data.default.owners@opendes.contoso.com"
]
},
"legal": {
"legaltags": [
"opendes-public-usa-dataset-7643990"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "preshipping@azureglobal1.onmicrosoft.com",
"createTime": "2022-07-05T17:06:37.282Z",
"modifyUser": "preshipping@azureglobal1.onmicrosoft.com",
"modifyTime": "2022-07-25T17:18:28.992Z"
}
</code></pre>
</details>
Also, we are able to ingest and fetch data through the Storage Service without creating the schema `osdu:wks:reference-data--ContractorTypeTestSami:1.0.0` in the OSDU system, as noticed below:
<details><summary> Storage PUT Request </summary>
<pre><code>
curl --location --request PUT 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSU ' \
--data-raw '[
{
"id": "opendes:reference-data--ContractorTypeTestSami:test-sami01",
"kind": "osdu:wks:reference-data--ContractorTypeTestSami:1.0.0",
"acl": {
"owners": [
"data.default.owners@opendes.contoso.com"
],
"viewers": [
"data.default.viewers@opendes.contoso.com"
]
},
"legal": {
"legaltags": [
"opendes-public-usa-dataset-7643990"
],
"otherRelevantDataCountries": [
"US"
]
},
"data": {
"Name2": "Well",
"ID2": "Well",
"Code2": "Well",
"Source2": "Workbook Published/FacilityTypeType.1.0.0.xlsx; commit SHA 0b4db59a.",
"TestAttribute" : "Test-Sami"
}
}
]'
</code></pre>
</details>
<details><summary> Storage GET Request </summary>
<pre><code>
{
"data": {
"Name2": "Well",
"ID2": "Well",
"Code2": "Well",
"Source2": "Workbook Published/FacilityTypeType.1.0.0.xlsx; commit SHA 0b4db59a.",
"TestAttribute": "Test-Sami"
},
"meta": null,
"id": "opendes:reference-data--ContractorTypeTestSami:test-sami01",
"version": 1658770548926014,
"kind": "osdu:wks:reference-data--ContractorTypeTestSami:1.0.0",
"acl": {
"viewers": [
"data.default.viewers@opendes.contoso.com"
],
"owners": [
"data.default.owners@opendes.contoso.com"
]
},
"legal": {
"legaltags": [
"opendes-public-usa-dataset-7643990"
],
"otherRelevantDataCountries": [
"US"
],
"status": "compliant"
},
"createUser": "preshipping@azureglobal1.onmicrosoft.com",
"createTime": "2022-07-25T17:35:49.251Z"
}
</code></pre>
</details>
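A sketch of the validation the reporter expects: comparing a record's data attributes against the declared schema properties and flagging anything unknown (attribute names are from the requests above; the schema property set is invented for illustration):

```python
# Hypothetical sketch of the missing check: reject data attributes that are
# not declared in the schema for the record's kind.

def unknown_attributes(record_data, schema_properties):
    """Return the sorted list of attributes not present in the schema."""
    return sorted(set(record_data) - set(schema_properties))

schema_properties = {"Name", "ID", "Code", "Source"}  # invented for illustration
record_data = {"Name2": "Well", "ID2": "Well", "Code2": "Well",
               "Source2": "Workbook Published/FacilityTypeType.1.0.0.xlsx",
               "TestAttribute": "Test-Sami"}
print(unknown_attributes(record_data, schema_properties))
```

A Storage PUT that performed this check could reject the request with a 400 listing the offending attributes, instead of persisting them silently.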
cc @chad @debasisc

https://community.opengroup.org/osdu/platform/system/notification/-/issues/42
Need for better Notification service GCP implementation "subscriber-control-topic" subscriptions management approach
2022-08-02T13:26:57Z
Rostislav Dublin (EPAM)

This issue is created from the @Rustam_Lotsmanenko comment in the MR https://community.opengroup.org/osdu/platform/system/notification/-/merge_requests/235#note_133946
The text of the comment is:
> "... we may consider discussing the logic of the Notification service related to subscription configuration ...
>
> I see two problems related to the current Notification service logic related to dynamic subscriptions, first is using a [message broker as a "source of truth"](https://community.opengroup.org/osdu/platform/system/notification/-/blob/master/provider/notification-gcp/src/main/java/org/opengroup/osdu/notification/provider/gcp/pubsub/OqmSubscriberManager.java#L208), and I believe we should move it from querying message broker to querying the database where info about subscriptions created via Register service should be.
>
> The second one is a bit heavy way of handling with dynamic creation of subscriptions per each service replica, maybe later we could consider to use a bit lightweight solution like [Redis Pub/Sub](https://redis.io/docs/manual/pubsub/), where each replica can open a listener channel that can be automatically closed on shutdown and does not require additional handling."
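The per-replica listener channel idea in the quoted comment can be sketched in-process as follows (this `Broker` class is a stand-in for Redis Pub/Sub, not code from the Notification service):

```python
# In-process sketch: each replica opens a listener channel that can be closed
# on shutdown. Redis Pub/Sub would play the role of this Broker class.

class Broker:
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, callback):
        """Open a listener channel for this replica."""
        self.channels.setdefault(channel, []).append(callback)

    def unsubscribe(self, channel, callback):
        """Close the channel, e.g. on replica shutdown."""
        self.channels.get(channel, []).remove(callback)

    def publish(self, channel, message):
        for callback in self.channels.get(channel, []):
            callback(message)

received = []
broker = Broker()
broker.subscribe("subscriber-control-topic", received.append)
broker.publish("subscriber-control-topic", "subscription-created")
print(received)  # ['subscription-created']
```

The point of the lightweight approach is that subscription state lives in the database (populated via the Register service), while the channel only carries change notifications to replicas.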
I invite dear colleagues @andrei_dalhikh @Yauhen_Shaliou @Rustam_Lotsmanenko @Stanislav_Riabokon to the discussion.
I will share my opinion right away as a first comment on the issue.

Milestone: M14 - Release 0.17
Andrei Dalhikh [EPAM/GC]
2022-08-01

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/64
[azure] list in a sub-project might return datasets from other sub-projects
2022-07-25T15:48:22Z
Sacha Brants

If two datasets share the same prefix (sandbox, sandbox-test), then calling the list API to get all the datasets in one sub-project will return the datasets from both sub-projects.
This applies to the Azure implementation only.
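The suspected cause can be illustrated with naive prefix matching, where a delimiter-aware prefix fixes the over-match (dataset paths and sub-project names are invented for illustration):

```python
# Sketch of the bug: "sandbox-test" datasets match a bare "sandbox" prefix.
# Appending the path delimiter to the prefix restricts the match correctly.

datasets = ["sandbox/ds1", "sandbox/ds2", "sandbox-test/ds3"]

def list_naive(subproject):
    return [d for d in datasets if d.startswith(subproject)]

def list_fixed(subproject):
    return [d for d in datasets if d.startswith(subproject + "/")]

print(list_naive("sandbox"))  # ['sandbox/ds1', 'sandbox/ds2', 'sandbox-test/ds3']
print(list_fixed("sandbox"))  # ['sandbox/ds1', 'sandbox/ds2']
```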
Milestone: M12 - Release 0.15

https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/issues/13
SegY to oVDS conversion in sdstore
2023-03-20T11:46:40Z
Chad Leong

## Introduction
For oVDS conversion, the converter supports native cloud location and sdstore as long as the location of the segy is provided in the segy_file.
The practice is for all segy/oVDS/oZGY files to be stored in the sdstore so that applications can access the seismic data via the sdms API.
All CSPs should ensure the following implementation of segy-oVDS conversion in the sdstore. Here are some examples of the expected implementation of the workflow from IBM, GCP, and Azure.
Example working procedure - postman collection from IBM: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M12/IBM-M12/M12-IBM_ODI_R3_v2.0.1_SEGY-to-Open_VDS_Conversion_Collection.postman_collection.json
Example working procedure - postman collection from GCP: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M12/GCP-M12/OpenVDS_SSDMS_to_SSDMS_conversion_CI-CD.postman_collection.json
## Status
- [x] AWS - M16
- [x] IBM - Working
- [x] GCP - Working
- [x] Azure - Added in M13
## Discrepancies observed in AWS
```json
{
"executionContext": {
"data-partition-id": "{{data_partition_id}}",
"url_connection": "Region={{vdsUrlConnectionStringRegion}};AccessKeyId={{vdsUrlConnectionStringAccessKeyId}};SecretKey={{vdsUrlConnectionStringSecretAccessKey}};SessionToken={{vdsUrlConnectionStringSessionToken}};Expiration={{vdsUrlConnectionStringExpiration}}",
"input_connection": "Region={{vdsInputConnectionStringRegion}};AccessKeyId={{vdsInputConnectionStringAccessKeyId}};SecretKey={{vdsInputConnectionStringSecretAccessKey}};SessionToken={{vdsInputConnectionStringSessionToken}};Expiration={{vdsInputConnectionStringExpiration}}",
"segy_file": "{{fileSource}}",
"url": ""
}
}
```
You can see that the executionContext is missing several keys, such as:
```
"work_product_id": "{{work-product-id}}",
"file_record_id": "{{file-record-id}}",
"persistent_id": "{{vds_id}}",
"id_token": "{{id_token}}"
```
These ids are needed to correctly fetch the seismic parameters (e.g. Inline, crossline etc.) to perform the oVDS conversion.
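A pre-flight check for these keys could be sketched as follows (the required-key list is taken from this issue; the AWS context shown is abridged and the values are placeholders):

```python
# Hypothetical sketch: verify an executionContext carries the keys needed to
# fetch the seismic parameters before launching the oVDS conversion.

REQUIRED_KEYS = {"work_product_id", "file_record_id", "persistent_id", "id_token"}

def missing_keys(execution_context):
    """Return the sorted list of required keys absent from the context."""
    return sorted(REQUIRED_KEYS - set(execution_context))

# Abridged version of the AWS executionContext from this issue.
aws_context = {"data-partition-id": "opendes", "segy_file": "...", "url": ""}
print(missing_keys(aws_context))
```

Failing fast with this list would surface the discrepancy at workflow submission instead of mid-conversion.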
The expected implementation is observed in IBM / GCP / Azure [M13]:
```json
{
"executionContext": {
"Payload": {
"AppKey": "test-app",
"data-partition-id": "{{data-partition-id}}"
},
"vds_url": "{{test_vds_url}}",
"work_product_id": "{{work-product-id}}",
"file_record_id": "{{file-record-id}}",
"persistent_id": "{{vds_id}}",
"id_token": "{{id_token}}"
}
}
}
```

Milestone: M16 - Release 0.19
Okoun-Ola Fabien Houeto

https://community.opengroup.org/osdu/governance/project-management-committee/-/issues/9
Retrospective session for M12
2023-04-14T13:09:00Z
Chad Leong

- What went well in M12?
- What could be improved in M12?
- What didn't work in M12?
- What will we commit to improving in upcoming M13?

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/110
OSDU-DD-M13-Delivery
2022-08-10T17:21:25Z
Thomas Gehrmann [slb]

This issue stands for the integration of the schema bootstrapping resources delivered by OSDU Data Definitions for Milestone 13, 0.16.

Milestone: M13 - Release 0.16
Thomas Gehrmann [slb]

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/61
Consider migration to FastAPI framework
2022-09-15T15:34:08Z
Shane Hutchins
- Data validation (via Pydantic), serialization and deserialization (for buildin...We are currently using Flask v1 for policy service. As discussed in 2022/07/20 E&O dev meeting, there are a number of benefits for using FastAPI over Flask.
- Data validation (via Pydantic), serialization and deserialization (for building an API)
- Blazing performance over Flask
- Async request capability
- Automatic documentation (via JSON Schema and OpenAPI)
In addition, while lightweight and easy to use, Flask's built-in server (which we are using) is not suitable for production: it doesn't scale well and by default serves only one request at a time.

Milestone: M14 - Release 0.17
Shane Hutchins

https://community.opengroup.org/osdu/data/data-definitions/-/issues/41
Publish DD M13
2022-07-22T16:54:39Z
Thomas Gehrmann [slb]

Publish the Data Definition M13 contribution on the community mirror site for public consumption.