Wellbore Domain Services issues
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues

---

Issue #92: (WIP) ADR: Adding /formationpressuretests Endpoint to OSDU Wellbore DDMS Service
Author: Rostislav Dublin (EPAM) | Updated: 2024-03-13
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/92

# Adding /formationpressuretests endpoints to OSDU Wellbore DDMS Service
## Context
The OSDU Wellbore DDMS service currently includes /welllogs and
/welllogs/data endpoints. These endpoints store Welllog records in
the Catalog in compliance with the work-product-component--WellLog:1.x.y
schema and their bulk data in the DDMS storage.
However, there are currently no equivalent endpoints for Formation
Pressure Tests (FPT). Given the crucial role of FPT data in subsurface
exploration, direct access to and management of this data within
the OSDU environment is essential.
## Scope
The proposed /formationpressuretests and /formationpressuretests/data
endpoints aim to extend the capabilities of the OSDU Wellbore DDMS
Service. These additions will allow users to retrieve formation
pressure test records related to specific wellbores.
The endpoints will be compatible with the proposed
work-product-component--FormationPressureTests:1.x.y schema. This new
FPT schema will largely mirror the existing structure of the Welllog
schema. An essential structural similarity is the use of the
`data.Curves[]` array in the Welllog schema and the `data.Stations[]`
array in the FPT schema. These arrays define the columns, and the
column properties, of the actual bulk data array belonging to the
respective Welllog/FPT records.
The functionality of these new endpoints will mirror that of the
existing /welllogs endpoint, providing a consistent and expanded user
experience.
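To make the structural parallel concrete, the sketch below shows a hypothetical FPT record skeleton next to the role `data.Curves[]` plays in WellLog. All field names here are illustrative assumptions, not the ratified schema:

```python
# Hypothetical FPT record skeleton (illustrative only; the ratified
# work-product-component--FormationPressureTests schema may differ).
fpt_record = {
    "kind": "osdu:wks:work-product-component--FormationPressureTests:1.0.0",
    "data": {
        # data.Stations[] plays the same role as data.Curves[] in WellLog:
        # each entry describes one column of the record's bulk data array.
        "Stations": [
            {"StationID": "MD", "Unit": "m"},
            {"StationID": "FormationPressure", "Unit": "kPa"},
        ],
    },
}

# The bulk data array then has one column per declared station.
bulk_columns = [s["StationID"] for s in fpt_record["data"]["Stations"]]
```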
## Trade-off Analysis
- *Adding /formationpressuretests and /formationpressuretests/data Endpoints*:
While direct access to FPT records will streamline the user experience, additional
development and maintenance resources will be required.
- *Not Adding /formationpressuretests and /formationpressuretests/data Endpoints*:
While this avoids the immediate need for additional development resources, it results
in a fragmented user experience and inefficiencies whenever users need to access FPT data.
## Decision
Given the vital role FPT data plays in subsurface workflows and the need for a
more consistent user experience, we propose to extend the OSDU Wellbore
DDMS service by adding the /formationpressuretests and
/formationpressuretests/data endpoints.

---

Issue #91: Add "/readiness" endpoint
Author: Yan Sushchynski (EPAM) | Updated: 2024-02-19
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/91

Having a distinct endpoint for readiness probes, signifying that the service is ready to receive traffic, would be a great enhancement. In our scenario, this endpoint could invoke the readiness endpoints of the Storage and Entitlements services. This process ensures all Wellbore DMS dependencies are ready, affirming Wellbore readiness.
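The aggregation described above could be sketched as follows. This is a framework-agnostic sketch; the `storage_ok`/`entitlements_ok` probes are stand-ins for real calls to those services:

```python
import asyncio

async def _probe(name, check):
    """Run one dependency check, mapping any exception to 'not ready'."""
    try:
        return name, bool(await check())
    except Exception:
        return name, False

async def readiness(probes):
    """Aggregate dependency probes into a single readiness verdict.

    probes: mapping of dependency name -> async callable returning truthy
    when that dependency (e.g. Storage, Entitlements) is ready.
    """
    pairs = await asyncio.gather(*(_probe(n, c) for n, c in probes.items()))
    results = dict(pairs)
    return {"ready": all(results.values()), "dependencies": results}

async def storage_ok():       # stand-in for a real Storage readiness call
    return True

async def entitlements_ok():  # stand-in for a real Entitlements readiness call
    return True

status = asyncio.run(readiness({"storage": storage_ok,
                                "entitlements": entitlements_ok}))
```

A Kubernetes readinessProbe would then simply hit the endpoint wrapping this function and gate traffic on the aggregated verdict.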
More info about the endpoint: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

---

Issue #89: POST "content" data step to provide clear status message showing "wdms URI" for content data in Apache Parquet file
Author: Debasis Chatterjee | Updated: 2024-01-30
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/89

POST https://{{WELLBORE_DDMS_HOST}}/ddms/v3/welllogs/osdu:work-product-component--WellLog:AutoTest_999842309177/data
can show us more information like WDMS URI and number of rows added, time taken etc.
**Body**
```
{
"columns": [
"GR_ID",
"POR_ID",
"Bulk Density"
],
"index": [
0,
1,
2,
3,
4
],
"data": [
[
0,
1111.1,
2222.1
],
...
```
**Response**
```
{
"recordCount": 1,
"recordIdVersions": [
"osdu:work-product-component--WellLog:AutoTest_999842309177:1706122790284229"
],
"recordIds": [
"osdu:work-product-component--WellLog:AutoTest_999842309177"
],
"skippedRecordIds": []
}
```
We get final confirmation only when we retrieve the catalog record or retrieve the content data using the Domain API.
The catalog record shows the WDMS URI:
GET https://{{WELLBORE_DDMS_HOST}}/ddms/v3/welllogs/osdu:work-product-component--WellLog:AutoTest_999842309177
```
"wdms": {
"bulkURI": "urn:wdms-1:uuid:d10726c6-b094-4646-8ad4-0e7b94f52e13"
}
}
```
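The enrichment requested above (WDMS URI, row count, timing in the POST response) could look like the following sketch. All the added field names are hypothetical, not an agreed contract:

```python
# Sketch of the requested enrichment: today's response fields plus the
# WDMS URI and basic write statistics (added names are hypothetical).
enriched_response = {
    "recordCount": 1,
    "recordIds": ["osdu:work-product-component--WellLog:AutoTest_999842309177"],
    "recordIdVersions": [
        "osdu:work-product-component--WellLog:AutoTest_999842309177:1706122790284229"
    ],
    "skippedRecordIds": [],
    # Proposed additions:
    "bulkURI": "urn:wdms-1:uuid:d10726c6-b094-4646-8ad4-0e7b94f52e13",
    "rowsWritten": 5,
    "elapsedMs": 1234,
}
```

With a response of this shape, the extra GET of the catalog record would no longer be needed just to learn the bulk URI.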
cc @cmonmouton

---

Issue #88: Option to provide TVD-based log data by using Trajectory data
Author: Debasis Chatterjee | Updated: 2023-12-11
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/88

Assume both trajectory station data and well log curves have been properly populated as "optimized content" (Parquet).
Do you think it is reasonable to expect a new API end-point to present TVD-converted log data?
It would be important to provide conversion algorithm information too, similar to the provision for CRS conversion (Spatial block).
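For reference, the core of such an MD-to-TVD conversion is commonly the minimum-curvature method applied to the trajectory stations. A self-contained sketch (not the service's actual implementation):

```python
import math

def tvd_from_stations(stations):
    """Compute TVD at each survey station with the minimum-curvature method.

    stations: list of (md, inclination_deg, azimuth_deg) tuples, ordered by MD.
    Returns a list of TVD values relative to the first station.
    """
    tvds = [0.0]
    for (md1, i1, a1), (md2, i2, a2) in zip(stations, stations[1:]):
        i1r, i2r = math.radians(i1), math.radians(i2)
        a1r, a2r = math.radians(a1), math.radians(a2)
        # Dogleg angle between the two stations.
        cos_dl = (math.cos(i2r - i1r)
                  - math.sin(i1r) * math.sin(i2r) * (1 - math.cos(a2r - a1r)))
        dl = math.acos(max(-1.0, min(1.0, cos_dl)))
        # Ratio factor; tends to 1 as the dogleg tends to 0.
        rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)
        tvds.append(tvds[-1] + (md2 - md1) / 2.0
                    * (math.cos(i1r) + math.cos(i2r)) * rf)
    return tvds
```

Log samples between stations would then be interpolated against these station TVDs; exposing the method name (e.g. "minimum curvature") in the response is the kind of algorithm information the issue asks for.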
cc @deny (Please add more details as required)

---

Issue #87: 500 Error on Patch WellLog Session. Baremetal
Author: Yan Sushchynski (EPAM) | Updated: 2024-01-08
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/87

Hello,
Our environment is Baremetal, which basically uses S3 (MinIO); the service is from the release/0.24 branch. The Postman collection is here: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M21/QA_Artifacts_M21/envFilesAndCollections/Wellbore%20DDMS%20CI-CD%20v3.0.postman_collection.json?ref_type=heads.
We send the following request:
```
curl --location --request PATCH 'https://<host>/api/os-wellbore-ddms/ddms/v3/welllogs/<well-log-id>/sessions/<s-id>' \
--header 'data-partition-id:<data-partition-id>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <token>' \
--data '{
"state": "commit"
}'
```
And it returns the following response:
```
500
{
"error": [
"Access Denied."
]
}
```
We have found out that the service is attempting to put a bucket while serving the response:
![image](/uploads/be9b56902ac07b54945a0cecf185ddf2/image.png)
Thanks.

Milestone: M21 - Release 0.24

---

Issue #86: Provide Data R/W endpoint for Marker "content" (not "catalog") data (see use case of AvailableMarkerProperties)
Author: Debasis Chatterjee | Updated: 2023-11-15
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/86

This link shows a typical use case where such external data may be persisted as "optimized content", similar to Well Log Curve or Trajectory Station properties.
https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/WorkedExamples/WellboreMarkerSet/README.md

---

Issue #84: Challenge ingesting certain subfiles in LIS well log format
Author: Bjørn Harald Fotland | Updated: 2023-11-14
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/84

Hi,
we are working with an operator performing bulk ingestion of different types of well log file formats (LAS, DLIS, LIS). We have met some challenges with the LIS files originating from DISKOS.
A LIS file may consist of several subfiles with metadata and data, each one corresponding to an OSDU WellLog record and data for each curve.
What we observe is that a number of LIS subfiles do not fulfil the requirements of the Wellbore DDMS regarding ReferenceCurve properties: the ReferenceCurve must be sorted (increasing or decreasing) and the reference curve samples must be unique. The reference curve samples are typically depths.
Example: repeated depth samples where the other curve samples have different values for the same depth.
DEPTH    GR    CALI
2299.0   ..    ..
2300.0   10.0  9.0
2300.0   20.0  8.0
2301.0   ..    ..
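Detecting the offending samples is straightforward. The sketch below also shows one possible workaround that keeps every curve sample at the cost of perturbing repeated depths by a small epsilon, on the assumption (to be validated per workflow) that such offsets are acceptable downstream:

```python
from collections import Counter

depths = [2299.0, 2300.0, 2300.0, 2301.0]

# Detect reference values that violate the uniqueness requirement.
duplicates = sorted(d for d, n in Counter(depths).items() if n > 1)

def make_unique_increasing(values, eps=1e-3):
    """Disambiguate repeated reference values by adding k*eps to the k-th
    repeat, so an ascending reference becomes strictly increasing while
    keeping every curve sample. Note: the depths are perturbed, which may
    or may not be acceptable for a given workflow."""
    seen = Counter()
    out = []
    for v in values:
        out.append(v + seen[v] * eps)
        seen[v] += 1
    return out

fixed = make_unique_increasing(depths)
```

The original depths could additionally be preserved as an extra curve, so a later WellLog version can restore or properly merge the repeated samples.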
In this case it is challenging to find an approach to ingest the curve data into the Wellbore DDMS without losing data fidelity or introducing manual processes.
Given that the data need to be loaded into the Wellbore DDMS, what would be the recommended approach?
Ideally, the data would be kept in the first version of the WellLog, with the reference curve updated/fixed in a later WellLog version or a new WellLog record, using applications/workflows performed on top of OSDU.

---

Issue #81: Provide option to load Trajectory data from CSV source
Author: Debasis Chatterjee | Updated: 2023-09-08
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/81

Often source data is available in CSV.
For now, Data Loader has an extra step to reformat existing data from CSV into JSON format.
It would be beneficial to provide this option for "Post data" (Wellbore Trajectory).
Typical use case.
Header row - indicating available fields such as Depth, Inclination, Azimuth.
Following rows contain the actual trajectory data.

---

Issue #80: purge data records generated from e2e
Author: Yunhua Koglin | Updated: 2023-08-15
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/80

Currently, running e2e creates deleted records in storage; it does not purge the records.
e2e should clean up storage at the end.

---

Issue #78: Anthos/Baremetal. (NoSuchKey) when calling "get welllog data"
Author: Yan Sushchynski (EPAM) | Updated: 2023-10-04
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/78

Hello,
Postman Environment: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M19/QA_Artifacts_M19/envFilesAndCollections/envFiles/OSDU%20R3%20M19%20RI%20Pre-ship.postman_environment.json Postman Collection: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M19/QA_Artifacts_M19/envFilesAndCollections/Wellbore%20DDMS%20CI-CD%20v3.0.postman_collection.json.
Steps to reproduce:
1. Create a WellLog
2. Post the WellLog data
3. Get the WellLog data.
The logs show that when we post the well log data, a new folder and a parquet file are created:
```log
DEBUG:Sending http request: <AWSPreparedRequest stream_output=False, method=PUT, url=https://s3.ref.gcp.gnrg-osdu.projects.epam.com/wellbore/logstore-osdu/9ee8ed74df9b8efb695f376771eea3e707b66753/bulk/2c0429ad-b4a1-4a70-a17e-bb08cc245f3f/data/0_4_1691662228355.e70a959cea89c6147785c7fa57cde5be8b6dc250.parquet
```
And then, when we want to get the data, it attempts to get an absent `bulk_catalog.json`:
```log
DEBUG:Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=https://s3.ref.gcp.gnrg-osdu.projects.epam.com/wellbore/logstore-osdu/9ee8ed74df9b8efb695f376771eea3e707b66753/bulk/2c0429ad-b4a1-4a70-a17e-bb08cc245f3f/data/bulk_catalog.json
```
Linked issue: https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/568

Milestone: M19 - Release 0.22

---

Issue #77: Use of DDMSDatasets construct for linking "catalog" and "content"
Author: Debasis Chatterjee | Updated: 2024-01-08
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/77

When testing in M19/Preship/Azure, I noticed that it is still the old style of linking from WellLog work-product component to parquet.
```
"ExtensionProperties": {
"step": {
"unitKey": "ft",
"value": 0.1
},
"dateModified": "2013-03-22T11:16:03Z",
"wdms": {
"bulkURI": "urn:wdms-1:uuid:5ed7e9ef-b94e-4ba0-9b4b-c9acf03edb99"
}
}
```
whereas the current approach seems to be using DDMSDatasets.
See here
https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Examples/work-product-component/WellLog.1.4.0.json
```
"DDMSDatasets": [
"urn://wddms-3/uuid:20840361-adc0-4842-999b-5639bd07bb38",
"eml://rddms-1/dataspace('demo/Volve')/resqml20.obj_ContinuousProperty(1615d8d2-2a2d-482c-885e-14225b89e90c)"
],
```
Please advise when it is planned to switch.
cc @chad

---

Issue #75: Log Recognition / Family call fails with 503 error
Author: Mariel Herzog | Updated: 2023-07-19
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/75

Wellbore Log Recognition / Family call fails with 503 error intermittently.
1) Need to understand and resolve the error.
2) Additionally, it is critical to understand if during this failure there is any data loss or lack of data mapping.
3) Ask to enhance documentation from the domain side for how the call works today.

---

Issue #73: ADR: Worker Service for Wellbore Bulk Data Access
Author: Kin Jin Ng | Updated: 2024-01-10
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/73

## Status
- [X] Proposed
- [X] Trialing
- [X] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
Currently, as of M16, Wellbore DDMS is experiencing performance challenges involving WellLogs operations with large bulk data (>1 Gb), especially on data reading.
It was also observed that Wellbore DDMS requires a significant amount of memory in comparison to the amount of data manipulated to serve incoming requests.
See issues #21 and #27.
Wellbore DDMS is composed of a general main service, which is responsible for handling both client-facing API requests and data access operations to the underlying bulk data store.
In turn, the bulk data management implementation in WDDMS is heavily based on [Dask](https://www.dask.org/).
For instance, for a large WellLog dataset stored in Wellbore DDMS, the associated data will not be located in an individual parquet file, but rather distributed across several distinct parquet files.
When a request is received to retrieve the bulk data associated with a specific subset of WellLog curves,
with or without the optional reference range, Dask is used to process all the required parquet files,
across which the queried data is stored, and to extract the cropped data corresponding to the selected curves and range from the WellLog dataset.
All operations in the described workflow are executed end to end in the same container for a given request.
Though the main-service approach and Dask's capabilities provide a simple and straightforward deployment,
previous analysis identified that this pairing places considerable limitations on
Wellbore DDMS' performance and scalability.
## Trade-off Analysis
The standard Python framework already offers good support for I/O-bound operations (see [asyncio](https://docs.python.org/3/library/asyncio.html));
however, it becomes more complex to deal with CPU-bound operations and data transformations, and Dask brings a first answer to that.
For instance, when reading and writing large WellLog datasets, Dask provides a concise and straightforward implementation to reconcile data from multiple parquet files.
Nevertheless, while Dask appears to be a good solution for heavy computation, in most of WDDMS' supported scenarios of data queries/filters,
Wellbore DDMS is primarily constrained by I/O operations rather than by data transformation operations.
Additionally, Dask showed itself not to be efficient when handling many queries involving smaller amounts of data,
as its minimum required memory footprint does not scale down for smaller volumes of data.
The Dask cluster is implemented as a process-based local cluster, which also brings several issues:
- Dask workers are internal to the pods and therefore cannot be shared with other WDDMS service instances.
- Scaling/resource requests are done indirectly through WDDMS, not through the Dask workers.
- Dask workers are process forks of WDDMS, which leads to unnecessary memory usage even at startup or when idle.
Finally, we spotted several memory leaks within Dask, and there are [several open memory-management issues in Dask's GitHub](https://github.com/dask/distributed/issues?q=is%3Aissue+is%3Aopen+label%3Amemory+).
## Decision
Dask remains a great tool, but it does not fit the needs of WDDMS. Therefore Dask will be removed and
replaced by a new dedicated service responsible for bulk data access only, called the _wddms bulk data worker service_.
The _wddms bulk data worker service_ will be specialized in bulk I/O and bulk data manipulation (transformation, filtering), while the WDDMS main service will keep all domain knowledge/responsibility, such as metadata manipulation and consistency rules, but
will delegate bulk data operations to the _wddms bulk worker service_.
The _wddms bulk worker service_ will not use Dask at all. This means the [current bulk data access layer](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/tree/master/app/bulk_persistence)
in WDDMS will not be moved as-is into the new dedicated service, but reworked and tailored to WDDMS' specific needs.
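The intended division of labor can be illustrated with a minimal fan-out/fan-in sketch: chunks are read and column-projected concurrently, then concatenated. In-memory dicts stand in for the distributed parquet files; in the real service a pyarrow-style parquet reader would replace the stub:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a dataset distributed across several parquet files:
# each "chunk" maps curve name -> values for a contiguous index range.
CHUNKS = {
    "chunk_0": {"DEPTH": [0, 1, 2], "GR": [10.0, 11.0, 12.0], "CALI": [8.0, 8.1, 8.2]},
    "chunk_1": {"DEPTH": [3, 4], "GR": [13.0, 14.0], "CALI": [8.3, 8.4]},
}

def read_chunk(name, columns):
    """I/O-bound read of one chunk with column projection (curve selection).
    In WDDMS this would be a parquet read, e.g. via pyarrow."""
    chunk = CHUNKS[name]
    return {c: chunk[c] for c in columns}

def read_curves(chunk_names, columns):
    """Fan out chunk reads to a pool, then fan in by concatenating
    per-column values in chunk order."""
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda n: read_chunk(n, columns), chunk_names))
    return {c: [v for part in parts for v in part[c]] for c in columns}
```

In the target design this fan-out would live in the _wddms bulk data worker service_ and could be scaled independently of the main service.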
The image below illustrates side by side how scaling and workload distribution occur in the current and the target designs.
In the current implementation, an incoming request to retrieve a large amount of data is limited to the Dask worker resources
of a single WDDMS pod, even though Dask workers from other WDDMS instances might be available.
In the target design, unlike the current architecture, all the processing capacity of the _wddms bulk worker_ instances is available to any WDDMS instance. That arrangement unlocks better scaling, as it is done directly on the bulk data workers as needed.
![scaling_view_worker_next](/uploads/921e4f3f506570bafabf38a917dbc3c7/scaling_view_worker_next.jpg){: width="60%"}
### Security Implications
In the current design, the authorization (ACL/policy) checks and the bulk data access operations in WDDMS are performed in the same service instance. Bulk data will only be served to valid users entitled to access the associated work product component record.
The changes proposed in this ADR separate the data access control layer, located in the main WDDMS service, from the bulk data access itself, located in the new _wddms bulk worker service_. See below the changes in the communication patterns in the current vs target design diagrams.
Allowing users or other services to directly access the _wddms bulk worker service_ endpoint would permit bypassing the data access control checks in the main WDDMS service.
Therefore, with the new topology, additional deployment configuration settings will be required to preserve compliant and secure data access control in WDDMS:
- the _wddms bulk worker service_ must not be accessible from the external network
- the _wddms bulk worker service_ will only accept requests from WDDMS main service instances
#### Current
![threat_model_current](/uploads/8ef5bf06976e23ad45d2a243064c3e8c/threat_model_current.jpg){: width="60%"}
#### Target
![threat_model_target](/uploads/de4da75c833d44e8503eba6647d0ec98/threat_model_target.jpg){: width="60%"}

---

Issue #71: IBM Wellbore Domain Services Integration test cases are failing.
Author: vikas rana | Updated: 2023-07-10
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/71

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/jobs/1992734

Milestone: M18 - Release 0.21

---

Issue #69: WellboreTrajectory can be created with a non-existing WellboreID
Author: Zachary Keirn | Updated: 2023-05-16
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/69

The current postman collection for pre-ship testing creates a WellboreTrajectory with a typo in the WellboreID. The Trajectory is created even though the WellboreID does not exist, so there is no referential integrity check being performed when creating a Trajectory with POST https://{{WELLBORE_DDMS_HOST}}/ddms/v3/wellboretrajectories. The body has this:
`"id": "{{data-partition-id}}:work-product-component--WellboreTrajectory:{{WellboreDMSRunId}}",
"kind": "{{authority}}:{{schemaSource}}:work-product-component--WellboreTrajectory:1.1.0",
"data": {
"Name": "Wellbore_Trajectory_{{WellboreDMSRunId}}",
"WellboreID": "{{data-partition-id}}:master-data--Wellbore::{{WellboreDMSRunId}}:",`
This has a typo of a double colon after "Wellbore".
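A lightweight guard on the service side could reject such malformed IDs before record creation. The pattern below is an illustrative approximation of the OSDU record ID shape; the authoritative per-kind ID patterns live in the schemas and allow a wider character set:

```python
import re

# Approximate OSDU record ID shape: "<partition>:<group>--<Type>:<id>[:version]".
# Illustrative only: segments must be non-empty and colon-free, which is exactly
# what a "::" typo violates.
RECORD_ID_RE = re.compile(
    r"^[\w\-\.]+:(?:master-data|work-product-component)--\w+:[\w\-\.]+(?::[0-9]+)?$"
)

def is_valid_record_id(record_id: str) -> bool:
    """Return True when the ID matches the approximate OSDU record ID shape."""
    return RECORD_ID_RE.match(record_id) is not None
```

Such a shape check would catch the double colon, though full referential integrity still requires verifying that the referenced Wellbore record actually exists.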
The WPC for WellboreTrajectory is successfully created; however, the WellboreID is invalid and the record does not exist. Using GET https://{{WELLBORE_DDMS_HOST}}/ddms/v3/wellbores/osdu:master-data--Wellbore::AutoTest_999130548486: returns:
`{
"origin": "osdu-data-ecosystem-storage",
"errors": [
{
"code": 404,
"reason": "Record not found",
"message": "The record 'osdu:master-data--Wellbore::AutoTest_999130548486' was not found"
}
]
}`
Link to the postman collection: https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M17/QA_Artifacts_M17/envFilesAndCollections/Wellbore%20DDMS%20CI-CD%20v3.0.postman_collection.json
This was found using the M17 pre-ship environment.

---

Issue #64: Implementing DDMSDatasets[] standardize content data
Author: Chad Leong | Updated: 2023-04-17
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/64

DDMS references to optimized content were found to be created ad-hoc and outside the work-product-component schemas.
Following the [original observation](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/7), an [ADR was created](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/10) which standardizes the optimized content references from work-product-component entity types. Over time, DDMSs are expected to implement optimized content references using the `data.DDMSDatasets[]` property and support migration.

---

Issue #61: Info endpoint is wrong
Author: Denis Karpenok (EPAM) | Updated: 2023-09-18
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/61

curl --location --request GET 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/os-wellbore-ddms/ddms/v2/version' \
--header 'Authorization: Bearer ya29.a0AVvZVsrprnK-YJskwVK6U-rHin-we0GqKPvWJLVHgcMdSMI6NCFEokv9PP9vB9Cx5Yr0WFGplIjrhrLNjpI9Jz1vQrGT5tBBCMQzDdhn6_eDg8shFuW41sgwvN4QXzQugc4fC1B3fo3Utc_AoDRJy76SIH1bulb7xQaCgYKARwSARISFQGbdwaI5FZ4kMXYd7G5xwgl4ecAEg0169'
Response:
{
"service": "Wellbore DDMS OSDU",
"version": "0.2",
"buildNumber": "local",
"release": "M16",
"details": {
"build_date": "2023-02-17T23:26:54Z",
"build_number": "e181f785",
"build_origin": "Gitlab",
"commit_id": "e181f785",
"commit_branch": "",
"environment_name": "undefined",
"cloud_provider": "gc",
"de_client_config_timeout": "10",
"enable_read_fast_track": "False"
}
}
Expected: as all other services do, use an "info" endpoint instead of "version", and it should work without authentication.

---

Issue #55: Slow performance for fetch curve data requests
Author: Michael | Updated: 2023-02-26
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/55

Fetching curve data in a Wellbore DDMS curve data request appears to take at least 800 milliseconds, even if the size of the curve data is small.
Ideally, the typical fetch curve data request should take less than 500 milliseconds.
The following Wellbore DDMS curve data request fetches curve data for a well log in the Azure M15 pre-shipping environment. This well log only has 3 curves with 5 indices; however, the request can take 1600 milliseconds to complete.
```
curl --location --request GET 'https://osdu-ship.msft-osdu-test.org/api/os-wellbore-ddms/ddms/v3/welllogs/opendes:work-product-component--WellLog:AutoTest_999265713361/data' \
--header 'data-partition-id: opendes' \
--header 'offset: 0' \
--header 'limit: 100' \
--header 'curves;' \
--header 'describe: false' \
--header 'orient: split' \
--header 'Authorization: Bearer {{access_token}}'
```

---

Issue #45: Decouple different cloud providers' requirements/dependencies
Author: Yan Sushchynski (EPAM) | Updated: 2023-07-06
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/45

Now, for Wellbore DMS there is a single `requirements.in` file for all cloud providers. It works well unless cloud-specific libraries depend on the same third parties with **different** versions; then there is a problem with executing `pip-compile requirements.in`.
Example:
```
There are incompatible versions in the resolved dependencies:
osdu-api~=0.15.0.dev (from osdu-core-lib-python-anthos==1.0.1->-r requirements.in (line 44))
osdu-api==0.14.0 (from osdu-core-lib-python-aws==1.0.1->-r requirements.in (line 43))
```
In the example above, the error can be fixed by synchronizing the `osdu-api` library; however, a similar issue with the same third parties can occur in cloud SDKs, which we are not able to fix that easily.
As a solution, I propose splitting the Wellbore DMS image build into two steps:
1. Build a base image with the basic requirements installed.
2. Build separate images with **cloud-specific** requirements and dependencies, based on the previous one.
A working example of such a two-step build is implemented [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/tree/master/build). First, we build the [base image](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/blob/master/build/Dockerfile#L23), and then the providers build their own images based on it (e.g., [GCP](https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics-osdu-integration/-/blob/master/build/Dockerfile#L51)).

Milestone: M14 - Release 0.17

---

Issue #33: Avoid proliferation of Storage service/records from DDMS
Author: Debasis Chatterjee | Updated: 2022-05-23
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/33

Currently, we see a way to create standard Master and work-product entities from this DDMS-specific service.
ex: Master data "Well"
POST {{baseUrl}}/ddms/v3/wells
ex: Work-product component "WellLog"
POST {{baseUrl}}/ddms/v3/welllogs
What is the problem in leveraging proven ways of creating wks Master/wpc entities?
The benefits are -
1. Avoid additional source code for long term maintenance.
2. Leverage features like integrity check as provided by Manifest-based Ingestion.
Niche services, such as I/O of well log curve data, well trajectory station data, and well log recognition, are all very much welcome.
Copying @gmarblestone, @chad and @todaiks for information.