OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues (updated 2024-01-10T16:30:02Z)

**Implement CRUD endpoints to manage MICP data** (Mykhailo Buriak, 2024-01-10T16:30:02Z)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/174

**Implement CRUD endpoints to manage XRF data** (Mykhailo Buriak, 2023-11-21T15:38:12Z)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/173

**POST /api/rafs-ddms/xrf/{record_id}/data is implemented and available on Swagger**
* record_id is the SamplesAnalysis WPC which was created previously
* User should be able to fill the SamplesAnalysis report with measurements taken in this analysis
* Successful response (200 status code) should update the SamplesAnalysis record and include a DDMSDatasets array with the GET endpoint to the linked XRF bulk data
* Validation cases should be covered with appropriate status codes
* Request & response structure should correspond to populated JSON
**GET /api/rafs-ddms/xrf/{record_id}/data is implemented and available on Swagger**
* record_id is the SamplesAnalysis WPC
* User should be able to retrieve bulk data of xrf using new endpoint
* Successful response (200 status code) should retrieve all existing xrf Measurements (bulk data) linked to specified record id (SamplesAnalysis WPC)
* Validation cases should be covered with appropriate status codes
* Structure of request and response should correspond to populated JSON content schema
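The POST and GET endpoints above share one path template. A minimal sketch, assuming only the path format stated in this issue (the helper name is hypothetical):

```python
# Hedged sketch: build the bulk-data endpoint path described in this issue.
# Only the template /api/rafs-ddms/{analysis_type}/{record_id}/data is taken
# from the issue text; the helper itself is hypothetical.
def bulk_data_path(analysis_type: str, record_id: str) -> str:
    if not record_id:
        raise ValueError("record_id must be the SamplesAnalysis WPC id")
    return f"/api/rafs-ddms/{analysis_type}/{record_id}/data"

# The same path serves POST (upload measurements) and GET (retrieve bulk data):
path = bulk_data_path("xrf", "partition:work-product-component--SamplesAnalysis:1234")
```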
the content schema for XRF: https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/elemental_composition_xrf_data_schema.json
reference data: https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/ReferenceValues/Manifests/reference-data/OPEN/Elements.1.0.0.json
(Milestone: RAFS DDMS Sprint 20; assignee: Olena Holub (EPAM))

**Implement CRUD endpoints to manage XRD data** (Mykhailo Buriak, 2023-11-28T19:38:13Z)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/172

**POST /api/rafs-ddms/xrd/{record_id}/data is implemented and available on Swagger**
* record_id is the SamplesAnalysis WPC which was created previously
* User should be able to fill the SamplesAnalysis report with measurements taken in this analysis
* Successful response (200 status code) should update the SamplesAnalysis record and include a DDMSDatasets array with the GET endpoint to the linked XRD bulk data
* Validation cases should be covered with appropriate status codes
* Request & response structure should correspond to populated JSON
**GET /api/rafs-ddms/xrd/{record_id}/data is implemented and available on Swagger**
* record_id is the SamplesAnalysis WPC
* User should be able to retrieve bulk data of xrd using new endpoint
* Successful response (200 status code) should retrieve all existing xrd Measurements (bulk data) linked to specified record id (SamplesAnalysis WPC)
* Validation cases should be covered with appropriate status codes
* Structure of request and response should correspond to populated JSON content schema
content schema: https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/mineralogy_xrd_data_schema.json
reference data: https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/ReferenceValues/Manifests/reference-data/OPEN/Minerals.1.0.0.json
(Milestone: RAFS DDMS Sprint 20; assignee: Carlos Colin)

**IBM M19 - Register service - Error 404** (Chad Leong, 2023-08-24T12:02:04Z)
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/593

# Summary
Register service is not running and returning error 404
## Steps to reproduce
Request:
GET https://cpd-osdu.apps.ibmosdu-preship.lndu.p1.openshiftapps.com/osdu-register/api/register/v1/info
Total time (seconds): 0.491632
Request body:
```json
No content
```
Response body: 404
```json
No content
```
## Intended Behavior
Response body: 200
```json
{
"groupId": "org.opengroup.osdu",
"artifactId": "register-azure",
"version": "0.22.0",
"buildTime": "2023-07-18T08:07:14.761926256Z",
"branch": "v0.22.0",
"commitId": "f6a4530a1a945150e7679139558461b16075df0d",
"commitMessage": "Creating Release Commit",
"connectedOuterServices": []
}
```

**Azure M19: Register service: error 500** (Chad Leong, 2023-08-24T14:00:03Z)
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/592

# Summary
GET topics should return the [topics](https://community.opengroup.org/osdu/platform/system/register/-/blob/master/provider/register-azure/src/main/resources/topics.json) registered, but is returning 500.
## Steps to reproduce
Request:
GET https://osdu-ship.msft-osdu-test.org/api/register/v1/topics/
Total time (seconds): 1.487018
Request body:
```json
No content
```
Response body: 500
```json
{
"code": 500,
"reason": "Persistence error",
"message": "Error generating token"
}
```
## Intended Behavior
The registered [topics](https://community.opengroup.org/osdu/platform/system/register/-/blob/master/provider/register-azure/src/main/resources/topics.json) are returned:
```json
[
{
"name": "records-changed",
"description": "This notification is sent whenever a new record or record version is created, updated or deleted, and when a new schema is created in storage.",
"state": "ACTIVE",
"example": [
{
"id": "common:abc:123",
"kind": "common:petrel:regularheightfieldsurface:1.0.0",
"op": "create",
"recordUpdated": "false"
},
{
"id": "common:abc:124",
"kind": "common:petrel:regularheightfieldsurface:1.0.0",
"op": "create",
"recordUpdated": "true"
},
{
"kind": "common:petrel:regularheightfieldsurface:1.0.0",
"op": "create_schema"
},
{
"id": "common:ghi:345",
"kind": "common:petrel:regularheightfieldsurface:1.0.0",
"op": "delete"
}
]
},
{
"name": "schema-changed",
"description": "This notification is sent whenever a new schema is created or updated via schema-service.",
"state": "ACTIVE",
"example": [
{
"kind": "osdu:wks:wellbore:1.0.0",
"op": "update"
},
{
"kind": "osdu:wks:wellbore:2.0.0",
"op": "create"
}
]
},
{
"name": "status-changed",
"description": "Every Service/Stage would publish their respective status changed information in this topic.",
"state": "ACTIVE",
"example": [
{
"kind": "dataSetDetails",
"properties": {
"correlationId": "12345",
"dataSetId": "12345",
"dataSetVersionId": "1",
"dataSetType": "FILE",
"recordCount": 10.0,
"timestamp": 1622118996000.0
}
},
{
"kind": "status",
"properties": {
"correlationId": "12345",
"recordId": "12334",
"recordIdVersion": "123ff",
"stage": "STORAGE_SYNC",
"status": "FAILED",
"message": "acl is not valid",
"errorCode": 400.0,
"timestamp": 1622118996000.0
}
}
]
},
{
"name": "legaltags-changed",
"description": "This notification is sent whenever a new legaltag is created, updated or deleted.",
"state": "ACTIVE",
"example": [
{
"statusChangedTags": [
{
"changedTagName": "osdu-openZgy-Legal-Tag-1727673",
"changedTagStatus": "incompliant"
}
]
}
]
}
]
```
(Assignee: Srinivasan Narayanan)

**ADR: Provide search capability in legal tags** (Srabana Guha, 2024-02-16T20:03:31Z)
https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/47
[[_TOC_]]
## Problem Statement
Is it possible to add search capability for legal tags based on the LegalTag attributes, including those in extensionProperties? There might be hundreds of thousands of LegalTags, so the design must account for their number. For more details please check #36
## Features that need to be supported
* The structure of the ExtensionProperties attributes has no definitive format; it will differ between companies (Shell might follow a structure that differs from other companies'), so extensionProperties needs to be very flexible
* The format of all LegalTags outside of extensionProperties is largely standardized; currently it has the Name, Description, Properties, and isValid attributes, in addition to id and dataPartitionId
* Search should support querying on all LegalTag attributes
* The new API is expected to be a POST request whose body contains the query; queries can be quite complex
* Most likely the feature will be exposed as a separate API under the Legal service
* Multi-attribute search along with complex queries
* Sort and search together must be supported, along with limit and offset
* The response should return the entire LegalTag with all attributes
* DSL is preferred over Lucene syntax; this needs to be revisited implementation-wise
## Potential Solution Approach
We have brainstormed the solutions we could adopt to provide search capability on Legal Tags.
Common to all of these implementations is the introduction of a "kind" attribute. This attribute can be defaulted to preserve backward compatibility. The next step is to define a schema for Legal Tags; any change to any Legal Tag attribute will then require creating a new schema version for Legal Tags.
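The "kind" defaulting described above can be sketched as follows; the default kind value and field handling are assumptions for illustration, not taken from this ADR:

```python
# Hedged sketch of backward-compatible "kind" defaulting for Legal Tags.
# The default kind value below is a hypothetical placeholder.
DEFAULT_LEGALTAG_KIND = "osdu:wks:legaltag:1.0.0"

def with_default_kind(legal_tag: dict) -> dict:
    # Tags created before the schema change carry no "kind"; defaulting it
    # keeps existing records valid against the new schema.
    tag = dict(legal_tag)
    tag.setdefault("kind", DEFAULT_LEGALTAG_KIND)
    return tag
```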
<table>
<tr>
<th>
</th>
<th>
**Use the current Indexer Service to index Legal Tags in the same format as Storage.**
</th>
<th>Replicate part of Indexer in Legal to Index the LegalTag documents.</th>
<td>
**Without using Indexer and creating Legaltag as record.**
(updated design)
</td>
</tr>
<tr>
<th>
**Design**
</th>
<th>
**![Legal with Indexer.jpeg](/uploads/f0f23ae5086a2a3c8579493799eb7790/Legal_with_Indexer.jpeg){width="294" height="177"}**
![Detailed_Design_with_Indexer_Option_1.png](https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/47/designs/OSDU_Legal_ADR_-_Option_1.png)
</th>
<th>
![Legal implements Indexer.jpeg](/uploads/955769681ec262c7bbe4af700152d47c/Legal_implements_Indexer.jpeg){width="126" height="223"}
![Detailed_Design_without_Indexer_Option_2.png](https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/47/designs/OSDU_Legal_ADR_-_Option_2_without_Indexer.png)
</th>
<th>Same as the current design flow for Records
![Detailed_Design_with_storage_Option_3.png](https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/47/designs/OSDU_Legal_ADR_-_Option_3A_using_Storage.png)
</th>
</tr>
<tr>
<th>Solution implementation</th>
<th>
**As depicted in the diagram above, any PUT/POST request on Legal Tags will need to interact with the Indexer to index the Legal Tag documents. Additionally, each Legal Tag document needs to conform to the Legal Tag schema. The current Indexer service, with modifications, can be leveraged to achieve this. Legal needs to send change-event notifications for the Indexer to pick up changes; the Indexer needs to subscribe to Legal change-event notifications accordingly and index the legal tags in ElasticSearch (ES).**
For Search, we need to exclude the Legal Tag documents from the normal search endpoints and introduce separate endpoints for Legal Tag search. Policy evaluation (and ACLs) is excluded when searching for legal tags.
Different CSPs need to be involved in implementing the core services; for example, the Search and Legal implementation code is handled at the CSP layer, and the underlying datastore and infrastructure depend on the CSP implementation. This needs to be accounted for.
</th>
<th>
As illustrated above, the heavy lifting operation of indexing the legal tags in ES is being handled by the Legal Services. Some part of Indexer code can be reused for implementation purpose.
For Search, we need to exclude the Legal Tag documents from the normal search endpoints, which can already be done by existing functionality (-system-meta-data-legal). There is no need for separate endpoints for Legal Tag search. Policy evaluation (and ACLs) is excluded when searching for legal tags.
</th>
<td>
Create storage record for each legal tag (that contains optional schema) with service.legal.owner as the owner in acl. The viewer should come from PUT /legaltag so access can be controlled as needed. This of course could be defaulted. Additionally access to these legal tag records could be controlled by Policy using existing functionality.
For Search, by using kind -system-meta-data-*, these will be excluded from normal search endpoints.
</td>
</tr>
<tr>
<th>Development and Testing Effort</th>
<th>High development and testing effort, as it involves touchpoints of multiple core services. This is a design-, development-, and testing-heavy approach.</th>
<th>Higher development and testing effort, as it touches multiple core services.</th>
<td>Much lighter approach compared to other two.</td>
</tr>
<tr>
<th>Services impacted</th>
<th>Indexer, Notification, Legal, Search.</th>
<th>Legal, Indexer, Search</th>
<td>Search</td>
</tr>
<tr>
<th>Risks / feature robustness</th>
<th>Cleanest and most preferred.</th>
<th>
Code duplication which could lead to inconsistency (e.g., ES upgrades, new schema features, etc.)
Expand surface of ES (currently only indexer and search have access to ES)
</th>
<td>Defining/using legal tag on legal tag Storage records seems like anti-pattern.</td>
</tr>
</table>
Proposed [Legal Tag Schema](/uploads/6b930a475487e71b78a4de73143a5bf8/legal_schema.json)
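A hedged sketch of assembling the draft `POST /query` payload proposed in this ADR; the field names follow the payload shape drafted in the spec, while the builder function itself is hypothetical:

```python
from typing import Optional

# Hedged sketch: build the draft POST /query payload (query, limit, offset,
# sort fields per the spec drafted in this ADR; the helper is hypothetical).
def build_legaltag_query(query: str, limit: int = 10, offset: int = 0,
                         sort_field: Optional[str] = None,
                         order: str = "ASC") -> dict:
    payload = {"query": query, "limit": limit, "offset": offset}
    if sort_field is not None:
        # Sort and search together, per the feature list above.
        payload["sort"] = {"field": [sort_field], "order": [order]}
    return payload

# Example: a multi-attribute query sorted by name (attribute names illustrative).
body = build_legaltag_query('data.extensionProperties.AgreementId:"x-123"',
                            limit=50, sort_field="name")
```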
Proposed Legal Query API spec: legal POST `/query`, no params. Payload would include `{ "kind": {}, "limit": 0, "query": "string", "sort": { "field": [ "string" ], "order": [ "ASC" ] }, "offset": 0 }`
(Milestone: M23 - Release 0.26; assignee: Shane Hutchins)

**Transformer - Implement measured depth steps in the log segment interpolation** (Noel Okanya, 2023-11-20T16:38:03Z)
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/290

As a GCZ Developer, I want to implement measured depth steps in the log segment interpolation, so that I can more faithfully represent the curve of a trajectory segment.
Current Log Ingestion:
* Identify start/end of log, construct a simple linestring with two points
* This line becomes more and more inaccurate as the length increases
Potential Solution:
* Instead of passing only a start/end point into the python script, we would pass the following:
* [start, {...steps}, end], where the steps are intermediate points (start + i)
* [1, 3, 5, 7, 8]
This would be an enhancement of the current Well Log Curve/Line ingestion/cache.
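The stepping described above can be sketched as follows; using the issue's own example, start 1, end 8, step 2 yields [1, 3, 5, 7, 8] (the function name and signature are hypothetical):

```python
# Hedged sketch: generate measured-depth steps between a log segment's start
# and end, always closing with the exact end point.
def md_steps(start: float, end: float, step: float) -> list:
    pts = [start]
    md = start + step
    while md < end:
        pts.append(md)
        md += step
    pts.append(end)
    return pts

# The issue's example: [start, {...steps}, end]
assert md_steps(1, 8, 2) == [1, 3, 5, 7, 8]
```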
Acceptance Criteria:
- GCZ automatically interpolates steps between a log segment's begin and end point.
(Assignee: Ankita Srivastava)

**[500] error for V2 GET data endpoint** (Ernesto Gutierrez, 2023-08-31T16:27:48Z)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/171

While issuing the GET content endpoint request, there is a 500 error thrown by the RAFS service.
See the logs:
[rafs_log_get_content_500.txt](/uploads/e63c191ecc2cc07200bc34728dd0da4a/rafs_log_get_content_500.txt)
(Milestone: RAFS DDMS Sprint 15; assignee: Ernesto Gutierrez)

**unitDLS returns incorrect PR string when unitMD is passed as m for v4/convertTrajectory input request** (Puneet Bhardwaj, 2023-08-23T11:22:25Z)
https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/77

When unitMD is passed as **m**, the unitDLS in the response should return "deg/30m" but instead returns "deg/100m".
In the current scenario, if unitMD is passed as "m", the **dlFactor** gets set to "30.48" when it should be "30".
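The expected mapping can be sketched as follows; the "m" -> 30 value (deg/30m) comes from this issue, while the "ft" -> 100 entry (deg/100ft) is an assumption not stated here:

```python
# Hedged sketch of the expected dlFactor lookup. Per this issue, "m" must map
# to 30 (deg/30m); the reported bug sets 30.48 (100 ft in metres) instead.
# The "ft" entry is an assumption for illustration.
DL_FACTORS = {"m": 30.0, "ft": 100.0}

def dl_factor(unit_md: str) -> float:
    if unit_md not in DL_FACTORS:
        raise ValueError(f"unsupported unitMD: {unit_md}")
    return DL_FACTORS[unit_md]
```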
Due to this, unitDLS returns an incorrect PR string.
(Milestone: M20 - Release 0.23)

**CRS Catalog points-in-aou enable terminating colon for CRS record id** (Bert Kampes, 2024-03-17T00:54:59Z)
https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/issues/34

The CRS conversion APIs all require a terminating colon when referencing a CRS record id in the request.
The CRS Catalog APIs for some endpoints accept a terminating colon, and in some cases do not.
The CRS Catalog APIs for some endpoints accept a terminating colon, and in some cases do not.
This issue is intended to make the Catalog APIs optionally accept a colon if they currently do not. This way it is backward compatible, and the various CRS APIs in Convert and Catalog are all consistent with each other and with the OSDU platform standards.
_(Side note: the Core Services Search and Storage APIs may not all be complying, but those will be ignored and the understanding is those will not be changed anyway. This issue is only about the CRS Catalog APIs)._
It is low priority, but presumed to be an easy change to input parsing. Wherever currently an `id string` is passed in the request like `osdu:reference-data\--CoordinateReferenceSystem:Geographic2D:EPSG::4158` (without a terminating colon), check whether passing `osdu:reference-data\--CoordinateReferenceSystem:Geographic2D:EPSG::4158:` (with a terminating colon) also works.
If it fails with a terminating colon, then make it work (both ways).
The suggested code fix is as follows: for APIs that work without a terminating colon but not with one, when handling the input id string, remove the terminating colon if the string has it, then pass the id on as before with the colon removed. This should fix the issue, i.e., it will allow input both with and without a terminating colon, and the response will be unchanged (the logic executes as if the request had no colon, as before).
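That normalization step can be sketched as follows (the helper name is hypothetical; the id format is taken from this issue):

```python
# Hedged sketch: strip a single terminating colon from a record id so requests
# with and without the colon resolve identically. The version-qualified EPSG
# part ("EPSG::4158") keeps its double colon.
def normalize_record_id(record_id: str) -> str:
    return record_id[:-1] if record_id.endswith(":") else record_id

rid = "osdu:reference-data--CoordinateReferenceSystem:Geographic2D:EPSG::4158"
assert normalize_record_id(rid + ":") == normalize_record_id(rid) == rid
```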
- [ ] check [swagger](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/api_spec/crs-catalog-openapi-v3.yaml) and update as needed.
- [ ] update [tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/tutorial/CRS_Catalog_Service.md#2-crs-catalog-endpoints) with text and examples.
- NOTE: **the updated tutorial should have footnotes or remarks, i.e., the examples can consistently show usage of the colon, but then a footnote or note that prior to M21 the syntax did not accept the colon at the end of the id.**
- This should be done in issue !297 if that is still open and not in this branch. Actually !297 branch could be used to do the code changes rather than creating a branch here (or at least merge 297 before creating a branch here to avoid conflicts in tutorial updates).
- [ ] find, for all APIs, any `record id` (optionally) used in the request body (or URL parameter). This started from a report that `points-in-aou` was inconsistent with other APIs, but let's check the swagger and tutorial for all APIs. An initial check shows the following:
1. `{{osduonaws_base_url}}/api/crs/catalog/v3/coordinate-reference-system?recordId=osdu:reference-data\--CoordinateReferenceSystem:Geographic2D:EPSG::4158`
- This GET currently works without a terminating colon for recordId, and I believe **it fails if a colon is added** (TBC). We can leave it that way, but ideally we fix it and change it to accept either with or without terminating colon. if this is easy to do then fix it (I mean to say, the priority is points-in-aou and we can live with this if need be).
- NOTE: the dataId= should never have a terminating colon.
2. `{{osduonaws_base_url}}/api/crs/catalog/v3/coordinate-transformation?recordId=osdu:reference-data\--CoordinateTransformation:EPSG::15851`
- This GET currently works without a terminating colon for recordId, and I believe **it fails if a colon is added** (TBC). (same as above).
- NOTE: the dataId= should never have a terminating colon.
3. `{{osduonaws_base_url}}/api/crs/catalog` POST `/v3/coordinate-reference-system`
- [This example in tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/tutorial/CRS_Catalog_Service.md#46-fetch-all-projected-crss-based-on-wgs-84-that-use-the-unit-meter) shows `BaseCRS` without a colon. **This should be made to work with and without colon**.
- NOTE: the above example in tutorial uses "opendes". This has been changed to "osdu" of course.
- NOTE: [tutorial alternatives using Search or Storage Services](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/tutorial/CRS_Catalog_Service.md#321-alternative-1-retrieve-a-single-crsct-record-using-the-search-service) shows the query does not have a terminating colon and that may be why AWS did not implement the CRS Service with a colon (because it wraps to search). This does not have to (should not be) addressed! I.e., This issue is only about CRS Catalog, very simple, and not about the platform or other Core Services.
4. `{{osduonaws_base_url}}/api/crs/catalog` POST `/v3/coordinate-transformation`
- [This example](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/tutorial/CRS_Catalog_Service.md#53-find-all-cts-between-two-horizontal-crss) shows `SourceCRS` and `TargetCrs` already work with terminating colon. Please confirm in a test, and if it works with terminating colon then there is no action.
5. `{{osduonaws_base_url}}/api/crs/catalog` POST `/v3/points-in-aou`
- [This example](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/docs/tutorial/CRS_Catalog_Service.md#62-examples) shows that `"recordId": "osdu:reference-data--CoordinateTransformation:EPSG::15851"` does **not have a terminating colon**. Also we have tested a terminating colon is not accepted. To fix this, it is suggested to not change the code, except for the input parsing of the `recordId string`, i.e., if the request has a terminating colon then remove that and pass on the request further as in the current version. That way it should allow both a terminating colon and not.
<details><summary>Example - Click to expand</summary>
CRS Catalog Service references CRS and CT records by id without a closing colon (error if a terminating colon is used). CRS Conversion Service requires a terminating colon (error if the colon is omitted). What is the way forward for this (is it provider dependent? What is the OSDU standard? Does it need to be fixed?)? It is added here as a placeholder, and there is a comment in the system. If the way forward is to fix this, a new issue should be opened for it. Also see the attached email.
[RE__CRS_Catalog_Service_should_use_a_terminating_colon_in_requests_using_a_record_id_.msg](/uploads/679962df21862012b1b3ee65a839b15f/RE__CRS_Catalog_Service_should_use_a_terminating_colon_in_requests_using_a_record_id_.msg)
1. The standard is: a record `id` comes without a trailing colon/without a version, as the version number is explicitly represented in the system property `version`.
2. Relationships, e.g. as in `data.BaseCRSID` _can_ refer to a specific version (`{id}:{version}`) or to implicitly the latest version by omitting the version number (`{id}:`), this is the trailing colon variant as used in the majority of the cases.
No change required for 2.
Regarding 1):
The [documentation of EpsgManifestGenerator](https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/tree/master/ReferenceValues/Resources/IOGP#support-spatial-discovery) mentions the Multipolygon.
The current generated manifest does not. I did not check (yet) whether it is related to an option when running the Generator, or to a special case with an Operator that has multiple usages. The first step would be to check the Generator with the standard -crs 3851 option against epsg.org to see what it returns.
**Test that fails:**
```shell
curl --location 'https://osdu.com/api/crs/catalog/v3/points-in-aou' \
--header 'data-partition-id: osdu' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJm8lCA' \
--header 'Cookie: PF=lOekY5BExFoeE8n9DvELw2' \
--data '{
"recordId": "osdu:reference-data--CoordinateReferenceSystem:Projected:EPSG::3851",
"points": [
{
"latitude": -40,
"longitude": 175
},
{
"latitude": -40,
"longitude": -175
},
{
"latitude": 25,
"longitude": -90
},
{
"latitude": 45,
"longitude": -92
}
]
}
'
```
**Gives a (not so helpful) error. Expected result (Response):**
</details>
(Milestone: M21 - Release 0.24; assignees: Puneet Bhardwaj, KIRAN ALLAMSETY)

**Migrate API v1 endpoints to the API v2** (Siarhei Khaletski (EPAM), 2023-12-07T22:36:15Z)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/170

**Context**
The new API v2 has been introduced, along with the generic WPC schemas ([SamplesAnalysesReport.1.0.0](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/blob/main/deployments/shared-schemas/rafsddms/work-product-component/SamplesAnalysesReport.1.0.0.json), [SamplesAnalysis.1.0.0](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/blob/main/deployments/shared-schemas/rafsddms/work-product-component/SamplesAnalysis.1.0.0.json)).
The generic schemas cover the major part of the analysis types provided in API v1.
**Scope**
Migrate the following analysis types (v1 must stay available [until its deprecation](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/162)):
1. rocksampleanalyses (RCA) - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/rocksampleanalyses (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/253)
1. pvtreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/pvtreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/254)
2. coringreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/coringreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/255)
3. ccereports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/ccereports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/256)
4. difflibreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/difflibreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
5. transporttests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/transporttests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
6. compositionalanalysisreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/compositionalanalysisreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
7. multistageseparatortests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/multistageseparatortests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
8. swellingtests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/swellingtests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
9. constantvolumedepletiontests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/constantvolumedepletiontests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/282)
10. wateranalysisreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/wateranalysisreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
11. stocktankoilanalysisreports - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/stocktankoilanalysisreports (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
12. interfacialtensiontests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/interfacialtensiontests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
13. vaporliquidequilibriumtests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/vaporliquidequilibriumtests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
14. multiplecontactmiscibilitytests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/multiplecontactmiscibilitytests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
15. slimtubetests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/slimtubetests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/286)
16. samplesanalysesreport - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/samplesanalysesreport (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/257)
17. capillarypressuretests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/capillarypressuretests (#250)
18. relativepermeabilitytests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/relativepermeabilitytests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/283)
19. fractionationtests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/fractionationtests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/283)
20. extractiontests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/extractiontests (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/283)
21. physicalchemistrytests - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/physicalchemistrytests
22. electricalproperties - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/electricalproperties (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/259)
23. rockcompressibilities - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/rockcompressibilities (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/283)
24. watergasrelativepermeabilities - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/watergasrelativepermeabilities (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/283)
25. formationresistivityindexes - https://osdu-glab.msft-osdu-test.org/api/rafs-ddms/docs#/formationresistivityindexes (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/259)
**Acceptance Criteria**
1. The analysis types listed above are available in API v2
2. OSDU published SamplesAnalysis schema is registered and used: https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/master/E-R/work-product-component/SamplesAnalysis.1.0.0.md?ref_type=heads
3. The V2 version does not have separate WPCs for each data type; content schemas are linked to SamplesAnalysisID
4. All tests are updated and passing

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/198
**Failed to create 3D Prestack container on OSDU** (2023-08-29, Anatoly Yanchevsky)

I try to create a 3D Prestack volume with 4 axes:
Sample/Trace(offset)/Crossline/Inline
An error occurred on the call to `OpenVDS::Create`.
Message: "DimensionGroup 41 is not a valid dimension."
Stack: [call_stack.txt](/uploads/fb0ec722ccda7ec907d357aa4b03a211/call_stack.txt)
The same volume can be created on local disk.
Output from VDSInfo: [VDSInfo_axis.txt](/uploads/1ecb09ef3ce874ef18686d3397a45a61/VDSInfo_axis.txt) and [VDSInfo_channels.txt](/uploads/bd3daf2ce80b5022ecab479f329109d7/VDSInfo_channels.txt)
Probably I am doing something wrong, but I could not understand what exactly.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/108
**Poor performance for index augmenter** (2023-08-24, Zhibin Mai)

Although we have made several enhancements related to the index augmenter, directly or indirectly (such as creating a separate re-index topic and splitting big messages of 1,000 records into small messages of 50 records to support parallel indexing), we still find that indexing performance with the augmenter enabled is much worse than with it disabled. For example, for WellLog with multiple extension configurations, indexing with the augmenter enabled is about 15 times slower than with it disabled.
With the augmenter enabled:
1. Indexing one record individually, each record (for the given property configurations) requires 8 queries to gather all the information needed to populate the extended properties. In this test case, the cache does not take effect at all.
2. Indexing a kind with 291 WellLog records, each record requires 6.8 queries on average. In this test case the cache should play an important role; however, we found that the cache mechanism has little effect.
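For illustration, the behavior a working cache should exhibit can be sketched with a hypothetical memoizing wrapper; the class and names below are illustrative, not taken from the indexer code:

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical memoizing wrapper around an expensive lookup (e.g. a
// search query for a related record). Repeated lookups for the same key
// should be served from the cache instead of issuing another query.
class QueryCache {
public:
    explicit QueryCache(std::function<std::string(const std::string&)> fetch)
        : fetch_(std::move(fetch)) {}

    std::string get(const std::string& key) {
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;  // cache hit: no query issued
        ++queries_;                                  // cache miss: real query
        return cache_.emplace(key, fetch_(key)).first->second;
    }

    int queriesIssued() const { return queries_; }

private:
    std::function<std::string(const std::string&)> fetch_;
    std::unordered_map<std::string, std::string> cache_;
    int queries_ = 0;
};
```

If 291 WellLog records largely reference the same related entities, a cache along these lines should drive the average query count per record well below the observed 6.8; seeing it stay that high suggests the cache is either not consulted on this path or keyed/expired in a way that prevents reuse.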
As I ran the tests locally, the search latency is about 1.5 times that of the cloud environment. I estimate that performance with the augmenter enabled would still be about 10 times slower if we don't make any enhancements. (Milestone: M20 - Release 0.23)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/169
**Evaluate and Draft Reference Value Lists** (2024-01-10, Michael Jones)

Draft reference value lists for forum approval and implementation:
GCMS Aromatics and Saturates:
- AreaHeightQualifier
- BiomarkersAreaHeight
Older need:
- [CompressibilityMeasurementType](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/ReferenceValues/Manifests/reference-data/OPEN/CompressibilityMeasurementType.json)
Existing drafts that have not been processed or approved by the forum:
- [AromaticBiomarkersCompounds](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/ReferenceValues/Manifests/reference-data/OPEN/AromaticBiomarkersCompounds.1.0.0.json)
- [SaturateBiomarkersCompounds](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/RAFSDDMSDEV/docs/-/blob/main/Design%20Documents/ReferenceValues/Manifests/reference-data/OPEN/SaturateBiomarkersCompounds.1.0.0.json)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/167
**RegEx issue with Stock Tank Analysis** (2023-08-18, Bryan Dawson)

**Description**
The WPC schema regex for the ID is:
`^[\\w\\-\\.]+:work-product-component\\-\\-StockTankAnalysis:[\\w\\-\\.\\:\\%]+$`
but in the content schema, the regex is:
`^[\\w\\-\\.]+:work-product-component--StockTankOilAnalysisTest:[\\w\\-\\.\\:\\%]+:[0-9]*$`
It looks like the WPC schema is registered via the Postman collection as `rafsddms:wks:work-product-component--StockTankAnalysis:1.0.0`. As this is the only one without "Test" at the end, I assume it was an oversight and we should change the WPC schema rather than the content schema.
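The mismatch can be demonstrated with a quick check. The patterns below are written as they read after JSON unescaping (`\\-\\-` in the schema is just an escaped literal `--`), and the sample IDs used to exercise them are hypothetical:

```cpp
#include <regex>
#include <string>

// Current WPC schema pattern (the one missing "Test" in the type name).
inline bool matchesWpcSchema(const std::string& id) {
    static const std::regex re(
        R"(^[\w\-.]+:work-product-component--StockTankAnalysis:[\w\-.:%]+$)");
    return std::regex_match(id, re);
}

// Content schema pattern, which expects "StockTankOilAnalysisTest" plus a
// trailing ":<version>" segment.
inline bool matchesContentSchema(const std::string& id) {
    static const std::regex re(
        R"(^[\w\-.]+:work-product-component--StockTankOilAnalysisTest:[\w\-.:%]+:[0-9]*$)");
    return std::regex_match(id, re);
}
```

A hypothetical ID such as `osdu:work-product-component--StockTankOilAnalysisTest:sample-1:1` passes the content-schema pattern but fails the current WPC pattern, which is exactly the inconsistency reported here.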
**Needed Fix**
Change WPC Schema regex to:
`^[\\w\\-\\.]+:work-product-component\\-\\-StockTankOilAnalysisTest:[\\w\\-\\.\\:\\%]+$`

https://community.opengroup.org/osdu/platform/system/storage/-/issues/182
**Issues observed with logging** (2023-12-01, Larissa Pereira)

**Issue 1: Duplicate operation IDs**
We observed multiple dependency logs for disparate operations (based on record IDs) with identical operation IDs for the POST QueryApi/getRecords API. Duplicate entries were observed when reading from BlobStore for the READ_FROM_STORAGE_CONTAINER operation, although these logs belonged to separate operations.
![image](/uploads/afc539574de597bba300b5d6b2a18b8a/image.png)
**Issue 2: Multiple dependency logs and missing Read log**
We observed multiple dependency logs with identical operation IDs for the POST QueryApi/fetchRecords. These entries were observed when querying CosmosStore; however, the READ_FROM_STORAGE_CONTAINER dependency log is missing.
![image](/uploads/ce377f8bf6ee95646ca1ab5d910df167/image.png)

(Milestone: M22 - Release 0.25)

https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/591
**SeismicDMS can not create subproject** (2023-08-18, Do Dang)

I followed this [quickstart](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/blob/master/docs/gcp/QUICKSTART.md) and created a tenant; however, I cannot create a subproject.
```
{
"default_acls": "users.datalake.admins@esstar.group,users.datalake.ops@esstar.group",
"gcpid": "test_project_id",
"esd": "esstar.group",
"name": "tenant581706"
}
```
Request
```
POST https://preship.gcp.gnrg-osdu.projects.epam.com/api/seismic-store/v3/subproject/tenant/tenant581706/subproject/subprojectodi297508?recursive=false
Content-Type: application/json
data-partition-id: esstar
ltag: esstar-SeismicDMS-Legal-Tag-Test3858228
request body
{
"storage_class": "MULTI_REGIONAL",
"storage_location": "US",
"acl": {
"owners": [
"users.datalake.admins@esstar.group"
],
"viewers": [
"users.datalake.ops@esstar.group"
]
}
}
response
Cannot convert undefined or null to object
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/73
**Implement caching of PostgreSql statements** (2023-09-08, Pavel Kisliak)
After the recent improvement regarding the limitation of the maximum number of connected users (#48), there is still one thing that can help improve performance a bit.
Currently, each new user request requires building a set of PostgreSql statements. They are already cached on the PqLib side (the library returns code PG_42P05, "already prepared"), but based on benchmark results this is not entirely cheap.
![image](/uploads/292f596a0341022c724067d8cb40ab54/image.png)
I think it makes sense to cache them on the application side as well.

Preliminary dev plan:
- Introduce a map of `Statement` objects in `Connection`.
- Method `Connection::prepare` should return a `Statement` object (build a new one or get it from the map).
- Refactor all existing usages of prepared statements.
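A minimal sketch of the plan above, assuming simplified stand-in types: the real `Connection` and `Statement` would wrap libpq prepare/execute calls, which are omitted here, and hashing the SQL text is just one possible deterministic-naming scheme, not the project's actual choice:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Stand-in for the real prepared statement (would wrap PQprepare /
// PQexecPrepared in the actual implementation).
struct Statement {
    std::string name;  // deterministic, stable across calls with the same SQL
    std::string sql;
};

class Connection {
public:
    // Returns a cached Statement for this SQL text, preparing it only once.
    std::shared_ptr<Statement> prepare(const std::string& sql) {
        auto it = statements_.find(sql);
        if (it != statements_.end()) return it->second;  // reuse, no re-prepare
        auto stmt = std::make_shared<Statement>();
        // Derive the name from the SQL text, so the same statement is never
        // prepared twice under different names, and never re-prepared under
        // the same name (which would abort the transaction).
        stmt->name = "OES_" + std::to_string(std::hash<std::string>{}(sql));
        stmt->sql = sql;
        statements_.emplace(sql, stmt);
        ++prepared_;
        return stmt;
    }

    int preparedCount() const { return prepared_; }

private:
    std::unordered_map<std::string, std::shared_ptr<Statement>> statements_;
    int prepared_ = 0;
};
```

Keying the map on the SQL text means repeated `prepare` calls return the same object, so the server never sees a duplicate prepare for an existing name (avoiding the 42P05 round trip noted above).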
Update 01 Sep:
- Currently each new prepared statement gets a new name (which can degrade performance over time).
```cpp
void Statement::Impl::buildStatementName() {
auto uid = oes::core::Guid::make();
std::stringstream ss;
ss << std::this_thread::get_id();
_statementName = "OES_" + uid.toStringNoDash() + "_" + ss.str();
}
```
- It is necessary to generate statement names deterministically.
- We need to avoid recreating statements with the same name (this would lead to aborted transactions).

https://community.opengroup.org/osdu/data/data-definitions/-/issues/63
**Publish DD M20 v0.23.0** (2023-10-06, Thomas Gehrmann [slb])

- [x] publish the M20 content delivered by OSDU Data Definition

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/134
**OSDU-DD-Delivery-M20 (v0.23.0)** (2023-08-21, Thomas Gehrmann [slb])

- [x] Update to the M20 deliverables from OSDU Data Definitions

(Milestone: M20 - Release 0.23)