OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

---

Fail the script if a json file is invalid
https://community.opengroup.org/osdu/ui/data-loading/osdu-manifest-validator/-/issues/1 (Yan Sushchynski (EPAM), 2024-02-28)

Fail the script if a JSON file is invalid.

---

[Feature] Multi data-partition support
https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui/-/issues/21 (Eirik Haughom, 2024-03-14)

As customers deploy additional data partitions to the same OSDU instance, it would be good to manage those from a single Admin UI, removing the need to deploy e.g. 10 Admin UIs for 10 data partitions on a single OSDU instance.
I suggest making the configuration an array:
```json
{
  "data_partitions": [
    "dp1",
    "dp2",
    "dp3",
    "Opendes"
  ]
}
```
And having a drop-down in the GUI.
![image](/uploads/2fdb3774549d19f3aa93785ad62554b5/image.png)
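A side note on what the drop-down would drive: OSDU services typically select the partition via the `data-partition-id` request header, so a multi-partition UI mostly needs to switch that header per request. A minimal illustrative sketch (the class and method names are hypothetical, not Admin UI code):

```java
import java.net.URI;
import java.net.http.HttpRequest;

/** Hypothetical helper: build a request against the partition chosen in the UI. */
final class PartitionRequestFactory {

    static HttpRequest get(URI endpoint, String selectedPartition, String token) {
        return HttpRequest.newBuilder(endpoint)
                .header("Authorization", "Bearer " + token)
                // the value would come from the proposed data_partitions drop-down
                .header("data-partition-id", selectedPartition)
                .GET()
                .build();
    }
}
```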
---

Missing error message caused by trivial bug
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/234 (Paal Kvamme, 2024-02-28)

The error message from DmsDataset::registerDataset() is missing the important part.
In src/OpenVDS/IO/DmsIoFactories/DmsIoManagerFactory.cpp:
Line 200: `responseData = std::move(request->m_uploadHandler->responseData);`
Line 204: `respons_str.insert(...); // access request->m_uploadHandler->responseData`
The `std::move` on line 200 leaves `request->m_uploadHandler->responseData` empty, so the access on line 204 reads an empty string and the important part of the message is dropped. The bug is also copy-pasted into DmsDataset::lockDataset().

---

Automatic add to users-group
https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui/-/issues/20 (Eirik Haughom, 2024-03-14)

When you add a user to any OSDU Entitlements group through the Admin UI, it also adds the user to the users group if they are not already a member. While this is a great feature for many, some govern this group's membership through other means.
E.g. some customers run a group hierarchy where groups are nested into the users group, and thus would not want individual users present in this group.
I propose to make this feature configurable (opt-in or opt-out) through the config file.
```json
{
  "automatic_add_to_users": true
}
```
(a boolean, `true` or `false`)

---

[BUG] Batch Endpoint Wrong Response Details
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-timeseries-ingestion-service/-/issues/27 (aliuddin abd rauf, 2024-03-27)

The batch endpoint returns a 500 code when no timeseries data could be processed due to errors (any error: source not valid, entity not authorized, propertyDesc not valid, etc.).
![image.png](/uploads/e404951f2b0840bc3aed444b1d2cd820/image.png)
I think this should be a 400 Bad Request, since the user provided invalid data, except when the error happened because the entity being passed is not authorized, in which case it should be 403. If the failures combine different error codes, e.g. 400 + 404, then it should return 207.
Milestone: PDMS MVP1, phase2
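A minimal sketch of the overall-status selection described above (a hypothetical helper, not the actual service code): collapse the distinct per-item failure codes to one response code, falling back to 207 Multi-Status for mixed failures.

```java
import java.util.Set;

/** Hypothetical helper illustrating the proposed batch response codes. */
final class BatchStatusResolver {

    /** @param itemStatusCodes distinct failure codes (e.g. 400, 403, 404)
     *        collected while processing the batch items */
    static int resolveOverallStatus(Set<Integer> itemStatusCodes) {
        if (itemStatusCodes.isEmpty()) {
            return 200; // every item processed successfully
        }
        if (itemStatusCodes.size() == 1) {
            // a single failure type: 400 for invalid data, 403 for unauthorized, ...
            return itemStatusCodes.iterator().next();
        }
        return 207; // mixed failure types; report per-item details in the body
    }
}
```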
---

[BUG] Single Endpoint Error Response Value Incorrect
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-timeseries-ingestion-service/-/issues/26 (aliuddin abd rauf, 2024-03-27)

The error response format for the single timeseries ingestion is correct, but the content of the JSON body seems incorrect for the reason and message properties.
![image.png](/uploads/3b80ecfb5fcf32719d4724c205180193/image.png)
Following the standard error format, the reason should describe the status code and is usually the same as the HTTP reason phrase, e.g. for 400 the reason is "Bad Request". The message should clarify what triggered the error, instead of passing a stringified JSON value.
Milestone: PDMS MVP1, phase2
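For illustration, a sketch of the error shape the report implies (the code/reason/message field names follow the screenshot; this is not the actual service model):

```java
/** Hypothetical error model following the reported code/reason/message shape. */
record ErrorResponse(int code, String reason, String message) {

    static ErrorResponse badRequest(String cause) {
        // reason mirrors the HTTP reason phrase; message states the concrete cause
        return new ErrorResponse(400, "Bad Request", cause);
    }
}
```

E.g. `ErrorResponse.badRequest("propertyDesc 'x' is not valid")` rather than embedding a serialized JSON string in the message.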
---

[BUG] Batch Endpoint Failed Content To Be JSON not String
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-timeseries-ingestion-service/-/issues/25 (aliuddin abd rauf, 2024-03-27)

For the batch endpoint, the content section for failed requests in the response body should be JSON instead of stringified JSON.
![Screenshot from 2024-02-28 11-24-54.png](/uploads/a1d7297a8630747f5c3d2e401c68bc10/Screenshot_from_2024-02-28_11-24-54.png){width=717 height=449}
And the reason part should reflect the stated code, e.g. 404 Not Found.
Milestone: PDMS MVP1, phase2

---

Unable to search ingested records using search API
https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/152 (Mohd Asad Shaikh, 2024-03-08)

Hi Team,
I am not able to search artifacts using the Search service. However, I can search them using the Storage service. Attached are the DAG ingestion success response and the empty search response.
![Dag_Success_result](/uploads/0a5813e7d6d4d8f6a08052fc23ef8852/Dag_Success_result.png)
![image__3_](/uploads/6638d7ff797274aaacc35e7a09e9dbf3/image__3_.png)
![search_result_](/uploads/d316028de45b17c3be16e9d4a575c5c1/search_result_.png)

---

CRS - Problem when the data is displayed in different UTM zones at the same project
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/233 (Juliana Fernandes, juliana.fernandes@iesbrazil.com.br, 2024-03-07)

Hello,
The IesBrazil team is testing OpenVDS+ with CRS, and one of the steps was to QC the data using Headwave from Bluware.
The team noticed a problem, and we documented the tests, which I present below:
**Goal of the Tests:** Check whether OpenVDS+ correctly adds the CRS to the VDS file,<br>
**Methodology:** Convert SEGY to VDS using OpenVDS+/Headwave and QC the data using Headwave,<br>
**Data used:** Volve and Brazilian data (Volve doesn't present any problem). From Brazil we used 4 files from the Solimões Basin and 1 file from the Amazonas Basin, provided by ANP. The data can be found [HERE](https://reate.cprm.gov.br/anp/TERRESTRE); below you can directly download all the files used in this test (the Brazilian ones, which is where we identified the problem):
* [0233_LESTE_URUCU.3D.MIG_FIN.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0233_LESTE_URUCU%2FTEMPO%2FSISMICA&files=0233_LESTE_URUCU.3D.MIG_FIN.1.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [0237_AEROPORTO.3D.MIG_FIN.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0237_AEROPORTO%2FTEMPO%2FSISMICA&files=0237_AEROPORTO.3D.MIG_FIN.1.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [0237_IGARAPE_MARTA.3D.MIG_FIN.2.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0237_IGARAPE_MARTA%2FTEMPO%2FSISMICA&files=0237_IGARAPE_MARTA.3D.MIG_FIN.2.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [R0300_3D_CHIBATA_PSTM.3D.PSTM.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2FR0300_3D_CHIBATA%2FTEMPO%2FSISMICA&files=R0300_3D_CHIBATA_PSTM.3D.PSTM.1.sgy) - Solimões Basin, SIRGAS 2000/UTM 20S, EPSG:31980
* [R0300_2D_AM_URUCARA.3D.PSTM.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/IPNA8z7hO1vHsxI/download?path=%2FSISMICA_3D%2FR0300_3D_AM_URUCARA%2FTEMPO%2FSISMICA&files=R0300_3D_AM_URUCARA.3D.PSTM.1.sgy) - Amazonas Basin, SAD69/UTM 21S, EPSG:29191<br>
**Shapefile:** Georeferenced polygons of exploratory blocks in geographic coordinates and datum SAD69, available [HERE](https://geomaps.anp.gov.br/geoanp/),<br>
**Problem:** When the project has a different zone from the data (e.g. the project is located at SAD69/UTM 20S and the data at SAD69/UTM 21S), the file is wrongly positioned spatially (we compared a VDS converted by Headwave against the original SEGY),<br>
**OpenVDS+ Version:** 3.3.0,<br>
**Comparative Scenario:**
* SEGY with Original CRS
* SEGY with WGS84 CRS
* VDS from HW with Original CRS
* VDS from HW with WGS84 CRS
* VDS from OpenVDS+ with WGS84 CRS (only option available)
### First Scenario - SEGY with Original CRS
The project is under the coordinate reference system SAD69/UTM 20S (EPSG:29190), which covers most of the data in the test.<br>
In this test the team uploaded all the SEGY files listed under "Data used", using their original CRS (as given in the data list), and displayed them. The red polygon in the upper right corner of the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Second Scenario - SEGY with WGS84 CRS
The project is under the coordinate reference system SAD69/UTM 20S (EPSG:29190), which covers most of the data in the test.<br>
In this test the team uploaded all the SEGY files listed under "Data used", using the WGS84 CRS, and displayed them. The red polygon in the upper right corner of the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Third Scenario - VDS from HW with Original CRS
The project is under the coordinate reference system SAD69/UTM 20S (EPSG:29190), which covers most of the data in the test.<br>
In this test the team converted all the SEGY files listed under "Data used" to VDS using Headwave, keeping their original CRS (as given in the data list), and displayed them. The red polygon in the upper right corner of the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Fourth Scenario - VDS from HW with WGS84 CRS
The project is under the coordinate reference system SAD69/UTM 20S (EPSG:29190), which covers most of the data in the test.<br>
In this test the team converted all the SEGY files listed under "Data used" to VDS using Headwave, with the WGS84 CRS, and displayed them. The red polygon in the upper right corner of the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Fifth Scenario - VDS from OpenVDS+ with WGS84 CRS (only option available)
The project is under the coordinate reference system SAD69/UTM 20S (EPSG:29190), which covers most of the data in the test.<br>
In this test the team converted all the SEGY files listed under "Data used" to VDS using OpenVDS+, with the WGS84 CRS (the only option available), and displayed them. The red polygon in the upper right corner of the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: The file R0300_3D_AM_URUCARA, which is located in a different zone from the project (21S), is wrongly positioned spatially (it should be at the same position as the red polygon).**
![VDS_with_WGS84_Open](/uploads/fc6638d20fffea098cd2fde1ee44dcac/VDS_with_WGS84_Open.png)
We are available for any additional information needed.
Regards,
Juliana

---

Add "create new stream" method in the ingestion service
https://community.opengroup.org/osdu/platform/data-flow/real-time/RTDIP/rtdip/-/issues/5 (Santiago Ortiz [EPAM], santiago_ortiz@epam.com, 2024-02-27)

# Description
**(Can be changed depending on architecture updates)**
Add a method in the ingestion service to set up a new stream based on a "streaming manifest".
# Acceptance Criteria
[TBD]
# Testing Scenarios
[TBD]
# Technical Notes
[TBD]
- Based on the OSDU-RT_Design presentation (slide 14)

---

[Spike] Research RTDIP streaming capabilities
https://community.opengroup.org/osdu/platform/data-flow/real-time/RTDIP/rtdip/-/issues/4 (Santiago Ortiz [EPAM], santiago_ortiz@epam.com, 2024-03-13)

# Description
Understand how the RTDIP library can be used to create a pipeline with "[spark structured streaming](https://spark.apache.org/docs/latest/api/python/reference/pyspark.ss/index.html)" jobs.
# Acceptance Criteria
Small proof-of-concept created to simulate the desired behavior.
# Testing Scenarios
N/A
# Technical Notes
- TBD

---

Create a secure repository
https://community.opengroup.org/osdu/platform/data-flow/real-time/RTDIP/rtdip/-/issues/3 (Mikhail Teplitskiy, 2024-03-14)

Keep EPAM connections safe and secret.
Assignee: Igor Zimovets (EPAM). Due: 2024-03-01.

---

[Spike] Research RTDIP integration capabilities
https://community.opengroup.org/osdu/platform/data-flow/real-time/RTDIP/rtdip/-/issues/2 (Santiago Ortiz [EPAM], santiago_ortiz@epam.com, 2024-02-27)

# Description:
Understand how the RTDIP library can be integrated with a Kafka cluster and Databricks Delta tables.
# Acceptance criteria:
Small proof-of-concept created to simulate the desired behavior.
# Testing Scenarios:
N/A
# Technical Notes:
N/A
Milestone: Backlog 2024

---

EPAM Team has access to main infrastructure components
https://community.opengroup.org/osdu/platform/data-flow/real-time/RTDIP/rtdip/-/issues/1 (Mikhail Teplitskiy, 2024-02-27)

<DBD>
Milestone: Backlog 2024

---

ADR: Option to retain source systems audit info and override audit fields during migration
https://community.opengroup.org/osdu/platform/system/storage/-/issues/218 (Rasheed Nagoor Gani, 2024-03-26)

[[_TOC_]]
# Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
# Background
When a record is created, the 'createUser'/'modifyUser' fields automatically capture the username from the token, and the 'createTime'/'modifyTime' fields are set to the current timestamp. These fields play a crucial role in providing audit information to identify who created or modified a record and when. While this mechanism works seamlessly for new records created through the OSDU APIs, it may lead to confusion when dealing with migrated data.
The source systems maintain their own set of audit fields, which should be preserved in their original state during migration. Preserving this audit trail is vital to upholding data integrity and regulatory compliance.
Refer to Aha ticket [IDEA-I-130](https://osdu-community.ideas.aha.io/ideas/IDEA-I-130).
# Context & Scope
The audit information captured in 'createUser', 'createTime', 'modifyUser' and 'modifyTime' fields can be stored in extendedProperties. However, the limitations of extendedProperties, such as the inability to index values, hinder the efficient filtering and retrieval of records.
To address this issue, either the source system's audit information (createUser, createTime, modifyUser and modifyTime) should be set in a new attribute, or the storage service should allow overriding the existing attribute values.
# Proposed solution
Option 1: Introduce an 'Audit' object attribute into the common schema, integrating it as a standard attribute of all data type schemas. This approach ensures consistent and comprehensive auditing capabilities across different data types.
Option 2: Implement a new user or role with specialized permissions to override audit attributes, including the createUser, createTime, modifyUser and modifyTime fields. This user or role is specifically designated for managing data migration processes. For instance, when the ingestion API is initiated by this designated user, the Platform verifies its migration status; in such instances, the user's email and creation time are sourced from Manifest values rather than from the token or the current timestamp (see the sketch below).
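A minimal sketch of what Option 2's decision could look like (all names here are illustrative assumptions, not actual Storage Service code):

```java
import java.time.Instant;

/** Hypothetical sketch of Option 2: callers holding a dedicated migration
 *  role may supply audit values from the source system; all other callers
 *  get the token user and current timestamp, as today. */
final class AuditResolver {

    record Audit(String createUser, Instant createTime) {}

    static Audit resolve(boolean callerHasMigrationRole,
                         Audit auditFromManifest,
                         String tokenUser) {
        if (callerHasMigrationRole && auditFromManifest != null) {
            return auditFromManifest; // preserve the source system's audit trail
        }
        return new Audit(tokenUser, Instant.now()); // current behavior
    }
}
```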
# Consequences
Option 1: The implementation of Option 1 may entail a time-consuming process and could potentially have a significant impact on existing records. Integrating the 'Audit' object attribute into the common schema may require thorough planning and careful consideration to mitigate disruptions to the system.
Option 2: While Option 2 eliminates the need to introduce new attributes, it necessitates modifications to the Storage Service logic. Adapting the system to accommodate a new user or role with override permissions may require adjustments to the existing logic and workflows within the Storage Service.

---

Slowness when getting large number of objects for "imported by reference" resources
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/119 (Laurent Deny, 2024-02-29)

There is a performance hit when using getObjects with a large list of objects, in a dataspace containing resources imported by reference from another dataspace.
Milestone: M23 - Release 0.26. Assignee: Laurent Deny.

---

Feature 'featureFlag.autocomplete.enabled' does not work
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/158 (Riabokon Stanislav (EPAM) [GCP], 2024-03-08)

The GC team initiated testing for the new feature 'featureFlag.autocomplete.enabled'. Following the documentation guidelines, we configured the 'featureFlag.bagOfWords.enabled' flag with a value of 'true' on the Indexer Service and set 'featureFlag.autocomplete.enabled' to 'true' as well. Unfortunately, the integration test did not yield the expected results.
To investigate the issue further, we carefully examined the index in Elasticsearch.
```json
{
"osdu-search1709032988256-test-data--integration-1.0.1": {
"aliases": {
"a1632179934": {
},
"a1632185707": {
}
},
"mappings": {
"dynamic": "false",
"properties": {
"acl": {
"properties": {
"owners": {
"type": "keyword"
},
"viewers": {
"type": "keyword"
}
}
},
"ancestry": {
"properties": {
"parents": {
"type": "keyword"
}
}
},
"authority": {
"type": "constant_keyword",
"value": "osdu"
},
"bagOfWords": {
"type": "text",
"store": true,
"fields": {
"autocomplete": {
"type": "completion",
"analyzer": "simple",
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 50
}
}
},
"createTime": {
"type": "date"
},
"createUser": {
"type": "keyword"
},
"data": {
"properties": {
"Basin": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"Center": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"Country": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"County": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"DblArray": {
"type": "double"
},
"EmptyAttribute": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"Established": {
"type": "date"
},
"Field": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"Location": {
"type": "geo_point"
},
"OriginalOperator": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"Rank": {
"type": "integer"
},
"Score": {
"type": "integer"
},
"State": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"WellName": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"WellStatus": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
},
"WellType": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256
},
"keywordLower": {
"type": "keyword",
"null_value": "null",
"ignore_above": 256,
"normalizer": "lowercase"
}
},
"copy_to": [
"bagOfWords"
]
}
}
},
"id": {
"type": "keyword"
},
"index": {
"properties": {
"lastUpdateTime": {
"type": "date"
},
"statusCode": {
"type": "integer"
},
"trace": {
"type": "text"
}
}
},
"kind": {
"type": "keyword"
},
"legal": {
"properties": {
"legaltags": {
"type": "keyword"
},
"otherRelevantDataCountries": {
"type": "keyword"
},
"status": {
"type": "keyword"
}
}
},
"modifyTime": {
"type": "date"
},
"modifyUser": {
"type": "keyword"
},
"namespace": {
"type": "keyword"
},
"source": {
"type": "constant_keyword",
"value": "search1709032988256"
},
"tags": {
"type": "flattened"
},
"type": {
"type": "keyword"
},
"version": {
"type": "long"
},
"x-acl": {
"type": "keyword"
}
}
},
"settings": {
"index": {
"routing": {
"allocation": {
"include": {
"_tier_preference": "data_content"
}
}
},
"refresh_interval": "30s",
"number_of_shards": "1",
"provided_name": "osdu-search1709032988256-test-data--integration-1.0.1",
"creation_date": "1709032992807",
"number_of_replicas": "1",
"uuid": "rpIKCM9NRmm3gb41_7algw",
"version": {
"created": "7171799"
}
}
}
}
}
```
As far as we can determine, the Indexer has introduced a new block:
```json
"bagOfWords": {
"type": "text",
"store": true,
"fields": {
"autocomplete": {
"type": "completion",
"analyzer": "simple",
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 50
}
}
}
```
We acknowledge that the implementation is in accordance with the documentation.
The new request to the Search Service has been reviewed.
```json
{
"offset":0,
"kind":"osdu:search1709032988256:test-data--Integration:1.0.1",
"limit":0,
"query":"data.OriginalOperator:OFFICE4",
"suggestPhrase":"data",
"returnHighlightedFields":false,
"highlightedFields":[
],
"returnedFields":[
],
"queryAsOwner":false,
"trackTotalCount":false
}
```
A request from the Search Service to Elasticsearch:
`SearchRequest{searchType=QUERY_THEN_FETCH, indices=[osdu-search1709032988256-test-data--integration-1.0.1,-.*,-system-meta-data-*], indicesOptions=IndicesOptions[ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, expand_wildcards_hidden=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=0, batchedReduceSize=512, preFilterShardSize=null, allowPartialSearchResults=null, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"from":0,"size":10,"timeout":"1m","query":{"bool":{"must":[{"bool":{"must":[{"query_string":{"query":"data.OriginalOperator:OFFICE4","fields":[],"type":"best_fields","default_operator":"or","max_determinized_states":10000,"allow_leading_wildcard":false,"enable_position_increments":true,"fuzziness":"AUTO","fuzzy_prefix_length":0,"fuzzy_max_expansions":50,"phrase_slop":0,"escape":false,"auto_generate_synonyms_phrase_query":true,"fuzzy_transpositions":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}}],"filter":[{"terms":{"x-acl":["data.test-users-data-root`
truncated...
Notably, the recent request from the Search Service to Elasticsearch lacks a field identified as 'autocomplete.'
Additionally, we have identified a method, org.opengroup.osdu.search.util.SuggestionsQueryUtil#getSuggestions, which seems to contain the logic related to suggestions.
```java
public SuggestBuilder getSuggestions(String suggestPhrase) {
if (!autocompleteFeatureFlag.isFeatureEnabled(AUTOCOMPLETE_FEATURE_NAME) || suggestPhrase == null || suggestPhrase == "") {
return null;
}
SuggestionBuilder suggestionBuilder = SuggestBuilders.completionSuggestion(
"bagOfWords.autocomplete"
).text(suggestPhrase).skipDuplicates(true);
SuggestBuilder suggestBuilder = new SuggestBuilder();
suggestBuilder.addSuggestion(SUGGESTION_NAME, suggestionBuilder);
return suggestBuilder;
}
```
I suppose this method should be used when we create a request to Elasticsearch, but currently it is only invoked by JUnit tests.
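If that reading is right, the missing piece would be wiring the returned SuggestBuilder into the query sent to Elasticsearch, roughly like the following sketch (hypothetical wiring using the same Elasticsearch client types as the quoted method; not actual Search Service code):

```java
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.suggest.SuggestBuilder;

/** Hypothetical sketch: attach the suggestions section to the outgoing query. */
final class SuggestWiring {

    static SearchSourceBuilder withSuggestions(SearchSourceBuilder source,
                                               SuggestBuilder suggestions) {
        if (suggestions != null) {
            // adds a "suggest" section targeting bagOfWords.autocomplete
            source.suggest(suggestions);
        }
        return source;
    }
}
```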
To sum up, the feature 'featureFlag.autocomplete' has perhaps not been fully implemented. Please pay attention to it.
Milestone: M23 - Release 0.26. Assignee: Mark Chance.

---

ADR: OSDU API Versioning Strategy
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/79 (Rustam Lotsmanenko (EPAM), rustam_lotsmanenko@epam.com, 2024-03-21)

# ADR: OSDU API Versioning Strategy
## Status
- [ ] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context
- We have mature APIs solidified over time, but they are not immutable forever. From time to time, changes are suggested, for example, https://community.opengroup.org/osdu/platform/system/partition/-/issues/49
- OSDU Java-based services communicate through interfaces defined in the Core-Common library, those interfaces must be aligned with related service API, for example https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/tree/master/src/main/java/org/opengroup/osdu/core/common/storage?ref_type=heads
- This ADR mainly refers to cases when we have the necessity to change existing concepts. We are suggesting the introduction of a way to handle those changes, keeping compatibility, and minimizing configurations.
## Problem Statement
- We do not have a strategy to introduce breaking changes in APIs without disrupting existing environments or making overcomplicated configurations.
- Any change requires updates in Core-Common interfaces and in the services that rely on those APIs, slowing down development. A change must be adopted from top to bottom, from the service that provides the API to the services that consume it.
- It is impossible to support two versions at a time; a change must be implemented everywhere. Major changes happen to the same interfaces: we do not introduce them as new entities but rewrite the existing logic.
## Proposal
- Introduce API-compatible autodiscovery configurations in Core-Common. Using Spring Conditions we can introduce self-assembling configurations, enabling compatible clients in an automated manner.
- Improve the /info endpoint response by providing the service's API version in it.
## Decision
- Add the supported API version to the /info endpoint response:
~~~
{
"groupId": "org.opengroup.osdu",
"artifactId": "partition-gc",
"version": "0.26.0-SNAPSHOT",
"api-version": "V2"
.....
}
~~~
- Introduce autodiscovery mechanism to Core-Common, client configurations that would pick implementations according to a response from /info endpoints.
![Untitled_Diagram.drawio_27_](/uploads/c76eafd5760d89c88093ee5574ed20a5/Untitled_Diagram.drawio_27_.png)
- If API changes require updates in a consumer service, introduce them not as a replacement for the current functionality but as an addition, relying on the autoconfigurations from Core-Common.
![Untitled_Diagram.drawio_28_](/uploads/bdd9d172fff18d1a2c5cf3bd28b6c933/Untitled_Diagram.drawio_28_.png)
**Pros**:
- Reduced configuration hell, no need to point out each API version manually in the environment or properties.
- Defined versioning strategy.
- Ability to introduce new API versions, keeping existing environments intact.
**Cons**:
- Bean configs that depend on HTTP requests to fetch API version could slow down initial service initialization.
Pseudocode example:
~~~
@ConditionalOn(PartitionAPICondition.class)
public class PartitionServiceV1{
V1 compatible logic...
}
@ConditionalOn(PartitionAPICondition.class)
public class PartitionServiceV2{
V2 compatible logic...
}
public class PartitionAPICondition{
getApiVersion(){
String apiVersion = partition.getApiVersion()
return apiVersion;
}
}
~~~
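For concreteness, here is one way the pseudocode above could translate to Spring's Condition API. Every name, the property key, and the "api-version" field are assumptions taken from this ADR, not existing Core-Common code:

~~~java
import java.util.Map;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.web.client.RestTemplate;

interface PartitionService {}
class PartitionServiceV1 implements PartitionService {} // V1-compatible client
class PartitionServiceV2 implements PartitionService {} // V2-compatible client

/** Matches when the Partition service reports "api-version": "V2" on /info. */
class PartitionV2ApiCondition implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        String infoUrl = context.getEnvironment()
                .getProperty("partition.url", "http://partition") + "/info";
        try {
            // one HTTP call at startup; this is the initialization cost noted under "Cons"
            Map<?, ?> info = new RestTemplate().getForObject(infoUrl, Map.class);
            return info != null && "V2".equals(info.get("api-version"));
        } catch (Exception e) {
            return false; // unreachable or no api-version field: fall back to V1
        }
    }
}

/** Inverse of the V2 condition, so exactly one client bean is created. */
class PartitionV1ApiCondition implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return !new PartitionV2ApiCondition().matches(context, metadata);
    }
}

@Configuration
class PartitionClientAutoConfiguration {
    @Bean
    @Conditional(PartitionV1ApiCondition.class)
    PartitionService partitionServiceV1() { return new PartitionServiceV1(); }

    @Bean
    @Conditional(PartitionV2ApiCondition.class)
    PartitionService partitionServiceV2() { return new PartitionServiceV2(); }
}
~~~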
## Use cases
**We do not need this approach if:**
1. OSDU API change does not require changes in existing logic.
2. If a new API introduces new entities, and new endpoints, and can be handled by corresponding service without changes in their clients.
3. If new APIs are complementary to existing ones.
For example, Storage Replay API https://community.opengroup.org/osdu/platform/system/storage/-/issues/187 doesn't require changing existing Reindex API in Indexer, it adds new entities and will be handled separately.
**This approach might help us if:**
1. OSDU API change requires changes in existing services client, logic, data structure, and the way they communicate.
2. If API change affects existing entities that are widely used within service logic.
To keep backward compatibility, we can usually put the required changes in place: update the client or add a new one. The main purpose of this ADR is to simplify the maintenance of those changes. Instead of configuring each component manually, pointing out which components should be present in the operating platform, rely on autoconfiguration via discovery.
## Rationale
- With a versioning strategy that offers backward compatibility, the development process can go faster.
- Existing environments that are bound to stable APIs wouldn't block Platform evolution.
- Defining a standard way of versioning APIs will improve Platform maintenance, and help to unify configurations that are currently a mixture of env variables, properties, etc.
## Consequences
- Adding API version to the /info endpoint.
- Changes in Core-Common, the introduction of API version self-discovery mechanism.
- Requirement to follow new versioning approach.
## Alternatives
**Similar approach but without autodiscovery:**
~~~
@ConditionalOnProperty(PartitionAPIV1Property)
public class PartitionServiceV1{
V1 compatible logic...
}
@ConditionalOnProperty(PartitionAPIV2Property)
public class PartitionServiceV2{
V2 compatible logic...
}
public class PartitionAPICondition{
PartitionAPIV1Property = false;
PartitionAPIV2Property = true;
}
~~~
Pros:
- no impact on initialization time
Cons:
- It could become configuration hell, configuring all API versions through properties:
~~~
partition.api=v2
storage.api=v1
entitlements.api=v3
etc, etc.
~~~
**Keep things as they are:**
- The proposed solution might be overcomplicated and violate the YAGNI principle; if use cases are limited, we can define a strategy per change instead of introducing a standard.

---

Subproject creation accepts non-existing groups in ACLs
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/128 (Yan Sushchynski (EPAM), 2024-02-26)

## Description of the problem
It is possible to create a new subproject with non-existing groups in the `acls` field. After that, any action in the subproject, except deleting it, throws a `403`.
## Steps to reproduce it
1. Create a new subproject with invalid acls:
```shell
curl --location --request POST 'https://<svc_url>/v3/subproject/tenant/osdu/subproject/test-123' \
--header 'x-api-key: {{SVC_API_KEY}}' \
--header 'Content-Type: application/json' \
--header 'ltag: osdu-demo-legaltag' \
--header 'appkey: {{DE_APP_KEY}}' \
--header 'Authorization: Bearer <token>' \
--data-raw '{
"storage_class": "REGIONAL",
"storage_location": "US-CENTRAL1",
"acls": {
"admins": [
"data.sdms.non-existing.admin@osdu.group"
],
"viewers": [
"data.sdms.non-existing.viewer@osdu.group"
]
}
}'
```
This request is executed without any error.
2. Try to upload any file to the subproject:
```shell
python sdutil cp somefile sd://osdu/test-123/somefile
```
Output:
```
[403] [seismic-store-service] User not authorized to perform this operation
```
Assignees: Diego Molteni, Sacha Brants.
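A possible fix is to validate the ACL groups at creation time instead of accepting them and returning 403 on every later call. A sketch of that pre-flight check (shown in Java purely for illustration; the actual service is not Java, and all names here are hypothetical):

```java
import java.util.List;
import java.util.Set;

/** Hypothetical pre-flight check: reject subproject ACLs that reference
 *  groups unknown to Entitlements. Not actual seismic-store code. */
final class AclValidator {

    interface EntitlementsClient {
        Set<String> listGroupEmails(); // groups visible in the partition
    }

    static void validateAcls(List<String> aclGroups, EntitlementsClient entitlements) {
        Set<String> known = entitlements.listGroupEmails();
        for (String group : aclGroups) {
            if (!known.contains(group)) {
                // fail fast at creation time instead of 403 on every later operation
                throw new IllegalArgumentException("Unknown ACL group: " + group);
            }
        }
    }
}
```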
---

Data - Load Horizon/Interpretation extent polygons into GLAB
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/346 (Levi Remington, 2024-03-12)

As a GCZ Developer, I require access to Horizon/Interpretation Extent Polygons in the GLAB OSDU instance. However, loading attempts have resulted in errors due to issues with GLAB's CRSTransformation service, which prevent transforming points into WGS84.
Because the same data loading techniques succeed in the Azure Preship environment but fail in GLAB, this indicates an issue with the GLAB environment that should first be addressed by the GLAB team.
Assignees: Valentin Gauthier, Michael Wilhite, Levi Remington.