OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues, updated 2021-02-15)

https://community.opengroup.org/osdu/platform/data-flow/real-time/processors/pipe/-/issues/9
**Incidunt dolorem dolore vitae.** (Dmitry Kniazev, 2021-02-15)

#### Possimus
Optio sunt ipsum. Et voluptatibus laboriosam. Error labore incidunt. Repellat in voluptatem. Qui repudiandae neque.
```ruby
Quia.
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/99
**Include aws region in dataset information for AWS Seismic DDMS data** (Michael, 2024-02-26)

When using sdapi to retrieve Seismic DDMS data coming from AWS, a user first needs to set the AWS_REGION environment variable (see https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-cpp-lib/-/issues/21).
To better handle this use case, the get dataset service `/dataset/tenant/{tenantid}/subproject/{subproject}/dataset/{datasetid}` should provide information about the AWS region when the dataset is stored in S3 storage.

https://community.opengroup.org/osdu/platform/system/search-service/-/issues/114
**Inconsistent status codes when user has no access** (Marton Nagy, 2023-06-15)

When I requested a search at url "https://{baseURL}/api/search/v2/search" with headers like:
```
[Authorization, Bearer ey...]
[Accept, application/json]
[data-partition-id, slb]
```
and body: `{"kind":"*:*:*:*","limit":100,"query":"(kind:osdu\\:wks\\:master-data--Wellbore\\:*) AND (\"mnagy-12\" )","queryAsOwner":false,"offset":0}`
on an environment where **there is no "slb" data partition**, or at least I have no access to that.
I've got result: `{"code":401,"reason":"Access denied","message":"The user is not authorized to perform this action"}`
And the next query was successful when the data-partition-id header was changed to a valid data partition for which I have access.
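It is worth noting that the same bearer token succeeded once a valid partition was supplied, which points at an authorization rather than an authentication failure. The usual HTTP semantics can be sketched with a hypothetical helper (this is not the Search service's code):

```python
# Hypothetical sketch, not Search service code: mapping auth outcomes
# to HTTP status codes per the usual RFC 9110 semantics.

def status_for(authenticated: bool, authorized: bool) -> int:
    """Pick a status code for an access-control outcome."""
    if not authenticated:
        return 401  # credentials missing or invalid
    if not authorized:
        return 403  # identity known, but action not allowed
    return 200

# A valid bearer token targeting a partition the caller cannot access:
print(status_for(authenticated=True, authorized=False))   # 403
print(status_for(authenticated=False, authorized=False))  # 401
```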
Instead of 401, I think a 403 Forbidden would be much clearer: 401 usually means "I don't know who you are", while 403 means "I know who you are, but you cannot do that".

https://community.opengroup.org/osdu/platform/system/storage/-/issues/120
**Inconsistent behavior of storage PUT when skipdupes is passed as true** (Mandar Kulkarni, 2022-08-26)

Storage PUT API has an optional query parameter called [skipdupes](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/docs/tutorial/StorageService.md#using-skipdupes).
The current behavior of the storage PUT API when updating an existing record is:
If skipdupes is passed as true and the data and meta blocks in the input request are the same as the existing record content, the record update is skipped.
This means the update is skipped even when the user has passed different legal, acl, or tags blocks in the input request, as long as the data and meta block content matches the existing record.
(This happens because, when skipdupes is passed as true, the storage service compares only the data and meta blocks of the incoming and existing records, not all the blocks in the record.)
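The comparison just described can be sketched as follows; the dict shapes and helper name are hypothetical, not the Storage service's actual model:

```python
# Hypothetical sketch of the skipdupes comparison described above;
# records are plain dicts, not the actual Storage service model.

def should_skip_update(existing: dict, incoming: dict) -> bool:
    """Current behavior: only the 'data' and 'meta' blocks are compared."""
    return (existing.get("data") == incoming.get("data")
            and existing.get("meta") == incoming.get("meta"))

existing = {"data": {"Name": "W-1"}, "meta": [], "acl": {"owners": ["a"]}}
incoming = {"data": {"Name": "W-1"}, "meta": [], "acl": {"owners": ["b"]}}

# acl differs, yet the update is skipped because data/meta match:
print(should_skip_update(existing, incoming))  # True
```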
Expected behavior:
If skipdupes is passed as true, both the data and meta blocks should be compared. If the data block is the same but the legal, acl, or tags blocks differ, the same record should be updated. To keep the behavior in sync with the PATCH API, the record version should not be updated when only the tags, legal, or acl blocks are being changed.

https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/24
**Inconsistent error handling across services.** (Greg, 2022-09-27; milestone M1 - Release 0.1; assignee Chris Zhang)

Entitlements Service & Search Service: the response body is in JSON format, and the attributes "code", "reason", and "message" are returned along with the status code, e.g. 401 Unauthorized. Legal Service: the response body is in text format, with no "code", "reason", or "message" attributes returned.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/321
**Inconsistent variable usage in reference value postman collection** (Bryan Dawson, 2024-01-03)

In the postman collection for loading the reference data, most of the requests use the variable `WORKFLOW_URL`:
![image.png](/uploads/1cc45ca96af511b64707de142f8418fe/image.png)
However, some of the newer requests use a different variable, `osdu_endpoint`:
![image.png](/uploads/eb3ef72ada79756b33fe130b0f5fbb7f/image.png)
We should be consistent and use `WORKFLOW_URL` for all.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/320
**Inconsistent variable usage in schema registration postman collection** (Bryan Dawson, 2024-01-03)

Most of the requests in the [schema collection](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/blob/main/deployments/rafsddms_schemas_mvp.postman_collection.json?ref_type=heads) use the variable `SCHEMA_HOST` for the URL:
![image.png](/uploads/e488c000e08eac35ea04c155c873acae/image.png)
but a few do not and require setting up a separate variable for `OSDU_BASE_HOST`
![image.png](/uploads/0ae7881531b0a26d1c78e955a8d0f7e3/image.png)
We should change the collection to be consistent.

https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/issues/2
**Incorrect chart for Policy Service** (Ankit Sharma [Microsoft], 2021-05-18)

The chart for the Policy Service needs to be fixed: it installs OPA but not the Policy Service itself.

https://community.opengroup.org/osdu/platform/deployment-and-operations/helm-charts-azure/-/issues/27
**Incorrect Kubernetes namespace in Airflow container retrieval instructions** (Paweł Grudzień, 2023-09-18)
**Description:**
The provided instructions for accessing the Airflow web container refer to the wrong Kubernetes namespace. The documentation currently indicates the namespace as `airflow`, whereas the setup instructions establish it as `airflow2`. This is a minor bug, but it gets me every time I try to deploy (and was not obvious the first time I deployed).
**Details:**
In the provided documentation, users are instructed to set up Airflow in the `airflow2` namespace:
```bash
# Create Namespace
NAMESPACE=airflow2
kubectl create namespace $NAMESPACE
```
However, subsequent instructions to retrieve the Airflow web container are using the `airflow` namespace:
```bash
# Get Airflow web container
AIRFLOW_WEB_CONTAINER=$(kubectl get pod -n airflow | grep "web" | cut -f 1 -d " ")
```
```
$ AIRFLOW_WEB_CONTAINER=$(kubectl get pod -n airflow | grep "web" | cut -f 1 -d " ")
No resources found in airflow namespace.
```
**Expected Behavior:**
The instructions should be consistent, with both referring to the same Kubernetes namespace.
**Actual Behavior:**
There's an inconsistency between setup instructions and the container retrieval instructions in terms of the namespace used.
**Steps to Reproduce:**
1. Follow the provided instructions to set up Airflow.
2. Attempt to retrieve the Airflow web container using the given command.
3. Observe the mismatch in namespace usage.
**Suggested Fix:**
Update the container retrieval instructions to use the `airflow2` namespace:
```bash
# Get Airflow web container
AIRFLOW_WEB_CONTAINER=$(kubectl get pod -n airflow2 | grep "web" | cut -f 1 -d " ")
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/83
**Incorrect protocol exception message when putting an incorrect dataspace URI** (Philippe Verney, 2023-09-22)

Hi, I have tried to put a dataspace to open-etp-server using these attributes:
```cpp
Energistics::Etp::v12::Datatypes::Object::Dataspace dataspace;
dataspace.uri = "eml:///dataspace('testF2I/450dce67-0697-4bab-923f-456353d173ad)";
dataspace.path = "testF2I/450dce67-0697-4bab-923f-456353d173ad";
dataspace.storeCreated = 0;
dataspace.storeLastWrite = 0;
```
As you can see, I forgot the closing single quote in `dataspace.uri`, which should raise a protocol exception on the server side, and it does. But here is what the server returns:
```
Message Header received :
protocol : 24
type : 1000
id : 3
correlation id : 4
flags : 2
EXCEPTION received for message_id 4
One or more error code :
*************************************************
Resource non received :
key : 0
message : Space URI uses all-zero UUID
code : 5
*************************************************
```
I think the error code should be 9 (i.e. EINVALID_URI) instead of 5 (EINVALID_ARGUMENT) according to the specs, and the error message should not refer to an all-zero UUID, since I don't send an all-zero UUID at all.
Ideally, the error message should show the erroneous uri for the client to check what has been received by the server.
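A sketch of a URI check that would produce the expected behavior, using the error code value quoted above (9 = EINVALID_URI); this is illustrative Python, not open-etp-server code, and the regex is an assumption about the dataspace URI shape:

```python
import re

# Hypothetical validator, not open-etp-server code.
# Error code per the ETP 1.2 value quoted above: 9 = EINVALID_URI.
EINVALID_URI = 9

DATASPACE_URI = re.compile(r"^eml:///dataspace\('([^']+)'\)$")

def validate_dataspace_uri(uri: str):
    """Return (error_code, message), or (None, path) when the URI is valid."""
    m = DATASPACE_URI.match(uri)
    if m is None:
        # Echo the offending URI so the client can see what the server received
        return EINVALID_URI, f"invalid dataspace URI: {uri!r}"
    return None, m.group(1)

# Missing closing quote, as in the report:
code, msg = validate_dataspace_uri(
    "eml:///dataspace('testF2I/450dce67-0697-4bab-923f-456353d173ad)")
print(code)  # 9
```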
I think it may be related to #18.

https://community.opengroup.org/osdu/platform/system/partition/-/issues/9
**Incorrect response for incorrect request** (Riabokon Stanislav (EPAM) [GCP], 2021-03-19)
POST https://<partition_service>/api/partition/v1/partitions/Test1-2571428 with body
```
{
"properties_invalid": {
}
}
```
AR: Response is 404 with body "Partition does not exist."
ER: 400 Bad Request, since "properties" field should be mandatory.
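The expected ordering, validating the request body before the partition lookup, can be sketched with a hypothetical handler (this is not the Partition service code; the store and return shapes are assumptions):

```python
# Hypothetical request-handler sketch, not Partition service code:
# body validation runs before any partition lookup, so a missing
# mandatory "properties" field yields 400 rather than 404.

def create_partition(partition_id: str, body: dict, store: dict):
    if "properties" not in body:
        return 400, 'Bad Request: "properties" field is mandatory'
    if partition_id in store:
        return 409, "Partition already exists."
    store[partition_id] = body["properties"]
    return 201, "Created"

# The request body from the report, missing "properties":
print(create_partition("Test1-2571428", {"properties_invalid": {}}, {})[0])  # 400
```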
I suppose we have to consider another implementation related to the 'Valid' annotation: https://community.opengroup.org/osdu/platform/system/partition/-/blob/master/partition-core/src/main/java/org/opengroup/osdu/partition/api/PartitionApi.java#L50

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/123
**Incorrect status is being returned upon creating the schema that already exists in the system** (Kamlesh Todai, 2023-05-31)

When one tries to create a schema that already exists in the system, one gets the return error code of **400 - Bad request**. As per the API documentation this is correct, but I think the error code is misleading. The message returned is also misleading: it returns "message": "Update/Create failed because schema id is present in another tenant", which is not true, because the schema is present in the same tenant.
This is what one would expect: the return error code should be **409 Conflict**, indicating that the schema is already present, and the message should be "schema is present". (Milestone: M19 - Release 0.22)

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/319
**Incorrect usage of trim function leads to malformed resource names in monitoring resources terraform** (Paweł Grudzień, 2023-09-18)
Description:
In the Monitoring Resources main.tf Terraform module, the trim function is being used to remove specific suffixes from strings. However, the current usage can lead to the unintended removal of characters, causing malformed resource names in Azure resources.
Details:
The specific instance observed is in the trimming of the -rg suffix from resource group names. The current code uses:
```
central_group_prefix = trim(data.terraform_remote_state.central_resources.outputs.central_resource_group_name, "-rg")
```
The intention is to remove the -rg suffix, but due to the behavior of trim, it also removes any individual -, r, and g characters from the ends of the string, leading to unexpected results.
For instance, a name like "osdu-pl2-crpl2-583g-rg" is trimmed to "osdu-pl2-crpl2-583" instead of the expected "osdu-pl2-crpl2-583g".
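Python happens to have the same character-set vs. literal-suffix distinction, which makes the behavior easy to reproduce: `str.strip` behaves like Terraform's `trim`, and `str.removesuffix` (Python 3.9+) like `trimsuffix`:

```python
name = "osdu-pl2-crpl2-583g-rg"

# str.strip, like Terraform's trim, removes any of the listed
# characters from both ends, not the literal suffix:
print(name.strip("-rg"))         # osdu-pl2-crpl2-583

# str.removesuffix, like Terraform's trimsuffix, removes only
# the exact trailing string:
print(name.removesuffix("-rg"))  # osdu-pl2-crpl2-583g
```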
Expected Behavior:
The -rg suffix should be removed without affecting other characters in the string.
Actual Behavior:
Characters within the -rg suffix are being removed individually if they are at the ends of the string, leading to unexpected results.
Steps to Reproduce:
Use a resource group name like "osdu-pl2-crpl2-583g-rg".
Apply the Terraform module.
Observe that resources dependent on the central_group_prefix variable have the g character missing.
Suggested Fix:
Replace the trim function with the trimsuffix function, which will only remove the exact -rg suffix:
```
central_group_prefix = trimsuffix(data.terraform_remote_state.central_resources.outputs.central_resource_group_name, "-rg")
```
This change should be applied wherever the trim function is used in a similar context.

https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/44
**Incorrect validation error message for updating LegalTags - HV000028: Unexpected exception during isValid call** (Chad Leong, 2023-11-22)

# Summary
If you are updating legal tags with an invalid request body, the error message is incorrect.

`PUT https://osdu.bm-preship.gcp.gnrg-osdu.projects.epam.com/api/legal/v1/legaltags/`
Request body:
```json
{
"name": "opendes-Test-Legal-Tag-chad-123456",
"description": "updated desc 2",
"properties": {
"contractId": "123456",
"countryOfOrigin": [
"US",
"CA"
],
"dataType": "Third Party Data",
"exportClassification": "EAR99",
"originator": "Schlumberger",
"personalData": "No Personal Data",
"securityClassification": "Private",
"expirationDate": "2023-07-31"
}
}
```
# Expected Behavior
Response
```json
{
"code": 400,
"reason": "Validation error.",
"message": "The request body is invalid."
}
```
# Actual Behavior
Response
```json
{
"code": 400,
"reason": "Validation error.",
"message": "HV000028: Unexpected exception during isValid call."
}
```
This is an example of a valid request body:
```json
{
"name": "osdu-Test-Legal-Tag-chad-123456",
"description": "Legal Tag added for Well",
"contractId": "chad-AE33334",
"expirationDate": "2100-12-21",
"extensionProperties": {
"test_attr": "chad-test"
}
}
```

https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/77
**Increase timeout for storage service requests** (Sudesh Tagadpallewar, 2024-02-01)

When registering a dataset using `/registerDataset`, some users are getting a 400 error. According to the logs, the request times out (with the error **Unexpected error sending to URL http://storage/api/storage/v2/records METHOD PUT error java.net.SocketTimeoutException: Read timed out**) when it tries to upsert the record in Storage. We have found that when the dataset service calls the storage service, the call takes more than 5 seconds, which results in a SocketTimeoutException. When creating a `StorageService` instance using `StorageFactory`, a new `HttpClient()` instance is used, which has a default timeout of 5 seconds. Instead of a new `HttpClient` instance, an `HttpClientHandler` instance should have been used, which has a 60-second timeout. This code is in the core-common library. See the attached image for reference: ![storage](/uploads/31cfbdb427cf9fc78168dc0fbc4e7f24/storage.JPG)

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/5
**Index cleanup API support** (Artem Dobrynin (EPAM), 2020-10-15)

## Change Type:
- [X] Feature
- [ ] Bugfix
- [ ] Refactoring
## Context and Scope
There is no functionality to drop obsolete and stale indices in core module.
## Decision
- Implement `cleanupIndices` endpoint in Indexer service (see https://community.opengroup.org/osdu/platform/system/indexer-service/-/merge_requests/16 as example)
- Add indexes clean-up in Storage service, when Kind was deleted.
## Rationale
This change will keep our Elasticsearch indices clean and healthy. Without it, we are forced to monitor Elasticsearch and manually delete all test and stale indices.
This also affects our performance. Because of frequent tests, a lot of indices are created and not deleted, which causes callback times to rise. With index cleanup functionality we could avoid that.
## Consequences
We should add support for this functionality in every method where there is an index/kind deletion. (Assignee: Dmitriy Rudko; milestone 2020-09-22)

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/73
**Indexer fails to correctly parse properties with special characters** (An Ngo, 2022-08-23)

For example:
```
"SpatialArea": {
"Wgs84Coordinates": {
"features": [
{
"geometry": {
"type": "Point",
"coordinates": [
2.2863,
61.198685
]
},
"properties": {
"id": "a:b"
},
"type": "Feature"
}
],
"type": "FeatureCollection"
}
}
```
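The `properties.id` value `a:b` above contains a colon, which is a reserved character in Elasticsearch query-string syntax (the Search queries elsewhere in this tracker escape kinds as `osdu\:wks\:...`). One plausible handling is to escape such values before they are embedded in a query; an illustrative sketch only, not the Indexer's actual code:

```python
# Illustrative helper, not Indexer code: escape characters that are
# reserved in Elasticsearch query_string syntax, including ':'.
ES_RESERVED = ':+-=&|><!(){}[]^"~*?\\/'

def escape_query_value(value: str) -> str:
    return "".join("\\" + ch if ch in ES_RESERVED else ch for ch in value)

print(escape_query_value("a:b"))  # a\:b
```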
The Indexer fails to parse the properties id whose value contains a colon.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/40
**Indexer, Frame of Reference, DateTime conversion** (Debasis Chatterjee, 2022-09-29)

Can you please share a working example of DateTime conversion?
Load manifest (JSON) showing actual data and also matching "meta" block for a date field.
**I tried the following in Load Manifest (AWS R3M8 Preship environment) -**
Data has this line -
` "ProjectEndDate": "2008-02-01T14:00:00.000+03:00",`
Meta block has this for DateTime conversion -
```
{
"kind": "DateTime",
"name": "UTC-ISO8601",
"persistableReference": "{\"format\":\"yyyy-MM-ddTHH:mm:ss.fffZ\",\"timeZone\":\"UTC\",\"type\":\"DTM\"}",
"propertyNames": [
"ProjectEndDate ]
}
```
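Setting the Frame of Reference machinery aside, the conversion the meta block asks for is an offset-to-UTC normalization, which can be shown directly (illustration only; the Indexer drives this from the persistableReference instead):

```python
from datetime import datetime, timezone

# Illustration only: the offset-to-UTC conversion the meta block asks for,
# done directly in Python rather than via the Indexer's FoR machinery.
value = "2008-02-01T14:00:00.000+03:00"  # ProjectEndDate from the manifest

dt = datetime.fromisoformat(value)
utc = dt.astimezone(timezone.utc)
formatted = utc.strftime("%Y-%m-%dT%H:%M:%S.") + f"{utc.microsecond // 1000:03d}Z"
print(formatted)  # 2008-02-01T11:00:00.000Z
```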
**I see this error when I query for troubleshooting (Search service).**
For now, we can simply discuss the problem of DateTime conversion.
```
{
"kind": "osdu:wks:master-data--SeismicAcquisitionSurvey:1.0.0",
"query": "id: \"osdu:master-data--SeismicAcquisitionSurvey:ST0202R08-DC-23Oct\"",
"returnedFields": [
"id",
"index"
]
}
```
Response
```
{
"results": [
{
"index": {
"trace": [
"Unit conversion: persistableReference not valid",
"Unit conversion: persistableReference not valid",
"DateTime conversion: Frame of reference does not match given data for property ProjectEndDate, no conversion applied."
],
"statusCode": 400,
"lastUpdateTime": "2021-10-23T10:13:21.496Z"
},
"id": "osdu:master-data--SeismicAcquisitionSurvey:ST0202R08-DC-23Oct"
}
],
"totalCount": 1
}
```
Thanks for your help.
cc - @gehrmann for information
cc - @jingdongsun, @anujgupta and @shamazum (since some work was done by IBM resources in this area, as far as I remember)

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/153
**Indexer is not supporting 64-bit integer value** (An Ngo, 2024-03-23)

A bug was submitted for a case where a seismic volume size did not get indexed. The [AbstractDataset](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Generated/abstract/AbstractDataset.1.0.0.json?ref_type=heads#L26) definition (and a few more [attributes](https://community.opengroup.org/search?group_id=218&nav_source=navbar&project_id=91&repository_ref=master&scope=blobs&search=convertible+to+a+long+integer+extension:json+path:Authoring&search_code=true)) states that the value must be convertible to a long integer. But it seems the Indexer only handles 32-bit integer values.
Proposal to fix:
* declare that the default schema definition for "int" is a 64-bit value, which will increase storage and processing and require a re-index on potentially all ingested data.
* create a new schema type "long int" that supports 64-bit value, update the existing schema definition for just the attributes that may exceed 32-bit size, and re-index the affected data.
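The 32-bit ceiling is easy to demonstrate: a volume size in bytes overflows a signed 32-bit field as soon as it exceeds 2^31 - 1 (about 2.1 GB), while a 64-bit ("long") field holds it comfortably. A sketch using Python's struct module, unrelated to the Indexer's implementation:

```python
import struct

size_bytes = 5 * 1024**3  # a hypothetical 5 GiB seismic volume

# A signed 32-bit field caps out at 2**31 - 1:
try:
    struct.pack(">i", size_bytes)
except struct.error as e:
    print("32-bit overflow:", e)

# A 64-bit ("long") field holds it comfortably:
print(struct.pack(">q", size_bytes).hex())  # 0000000140000000
```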
Screenshots of the error and data value:
![storage.png](/uploads/0d9a6215309c8164f647bfd7d657bcf8/storage.png)
![indexing_error.png](/uploads/5f8752f09cbb2e5c9c93595805a87600/indexing_error.png)

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/57
**Indexer not paying attention to updated DEVELOPMENT status schema** (Eric Schoen, 2022-08-26)

1. We installed a DEVELOPMENT status schema with an incorrect x-osdu-indexing hint (it was at the wrong level in the schema). We indexed some data with this incorrect schema, and the affected field was not queryable.
2. We then fixed the schema (moved only the x-osdu-indexing extensions) and reinstalled it with the PUT endpoint, but with the same version number. We confirmed by retrieving the schema that the changes had been committed to the schema service.
3. We deleted the records created with the prior version and created new records.
4. The field in question was still not queryable.
5. We installed the schema again, but bumped the SchemaVersionPatch value by 1.
6. We deleted the records created with the second version of the schema and created new records with updated "kind" values.
7. This time, the field in question was queryable.
In the past, we've been able to install updated DEVELOPMENT status schema with the same version number and the indexer would appear to take notice. Is the indexer not noticing changes limited to x-osdu-indexing extensions, and using a cached version of the prior schema content?