OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

---

OPC-UA-PROD : Add application name, application URI and certificate while sending request to server
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/12
Ashutosh Kumar, 2022-11-20
Please add a certificate, application name, and application URI to the server connection request.
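For illustration, a minimal sketch of what this could look like with the python-opcua (`opcua`) package; the endpoint URL, application URI, and certificate paths are placeholders, and whether the ingestion client uses this library is an assumption:

```
# Sketch: configure the client's application identity and certificate
# before connecting. Endpoint, URI and file paths are illustrative only.
from opcua import Client

client = Client("opc.tcp://example-server:4840")     # hypothetical endpoint
client.name = "OSDU OPC-UA Ingestion Client"         # application name
client.application_uri = "urn:osdu:opcua-ingestion"  # application URI
# Present a client certificate and sign/encrypt the channel:
client.set_security_string(
    "Basic256Sha256,SignAndEncrypt,client_cert.der,client_key.pem"
)
client.connect()
try:
    print(client.get_endpoints())
finally:
    client.disconnect()
```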
---

EDS Testing in GLAB Environment
https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/8
Priyanka Bhongade, 2022-12-05

Below are the features/issues addressed for the M15 release and tested in the GLAB environment:
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/eds-dms/-/issues/4
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/7
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/external-data-framework/-/issues/253
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/external-data-framework/-/issues/252
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/3
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/core-external-data-workflow/-/issues/2
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/external-data-framework/-/issues/254
- [x] https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/external-data-framework/-/issues/255

Milestone: M15 - Release 0.18 (Nisha Thakran, Priyanka Bhongade)

---

OPC-UA-PROD : Add time stamp along with Power values
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/13
Ashutosh Kumar, 2022-11-24

Please add a time stamp along with the power values, so that:
1: Data should be displayed as: node id, values, time
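For illustration, one possible shape for each displayed sample; the field names are assumptions, not a confirmed schema:

```
# Hypothetical sample payload pairing each value with its source timestamp.
sample = {
    "nodeId": "ns=2;s=POWER",             # placeholder node id
    "value": 1523.7,
    "timestamp": "2022-11-24T15:00:38Z",  # source timestamp, ISO 8601
}
```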
---

OPC-UA-PROD : Create a service for subscription of single node
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/14
Ashutosh Kumar, 2022-11-29

Create a separate service which, when invoked, connects to the server, subscribes to a node (POWER) and fetches all relevant values,
so that:
1: This service is a standalone service and can be invoked separately for subscription.
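A standalone subscription service could look roughly like the sketch below, assuming the python-opcua (`opcua`) package; the endpoint, node id, and publishing interval are illustrative placeholders:

```
# Sketch: standalone service that subscribes to one node (e.g. POWER)
# and prints every data change. Endpoint and node id are placeholders.
import time
from opcua import Client

class PowerHandler:
    """Called by the library on every data change of the subscribed node."""
    def datachange_notification(self, node, val, data):
        print(node, val)

client = Client("opc.tcp://example-server:4840")  # hypothetical endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=POWER")        # hypothetical node id
    sub = client.create_subscription(500, PowerHandler())  # 500 ms interval
    handle = sub.subscribe_data_change(node)
    time.sleep(60)                                # keep the service alive
    sub.unsubscribe(handle)
    sub.delete()
finally:
    client.disconnect()
```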
---

OPC-UA-PROD : Subscribe to different Nodes from client side
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/15
Ashutosh Kumar, 2022-12-02

Write a service to subscribe to different nodes (e.g. Power, Temp, Pressure) simultaneously to fetch the data.
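In python-opcua, one subscription appears to accept several nodes at once, so a sketch for this could be (node ids and endpoint are placeholders; the library choice is an assumption):

```
# Sketch: one subscription covering several nodes simultaneously.
from opcua import Client

class MultiHandler:
    def datachange_notification(self, node, val, data):
        print(node, val)

client = Client("opc.tcp://example-server:4840")  # hypothetical endpoint
client.connect()
nodes = [client.get_node(nid) for nid in
         ("ns=2;s=POWER", "ns=2;s=TEMP", "ns=2;s=PRESSURE")]  # placeholders
sub = client.create_subscription(500, MultiHandler())
handles = sub.subscribe_data_change(nodes)  # a list of nodes is accepted
```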
---

OPC-UA-PROD : Fetch the values for subscribed nodes
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/16
Ashutosh Kumar, 2022-11-29

Fetch the value, timestamp and node id of all the subscribed values, so that:
1: The value should keep changing whenever there is a change on the server side.
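A sketch of a handler that reports all three fields, plugged into the subscription sketch above (attribute names follow python-opcua as I understand it):

```
# Sketch: handler reporting node id, value and source timestamp for every
# change pushed by the server. Attribute access per python-opcua.
class ValueHandler:
    def datachange_notification(self, node, val, data):
        ts = data.monitored_item.Value.SourceTimestamp
        print(node.nodeid.to_string(), val, ts)
```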
---

osdu:wks:work-product-component--NotionalSeismicLine:1.0.0
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/86
Sacha Brants, 2023-03-24

---

Storage service stale in-memory cache leads to inconsistency
https://community.opengroup.org/osdu/platform/system/storage/-/issues/154
Nikhil Singh [Microsoft], 2023-02-15
We recently uncovered a bug in the storage service due to the local cache getting stale. The flow can be understood from the following steps.
1. A legal tag is deleted via the legal service delete API --> response 204 No Content after successful deletion.
2. A storage service API call is made at https://**********/api/storage/v2/push-handlers/legaltag-changed?token=*** --> goes to pod P1 of the storage service --> updates the records' compliance for all records associated with the tag deleted in step 1 --> removes the deleted tag from the local cache of pod P1.
3. A storage PUT call creates a record with the deleted legal tag --> goes to pod P2 of storage --> the cache still has that legal tag --> returns 201 Created.
At step 3, all calls going to pod P1 return "Invalid legal tag", but API calls landing on other pods successfully create these records.
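The root cause is that each pod keeps its own in-memory legal-tag cache, and the `legaltag-changed` push only invalidates the cache of the pod that happens to receive it. One common mitigation, sketched below, is to bound staleness with a TTL so every pod converges shortly after a deletion; this is an illustrative sketch, not the service's actual fix, and the names and validator callable are assumptions:

```
# Sketch: per-pod cache whose entries expire after a short TTL, bounding
# how long a deleted legal tag can still validate on pods that never saw
# the invalidation push. Names and the validator function are illustrative.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._entries = {}  # tag -> (inserted_at, valid)

    def get(self, tag: str):
        entry = self._entries.get(tag)
        if entry is None or time.monotonic() - entry[0] > self.ttl:
            return None            # missing or stale: caller must re-validate
        return entry[1]

    def put(self, tag: str, valid: bool):
        self._entries[tag] = (time.monotonic(), valid)

    def invalidate(self, tag: str):
        self._entries.pop(tag, None)   # called from the push handler

def is_legal_tag_valid(cache: TTLCache, tag: str, query_legal_service) -> bool:
    cached = cache.get(tag)
    if cached is not None:
        return cached
    valid = query_legal_service(tag)   # authoritative check, assumed callable
    cache.put(tag, valid)
    return valid
```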
The service ITs are failing in a transient manner due to this issue.

Milestone: M17 - Release 0.20

---

Implement dataset storage for AWS
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/89
Sacha Brants, 2023-09-27

---

Fix deployment to allow access to https://osdu-glab.msft-osdu-test.org/seistore-svc/api/v4/swagger-ui.html
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/90
Sacha Brants, 2023-06-15

---

The v3 to v4 sync process needs to be implemented for all models
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/91
Sacha Brants, 2023-09-20

---

OPC-UA-PROD : Pass node ids from post service and get values
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/17
Ashutosh Kumar, 2022-12-02

Write a POST service which accepts a dynamic number of node ids in the request body and gets the relevant values from the server.
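A sketch of such an endpoint, assuming Flask and the python-opcua client; the route, payload shape, and OPC endpoint URL are all illustrative assumptions:

```
# Sketch: POST endpoint that reads any number of node ids from the request
# body and returns their current values. Flask and python-opcua are assumed.
from flask import Flask, jsonify, request
from opcua import Client

app = Flask(__name__)
OPC_ENDPOINT = "opc.tcp://example-server:4840"  # hypothetical endpoint

@app.route("/values", methods=["POST"])
def read_values():
    node_ids = request.get_json().get("nodeIds", [])  # assumed payload shape
    client = Client(OPC_ENDPOINT)
    client.connect()
    try:
        results = [{"nodeId": nid, "value": client.get_node(nid).get_value()}
                   for nid in node_ids]
    finally:
        client.disconnect()
    return jsonify(results)
```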
---

Deployment instructions have some issues
https://community.opengroup.org/osdu/platform/deployment-and-operations/terraform-deployment-aws/-/issues/1
Daryl Gunn, 2022-12-22

I've been trying to deploy the OSDU platform to AWS (main and release 0.17) but had some challenges:
- `terraform validate` shows a couple of undefined-provider warnings and some deprecated-argument warnings (different numbers of each on main and 0.17). I'm not familiar enough with Terraform to know whether these have consequences.
- There's a typo: the command `terraform deploy -var-file=../../<shared-resource-prefix>.tfvars.json -auto-approve` should be `terraform apply`.
- There's a missing folder and file: the instructions reference `python3 ./devops/scripts/create-helm-values-file.py <resource-prefix> --region=<deploy-region>`, but I can't find that folder or file.
- The Helm chart command is described as `helm repo add core $HELM_REPO_PATH/core`, but no HELM_REPO_PATH parameter is set in the instructions. I assumed it was s3://osdu-artifacts but get access denied when trying to get charts from s3://osdu-artifacts/core. Although the instructions state the bucket is public, it cannot be accessed and its contents cannot be listed.

---

Byte swap bug in Ibm2ieee
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/162
Artem В, 2023-06-20

We found that the `Ibm2ieee` implementation does not take file and host endianness into account.
For example:
```
template<SEGY::Endianness ENDIANNESS, SEGY::BinaryHeader::DataSampleFormatCode FORMAT>
void copySamples(float * prTarget, const unsigned char * puSource, int iSampleMin, int iSampleMax)
{
  for (int iSample = iSampleMin; iSample < iSampleMax; iSample++)
  {
    int nValue;
    // Assemble the 32-bit sample according to the file's declared endianness
    // instead of assuming a fixed byte order.
    if (ENDIANNESS == SEGY::Endianness::BigEndian)
    {
      nValue = (int)(puSource[iSample * 4 + 0] << 24 | puSource[iSample * 4 + 1] << 16 | puSource[iSample * 4 + 2] << 8 | puSource[iSample * 4 + 3]);
    }
    else
    {
      nValue = (int)(puSource[iSample * 4 + 3] << 24 | puSource[iSample * 4 + 2] << 16 | puSource[iSample * 4 + 1] << 8 | puSource[iSample * 4 + 0]);
    }
    // nValue now holds the sample bits in host order; convert to IEEE float
    // (e.g. via Ibm2ieee for IBMFloat data) and store into prTarget here.
  }
}
```
This uses native conversion with the real file endianness.
The original `Ibm2ieee` from SEGY.cpp always swaps bytes:
```
template<SEGY::Endianness ENDIANNESS, SEGY::BinaryHeader::DataSampleFormatCode FORMAT>
void copySamples(float * prTarget, const unsigned char * puSource, int iSampleMin, int iSampleMax)
{
if (FORMAT == SEGY::BinaryHeader::DataSampleFormatCode::IBMFloat)
{
SEGY::Ibm2ieee(prTarget, puSource + iSampleMin * 4, iSampleMax - iSampleMin);
return;
}
}
void ibm2ieee(void * to, const void * from, size_t len)
{
...
// Unconditional byte swap, regardless of the file's actual endianness:
#ifdef WIN32
fr = _byteswap_ulong(fr);
#else
fr = __builtin_bswap32(fr);
#endif // WIN32
...
}
```
The real file and host endianness need to be checked before swapping bytes.
---
This produces wrong trace data for little-endian SEG-Y files with the IBMFloat data format.

---

LegalTag name field max length is too small
https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/33
Dadong Zhou, 2023-03-03

Shell is currently migrating the SDU E&O contracts to OSDU LegalTags. We use long human-readable names as the ids, and the names are much longer than the current LegalTag name field max length (100). Is it possible to increase the name field max length to something like 400? Thanks.

---

Multi threaded processing of JSON records
https://community.opengroup.org/osdu/platform/system/reference/schema-upgrade/-/issues/1
Vikas Hoode [BP] (vikas.hoode@bp.com), 2023-03-09
Milestone: M16 - Release 0.19 (2022-12-16)

JSON migration is currently a sequential process. To speed up execution, we need a multi-threaded solution.
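A minimal sketch of the direction using the standard-library `concurrent.futures`; the `migrate_record` body and the worker count are assumptions, standing in for the real per-record migration logic:

```
# Sketch: process JSON records concurrently instead of sequentially.
from concurrent.futures import ThreadPoolExecutor

def migrate_record(record: dict) -> dict:
    # Placeholder for the real schema-upgrade transformation of one record.
    return record

def migrate_all(records: list, workers: int = 8) -> list:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(migrate_record, records))
```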
---

Roll back feature for JSON upgrade
https://community.opengroup.org/osdu/platform/system/reference/schema-upgrade/-/issues/2
Vikas Hoode [BP] (vikas.hoode@bp.com), 2023-03-09
Milestone: M16 - Release 0.19 (2022-12-19)

We are required to design a rollback plan for the JSON migration.

---

ADR: Recover a soft deleted record in storage
https://community.opengroup.org/osdu/platform/system/storage/-/issues/156
Abhishek Nanda, 2023-09-11

Ability to recover a soft deleted record in the storage service.
# Decision Title
## Status
- [X] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
The storage service provides 2 ways to delete a record. One is to logically delete it: a record with the same id can be revived later because its version history is maintained. The other is to purge it, in which case the record's version history is deleted too. With both types of deletion, the record cannot be accessed using the storage or search service.
Today there is no easy way to query or recover soft-deleted records. Providing admin-only APIs will help admins search, view and recover soft-deleted data if required.
# Tradeoff Analysis - Input to decision
Today users have to keep track of soft-deleted record IDs on their own. Below is the workaround available today to attempt recovery of such records (a code sketch follows the list):
1. Recreate the record with existing id and random/empty data and meta blocks. This will mark the record as active.
2. Fetch all versions of the record.
3. Fetch the latest version prior to the one just created to get back the actual record data and meta blocks.
4. Recreate the record using the response to create a new version of the record with the appropriate data.
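A sketch of that workaround over HTTP; the host, token and partition are placeholders, and the endpoint paths and response shapes assume the v2 storage API as I understand it:

```
# Sketch of the manual recovery workaround described above. Host, token
# and partition are placeholders; paths assume the v2 storage API.
import requests

BASE = "https://example-host/api/storage/v2"
HEADERS = {"Authorization": "Bearer <token>", "data-partition-id": "<partition>"}

def recover(record_id: str, kind: str, acl: dict, legal: dict):
    # 1. Recreate the record with empty data/meta blocks to mark it active.
    stub = {"id": record_id, "kind": kind, "acl": acl, "legal": legal, "data": {}}
    requests.put(f"{BASE}/records", json=[stub], headers=HEADERS)
    # 2. Fetch all versions of the record (assumed ascending order).
    versions = requests.get(f"{BASE}/records/versions/{record_id}",
                            headers=HEADERS).json()["versions"]
    # 3. Fetch the latest version prior to the stub just created.
    prior = requests.get(f"{BASE}/records/{record_id}/{versions[-2]}",
                         headers=HEADERS).json()
    # 4. Recreate the record from that payload to restore the real data.
    requests.put(f"{BASE}/records", json=[prior], headers=HEADERS)
```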
## Decision
Create 3 new APIs, as described below:
1. Fetch deleted records (accessible to _users.datalake.admins_) -> This will fetch a list of records. Since the list can be very long, it should return a maximum of 100 records and support from/to deletion-date filters along with pagination.
![image](/uploads/ca34cf94f3184fba05d2ade6bb502a90/image.png)
2. Recover deleted records by id (accessible to _users.datalake.admins_) -> This will take a list of record ids (max 500) that are to be recovered and return the list of record ids that succeeded as well as failed.
![image](/uploads/ae448c5fb9ed5803101aeba51a4fd7b4/image.png)
3. Recover deleted records by metadata filters (currently only fromDeletedDate and toDeletedDate are supported) (accessible to _users.datalake.admins_) -> This will take filter criteria for the records to be recovered and return the lists of record ids that succeeded and failed.
![image](/uploads/2b1d373eed8513e166fba784be4b3250/image.png)
## Consequences
1. This will help users to bulk recover deleted records in a single go.
2. The APIs will help prevent having garbage record versions that had to be created just to make the record active.
3. This will help users to fetch a list of soft deleted records which was not possible earlier.
OpenAPI spec for the service:
[storage-recover-swagger.yaml](/uploads/396cc62881dfe5f075f0e987f0313472/storage-recover-swagger.yaml)

---

Misleading log statements
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/150
Maksim Malkov, 2022-12-12
Milestone: M16 - Release 0.19

The workflow service searches for a triggered workflow first in the provided data partition. A system workflow like CSV would not be available in the data partition, and in such cases the service logs "workflow not found".
Next, the same workflow is searched for in the system DB, where it is found, and processing completes.
But these logs create the impression that some workflow was not found by the workflow service, when in fact there is no such issue.
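Roughly, the behavior being described is the sketch below; the function names and in-memory stores are illustrative stand-ins, not the service's actual code:

```
# Sketch of the lookup order that produces the misleading log: a partition
# miss is logged as "workflow not found" even though the system DB lookup
# right afterwards succeeds. Stores and names are illustrative.
import logging

log = logging.getLogger("workflow-service")

PARTITION_WORKFLOWS = {}                                  # per-partition store
SYSTEM_WORKFLOWS = {"csv-ingestion": {"name": "csv-ingestion"}}

def find_workflow(name: str):
    wf = PARTITION_WORKFLOWS.get(name)
    if wf is None:
        # Misleading today: logged before the system DB has been checked.
        log.info("workflow not found: %s", name)
        wf = SYSTEM_WORKFLOWS.get(name)  # succeeds for system workflows
    return wf
```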
---

Implications associated with Python (pickle) module
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-zgy/-/issues/25
Jayesh Bagul, 2022-12-30
I am working on a vulnerability issue of the pickle module in open-zgy:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-zgy/-/security/vulnerabilities/18266
The pickle library's documentation discourages unpickling untrusted data. Currently, deserialization happens with a simple, unprotected approach.
There are multiple possible approaches to prevent unsafe deserialization:
1) Implementing a message authentication code (MAC) to ensure the data integrity of the payload. (hmac and hashlib)
2) Run the deserialization code with limited access permissions.
3) Validate Inputs.
I would like to hear which approach would suit best, as well as the compatibility implications for everything existing.
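For reference, a minimal sketch of option 1, an HMAC over the pickled payload using `hmac` and `hashlib`; the key shown is a placeholder, and where open-zgy would manage a real key is out of scope here:

```
# Sketch: sign pickled payloads with HMAC-SHA256 and verify before loading,
# so tampered or foreign payloads are rejected instead of being unpickled.
# SECRET_KEY is a placeholder; real key management is out of scope.
import hashlib
import hmac
import pickle

SECRET_KEY = b"replace-with-managed-secret"

def dumps_signed(obj) -> bytes:
    payload = pickle.dumps(obj)
    digest = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return digest + payload

def loads_verified(blob: bytes):
    digest, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("payload failed HMAC verification; refusing to unpickle")
    return pickle.loads(payload)
```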
CC: @Srinivasan_Narayanan @nursheikh @chad