OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues), retrieved 2022-11-29T08:25:53Z

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/16
OPC-UA-PROD: Fetch the values for subscribed nodes (Ashutosh Kumar, 2022-11-29)
Fetch the value, timestamp, and node id of all the subscribed nodes, so that:
1. The value keeps changing whenever there is a change on the server side.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/15
OPC-UA-PROD: Subscribe to different Nodes from client side (Ashutosh Kumar, 2022-12-02)
Write a service that subscribes to different nodes (e.g. Power, Temp, Pressure) simultaneously to fetch the data.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/14
OPC-UA-PROD: Create a service for subscription of single node (Ashutosh Kumar, 2022-11-29)
Create a separate service which, when invoked, connects to the server, subscribes to a node (POWER), and fetches all relevant values,
so that:
1. This service is standalone and can be invoked separately for subscription.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/13
OPC-UA-PROD: Add time stamp along with Power values (Ashutosh Kumar, 2022-11-24)
Please add a timestamp along with the power values, so that:
1. Data should be displayed as: node id, values, time.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/12
OPC-UA-PROD: Add application name, application uri and certificate while sending request to server (Ashutosh Kumar, 2022-11-20)
Please add the certificate, application name, and application URI while sending the connection request to the server.
https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/11
OPC-UA-PROD: SPIKE - Research way to subscribe to different node from client to server (Ashutosh Kumar, 2022-11-20)
Research a way to subscribe to different nodes in an OPC UA server.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/10
OPC-UA-PROD: Create security directory for storing client certificate (Ashutosh Kumar, 2022-11-20)
Create temp -> client -> security for storing the .pki certificate for connection validation.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/9
OPC-UA-PROD: Create a single instance/class for OPC UA server connectivity (Ashutosh Kumar, 2022-11-20)
Create a single instance for server connectivity, so that:
1. Every REST API call doesn't need to create a separate instance for server connectivity.
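A single shared connection instance, as requested, is essentially a singleton. A minimal, self-contained sketch (the `OpcUaConnection` name and `connect()` behavior are illustrative assumptions, not the actual service code):

```python
# Hypothetical sketch: one shared server-connection object, so each REST
# call reuses the same connection instead of creating its own.
class OpcUaConnection:
    _instance = None

    def __init__(self):
        self.connected = False

    @classmethod
    def instance(cls):
        # Create and connect exactly once; afterwards, always return the
        # same object.
        if cls._instance is None:
            cls._instance = cls()
            cls._instance.connect()
        return cls._instance

    def connect(self):
        # Real code would open the OPC UA session here.
        self.connected = True

# Every "request handler" gets the same connected instance:
a = OpcUaConnection.instance()
b = OpcUaConnection.instance()
print(a is b, a.connected)  # True True
```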
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/104
Search service failing to parse query when using value containing "OR" (Madinabonu Alisherova, 2022-11-21)
[Background]
There was a problem with query parsing in the Search API query builder. When a [POST] query request is sent to the `/query` endpoint of the `Search Service` with the query below in the request payload, it returns a 400 error. The problem occurred while parsing this query: a string containing "OR" or "AND" was treated during compilation as an operator rather than as part of the string.
```json
{
  "kind": "osdu:wks:reference-data--UnitOfMeasure:*",
  "limit": 30,
  "query": "data.Code.keyword:\"GOR\" OR (nested(data.NameAlias, (AliasName.keyword:(\"FOO\"))))",
  "returnedFields": [
    "id"
  ],
  "queryAsOwner": false,
  "offset": 0
}
```
This will cause the search service to fail with:
```
{"code":400,"reason":"Bad Request","message":"token_mgr_error: Lexical error at line 1, column 21. Encountered: <EOF> after : \"\\\"G\""}
```
Values that do not contain "OR" or "AND" (such as "GIF", "GAD", etc.) work fine, while strings that do contain them (such as "GOR" or "GAND") do not. The logic problem was in the `parseQueryNodesFromQueryString` method of the `QueryParserUtil` class, in how the patterns for "OR" and "AND" were compiled and matched.
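The ambiguity can be reproduced and resolved with a boundary-aware pattern. A standalone sketch using Python's `re` (the service itself is Java, and the exact boundary classes here are illustrative):

```python
import re

# Match OR/AND only when delimited: preceded by whitespace or ')' and
# followed by whitespace or '('. The OR inside the quoted term "GOR" then
# no longer matches.
OPERATOR = re.compile(r'[\s)](OR|AND)[\s(]')

query = 'data.Code.keyword:"GOR" OR (nested(data.NameAlias, (AliasName.keyword:("FOO"))))'

# Naive matching also finds the "OR" inside "GOR":
naive = [m.start() for m in re.finditer(r'OR|AND', query)]

# Boundary-aware matching finds only the standalone operator; 1 is added to
# the match start to skip the leading boundary character.
ops = [m.start() + 1 for m in OPERATOR.finditer(query)]
print(naive)  # [20, 24] -> includes the OR inside "GOR"
print(ops)    # [24]     -> only the real operator
```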
To solve the issue, the regex `[\s)]` was added before and after "OR" and "AND" during pattern compilation, requiring whitespace or a bracket on both sides, so that the pattern only matches an "OR" or "AND" that appears as an operator in the sent request query. And when recording orPositions or andPositions, 1 must be added to the matcher's start position, to compensate for the boundary character and make the operator position comparable with the query character position.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/160
Extend documentation of examples and supply sample files (Alexander Jaust, 2022-11-18)
It is great that there are some examples of how to use OpenVDS. It would be even more helpful if there were a short explanation of what each example does and what assumptions are made, and, if necessary, if input files were included. I think that would make life much easier for beginners.
For example, the [npz_to_vds.py](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/examples/NpzToVds/npz_to_vds.py) script is really helpful, but it would be nice if the input file were provided with the script. It is not immediately clear what assumptions about the file are made. When I first looked into the file, I had the following questions:
- Why is there a `--npy` command line parameter that is never used?
- Why do the axis descriptors seem to expect x, y and z to be in a certain range [0,2000]?
- Why is the value range computed in the way it is computed? What is the "correct" way to give a value range? Should it be certain percentiles?
- Where is the input file or how can I create a valid input file myself?
- How does writing data via the page accessor actually work and where do I find more information about that?
- Do I have to use `open` and `close` for interaction with the VDS file, or could I also use:
```python
...
with openvds.create(
    args.url,
    args.connection,
    layoutDescriptor,
    axisDescriptors,
    channelDescriptors,
    metaData,
    compressionMethod,
    compressionTolerance
) as vds:
    layout = openvds.getLayout(vds)
    ...
    accessor.commit()
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/159
Online Python documentation incomplete (Alexander Jaust, 2022-11-21)
It seems that the online documentation of the Python interface is incomplete. I did not go through everything, but I found the following inconsistencies and problems.
Would it be possible to expose more documentation on the homepage? I am not sure if some documentation is simply missing or not generated on purpose. In all cases that I have checked, the classes and methods have documentation accessible via Python's `help(...)` function.
## Observations
### VolumeDataLayoutDescriptor
I do not find anything about the constructors of `class VolumeDataLayoutDescriptor` in the [online documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#openvds.VolumeDataLayoutDescriptor). When I query the documentation via Python I get much more information including the constructor.
Running the following in a Python session
```text
import openvds
help(openvds.VolumeDataLayoutDescriptor)
```
gives the following (shortened) output:
```text
Help on class VolumeDataLayoutDescriptor in module openvds.core:
class VolumeDataLayoutDescriptor(pybind11_builtins.pybind11_object)
| Method resolution order:
| VolumeDataLayoutDescriptor
| pybind11_builtins.pybind11_object
| builtins.object
|
| Methods defined here:
|
| __init__(...)
| __init__(*args, **kwargs)
| Overloaded function.
|
| 1. __init__(self: openvds.core.VolumeDataLayoutDescriptor) -> None
|
| 2. __init__(self: openvds.core.VolumeDataLayoutDescriptor, brickSize: OpenVDS::VolumeDataLayoutDescriptor::BrickSize, negativeMargin: int, positiveMargin: int, brickSize2DMultiplier: int, lodLevels: OpenVDS::VolumeDataLayoutDescriptor::LODLevels, options: OpenVDS::VolumeDataLayoutDescriptor::Options, fullResolutionDimension: int = 0) -> None
|
...
```
As you can see, there is some information on the constructor.
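For reference, the text that `help(...)` shows can also be captured programmatically with the stdlib `pydoc` module, which is one way to check what an online docs build should be able to pick up (shown here on a stdlib class as a stand-in for `openvds.VolumeDataLayoutDescriptor`):

```python
import pydoc

# Render the same plain-text documentation that help(dict) prints; the
# constructor documentation (__init__) is part of that rendered text.
doc = pydoc.render_doc(dict, renderer=pydoc.plaintext)
print("__init__" in doc)  # True
```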
### VolumeDataRequest
There are (at least?) two types of `VolumeDataRequest`, but only one is documented in the [online documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#openvds.VolumeDataRequest).
The documentation explains the usage of an object of type `openvds.core.VolumeDataRequest`, see
```text
>>> import openvds
>>> data_request = openvds.VolumeDataRequest
>>> print(data_request)
<class 'openvds.core.VolumeDataRequest'>
>>> buffer = data_request.buffer
>>> data = data_request.data
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'openvds.core.VolumeDataRequest' has no attribute 'data'
```
This class does not have any attribute called `data`, so the error is correct.
However, a call to the `requestVolumeSubset` function of a `VolumeDataAccessManager` returns an object of type `openvds.volumedataaccess.VolumeDataRequest` with a different interface, which has a property called `data` instead of `buffer`.
```text
>>> import openvds
>>> data_request = openvds.volumedataaccess.VolumeDataRequest
>>> print(data_request)
<class 'openvds.volumedataaccess.VolumeDataRequest'>
>>> buffer = data_request.buffer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'VolumeDataRequest' has no attribute 'buffer'
>>> data = data_request.data
```
This class does not have a `buffer` attribute, so the error message is fine. However, it is not clear from the documentation of [`VolumeDataAccessManager`](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#id59) that the return type is not an `openvds.core.VolumeDataRequest`.
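Until the documentation clarifies which `VolumeDataRequest` a call returns, caller code can stay agnostic about the type. A minimal sketch with hypothetical stand-in classes (not the real openvds types):

```python
# Hypothetical stand-ins for the two VolumeDataRequest types described
# above: the low-level core class exposes `buffer`, the high-level wrapper
# exposes `data`. A small helper accepts either.
class CoreRequest:          # stand-in for openvds.core.VolumeDataRequest
    buffer = b"raw bytes"

class WrapperRequest:       # stand-in for openvds.volumedataaccess.VolumeDataRequest
    data = b"typed array"

def payload(request):
    # Prefer the wrapper's `data` property, fall back to `buffer`.
    return request.data if hasattr(request, "data") else request.buffer

print(payload(CoreRequest()))     # b'raw bytes'
print(payload(WrapperRequest()))  # b'typed array'
```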
### openvds.IJKCoordinateTransformer
There is no documentation of the `IJKCoordinateTransformer` for Python, even though it is accessible from Python:
```text
>>> transformer = openvds.IJKCoordinateTransformer
>>> print(transformer)
<class 'openvds.core.IJKCoordinateTransformer'>
```
### VolumeDataAccessManager
The type annotation for the constructor's `handle` parameter seems to be off: it is `int`, but it should be `openvds.core.VDS`.
```text
>>> vdam = openvds.VolumeDataAccessManager
>>> print(vdam)
<class 'openvds.volumedataaccess.VolumeDataAccessManager'>
>>> vdam = openvds.VolumeDataAccessManager(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/aej/software/openvds-3.0.3-install-python/python/openvds/volumedataaccess.py", line 184, in __init__
self._manager = openvds.core.getAccessManager(handle)
TypeError: getAccessManager(): incompatible function arguments. The following argument types are supported:
1. (handle: openvds.core.VDS) -> OpenVDS::VolumeDataAccessManager
Invoked with: 1
```
Update: In the last case, supplying an object of type `openvds.core.VDS` also fails, so I assume the documentation might be off here, or I misunderstand something.

https://community.opengroup.org/osdu/platform/system/file/-/issues/81
Test cases commented FileFlowTest.java - Core module (Pramesh Patil, 2022-11-22)
All test cases of FileFlowTest.java have been commented out, and in the recent M14 someone added an annotation to suppress this, i.e.
`@SuppressWarnings("java:S2187") // there is no test cases in this class at present`
Is there any specific reason for the comments in the FileFlowTest class?

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/119
If abstract reference schema is DEVELOPMENT then kind schema cannot be set to PUBLISHED (Neelesh Thakur, 2022-11-22)
If a Kind schema uses an abstract reference schema (via $ref) and that abstract reference schema's status is `DEVELOPMENT`, then the Schema service shouldn't allow changing the status of the Kind schema to `PUBLISHED`.
This can lead to a breaking change in the Kind schema, as the abstract reference schema can still be changed and breaking changes are allowed on `DEVELOPMENT`.

https://community.opengroup.org/osdu/platform/system/file/-/issues/80
ADR: Leverage File Service for Storage Operations (Elizabeth Halper, 2023-07-05)
# Introduction
## Status
- [x] Initiated
- [x] Proposed
- [x] Under Review
- [ ] Approved
- [ ] Rejected
## Decision
All DMS's will leverage the File Service as a layer between the DMS and storage. Additionally, the File Service will provide methods through the exposed service interface to move data on different storage tiers for all DMS's in the most cost-effective manner based on how it is being used. File Service will provide an abstraction over all storage actions, including calls to the partition service. Therefore, this will not need to be implemented in services that don't have that functionality yet. All DDMS's will use File Service for the storage of binary data, and other services will be able to leverage File Service as well.
This decision is a proposed solution to the rejection of [this ADR](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/39)
![DataStorageFlow](/uploads/92dbaeae4a51fa893e59e8ff2cb7934c/DataStorageFlow.png)
### Add File Service Endpoints
Provide a new utility endpoint to retrieve the list of supported storage tiers. We will need to discuss how to implement an endpoint that returns the list of supported storage tiers. Additionally, other functionality, such as the finer-grained access required for SDMS storage procedures, will need to be included in File Service.
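As a rough illustration only, the proposed utility endpoint might return a payload like the following; the tier names, field names, and route are assumptions, not part of this ADR:

```python
import json

# Hypothetical supported tiers; real values would come from the cloud
# provider configuration behind the partition abstraction.
SUPPORTED_TIERS = ["hot", "cool", "cold", "archive"]

def list_storage_tiers() -> str:
    """Handler body for a hypothetical GET /storage-tiers endpoint."""
    return json.dumps({"storageTiers": SUPPORTED_TIERS})

print(list_storage_tiers())
```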
### Refactor DMS Dataset Requests
Instead of directly loading the Cosmos client library into each DMS, we will send the REST requests above to the File Service to add datasets to the database.
## Rationale
We want all capabilities regarding storage to be available for all DMS's with the smallest amount of variation possible. Additionally, by implementing these features once in one service (File Service), the community will save a lot of time because other services will not need to change when storage features change. The example we use above is storage tiers. Although this requires an initial investment in refactoring services to leverage File Service, we will ultimately be able to implement storage tiers for all DMS's without much change to the services themselves.
## Consequences
We will need to:
- Add this additional functionality to the File Service
- "Onboard" services to using the File Service for all their storage actions
- Refactor all services to make REST requests to File Service as opposed to directly using the library
- We would need to enforce uniformity of requests, given that different services will be adding storage tiers to their models
These tasks take a lot of time and effort, as well as collaboration across many parties. We will need all CSV's and ISV's to support this motion and contribute to ensuring all services are compliant with this decision.

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/190
Data - Display Horizon/Interpretation extent polygons on map (Levi Remington, 2023-07-06)
As a GCZ user, I want Horizon/Interpretation extent polygon data to be supported in GCZ, so that I can display it on a map.
- Create a web service to display in a map
Acceptance Criteria:
- Horizon/Interpretation extent polygons displayed on a map

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/186
Deployment - Separate repos for Provider and Transformer (Joel Romero, 2022-11-16)
Suggest separating the repos for Provider and Transformer in order to have separate CI/CD pipelines for each. This will allow users to deploy each separately and save resources. It will require discussion with the OSDU code tagging team.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/osdu-ingestion-lib/-/issues/7
Validation issue in OSDU_Ingest (Jeyakumar Devarajulu, 2023-01-04)
We have created a dataset in a data provider (an external OSDU system). When we do manifest ingestion and refer to the external dataset id in the SourceRecordID attribute of a connectedsource.generic record in an operator system (the current OSDU system), it throws a validation error on SourceRecordID (screenshot and manifest attached for reference). We haven't added any pattern, but it still throws a validation issue.
ConnectedSource.Generic - SourceRecordID attribute
![image](/uploads/923e721856a96cdead3b4b8f94a42e39/image.png)
It throws an error for the dataset below:
['osdu:dataset--ConnectedSource.Generic:6d0a547cf0153202feeec449816f5ad26df21317ed48260a31ffb56be5cc1090:']
![XOMSummary-Dataset-failure](/uploads/20e6aee6f42dd7a8ec370e58a07452eb/XOMSummary-Dataset-failure.png)
Manifest used for ingestion
[connectedsource_generic.json](/uploads/f56c08d484980dcbe4b243342289afc5/connectedsource_generic.json)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/external-data-sources/eds-dms/-/issues/4
EDS DMS - dataset registry id during retrieval service without Schema Authority (Priyanka Bhongade, 2023-03-01)
The Dataset Retrieval Service should accept ids without a Schema Authority.
It currently works with the example ID below:
'osdu:dataset--File.Generic:6d0a547cf0153202feeec449816f5ad26df21317ed48260a31ffb56be5cc1090'
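The change requested below amounts to tolerating an optional authority prefix on the id. A hypothetical normalization helper (the separator logic is an illustrative assumption, not EDS DMS code):

```python
# Hypothetical helper: normalize a dataset registry id so a retrieval
# service can accept it with or without the leading schema authority
# (e.g. "osdu:").
def strip_authority(record_id: str) -> str:
    """Return the id without its schema-authority prefix, if present."""
    prefix, sep, rest = record_id.partition(":")
    # An authority-qualified id looks like "<authority>:dataset--...:<hash>";
    # without the authority it starts directly with "dataset--".
    if sep and not prefix.startswith("dataset--"):
        return rest
    return record_id

full = "osdu:dataset--File.Generic:6d0a547cf0153202feeec449816f5ad26df21317ed48260a31ffb56be5cc1090"
bare = "dataset--File.Generic:6d0a547cf0153202feeec449816f5ad26df21317ed48260a31ffb56be5cc1090"
print(strip_authority(full) == strip_authority(bare))  # True
```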
Changes are to be made so that it also consumes the value below:
'dataset--File.Generic:6d0a547cf0153202feeec449816f5ad26df21317ed48260a31ffb56be5cc1090'

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/157
Behavior for files with irregular inlines/crosslines (Alena Chaikouskaya, 2022-11-15)
_(sorry, I will try to divide the issue into parts, as for some reason it constantly gets marked as spam)_
While playing with OpenVDS, we accidentally created synthetic SEG-Y files which are imported into VDS incorrectly (the roundtrip breaks).
We are not sure how likely files like that can appear in reality, but our domain knowledge source tells us that it is possible in theory.
1. File [broken1.segy](/uploads/1e35d0885342952d3f3c5ec69273e02c/broken1.segy) with ilines `[1, 6, 11, 15]`
(Stride is 5, last element is at distance 4)
2. File [broken2.segy](/uploads/f2e69b64faf656cdb771ac2215264221/broken2.segy) with ilines `[1, 6, 11, 16, 21, 26, 27]`
(Stride is 5, last element is at distance 1)
Some rows just get lost, never to be found in vds.
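The symptom can be mimicked without OpenVDS: if an importer infers a fixed stride and maps each inline to a grid slot, off-stride values collide. A toy sketch (this is not OpenVDS's actual algorithm):

```python
# Map inline numbers onto grid slots assuming a fixed stride. With the
# irregular last spacing from broken1.segy, two inlines land in the same
# slot, so one row is lost.
def grid_slots(ilines, stride):
    first = ilines[0]
    return [(v - first) // stride for v in ilines]

broken1 = [1, 6, 11, 15]  # stride 5, but the last element is at distance 4
print(grid_slots(broken1, 5))  # [0, 1, 2, 2] -> inlines 11 and 15 collide
```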
The main observation is that, because OpenVDS deliberately ignores the distance between the two last values for some reason, we need to supply a different distance to the last element to reproduce this behavior. How short or long the distance to the last element is might also matter, since changing [this](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/tools/SEGYImport/SEGYImport.cpp#L2185) suspicious piece of code (it seems that `a + (a-b)%c - b` is not divisible by `c`) fixed only one of those cases for me.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/80
Return successfully created IDs from airflow when checking status of runId (Zachary Keirn, 2022-11-07)
Part of the testing is to validate that the records created in the workflow exist and are correct. Currently, in order to do this, one has to go to the Airflow console and copy/paste the successfully created IDs. It would be helpful if the workflowRun API returned the successfully created IDs, or if there were another API that could do this based on the runId.
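To illustrate what the requested enhancement could look like, here is a hypothetical status response carrying the created ids; all field names and example ids are assumptions, not the actual Workflow API:

```python
import json

# Hypothetical shape of the enhancement: the run-status response includes
# the successfully created record ids, so callers don't have to copy them
# from the Airflow console.
status_response = json.loads("""
{
  "runId": "run-123",
  "status": "finished",
  "createdRecordIds": [
    "opendes:wellbore--x:1",
    "opendes:wellbore--x:2"
  ]
}
""")

def created_ids(response: dict) -> list:
    # Only a finished run has a final list of created records.
    if response.get("status") != "finished":
        return []
    return response.get("createdRecordIds", [])

print(created_ids(status_response))
```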
How short/long is the distance of the last element might also matter as changing [this](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/tools/SEGYImport/SEGYImport.cpp#L2185) suspicious piece of code (as it seems that `a + (a-b)%c - b` is not divisible by `c`) fixed only one of those cases for me.https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/80Return successfully created IDs from airflow when checking status of runId2022-11-07T16:59:52ZZachary KeirnReturn successfully created IDs from airflow when checking status of runIdPart of the testing is to validate that the records created in the workflow exist and are correct. Currently, in order to do this, one has to go to the airflow console and copy/paste the successfully created IDs. It would be helpful if t...Part of the testing is to validate that the records created in the workflow exist and are correct. Currently, in order to do this, one has to go to the airflow console and copy/paste the successfully created IDs. It would be helpful if the worflowRun API would return the successfully created IDs or if there was another API that would do this based on the runID.