OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

https://community.opengroup.org/osdu/platform/data-flow/real-time/streams/stream-admin-service/-/issues/11
Implement DeploymentAdminService (Dmitry Kniazev, 2021-10-29)

Implement Java SpringBoot service `org.opengroup.osdu.streaming.service.DeploymentAdminService.java`, used by StreamingAdminService to perform Kubernetes deployment operations using the [kubernetes java client](https://github.com/kubernetes-client/java/wiki/3.-Code-Examples):
- [x] createDeployment method should create a new deployment from the YAML/JSON deployment definition provided as an argument, set replicas to 0, and return
- [x] deleteDeployment method should delete the deployment using deployment id provided as an argument
- [x] startDeployment method should set the replicas of the deployment to 1 (or more), ensure the pods have started and return
- [x] stopDeployment method should set the replicas of the deployment to 0, ensure the pods have stopped and return
- [x] tests for every method above

(Assignee: Stephen Nimmo)

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/80
Issue: Unable to query Geometry objects using Nodejs thin client for Ignite (Joel Romero, 2022-06-22)

Dependency for #74 and #76.
Investigate the following options:
- Continue to investigate using NodeJS client - Levi / Bryan
- Build NodeJS wrapper for Java Ignite client - Philip
- Use Ignite native REST API instead of NodeJS client - All
Acceptance Criteria:
- Recommend and decide which client is viable to move forward with

(Milestone: GCZ Sprint 5; Assignees: Bryan Gunter, Levi Remington)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/31
[BUG] Bad Request Body (Missing source) Should Return 400 Bad Request (aliuddin abd rauf, 2024-03-13)

This affects the batch endpoints (latest and normal): when the `source` field is missing from the request body, the service does not return the correct error code, which should be 400 since `source` is a required field. When any other required field is missing, the service does return 400, although the response body still does not follow the format defined in &9.
![image.png](/uploads/17f7cdda0fadd907706af292913a2b2c/image.png)

(Milestone: PDMS MVP1, phase2)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/30
[BUG] Wrong Response Code If Path Value Not Match (aliuddin abd rauf, 2024-03-11)

Testing with valid values for entityKey, propertyDescriptor, and source, but a combination of the three that has never been paired for timeseries ingestion, returns the error message `Failed to get a Stream Mapping from PDS`, as shown below:
![image.png](/uploads/79f12d9cc5a054dde6267417e2701e22/image.png){width=771 height=320}
The error schema is not correct and needs to follow &9. The status code should also be 404 Not Found rather than 400, since the request uses valid values and there is simply no record for them. Please also give input on a more suitable error code for this, if any.

(Milestone: PDMS MVP1, phase2; Assignee: Danh Nguyen, vandanh.nguyen@petronas.com)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/29
[BUG] Non Exist Path Value Return Different Type of Response Code (aliuddin abd rauf, 2024-03-12)

For the `Get Timeseries for single Entity, Property and Source, Return the latest value of a Timeseries` endpoint, a non-existent path variable returns a different type of error depending on which variable is wrong:
1. entityKey (400)
![image.png](/uploads/725207de720984bb2beab9fc711deb93/image.png){width="718" height="323"}
2. propertyDescriptor (404)
![image.png](/uploads/a283c31c9c8b0b11f843c769032b81aa/image.png){width="703" height="316"}
3. source (400)
![image.png](/uploads/cab31d11d1b7a1a9ae64fa9edaaf20f9/image.png){width="666" height="299"}
For all the situations above, the endpoint should return the same code, 404 Not Found, and the response body should follow the format specified in &9 with proper values.

(Assignee: Danh Nguyen, vandanh.nguyen@petronas.com)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/28
[BUG] Invalid Start and End Timestamp Resulting 500 Internal Server Error (aliuddin abd rauf, 2024-03-13)

An invalid or missing value for the start and end timestamps should return 400 instead of 500, with a proper error response. This happens on the `single Entity, Property and Source` endpoint.
![image.png](/uploads/d0c2e718df7f8f7cf4830c614dfd36f5/image.png)

(Milestone: PDMS MVP1, phase2; Assignee: Danh Nguyen, vandanh.nguyen@petronas.com)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/16
[BUG] Wrong Response Code on Invalid data-partition-id (aliuddin abd rauf, 2024-03-07)

Please refer to &14 for the details of this issue.

(Assignee: Danh Nguyen, vandanh.nguyen@petronas.com)

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/177
Data - SPIKE: Investigate CRS Trajectory Conversion Service (Joel Romero, 2023-02-23)

Meet with Joshua Townsend to discuss capabilities of the Trajectory Conversion service and understand how we (or operators) can use it to guarantee geospatial ingestion by GCZ.
**Prerequisite: Trajectory and M14 environment**

(Assignees: Bryan Gunter, Levi Remington)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/issues/11
osdu-core-client and osdu-core-common packages with similar or duplicated functions (Yunhua Koglin, 2022-02-22)

When version info was added to the well delivery service, we noticed that there are similar functions/definitions between osdu-core-client and osdu-core-common, such as this error message:
nested exception is org.springframework.context.annotation.ConflictingBeanDefinitionException: Annotation-specified bean name 'versionInfoProperties' for bean class [**org.opengroup.osdu.core.common.info.VersionInfoProperties**] conflicts with existing, non-compatible bean definition of same name and class [**org.opengroup.osdu.core.client.info.VersionInfoProperties**]
To avoid such conflicts in the future, osdu-core-client could perhaps be merged into osdu-core-common, since osdu-core-common is already widely used in other OSDU Java-based services; alternatively, the APIs of the two packages should be clearly separated.

(Milestone: M11 - Release 0.14; Assignee: Andrei Ionescu)

https://community.opengroup.org/osdu/platform/data-flow/real-time/streams/stream-admin-service/-/issues/1
Implement TopicAdminService (Stephen Nimmo, 2021-10-15)

Implement Java SpringBoot service `org.opengroup.osdu.streaming.service.TopicAdminService.java`, used by StreamingAdminService to perform Kafka topic manipulations:
- [ ] createTopic method should create the new topic with the following parameters:
- [ ] topicName = kind
- [ ] partitions = 10
- [ ] replicas = 3
- [ ] deleteTopic method should delete the topic using topicName provided as an argument
- [ ] tests for every method above

(Assignee: Stephen Nimmo)

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/20
Fix Search return fields Issue (Blocker) (Ash Sathyaseelan, 2022-02-01; Milestone: M9 - Release 0.12; Assignee: Hrvoje Markovic)

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/77
Architecture Change - Central Resources - Add Graph Database (Daniel Scholl, 2023-09-06)

The addition of a Graph Database is required in order to support enhanced Entitlements and a new Entitlements Service based on graph database functionality. This database has been determined to be a Cosmos database leveraging the [Azure Cosmos Graph API](https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction).
The entitlements database needs to be a single database for the OSDU stamp; it is not part of a Data Partition and is planned to be part of the Central Resources.
---
__Design__
Terraform resources exist in AzureRM for managing a Gremlin Graph within a Cosmos Account. These resources are different from those used by a SQL Database and Container. Two options exist for the module work.
1. Enhance the CosmosDB Module to support both SQL and Gremlin Databases.
2. Create a separate module for each database type that is independent.
There are no known advantages to a single module at the moment, so the default decision is to use a new module for this Graph API functionality.
_Module Requirements_
- The module should be as similar to the Cosmos DB module as possible.
_Template Requirements_
- Database will be named with the suffix of graph to distinguish from table or db
- Database will be created as part of the Central Resource Template
- Database will be locked
- Database location and replication location will be consistent in naming patterns to Data Partitions
- Database by default will use the same type of throughput settings as CosmosDB.
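As a sketch of the chosen option (a separate Gremlin module), the core Terraform resource might look like the following; all names and variables here are illustrative placeholders, not the actual module contents:

```terraform
resource "azurerm_cosmosdb_gremlin_database" "graph" {
  # "graph" suffix distinguishes this database from table or db resources
  name                = "${var.name_prefix}-graph"
  resource_group_name = var.resource_group_name
  account_name        = var.cosmosdb_account_name

  # Mirror the throughput settings used by the existing CosmosDB module
  throughput = var.database_throughput
}
```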
---
__Acceptance Criteria__
1. Architecture Diagram Change
2. Modify or create an infrastructure module responsible for adding Cosmos Graph Database.
3. Modify Central Resources to add the additional database.
4. Ensure all Module Unit Tests Pass
5. Ensure all Template Unit Tests and Integration Tests Pass
6. Update all required documentation

(Milestone: January - 21; Assignee: Daniel Scholl)

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/16
Using a single Service Bus in a multipartition setup (Viacheslav Tarasov - SLB, 2021-06-23)

We need to leave Service Bus as part of the partition resource group, but while multiple exist, only one is going to be used. We'd also need to document this so as not to confuse consumers, or update the deployment diagram to show only one service bus.
A related discussion is available at https://github.com/Azure/osdu-infrastructure/issues/131.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/7
Secrets in secret store not getting updated (Kiran Veerapaneni, 2021-06-14; Milestone: Sprint 10/25 - 10/31)

Secrets in the secret store are not getting updated: after a secret is created, it is not possible to add, update, or delete keys in it.

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/9
[System/Core] Schema Service (Stephen Whitley (Invited Expert), 2020-11-09)

The schema service is both an implementation of the schema standard defined below and a service to help create, manage, and share schemas between different users. It aims to raise awareness of the different models used by teams, to help reuse both the data sources and the schemas themselves. This will hopefully reduce the friction of obtaining data from multiple sources.
**Schema service goals**:
- To enable easier creation of new schemas
- To discover popular schemas in use today
- To make reuse of existing schemas easier to achieve
**Schema Definition**:
A schema is the model definition for a particular entity. At its core it defines the property names, their types, and relationships to other models. It also includes certain qualities to expect about the actual entities and the data they hold.
An entity is a specific instance of a schema. A schema is assigned an entityType, which categorizes the data the schema defines; for example, 'well' is an entityType.
An entityType is assigned an 'authority'. An authority is a namespace to help differentiate types defined by different groups e.g. it allows for both an slb:well and a chevron:well type to exist.
With schema definitions we aim to define the standards for data models used by a DDMS. This will reduce the friction of data sharing between different systems, helping with our vision to enable data discovery and sharing.
**Principles**:
- Low threshold for Schema Registration
: We should make it as easy as possible to create a new schema and register it, whether through examples, tooling to help auto-generate a schema, or validators that give early feedback when invalid markup is used. It must also be easy to reuse existing schema definitions (DE-schemas and external schemas) using composition.
- Easy to consume
: JSON schema allows for JSON pointers to other schema parts and fragments, which need to be chased to obtain a full schema. A registered DE schema is self-contained, i.e. it contains only internal references "$ref": "#/definitions/../target". This is particularly important for schemas, which originally refer to external URLs. All internal "$ref" are resolvable. Consumers find all the answers inside the single schema document. The schema has a predictable structure.
- Robust
: All schemas are read-only and versioned. The major version is controlled by the schema producer; the minor version is controlled by the schema service. Once a schema is assigned version 1.0.0 or higher, no breaking changes can be applied in new schema versions. Breaking changes are allowed on any schema with major version 0.X.X.
- Breaking changes are:
- removed properties
  - changes to existing properties, e.g. changed types, changed required properties
: This gives consumers confidence that they can use data without fear of breaks in the future. However, schemas can be deprecated, meaning that no new data will be assigned to them.
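The "self-contained" property described above can be illustrated with a toy schema (all names here are hypothetical, not from any registered DE schema): every `$ref` stays internal to the document, so a consumer never has to chase external URLs.

```python
import json

# Toy, hypothetical self-contained schema: the only $ref points inside the document.
SCHEMA = json.loads("""
{
  "definitions": {
    "GeoPoint": {
      "type": "object",
      "properties": {
        "latitude":  {"type": "number"},
        "longitude": {"type": "number"}
      }
    }
  },
  "type": "object",
  "properties": {
    "wellName":        {"type": "string"},
    "surfaceLocation": {"$ref": "#/definitions/GeoPoint"}
  }
}
""")

def refs(node):
    """Yield every $ref value found anywhere in the schema document."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "$ref":
                yield value
            else:
                yield from refs(value)
    elif isinstance(node, list):
        for item in node:
            yield from refs(item)

# A registered DE schema is self-contained: all references are internal.
assert all(r.startswith("#/") for r in refs(SCHEMA))
```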
**Schema attributes**
Schema definitions are based on the [JSON schema standard](https://json-schema.org/latest/json-schema-core.html) and [OpenAPI 3](https://swagger.io/docs/specification/data-models/). There are subtle differences in keyword support, which are explained in conjunction with OAS3 [here](https://swagger.io/docs/specification/data-models/keywords/).
- [x] AWS
- [x] Azure
- [x] IBM
- [ ] GCP

(Milestone: M1 - Release 0.1; Assignees: Hrvoje Markovic, Ferris Argyle, Dania Kodeih (Microsoft), Wladmir Frazao, Chris Zhang, Michael Cleminson; Due: 2020-10-30)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/240
AWS Directory Bucket access (Klaas Koster, 2024-03-24)

OpenVDS 3.4.0 does not connect to the new AWS Directory Bucket.
```python
url = 's3://vds--use1-az4--x-s3/rline1601'
connection = "Region = us-east-1"
vds = openvds.open(url, connection)
```
Generates the error:
`RuntimeError: Error on downloading VolumeDataLayout object: Http error response: 404 -> vds--use1-az4--x-s3.s3.us-east-1.amazonaws.com/rline1601/VolumeDataLayout: The specified bucket does not exist.`
The error makes sense, because the correct address for the file is:
`vds--use1-az4--x-s3.s3express-use1-az4.us-east-1.amazonaws.com/rline1601/VolumeDataLayout`
So, currently OpenVDS forms the address by inserting 's3' between the bucket and the region, but for a Directory Bucket this is incorrect and should be 's3express-use1-az4'.
I would assume that the 's3express' part is universal, but that '-use1-az4' needs to be coming from the specified url.
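The address formation described above can be sketched as follows. This is not the OpenVDS code; the helper name is hypothetical, and it assumes, per the previous paragraph, that the 's3express' prefix is universal while the AZ id comes from the bucket name:

```python
import re

# Directory buckets are named <base>--<az-id>--x-s3; regular buckets are not.
_DIRECTORY_BUCKET = re.compile(r".+--([a-z0-9-]+)--x-s3$")

def endpoint_host(bucket: str, region: str) -> str:
    """Pick the virtual-hosted endpoint for a bucket (hypothetical helper)."""
    m = _DIRECTORY_BUCKET.match(bucket)
    if m:
        # Zonal S3 Express endpoint: s3express-<az-id>.<region>
        return f"{bucket}.s3express-{m.group(1)}.{region}.amazonaws.com"
    # General-purpose bucket: classic s3.<region> endpoint.
    return f"{bucket}.s3.{region}.amazonaws.com"

print(endpoint_host("vds--use1-az4--x-s3", "us-east-1"))
# vds--use1-az4--x-s3.s3express-use1-az4.us-east-1.amazonaws.com
```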
I don't mind tinkering with the code and seeing how this can be fixed. But it would be great if someone (Morten?) can point me to the part of the code where these addresses are formed. I have an example Python script that correctly reads a file from a Directory bucket, see below.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/core/dspdm-services/-/issues/1
Service should use only Community Maven Repositories (Danylo Vanin (EPAM), 2024-03-28)

## `pom.xml` files should point to only Community repositories.
In the current state several `pom.xml` files point to non-Community Maven Repositories.
Example (`src/pom.xml`, lines 192+):
```xml
<distributionManagement>
<repository>
<id>dspdm-release</id>
<name>dspdm-release</name>
<url>https://repo.ds365.ai/artifactory/dspdm-maven-release</url>
</repository>
<snapshotRepository>
<id>dspdm-snapshots</id>
<name>dspdm-snapshots</name>
<url>https://repo.ds365.ai/artifactory/dspdm-maven-snapshots</url>
</snapshotRepository>
</distributionManagement>
```
Example (`src/dspdm.msp.mainservice/pom.xml`, line 52-53, but used as property in other parts of file):
```xml
<release.url>https://artifacts.repo.openearth.community/artifactory/distservices-maven-staging</release.url>
<snapshot.url>https://artifacts.repo.openearth.community/artifactory/distservices-maven-snapshots</snapshot.url>
```
To make service compliant with the Community approach, please do the following:
1. Add a `.mvn` folder with correct profile settings. Examples can be found in other Java services such as Storage (https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/.mvn/community-maven.settings.xml) or Search (https://community.opengroup.org/osdu/platform/system/search-service). Note the project id in URLs of the form `https://community.opengroup.org/api/v4/projects/**44**/packages/maven`: it should be set to the id of the current project, which is 1245.
2. Remove any references to non-Community repositories inside `pom.xml` files. Examples of pom.xml files can be found in other Java-based services (e.g. [here](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/pom.xml?ref_type=heads))
3. Each `pom.xml` should be configured so that each artifact is published to the Community Maven repository. References to this logic can also be found in other Java-based services.
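For orientation, a minimal `.mvn/community-maven.settings.xml` along the lines of the Storage example might look like this. Treat it as a sketch: the profile id and exact layout should be copied from the referenced services, and only the project id 1245 comes from this issue:

```xml
<settings>
  <profiles>
    <profile>
      <id>community-maven-via-job-token</id>
      <repositories>
        <repository>
          <id>community-maven-repo</id>
          <url>https://community.opengroup.org/api/v4/projects/1245/packages/maven</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>community-maven-via-job-token</activeProfile>
  </activeProfiles>
</settings>
```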
Additional information on how Maven building and publishing jobs are configured in CI/CD can be found here: https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/build/maven.yml

(Assignees: Riabokon Stanislav (EPAM) [GCP], Danylo Vanin (EPAM))

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/131
GC and GC baremetal deploy fail (Daniel Perez, 2024-03-19; Assignees: Aliaksandr Ramanovich (EPAM), Yauheni Rykhter (EPAM))

https://community.opengroup.org/osdu/data/data-definitions/-/issues/70
Publish DD M23.1 v0.26.1 (Thomas Gehrmann [slb], 2024-03-29)

Publish the final Data Definition schema/reference value deliverables for the platform M23.
Related: After this milestone is published, the links to reference value lists have to be updated:
1. https://community.opengroup.org/osdu/platform/system/reference/schema-upgrade/-/blob/main/Docs/docs/index.md?ref_type=heads

(Assignee: Thomas Gehrmann [slb])

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/120
Change transaction retry logic (Gilson Martins, 2024-03-21; Assignee: Gilson Martins)

The purpose is to change the retry logic: for each exponential retry, if the interval between attempts grows bigger than a maximum amount of time, then retry with that maximum interval instead.
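The described policy (exponential growth clamped at a maximum) can be sketched as follows; the function name, base, and cap are illustrative placeholders, not the actual open-etp-server values:

```python
def retry_delay_seconds(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff interval, clamped at a maximum (values are placeholders)."""
    return min(base * (2 ** attempt), cap)

# Early attempts grow exponentially, later ones are capped:
# attempts 0..7 with the defaults -> 1, 2, 4, 8, 16, 32, 60, 60
```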