OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues (updated 2024-02-15)

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/production/historian/services/osdu-pps-timeseries-service/-/issues/13
**[REQ] Move source param from query params to path variables** (2024-02-15, Asan Arifov)

Need to move the source param from query params to path variables.

![4.png](/uploads/cb4420609b9b22f8a71a40052c316b04/4.png)

Milestone: PDMS MVP1, phase2. Assignees: Danh Nguyen (nguyencong.danh@petronas.com), khoi huynh anh (anhkhoi.huynh@petronas.com).

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/298
**Transformer - CSP specific Authenticate is not used** (2024-02-14, Guillaume Caillet)

Hi,
While adding AWS support for GCZ, I noticed that our own class implementing `IAuthenticate` was properly used when calling an API endpoint, but not during application launch, when calling one of the FeatureCacheSynchronizers. The traceback is:
```
2023-09-19 09:50:25.441 ERROR 8412 --- [pool-1-thread-1] o.o.g.t.s.FeatureCacheSynchronizerHelper : ACCESS_TOKEN_EXCEPTION
org.osdu.gcz.transformer.exception.OsduException: ACCESS_TOKEN_EXCEPTION
at org.osdu.gcz.transformer.repository.osdu.OAuthTokenUtils.getAccessToken(OAuthTokenUtils.java:110) ~[gcz-transformer-core-0.24.0-SNAPSHOT.jar!/:na]
at org.osdu.gcz.transformer.repository.osdu.Authenticate.getAccessToken(Authenticate.java:27) ~[gcz-transformer-core-0.24.0-SNAPSHOT.jar!/:na]
at org.osdu.gcz.transformer.scheduled.FeatureCacheSynchronizerHelper.synchronizeInBatch(FeatureCacheSynchronizerHelper.java:102) ~[gcz-transformer-core-0.24.0-SNAPSHOT.jar!/:na]
at org.osdu.gcz.transformer.scheduled.FeatureCacheLocalSynchronizer.synchronizeInBatch(FeatureCacheLocalSynchronizer.java:114) [gcz-transformer-core-0.24.0-SNAPSHOT.jar!/:na]
at org.osdu.gcz.transformer.scheduled.FeatureCacheLocalSynchronizer.lambda$runOnce$0(FeatureCacheLocalSynchronizer.java:81) [gcz-transformer-core-0.24.0-SNAPSHOT.jar!/:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_382]
```
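Presumably the fix is to inject the CSP-specific `IAuthenticate` implementation into the synchronizer instead of reaching the core `Authenticate`/`OAuthTokenUtils` chain directly. A minimal, hedged sketch of that kind of wiring (everything except the class names visible in the stack trace is an assumption, not the actual GCZ code):

```java
// Hedged sketch only: assumes Spring constructor injection is available and that
// IAuthenticate is the abstraction seen in the stack trace. Field and method
// names are illustrative, not the actual GCZ transformer implementation.
import org.springframework.stereotype.Component;

@Component
public class FeatureCacheSynchronizerHelper {

    private final IAuthenticate authenticate; // the CSP-specific bean gets injected here

    public FeatureCacheSynchronizerHelper(IAuthenticate authenticate) {
        this.authenticate = authenticate;
    }

    public void synchronizeInBatch() {
        // Previously the core Authenticate/OAuthTokenUtils chain was reached directly during
        // application launch, bypassing the CSP-specific IAuthenticate implementation.
        String accessToken = authenticate.getAccessToken();
        // ... use the token to call the OSDU APIs while filling the feature cache ...
    }
}
```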
Tentative MR: https://community.opengroup.org/osdu/platform/consumption/geospatial/-/merge_requests/128/diffs

---

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/81
**ADR: Configurable Index Extensions and De-Normalizations** (2024-02-14, Thomas Gehrmann [slb])

<a name="TOC"></a>
[[_TOC_]]
Originally recorded during June 28-30, 2022 F2F as "Hints replacements, multiple index schemas (participation of indexer
& data definition needs to be in charge), content vs catalog, side-car", then renamed to ADR: User-friendly/App-friendly
Index Schemas
in [Enterprise Architecture ADR #66](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/issues/66)
<details>
<summary markdown="span">Preparation Material</summary>
OSDU Data Definitions conducted a number of sessions in the Core Concepts meetings, which contain supplementary
information:
**2022**
1. [Meeting Minutes 2022-07-05](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2022/2022-07-05-DataDefinitionsCoreConcepts_MeetingMinutes.md#42-user-friendly-schemas-de-normalizations)
2. [Meeting Minutes 2022-07-12](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2022/2022-07-12-DataDefinitionsCoreConcepts_MeetingMinutes.md#43-user-friendly-schemas-aka-index-schemas)
3. [Meeting Minutes 2022-07-19](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2022/2022-07-19-DataDefinitionsCoreConcepts_MeetingMinutes.md#43-user-friendly-schemas-aka-index-schemas)
4. [Meeting Minutes 2022-07-26](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2022/2022-07-26-DataDefinitionsCoreConcepts_MeetingMinutes.md#42-user-friendly-schemas-aka-index-schemas)
**2023**
1. [Meeting Minutes 2023-03-21](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2023/2023-03-21-DataDefinitionsCoreConcepts_MeetingMinutes.md#42-index-extensions-adr-66-configuration)
2. [Meeting Minutes 2023-03-28](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2023/2023-03-28-DataDefinitionsCoreConcepts_MeetingMinutes.md#42-index-extensions-configuration-mechanics-schema-review)
3. [Enterprise Architecture Advice Forum 2023-04-12](https://opensdu.slack.com/archives/C04TPV9CRUP/p1681291140407219?thread_ts=1681217870.084929&cid=C04TPV9CRUP)
</details>
# Status
- [x] Proposed
- [x] Trialing
- [x] Under review
- [x] Approved
- [ ] Retired
# Context & Scope
The entity type schemas delivered by the OSDU Data Definitions subcommittee pose a number of challenges
for consumers. Most of them are due to the normalization of schemas and their friendliness to ingestors, which allows
values to be stored as-is and with less standardization. The main problem is the usage of arrays of objects, which are
difficult to form queries against and costly to index. So far the issues have been mitigated by decorating arrays of objects
with `x-osdu-indexing` instructions. An umbrella issue has been recorded in
[community DD issue #30](https://community.opengroup.org/osdu/data/data-definitions/-/issues/30), which collects a
number of more detailed requests.
In previous OSDU prototypes, this was addressed by specific workarounds,
see [OSDU R1 Indexing Approach and Specification](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/wikis/uploads/46b4f84f0903cc385abd147a0175a00a/r1_indexing.pdf).
Here is an attempt to classify the workarounds listed in the R1 document above:
1. Extraction of standardized values from arrays of objects using conditions (e.g., Well UWI, SpudDate).
2. Chasing relationships to parent or related objects in order to de-normalize parent/related object values on children.
3. Offering related object's Name/Code for presentations in applications.
4. Counting children of well-known kinds. (The priority of this is lower compared to 1 and 2. The current Search service
should be capable of querying a particular parent-child relationship.)
The current methods using `x-osdu-virtual-properties`, `x-osdu-is-derived` and `x-osdu-indexing` JSON schema decorations
fall short when the query conditions become dependent on platform operators' usage of, e.g., reference values. In many
cases the reference value lists shipped by OSDU are incomplete or not documented clearly enough to guide global platform
standards.
[Back to TOC](#TOC)
---
## Requirements
* We need a configurable way to define rules for property extraction, either from nested arrays of objects or from
related objects.
* We need OSDU-provided standard index schema extensions to extend the entity type schemas with extracted values
  (governance for interoperability).
* We need to open the index schema extensions to applications and services to optimize frequently used query patterns.
  One of them is the look-up of names or codes of related objects where the source record holds the target record id.
* We need a platform-embedded service that performs the extractions and de-normalizations on demand (data
  creation/update events).
* We need platform support to refresh indexes if the indexing schemas change (both for OSDU and application indexing
  schemas).
[Back to TOC](#TOC)
---
# Tradeoff Analysis
The original tradeoff analysis was performed and recorded
in [EA ADR #66](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/issues/66).
The need for performance required further simplification.
* Replicating derived/de-normalized property values in Storage records was discarded as this would create an enormous
stack of versions for each individual record as records would need to be updated if properties derived from parents or
children changed.
* Instead, de-normalization could happen exclusively in the indexer, simultaneously exploiting the already indexed
  values of parent and child records. (Preferred option)
* Using configurable index extension rules was already proposed
in [EA ADR #66](https://gitlab.opengroup.org/osdu/subcommittees/ea/work-products/adr-elaboration/-/issues/66). The
proposed additional index schemas with references to configurations were discarded. All required information can be
encoded in the configurations themselves. Any index extension schema fragments and documentation can be auto-generated
from the configurations.
* Interoperability is achieved by firm governance rules - the configurations are stored and customizable as OPEN
governance reference-data. However, additional governance rules have to be provided to keep interoperability
guaranteed across deployments and to prevent unwanted interference of index extensions with actual schema properties.
[Back to TOC](#TOC)
---
# Solution
## Index Extension, Data Definition
OSDU Standard index extensions are defined by OSDU Data Definition work-streams with the intent to provide
user/application friendly, derived properties. The standard set, together with the OSDU schemas, form the
interoperability foundation. They can contribute to deliver domain specific APIs according to the Domain Driven Design
principles.
The configurations are encoded in OSDU reference-data records, one per major schema version. The proposed type name
is IndexPropertyPathConfiguration. The diagram below shows the decomposition into parts.
![IndexPropertyPathConfiguration](/uploads/7f1330dd7a41903a90174feb7fe2c9d9/IndexPropertyPathConfiguration.png)
* One IndexPropertyPathConfiguration record corresponds to one schema kind's major version, i.e., the
  IndexPropertyPathConfiguration record id for all the `osdu:wks:master-data--Wellbore:1.*.*` kinds is set
  to `partition-id:reference-data--IndexPropertyPathConfiguration:osdu:wks:master-data--Wellbore:1`. Code, Name and
  Description are filled with meaningful data as usual for all reference-data types.
* The additional index properties are added with one JSON object each in the `Configurations[]` array. The Name defines
  the name of the index 'column', i.e., the name of the property one can search for. The Policy decides, in the current
  usage, whether the resulting value is a single value or an array containing the aggregated, derived values.
* Each `Configurations[]` element has at least one element defined in `Paths[]`.
* The `ValueExtraction` object has one mandatory property, `ValuePath`. The other two optional properties hold value
  match conditions, i.e., the property containing the value to be matched and the value to match.
* If no `RelatedObjectsSpec` is present, the value is derived from the object being indexed.
* If `RelatedObjectsSpec` is provided, the value extraction is carried out in related objects - depending on
  the `RelationshipDirection`, either in the parent/related objects or in the children. The property holding the record id to
  follow is specified in `RelatedObjectID`, as is the expected target kind (`RelatedObjectKind`). As in `ValueExtraction`, the
  selection can be filtered by a match condition (`RelatedConditionProperty` and `RelatedConditionMatches`).
With this, the extension properties can be defined as if they were provided by a schema.
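For orientation, a skeleton of such a record could look like the following (a hedged sketch: the `kind` version and the `Code`/`Name` values are illustrative; only the record id pattern and the `Configurations[]` layout are taken from this ADR):

```json
{
  "id": "partition-id:reference-data--IndexPropertyPathConfiguration:osdu:wks:master-data--Wellbore:1",
  "kind": "osdu:wks:reference-data--IndexPropertyPathConfiguration:1.0.0",
  "data": {
    "Code": "osdu:wks:master-data--Wellbore:1",
    "Name": "Index Property Path Configuration for Wellbore, major version 1",
    "Configurations": [
      {
        "Name": "WellboreName",
        "Policy": "ExtractFirstMatch",
        "Paths": [
          { "ValueExtraction": { "ValuePath": "data.FacilityName" } }
        ]
      }
    ]
  }
}
```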
Most of the use cases deal with text (string) types. The definition of configurations is however not limited to string
types. As long as the property is known to the indexer, i.e., the source record schema is describing the types, the type
can be inferred by the indexer. This does not work for nested arrays of objects, which have not been indexed
with `"x-osdu-indexing": {"type":"nested"}`. In this case the types unknown to teh Indexer Service are
string-serialized; the resulting index type is then of type `string`, still supporting text search.
[Back to TOC](#TOC)
---
### Use Case 1, WellUWI
_As a user I want to discover and match Wells by their UWI. I am aware that this is not globally reliable, however, I am
able to specify a prioritized AliasNameType list to look up value in the NameAliases array._
The configuration demonstrates extraction from the record being indexed itself. With Policy `ExtractFirstMatch`, the
first value is extracted for which the condition holds, i.e., `RelatedConditionProperty` is equal to one of the `RelatedConditionMatches`.
<details><summary>Configuration for Well, extract WellUWI from NameAliases[]</summary>
```json
{
"data": {
"Configurations": [
{
"Name": "WellUWI",
"Policy": "ExtractFirstMatch",
"Paths": [
{
"ValueExtraction": {
"RelatedConditionMatches": [
"{{data-partition-id}}:reference-data--AliasNameType:UniqueIdentifier:",
"{{data-partition-id}}:reference-data--AliasNameType:RegulatoryName:",
"{{data-partition-id}}:reference-data--AliasNameType:PreferredName:",
"{{data-partition-id}}:reference-data--AliasNameType:CommonName:"
],
"RelatedConditionProperty": "data.NameAliases[].AliasNameTypeID",
"ValuePath": "data.NameAliases[].AliasName"
}
}
],
"UseCase": "As a user I want to discover and match Wells by their UWI. I am aware that this is not globally reliable, however, I am able to specify a prioritized AliasNameType list to look up value in the NameAliases array."
}
]
}
}
```
</details>
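Once the extension is in place, the derived property can be queried like any other indexed property. A hedged example of a Search service request body (standard OSDU Search API syntax is assumed; the UWI value is made up):

```json
{
  "kind": "osdu:wks:master-data--Well:1.*.*",
  "query": "data.WellUWI:\"5123-01234\"",
  "limit": 10
}
```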
[Back to TOC](#TOC)
---
### Use Case 2, CountryNames
_As a user I want to find objects by a country name, with the understanding that an object may extend over country
boundaries._
This configuration demonstrates the extraction from related index objects - here `RelatedObjectKind`
being `osdu:wks:master-data--GeoPoliticalEntity:1.`, which are found via `RelatedObjectID` as
in `data.GeoContexts[].GeoPoliticalEntityID`. The match condition constrains the `GeoTypeID` to be
`GeoPoliticalEntityType:Country`.
<details><summary>Configuration for Well, extract CountryNames from GeoContexts[]</summary>
```json
{
"data": {
"Configurations": [
{
"Name": "CountryNames",
"Policy": "ExtractAllMatches",
"Paths": [
{
"RelatedObjectsSpec": {
"RelatedObjectID": "data.GeoContexts[].GeoPoliticalEntityID",
"RelatedObjectKind": "osdu:wks:master-data--GeoPoliticalEntity:1.",
"RelatedConditionMatches": [
"{{data-partition-id}}:reference-data--GeoPoliticalEntityType:Country:"
],
"RelatedConditionProperty": "data.GeoContexts[].GeoTypeID"
},
"ValueExtraction": {
"ValuePath": "data.GeoPoliticalEntityName"
}
}
],
"UseCase": "As a user I want to find objects by a country name, with the understanding that an object may extend over country boundaries."
}
]
}
}
```
</details>
[Back to TOC](#TOC)
---
### Use Case 3, Wellbore Name on WellLog Children
_As a user I want to discover WellLog instances by the wellbore's name value._
A variant of this can be WellUWI from parent Wellbore → Well; in that case the value would be derived from the
already extended index values.
This configuration demonstrates extractions from multiple `Paths[]`.
<details><summary>Configuration for WellLog, extract WellboreName from parent WellboreID</summary>
```json
{
"data": {
"Configurations": [
{
"Name": "WellboreName",
"Policy": "ExtractFirstMatch",
"Paths": [
{
"RelatedObjectsSpec": {
"RelatedObjectKind": "osdu:wks:master-data--Wellbore:1.",
"RelatedObjectID": "data.WellboreID"
},
"ValueExtraction": {
"ValuePath": "data.VirtualProperties.DefaultName"
}
},
{
"RelatedObjectsSpec": {
"RelatedObjectKind": "osdu:wks:master-data--Wellbore:1.",
"RelatedObjectID": "data.WellboreID"
},
"ValueExtraction": {
"ValuePath": "data.FacilityName"
}
}
],
"UseCase": "As a user I want to discover WellLog instances by the wellbore's name value."
}
]
}
}
```
</details>
[Back to TOC](#TOC)
---
### Use Case 4, Wellbore index WellLogCurveMnemonics
_As a user I want to find Wellbores by well log mnemonics._
This configuration demonstrates the Policy `ExtractAllMatches` with related objects discovered by
RelationshipDirection `ParentToChildren`, i.e., related objects referring to the indexed record.
<details><summary>Configuration for Wellbore, extract WellLogCurveMnemonics from WellLog children</summary>
```json
{
"data": {
"Configurations": [
{
"Name": "WellLogCurveMnemonics",
"Policy": "ExtractAllMatches",
"Paths": [
{
"RelatedObjectsSpec": {
"RelationshipDirection": "ParentToChildren",
"RelatedObjectID": "WellboreID",
"RelatedObjectKind": "osdu:wks:work-product-component--WellLog:1."
},
"ValueExtraction": {
"ValuePath": "Curves[].Mnemonic"
}
}
],
"UseCase": "As a user I want to find Wellbores by well log mnemonics."
}
]
}
}
```
</details>
[Back to TOC](#TOC)
---
## Index Extension, Governance
OSDU Data Definition ships reference value list content for all reference-data group-type entities. The type
IndexPropertyPathConfiguration is classified as OPEN governance, which usually means that new records can be added by
platform operators. This rule must be adjusted for IndexPropertyPathConfiguration records.
### Permitted Changes to IndexPropertyPathConfiguration Records
It is permitted to
* customize the conditions for value extractions, notably the matching values in `RelatedConditionMatches`.
* add additional `Paths[]` elements to `Configurations[].Paths[]`.
* add new index property configuration objects to the `Configurations[]` array. To avoid interference with future OSDU
  updates it is strongly recommended to add a namespace prefix to the `Configurations[].Name`, e.g., "OperatorX.WellUWI"
  (see the sketch below).
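A hedged sketch of such a permitted, operator-added entry in `Configurations[]` (the `OperatorX` namespace and the `AliasNameType` value are made up for illustration):

```json
{
  "Name": "OperatorX.WellUWI",
  "Policy": "ExtractFirstMatch",
  "Paths": [
    {
      "ValueExtraction": {
        "RelatedConditionMatches": [
          "{{data-partition-id}}:reference-data--AliasNameType:OperatorXIdentifier:"
        ],
        "RelatedConditionProperty": "data.NameAliases[].AliasNameTypeID",
        "ValuePath": "data.NameAliases[].AliasName"
      }
    }
  ],
  "UseCase": "Operator-specific UWI look-up that does not interfere with the OSDU shipped WellUWI definition."
}
```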
### Prohibited Changes to IndexPropertyPathConfiguration Records
It is not permitted to
* change the target value type of existing, OSDU-shipped index extensions. For example, an extraction path
  (`Configurations[].ValueExtraction.ValuePath`) pointing to a string property in the original OSDU definition must not
  be altered to point to a number, integer, or array.
* change the meaning of existing, OSDU-shipped index extensions.
* remove OSDU-shipped extension definitions from `Configurations[]`.
[Back to TOC](#TOC)
---
## Consumption by Indexer Service
### Recursive Index Updates
With the introduction of de-normalizations, record updates can cause infinite recursions. The implementation needs to
address this and avoid situations like in the following diagram:
![Recursions](/uploads/020675583cb7b65560f0d73ffe08fc3c/Recursions.png)
On the left-hand side, Storage records are updated to new versions, which triggers indexing. The update of the index triggers
index updates of related index records due to the derived property values (as defined in the `RelatedObjectsSpec`).
These updates may, in turn, cause a recursion. This must not happen.
The augmenter introduces a new attribute, `ancestry_kinds`, in the Attributes map of the message payload when sending
messages to update the index of parent/children records. The value of the `ancestry_kinds` attribute can include multiple
kinds separated by commas. This new attribute is used to prevent infinite loops while chasing related indexes. The indexer-queue
must pass the attribute back to the indexer when it receives indexing messages.
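A minimal Java sketch of the loop-prevention check described above (class and method names are assumptions; only the `ancestry_kinds` attribute and its comma-separated format come from this ADR):

```java
// Hedged sketch: decide whether an index-update message for a related kind may be
// published, based on the comma-separated ancestry_kinds attribute of the incoming message.
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public final class AncestryKindsGuard {

    public static final String ANCESTRY_KINDS = "ancestry_kinds";

    /** Returns null if publishing would re-visit an already chased kind, otherwise the new attributes. */
    public static Map<String, String> nextAttributes(Map<String, String> incoming, String currentKind) {
        String ancestry = incoming.getOrDefault(ANCESTRY_KINDS, "");
        boolean alreadyVisited = Arrays.asList(ancestry.split(",")).contains(currentKind);
        if (alreadyVisited) {
            return null; // stop here, otherwise the chain would recurse forever
        }
        Map<String, String> outgoing = new HashMap<>(incoming);
        outgoing.put(ANCESTRY_KINDS, ancestry.isEmpty() ? currentKind : ancestry + "," + currentKind);
        return outgoing;
    }
}
```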
### Pseudo-Code
1. For each record to be indexed (create/update event from Storage service):
* Does the record kind have an IndexPropertyPathConfiguration?
* Yes
* get or create the internal index schema that combines the schema of the record kind and schema of extended
properties
* create index document that combines the properties of original record and extended properties
* call ElasticSearch service to create or update the index of the record with extended properties
* No
* **_No action_** (=default for records without IndexPropertyPathConfiguration)
2. Re-Indexing (create/update event from Storage service for an IndexPropertyPathConfiguration record)<br>
To update the schema (or rather, the template) of the kind in ElasticSearch when the kind is re-indexed:
* create the internal index schema derived from the kind (as registered in the Schema service)
* create the internal index schema derived from IndexPropertyPathConfiguration
* merge the internal index schemas
* convert the schema to ElasticSearch template
* call ElasticSearch service to update the index template (schema)
[Back to TOC](#TOC)
---
## Accepted Limitations
* A change in the configurations requires re-indexing of all the records of a major schema version kind. It is the same
limitation as an in-place schema change for any kind.
* All the extensions defined in the IndexPropertyPathConfiguration records refer to properties in the `data` block,
including `ValuePath`, `RelatedObjectID`, `RelatedConditionProperty`.
* Only properties in the `data` block of records being indexed can be reached by the `ValuePath`; system properties are
out of reach. The prefix `data.` is therefore optional and can be omitted.
* The formats/values of the extended properties are extracted from the formats/values of the related index records. If
the formats of the original properties are unknown in the related index records, the indexer will set the value type
of the extended properties as string or string array. (With additional complexity and schema parsing, this limitation
can be overcome, but currently the added value seems to be marginal.)
* If the extended properties are extracted from arrays of objects indexed with
  `"x-osdu-indexing": {"type":"flattened"}`, the indexer cannot re-construct the object properties into
  nested objects when the policy `ExtractAllMatches` is applied. (That kind of indexing is already a deliberate choice.
  With additional complexity, this limitation can be overcome, but currently the added value seems to
  be marginal.)
* To simplify the solution, all the related kinds defined in the configuration are kinds with major version only. They
must end with dot ".". For example: `"RelatedObjectKind": "osdu:wks:work-product-component--WellLog:1."`.
* Index updates may take time. Immediate consistency cannot be expected.
* When a kind derives extended properties from its parent(s), a new data property `data.AssociatedIdentities` is added
on demand by the indexer. The property name `AssociatedIdentities` is therefore reserved by the Indexer and shall not
be used in any OSDU schemas.
Currently, the property name `AssociatedIdentities` is not in use in any of the OSDU well-known schemas. Tests will be
implemented in the OSDU Data Definition pipeline to ensure that this reserved name does not appear as a property in
the `data` block.
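An illustrative (hedged) fragment of what an extended WellLog index document could then contain; the exact value format of `AssociatedIdentities` is an assumption, the idea being that it records which related records the derived values came from:

```json
{
  "data": {
    "WellboreID": "partition-id:master-data--Wellbore:wb-1001:",
    "WellboreName": "Wellbore A-1",
    "AssociatedIdentities": [
      "partition-id:master-data--Wellbore:wb-1001"
    ]
  }
}
```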
[Back to TOC](#TOC)
---
# Change Management
1. Configurations are reference-data and need to be ingested/updated.
2. OSDU Data Definitions must take on the task of defining IndexPropertyPathConfiguration records.
3. Updates (extensions) of index extensions must be managed carefully as they cause re-indexing of the kinds involved.
# Decision
# Consequences
* The indexer code changes should have no impact on the system if no IndexPropertyPathConfiguration records are present.
[Back to TOC](#TOC)
---
# ADR Comments Below

Milestone: M18 - Release 0.21

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/285
**Data - Ingestion of Well Log as a Well Bottom Location** (2024-02-14, Noel Okanya)

As a GCZ developer, I want to represent the log as a point on the well bottom hole location, so that GCZ supports ingesting the logs.

Acceptance Criteria:
- GCZ is able to ingest a well log record and spatialize it with the well's bottom hole location

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/332
**Provider - Add configuration option for authInfo from rest/info** (2024-02-14, Levi Remington)

As a GCZ Developer, I need to add a configuration option for authInfo from rest/info, so that we can allow the installers of GCZ to enter the token service URL.

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/310
**Documentation - Add guidance for GCZ installation to offline environments** (2024-02-14, Levi Remington)

As a GCZ Developer, I want to add guidance for GCZ installation to offline environments, so that GCZ users can reference it for guidance.

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/331
**Transformer - Add config option for which Port to run Java Springboot application on** (2024-02-14, Ankita Srivastava)

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/reservoir/open-etp-server/-/issues/118
**No Data separation when using the Same Database** (2024-02-14, Yan Sushchynski (EPAM))

Let's consider a scenario where a singular database connection string is used across all data-partitions. If we create a new dataspace in any given partition, we observe that it is visible when we list dataspaces from other partitions. Hence, we can see all the dataspaces from every data-partition.
Is this an intentional design choice, suggesting that different databases should be utilized for different data-partitions?
Or, alternatively, would it be beneficial to add an additional column to our table(s), which would allow us to filter results by a data-partition identifier?

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/313
**Generate geoJson skips all records if one record fails with unexpected exception** (2024-02-13, Ankita Srivastava)

Search.generateGeoJSON has some expectations for known arrays like FacilityEvents, NameAliases, etc. If any mandatory attribute is missing in one record, it throws an exception, skips further record processing, and does not load the cache at all.

Expected: the record with the error should be skipped and processing should continue.
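A minimal sketch of that expected behaviour (hypothetical method and type names, not the actual GCZ Search.generateGeoJSON code): wrap the per-record conversion so one bad record is logged and skipped instead of aborting the whole cache load.

```java
// Hedged sketch (hypothetical names): convert records one by one and skip the bad
// ones instead of aborting the cache load.
for (OsduRecord record : records) {
    try {
        features.add(toGeoJsonFeature(record)); // may throw when a mandatory attribute is missing
    } catch (Exception e) {
        log.warn("Skipping record {} while generating GeoJSON: {}", record.getId(), e.getMessage());
    }
}
cache.load(features); // the cache is still populated from the records that converted successfully
```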
---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/276
**Event Tracking/Timelines** (2024-02-13, Noel Okanya)

As a GCZ Product Owner, I want to prepare for the below events, so that we can provide the stakeholders with GCZ updates:
1. EAGE Digital in March 24, 2024 (no OSDU topic)
2. ERGIS event in April 24th - 25th, 2024 - Esri
3. OSDU F2F in Europe April 24th week (No action from the team)

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/331
**Viscosity Correlation Data Model** (2024-02-13, Mykhailo Buriak)

* ViscosityCorrelationID
* ViscosityCorrelationMethod
* ParentPVTModelID
* DocumentID
* Coefficients
* CoefficientName
* Value
* CriticalVolume

---

https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/68
**CRS Conversion convertGeoJson needs to validate input - 500 error** (2024-02-13, An Ngo)

Unhandled error occurs when invalid input is sent to convertGeoJson.
```
curl --location '.../api/crs/converter/v2/convertGeoJson' \
--header 'Authorization: Bearer TOKEN' \
--header 'data-partition-id: partition-id' \
--header 'Content-Type: application/json' \
--header 'correlation-id: id' \
--data '{"toCRS":"{\"authCode\":{\"auth\":\"EPSG\",\"code\":\"4326\"},\"name\":\"GCS_WGS_1984\",\"type\":\"LBC\",\"ver\":\"PE_10_3_1\",\"wkt\":\"GEOGCS[\\\"GCS_WGS_1984\\\",DATUM[\\\"D_WGS_1984\\\",SPHEROID[\\\"WGS_1984\\\",6378137.0,298.257223563]],PRIMEM[\\\"Greenwich\\\",0.0],UNIT[\\\"Degree\\\",0.0174532925199433],AUTHORITY[\\\"EPSG\\\",4326]]\"}","toUnitZ":"{\"baseMeasurement\":{\"ancestry\":\"Length\",\"type\":\"UM\"},\"scaleOffset\":{\"offset\":0.0,\"scale\":1.0},\"symbol\":\"m\",\"type\":\"USO\"}"}'
```
The input here is missing the required FeatureCollection.
Response:
```
{
"code": 500,
"reason": "Server error.",
"message": "An unknown error has occurred."
}
```
convertGeoJson did not validate this. A 500 error was returned, which caused a Service-Availability Fast Burn Alert.
This error should be handled gracefully and return a 400 Bad Request.
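A hedged sketch of the kind of guard that would turn this into a 400 (type names are illustrative; `AppException` is assumed to be the os-core-common exception used across OSDU services, taking status, error and message):

```java
// Hedged sketch: fail fast with a 400 when the payload lacks the required
// featureCollection, instead of letting the error surface as an unhandled 500.
// ConvertGeoJsonRequest is an illustrative name, not the actual CRS converter type.
void validateGeoJsonRequest(ConvertGeoJsonRequest request) {
    if (request == null || request.getFeatureCollection() == null) {
        throw new AppException(400, "Bad request",
                "featureCollection is required for convertGeoJson");
    }
}
```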
---

https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/70
**convertTrajectory enhancements for inverse minimum curvature** (2024-02-13, KIRAN ALLAMSETY)

Implement inverse minimum curvature for `inputKind "dX_dY_dZ"`. Please first read the document and follow the equations in the Min.Curv tab in the spreadsheet.
This enhancement will "invert" the input dx,dy,dz to output MD,INC,AZI at the stations. It will then use these to compute the trajectory.
* The dX and dY are normally GN referenced and true to scale, and hence the output AZI is also GN referenced. But if they were TN, then the output would be AZI_TN.
* The implementation can also take as input "X_Y_Z" data, because for each station the difference with the previous is used, i.e., dX[5]-dX[4] = X[5]-X[4] (at least when ignoring point scale factor corrections and varying convergence, which are not inverted).
* The purpose is for cases where the INC and AZI survey observables are lost, but an application needs them to load. Another case is inertial surveys, if they need to be loaded as if they are MWD surveys.
* This should not be an elaborate API because this should normally not happen, and if it does, the trajectory does not need to be perfectly reconstructed to the millimeter.
* This function can be used iteratively, i.e., to get "normal output" one can call convertTrajectory with `inputKind "dX_dY_dZ"` to get MD, INC and AZI, then use that with `inputKind "MD_Incl_Azi"`, whose response will be as normal with all output fields.
- [x] Add an option and handle `InputKind` "dX_dY_dZ". Implement the formulas.
- [x] Pass test: by roundtrip. See below for test case. First create an input survey with MD,INC,AZI. Then compute the trajectory using AzimuthalEquidistant (forward min. curvature).
- Then run the new implementation with MD_X_Y_Z as input and compute the AZI and INC at the stations. The results should match the original input (except AZI is undetermined for INC=0).
- Then compute the dX,dY,dZ at each station and use that as input for the new implementation with "MD_dX_dY_dZ".
- [ ] Document in swagger.
- "inputKind": _The kind of input; one of MD_Incl_Azim (default), MD_X_Y_Z, MD_dX_dY_dZ, X_Y_Z, dX_dY_dZ. MD stands for measured depth; MD_X_Y_Z/X_Y_Z stand for absolute coordinates in the reference CRS, MD_dX_dY_dZ/dX_dY_dZ stand for deviations relative to the reference point._
- This needs to include also "MD_Incl" - to be checked, but it should already be in v4.
- Add an explanation at the end: "For survey MD, Incl, Azi input, minimum curvature is used to calculate the local deviations and absolute coordinates. For MD_dX,dY,dZ, the inverse minimum curvature is first applied to compute Inc and Azi, which then are used as input to compute the trajectory."
- [ ] Document in tutorial with an example.
- [ ] Bert to check tutorial and swagger. We need to add other things too that may not be there, like:
- overview of input and output units.
- interpolate option.
- GNL method.
**Basic data flow/algorithm steps** (this text ought to be copied to a comment block in the code base at start of new function)
1. Given an original request with `InputKind` "dX_dY_dZ" and values for dx,dy,dz inputStations, units given by unitXY and unitZ, as well as all other input parameters such as CRS, AZIREF, method and interpolate,
2. First compute MD, INC and AZI using the inverse min. curvature equations (the forward relations being inverted are recalled after this list).
- This only requires the InputStation to invert as per doc and spreadsheet (i.e., agnostic to CRS, AZIREF, and method etc.)
- dx,dy,dz should be normalized to SI (to meter) temporarily to compute INC and AZI. The MDs might have to be denormalized (or copied from original) for next step.
3. Create a dummy request with `InputKind` "MD_Incl_Azim" and values for md,inc,azi inputStations (as computed in step 2).
- unitXY : should be removed from the dummy if trajectoryCRS is projected.
- unitMD : (is an optional parameter defaulting to unitZ) - MDs should be converted to unitZ unless unitMD is given by user in request, then convert MDs to that unit.
- Note, if in step 1 the dx,dy were TN then the computed AZI is TN referenced.
4. Run convertTrajectory with the dummy request. This will output the "normal" response incl. path, AZI_TN, AZI_GN, etc. in response.stations. It would even interpolate if user had that in the original request.
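For orientation only (these are not the spreadsheet equations, just the well-known forward relations): with dE, dN, dV the easting, northing and vertical deviations between stations 1 and 2, forward minimum curvature gives the system below; step 2 inverts it for I2, A2 and the MD increment, given the deviations and the previous station's I1, A1.

```math
\begin{aligned}
\beta &= \arccos\!\left(\cos I_1\cos I_2 + \sin I_1\sin I_2\cos(A_2-A_1)\right),\qquad
RF = \frac{2}{\beta}\tan\frac{\beta}{2}\quad (RF = 1\ \text{for}\ \beta = 0)\\
\Delta N &= \frac{\Delta MD}{2}\,RF\,(\sin I_1\cos A_1 + \sin I_2\cos A_2)\\
\Delta E &= \frac{\Delta MD}{2}\,RF\,(\sin I_1\sin A_1 + \sin I_2\sin A_2)\\
\Delta V &= \frac{\Delta MD}{2}\,RF\,(\cos I_1 + \cos I_2)
\end{aligned}
```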
**"dX_dY_dZ" Request input parsing/exception handling**
For step 1/2 (compute INC and AZI using inverse min curvature):
- "inputStations".[i]: required
- "dx": required
- "dy": required
- "dz": required
- (md, inc and azi should not be present on input for this method so check for that and return bad request with a message if they are on input)
- "unitXY": required (for the input dx,dy)
- "unitZ": required (for the input dz)
- "unitMD": optional (for the output MDs; defaults to unitZ if i recall correctly- do the same as normal parsing)
For step 3/4 (dummy call after INC and AZI are computed):
- "trajectoryCRS": required (for dummy call in step 3; at that point unitxy comes from projCRS as normal).
- "referencePoint": required
- "azimuthReference": required
- "method": required
- "interpolate": optional (not expected).
**Example input request**
```json
{
"trajectoryCRS": "osdu:reference-data--CoordinateReferenceSystem:BoundProjected:EPSG::23032_EPSG::1612:",
"azimuthReference": "GN",
"unitXY": "osdu:reference-data--UnitOfMeasure:m:",
"unitZ": "osdu:reference-data--UnitOfMeasure:m:",
"unitMD": "osdu:reference-data--UnitOfMeasure:ft:",
"referencePoint": {
"x": 400000,
"y": 6500000,
"z": 100
},
"inputKind": "dX_dY_dZ",
"inputStations": [
{ "dx": 0, "dy": 0, "dz": 0 },
{ "dx": 0.00, "dy": 0, "dz": 3137.0 },
{ "dx": 0.04, "dy": -0.09, "dz": 3150 },
{ "dx": 0.10, "dy": -0.34, "dz": 3175 },
{ "dx": -0.56, "dy": -0.42, "dz": 3199.99 },
{ "dx": -2.76, "dy": -0.33, "dz": 3224.88 },
{ "dx": -6.31, "dy": -0.41, "dz": 3249.63 },
{ "dx": -10.31, "dy": -0.73, "dz": 3274.31 },
{ "dx": -14.30, "dy": -1.05, "dz": 3298.98 }
],
"method": "AzimuthalEquidistant"
"interpolate": false,
}
```
**Step 3/4 dummy request**
The dx,dy,dz above should after inverse min curvature lead to a dummy request using the computed MD,INC,AZI **(ROUND To 3DP)** as follows:
```json
{
"trajectoryCRS": "osdu:reference-data--CoordinateReferenceSystem:BoundProjected:EPSG::23032_EPSG::1612:",
"azimuthReference": "GN",
"unitXY": "osdu:reference-data--UnitOfMeasure:m:",
"unitZ": "osdu:reference-data--UnitOfMeasure:m:",
"unitMD": "osdu:reference-data--UnitOfMeasure:ft:",
"referencePoint": {
"x": 400000,
"y": 6500000,
"z": 100
},
"inputKind": "MD_Incl_Azim",
"inputStations": [
{ "md": 0, "inclination": 0, "azimuth": 0 }, // round these all to 3 dp
{ "md": 10291.995, "inclination": 0, "azimuth": 55.47 }, // round these all to 3 dp
{ "md": 10334.646, "inclination": 0.83, "azimuth": 155.47 }, // ..
{ "md": 10416.667, "inclination": 0.43, "azimuth": 188.78 },
{ "md": 10498.688, "inclination": 2.98, "azimuth": 271.4 },
{ "md": 10580.709, "inclination": 7.12, "azimuth": 272.82 },
{ "md": 10662.730, "inclination": 9.23, "azimuth": 265.43 },
{ "md": 10744.751, "inclination": 9.23, "azimuth": 265.43 },
{ "md": 10826.772, "inclination": 9.23, "azimuth": 265.43 }
],
"method": "AzimuthalEquidistant"
"interpolate": false,
}
```
Where I converted MD in meters to ft as follows:
| m | ft |
|------|-------------|
| 0 | 0 |
| 3137 | 10291.99475 |
| 3150 | 10334.64567 |
| 3175 | 10416.66667 |
| 3200 | 10498.68766 |
| 3225 | 10580.70866 |
| 3250 | 10662.72966 |
| 3275 | 10744.75066 |
| 3300 | 10826.77165 |
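The conversion simply divides by the exact international foot (0.3048 m), e.g. for the second station:

```math
MD_{ft} = \frac{MD_m}{0.3048}, \qquad \frac{3137\ \mathrm{m}}{0.3048\ \mathrm{m/ft}} = 10291.99475\ \mathrm{ft}
```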
**Step 3/4 dummy response**
(as normal).
* But in OperationsApplied make sure to put in pertinent remarks.
Math and test data are described in section 2.4 of "OSDU_wellbore_calculations.docx", which is linked in the [CRS Convert tutorial](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/blob/master/docs/v3/tutorial/CRS_Convert_Service_howto.md#5-computing-a-wellbore-trajectory-from-directional-survey-data).

---

https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/75
**Error is thrown from the convert API for below input request.** (2024-02-13, KIRAN ALLAMSETY)

Error is thrown from the convert API for the below input request.
Request:
{
"fromCRS": "osdu:reference-data--CoordinateReferenceSystem:Projected:EPSG::32066:",
"toCRS": " osdu:reference-data--CoordinateReferenceSystem:Geographic2D:EPSG::4326:",
"points": [
{
"x": 400000.0000000291,
"y": 6499999.999999412,
"z": 99.99999999999999
},
{
"x": 400000.00000617723,
"y": 6500065.706417698,
"z": 99.99999999999999
}
]
}
Response:
{
"code": 400,
"reason": "Error",
"message": "Bad request"
}
Expected Response:
{
"code": 400,
"reason": "Could not find a conversion method for the given input. no transformation 'NAD_1927_BLM_Zone_16N' -> 'GCS_WGS_1984",
"message": "Bad request"
}

---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/113
**Policy Service should have a separate audit log** (2024-02-13, Shane Hutchins)

@MonicaJohns requested that Policy Service should have its own audit log (in addition to the information gathered in pod logs).
Thinking I could save a file in the bundle server (S3, blob storage) with each change, or something like that.
---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/125
**Upgrade SHA1 to SHA2 or SHA256** (2024-02-13, Shane Hutchins)

Policy Service currently uses SHA1 for logging purposes with changes to policies. This SHA1 is returned in the JSON response when changes are made as well.
While it's not used for security it would be nice to upgrade from SHA1.
Created based upon https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/124

---

https://community.opengroup.org/osdu/platform/home/-/issues/54
**Community Driver/Mapper Contributions: Repository Assignment** (2024-02-13, Rustam Lotsmanenko (EPAM))
# ADR: Community Driver/Mapper Contributions: Repository Assignment
## Status
- [ ] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context
The Community Implementation of the OSDU Platform will feature a reusable and tested foundation, which can be deployed with custom technology stacks using sets of Drivers and Mappers, a.k.a. the 'South Decision Point.' This approach enables the customization of underlying resources for each cloud or specific environment without necessitating code changes. The framework accommodates the contribution of new Drivers/Mappers at a later stage.
https://community.opengroup.org/osdu/platform/system/lib/drivers
https://community.opengroup.org/osdu/platform/system/lib/mappers
## Problem Statement
If anyone wishes to introduce new sets of drivers/mappers, they may be unsure where to contribute them. This decision involves whether to add them to the existing Community repository, such as https://community.opengroup.org/osdu/platform/system/lib/drivers/os-obm, or to create a new repository not directly affiliated with community projects.
## Decision Options
**1. In the same repository, as a new module, for example,** https://community.opengroup.org/osdu/platform/system/lib/drivers/os-obm
Pros:
- Easier to align API (Java Interfaces) updates with custom drivers and update them accordingly.
- Ability to check whether updates are compatible with custom drivers.
- Easier to maintain versioning. Auto version increments could be done in place for each set of drivers.
Cons:
- Issues with custom drivers may affect the stability of community drivers. Vulnerability scans, build issues, failed tests, etc.
- Breaking changes in the driver API should be implemented by custom driver contributors without delays, or it could break the flow.
Sum: **This approach would favor custom drivers more, and the community implementation could be affected.**
**2. As a separate project, for example in group** https://community.opengroup.org/osdu/platform/system/lib
Pros:
- The community set of drivers will be kept isolated, simplifying their maintenance, etc.
- Updates to the driver API can be implemented by custom driver contributors at their own pace.
Cons:
- Breaking changes could be harder to adopt.
- Harder to keep versions up to date; custom driver maintainers must keep up to use the latest version.
Sum: **This approach would favor community drivers more, custom driver maintainers should keep up.**
## Decision
TBD
## Consequences
TBD

---

https://community.opengroup.org/osdu/platform/home/-/issues/52
**ADR - Release management change for Core Libraries** (2024-02-13, Rene von Borstel [EPAM])
## Decision Title
Release management change for Core Libraries
## Status
- [x] Proposed
- [ ] Approved
- [ ] Implementing (incl. documenting)
- [ ] Testing
- [ ] Released
## Purpose
Change in the release process for Core Libraries to have reduced impact on the code tagging process for milestone releases.
## Problem statement
Right before the code freeze, we upgrade the core library and all services during that milestone. If there is a major upgrade, e.g. a Spring Boot update or the Jackson library moving from an older to a new version, then, because the services have been working against an older version of the core libraries, we will most of the time see compile-time or runtime errors across all the services. That impacts the stability of the system, because development work has to stop in order to sanitize the release branch so that the services are up and running and all checks pass, which is an additional overhead.
## Proposed solution
Core libraries are not shipped to customers; they are used internally within the OSDU community. Hence, they do not need to follow the milestone versioning.
We can avoid the above-mentioned problem of upgrading library versions in services at every release by maintaining the following versioning strategy for Core Libraries:
- Create independent versioning of Core Libraries.
- Do not cut a release branch at every release.
- Follow the following versioning strategy while rolling out new versions for Core Libraries.
- Major Version
- Create a new major version when the release contains Backward incompatible changes in Interfaces or Model classes.
- E.g., `id` in the `Record` class is changed to `recordId`.
- Minor Version
- Use minor version when additional methods are added to Interfaces, new fields are added to Model classes
- Changes in versions of dependencies - Springboot, Jackson etc.
- Patch Version
- Increment patch version when Bug fixes or Security patches are applied to the Library.
With this approach we avoid patching core libraries right before the release and thereby reduce the amount of time spent on stabilizing the services during the code tagging process.
## Consequences
- We retire the -rc* versioning strategy. We no longer create release candidates in Core Libraries.
- Every commit on the Core Library will end up creating a new version depending on the type of the change.
## Target Release
M14
## Owner
Please contact @krveduru

---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/124
**bandit scan issue: Use of weak SHA1 hash for security.** (2024-02-13, Solomon Ayalew)
bandit scan is showing a potential issue with Severity: High, Confidence: High
check the scan log for detail.
Run started:2023-12-18 22:51:23.816146
```
Test results:
>> Issue: [B324:hashlib] Use of weak SHA1 hash for security. Consider usedforsecurity=False
Severity: High Confidence: High
CWE: CWE-327 (https://cwe.mitre.org/data/definitions/327.html)
More Info: https://bandit.readthedocs.io/en/1.7.6/plugins/b324_hashlib.html
Location: /Users/solxget/OSDU-clean/os-policy-service/app/api/policy_read_api.py:317:23
316 data = opa_response.json["result"]["raw"]
317 sha1 = hashlib.sha1(data.encode()).hexdigest()
318 response.headers["X-SHA-1"] = sha1
--------------------------------------------------
>> Issue: [B324:hashlib] Use of weak SHA1 hash for security. Consider usedforsecurity=False
Severity: High Confidence: High
CWE: CWE-327 (https://cwe.mitre.org/data/definitions/327.html)
More Info: https://bandit.readthedocs.io/en/1.7.6/plugins/b324_hashlib.html
Location: /Users/solxget/OSDU-clean/os-policy-service/app/api/policy_update_api.py:325:11
324
325 sha1 = hashlib.sha1(contents.decode("utf-8").encode()).hexdigest()
326 response.headers["X-SHA-1"] = sha1
--------------------------------------------------
>> Issue: [B324:hashlib] Use of weak SHA1 hash for security. Consider usedforsecurity=False
Severity: High Confidence: High
CWE: CWE-327 (https://cwe.mitre.org/data/definitions/327.html)
More Info: https://bandit.readthedocs.io/en/1.7.6/plugins/b324_hashlib.html
Location: /Users/solxget/OSDU-clean/os-policy-service/app/api/validate_api.py:96:15
95 ):
96 sha1 = hashlib.sha1(data.encode()).hexdigest()
97 response.headers["X-SHA-1"] = sha1
--------------------------------------------------
>> Issue: [B324:hashlib] Use of weak SHA1 hash for security. Consider usedforsecurity=False
Severity: High Confidence: High
CWE: CWE-327 (https://cwe.mitre.org/data/definitions/327.html)
More Info: https://bandit.readthedocs.io/en/1.7.6/plugins/b324_hashlib.html
Location: /Users/solxget/OSDU-clean/os-policy-service/app/bundles/bundle.py:156:44
155 contents = f.read()
156 existing_sha1 = hashlib.sha1(contents).hexdigest()
157 updated_existing = True
--------------------------------------------------
>> Issue: [B324:hashlib] Use of weak SHA1 hash for security. Consider usedforsecurity=False
Severity: High Confidence: High
CWE: CWE-327 (https://cwe.mitre.org/data/definitions/327.html)
More Info: https://bandit.readthedocs.io/en/1.7.6/plugins/b324_hashlib.html
Location: /Users/solxget/OSDU-clean/os-policy-service/app/bundles/bundle.py:161:35
160 if updated_existing:
161 updated_sha1 = hashlib.sha1(policy).hexdigest()
162 if existing_sha1 == updated_sha1:
--------------------------------------------------
Code scanned:
Total lines of code: 7294
Total lines skipped (#nosec): 0
Total potential issues skipped due to specifically being disabled (e.g., #nosec BXXX): 0
Run metrics:
Total issues (by severity):
Undefined: 0
Low: 138
Medium: 34
High: 6
Total issues (by confidence):
Undefined: 0
Low: 33
Medium: 3
High: 142
Files skipped (0):
```

---

https://community.opengroup.org/osdu/platform/system/register/-/issues/59
**HMAC secret validation doesn't verify if secret is hexadecimal** (2024-02-12, Izabela Kulakowska)

The HMAC secret provided as a parameter in the payload for the [API operation to create the subscription](https://community.opengroup.org/osdu/platform/system/register/-/blob/master/register-core/src/main/java/org/opengroup/osdu/register/api/SubscriberApi.java?ref_type=heads#L100) needs to be in a hexadecimal number format, but the SecretValidator class allows it to be any even-length string matching the regex `^[a-zA-Z0-9]{8,30}+$`.
If the provided secret matches the requirements from SecretValidator but is not a hexadecimal number, then creating the subscription causes an exception in the Register Service when trying to [get the signed signature](https://community.opengroup.org/osdu/platform/system/register/-/blob/master/register-core/src/main/java/org/opengroup/osdu/register/subscriber/services/ChallengeResponseCheck.java?ref_type=heads#L108), more precisely when [parsing the secret in the SignatureService class](https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/cryptographic/SignatureService.java?ref_type=heads#L122).
The API user gets an [error](https://community.opengroup.org/osdu/platform/system/register/-/blob/master/register-core/src/main/java/org/opengroup/osdu/register/subscriber/services/CreateSubscription.java?ref_type=heads#L72) “Failed challenge response check to GET <push endpoint>”, which doesn’t indicate an issue with the provided secret.
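A hedged sketch of a stricter check (not the actual SecretValidator code) that would reject non-hexadecimal secrets before the SignatureService tries to parse them:

```java
// Hedged sketch: require an even-length hexadecimal secret, mirroring the current
// 8-30 character bounds, so that later hex parsing cannot fail.
import java.util.regex.Pattern;

public final class HexSecretValidator {

    private static final Pattern HEX_SECRET = Pattern.compile("^[0-9a-fA-F]{8,30}$");

    private HexSecretValidator() {
    }

    public static boolean isValid(String secret) {
        return secret != null
                && secret.length() % 2 == 0            // whole bytes only
                && HEX_SECRET.matcher(secret).matches();
    }
}
```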