OSDU Software issues: https://community.opengroup.org/groups/osdu/-/issues

---

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/issues/24
simple_osdu_docker_desktop - CrashLoopBackOff, Error in several pods
2023-08-08 · Chad Leong

Hi - following the guide https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/tree/master/examples/simple_osdu_docker_desktop, with the following `custom-values.yaml`:
```yaml
global:
  domain: "localhost"
  # Configuration parameter to switch between HTTP and HTTPS mode for external endpoint.
  # Default - HTTP. HTTPS requires additional configuration
  useHttps: false
keycloak:
  auth:
    # Fill in variable value, the value should contain only alphanumerical characters and should be at least 8 symbols
    adminPassword: "admin12345"
  # This value should be set to 'none' unless https is used (global.useHttps = true)
  proxy: none
minio:
  auth:
    # Fill in variable value
    rootPassword: "admin12345"
  persistence:
    size: 30Gi
  # This value should be set to 'true' when using self-signed certificates or installing on minikube and docker desktop
  useInternalServerUrl: true
postgresql:
  global:
    postgresql:
      auth:
        # Fill in variable value
        postgresPassword: "admin12345"
  persistence:
    size: 8Gi
airflow:
  externalDatabase:
    # Fill in variable value
    password: "admin12345"
  auth:
    # Fill in variable value
    password: "admin12345"
elasticsearch:
  security:
    # Fill in variable value
    elasticPassword: "admin12345"
  master:
    persistence:
      size: 8Gi
  data:
    persistence:
      size: 8Gi
# Configuration parameter to enable data bootstrap in storage service
conf:
  enableDataBootstrap: &enable_data_bootstrap false
gc_legal_deploy:
  conf:
    bootstrapEnabled: *enable_data_bootstrap
gc_storage_deploy:
  conf:
    bootstrapEnabled: *enable_data_bootstrap
```
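As a quick pre-install sanity check, the password constraint called out in the comments above ("only alphanumerical characters ... at least 8 symbols") can be verified with a short script. This is a hypothetical pre-flight helper, not part of the chart:

```python
# Hypothetical pre-flight check: verify every password in
# custom-values.yaml is alphanumeric and at least 8 characters,
# as the inline chart comments require.
import re

def validate_password(value):
    """Return True if the value is purely alphanumeric and >= 8 chars."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{8,}", value))

passwords = {
    "keycloak.auth.adminPassword": "admin12345",
    "minio.auth.rootPassword": "admin12345",
    "postgresql.global.postgresql.auth.postgresPassword": "admin12345",
    "airflow.externalDatabase.password": "admin12345",
    "airflow.auth.password": "admin12345",
    "elasticsearch.security.elasticPassword": "admin12345",
}

bad = [key for key, value in passwords.items() if not validate_password(value)]
print("all passwords OK" if not bad else "invalid: %s" % bad)
```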
I seem to be getting several pods in `Error` and `CrashLoopBackOff` states:
```
airflow-bootstrap-deployment-5755bb555f-xwv8j 1/2 Error 4 7m31s
airflow-scheduler-6594fc8fb7-7jtj9 2/2 Running 0 7m30s
airflow-web-787f64645f-27x2b 1/2 Running 0 7m29s
crs-catalog-6fd7748997-rq5c2 2/2 Running 0 7m33s
crs-conversion-f5f6fdcb8-rlv9b 2/2 Running 0 7m31s
dataset-7bd48f55c-wg8hm 2/2 Running 0 7m32s
eds-dms-7889fcb467-6pxws 2/2 Running 0 7m31s
elastic-bootstrap-deployment-5548867bd4-j69kj 1/2 Error 3 (4m7s ago) 7m31s
elasticsearch-master-0 1/2 Running 0 7m32s
entitlements-556bdb4bd-q7r6r 0/2 Pending 0 7m27s
entitlements-bootstrap-6685b84c96-4qlmv 1/2 Error 1 (2m22s ago) 7m31s
file-865dcd77fc-4k949 2/2 Running 0 7m27s
indexer-5bc6dd8fc8-xdksw 0/2 Pending 0 7m26s
keycloak-0 1/2 Running 0 7m32s
keycloak-bootstrap-deployment-7455cd6448-d64tb 1/2 Error 3 7m28s
legal-68f8d464bc-jx2c2 2/2 Running 0 7m30s
minio-95957bb-pldbg 1/2 Running 1 (2m32s ago) 7m30s
minio-bootstrap-deployment-6556948c7f-mthdm 0/2 Pending 0 7m27s
notification-5d7484fd7-rh4vc 1/2 Error 0 7m32s
opa-8d9d54c46-ldjjk 1/2 Running 0 7m30s
partition-748bbb8b77-4gkcp 2/2 Running 0 7m31s
partition-bootstrap-5fdf48dd59-jpbb8 1/2 Error 2 (3m22s ago) 7m31s
policy-bd6bfb75b-2f58n 2/2 Running 0 7m33s
policy-bootstrap-55f74957f4-j84bf 1/2 Running 2 (101s ago) 7m31s
postgres-bootstrap-deployment-84d45f87d5-fxmbt 1/2 Running 1 (2m39s ago) 7m30s
postgresql-db-0 1/2 Running 0 7m32s
rabbitmq-0 1/2 Running 0 7m32s
rabbitmq-bootstrap-deployment-5d957b54cd-qvtc9 1/2 Error 5 (4m39s ago) 7m33s
redis-dataset-5d75c9cbcb-7rhld 2/2 Running 0 7m29s
redis-entitlements-589bddff6-rn5rm 2/2 Running 0 7m28s
redis-indexer-7c5d65b7f6-jxclx 2/2 Running 0 7m29s
redis-notification-5564799dc6-dnzfp 2/2 Running 0 7m32s
redis-search-686c9dfd87-fh4l8 0/2 Pending 0 7m27s
redis-seismic-store-855fbd99bd-98p7w 2/2 Running 0 7m31s
redis-storage-7d948cd6bb-vbwzm 0/2 Pending 0 7m26s
register-778bffc499-6kjt2 1/2 Error 0 7m29s
schema-69f66d8d8c-cj8nr 2/2 Running 0 7m33s
schema-bootstrap-6557dc76d7-xfgjm 1/2 CrashLoopBackOff 2 (2m16s ago) 7m30s
search-7f48df6c6b-lwjw8 0/2 Pending 0 7m27s
secret-8f8dd48f6-jgjkl 2/2 Running 0 7m33s
seismic-store-7f49bc88d4-66cmk 2/2 Running 0 7m29s
storage-75d9786668-b8gdh 2/2 Running 0 7m28s
unit-fb59b9999-8wgj6 2/2 Running 0 7m28s
well-delivery-784b676d97-8bmz7 2/2 Running 0 7m28s
wellbore-55d756b874-724x6 0/2 Pending 0 7m27s
workflow-5d56b64678-27v54 2/2 Running 0 7m32s
workflow-bootstrap-6d9f99c499-8fqx5 1/2 CrashLoopBackOff 5 (2m7s ago) 7m32s
```
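To triage output like the above, a small script can group the pods that are not `Running`. This is a hypothetical helper (not an official OSDU tool), shown here against a trimmed sample of the listing:

```python
# Group pods by any status other than Running, so the Error /
# CrashLoopBackOff / Pending pods are easy to pick out for triage.
KUBECTL_OUTPUT = """\
airflow-bootstrap-deployment-5755bb555f-xwv8j 1/2 Error 4 7m31s
airflow-scheduler-6594fc8fb7-7jtj9 2/2 Running 0 7m30s
schema-bootstrap-6557dc76d7-xfgjm 1/2 CrashLoopBackOff 2 7m30s
redis-search-686c9dfd87-fh4l8 0/2 Pending 0 7m27s
"""

def unhealthy(lines):
    """Map each non-Running status to the list of pod names showing it."""
    groups = {}
    for line in lines.splitlines():
        name, _ready, status, *_ = line.split()
        if status != "Running":
            groups.setdefault(status, []).append(name)
    return groups

for status, pods in unhealthy(KUBECTL_OUTPUT).items():
    print(status, pods)
```

From there, `kubectl describe pod <name>` and `kubectl logs <name> -c <container> --previous` on each listed pod usually point at the failing dependency; bootstrap jobs in particular often crash-loop simply because the service they bootstrap (Keycloak, MinIO, Postgres, RabbitMQ) is not ready yet.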
Any idea how I can resolve these?

Assignee: Dzmitry Malkevich (EPAM)

---

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/124
Reduce IOPS for /groups endpoint
2023-08-09 · Rustam Lotsmanenko (EPAM)

The `/groups` endpoint in Entitlements is used to retrieve user groups.
Access to that endpoint is protected with a Spring Security authorization filter:
~~~
@PreAuthorize("@authorizationFilter.hasAnyPermission('" + AppProperties.OPS + "', '" + AppProperties.ADMIN + "', '" + AppProperties.USERS + "')")
~~~
The filter determines whether the user can request their groups, fetching the user's groups from cache/DB. <br/>
![image](/uploads/0ce42c53651b73ce920a4933536cd2f6/image.png)
After the access evaluation, the same group lookup is repeated to prepare the response. <br/>
![image](/uploads/6c6e193d0a08e09f83edbed7b35891ec/image.png)
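One way to avoid the duplicated lookup is to resolve the group list once per request and reuse it for both the access check and the response body. A minimal sketch (hypothetical code, not the actual Entitlements implementation):

```python
# Hypothetical per-request memoization: the group lookup hits the
# cache/DB once; the auth filter and the response reuse the result.
class GroupsRequestContext:
    def __init__(self, backend_lookup):
        self._lookup = backend_lookup
        self._groups = None
        self.backend_calls = 0

    def groups(self, user_id):
        if self._groups is None:
            self.backend_calls += 1
            self._groups = self._lookup(user_id)
        return self._groups

def handle_groups_request(ctx, user_id):
    # Authorization check: first access triggers the single backend read.
    if not set(ctx.groups(user_id)) & {"service.entitlements.user", "users"}:
        return 403, None
    # Response preparation: reuses the memoized result, no second read.
    return 200, ctx.groups(user_id)

ctx = GroupsRequestContext(lambda uid: ["users", "data.default.viewers"])
status, body = handle_groups_request(ctx, "member@example.com")
print(status, body, ctx.backend_calls)  # backend hit exactly once
```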
Access to the cache is cheap, but this is the most heavily loaded endpoint, and the current implementation doubles the IOPS.

---

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/176
Operator Security Questionnaire/Survey
2023-11-28 · desman bolden

All,
Two years ago we developed the attached document as additional input for OSDU platform security. As Rick Hadley noted in one of the team meetings, it is about time to send another survey, as the requirements may have changed. Please review the attached document and provide feedback on any adjustments that need to be made. Thanks.
[OSDU_-_Security_Survey_v3.docx](/uploads/b023ee929dcace0936e2b46d79d27a30/OSDU_-_Security_Survey_v3.docx)

Milestone: M22 - Release 0.25 · desman bolden, Anuj Gupta, Robert Chadwick [Schlumberger], Yauhen Shaliou [EPAM/GCP], Yong Zeng · 2023-06-30

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/262
AdminUI - Build web app for GCZ administration
2023-05-31 · Joel Romero
As a GCZ User, I want an Admin UI that enables me to manage my GCZ instance.
Note:
- Review Total Energy OSDU Admin UI implementation
- Will find who we can contact

---

https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/128
DRAFT ADR: New API to delete private schemas
2023-12-13 · Andrei Dalhikh [EPAM/GC]
## Status
- [ ] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
End users and QA engineers often need to delete individual schemas which contain errors or were uploaded by mistake. The current Schema service API does not provide such a method, so these schemas are deleted manually.
## Tradeoff Analysis
The important thing to consider before deleting any schema from a private context is that data may already exist for that schema. Therefore, deletion should only succeed when a schema with the same id/body has been recreated in the system partition as SHARED.
**TODO:** edge cases to be elaborated and documented here
## Decision
1. Introduce a delete endpoint to allow deleting an individual private schema. This endpoint should be governed by an admin role and clean up schemas from private partitions only.
1. Schemas in SHARED partition should still not have delete support.
1. Create new API method as below:
![Schema_delete_API](/uploads/c787d54444f164938246c2be132e8809/Schema_delete_API.jpg)
OpenAPI spec for this API change [schema_openapi.yaml](https://community.opengroup.org/osdu/platform/system/schema-service/-/blob/Delete_schema_api_draft/docs/api/schema_openapi.yaml)
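The guard this decision implies might look like the following sketch (hypothetical role and partition names; the actual admin role and SHARED/system-partition naming are deployment-specific):

```python
# Hypothetical guard for the proposed delete endpoint: shared/system
# schemas are never deletable, and only admins may delete private ones.
SYSTEM_PARTITION = "system"  # assumed name for the SHARED (system) partition

def can_delete_schema(partition, caller_roles):
    if partition == SYSTEM_PARTITION:
        return False  # SHARED schemas still have no delete support
    return "users.datalake.admins" in caller_roles

print(can_delete_schema("tenant1", {"users.datalake.admins"}))  # True
print(can_delete_schema("system", {"users.datalake.admins"}))   # False
```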
## Consequences
1. This will help users keep a clean set of schemas in their environments
2. This will reduce the number of support requests for DevOps

Milestone: M22 - Release 0.25

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/187
[question] SDMS v4 support
2023-05-24 · Filip Brzęk

Hi,
might I ask what the support posture is for the SDMS v4 endpoints that, AFAIK, are available for OpenVDS from M17? [Link to the swagger](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/v0.17.2/app/sdms-v4/docs/openapi.yaml?ref_type=tags)
Regards,
Filip

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/rock-and-fluid-sample/rafs-ddms-services/-/issues/16
Idea: Allow for units to be converted on /data GET endpoints
2023-10-03 · Bryan Dawson

@gehrmann had an idea during the May 23, 2023 demo to allow the consumer to specify a frame of reference for the units they want the data in for the /*/data endpoints. This would convert the stored units in the Parquet into the consumer's specified units for the numerical values.

Assignees: Siarhei Khaletski (EPAM), Michael Jones, Mykhailo Buriak

---

https://community.opengroup.org/osdu/platform/system/storage/-/issues/172
Metadata update API succeeds on remove operation on a `tag` if the tag doesn't exist
2023-05-25 · Alok Joshi

Steps to reproduce:
- Create a record with some tags
- Try to update the record metadata via [metadata update API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/docs/tutorial/StorageService.md#metadata-update-api) by removing a non-existing tag
```bash
curl --request PATCH \
  --url '/api/storage/v2/records' \
  --header 'accept: application/json' \
  --header 'authorization: Bearer <JWT>' \
  --header 'content-type: application/json' \
  --header 'Data-Partition-Id: common' \
  --data-raw '{
"query": {
"ids": [
"tenant1:type:unique-identifier:version"
]
},
"ops": [
{
"op":"remove",
"path":"/tags",
"value":[
"tagthatdoesntexist"
]
}
]
}'
```
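For comparison, a sketch of the validation the remove operation could perform (a hypothetical helper, not the Storage service code):

```python
# Hypothetical validation for a tags "remove" patch op: removing a tag
# that does not exist on the record should be rejected, not silently OK.
def apply_remove_tags(record_tags, values):
    """Remove the named tags; raise if any of them is absent."""
    missing = [tag for tag in values if tag not in record_tags]
    if missing:
        raise ValueError("400: tags not found: %s" % missing)
    return {k: v for k, v in record_tags.items() if k not in values}

tags = {"env": "test"}
try:
    apply_remove_tags(tags, ["tagthatdoesntexist"])
except ValueError as err:
    print(err)  # an error like this is what the API should surface
```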
This should return 4xx, but returns 2xx.

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/257
Transformer - Enhance Transformer Logging Root Cause Detection
2023-05-31 · Levi Remington

Current Transformer logs like "GET COUNT EXCEPTION" or "CACHE NOT INITIALIZED" point generally to areas of the code but could be improved with more specific references to aid in troubleshooting.
Additionally, log4j.yml does not make use of environment variables to set log level (info, trace, etc.), but this would be a valuable enhancement for Kubernetes deployments since modifying the file directly is an anti-pattern.
Finally, please document that in a Kubernetes environment the Transformer's app.log is located in the Transformer pod's `logs/app.log` path.
Acceptance Criteria:
- Exception messages should include stack traces

---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/96
ADR: Make OPA configuration dynamically updatable
2024-02-26 · Shane Hutchins

## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [x] Approved
- [ ] Retired
## Context
OSDU has adopted Rego as the language to define policies and [Open Policy Agent](https://www.openpolicyagent.org/docs/latest/) as an internal solution to manage and enforce the policies. To enforce a policy, various OSDU services call policy service which internally calls OPA API. Some services (storage) bypass policy service and make low level calls to OPA directly.
Today, OPA configuration is strictly managed by CSPs, generally with a [kubernetes config map](https://kubernetes.io/docs/concepts/configuration/configmap/). Because this configuration is static and only updatable on the backend, it breaks the ability to add a partition with the [partition](https://community.opengroup.org/osdu/platform/system/partition) create API.
As a result, once a new partition is created, the following services become impacted:
- Storage
- Search
Any services that depend on the above, including but not limited to:
- Indexer
- [seismic-dms-suite seismic-store-service v4](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/tree/master/app/sdms-v4)
For additional context see the following issues and links:
- https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/94
- [Support Multi Partition Policies in OPA](https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/wikis/Support-Multi-Partition-Policies-in-OPA)
The workaround:
- Workaround requires backend access and manual updates for updating the OPA configuration. See [workaround](https://osdu.pages.opengroup.org/platform/security-and-compliance/policy/bundles/#adding-a-new-partition-to-osdu)
## Scope
Implement APIs to manage OPA configuration.
## Solution
Update the Policy Service /bootstrap API to also create, update and manage the configmap for OPA.
![image](/uploads/d7b7a0791ef1afb1897a067abdc0996f/image.png)
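The config-map update the /bootstrap API would perform can be sketched as follows. The bundle entry layout follows the general shape of OPA's configuration schema (`bundles`, `service`, `resource`, `polling`), but the exact keys any CSP uses are an assumption:

```python
# Hypothetical sketch: add a per-partition bundle entry to an OPA
# configuration dict, as the /bootstrap API would do to the config map.
def add_partition_bundle(opa_config, partition):
    """Idempotently register a bundle for the given partition."""
    bundles = opa_config.setdefault("bundles", {})
    if partition not in bundles:
        bundles[partition] = {
            "service": "s3",  # assumed bundle storage backend
            "resource": "bundle-%s.tar.gz" % partition,
            "polling": {"min_delay_seconds": 60, "max_delay_seconds": 120},
        }
    return opa_config

config = {"bundles": {"osdu": {"resource": "bundle-osdu.tar.gz"}}}
add_partition_bundle(config, "newpartition")
print(sorted(config["bundles"]))  # ['newpartition', 'osdu']
```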
## Consequences
- Kubernetes permissions to allow read and update of OPA config map (opa-agent) will be required.
- CSPs will need to avoid updating the config map once it is created.
## Futures
- At a later date, the Partition service could be configured to call the Policy bootstrap API, removing the burden of having to call an additional API.

Milestone: M23 - Release 0.26 · Assignee: Shane Hutchins

---

https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/54
Enabling User Context in Ingestion, Google Cloud approach
2023-11-17 · Andrei Dalhikh [EPAM/GC]

## User impersonation approach
To address the [highlighted security concerns](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52#note_134717) in the related [ADR](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52), we make user impersonation the responsibility of the Entitlements service. To achieve this, we:
- Limit Airflow SA with single role to impersonate users (group: **users.datalake.delegation**)
- Add special group (**users.datalake.impersonation**) to the users, which MAY be impersonated (in general - workflow service users)
- Implement the following authorisation flow:
## Flow diagram
![user_impersonation_flow.drawio](/uploads/b870acc57f2450139ecd827e60d37a0a/user_impersonation_flow.drawio.png)
1. User initiates DAG execution through the Workflow service API request
2. Workflow service creates a state record in the database and stores the user id on the workflow run (submittedBy)
3. DAG obtains user id from workflow db record by Workflow service API request (implemented in Python SDK)
4. DAG performs a call to some OSDU service, Python SDK library code injects all outgoing requests with the Airflow service account token and the special "on-behalf-of" header. The "on-behalf-of" header value is equal to the user id obtained at the previous step
5. OSDU service passes incoming request headers to the Entitlements service to authorise user request
6. Entitlements service performs the following flow:
   - Only the `/groups` endpoint will support the impersonation flow; Entitlements management endpoints (add member, create group, etc.) will ignore the "on-behalf-of" header.
   - If a request to the Entitlements `/groups` endpoint contains the special "on-behalf-of" header and the user (Airflow SA) willing to act on behalf of another is NOT a member of the special delegation group (users.datalake.delegation), the service returns HTTP 403 Forbidden.
   - If the request contains the "on-behalf-of" header and the user (Airflow SA) willing to act on behalf of another belongs to the delegation group (users.datalake.delegation), the service collects groups for the impersonated user specified in the header.
   - If the impersonated user (who initially triggered the workflow and should be acknowledged as the owner/creator of the ongoing changes) has a group list that does NOT contain the special group (users.datalake.impersonation), the Entitlements service returns HTTP 403 Forbidden.
   - Otherwise, the impersonated user's group list is returned to the calling OSDU service.
7. The OSDU service performs the usual check of the returned group list for the specific rights required to make the call.
   - For security reasons, the Entitlements service response must NOT be cached when the "on-behalf-of" request header is present.
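The Entitlements decision in step 6 can be sketched as follows (hypothetical function names; the group names are the ones defined above):

```python
# Sketch of the step-6 decision: impersonation is allowed only when the
# caller holds the delegation group AND the target holds the
# impersonation group; otherwise the request is rejected with 403.
DELEGATION = "users.datalake.delegation"
IMPERSONATION = "users.datalake.impersonation"

def resolve_groups(caller_groups, on_behalf_of, lookup_groups):
    """Return (status, groups) for a /groups call with optional impersonation."""
    if on_behalf_of is None:
        return 200, caller_groups
    if DELEGATION not in caller_groups:
        return 403, None  # caller may not impersonate anyone
    target_groups = lookup_groups(on_behalf_of)
    if IMPERSONATION not in target_groups:
        return 403, None  # target user may not be impersonated
    return 200, target_groups

lookup = lambda user: {IMPERSONATION, "data.default.viewers"}
print(resolve_groups({DELEGATION}, "user@example.com", lookup))
```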
See also: [Entitlements service documentation](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/provider/entitlements-v2-jdbc#authorisation-flow-for-impersonated-users)

Milestone: M20 - Release 0.23 · Assignee: Andrei Dalhikh [EPAM/GC]

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/256
Transformer - BUG: JSON generate exception error when testing
2023-05-31 · Joel Romero

As a GCZ tester, I want to address the JSON generate error when testing the GCZ.
Notes:
- Identify what data is causing the error (Empty data set)
- Troubleshoot and address the underlying issue
Acceptance Criteria:
- Add error handling for empty data set that is causing the generate error
- Return a more meaningful error message

---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/95
Feature Request - Need capability to write policy based on data records properties
2024-03-15 · Dadong Zhou

From Fabrice HAÜY [SLB] on Slack:
Hi Team, I'm looking for some updated information / roadmap. From our latest conversations at the OSDU F2F in London, I understood that currently the policy engine only knows about id, kind, legal tag, and ACL, making it impossible to create policy entitlements based on the value of a property of the record. I'm looking for information surrounding this limitation and when it will be unlocked. Thank you in advance.
cc @hmarkovic @hutchins @chad

---

https://community.opengroup.org/osdu/platform/system/file/-/issues/83
Checksum values do not match up - value prior to upload, value as auto-populated by File service and persisted in Dataset record
2023-07-10 · Debasis Chatterjee

Please see my recent test in Azure/M17/Preship:
https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M17/Test_Plan_Results_M17/Core_Services/M17-Azuere-Core-File-and-Dataset-steps-Debasis.zip
Prior to uploading the file, I computed the checksum value on the Linux operating system.
After the file was uploaded and the Dataset record was created, I compared that with the value auto-populated by the File service.
The values do not match.
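One cause worth ruling out (an assumption on my part, not a confirmed diagnosis): both sides may compute the same digest but encode it differently, e.g. hex from `sha256sum` on Linux versus Base64 in the stored record. For example:

```python
# The same SHA-256 digest rendered two ways: a hex string (what
# `sha256sum` prints on Linux) and Base64. A naive string comparison
# of the two will always "fail" even though the digests are identical.
import base64
import hashlib

payload = b"example file contents"
digest = hashlib.sha256(payload).digest()

hex_form = digest.hex()
b64_form = base64.b64encode(digest).decode()

print(hex_form == b64_form)  # False: different text encodings
print(base64.b64decode(b64_form) == bytes.fromhex(hex_form))  # True: same digest
```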
Please check this.

Milestone: M19 - Release 0.22 · Assignee: Chad Leong

---

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/69
WellboreTrajectory can be created with a non-existing WellboreID
2023-05-16 · Zachary Keirn

The current Postman collection for pre-ship testing creates a WellboreTrajectory with a typo in the WellboreID. The trajectory is created even though the WellboreID does not exist, so no referential integrity check is performed when creating a trajectory with POST https://{{WELLBORE_DDMS_HOST}}/ddms/v3/wellboretrajectories. The body has this:
```json
"id": "{{data-partition-id}}:work-product-component--WellboreTrajectory:{{WellboreDMSRunId}}",
"kind": "{{authority}}:{{schemaSource}}:work-product-component--WellboreTrajectory:1.1.0",
"data": {
    "Name": "Wellbore_Trajectory_{{WellboreDMSRunId}}",
    "WellboreID": "{{data-partition-id}}:master-data--Wellbore::{{WellboreDMSRunId}}:",
```
Note the typo: a double colon after `Wellbore`.
The WPC for WellboreTrajectory is successfully created; however, the WellboreID is invalid and the referenced record does not exist. Using GET https://{{WELLBORE_DDMS_HOST}}/ddms/v3/wellbores/osdu:master-data--Wellbore::AutoTest_999130548486 returns:
```json
{
    "origin": "osdu-data-ecosystem-storage",
    "errors": [
        {
            "code": 404,
            "reason": "Record not found",
            "message": "The record 'osdu:master-data--Wellbore::AutoTest_999130548486' was not found"
        }
    ]
}
```
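A cheap client-side guard against this class of typo (a hypothetical helper, not part of the DDMS; real OSDU ids also allow an optional version suffix) is to validate the record id shape before POSTing:

```python
# Hypothetical pre-POST check of an OSDU record id shape:
#   <partition>:<group>--<Type>:<id>  with no empty segments,
# which catches the stray double colon from the Postman collection.
import re

RECORD_ID = re.compile(r"^[\w\-\.]+:[\w\-\.]+--[\w\-\.]+:[\w\-\.]+$")

def is_valid_record_id(record_id):
    return RECORD_ID.fullmatch(record_id) is not None

print(is_valid_record_id("osdu:master-data--Wellbore:AutoTest_999130548486"))   # True
print(is_valid_record_id("osdu:master-data--Wellbore::AutoTest_999130548486:")) # False
```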
Link to the Postman collection: [Wellbore DDMS CI-CD v3.0 collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M17/QA_Artifacts_M17/envFilesAndCollections/Wellbore%20DDMS%20CI-CD%20v3.0.postman_collection.json)
This was found using the M17 pre-ship environment.

---

https://community.opengroup.org/osdu/platform/security-and-compliance/policy/-/issues/94
OPA Breaks Adding a New Partition to OSDU
2023-12-26 · Shane Hutchins

Currently there is only a manual workaround to add a new partition to OSDU: https://osdu.pages.opengroup.org/platform/security-and-compliance/policy/bundles/#adding-a-new-partition-to-osdu
If you do not follow these manual steps, there will never be a bundle for the partition, and Policy Service will error on all requests for that partition. The OPA configuration (which generally comes from a Kubernetes config map) isn't updated to know to attempt to read a bundle for that partition.
This is known to break Policy, Storage, Search and Seismic DMS (seismic-store-service v4).
Impacts Milestone releases: M14-M18.

Milestone: M20 - Release 0.23 · Hrvoje Markovic, Neelesh Thakur, Rustam Lotsmanenko (EPAM), Dadong Zhou, Yauhen Shaliou [EPAM/GCP], Shane Hutchins, Srinivasan Narayanan, Yong Zeng, vikas rana

---

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-gcp-provisioning/-/issues/23
Issues 17/22: OSDU pods are in CrashLoopBackOff state
2023-08-16 · m s · Assignee: Dzmitry Malkevich (EPAM)

Local HTTP minikube cluster with issues 17/22 errors; documentation (odt), custom-values.yaml, and logs attached:
[commTicket.tar.gz](/uploads/74cda85e867b3addebf02ccac559f86f/commTicket.tar.gz)

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/255
Do not pass sensitive information using configuration files
2023-05-11 · Morris Estepa

The client secret is currently expected to be passed in as a value through the application's configuration file. I suggest that we instead follow established patterns used in other OSDU services for passing sensitive information to the running application.
See: https://community.opengroup.org/osdu/platform/consumption/geospatial/-/blob/master/gcz-transformer-core/config/application.yml#L83

---

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/254
Data - Load SLB New Zealand data into OSDU
2024-03-12 · Joel Romero

As a GCZ Product Owner, I want the SLB New Zealand test data set to be loaded, so that the data is available for GCZ development.
Note:
- Suggest Data Manager to load the data on IBM pre-ship (OSDU instance)
- Reach out to Operators for Data Managers that have this expertise
Acceptance Criteria:
- Data loaded to IBM pre-ship
- GCZ ran on loaded data and any issues captured in the backlog

Assignee: Michael Wilhite

---

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/commons/-/issues/9
Correct CRS transformation to WGS84
2023-06-07 · Valentin Gauthier

The function `search_bound_projected_for_projected_epsg_code` searches for a BoundProjectedCRS in the CRS catalog.
After some tests on the results, it seems that the transformation is not correct (data that should lie in the North Sea ends up in Africa).
The search for the BoundProjectedCRS must be fixed.
See:
- https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/commons/-/blob/main/app/osdu_utils.py#L393
- https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/Guides/Chapters/04-FrameOfReference.md#443-finding-the-coordinatetransformation-to-wgs-84
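A cheap regression check for this class of bug (a hypothetical helper; the bounding box is approximate) is to assert that transformed WGS 84 coordinates land in the expected region:

```python
# Hypothetical sanity check for transformed coordinates: after
# converting to WGS 84, points expected in the North Sea should fall
# inside its (rough) bounding box, not in Africa.
NORTH_SEA = {"lat": (51.0, 62.0), "lon": (-4.0, 9.0)}  # approximate bounds

def in_bbox(lat, lon, bbox):
    """True if (lat, lon) lies within the bounding box."""
    return (bbox["lat"][0] <= lat <= bbox["lat"][1]
            and bbox["lon"][0] <= lon <= bbox["lon"][1])

print(in_bbox(58.0, 1.0, NORTH_SEA))   # a North Sea location -> True
print(in_bbox(0.0, 20.0, NORTH_SEA))   # central Africa -> False
```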