# OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

# HTTP Status Error 404 Not found - workflow service osdu_ingest
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/50 · updated 2022-07-04 · Chad Leong

## Summary
Tried to submit a manifest via
`POST https://workflow-drgfbg5txq-uc.a.run.app/v1/workflow/Osdu_ingest/workflowRun`
and got a 404 error.
## Steps to reproduce
1) Get an auth token
2) Submit the manifest to the workflow service via Postman (a minimal scripted equivalent is sketched below)
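For reference, a minimal scripted equivalent of the Postman call, assuming the standard OSDU workflow-run payload shape (`executionContext`); the token, data partition, and manifest values are placeholders, not values from this report:

```python
# Minimal sketch of the failing request (placeholders, not values from this report).
import requests

def trigger_workflow_run(token: str, partition: str, manifest: dict) -> requests.Response:
    """POST a manifest to the workflow service. The workflow name in the URL must
    exactly match a workflow registered for this partition (names are
    case-sensitive); an unknown name is one common cause of a 404 here."""
    url = "https://workflow-drgfbg5txq-uc.a.run.app/v1/workflow/Osdu_ingest/workflowRun"
    headers = {
        "Authorization": f"Bearer {token}",
        "data-partition-id": partition,
        "Content-Type": "application/json",
    }
    body = {"executionContext": {"manifest": manifest}}  # assumed payload shape
    return requests.post(url, headers=headers, json=body)
```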
## Example Environment (Tenant)
gcp preship
## What is the current bug behavior?
Error 404
## What is the expected correct behavior?
Status code 200
## Relevant logs and/or screenshots
(Paste any relevant logs - please use code blocks (```) to format console output, logs, and code, as
it's very hard to read otherwise.)
## Possible fixes
(If you can, link to the line of code that might be responsible for the problem)
/cc @kateryna_kurach
/cc @aliaksandr_ramanovich
/cc @sergey_krupenin

Assignee: Aliaksandr Ramanovich (EPAM)

# ADR: E2E preshipment team A workflow bot
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/51 · updated 2023-08-09 · etienne peysson
## Status
- [x] Draft
- [ ] Proposed
- [x] Trialing
- [ ] Under Review
- [ ] Approved
- [ ] Retired
## Context & Scope
Following the Preshipment validation dashboard:
![preship-validation-scope](/uploads/1205798b98bfc6cbc4b2f341fe68eb45/preship-validation-scope.png)
Testing each workflow currently requires multiple manual steps, which is error-prone and time-consuming.
This ADR focuses on the following steps:
- Authenticate to any CSP
- Upload files whenever required
- Trigger DAG
- Validate the workflow
- Generate a report
- Clean up
This work could also be extended with the following:
- Bulk loading
Another ADR has been approved for maintaining Postman collections to be integrated for end-to-end testing of DAGs and service endpoints.
There may be some overlap with the [Postman Collection ADR](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/49).
## Proposition
Implement a script/framework so each use case can be tested independently, both from any computer and from a GitLab pipeline (e2e tests).
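As a rough illustration only, a Python skeleton of such a framework; every helper below is hypothetical (not an existing OSDU API) and would wrap the CSP-specific configuration template:

```python
# Hypothetical skeleton of the workflow bot; all helpers are stubs to be filled
# in per CSP. Each run produces one report row.
from dataclasses import dataclass

@dataclass
class CspConfig:
    name: str       # e.g. "gcp-preship"
    base_url: str
    partition: str
    token: str

def upload_file(cfg: CspConfig, path: str) -> str:
    raise NotImplementedError("CSP-specific file upload")

def trigger_dag(cfg: CspConfig, workflow: str, file_id: str | None) -> str:
    raise NotImplementedError("POST the workflowRun request, return the run id")

def wait_for_completion(cfg: CspConfig, run_id: str) -> str:
    raise NotImplementedError("poll the workflow-run status endpoint")

def cleanup(cfg: CspConfig, run_id: str) -> None:
    raise NotImplementedError("remove records/files created by the test")

def run_workflow(cfg: CspConfig, workflow: str, source_file: str | None = None) -> dict:
    """One end-to-end test: upload when required, trigger, validate, clean up."""
    file_id = upload_file(cfg, source_file) if source_file else None
    run_id = trigger_dag(cfg, workflow, file_id)
    status = wait_for_completion(cfg, run_id)
    cleanup(cfg, run_id)
    return {"csp": cfg.name, "workflow": workflow, "status": status}  # report row
```

The report rows could then be aggregated per release tag, which is what would let pipelines run against multiple test environments in parallel.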
- Pros:
  - Get a clear report of the process so we can more easily provide feedback.
  - Have a common place with configuration templates to fill in per CSP (another script could also help with that part).
  - Add other workflows using the existing framework.
  - Test using multiple source files.
  - Match the framework version with the releases so pipelines can run in multiple test environments at the same time (as long as environments are available).
  - Easy onboarding for new developers.
- Cons:
  - Maintenance of the configuration parameters (per CSP) must follow the release cadence.
  - Requires developer time.
## Decision
## Rationale
## Consequences
- Direct consequence on the Preshipment team A.
## When to revisit
## Tradeoff analysis - input to decision
## Decision timeline

# CSV Parser failure - R3M7 IBM - Custom Schema - Airflow DAG error
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/52 · updated 2021-09-24 · Steven Evans

Running a practice test on the M7 CSV custom schema prior to the M8 testing program, an issue was encountered when the "triggering the Workflow" step 09 was actioned. This successfully triggered and created a run ID; however, on checking the Airflow DAG logs, the process failed with the following error:
![image](/uploads/fcf25be25a5dcd82f619c84b33e1f728/image.png)
![Custom_Schema_DAG_failure_log](/uploads/95971f525950934d1271d0e77cbccde9/Custom_Schema_DAG_failure_log.PNG)

Milestone: Pre-Shipping R3-M7 · Assignee: Shrikant Garg

# POC - Create a visual of the POC deliverables
https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/64 · updated 2022-02-01 · Joel Romero

- Graphics to help communicate timelines and progress toward the POC

Milestone: GCZ Sprint 8 · Assignee: Joel Romero

# maven-surefire-plugin version 2.5 causes token generation issues for Azure in ADO pipeline
https://community.opengroup.org/osdu/platform/system/storage/-/issues/90 · updated 2021-11-01 · Alok Joshi

After pulling back changes from GitLab (changes include an [MR](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/261) which upgrades maven-surefire-plugin), we saw issues in the integration test task in the ADO pipeline for Storage. We were able to isolate and pinpoint the issue to this change by upgrading only the plugin version in a separate branch. When upgrading this plugin to version 2.5, the `INTEGRATION_TESTER` and `TESTER_SERVICEPRINCIPAL_SECRET` values don't get pulled properly during the integration test run in the pipeline (for whatever reason, a `null` string is appended after the actual value), resulting in an NPE.
This causes a token generation issue, and all integration tests fail.
![test_output](/uploads/46d59bea166b898624997bb09b9c5d20/test_output.PNG)
![test_result](/uploads/f7968ae30cf2613cd728a61f07b9518e/test_result.PNG)

Assignee: Alok Joshi

# Upgrade Airflow Lib with release version
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/202 · updated 2021-11-12 · harshit aggarwal

With this MR we are adding a dev version of an Airflow Python package, which should be replaced with the release version during release:
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/merge_requests/485

Assignee: harshit aggarwal

# GCP Integration test is taking too much time
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/70 · updated 2022-05-11 · Abhishek Kumar (SLB)

GCP integration is taking too long to complete.
One thing I prominently observed is that huge messages are being logged very frequently, which is possibly slowing down the entire process.
![image](/uploads/77ae3ba57eb81626cc37d73972a1b166/image.png)
https://community.opengroup.org/osdu/platform/system/schema-service/-/jobs/590392
Please find below the time taken by the other CSPs:
- azure_test = Duration: 14 minutes 20 seconds
- aws-test-java = Duration: 8 minutes 18 seconds
- ibm-test = Duration: 11 minutes 55 seconds
- osdu-gcp-test = Duration: **59 minutes 36 seconds**

Assignees: Oleksandr Kosse (EPAM), Riabokon Stanislav (EPAM) [GCP]

# Add Class to map LAS data object to Well Log object
https://community.opengroup.org/osdu/ui/data-loading/wellbore-ddms-data-loader/-/issues/14 · updated 2022-12-13 · Niall McDaid
Add a class to convert the LAS data object output by LASIO to one or more OSDU Well Log objects (see the sketch below).
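A minimal sketch of what such a mapping class could look like, assuming lasio's public curve/header API and a deliberately simplified Well Log shape (the real OSDU WellLog schema has many more required fields, and the `kind` below is illustrative):

```python
# Sketch: map a lasio.LASFile to one simplified OSDU Well Log-style record.
import lasio

class LasToWellLogMapper:
    KIND = "osdu:wks:work-product-component--WellLog:1.1.0"  # illustrative kind

    def map(self, las: lasio.LASFile) -> dict:
        # One entry per LAS curve: mnemonic, unit, and description.
        curves = [
            {"Mnemonic": c.mnemonic, "CurveUnit": c.unit, "CurveDescription": c.descr}
            for c in las.curves
        ]
        # Pull the well name from the ~Well header section, if present.
        name = next((h.value for h in las.well if h.mnemonic == "WELL"), None)
        return {"kind": self.KIND, "data": {"Name": name, "Curves": curves}}

# Usage: record = LasToWellLogMapper().map(lasio.read("example.las"))
```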
Add unit tests for the class.

Milestone: M9 - Release 0.12 · Assignee: Niall McDaid

# Job Failed #590918
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/72 · updated 2021-09-23 · Abhishek Kumar (SLB)

IT is passing for all other CSPs except IBM; please help us troubleshoot it.
Job [#590918](https://community.opengroup.org/osdu/platform/system/schema-service/-/jobs/590918) failed for 9d348cbe1714b56ba24144f063067565b634e1b6.

Assignee: Anuj Gupta

# Add memory limits
https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/issues/28 · updated 2021-10-20 · Yifan Ye

Adding memory limits:
The AKS node autoscaler uses memory limits to add nodes to the cluster when the HPA (Horizontal Pod Autoscaler) needs more capacity. The values were experimentally determined. The given implementation allows managing the list of environments where limits are enabled.

Assignee: Yifan Ye

# Add memory limits
https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/issues/13 · updated 2021-10-20 · Yifan Ye

Adding memory limits:
The AKS node autoscaler uses memory limits to add nodes to the cluster when the HPA (Horizontal Pod Autoscaler) needs more capacity. The values were experimentally determined. The given implementation allows managing the list of environments where limits are enabled.

Assignee: Yifan Ye

# WITSML Parser (Trajectory) - Error 500 on Post metadata - R3M8
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/53 · updated 2021-09-23 · etienne peysson

I'm receiving the following error:
"Client failed to authenticate using SASL: PLAIN" and code : 500
After making the following call :
POST https://osdu-cpd-osdu.odi-osdu-og-fa7661852f2ab29a6be32f560b2f5573-0000.us-south.containers...I'm receiving the following error :
"Client failed to authenticate using SASL: PLAIN" and code : 500
After making the following call :
`POST https://osdu-cpd-osdu.odi-osdu-og-fa7661852f2ab29a6be32f560b2f5573-0000.us-south.containers.appdomain.cloud/osdu-file/api/file/v2/files/metadata`
- Given Authorization with Access/id token
- Given data-partition-id opendes
- Given Content-Type application/json
- Given x-ms-blob-type BlockBlob
- Given body:
```json
{
"data" : {
"TotalSize" : 5299.0,
"Source" : "TNO Data Source",
"Name" : "Trajectory DC",
"Endian" : "BIG",
"Description" : "Trajectory WITSML dataset",
"DatasetProperties" : {
"FileSourceInfo" : {
"FileSource" : "567637002f924633833787511ab77dfa",
"Name" : "trajectory_DC.xml",
"PreloadFilePath" : "s3://oc-cpd-opendes-staging-bucket/567637002f924633833787511ab77dfa",
"PreloadFileCreateUser" : null,
"PreloadFileModifyDate" : 1631859302.437914453,
"PreloadFileModifyUser" : null
}
}
},
"kind" : "opendes:wks:dataset--File.Generic:1.0.0",
"acl" : {
"viewers" : [ "data.default.viewers@opendes.ibm.com" ],
"owners" : [ "data.default.owners@opendes.ibm.com" ]
},
"legal" : {
"otherRelevantDataCountries" : [ "US" ],
"status" : "compliant",
"legaltags" : [ "opendes-Test-Legal-Tag-7292798" ]
},
"createUser" : null,
"createTime" : 1631859302.433772431,
"modifyUser" : null,
"modifyTime" : 1631859302.437564058
}
```
It was working properly a few days ago.
Another question:
I see in the DAG that you generate the metadata if we don't provide it when triggering the WITSML parser.
What is the recommended way of doing this?
Do you still allow the metadata to be sent?

Milestone: Pre-Shipping R3-M8 · Assignee: Gokul Nagare

# Add @Configuration to config bean
https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/12 · updated 2021-09-17 · Ronak Sakhuja

Spring doesn't create a bean on startup if the class does not have the @Configuration annotation, hence adding @Configuration to the config class.

# 503 error "upstream connect error"
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/78 · updated 2021-09-21 · Dmitrii Gerashchenko

Sometimes the entitlements v2 GET groups endpoint (Azure cloud) returns a 503 error with the body "upstream connect error or disconnect/reset before headers, reset reason: connection failure".
According to https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
Pod termination includes 3 steps:
1. Pod is set to the “Terminating” State and removed from the endpoints list of all Services
2. PreStop Hook is executed: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details
3. SIGTERM signal is sent to the pod
If PreStop is empty, then the 1st and 2nd steps are executed instantaneously.
The entitlements application then gets SIGTERM and stops immediately.
There is a `server.shutdown` property for Spring Boot, but its default value is "immediate" (not "graceful").
Therefore, if there are active connections, they will be terminated immediately after SIGTERM.

Assignee: Dmitrii Gerashchenko

# Not able to copy a small file (with one line of text) using sdutil from seismic-dms-sdutil and following the steps described in the readme file
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/54 · updated 2021-09-30 · Kamlesh Todai
"My Test Data"
(env) D:\OSDU\PreShipping\M8\seismic-store-sdutil>python sdutil cp data1.txt sd://opendes/kttestsubprojsep16/mydata1.txt
[423] [seismic-store-service] open...(env) D:\OSDU\PreShipping\M8\seismic-store-sdutil>type data1.txt
"My Test Data"
(env) D:\OSDU\PreShipping\M8\seismic-store-sdutil>python sdutil cp data1.txt sd://opendes/kttestsubprojsep16/mydata1.txt
[423] [seismic-store-service] opendes/kttestsubprojsep16/mydata1.txt is write locked [RCODE:WL86400] Locked from yesterday’s attempt
(env) D:\OSDU\PreShipping\M8\seismic-store-sdutil>python sdutil cp data2.txt sd://opendes/kttestsubprojsep16/mydata2.txt
- Uploading Data [ 0% | | 0.00/17.0 - 00:06|? - ?B/s ]
maximum recursion depth exceeded while calling a Python object
```

Attached is the list of commands used to set up the environment and the other steps taken before trying to execute the copy command: [sdutil_problemLog.docx](/uploads/91875b1726f4ec8cd25a087677f741fb/sdutil_problemLog.docx)

Assignee: Walter D

# Unable to upload larger file to Azure ('Connection aborted.', timeout('The write operation timed out'))
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/90 · updated 2021-10-01 · Grant Marblestone
I have attempted to write a SEG-Y file to Azure via sdutil.
I successfully uploaded a small (17 KB) file using the following command:
`python sdutil cp data1.txt sd://opendes/grant-test/grant/data1.txt`
But I fail on an 80 MB file with the following command:
`python sdutil cp data2.txt sd://opendes/grant-test/grant/data2.txt`
I attempted to track down the source of the issue but was not able to spend much time on it.
In `\seismic-store-sdutil\sdlib\cmd\cp\cmd.py`, at approximately line 47, there is the method **upload_data_chunks**.
The small file goes through the if/else for a single put in `seismic-store-sdutil\sdutilenv\Lib\site-packages\azure\storage\blob\_upload_helpers.py`,
while the larger file goes through `use_original_upload_path` (line 122).
After that I followed the data through the code to `seismic-store-sdutil\sdutilenv\Lib\site-packages\azure\storage\blob\_shared\uploads.py`.
The large file seems to have a `max_concurrency` of 1, which seems strange.
Anyway, at this point I gave up; I was unable to set the timeout anywhere.
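For what it's worth, a sketch of where such settings can be applied when calling azure-storage-blob directly, using its documented `connection_timeout` and `max_concurrency` options; this is an illustration, not a verified sdutil fix, and the URL/SAS values are placeholders:

```python
# Sketch only: tuning the upload at the azure-storage-blob level, outside sdutil.
from azure.storage.blob import BlobClient

blob = BlobClient.from_blob_url(
    "https://<account>.blob.core.windows.net/<container>/data2.txt?<sas-token>",
    connection_timeout=600,  # raise the socket write timeout for slow uplinks
)

with open("data2.txt", "rb") as data:
    blob.upload_blob(
        data,
        overwrite=True,
        max_concurrency=4,  # the chunked path observed above appeared to use 1
    )
```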
Note: Debasis and I are unable to upload, but Chris can.

Assignee: Sumra Zafar

# Unable to upload larger file to Azure ('Connection aborted.', timeout('The write operation timed out'))
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/11 · updated 2021-09-28 · Grant Marblestone
I have attempted to write a SEG-Y file to Azure via sdutil.
I successfully uploaded a small (17 KB) file using the following command:
`python sdutil cp data1.txt sd://opendes/grant-test/grant/data1.txt`
But I fail on an 80 MB file with the following command:
`python sdutil cp data2.txt sd://opendes/grant-test/grant/data2.txt`
I attempted to track down the source of the issue but was not able to spend much time on it.
In `\seismic-store-sdutil\sdlib\cmd\cp\cmd.py`, at approximately line 47, there is the method **upload_data_chunks**.
The small file goes through the if/else for a single put in `seismic-store-sdutil\sdutilenv\Lib\site-packages\azure\storage\blob\_upload_helpers.py`,
while the larger file goes through `use_original_upload_path` (line 122).
After that I followed the data through the code to `seismic-store-sdutil\sdutilenv\Lib\site-packages\azure\storage\blob\_shared\uploads.py`.
The large file seems to have a `max_concurrency` of 1, which seems strange.
Anyway, at this point I gave up; I was unable to set the timeout anywhere.
Note: Debasis and I are unable to upload, but Chris can.

Assignee: MANISH KUMAR

# Add memory limits
https://community.opengroup.org/osdu/platform/system/reference/unit-service/-/issues/26 · updated 2021-09-28 · Rostislav Vatolin (vatolinrp@gmail.com)

Adding memory limits:
The AKS node autoscaler uses memory limits to add nodes to the cluster when the HPA (Horizontal Pod Autoscaler) needs more capacity.
The values were experimentally determined.
The given implementation allows managing the list of environments where limits are enabled.

# Add memory limits
https://community.opengroup.org/osdu/platform/system/partition/-/issues/17 · updated 2021-09-28 · Rostislav Vatolin (vatolinrp@gmail.com)

Adding memory limits:
The AKS node autoscaler uses memory limits to add nodes to the cluster when the HPA (Horizontal Pod Autoscaler) needs more capacity. The values were experimentally determined. The given implementation allows managing the list of environments where limits are enabled.

# Add memory limits
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/79 · updated 2021-09-24 · Rostislav Vatolin (vatolinrp@gmail.com)

Adding memory limits:
The AKS node autoscaler uses memory limits to add nodes to the cluster when the HPA (Horizontal Pod Autoscaler) needs more capacity. The values were experimentally determined. The given implementation allows managing the list of environments where limits are enabled.