OSDU Software issues
https://community.opengroup.org/groups/osdu/-/issues

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/1
Delete dataset API does not delete COS (Blob Storage) object (Walter D, 2023-03-27)

The delete dataset API of seismic-store-service calls the storage service POST delete record API. This API is expected to delete the COS (Blob Storage) object belonging to the dataset. However, the COS object is still available even though the response is 204 No Content. We realize that the storage service POST delete only performs a soft delete. We wanted to confirm whether this is the expected behavior.
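
For reference, a minimal TypeScript sketch of the call sequence being described. The endpoint path, query parameter, and environment variables are assumptions for illustration, not the confirmed service contract.

```typescript
// Sketch: delete a dataset through Seismic DMS and observe the reported behavior.
const SDMS_URL = process.env.SDMS_URL ?? "https://<host>/seistore-svc/api/v3"; // assumed base path
const TOKEN = process.env.TOKEN ?? "";

async function deleteDataset(tenant: string, subproject: string, path: string, dataset: string) {
  // Seismic DMS delete-dataset endpoint (path layout assumed). Internally it calls the
  // Storage service POST /api/storage/v2/records/{id}:delete, which is a soft delete.
  const res = await fetch(
    `${SDMS_URL}/dataset/tenant/${tenant}/subproject/${subproject}/dataset/${dataset}` +
      `?path=${encodeURIComponent(path)}`,
    { method: "DELETE", headers: { Authorization: `Bearer ${TOKEN}` } },
  );
  console.log(res.status); // 204 No Content is returned by the service...
  // ...yet, as reported above, the backing COS/Blob object can still be found afterwards
  // (e.g. by listing the bucket/container with the provider's own tooling).
}
```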

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/4
GCP specific naming conventions (Rucha Deshpande, 2023-03-27)

There are many GCP-specific names used in the models, such as gcpid, gcp_bucket, etc. There is also an API called /api/v3/utility/gcs-access-token. The code should be revisited to remove any CSP-specific naming.
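
As an illustration of the kind of rename being requested, a small TypeScript sketch. The neutral field and route names below are hypothetical suggestions, not an agreed naming scheme.

```typescript
// Illustrative before/after of provider-neutral naming (names on the "current" side are from the
// issue; the replacements are hypothetical).
interface SubprojectModelCurrent {
  gcpid: string;       // GCP-specific
  gcp_bucket: string;  // GCP-specific
}

interface SubprojectModelNeutral {
  cloudAccountId: string; // hypothetical neutral replacement for gcpid
  bucket: string;         // hypothetical neutral replacement for gcp_bucket
}

// Similarly, /api/v3/utility/gcs-access-token could become a CSP-neutral route such as
// /api/v3/utility/storage-access-token (hypothetical name).
```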

https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/7
EG management SDK has to be moved to GA (Komal Makkar, 2022-07-11)

EG management is using a preview [SDK](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/blob/master/pom.xml#L55), which can be risky. Move to the GA version of it.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/8
Integration tests assume a valid legal tag exists (Rucha Deshpande, 2023-03-27)

If the FEATURE FLAG for legal is set to true, the service checks the validity of the legal tag passed in 'ltag'. The integration tests currently assume that such a valid legal tag already exists.
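
A minimal sketch of how a test-setup step could remove that assumption by creating the legal tag up front. The endpoint path and tag properties follow the standard OSDU Legal API, but the exact values and the 409-on-duplicate handling are assumptions, not taken from the seismic-store test code.

```typescript
// Hypothetical setup step: make sure the legal tag referenced by 'ltag' exists before tests run.
async function ensureLegalTag(legalUrl: string, partition: string, token: string, name: string) {
  const res = await fetch(`${legalUrl}/api/legal/v1/legaltags`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "data-partition-id": partition,
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      name,
      description: "seismic-store integration test tag",
      properties: {
        countryOfOrigin: ["US"],
        contractId: "A1234",
        expirationDate: "2099-12-31",
        originator: "OSDU",
        dataType: "Public Domain Data",
        securityClassification: "Public",
        personalData: "No Personal Data",
        exportClassification: "EAR99",
      },
    }),
  });
  // 201 = created; 409 assumed to mean the tag already exists. Either way the tests can proceed.
  if (res.status !== 201 && res.status !== 409) {
    throw new Error(`Could not create legal tag '${name}': ${res.status}`);
  }
}
```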

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/17
Tenants - usage of Partition Service (Rucha Deshpande, 2023-03-27)

Since we are using a new Tenant model in this service, there are new tenant-related APIs. Can we not use the existing Partition Service APIs instead?
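
For context, a small sketch of what reading tenant configuration through the existing Partition Service could look like. The endpoint path follows the standard OSDU Partition API; the specific property keys a seismic-store provider would read are assumptions.

```typescript
// Hypothetical: resolve partition properties from the Partition service instead of a
// service-local Tenant model.
async function getPartitionProperties(partitionUrl: string, partitionId: string, token: string) {
  const res = await fetch(`${partitionUrl}/api/partition/v1/partitions/${partitionId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Partition lookup failed: ${res.status}`);
  // The response is a map of property name -> { sensitive, value }; which keys exist
  // (e.g. a seismic-store bucket name) is partition-specific and assumed here.
  return (await res.json()) as Record<string, { sensitive: boolean; value: string }>;
}
```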

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/18
Dataset with seismic metadata fails due to updates in R3 data definitions in Storage Service (Rucha Deshpande, 2023-03-27)

Posting a dataset with seismic metadata that is to be stored as a Storage record fails. The Seismic DMS service needs to be updated to work with the R3 Data Definitions. See issue:
https://community.opengroup.org/osdu/platform/system/storage/-/issues/44
if [ ! -f "tsconfig.json" ]; then
printf "\n%s\n" "[ERROR] The script must be cal...The run-e2e-tests.sh script has the following check. This will not work in internal pipelines where the distribution folder structure is different.
if [ ! -f "tsconfig.json" ]; then
printf "\n%s\n" "[ERROR] The script must be called from the project root directory."
exit 1
fiRucha DeshpandeDiego MolteniRucha Deshpandehttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/23createQuery and createKey - generalize structure2023-03-27T19:27:48ZRucha DeshpandecreateQuery and createKey - generalize structureThe following 2 methods

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/23
createQuery and createKey - generalize structure (Rucha Deshpande, 2023-03-27)

The following two methods:

    createQuery(namespace: string, kind: string): IJournalQueryModel;
    createKey(specs: any): object;

The structure of the parameter should be abstracted. AWS wants to be able to pass information such as

    {
      table_name:
      tenant_name
      subproject_name
      ..etc
    }

of type 'any'. This is required for AWS, as we are restricted to parsing and using the 'Namespace' and 'kind', which does not work in all scenarios for the models we have.
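
As an illustrative sketch of the requested generalization: a single specs object that each provider interprets as it needs. The interface shape and field names below mirror the AWS example in the issue but are an assumption, not the agreed design.

```typescript
// Hypothetical generalized key/query specs: each cloud provider consumes only the fields it needs.
type IJournalQueryModel = object; // opaque placeholder for the existing query model interface

interface JournalSpecs {
  namespace?: string;        // providers that key on namespace/kind keep using these
  kind?: string;
  table_name?: string;       // fields the AWS provider wants to pass through
  tenant_name?: string;
  subproject_name?: string;
  [extra: string]: unknown;  // room for other provider-specific values
}

interface IJournal {
  createKey(specs: JournalSpecs): object;
  createQuery(specs: JournalSpecs): IJournalQueryModel;
}

// Toy stand-in implementation, just to show how a provider could build a key from the specs
// without parsing a Datastore-style namespace/kind pair.
const journal: IJournal = {
  createKey: (specs) => ({ ...specs }),
  createQuery: (specs) => ({ specs }),
};

const key = journal.createKey({
  table_name: "SeismicStore.datasets", // hypothetical table name
  tenant_name: "opendes",
  subproject_name: "demo",
});
console.log(key);
```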

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/1
e2e tests: setup step must create the subproject (Rucha Deshpande, 2023-03-30)

The e2e tests assume that a subproject exists. Just as some files are uploaded in the 'setup' step, the subproject must also be created as part of the setup step here:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/master/test/e2e/conftest.py

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/26
Code linting IBM (Diego Molteni, 2023-03-27)

Code linting to apply on the IBM provider code:

    $ tslint -c tslint.json 'src/cloud/providers/ibm/**/*.ts'

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/27
AWS seismic store service ci cd testing yaml needs to be in ci cd repo (Daniel Perez, 2023-03-27)

I have noticed that the GitLab YAML for AWS testing has been included in the seismic store service (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/tree/master/devops/aws). This file needs to be in the CI/CD repo: https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/tree/master/
Please also follow the standard and integrate it into the same YAML inside cloud-providers, as we do for the other providers: https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/tree/master/cloud-providers

https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/30
Cursor search returns results even when version does not match (Greg, 2022-12-08)
Milestone: M1 - Release 0.1

POST – Query With Cursor. The functionality is working, but there is a defect. For example, the request below searches "opendes:osdu:wellbore-master:0.2.0" with the cursor "2F8900678904A680D24593BC7D8BEEA5". However, it still returns the 0.2.0 results even if I change the version number of the kind, e.g. to "opendes:osdu:wellbore-master:0.0.8".

    { "kind": "opendes:osdu:wellbore-master:0.2.0", "cursor": "2F8900678904A680D24593BC7D8BEEA5", "aggregateBy": "kind" }

It seems that the kind should not be required once you have the cursor ID. Or, if it is required, the kind should have to be the same as in the original query.
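
For reference, a small sketch of the cursor query call in question. The Search endpoint path is the standard OSDU one; the validation described in the trailing comment is the behavior the issue asks for, not the current behavior.

```typescript
// Sketch of the query-with-cursor request described above.
async function queryWithCursor(searchUrl: string, partition: string, token: string) {
  const body = {
    kind: "opendes:osdu:wellbore-master:0.0.8", // differs from the kind the cursor was created for
    cursor: "2F8900678904A680D24593BC7D8BEEA5",
    aggregateBy: "kind",
  };
  const res = await fetch(`${searchUrl}/api/search/v2/query_with_cursor`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "data-partition-id": partition,
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(body),
  });
  // Requested behavior: either ignore 'kind' when a cursor is supplied, or reject the request
  // when 'kind' does not match the cursor's original query. Observed behavior: results for the
  // original 0.2.0 kind are still returned.
  return res.json();
}
```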

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/2
Universal encryption of data at rest (Paco Hope (AWS), 2020-06-24)
Milestone: M1 - Release 0.1

All data stored by OSDU must be encrypted at rest: in all services that store data, and in all infrastructure providers. Storage at rest includes:
* Virtual machines / Container hosts
* Shared file services
* Object storage (e.g., S3, Google Cloud Storage, Azure Blob Storage)
* Relational databases
* Document databases (e.g., ElasticSearch)
Infrastructure providers are:
* AWS
* Azure
* Google Cloud Platform
* IBM / RedHat
### Operator Inputs
- **Chevron**: Chevron requires the use of Chevron's HSM or Azure Key Vault (but does not require BYOK if Azure Key Vault is used).
- **Repsol**: Azure Key Vault is acceptable.
- **Equinor**: Record Level Encryption (RLE) to segment data based on classification would be beneficial, but it is not a definitive requirement (depends on the data). HSM shall be used for central/critical components.
## Definition of Done
For each infrastructure provider:
1. Document all areas that store data at rest
2. Document what encryption at rest is used
3. Document where encryption at rest is not available and/or not used
4. Link to specific information for more details
Given 4 infrastructure providers and approximately 5 kinds of storage each, there will need to be about 20 statements generated.
## For example
This is a fictitious example.
> On AWS, "virtual machines" are EC2 instances. The "hard disk" of the virtual machine is an EBS volume. All EBS volumes are encrypted using Amazon Key Management Service (KMS) encryption. For information on EBS volume encryption options _click here_. For information on KMS encryption algorithms and keys, _click here_.M1 - Release 0.1https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/23Create LegalTag Expiration date is not human readable.2021-01-23T20:12:58ZGregCreate LegalTag Expiration date is not human readable.POST – Create LegalTag. Functionality working but defect. The attribute of "expirationDate" in payload is “2222222222222”, which is not human-readable. Need to convert epoch time to human-readable date. The schema of “expirationDate” of ...POST – Create LegalTag. Functionality working but defect. The attribute of "expirationDate" in payload is “2222222222222”, which is not human-readable. Need to convert epoch time to human-readable date. The schema of “expirationDate” of the legal tag is in the format of “yyyy-mm-dd”, e.g.,"2040-06-02".M1 - Release 0.1Chris ZhangChris Zhanghttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/home/-/issues/3[Seismic DDMS] Provide coverage for metadata2021-06-15T21:24:32ZCelina Marcolinocsilva10@slb.com[Seismic DDMS] Provide coverage for metadata1. Provide coverage and support for seismic metadata following OSDU Seismic Data Model. Ensure seismic metadata are defined and follow OSDU Data Model Schema as in: https://gitlab.opengroup.org/osdu/json-schemas/-/tree/OpenDES_Archive/ge...1. Provide coverage and support for seismic metadata following OSDU Seismic Data Model. Ensure seismic metadata are defined and follow OSDU Data Model Schema as in: https://gitlab.opengroup.org/osdu/json-schemas/-/tree/OpenDES_Archive/geophysics

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/home/-/issues/3
[Seismic DDMS] Provide coverage for metadata (Celina Marcolino, 2021-06-15)
Milestone: M1 - Release 0.1

1. Provide coverage and support for seismic metadata following the OSDU Seismic Data Model. Ensure seismic metadata are defined and follow the OSDU Data Model Schema as in: https://gitlab.opengroup.org/osdu/json-schemas/-/tree/OpenDES_Archive/geophysics
2. Provide coverage for Processing Seismic Header information
3. Provide coverage for Survey Definition (grid definition)
4. Provide coverage for Seismic Navigation
5. Provide coverage for Dataset type and domain

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/3
Universal encryption of data in transit (Paco Hope (AWS), 2020-06-24)
Milestone: M1 - Release 0.1

All data transmitted inside OSDU and into/out of OSDU must be encrypted in all infrastructure providers.
### Data in transit includes:
* Between the OSDU API endpoint and the requesting client
* When it's another OSDU service (e.g., Load Service calls Search Service)
* When it's "internal" (e.g., the operator's non-OSDU systems calling OSDU APIs)
* When it's "external" e.g., an authorised external entity, like a partner or JV.
* Between OSDU services and the constituent cloud services
* Storage
* Databases
* Cloud provider API calls
### Infrastructure providers are:
* AWS
* Azure
* Google Cloud Platform
* IBM / RedHat
## Definition of Done
For each infrastructure provider:
1. Document all flows of data
2. Document what encryption is used for each flow
3. Document where encryption is not available and/or not used
4. Link to cloud-specific information for more details on encryption
5. All internet/extranet facing APIs refuse connections at less than TLS 1.2.
Given 4 infrastructure providers and approximately 2 kinds of data flows each, there will need to be about 8 statements generated.
### For example
This is a fictitious example.

> On AWS, all containers run Kubernetes version XYZ, and each connection supports TLS versions A, B, and C. For additional details, _click here_.
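
As an illustration of the "refuse connections at less than TLS 1.2" item in the Definition of Done above, a minimal Node.js/TypeScript sketch. The certificate paths are placeholders; this is not how any particular OSDU deployment is configured.

```typescript
import { readFileSync } from "node:fs";
import * as https from "node:https";

// Minimal HTTPS server that rejects TLS handshakes below version 1.2.
const server = https.createServer(
  {
    key: readFileSync("/path/to/server.key"),  // placeholder paths
    cert: readFileSync("/path/to/server.crt"),
    minVersion: "TLSv1.2", // clients offering only TLS 1.0/1.1 are refused
  },
  (_req, res) => {
    res.end("ok");
  },
);

server.listen(8443);
```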

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/4
Operator-controlled TLS Options (Paco Hope (AWS), 2020-06-19)
Milestone: M1 - Release 0.1

As an operator, I can provide the public key certificate and corresponding private key for all TLS endpoints that handle my data. This covers externally-facing API endpoints: anything that is deployed as part of deploying the OSDU data platform (not the cloud provider's own endpoints).

1. Issuing the TLS certificate for the API endpoints from a CA / PKI that the operator controls.
* This must be importable into OSDU
* This must be renewable when it expires
2. The TLS cipher policy must be controlled to restrict connections to operator-approved TLS ciphers.
### Operator Inputs
- **Chevron**: This is mandatory for Chevron.
- **Repsol**: This is mandatory for Repsol.
- **Equinor**: We do have an internal PKI so we need to be able to configure trust between internal resources and the OSDU install. (*Paco comment: this might require BYOK for certificates, it might not*)
## Definition of Done
* An operator can provide one or more TLS certificates and they will be deployed to the externally-facing endpoints.
* An operator can indicate which valid TLS ciphers are acceptable/supported for their OSDU endpoints. (e.g., TLS 1.2 only)
* When connecting to the API endpoints, the operator's provided TLS certificate is presented in the TLS handshake.
* When connecting to the API endpoints, TLS connections are rejected unless the client selects an acceptable cipher.
* The requirements for an operator-provided X.509 certificate (e.g., signature, key size, etc.) are documented so that operators can supply compatible certificates. Linking to the appropriate cloud provider's documentation will be necessary, but not sufficient.
* The requirements for an operator-provided TLS cipher list are documented so operators can select their cipher suites. Linking to the appropriate cloud provider's documentation for mechanisms will be necessary, but not sufficient.
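
A hedged sketch of how the last two Definition of Done points could be verified from a client: connect to an endpoint and report the presented certificate and the negotiated cipher. The host name and the expected issuer are placeholders.

```typescript
import * as tls from "node:tls";

// Connects to an OSDU API endpoint and reports which certificate and cipher were negotiated,
// so an operator can check that their own certificate is presented and only approved ciphers work.
function probeEndpoint(host: string, port = 443): void {
  const socket = tls.connect({ host, port, servername: host, minVersion: "TLSv1.2" }, () => {
    const cert = socket.getPeerCertificate();
    console.log("issuer:", cert.issuer.O);   // expected: the operator's CA (placeholder check)
    console.log("subject:", cert.subject.CN);
    console.log("cipher:", socket.getCipher().name, socket.getCipher().version);
    socket.end();
  });
  socket.on("error", (err) => console.error("handshake rejected:", err.message));
}

probeEndpoint("osdu.example.com"); // placeholder host
```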

https://community.opengroup.org/osdu/platform/system/home/-/issues/58
[Notification and registration services] Integrate with other platform services and DDMS (Chris Zhang, 2021-06-24)

Notification and registration services are now available in all cloud platforms. This issue is to track:

1. the work to integrate them with other platform services and DDMS.
2. the ability to deliver an enrichment capability for OSDU. Based on data change notification from storage, we should be able to trigger a DAG in workflow service to enrich the data.
3. Example: when the seismic SEGY dataset header is ingested, it should be possible to trigger an enrichment DAG that can start the processing of the SEGY and conversion into VDS or ZGY for optimal application consumption through the seismic DMS APIs (see the sketch below).
The design options are at: https://community.opengroup.org/osdu/platform/system/notification/-/wikis/OSDU-Platform-Events
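
A hedged sketch of the enrichment flow from point 3: a subscriber receives a records-changed notification and triggers a workflow DAG run. The message shape, the DAG name, and the filter are assumptions for illustration; the Workflow service endpoint follows the standard OSDU API.

```typescript
// Hypothetical subscriber: on a records-changed notification, trigger an enrichment DAG
// (e.g. SEGY -> VDS/ZGY conversion) via the Workflow service.
interface RecordChangedMessage {
  id: string;                        // record id
  kind: string;                      // e.g. a SEGY dataset kind (illustrative)
  op: "create" | "update" | "delete";
}

async function onRecordsChanged(
  messages: RecordChangedMessage[],
  workflowUrl: string,
  partition: string,
  token: string,
): Promise<void> {
  for (const msg of messages) {
    if (msg.op === "delete" || !msg.kind.includes("SEGY")) continue; // illustrative filter
    await fetch(`${workflowUrl}/api/workflow/v1/workflow/segy_to_vds_conversion/workflowRun`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "data-partition-id": partition,
        Authorization: `Bearer ${token}`,
      },
      // executionContext is handed to the DAG run; 'segy_to_vds_conversion' is a hypothetical DAG name.
      body: JSON.stringify({ executionContext: { recordId: msg.id, kind: msg.kind } }),
    });
  }
}
```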

https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/5
OSDU and platform logging requirements (Paco Hope (AWS), 2021-04-22)
Milestone: M1 - Release 0.1

All OSDU logs need to go to a specific, well-known location. That means:
### Logs must exist
* OSDU service logs (e.g., load service, search service, etc.)
* application platform logs (e.g., kubernetes, tomcat, nginx, apache, whatever)
* operating system level logs for VMs
* cloud service provider logs
### Logs must be protected
Logs go one of two ways:
1. They leave OSDU and go to an operator-provided location. In that case, security and management of logs is the operator's responsibility.
2. They remain in an OSDU-specific location (e.g., a log server, an S3 bucket, a cloud-native log aggregation service). In that case additional security requirements apply.
- Logs must be encrypted at rest
- Logs must be protected from unauthorised modification
- Logs must be protected from unauthorised access
- **ConocoPhillips** identified RBAC for log access
### Log Locations
- **Chevron**: Azure Log Analytics
- **Total**: Azure Monitor
- **Petronas**: LogRhythm
- **ConocoPhillips**: Splunk
- **Equinor**: Azure EventHub
### Log Retention
- **BP** highlighted data retention as a security concern. Logs are the one place where the platform itself produces data. Do we activate some automatic cloud-native log deletion?
- **Repsol**: Log integrity measures are mandatory, as well as a retention period of 13 months for the logs.
### Definition of Done
1. Log formats need to be documented and defined for each service and component at each of these levels (OSDU, app, OS, cloud).
2. Log locations are documented for each cloud provider choice.
3. The operator can indicate their preference for logs to either remain in an OSDU-specific location or be exported to another system.

https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/4
Making UrlFetchServiceImpl a singleton Spring bean (Kishore Battula, 2021-01-17)

# Convert HTTP clients into Spring singleton components
## Context and Scope
In the current OSDU core commons there are two HTTP clients:
- UrlFetchServiceImpl, which is an implementation of IUrlFetchService. It uses HttpClientHandler, which is a Spring bean with request scope.
- HttpClient, which is an implementation of IHttpClient.
UrlFetchServiceImpl is a Spring component with request scope, while HttpClient is not a Spring component.
## Questions
- Is there a specific reason for having UrlFetchServiceImpl as a request-scoped Spring bean? Is this because of the injection of JaxRsDpsLogger in HttpClientHandler?
- Is there a specific reason for not having HttpClient as a Spring bean?
## Proposal
Convert these two HTTP clients to Spring beans with singleton scope.
- Convert the existing UrlFetchServiceImpl and HttpClientHandler to singleton scope.
- Convert HttpClient to a singleton-scoped Spring bean. We still support constructor invocation for backward compatibility.
## Rationale
I am working on logging the requests/responses made to external services through these clients. For this logging I want to use JaxRsDpsLogger, which needs to be autowired.
# ----------------- ADR update based on discussion ---------------
Splitting this ADR into two different issues.
This ADR is about converting UrlFetchServiceImpl into a singleton Spring bean.