seismic-store-service issues
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues
Updated: 2023-09-20

---

Issue 91: The v3 to v4 sync process needs to be implemented for all models
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/91
2023-09-20 · Sacha Brants

---

Issue 93: Create service.seismicddms.ops group
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/93
2023-07-05 · Jan Mortensen

As mentioned in [issue 73 in entitlements](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/73), there is a hardcoded dependency on membership of users.datalake.admins for some of the functionality in the Seismic DMS service. This causes confusion, especially given that the users.datalake.* groups are not inherited, so even a member of the higher-level users.datalake.ops would not be able to use the functionality, as it specifically targets the admins group.
**Suggestion**
Instead of creating a hard-coded dependency on this group, a new service group should, in my opinion, have been created for this purpose, e.g. service.seismicddms.ops (or service.sddms.ops, or...). This would give better transparency and make the access needed to use the service independent of these group-of-groups/convenience groups.

---

Issue 94: Info endpoint is missed
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/94
2023-06-13 · Denis Karpenok (EPAM)

```
curl --location --request GET 'https://preship.gcp.gnrg-osdu.projects.epam.com/api/seismic-store/v3/info'
```

Response:

```
[seismic-store-service] Unauthenticated Access. Authorizations not found in the request.
```

With authentication, the response is:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /api/v3/info</pre>
</body>
</html>
```
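As a rough illustration of the expected behavior, here is a minimal sketch assuming an Express-style app (the service is Node.js-based; route paths and names here are illustrative, not the actual service code): the /info route would be registered before the authentication middleware so it is reachable anonymously.

```ts
import express from 'express';

const app = express();

// Public, unauthenticated endpoint returning service version info.
app.get('/api/seismic-store/v3/info', (_req, res) => {
    res.json({ version: process.env.SERVICE_VERSION ?? 'unknown' });
});

// Authentication applies only to routes registered after this middleware.
app.use((req, res, next) => {
    if (!req.headers.authorization) {
        res.status(401).send('[seismic-store-service] Unauthenticated Access. Authorizations not found in the request.');
        return;
    }
    next();
});

app.listen(8080);
```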
Expected:

The version is returned without authentication.

---

Issue 96: Read Only Root File System for Seismic Pods Crashes
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/96
2023-04-12 · Abhay Joshi

When making a change to run the Os-Seismic-Store pods with a read-only root filesystem, the pods seem to crash without any kubectl logs whatsoever. We suspect it is because the application is writing to the pods' filesystem, but we are unable to see where things are being written. We would like to fix this issue, as it is a security concern.

---

Issue 97: Implementing DDMSDatasets[] standardize content data
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/97
2023-03-30 · Chad Leong

DDMS references to optimized content were found to be created ad-hoc and outside the work-product-component schemas.
Following the [original observation](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/7), an [ADR was created](https://gitlab.opengroup.org/osdu/subcommittees/ea/docs/-/issues/10) which standardizes the optimized content references from work-product-component entity types. Over time, DDMSs are expected to implement optimized content references using the `data.DDMSDatasets[]` property and support migration.

---

Issue 1: Delete dataset API does not delete COS (Blob Storage) object
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/1
2023-03-27 · Walter D
Assignee: ethiraj krishnamanaidu

The delete dataset API of seismic-store-service calls the storage service POST delete record API, which deletes the object belonging to the dataset from COS (Blob Storage). However, the COS object is still available even though the response is 204 No Content. We realize that the storage service POST delete only does a soft delete. We wanted to confirm whether this is the expected behavior.

---

Issue 4: GCP specific naming conventions
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/4
2023-03-27 · Rucha Deshpande
Assignees: Rucha Deshpande, Diego Molteni

There are many GCP-specific names used in the models:
such as gcpid, gcp_bucket etc.
There is also an API called /api/v3/utility/gcs-access-token.
The code should be revisited to remove any CSP-specific naming.

---

Issue 18: Dataset with seismic metadata fails due to updates in R3 data definitions in Storage Service
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/18
2023-03-27 · Rucha Deshpande
Assignees: Rucha Deshpande, Diego Molteni

Posting a dataset with seismic metadata that is to be stored as a Storage record fails.
Seismic DMS service needs to be updated to work with R3 Data Definitions.
See issue:
https://community.opengroup.org/osdu/platform/system/storage/-/issues/44

---

Issue 22: e2e test script needs to run from repository root only
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/22
2023-03-27 · Rucha Deshpande
Assignees: Rucha Deshpande, Diego Molteni

The run-e2e-tests.sh script has the following check. This will not work in internal pipelines where the distribution folder structure is different.
```sh
if [ ! -f "tsconfig.json" ]; then
    printf "\n%s\n" "[ERROR] The script must be called from the project root directory."
    exit 1
fi
```

---

Issue 23: createQuery and createKey - generalize structure
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/23
2023-03-27 · Rucha Deshpande
Assignees: Rucha Deshpande, Diego Molteni

The following two methods:
```ts
createQuery(namespace: string, kind: string): IJournalQueryModel;
createKey(specs: any): object;
```

The structure of the parameter should be abstracted. AWS wants to be able to pass information of type 'any', such as:

```
{
    table_name:
    tenant_name
    subproject_name
    ..etc
}
```

This is required for AWS, as we are restricted to parsing and using 'Namespace' and 'kind', which does not work in all scenarios for the models we have.
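A minimal sketch of what such an abstraction could look like (type and method names here are hypothetical, not the actual service interfaces):

```ts
// Hypothetical sketch of a CSP-agnostic spec object for createKey/createQuery.
interface IJournalQueryModel { }   // placeholder for the existing query model

interface JournalSpecs {
    namespace?: string;            // Datastore-style CSPs can keep using these
    kind?: string;
    table_name?: string;           // extra hints a CSP such as AWS may need
    tenant_name?: string;
    subproject_name?: string;
    [key: string]: unknown;        // room for further CSP-specific fields
}

interface IJournal {
    createKey(specs: JournalSpecs): object;
    createQuery(specs: JournalSpecs): IJournalQueryModel;
}
```

Each CSP implementation would read only the fields it needs, so the core code no longer has to encode Datastore's namespace/kind assumptions.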
---

Issue 34: While testing the Seismic API - list of endpoints returning incorrect responses
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/34
2023-03-27 · Kamlesh Todai

- Not able to retrieve tenant metadata - get 403 Forbidden response.
- Upon trying to list subprojects in a tenant with an exported authorization token - get 500 Internal Server Error.
- After patching the dataset, not able to retrieve the dataset info.
- Upon trying to patch a dataset with an invalid/expired authorization token - get a 404 Not Found response instead of 401 or 403.
- Upon trying to validate the ctag of a dataset with an invalid/expired auth token - it successfully validates instead of returning 401 or 403.
- Upserting tags to a dataset with an invalid gtag gives a 200 OK response instead of 400 or 404.
- Deleting a dataset with an invalid datasetid gives a 200 OK response instead of 400 or 404.
- Deleting a dataset with an invalid path gives a 200 OK response instead of 400 or 404.
- Retrieving a list of datasets and sub-directories inside a seismic store path with an invalid cursor gives a 200 OK response instead of 400.
Attached is the document giving the details of the request and the invalid responses received.
[SeismicDMS_CollectionNotes.json](/uploads/a5f8e41de5251679b22c277f9a210ec3/SeismicDMS_CollectionNotes.json)
The collection can be found here
https://community.opengroup.org/osdu/platform/testing/-/blob/master/Postman%20Collection/27_CICD_Setup_SeismicDMSAPI/SeismicDMS%20API%20CI-CD%20v2.0.postman_collection.json
The testing was primarily done on IBM and some on AWS.
@ChrisZhang @sacha @anujgupta @Wibben

---

Issue 37: Support auth with access_token
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/37
2023-03-27 · Aleksandr Spivakov (EPAM)

Currently the service supports only id_token for authorization. It would be good to also support access_token.
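As a rough illustration (not the service's actual logic; claim names vary by identity provider and are assumptions here), the service could extract the caller identity from either token type:

```ts
// Illustrative only: accept a bearer token that is either an id_token or an
// access_token, using jsonwebtoken's decode(). Claim names are assumptions.
import { decode } from 'jsonwebtoken';

function subjectFromBearer(token: string): string {
    const claims = decode(token) as Record<string, unknown> | null;
    if (!claims) {
        throw new Error('Bearer token is not a decodable JWT');
    }
    // id_tokens usually carry 'email'; access_tokens may only carry 'sub'.
    const subject = (claims['email'] ?? claims['sub']) as string | undefined;
    if (!subject) {
        throw new Error('No identity claim found in token');
    }
    return subject;
}
```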
---

Issue 40: Domain API - provide read/write access to trace data
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/40
2023-03-27 · Debasis Chatterjee

A neutral domain API to access seismic trace data, irrespective of whether the content is stored in oZgy or in oVDS.
Consider a suitable protocol, keeping in mind the large volume of data involved.
This will open up opportunities for interoperability for cross-vendor applications.
cc - @pq for information

---

Issue 42: [GCP] Seismic store doesn't use Partition Service to get a GCP project-id of Google Cloud Project
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/42
2023-03-27 · Yan Sushchynski (EPAM)
Milestone: M13 - Release 0.16

The main problems are the following:
- There are no signs that SSDMS uses the Partition Service at all; it accepts requests with no data-partition-id header.
- When we create an SSDMS tenant, we have to specify `gcpid`, the project where data will be stored if we use this tenant in our `sd-path`.

This causes two problems:

- users have to know the actual `gcpid`
- users can specify a `gcpid` that doesn't correspond to the `data-partition-id`
Example of create tenant request:
```
{
"gcpid": "{{gcp_project_id}}",
"esd": "{{data-partition-id}}.osdu-gcp.go3-nrg.projects.epam.com",
"default_acl": "data.default.owners@{{data-partition-id}}.osdu-gcp.go3-nrg.projects.epam.com"
}
```
The solution is to use the Partition Service to get the GCP project-id, so users don't need to specify `gcpid` manually and the correct project-id is always chosen.
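A minimal sketch of that lookup, assuming the standard Partition Service REST API and Node 18+'s global fetch (the property name 'projectId' and the surrounding names are illustrative assumptions):

```ts
// Sketch: resolve the GCP project id for a partition via the Partition
// Service (GET /api/partition/v1/partitions/{partitionId}).
async function resolveGcpProjectId(
    partitionBaseUrl: string,
    dataPartitionId: string,
    bearerToken: string): Promise<string> {

    const res = await fetch(
        `${partitionBaseUrl}/api/partition/v1/partitions/${dataPartitionId}`,
        { headers: { 'Authorization': `Bearer ${bearerToken}` } });

    if (!res.ok) {
        throw new Error(`Partition Service returned ${res.status}`);
    }

    // Partition properties come back as { <name>: { sensitive, value } }.
    const properties = await res.json() as
        Record<string, { sensitive: boolean; value: string }>;

    const projectId = properties['projectId']?.value; // assumed property name
    if (!projectId) {
        throw new Error(`No projectId configured for partition ${dataPartitionId}`);
    }
    return projectId;
}
```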
cc: @Kateryna_Kurach @Siarhei_Khaletski

---

Issue 46: SegyImport and OpenVDS DAG
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/46
2023-03-27 · Greg
Milestone: M10 - Release 0.13 · Assignee: Chris Zhang

The OpenVDS DAG should allow header parameters to be passed for the conversion, which can override header information in the SEG-Y. The DAG uses SEGYImport, which can accept header parameters (see http://osdu.pages.community.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYImport/README.html); however, there is no mechanism to pass these header fields to the DAG.

---

Issue 53: Provide Domain API to read/write Seismic 2D Navigation data (sourced from multiple formats)
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/53
2023-03-24 · Debasis Chatterjee

This will be very similar to the approach in Wellbore DDMS.
Well log data is accessed via an API, although the source data can be LAS, DLIS, LIS, or WITSML.
Similarly, in this case the source data can be in UKOOA, SEG-P1, or IOGP formats.
A uniform set of domain APIs should allow programmatic access to this information to help applications.
A common use case is an application that wants to display 2D navigation on a map, with an option to show SP labels at certain zoom levels.
Please also check this issue for Data Definition.
https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/issues/348
cc - @Keith_Wall (for information)

---

Issue 52: Difference in documentation and functionality of utility cp endpoint
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/52
2023-03-24 · Walter D
Milestone: M11 - Release 0.14 · Assignee: Diego Molteni

The documentation for the utility cp endpoint states: 'The source and destination dataset must be in the same sub-project.' However, the endpoint returns a 202 Accepted response even when the source and destination sub-projects are not the same.

---

Issue 57: Utilizing Standard Pipelines
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/57
2023-03-24 · David Diederich (d.diederich@opengroup.org)

I'd like this project to consider merging your CI pipeline work with the osdu/platform/ci-cd-pipelines project, and utilizing more jobs via includes rather than local CI config.
### Some Reasons to Consider
**Copy/paste code is hard to keep maintained**
Most of your CI logic appears to have started as a copy/paste from the main repository, anyway.
But keeping it local means that developers need to update changes in multiple places, and when they're working on the improvements they don't have your use case in mind.
This includes some recent developments to get the dev2 environment going, and also the changes to the FOSSA scanning -- you're still using an older, unmaintained image for the scanning.
And when I did the changes, I worked through test examples for maven and pip, the two supported build systems.
If npm had been there, I would have had it in mind.
**You miss new pipeline developments**
I'm moving pieces of the release management scripts into the pipeline to make more aspects of the tagging process happen automatically from branch creation.
For now, it's only dependency scanning data, but upgrades are planned to do more stages from there.
The GitLab Ultimate scanners check for security vulnerabilities, and the InfoSec team utilizes these results to plan their work.
These scanners aren't running on your project, but they would be if you included the appropriate CI configuration -- or at least, we'd see what needs to be improved for those scanners to function if they don't work out of the box.
**Your improvements aren't available to others**
Any improvements you make to the CI process after you've copied it remain in your local repository.
Others could benefit from having this available in a common location.
Supporting another language gives future OSDU projects more capabilities right at the start.
You'd even get to define the basic processes for these.
### Open to Discussion
I'd like to hear more about how the custom pipelines came to be, and if they are serving a need that can't be generalized.
For steps that are truly custom and unique to your project, it makes sense to have them as local CI config files.
If we do decide to start using more of the standard pipeline logic, I think we'll need to implement it slowly, a piece at a time.
Of course, if you think a big bang MR is better, I'd consider that, too.
Thank you in advance for your thoughts.

---

Issue 58: For Tenant there is no endpoint that can be used to list all the available tenants
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/58
2023-03-24 · Kamlesh Todai

There should be a way to list all the tenants to which the user has access. At present, there is no way to do that. If one created a tenant in the past and cannot remember its name, there is no way to find it.

---

Issue 65: Make cloud interfaces and abstract classes less GCP specific
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/65
2023-03-24 · Yan Sushchynski (EPAM)

Hello!
While implementing the `Anthos` provider, we ran into trouble creating the concrete `Journal` class: [here](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/feat/Anthos_GCP/app/sdms/src/cloud/providers/anthos/postgresql.ts#L96).
If we understand correctly, the `AbstractJournal` and `AbstractJournalTransaction` classes simply reproduce GCP Datastore interfaces. That is fine for the GCP implementation, because no extra effort is needed to implement the concrete journal classes. However, it is hard to implement concrete journal classes for other CSPs. This becomes obvious when comparing the number of lines in the concrete journal classes for GCP and other CSPs (e.g., [GCP](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/feat/Anthos_GCP/app/sdms/src/cloud/providers/google/datastore.ts) and [Azure](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/feat/Anthos_GCP/app/sdms/src/cloud/providers/azure/cosmosdb.ts)).
Also, using Datastore "low-level" logic in the core code makes that code hard to read and debug. E.g., for Anthos we use a PostgreSQL database, and the concrete implementation required a lot of workarounds to fit the Datastore "low-level" methods onto SQL.
For example, there are specific Datastore operators in the abstract class ([here](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/feat/Anthos_GCP/app/sdms/src/cloud/journal.ts#L25)).
I'd suggest refactoring the common code and switching from using Datastore methods to focusing on more general, high-level logic.
For example, instead of using [this](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/services/dataset/dao.ts#L46) in the common code
```ts
let query = journalClient.createQuery(
Config.SEISMIC_STORE_NS + '-' + dataset.tenant + '-' + dataset.subproject, Config.DATASETS_KIND);
query = query.filter('name', dataset.name).filter('path', dataset.path);
const [entities] = await journalClient.runQuery(query);
```
We could use something like this:
```ts
// Just an example
const entity = await journalClient.getEntity(dataset.path, dataset.name);
```
In this case, we could be more concise and cleaner in the concrete CSP implementations. Also, implementing high-level classes lets us use the best practices of each particular database.
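To make the suggestion concrete, here is a rough sketch of what such a high-level, CSP-neutral journal interface could look like (all names here are hypothetical, not a proposal for the actual code):

```ts
// Hypothetical high-level journal interface: each CSP maps these calls onto
// its own storage primitives instead of emulating Datastore semantics.
interface DatasetKey {
    tenant: string;
    subproject: string;
    path: string;
    name: string;
}

interface IHighLevelJournal {
    getEntity(key: DatasetKey): Promise<object | undefined>;
    saveEntity(key: DatasetKey, data: object): Promise<void>;
    deleteEntity(key: DatasetKey): Promise<void>;
    listEntities(filter: Partial<DatasetKey>): Promise<object[]>;
}
```

A Datastore-backed implementation would translate a `DatasetKey` into namespace/kind/filters internally, while a PostgreSQL implementation could map the same call onto a plain SQL SELECT.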