OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

## Issue #70: Unit Service Onboarding Fix doc
2021-06-14 · Fabien Bosquet · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/70

This is a follow-up on Issue #55 (Unit Service Onboarding).
The git repository address and the ADO library name for this service are incorrect.
Since I am unable to create a new branch to propose a new pull request, I have attached a patch: [0001-Fix-unit-service-doc.patch](/uploads/be9fe3e6d91dbe0f6df7b524f9082a71/0001-Fix-unit-service-doc.patch)

Milestone: December

## Issue #57: New Service Bus topic subscribing to Event Grid Topic for WKS
2021-06-14 · Komal Makkar · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/57

## Type
<!-- Please choose the type of ticket. -->
- [x] Feature Request
- [ ] Bug Report
## Priority
- [x] High
- [ ] Medium
- [ ] Low
------------------------
------------------------
## Feature Request
__Why is this change needed?__
The WKS service consumes storage record-changed notifications via Service Bus R2. When we move to the R3 infrastructure, WKS will have to consume from Service Bus R3 instead. During this transition, no notifications may be lost in any environment: WKS must consume every notification from both SB R2 and SB R3. To avoid any loss, we can make WKS consume from Service Bus R3 before R3 reaches production.
__Current behavior__
Storage publishes to Service Bus R2 and WKS has a subscriber listening to the notifications.
__Expected behavior__
Introduction of **Service Bus R3** in the following fashion
![image](/uploads/d53b66dbb49637d2cff12d6b7564cfb8/image.png)
--------------------------
--------------------------
## Other information
<!-- Any other information that is important to this PR such as screenshots of how the component looks before and after the change. -->
The above can be broken down into the following tasks.
1. Service Bus topic will be created.
2. Service Bus topic will subscribe to Event Grid topic.
3. Service bus topic will have a subscriber which WKS will listen to.
To be discussed:
1. TTL for messages, DLQ, and other characteristics of the Service bus Topic.
2. Any special permissions/roles to be granted.
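The three tasks above can be sketched with the Azure CLI. This is a hedged sketch only: the resource group, namespace, and topic/subscription names are hypothetical, and the real resources belong in the Terraform templates rather than ad-hoc CLI calls.

```shell
# Sketch only: all names below are hypothetical placeholders.
RG="osdu-r3-rg"
SB_NAMESPACE="osdu-sb-ns"
EG_TOPIC_ID="/subscriptions/<sub>/resourceGroups/${RG}/providers/Microsoft.EventGrid/topics/recordstopic"

# 1. Create the Service Bus topic (the TTL is one of the open questions above).
az servicebus topic create \
  --resource-group "${RG}" --namespace-name "${SB_NAMESPACE}" \
  --name recordstopic --default-message-time-to-live P14D

# 2. Subscribe the Service Bus topic to the Event Grid topic.
az eventgrid event-subscription create \
  --name wks-sb-bridge \
  --source-resource-id "${EG_TOPIC_ID}" \
  --endpoint-type servicebustopic \
  --endpoint "/subscriptions/<sub>/resourceGroups/${RG}/providers/Microsoft.ServiceBus/namespaces/${SB_NAMESPACE}/topics/recordstopic"

# 3. Create the subscription that WKS will listen to (dead-lettering is also
#    one of the open questions above).
az servicebus topic subscription create \
  --resource-group "${RG}" --namespace-name "${SB_NAMESPACE}" \
  --topic-name recordstopic --name wks-subscription \
  --enable-dead-lettering-on-message-expiration true
```

The TTL and dead-lettering flags map directly onto item 1 of the "To be discussed" list.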
Milestone: December · Assignee: Komal Makkar

## Issue #56: CRS Catalog Service Onboarding
2022-08-23 · Nicholas Karsky · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/56

**Service name**: `CRS Catalog`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
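The generated `.envrc` files follow the usual direnv shape; a minimal sketch (every value below is a placeholder, not a real secret) looks like:

```shell
# Sketch of a generated .envrc for local development. Every value below is a
# placeholder; the real values come from the variables scripts linked above.
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000001"
export KEYVAULT_URI="https://example-kv.vault.azure.net/"
```

After generating the file, run `direnv allow` in that directory so direnv loads it into your shell.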
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service live in the `devops/azure` directory; you can use charts from other services as a model, since the charts are nearly identical apart from the environment variables and values each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
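The last item above refers to the `.http` scripts under `tools/rest`. As a rough shell equivalent, one sample call might look like the following; the host, token, and endpoint path are all illustrative placeholders, not the service's documented API.

```shell
# Illustrative only: DNS_HOST, TOKEN, and the endpoint path are placeholders.
# Consult the scripts under tools/rest for the real requests.
DNS_HOST="osdu.example.com"
TOKEN="<bearer-token-from-azure-ad>"

curl --silent --show-error \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "data-partition-id: opendes" \
  "https://${DNS_HOST}/api/crs/catalog/<endpoint>"
```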
## Setup:
1. Create an empty repo `crs-catalog-service`
2. Add a variable into `Mirror Variables`
> Replace ADO_ORGANIZATION and ADO_PROJECT with your actual Azure DevOps organization and project names.
| Variable | Value |
|----------|-------|
| CRS_CATALOG_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/${ADO_PROJECT}/_git/crs-catalog-service` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
displayName: 'crs-catalog'
inputs:
sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service.git'
destinationGitRepositoryUri: '$(CRS_CATALOG_REPO)'
destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - crs catalog` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/crs-catalog-azure/crs-catalog-aks` |
6. Create a new pipeline using the `crs-catalog-service` repo and the `/devops/azure/pipeline.yml` file of that repo.
7. Upload the [crs_catalog_v2.json](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/data/crs_catalog_v2.json) file located in the Project data folder to the fileshare `crs` of the storage account in the service resources.
8. Execute the Pipeline

Milestone: December · Assignee: Nicholas Karsky · 2020-12-19

## Issue #55: Unit Service Onboarding
2022-08-23 · Nicholas Karsky · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/55

**Service name**: `Unit`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service live in the `devops/azure` directory; you can use charts from other services as a model, since the charts are nearly identical apart from the environment variables and values each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
## Setup:
1. Create an empty repo `unit-service`
2. Add a variable into `Mirror Variables`
> Replace ADO_ORGANIZATION and ADO_PROJECT with your actual Azure DevOps organization and project names.
| Variable | Value |
|----------|-------|
| UNIT_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/${ADO_PROJECT}/_git/unit-service` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
displayName: 'unit'
inputs:
sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/reference/unit-service.git'
destinationGitRepositoryUri: '$(UNIT_REPO)'
destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - unit` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/unit-azure/unit-aks` |
6. Create a pipeline `service-unit` using the `unit-service` repo and the `/devops/azure/pipeline.yml` file of that repo.
7. Upload the [unit_catalog_v2.json](https://community.opengroup.org/osdu/platform/system/reference/unit-service/-/blob/master/data/unit_catalog_v2.json) file located in the Project data folder to the fileshare `unit` of the storage account in the service resources.
8. Execute the Pipeline

Milestone: December · Assignee: Nicholas Karsky · 2020-12-19

## Issue #54: Register Service Onboarding
2021-06-14 · harshit aggarwal · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/54

**Service name**: `Register Service`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service live in the `devops/azure` directory; you can use charts from other services as a model, since the charts are nearly identical apart from the environment variables and values each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)

Milestone: December · Assignee: harshit aggarwal

## Issue #52: Notification Service Onboarding
2021-06-14 · Komal Makkar · https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/52

**Service name**: `Notification`
> Service has no support for partitions and can only operate using a partition with the exact name of 'opendes'.
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service live in the `devops/azure` directory; you can use charts from other services as a model, since the charts are nearly identical apart from the environment variables and values each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
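A rest script of this kind mostly wraps simple HTTP calls. As a sketch (the host and token are placeholders; the header names follow the curl examples used by other OSDU services such as Storage):

```python
# Sketch of the call such a rest script wraps. The host, path and token are
# placeholders, not real endpoints; header names (Data-Partition-ID,
# Authorization) follow the curl examples used elsewhere in this tracker.
def build_request(host: str, path: str, partition: str, token: str):
    """Return (url, headers) for a typical OSDU service call."""
    url = f"https://{host}{path}"
    headers = {
        "Data-Partition-ID": partition,
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_request("<osdu-host>", "/api/storage/v2/records", "opendes", "<token>")
print(url)
```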
## Setup:
1. Create an empty repo `notification`
2. Add a variable into `Mirror Variables`
> `ADO_ORGANIZATION` and `ADO_PROJECT` should be replaced with your actual organization and project names.
| Variable | Value |
|----------|-------|
| NOTIFICATION_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/$ADO_PROJECT/_git/notification` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
  displayName: 'notification'
  inputs:
    sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/notification.git'
    destinationGitRepositoryUri: '$(NOTIFICATION_REPO)'
    destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - notification` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/notification-azure` |
| MAVEN_INTEGRATION_TEST_OPTIONS | `-DargLine="-DNOTIFICATION_REGISTER_BASE_URL=$(NOTIFICATION_REGISTER_BASE_URL) -DAZURE_AD_TENANT_ID=$(AZURE_TENANT_ID) -DINTEGRATION_TESTER=$(INTEGRATION_TESTER) -DTESTER_SERVICEPRINCIPAL_SECRET=$(AZURE_TESTER_SERVICEPRINCIPAL_SECRET) -DAZURE_AD_APP_RESOURCE_ID=$(AZURE_AD_APP_RESOURCE_ID) -DNO_DATA_ACCESS_TESTER=$(NO_DATA_ACCESS_TESTER) -DNO_DATA_ACCESS_TESTER_SERVICEPRINCIPAL_SECRET=$(NO_DATA_ACCESS_TESTER_SERVICEPRINCIPAL_SECRET) -DENVIRONMENT=DEV -DHMAC_SECRET=$(AZURE_EVENT_SUBSCRIBER_SECRET) -DTOPIC_ID=$(AZURE_EVENT_TOPIC_NAME) -DNOTIFICATION_BASE_URL=$(NOTIFICATION_BASE_URL) -DREGISTER_CUSTOM_PUSH_URL_HMAC=$(REGISTER_CUSTOM_PUSH_URL_HMAC) -DOSDU_TENANT=$(OSDU_TENANT)"` |
| MAVEN_INTEGRATION_TEST_POM_FILE_PATH | `drop/deploy/testing/notification-test-azure/pom.xml` |
| SERVICE_RESOURCE_NAME | `$(AZURE_NOTIFICATION_SERVICE_NAME)` |
6. Create a Pipeline `service-notification` against the Repo `notification-service` for file `/devops/azure/pipeline.yml`
7. Execute the Pipeline

*Milestone: December · Komal Makkar*

---

# Support blue-green deployments
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/47 · Sherman Yang

Enhance the deployment architecture to handle blue-green deployments. Infrastructure and pipelines need to be enhanced/updated to support them. This is needed to allow zero-downtime upgrades/redeployments after changes or key rotations. It would also allow time to test the new deployments and fix issues before exposing them to clients.
https://docs.microsoft.com/en-us/samples/microsoft/aks-postgre-keyrotation/blue--green-secret-rotation-with-azure-keyvault-and-aks/

*Milestone: December · Daniel Scholl*

---

# Airflow Middleware Onboarding
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/1 · Kiran Veerapaneni

The ingest project requires the use of Airflow as a middleware layer running in AKS so that ingest services can leverage Airflow as a workflow engine.
- [x] Architecture Design of Required Azure Resources Necessary for Airflow
1. Postgres
2. Redis
3. File Storage
- [x] Host 3rd Party Source Code
1. airflow-function
2. airflow-statsd
- [x] GitLab Pipeline required to containerize and host containers
1. airflow-function
2. airflow-statsd
- [x] Host Helm Charts for installation
1. osdu-airflow
**Automation Onboarding**
- [x] create Pipelines for airflow deployment
- [x] Update helm template task to run python script to add namespace for generated airflow yamls
- [x] Update git ops task to copy the charts generated from airflow.targz in different folder to flux repository
- [x] Execute Installation in Terrforom
1. osdu-airflow
---
__Acceptance Criteria__
1. Airflow Installs automatically as part of the service_resources template.
2. All Tests Pass
3. All Pipelines Pass
4. Documentation Exists
5. Services are able to leverage the Airflow Workflow Engine

*Milestone: December · due 2020-12-19 · Daniel Scholl, Hema Vishnu Pola [Microsoft]*

---

# Dataset-test-core hardly reusable for CSP providers
https://community.opengroup.org/osdu/platform/system/dataset/-/issues/19 · Rustam Lotsmanenko (EPAM)

Currently the core tests are hard to reuse and to build into CI/CD:
1. The pre-integration step looks unclear ([schema creation step](https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/testing/dataset-test-core/src/main/java/org/opengroup/osdu/dataset/Dataset.java#L57)): must the schema be created manually for each environment where the tests run? Proposal: automate schema creation.
2. Kinds usually indicate that they are intended for testing ([this kind can be used by platform users](https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/testing/dataset-test-core/src/main/java/org/opengroup/osdu/dataset/Dataset.java#L213)). Proposal: un-hardcode the kind and make it self-describing, for example `dataset--File.TestDataset***`.
3. Inconvenient formatting of the [ACLs](https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/testing/dataset-test-core/src/main/java/org/opengroup/osdu/dataset/Dataset.java#L230): there is no step that creates this particular ACL in Entitlements. Proposal: replace `testing.com` with a DOMAIN variable.
4. If this [commented-out part](https://community.opengroup.org/osdu/platform/system/dataset/-/blob/master/testing/dataset-test-core/src/main/java/org/opengroup/osdu/dataset/Dataset.java#L252) is included in the future, it can cause extra impact for other CSP providers; if not, it would be better to remove it.
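Point 3 above could look like the following sketch (the function and group names are illustrative, not the actual test code):

```python
import os

# Illustrative sketch only (hypothetical helper, not the actual test code):
# build the test ACL from a DOMAIN environment variable instead of the
# hardcoded "testing.com".
def build_acl(partition, domain=None):
    domain = domain or os.environ.get("DOMAIN", "testing.com")
    group = f"data.default.viewers@{partition}.{domain}"
    return {"viewers": [group], "owners": [group]}

print(build_acl("opendes", domain="example.com")["viewers"][0])
```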
cc @Dmitriy_Rudko

*Milestone: M1 - Release 0.1 · Matt Wise*

---

# Dataset: Fix common structure of POM files
https://community.opengroup.org/osdu/platform/system/dataset/-/issues/18 · Dmitriy Rudko

The Dataset `pom` files do not follow the common structure.
Please see details in ADR: https://community.opengroup.org/osdu/platform/system/home/-/issues/55
Please take a look at implementation example here:
https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/pom.xml#L123
**Actual:**
```xml
<repository>
<id>${gitlab-server}</id>
<url>https://community.opengroup.org/api/...</url>
</repository>
```
**Expected:**
```xml
<repository>
<id>${repo.releases.id}</id>
<url>${repo.releases.url}</url>
</repository>
...
<profile>
<id>Default</id>
...
```

*Milestone: M1 - Release 0.1 · Matt Wise*

---

# [Storage Service] Can't retrieve a record with encoded space symbol in ID
https://community.opengroup.org/osdu/platform/system/storage/-/issues/51 · Dmitriy Rudko

There is a **core** issue with record IDs that contain URL-encoded space symbols (`%20`):
**Create record RQ:**
```
curl --location --request PUT 'https://os-storage-attcrcktoa-uc.a.run.app/api/storage/v2/records' \
--header 'Data-Partition-ID: osdu' \
--header 'Authorization: Bearer <token>' \
--header 'Content-Type: application/json' \
--data-raw '[{
"id": "osdu:work-product-component--WellboreMarkerSet:Some%20Text",
...
}]'
```
**Get record RQ:**
```
curl --location --request GET 'https://os-storage-attcrcktoa-uc.a.run.app/api/storage/v2/records/osdu:work-product-component--WellboreMarkerSet:Some%20Text' \
--header 'Data-Partition-ID: osdu' \
--header 'Authorization: Bearer <token>'
```
**Get record RS:**
```
{
"code": 400,
"reason": "Validation error.",
"message": "getLatestRecordVersion.id: Not a valid record id. Found: osdu:work-product-component--WellboreMarkerSet:Some Text"
}
```
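The behaviour is reproducible outside the service: HTTP frameworks percent-decode path segments before handler-level validation runs, so the validator sees a literal space (a standalone sketch):

```python
from urllib.parse import unquote

# The ID as sent on the wire:
record_id = "osdu:work-product-component--WellboreMarkerSet:Some%20Text"

# After the path is percent-decoded, the validator sees a literal space --
# exactly the value quoted in the 400 message above.
decoded = unquote(record_id)
print(decoded)         # osdu:work-product-component--WellboreMarkerSet:Some Text
print(" " in decoded)  # True
```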
**Expected Result:**
- I should be able to retrieve it by ID that was used during record creation
**Actual Result:**
- It's not possible to retrieve the record

*Milestone: M1 - Release 0.1 · ethiraj krishnamanaidu, Thomas Gehrmann [slb], Dmitriy Rudko*

---

# Wellbore domain services - Integration/functional tests
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/wellbore/wellbore-domain-services/-/issues/10 · Yogesh Chintha

Hey @FrancoisVinyes, the IBM team ran the integration/functional tests for the wellbore-domain-services application using pytest. When we ran them we found a failure; here is the short summary:
"FAILED tests\integration\functional\tests\test_search.py::test_setup_for_search - AssertionError: unexpected status code, actual=405, expected=200"
Attached is a file with details of the failing test. Because of this failure some of the subsequent tests have been skipped, and we are not able to achieve 100% functional-test success.
We have analysed the tests and think the failure could be due to the "PUT" method requests that have now been replaced by "POST" in the common code; the tests may need to be updated accordingly. Please take a look and let us know if you have faced this error or what the problem could be. Any help will be appreciated. Thank you.
[IntegrationTest_Issue.txt](/uploads/f3e2941627b7cecdb5ef51e74d8844c1/IntegrationTest_Issue.txt)

*Milestone: M1 - Release 0.1 · Francois Vinyes, Luc Yriarte*

---

# API documentation - R3 Ingestion Service
https://community.opengroup.org/osdu/platform/system/home/-/issues/73 · Debasis Chatterjee

I refer to this link:
https://community.opengroup.org/osdu/documentation/-/wikis/Core-Services-Overview
Click on Ingestion Service
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-service/-/blob/master/README.md
The README shows "R2", and the example uses File instead of Dataset.
Please plan to update this portion for R3.
Thank you.

*Milestone: M1 - Release 0.1 · due 2021-02-26 · Dmitriy Rudko, Matt Wise*

---

# 502 Gateway Error on calling Redis get and set functions
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/15 · Walter D

Hi @DiegoMolteni,
We have been getting the 502 Gateway Error for some APIs. One of the APIs is Create Subproject API. On debugging the code the error is thrown on the following 2 lines in compliance.ts file in the create subproject flow:
1. `await this._cache.set(ltag, results.invalidLegalTags.length === 0);`
2. `await this._cache.set(ltag, results.invalidLegalTags.length === 0);`
Interestingly, this is happening on the DEV environment; I've not encountered the issue locally. Have you faced this error, or have a clue what the problem could be? Any help would be appreciated. Thank you.

*Milestone: M1 - Release 0.1 · due 2021-02-26 · Diego Molteni, Daniel Perez*

---

# Move ingestion DAGs and operators under a folder named osdu
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/issues/30 · Kishore Battula

Currently the DAGs and operators are in the top-level folder `src`. Clients deploying these DAGs and operators will copy the DAGs into their DAGs folder and the operators into their operators folder.
In a customer environment there will be more DAGs and operators, and there is a chance that the Python module names will conflict with the names used in this repository.
Can we move the DAGs, operators and hooks into an `osdu` folder (or any other folder name) so that they are easier to manage? This has to be done in this repository, because the DAGs use import statements for the operators and libs, which will fail if someone puts them under a different folder structure.
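The proposed layout can be sketched in isolation (the module and attribute names below are made up for the demonstration; the point is that qualified imports cannot be shadowed by a customer's top-level module of the same name):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Hypothetical layout illustrating the proposal: everything under one
# top-level "osdu" package instead of loose modules in src/.
root = Path(tempfile.mkdtemp())
ops = root / "osdu" / "operators"
ops.mkdir(parents=True)
(root / "osdu" / "__init__.py").write_text("")
(ops / "__init__.py").write_text("")
(ops / "update_status_op.py").write_text("NAME = 'osdu-update-status'\n")

# A customer's own top-level update_status_op.py no longer conflicts,
# because imports are qualified with the package name:
sys.path.insert(0, str(root))
mod = importlib.import_module("osdu.operators.update_status_op")
print(mod.NAME)  # osdu-update-status
```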
**Benefits**
- Avoids naming conflict
- Easy to propagate updates from this repository into Airflow: we can replace the entire folder at the destination

*Milestone: M1 - Release 0.1*

---

# CRS service not integrated with core services
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/27 · Greg

The CRS service is not integrated with core services.

*Milestone: M1 - Release 0.1 · Chris Zhang*

---

# Create LegalTag when the LegalTag already exists returns with poor formatting
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/26 · Greg

POST – Create LegalTag. Functionality is working but error handling needs improvement: "A LegalTag already exists for the given name opendes-public-usa-dataset-osduonaws-testing. **Can\\u0027t** create again. Id is -1934363762"

*Milestone: M1 - Release 0.1 · Chris Zhang*

---

# CORS not implemented properly in os-core-common ResponseHeaders class
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/21 · Spencer Sutton (suttonsp@amazon.com)

Most of the Spring filters in our services set response headers from the ResponseHeaders class in os-core-common:
![Screen_Shot_2021-01-14_at_4.23.16_PM](/uploads/87802f333ef061a787bf18405dec5da4/Screen_Shot_2021-01-14_at_4.23.16_PM.png)
There are several problems with the way these headers are set:
* Access-Control-Allow-Origin resolves to `[*]`, which doesn't work at all with front-end apps; it would need to resolve to `*` to work
* Even if Access-Control-Allow-Origin did resolve to `*`, it would be very insecure
* Similarly, Access-Control-Allow-Methods resolves to a list, which also doesn't work at all with front-end apps. It needs to resolve to a comma-delimited string
Without addressing these issues, efforts like the Admin UI will fail because it won’t be able to interact with the platform without proper CORS implementation in place.
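The list-vs-string problem is easy to reproduce in isolation (a language-neutral sketch in Python; Java's `List.toString` produces the analogous `[*]`):

```python
# Stringifying a list directly yields an invalid header value
# (Python's repr adds quotes; Java's List.toString gives "[*]" -- same shape):
origins = ["*"]
print(str(origins))        # ['*']

# A compliant Access-Control-Allow-Origin needs the bare value:
print(origins[0])          # *

# Access-Control-Allow-Methods must be a comma-delimited string:
methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
print(", ".join(methods))  # GET, POST, PUT, DELETE, OPTIONS
```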
*Proposed Resolution:*
* Set Access-Control-Allow-Origin header from environment variable
* Make Access-Control-Allow-Methods resolve correctly

*Milestone: M1 - Release 0.1 · ethiraj krishnamanaidu, Joe, Srihari Prabaharan, Chris Zhang, Matt Wise*

---

# ADR: Workflow Service - R3 Improvements
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/71 · Dmitriy Rudko

## Context
While working with the different streams, we identified several critical design issues with the Workflow service that need to be addressed in R3:
* The Workflow service is not just an `abstraction` over the orchestration engine (Airflow); it also contains OSDU-specific logic (`DataType`, `WorkflowType`, `UserType`). This logic should be moved to the Ingestion Service.
* The Workflow Service does not respect Data Partitions: users can potentially trigger any Workflow in the system.
* There is no functionality to register a new Workflow.
## Scope
- Add functionality to register new Workflows
- Add support of Data Partitions
- Remove OSDU specific workflow functionality (`DataType`, `WorkflowType`, `UserType`) from Workflow Service.
- Allow OSDU clients directly trigger registered Workflows, without Ingestion Service.
- Update the API to reflect the [Google REST API Design Guide](https://cloud.google.com/apis/design). Please see the [OpenAPI Spec](https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/blob/refactoring_workflow/docs/api/openapi.workflow.yaml) for details.
## Decision
- Accept API changes as a part of R3
- Accept Workflow > Core changes as a part of R3
- Deprecate the existing Workflow API (startWorkflow, etc.)
## Rationale
- Registration of workflows required for E2E R3 Ingestion
- API spec is on critical path for CSV Ingestion
## Consequences
- Most of the Core logic changes will be implemented by GCP
- Will require support from CSPs, as the SPI layer will be touched.
## When to revisit
- Post R3
## Technical details:
![R3_Workflow_-_L3__Target](/uploads/75f02f3ec73ee85a95bb668dc7426df2/R3_Workflow_-_L3__Target.png)
![R3_Workflow_-_L4__Target](/uploads/03429b8474b61049b4327ae920969374/R3_Workflow_-_L4__Target.png)
### SPI Layer:
- `IWorkflowEngineService` - **Has default implementation.** Abstraction over orchestration engine. By default we have implementation for Airflow.
- `IWorkflowManagerService` - **Has default implementation.** Implements CRUD over Workflow entity.
- `IWorkflowRunService` - **Has default implementation.** Implements CRUD over the Workflow Run entity.
- `IWorkflowMetadataRepository` - Should be implemented by the CSP! Repository for the Workflow entity.
- `IWorkflowRunRepository` - Should be implemented by the CSP! Repository for the Workflow Run entity.

*Milestone: M1 - Release 0.1 · Dmitriy Rudko*

---

# Regex for kind
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/34 · Thomas Gehrmann [slb]

For consistency, please enforce the same rules for `kind` (the SchemaId) as the Storage service does:
```regex
KIND_REGEX = "^[\\w\\-\\.]+:[\\w\\-\\.]+:[\\w\\-\\.\\/]+:[0-9]+.[0-9]+.[0-9]+$"
as used in regex101 ^[\w\-\.]+:[\w\-\.]+:[\w\-\.\/]+:[0-9]+.[0-9]+.[0-9]+$
```
The regex pattern should be part of the API spec.

*Milestone: M1 - Release 0.1 · ethiraj krishnamanaidu*
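As a sanity check, the Storage pattern quoted above can be exercised directly (a Python port of the Java string; the pattern is reproduced as-is from the source, including the unescaped dots between the version digits):

```python
import re

# Python equivalent of the Java string above (one level of backslash
# escaping removed). Quoted as-is from the Storage source.
KIND_REGEX = re.compile(r"^[\w\-\.]+:[\w\-\.]+:[\w\-\.\/]+:[0-9]+.[0-9]+.[0-9]+$")

print(bool(KIND_REGEX.match("osdu:wks:work-product-component--WellboreMarkerSet:1.0.0")))  # True
print(bool(KIND_REGEX.match("osdu:wks:wellbore")))  # False -- version part missing
# Because the version dots are unescaped, `.` matches any character there,
# so a malformed version such as "1x0x0" also slips through:
print(bool(KIND_REGEX.match("partition:source:type:1x0x0")))  # True
```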