# infra-azure-provisioning issues

---

# Register Service Onboarding ([#54](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/54))
harshit aggarwal · updated 2021-06-14 · milestone: December

**Service name**: `Register Service`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
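As an illustration of this step, here is a minimal sketch (in Python, with hypothetical variable names — not the repo's actual script) of generating both output formats from a single map of settings:

```python
# Sketch: render one settings dict as both a direnv .envrc and a flat
# YAML mapping for the IntelliJ EnvFile plugin. The keys shown in the
# example are illustrative, not the actual generated variables.

def to_envrc(settings: dict) -> str:
    """Render settings as direnv 'export KEY="value"' lines."""
    return "\n".join(f'export {k}="{v}"' for k, v in sorted(settings.items()))

def to_envfile_yaml(settings: dict) -> str:
    """Render settings as a flat YAML mapping for the EnvFile plugin."""
    return "\n".join(f'{k}: "{v}"' for k, v in sorted(settings.items()))

if __name__ == "__main__":
    example = {"AZURE_TENANT_ID": "tenant-guid", "ENVIRONMENT": "DEV"}
    print(to_envrc(example))
    print(to_envfile_yaml(example))
```

The real scripts in `tools/variables` pull the values from the deployed infrastructure; this sketch only shows the two target formats.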
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc. that each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)

---

# Notification Service Onboarding ([#52](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/52))
Komal Makkar · updated 2021-06-14 · milestone: December

**Service name**: `Notification`
> Service has no support for partitions and can only operate using a partition with the exact name of 'opendes'.
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc. that each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
## Setup:
1. Create an empty repo `notification`
2. Add a variable into `Mirror Variables`
> Replace ADO_ORGANIZATION and ADO_PROJECT with your actual organization and project names.
| Variable | Value |
|----------|-------|
| NOTIFICATION_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/${ADO_PROJECT}/_git/notification` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
  displayName: 'notification'
  inputs:
    sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/notification.git'
    destinationGitRepositoryUri: '$(NOTIFICATION_REPO)'
    destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - notification` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/notification-azure` |
| MAVEN_INTEGRATION_TEST_OPTIONS | `-DargLine="-DNOTIFICATION_REGISTER_BASE_URL=$(NOTIFICATION_REGISTER_BASE_URL) -DAZURE_AD_TENANT_ID=$(AZURE_TENANT_ID) -DINTEGRATION_TESTER=$(INTEGRATION_TESTER) -DTESTER_SERVICEPRINCIPAL_SECRET=$(AZURE_TESTER_SERVICEPRINCIPAL_SECRET) -DAZURE_AD_APP_RESOURCE_ID=$(AZURE_AD_APP_RESOURCE_ID) -DNO_DATA_ACCESS_TESTER=$(NO_DATA_ACCESS_TESTER) -DNO_DATA_ACCESS_TESTER_SERVICEPRINCIPAL_SECRET=$(NO_DATA_ACCESS_TESTER_SERVICEPRINCIPAL_SECRET) -DENVIRONMENT=DEV -DHMAC_SECRET=$(AZURE_EVENT_SUBSCRIBER_SECRET) -DTOPIC_ID=$(AZURE_EVENT_TOPIC_NAME) -DNOTIFICATION_BASE_URL=$(NOTIFICATION_BASE_URL) -DREGISTER_CUSTOM_PUSH_URL_HMAC=$(REGISTER_CUSTOM_PUSH_URL_HMAC) -DOSDU_TENANT=$(OSDU_TENANT)"` |
| MAVEN_INTEGRATION_TEST_POM_FILE_PATH | `drop/deploy/testing/notification-test-azure/pom.xml` |
| SERVICE_RESOURCE_NAME | `$(AZURE_NOTIFICATION_SERVICE_NAME)` |
6. Create a Pipeline `service-notification` against the Repo `notification-service` for file `/devops/azure/pipeline.yml`
7. Execute the Pipeline

---

# Support blue-green deployments ([#47](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/47))
Sherman Yang · updated 2022-09-15 · milestone: December · assignee: Daniel Scholl

Enhance the deployment architecture to handle blue-green deployments. Infrastructure and pipelines need to be enhanced/updated to support blue-green deployments. This is needed to allow zero-downtime upgrades/redeployments after changes or key rotations. It would also allow time to test new deployments and fix issues before exposing them to clients.
https://docs.microsoft.com/en-us/samples/microsoft/aks-postgre-keyrotation/blue--green-secret-rotation-with-azure-keyvault-and-aks/

---

# Airflow Middleware Onboarding ([#1](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/1))
Kiran Veerapaneni · updated 2021-02-01 · milestone: December · assignees: Daniel Scholl, Hema Vishnu Pola [Microsoft] · due 2020-12-19

The ingest project requires the use of Airflow as a middleware layer running in AKS so that Ingest Services can leverage Airflow as a Workflow Engine.
- [x] Architecture Design of Required Azure Resources Necessary for Airflow
1. Postgres
2. Redis
3. File Storage
- [x] Host 3rd Party Source Code
1. airflow-function
2. airflow-statsd
- [x] GitLab Pipeline required to containerize and host containers
1. airflow-function
2. airflow-statsd
- [x] Host Helm Charts for installation
1. osdu-airflow
**Automation Onboarding**
- [x] Create pipelines for airflow deployment
- [x] Update the helm template task to run a python script that adds a namespace to the generated airflow yamls
- [x] Update the git ops task to copy the charts generated from airflow.targz in a different folder to the flux repository
- [x] Execute installation in Terraform
1. osdu-airflow
---
__Acceptance Criteria__
1. Airflow Installs automatically as part of the service_resources template.
2. All Tests Pass
3. All Pipelines Pass
4. Documentation Exists
5. Services are able to leverage the Airflow Workflow Engine

---

# Bug - AKS Default Node Pool OS Disk Size is set to 30G and not configurable ([#94](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/94))
Daniel Scholl · updated 2021-06-14 · milestone: January - 21

Currently the infrastructure allows configuration of many aspects of the VM for the Default Node Pool on AKS. OS disk size is not one of them and needs to be a configurable option.

Additionally, the default Kubernetes version is set to 1.18.8, which is no longer an available version of AKS in many regions.

---

# Bug - Infrastructure is missing fileshares for storage account in Service Resources ([#92](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/92))
Daniel Scholl · updated 2021-06-14 · milestone: January - 21

Required File Share folders for crs-conversion are not implemented by the infrastructure.

This will require a change in the helm chart for crs-conversion due to the required naming conventions of paths in the share not being able to use `_`.

---

# Bug - Indexer Service ADO pipelines fail with recent changes to Indexer Service due to integration with schema-service ([#90](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/90))
Daniel Scholl · updated 2021-06-14 · milestone: January - 21

ADO deployed pipelines recently started failing for the Indexer Service build. Research indicated that this was due to integration with schema-service and new environment variables required for testing that were not updated in the ADO Libraries.
This update is a manual change to an ADO Library and is therefore a breaking-change fix that has to be performed manually for any pipelines running Indexer-Service.
The following change needs to be added to the `Azure Service Release - indexer-service` ADO library.
Variable `MAVEN_INTEGRATION_TEST_OPTIONS`: add another parameter `-DHOST=$(HOST_URL)`

---

# Adding new properties in partition service required for Register Service ([#89](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/89))
harshit aggarwal · updated 2021-06-14 · milestone: January - 21

We want to add new properties to the partition service which are relevant to support multi-partitioning in the Register Service.
This [MR](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/65) has made the relevant changes in core-lib-azure.
These properties are as follows:
1. `eventgrid-resourcegroup` - This is required by Azure Event Grid SDK
2. `encryption-key-identifier` - This key is used for encrypting the subscription secret before storing it in Cosmos

---

# Feature change - Adding support for Pod auto scaling for celery workers in airflow ([#88](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/88))
Kishore Battula · updated 2021-06-14 · milestone: January - 21

**Why is this change needed**
This change is needed to autoscale Airflow workers based on the load on the workers. It helps reduce cost by tearing workers down when there is no load, and it automatically handles increases in load without manual intervention.
**Current Behavior**
Currently airflow is configured with a fixed set of worker pods; the value is set to 1.
**Expected Behavior**
Add configuration to support autoscaling. Airflow workers should scale up and down based on the autoscaling configuration.
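The expected behavior can be sketched as a small scaling decision — the thresholds below are illustrative assumptions, not the actual HPA/KEDA configuration:

```python
def desired_workers(queued_tasks: int, tasks_per_worker: int = 16,
                    min_workers: int = 1, max_workers: int = 10) -> int:
    """Scale celery workers with queue depth, clamped to [min, max].

    Thresholds here are hypothetical; in a real deployment the signal
    would come from autoscaler metrics, not this function.
    """
    # Ceiling division without importing math: ceil(a / b) == -(-a // b)
    needed = -(-queued_tasks // tasks_per_worker) if queued_tasks > 0 else 0
    return max(min_workers, min(max_workers, needed))
```

With these assumptions, an empty queue keeps the minimum of one worker, and a burst of queued tasks scales up until the configured ceiling is reached.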
**Acceptance criteria**
- Update all required documentation

---

# Deletion of storage container - "workflow-tasks-sharing" ([#85](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/85))
Aalekh Jain · updated 2021-06-14 · milestone: January - 21 · assignee: Daniel Scholl

**Service name**: Ingestion Service
**Parent issue**: #84
**Overview**: Data sharing across tasks in a workflow will now be done by creating a dedicated container for each DAG run (see #84). Hence we no longer need the container "workflow-tasks-sharing", which is currently used to share data across workflow tasks. This container needs to be deleted.
## Prerequisites:
## Steps:
**Infrastructure Onboarding**
- [x] **Deletion of storage container** - "workflow-tasks-sharing"
- [x] Obtain approval for any infrastructure requirements.
- [x] Implement any required infrastructure changes.
- [x] Obtain approval for merge request(s) containing infrastructure changes.
**Chart Onboarding**
**Integration Test Onboarding**
**Manual Onboarding**
**Automation Onboarding**

---

# Architecture Change - Data Partition - Add dedicated Storage Account for use by Ingestion Service ([#84](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/84))
Aalekh Jain · updated 2021-06-14 · milestone: January - 21 · assignee: Daniel Scholl

### Service name: Ingestion Service
#### Why is this needed
Currently we only have a single container where the data is stored for all the DAG runs. Data sharing across tasks is done by generating SAS tokens at the container level. This gives any DAG run access to the data from any other DAG run as well. This leads to a **security concern** regarding data storage and brings a requirement to change the existing infrastructure.
#### Current behavior
SAS tokens are generated at the container level. This container is dedicated to storing the data required for all the DAG runs.
#### Expected behavior
The new change will add a storage account that is dedicated to the ingestion workflow, where the containers will be created and deleted on the fly.
**Created** - Whenever we have the requirement to share data across tasks in workflow for a particular dag run.
**Deleted** - Once the DAG run is completed either with success or failure the container created for that DAG run is deleted.
As containers are created and deleted on the fly, a dedicated storage account is needed for this use case so that these temporary storage containers don't pollute the existing storage account.
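As a rough sketch of the per-run lifecycle described above, a container name could be derived from the DAG run id. The function below is an illustrative assumption, not the service's actual implementation; it only encodes Azure's container-naming constraints (3-63 lowercase letters, digits, or hyphens):

```python
import re

def container_name_for_run(dag_run_id: str) -> str:
    """Map a DAG run id to a valid Azure blob container name.

    Hypothetical helper: lowercase the id, replace disallowed
    characters with hyphens, collapse runs of hyphens, and enforce
    the 3-63 character length limits.
    """
    name = re.sub(r"[^a-z0-9-]", "-", dag_run_id.lower())
    name = re.sub(r"-+", "-", name).strip("-")
    return name[:63].ljust(3, "0")
```

A run like `manual__2021-01-06T08:52:18` would map to a distinct, valid container name, which is then created at the start of the run and deleted on completion (success or failure).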
**Other Solutions Considered**
We explored ways to handle this isolation at the directory level, where we would use a single container for storing data for all the DAG runs. However, there is no support for SAS generation at the directory level, which forced us to go with SAS generation at the container level.
#### Acceptance criteria
1. Adding a new storage account to the existing infra without breaking changes
2. Ensure the unit tests for infra-azure-provisioning pass
3. Update the ingestion service code to reflect the infra changes (`getSignedUrl` for the ingestion service to use the new storage account, where the SAS tokens will be generated for the newly created containers)
4. Update all required documentation
5. Update architecture diagram
Storage account config requirements -
1. Replication type - LRS
2. Backup requirements - No backup
3. Data retention requirements - No data retention
## Prerequisites:
> The lock must be removed on the storage account prior to executing this change due to the removal action of a container from the storage account.
## Steps:
**Infrastructure Onboarding**
- [x] Creation of **new Storage Account**
- [x] **Deletion of storage container** - "workflow-tasks-sharing"
- [x] Obtain approval for any infrastructure requirements.
- [x] Implement any required infrastructure changes.
- [x] Obtain approval for merge request(s) containing infrastructure changes.
**Chart Onboarding**
**Integration Test Onboarding**
**Manual Onboarding**
**Automation Onboarding**

---

# Support for overriding configuration in airflow ([#83](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/83))
Kishore Battula · updated 2021-06-14 · milestone: January - 21

Clients need to override the configuration that was checked in to the master infrastructure repository. As every client will have a different configuration, it is hard to maintain in the infrastructure repository. There should be a way for clients to use their own configuration as part of infrastructure deployment.

---

# BUG - Service Template - AKS Template plan always calculates 1 change ([#82](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/82))
Daniel Scholl · updated 2021-06-14 · milestone: January - 21

After the software upgrades from issue #75, the plan for Service Resources always calculates a diagnostic change due to a new implementation of diagnostic settings.
```
# azurerm_monitor_diagnostic_setting.aks_diagnostics will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "aks_diagnostics" {
id = "/subscriptions/929e9ae0-7bb1-4563-a200-9863fe27cae4/resourcegroups/osdu-mvp-srscholl-0uq8-rg/providers/Microsoft.ContainerService/managedClusters/osdu-mvp-srscholl-0uq8-aks|aks_diagnostics"
name = "aks_diagnostics"
# (2 unchanged attributes hidden)
- metric {
- category = "API Server (PREVIEW)" -> null
- enabled = true -> null
- retention_policy {
- days = 100 -> null
- enabled = true -> null
}
}
+ metric {
+ category = "AllMetrics"
+ enabled = true
+ retention_policy {
+ days = 100
+ enabled = true
}
}
# (7 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```

---

# Feature Change - Data Partition - Enable CORS configuration for Blob Containers on Storage Accounts ([#80](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/80))
Krishna Nikhil Vedurumudi · updated 2021-06-14 · milestone: January - 21

__Why is this change needed?__
The issue was raised by clients when they have tried to access the Signed URLs generated by File Service from a UI.
Generally, browsers perform a preflight request if an HTTP call is made to a different domain.
Because the current blob containers are not configured with CORS rules, the Storage Account's blob service returns a 403 response to the preflight request.
__Current behavior__
Here is a sample response sent by the Storage Account when no CORS rule is configured:
```
curl --location --request OPTIONS 'https://krvedurutest.blob.core.windows.net/' --header 'Origin: http://krveduru' --header 'Access-Control-Request-Method: PUT' --data-raw 'foo'
<?xml version="1.0" encoding="utf-8"?>
<Error>
<Code>CorsPreflightFailure</Code>
<Message>CORS not enabled or no matching rule found for this request.
RequestId:361fa49d-d01e-008d-0f09-e4a669000000
Time:2021-01-06T08:52:18.7406133Z</Message>
<MessageDetails>No CORS rules matches this request</MessageDetails>
</Error>
```
__Expected behavior__
Once CORS rules are enabled and the required domains are whitelisted, the OPTIONS call would return a 200 OK status and the subsequent PUT / GET calls will be allowed.
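The rule matching behind that flip from 403 to 200 can be sketched as follows — an illustration of Azure Storage's CORS semantics, not its actual implementation (the rule shapes mirror the `cors_rule` variable proposed below):

```python
def preflight_status(origin: str, method: str, rules: list) -> int:
    """Return 200 if any CORS rule allows this origin + method, else 403.

    Sketch of the preflight decision: a '*' entry in allowed_origins
    matches any origin, mirroring Azure Storage CORS rule behavior.
    """
    for rule in rules:
        origin_ok = "*" in rule["allowed_origins"] or origin in rule["allowed_origins"]
        method_ok = method in rule["allowed_methods"]
        if origin_ok and method_ok:
            return 200
    return 403  # "CORS not enabled or no matching rule found for this request"

# Example rule whitelisting the origin from the curl example above
rules = [{"allowed_origins": ["http://krveduru"], "allowed_methods": ["PUT", "GET"]}]
```

With an empty rule list (the backward-compatible default), every preflight yields 403, matching the current behavior shown in the curl output.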
__Current Design__
The current module doesn't allow for any support of blob properties:
```terraform
resource "azurerm_storage_account" "main" {
  # required
  name                     = lower(var.name)
  resource_group_name      = data.azurerm_resource_group.main.name
  location                 = data.azurerm_resource_group.main.location
  account_tier             = var.performance_tier
  account_replication_type = var.replication_type

  # optional
  account_kind              = var.kind
  enable_https_traffic_only = var.https
  tags                      = var.resource_tags

  # enrolls storage account into azure 'managed identities' authentication
  identity {
    type = "SystemAssigned"
  }
}
```
__Initial Design Proposal__
A simple enhancement can add support for CORS rules to the module as needed, and the module can remain backward compatible by defaulting the rule to an empty list (`[]`).
```
# Empty variable by default means *NO CORS* configuration
variable "cors_rule" {
  description = "CORS rules for storage account."
  type = list(object({
    allowed_origins    = list(string),
    allowed_methods    = list(string),
    allowed_headers    = list(string),
    exposed_headers    = list(string),
    max_age_in_seconds = number
  }))
  default = []
}

resource "azurerm_storage_account" "storage" {
  # required
  name                     = lower(var.name)
  resource_group_name      = data.azurerm_resource_group.main.name
  location                 = data.azurerm_resource_group.main.location
  account_tier             = var.performance_tier
  account_replication_type = var.replication_type

  # optional
  account_kind              = var.kind
  enable_https_traffic_only = var.https
  tags                      = var.resource_tags

  blob_properties {
    dynamic "cors_rule" {
      for_each = var.cors_rule
      content {
        allowed_origins    = cors_rule.value.allowed_origins
        allowed_methods    = cors_rule.value.allowed_methods
        allowed_headers    = cors_rule.value.allowed_headers
        exposed_headers    = cors_rule.value.exposed_headers
        max_age_in_seconds = cors_rule.value.max_age_in_seconds
      }
    }
  }
}
```
The Data Partition template design can also add the variable so it can be set independently for each configuration and then simply passed through to the module.
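As an illustration, the template could pass the variable straight through to the module call. This is only a sketch; the module source path and the origin value are hypothetical:
```
module "storage_account" {
  source = "../../modules/providers/azure/storage-account"   # path is illustrative

  # ...existing arguments unchanged...

  cors_rule = [{
    allowed_origins    = ["https://portal.example.com"]   # hypothetical UI origin
    allowed_methods    = ["GET", "PUT", "OPTIONS"]
    allowed_headers    = ["*"]
    exposed_headers    = ["*"]
    max_age_in_seconds = 3600
  }]
}
```
Leaving `cors_rule` unset keeps the default `[]`, so existing configurations are unaffected.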
---
_Possible Issues_
1. The feature needs to be tested to ensure there are no breaking changes. To mitigate the risk, the suggestion is to roll the feature out in two steps: Step 1, implement the change but do not enable its use, which ensures no breaking change; Step 2, enable the change.
2. CORS rules carry an environment-specific value: the allowed origins. Investigation is needed into what the rule should be for the necessary domain origins and how this would be known ahead of time, prior to the infrastructure build. Does this mean a desired DNS name must now be known before building out infra?
---
__Acceptance Criteria__
1. Design the feature to ensure it can be implemented as a non-breaking change.
2. Update Storage Module
3. Ensure all Module Unit Tests Pass
4. Ensure all Template Unit Tests and Integration Tests Pass
5. Update all required documentation
---
**Issue 77:** [Architecture Change - Central Resources - Add Graph Database](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/77) — Daniel Scholl
The addition of a Graph Database is required in order to support enhanced Entitlements and a new Entitlements Service based on graph database functionality. This database has been determined to be a Cosmos database leveraging the [Azure Cosmos Graph API](https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction).
The database for entitlements needs to be a single database for the OSDU stamp; it is not part of a Data Partition and is planned to be part of the Central Resources.
---
__Design__
Terraform Resources exist in AzureRM for managing a Gremlin Graph within a Cosmos Account. These resources are different than those used by a SQL Database and Container. Two options exist for the module work.
1. Enhance the CosmosDB Module to support both SQL and Gremlin Databases.
2. Create a separate module for each database type that is independent.
There are no known advantages at the moment to a single module, so the default decision is to use a new module for this Graph API functionality.
_Module Requirements_
- The module should mirror the existing Cosmos DB module as closely as possible.
_Template Requirements_
- The database will be named with the suffix `graph` to distinguish it from `table` or `db`
- The database will be created as part of the Central Resources template
- The database will be locked
- The database location and replication location will follow naming patterns consistent with Data Partitions
- The database by default will use the same type of throughput settings as CosmosDB
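Under these requirements, a minimal sketch of the new module's core resources using the AzureRM Gremlin resources might look like the following. The resource names, the partition key path, and the throughput variable are illustrative assumptions, not the final design:
```
resource "azurerm_cosmosdb_gremlin_database" "graph" {
  name                = "osdu-graph"   # hypothetical name following the `graph` suffix convention
  resource_group_name = data.azurerm_resource_group.main.name
  account_name        = azurerm_cosmosdb_account.main.name
  throughput          = var.database_throughput
}

resource "azurerm_cosmosdb_gremlin_graph" "entitlements" {
  name                = "Entitlements"   # hypothetical graph name
  resource_group_name = data.azurerm_resource_group.main.name
  account_name        = azurerm_cosmosdb_account.main.name
  database_name       = azurerm_cosmosdb_gremlin_database.graph.name
  partition_key_path  = "/partitionKey"

  index_policy {
    automatic      = true
    indexing_mode  = "Consistent"
    included_paths = ["/*"]
    excluded_paths = ["/\"_etag\"/?"]
  }
}
```
These are distinct resource types from `azurerm_cosmosdb_sql_database`/`azurerm_cosmosdb_sql_container`, which is why a separate module was chosen.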
---
__Acceptance Criteria__
1. Architecture Diagram Change
2. Modify or create an infrastructure module responsible for adding Cosmos Graph Database.
3. Modify Central Resources to add the additional database.
4. Ensure all Module Unit Tests Pass
5. Ensure all Template Unit Tests and Integration Tests Pass
6. Update all required documentation
---
**Issue 76:** [Add Terraform Service Resource Template Feature Flags](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/76) — Daniel Scholl
Service Resource Templates need the ability to deprecate certain functionality. This can be done by incorporating a feature flag option so that deployments of Service Resources can have certain features flagged on or off as necessary.
__Feature Option:__ _OSDU Namespace_
This option should be enabled by default; it creates the OSDU namespace and the original config map for backward compatibility. In order to enable multiple releases and namespaces in the future, the infrastructure cannot by default hardcode a single namespace.
__Feature Option:__ _Flux_
This option should be enabled by default; it installs and configures Flux. Moving forward, Flux should be a configurable option rather than mandated. Release management is planned to use full Helm chart capability to manage releases, so Flux will move to an opt-in feature selected at installation time.
---
**Issue 75:** [Upgrade Infrastructure tools and software dependencies](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/75) — Daniel Scholl
Infrastructure software versions for the following components need to be upgraded to the latest releases.
1. Terraform (0.12.29 -> 0.14.3)
2. Terraform AzureRM Provider (2.33.0 -> 2.41.0)
3. Terraform AzureAD Provider (1.0.0 -> 1.1.1)
4. Terraform Kubernetes Provider (1.11.3 -> 1.13.3)
5. Terraform Helm Provider (1.2.3 -> 2.0.1)
6. Application Gateway Ingress Controller (1.2.0 -> 1.3.0)
7. Jetstack Certificate Manager (0.16.1 -> 1.1.0)
8. Flux CD (1.5.0 -> 1.6.0)
9. Keda (1.4.2 -> 1.5.0)
10. Key Vault CSI Driver (0.0.13 -> 0.0.15)
11. Azure Active Directory Pod Identity (2.0.0 -> 3.0.0)
---
__Upgrade Path__
Testing the upgrade path for these upgrades requires the following process.
1. Upgrade Terraform from Version 0.12.29 --> 0.13.5.
This will require a single MR that updates the pipelines to use the new Terraform version. The state objects should be upgraded automatically. Manual environments would require a `terraform init`, `terraform plan`, and `terraform apply` on all 3 templates.
2. Upgrade Terraform from Version 0.13.5 --> 0.14.3
This will require a single MR that updates the pipelines to use the new Terraform version. The state objects should be upgraded automatically. Manual environments would require a `terraform init`, `terraform plan`, and `terraform apply` on all 3 templates.
A warning will occur on this version regarding deprecated provider version constraints:
```
Warning: Version constraints inside provider configuration blocks are deprecated
on main.tf line 41, in provider "azurerm":
41: version = "=2.33.0"
Terraform 0.13 and earlier allowed provider version constraints inside the
provider configuration block, but that is now deprecated and will be removed
in a future version of Terraform. To silence this warning, move the provider
version constraint into the required_providers block.
```
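To silence this warning, the provider constraints move into a `required_providers` block. A sketch of what that could look like, using the target versions listed above (the exact block lives in each template's `main.tf`):
```
terraform {
  required_version = ">= 0.14.3"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.41.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "=1.1.1"
    }
  }
}
```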
3. Now the full MR that accomplishes the provider upgrades can be merged.
This will require the `-upgrade` flag on `terraform init` to allow the provider version change. Manual environments would require `terraform init -upgrade`, `terraform plan`, and `terraform apply` on all 3 templates.
---
__Acceptance Criteria__
- All Module and Template Unit Tests Should Pass.
- All Template Integration Tests Should Pass.
- Application Should Load and Function Successfully.
- Documentation Should be updated.
- This will be an infrastructure major version change, but identify and document whether any upgrade path is possible.
---
**Issue 72:** [Using 'verify' phase in integrationTestMavenGoal](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/72) — harshit aggarwal
Using the verify phase in [integrationTestMavenGoal](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/devops/tasks/deployment-steps.yml#L24) will enable services using either the surefire or the failsafe plugin to run integration tests. Currently the WKS and Schema services use the failsafe plugin, which requires `mvn verify` to run integration tests.
A similar change was made earlier in [azure.yml](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml#L223) to run integration tests using `mvn verify`.
We can also keep `package` as the default value and override it with `verify` when required.
---
**Issue 65:** [CRS Conversion Service Onboarding](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/65) — Sumra Zafar
**Service name**: `CRS Conversion Service`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
## Setup:
1. Create an empty repo `crs-conversion-service`
2. Add a variable into `Mirror Variables`
> `ADO_ORGANIZATION` and `ADO_PROJECT` should be replaced with your actual organization and project names.
| Variable | Value |
|----------|-------|
| CRS_CONVERSION_REPO| `https://dev.azure.com/${ADO_ORGANIZATION}/$ADO_PROJECT/_git/crs-conversion-service` |
3. Edit the Mirror Pipeline and add the task
```
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
  displayName: 'crs-conversion-service'
  inputs:
    sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service.git'
    destinationGitRepositoryUri: '$(CRS_CONVERSION_REPO)'
    destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - crs-conversion` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/crs-converter-azure/crs-converter-aks` |
6. Create a Pipeline `service-crs-conversion` against the Repo `crs-conversion-service` for file `/devops/azure/pipeline.yml`
7. Upload the SIS_DATA folder located in the project data folder to the fileshare `crs-conversion` of the storage account in the service resources. (See below for sample code.)
8. Execute the Pipeline
**Setup needed for CRS Conversion Tests:**
- Apache SIS setup is needed. The SIS_DATA environment variable should point to the directory. [More Info](https://community.opengroup.org/osdu/platform/system/reference/crs-conversion-service/-/tree/master/apachesis_setup)
- A file share named `crs-conversion` needs to be created. An alternate share name can also be configured.
Sample Code:
```bash
az storage share create --name $SHARE_NAME --account-key $accountKey --account-name $accountName
```
- The SIS_DATA directory structure needs to be followed for the tests to pass. Data needs to be copied to the share location.
Sample Code:
```bash
SHARE_NAME="crs-conversion"
search_dir="apachesis_setup/SIS_DATA"

# Create the directory tree on the file share (Azure Files paths use forward slashes)
for dir in \
  "apachesis_setup" \
  "apachesis_setup/SIS_DATA" \
  "apachesis_setup/SIS_DATA/Databases" \
  "apachesis_setup/SIS_DATA/Databases/ExternalSources" \
  "apachesis_setup/SIS_DATA/Databases/SpatialMetadata" \
  "apachesis_setup/SIS_DATA/Databases/SpatialMetadata/log" \
  "apachesis_setup/SIS_DATA/Databases/SpatialMetadata/seg0" \
  "apachesis_setup/SIS_DATA/DatumChanges"; do
  az storage directory create --name "$dir" --account-name "$accountName" --account-key "$accountKey" --share-name "$SHARE_NAME"
done

# Upload every file, preserving its relative path on the share
find "$search_dir/" -type f -print0 | while IFS= read -r -d $'\0' file; do
  echo "File: $file"
  az storage file upload --account-name "$accountName" --account-key "$accountKey" --share-name "$SHARE_NAME" --source "$file" --path "$file"
done
```
---
**Issue 60:** [Schema service onboarding](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/60) — Aman Verma
**Service name**: `Schema Service`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)