infra-azure-provisioning issues
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/248
data-partition terraform unsupported attribute error for resource "azurerm_key_vault_secret" "storage_account_blob_endpoint"
Fabien Bosquet - 2023-01-18T16:19:21Z

I have an error when following the manual install of the azure infrastructure.
The issue appears when running `terraform plan` for the `data-partition` as described here.
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/infra/templates/osdu-r3-mvp/data_partition/README.md
```
terraform plan -var-file custom.tfvars
╷
│ Warning: Deprecated attribute
│
│ on ../../../modules/providers/azure/aks/main.tf line 169, in resource "azurerm_kubernetes_cluster" "main":
│ 169: addon_profile[0].oms_agent[0].log_analytics_workspace_id
│
│ The attribute "log_analytics_workspace_id" is deprecated. Refer to the provider documentation for details.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: Argument is deprecated
│
│ with module.service_bus.azurerm_servicebus_namespace_authorization_rule.main,
│ on ../../../modules/providers/azure/service-bus/main.tf line 144, in resource "azurerm_servicebus_namespace_authorization_rule" "main":
│ 144: namespace_name = azurerm_servicebus_namespace.main.name
│
│ Deprecated in favor of "namespace_id"
│
│ (and 18 more similar warnings elsewhere)
╵
╷
│ Error: Unsupported attribute
│
│ on secrets.tf line 103, in resource "azurerm_key_vault_secret" "storage_account_blob_endpoint":
│ 103: value = module.storage_account.endpoint
│ ├────────────────
│ │ module.storage_account is a object
│
│ This object does not have an attribute named "endpoint".
```
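
For reference, the root cause is that the `storage-account` module referenced by `data_partition/secrets.tf` does not export an output named `endpoint`, so the reference fails. A minimal sketch of the kind of fix, assuming the module wraps a single `azurerm_storage_account` resource named `main` (the resource and file names here are illustrative, not necessarily the repo's actual ones):

```hcl
# modules/providers/azure/storage-account/output.tf (illustrative path)
# Expose the blob endpoint so data_partition/secrets.tf can keep referencing
# module.storage_account.endpoint; alternatively, point secrets.tf at whichever
# endpoint output the module already exposes.
output "endpoint" {
  description = "Primary blob service endpoint of the storage account"
  value       = azurerm_storage_account.main.primary_blob_endpoint
}
```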

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/247
Upgrade Terraform version to latest stable version
shivani karipe - 2023-03-27T21:34:58Z

[Terraform 0.14.4](https://releases.hashicorp.com/terraform/0.14.4/) is the version we are currently using to create the infrastructure.
* ~~To upgrade terraform to latest version `1.3.4`~~
* ~~To upgrade golang version to `1.18.8`~~
* ~~To upgrade azurerm provider?~~
* To upgrade azuread provider?
This initiative started so that we can take advantage of some azurerm features (such as the Key Vault features below) and gain the flexibility to use newer resource attributes that may not be available in the current provider version; upgrading the Terraform version is the first step.
While researching destroy and greenfield scenarios, we noticed that the following Key Vault options are not available in our current azurerm provider version (2.98), only in azurerm 3.33:
```hcl
key_vault {
  purge_soft_delete_on_destroy    = true
  recover_soft_deleted_key_vaults = true
}
```
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/features-block
This is affecting the destroy scenario.
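
A rough sketch of what the upgraded version constraints and the Key Vault features block could look like; the versions are simply the ones mentioned above, not a tested combination:

```hcl
terraform {
  required_version = ">= 1.3.4"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.33"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.30"
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      # Only usable after the provider upgrade, per the note above
      purge_soft_delete_on_destroy    = true
      recover_soft_deleted_key_vaults = true
    }
  }
}
```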
Motivation behind this (non-functional requirements):
* We would recommend taking a look at this: https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-0-general-availability
* Greater performance, interoperability between Terraform versions, and new features such as sensitive strings in state.
* Additionally, azurerm and azuread recommend upgrading the Terraform version before upgrading the provider version.
* We would then be able to upgrade the provider versions.
About the golang upgrade:
* The golang version is very old (go v1.12, from 2 years ago), and we have seen from time to time that some library is no longer available for it in unit tests.
* Libraries are outdated and have compatibility issues with newer import versions.
About the provider upgrades (worth thinking about for the near future):
* Current azurerm version: 2.98.0 / latest 3.33
* Current azuread version: 1.1.1 / latest 2.30 (2 years ago)
* We noticed some recent changes in azuread provider resources that are not reflected in our modules and may cause unexpected behavior in the future, for example application_ad (which we already faced in the past); if you look at the resource in the recent stable provider version, it no longer matches the module as used in the community module version.
* Deprecated attributes removed in newer providers
* Renamed attributes
* Superseded resources (a resource can be deprecated or removed in the upgraded version)
Eventually, the Terraform community code will become obsolete if there are changes in the Azure ARM API that are no longer compatible with the azurerm/azuread providers.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/246
Feature - Security rules for OSDU Infrastructure - Network
Arturo Hernandez [EPAM] - 2023-08-16T18:28:56Z

| Done | Infra Relation | Rule |
|------|----------------|---------------------------------------------------------------------------------------------|
| !740 | NETWORK | ~~Ensure keyvault is recoverable~~ |
| !825 | NETWORK | ~~Ensure that public network access is disabled for Azure Key Vaults~~ |
| !843 | NETWORK | Ensure that Azure CosmosDB does not allow access from all networks |
| !776 | NETWORK | ~~Ensure that public network access is disabled in Redis Cache~~ |
| !776 | NETWORK | ~~Ensure that Redis Cache uses private link~~ |
| !620 #218 | NETWORK | ~~Ensure that Azure Kubernetes Service Private Clusters is enabled~~ |
| !825 | NETWORK | ~~Ensure that Azure Key Vaults use Private Links~~ |
| | NETWORK | Ensure that Postgres DB use Private Links |
| | NETWORK | Ensure that Storage Accounts use Private Links |
| !879 | NETWORK | Ensure that Event Grid uses Private Links |
* [ ] All changes must be well documented, including any expected downtime
* [ ] TF scripts should work without errors in greenfield environments
* [ ] If a TF brownfield apply involves any migration or downtime, it must be documented
* [ ] Check if Cosmos/resource backup policies are affected by private endpoints
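
To illustrate the kind of change the remaining private link rows imply, here is a minimal sketch of a private endpoint for the blob subresource of a storage account; the resource names, subnet, and surrounding module layout are hypothetical rather than the repo's actual ones:

```hcl
resource "azurerm_private_endpoint" "storage_blob" {
  name                = "osdu-storage-blob-pe"               # hypothetical name
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  subnet_id           = azurerm_subnet.private_endpoints.id  # hypothetical subnet

  private_service_connection {
    name                           = "osdu-storage-blob-psc"
    private_connection_resource_id = azurerm_storage_account.main.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }
}
```

In practice this would normally be paired with a private DNS zone group on the endpoint and with public network access disabled on the account itself.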

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/245
Feature - Security rules for OSDU Infrastructure - Encryption
Arturo Hernandez [EPAM] - 2023-08-01T22:26:09Z

From EPAM security recommendations we got the following suggestions for *ENCRYPTION* to comply with:

| Done | Infra Relation | Rule |
|------|----------------|---------------------------------------------------------------------------------------------|
| [ ] | ENCRYPTION | Ensure Storage Service Encryption is enabled for Storage Accounts |
| [ ] | ENCRYPTION | Ensure that Storage Accounts have infrastructure encryption enabled |
| [ ] | ENCRYPTION | Ensure Storage Accounts are using the latest version of TLS encryption |
| [ ] | ENCRYPTION | Ensure that "OS and Data" disks are encrypted with Customer Managed Key |
| [ ] | ENCRYPTION | Ensure that public network access is disabled in Managed Disks |
| [ ] | ENCRYPTION | Ensure that all unattached VM disks are encrypted |
| [ ] | ENCRYPTION | Ensure that Container Registries are configured to disable public network access |
| [ ] | ENCRYPTION | Ensure that Container Registries are encrypted with a customer-managed key |
All changes must be well documented, including any expected downtime.
It would be nice to test this in greenfield environments as well.
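
As a sketch of how the storage account rows translate into Terraform (an illustrative resource, not the repo's actual module):

```hcl
resource "azurerm_storage_account" "example" {
  name                     = "osduexamplesa"                  # hypothetical name
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # "latest version of TLS encryption"
  min_tls_version = "TLS1_2"

  # "infrastructure encryption enabled" (Storage Service Encryption itself is on by default)
  infrastructure_encryption_enabled = true
}
```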

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/243
Upgrade AKS/Istio in Flux based model
Vasyl Leskiv [SLB] - 2022-09-29T19:34:08Z

- Flux based model - Istio v1.11.3 (max supported AKS version: 1.22)
- Helm based model - Istio v1.14.0 (max supported AKS version: 1.24)
As we decided to continue supporting the Flux based model, it would be good to sync the Istio version with the Helm based model so that AKS can be upgraded to the latest version according to client requirements.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/242
Azure AD for authentication to be used to connect to PostgresDB
devesh bajpai - 2022-09-02T07:50:09Z

As of today, Airflow uses credentials stored in Key Vault to connect to Postgres via the PG bouncer service.
A customer has raised a concern regarding how the Postgres DB is being used by Airflow. As a recommended best practice, Azure AD should be used for authentication (see https://docs.microsoft.com/en-us/azure/postgresql/single-server/concepts-azure-ad-authentication and https://docs.microsoft.com/en-us/azure/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication).

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/241
Drop support of keda_v2_enabled flag on services side
Vasyl Leskiv [SLB] - 2022-08-30T20:48:30Z

Since the feature flag has been added to the service repos beyond the infrastructure repo (for example helm-charts-azure), we need to clean up on the services side and drop the file [infra-azure-provisioning/docs/keda-upgrade.md](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/keda-upgrade.md), as it doesn't make sense to support KEDA v1 anymore.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/240
Swagger Sanity Phase 2: Using springdoc-openapi for swaggers APIs
Komal Makkar - 2024-03-11T08:38:40Z

## Context
- The swagger APIs are maintained as part of each service, for instance [Storage Swagger Controller](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/swagger/HomeController.java).
- The swagger doc is handwritten and manually maintained, for instance [Storage Service Swagger](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/docs/api/storage_openapi.yaml).
## Assumption
The swagger for all services is in version 3 already. We continue supporting swagger 2 for business continuity.
## Problem statement
The cost of maintenance of the swagger doc is high. We have stale swaggers in the system already.
## Proposed solution
We have the following frameworks that will help lower the cost of upkeep of swagger.
1. ```springdoc-openapi```
2. ```springdoc-openapi-webflux-ui```
## Scope / Acceptance Criteria
The above effort will encapsulate the following
1. Controller, Model improvement to retain the swagger information that is present today.
2. The swagger endpoint path will be consistent with what we see [today](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/swagger/HomeController.java#L24).
3. The deprecation of the swagger controllers we maintain should be a seamless experience for the user.
# Target Release
@krveduru to add
# FAQ
1. How will we manage migration to the next generation of swaggers, when available?
We will count on Spring Boot's adaptation of the next generation to be backward compatible.
2. Will there be regressions in the existing experience?
No, as specified in the Scope.
## Useful references
https://www.baeldung.com/spring-rest-openapi-documentation

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/234
Airflow Logs getting truncated in log Analytics
devesh bajpai - 2022-08-02T02:52:16Z

Airflow logs created in blob store are sent to Log Analytics
refer : https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/source/airflow-function
but it is observed that when an Airflow log has multiple lines, those logs are truncated in Log Analytics
> e.g.
airflow logs in blob store
<pre>
[2022-06-10, 06:23:09 UTC] {validate_schema.py:322} ERROR - Schema validation error. Data field.
[2022-06-10, 06:23:09 UTC] {validate_schema.py:323} ERROR - Manifest kind: osdu:wks:work-product-component--WellboreTrajectory:1.1.0
[2022-06-10, 06:23:09 UTC] {validate_schema.py:324} ERROR - Error: None is not of type 'string'
Failed validating 'type' in schema['properties']['data']['allOf'][3]['properties']['AppliedOperations']['items']:
{'type': 'string'}
On instance['data']['AppliedOperations'][0]:
None
</pre>
> export from logAnalytics
<pre>
--------------------------------------------------------------------------------",INFO
"2022-06-10 06:23:09,305","Error: None is not of type 'string'",ERROR
"2022-06-10 06:23:09,305","Manifest kind: osdu:wks:work-product-component--WellboreTrajectory:1.1.0",ERROR
"2022-06-10 06:23:09,304","Schema validation error. Data field.",ERROR
"2022-06-10 06:23:09,026","Exporting the following env vars:
</pre>

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/230
Secret Service Onboarding / EDS
Arturo Hernandez [EPAM] - 2023-02-28T13:29:26Z

**Service name**: `SECRET Service`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific (see the sketch after this checklist); if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
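
As a sketch of the partition-specific secret mentioned in the first infrastructure step, assuming the service needs something like a storage connection string (the secret name, resource references, and module output are hypothetical):

```hcl
# data_partition template -- hypothetical partition-specific secret for the new service.
resource "azurerm_key_vault_secret" "secret_service_storage" {
  name         = "secret-service-storage"                                # hypothetical secret name
  value        = azurerm_storage_account.main.primary_connection_string  # whatever value the service needs
  key_vault_id = module.keyvault.keyvault_id                             # hypothetical module output
}
```

The matching key would then be added to the createPartition request in `tools/rest/partition.http` so the Partition service exposes it to the new service.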

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/228
[BUG] Swagger page could not be displayed
Rostislav Vatolin - 2022-06-09T18:53:20Z

The Swagger page could not be displayed after the recent upgrade of springfox-boot-starter to 3.0.0 for the partition service. The AuthorizationPolicy for the partition service requires a fix.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/227
[AKS Policies] Fix volume types policy to comply with least privilege principle
Arturo Hernandez [EPAM] - 2022-06-07T12:25:37Z

Currently the policy applied for "Allowed volume types" allows `*`:
```json
{
"effect": { "value": "audit"},
"excludedNamespaces": {"value": ["kube-system", "gatekeeper-system", "azure-arc"]},
"allowedVolumeTypes": {"value": ["*"]}
}
```
To support the keyvault and CSI providers, we need to adopt the least privilege principle and get rid of the "all" expression.
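
A sketch of what tightened parameters could look like, expressed as the Terraform value that would be passed to the policy assignment; the volume type list is illustrative and should be derived from what the Key Vault CSI driver and existing workloads actually mount:

```hcl
locals {
  # Replaces the "*" wildcard with an explicit allow-list (illustrative values).
  allowed_volume_types_parameters = jsonencode({
    effect             = { value = "audit" }
    excludedNamespaces = { value = ["kube-system", "gatekeeper-system", "azure-arc"] }
    allowedVolumeTypes = { value = ["configMap", "secret", "emptyDir", "projected", "downwardAPI", "persistentVolumeClaim", "csi"] }
  })
}
```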
Related to #218

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/226
Error during Infra Provisioning on Azure via DevOps
Ernesto Guimaraes - 2022-05-10T18:22:44Z

I followed the steps to deploy the R3 M10 using Azure DevOps Pipelines. During the TF Apply step, I received this error. Any suggestion on what is happening here?
Thank you
A resource with the ID "/subscriptions/XXXXXXX/resourceGroups/OSDU-Exploration-XXXXXX-pa0r-rg/providers/Microsoft.KeyVault/vaults/osdu-exploration-pa0r-kv/objectId/YYYYYYYY" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_key_vault_access_policy" for more information.

on ../../../modules/providers/azure/keyvault-policy/main.tf line 15, in resource "azurerm_key_vault_access_policy" "keyvault":

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/225
Disable Registry Scan feature for flux
Dzmitry_Paulouski (slb) - 2022-04-12T09:17:46Z

There are a lot of error messages in the Flux pod.
They are caused by Flux checking for new images, but access to container registry is not provided:
https://fluxcd.io/legacy/flux/faq/#how-do-i-give-flux-access-to-an-image-registry
_Flux transparently looks at the image pull secrets that you attach to workloads and service accounts, and thereby uses the same credentials that Kubernetes uses for pulling each image. In general, if your pods are running, then Kubernetes has pulled the images, and Flux should be able to access them too._
Since we do not use this feature, it can be disabled: https://fluxcd.io/legacy/flux/faq/#can-i-disable-flux-registry-scanning

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/224
Create Hierarchical storage account to support File collection
Harshit Saxena - 2023-08-16T10:39:58Z

To support the file collection feature, we need to initialize Azure Data Lake in the storage account.
MR - https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/merge_requests/570
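
A minimal sketch of what enabling the hierarchical namespace (Data Lake Storage Gen2) looks like on an `azurerm_storage_account`; the resource name is hypothetical, and depending on the provider version, changing `is_hns_enabled` on an existing account may force a replacement since it is normally set at creation time:

```hcl
resource "azurerm_storage_account" "file_collection" {
  name                     = "osdufilecollection"             # hypothetical name
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  account_kind             = "StorageV2"

  # Hierarchical namespace turns the blob service into Data Lake Storage Gen2,
  # which is what the file collection feature needs.
  is_hns_enabled = true
}
```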

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/223
Upgrade AGIC to 1.4.0 to support Health Probe annotation
Sabarish K R E - 2023-08-16T10:39:58Z

Upgrade AGIC to 1.4.0 to support custom Health Probe annotation.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/222
Enable XCOM Summary for Manifest Ingestion Dags
harshit aggarwal - 2023-08-16T10:39:58Z

Enable XCOM summary for the Manifest Ingestion DAG. The IDs of the records that were ingested, as well as the ones that were skipped, can be checked in the XCom.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/221
WITSML Parser Dag Loading Scripts
harshit aggarwal - 2023-08-16T10:39:58Z

Adding data loading scripts for the WITSML Parser DAG.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/220
Update Data loading Scripts for CSV/Manifest Ingestion to support packaged Dags
harshit aggarwal - 2023-08-16T10:39:58Z

Updating data loading scripts for CSV/Manifest Ingestion to support packaged DAGs.

https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/219
Standardize the Environment variable naming for Airflow Variables
harshit aggarwal - 2022-01-19T03:48:51Z
AIRFLOW_VAR_CORE__SERVICE__SCHEMA__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/schema-service/v1"
AIRFLOW_VAR_CORE__SERVICE__SEARCH__URL: "h...As we see some variables using HOST as suffix while some as URL, we should Standardize
AIRFLOW_VAR_CORE__SERVICE__SCHEMA__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/schema-service/v1"
AIRFLOW_VAR_CORE__SERVICE__SEARCH__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/search/v2"
AIRFLOW_VAR_CORE__SERVICE__STORAGE__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/storage/v2"
AIRFLOW_VAR_CORE__SERVICE__FILE__HOST: "https://#{OSDU_SVC_ENDPOINT}#/api/file/v2"
AIRFLOW_VAR_CORE__SERVICE__WORKFLOW__HOST: "https://#{OSDU_SVC_ENDPOINT}#/api/workflow"
AIRFLOW_VAR_CORE__SERVICE__WORKFLOW__HOST: "https://#{OSDU_SVC_ENDPOINT}#/api/workflow/v1"
AIRFLOW_VAR_CORE__SERVICE__SEARCH_WITH_CURSOR__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/search/v2/query_with_cursor"
AIRFLOW_VAR_CORE__SERVICE__PARTITION__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/partition/v1"
AIRFLOW_VAR_CORE__SERVICE__LEGAL__HOST: "https://#{OSDU_SVC_ENDPOINT}#/api/legal/v1"
AIRFLOW_VAR_CORE__SERVICE__ENTITLEMENTS__URL: "https://#{OSDU_SVC_ENDPOINT}#/api/entitlements/v2"