OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

# Mount plugins folder into airflow instance and change dags folder
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/50 (Kishore Battula, 2021-06-23)
## Type
<!-- Please choose the type of ticket. -->
- [x] Feature Request
- [ ] Bug Report
## Priority
- [X] High
- [ ] Medium
- [ ] Low
------------------------
------------------------
## Feature Request
__Why is this change needed?__
This change is needed to move the dags folder to a subfolder of the file share and to mount the plugins folder at /opt/airflow/plugins.
__Current behavior__
Currently the whole file share is mounted as the dags folder.
__Expected behavior__
The dags mount takes `subPath: dags`.
A new mount is added for plugins.
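The expected layout can be sketched as Kubernetes volume mounts. This is an illustration only: the volume name and the plugins `subPath` value are assumptions, not the actual OSDU helm chart contents.

```yaml
# Illustrative sketch (names are assumptions): mount the "dags" subfolder of
# the file share as the DAGs folder, and add a second mount for plugins.
volumeMounts:
  - name: airflow-fileshare        # hypothetical volume name
    mountPath: /opt/airflow/dags
    subPath: dags
  - name: airflow-fileshare
    mountPath: /opt/airflow/plugins
    subPath: plugins
```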
----------------------------
--------------------------
## Bug Report
<!-- If this is a bug report, fill up the following -->
__Breaking__
<!-- Is the bug breaking something. -->
- [X] YES
- [ ] NO
__Attached Logs?__
<!-- Please attach relevant logs. -->
- [ ] YES
- [x] NO
__Reproduction__
<!-- Please mention how often can you reproduce it. -->
__Current behavior__
<!-- Please describe the current behavior you observe -->
__Expected behavior__
<!-- Please describe the behavior you anticipate -->
__Steps to reproduce__
<!-- Please add how to reproduce the bug -->
--------------------------
--------------------------
## Other information
<!-- Any other information that is important to this PR such as screenshots of how the component looks before and after the change. -->
# Add driver field in /getLocation API response
https://community.opengroup.org/osdu/platform/system/file/-/issues/10 (Wei Sun, 2022-09-27)

In the current File service API design, the getLocation API returns only a signed URL, but the backend storage providers support HTTP headers that can optimize data operations. I suggest adding a new Driver field to the getLocation response to enable client applications to upload files to the signed URL in an optimized way.
origin:
```json
{
"FileID": "file ID",
"Location": {
"SignedURL": "GCS signed URL"
}
}
```
change to:
```json
{
"FileID": "file ID",
"Location": {
"SignedURL": "GCS signed URL",
"Driver": "GCS"
}
}
```

# Remove member email validation
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/18 (Rostyslav Matushkin (SLB), 2021-03-05)

In OSDU, a user id could be anything and may not be in email format. The current code carries over legacy validation logic from the original SLB implementation, but it is not applicable to OSDU.
To resolve this issue, we have to remove the member email validation logic in the POST groups/{group_email}/members API and allow the service to add any user id into a group.
As a note, the group id is still in email format and will be something like group_name@{partitionid}.{DOMAIN}. Since the Entitlements v2 service allows group hierarchy (adding a group into another group) through the same POST API as adding a user into a group, the service will still use the {DOMAIN} suffix to determine whether the given member is a group or a user.

# Swagger-ui.html calls are getting blocked from istio for schema service
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/51 (Aman Verma, 2021-06-14)
## Type
<!-- Please choose the type of ticket. -->
- [ ] Feature Request
- [x] Bug Report
## Priority
- [x] High
- [ ] Medium
- [ ] Low
------------------------
------------------------
## Bug report
<!-- If this is a bug report, fill up the following -->
__Why is this change needed?__
All the services have Istio policies configured for them which allow users to call certain endpoints without providing authorization information. Typical endpoints that are open for such access are health checks and API docs.
__Current behavior__
The swagger-ui.html page is not accessible for the schema service on Azure.
__Expected behavior__
The swagger-ui.html page should be accessible for the schema service.
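For illustration, an Istio AuthorizationPolicy along these lines could open the docs endpoints without authorization. Every name and path below is an assumption for sketching purposes, not the actual policy shipped with the service.

```yaml
# Hypothetical sketch of an exclusion policy: requests to the listed paths
# are allowed without a JWT. Labels, namespace, and paths are illustrative.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: schema-service-allow-docs
  namespace: osdu
spec:
  selector:
    matchLabels:
      app: schema-service
  action: ALLOW
  rules:
    - to:
        - operation:
            paths:
              - /api/schema-service/v1/swagger-ui.html
              - /api/schema-service/v1/api-docs
              - /api/schema-service/v1/actuator/health
```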
--------------------------
--------------------------

# How to load Missing Manifests using bulk-loader scripts
https://community.opengroup.org/osdu/ui/data-loading/bulk-loader/-/issues/1 (srinivas, 2021-06-16)

Hi,
When ingesting the sample dataset using the bulk-loader scripts, some manifests are missing.
Ex: Total trajectories are 5944, but a few documents are not loaded in Elasticsearch. Please refer below:
```
yellow open opendes-osdu-wellboretrajectory-wp-0.2.0  UhWBAa4oRRy4C7DGNsiuTg 1 1 5943 0 1.9mb 1.9mb
yellow open opendes-osdu-wellboretrajectory-wpc-0.2.0 Fod9BmhHQvyIwmUHi7EKcA 1 1 5942 0 2.2mb 2.2mb
```
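One way to identify the missing manifests is to diff the record IDs present in the source manifests against the IDs actually indexed in Elasticsearch. A minimal sketch of that set difference (not the bulk-loader's actual tooling; the IDs are made up):

```java
import java.util.*;

public class MissingManifests {
    // Returns the manifest IDs that are absent from the indexed IDs, sorted.
    public static Set<String> missing(Collection<String> manifestIds,
                                      Collection<String> indexedIds) {
        Set<String> diff = new TreeSet<>(manifestIds);
        diff.removeAll(indexedIds);
        return diff;
    }

    public static void main(String[] args) {
        List<String> manifests = Arrays.asList("traj-1", "traj-2", "traj-3");
        List<String> indexed = Arrays.asList("traj-1", "traj-3");
        System.out.println(missing(manifests, indexed)); // [traj-2]
    }
}
```

The manifest-side IDs would come from the source files fed to the bulk-loader, and the indexed IDs from an Elasticsearch scroll/search over the affected indices; the difference is the set to re-ingest.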
How can we find the missing manifests and load them?

# ADR Master and Reference Schema versioning; SRN format
https://community.opengroup.org/osdu/platform/system/file/-/issues/11 (Kateryna Kurach (EPAM), 2023-07-05)

### Change Type:
- [X] Feature
- [ ] Bugfix
- [ ] Refactoring
### Context and Scope
## 1. Reference and Master Data Schema version format
Different aspects of OSDU Reference Schemas (RS), Reference OSDU Resources populated with specific Reference Value Lists, and other OSDU schemas can change with time. It was discussed on the Data Definitions team and Reference Data Ingestion meetings that there are requirements to track these different categories of change/versioning. Many of the identified categories are below. We have added other versioning categories and clarifications as well.
1.1 For any OSDU schema, capture:
- **Schema version** - Describes the version of the Schema structure. Usually a new schema structure version will be delivered together with a new OSDU release, but minor schema versions may also be released (e.g. a schema change that simply adds a property (which is a non-breaking change)).
*Question: Is governance established that the schema version will be tracked by the schema name, or was this a temporary solution by Thomas Gehrmann? Is this documented somewhere? If not, then OSDU needs to establish the proper governance on this and document it.*
*Question: Are we capturing minor and major schema changes? If yes, how is each defined?*
- **Resource version** - Data change within the same schema version. The schema/structure itself didn't change, but a new version of the Resource was added to OSDU (e.g. because one or more property values needed to be updated). For most schemas, it is understood that data change simply creates a new Resource, with incremented version, with the different data values. However, this concept deserves special attention with Reference Data Values/Lists since changing some Reference Data Values can sometimes have massive and breaking data management consequences (e.g. Reference Lists classified by the DD&M subcommittee as “fixed” are defined by OSDU. This exact list is critical either to system functionality or to industry interoperability).
This version number must be incremented regardless of what the reason was for any change to the contents of the data, including the categories below in the Reference Values section.
- **Source** –Uniquely describes the system and/or organization from which this data object comes. Many different source-versions can attempt to identify the same real-world object (such as Wellbore) or activity (such as Production Volume reporting). (For a Wellbore, for example, this would be similar to PPDM’s WELL_VERSION.)
Ideally, we could track:
- Source to my organization (value would capture an outside organization) “data.DataSourceOrganisationID” property?
- Source system/application/database “source” property?
*Notes: This identifies a version of data that attempts to define a real-world object or measurement, not a version of a data object that would need to be numerically incremented like the other version categories here.*
1.2 For Reference Values:
- **Reference Value data changes** – In addition to the general “version” resource property, the following properties are needed to better govern Reference Value lists:
- OSDU-governed: You might create a new version because of an OSDU-governed change to a reference list. The OSDU Reference List version must be captured, and incremented whenever an updated OSDU-governed list is published and subsequently used in a Reference Data resource. This applies to the OSDU-governed reference values in an “open” list, and to “fixed” governance categories of reference value lists, as determined by the OSDU Reference Values team. A way to capture this does not exist yet.
- Locally governed: You might create a new version because a governed Reference List for a particular implementation was updated, like at an operator (i.e. the “open” and “local” reference list categories). The locally-governed Reference List version must be captured and incremented whenever the local data governance group publishes an updated list and it is subsequently used in a Reference Data resource. This applies to the reference values in an “open” list, and to “local” governance categories of reference value lists, as determined by the OSDU Reference Values team. A way to capture this does not exist yet.
- **Attribution Authority**: For any reference value or reference list, those values and descriptions may have been created by OSDU or by an outside organization (such as PPDM or Energistics). Both OSDU and outside standards may change over time, so it is critical to capture both the source organization and the publication version of those outside standards used. This is already accommodated by the Attribution Authority, Publication, and Revision properties which are standard Reference Resource properties.
*Note: this is different than the “OSDU-governed” versioning category mentioned above. The OSDU-governed versioning category refers to a complete list of Reference Values for a particular reference object. The attribution authority is captured for each value individually. In other words, an OSDU-governed reference list could potentially include some values created under OSDU attribution authority and some from an outside attribution authority, but the list as a whole will be considered “OSDU-governed”.*
Summary: OSDU should establish clear governance to appropriately and consistently track these categories of versioning:
For any resource:
- Schema Version (might exist in schema name format; needs confirmation)
- Resource Version (exists)
Additional to Reference List resources:
- OSDU-governed list version (does not exist)
- Locally-governed list version (does not exist)
- Attribution Authority + Publication + Revision (exists)
The best solution would be to create appropriate properties for the version categories that do not yet exist.
In addition, OSDU should also capture the OSDU governance category of Reference Value Lists within the reference schema and resource itself: “Fixed”, “Open”, or “Local”. A way to capture this does not exist yet.
## 2. SRN format
Also, a decision has to be taken regarding the SRN format. It must be decided whether it has to contain the corresponding schema version or not. Currently the SRN doesn't contain a version (e.g. "srn:<namespace>:reference-data/VerticalCRS:MSL:").
*Note: Tentatively, we think that capturing Schema Version + Resource Version in the schema name would uniquely identify resource referenced (like a foreign key).*
For reference lists, you want to be able to identify the specific version of the reference list that a WPC (e.g.) references.
However, for a WPC (a Marker, e.g.) to reference a parent Master object (a Wellbore, e.g.), it doesn't need to reference a specific point-in-time version of it; it should reference the most recent version.
If this is true, SRNs for Reference Data would need to include Schema + Resource Version in the SRN, but SRN would be more generic for all other group types.
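For illustration, the two candidate formats can be sketched side by side. The namespace and values are illustrative, and the helper names are hypothetical, not an OSDU API:

```java
public class SrnDemo {
    // Versionless SRN, as used today (option A).
    public static String srn(String namespace, String group, String type, String id) {
        return String.format("srn:%s:%s/%s:%s:", namespace, group, type, id);
    }

    // SRN carrying the schema version in the type name (option B).
    public static String versionedSrn(String namespace, String group, String type,
                                      String schemaVersion, String id) {
        return String.format("srn:%s:%s/%s.%s:%s:", namespace, group, type, schemaVersion, id);
    }

    public static void main(String[] args) {
        System.out.println(srn("opendes", "reference-data", "VerticalCRS", "MSL"));
        // srn:opendes:reference-data/VerticalCRS:MSL:
        System.out.println(versionedSrn("opendes", "reference-data", "VerticalCRS", "1.0.0", "MSL"));
        // srn:opendes:reference-data/VerticalCRS.1.0.0:MSL:
    }
}
```

Under option B, every schema (or resource) version change produces a new SRN string, which is exactly what drives the re-linking concerns listed below.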
Problem: SRN identity is uncertain.
A. Is SRN intended to uniquely define the physical real-world object in the case of Master Data (like a Wellbore)? If yes, then SRN should not contain version for Master Data references.
B. Or is SRN intended to uniquely define a data record with its version (like a GUID)? If yes, then Master Data Version should be included in the SRN.
It should not be used for both, but both must be accommodated by OSDU.
Some additional considerations:
A. Version is NOT included in an SRN.
Pros:
- It simplifies end-user aggregation of data to a single parent record. Your WPCs, created at different times, will reference the same Master data record, not a point-in-time older version of that Master record. Existing WPCs are always in the "current" state and users do not have to enrich and create a new version of a WPC each time the corresponding RS or Master Data Schema changes.
Cons:
- It leaves the question open as to how you could have different Wellbore Versions (similar to WELL_VERSION in PPDM). It seems that this is not currently supported by the OSDU canonical schemas, but is a real use case – similar to the way you can have different versions of Trajectories in WPC.
- You can lose aspects of historical parent-child relationships/data lineage. For example, a Trajectory might have TVD calculated based on the "active" elevation of a particular Wellbore resource version. Then that Wellbore gets updated, and the newest version of that Wellbore record has a different active elevation type or active elevation value. Now the Trajectory file is out of sync in this regard with its parent Wellbore from that point in time.
B. Version is included in an SRN.
Pros:
- It potentially allows you to have different Wellbore Versions (using UWI and Source, for example, as the natural key)
- Traceability and lineage of the data
Cons:
- Raises the question of how to uniquely identify the one physical wellbore, or the “gold” Wellbore record (similar to WELL in PPDM)
- Complexities with updating existing WPCs that have links to older versions of MDS. End-user aggregations can be misaligned if there are WPCs in the system that are linked to different schema versions.
- Another consideration is related to possible future search complexity. If the SRN value changes, some WPCs could be found using the "new" SRN value while others would be found using the "old" SRN value.
Users will have to implement additional enrichment workflows to solve the issues related to SRN version discrepancies (and probably develop functions that detect all "outdated" SRN links). That leads to high usage of computing resources (e.g. to change all WPC SRNs to point to the new version, etc.).
### Decision
- There is a requirement to track Schema versioning. A decision has to be taken on the Schema version format (especially for Reference Schemas).
- A decision has to be taken on the SRN format: will it contain the Schema version or not ("srn:<namespace>:reference-data/VerticalCRS:MSL:" vs "srn:<namespace>:reference-data/VerticalCRS.1.0.0:MSL:")?
### Rationale
### Consequence
- No consequences for CSPs
- Consequences for the majority of the OSDU services. A change in the Schema definition will lead to changes in the Manifest creation process as well as in the Enrichment and Delivery APIs.

# Defining and configuration of permission groups (ops, admins, editors, viewers)
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/19 (Preksha Beohar-Slb, 2021-03-24)

We are getting ready to migrate from the entitlements v1 to the entitlements v2 service, which supports a hierarchical structure in OSDU. So we need to define and configure permission groups (ops, admins, editors, viewers) for all the existing OSDU service groups.
As a part of the Entitlements v2 - Tenant Init Api, we will be bootstrapping all the service groups and the user groups.
1. Updated JSON file with all the existing OSDU service groups
2. Update the service principals for all the new service groups
3. Removed the following service groups which did not exist in OSDU -
```
"service.plugin.admin"
"service.messaging.admin"
```
4. Added the following service groups -
```
"service.schema-service.editors"
"service.schema-service.viewers"
"service.schema-service.admin"
"service.file.editors"
"service.file.viewers"
"service.workflow.creator"
"service.workflow.viewer"
"service.workflow.admin"
```

# [Storage Service] Integration tester ACL hardcoded for the bulk-acl integration test
https://community.opengroup.org/osdu/platform/system/storage/-/issues/31 (Rucha Deshpande, 2021-06-16)

The integration tester ACL used in the new bulk-acl integration test is hardcoded in org.opengroup.osdu.storage.util\TestUtils.java.
This assumes the integration test user is already in that entitlements group.
```java
public static final String getIntegrationTesterAcl() {
    return String.format("data.integration.test@%s", getAclSuffix());
}
```
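A minimal sketch of the suggested fix, assuming a hypothetical INTEGRATION_TESTER_ACL environment variable and a placeholder suffix (neither is the project's actual configuration):

```java
public class AclConfigDemo {
    // Placeholder for the real suffix lookup in TestUtils.
    static String getAclSuffix() {
        return "opendes.contoso.com"; // illustrative value only
    }

    // Read the ACL from the environment; fall back to the current hardcoded
    // group so existing test environments keep working unchanged.
    public static String getIntegrationTesterAcl() {
        String fromEnv = System.getenv("INTEGRATION_TESTER_ACL");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        return String.format("data.integration.test@%s", getAclSuffix());
    }

    public static void main(String[] args) {
        System.out.println(getIntegrationTesterAcl());
    }
}
```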
An environment variable should be used instead.

# Return 403 when data partition id does not exist
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/20 (Rostislav Vatolin, 2021-03-01)

When calling API GET /entitlements/v2/groups with the header "data-partition-id" set to a non-existing tenant, the response should be 403 instead of 500.

# [Storage Service] Bulk ACL Integration tests do not cover revertObjectMetadata functionality
https://community.opengroup.org/osdu/platform/system/storage/-/issues/32 (Rucha Deshpande, 2021-06-16)

The Bulk ACL integration tests do not cover the revertObjectMetadata functionality.

# Filtering using partition key reduces RUs (Request Units)
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/21 (Rostislav Vatolin, 2021-03-01)

To reduce RU consumption and increase the performance of the Entitlements service, filtering by partition key needs to be included when doing node lookups.
More details about Request Units here: https://docs.microsoft.com/en-us/azure/cosmos-db/request-units

# Enhanced API Specs for Ingestion Workflow Service
https://community.opengroup.org/osdu/platform/system/home/-/issues/50 (Swapnil, 2020-09-07)

## Status
- [x] Proposed
- [ ] Trialing
- [ ] Under review
- [ ] Approved
- [ ] Retired
## Context & Scope
In OSDU R2, there is the Ingestion Workflow Service (https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow) for orchestration-tool (Airflow) specific managerial operations. In OSDU R3, the proposal is to have an enhanced version of the Ingestion Workflow Service. Both serve a similar purpose, i.e. to provide wrapper functionality for orchestrator tools, and are designed to carry out CRUD operations on domain workflows and domain workflow runs. The R2 version of the service covers only 3 APIs – startWorkflow, updateWorkflowStatus, getStatus. The R2 version of the service is tightly dependent on Apache Airflow as the orchestration tool. This ADR introduces an enhanced version of the Ingestion Workflow Service API to cater to more complex scenarios in ingestion workflows. Also, the aim is to have an orchestrator-tool-independent specification.
## Decision
In the OSDU R3 framework, the Ingestion Workflow Service will be responsible for end-to-end management (creation, modification, execution and monitoring) of ingestion workflows from the user's perspective. This will become the way to create domain workflows in the OSDU Data Platform. Users with workflow creation & trigger roles are completely abstracted from technical complexities in the orchestration tool used.
Supported workflow operations by the new version are as follows:
CRUD operations for Workflow:
- Creation/registration of a workflow. (new)
- Update/editing of a workflow. (new)
- Querying details of a workflow. (new)
- Listing all configured workflows. (new)
- Deletion of a workflow. (new)
CRUD operations for Workflow Run:
- Triggering a workflow. (already exists)
- Querying details of all workflow runs for a workflow.
- Querying details of a workflow run. (already exists)
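For illustration only, the operations above could map onto a REST surface along these lines (all paths and names are assumptions, not the approved specification):

```
POST   /v1/workflow                                    create/register a workflow
PUT    /v1/workflow/{workflow_name}                    update/edit a workflow
GET    /v1/workflow/{workflow_name}                    query details of a workflow
GET    /v1/workflow                                    list all configured workflows
DELETE /v1/workflow/{workflow_name}                    delete a workflow
POST   /v1/workflow/{workflow_name}/workflowRun        trigger a workflow
GET    /v1/workflow/{workflow_name}/workflowRun        list all runs of a workflow
GET    /v1/workflow/{workflow_name}/workflowRun/{id}   query details of a run
```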
## Rationale
The Domain Workflow expectations from an orchestration tool are not supported out of the box by options available in the market. Different domain workflows have different expectations on a case-by-case basis. It is important to have a wrapper layer which adapts the behaviour of the orchestrator tool to a custom behaviour suited for domain workflow executions on the Data Platform. This will also enable loose coupling of orchestrator tools (currently Airflow) with the Data Platform. Adoption of alternate orchestration tools will be better managed.
## Consequences
Data Platform's workflow expectations and orchestrator tool behaviour mismatch will be better managed. In case of modifications to orchestrator tools, changes can be incorporated without multiple touch points spread across the Data Platform. Breaking changes during version upgrades (if any) and alternative tool implementations will be a controlled activity.
# Tradeoff Analysis - Input to decision
The new proposed version will incorporate the user actions below:
- The user is completely abstracted from the underlying orchestrator tool. Workflow editors can create and manage complex workflows without being technical experts on the orchestrator tool.
- The OSDU data platform will be easily able to integrate future changes to the orchestrator framework. Technical changes will be within the periphery of the Ingestion Workflow Service.
- Users can query historical runs of the workflows, and the status of ingestion can be tracked at the right granularity. This will enable end-user reporting of domain workflow success/failure/in-progress status.
- Users will have fine-grained control over the attributes of a workflow (for example – max concurrency, active/inactive features).
This version of the service will have no negative impact compared to the current version of the Ingestion Workflow Service in terms of:
- Performance
- Scalability
- Reliability
## Decision criteria and tradeoffs
- Usability
- Cost of Implementation

# Bug fix: make DOMAIN in the integration test dynamic
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/22 (Mingyang Zhu, 2021-03-24)

1. We have made the DOMAIN configurable in the service, but we still hard-code the domain in the integration tests, so we need to make it load from an environment variable as well.
2. We haven't configured spring.application.name, so the application name does not populate correctly in Application Insights.

# [Bug] 'expirationDate' field is serialized incorrectly due to the use of Gson
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/16 (Rostyslav Matushkin (SLB), 2021-06-16)

The 'expirationDate' field in 'org.opengroup.osdu.core.common.model.legal.Properties' is serialized incorrectly due to the use of Gson.
This is the value we get using Gson:
`{"expirationDate": "Dec 31, 2099"}`
And this is the value we get using Jackson:
`{"expirationDate": "2099-12-31"}`
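The mismatch can be reproduced with the JDK alone, since it comes down to the date pattern applied; Gson's default rendering of dates corresponds to the "MMM d, yyyy" style shown here (and is locale-dependent):

```java
import java.sql.Date;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class ExpirationDateDemo {
    public static void main(String[] args) {
        Date expiration = Date.valueOf("2099-12-31");
        // The "MMM d, yyyy" style Gson produces by default for Date values:
        System.out.println(new SimpleDateFormat("MMM d, yyyy", Locale.ENGLISH).format(expiration)); // Dec 31, 2099
        // The ISO format the service expects (what Jackson produces here):
        System.out.println(new SimpleDateFormat("yyyy-MM-dd").format(expiration)); // 2099-12-31
    }
}
```

A common remedy is to pin the serializer's date format explicitly (e.g. Gson's `GsonBuilder.setDateFormat("yyyy-MM-dd")`) or to serialize this model with Jackson.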
Gson serializes java.sql.Date into the format "MMM d, yyyy" while our date format is "yyyy-MM-dd".

# Unit Service Onboarding
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/55 (Nicholas Karsky, 2022-08-23)
For more information, visit our service...**Service name**: `Unit`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, etc each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
## Setup:
1. Create an empty repo `unit-service`
2. Add a variable into `Mirror Variables`
> Replace ADO_ORGANIZATION and ADO_PROJECT with your actual Azure DevOps organization and project names.
| Variable | Value |
|----------|-------|
| UNIT_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/${ADO_PROJECT}/_git/unit-service` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
displayName: 'unit'
inputs:
sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/reference/unit-service.git'
destinationGitRepositoryUri: '$(UNIT_REPO)'
destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - unit` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/unit-azure/unit-aks` |
6. Create a Pipeline `service-unit` against the Repo `unit-service` for file `/devops/azure/pipeline.yml`
7. Upload the [unit_catalog_v2.json](https://community.opengroup.org/osdu/platform/system/reference/unit-service/-/blob/master/data/unit_catalog_v2.json) file located in the Project data folder to the fileshare `unit` of the storage account in the service resources.
8. Execute the Pipeline

(Milestone: December, Assignee: Nicholas Karsky, 2020-12-19)

---

# Make default account id configurable for Schema service
https://community.opengroup.org/osdu/platform/system/schema-service/-/issues/20
Author: Rustam Lotsmanenko (EPAM), updated 2020-09-22

## Context and Scope
The shared account id hardcoded as "common" in `org.opengroup.osdu.schema.constants.SchemaConstants` makes it difficult to run the Schema service without a configured account; currently its presence is obligatory. An absence of "common" leads to many NullPointerExceptions when the Schema service tries to reach storage services using this account.
## Decision
Move `public static final String ACCOUNT_ID_COMMON_PROJECT = "common"` from `SchemaConstants` to the schema-core `application.properties` file as `account.id.common.project=common`.
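As a rough sketch of the intended lookup (plain `java.util.Properties` is used here purely for illustration; the class name is hypothetical, and the real service would use Spring property binding as described under Consequences):

```java
import java.io.StringReader;
import java.util.Properties;

public class AccountIdLookupDemo {
    public static void main(String[] args) throws Exception {
        // Simulate the proposed schema-core application.properties entry.
        Properties props = new Properties();
        props.load(new StringReader("account.id.common.project=common"));

        // Read the now-configurable account id, falling back to the old
        // hardcoded constant's value when the property is absent.
        String accountId = props.getProperty("account.id.common.project", "common");
        System.out.println(accountId); // prints "common"
    }
}
```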
## Rationale
A hardcoded account id does not allow running the Schema service without a "common" account.
## Consequences
Use the Spring `@Value` annotation where the account id is required, instead of `SchemaConstants.ACCOUNT_ID_COMMON_PROJECT`.
Providers should replace `SchemaConstants.ACCOUNT_ID_COMMON_PROJECT` with the new value from the properties file wherever they use it.

(Assignee: Dmitriy Rudko, 2020-09-22)

---

# update 'core-lib-azure' version
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/23
Author: Rostyslav Matushkin (SLB), updated 2021-03-10

Recently, a new version of the [os-core-lib-azure](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure) library was prepared, in which the overwrite of the authorization header was fixed; see this [MR](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/81).
This MR updates the version of this library, as it is critical to have this fix for the authorization logic to work correctly.

(Assignee: Rostyslav Matushkin (SLB))

---

# [Bug] Handling of 307 response is missing.
https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/issues/17
Author: Komal Makkar, updated 2021-06-16
## Type
<!-- Please choose the type of ticket. -->
- [ ] Feature Request
- [x] Bug Report
## Priority
- [ ] High
- [x] Medium
- [ ] Low
------------------------
------------------------
## Bug Report
<!-- If this is a bug report, fill up the following -->
__Breaking__
<!-- Is the bug breaking something. -->
- [x] YES
- [ ] NO
__Attached Logs?__
<!-- Please attach relevant logs. -->
- [x] YES
- [ ] NO
__Reproduction__
<!-- Please mention how often can you reproduce it. -->
__Current behavior__
<!-- Please describe the current behavior you observe -->
1. 3XX is not handled in the response: if the response code is not a success, it is assumed to be 4xx or 5xx. [details here](https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/blob/master/src/main/java/org/opengroup/osdu/core/common/http/AbstractHttpClient.java#L49)
__Expected behavior__
<!-- Please describe the behavior you anticipate -->
1. 3XX responses should be handled.
2. The body of a response is optional; a null check before operating on the body would be helpful.
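A minimal illustration of the expected handling (hypothetical helper names, not the actual `AbstractHttpClient` code): classify the 3xx range explicitly instead of lumping it in with 4xx/5xx errors, and null-check the body before reading it.

```java
public class ResponseHandlingSketch {
    // Classify status codes so a 307 is not treated as a 4xx/5xx error.
    static String classify(int status) {
        if (status >= 200 && status < 300) return "success";
        if (status >= 300 && status < 400) return "redirect"; // 307 lands here
        return "error";
    }

    // The body is optional; guard against an NPE before operating on it.
    static String safeBody(String body) {
        return body == null ? "" : body;
    }

    public static void main(String[] args) {
        System.out.println(classify(307));           // prints "redirect"
        System.out.println(safeBody(null).length()); // prints 0
    }
}
```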
__Steps to reproduce__
<!-- Please add how to reproduce the bug -->
Call any service that returns a 307 via an HTTP request.
--------------------------
--------------------------
## Other information
<!-- Any other information that is important to this PR such as screenshots of how the component looks before and after the change. -->
```
java.lang.NullPointerException
at java.io.Reader.<init>(Reader.java:78)
at java.io.InputStreamReader.<init>(InputStreamReader.java:72)
at org.opengroup.osdu.core.common.http.AbstractHttpClient.getBody(AbstractHttpClient.java:72)
at org.opengroup.osdu.core.common.http.AbstractHttpClient.send(AbstractHttpClient.java:53)
at org.opengroup.osdu.core.common.http.HttpClient.send(HttpClient.java:25)
at org.opengroup.osdu.core.common.notification.SubscriptionService.query(SubscriptionService.java:68)
at org.opengroup.osdu.notification.api.PubsubEndpoint.querySubscriptionAndUpdateCache(PubsubEndpoint.java:165)
at org.opengroup.osdu.notification.api.PubsubEndpoint.getSubscriptionFromCache(PubsubEndpoint.java:149)
at org.opengroup.osdu.notification.api.PubsubEndpoint.recordChanged(PubsubEndpoint.java:99)
at org.opengroup.osdu.notification.api.PubsubEndpoint$$FastClassBySpringCGLIB$$a0995cd9.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at org.opengroup.osdu.notification.api.PubsubEndpoint$$EnhancerBySpringCGLIB$$e2c5c97b.recordChanged(<generated>)
at org.opengroup.osdu.notification.api.PubsubEndpoint$$FastClassBySpringCGLIB$$a0995cd9.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:136)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:124)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at org.opengroup.osdu.notification.api.PubsubEndpoint$$EnhancerBySpringCGLIB$$eca7bb65.recordChanged(<generated>)
at sun.reflect.GeneratedMethodAccessor93.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1039)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:908)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at com.microsoft.azure.spring.autoconfigure.aad.AADAppRoleStatelessAuthenticationFilter.doFilterInternal(AADAppRoleStatelessAuthenticationFilter.java:79)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.opengroup.osdu.notification.logging.ResponseLogFilter.doFilter(ResponseLogFilter.java:64)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.opengroup.osdu.azure.filters.Slf4jMDCFilter.doFilter(Slf4jMDCFilter.java:39)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.opengroup.osdu.azure.filters.TransactionLogFilter.doFilter(TransactionLogFilter.java:67)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:88)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:114)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:104)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at com.microsoft.applicationinsights.web.internal.WebRequestTrackingFilter.doFilter(WebRequestTrackingFilter.java:143)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
```

---

# CRS Catalog Service Onboarding
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/56
Author: Nicholas Karsky, updated 2022-08-23

**Service name**: `CRS Catalog`
The following steps must be completed for a service to onboard with OSDU on Azure. Additionally, please add the `Service Onboarding` tag to this issue when it is created.
For more information, visit our service onboarding documentation [here](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-onboarding.md).
## Steps:
**Infrastructure and Initial Requirements**
- [x] Add any additional Azure cloud infrastructure (Cosmos containers, Storage containers, fileshares, etc.) to the Terraform template. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/infra/templates/osdu-r3-mvp). Note that if the infrastructure is a part of the data-partition template, you may need to add secrets to the keyvault that are partition specific; if doing so, update the createPartition REST request to include the keys that you have added so they are accessible in service code. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/rest/partition.http#L48)
- [x] Create an ingress point for the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/appgw-ingress.yaml)
- [x] Add any test data that is required for the service integration tests. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/test_data)
- [x] Update `upload-data.py` to upload any new test data files you created. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/upload-data.py).
- [x] Update the integration tester with any entitlements required to test the service. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/tools/test_data/user_info_1.json)
- [x] Add in any new secrets that the service needs to run. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/charts/osdu-common/templates/kv-secrets.yaml)
- [x] Create environment variable script to generate .yaml files to be used with Intellij [EnvFile](https://plugins.jetbrains.com/plugin/7861-envfile) plugin and .envrc files to be used with [direnv](https://direnv.net/). [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/variables)
**Gitlab Code and Documentation**
- [x] Complete the service code such that it passes all integration tests locally. There is some documentation on starting off implementing an Azure provider. [Link](./gitlab-service-readme-template.md)
- [x] Create helm charts for the service. The charts for each service are located in the `devops/azure` directory. You can look at charts from other services as a model. The charts will be nearly identical except for the different environment variables, values, and so on that each service needs to run. [Link](./gitlab-service-guide.md)
- [x] Implement Istio for the service if this has not already been done. Here is an example MR that shows what steps are required. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/64)
- [x] Create an Istio auth policy in the `devops/azure/chart/templates` directory. Here is an example of an Istio auth policy that is generic and can be used by other services. [Link](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/devops/azure/chart/templates/azure-istio-auth-policy.yaml)
- [x] Add any variables that are required for the service integration tests to the Azure CI-CD file. [Link](https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/blob/master/cloud-providers/azure.yml)
- [x] Verify that the README for the Azure provider correctly and clearly describes how to run and test the service. There is a README template to help. [Link](./gitlab-service-readme-template.md)
- [x] Push any changes and verify that the Gitlab pipeline is passing in master.
**Development and Demo Azure Devops Pipelines**
- [x] Create development ADO pipeline at `devops/azure/development-pipeline.yml` in the service repo.
- [x] Verify development pipeline passes in ADO.
- [x] Create Demo ADO pipeline at `devops/azure/pipeline.yml` in the service repo.
- [x] Verify demo pipeline is passing in ADO.
**User Documentation**
- [x] Add the service to the mirror pipeline instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/code-mirroring.md)
- [x] Add the service to the manual deployment instructions. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/charts)
- [x] Add any required variables to the already existing variable group instructions for automated deployment. You should know if any variables need to be added to existing variable groups from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a variable group `Azure Service Release - $SERVICE_NAME` to the documentation. You should know what values to set for this variable group from creating the development and demo pipelines. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Add a step for creating the service pipeline at the bottom of the service-automation page. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/service-automation.md#create-osdu-service-libraries)
- [x] Create a rest script with sample calls to the service for users. [Link](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/master/tools/rest)
## Setup:
1. Create an empty repo `crs-catalog-service`
2. Add a variable into `Mirror Variables`
> Replace ADO_ORGANIZATION and ADO_PROJECT with your actual Azure DevOps organization and project names.
| Variable | Value |
|----------|-------|
| CRS_CATALOG_REPO | `https://dev.azure.com/${ADO_ORGANIZATION}/${ADO_PROJECT}/_git/crs-catalog-service` |
3. Edit the Mirror Pipeline and add the task
```yaml
- task: swellaby.mirror-git-repository.mirror-git-repository-vsts-task.mirror-git-repository-vsts-task@1
displayName: 'crs-catalog'
inputs:
sourceGitRepositoryUri: 'https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service.git'
destinationGitRepositoryUri: '$(CRS_CATALOG_REPO)'
destinationGitRepositoryPersonalAccessToken: $(ACCESS_TOKEN)
```
4. Run the Mirror Pipeline
5. Create a Variable Group `Azure Service Release - crs catalog` with the variables:
| Variable | Value |
|----------|-------|
| MAVEN_DEPLOY_POM_FILE_PATH | `drop/provider/crs-catalog-azure/crs-catalog-aks` |
6. Create a new pipeline using the `crs-catalog-service` repo and the `/devops/azure/pipeline.yml` file of that repo.
7. Upload the [crs_catalog_v2.json](https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/blob/master/data/crs_catalog_v2.json) file located in the Project data folder to the fileshare `crs` of the storage account in the service resources.
8. Execute the Pipeline

(Milestone: December, Assignee: Nicholas Karsky, 2020-12-19)

---

# BUG - Service Template - AKS Template plan always calculates 1 change
https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/issues/82
Author: Daniel Scholl, updated 2021-06-14

After the software upgrades from Issue #75, the plan for Service Resources always calculates a diagnostic change due to a new implementation of diagnostic settings.
```
# azurerm_monitor_diagnostic_setting.aks_diagnostics will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "aks_diagnostics" {
id = "/subscriptions/929e9ae0-7bb1-4563-a200-9863fe27cae4/resourcegroups/osdu-mvp-srscholl-0uq8-rg/providers/Microsoft.ContainerService/managedClusters/osdu-mvp-srscholl-0uq8-aks|aks_diagnostics"
name = "aks_diagnostics"
# (2 unchanged attributes hidden)
- metric {
- category = "API Server (PREVIEW)" -> null
- enabled = true -> null
- retention_policy {
- days = 100 -> null
- enabled = true -> null
}
}
+ metric {
+ category = "AllMetrics"
+ enabled = true
+ retention_policy {
+ days = 100
+ enabled = true
}
}
# (7 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```

(Milestone: January - 21, Assignee: Daniel Scholl)