OSDU Software issues (https://community.opengroup.org/groups/osdu/-/issues)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/8
Explore/Research on Apache Parquet data format storage mechanism (2022-10-31, Ashutosh Kumar)

We need to explore the Apache Parquet data format so that we can convert the data retrieved from the OPC UA server into Parquet format.

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/69
Integration test to include input value from timezone different from UTC and test Normalizer (2022-09-29, Debasis Chatterjee)

Please see the working example from @Kateryna_Kurach:
https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M12/Test_Plan_Results_M12/Manifest_Ingestion/M12-GCP-Master-FoR-Date-check-Debasis-Kateryna.txt
Please include test with positive and negative time shift.
https://community.opengroup.org/osdu/platform/system/indexer-service/-/blob/master/indexer-core/src/test/java/org/opengroup/osdu/indexer/util/parser/DateTimeParserTest.java#L31
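For reference, the normalization behavior under test can be sketched with plain `java.time` (an illustrative sketch only, not the indexer's actual `DateTimeParser` code), covering both a positive and a negative time shift:

```java
import java.time.OffsetDateTime;

public class UtcNormalizationSketch {
    // Parse an ISO-8601 timestamp carrying any UTC offset and return the
    // equivalent instant normalized to UTC ("Z").
    static String toUtc(String input) {
        return OffsetDateTime.parse(input).toInstant().toString();
    }

    public static void main(String[] args) {
        // Positive shift: 10:00 at +05:30 is 04:30 UTC.
        System.out.println(toUtc("2022-06-01T10:00:00+05:30")); // 2022-06-01T04:30:00Z
        // Negative shift: 10:00 at -04:00 is 14:00 UTC.
        System.out.println(toUtc("2022-06-01T10:00:00-04:00")); // 2022-06-01T14:00:00Z
    }
}
```

Whatever patterns the real parser accepts, the tests should assert that both directions of shift collapse to the same canonical UTC instant.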
cc @nthakur

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/68
Documentation of DateTime normalization while data is indexed (2022-09-29, Debasis Chatterjee)

It is not obvious what to provide in the meta block versus in the input value so that the Normalizer adjusts DateTime properly after respecting the time shift from UTC.
See this worked example.
https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M12/Test_Plan_Results_M12/Manifest_Ingestion/M12-GCP-Master-FoR-Date-check-Debasis-Kateryna.txt
Please provide suitable documentation.
cc @nthakur and @Kateryna_Kurach

https://community.opengroup.org/osdu/platform/home/-/issues/49
DR: Issue priority and merge request labeling guide (2023-08-17, Chad Leong)

# Introduction
Today, during the issue review process in the daily dev call, we do not have clear guidelines for prioritizing issues to be fixed. We mainly rely on issue reporters to come up with a fix/merge request (MR). Where issues are reported without any fix/MR, they still need to be addressed with the appropriate attention, depending on the urgency/impact.
Similarly, we need clear labeling to understand the reported issue/MR.
This is an extension of the current [PMC Issue Taxonomy](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/PMC-Issue-Taxonomy).
# Objective
Here is a proposal for the community to prioritize and categorize the issues and MRs being reported. The labels will be used to indicate the state of issues and merge requests.
## Issue labels
### Issue Type
| Category | Label | Description |
| ------ | ------ | ------ |
| Under review | ~"Issue::Under Review" | This issue is currently under review, needs more information from the submitter and/or needs to be confirmed if it is the intended behavior of the service. Once confirmed, it could be a Defect, Backlog, ADR or DR. |
| Defect | ~"Issue::Defect" | A **Defect** is an issue describing a software error, flaw, or fault that causes the project, repo, or service to produce an incorrect or unexpected result per the OSDU standard or requirements, or to behave in unintended ways. A defect can be further categorized as a defect in the common code of the PMC project or as a defect within the CSP realization of the PMC project, to ensure that it can be targeted towards the right development resource. |
| Feature | ~"Issue::Feature Request" | A **Feature** is an issue describing a new requirement that needs software enhancement or new feature development (perhaps requiring new repos, sub-projects, or new PMC projects). These need to follow a template (see below) that provides enough clarity on the requirement, the definition of done, and other necessary attributes so the issue can be curated and moved up in the life cycle. Requires an entry in the Aha portal. |
| Non Issue | ~"Issue::Non Issue" | A **Non Issue** is an issue that is not necessarily a defect or backlog item; it can be used by developers as a task to track ongoing activities or clean-ups. |
| Architecture Decision Record - ADR | ~"Issue::Architecture/Technology" | An **architecture decision record** (ADR) is the result of an issue, backlog item, defect, or new OSDU standard that has triggered the need for a new design (perhaps requiring new technology selections or architecture patterns). |
| Decision Record on a process - DR | ~"Issue::Process Decision" | A **decision record** (DR) is the result of a process shortcoming, where a new OSDU practice has been triggered to address it (perhaps requiring new processes or operational patterns). |
Issues should also be flagged with the milestone:
| Label | Description |
| ------ | ------ |
| ~"M12" | Milestone where issue is discovered |
Issue labels show the state of the issues and should be used alongside priority labels to indicate the urgency of the issue and prioritize resources to address the issue.
### Issue life-cycle
| Category | Label | Description |
| ------ | ------ | ------ |
| Backlog | ~"KB::Backlog" | Label applied to indicate that an issue is confirmed, but no active work is in progress. Pending volunteers. Label should be used alongside a confirmed issue. |
| Fix in progress | ~"KB::In Progress" | Label applied to indicate that issues are confirmed and fixes are in progress. Label should be used alongside Confirmed issue. |
| Done | ~"KB::Done" | Label applied to indicate that issues are confirmed and fixes are done. Issues can be closed. Label should be used alongside Confirmed issue. |
### Affected responsibility
| Category | Label | Description |
| ------ | ------ | ------ |
| Confirmed issue | ~"Common Code"<br /> ~"AWS"<br /> ~"Azure"<br /> ~"GCP"<br /> ~"IBM" | Label applied to identify an issue that has been confirmed and that affects all CSPs (Common Code), AWS, Azure, GCP, or IBM. |
## Priority labels
A priority label needs to be assigned alongside the issue label to indicate the urgency. Developers/volunteers should work on issues according to the agreed priority label.
| Label | Description |
| ------ | ------ |
| ~"Priority::Critical" | <ul><li>Catastrophic issue identified - Severe impact, contain breaking workflow/data loss, zero-day/critical security vulnerabilities</li><li>No workaround and should be fixed as an immediate priority</li><li>Need to be released as a patch during regular milestone cycle as soon as a fix is available</li></ul>|
| ~"Priority::High" | <ul><li>Major issue identified - High impact, might contain breaking workflow/data loss, critical/high security vulnerabilities</li><li>There is a workaround that exists but should be fixed as the next priority</li><li>Might need to be released as a patch during regular milestone cycle/N+1 milestone release</li></ul>|
| ~"Priority::Medium" | <ul><li>Serious issue identified - Medium impact, no breaking workflow/no data loss, high/medium security vulnerabilities</li><li>There is a workaround that exists and should be fixed after high priority</li><li>Can be released in N+1 milestone release</li></ul>|
| ~"Priority::Low" | <ul><li>Minor issue identified - Low impact, does not break any workflow, medium/low security vulnerabilities</li><li>There is a workaround that exists</li><li>Can be released in N+1 or more milestone releases</li></ul>|
## Use cases
### Issue
| Label | Description |
| ------ | ------ |
| ~"Issue::Defect" ~"KB::Done" ~"Common Code" | A labeling strategy for a defect that has been resolved after the related MR(s) are merged |
### Merge request
| Label | Description |
| ------ | ------ |
| ~"MR::Bugfix" ~"Common Code" | A labeling strategy for a bugfix MR that resolves a common-code defect once the MR(s) are merged |

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/7
Collect values of EnergyVue server for a minute to process the data (2022-10-31, Ashutosh Kumar)

1: Connect with the EnergyVue server using the Milo SDK
2: Fetch nodes and their respective values
3: Get these values for one minute and view the data to process further.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/6
Connect with OPC UA server and read node values using Eclipse Milo (2022-10-31, Ashutosh Kumar)

Connect to the EnergyVue server using the Milo SDK and try to read node values.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/5
Connect EnergyVue server using OPC UA client (UAExpert and OPC UA browser) (2022-07-06, Ashutosh Kumar)

Connect and view the files/folders structure of the EnergyVue server after connecting using:
1: OPC UAExpert
2: Prosys OPC UA Browser

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/109
Confusion about the parent-child relationship for user.data.root group (2023-11-10, Shuai Li)

I am very confused about the parent-child relationship for the _user.data.root_ group.
According to the documentation (https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/master/docs/bootstrap/bootstrap-groups-structure.md), the _user.data.root_ group is "_A group will be automatically added to all data groups so that the member of it has the permission to all data on the partition_". This means the _user.data.root_ group is a member of all other data groups. In the OSDU implementation of the parent-child relationship of roles, the "member" is the "child" and the target group is the "parent"; the Add Member API works exactly this way, and a "child" aggregates all rights of its parents. So, in theory, the _user.data.root_ group should be a "child" of all other data groups. That way, the _user.data.root_ group would have all the rights of all other data groups.
But the implementation of the parent-child relationship for the _user.data.root_ group is reversed: the _user.data.root_ group is the parent of all other data groups. I think this implementation is wrong. It is not consistent with the documentation, and the _user.data.root_ group cannot aggregate the rights of the other data groups.
The wrong code logic is in the addRootGroupNodeAsMemberOfGroupNewGroup method of org.opengroup.osdu.entitlements.v2.jdbc.spi.jdbc.creategroup.CreateGroupRepoJdbc. The name of this method indicates that it adds the root group as a member of a new data group, but its code shows that it adds the root group as a parent of the new data group:
```java
private void addRootGroupNodeAsMemberOfGroupNewGroup(GroupInfoEntity createdGroup, CreateGroupRepoDto createGroupRepoDto) {
    GroupInfoEntity parentGroup = groupRepository
        .findByEmail(createGroupRepoDto.getDataRootGroupNode().getNodeId())
        .stream()
        .findFirst()
        .orElseThrow(() ->
            new DatabaseAccessException(
                HttpStatus.NOT_FOUND,
                "Could not find the group with email: " +
                    createGroupRepoDto.getDataRootGroupNode().getNodeId()));
    groupRepository.addChildGroupById(parentGroup.getId(), createdGroup.getId());
}
```
How could the root group aggregate the rights of all the other data groups if it is their parent?
**Besides the entitlement-v2-jdbc, entitlement-v2-AWS also implements the same wrong logic.** I have not checked the code of the other providers, so I am not sure whether they implement the logic correctly.
There is further evidence that adding the _user.data.root_ group as a parent is wrong.
The run method in the org.opengroup.osdu.entitlements.v2.service.CreateGroupService class has this logic (see the code below):
- When adding a new data group, it checks the existing parents of the user.data.root group. If the number of parents of the user.data.root group is larger than the quota, it throws an exception. This implicitly indicates that the user.data.root group should be the child, not the parent. If the user.data.root group were the parent of every other data group, there would be no need for this parent-count check in this method.
```java
Set<ParentReference> allExistingParentsOfRootDataGroup =
        retrieveGroupRepo.loadAllParents(dataRootGroupNode).getParentReferences();
if (allExistingParentsOfRootDataGroup.size() >= dataRootGroupQuota) {
    log.error(String.format("Identity %s already belong to %d groups",
            dataRootGroupNode.getNodeId(), allExistingParentsOfRootDataGroup.size()));
    throw new AppException(HttpStatus.PRECONDITION_FAILED.value(),
            HttpStatus.PRECONDITION_FAILED.getReasonPhrase(),
            String.format("%s's group quota hit. Identity can't belong to more than %d groups",
                    dataRootGroupNode.getNodeId(), dataRootGroupQuota));
}
```
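The direction argument can be made concrete with a self-contained toy model (all names below are illustrative, not the Entitlements service API). A member (child) aggregates the rights of its parents transitively, so the root group only aggregates the rights of every data group if it is registered as their child:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GroupDirectionSketch {
    // child -> parents; adding a member makes the member a CHILD of the group.
    static final Map<String, Set<String>> parentsOf = new HashMap<>();

    static void addMember(String group, String member) {
        parentsOf.computeIfAbsent(member, k -> new HashSet<>()).add(group);
    }

    // A node's rights are the set of all its (transitive) parent groups.
    static Set<String> aggregatedRights(String node) {
        Set<String> rights = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(node);
        while (!stack.isEmpty()) {
            for (String parent : parentsOf.getOrDefault(stack.pop(), Set.of())) {
                if (rights.add(parent)) stack.push(parent);
            }
        }
        return rights;
    }

    public static void main(String[] args) {
        // Per the documentation: user.data.root is added as a MEMBER of each data group.
        addMember("data.group.a", "user.data.root");
        addMember("data.group.b", "user.data.root");
        // The root group now aggregates the rights of both data groups.
        System.out.println(aggregatedRights("user.data.root"));
    }
}
```

If the relationship were stored the other way round (root group as parent), `aggregatedRights("user.data.root")` would be empty, which is exactly the complaint above.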
Maybe I have a wrong understanding; I hope someone can give me some clarification. Thanks.

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/108
List Group API does not authorize the service principal (2022-07-05, Shuai Li)

Let's consider a typical scenario.
A user "John" calls the Storage service API to query or write some records. After receiving this call, the Storage service triggers its authorization flow: it calls the List Group API of the Entitlement service to retrieve the groups John belongs to.
In this scenario, the List Group API of Entitlement service has to deal with two kinds of principals.
1. Storage service principals. This service principal is used for authorization between services. The Entitlement service should check who the caller service is and whether it has the right to call its API. According to the Entitlement documentation (https://community.opengroup.org/osdu/documentation/-/wikis/OSDU-(C)/Design-and-Implementation/OpenDES-API-Specifications/Documentation/core-services/EntitlementsService#authorizing-calls-to-serviceapibackend), the token of the caller service should be provided in the Authorization header, and _storage@instance.osdu.opengroup.org.iam.gserviceaccount.com_ should be added to _service.entitlements.user@instance.osdu.opengroup.org_.
2. End user principals (John in this example), which are included in the x-user-id header. The Storage service is asking for the groups John belongs to. After successful authorization of the Storage service, the Entitlement service should query the cache or database and return the groups John belongs to.
But in the current implementation, I see the following issues.
1. The Entitlement service relies only on the x-user-id header, both for service authorization and for the group query of the end user. The _isAuthorized_ method of _org.opengroup.osdu.entitlements.v2.auth.AuthorizationServiceEntitlements_ extracts the principal from the x-user-id header. This is not consistent with the design principle in the documentation. **The service principal is expected to be used for authorization between services, not as the end-user identity.** **If this issue is not fixed, every user has to be a member of the _service.entitlements.user_ group in order to use OSDU services**, which is not consistent with the documentation. In the documentation, _storage@instance.osdu.opengroup.org.iam.gserviceaccount.com_ should be added to _service.entitlements.user@instance.osdu.opengroup.org_, not the end users. I agree with the way described in the documentation; it is a bad design if every user has to be a member of the _service.entitlements.user_ group to use OSDU services.
2. When the Storage service calls the List Group API of the Entitlement service, it just forwards the headers of the original API call, which means the List Group API call does not contain the service principal token of the Storage service. Even if the Entitlement service had the correct logic for service authorization, it could not perform it while other services do not carry their service principal token in the API headers. So this issue involves not only the Entitlement service but also the other services.
I hope the Entitlement service can clearly differentiate the service principal token and the user identity in its API documentation and use these two kinds of principals in the right way.
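A minimal sketch of the separation being requested (the header names `Authorization` and `x-user-id` come from the scenario above; the token handling, group names, and method are hypothetical stand-ins, not the service's real API): authorize the calling service first, then resolve groups for the end user.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ListGroupsAuthSketch {
    // Stand-in for the members of service.entitlements.user (service principals only).
    static final Set<String> serviceEntitlementsUsers =
            Set.of("storage@instance.osdu.opengroup.org.iam.gserviceaccount.com");
    // Stand-in for the group store, keyed by end-user identity.
    static final Map<String, List<String>> groupsByUser =
            Map.of("john@example.com", List.of("users.datalake.viewers", "data.default.viewers"));

    static List<String> listGroups(Map<String, String> headers) {
        // 1) Service-to-service authorization: the Authorization header identifies the CALLER SERVICE.
        //    (Here the "token" is just the principal string; a real service would validate a JWT.)
        String servicePrincipal = headers.get("Authorization");
        if (!serviceEntitlementsUsers.contains(servicePrincipal)) {
            throw new SecurityException("caller service is not a member of service.entitlements.user");
        }
        // 2) Group resolution: x-user-id identifies the END USER whose groups are requested.
        return groupsByUser.getOrDefault(headers.get("x-user-id"), List.of());
    }

    public static void main(String[] args) {
        System.out.println(listGroups(Map.of(
                "Authorization", "storage@instance.osdu.opengroup.org.iam.gserviceaccount.com",
                "x-user-id", "john@example.com")));
        // [users.datalake.viewers, data.default.viewers]
    }
}
```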
Maybe I have the wrong understanding. Could someone clarify this point for me? Thanks.

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/4
Write sample code to connect to EnergyVue server using Eclipse Milo (2022-10-31, Ashutosh Kumar)

Write sample code to connect to the EnergyVue server using Eclipse Milo.
The OPC server endpoint is
opc.tcp://demo.energyvue.com:62546/EnergyVue/OpcServer
Preferred security is shown below and should be automatically adopted by the server if you support it.
Mode: Sign & Encrypt
Policy: Aes256Sha256RsaPss

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/3
Write/Explore sample code to connect to Eclipse Milo server using Eclipse Milo SDK (2022-07-06, Ashutosh Kumar)

Download the Eclipse Milo SDK code and try to write and execute a program that connects to the Eclipse Milo demo server:
opc.tcp://milo.digitalpetri.com:62541/milo
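Before touching the SDK, the endpoint URL can be decomposed with the JDK alone as a quick sanity check (this only parses the URL; the actual OPC UA handshake requires an SDK such as Eclipse Milo):

```java
import java.net.URI;

public class OpcEndpointSketch {
    public static void main(String[] args) {
        // opc.tcp URLs follow the generic scheme://host:port/path shape.
        URI endpoint = URI.create("opc.tcp://milo.digitalpetri.com:62541/milo");
        System.out.println(endpoint.getScheme()); // opc.tcp
        System.out.println(endpoint.getHost());   // milo.digitalpetri.com
        System.out.println(endpoint.getPort());   // 62541
        System.out.println(endpoint.getPath());   // /milo
    }
}
```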
Also try to connect using:
1: Unified Automation UAExpert
2: The Eclipse Milo SDK

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/2
Explore the OPC UA client and server (2022-07-06, Ashutosh Kumar)

1: Explore the OPC UA client and server architecture.
2: Check the communication methods between them.

(Milestone: M14 - Release 0.17)

https://community.opengroup.org/osdu/platform/consumption/geospatial/-/issues/137
Release - Document Docker requirement for Kubernetes (2022-07-06, Joel Romero)

As a GIS Analyst, I want to document Docker requirements for Kubernetes deployment.
Acceptance Criteria:
- Document results for future development
Note:
Do we have an Enterprise Docker license for OSDU? - Brian B
Need a Spike story for future implementation after MVP.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/62
Release process does not follow general approach (2023-03-24, Mikhail Piatliou (EPAM))

The pipelines for the `release` branches and `tags` (https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/v0.15.0/devops/osdu/cloud-providers/gcp.yml) reference the `master` branch of https://community.opengroup.org/osdu/platform/ci-cd-pipelines/-/tree/master/cloud-providers.
This is incorrect behavior: the service release/tag pipelines should reference the corresponding release/tag in the common CI/CD pipelines.
For example, Partition service https://community.opengroup.org/osdu/platform/system/partition/-/blob/v0.15.1/.gitlab-ci.yml#L51.
The issue is valid for all CSPs.
Cc: @divido @Kateryna_Kurach @Oleksandr_Kosse (assignee: Daniel Perez)

https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/107
Group name validation and email validation are not consistent (2022-06-24, An Ngo)

When creating a group, the group name is validated against the expression: "^[A-Za-z0-9{}_.-]{3,128}$"
At the same time, when executing a get-members request, the email is validated
against the expression: "^[A-Za-z0-9+_.-]{1,256}+@[A-Za-z0-9+_.-]{1,256}$"
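The mismatch between the two expressions is easy to demonstrate (a self-contained check using the exact patterns quoted above):

```java
import java.util.regex.Pattern;

public class ValidationMismatchSketch {
    // Group-name validation applied at creation time: '{' and '}' are allowed.
    static final Pattern GROUP_NAME = Pattern.compile("^[A-Za-z0-9{}_.-]{3,128}$");
    // Email validation applied by the get-members request: '{' and '}' are NOT allowed.
    static final Pattern MEMBER_EMAIL = Pattern.compile("^[A-Za-z0-9+_.-]{1,256}+@[A-Za-z0-9+_.-]{1,256}$");

    public static void main(String[] args) {
        String name = "{{new_creatordatagroup}}";
        String email = name + "@some-partition.enterprisedata.cloud.some-domain.com";
        System.out.println(GROUP_NAME.matcher(name).matches());    // true: the group can be created
        System.out.println(MEMBER_EMAIL.matcher(email).matches()); // false: its members cannot be listed
    }
}
```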
Such inconsistency allowed a group like {{new_creatordatagroup}}@some-partition.enterprisedata.cloud.some-domain.com to exist, and the members of such a group could not be retrieved.

https://community.opengroup.org/osdu/platform/system/file/-/issues/72
File service - Compute and store properties such as filesize and checksum at the time of uploading file (2022-08-23, Debasis Chatterjee)

Currently, the Data Loader is expected to provide this information manually. This can be error-prone and can be missed.
cc @Keith_Wall and @krveduru for information

https://community.opengroup.org/osdu/platform/system/dataset/-/issues/40
Dataset service - Compute and store properties such as filesize and checksum at the time of uploading file (2022-09-29, Debasis Chatterjee)

Currently, the Data Loader is expected to provide this information manually. This can be error-prone and can be missed.
cc @Keith_Wall and @krveduru for information

https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/60
Implement missing fields following Wellbore Trajectory schema update (2022-06-21, etienne peysson)

Following the Wellbore Trajectory schema update (1.0.0 -> 1.1.0), we need to add the mapping for the following fields:
```
"AppliedOperations": [
"Example AppliedOperations"
],
"CompanyID": "namespace:master-data--Organisation:SomeUniqueOrganisationID:",
```

(Milestone: M13 - Release 0.16)

https://community.opengroup.org/osdu/platform/data-flow/ingestion/opc-ua-ingestion/-/issues/1
Exploring OPC-UA open source SDK (2022-07-06, Chad Leong)

Evaluating different OPC-UA open-source client SDK options:
1. Eclipse Milo
2. OPC UA client SDK

https://community.opengroup.org/osdu/platform/system/indexer-service/-/issues/66
Reindex API - performance, scalability and reliability issues (2023-12-18, Neelesh Thakur)

Recent issues on the Schema/Search backend require us to re-index a significant number of kinds/indices. Here are the specifics:
- M10 schema [hints changes](https://community.opengroup.org/osdu/data/data-definitions/-/blob/master/E-R/ChangeReport.md#snapshot-2021-11-09-towards-m10) on the Schema service.
- Geoshape queries break when the Elasticsearch server is upgraded from 7.8.1 to 7.17.x (confirmed by the Elasticsearch support team; no public issue is available).
The current implementation of the Reindex API (per kind) has serious performance, scalability, and reliability issues. It does not work at all for a kind with a few million records. This is blocking us from adopting the M10 (now M11) schema updates. The following list summarizes the issues with the API:
- API throughput is quite slow: it can only re-index 250K-300K records per hour. In the case of a partition with 100 million records, this can run for over 2 weeks.
- It is not resilient: if the operation fails in the middle, we have to start over.
- There is no transparency into the reindex operation; we do not know how much progress has been made.
In addition to the above issues, we cannot recover the Search service in disaster-recovery scenarios either. In that case we would use the ReindexAll API, which uses the Reindex API (per kind) behind the scenes, so we run into all of the above issues.
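The two-week figure follows directly from the stated throughput; a quick back-of-the-envelope check:

```java
public class ReindexEstimateSketch {
    public static void main(String[] args) {
        long records = 100_000_000L;   // partition size from the issue
        long worstRate = 250_000L;     // records/hour, lower bound of observed throughput
        long bestRate = 300_000L;      // records/hour, upper bound
        long worstHours = records / worstRate; // 400 hours, roughly 16-17 days
        long bestHours = records / bestRate;   // 333 hours, roughly 14 days
        System.out.println(bestHours + " to " + worstHours + " hours");
    }
}
```

Even at the best observed rate, a full-partition reindex takes about two weeks of continuous, failure-free operation, which is why resumability and progress reporting matter as much as raw throughput.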