Commit 2def0681 authored by David Diederich

Initial Import
# Created by .ignore support plugin (hsz.mobi)
# IntelliJ
.idea/*
testing/.idea
testing/integration-tests/.idea
testing/integration-tests/search-test-core/.idea
testing/integration-tests/search-test-gcp/.idea
*.iml
# Output
target/
load-tests/*.pyc
# Eclipse
.classpath
.project
.settings/
.DS_Store
.factorypath
Copyright 2017-2019, Schlumberger
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Search and Indexer Service
## Azure Implementation
All documentation for the Azure implementation of `os-search` lives [here](./provider/search-azure/README.md)
## GCP Implementation
### Pre-requisites
* Google Cloud SDK with the App Engine Java component (latest version)
* JDK 8
* Lombok 1.16 or later
* Maven
You will also need Git to work on the project.
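Before building, it can help to confirm the toolchain locally; a minimal sanity check (output depends on your machine):
```sh
# Confirm the required tools are installed and on the PATH
gcloud version
java -version   # expect a JDK 8 build
mvn -v
git --version
```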
### Setting up the local development environment
* Update the Google Cloud SDK to the latest version:
```sh
gcloud components update
```
* Set the Google Cloud project ID:
```sh
gcloud config set project <YOUR-PROJECT-ID>
```
* Perform a basic authentication in the selected project
```sh
gcloud auth application-default login
```
### Build project and run unit tests
* Navigate to the search service's root folder and run:
```sh
mvn clean install
```
* If you wish to see the coverage report, go to `testing/target/site/jacoco-aggregate` and open `index.html`
* If you wish to build the project without running tests:
```sh
mvn clean install -DskipTests
```
* If you wish to run integration tests
```sh
mvn clean install -P integration-test
```
* Running locally
* Navigate to the search service's root folder and run (a sample request follows the command below):
```sh
mvn jetty:run
```
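Once Jetty is up, the service can be exercised locally. The request below is only an illustrative sketch: it assumes the default local port 8080, a valid bearer token in `TOKEN`, and a kind that exists in your environment; required headers and fields can differ per deployment.
```sh
# Hypothetical local smoke test against the query endpoint (adjust port, token and kind)
curl -X POST "http://localhost:8080/api/search/v2/query" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind": "<tenant>:<source>:<entity>:<version>", "query": "*"}'
```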
### Deployment
* Data-Lake Indexer Service Google Cloud Endpoints on App Engine Standard environment
* Edit the appengine-web.xml
* Open the [appengine-web.xml](indexer/src/main/webapp/WEB-INF/appengine-web.xml) file in an editor and replace `YOUR-PROJECT-ID` in the `PROJECT` line with your Google Cloud Platform project ID. Also update `STORAGE_HOST`, `STORAGE_SCHEMA_HOST`, `IDENTITY_QUERY_ACCESS_HOST` and `IDENTITY_AUTHORIZE_HOST` based on your deployment.
* Deploy
```sh
mvn appengine:deploy -pl org.opengroup.osdu.search:indexer -amd
```
* If you wish to deploy the indexer service without running tests
```sh
mvn appengine:deploy -pl org.opengroup.osdu.search:indexer -amd -DskipTests
```
* Data-Lake Search Google Cloud Endpoints on App Engine Flex environment
* Edit the app.yaml
* Open the [app.yaml](search/src/main/appengine/app.yaml) file in an editor and replace `YOUR-PROJECT-ID` in the `PROJECT` line with your Google Cloud Platform project ID. Also update `SEARCH_HOST`, `STORAGE_HOST`, `STORAGE_SCHEMA_HOST`, `IDENTITY_QUERY_ACCESS_HOST` and `IDENTITY_AUTHORIZE_HOST` based on your deployment.
* Deploy
```sh
mvn appengine:deploy -pl org.opengroup.osdu.search:search -amd
```
* If you wish to deploy the search service without running tests
```sh
mvn appengine:deploy -pl org.opengroup.osdu.search:search -amd -DskipTests
```
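After a deployment, you can confirm which services and versions are serving in the selected project (a hedged check; service names depend on your appengine-web.xml and app.yaml):
```sh
# List the App Engine services and deployed versions in the active project
gcloud app services list
gcloud app versions list
```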
### Cloud Environment Setup
Refer to [Cloud Environment Setup](docs/setup.md) whenever setting up new services in new Google Cloud projects
### Open API spec
go-swagger brings to the Go community a complete suite of fully-featured, high-performance API components to work with a Swagger API: server, client, and data model.
* How to generate go client libraries?
Assumptions:
a. Running Windows
b. Using Powershell
c. Directory for source code: C:\devel\
1. Install Golang
2. Install go-swagger.exe, add to $PATH
```
go get -u github.com/go-swagger/go-swagger/cmd/swagger
```
3. Create the following directories:
```
C:\devel\datalake-test\src\
```
4. Copy “search_openapi.json” to “C:\devel\datalake-test\src”
5. Set environment variable GOPATH (run the following in Powershell):
```
$env:GOPATH="C:\devel\datalake-test\"
```
6. Change current directory to “C:\devel\datalake-test\src”
```
cd C:\devel\datalake-test\src
```
7. Run the following command:
```
swagger generate client -f 'search_openapi.json' -A search_openapi
```
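If generation succeeds, go-swagger writes its output under the source directory assumed above; by default a client is generated into `client` and `models` packages, which you can inspect from PowerShell:
```
ls C:\devel\datalake-test\src\client
ls C:\devel\datalake-test\src\models
```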
### Maintenance
* Indexer:
* Cleanup indexes - the Indexer has a cron job that hits the following URL:
```
/_ah/cron/indexcleanup
```
Note: The job runs for all the tenants in a deployment. It will delete all indices matching the pattern:
```
<accountid>indexpattern
```
where `indexpattern` is the regular expression for the indices you want to delete.
`indexpattern` is defined in the indexer's web.xml file via the `CRON_INDEX_CLEANUP_PATTERN` environment variable.
The scheduling of cron is done in the following repository:
https://slb-swt.visualstudio.com/data-management/_git/deployment-init-scripts?path=%2F3_post_deploy%2F1_appengine_cron%2Fcron.yaml&version=GBmaster
# Maven
# Build your Java project and run tests with Apache Maven.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/java

trigger:
  branches:
    include:
    - master
  paths:
    exclude:
    - README.md
    - .gitignore

pool:
  name: dps-build
  demands: maven

steps:
- task: Maven@3
  displayName: 'build, test, code coverage'
  inputs:
    mavenPomFile: 'pom.xml'
    options: '--settings ./search-core/maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN) -U'
    #testResultsFiles: '**/*/TEST-*.xml'
    #codeCoverageToolOption: JaCoCo
    goals: 'clean install'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'testing/integration-tests'
    includeRootFolder: true
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/search-integration-tests.zip'
    replaceExistingArchive: true

- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder:
    Contents: |
      provider/search-byoc/target/*-spring-boot.jar
      provider/search-gcp/target/*-spring-boot.jar
      provider/search-gcp/src/main/appengine/app.yaml
      provider/search-gcp/src/main/resources/application.properties
      provider/search-gcp/scripts/deploy.sh
      provider/search-integration-tests.zip
    TargetFolder: '$(build.artifactstagingdirectory)'
    flattenFolders: true

- task: CopyFiles@2
  displayName: 'Copy Azure artifacts for maven deploy to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder:
    Contents: |
      pom.xml
      provider/search-azure/maven/settings.xml
      provider/search-azure/pom.xml
      provider/search-azure/target/*-spring-boot.jar
    TargetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
  condition: succeededOrFailed()
# Maven
# Build your Java project and run tests with Apache Maven.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/java

trigger:
  branches:
    include:
    - kuber-migration
  paths:
    exclude:
    - README.md
    - .gitignore

pool:
  name: Hosted Ubuntu 1604
  demands: maven

variables:
  buildMavenModules: search-core,provider/search-gcp
  dockerDir: provider/search-gcp/docker
  imageName: os-search-app
  deploymentDir: provider/search-gcp/kubernetes/deployments
  deploymentFile: deployment-os-search-service.yml
  mavenSettings: ./search-core/maven/settings.xml
  integrationTestCorePom: testing/integration-tests/search-test-core/pom.xml
  integrationTestGcpPom: testing/integration-tests/search-test-gcp/pom.xml

steps:
- task: DownloadSecureFile@1
  name: gcrKey
  inputs:
    secureFile: cicd-push-image-to-cr-keyfile.json

- task: DownloadSecureFile@1
  name: kuberConfig
  inputs:
    secureFile: kubeconfig

- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    options: '--settings $(mavenSettings) -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN) -pl $(buildMavenModules) package'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false

- bash: |
    #!/bin/bash
    set -e

    # Build the service image and push it to the container registry
    pushd $(dockerDir)
    docker-compose build $(imageName)
    echo 'Image done.'
    cat $(gcrKey.secureFilePath) | docker login -u _json_key --password-stdin https://gcr.io
    echo 'Login done.'
    docker push gcr.io/opendes/$(imageName)
    echo 'Push done.'
    popd

    # Recreate the Kubernetes deployment
    pushd $(deploymentDir)
    kubectl --kubeconfig $(kuberConfig.secureFilePath) delete -f $(deploymentFile)
    kubectl --kubeconfig $(kuberConfig.secureFilePath) apply -f $(deploymentFile)
    popd

    # Wait until the readiness endpoint returns HTTP 200 before running integration tests
    attempt_counter=0
    max_attempts=60
    until [[ $(curl --head --write-out %{http_code} $(SEARCH_READINESS_URL) --silent -o /dev/null --fail) -eq 200 ]]; do
      if [ ${attempt_counter} -eq ${max_attempts} ]; then
        echo "Service is not available, integration tests are skipped"
        exit 1
      fi
      printf '.'
      attempt_counter=$(($attempt_counter+1))
      sleep 2
    done

- task: Maven@3
  inputs:
    mavenPomFile: '$(integrationTestCorePom)'
    options: '--settings $(mavenSettings) -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN) install'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false

- task: Maven@3
  inputs:
    mavenPomFile: '$(integrationTestGcpPom)'
    options: '--settings $(mavenSettings) -Dsurefire.useFile=false -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN) -DDEFAULT_DATA_PARTITION_ID_TENANT1=$(DEFAULT_DATA_PARTITION_ID_TENANT1) -DDEFAULT_DATA_PARTITION_ID_TENANT2=$(DEFAULT_DATA_PARTITION_ID_TENANT2) -DELASTIC_HOST=$(ELASTIC_HOST) -DELASTIC_PASSWORD=$(ELASTIC_PASSWORD) -DELASTIC_USER_NAME=$(ELASTIC_USER_NAME) -DENTITLEMENTS_DOMAIN=$(ENTITLEMENTS_DOMAIN) -DINDEXER_HOST=$(INDEXER_HOST) -DLEGAL_TAG=$(LEGAL_TAG) -DOTHER_RELEVANT_DATA_COUNTRIES=$(OTHER_RELEVANT_DATA_COUNTRIES) -DSEARCH_HOST=$(SEARCH_HOST) -DSEARCH_INTEGRATION_TESTER=$(SEARCH_INTEGRATION_TESTER) -DSTORAGE_HOST=$(STORAGE_HOST) compile'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
#####################
# README: Defines a template to be used as a starting point for defining a service pipeline
#####################

trigger:
  batch: true
  branches:
    include:
    - master
  paths:
    exclude:
    - /**/*.md
    - .gitignore
    - images/

pr:
  autoCancel: false
  branches:
    include:
    - '*'
    exclude:
    - master
  paths:
    exclude:
    - /**/*.md
    - .gitignore
    - images/

resources:
  repositories:
  - repository: infrastructure-templates
    type: git
    name: open-data-ecosystem/infrastructure-templates

variables:
- group: 'Azure Common Secrets'
- group: 'Azure - Common'
- name: serviceName
  value: 'search'

stages:
- template: devops/service-pipelines/build-stage.yml@infrastructure-templates
  parameters:
    mavenPublishJUnitResults: true
    mavenOptions: '--settings ./search-core/maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN) -U'
    copyFileContents: |
      pom.xml
      provider/search-azure/maven/settings.xml
      provider/search-azure/pom.xml
      provider/search-azure/target/*-spring-boot.jar
    copyFileContentsToFlatten: |
      provider/search-byoc/target/*-spring-boot.jar
      provider/search-gcp/target/*-spring-boot.jar
      provider/search-gcp/src/main/appengine/app.yaml
      provider/search-gcp/src/main/resources/application.properties
      provider/search-gcp/scripts/deploy.sh
      provider/search-integration-tests.zip
    serviceBase: ${{ variables.serviceName }}
    testingRootFolder: 'testing/integration-tests'
- template: devops/service-pipelines/deploy-stages.yml@infrastructure-templates
  parameters:
    serviceName: ${{ variables.serviceName }}
    testCoreMavenPomFile: 'integration-tests/search-test-core/pom.xml'
    testCoreMavenOptions: '--settings $(System.DefaultWorkingDirectory)/drop/deploy/integration-tests/maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN)'
    providers:
    - name: Azure
      # Merges into Master
      ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
        environments: ['devint', 'qa', 'prod']
      # PR updates / creations
      ${{ if ne(variables['Build.SourceBranchName'], 'master') }}:
        environments: ['devint']
Copyright 2017-2019, Schlumberger
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Architecture
## Indexer-Service Deployment
![indexer-service](./images/indexer-service.png)
## Search-Service Deployment
![search-service](./images/search-service.png)
Copyright 2017-2019, Schlumberger
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Search service audit logs
This document presents the expected audit logs for each API in the Search service. __One and only one__ audit log is expected per request.
It is important to note that audit logs __must not__ include technical information such as response codes, exceptions, stack traces, etc.
## ``GET /api/search/v2/index/schema/{kind}``
- Action id: ``SE001``
- Action: ``READ``
- Operation: Get index's schema
- Outcome: ``SUCCESS``
- Description: User got index's schema successfully
## ``POST /api/search/v2/query_with_cursor``
- Action id: ``SE002``
- Action: ``READ``
- Operation: Query index with cursor
- Outcome: ``SUCCESS``
- Description: User queried index with cursor successfully
## ``POST /api/search/v2/query``
- Action id: ``SE003``
- Action: ``READ``
- Operation: Query index
- Outcome: ``SUCCESS``
- Description: User queried index successfully
## ``DELETE /api/search/v2/index/{kind}``
- Action id: ``SE004``
- Action: ``DELETE``
- Operation: Delete index
- Outcome: ``SUCCESS``
- Description: User deleted index successfully
Copyright 2017-2019, Schlumberger
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Data Ecosystem Service Dashboard
## Links
Unified summary report: [link](https://datastudio.google.com/u/0/reporting/1h_edNOGn2aCefxuT17YXuHS9-nIqY5vP/page/nQN)
### Sub links
All of the links below can be found on the unified summary report, which is the suggested entry point to the dashboard.
Search Detail Report: [link](https://datastudio.google.com/u/0/reporting/1VdLunUNNlci6OG3Gx8X6pv4czSN3T5Wv/page/nQN)
Search Analytics Report: [link](https://datastudio.google.com/u/0/reporting/1-xZu20zsTlDtMCgUYz1wdNbYwVbbVNXm/page/nQN)
Indexer Detail Report: [link](https://datastudio.google.com/u/0/reporting/15VPuCNjeVw26Ek2E6WxAtEPe-oNdsP-6/page/nQN)
## Dashboard structure
There are three levels of reports: Summary, Detail, and Analytics; all reports cover all environments.
Summary Report: integrates all services in the data ecosystem into a one-page report and shows latency, request, and error metrics
Detail Report: reports latency break-down and error details
Analytics Report: brings BI into the report
## Application log
The application log has two parts: the audit log (info required for all services) and the customized log (info specific to an individual service).
Refer to [AuditLogStructure.java](../core/src/main/java/com/slb/com/logging/AuditLogStrcture.java) for the audit info that must be logged in the application log.
Refer to [SearchLogStructure.java](../search/src/main/java/com/slb/com/logging/SearchLogStructure.java) as an example of customized info to log in the application log.
Remember to log the trace id in the application log at the same location as the nginx request. Refer to [AuditLogStructure.java](../core/logging/AuditLogStrcture.java) for how to log the trace id in the application log.
## Query log sink
Once the logging implementation is finished, you need to create the sinks. Taking the search service as an example, the incoming request payload and the equivalent Elasticsearch queries are logged; queries are logged under search.app in StackDriver. You can also refer to the [existing sinks](https://console.cloud.google.com/logs/exports?project=evd-ddl-us-services&organizationId=980806506473&minLogLevel=0&expandAll=false&timestamp=2018-07-16T19:27:37.041000000Z&customFacets=&limitCustomFacetWidth=true&dateRangeStart=2018-07-16T18:27:37.292Z&dateRangeEnd=2018-07-16T19:27:37.292Z&interval=PT1H&resource=gae_app%2Fmodule_id%2Findexer&logName=projects%2Fevd-ddl-us-services%2Flogs%2Fappengine.googleapis.com%252Frequest_log) in the Google Cloud project to create a new one.
To create a BigQuery sink to export logs:
* Create filter in StackDriver
```sh
resource.type="gae_app"
resource.labels.module_id="search"
logName="projects/slb-data-lake-dev/logs/search.app"
jsonPayload.elastic-query="*" OR
jsonPayload.request-query="*"
```
* Create a BigQuery [sink](https://cloud.google.com/logging/docs/export/configure_export_v2)
* Query logs will start showing up in BigQuery in the query_log_ table once the sink is set up properly
* The jsonPayload_request_query and jsonPayload_elastic_query columns contain the logged queries.
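Once rows start arriving, a quick way to verify the export is a `bq` query against the sink table; this is a hedged example in which the project and dataset names are placeholders for your own:
```sh
# Illustrative check that logged queries are being exported (placeholder project/dataset)
bq query --use_legacy_sql=false \
  'SELECT jsonPayload_request_query, jsonPayload_elastic_query
   FROM `YOUR-PROJECT.YOUR_SINK_DATASET.query_log_*`
   LIMIT 10'
```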
## Big Query Scripts
Now that you have sinks dumping the logs into BigQuery (a separate table is created for each day), it is time to write the BigQuery scripts that organize the information for the report.
All dashboard-related BigQuery scripts are located under the "service_dashboard_datasources" dataset in [Google BigQuery](https://bigquery.cloud.google.com/project/evd-ddl-us-services). If a new service needs to be added to the summary report, follow the service_request script and modify it to union the new service's request log. If you want to build a detail or analytics report for a new service, take the "search_combined", "indexer_combined", and "indexer_issue_records" BigQuery views as examples.
### How to modify the existing views
Go to [Google BigQuery](https://bigquery.cloud.google.com/project/evd-ddl-us-services) and find the view you want to modify under the "service_dashboard_datasources" dataset, then click "Details" on the right to see the BigQuery script at the bottom. You can modify the script by clicking the "Edit query" button and save it with "Save view" once you have finished.
### Summary report datasource (service_dashboard_datasrouce view)
This view is used for the summary report and currently depends only on the nginx request log, so it can easily be extended to other services and integrate them into a one-page report. Once the application log is implemented for all services, we can include application log information in this view.
Here is an example, with explanation, of the search service script for the view:
```bigquery script
select timestamp, receiveTimeStamp, httpRequest.status as Status, httpRequest.latency as ResponseTime, httpRequest.requestUrl as requestURL, httpRequest.requestMethod as requestMethod, "P4D-EU" as env, resource.labels.module_id as service, resource.labels.version_id as versionId,
labels.appengine_googleapis_com_trace_id as traceId,
CASE WHEN REGEXP_CONTAINS(httpRequest.requestURL, '.*/index/schema.*') AND httpRequest.requestMethod='GET' THEN 'getKindSchema'
WHEN REGEXP_CONTAINS(httpRequest.requestURL, '.*/index.*') AND httpRequest.requestMethod='DELETE' THEN 'deleteIndex'
WHEN REGEXP_CONTAINS(httpRequest.requestURL, '.*/query(\?)+.*') AND httpRequest.requestMethod='POST' THEN 'query'
WHEN REGEXP_CONTAINS(httpRequest.requestURL, '.*/query_with_cursor(\?).*') AND httpRequest.requestMethod='POST' THEN 'query'
ELSE 'others' END as APIName
from `p4d-ddl-eu-services.p4d_datalake_search_all_logs.appengine_googleapis_com_nginx_request_*`
```
The script uses the REGEXP_CONTAINS function to classify the API name for every search service request, so that the Data Studio dashboard can build latency, request, and error reports for each API separately within the same service. When a new service needs to be added to the summary report, write a similar script for that service's nginx request log and union them together, as sketched below.
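A hedged sketch of that union step follows; the project and dataset names are placeholders rather than the real export tables, and only a few representative columns are shown:
```sh
# Illustrative only: give a second service's request log the same column shape and union it in
bq query --use_legacy_sql=false '
SELECT timestamp, httpRequest.status AS Status, httpRequest.latency AS ResponseTime,
       resource.labels.module_id AS service
FROM `YOUR-PROJECT.search_logs.appengine_googleapis_com_nginx_request_*`
UNION ALL
SELECT timestamp, httpRequest.status, httpRequest.latency,
       resource.labels.module_id
FROM `YOUR-PROJECT.newservice_logs.appengine_googleapis_com_nginx_request_*`'
```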
### Detail/Analytics report datasource
Here we use the "search_combined" view as an example. This view combines the search service application log and nginx request log by joining on the trace id. For the detail report of a new service, write a similar script for that service's application log and save it as a new BigQuery view, then create a separate detail report in Data Studio and add its link to the summary report datasource (refer to [Add new service into summary report](#add-new-service-into-summary-report) for how to do this).
```bigquery script
select requestLog.timestamp as timestamp, requestLog.receiveTimeStamp as receiveTimeStamp, requestLog.httpRequest.status as Status, requestLog.httpRequest.latency as ResponseTime, entitlementsLog.httpRequest.latency as EntitlementsLatency,
searchLog.jsonPayload.applicationLog.onbehalfof, searchLog.jsonPayload.applicationLog.userid, searchLog.jsonPayload.applicationLog.slbaccountid, searchLog.jsonPayload.applicationLog.correlationid,
searchLog.jsonPayload.applicationLog.request.kind as kind, searchLog.jsonPayload.applicationLog.request.query as query,
safe_cast(searchLog.jsonPayload.applicationLog.elasticSearchLatency as float64) as elasticSearchLatency, "P4D-EU" as env,
searchLog.resource.labels.module_id as service, searchLog.resource.labels.version_id as versionId,
safe_cast(split(searchLog.jsonPayload.applicationLog.geolocation.userLocation, ",")[SAFE_OFFSET(0)] as float64) as latitude,
safe_cast(split(searchLog.jsonPayload.applicationLog.geolocation.userLocation, ",")[SAFE_OFFSET(1)] as float64) as longitude,
searchLog.jsonPayload.applicationLog.geolocation.userCity as city, searchLog.jsonPayload.applicationLog.geolocation.userCountry as country, searchLog.jsonPayload.applicationLog.geolocation.userRegion as region,
CASE WHEN REGEXP_CONTAINS(requestLog.httpRequest.requestURL, '.*/index/schema.*') AND requestLog.httpRequest.requestMethod='GET' THEN 'getKindSchema'
WHEN REGEXP_CONTAINS(requestLog.httpRequest.requestURL, '.*/index.*') AND requestLog.httpRequest.requestMethod='DELETE' THEN 'deleteIndex'
WHEN REGEXP_CONTAINS(requestLog.httpRequest.requestURL, '.*/query(\?)+.*') AND requestLog.httpRequest.requestMethod='POST' THEN 'query'
WHEN REGEXP_CONTAINS(requestLog.httpRequest.requestURL, '.*/query_with_cursor(\?).*') AND requestLog.httpRequest.requestMethod='POST' THEN 'query'
ELSE 'others' END as APIName,
requestLog.resource.labels.project_id as projectId,
requestLog.labels.appengine_googleapis_com_trace_id as traceId
from `p4d-ddl-eu-services.p4d_datalake_search_all_logs.appengine_googleapis_com_nginx_request_*` as requestLog
INNER JOIN `p4d-ddl-eu-services.p4d_datalake_application_log.search_app_*` as searchLog ON requestLog.labels.appengine_googleapis_com_trace_id = searchLog.labels.appengine_googleapis_com_trace_id
INNER JOIN `p4d-ddl-eu-services.p4d_datalake_entitlements_all_logs.appengine_googleapis_com_nginx_request_*` as entitlementsLog ON requestLog.labels.appengine_googleapis_com_trace_id = entitlementsLog.labels.appengine_googleapis_com_trace_id
```
## Data Studio
Once the BigQuery scripts are ready, please contact [Mingyang Zhu](mailto:mzhu9@slb.com) for editor permission on the dashboard, then reconnect the data sources that have been modified or add new reports.
### Add new service into summary report
Refresh the "service_request" datasource in Data Studio by reconnecting it to the modified BigQuery view, and the service should be added to the summary report automatically.
### Add detail/Analytics report for new service
Create a new datasource by connecting to the new BigQuery view for the new service. Clone the existing detail/analytics report as a starting point and add or delete report components as needed. Remember to add CASE branches with the report links to the "Detail Report Link" and "Analytics Report Link" fields in the "service_request" datasource, which look like the following.
```Detail Report Link
CASE
WHEN Service='search' THEN 'https://datastudio.google.com/open/1VdLunUNNlci6OG3Gx8X6pv4czSN3T5Wv'
WHEN Service='indexer' THEN 'https://datastudio.google.com/open/15VPuCNjeVw26Ek2E6WxAtEPe-oNdsP-6'
ELSE 'https://datastudio.google.com/open/1h_edNOGn2aCefxuT17YXuHS9-nIqY5vP'
END
```