Commit 8d1f6160 authored by Aliaksandr Ramanovich (EPAM)

GONRG-5894-gc-rename

parent 8748f406
1 merge request: !430 GONRG-5894-gc-rename
Showing changes with 301 additions and 301 deletions
......@@ -8,47 +8,47 @@ cli:
project: Schema
analyze:
modules:
- name: os-schema
type: mvn
target: pom.xml
path: .
- name: os-schema-core
type: mvn
target: schema-core/pom.xml
path: .
- name: os-schema-aws
type: mvn
target: provider/schema-aws/pom.xml
path: .
- name: os-schema-gcp
type: mvn
target: provider/schema-gcp/pom.xml
path: .
- name: os-schema-ibm
type: mvn
target: provider/schema-ibm/pom.xml
path: .
- name: os-schema-azure
type: mvn
target: provider/schema-azure/pom.xml
path: .
- name: google
type: pip
target: deployments/scripts/google
path: deployments/scripts/google
- name: ibm
type: pip
target: deployments/scripts/ibm
path: deployments/scripts/ibm
- name: scripts
type: pip
target: deployments/scripts
path: deployments/scripts
- name: aws
type: pip
target: deployments/scripts/aws
path: deployments/scripts/aws
- name: azure
type: pip
target: deployments/scripts/azure
path: deployments/scripts/azure
- name: os-schema
type: mvn
target: pom.xml
path: .
- name: os-schema-core
type: mvn
target: schema-core/pom.xml
path: .
- name: os-schema-aws
type: mvn
target: provider/schema-aws/pom.xml
path: .
- name: os-schema-gc
type: mvn
target: provider/schema-gc/pom.xml
path: .
- name: os-schema-ibm
type: mvn
target: provider/schema-ibm/pom.xml
path: .
- name: os-schema-azure
type: mvn
target: provider/schema-azure/pom.xml
path: .
- name: google
type: pip
target: deployments/scripts/google
path: deployments/scripts/google
- name: ibm
type: pip
target: deployments/scripts/ibm
path: deployments/scripts/ibm
- name: scripts
type: pip
target: deployments/scripts
path: deployments/scripts
- name: aws
type: pip
target: deployments/scripts/aws
path: deployments/scripts/aws
- name: azure
type: pip
target: deployments/scripts/azure
path: deployments/scripts/azure
variables:
GCP_BUILD_SUBDIR: provider/schema-gcp
GCP_INT_TEST_SUBDIR: testing/schema-test-gcp
GCP_BUILD_SUBDIR: provider/schema-gc
GCP_INT_TEST_SUBDIR: testing/schema-test-gc
GCP_APPLICATION_NAME: os-schema
GCP_ENVIRONMENT: testing
GCP_PROJECT: opendes-evt
GCP_TENANT_NAME: opendesevt
GCP_DEPLOY_ENV: p4d
GCP_DOMAIN: cloud.slb-ds.com
# FIXME remove when all services are migrated to a single helm
OSDU_GCP_ENABLE_HELM_CONFIG: "false"
IBM_BUILD_SUBDIR: provider/schema-ibm
IBM_INT_TEST_SUBDIR: testing/schema-test-core
......@@ -79,9 +77,9 @@ include:
file: "cloud-providers/azure.yml"
- project: "osdu/platform/ci-cd-pipelines"
file: "cloud-providers/osdu-gcp-global.yml"
file: "cloud-providers/gc-global.yml"
- local: "devops/gcp/pipeline/override-stages.yml"
- local: "devops/gc/pipeline/override-stages.yml"
- local: "/devops/azure/gitlab-bootstrap.yml"
- local: "/devops/aws/bootstrap.yaml"
......
......@@ -34,6 +34,7 @@ Apache-2.0
========================================================================
The following software have components provided under the terms of this license:
- AHC/Client (from https://repo1.maven.org/maven2/org/asynchttpclient/async-http-client)
- AMQP 1.0 JMS Spring Boot AutoConfiguration (from https://repo1.maven.org/maven2/org/amqphub/spring/amqp-10-jms-spring-boot-autoconfigure)
- AMQP 1.0 JMS Spring Boot Starter (from https://repo1.maven.org/maven2/org/amqphub/spring/amqp-10-jms-spring-boot-starter)
- ASM based accessors helper used by json-smart (from https://urielch.github.io/)
......@@ -318,7 +319,6 @@ The following software have components provided under the terms of this license:
- Apache Log4j SLF4J Binding (from https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-slf4j-impl)
- Apache Log4j to SLF4J Adapter (from https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-to-slf4j)
- AssertJ Core (from ${project.organization.url}#${project.artifactId})
- Asynchronous Http Client (from https://repo1.maven.org/maven2/org/asynchttpclient/async-http-client)
- Asynchronous Http Client Netty Utils (from https://repo1.maven.org/maven2/org/asynchttpclient/async-http-client-netty-utils)
- AutoValue Annotations (from https://github.com/google/auto/tree/master/value, https://repo1.maven.org/maven2/com/google/auto/value/auto-value-annotations)
- BSON (from http://bsonspec.org, https://bsonspec.org)
......
......@@ -4,9 +4,9 @@ The Schema Service is a Maven multi-module project with each cloud implemention
### 1. Google Cloud deployment
Instructions for running the Google Cloud implementation in the cloud can be found [here](./provider/schema-gcp/README.md).
Instructions for running the Google Cloud implementation in the cloud can be found [here](./provider/schema-gc/README.md).
### 2. Azure deployment
Instructions for running the Azure implementation in the cloud can be found [here](https://community.opengroup.org/osdu/platform/system/schema-service/-/blob/master/provider/schema-azure/README.md).
......@@ -15,34 +15,34 @@ Instructions for running the Azure implementation in the cloud can be found [her
DevSanity tests are located in the schema-core project, in the testing directory under the project root directory.
1. Google Cloud
These tests validate the functionality of the Schema Service.
They can be run or debugged directly in your IDE of choice, or from the command line using the command below from the schema-core project.
The command below has to be run after the complete project has been built.
Instructions for running the Google Cloud integration tests can be found [here](./provider/schema-gcp/README.md).
Instructions for running the Google Cloud integration tests can be found [here](./provider/schema-gc/README.md).
The command below can be run through azure-pipeline.yml after setting the environment variables in the pipeline.
verify
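Locally, a hedged sketch of the same `verify` run; the module path, Maven goal, and environment variable names are taken from this repository's Azure pipeline, while every angle-bracket value is an assumption to be replaced with your deployment's settings:

```bash
# Build the complete project first, then run the DevSanity tests from the test module.
# All values in angle brackets are placeholders (assumptions), not defaults.
export INTEGRATION_TEST_AUDIENCE="<oauth-audience>"
export INTEGRATION_TESTER="<service-account-credentials>"
export PRIVATE_TENANT1="<private-tenant-1>"
export PRIVATE_TENANT2="<private-tenant-2>"
export SHARED_TENANT="<shared-tenant>"
export HOST="<schema-service-host>"
export VENDOR="<cloud-vendor>"

mvn clean install
mvn verify -f testing/schema-test-core/pom.xml
```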
## Deploy Shared Schemas
As part of its deployment, the Schema Service deploys pre-defined OSDU schemas so that end users can refer to community-accepted schemas. These schemas are present in [this folder](./deployments/shared-schemas/osdu) and the script to deploy them is available [here](deployments/scripts).
Details to deploy shared schemas can be found under [README.md](./deployments/shared-schemas/README.md)
##AWS
## AWS
Instructions for running and testing this service can be found [here](./provider/schema-aws/README.md)
## License
Copyright 2017-2020, Schlumberger
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
......
# Shared Schemas
The purpose of this folder set is to contain schema definitions in a state ready to
register with the **Schema Service**. Each schema version will have its own file,
grouped together with all parallel versions under a folder carrying the entity name.
Example `<schema-authority>/<group-type-folder>/entity-schema-version.json`
The deployment pipeline will only deploy pre-processed schemas in this `shared-schemas`
folder. The script to do this is [DeploySharedSchemas.py](../scripts/DeploySharedSchemas.py), see
step **Upload schema definitions** below. The pre-processed schemas are produced by
OSDU Data Definitions
(see [](https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/tree/master)).
The structure of JSON files to register matches the expected payload of the Schema Service
POST/PUT requests:
```json
```
......@@ -39,9 +38,9 @@ POST/PUT requests:
The `"schema"` property carries the full schema definition - omitted in the above example.
Schemas may refer to abstract entity definitions or other external schema fragments. The
Schema Service requires the abstract definitions and schema fragments to be registered prior
to the registration of the main entity schema. This is achieved by a file defining the
load sequence per schema version. An example can be found
[here for OSDU R3](../shared-schemas/osdu/load_sequence.1.0.0.json).
## Upload schema definitions
......@@ -67,8 +66,8 @@ example:
python deployments\scripts\DeploySharedSchemas.py -u https://opengroup.test.org/api/schema-service/v1/schema
```
### Environment variable needed to execute the Token.py script
```python
import os
JSON_KEY = os.environ.get('JSON_KEY')
```
......@@ -77,9 +76,9 @@ JSON_KEY = os.environ.get('JSON_KEY')
The above snippet is from the [Token.py](../scripts/google/Token.py) script and lists the required
environment variable for the JSON key. This value can differ depending on each cloud vendor's token-generation logic.
### Bearer Token Generation
###Bearer Token Generation
Bearer token generation logic can differ for each cloud vendor, so each vendor can provide its own implementation in the format below, in a vendor-specific folder under scripts, e.g. [google](../scripts/google/). To generate a token
for the Google Cloud implementation, the following script is used in the [azure pipeline](../../azure-pipelinea.yml):
......@@ -88,8 +87,8 @@
```shell script
BEARER_TOKEN=`python deployments/scripts/google/Token.py`
```
We export the generated token to `BEARER_TOKEN`, which is used by the DeploySharedSchemas.py script.
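As a minimal, hedged sketch (mirroring the bash step used later in this repository's Azure pipeline; the key and audience values are environment-specific assumptions):

```bash
# Generate a bearer token for the Google Cloud implementation.
export JSON_KEY="<service-account-json-key>"      # assumption: your service account key
export AUDIENCE="<integration-test-audience>"     # assumption: your OAuth audience
BEARER_TOKEN=`python deployments/scripts/google/Token.py`
export BEARER_TOKEN=$BEARER_TOKEN
```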
### Environment variables needed to execute the DeploySharedSchemas.py script
```python
import os
BEARER_TOKEN = os.environ.get('BEARER_TOKEN')
```
......@@ -100,8 +99,8 @@ DATA_PARTITION = os.environ.get('DATA_PARTITION')
The above snippet is from the [Utility.RunEnv](../scripts/Utility.py) class and lists the required
environment variables for the bearer token, app key, and tenant/data-partition-id.
### Yaml Pipeline configurations
```shell script
#!/bin/bash
pip install -r deployments/scripts/google/requirements.txt
```
......@@ -121,7 +120,8 @@ In the above script we first install all the required dependencies, then create
A sample YAML can be found in the [azure pipeline](../../azure-pipelinea.yml).
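Assembled from that pipeline's bash step, here is a hedged end-to-end sketch of the whole deployment step (the service URL and all angle-bracket values are assumptions):

```bash
#!/bin/bash
# Install script dependencies, generate a token, then deploy the shared schemas.
pip install -r deployments/scripts/google/requirements.txt

export JSON_KEY="<service-account-json-key>"      # assumption
export AUDIENCE="<integration-test-audience>"     # assumption
BEARER_TOKEN=`python deployments/scripts/google/Token.py`
export BEARER_TOKEN=$BEARER_TOKEN
export APP_KEY=""
export DATA_PARTITION="<data-partition-id>"       # assumption

python deployments/scripts/DeploySharedSchemas.py -u https://<schema-host>/api/schema-service/v1/schema
```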
### Schema Registration
The upload will depend on the status of the schemas. Schemas in `DEVELOPMENT` can be updated,
schemas in status `PUBLISHED` can only be created once (POST).
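For illustration only, a hedged sketch of the direct REST calls this behaviour corresponds to; the endpoint path follows the example above, while the header names and payload file are assumptions rather than something confirmed by this repository:

```bash
# Register a new schema (allowed once for schemas in PUBLISHED status).
curl -X POST "https://<schema-host>/api/schema-service/v1/schema" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -H "data-partition-id: <data-partition-id>" \
  -H "Content-Type: application/json" \
  -d @osdu/work-product-component/Activity.1.0.0.json

# Update a schema that is still in DEVELOPMENT status.
curl -X PUT "https://<schema-host>/api/schema-service/v1/schema" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -H "data-partition-id: <data-partition-id>" \
  -H "Content-Type: application/json" \
  -d @osdu/work-product-component/Activity.1.0.0.json
```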
The script produces output like:
......@@ -140,14 +140,16 @@ All 120 schemas registered or updated.
In case of errors, the list of failed creations/updates is summarized at the end.
### Environment clean up (GCP)
### Environment clean up (Google Cloud)
Schema bootstrapping, which runs during new platform configuration, creates schema records in Datastore that are not removed automatically when the deployment is deleted.
If the platform deployment must be re-installed, the cleanup script must be executed.
Scripts for cleanup schemas can be found in [DatastoreCleanUp.py](../scripts/DatastoreCleanUp.py)
Scripts for cleanup schemas can be found in [GCDatastoreCleanUp.py](../scripts/GCDatastoreCleanUp.py)
```bash
pip install -r gcp-deployment-requirements.txt
pip install -r gc-deployment-requirements.txt
```
You will need to have the following environment variables defined to run the scripts.
| name | value | description | sensitive? | source |
| --- | --- | --- | --- | --- |
......@@ -155,4 +157,4 @@ You will need to have the following environment variables defined to run scripts
| `SHARED_PARTITION_ID` | ex `osdu`| Data partition id that will be used for deletion schemas by id `"{{SHARED_PARTITION_ID}}:wks:work-product-component--Activity:1.0.0"`| no | - |
| `SCHEMA_NAMESPACE` | ex `dataecosystem`| If not specified default `dataecosystem` will be used | no | - |
| `SCHEMA_KIND` | ex `schema`| If not specified default `schema` will be used | no | - |
| `GOOGLE_APPLICATION_CREDENTIALS` | ex`usr/key.json` | Google Service account credentials with delete access to Datastore | yes | - |
\ No newline at end of file
| `GOOGLE_APPLICATION_CREDENTIALS` | ex`usr/key.json` | Google Service account credentials with delete access to Datastore | yes | - |
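A hedged sketch of running the cleanup locally with the variables above; the script and requirements paths follow the `deployments/scripts` layout referenced in this repository, and all values shown are assumptions:

```bash
# Install the dependencies for the Datastore cleanup script.
pip install -r deployments/scripts/gc-deployment-requirements.txt

# Environment-specific assumptions - replace with real values.
export SHARED_PARTITION_ID="osdu"
export SCHEMA_NAMESPACE="dataecosystem"                 # optional, default value
export SCHEMA_KIND="schema"                             # optional, default value
export GOOGLE_APPLICATION_CREDENTIALS="/usr/key.json"   # SA key with Datastore delete access

python deployments/scripts/GCDatastoreCleanUp.py
```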
trigger:
branches:
include:
- master
- master
paths:
exclude:
- README.md
- .gitignore
- .gitignore
variables:
osProjectName: schema
dockerImageName: os-$(osProjectName)-app
tag: $(Build.BuildNumber)
dockerDir: provider/$(osProjectName)-gcp/docker
dockerDir: provider/$(osProjectName)-gc/docker
deploymentName: os-$(osProjectName)-service
stages:
- stage: Build
jobs:
- job: Build
pool:
name: Hosted Ubuntu 1604
demands: Maven
steps:
- task: DownloadSecureFile@1
name: gcrKey
inputs:
secureFile: gcr-push-key-file.json
- task: DownloadSecureFile@1
name: kubeconfig
inputs:
secureFile: 'kubeconfig'
- task: Maven@3
displayName: Maven Build
inputs:
mavenPomFile: 'pom.xml'
mavenOptions: '-Xmx3072m'
javaHomeOption: 'JDKVersion'
jdkVersionOption: '1.8'
jdkArchitectureOption: 'x64'
publishJUnitResults: true
testResultsFiles: '**/surefire-reports/TEST-*.xml'
options: '--settings maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN)'
goals: 'clean install package'
- task: CopyFiles@2
inputs:
Contents: 'provider/$(osProjectName)-gcp/deployments/*'
TargetFolder: '$(build.artifactstagingdirectory)/deployments'
condition: succeeded()
- task: Bash@3
inputs:
targetType: 'inline'
script: |
#!/bin/bash
pushd $(dockerDir)
- stage: Build
jobs:
- job: Build
pool:
name: Hosted Ubuntu 1604
demands: Maven
steps:
- task: DownloadSecureFile@1
name: gcrKey
inputs:
secureFile: gcr-push-key-file.json
- task: DownloadSecureFile@1
name: kubeconfig
inputs:
secureFile: "kubeconfig"
- task: Maven@3
displayName: Maven Build
inputs:
mavenPomFile: "pom.xml"
mavenOptions: "-Xmx3072m"
javaHomeOption: "JDKVersion"
jdkVersionOption: "1.8"
jdkArchitectureOption: "x64"
publishJUnitResults: true
testResultsFiles: "**/surefire-reports/TEST-*.xml"
options: "--settings maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN)"
goals: "clean install package"
- task: CopyFiles@2
inputs:
Contents: "provider/$(osProjectName)-gc/deployments/*"
TargetFolder: "$(build.artifactstagingdirectory)/deployments"
condition: succeeded()
- task: Bash@3
inputs:
targetType: "inline"
script: |
#!/bin/bash
pushd $(dockerDir)
cat $(gcrKey.secureFilePath) | docker login -u _json_key --password-stdin https://gcr.io
echo $(dockerImageName)
docker-compose build $(dockerImageName)
docker tag gcr.io/opendes/$(dockerImageName) gcr.io/opendes/$(dockerImageName):$(tag)
docker push gcr.io/opendes/$(dockerImageName):$(tag)
docker push gcr.io/opendes/$(dockerImageName)
echo 'Push done.'
kubectl --kubeconfig $(kubeconfig.secureFilePath) rollout restart deployment/$(deploymentName)
popd
sleep 10
OUTPUT="200 OK"
ENDPOINT=$(SCHEMA_DEV_URL)/health
echo $ENDPOINT
while [ -z "$STATUS" ]; do
STATUS=`curl -v --silent --http1.0 "$ENDPOINT" 2>&1 | grep "$OUTPUT"`
echo $STATUS
if [ -z "$STATUS" ]; then
echo "Endpoint is not up yet."
sleep 10
else
echo "Endpoint is up"
fi
done
condition: succeeded()
displayName: "build,upload and deploy docker image"
- task: Maven@3
displayName: "Running IntegrationTest"
inputs:
mavenPomFile: "testing/schema-test-core/pom.xml"
goals: "verify"
options: "--settings maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN)"
publishJUnitResults: false
javaHomeOption: "JDKVersion"
mavenVersionOption: "Default"
mavenAuthenticateFeed: false
effectivePomSkip: false
sonarQubeRunAnalysis: false
env:
INTEGRATION_TEST_AUDIENCE: $(INTEGRATION_TEST_AUDIENCE)
INTEGRATION_TESTER: $(INTEGRATION_TESTER)
PRIVATE_TENANT1: $(PRIVATE_TENANT1)
PRIVATE_TENANT2: $(PRIVATE_TENANT2)
SHARED_TENANT: $(SHARED_TENANT)
HOST: $(HOST)
VENDOR: $(VENDOR)
- task: UsePythonVersion@0
inputs:
versionSpec: "3.x"
addToPath: true
architecture: "x64"
- task: Bash@3
displayName: "Deploying shared schemas"
inputs:
targetType: "inline"
script: |
#!/bin/bash
pip install -r deployments/scripts/google/requirements.txt
export JSON_KEY=$(INTEGRATION_TESTER)
export AUDIENCE=$(INTEGRATION_TEST_AUDIENCE)
BEARER_TOKEN=`python deployments/scripts/google/Token.py`
export BEARER_TOKEN=$BEARER_TOKEN
export APP_KEY=""
export DATA_PARTITION=$(DATA_PARTITION)
python deployments/scripts/DeploySharedSchemas.py -u $(SCHEMA_DEV_URL)/schema
- task: PublishBuildArtifacts@1
displayName: "Publish Artifact: drop"
inputs:
PathtoPublish: "$(build.artifactstagingdirectory)"
ArtifactName: "drop"
publishLocation: "Container"
condition: succeededOrFailed()
- stage: DeployToQA
condition: and(succeeded(), eq(variables['Build.Reason'], 'Manual'))
variables:
sourceImageName: gcr.io/opendes/$(dockerImageName)
destinationImageName: us.gcr.io/opendes-evt/$(dockerImageName)
jobs:
- job: DeployToQA
steps:
- task: DownloadSecureFile@1
name: gcrKey
inputs:
secureFile: cicd-push-image-to-cr-keyfile.json
- task: DownloadSecureFile@1
name: gcrKeyEvt
inputs:
secureFile: cicd-push-image-to-cr-evt-keyfile.json
- task: DownloadSecureFile@1
name: kuberConfigEvt
inputs:
secureFile: kubeconfig-evt-opendes-qa-us
- bash: |
#!/bin/bash
set -e
cat $(gcrKey.secureFilePath) | docker login -u _json_key --password-stdin https://gcr.io
echo $(dockerImageName)
docker-compose build $(dockerImageName)
docker tag gcr.io/opendes/$(dockerImageName) gcr.io/opendes/$(dockerImageName):$(tag)
docker push gcr.io/opendes/$(dockerImageName):$(tag)
docker push gcr.io/opendes/$(dockerImageName)
echo 'Push done.'
kubectl --kubeconfig $(kubeconfig.secureFilePath) rollout restart deployment/$(deploymentName)
popd
sleep 10
OUTPUT="200 OK"
ENDPOINT=$(SCHEMA_DEV_URL)/health
echo $ENDPOINT
while [ -z "$STATUS" ]; do
STATUS=`curl -v --silent --http1.0 "$ENDPOINT" 2>&1 | grep "$OUTPUT"`
echo $STATUS
if [ -z "$STATUS" ]; then
echo "Endpoint is not up yet."
sleep 10
else
echo "Endpoint is up"
fi
done
condition: succeeded()
displayName: 'build,upload and deploy docker image'
- task: Maven@3
displayName: 'Running IntegrationTest'
inputs:
mavenPomFile: 'testing/schema-test-core/pom.xml'
goals: 'verify'
options: '--settings maven/settings.xml -DVSTS_FEED_TOKEN=$(VSTS_FEED_TOKEN)'
publishJUnitResults: false
javaHomeOption: 'JDKVersion'
mavenVersionOption: 'Default'
mavenAuthenticateFeed: false
effectivePomSkip: false
sonarQubeRunAnalysis: false
env:
INTEGRATION_TEST_AUDIENCE: $(INTEGRATION_TEST_AUDIENCE)
INTEGRATION_TESTER : $(INTEGRATION_TESTER)
PRIVATE_TENANT1 : $(PRIVATE_TENANT1)
PRIVATE_TENANT2 : $(PRIVATE_TENANT2)
SHARED_TENANT : $(SHARED_TENANT)
HOST : $(HOST)
VENDOR : $(VENDOR)
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
addToPath: true
architecture: 'x64'
- task: Bash@3
displayName: 'Deploying shared schemas'
inputs:
targetType: 'inline'
script: |
#!/bin/bash
pip install -r deployments/scripts/google/requirements.txt
export JSON_KEY=$(INTEGRATION_TESTER)
export AUDIENCE=$(INTEGRATION_TEST_AUDIENCE)
BEARER_TOKEN=`python deployments/scripts/google/Token.py`
export BEARER_TOKEN=$BEARER_TOKEN
export APP_KEY=""
export DATA_PARTITION=$(DATA_PARTITION)
python deployments/scripts/DeploySharedSchemas.py -u $(SCHEMA_DEV_URL)/schema
- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact: drop'
inputs:
PathtoPublish: '$(build.artifactstagingdirectory)'
ArtifactName: 'drop'
publishLocation: 'Container'
condition: succeededOrFailed()
- stage: DeployToQA
condition: and(succeeded(), eq(variables['Build.Reason'], 'Manual'))
variables:
sourceImageName: gcr.io/opendes/$(dockerImageName)
destinationImageName: us.gcr.io/opendes-evt/$(dockerImageName)
jobs:
- job: DeployToQA
steps:
- task: DownloadSecureFile@1
name: gcrKey
inputs:
secureFile: cicd-push-image-to-cr-keyfile.json
- task: DownloadSecureFile@1
name: gcrKeyEvt
inputs:
secureFile: cicd-push-image-to-cr-evt-keyfile.json
- task: DownloadSecureFile@1
name: kuberConfigEvt
inputs:
secureFile: kubeconfig-evt-opendes-qa-us
- bash: |
#!/bin/bash
set -e
cat $(gcrKey.secureFilePath) | docker login -u _json_key --password-stdin https://gcr.io
docker pull $(sourceImageName):$(tag)
cat $(gcrKeyEvt.secureFilePath) | docker login -u _json_key --password-stdin https://us.gcr.io
docker tag $(sourceImageName):$(tag) $(destinationImageName):$(tag)
docker tag $(sourceImageName):$(tag) $(destinationImageName)
docker push $(destinationImageName):$(tag)
docker push $(destinationImageName)
kubectl --kubeconfig $(kuberConfigEvt.secureFilePath) rollout restart deployment/$(deploymentName)
\ No newline at end of file
docker pull $(sourceImageName):$(tag)
cat $(gcrKeyEvt.secureFilePath) | docker login -u _json_key --password-stdin https://us.gcr.io
docker tag $(sourceImageName):$(tag) $(destinationImageName):$(tag)
docker tag $(sourceImageName):$(tag) $(destinationImageName)
docker push $(destinationImageName):$(tag)
docker push $(destinationImageName)
kubectl --kubeconfig $(kuberConfigEvt.secureFilePath) rollout restart deployment/$(deploymentName)
......@@ -23,7 +23,7 @@ trigger:
- .gitignore
- /docs
- /provider/schema-aws
- /provider/schema-gcp
- /provider/schema-gc
- /provider/schema-ibm
resources:
......@@ -36,8 +36,8 @@ resources:
name: infra-azure-provisioning
variables:
- group: 'Azure - OSDU'
- group: 'Azure - OSDU Secrets'
- group: "Azure - OSDU"
- group: "Azure - OSDU Secrets"
- name: serviceName
value: "schema-service"
......@@ -45,20 +45,20 @@ variables:
value: "devops/azure/chart"
- name: valuesFile
value: "devops/azure/chart/helm-config.yml"
- name: 'MANIFEST_REPO'
- name: "MANIFEST_REPO"
value: $[ resources.repositories['FluxRepo'].name ]
- name: 'MAVEN_CACHE_FOLDER'
- name: "MAVEN_CACHE_FOLDER"
value: $(Pipeline.Workspace)/.m2/repository
- name: SKIP_TESTS
value: 'false'
value: "false"
stages:
- template: /devops/build-stage.yml@TemplateRepo
parameters:
mavenGoal: 'package'
mavenGoal: "package"
mavenPublishJUnitResults: true
serviceCoreMavenOptions: '-P schema-core --settings .mvn/community-maven.settings.xml'
mavenOptions: '-P schema-azure --settings .mvn/community-maven.settings.xml -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
serviceCoreMavenOptions: "-P schema-core --settings .mvn/community-maven.settings.xml"
mavenOptions: "-P schema-azure --settings .mvn/community-maven.settings.xml -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)"
copyFileContents: |
pom.xml
provider/schema-azure/maven/settings.xml
......@@ -66,27 +66,27 @@ stages:
provider/schema-azure/target/*-spring-boot.jar
.mvn/community-maven.settings.xml
deployments/**
copyFileContentsToFlatten: ''
mavenSettingsFile: '.mvn/community-maven.settings.xml'
copyFileContentsToFlatten: ""
mavenSettingsFile: ".mvn/community-maven.settings.xml"
serviceBase: ${{ variables.serviceName }}
testingRootFolder: 'testing'
testingRootFolder: "testing"
chartPath: ${{ variables.chartPath }}
- template: deploy-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
chartPath: ${{ variables.chartPath }}
valuesFile: ${{ variables.valuesFile }}
testCoreMavenPomFile: 'testing/schema-test-core/pom.xml'
testCoreMavenOptions: '--settings $(System.DefaultWorkingDirectory)/drop/.mvn/community-maven.settings.xml -DskipTests -DskipITs'
integrationTestMavenGoal: 'verify'
testCoreMavenPomFile: "testing/schema-test-core/pom.xml"
testCoreMavenOptions: "--settings $(System.DefaultWorkingDirectory)/drop/.mvn/community-maven.settings.xml -DskipTests -DskipITs"
integrationTestMavenGoal: "verify"
skipDeploy: ${{ variables.SKIP_DEPLOY }}
skipTest: ${{ variables.SKIP_TESTS }}
providers:
- name: Azure
environments: ['dev']
- name: Azure
environments: ["dev"]
- template: bootstrap-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
providers:
- name: Azure
environments: ['dev']
\ No newline at end of file
- name: Azure
environments: ["dev"]
......@@ -23,7 +23,7 @@ trigger:
- .gitignore
- /docs
- /provider/schema-aws
- /provider/schema-gcp
- /provider/schema-gc
- /provider/schema-ibm
resources:
......@@ -36,8 +36,8 @@ resources:
name: infra-azure-provisioning
variables:
- group: 'Azure - OSDU'
- group: 'Azure - OSDU Secrets'
- group: "Azure - OSDU"
- group: "Azure - OSDU Secrets"
- name: serviceName
value: "schema-service"
......@@ -45,22 +45,22 @@ variables:
value: "devops/azure/chart"
- name: valuesFile
value: "devops/azure/chart/helm-config.yml"
- name: 'MANIFEST_REPO'
- name: "MANIFEST_REPO"
value: $[ resources.repositories['FluxRepo'].name ]
- name: 'MAVEN_CACHE_FOLDER'
- name: "MAVEN_CACHE_FOLDER"
value: $(Pipeline.Workspace)/.m2/repository
- name: SKIP_TESTS
value: 'false'
value: "false"
- name: SKIP_DEPLOY
value: 'false'
value: "false"
stages:
- template: /devops/build-stage.yml@TemplateRepo
parameters:
mavenGoal: 'package'
mavenGoal: "package"
mavenPublishJUnitResults: true
serviceCoreMavenOptions: '-P schema-core --settings .mvn/community-maven.settings.xml'
mavenOptions: '-P schema-azure --settings .mvn/community-maven.settings.xml -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
serviceCoreMavenOptions: "-P schema-core --settings .mvn/community-maven.settings.xml"
mavenOptions: "-P schema-azure --settings .mvn/community-maven.settings.xml -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)"
copyFileContents: |
pom.xml
provider/schema-azure/maven/settings.xml
......@@ -68,27 +68,27 @@ stages:
provider/schema-azure/target/*-spring-boot.jar
.mvn/community-maven.settings.xml
deployments/**
copyFileContentsToFlatten: ''
mavenSettingsFile: '.mvn/community-maven.settings.xml'
copyFileContentsToFlatten: ""
mavenSettingsFile: ".mvn/community-maven.settings.xml"
serviceBase: ${{ variables.serviceName }}
testingRootFolder: 'testing'
testingRootFolder: "testing"
chartPath: ${{ variables.chartPath }}
- template: deploy-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
chartPath: ${{ variables.chartPath }}
valuesFile: ${{ variables.valuesFile }}
testCoreMavenPomFile: 'testing/schema-test-core/pom.xml'
testCoreMavenOptions: '--settings $(System.DefaultWorkingDirectory)/drop/.mvn/community-maven.settings.xml -DskipTests -DskipITs'
integrationTestMavenGoal: 'verify'
testCoreMavenPomFile: "testing/schema-test-core/pom.xml"
testCoreMavenOptions: "--settings $(System.DefaultWorkingDirectory)/drop/.mvn/community-maven.settings.xml -DskipTests -DskipITs"
integrationTestMavenGoal: "verify"
skipDeploy: ${{ variables.SKIP_DEPLOY }}
skipTest: ${{ variables.SKIP_TESTS }}
providers:
- name: Azure
environments: ['demo']
- name: Azure
environments: ["demo"]
- template: bootstrap-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
providers:
- name: Azure
environments: ['demo']
\ No newline at end of file
- name: Azure
environments: ["demo"]
......@@ -2,11 +2,11 @@ FROM google/cloud-sdk:alpine
WORKDIR /opt
COPY ./devops/gcp/bootstrap-osdu-module/*.sh ./
COPY ./devops/gc/bootstrap-osdu-module/*.sh ./
COPY ./deployments ./
RUN apk update && apk add jq bash py3-pip
RUN pip3 install --upgrade pip && pip3 install -r ./scripts/requirements.txt && pip3 install -r ./scripts/gcp-deployment-requirements.txt && pip3 install -r ./scripts/schema-cleaner/requirements.txt
RUN pip3 install --upgrade pip && pip3 install -r ./scripts/requirements.txt && pip3 install -r ./scripts/gc-deployment-requirements.txt && pip3 install -r ./scripts/schema-cleaner/requirements.txt
RUN chmod +x /opt/bootstrap_schema.sh
CMD ["/bin/bash", "-c", "/opt/bootstrap_schema.sh && sleep 365d"]
......@@ -4,7 +4,7 @@ Schema service bootstrap is based on python bootstrap scripts at Schema service
Bootstrap scripts contain a Python script which executes a clean-up in Datastore to prevent an incorrect bootstrap of the Schema service.
After bootstrap script execution, you can go to **GCP console** and look at logs under `Kubernetes Engine -> Workloads -> schema-bootstrap deployment`.
After bootstrap script execution, you can go to **Google Cloud console** and look at logs under `Kubernetes Engine -> Workloads -> schema-bootstrap deployment`.
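Equivalently, a hedged sketch of fetching the same logs with kubectl (the deployment name comes from the text above; the namespace is an assumption):

```bash
# Read the schema-bootstrap logs without opening the Google Cloud console.
kubectl logs deployment/schema-bootstrap -n <namespace> --tail=100
```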
Successful execution will produce output similar to the following:
......
#!/usr/bin/env bash
#
# Script that bootstraps schema service using Python scripts, that make requests to schema service
# Contains logic for both onprem and gcp version
# Contains logic for both Reference and Google Cloud version
#
# Expected environment variables:
# (both environments):
# - DATA_PARTITION
# - SCHEMA_URL
# - ENTITLEMENTS_HOST
# (for gcp):
# (for onprem):
# (for Google Cloud):
# - AUDIENCES
# (for Reference):
# - OPENID_PROVIDER_URL
# - OPENID_PROVIDER_CLIENT_ID
# - OPENID_PROVIDER_CLIENT_SECRET
......@@ -39,7 +40,7 @@ bootstrap_schema_gettoken_onprem() {
export BEARER_TOKEN="Bearer ${ID_TOKEN}"
}
bootstrap_schema_gettoken_gcp() {
bootstrap_schema_gettoken_gc() {
BEARER_TOKEN=$(gcloud auth print-identity-token)
......@@ -87,8 +88,8 @@ else
echo "Finished schema cleanup"
fi
# Get credentials for GCP
bootstrap_schema_gettoken_gcp
# Get credentials for Google Cloud
bootstrap_schema_gettoken_gc
fi
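A hedged sketch of invoking the bootstrap script manually for the Google Cloud flavour, using only the environment variables listed in its header (all values are assumptions; inside the container image the script is located at /opt/bootstrap_schema.sh):

```bash
# Environment expected by the Google Cloud variant of the script - placeholder values.
export DATA_PARTITION="<data-partition-id>"
export SCHEMA_URL="<schema-service-url>"
export ENTITLEMENTS_HOST="<entitlements-host>"
export AUDIENCES="<oauth-audience>"

# The script obtains its token via `gcloud auth print-identity-token`,
# so gcloud must already be authenticated.
bash /opt/bootstrap_schema.sh
```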
......
File moved
apiVersion: v2
name: gcp-schema-deploy
name: gc-schema-deploy
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
......@@ -21,4 +21,4 @@ version: 0.1.0
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.19.0"
......@@ -66,6 +66,7 @@ First you need to set variables in **values.yaml** file using any code editor. S
**rabbitmqSecretName** | secret for rabbitmq | string | `rabbitmq-secret` | yes
### Datastore cleanup and bootstrap schemas variables
> Datastore cleanup removes Datastore Schema Entities that are not present in the Schema bucket
| Name | Description | Type | Default |Required |
......@@ -81,7 +82,7 @@ First you need to set variables in **values.yaml** file using any code editor. S
Run this command from within this directory:
```console
helm install gcp-schema-deploy .
helm install gc-schema-deploy .
```
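If an individual setting needs to be overridden at install time, a hedged example using a variable from the table above (the exact values key path is an assumption about this chart's values.yaml):

```console
helm install gc-schema-deploy . --set rabbitmqSecretName=rabbitmq-secret
```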
## Uninstalling the Chart
......@@ -89,7 +90,7 @@ helm install gcp-schema-deploy .
To uninstall the helm deployment:
```console
helm uninstall gcp-schema-deploy
helm uninstall gc-schema-deploy
```
[Move-to-Top](#deploy-helm-chart)