Commit 57be80b3 authored by Kamlesh Todai's avatar Kamlesh Todai

Add notes about CI_CD

parent a763f1a7
Pipeline #3304 failed with stages in 8 seconds
**OSDU Platform Validation CI/CD pipeline settings notes and assumptions:**
We use the continuous integration service offered by GitLab to trigger the CI pipeline on each push.
Ensure that the project is configured to use a Runner, as it will be used to run the jobs defined in the file `.gitlab-ci.yml`. To that end, we have added the file `.gitlab-ci.yml` to the repository's root directory.
Only project Maintainer and Admin users have permission to access the project settings.
The pipeline appears under the project's **CI/CD > Pipelines** page. If everything runs OK (no non-zero return values), you get a green check mark associated with the commit. This makes it easy to see whether a commit caused any of the tests to fail before one even looks at the job (test) log.
Certain variables needed to run the collections require protection and hence are not defined in the `postman_environment.json` files. These variables are left undefined in the files and are instead defined under **Settings > CI/CD > Variables**. The variable type is set to **File**, and the value contains a JSON string defining those protected variables.
The protected variables are:
> TENANT_ID, CLIENT_ID and CLIENT_SECRET
An example JSON string would be:

```json
[
  {
    "TENANT_ID": "Actual Tenant ID goes here",
    "CLIENT_ID": "Actual Client ID goes here",
    "CLIENT_SECRET": "Actual Client Secret goes here"
  }
]
```
The cloud providers usually provide the values for these variables.
At present, there are four variables defined for this project under **Settings > CI/CD > Variables**, one for each cloud provider:
- AWS_TEST_COLLECTION_CONFIG for Amazon cloud
- AZURE_TEST_COLLECTION_CONFIG for Microsoft Azure cloud
- GCP_TEST_COLLECTION_CONFIG for Google cloud
- IBM_TEST_COLLECTION_CONFIG for IBM cloud
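GitLab writes a file-type variable to disk and exposes the file's path in the corresponding environment variable. A minimal sketch of consuming one of these variables, assuming the JSON shape shown above (a list holding one object); the helper name and the demo's temporary file are illustrative, not part of the repository:

```python
import json
import os
import tempfile

def load_protected_vars(path):
    """Parse a file-type CI variable: a JSON list holding one object
    with the protected variables (TENANT_ID, CLIENT_ID, CLIENT_SECRET)."""
    with open(path) as f:
        return json.load(f)[0]

# In the pipeline, the path would come from the environment, e.g.
# os.environ["AZURE_TEST_COLLECTION_CONFIG"]. Here a temporary file
# stands in for that CI-provided path.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([{"TENANT_ID": "t-123", "CLIENT_ID": "c-456",
                "CLIENT_SECRET": "s-789"}], f)
    path = f.name

secrets = load_protected_vars(path)
print(sorted(secrets))   # ['CLIENT_ID', 'CLIENT_SECRET', 'TENANT_ID']
os.unlink(path)
```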
We have added the Python script `generate-pipeline.py` to the repository's root directory. This script generates the pipeline for each configured cloud provider and a job for each collection in the repository.
The Python script makes the following assumptions:
The configuration file for each cloud provider is named as follows:

> cloudProviderName.someString.**postman_environment.json**
>
> where
>
> cloudProviderName = one of {aws, azure, gcp, ibm}
>
> someString = e.g. "OSDU R3 PROD v2.4"
>
> e.g. azure.OSDU R3 PROD v2.4.postman_environment.json
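The naming convention above can be sketched as a small parsing helper. This is a hypothetical illustration of the convention, not the actual code in `generate-pipeline.py`, which may parse names differently:

```python
def parse_environment_filename(name):
    """Split an environment file name into (cloudProviderName, someString)."""
    suffix = ".postman_environment.json"
    if not name.endswith(suffix):
        raise ValueError(f"not an environment file: {name}")
    stem = name[: -len(suffix)]
    # The provider is everything before the first dot; someString may
    # itself contain dots (e.g. "v2.4"), so split only once.
    provider, some_string = stem.split(".", 1)
    return provider, some_string

print(parse_environment_filename(
    "azure.OSDU R3 PROD v2.4.postman_environment.json"))
# → ('azure', 'OSDU R3 PROD v2.4')
```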
Collection file names are as follows:
> Collection file names end with **.postman_collection.json**
>
> e.g. Well CI-CD v2.6.postman_collection.json
>
> Trajectory CI-CD v1.3.3.postman_collection.json
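Discovering collections under this convention amounts to a glob over the repository root. A minimal sketch (the function name is illustrative; the demo uses a scratch directory with the file names from the examples above):

```python
import tempfile
from pathlib import Path

def find_collections(repo_root):
    """Return the collection file names in the repository root,
    matching the *.postman_collection.json convention."""
    return sorted(p.name
                  for p in Path(repo_root).glob("*.postman_collection.json"))

# Demo in a scratch directory standing in for the repository root.
with tempfile.TemporaryDirectory() as root:
    for name in ("Well CI-CD v2.6.postman_collection.json",
                 "Trajectory CI-CD v1.3.3.postman_collection.json",
                 "README.md"):
        (Path(root) / name).touch()
    print(find_collections(root))
    # → ['Trajectory CI-CD v1.3.3.postman_collection.json',
    #    'Well CI-CD v2.6.postman_collection.json']
```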
The Python script generates the following files in the build environment to run the jobs, one file for each configured platform:
> aws.OSDU R3 PROD v2.4.gitlab-ci.yml
>
> azure.OSDU R3 PROD v2.4.gitlab-ci.yml
>
> gcp.OSDU R3 PROD v2.4.gitlab-ci.yml
>
> ibm.OSDU R3 PROD v2.4.gitlab-ci.yml
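The generation step can be sketched as rendering one newman job per collection into a child-pipeline YAML string. This is a simplified illustration of what the source describes `generate-pipeline.py` as doing; the actual job names, script lines, and YAML layout in the repository may differ:

```python
def render_child_pipeline(env_file, collections):
    """Build a minimal child-pipeline YAML string: one newman job per
    collection, all running against the given environment file."""
    lines = ["stages:", "  - test", ""]
    for coll in collections:
        lines += [
            f'"{coll}":',                      # quote: names contain spaces/dots
            "  stage: test",
            "  image: postman/newman_alpine33",
            "  script:",
            f'    - newman run "{coll}" -e "{env_file}"',
            "",
        ]
    return "\n".join(lines)

yml = render_child_pipeline(
    "azure.OSDU R3 PROD v2.4.postman_environment.json",
    ["Well CI-CD v2.6.postman_collection.json"],
)
print(yml)
```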
When adding a new environment file or renaming an existing one, the file `.gitlab-ci.yml` needs to be updated manually. This should not happen too often, but when it does, one needs to add a new section if the file is new, or modify the existing section to reflect the changed name. The artifact in the include section needs to reflect the changed name, and the stage name needs to include the changed name as well, e.g.:
```yaml
azure v2.4:
  trigger:
    strategy: depend
    include:
      - artifact: azure.OSDU R3 PROD v2.4.gitlab-ci.yml
        job: generate-pipeline
```
For a detailed reference on GitLab CI/CD, see the following link:
<https://docs.gitlab.com/ee/ci/quick_start/>
**Side notes and current status:**
When the pipeline executes a collection, it uses the `newman` utility, provided by Postman, to run the collection.
For details on how to use newman, see
<https://www.npmjs.com/package/newman#getting-started>
Newman can generate a variety of reports, such as cli, html, htmlextra, and junit, to name a few. For each of these types, a reporter plugin needs to be installed on the system to generate the report.
At present, we will start with junit and htmlextra. JUnit is easy to integrate with Git merge requests, and htmlextra generates reports with a nice dashboard-style look, with summary and detail tabs.
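Once a suitable image is available, a job could enable both reporters via newman's `-r` flag and export flags (`--reporter-junit-export` is a standard newman option; htmlextra ships as the `newman-reporter-htmlextra` npm package). The following fragment is a sketch; the exact paths and job layout used by `generate-pipeline.py` may differ:

```yaml
script:
  # Assumes the reporter is not baked into the image.
  - npm install -g newman-reporter-htmlextra
  - >
    newman run "Well CI-CD v2.6.postman_collection.json"
    -e "azure.OSDU R3 PROD v2.4.postman_environment.json"
    -r junit,htmlextra
    --reporter-junit-export results/junit-report.xml
    --reporter-htmlextra-export results/htmlextra-report.html
artifacts:
  reports:
    junit: results/junit-report.xml   # surfaces results in the merge request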
In the `generate-pipeline.py` script, the lines responsible for generating these reports are commented out, because the Docker image **postman/newman_alpine33** that is being used does not have the required plugins installed. Once that Docker image, or some other Docker image with the correct plugins, is available, one can uncomment those lines to generate these reports.
Also, in the `.gitlab-ci.yml` file, the pipelines for gcp and ibm are commented out, as the environments for these platforms are not available at this point in time.