**Steps To Perform Open Test Data for IBM - Data Loading Process**
This guide outlines how to initiate data loading for IBM environments using the provided scripts. It walks through the steps required to ingest data into the IBM environment via OSDU.
The data loading process involves running a set of scripts sequentially. These scripts perform actions such as copying manifests, downloading datasets, generating authentication tokens, setting up environment variables, triggering ingestion processes, and verifying data ingestion.
**Steps to Execute**
The tno.sh script drives these steps: it copies manifests, downloads the open test datasets, generates authentication tokens, sets up environment variables, triggers ingestion of reference and master data, and verifies that the data was ingested. A sketch of this flow is shown below.
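For orientation, here is a minimal sketch of the kind of flow tno.sh automates: obtain an access token from Keycloak, then trigger a manifest-ingestion workflow run against the environment. The realm name, data partition, workflow name, request payload file, and the use of jq are illustrative assumptions, not values taken from the actual script.

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- realm, partition, workflow name, and payload
# shape are assumptions; tno.sh encapsulates the real values and steps.
set -euo pipefail

# 1. Obtain an access token from Keycloak (password grant).
#    The /auth prefix may not be present on newer Keycloak versions.
ACCESS_TOKEN=$(curl -s -X POST \
  "https://${keycloak_routes}/auth/realms/OSDU/protocol/openid-connect/token" \
  -d "grant_type=password" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -d "username=${keycloak_username}" \
  -d "password=${keycloak_password}" | jq -r '.access_token')

# 2. Trigger a manifest-ingestion workflow run via the OSDU Workflow service.
#    "Osdu_ingest", "opendes", and the prepared request file are placeholders.
curl -s -X POST \
  "https://${CPD_ROUTE}/api/workflow/v1/workflow/Osdu_ingest/workflowRun" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "data-partition-id: opendes" \
  -H "Content-Type: application/json" \
  -d @workflow-run-request.json
```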
Environment variables for the open test data run can be set either as CI/CD variables in GitLab or as input variables when the pipeline is executed.
1. Configuration via CI/CD Variables:
Setting CI/CD Variables in GitLab:
Access your project in GitLab.
Navigate to Settings > CI/CD > Variables.
Add the following variables and their corresponding values:
- CLIENT_ID: "abc123"
- CLIENT_SECRET: "secretpassword"
- keycloak_routes: "example-keycloak-host.com"
- keycloak_username: "username"
- keycloak_password: "P@ssw0rd!"
- CPD_ROUTE: "example-cpd.host.com"
- LOAD_SEISMICS_DATA: "false"
Note: These dummy values are placeholders for demonstration purposes; replace them with actual values when configuring the environment. If you prefer to script this step, the same variables can be created through the GitLab REST API, as sketched below.
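A minimal sketch of creating one project-level CI/CD variable through GitLab's REST API; the project ID, access token, and GitLab host are hypothetical placeholders, and the call would be repeated for each variable listed above.

```bash
# Hypothetical project ID, personal access token, and GitLab host.
GITLAB_HOST="https://gitlab.com"
GITLAB_PROJECT_ID=12345
GITLAB_TOKEN="glpat-xxxxxxxxxxxxxxxxxxxx"

# Create one project-level CI/CD variable (repeat per variable above).
curl -s --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "${GITLAB_HOST}/api/v4/projects/${GITLAB_PROJECT_ID}/variables" \
  --form "key=CLIENT_ID" \
  --form "value=abc123" \
  --form "masked=true"
```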
2. Using Input Variables during Pipeline Execution:
When triggering the pipeline manually or through automation, provide these variables as inputs.
For manual execution, GitLab provides an interface where you can input these variables before starting the pipeline run.
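For automated runs, the variables can be passed on the trigger call itself. The sketch below uses GitLab's pipeline trigger API with a hypothetical trigger token, project ID, branch, and host; the variable values are the same dummy placeholders used above.

```bash
# Hypothetical trigger token, project ID, branch, and GitLab host.
GITLAB_HOST="https://gitlab.com"
GITLAB_PROJECT_ID=12345
TRIGGER_TOKEN="glptt-xxxxxxxxxxxxxxxxxxxx"

# Trigger the pipeline, passing the open-test-data variables as inputs.
curl -s --request POST \
  --form "token=${TRIGGER_TOKEN}" \
  --form "ref=main" \
  --form "variables[CLIENT_ID]=abc123" \
  --form "variables[CLIENT_SECRET]=secretpassword" \
  --form "variables[LOAD_SEISMICS_DATA]=false" \
  "${GITLAB_HOST}/api/v4/projects/${GITLAB_PROJECT_ID}/trigger/pipeline"
```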