Commit 1061d4f7 authored by Kishore Battula

Merge branch 'new-workflow-apis' into h2-ingestion

parents 6731e6e1 6d7a8c86
Pipeline #16207 failed with stages in 4 minutes and 55 seconds
......@@ -16,6 +16,10 @@ analyze:
type: mvn
target: workflow-core/pom.xml
path: .
- name: workflow-aws
type: mvn
target: provider/workflow-aws/pom.xml
path: .
- name: workflow-azure
type: mvn
target: provider/workflow-azure/pom.xml
......@@ -24,11 +28,11 @@ analyze:
type: mvn
target: provider/workflow-gcp/pom.xml
path: .
- name: workflow-gcp-datastore
- name: workflow-ibm
type: mvn
target: provider/workflow-gcp-datastore/pom.xml
target: provider/workflow-ibm/pom.xml
path: .
- name: workflow-test-core
- name: workflow-gcp-datastore
type: mvn
target: testing/workflow-test-core/pom.xml
target: provider/workflow-gcp-datastore/pom.xml
path: .
......@@ -18,6 +18,7 @@ variables:
AWS_TEST_SUBDIR: testing/workflow-test-aws
AWS_SERVICE: ingestion-workflow
AWS_ENVIRONMENT: dev
AWS_SKIP_DEPLOY: 'true'
include:
......
......@@ -254,57 +254,5 @@ development purposes because signing a blob is only available with the service a
Remember to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. Follow the [instructions
on the Google developer's portal][application-default-credentials].
**Integration tests**
Instructions for running the GCP integration tests can be found [here](./provider/workflow-gcp-datastore/README.md).
### Persistence layer
The GCP implementation contains two mutually exclusive modules to work with the persistence layer.
Presently, OSDU R2 connects to legacy Cloud Datastore for compatibility with the current OpenDES
implementation. In future OSDU releases, Cloud Datastore will be replaced by the existing Cloud
Firestore implementation that's already available in the project.
* The Cloud Datastore implementation is located in the **provider/workflow-gcp-datastore** folder.
* The Cloud Firestore implementation is located in the **provider/workflow-gcp** folder.
To learn more about available collections, see the [Firestore collections](#firestore-collections)
section.
## Firestore collections
Upon an ingestion request, the Workflow service needs to determine which DAG to run. To do that, the
service queries the database with the workflow type and data type.
The GCP-based implementation of the Workflow service uses Cloud Firestore with the following
`ingestion-strategy` and `workflow-status` collections.
> The Cloud Datastore implementation in OSDU R2 uses the same collections as Cloud Firestore.
### `ingestion-strategy`
The database needs to store the following information to help determine a DAG.
| Property | Type | Description |
| ------------ | -------- | --------------------------------------------------- |
| WorkflowType | `String` | Supported workflow types — "osdu" or "ingest" |
| DataType | `String` | Supported data types — "well_log" or "opaque" |
| UserID | `String` | Unique identifier of the user group or role |
| DAGName | `String` | Name of the DAG |
> The OSDU R2 Prototype doesn't support the **UserID** property. When the security system is
> finalized, the **UserID** property will store the ID of the user group or role.
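As a rough illustration of this lookup, the sketch below resolves a DAG name from the collection. The repository method `findByWorkflowTypeAndDataTypeAndUserId` is the one exercised in the unit tests later in this commit; the wrapper class, omitted imports, and the `getDagName()` accessor are illustrative assumptions.
```java
// Hedged sketch: resolving a DAG name from the ingestion-strategy collection.
// The repository method matches IngestionStrategyRepositoryTest below; the class,
// null handling, and accessor are assumed for illustration only.
public class DagSelector {

  private final IngestionStrategyRepository repository;

  public DagSelector(final IngestionStrategyRepository repository) {
    this.repository = repository;
  }

  /** Returns the DAG name for the given workflow and data types, or null if none is found. */
  public String selectDagName(final WorkflowType workflowType, final String dataType) {
    // UserID is not supported in the OSDU R2 Prototype, so an empty string is passed.
    final IngestionStrategy strategy =
        repository.findByWorkflowTypeAndDataTypeAndUserId(workflowType, dataType, "");
    return strategy == null ? null : strategy.getDagName();
  }
}
```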
### `workflow-status`
After a workflow starts, the Workflow service stores the following information in the database.
| Property | Type | Description |
| ------------ | -------- | ---------------------------------------------------------------------------- |
| WorkflowID | `String` | Unique workflow ID |
| AirflowRunID | `String` | Unique Airflow process ID generated by the Workflow service |
| Status | `String` | Current status of a workflow — submitted, running, finished, or failed |
| SubmittedAt | `String` | Timestamp when the workflow job was submitted to Workflow Engine |
| SubmittedBy | `String` | ID of the user role or group. Not supported in OSDU R2 |
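For illustration, a record with these properties might be built and saved as in the hedged sketch below. `saveWorkflowStatus` and `WorkflowStatusType` appear in the repository code and tests in this commit; the setter names are assumptions derived from the property names in the table.
```java
// Hedged sketch: persisting a workflow-status record with the fields listed above.
// The workflow ID is the sample value used in WorkflowRunRepositoryTest; the setter
// names are assumed from the table's property names.
WorkflowStatus status = new WorkflowStatus();
status.setWorkflowId("2afccfb8-1351-41c6-9127-61f2d7f22ff8"); // unique workflow ID
status.setAirflowRunId("airflow-run-001");                    // ID generated for the Airflow run
status.setWorkflowStatusType(WorkflowStatusType.SUBMITTED);   // submitted, running, finished, or failed
status.setSubmittedAt(java.time.Instant.now().toString());    // when the job went to Workflow Engine
status.setSubmittedBy("");                                    // not supported in OSDU R2
workflowStatusRepository.saveWorkflowStatus(status);
```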
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials#calling
* Documentation for the GCP Cloud Datastore implementation can be found [here](./provider/workflow-gcp-datastore/README.md)
* Documentation for the GCP Cloud Firestore implementation can be found [here](./provider/workflow-gcp/README.md)
......@@ -318,4 +318,23 @@
</snapshotRepository>
</distributionManagement>
<profiles>
<profile>
<id>Default</id>
<activation>
<property>
<name>!repo.releases.id</name>
</property>
</activation>
<properties>
<repo.releases.id>community-maven-repo</repo.releases.id>
<publish.snapshots.id>community-maven-via-job-token</publish.snapshots.id>
<publish.releases.id>community-maven-via-job-token</publish.releases.id>
<repo.releases.url>https://community.opengroup.org/api/v4/groups/17/-/packages/maven</repo.releases.url>
<publish.snapshots.url>https://community.opengroup.org/api/v4/projects/146/packages/maven</publish.snapshots.url>
<publish.releases.url>https://community.opengroup.org/api/v4/projects/146/packages/maven</publish.releases.url>
</properties>
</profile>
</profiles>
</project>
......@@ -52,13 +52,9 @@ az keyvault secret show --vault-name $KEY_VAULT_NAME --name $KEY_VAULT_SECRET_NA
| `AZURE_CLIENT_ID` | `********` | Identity to run the service locally. This enables access to Azure resources. You only need this if running locally | yes |
| `AZURE_TENANT_ID` | `********` | AD tenant to authenticate users from | yes |
| `AZURE_CLIENT_SECRET` | `********` | Secret for `$AZURE_CLIENT_ID` | yes |
| `azure.activedirectory.session-stateless` | `true` | Flag run in stateless mode (needed by AAD dependency) | no |
| `azure.activedirectory.AppIdUri` | `api://${azure.activedirectory.client-id}` | URI for AAD Application | no |
| `azure.activedirectory.client-id` | ******** | AAD client application ID | yes |
| `azure.application-insights.instrumentation-key` | ******** | API Key for App Insights | yes |
| `KEYVAULT_URI` | ex https://foo-keyvault.vault.azure.net/ | URI of KeyVault that holds application secrets | no |
| `cosmosdb_database` | ex `dev-osdu-r2-db` | Cosmos database for storage documents | no | output of infrastructure deployment |
| `cosmosdb_key` | `********` | Key for CosmosDB | yes | output of infrastructure deployments |
| `OSDU_ENTITLEMENTS_URL` | ex `https://foo-entitlements.azurewebsites.net` | Entitlements API endpoint | no | output of infrastructure deployment |
| `OSDU_ENTITLEMENTS_APPKEY` | `********` | The API key clients will need to use when calling the entitlements | yes | -- |
| `airflow_url` | ex `http://foo.org/test/airflow` | Airflow API endpoint | no |
......@@ -70,6 +66,22 @@ az keyvault secret show --vault-name $KEY_VAULT_NAME --name $KEY_VAULT_SECRET_NA
| `LOG_PREFIX` | `workflow` | Logging prefix | no | - |
| `server_port` | `8082` | Port of application. | no | -- |
In order to run the service with AAD authentication, add the environment variables below, which enable authentication in the Workflow service via the AAD filter.
| name | value | description | sensitive? | source |
| --- | --- | --- | --- | --- |
| `azure_istioauth_enabled` | `false` | Set to `false` to enable AAD auth (Istio auth disabled) | no | -- |
| `azure.activedirectory.session-stateless` | `true` | Flag to run in stateless mode (needed by the AAD dependency) | no | -- |
| `azure.activedirectory.client-id` | `********` | AAD client application ID | yes | output of infrastructure deployment |
| `azure.activedirectory.AppIdUri` | `api://${azure.activedirectory.client-id}` | URI for AAD Application | no | -- |
In order to run the service without its own authentication, add the environment variable below, which disables the AAD filter in the Workflow service; AuthN is then handled by the Istio sidecar proxy (the `AADSecurityConfig` and `AzureIstioSecurityConfig` diffs further down show how this toggle is wired via `@ConditionalOnProperty`).
| name | value | description | sensitive? | source |
| --- | --- | --- | --- | --- |
| `azure_istioauth_enabled` | `true` | Set to `true` to disable AAD auth | no | -- |
**Required to run integration tests**
| name | value | description | sensitive? | source |
......
......@@ -31,13 +31,26 @@
<properties>
<azure.version>2.1.7</azure.version>
<osdu.azurecore.version>0.0.33</osdu.azurecore.version>
<osdu.corelibazure.version>0.0.44</osdu.corelibazure.version>
<azure.appservice.resourcegroup></azure.appservice.resourcegroup>
<azure.appservice.plan></azure.appservice.plan>
<azure.appservice.appname></azure.appservice.appname>
<azure.appservice.subscription></azure.appservice.subscription>
</properties>
<dependencyManagement>
<dependencies>
<!-- Inherit managed dependencies from core-lib-azure -->
<dependency>
<groupId>org.opengroup.osdu</groupId>
<artifactId>core-lib-azure</artifactId>
<version>${osdu.corelibazure.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>com.microsoft.azure</groupId>
......@@ -70,7 +83,7 @@
<dependency>
<groupId>org.opengroup.osdu</groupId>
<artifactId>core-lib-azure</artifactId>
<version>${osdu.azurecore.version}</version>
<version>${osdu.corelibazure.version}</version>
</dependency>
<dependency>
<groupId>org.opengroup.osdu</groupId>
......
......@@ -37,7 +37,7 @@ public class AzureBootstrapConfig {
public CosmosClient buildCosmosClient(SecretClient kv) {
final String cosmosEndpoint = KeyVaultFacade.getSecretWithValidation(kv, "opendes-cosmos-endpoint");
final String cosmosPrimaryKey = KeyVaultFacade.getSecretWithValidation(kv, "opendes-cosmos-primary-key");
return new CosmosClientBuilder().setEndpoint(cosmosEndpoint).setKey(cosmosPrimaryKey).buildClient();
return new CosmosClientBuilder().endpoint(cosmosEndpoint).key(cosmosPrimaryKey).buildClient();
}
@Bean
......
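The one-line change above tracks the renamed builder setters in the newer azure-cosmos SDK, which drop the `set` prefix. A minimal construction in the new style, reusing the endpoint and key read from Key Vault above:
```java
// Minimal sketch of the 4.x-style fluent builder used in the updated line above.
CosmosClient client = new CosmosClientBuilder()
    .endpoint(cosmosEndpoint)   // e.g. https://<account>.documents.azure.com:443/
    .key(cosmosPrimaryKey)      // primary key retrieved from Key Vault
    .buildClient();             // builds the synchronous client
```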
package org.opengroup.osdu.workflow.provider.azure.config;
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.internal.AsyncDocumentClient;
import org.opengroup.osdu.azure.cosmosdb.ICosmosClientFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Primary;
......@@ -18,9 +17,4 @@ public class SinglePartitionCosmosClientFactory implements ICosmosClientFactory
public CosmosClient getClient(final String s) {
return cosmosClient;
}
@Override
public AsyncDocumentClient getAsyncClient(final String s) {
return null;
}
}
......@@ -34,7 +34,7 @@ public class WorkflowMetadataRepository implements IWorkflowMetadataRepository {
public WorkflowMetadata createWorkflow(final WorkflowMetadata workflowMetadata) {
final WorkflowMetadataDoc workflowMetadataDoc = buildWorkflowMetadataDoc(workflowMetadata);
cosmosStore.createItem(dpsHeaders.getPartitionId(), cosmosConfig.getDatabase(),
cosmosConfig.getWorkflowMetadataCollection(), workflowMetadataDoc);
cosmosConfig.getWorkflowMetadataCollection(), workflowMetadataDoc.getWorkflowId(), workflowMetadataDoc);
return buildWorkflowMetadata(workflowMetadataDoc);
}
......
......@@ -35,7 +35,7 @@ public class WorkflowRunRepository implements IWorkflowRunRepository {
public WorkflowRun saveWorkflowRun(final WorkflowRun workflowRun) {
final WorkflowRunDoc workflowRunDoc = buildWorkflowRunDoc(workflowRun);
cosmosStore.createItem(dpsHeaders.getPartitionId(), cosmosConfig.getDatabase(),
cosmosConfig.getWorkflowRunCollection(), workflowRunDoc);
cosmosConfig.getWorkflowRunCollection(), workflowRunDoc.getWorkflowId(), workflowRunDoc);
return buildWorkflowRun(workflowRunDoc);
}
......
......@@ -90,7 +90,7 @@ public class WorkflowStatusRepository implements IWorkflowStatusRepository {
if (!existingDoc.isPresent()) {
WorkflowStatusDoc newStatusDoc = buildWorkflowStatusDoc(workflowStatus);
cosmosStore.upsertItem(dpsHeaders.getPartitionId(), cosmosConfig.getDatabase(),
cosmosConfig.getWorkflowStatusCollection(), newStatusDoc);
cosmosConfig.getWorkflowStatusCollection(), newStatusDoc.getWorkflowId(), newStatusDoc);
}
logger.log(Level.INFO, String.format("Fetch saved workflow status: {%s}", workflowStatus));
......@@ -123,7 +123,7 @@ public class WorkflowStatusRepository implements IWorkflowStatusRepository {
workflowStatusDoc.workflowStatusType = WorkflowStatusType.valueOf(workflowStatusType.toString());
cosmosStore.upsertItem(dpsHeaders.getPartitionId(), cosmosConfig.getDatabase(),
cosmosConfig.getWorkflowStatusCollection(), workflowStatusDoc);
cosmosConfig.getWorkflowStatusCollection(), workflowStatusDoc.getWorkflowId(), workflowStatusDoc);
WorkflowStatus workflowStatus = buildWorkflowStatus(workflowStatusDoc);
logger.log(Level.INFO, String.format("Updated workflow status : {%s}", workflowStatus));
......
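The three repository changes above all thread the document's workflow ID through as a new fourth argument, which by all appearances is the Cosmos partition key now required by `createItem` and `upsertItem`. A hedged sketch of the `CosmosStore` surface the repositories seem to compile against:
```java
// Hedged sketch of the assumed CosmosStore signatures after this change; the new
// partitionKey parameter sits between the collection name and the item itself.
public interface CosmosStoreSketch {
  <T> void createItem(String dataPartitionId, String cosmosDBName,
                      String collectionName, String partitionKey, T item);

  <T> void upsertItem(String dataPartitionId, String cosmosDBName,
                      String collectionName, String partitionKey, T item);
}
```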
......@@ -16,6 +16,7 @@ package org.opengroup.osdu.workflow.provider.azure.security;
import com.microsoft.azure.spring.autoconfigure.aad.AADAppRoleStatelessAuthenticationFilter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
......@@ -23,27 +24,29 @@ import org.springframework.security.config.annotation.web.configuration.WebSecur
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
@ConditionalOnProperty(value = "azure.istio.auth.enabled", havingValue = "false", matchIfMissing = false)
public class AADSecurityConfig extends WebSecurityConfigurerAdapter {
@Autowired
private AADAppRoleStatelessAuthenticationFilter appRoleAuthFilter;
@Autowired
private AADAppRoleStatelessAuthenticationFilter appRoleAuthFilter;
@Override
protected void configure(HttpSecurity http) throws Exception {
http
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.csrf().disable()
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.NEVER)
.and()
.authorizeRequests()
.antMatchers("/",
"/v2/api-docs",
"/swagger-resources/**",
"/swagger-ui.html",
"/webjars/**")
"/v2/api-docs",
"/swagger-resources/**",
"/swagger-ui.html",
"/webjars/**")
.permitAll()
.anyRequest().authenticated()
.and()
.addFilterBefore(appRoleAuthFilter, UsernamePasswordAuthenticationFilter.class);
}
}
}
// Copyright © Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package org.opengroup.osdu.workflow.provider.azure.security;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
@ConditionalOnProperty(value = "azure.istio.auth.enabled", havingValue = "true", matchIfMissing = true)
public class AzureIstioSecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.httpBasic().disable()
.csrf().disable(); //AuthN is disabled. AuthN is handled by sidecar proxy
}
}
......@@ -12,19 +12,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Application name
spring.application.name=workflow
LOG_PREFIX=workflow
# Server Path Configuration
server.servlet.contextPath=/api/workflow/v1/
# Istio Auth Config Toggle
azure.istio.auth.enabled=${azure_istioauth_enabled}
# Partition service
PARTITION_API=${partition_service_endpoint}
azure.activedirectory.app-resource-id=${aad_client_id}
# Azure AD configuration for OpenIDConnect
azure.activedirectory.session-stateless=true
azure.activedirectory.client-id=${aad_client_id}
azure.activedirectory.AppIdUri=api://${azure.activedirectory.client-id}
# azure.activedirectory.session-stateless=true
# azure.activedirectory.client-id=${aad_client_id}
# azure.activedirectory.AppIdUri=api://${azure.activedirectory.client-id}
# Azure CosmosDB configuration
osdu.azure.cosmosdb.database=${cosmosdb_database}
......
package org.opengroup.osdu.workflow.provider.azure.repository;
import com.azure.cosmos.CosmosClientException;
import com.azure.cosmos.CosmosException;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
......@@ -54,7 +54,7 @@ public class IngestionStrategyRepositoryTest {
}
@Test
public void findByWorkflowTypeAndDataTypeAndUserId() throws CosmosClientException, IOException {
public void findByWorkflowTypeAndDataTypeAndUserId() throws CosmosException, IOException {
IngestionStrategyDoc ingestionStrategyDoc = new IngestionStrategyDoc();
ingestionStrategyDoc.setDagName("osdu_python_sdk_well_log_ingestion");
ingestionStrategyDoc.setDataType("well_log");
......@@ -76,14 +76,14 @@ public class IngestionStrategyRepositoryTest {
}
@Test
public void shouldReturnNullWhenRecordNotFound() throws CosmosClientException {
public void shouldReturnNullWhenRecordNotFound() throws CosmosException {
when(cosmosStore.findItem(any(), any(), any(), any(), any(), any()))
.thenReturn(Optional.empty());
Assert.assertNull(repository.findByWorkflowTypeAndDataTypeAndUserId(WorkflowType.OSDU, "test", ""));
}
@Test(expected = AppException.class)
public void shouldThrowExceptionWhenCosmosException() throws CosmosClientException {
public void shouldThrowExceptionWhenCosmosException() throws CosmosException {
doThrow(AppException.class)
.when(cosmosStore)
.findItem(any(), any(), any(), any(), any(), any());
......
......@@ -95,10 +95,10 @@ public class WorkflowMetadataRepositoryTest {
when(cosmosConfig.getWorkflowMetadataCollection()).thenReturn(WORKFLOW_METADATA_COLLECTION);
when(dpsHeaders.getPartitionId()).thenReturn(PARTITION_ID);
doNothing().when(cosmosStore)
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_METADATA_COLLECTION), eq(workflowMetadataDoc));
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_METADATA_COLLECTION), eq(WORKFLOW_ID), eq(workflowMetadataDoc));
final WorkflowMetadata response = workflowMetadataRepository.createWorkflow(inputWorkflowMetadata);
verify(cosmosStore, times(1))
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_METADATA_COLLECTION), eq(workflowMetadataDoc));
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_METADATA_COLLECTION), eq(WORKFLOW_ID), eq(workflowMetadataDoc));
verify(cosmosConfig, times(1)).getDatabase();
verify(cosmosConfig, times(1)).getWorkflowMetadataCollection();
verify(dpsHeaders, times(1)).getPartitionId();
......
......@@ -46,6 +46,7 @@ public class WorkflowRunRepositoryTest {
" \"status\": \"SUBMITTED\",\n" +
" \"submittedBy\": \"user@mail.com\"\n" +
"}";
private static final String WORKFLOW_ID = "2afccfb8-1351-41c6-9127-61f2d7f22ff8";
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@Mock
......@@ -71,10 +72,10 @@ public class WorkflowRunRepositoryTest {
when(cosmosConfig.getWorkflowRunCollection()).thenReturn(WORKFLOW_RUN_COLLECTION);
when(dpsHeaders.getPartitionId()).thenReturn(PARTITION_ID);
doNothing().when(cosmosStore)
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_RUN_COLLECTION), eq(workflowRunDoc));
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_RUN_COLLECTION), eq(WORKFLOW_ID), eq(workflowRunDoc));
final WorkflowRun response = workflowRunRepository.saveWorkflowRun(workflowRun);
verify(cosmosStore, times(1))
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_RUN_COLLECTION), eq(workflowRunDoc));
.createItem(eq(PARTITION_ID), eq(DATABASE_NAME), eq(WORKFLOW_RUN_COLLECTION), eq(WORKFLOW_ID), eq(workflowRunDoc));
verify(cosmosConfig, times(1)).getDatabase();
verify(cosmosConfig, times(1)).getWorkflowRunCollection();
verify(dpsHeaders, times(1)).getPartitionId();
......
package org.opengroup.osdu.workflow.provider.azure.repository;
import com.azure.cosmos.CosmosClientException;
import com.azure.cosmos.CosmosException;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
......@@ -75,7 +75,7 @@ public class WorkflowStatusRepositoryTest {
@Test(expected = WorkflowNotFoundException.class)
public void shouldThrowExceptionWhenWorkflowNotFound() throws CosmosClientException {
public void shouldThrowExceptionWhenWorkflowNotFound() throws CosmosException {
when(cosmosStore.findItem(
eq(PARTITION_ID),
eq(DATABASE_NAME),
......@@ -87,7 +87,7 @@ public class WorkflowStatusRepositoryTest {
}
@Test(expected = AppException.class)
public void shouldThrowExceptionWhenCosmosException() throws CosmosClientException {
public void shouldThrowExceptionWhenCosmosException() throws CosmosException {
when(cosmosStore.findItem(
eq(PARTITION_ID),
eq(DATABASE_NAME),
......@@ -124,6 +124,7 @@ public class WorkflowStatusRepositoryTest {
eq(PARTITION_ID),
eq(DATABASE_NAME),
eq(WORKFLOW_STATUS_COLLECTION_NAME),
eq(TEST_WORKFLOW_ID),
any());
WorkflowStatus status = workflowStatusRepository.saveWorkflowStatus(workflowstatus);
......@@ -152,7 +153,7 @@ public class WorkflowStatusRepositoryTest {
}
@Test(expected = WorkflowNotFoundException.class)
public void updateWorkflowStatusThrowWorkflowIDNotFound() throws CosmosClientException {
public void updateWorkflowStatusThrowWorkflowIDNotFound() throws CosmosException {
when(cosmosStore.findItem(
eq(PARTITION_ID),
eq(DATABASE_NAME),
......
# workflow-gcp
The OSDU R2 Workflow service is designed to start business processes in the system. In the OSDU R2 prototype phase, the service only starts ingestion of OSDU data.
The Workflow service provides a wrapper functionality around the Apache Airflow functions and is designed to carry out preliminary work with files before running the Airflow Directed Acyclic Graphs (DAGs) that will perform actual ingestion of OSDU data.
In OSDU R2, depending on the data type, workflow type, and user, the Workflow service starts the necessary workflow, such as well log ingestion or opaque ingestion.
## Running Locally
### Requirements
......@@ -28,8 +34,8 @@ In order to run the service locally, you will need to have the following environ
| name | value | description | sensitive? | source |
| --- | --- | --- | --- | --- |
| `DOMAIN` | ex `contoso.com` | OSDU R2 domain to run tests under | no | - |
| `INTEGRATION_TESTER` | `********` | Service account for API calls. Note: this user must have entitlements configured already | yes | https://console.cloud.google.com/iam-admin/serviceaccounts |
| `NO_DATA_ACCESS_TESTER` | `********` | Service account without data access | yes | https://console.cloud.google.com/iam-admin/serviceaccounts |
| `INTEGRATION_TESTER` | `********` | Service account for API calls, as a filename or JSON content, plain or Base64 encoded. Note: this user must have entitlements configured already | yes | https://console.cloud.google.com/iam-admin/serviceaccounts |
| `NO_DATA_ACCESS_TESTER` | `********` | Service account without data access, as a filename or JSON content, plain or Base64 encoded. | yes | https://console.cloud.google.com/iam-admin/serviceaccounts |
| `LEGAL_TAG` | `********` | Demo legal tag used to pass test| yes | Legal service |
| `WORKFLOW_HOST` | ex `https://os-workflow-dot-opendes.appspot.com` | Endpoint of workflow service | no | - |
| `DEFAULT_DATA_PARTITION_ID_TENANT1`| ex `opendes` | OSDU tenant used for testing | no | - |
......@@ -39,13 +45,23 @@ In order to run the service locally, you will need to have the following environ
**Entitlements configuration for integration accounts**
| INTEGRATION_TESTER | NO_DATA_ACCESS_TESTER |
| --- | --- |
| users<br/>service.entitlements.user<br/>service.workflow.admin<br/>service.workflow.creator<br/>service.workflow.viewer<br/>service.legal.admin<br/>service.legal.editor<br/>data.test1<br/>data.integration.test | users |
### Configure Maven
### Persistence layer
The GCP implementation contains two mutually exclusive modules to work with the persistence layer.
Presently, OSDU R2 connects to legacy Cloud Datastore for compatibility with the current OpenDES
implementation. In future OSDU releases, Cloud Datastore will be replaced by the existing Cloud
Firestore implementation that's already available in the project.
The Cloud Datastore implementation is located in the **provider/workflow-gcp-datastore** folder.
### Run Locally
Check that Maven is installed:
```bash
$ mvn --version
Apache Maven 3.6.0
......@@ -55,6 +71,7 @@ Java version: 1.8.0_212, vendor: AdoptOpenJDK, runtime: /usr/lib/jvm/jdk8u212-b0
```
You may need to configure access to the remote Maven repository that holds the OSDU dependencies. This file should live within `~/.mvn/community-maven.settings.xml`:
```bash
$ cat ~/.m2/settings.xml
<?xml version="1.0" encoding="UTF-8"?>
......@@ -78,12 +95,45 @@ $ cat ~/.m2/settings.xml
</servers>
</settings>
```
### Build and run the application
* Update the Google Cloud SDK to the latest version:
```bash
gcloud components update
```
* Set the Google Cloud project ID:
```bash
gcloud config set project <YOUR-PROJECT-ID>
```
* Log in with application-default credentials for the selected project:
```bash
gcloud auth application-default login
```
## Testing
* Navigate to the Workflow service's root folder and run:
```bash
mvn clean install
```
* To view the coverage report, open `testing/target/site/jacoco-aggregate/index.html` in a browser.