Commit b78a64fa authored by Diego Molteni's avatar Diego Molteni

Initial contribution from SLB

#-------------------------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.
#-------------------------------------------------------------------------------------------------------------
# To fully customize the contents of this image, use the following Dockerfile instead:
# https://github.com/microsoft/vscode-dev-containers/tree/v0.112.0/containers/javascript-node-12/.devcontainer/Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:12
# The image referenced above includes a non-root user with sudo access. Add
# the "remoteUser" property to devcontainer.json to use it. On Linux, the container
# user's GID/UIDs will be updated to match your local UID/GID when using the image
# or dockerFile property. Update USER_UID/USER_GID below if you are using the
# dockerComposeFile property or want the image itself to start with different ID
# values. See https://aka.ms/vscode-remote/containers/non-root-user for details.
ARG USER_UID=1000
ARG USER_GID=$USER_UID
ARG USERNAME=node
RUN sudo apt-get -y update && sudo apt-get install -y redis-server
# [Optional] Update UID/GID if needed
RUN if [ "$USER_GID" != "1000" ] || [ "$USER_UID" != "1000" ]; then \
groupmod --gid $USER_GID $USERNAME \
&& usermod --uid $USER_UID --gid $USER_GID $USERNAME \
&& usermod -aG sudo $USERNAME \
&& chown -R $USER_UID:$USER_GID /home/$USERNAME; \
fi
## Background
The goal of this .devcontainer is to create a VS Code container with all the tools, libraries, and frameworks the SDMS project team needs, so that developers can begin working quickly without spending cycles setting up the correct development environment.
This README assumes you have followed the installation steps outlined [here](https://code.visualstudio.com/docs/remote/containers#_installation).
## Visual Studio Code Remote - Containers
Using the [Visual Studio Code Remote - Containers](https://code.visualstudio.com/docs/remote/containers) feature allows for the creation of a Docker container that is configured with the correct development environment. We will use folder-based devcontainers for this repo.
### Folder-based Devcontainers
This devcontainer is built for folders. The Docker image currently installs the following tools and libraries; see the Dockerfile for more details.
#### Tools and Libraries
* Node 12.16.3
* Go 1.12.17
* Terraform 1.12.24
* Kubectl
* Helm
* Azure CLI
* Docker CE CLI
#### Extensions
The image also has the following VS Code extensions installed. See devcontainer.json for more details.
* mauve.terraform
* ms-azuretools.vscode-azureterraform
* ms-vscode.azurecli
* ms-azuretools.vscode-docker
* ms-kubernetes-tools.vscode-kubernetes-tools
## Getting started
Clone this repo; you will notice that it contains a `.devcontainer` directory. Inside this directory are the Dockerfile and devcontainer.json, which tell VS Code how to build the container.
Start VS Code and, in a new window, click the quick actions status bar item in the lower left corner. Select 'Remote-Containers: Open Folder in Container' from the command list that appears, as shown in the screenshot below.
![](./readme/command-palette.png)
Select the repo that you just cloned (the one with the `.devcontainer` directory at the root). This will cause VS Code to build the container and open the directory, as you can see below.
![](./readme/container.png)
You can start a new terminal window and build the project as outlined [here](../README.md).
> Note: This devcontainer does not auto-install the `node_modules` directory. That directory holds the package dependencies needed for the project. Package dependencies are listed in `package.json` and created when running `npm install`.
![](./readme/bash.png)
## Troubleshooting Devcontainers
If you're having trouble, the errors documented below may save you some time and get you back on track.
- **General Error**: There's a broad range of install errors that can be resolved by deleting both the `node_modules` and `dist` directories and then rebuilding the container. A strong wired network connection can speed up the `npm install` process and reduce the risk of installation timeouts.
```bash
$ tree os-seismic-store-svc
├───.devcontainer
├───.vscode
├───...
├───node_modules # generated from running npm install
└───...
```
# Copyright © Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cat << 'EOF' > ~/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=10000
HISTFILESIZE=20000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
export GOROOT=/usr/local/go
export GOPATH=${HOME}/go
export PATH="${PATH}:${GOROOT}/bin"
EOF
cat << 'EOF' > ~/.profile
# ~/.profile: executed by Bourne-compatible login shells.
if [ "$BASH" ]; then
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
fi
eval "$(direnv hook bash)"
mesg n || true
EOF
cat << 'EOF' > ~/.gitconfig
[filter "lfs"]
clean = git-lfs clean -- %f
smudge = git-lfs smudge -- %f
process = git-lfs filter-process
required = true
[core]
quotepath = false
whitespace=fix,-indent-with-non-tab,trailing-space,cr-at-eol
editor = code
[alias]
st = status
co = checkout
br = branch
up = rebase
ci = commit
lol = log --pretty=oneline --abbrev-commit --graph --decorate --all
[hub]
protocol = https
[color]
ui = true
[color "branch"]
current = yellow black
local = yellow
remote = magenta
[color "diff"]
meta = yellow bold
frag = magenta bold
old = red reverse
new = green reverse
whitespace = white reverse
[color "status"]
added = yellow
changed = green
untracked = cyan reverse
branch = magenta
[push]
default = matching
EOF
# install direnv
apt update
apt install -y figlet lolcat fonts-powerline direnv watch tree vim
figlet $(date)
eval "$(direnv hook bash)"
direnv allow .
direnv allow ..
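The `direnv allow` calls above approve a per-directory `.envrc` file, which direnv loads on entering the directory and unloads on leaving it. A minimal hypothetical `.envrc` (the exported values here are illustrative, not part of this repo):

```shell
# .envrc — hypothetical example; direnv exports these on entering the
# directory and removes them again when you leave it
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
```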
#pushd /tmp
#rm setup_*.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_docker.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_azure_cli.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_go.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_kubernetes_tools.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_system.sh
#wget https://raw.githubusercontent.com/jmspring/bedrock-dev-env/master/scripts/setup_terraform.sh
#chmod +x setup_*.sh
#figlet setup_system
#./setup_system.sh
#figlet setup_go
#./setup_go.sh
#figlet setup_terraform
#./setup_terraform.sh
#figlet setup_docker_WIATING_FOR_USER_INPUT
#./setup_docker.sh
#figlet setup_azure_cli
#./setup_azure_cli.sh
#figlet setup_kubernetes_tools
#./setup_kubernetes_tools.sh
#popd
figlet $(date)
figlet done
{
"name": "SDMS Node.js Project",
"dockerFile": "Dockerfile",
// Use 'settings' to set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
// Add the IDs of extensions you want installed when the container is created in the array below.
"extensions": [
"dbaeumer.vscode-eslint"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [5000, 6379],
//"initializeCommand": "npx rimraf ./node_modules",
// Specifies a command that should be run after the container has been created.
//"postCreateCommand": "(nohup redis-server > /tmp/redis.log 2>&1 &) && (npm ci)",
//"postCreateCommand": "npm install",
// Comment out the next line to run as root instead.
"remoteUser": "node"
}
# resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf
# node
**/node_modules/**
dist
# python
*.pyc
# Log Files
*.log
# test
.nyc_output
coverage
test-results.xml
# artifact
artifact
seismic-store-service.tar.gz
# keyfiles
keys
# vscode configurations
.vscode
# dotenv file
.env
# newman junit output
newman
{
"color": "true",
"require": "ts-node/register",
"timeout": 5000,
"opts": false,
"diff": true,
"sort": true,
"spec": "./tests/utest/**.ts"
}
# ***************************************************************************
# Copyright 2017 - 2019, Schlumberger
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ***************************************************************************
# SEISMIC STORE
Seismic Store is a cloud-based solution designed to store and manage datasets of any size in the cloud, enabling secure access to them through a scoped authorization mechanism. Seismic Store overcomes the object-size limitations imposed by cloud providers by managing a generic dataset as multiple independent objects, and therefore provides a generic, reliable, and better-performing solution for handling data in cloud storage.
Saving a dataset in cloud storage as a single entity can be a problem when the dataset exceeds the maximum allowed object size. A single-object approach is also suboptimal in terms of performance, since a single entity cannot easily be uploaded or downloaded in parallel.
Seismic Store is composed of RESTful micro-services, client APIs, and tools designed to implement this multi-object storage approach. The system saves the objects that compose a dataset as a hierarchical data structure in cloud storage, and the dataset properties as a metadata entry in a non-relational catalogue. Storing datasets as multiple independent objects improves overall performance, as generic I/O operations, for example reading or writing objects, can easily be parallelized.
Seismic Store manages data authorization at the service level by protecting access to storage bucket resources; only service-authorized users can directly access a storage resource. The service implements a mechanism that generates an "impersonation token", authorizing long-running background production jobs to access data without requiring further user interaction.
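The multi-object idea described above can be sketched as splitting a dataset into fixed-size chunks and writing them concurrently. This is an illustration only: `putObject`, `CHUNK_SIZE`, and the key layout are hypothetical stand-ins, not the Seismic Store API.

```typescript
// Sketch of the multi-object storage approach: split a dataset into chunks
// and upload them as independent objects in parallel.
const CHUNK_SIZE = 4; // bytes per object; tiny value purely for illustration

function splitIntoChunks(data: Buffer, size: number): Buffer[] {
  const chunks: Buffer[] = [];
  for (let off = 0; off < data.length; off += size) {
    chunks.push(data.subarray(off, off + size)); // last chunk may be shorter
  }
  return chunks;
}

// stand-in for a cloud storage write; returns a fake "<key>:<size>" receipt
async function putObject(key: string, body: Buffer): Promise<string> {
  return `${key}:${body.length}`;
}

async function uploadDataset(name: string, data: Buffer): Promise<string[]> {
  const chunks = splitIntoChunks(data, CHUNK_SIZE);
  // the objects are independent, so the writes can run concurrently
  return Promise.all(chunks.map((c, i) => putObject(`${name}/${i}`, c)));
}
```

Because each object is written and read independently, the same pattern parallelizes downloads and sidesteps any single-object size limit.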
![service architecture diagram](docs/seistore-service-architecture.png "Service Architecture Diagram")
```bash
# build the service
npm run build
# start the service
npm run start
# run unit tests
npm run test
# run unit tests and generate the code coverage
npm run code-coverage
# run the regression/e2e test suite
./tests/e2e/run_e2e_tests.sh \
--seistore-svc-url="seismic store service url" \
--seistore-svc-api-key="seismic store service api key" \
--user-idtoken="user or service agent idtoken" \
--tenant="seistore working tenant" \
--subproject="seistore working subproject" \
--admin-email="admin email"
# run the parallel regression/e2e test suite (add the --run-parallel option)
./tests/e2e/run_e2e_tests.sh \
--seistore-svc-url="seismic store service url" \
--seistore-svc-api-key="seismic store service api key" \
--user-idtoken="user or service agent idtoken" \
--tenant="seistore working tenant" \
--subproject="seistore working subproject" \
--admin-email="admin email" \
--run-parallel
# run the e2e test suite continuously (single agent, sequential execution)
#
# *NOTE*: Auth tokens typically are only valid for 24 hours. It may be
# necessary to terminate the continuous testing loop on occasion
# in order to rotate to a new token value.
#
./tests/e2e/loop_tests.sh \
--seistore-svc-url="seismic store service url" \
--seistore-svc-api-key="seismic store service api key" \
--user-idtoken="user or service agent idtoken" \
--tenant="seistore working tenant" \
--admin-email="admin email"
# run the linter on sources
tslint -c tslint.json 'src/**/*.ts'
```
## Environment configuration
Environment variables can be provided with a `.env` file in the root of the project to be consumed
by [dotenv](https://github.com/motdotla/dotenv). Environment variables are [preloaded](https://github.com/motdotla/dotenv#preload)
by the `npm start` command with the argument `-r dotenv/config`. A template `.env` file can be found
in `/docs/templates/.env-sample`.
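Because `npm start` preloads dotenv with `-r dotenv/config`, the service code can read `process.env` directly without importing dotenv itself. A minimal sketch (the `PORT` and `REDIS_HOST` names and fallback values here are illustrative, not a guaranteed schema; see `/docs/templates/.env-sample` for the real template):

```typescript
// With `node -r dotenv/config`, values from .env are already present in
// process.env before this module runs, so plain reads with fallbacks suffice.
const port: number = Number(process.env.PORT ?? 5000);
const redisHost: string = process.env.REDIS_HOST ?? "127.0.0.1";

console.log(`configured port=${port}, redis host=${redisHost}`);
```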
# CI CD pipeline
## Overview
The `ci cd pipeline` template for SDMS provides CI/CD for seistore-svc.
## Requirements
To use this pipeline, you need to create the following variable groups in the Library:
1. R3MVP - OSDU
```
AGENT_POOL: Name of the build agent pool to be used.
SERVICE_CONNECTION_NAME: Azure Resource service connection (can be for any environment; it is just a placeholder).
```
2. R3MVP - ${{ provider }} Service Release - seistore-svc
Please replace ${{ provider }} with one of the following values: Azure, GCP
```
e2eAdminEmail: Email account with admin permissions used by end to end test.
e2eDataPartition: DataPartition to be used by end to end test.
e2eIdToken: First token to be used by end to end test.
e2eIdToken1: Second token to be used by end to end test.
e2eLegaltag01: First legaltag to be used by end to end test.
e2eLegaltag02: Second legaltag to be used by end to end test.
e2eNewUser: User email to test add a new user.
e2eProjectId: Project id of project to be used by end to end test.
e2eServiceId: First service id of service to be used by end to end test.
e2eServiceId1: Second service id of service to be used by end to end test.
e2eSubproject: Subproject name to be used by end to end test.
e2eSubprojectLongname: Subproject long name to be used by end to end test.
e2eTargetProjId: Target project id of project to be used by end to end test.
e2eTargetServiceId: Target service id of service to be used by end to end test.
e2eTenant: Tenant name to be used by end to end test.
PORT: Port where seistore-svc is going to listen (only used when flux enabled).
REPLICA_COUNT: Number of pod replicas (only used when flux enabled).
serviceUrlSuffix: Url suffix where seistore-svc is listening, usually: seistore-svc/api/v3
utest.runtime.image: Name of container image, usually: seistore-svc-runtime
```
3. R3MVP - ${{ provider }} Target Env - ${{ environment }}
Please replace ${{ provider }} with one of the following values: Azure, GCP
Please replace ${{ environment }} with the name of the environment; see the Notes section.
```
cluster_name: Kubernetes cluster name (used on GCP when flux is not enabled).
cluster_zone: Kubernetes cluster zone (used on GCP when flux is not enabled).
CONTAINER_REGISTRY_NAME: Private container registry name; when using GCP, this is gcr.io.
container_registry_path: To provide subfolder in container registry.
DNS_HOST: Host/DNS name where the entitlements service is located.
ENVIRONMENT_NAME: Name of environment.
gcp_project: GCP project name where the Kubernetes cluster is located.
KEYVAULT_NAME: Azure keyvault name.
PROVIDER_NAME: Use one of the following values: Azure, GCP.
REDIS_HOST: Redis host name.
REDIS_PORT: Redis port.
secure_file_container_registry: Name of the secure file for the container registry connection (used only on GCP).
SERVICE_CONNECTION_NAME: Azure service connection with permissions to deploy to container registry (only used when provider is Azure).
```
## Secret files
1. GCP
```
secure_file_container_registry: Secure file with credentials for connecting to the container registry.
```
## Changes needed
1. Open devops/azure/pipeline.yml
2. Under the build stage, add the providers. Example with both supported providers:
```yaml
- template: template/build-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
providers:
- name: GCP
- name: Azure
```
3. If flux is enabled in your cluster, add your repo to devops/azure/pipeline.yml and name it FluxRepo; then, in devops/azure/template/task/aks-deployment-steps.yml, uncomment:
```yaml
# - checkout: FluxRepo
# persistCredentials: true
```
4. Under the deploy stage, add the providers and environments. Example with two different providers:
```yaml
- template: template/deploy-stage.yml
parameters:
serviceName: ${{ variables.serviceName }}
chartPath: ${{ variables.chartPath }}
manifestRepo: ${{ variables.MANIFEST_REPO }}
providers:
- name: GCP
environments:
- name: 'evt'
fluxEnabled: false
secureFile: evt-seistore-services.json
- name: Azure
environments:
- name: 'dev'
fluxEnabled: true
- name: 'qa'
fluxEnabled: true
```
## Use pipeline
In Pipelines, create a new pipeline and point it at devops/azure/pipeline.yml.
## Notes
1. End-to-end (e2e) testing only happens in environments named dev, qa, evd, and evt.
## License
Copyright © Microsoft Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
apiVersion: v2
name: sdms
appVersion: "latest"
description: Helm Chart for installing sdms service.
version: 0.1.0
type: application
global:
replicaCount: #{REPLICA_COUNT}#
namespace: osdu
podidentity: osdu-identity
configEnv:
cloudProvider: #{PROVIDER_NAME}#
keyvaultUrl: #{KEYVAULT_NAME}#
desServiceHost: #{DNS_HOST}#
seistoreSystemAdmins: #{SEISTORE_SYSTEM_ADMINS}#
redisInstanceAddress: #{REDIS_HOST}#
redisInstancePort: #{REDIS_PORT}#
appEnvironmentIdentifier: #{ENVIRONMENT_NAME}#
port: #{PORT}#
image:
repository: #{CONTAINER_REGISTRY_NAME}#.azurecr.io/#{utest.runtime.image}#
branch: master
tag: #{previousRuntimeTag}#
global:
replicaCount: #{REPLICA_COUNT}#
namespace: osdu
podidentity: osdu-identity
configEnv:
cloudProvider: #{PROVIDER_NAME}#
keyvaultUrl: #{KEYVAULT_NAME}#
desServiceHost: #{DNS_HOST}#
redisInstanceAddress: #{REDIS_HOST}#
redisInstancePort: #{REDIS_PORT}#
appEnvironmentIdentifier: #{ENVIRONMENT_NAME}#
port: #{PORT}#
image:
repository: #{CONTAINER_REGISTRY_NAME}#.azurecr.io/#{utest.runtime.image}#