
# Deploy helm chart

## Introduction

This chart bootstraps a deployment on a Kubernetes cluster using the Helm package manager.

## Prerequisites

The chart was tested on a Kubernetes cluster (v1.21.11) with Istio (1.12.6).

Other versions may work, but they have not been tested.

### Operating system

The chart works on Debian-based Linux (Debian 10 and Ubuntu 20.04) and in Windows WSL 2. It also works in Google Cloud Shell, but this is not guaranteed. All other operating systems, including macOS, are neither verified nor supported.

### Packages

Packages are required only for installation from a local computer.

- Helm (version v3.7.1 or higher)
- kubectl (version v1.21.0 or higher)

## Installation

Set the variables in the `values.yaml` file using any code editor. Some of the values are prefilled, but you need to specify the rest yourself. More information about each variable is given below.

### Global variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| global.domain | your domain for the external endpoint, e.g. `example.com` | string | - | yes |
| global.onPremEnabled | whether on-prem is enabled | boolean | false | yes |
| global.limitsEnabled | whether CPU and memory limits are enabled | boolean | true | yes |
| global.logLevel | severity of logging level | string | ERROR | yes |
| global.tier | only PROD must be used to enable autoscaling | string | - | no |
| global.autoscaling | enables horizontal pod autoscaling when tier=PROD | boolean | true | yes |
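Based on the table above, the `global` section of `values.yaml` might look roughly as follows (the domain is a placeholder, not a real value from this chart):

```yaml
global:
  domain: example.com   # placeholder: your domain for the external endpoint
  onPremEnabled: false  # whether on-prem is enabled
  limitsEnabled: true   # whether CPU and memory limits are enabled
  logLevel: ERROR       # severity of logging level
  tier: PROD            # only PROD enables autoscaling
  autoscaling: true     # horizontal pod autoscaling, when tier=PROD
```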

### Configmap variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| data.logLevel | logging severity level for this service only | string | - | yes, only if it differs from global.logLevel |
| data.entitlementsHost | Entitlements service host | string | http://entitlements | yes |
| data.policyHost | Policy service host | string | http://policy | yes |
| data.partitionHost | Partition service host | string | http://partition | yes |
| data.policyId | policy id, e.g. from ${POLICY_HOST}/api/policy/v1/policies | string | search | yes |
| data.securityHttpsCertificateTrust | if true, the Elastic client connection uses TrustSelfSignedStrategy() | bool | true | yes |
| data.redisSearchHost | host of the Redis instance; if empty (default), Helm installs an internal Redis instance | string | - | yes |
| data.redisSearchPort | port of the Redis instance | integer | 6379 | yes |

### Deployment variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| data.requestsCpu | amount of requested CPU | string | 20m | yes |
| data.requestsMemory | amount of requested memory | string | 550Mi | yes |
| data.limitsCpu | CPU limit | string | 1 | only if global.limitsEnabled is true |
| data.limitsMemory | memory limit | string | 1G | only if global.limitsEnabled is true |
| data.serviceAccountName | name of your service account | string | search | yes |
| data.imagePullPolicy | when to pull the image | string | IfNotPresent | yes |
| data.image | service image | string | - | yes |
| data.redisImage | Redis image | string | redis:7 | yes |
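The `data.*` values from the configmap and deployment tables live in a single `data` map in `values.yaml`. A sketch using the defaults above (the service image is a placeholder):

```yaml
data:
  # configmap values
  entitlementsHost: http://entitlements
  policyHost: http://policy
  partitionHost: http://partition
  policyId: search
  securityHttpsCertificateTrust: true
  redisSearchHost: ""   # empty: Helm installs an internal Redis instance
  redisSearchPort: 6379
  # deployment values
  requestsCpu: 20m
  requestsMemory: 550Mi
  limitsCpu: "1"        # used only if global.limitsEnabled is true
  limitsMemory: 1G      # used only if global.limitsEnabled is true
  serviceAccountName: search
  imagePullPolicy: IfNotPresent
  image: my-registry/search:latest   # placeholder: your service image
  redisImage: redis:7
```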

### Configuration variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| conf.appName | service name | string | search | yes |
| conf.elasticSecretName | secret for Elasticsearch | string | search-elastic-secret | yes |
| conf.searchRedisSecretName | search Redis secret that stores the Redis password under the REDIS_PASSWORD key | string | search-redis-secret | yes |

### Istio variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| istio.proxyCPU | CPU request for Envoy sidecars | string | 10m | yes |
| istio.proxyCPULimit | CPU limit for Envoy sidecars | string | 200m | yes |
| istio.proxyMemory | memory request for Envoy sidecars | string | 100Mi | yes |
| istio.proxyMemoryLimit | memory limit for Envoy sidecars | string | 256Mi | yes |
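For completeness, the `conf` and `istio` sections with the defaults from the two tables above might look like this:

```yaml
conf:
  appName: search
  elasticSecretName: search-elastic-secret
  searchRedisSecretName: search-redis-secret  # must contain REDIS_PASSWORD key

istio:
  proxyCPU: 10m         # CPU request for Envoy sidecars
  proxyCPULimit: 200m   # CPU limit for Envoy sidecars
  proxyMemory: 100Mi    # memory request for Envoy sidecars
  proxyMemoryLimit: 256Mi
```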

### Horizontal Pod Autoscaling (HPA) variables (used only if tier=PROD and autoscaling=true)

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| hpa.minReplicas | minimum number of replicas | integer | 6 | only if global.autoscaling is true and global.tier is PROD |
| hpa.maxReplicas | maximum number of replicas | integer | 15 | only if global.autoscaling is true and global.tier is PROD |
| hpa.targetType | type of measurement: AverageValue or Value | string | "AverageValue" | only if global.autoscaling is true and global.tier is PROD |
| hpa.targetValue | threshold value that triggers scaling up | integer | 200 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleUpStabilizationWindowSeconds | time to wait before applying a scale-up once it is triggered | integer | 10 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleUpPoliciesValue | maximum number of new replicas to create (as a percentage of the current state) | integer | 50 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleUpPoliciesPeriodSeconds | pause between consecutive scale-up decisions | integer | 15 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleDownStabilizationWindowSeconds | time to wait before applying a scale-down once it is triggered | integer | 60 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleDownPoliciesValue | maximum number of replicas to destroy (as a percentage of the current state) | integer | 25 | only if global.autoscaling is true and global.tier is PROD |
| hpa.behaviorScaleDownPoliciesPeriodSeconds | pause between consecutive scale-down decisions | integer | 60 | only if global.autoscaling is true and global.tier is PROD |
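With `global.tier=PROD` and `global.autoscaling=true`, an `hpa` section using the defaults above might look like:

```yaml
hpa:
  minReplicas: 6
  maxReplicas: 15
  targetType: AverageValue
  targetValue: 200
  behaviorScaleUpStabilizationWindowSeconds: 10
  behaviorScaleUpPoliciesValue: 50           # max % of current replicas added per step
  behaviorScaleUpPoliciesPeriodSeconds: 15
  behaviorScaleDownStabilizationWindowSeconds: 60
  behaviorScaleDownPoliciesValue: 25         # max % of current replicas removed per step
  behaviorScaleDownPoliciesPeriodSeconds: 60
```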

### Limits variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| limits.maxTokens | maximum number of requests per fillInterval | integer | 150 | only if global.autoscaling is true and global.tier is PROD |
| limits.tokensPerFill | number of new tokens allowed every fillInterval | integer | 150 | only if global.autoscaling is true and global.tier is PROD |
| limits.fillInterval | time interval | string | "1s" | only if global.autoscaling is true and global.tier is PROD |
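These three values describe a token-bucket rate limit: up to maxTokens requests are admitted, and tokensPerFill new tokens are added every fillInterval. With the defaults above the section would look like:

```yaml
limits:
  maxTokens: 150      # bucket capacity: max requests per fillInterval
  tokensPerFill: 150  # tokens restored every fillInterval
  fillInterval: "1s"
```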

## Methodology for calculating hpa.targetValue, limits.maxTokens and limits.tokensPerFill

The parameters hpa.targetValue, limits.maxTokens and limits.tokensPerFill were determined empirically during load testing. The tests were conducted using the N2D machine series, which can run on either AMD EPYC Milan or AMD EPYC Rome processors. The values were fine-tuned to ensure optimal performance under typical workloads.

## Recommendations for New Instance Types

When changing the instance type to a newer generation, such as the C3D series, it is essential to conduct new load testing. This ensures the parameters are recalibrated to match the performance characteristics of the new processor architecture, optimizing resource utilization and maintaining application stability.

## Install the Helm chart

Run this command from within this directory:

```shell
helm install gc-search-deploy .
```
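If you want to override individual values without editing `values.yaml`, Helm's standard `--set` and `--dry-run` flags can be used; for example (the domain here is a placeholder):

```shell
# render the manifests first without installing anything (placeholder domain)
helm install gc-search-deploy . --set global.domain=example.com --dry-run

# then perform the actual install
helm install gc-search-deploy . --set global.domain=example.com
```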

## Uninstall the Helm chart

To uninstall the Helm deployment:

```shell
helm uninstall gc-search-deploy
```

Do not forget to delete all Kubernetes secrets and PVCs associated with the service.
