Introduced AKS pod policies
Issue Ref: #218 (closed)

Main changes:
- Refactored the AKS module and added policies.tf
- Made policy installation optional
- Refactored variables and removed the dashboard
- Upgraded the azurerm provider (policy resources require 2.98.0)
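The optional installation could be wired up along these lines (a sketch only: the variable name `enable_aks_policies`, the resource names, and the definition lookup are illustrative assumptions, not the actual contents of policies.tf):

```hcl
# Hypothetical sketch: gate policy assignments behind a flag.
variable "enable_aks_policies" {
  description = "Whether to install the AKS Azure Policy assignments"
  type        = bool
  default     = false
}

# Look up a built-in policy definition by display name (name is illustrative).
data "azurerm_policy_definition" "no_privilege_escalation" {
  display_name = "Kubernetes clusters should not allow container privilege escalation"
}

resource "azurerm_resource_group_policy_assignment" "no_privilege_escalation" {
  count = var.enable_aks_policies ? 1 : 0

  name                 = "aks-no-privilege-escalation"
  resource_group_id    = azurerm_resource_group.aks.id
  policy_definition_id = data.azurerm_policy_definition.no_privilege_escalation.id

  parameters = jsonencode({
    effect = { value = "deny" }
  })
}
```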
Already tested in one OSDU environment. Existing deployments need to run `terraform init -upgrade` to upgrade the azurerm provider.
NOTE: If a policy is applied while a workload (pod) is running and the pod does not meet the policy conditions, there is no downtime; Azure only marks the pod as non-compliant. However, if the pod is recreated, it will not start due to the policy violation.
Policies introduced:
- Authorized IP ranges should be defined on Kubernetes (audit)
- AKS private clusters should be enabled (audit)
- AKS should not allow privileged containers (deny)
- AKS should not allow container privilege escalation (deny)
- Kubernetes cluster pods should only use allowed volume types (deny)
- Kubernetes cluster pod hostPath volumes should only use allowed host path (deny)
- Kubernetes cluster pods should only use approved host network and port range (deny)
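For reference, a pod that would satisfy the volume-type policies might look like the following (a hypothetical manifest, assuming `emptyDir` is in the allowed volume list, which depends on the assignment parameters):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-types-allowed
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  # emptyDir is commonly permitted; hostPath would be restricted by the policies above.
  - name: scratch
    emptyDir: {}
```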
Screenshots and a few tests:

Test for a pod that is not allowed to escalate privileges:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-privilege-escalation-allowed
  labels:
    app: nginx-privilege-escalation
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      allowPrivilegeEscalation: true
```
```
# kubectl apply -f not-allowed.yaml
Error from server ([azurepolicy-psp-container-no-privilege-esc-e6a74aee95507167737f] Privilege escalation container is not allowed: nginx): error when creating "not-allowed.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-psp-container-no-privilege-esc-e6a74aee95507167737f] Privilege escalation container is not allowed: nginx
```
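The compliant counterpart (a hypothetical manifest, not taken from the MR) simply disables privilege escalation and is admitted by the webhook:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-privilege-escalation-denied
  labels:
    app: nginx-privilege-escalation
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      # Setting this to false satisfies the deny_privilege_escalation policy.
      allowPrivilegeEscalation: false
```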
TODO: Remove the istio-system and airflow2 namespaces from the policy exclusions and fix the Helm chart values so they comply with the deny_privilege_escalation policy ("Kubernetes clusters should not allow container privilege escalation"). For now those namespaces are not compliant, so the policy is configured to exclude them.