[Breaking Change] Zonal Redundancy for Airflow

Created Jun 21, 2021 by Abhishek Chowdhry (@abhishekDeveloper)

Point Airflow at the newly created zone-redundant Redis instance going forward. This will break the existing Airflow runs, and they will need to be triggered again.
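
As a sanity check before cutting over, the new instance can be pinged with the credentials Airflow will use. This is a minimal sketch with redis-py; the URL is a placeholder, and the broker itself is configured via `broker_url` in the `[celery]` section of airflow.cfg (or the `AIRFLOW__CELERY__BROKER_URL` environment variable):

```python
# Minimal sketch: NEW_BROKER_URL is a placeholder; Airflow reads the real
# value from [celery] broker_url in airflow.cfg or from the
# AIRFLOW__CELERY__BROKER_URL environment variable.
import redis

NEW_BROKER_URL = "rediss://:<password>@<new-zr-redis-host>:6380/0"  # assumed

# PING confirms the zone-redundant instance is reachable with these
# credentials before any Airflow traffic is pointed at it.
redis.Redis.from_url(NEW_BROKER_URL).ping()
print("New Redis broker is reachable.")
```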

Consuming this Change:

This change will break the existing Airflow runs. If they can be retriggered without losing any data, simply retrigger them once this change is merged.

If retriggering end-to-end runs is not possible for any reason and you don't want to lose the existing runs, there are two suggested methods:

  1. Drain the entire queue by not sending any new requests to Airflow. Once the queue is drained, apply the change to point at the new queue (the new Redis instance) and resume traffic to Airflow; a drain check is sketched below.

  2. Stop sending any new requests to Airflow and apply the change to point at the new queue (the new Redis instance). Then requeue all the tasks from the old queue into the new queue and resume traffic to Airflow; a requeue sketch follows below.
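
For option 1, draining can be verified by polling the length of the queue list on the old instance. A minimal sketch, assuming redis-py and Celery's default queue name; Celery's Redis transport keeps each queue's pending messages in a Redis list keyed by the queue name:

```python
# Minimal sketch: the URL is a placeholder and "default" is Celery's
# default queue name; adjust both to match your deployment.
import time
import redis

old = redis.Redis.from_url("rediss://:<password>@<old-redis-host>:6380/0")
QUEUE = "default"

# Pending messages live in a list keyed by the queue name, so
# LLEN == 0 means the queue is drained.
while (pending := old.llen(QUEUE)) > 0:
    print(f"{pending} tasks still queued; waiting...")
    time.sleep(30)
print("Queue drained; safe to switch the broker and resume traffic.")
```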

Prefer the first option over the second: requeuing carries significant overhead and may still result in data loss.
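
If option 2 is unavoidable, messages can be moved list-by-list between the two instances. A minimal sketch, again assuming redis-py, placeholder URLs, and the default queue name. Celery's Redis transport produces with LPUSH and consumes with BRPOP, so popping from the right of the old list and pushing onto the left of the new one should preserve consumption order; verify this against your Celery version:

```python
# Minimal sketch: both URLs are placeholders. Stop all producers and
# workers before running this, or messages may be lost or duplicated.
import redis

old = redis.Redis.from_url("rediss://:<password>@<old-redis-host>:6380/0")
new = redis.Redis.from_url("rediss://:<password>@<new-zr-redis-host>:6380/0")
QUEUE = "default"

moved = 0
while True:
    # RPOP returns the oldest message (producers LPUSH new ones on the
    # left); LPUSH-ing it into the new list keeps the oldest message at
    # the consuming (right) end.
    msg = old.rpop(QUEUE)
    if msg is None:
        break
    new.lpush(QUEUE, msg)
    moved += 1
print(f"Requeued {moved} messages to the new Redis instance.")
```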

Edited Aug 23, 2021 by Abhishek Chowdhry