How to Design a Modern DevOps Pipeline That Actually Scales




GitLab, YAML, Autonomy, and the Reality of Enterprise CI/CD


By Mark Kendall – Learn, Teach, Master





Introduction: DevOps Isn’t Hard Because It’s Complicated



It’s hard because it’s invisible.


Most DevOps failures don’t come from bad tools.

They come from pipelines that grew organically, script by script, exception by exception, until no one can explain:


  • Where logic really lives

  • Where secrets really come from

  • Why one environment behaves differently than another

  • Or why a change in one repo broke 40 others



If you’ve ever watched a GitLab pipeline fail 12 minutes in with a cryptic error from a script you didn’t even know existed — you already know what I mean.


In this article, I’ll walk through a real-world, enterprise-grade DevOps structure and show how to organize it top to bottom so that:


  • Teams can move fast in dev

  • Production remains protected

  • Pipelines are understandable

  • And the whole thing doesn’t collapse under its own complexity



This isn’t theory.

This is how modern DevOps actually works when you peel the layers back.





The Big Idea: Separate Structure from Behavior



The single most important design decision in DevOps is this:


YAML defines structure. Scripts define behavior. Variables define environment.


Once you accept that, everything else becomes simpler.





Layer 1: The App Repository (Thin by Design)



Your application repo should not contain your DevOps logic.


It should contain:

include:
  - project: company/devops/cicd-templates
    file: 'yaml_files/templates.yml'
And then minimal job declarations like:

build:
  extends: .build-template

test:
  extends: .test-template

deploy:
  extends: .deployment-template
That’s it.


Why?


Because app teams should not be debugging Helm, AWS CLI, Terraform, and GitLab quirks all at once.

They should be building applications.





Layer 2: The CI/CD Template Repository (Your DevOps Spine)



This is the real system.


It contains three kinds of assets:



1) Job Templates (YAML)



Example:

.build-template:
  stage: build
  script:
    - chmod +x scripts/build_scripts.sh
    - scripts/build_scripts.sh
  artifacts:
    paths:
      - build-artifacts/
This defines:


  • When the job runs

  • What image it uses

  • What script it calls

  • What artifacts it emits



But it does not define how building actually works.
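
For contrast, here is a minimal sketch of what build_scripts.sh might contain: the "how" that stays out of the YAML on purpose. The build tool and paths below are placeholder assumptions, not logic from any real project:

#!/usr/bin/env bash
# Hypothetical build_scripts.sh fragment; the build command and paths are placeholders.
set -euo pipefail

mkdir -p build-artifacts

# The actual "how" of building lives here, not in the YAML:
# compile, package, version, tag, whatever the project needs.
./gradlew build                          # placeholder build command
cp build/libs/*.jar build-artifacts/     # stage outputs where the YAML template expects artifacts

Swap in Maven, npm, or Docker here and the .build-template YAML above never changes.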





2) Bootstrap Defaults (YAML)



Example:

default:
  before_script:
    - mkdir -p $CI_PROJECT_DIR/scripts
    - curl .../build_scripts.sh -o scripts/build_scripts.sh
    - curl .../deployment_scripts.sh -o scripts/deployment_scripts.sh
    - chmod +x scripts/*.sh

This is where the magic happens.


Every pipeline:


  • Pulls its real logic from a central repo

  • At runtime

  • Using GitLab’s API

  • With a token stored in GitLab variables (a sketch follows below)
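
A minimal sketch of what that runtime fetch can look like, using GitLab's raw-file API. The token variable name (CICD_READ_TOKEN) and the project ID (1234) are placeholder assumptions, and SCRIPT_BRANCH is the contract variable introduced in Layer 3 below:

# Hypothetical before_script commands; CICD_READ_TOKEN and project ID 1234 are placeholders.
mkdir -p "$CI_PROJECT_DIR/scripts"
curl --fail --silent --header "PRIVATE-TOKEN: ${CICD_READ_TOKEN}" \
  "${CI_API_V4_URL}/projects/1234/repository/files/scripts%2Fbuild_scripts.sh/raw?ref=${SCRIPT_BRANCH}" \
  -o scripts/build_scripts.sh
chmod +x scripts/*.sh

Because the ref is a variable, dev pipelines can pull scripts from a dev branch while prod pipelines stay pinned to main.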



This gives you:


  • Central control

  • Instant global patching

  • Zero duplication

  • No copy/paste pipelines






3) Runtime Scripts (Shell)



This is where real work happens:


  • Installing AWS CLI

  • Installing kubectl

  • Installing Helm

  • Running Terraform

  • Building artifacts

  • Deploying to Kubernetes

  • Handling rollbacks



These scripts are:


  • Versioned

  • Testable

  • Loggable

  • Patchable

  • Reviewable



And most importantly:


They are not buried inside YAML.
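
To make that concrete, here is a hypothetical fragment of deployment_scripts.sh. The release name, chart path, and namespace are placeholders; it assumes Helm is already available on the runner and that KUBECONFIG has been pointed at the right cluster using the contract variables described in Layer 3 below:

#!/usr/bin/env bash
# Hypothetical deployment_scripts.sh fragment; release, chart, and namespace are placeholders.
set -euo pipefail

# Deploy, and fall back to the previous Helm revision if the upgrade fails.
if ! helm upgrade --install my-app ./chart --namespace "my-app-${ENV_TIER}" --wait; then
  echo "Deploy of my-app to ${TARGET_CLUSTER} failed; rolling back"
  helm rollback my-app
  exit 1
fi

Because this is a plain script, the rollback behavior can be reviewed, tested, and patched centrally without editing a single pipeline definition.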





Layer 3: Environment Resolution (The Part Everyone Gets Wrong)



Here’s the dirty secret of most DevOps pipelines:


Environments are usually implicit, tribal, and undocumented.


Someone knows which variables are set in prod.

Someone thinks dev behaves the same way.

No one has a contract.


That’s how outages happen.





The Minimal Environment Contract



You only need three variables to make a dev environment fully sovereign and production fully protected:

SCRIPT_BRANCH   → which version of the scripts to run
ENV_TIER        → what rules apply (dev / uat / prod)
TARGET_CLUSTER  → where deployments go

Example (Dev):

SCRIPT_BRANCH=dev
ENV_TIER=dev
TARGET_CLUSTER=dev

Example (Prod):

SCRIPT_BRANCH=main
ENV_TIER=prod
TARGET_CLUSTER=prod

Now your scripts can do things like:

if [ "$ENV_TIER" = "prod" ]; then

  echo "Direct prod deploys not allowed from this pipeline"

  exit 1

fi

And:

case "$TARGET_CLUSTER" in

  dev)  export KUBECONFIG=dev.kubeconfig ;;

  prod) export KUBECONFIG=prod.kubeconfig ;;

esac

This gives you:


  • Behavioral isolation

  • Logical isolation

  • Physical isolation



With three variables.





Why This Architecture Works in the Real World



This structure looks complex on paper.


In practice, it reduces chaos.



1) It Centralizes Risk



  • Security patches → one repo

  • Pipeline changes → one repo

  • Tool upgrades → one repo



No more 80 repos doing 80 slightly different things.





2) It Makes Failures Understandable



When a pipeline fails:


  • YAML shows what ran

  • Scripts show how it ran

  • Variables show where it ran



That’s debuggable.
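
One cheap way to reinforce that, assuming the three contract variables from Layer 3: start every runtime script with a strict-mode preamble that prints its own context, so the first line of a failed job log already answers "where" and "how".

#!/usr/bin/env bash
# Common preamble for runtime scripts (a sketch; assumes the contract variables are set).
set -euo pipefail   # fail fast on errors, unset variables, and broken pipes

echo "running $(basename "$0") | branch=${SCRIPT_BRANCH:-unset} tier=${ENV_TIER:-unset} cluster=${TARGET_CLUSTER:-unset}"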





3) It Gives Dev Teams Autonomy Without Letting Them Break Prod



Dev teams can:


  • Build artifacts

  • Deploy to dev Kubernetes

  • Run Terraform plans

  • Change pipeline behavior



Without ever touching:


  • Prod credentials

  • Prod clusters

  • Prod registries

  • Prod rules



That’s the right balance of freedom and control.





The Honest Part: This Stuff Is Hard



Yes — this architecture has:


  • Lots of moving parts

  • Multiple repos

  • YAML + shell + variables

  • Secrets

  • Branching logic

  • Tool bootstrapping

  • Runtime downloads



And yes — pipelines will fail.


Often.


But here’s the difference:


When they fail, you actually know where to look.


That alone puts you ahead of 90% of DevOps implementations.





Final Thoughts: DevOps Is a Product, Not a Script



The biggest mindset shift is this:


Your CI/CD system is a software product.


It needs:


  • Architecture

  • Versioning

  • Contracts

  • Environments

  • Guardrails

  • Observability

  • Ownership



Once you treat it that way, everything changes.





About Learn, Teach, Master



Learn, Teach, Master exists to capture real engineering knowledge — not buzzwords, not hype, not vendor slides.


Just the stuff that actually works.





 
 
 
