
Enterprise Certificate Management

  • Writer: Mark Kendall
  • Jul 8
  • 10 min read

You're on the right track thinking about securing online connections, but the distinction between SSL and TLS is more about evolution than different use cases. In short, TLS (Transport Layer Security) is the modern, more secure version of SSL (Secure Sockets Layer).


Think of it like this: SSL was the original technology for encrypting data between a website and your browser. Over time, security vulnerabilities were discovered in SSL. To address these issues and improve the technology, TLS was created. All versions of SSL are now considered outdated and insecure.



Here’s a breakdown of the key differences:


It's All About a Secure Connection


Your idea that one is for "webs and certificates" and the other for "machines and servers" is a common misconception. Both SSL and TLS are used for the same fundamental purpose: to create a secure, encrypted connection between a client (like your web browser) and a server (the website you're visiting). This is what puts the "s" in "https://" and displays the padlock icon in your browser's address bar.


You'll often hear the term "SSL certificate," but in reality, when you get one today, you're actually getting a certificate that uses the modern TLS protocol. The term "SSL" has stuck around mostly due to common usage.
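You can see which protocol a server actually negotiates with a quick check. Here's a sketch using the openssl CLI (the hostname is a placeholder, `-brief` requires a reasonably recent OpenSSL, and the exact output format varies by version):

```bash
# Connect to a server and report the negotiated protocol (e.g., TLSv1.3).
echo | openssl s_client -connect example.com:443 -brief 2>&1 | grep -i "protocol"
```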


Key Differences at a Glance


| Feature | SSL (Secure Sockets Layer) | TLS (Transport Layer Security) |
| --- | --- | --- |
| Security | All versions have known vulnerabilities and are deprecated. | Offers stronger encryption algorithms and is more secure. |
| Current Status | Outdated and no longer considered secure. | The current industry standard for online security. |
| Handshake Process | A more complex and slower connection process. | A more streamlined and faster connection process. |


Essentially, while you might still see the term "SSL" used, the underlying technology that is actually securing your connection is almost certainly TLS. Both are fundamental to securing communications across the internet for websites, servers, and various other applications, not one or the other for specific types of machines.




My company is having problems keeping certificates updated, rolling out new certificates, and managing the whole infrastructure around them. When you've got 100 repos and 50 different feature teams rolling out everything from microservices to Apache apps and JAR files, keeping certificates updated, circulating, and secure is a big task. Give me the overall view on how an enterprise might attack that, and some best practices.







Of course. Your company's problem is incredibly common as organizations scale. The core solution is to stop treating certificate management as a manual, ticket-based task and instead treat it as an automated infrastructure problem.


The modern enterprise approach is to build a centralized and automated Certificate Lifecycle Management (CLM) system. Here’s an overall view of how to attack this and the best practices involved.


## The Core Strategy: Automate and Centralize


Your goal is to get humans out of the certificate request, renewal, and deployment process as much as possible. When a feature team deploys a new microservice, the certificate for it should be provisioned automatically as part of the deployment pipeline, not by a developer filing a ticket and waiting.


This strategy revolves around four key pillars:


1. Standardize Your Certificate Authority (CA) 🎯


You can't have 50 teams using different certificate providers. You need to standardize.


For Public-Facing Services: Use a CA that supports automation. Let's Encrypt is the industry standard for this because its entire process is built around the ACME (Automated Certificate Management Environment) protocol.


For Internal Services: For services inside your VPC or on-prem network, use a private CA. Tools like HashiCorp Vault or cloud provider solutions (AWS Certificate Manager Private CA) can act as your internal CA, issuing certificates that are trusted only within your infrastructure.
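As a rough sketch of what the Vault option looks like (path and role names here are illustrative, chosen to match the pipeline samples later in this post; a production setup would also generate and sign the intermediate's CSR properly):

```bash
# Enable a root PKI engine and generate an internal root certificate.
vault secrets enable pki
vault secrets tune -max-lease-ttl=87600h pki
vault write pki/root/generate/internal common_name="company.com" ttl=87600h

# Enable an intermediate PKI engine at pki_int (the path the later samples use).
vault secrets enable -path=pki_int pki

# Define a role per service; it constrains what that service may request.
vault write pki_int/roles/my-microservice \
    allowed_domains="company.com" allow_subdomains=true max_ttl="2160h"
```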


2. Implement ACME for Automation 🤖


The ACME protocol is the engine that drives modern certificate automation. An ACME client can automatically prove to a CA that you control a domain, and then fetch and renew a certificate without any human intervention.


Instead of a developer manually creating a Certificate Signing Request (CSR), they simply configure their service's deployment script to talk to your central ACME-enabled endpoint.
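For example, a minimal certbot run against Let's Encrypt looks like this (the domain and webroot path are placeholders):

```bash
# Prove control of the domain over HTTP and obtain a certificate.
# Certbot stores the result under /etc/letsencrypt/live/<domain>/.
sudo certbot certonly --webroot -w /var/www/html -d api.company.com
```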


3. Use a Central Secrets Vault 🔑


Private keys and certificates should never be stored in Git repos or manually copied to servers. A central secrets vault is the secure "source of truth."


How it Works: The automation system (Pillar #2) requests a certificate. The CA issues the certificate and private key directly into a secure, centralized vault like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault.


Distribution: Your applications (Apache, microservices, etc.) are then given a secure identity (like an IAM role in AWS) that grants them permission to pull their specific certificate from the vault upon startup or deployment. The key never touches a developer's machine.
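As an illustration of that flow, here is a sketch of a container entrypoint that authenticates to Vault with AppRole and pulls its certificate at startup; all addresses, role names, and file paths are assumptions:

```bash
#!/usr/bin/env bash
# Sketch: fetch TLS material from Vault at container startup, then start the app.
set -euo pipefail

export VAULT_ADDR="https://vault.company.com"

# Authenticate with an AppRole identity injected by the platform.
export VAULT_TOKEN=$(vault write -field=token auth/approle/login \
    role_id="$VAULT_ROLE_ID" secret_id="$VAULT_SECRET_ID")

# Issue a short-lived cert from the PKI engine and split it into files.
vault write -format=json pki_int/issue/my-microservice \
    common_name="api.teams.company.com" ttl="72h" > /tmp/cert.json
jq -r '.data.certificate' /tmp/cert.json > /etc/tls/tls.crt
jq -r '.data.private_key' /tmp/cert.json > /etc/tls/tls.key

# Hand off to the real application process.
exec "$@"
```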


4. Integrate with CI/CD and Infrastructure as Code (IaC) ⚙️


This is where it all comes together. Certificate management becomes just another part of your deployment process.


When a developer defines a new service in their Terraform or Kubernetes manifest, the configuration includes a block that says, "This service needs a certificate for api.newfeature.company.com."


When the CI/CD pipeline (e.g., GitLab CI, GitHub Actions) runs, a script automatically interacts with your vault or ACME endpoint to provision and deploy the certificate along with the application code.


## Best Practices Checklist


✅ Use Short-Lived Certificates: Once you have automation, there's no reason to use 1-year certificates. Switch to 90-day certificates (or even shorter). This dramatically reduces the risk window if a private key is ever compromised and forces your teams to rely on the automation.


✅ Automate Everything: The goal is zero-touch renewal. Renewals should happen automatically (e.g., every 60 days for a 90-day cert) without anyone noticing (see the cron sketch after this checklist).


✅ Discover and Monitor: Implement a tool that continuously scans your network (public and private) to discover all TLS certificates. This builds a complete inventory, flags certs that were created outside the system ("rogue" certs), and provides a dashboard of all upcoming expirations. This is your safety net (a one-liner version of such a check appears after this checklist).


✅ Centralize Control: Do not allow individual teams to procure or manage their own certificates. All requests must flow through your central, automated system. This ensures policy enforcement (e.g., key strength, approved CAs).


✅ Use Wildcard Certificates Sparingly: While *.apps.company.com seems convenient, its private key becomes a massive security risk. If that single key is compromised, every single subdomain is exposed. Individual certificates per service are generally more secure.
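To make the zero-touch renewal item concrete, here is a hypothetical cron entry using certbot; the schedule and reload command are assumptions you would adapt to your stack:

```bash
# Hypothetical crontab entry: attempt renewal daily at 03:00. certbot only
# renews certs that are close to expiry, and the deploy hook reloads the
# web server only when a certificate actually changed.
0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload apache2"
```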
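And for discovery and monitoring: commercial scanners do this at scale, but the basic check is simple. A sketch with the openssl CLI (the hostname is a placeholder) that you could loop over an endpoint inventory:

```bash
# Print the expiry (notAfter) date of the certificate a host presents.
echo | openssl s_client -connect api.company.com:443 -servername api.company.com 2>/dev/null \
  | openssl x509 -noout -enddate
```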


## Common Tools for the Job


Automation & Issuance:


Let's Encrypt with clients like Certbot for simple setups.


HashiCorp Vault has built-in CA capabilities and can integrate with public CAs.


Centralized Management & Vaulting:


HashiCorp Vault: The gold standard for self-hosted, comprehensive secrets management.


AWS Certificate Manager (ACM): Tightly integrated with AWS services like Load Balancers and CloudFront. It handles renewal automatically for you.


Azure Key Vault: The equivalent service in the Azure ecosystem.


Enterprise-Grade CLM Platforms:


Venafi and Keyfactor are commercial platforms that provide extensive discovery, policy enforcement, and reporting features for very large enterprises.



So, what you're saying is: each team has its own GitHub repository and pushes its images, JAR files, and artifacts to the various deployment targets or clouds, and their pipelines (usually Jenkins or GitHub Actions) should incorporate the practices you just described, using the vaults and the other services, to automate that management enterprise-wide. If that's correct, then give me some samples, or "factory" models, for the DevOps team, so that teams don't have to write the code per se in each pipeline; they just include the template, source, or module into their pipeline. That's what we want to do. Talk to me about how I would get that done, and give me some samples.







Yes, you've got it exactly right. The strategy is to embed automated certificate management directly into the CI/CD pipelines your teams already use. This makes security a seamless, automated part of the deployment process rather than a separate, manual step managed by a central enterprise team.


To do this without each team writing boilerplate code, you create standardized, reusable "factory" models. Here’s how you can build these for GitHub Actions and Jenkins, along with samples.


## The "Factory" Model Concept 🏭


The core idea is to create a centralized, reusable piece of code (a template or module) that handles all the logic for interacting with your vault and Certificate Authority (CA). The feature team's pipeline simply calls this module and provides a few key inputs, like the desired certificate name.


The DevOps team maintains this central module, and all 50 feature teams simply include it.


## Model 1: GitHub Actions Reusable Workflow


In GitHub Actions, you can create a "reusable workflow" in a central DevOps repository. Other repositories can then call this workflow as a single step.


The Central Template (my-org/devops-templates/.github/workflows/get-certificate.yml):


Your DevOps team would create a repository (e.g., devops-templates) and define this reusable workflow. It takes the service name as an input, talks to Vault, and outputs the certificate details.


```yaml
# This is the REUSABLE workflow that lives in a central repo
name: 'Reusable Certificate Manager'

on:
  workflow_call:
    inputs:
      service-name:
        required: true
        type: string
      common-name:
        required: true
        type: string
    secrets:
      VAULT_ADDR:
        required: true
      VAULT_ROLE_ID:
        required: true
      VAULT_SECRET_ID:
        required: true

jobs:
  issue-certificate:
    runs-on: ubuntu-latest
    steps:
      - name: 'Authenticate to Vault'
        uses: hashicorp/vault-action@v3
        with:
          url: ${{ secrets.VAULT_ADDR }}
          method: approle
          roleId: ${{ secrets.VAULT_ROLE_ID }}
          secretId: ${{ secrets.VAULT_SECRET_ID }}
          exportToken: true # makes VAULT_TOKEN available to later steps

      - name: 'Issue or Renew Certificate from Vault PKI'
        id: issue_cert
        env:
          VAULT_ADDR: ${{ secrets.VAULT_ADDR }}
        run: |
          # Assumes the Vault CLI is present on the runner (install it in a
          # prior step or bake it into your runner image).
          # Request the certificate from Vault's PKI engine; the output is
          # saved to a file that can be used in deployment.
          vault write -format=json pki_int/issue/${{ inputs.service-name }} \
            common_name=${{ inputs.common-name }} ttl="2160h" > certificate.json

          echo "Certificate issued for ${{ inputs.common-name }}"
          # In a real-world scenario, you would upload this as an artifact
          # or the deployment step would pull it directly from the vault.
```

A Feature Team's Pipeline (team-a/my-microservice/.github/workflows/deploy.yml):


The feature team's pipeline is now incredibly simple. They just use the central workflow.


```yaml
# This is the feature team's pipeline. It's simple and declarative.
name: 'Deploy Microservice'

on:
  push:
    branches:
      - main

jobs:
  # Step 1: Get the certificate by calling the reusable factory
  get-cert:
    uses: my-org/devops-templates/.github/workflows/get-certificate.yml@main
    with:
      service-name: 'my-microservice'
      common-name: 'api.teams.company.com'
    secrets:
      # These secrets are stored in the team's repo (or org) settings
      VAULT_ADDR: ${{ secrets.VAULT_ADDR }}
      VAULT_ROLE_ID: ${{ secrets.VAULT_ROLE_ID }}
      VAULT_SECRET_ID: ${{ secrets.VAULT_SECRET_ID }}

  # Step 2: Deploy the app (this job runs after the cert is ready)
  deploy:
    runs-on: ubuntu-latest
    needs: get-cert
    steps:
      - uses: actions/checkout@v4 # needed so k8s/deployment.yml is present

      - name: 'Deploy to Kubernetes'
        run: |
          echo "Starting deployment..."
          # The deployment script would be configured to pull the cert
          # from Vault or use the artifact from the 'get-cert' job.
          kubectl apply -f k8s/deployment.yml
```

## Model 2: Jenkins Shared Library


In Jenkins, the equivalent concept is a "Shared Library." You create a separate Git repository containing custom Groovy scripts that define reusable pipeline steps.


The Central Shared Library (jenkins-shared-lib/vars/manageCertificate.groovy):


Your DevOps team creates a repository for the shared library. This file defines a custom pipeline step called manageCertificate.


```groovy
// This script defines a custom, reusable Jenkins step: `manageCertificate`
def call(Map config) {
    // Expected keys: config.serviceName, config.commonName

    // Use the Jenkins Credentials plugin to securely handle the Vault AppRole
    withCredentials([
        string(credentialsId: 'vault-role-id', variable: 'VAULT_ROLE_ID'),
        string(credentialsId: 'vault-secret-id', variable: 'VAULT_SECRET_ID')
    ]) {
        // Authenticate and issue the certificate using the vault CLI.
        // Credential variables are written as \$VAR so the shell expands them,
        // which avoids leaking secrets through Groovy string interpolation.
        sh """
        export VAULT_ADDR=https://vault.company.com

        # Login to Vault using AppRole and capture the client token
        export VAULT_TOKEN=\$(vault write -field=token auth/approle/login \\
            role_id=\$VAULT_ROLE_ID secret_id=\$VAULT_SECRET_ID)

        # Request certificate from the PKI backend
        echo "Requesting certificate for ${config.commonName}..."
        vault write pki_int/issue/${config.serviceName} common_name=${config.commonName}
        """
    }
}
```

A Feature Team's Jenkinsfile:


The team's Jenkinsfile becomes clean and readable. They just import the library and call the custom step.


```groovy
// This is the team's Jenkinsfile. It just calls the custom step.
// 'my-enterprise-shared-lib' must be registered as a Global Shared Library
// in Jenkins' system configuration.
@Library('my-enterprise-shared-lib') _

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building the application JAR...'
                sh 'mvn clean package'
            }
        }

        stage('Get Certificate') {
            steps {
                // Here is the simple, reusable call to the factory!
                manageCertificate(
                    serviceName: 'my-java-app',
                    commonName: 'java-app.teams.company.com'
                )
            }
        }

        stage('Deploy') {
            steps {
                echo 'Deploying application...'
                // Deployment script knows how to find and use the new cert
            }
        }
    }
}
```

## Key to Success: The "Contract" 🤝


For this factory model to work, you establish a contract between the DevOps team and the feature teams. This contract defines:


How a service declares its needs: For instance, the common-name for the certificate is passed as a parameter.


Authentication: Each team's pipeline is given its own Vault AppRole credentials, stored securely in GitHub Secrets or Jenkins Credentials.


How the certificate is consumed: The deployment tool (like Helm or a Kubernetes operator) should be configured to securely fetch the newly issued certificate details directly from the vault, as sketched below.
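For that last point, one simple consumption pattern is to materialize the issued certificate as a standard Kubernetes TLS secret that a Helm chart or Ingress can reference. A sketch, with illustrative names and paths:

```bash
# Pull freshly issued TLS material from Vault's PKI engine and split it out.
vault write -format=json pki_int/issue/my-microservice \
    common_name="api.teams.company.com" > cert.json
jq -r '.data.certificate' cert.json > tls.crt
jq -r '.data.private_key' cert.json > tls.key

# Create or update a standard Kubernetes TLS secret from those files.
kubectl create secret tls my-microservice-tls \
    --cert=tls.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
```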

 
 
 
