May 7, 2026 · 6 min read

Why I Never `gcloud run deploy` From My Laptop

If anyone can `gcloud run deploy` from a laptop, you do not have a deployment pipeline — you have a habit. Here is what I do instead, and why.


A friend asked me why I was being weird about a one-line deploy command. Fair question. The line was harmless on its face — `gcloud run deploy api --source . --region us-central1`. What it does to your audit trail, your provenance story, and your Monday-morning rollback is the part nobody mentions in the docs.

Every deploy from a laptop is a deploy that didn't happen, as far as your repo is concerned.

1. There is no record of what you shipped

When you gcloud run deploy from your laptop, the cloud sees a new container image with no link to a commit, no link to a pull request, no link to a passing test run. The new revision shows up in the console and the only metadata is "this image was pushed by you@gmail.com at 14:32 on Tuesday."

In a normal CI flow, every deploy is the artifact of a green build of a specific commit on a specific branch. You can git checkout the exact tree that's running in production. You can read the PR description to find out why it was shipped. You can find the test run that proved it builds. From your terminal, you have none of that.

The day you need to roll back, you will discover that "the version that was working" is a tag in your shell history and not on the remote.
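For contrast, here is a sketch of what the same deploy can look like when CI runs it. The registry path, service name, and label key are illustrative; the point is that the image tag and a label both carry the commit SHA, so the revision in the console maps straight back to a commit, and rollback becomes "redeploy that tag":

```shell
# Run by CI, not a human. GITHUB_SHA is the commit that passed the build;
# tagging the image with it ties the running revision to an exact tree.
gcloud run deploy api \
  --image "us-central1-docker.pkg.dev/${PROJECT_ID}/app/api:${GITHUB_SHA}" \
  --region us-central1 \
  --labels "commit-sha=${GITHUB_SHA}"
```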

2. The credentials on your laptop are a standing liability

When you gcloud auth login, you get a refresh token that's good for, effectively, forever. It sits in ~/.config/gcloud/. Anyone with read access to your home directory has standing access to your cloud project.

If you've ever lost a laptop, given a contractor sudo on your machine, run npx against an untrusted dependency, or used a coffee-shop wifi without thinking about it — you have already had a "trust this credential" moment. There's no way to know after the fact whether it was abused, because the credential is long-lived and unaudited.

The same is true of service-account JSON keys you may have downloaded to "make CI work." Once a JSON key exists on disk, you no longer control where it goes. The leaked SA key in the average contractor-built repo (see post #1) is a long-lived JSON key that someone downloaded once and put under config/ and forgot about.
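If you want to check whether a key is already sitting on your disk, a one-line scan is enough. This is a small sketch; `find_sa_keys` is a hypothetical helper name, and the pattern assumes the standard downloaded-key format:

```shell
# find_sa_keys DIR — list files under DIR that look like service-account
# JSON keys. Every downloaded key contains the literal field
# "type": "service_account", which makes it easy to grep for.
find_sa_keys() {
  grep -rl '"type": "service_account"' "$1" 2>/dev/null
}

# Typical usage: scan your home directory, then delete (and rotate!) any hits.
# find_sa_keys "$HOME"
```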

3. Your IAM is wrong because it has to be

Laptop deploys force you to give your personal account the permissions a deploy needs. That's roles/run.admin plus roles/iam.serviceAccountUser plus roles/artifactregistry.writer plus whatever else gcloud run deploy --source . happens to need that week. So your personal account, every day, all day, has the ability to overwrite production.

In a CI-driven flow, those permissions live on a deploy service account that is only assumable by CI, only from a specific repository, only on a specific branch, with a token that expires in five minutes. Your personal account can go back to having roles/viewer and your own user can be locked behind 2FA.

There is no version of the laptop story where the principle of least privilege is satisfied. The principal is you, and you need write access, every time you deploy.

4. There is no policy boundary

Two engineers on the same team will deploy two different things from two different laptops. One has --allow-unauthenticated in their shell history; the other doesn't. One pushes the staging image to prod by mistake; the other doesn't. The cloud cannot tell them apart, because both are coming through the same user identity with the same broad permissions.

The fix is not "be more careful." The fix is to make the policy a property of the system, not of the operator. CI is the system. The deploy SA's permissions are the policy. Once you put a deploy behind a main-branch PR with required reviewers, the policy now says "this code was reviewed before it shipped." You can audit it. You can prove it.
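One way to encode that policy in GitHub Actions, assuming you use environment protection rules (the environment name here is illustrative, and required reviewers are configured in the repo settings rather than in YAML):

```yaml
# Deploys run only on pushes to main, and only after the "production"
# environment's required reviewers have approved the run.
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # protection rules live in repo settings
    steps:
      - run: echo "deploy steps go here"
```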

What to do instead: short-lived tokens via OIDC

Modern CI systems can authenticate to cloud providers without a long-lived secret. The mechanism is OIDC token exchange: your CI presents an identity token signed by GitHub (or GitLab, or whoever), the cloud provider verifies that token against a federation it trusts, and issues a short-lived (usually 1-hour) access token in exchange. No JSON keys anywhere.

On GCP, the building blocks are Workload Identity Federation. The setup is roughly:

  1. Create a Workload Identity Pool — call it github.
  2. Add an OIDC provider to that pool, with the issuer URL https://token.actions.githubusercontent.com.
  3. Set an attribute_condition that restricts which GitHub repositories can use this provider — typically assertion.repository_owner == "<your-github-org>".
  4. Create a deploy service account — say, ci-deploy@<project>.iam.gserviceaccount.com. Grant it only the roles a deploy needs: roles/run.admin, roles/iam.serviceAccountUser, roles/artifactregistry.writer.
  5. Grant the federated identity permission to impersonate the deploy SA: roles/iam.workloadIdentityUser bound to principalSet://iam.googleapis.com/projects/.../locations/global/workloadIdentityPools/github/attribute.repository/<your-org>/<your-repo>.

That last step is the one that matters: it scopes the trust to a specific repository, not a specific user, not a specific machine. Any GitHub Action running in that repository can mint a short-lived GCP token. No Action running anywhere else can.
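As a sketch, those five steps translate to gcloud roughly as follows. `PROJECT_ID`, `PROJECT_NUMBER`, `your-github-org`, and `your-org/your-repo` are placeholders you would substitute, and flag details may shift between gcloud releases:

```shell
# 1. The pool.
gcloud iam workload-identity-pools create github \
  --location=global --display-name="GitHub Actions"

# 2 + 3. The OIDC provider, with the attribute condition that restricts
#        callers to your GitHub org.
gcloud iam workload-identity-pools providers create-oidc github-provider \
  --location=global --workload-identity-pool=github \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository,attribute.repository_owner=assertion.repository_owner" \
  --attribute-condition='assertion.repository_owner == "your-github-org"'

# 4. The deploy SA, with only the roles a deploy needs.
gcloud iam service-accounts create ci-deploy
for role in roles/run.admin roles/iam.serviceAccountUser roles/artifactregistry.writer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:ci-deploy@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="$role"
done

# 5. Let the federated identity, scoped to one repository, impersonate it.
gcloud iam service-accounts add-iam-policy-binding \
  "ci-deploy@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/github/attribute.repository/your-org/your-repo"
```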

In your GitHub Actions workflow, you do:

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account:            ${{ secrets.WIF_SERVICE_ACCOUNT }}
      - run: gcloud run deploy ...

The `secrets.WIF_PROVIDER` and `secrets.WIF_SERVICE_ACCOUNT` values are not secrets in the security sense — they're just resource names you don't feel like committing to YAML. There is no JSON key. There is no long-lived credential.

The gitleaks step you should add the same day

The other half of "no long-lived credentials anywhere" is a CI check that prevents a new one from sneaking back in. Add a gitleaks step as a required check in your PR pipeline. If anyone — a new contractor, an old habit, a copy-paste from Stack Overflow — tries to commit a key, the PR fails before merge.

This is the step that prevents next year's blog post from being written about your codebase.
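A minimal version of that required check, using the official gitleaks action (a sketch; pin versions and tune the config to taste):

```yaml
# .github/workflows/gitleaks.yml: fails the PR if a credential-shaped
# string appears anywhere in the repository's history.
name: gitleaks
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # gitleaks scans history, not just the tip
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```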

What you're allowed to do from your laptop

I'm not opposed to gcloud on a developer machine. The line I draw is:

  • Read operations: sure. gcloud run services describe, gcloud logs read, gcloud sql instances describe. Reading is harmless.
  • Operations against non-prod environments: sure. Pushing to a staging service is fine; the worst case is a broken staging environment, and the next staging CI run redeploys over it.
  • Operations against prod: no. Not the deploy, not a manual rollback, not an env-var update, not a secret rotation. All of those go through a script in a repo that runs in CI.

The principle: every action against prod should be reproducible from the repo. If you typed it into a terminal, it didn't happen.
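Concretely, even a rollback would live in the repo as a script that CI runs. A sketch, with the service and region names illustrative:

```shell
#!/usr/bin/env sh
# scripts/rollback.sh REVISION
# Route 100% of traffic back to a known-good Cloud Run revision.
# Runs in CI, never from a terminal, so the rollback itself leaves
# a commit and a workflow run behind.
set -eu
REVISION="$1"
gcloud run services update-traffic api \
  --region us-central1 \
  --to-revisions "${REVISION}=100"
```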

Next in the series: `sync({alter: false})` is not a no-op — the Sequelize trap that took a cutover plan from "the ORM handles migrations" to "the ORM has been silently ALTERing our schema for years."


Run the same audit on your own stack: a 30-question self-audit, P0/P1/P2 severity, takes about an hour. Open the checklist →
Open the checklist →