Stop Logging `req.body`
If you log `req.body` on auth routes, your log aggregator has every customer's password in plaintext. Stop. Here is the redaction pattern I use everywhere.
I was tailing Cloud Logging while diagnosing a login complaint when I saw this:
```
2026-05-13 14:31:09 [info] login req: {
  email: 'redacted@example.com',
  password: 'CorrectHorseBatteryStaple1!'
}
```
A real user's real password, sitting in plaintext in Cloud Logging, with a 30-day retention bucket.
I have since found this exact pattern in every contractor-built codebase I have audited. It is the single most common security failure I find. It is also the one founders most resist treating as a real incident, because the system "works." Their users are logging in successfully. The logs are working as intended.
The logs are also a rolling, 30-day password breach waiting for somebody to notice.
How the code gets written
The pattern always starts the same way. The contractor is debugging a login endpoint. They want to know what the frontend is sending. They write:
```js
exports.login = async (req, res) => {
  console.log('login req:', req.body);
  // ...rest of the handler
};
```
The `console.log` line is for local debugging. It gets committed. The contractor moves on. On Cloud Run, every `console.log` flows directly into Cloud Logging. Every login attempt from that day forward writes a real password into a structured log entry, indexed, searchable, retained.

Then the pattern multiplies. The contractor builds a registration endpoint. Copy-paste: `console.log('register user:', req.body)`. Then a forgot-password endpoint: `console.log('forgotPassword:', req.body)`. Then an admin-create-user endpoint, an update-auth-user endpoint, a change-password endpoint. Every authentication-adjacent controller method ends up with a `console.log(..., req.body)` call.

The shared pattern is that the request body for every one of those endpoints contains a password, a reset token, a 2FA code, or a freshly generated invitation token. So every one of those lines is writing a credential into a permanent log.
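Concretely, the multiplied pattern looks something like this. The handler and field names are illustrative, not lifted from any one codebase:

```js
// Each copy-pasted line logs a body that carries a credential.
exports.register = async (req, res) => {
  console.log('register user:', req.body);  // { email, password, ... }
  // ...
};

exports.resetPassword = async (req, res) => {
  console.log('resetPassword:', req.body);  // { resetToken, newPassword }
  // ...
};

exports.verifyMfa = async (req, res) => {
  console.log('verifyMfa:', req.body);      // { code }
  // ...
};
```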
Why nobody notices
Three reasons.
The logs work. The application functions. Tests pass. No customer complains. There is no visible symptom. The only way to discover this is to read your own logs, and most teams only read logs during incidents.
Cloud Logging UIs don't highlight it. The console renders the JSON structure. The password field is a string like any other. Nothing about the display says "this is a credential." A casual look at the log entry shows a successful login, which looks like a good thing.
Retention is invisible by default. Cloud Logging's _Default bucket retains for 30 days. Many projects bump this to 90 or 365 for compliance. The longer the retention, the more credentials accumulate. Nobody intentionally chose "retain real passwords for a year" — they intentionally chose "retain logs for a year," and the credentials came along for the ride.
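Checking what your project actually retains is one gcloud call. `_Default` and `global` below are the out-of-the-box bucket name and location; adjust if your project uses custom buckets:

```sh
# Shows the bucket config, including retentionDays.
gcloud logging buckets describe _Default --location=global
```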
What's actually exposed
Anyone with `logging.privateLogEntries.list` or `logging.logEntries.list` on the project can read the logs. By default in a cloud project, that's:

- Every IAM principal with `roles/viewer` (read access to most resources).
- Every IAM principal with `roles/editor` (legacy default for service accounts).
- The default compute service account, if it has `roles/editor` (the legacy default).
- Everyone the contractor added as a Project Owner over the lifetime of the project, including the personal Gmail accounts that didn't get cleaned up after handover.
The blast radius is "everyone who has ever had any non-trivial role on the project." In the contractor-built stack I was looking at, that included four personal Gmail accounts belonging to people I'd never met.
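If you want the concrete list for your own project, this gcloud invocation produces it (substitute your own project ID):

```sh
# Lists every principal bound to every role on the project; anything holding
# roles/owner, roles/editor, or roles/viewer can read the logs.
gcloud projects get-iam-policy YOUR_PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)"
```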
The 30-day retention window is also the password-breach window. Every user who logged in during the last 30 days has their current password in the logs. If they reused that password elsewhere — and the average internet user does — every account on every site they reused it on is also exposed.
The fix, in one helper
The fix for "don't log passwords" is a small redaction helper, used everywhere `req.body` would otherwise appear in a log statement; each call site changes by one line. The helper looks like:
```js
const SENSITIVE_KEYS = new Set([
  'password',
  'currentPassword',
  'newPassword',
  'confirmPassword',
  'oldPassword',
  'token',
  'setupToken',
  'twoFactorSecret',
  'code',
  'otp',
  'mfaCode',
  'pin',
]);

function redactForLog(obj) {
  if (!obj || typeof obj !== 'object') return obj;
  // Preserve array shape instead of flattening arrays into plain objects.
  if (Array.isArray(obj)) return obj.map(redactForLog);
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (SENSITIVE_KEYS.has(key) || /password/i.test(key) || /token/i.test(key)) {
      out[key] = '[REDACTED]';
    } else if (value && typeof value === 'object') {
      out[key] = redactForLog(value);
    } else {
      out[key] = value;
    }
  }
  return out;
}
```
Then every `console.log(..., req.body)` becomes `console.log(..., redactForLog(req.body))`. The output:
```
2026-05-13 14:31:09 [info] login req: {
  email: 'redacted@example.com',
  password: '[REDACTED]'
}
```
Same diagnostic value, zero credential exposure. The diff to ship this across the controllers in a typical Express codebase is roughly 30 lines.
The case-insensitive regexes on `password` and `token` are intentional: they catch `oldPassword`, `setupToken`, `apiToken`, `csrfToken`, and the various typo'd variants the contractor will eventually add. The explicit `SENSITIVE_KEYS` set catches the cases where the field name doesn't include either word: `code`, `pin`, `twoFactorSecret`, `otp`.
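A quick sanity check of the helper's behavior on a nested body (hypothetical values, obviously):

```js
console.log(redactForLog({
  email: 'a@example.com',
  Password: 'hunter2',                          // caught by the /password/i regex
  profile: { apiToken: 'tok_123', name: 'Ada' } // nested objects recurse
}));
// -> { email: 'a@example.com',
//      Password: '[REDACTED]',
//      profile: { apiToken: '[REDACTED]', name: 'Ada' } }
```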
What to do about the historic log entries
Adding the helper to the code only protects future log entries. Historic entries — the ones already in Cloud Logging — are still there, and the people whose passwords are in them are still vulnerable.
Three actions, in order:

1. Force a password rotation for every user whose login attempt is in the affected log window. This is the loud one. You may have to do it as a forced-logout-and-reset email. Some founders will resist this because it's a customer-visible step. Do it anyway. The alternative is hoping nobody with log access ever decides to be a problem.

2. Shorten the retention window on the affected log bucket. Cloud Logging's `_Default` bucket retention can be reduced to the minimum (one day) in the bucket configuration; there's a command sketch after this list. The shorter window means future credential leaks (if any sneak past the helper) age out fast.

3. Audit who has log access and remove anyone who shouldn't. Project Owners and Editors with personal Gmail accounts, default compute SAs with `roles/editor`, anyone who was added during a contractor onboarding three years ago. The fewer principals with log read access, the smaller the blast radius of the next unintentional leak.
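The retention change from step 2 is a single gcloud call against the bucket; `_Default` and `global` are the defaults, and one day is the documented minimum:

```sh
# Reduce the default bucket's retention to the minimum.
# Note: entries older than the new window are purged and can't be recovered.
gcloud logging buckets update _Default --location=global --retention-days=1
```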
Where to go from per-call-site to systemic
The helper-per-call-site pattern is the right first fix because it ships today and protects every endpoint touched. The right second fix is a structured logger middleware that scrubs sensitive fields before anything reaches the transport layer.
The pattern looks like a Winston or Pino logger configured with a redaction list. The middleware wraps `req` and `res` and rewrites every log statement to pass through the scrubber. Per-call-site discipline goes away: the developer can `logger.info('login', { req })` and the middleware does the right thing.
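A minimal sketch of that middleware, assuming Pino and pino-http; the redact paths are illustrative and would mirror `SENSITIVE_KEYS`, not replace it:

```js
const express = require('express');
const pino = require('pino');
const pinoHttp = require('pino-http');

// Pino censors matching paths before anything reaches the transport.
const logger = pino({
  redact: {
    paths: ['req.body.password', 'req.body.token', 'req.body.*.password'],
    censor: '[REDACTED]',
  },
});

const app = express();
app.use(express.json());
app.use(pinoHttp({
  logger,
  serializers: {
    // pino-http's default request serializer omits the body; opt in here so
    // the redact paths above have something to scrub.
    req(req) {
      req.body = req.raw.body;
      return req;
    },
  },
}));
```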
The reason to start with the per-call-site helper instead of jumping straight to the middleware is that the middleware is a refactor. The helper is a sticker. You can ship the sticker tonight. The middleware lands next quarter.
What I tell teams
Three things:

1. Run `git grep -E "console\.log\(.*req\.body|console\.log\(.*req\.params"` on your codebase today. Every result is a candidate for the same problem. Most of them won't be exposing credentials, but the ones that are need a redaction helper before the next deploy.

2. Treat existing log entries as a breach. Force the password rotation. Shorten the retention. Audit the access list.

3. Decide whether the structured-logger middleware is on this quarter's roadmap or next quarter's. It is the systemic fix. The helper is the patch.
In the next post in the series, I'll get into the integration-specific version of the same lesson: why we repointed six Twilio webhooks via REST API instead of clicking through the console.
Next in the series: Repointing Twilio Webhooks Via REST API, Not the Console →
Run the audit on your own stack
A 30-question self-audit. P0/P1/P2 severity. Takes about an hour.
Open the checklist →