April 27, 2026

The "Works on My Machine" Tax

Why environment-specific bugs are expensive to investigate, easy to miss locally, and much faster to fix when you can replay the session.

The "Works on My Machine" Tax

Every developer has said it. The app works locally. It works in your browser, on your OS, on your network. The bug someone reported? Can't reproduce it. Not on Chrome, not on Firefox, not on mobile. Everything behaves exactly as expected.

So you close the ticket. Or worse, you reply: "works on my end — can you try clearing your cache?"

That's the tax. Not the bug itself — the time you spend not finding it.


The Reproduction Trap

Reproducing a bug is supposed to be step one. But environment-specific bugs resist reproduction by definition. The conditions that cause them don't exist on your machine.

Your user is on Safari 16 on an M1 Mac with a company VPN that silently rewrites headers. You're on Chrome 124 on Linux with a clean network path. You could spend an hour trying to recreate their setup, or you could spend that hour on something useful — if you had a way to just see what happened on their end.

The expensive part isn't the fix. Most environment bugs are trivial once you understand them. The expensive part is the investigation: the back-and-forth, the guessing, the "can you open dev tools and tell me what the console says" messages that go unanswered for two days because your tester isn't a developer and shouldn't have to be.


The Bugs That Don't Exist (Until They Do)

Some categories of environment-specific bugs are nearly invisible from the developer's side:

Mixed content failures. Your tunnel serves HTTPS but one of your services is still on HTTP. Your browser silently blocks the request. No error in the UI — the feature just doesn't work. You never see it because your local setup doesn't have the HTTPS/HTTP mismatch.

Cookie scope issues. Cookies set on one domain don't apply when services are split across different tunnel URLs or ports. Auth works perfectly on localhost, breaks completely for anyone accessing it remotely.

CORS rejections. The browser blocks a cross-origin request your server didn't whitelist for the tunnel domain. Locally, everything's same-origin. For your user, half the API calls fail silently.

Viewport-dependent breakage. A modal that works on your 1440px screen overflows and becomes unusable at 1280px. The close button is off-screen. The user is stuck.

Extension interference. An ad blocker removes a DOM element your JavaScript depends on. The app crashes. The user sees a white screen. You see a perfectly functional app.

None of these show up in your test suite. None of them show up on your machine. All of them are real, and all of them make your product feel broken to the person experiencing them.
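Take the CORS case: the fix is usually just telling the server about the new origin. A minimal sketch of an explicit allowlist, assuming a hypothetical tunnel URL (`https://myapp.example-tunnel.dev`) alongside your local dev origin:

```javascript
// Origins allowed to call this API. The tunnel URL below is a
// hypothetical example -- substitute whatever URL your tunnel assigns.
const ALLOWED_ORIGINS = new Set([
  "http://localhost:3000",
  "https://myapp.example-tunnel.dev",
]);

// Returns the CORS headers to attach to a response, or null when the
// origin is not allowed -- in which case the browser blocks the request
// and, from the user's side, the feature "just doesn't work".
function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return null;
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Credentials": "true",
    "Vary": "Origin",
  };
}
```

The point of an explicit allowlist over a wildcard is that `Access-Control-Allow-Origin: *` is incompatible with credentialed requests, so apps that rely on cookies fail through the tunnel even with CORS "enabled".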


Console Errors You Never See

The most frustrating class of "works on my machine" bugs is client-side JavaScript errors that only fire under specific conditions. A race condition that depends on network latency. A null reference that only happens when data loads in a different order. A polyfill missing in a browser you don't test.

These errors exist in the user's console, not yours. And unless someone copies the error message and sends it to you — which non-technical users will never do — you'll never know it happened.
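The general technique for capturing these is straightforward: hook the browser's error events and wrap the console so every error lands in a buffer you can ship with the session. A minimal sketch (this illustrates the idea, not DemoTape's actual implementation):

```javascript
// Buffer of captured log entries, ready to attach to a session recording.
const capturedLogs = [];

function recordEntry(level, message) {
  capturedLogs.push({ level, message, at: Date.now() });
}

// Uncaught exceptions and unhandled promise rejections never reach you
// unless something on the page records them.
if (typeof window !== "undefined") {
  window.addEventListener("error", (e) => recordEntry("error", e.message));
  window.addEventListener("unhandledrejection", (e) =>
    recordEntry("error", String(e.reason))
  );
}

// Wrap console.error so explicitly logged errors are captured too,
// while still forwarding to the real console.
const originalError = console.error;
console.error = (...args) => {
  recordEntry("error", args.map(String).join(" "));
  originalError.apply(console, args);
};
```

Timestamping each entry is what lets a replay tool line the error up against the clicks and network requests on the same timeline.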

DemoTape captures console output as part of the session replay. When you watch a recorded session, you see the error at the exact moment it fires, in the context of what the user was doing when it happened. No "can you open dev tools." No "what browser are you using." Just the error, the DOM state, and the failed network request, all on the same timeline.

The gap between "I can't reproduce it" and "oh, that's what happened" is usually one replay.


The Cost Nobody Tracks

Environment-specific bugs don't just cost debugging time. They cost trust.

When a client reports something broken and you say you can't reproduce it, the subtext — whether you mean it or not — is "I don't believe you." Do it twice and they stop reporting. They assume your product is buggy. They work around the issues silently. They tell their team it's unreliable.

You never get the chance to fix problems you never hear about. And you never hear about problems you can't validate.

Session replay flips this. A client says something's broken. You open the replay, see the exact failure, and respond with the fix — not a request for more information. You look responsive. They feel heard. The bug gets fixed in one round-trip instead of five.


The Multi-Service Multiplier

Environment bugs get worse when your app runs multiple services. Each service adds another surface for mismatches: different HTTPS behavior, different cookie domains, different CORS policies, different response times.

Locally, all your services share the same host. Everything is same-origin, latency is zero, cookies propagate freely. The moment you expose it through a tunnel — especially multiple tunnels on different URLs — you've changed the fundamental networking assumptions your app depends on.

This is why DemoTape routes all your local services behind a single URL. It's not just a convenience — it eliminates an entire category of environment bugs that only exist because tunneling tools fragment your app across multiple origins. Your user gets the same networking behavior you have locally: same origin, same cookies, same CORS context.
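The core of that single-URL approach is path-based routing: one public origin, with request paths mapped to the local services behind it. A rough sketch of the routing decision, with hypothetical service names and ports (not DemoTape's actual configuration):

```javascript
// One public origin, several local upstreams. Ordered from most to
// least specific, so the catch-all frontend route matches last.
// All ports and prefixes here are hypothetical examples.
const ROUTES = [
  { prefix: "/api", target: "http://localhost:4000" },
  { prefix: "/auth", target: "http://localhost:5000" },
  { prefix: "/", target: "http://localhost:3000" }, // frontend catch-all
];

// Pick the upstream URL for an incoming request path. Because every
// path resolves under the same public origin, cookies and CORS behave
// exactly as they do on localhost.
function upstreamFor(path) {
  const route = ROUTES.find((r) => path.startsWith(r.prefix));
  return route.target + path;
}
```

Because the browser only ever sees one origin, cookies set by the auth service are visible to the frontend, and no cross-origin request exists for CORS to block.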

Fewer environment mismatches means fewer bugs that "work on your machine" but break for everyone else.


Stop Guessing, Start Watching

The "works on my machine" conversation is a debugging dead end. You can't reproduce what you can't see, and you can't see what's happening on someone else's machine — unless you record it.

Capture the session. See the browser, the viewport, the console, the network requests, the clicks. See the bug in its natural habitat instead of trying to recreate it in yours.

npx @demotape.dev/cli — share your local app, capture every session, and close the gap between their experience and yours.