The performance budget every business website should have
Quick answer: a working performance budget for a business marketing site is roughly: LCP ≤ 2.0s on 4G mobile, total JavaScript ≤ 150 KB compressed, above-the-fold page weight ≤ 1 MB, and zero unexpected layout shifts. It is enforced via CI checks, real-user monitoring, and the discipline to say no to features that exceed the budget. Without enforcement, a budget is a wish list.
“The site feels slow” is the most expensive sentence in front-end engineering. It’s subjective, hard to act on, and tends to result in a quarter of vague optimisation work that doesn’t move the needle.
A performance budget converts that sentence into numbers a team can hit and hold. It’s not exotic engineering. It’s management discipline applied to a metric.
What a performance budget actually is
A set of explicit numerical limits on:
- Time-based metrics: LCP, INP, TTFB, total blocking time
- Asset-based metrics: total page weight, JavaScript size, image size, font count
- Counts and ratios: number of requests, third-party script count, CSS file count
Each metric has a threshold the site is committed to staying under. Crossing a threshold triggers either a fix or a deliberate decision to widen the budget.
The budget gets enforced through tooling: CI checks that fail builds when budgets are exceeded, monitoring that alerts when production drifts above thresholds, and reviews that question every new feature against its impact.
Google’s web.dev guide to performance budgets is the canonical introduction to setting and enforcing them — the patterns below build on that foundation with numbers calibrated for typical business sites.
A working budget for a business marketing site
The numbers below are reasonable starting points for a serious marketing site — tight enough to require deliberate decisions, loose enough to be achievable.
Time-based:
- LCP: ≤ 2.0s on 4G mobile (well under the 2.5s “good” threshold for Core Web Vitals)
- INP: ≤ 150ms (well under the 200ms “good” threshold)
- TTFB: ≤ 600ms
- Total Blocking Time (TBT): ≤ 200ms
Asset-based:
- Total compressed JavaScript: ≤ 150 KB
- Total compressed CSS: ≤ 50 KB
- Hero image (above-the-fold): ≤ 200 KB
- Total above-the-fold weight: ≤ 1 MB
- Web fonts: ≤ 2 families, ≤ 4 weights
Counts and ratios:
- Render-blocking resources: ≤ 2
- Third-party origins: ≤ 5
- Third-party scripts: ≤ 8
- HTTP requests for initial render: ≤ 40
These aren’t universal — ecommerce sites legitimately need more, and app-like sites play by different rules. But for a typical marketing site, hitting these numbers means it will feel fast under real-world conditions.
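As a concrete starting point, most of the budget above can be written down as a Lighthouse `budget.json` (resource sizes in KB, timings in ms). This is a sketch — the exact set of supported metrics and resource types varies by Lighthouse version, so check against the version your CI runs:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2000 },
      { "metric": "total-blocking-time", "budget": 200 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 8 }
    ]
  }
]
```

Checking this file into the repository also serves a second purpose: the budget becomes a reviewable artefact, and widening it shows up as a diff in a pull request.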
Why “just use a CDN” isn’t a budget
CDNs solve one problem (geographic latency) and don’t solve most of the others. A site can be on a great CDN and still:
- Ship 2 MB of JavaScript that takes 3 seconds to parse on a mid-range Android phone
- Load 15 third-party scripts that each contribute to INP
- Have unsized images causing CLS
- Serve fonts that delay LCP
The CDN is necessary infrastructure, not sufficient. The budget is what enforces the rest.
The most common budget overruns
In the audits we run on existing sites, the same budget overruns appear repeatedly:
1. Third-party scripts. Analytics, chat widgets, A/B testing tools, marketing tags, ad pixels, customer survey tools. Each one is added because it “just adds a small script.” The aggregate is often the largest single performance cost on the site.
What helps: a written policy that every third-party script needs explicit performance approval before going live, with a measured budget cost (KB transferred, ms blocking time). Most sites can cut their third-party load by 50% without losing meaningful capability.
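To put a number on the aggregate, transfer sizes from the browser's Resource Timing API can be totalled per origin. The helper below is a hypothetical sketch: in a real page you would pass it `performance.getEntriesByType('resource')` and `location.origin`; here it runs against mock entries so the logic is visible.

```javascript
// Sum transferred bytes per third-party origin from Resource Timing entries.
// `entries` is shaped like performance.getEntriesByType('resource');
// `firstPartyOrigin` is your own site's origin, e.g. location.origin.
function thirdPartyBytesByOrigin(entries, firstPartyOrigin) {
  const totals = {};
  for (const entry of entries) {
    const origin = new URL(entry.name).origin;
    if (origin === firstPartyOrigin) continue; // first-party assets don't count here
    totals[origin] = (totals[origin] || 0) + (entry.transferSize || 0);
  }
  return totals;
}

// Mock entries standing in for the live Resource Timing data:
const mockEntries = [
  { name: "https://example.com/app.js", transferSize: 40000 },
  { name: "https://cdn.analytics.test/tag.js", transferSize: 25000 },
  { name: "https://cdn.analytics.test/beacon", transferSize: 500 },
];
console.log(thirdPartyBytesByOrigin(mockEntries, "https://example.com"));
```

Run in the console on a production page, this makes the "it's just a small script" conversation concrete: each vendor gets a byte count next to its name.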
2. JavaScript framework bloat. React with Redux with React Query with five UI libraries. Each is justifiable individually; together they ship 800 KB of JS to a marketing site that displays 12 paragraphs of text and three images.
What helps: questioning whether a JavaScript framework is needed for the page at all (most marketing pages don’t need one), choosing leaner frameworks (Astro for content-led sites, Solid or Preact instead of React for app-like work), and rigorous code-splitting.
3. Unoptimised images. 4 MB hero images served at the original resolution to mobile devices. Modern formats not used. Responsive srcset not implemented. We see this on 70%+ of audits.
What helps: a build-time image pipeline that produces multiple resolutions, modern formats (AVIF / WebP), with explicit width and height to prevent CLS. Most modern frameworks (Astro, Next.js, Nuxt) do this by default if configured.
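The markup such a pipeline emits looks roughly like this (file names and dimensions are illustrative):

```html
<picture>
  <!-- Modern formats first; the browser picks the first type it supports -->
  <source type="image/avif" srcset="hero-800.avif 800w, hero-1600.avif 1600w">
  <source type="image/webp" srcset="hero-800.webp 800w, hero-1600.webp 1600w">
  <!-- JPEG fallback; explicit width/height reserve space and prevent CLS -->
  <img src="hero-1600.jpg"
       srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="(max-width: 800px) 100vw, 1600px"
       width="1600" height="900"
       alt="Product hero" fetchpriority="high">
</picture>
```

The `fetchpriority="high"` hint is worth adding on the LCP image specifically — it tells the browser not to queue the hero behind lower-priority resources.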
4. Web fonts. Three font families, eight weights each, loaded synchronously. Result: blocking, blank text during load, and a hefty download.
What helps: limit to one or two families, use variable fonts (one file, many weights), self-host (avoid the Google Fonts round-trip), use font-display: swap to prevent invisible text.
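A minimal sketch of that setup, assuming a self-hosted variable font file (the family name and path are hypothetical):

```css
/* One variable font file covers the whole 400–700 weight range */
@font-face {
  font-family: "Brand Sans";                      /* hypothetical family name */
  src: url("/fonts/brand-sans.woff2") format("woff2");
  font-weight: 400 700;                           /* variable weight range */
  font-display: swap;                             /* show fallback text immediately */
}
```

One `@font-face` rule, one file, no third-party round-trip — compared with eight weight-specific files, this is usually the single biggest font win available.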
5. Render-blocking CSS. A single 200 KB stylesheet covering every page’s styles, loaded synchronously in the head.
What helps: critical CSS inlined, non-critical CSS deferred, page-specific styles split out. Modern frameworks make this less manual than it used to be.
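The classic hand-rolled version of the pattern, assuming the styles have been split into a small critical portion and the rest, looks like:

```html
<head>
  <!-- Critical above-the-fold styles inlined: no extra request blocks render -->
  <style>/* contents of critical.css */</style>

  <!-- Full stylesheet fetched without blocking render, applied once loaded -->
  <link rel="preload" href="/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/main.css"></noscript>
</head>
```

Frameworks that extract critical CSS at build time generate an equivalent of this automatically, which is usually the better route — hand-maintaining the critical/non-critical split drifts quickly.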
Enforcing the budget
A budget without enforcement is decorative. The enforcement layers we deploy:
1. CI checks on PRs. Tools like Lighthouse CI, Bundlewatch, or framework-specific equivalents run on every pull request. PRs that exceed the budget fail until they are fixed or the budget is widened by an explicit decision.
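For the bundle-size half of the budget, a Bundlewatch configuration in `package.json` is about this small. Note that Bundlewatch checks each matched file against `maxSize` individually, so these are per-bundle limits rather than a site-wide total (paths are illustrative):

```json
{
  "bundlewatch": {
    "files": [
      { "path": "dist/**/*.js", "maxSize": "150kB" },
      { "path": "dist/**/*.css", "maxSize": "50kB" }
    ]
  }
}
```

Wired into CI, this turns a budget overrun from something discovered in a quarterly audit into a red check on the pull request that caused it.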
2. Real-user monitoring on production. A tool like SpeedCurve, Sentry Performance, or even Plausible Analytics’ performance reports tracks actual user experience against budgets. Alerts trigger when 75th percentile metrics drift above thresholds for 24+ hours.
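Stripped of vendor specifics, the core of that alerting rule is a percentile check. This is a hypothetical sketch — real RUM tools do the windowing and sampling for you — but it shows what "p75 above budget" actually computes:

```javascript
// p-th percentile of a list of metric samples (e.g. LCP values in ms),
// using the nearest-rank method on the sorted samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// True when the 75th percentile of the window's samples exceeds the budget.
function budgetExceeded(samples, budgetMs) {
  return percentile(samples, 75) > budgetMs;
}

// Example: LCP samples in ms against the 2000 ms budget
console.log(budgetExceeded([1400, 1700, 1900, 2600], 2000)); // false: p75 is 1900
console.log(budgetExceeded([1800, 2100, 2300, 2600], 2000)); // true: p75 is 2300
```

The 75th percentile matters because it is what Core Web Vitals assessments use: a site passes a metric only if 75% of page loads meet the "good" threshold, so averages and medians can look healthy while the assessment fails.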
3. Quarterly performance reviews. Once a quarter, review the budget against actual performance, decide if the budget is still right, identify drift in either direction.
4. Feature reviews. Before any meaningful new feature ships, consider its performance cost. Sometimes the right answer is “not now” or “not in this form.”
These are management practices, not engineering tricks. The hard part of performance is rarely the technical fix — it’s saying no often enough.
Budget vs feature trade-offs
Real teams face the inevitable tension: a marketing team wants a chat widget, an analytics team wants enhanced tracking, leadership wants a personalisation engine. Each is a real ask. None of them is free.
The honest pattern: every feature request comes with a performance impact estimate. The decision to ship is made with that cost visible. Sometimes the answer is “yes, accept the cost.” Sometimes it’s “yes, but ship a lighter alternative.” Sometimes it’s “no — the cost outweighs the value.”
What kills site performance isn’t the individual feature decisions; it’s the cumulative effect of never having those decisions explicitly. A budget forces the conversation.
What a fast site looks like
For comparison, the budget our premium web development builds aim for:
- LCP: ≤ 1.2s on 4G mobile
- INP: ≤ 100ms
- Total JS: ≤ 50 KB compressed (often zero on pure marketing pages)
- Total CSS: ≤ 30 KB
- All images in modern formats with proper sizing
- Two or fewer third-party origins
- Lighthouse Performance score ≥ 95 consistently
These numbers aren’t universal; they’re the right numbers for a particular kind of site (premium marketing site, content-led, light interactivity). They’re also unusual in a way visitors notice: sites in this performance class consistently outperform competitors on engagement metrics, quite apart from any SEO benefit.
Common questions
What is a performance budget? A set of explicit numerical limits on a website’s metrics — loading time, page weight, JavaScript size, request count, etc. Enforced via tooling and team practice. Converts vague “the site is slow” concerns into measurable goals.
What should be in a website performance budget? Time-based metrics (LCP, INP, TTFB, TBT), asset-based limits (total JS, CSS, images, fonts), and counts/ratios (number of requests, third-party scripts, render-blocking resources). The exact numbers depend on the type of site and the audience’s expected devices and connections.
How do I enforce a performance budget? Three layers: CI checks on PRs that fail when limits are exceeded, real-user monitoring on production that alerts on drift, and quarterly reviews that revisit the budget. Without enforcement, a budget is a wish list.
What’s a good page weight target? For a marketing site: under 1 MB above-the-fold, under 2 MB total, in compressed bytes. For a content site: similar. For an app: it varies, but lazy-loaded routes should each stay similarly tight.
What’s the most common cause of slow sites? JavaScript — particularly third-party scripts (analytics, chat, ads, A/B testing) and framework bloat. Image weight is second. Web font loading is third. The fixes are typically structural rather than incremental.
If your site doesn’t have a performance budget today, start a project and we’ll set one with you. The hardest part is usually deciding the numbers; the implementation tends to follow.
More reading
What AI actually costs to run in production
AI demos are cheap. Production is not. Where the money actually goes when you ship an AI feature, and how to size the engineering investment around the model.
Why integrations break in production (and what to design for)
Every integration that "just calls an API" eventually breaks. The five places they fail first, and the design patterns that keep them running unattended.
The hidden costs of SaaS once your business is established
The per-seat licence is the visible cost. Integration tax, lock-in, configuration drift, and the seat tax at scale are the SaaS costs no one quotes up front.