Every WebAssembly tutorial follows the same script: compile a Mandelbrot set renderer, watch it run 10x faster than JavaScript, declare victory. Then you try to actually ship it in a Next.js app and spend three days fighting build tools.
I've shipped Rust/WASM to production twice now. Once it was absolutely worth it. Once I should have just written TypeScript. Here's how to tell the difference before you waste a week.
## The Honest Pitch
WebAssembly lets you run compiled languages like Rust in the browser at near-native speed. The real benefit isn't raw speed though—it's predictable performance for CPU-intensive work.
JavaScript is fast enough for 90% of what you do. DOM updates, API calls, rendering React components—all fine in JS. The 10% where Rust earns its complexity: heavy computation that blocks the main thread, work you can parallelize, or logic that needs to be deterministic across platforms.
If your bottleneck is waiting for network requests or shuffling JSON around, WASM won't help you. If you're processing 50,000 records client-side or doing real-time image manipulation, we should talk.
## The Toolchain Reality in 2026
The Rust-to-WASM pipeline has three parts: wasm-pack compiles your Rust code, wasm-bindgen generates JavaScript bindings, and your bundler (Vite, webpack, Next.js) needs to understand .wasm files.
Here's what actually works. First, your Cargo.toml:
```toml
[package]
name = "search-wasm"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2.92"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde-wasm-bindgen = "0.6"

[profile.release]
opt-level = "z"     # Optimize for size
lto = true          # Link-time optimization
codegen-units = 1   # Better optimization, slower compile
```

The `cdylib` crate type is critical: it tells Rust to output a dynamic library. Note `serde_json` in the dependencies; the search functions below use it to cross the JS boundary. The profile settings cut my WASM bundle from 380KB to 200KB gzipped.
Next.js config took me longer than I want to admit. The official docs are outdated. Here's what works in 2026:
```js
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config, { isServer }) => {
    // WASM support
    config.experiments = {
      ...config.experiments,
      asyncWebAssembly: true,
      layers: true,
    }

    // Don't try to run WASM server-side
    if (isServer) {
      config.externals = config.externals || []
      config.externals.push({
        'search-wasm': 'search-wasm',
      })
    }

    return config
  },
}

module.exports = nextConfig
```

The `asyncWebAssembly` flag enables async WASM loading. The `isServer` check prevents Next.js from trying to run your WASM during SSR, which will fail spectacularly. Trust me.
Build your WASM with:
```sh
wasm-pack build --target web --release
```

This generates a `pkg/` directory with your `.wasm` file and JavaScript bindings. Import it like any other module:
```js
import init, { fuzzy_search } from './search-wasm/pkg'

// Initialize once
await init()

// Now you can call your Rust functions
const results = fuzzy_search(query, items)
```

I spent two hours debugging why `init()` wasn't working before realizing I forgot to await it. The error messages are cryptic. Always await the initialization.
## The Benchmark That Matters
Forget Fibonacci calculators. I needed fuzzy search over 50,000 product records on the client. Users type in a search box, we filter and rank results instantly.
JavaScript baseline using Fuse.js (a solid fuzzy search library):
```js
import Fuse from 'fuse.js'

const fuse = new Fuse(items, {
  keys: ['name', 'description', 'tags'],
  threshold: 0.3,
})

const results = fuse.search(query) // ~120ms for 50k records
```

My Rust implementation using the fuzzy-matcher crate:
```rust
use wasm_bindgen::prelude::*;
use serde::{Deserialize, Serialize};
use fuzzy_matcher::FuzzyMatcher;
use fuzzy_matcher::skim::SkimMatcherV2;

#[derive(Deserialize, Serialize)]
struct Item {
    name: String,
    description: String,
}

#[wasm_bindgen]
pub fn fuzzy_search(query: &str, items_json: &str) -> String {
    let items: Vec<Item> = serde_json::from_str(items_json).unwrap();
    let matcher = SkimMatcherV2::default();

    // fuzzy_match returns Option<i64>: None for non-matches, Some(score) otherwise
    let mut results: Vec<_> = items
        .iter()
        .filter_map(|item| {
            matcher.fuzzy_match(&item.name, query)
                .map(|score| (item, score))
        })
        .collect();

    // Highest score first
    results.sort_by(|a, b| b.1.cmp(&a.1));
    serde_json::to_string(&results).unwrap()
}
```

Real-world performance on my laptop (M1 MacBook Pro):
| Implementation | Cold Start | Steady State | Memory |
|---|---|---|---|
| JavaScript (Fuse.js) | 0ms | 120ms | 45MB |
| Rust/WASM (naive) | 40ms | 18ms | 12MB |
| Rust/WASM (optimized) | 40ms | 8ms | 8MB |
The 40ms cold start is the WASM initialization cost. You pay it once when the page loads. After that, searches are 6-15x faster than JavaScript.
For a search box with autocomplete, this is the difference between laggy and instant. Users notice.
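You can also soften the cold start by kicking off initialization during browser idle time after page load, so the 40ms is usually paid before the user ever types. A minimal sketch, assuming an `init`-style loader like the one wasm-bindgen generates (with a `setTimeout` fallback for environments without `requestIdleCallback`):

```typescript
// Warm up the WASM module when the browser is idle, so the first
// search doesn't eat the initialization cost.
function warmUp(init: () => Promise<unknown>): Promise<unknown> {
  // requestIdleCallback where available; plain setTimeout otherwise
  const ric = (globalThis as any).requestIdleCallback as
    | undefined
    | ((fn: () => void) => void)
  const schedule = ric ?? ((fn: () => void) => setTimeout(fn, 0))

  return new Promise(resolve => {
    schedule(() => resolve(init()))
  })
}

// Usage sketch with a stubbed init standing in for wasm-bindgen's default export
let initialized = false
const ready = warmUp(async () => { initialized = true })
```

If the user searches before the warm-up finishes, you still just await the same promise; nothing races.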
## The JS Interop Tax
The numbers above are for the optimized version. My first attempt was much slower because I didn't understand the boundary cost.
Every time you pass data between JavaScript and WASM, you serialize it. Strings get copied. Objects get JSON-stringified. This is expensive.
Here's my naive first version:
```rust
use wasm_bindgen::prelude::*;
use fuzzy_matcher::FuzzyMatcher;
use fuzzy_matcher::skim::SkimMatcherV2;

#[wasm_bindgen]
pub fn search_item(query: &str, name: &str, _description: &str) -> i64 {
    let matcher = SkimMatcherV2::default();
    matcher.fuzzy_match(name, query).unwrap_or(0)
}
```

In JavaScript, I called this function 50,000 times per search:
```js
items.forEach(item => {
  const score = search_item(query, item.name, item.description)
  if (score > 0) results.push({ item, score })
})
```

Performance was terrible, slower than JavaScript. Each call crossed the WASM boundary, allocated strings, did the work, then threw everything away.
The fix: do all the work in Rust, cross the boundary once.
```rust
use wasm_bindgen::prelude::*;
use serde::{Deserialize, Serialize};
use fuzzy_matcher::FuzzyMatcher;
use fuzzy_matcher::skim::SkimMatcherV2;

#[derive(Deserialize)]
struct Item {
    name: String,
    description: String,
}

#[derive(Serialize)]
struct SearchResult {
    index: usize,
    score: i64, // fuzzy_match scores are i64
}

#[wasm_bindgen]
pub fn fuzzy_search(query: &str, items_json: &str) -> String {
    let items: Vec<Item> = serde_json::from_str(items_json).unwrap();
    let matcher = SkimMatcherV2::default();

    let results: Vec<SearchResult> = items
        .iter()
        .enumerate()
        .filter_map(|(idx, item)| {
            matcher.fuzzy_match(&item.name, query)
                .map(|score| SearchResult { index: idx, score })
        })
        .collect();

    serde_json::to_string(&results).unwrap()
}
```

Now I pass the entire dataset once, do all the work in Rust, and return just the indices and scores. The JavaScript side looks up the actual items by index.
This single change took me from 180ms per search to 18ms. The lesson: minimize boundary crossings. Batch your work in Rust.
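For completeness, the JavaScript side of the batched call is one crossing plus an index lookup. A sketch in plain TypeScript, with `fuzzy_search` stubbed out so the shape is visible (the real function is the WASM export; the stub's scoring is a placeholder):

```typescript
interface Item { name: string; description: string }
interface SearchResult { index: number; score: number }

// Stand-in for the WASM export: JSON string in, JSON string out.
// The real implementation is the Rust fuzzy_search shown earlier.
function fuzzy_search(query: string, itemsJson: string): string {
  const items: Item[] = JSON.parse(itemsJson)
  const results: SearchResult[] = items
    .map((item, index) => ({ index, score: item.name.includes(query) ? query.length : 0 }))
    .filter(r => r.score > 0)
  return JSON.stringify(results)
}

// One boundary crossing per search, then map indices back to items
function runSearch(query: string, items: Item[]): Item[] {
  const raw: SearchResult[] = JSON.parse(fuzzy_search(query, JSON.stringify(items)))
  return raw
    .sort((a, b) => b.score - a.score)
    .map(r => items[r.index])
}
```

The items never round-trip back out of Rust; only indices and scores do, which keeps the return payload tiny.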
## Bundle Size & Loading Strategy
My compiled WASM file is 200KB gzipped. That's not huge, but it's not free either. Don't load it upfront.
Use dynamic imports to load WASM only when needed:
```ts
async function initSearch() {
  const wasm = await import('./search-wasm/pkg')
  await wasm.default() // Initialize WASM
  return wasm
}

// Later, when the user opens search
const search = useCallback(async (query: string) => {
  if (!wasmRef.current) {
    setLoading(true)
    wasmRef.current = await initSearch()
    setLoading(false)
  }
  const results = wasmRef.current.fuzzy_search(query, itemsJson)
  return JSON.parse(results)
}, [itemsJson])
```

The first search triggers the WASM load and initialization (40ms). Subsequent searches are instant.
For production, use streaming compilation:
```ts
async function initSearchStreaming() {
  const wasmUrl = new URL('./search-wasm/pkg/search_wasm_bg.wasm', import.meta.url)
  const response = await fetch(wasmUrl)
  const { instance } = await WebAssembly.instantiateStreaming(response)
  // Manual initialization since we bypassed wasm-bindgen's loader.
  // wrapWasmExports is your own glue code; if the module imports
  // anything from JS, you also need to pass an import object above.
  return wrapWasmExports(instance.exports)
}
```

`instantiateStreaming` compiles the WASM as it downloads, shaving off a few more milliseconds. I only did this after profiling showed the init time mattered. Start with the simple version.
On mobile, 200KB gzipped is more significant. I added a feature flag to serve JavaScript-only search on slow connections:
```ts
const useWasm = useMemo(() => {
  if (typeof navigator === 'undefined') return false
  // Check for slow connection
  const connection = (navigator as any).connection
  if (connection?.effectiveType === '2g' || connection?.saveData) {
    return false
  }
  return true
}, [])
```

Most users get WASM. Slow connections get a perfectly fine JavaScript fallback. Nobody gets a broken experience.
## When It's Worth It
I've found three scenarios where Rust/WASM justifies the complexity.
Heavy client-side data work. If you're processing thousands of records in the browser—filtering, sorting, searching, transforming—WASM can make the difference between a sluggish UI and a snappy one. My search example is real. We have 50k products, and server-side search was too slow. WASM made instant client-side search possible.
Reusing existing Rust crates. The Rust ecosystem has excellent libraries for things like image processing (image crate), complex regex (regex crate), or cryptography. If you need that functionality in the browser and don't want to find (or trust) a JavaScript equivalent, compile the Rust crate to WASM. I did this for a batch image resizer. The image crate's quality was better than any JS library I found, and performance was 3x faster.
Deterministic cross-platform logic. If you have complex business logic that needs to run identically on mobile apps, desktop apps, and web, write it once in Rust and compile to WASM for web, native libraries for mobile. I haven't done this personally, but I've seen teams do it successfully for things like game logic or financial calculations.
Notice what's missing from this list: anything that's already fast in JavaScript. If your JS implementation is under 10ms, Rust won't meaningfully improve user experience.
## When to Just Write TypeScript
Be honest about whether you actually have a performance problem. Most of the time, you don't.
DOM manipulation. Rust can't touch the DOM. You'll call JavaScript anyway. Just write JavaScript.
Network-bound tasks. If you're waiting on an API, the bottleneck is the network. WASM can't make the internet faster.
Anything already fast. Validating a form field? Parsing a small JSON response? Filtering a list of 100 items? JavaScript is fast enough. The WASM overhead (loading, initialization, boundary crossing) will make it slower.
I wasted a week trying to optimize JSON parsing with Rust before realizing JavaScript's JSON.parse is heavily optimized and already sub-millisecond for typical payloads. The WASM version was slower because of serialization overhead.
The question isn't "Is Rust faster than JavaScript?" It's "Is the thing I'm doing slow enough that Rust's speed advantage overcomes the WASM overhead?" Usually, the answer is no.
If you can solve your performance problem by debouncing, virtualizing a long list, or moving work to a web worker, do that first. It's simpler and you stay in JavaScript land.
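Debouncing in particular fixes a "slow search" surprisingly often before any rewrite is on the table. A minimal sketch:

```typescript
// Debounce: only run the search after the user stops typing for `ms`.
// Rapid keystrokes collapse into a single call.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer)
    timer = setTimeout(() => fn(...args), ms)
  }
}

// Usage sketch: three quick keystrokes, one actual search
let calls = 0
const search = debounce((_query: string) => { calls++ }, 20)
search('r'); search('re'); search('red')
```

If a debounced JS search still blocks the frame, move it to a web worker next; only after that is WASM worth discussing.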
## The Verdict
Rust/WASM is a power tool. When you need it, nothing else works as well. When you don't, it's overkill that slows you down.
Would I do it again? For the search project, absolutely. For the JSON parsing thing, never. The difference was having a real performance problem that WASM actually solved.
Start with JavaScript. Profile with real data. If you find a genuine CPU-bound bottleneck that blocks the UI, then reach for Rust.
Here's the single most useful pattern from this post—the optimized boundary crossing:
```rust
use wasm_bindgen::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Input {
    items: Vec<String>,
    query: String,
}

#[derive(Serialize)]
struct Output {
    results: Vec<usize>,
}

#[wasm_bindgen]
pub fn process_batch(input_json: &str) -> String {
    let input: Input = serde_json::from_str(input_json).unwrap();

    // Do all your heavy work here in Rust
    let results: Vec<usize> = input.items
        .iter()
        .enumerate()
        .filter(|(_, item)| item.contains(&input.query))
        .map(|(idx, _)| idx)
        .collect();

    let output = Output { results };
    serde_json::to_string(&output).unwrap()
}
```

Pass everything in, do all the work in Rust, return everything out. Cross the boundary once. This pattern alone will save you hours of performance debugging.