Engagement examples
These are common patterns we encounter in on-site search audits, not fabricated case studies: no client names, no invented performance numbers. Each one represents a real, recurring problem category.
High-intent queries return irrelevant results
Context
A mid-sized e-commerce team with strong traffic and a well-known brand. Users frequently search for specific product names or categories, but results show loosely related items or out-of-stock products first.
Problem
The ranking configuration treats all fields equally — product title, description, and metadata all have the same weight. Synonyms are not maintained. The result: intent-rich queries like brand names or specific product types get diluted by partial matches.
What the audit covers
- Audit top 50 revenue queries against actual result sets
- Map weighting rules and identify where ranking diverges from intent
- Review synonym coverage and query normalization
- Recommend reweighting strategy with testable hypotheses
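To make the reweighting idea concrete, here is a minimal sketch of field-weighted scoring. The field names, weights, and products are invented for illustration; a real engagement would map these to the client's actual search engine configuration.

```python
# Illustrative field weighting: matches in the title count more than
# matches in the description, so an exact product-title hit outranks
# a loose partial match buried in body copy.
# All field names and weights below are hypothetical, not a real config.

FIELD_WEIGHTS = {"title": 3.0, "category": 2.0, "description": 1.0}

def score(product: dict, query: str) -> float:
    """Sum weighted term overlaps across fields."""
    terms = query.lower().split()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = product.get(field, "").lower()
        hits = sum(1 for t in terms if t in text)
        total += weight * hits
    return total

products = [
    {"title": "acme running shoes", "category": "shoes", "description": "light"},
    {"title": "sock liner", "category": "accessories",
     "description": "pairs well with running shoes"},
]
ranked = sorted(products, key=lambda p: score(p, "running shoes"), reverse=True)
```

With equal weights, a description-only match can crowd out the product the user actually named; weighting the title restores intent-aligned ordering, and each weight change becomes a testable hypothesis.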
Outcome
The team gets a prioritized list of ranking fixes tied to specific queries, with before/after examples and a measurement plan to track search CVR improvement.
Metric to track: search conversion rate on top queries
Zero-result searches are common but nobody owns them
Context
A large catalog retailer where 8–12% of searches return zero results. The search team knows this is a problem but lacks a clear breakdown of why queries fail or who should fix what.
Problem
Zero-result queries fall into multiple categories: misspellings the search engine doesn't handle, long-tail queries that need synonym mapping, and queries for products that are genuinely out of stock. Without categorization, the aggregate number feels overwhelming and nobody takes ownership.
What the audit covers
- Categorize zero-result queries: spelling, synonyms, gaps, out-of-stock
- Identify which categories are solvable with configuration vs. catalog changes
- Audit zero-results page UX: fallback paths, suggestions, recovery experience
- Build a prioritized fix list by query volume and revenue potential
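The triage step above can be sketched as a small categorizer. The vocabulary, synonym map, and out-of-stock list are invented stand-ins; in practice these come from the catalog and search logs.

```python
# Hypothetical triage of zero-result queries into the four buckets
# named above. VOCAB, SYNONYMS, and OUT_OF_STOCK are invented data.
import difflib

VOCAB = {"sofa", "lamp", "rug"}
SYNONYMS = {"couch": "sofa"}   # query needs a synonym mapping
OUT_OF_STOCK = {"rug"}         # in the catalog, but unavailable

def categorize(query: str) -> str:
    q = query.lower().strip()
    if q in OUT_OF_STOCK:
        return "out-of-stock"
    if q in SYNONYMS:
        return "synonym-gap"
    # Fuzzy match against the catalog vocabulary catches misspellings.
    if difflib.get_close_matches(q, VOCAB, n=1, cutoff=0.8):
        return "misspelling"
    return "catalog-gap"

buckets: dict[str, list[str]] = {}
for q in ["soffa", "couch", "rug", "hot tub"]:
    buckets.setdefault(categorize(q), []).append(q)
```

Once each query carries a category label, volume and revenue can be rolled up per bucket, which is what turns "8-12% zero results" into an owned, prioritized fix list.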
Outcome
The team gets a clear taxonomy of failure modes with specific fix recommendations for each category, turning an abstract metric into actionable work items.
Metric to track: zero-result rate segmented by failure type
Search looks fine but nobody can prove it drives revenue
Context
A product team that recently invested in a new search platform. Adoption is up, but leadership asks "What's the ROI?" and the team can't answer confidently. Search analytics are limited to basic usage metrics.
Problem
The team tracks search volume and click-through rate but has no visibility into search-to-purchase conversion, assisted revenue, or which query improvements actually moved the needle. Without this, search improvements compete poorly for engineering time.
What the audit covers
- Audit current analytics instrumentation: events, funnels, attribution
- Identify gaps between what's tracked and what's needed to prove search ROI
- Design a measurement framework: search CVR, assisted revenue, no-click rate
- Recommend dashboard structure and KPI definitions for recurring reporting
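The two headline KPIs above can be sketched against a session-level event log. The event schema and numbers here are invented for illustration; the real framework maps onto whatever analytics events the team instruments.

```python
# Hypothetical event log; field names (session, event, revenue)
# are assumptions, not a real analytics schema.
events = [
    {"session": "a", "event": "search"},
    {"session": "a", "event": "purchase", "revenue": 80.0},
    {"session": "b", "event": "search"},
    {"session": "c", "event": "purchase", "revenue": 20.0},  # no search used
]

search_sessions = {e["session"] for e in events if e["event"] == "search"}
purchases = [e for e in events if e["event"] == "purchase"]

# Search CVR: share of search sessions that end in a purchase.
converted = {p["session"] for p in purchases} & search_sessions
search_cvr = len(converted) / len(search_sessions)

# Search-assisted revenue: revenue from sessions that used search,
# as a share of total revenue.
total_rev = sum(p["revenue"] for p in purchases)
assisted_rev = sum(p["revenue"] for p in purchases
                   if p["session"] in search_sessions)
assisted_share = assisted_rev / total_rev
```

Defining the KPIs this precisely, down to which session qualifies as "search-assisted", is what lets engineering instrument them once and report them on a recurring dashboard.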
Outcome
The team gets an analytics specification they can hand to engineering, plus a KPI framework that makes search impact visible to leadership.
Metric to track: search-assisted revenue as % of total revenue
Filters and sorting create friction instead of clarity
Context
A fashion or home goods retailer with a deep and varied catalog. Users rely heavily on filters and sorting to narrow results, but the experience feels clunky — too many options, unclear labels, inconsistent behavior across categories.
Problem
Filters were built from product data (attributes, categories) rather than shopping intent. Users see 15+ filter options but the most useful ones aren't prominent. Sort-by options don't match how people actually shop (e.g., no 'best match' or 'trending' option). Mobile filter UX requires too many taps.
What the audit covers
- Audit filter taxonomy against actual query and browsing patterns
- Review sort-by options and their ranking logic
- Assess mobile filter UX: taps to filter, visibility, back-navigation
- Recommend filter simplification and intent-based reorganization
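One concrete form the reorganization can take: rank filter options by how often users actually apply them, then surface the top few and collapse the rest. The filter names and counts below are invented.

```python
# Sketch: order filters by observed usage so the most useful
# refinements appear first. filter_events is invented sample data.
from collections import Counter

filter_events = ["size", "color", "size", "brand", "size", "color", "material"]

usage = Counter(filter_events)
# Descending usage order; rarely used filters can sit behind
# a "more filters" control to cut visible options and taps.
ordered = [name for name, _ in usage.most_common()]
top, collapsed = ordered[:2], ordered[2:]
```

Because the split between prominent and collapsed filters is just a threshold, each variant is cheap to A/B test against filter usage rate and post-filter conversion.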
Outcome
The team gets a filter restructuring plan that reduces cognitive load, surfaces the most useful refinements first, and improves mobile usability — all testable via A/B experiments.
Metric to track: filter usage rate and post-filter conversion
See yourself in one of these patterns?
Book a call and I'll tell you which audit scope makes sense for your situation. No pitch, no pressure.