A/B Test
An experiment comparing two variants of a page, email, or ad to determine which performs better against a defined metric.
An A/B test (also called a split test) is a controlled experiment where you show two different versions of something to two randomly assigned groups of users, then measure which version drives a better outcome. The "something" can be a landing page headline, an email subject line, a checkout flow, a pricing page layout, or virtually any element that influences user behavior.
Why it matters: without A/B testing, teams make decisions based on opinions, the HiPPO (Highest Paid Person's Opinion), or gut instinct. Testing replaces guesswork with evidence. A single well-run test on a high-traffic page can lift conversion rates by 10-30%, a gain that translates directly into revenue.
How to run one properly: first, define a single primary metric (conversion rate, click-through rate, revenue per visitor). Then calculate the sample size you need to detect your minimum meaningful effect at your chosen significance level and power, typically using a tool like Evan Miller's calculator or the stats engine built into platforms like Optimizely, VWO, or Google Optimize (now sunset, but the methodology persists in GA4 experiments). Split traffic 50/50 between control (A) and variant (B). Let the test run until you reach your predetermined sample size. Do not peek at results and stop early when they look good, because peeking inflates your false positive rate.
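The sample-size step above can be sketched in a few lines of Python using the standard two-proportion power formula (the same math behind calculators like Evan Miller's). The baseline rate, minimum detectable effect, significance level, and power below are illustrative assumptions, not prescriptions from this article.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_rel, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-sided two-proportion z-test.

    baseline: control conversion rate (e.g. 0.05 = 5%)
    mde_rel:  minimum detectable effect, relative (0.20 = detect a 20% lift)
    """
    p1 = baseline
    p2 = baseline * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline, detect a 20% relative lift (5% -> 6%)
print(sample_size_per_variant(0.05, 0.20))  # ~8,158 visitors per variant
```

Note how quickly the requirement grows as the effect you want to detect shrinks or the baseline drops; this is why low-traffic pages are poor candidates for testing.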
Common mistakes: testing too many variables at once (that is a multivariate test, not an A/B test), stopping tests prematurely, running tests on pages with insufficient traffic, and ignoring segmented results. A test might show no overall winner, but variant B could be winning among mobile users or new visitors. Also, many teams test trivial changes (button color) instead of meaningful ones (value proposition, pricing structure, page layout).
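The "stopping tests prematurely" mistake can be demonstrated with a small simulation. This sketch (with illustrative parameters, not figures from the article) runs many A/A tests where there is no true difference between variants, then compares the false positive rate when you only evaluate at the planned end versus when you peek at every interim checkpoint and stop on any "significant" result.

```python
import random
from math import sqrt
from statistics import NormalDist

def z_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided pooled two-proportion z-test at level alpha."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return z > NormalDist().inv_cdf(1 - alpha / 2)

def simulate(n_sims=500, rate=0.05, n_max=2000, check_every=200, seed=42):
    rng = random.Random(seed)
    peeking_fp = final_fp = 0
    for _ in range(n_sims):
        ca = cb = 0
        peeked_sig = False
        for i in range(1, n_max + 1):
            # Both arms convert at the same true rate: any "winner" is noise
            ca += rng.random() < rate
            cb += rng.random() < rate
            if i % check_every == 0 and z_significant(ca, i, cb, i):
                peeked_sig = True  # would have stopped early here
        peeking_fp += peeked_sig
        final_fp += z_significant(ca, n_max, cb, n_max)
    return peeking_fp / n_sims, final_fp / n_sims

peek, final = simulate()
print(f"false positives with peeking: {peek:.1%}, waiting: {final:.1%}")
```

With repeated interim checks, the chance that at least one look crosses the 5% threshold by luck is well above 5%, which is exactly why the sample size must be fixed in advance.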
A/B testing connects directly to conversion rate optimization, funnel analysis, and behavioral analytics. The insights you gain feed your broader experimentation culture.
Practical example: an e-commerce team tests two checkout page designs. Version A has a single-page checkout. Version B breaks it into three steps with a progress bar. After 20,000 visitors per variant, Version B shows a 12% higher completion rate with p < 0.05. The team ships Version B and moves on to testing payment method placement.
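The significance check in an example like this is typically a two-proportion z-test. The sketch below uses hypothetical completion counts (the article does not give raw numbers): 20,000 visitors per variant, with variant B completing at a 12% relative lift over A, and computes the p-value the team would compare against 0.05.

```python
from math import erf, sqrt

def two_proportion_p_value(success_a, n_a, success_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: A completes 1,000/20,000 (5.0%),
# B completes 1,120/20,000 (5.6%, a 12% relative lift)
p = two_proportion_p_value(1000, 20000, 1120, 20000)
print(f"p = {p:.4f}")  # well below 0.05, so the lift is statistically significant
```

Note that the same 12% relative lift on a smaller sample, or a lower baseline, could easily fail to reach significance, which is why the sample-size calculation comes first.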
Related terms
Conversion Rate: The percentage of users who complete a desired action (purchase, signup, download) out of total visitors or ad clicks.
Funnel Analysis: Measuring the conversion rate between sequential steps in a user flow, from entry to completion.
Behavioral Analytics: Analysis of user actions (clicks, page views, feature usage) to understand how people interact with a product or website.
Click-Through Rate (CTR): The percentage of people who click your search result after seeing it in the SERP.