Kamanda Wycliffe - Data scientist


A/B Testing, Experimentation, and Simulation

Aim

Modern businesses rely on experimentation and simulation to make data-driven decisions. Here we showcase how A/B testing, experimental design, and Monte Carlo simulations are used to validate assumptions, measure outcomes, and forecast results. The goal is to improve business performance by testing what works and modeling what could happen under uncertainty.

Result

The results are delivered through interactive visualizations and simulation dashboards. These outputs include experiment matrices, outcome metrics (e.g., conversion lift, duration probabilities), and actionable insights. Management can use this evidence to confidently launch new features, adjust strategies, or allocate resources more efficiently.

Project Duration

Depending on the scope—such as the number of variants tested or complexity of simulations—project timelines typically range from 2 to 8 weeks. Timely access to clean and structured data is essential. Collaboration with data and product teams is often necessary during the experimental setup, data extraction, and metric definition phases.

A/B Testing

Our A/B testing methodology follows industry best practices to ensure statistically valid results. We determine adequate sample sizes in advance, run experiments for a sufficient duration, and analyze the results with suitable statistical tests.
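As an illustration of the sample-size step, the required number of users per variant can be estimated with a standard two-proportion power calculation. This is a minimal sketch; the baseline and target rates below are illustrative inputs, not figures from a specific engagement:

```python
import math

def sample_size_per_group(p1, p2):
    """Approximate n per variant for a two-sided two-proportion z-test
    at alpha = 0.05 with 80% power."""
    z_alpha, z_beta = 1.959964, 0.841621  # normal quantiles for 2.5% and 20% tails
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: detect a lift from a 12.4% baseline to 15.8%
print(sample_size_per_group(0.124, 0.158))  # about 1,641 users per variant
```

Running the experiment with fewer users than this risks an underpowered test that misses a real effect.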

Case Study: E-commerce Checkout Optimization

An online retailer wanted to reduce cart abandonment and increase conversion rates. We designed an experiment with two checkout page variants.

Conversion Rate Comparison

Variant A (Control): 12.4% conversion rate (baseline). Traditional multi-step checkout.
Variant B (Experimental): 15.8% conversion rate (↑ 27.4% relative lift). Simplified one-page checkout.

Statistical Significance Analysis (p-value = 0.0032)

Result: Variant B showed a statistically significant 27.4% improvement in conversion rate (p < 0.01). The new checkout design was rolled out to all users, resulting in an estimated $1.2M annual revenue increase.
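A significance test of this kind can be sketched as a pooled two-proportion z-test. The per-variant counts below are hypothetical (the actual sample sizes were not published here), so the computed p-value is illustrative rather than a reproduction of the reported 0.0032:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 248/2000 converted (12.4%) vs 316/2000 (15.8%)
z, p = two_proportion_z_test(248, 2000, 316, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative counts the difference clears the p < 0.01 threshold, mirroring the decision logic used in the case study.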

Monte Carlo Simulation

Our Monte Carlo simulations model uncertainty by running thousands of iterations with randomized inputs based on probability distributions. This approach provides a comprehensive view of possible outcomes and their probabilities.

Case Study: Project Timeline Risk Analysis

A client needed to assess the likelihood of completing a complex project within its deadlines. We modeled task durations with appropriate probability distributions, then developed a dashboard that lets the client test how changes to the plan would influence project completion.

Below is a sample of the dashboard's output:

Project Duration Simulation (100,000 iterations):
Probability of on-time completion: 10%
Most likely duration: 30 days
Worst-case duration (95th percentile): 40 days

Recommendation
Based on the simulation, we recommended adding a 5-day buffer to the project timeline, raising the probability of on-time completion to 92%. This balanced risk mitigation with resource constraints.
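A minimal sketch of this kind of simulation, assuming three sequential tasks with triangular duration distributions. All task parameters and the deadline here are hypothetical, so the printed figures will not match the case-study numbers above:

```python
import random

random.seed(42)

# Hypothetical sequential tasks: (optimistic, most likely, pessimistic) days
tasks = [(5, 8, 14), (7, 10, 18), (8, 12, 20)]
deadline = 35
iterations = 100_000

on_time = 0
durations = []
for _ in range(iterations):
    # Total duration is the sum of one random draw per task.
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    durations.append(total)
    if total <= deadline:
        on_time += 1

durations.sort()
print(f"P(on time)      : {on_time / iterations:.1%}")
print(f"Median duration : {durations[iterations // 2]:.1f} days")
print(f"95th percentile : {durations[int(iterations * 0.95)]:.1f} days")
```

A real engagement would replace the triangular draws with distributions fitted to historical task data and model task dependencies rather than a simple sequence.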

Experimentation Design

We design experiments that balance scientific rigor with practical constraints. Our approach includes proper randomization, control groups, pre-experiment power analysis, and methods to avoid common biases.
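Randomization in practice is often done deterministically by hashing a stable user identifier with an experiment-specific salt, so each user always sees the same variant. A sketch of this approach, assuming two equally weighted variants (the identifiers are placeholders):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically map a user to a variant via a salted hash."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user_123", "checkout_test"))
```

Salting by experiment name keeps assignments independent across concurrent experiments, which helps avoid correlated exposure bias.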

Multi-variant Testing Framework

For a SaaS client, we implemented a multi-variant testing framework to simultaneously test multiple features and their interactions.

Experiment Design Matrix:
Simultaneous tests: 8
Participants: 24,000
Statistical power: 93%

(Chart: Feature Impact on Key Metrics)

Outcome
Using the multi-variant approach, we identified two winning features that increased user engagement by 18% when implemented together, while avoiding three features that showed negative interaction effects.
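The 8 simultaneous test cells correspond to a 2^3 full-factorial layout over three binary feature toggles, which is what makes interaction effects estimable. A sketch of generating such a design matrix (the feature names are placeholders, not the client's actual features):

```python
from itertools import product

# Three hypothetical on/off feature toggles -> 2^3 = 8 test cells
features = ["feature_a", "feature_b", "feature_c"]
design_matrix = list(product([0, 1], repeat=len(features)))

for cell, levels in enumerate(design_matrix, start=1):
    settings = ", ".join(f"{f}={v}" for f, v in zip(features, levels))
    print(f"cell {cell}: {settings}")
```

With 24,000 participants split evenly, each cell receives 3,000 users, and every pairwise (and the three-way) interaction can be estimated from contrasts between cells.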


Want to design and conduct experiments? Please get in touch with me here.