This post contains affiliate links. As an Amazon Associate, I earn from qualifying purchases.
Deciding whether to launch a new product or feature is a resource-management bet for any Internet business. Conducting rigorous online A/B tests reduces that risk. Drawing on her experience at Airbnb, data scientist Lisa Qian offers a practical, step-by-step guide to designing and executing statistically sound A/B tests.
– Discover best practices for defining test goals and hypotheses
– Learn to identify controls, treatments, key metrics, and data collection needs
– Understand the role of appropriate logging in data collection
– Determine how to frame your tests (minimum detectable difference, visitor sample size, etc.)
– Master the importance of testing for systematic biases
– Run power tests to determine how much data to collect (see the power-analysis sketch at the end of this post)
– Learn how experimenting on logged-out users can introduce bias
– Understand when cannibalization is an issue and how to deal with it
– Review accepted A/B testing tools (Google Analytics, Vanity, Unbounce, among others)
Lisa Qian focuses on search and discovery at Airbnb. She has a PhD in Applied Physics from Stanford University.
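As an illustration of the power-test step listed above, here is a minimal sketch of a pre-test power calculation for a two-variant test on a conversion-style metric. The baseline rate, minimum detectable effect, significance level, power target, and the use of statsmodels are all illustrative assumptions, not details from Qian's guide.

```python
# Minimal pre-test power calculation for a two-variant A/B test on a
# conversion-rate metric (illustrative numbers, not from the source).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10   # assumed current conversion rate
target_rate = 0.11     # smallest improved rate worth detecting (assumed)

# Cohen's h effect size for comparing two proportions.
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Visitors needed per variant for a two-sided z-test,
# with a 5% false-positive rate and 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,                # equal-sized control and treatment groups
    alternative="two-sided",
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
```

Because the required sample size grows roughly with the inverse square of the minimum detectable effect, halving the effect you want to detect roughly quadruples the traffic you need, which is why the framing step (deciding what difference is worth detecting) directly drives how long a test must run.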