Kayak's Most Interesting Mobile A/B Test (apptimize.com)
39 points by nancyhua on March 11, 2014 | 10 comments



Interesting. I wonder if the "SSL/TLS" part matters, or if you could get the same result with "XRK/QNQ" or other random strings.


Makes sense; the interests of your target market are different. Some apps will have tech-savvy users who are motivated by that message, while other apps' users simply won't care, or will be confused by the "SSL/TLS" reminder and discouraged from making the purchase. Anyway, that's the beauty of the A/B test: you test things and see what works better in your specific case, rather than generalizing about behaviours.


It also hinges heavily on how you're tracking these A/B results. There's less certainty and value in simply saying "Group A ended up booking more" than in metrics showing that a statistically significant higher percentage of users completed the purchase on that particular page with the single change in place.

Now that we know it worked for Kayak and not for another company, we can delve into why, to further inform everyone's future decisions. IMO a padlock icon with plainer language would do better, but only another A/B test by Kayak or a similar company would validate this.
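
To make that concrete, here's a toy sketch (Python, with made-up event names and numbers, not Kayak's actual data) of measuring conversion at the specific page under test, rather than just counting total bookings per group:

    from collections import defaultdict

    # Each tracked event: (user_id, variant, event_name). These names are
    # invented for illustration; real tools log their own event schema.
    events = [
        ("u1", "A", "viewed_payment_page"),
        ("u1", "A", "completed_booking"),
        ("u2", "A", "viewed_payment_page"),
        ("u3", "B", "viewed_payment_page"),
        ("u3", "B", "completed_booking"),
    ]

    viewed = defaultdict(set)     # variant -> users who reached the payment page
    converted = defaultdict(set)  # variant -> users who completed the booking

    for user, variant, event in events:
        if event == "viewed_payment_page":
            viewed[variant].add(user)
        elif event == "completed_booking":
            converted[variant].add(user)

    for variant in sorted(viewed):
        # Conversion is measured only among users who actually saw the tested page.
        rate = len(converted[variant] & viewed[variant]) / len(viewed[variant])
        print(f"Variant {variant}: {rate:.1%} of payment-page visitors booked")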


Can someone give a summary of what the A/B test was and why it was interesting?


The second paragraph of the article is paired with screenshots that document the test very clearly. Here's a summary:

1. Kayak added a small message saying "SSL/TLS encrypted payment" to the bottom of the form.

2. In Kayak's test, people tended to book less when the message was not included.

3. According to the interview, this contradicts what other people on the internet say.

4. Conclusion: "Just because something worked for someone else doesn’t mean it will work for all apps. You have to try it out yourself."


I think the crux of the question is why this was so interesting, and I would say it's just not. It wasn't surprising or interesting.


Sure, I suppose I'm qualified to chime in here; after all, I'm a developer at an A/B testing company (though not the one posted here, Apptimize).

A/B testing is a bit of a catch-all phrase for "testing the effectiveness of website changes." Strictly speaking, A/B refers only to testing one change "A" against another "B". You can have multivariate testing as well, where you add in a third group ("A, B, or C"), or mixtures ("ABC vs AB vs AC vs BC vs CONTROL")... it's up to the test design.

But basically it's a tool to help you see whether the changes you make to your website will work, while the website is up and running, using real users. So, for example:

* If your goal is to get more downloads, maybe you'll test some page layouts or images and see which test group clicks the download button more

* Or see which splash page results in the most "Sign up for Newsletter" clicks

This is great information for marketers, or really for testing anything you want, as long as it's trackable through browser events.
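
For instance, a rough sketch (Python; the experiment and variant names are made up, and this isn't how any particular vendor implements it) of the deterministic bucketing that splits users between A, B, or any multivariate mix:

    import hashlib

    def assign_variant(user_id: str, experiment_id: str, variants: list) -> str:
        """Hash user + experiment so a given user always sees the same variant."""
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % len(variants)
        return variants[bucket]

    # Classic A/B: one change vs. a control.
    print(assign_variant("user-42", "download-button-test", ["control", "B"]))

    # Multivariate: whatever mixture of changes the test design calls for.
    print(assign_variant("user-42", "landing-page-test",
                         ["control", "AB", "AC", "BC", "ABC"]))

You'd then log which group clicks the tracked button and compare rates between groups.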

Really though, this is just the tip of the iceberg of what is possible... but I'll leave it at that. If you really want to know more, let me know.


They tested an "SSL/TLS encrypted payment" message underneath "Agree & Book" on mobile. Another site removed that message and increased conversions, but the opposite was true for Kayak: the message helped.


I wonder if Vinayak works for Kayak because his name is so similar.

http://andrewgelman.com/2005/08/05/dennis_the_denv/


What's the point in running A/B tests if you don't report sample sizes, assumptions, the statistical test used, and the p-values/F-statistics/whatever you got? For all I know, these results are completely random and useless.

I mean, there are even Excel spreadsheets for this, so you can't say it's too hard: http://visualwebsiteoptimizer.com/split-testing-blog/ab-test...
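
The computation itself is trivial too; a toy two-proportion z-test in Python (the counts below are invented, not Kayak's numbers) shows the kind of thing that ought to be reported:

    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # Pooled two-proportion z-test, two-sided.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical counts: 480/10,000 bookings for A, 530/10,000 for B.
    p_a, p_b, z, p = two_proportion_z_test(480, 10_000, 530, 10_000)
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")

With made-up numbers like these the difference isn't significant at the usual 0.05 level, which is exactly why the sample sizes and p-values matter.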



