Launch HN: GrowthBook (YC W22) – Open-source feature flagging and A/B testing (growthbook.io)
148 points by combat_wombat on March 1, 2022 | hide | past | favorite | 43 comments
Hi HN, Graham and Jeremy here - we are building GrowthBook, an open-source platform for feature flagging and A/B testing. The repo is here: https://github.com/growthbook/growthbook and our home page is https://www.growthbook.io. We did a Show HN 6 months ago (https://news.ycombinator.com/item?id=28088882, which helped us get into YC) and have since added feature flagging.

Developers often launch a feature without understanding the impact it has on their users and business. This is a big deal, because only 1/3 of product launches actually improve the desired metrics. Of the rest, 1/3 have no effect, and the last 1/3 actually hurt [1]. The best way to measure this is to use feature flags and controlled experiments (A/B tests).

Jeremy and I worked together for 10 years at an ed-tech startup as CTO and software architect. We spent far too long just building and launching features without really knowing how they impacted our users and if they were adding value to the company. We had product analytics, but there was too much noise in the data to draw real conclusions. We knew the “right” way to do this was to build feature flags and run controlled experiments, but that was daunting for our small team.

We looked into 3rd party tools, but it bothered us that they didn't use our existing data warehouse and metric definitions, and we really didn't like the idea of adding an API call in the critical rendering path of our application. We also didn’t want to send our data to 3rd parties, didn’t feel good about vendor lock-in, plus the vendors were expensive. So, we did what any engineers would do—build it ourselves. After all, how hard could it be?

After a couple painful years, we hacked something together that (mostly) worked and used it to help grow revenue 10x. We started talking to other teams and realized just how many larger companies spend years building these feature flagging and experimentation platforms in-house because, like us, they couldn’t find any tools that met their needs. So we took everything we learned and built the tool we wish had existed back when we started.

GrowthBook is an open source platform for feature flagging and A/B experimentation. Our SDKs are built to be fast and cache-friendly. We take data privacy seriously and don’t use cookies or deal with PII. We sit on top of your company’s existing data warehouse and metrics so you can maintain a single source of truth. We’re open source (MIT), so you can either self-host the platform (with Docker containers), or use our managed cloud offering.

In GrowthBook, feature flags are added and controlled within the UI. Engineers or PMs can add targeting rules (e.g. "beta users get feature X, everyone else does not"), do gradual rollouts, and run A/B tests on the features. The current state of features is stored in a JSON file and exposed via an API or kept in sync with a cache/database using webhooks. Engineers install our SDK and pass in the JSON file. Then they can do feature checks throughout their code (e.g. `if feature.on { ... } else { ... }`).
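To make the flow concrete, here is a minimal sketch of local feature evaluation from a cached JSON definition, including a gradual rollout. The function names and JSON shape are invented for illustration; this is not the actual GrowthBook SDK API:

```python
import hashlib
import json

def hash_to_unit(seed: str, user_id: str) -> float:
    """Deterministically map a user to [0, 1] using a stable hash."""
    digest = hashlib.sha256(f"{seed}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def evaluate_feature(features: dict, key: str, user_id: str) -> bool:
    """Evaluate a feature flag locally from the cached JSON definition:
    no network request in the rendering path."""
    rule = features.get(key, {})
    if not rule.get("enabled", False):
        return False
    coverage = rule.get("rollout", 1.0)  # fraction of users receiving the feature
    return hash_to_unit(key, user_id) <= coverage

# Hypothetical cached payload fetched from the API or a webhook-synced store
features = json.loads('{"new-checkout": {"enabled": true, "rollout": 0.5}}')
on = evaluate_feature(features, "new-checkout", "user-123")
```

Because the hash is keyed on the user id, each user gets a stable answer across page loads, and ramping `rollout` from 0.5 to 1.0 only adds users rather than reshuffling them.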

For A/B test analysis, a data analyst or engineer connects GrowthBook to their data warehouse, then they write a few SQL queries that tell GrowthBook how to query their experiment assignment and metric conversion data. After that initial setup, GrowthBook is able to pull experiment results, run it through our Bayesian stats engine, and display results. Users can slice and dice the data by custom dimensions and/or export results to a Jupyter notebook for further analysis.
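GrowthBook's actual stats engine is more involved, but the core Bayesian idea can be sketched with a Beta-Binomial model and Monte Carlo sampling. Everything below is an illustrative toy, not their implementation:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20000, seed=42):
    """Estimate P(rate_B > rate_A) under independent Beta(1, 1) priors
    updated with observed conversion counts (Beta-Binomial model)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for each variant: Beta(1 + conversions, 1 + non-conversions)
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws

# 120/1000 vs 150/1000 conversions: B is very likely the better variant
p = prob_b_beats_a(120, 1000, 150, 1000)
```

The appeal of reporting "probability B beats A" instead of a p-value is that it answers the question a PM is actually asking, and it stays interpretable when you peek at results mid-experiment.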

We’re used by over 60 companies in production. We have self-hosted and cloud versions (see our pricing here: https://www.growthbook.io/pricing), and both are self-serve and simple to set up. We currently have SDKs for JavaScript/TypeScript, React, PHP, Ruby, Python, Go, and Android with more in the works (C#, Java, Swift, and Elixir). We support all the major SQL data warehouses as well as Mixpanel and Google Analytics.

You can give it a spin at https://github.com/growthbook/growthbook. Let us know what you think! We would especially love feedback from anyone who has built platforms like this in the past.

[1] https://exp-platform.com/Documents/ExPThinkWeek2009Public.pd...




Looks great. I think every hypergrowth startup I've worked at has gone through the use commercial tool => tool gets crazy expensive as you scale => "let's just roll our own, how hard can it be?" => "oh crap, this actually is more complex than we thought" cycle.

There are _so many_ types of services that were similarly disrupted by solid open-source implementations due to their price: business intelligence (Metabase), live chat widgets (Chatwoot), monitoring/observability (Grafana). This feature set is even more crucial than the ones above so I'm surprised a default open-source implementation hasn't emerged.

Excited to see how this progresses!


Yeah, that's our exact experience as well. I was super surprised to find out how much of a graveyard open source A/B testing was. Intuit (Wasabi), Facebook (PlanOut), and SeatGuru (Sixpack) open sourced their internal platforms years ago, but they are all abandoned now.


Sixpack was from SeatGeek, not SeatGuru :)

It was upgraded internally to Sevenpack but something like GrowthBook looks much nicer!


Ah, I forgot about Sixpack. Brings me back to my very first feature launches ever :')


Congrats on the launch guys!

BoldVoice has been using GrowthBook for about six weeks now, and it was super lucky that we found these guys right as we were considering way more expensive options like Optimizely (...6 figures). The tooling is pretty intuitive and Graham and Jeremy have been providing stellar support. There's still a ton to build here and I'm excited to see what these guys add to the platform.


GrowthBook is awesome! We have used the self-hosted version for a few months in my company now, and are very happy with it. Talented people behind it, and issues are resolved quickly.

It makes a lot of sense to have an experimentation tool that just works on top of your existing customer data. This is the way to go for data tools, in my opinion. It makes them so much easier to adopt. What we have done is just expose views on top of our regular click-stream data. With this approach, we can focus on having one data source of high quality, instead of scattering data around to multiple third-party tools. And we don't have to deal with lots of data processing agreements and so on. I hope more and more data tools move in this direction. Sending data to a third party is starting to become old school :)


There was just recently a discussion on how people use feature flagging on HN [1]. Awesome to see someone tackling that problem :). Good luck!

[1] https://news.ycombinator.com/item?id=30114856


Congrats on the launch. Your website really explained why I should check it out.

Is there a way to self-host without the neatly packaged Docker version? Reading what you are running implied it could work on my shared hosting playground, which doesn't support Docker, but everything else (probably with a good bit of tinkering on my side) could run smoothly.

Maybe I could - if I get it running - write a lab tutorial for hosting on uberspace (German hosting of youtube-dl fame).

As said: congrats and great site.


Thanks! If you can recreate the installation steps in the Dockerfile (https://github.com/growthbook/growthbook/blob/main/Dockerfil...), it should be possible to run it yourself on a shared host. Basically, install Node.js + Python + dependencies and run the build scripts.

A lab tutorial for uberspace sounds awesome! We're happy to help you get things set up if you run into any issues.


> We're happy to help you get things set up if you run into any issues.

I will probably take you up on that.

Will take a few days to wrap up some things but looking forward to trying it out.

Edit: BTW - your personal site rocks. The photoshopped artwork made me laugh. So lovely quirky.


Been using GrowthBook for a few weeks now at Ping Labs and man, this is exactly what we needed.

We're a small startup moving incredibly quickly. I was shocked at how much setup time it took to get a simple feature flag into our app with other services like Darkly. We had GrowthBook working in roughly 15 minutes with under 10 lines of code changed.

Feels a lot like the "a-ha!" moment I had with Plausible.io. This is all we will need for a long, long time.


This looks great, but if I can make a suggestion, a guide in your docs on how to use this with Matomo[1] would go far. GrowthBook provides feature flags and A/B testing, but you have to bring your own analytics. Matomo provides those analytics and is also open source and self-hosted, and would be an excellent match. There's been an open issue about integrating since September[2], and having a guide with SQL examples for using this with Matomo would take it to the next level.

1. https://matomo.org/ 2. https://github.com/growthbook/growthbook/issues/86


Thanks for the suggestion. We added MySQL support so we can work with Matomo now, but we're definitely lacking in documentation and tutorials. There's a similar issue with people using GA4, Snowplow, Segment, and anything else with a pre-defined schema. It should be way easier to use these systems with GrowthBook. Like there's no reason to even write SQL most of the time if we know the Matomo table structure in MySQL. We're hoping to solve that issue in the next few weeks.


Very cool. Snowplow CEO here - if you need any help on integrating with Snowplow just ask in our forums: https://discourse.snowplowanalytics.com/


Nice launch. Will GrowthBook one day support "Feature Rollouts" like Statsig? https://www.youtube.com/watch?v=aFmSVP5Jxmc

A lot of big companies need to gradually introduce a new feature to an audience while also seeing how that feature impacts their metrics. If they go with just feature flagging, they have to build their own logging and analysis. But if they go with experiments, then they're overly constrained by that analysis since experiments are meant to be statistically rigorous.

It's a tricky problem which I think statsig solved with "pulse metrics" and "feature gates".


Yeah, Statsig is building a really cool product. There's definitely a gap in GrowthBook right now, like you mentioned, between pure feature flags and experiments. We're working on some lightweight event tracking features to try and bridge that gap. So you'll be able to see how many people are using the features, which values they are getting, and easily integrate with tools like DataDog and NewRelic for performance monitoring.


What's particularly interesting is Statsig essentially separated the concept of "A/B test" from experiment. Compare https://statsig.com/features/feature-gates and https://statsig.com/features/experiments

I think this isn't just marketing; I think this reflects a deeper understanding that you can do A/B testing on top of a feature flag and not consider it an experiment. It's an innovation that reduces inter-organizational strife on the decision to launch a feature to all users, i.e., ship.

If everything with automated metric analysis is considered an experiment, then the shipping decision can only be made after analysis is performed. That leads to tension when Product Managers have already decided to ship and will only roll back if something hurts their metric. Data Scientists will then question why their "experiment" is only running for a day, and then question the decision to ship entirely. You can imagine how many meetings a misdefinition spawns.

The sooner you address these definitions, I think the happier you'll find your enterprise users.


How do you differentiate yourself from tools like Flagr? [https://github.com/checkr/flagr]


Tools like Flagr typically function as a microservice that handles all of the feature evaluation. That means you need a network request every time you want to get a feature's value. With GrowthBook, there's a single cacheable JSON definition of all of the features and then evaluation happens locally within the SDKs. We found this approach to be much better for performance and scalability.

The other missing part in most feature flagging tools is the analysis side. Most tools let you run an A/B test, but it's up to you to track the data, process it, and run it through a stats engine. We built that part directly into GrowthBook.


> Can I run multiple A/B tests at a time? Yes! In fact, we recommend running many experiments in parallel in your application

Isn't this a bad practice in general? Not only do you split your (potentially limited) sample count, but it is very unlikely that there is ZERO interaction between A/B tests. Even if you test something on one page and another thing on a different page, there might be some interaction between the two changes.


Most interactions between tests are really minor and indistinguishable from random noise. Those ones aren't worth worrying about.

There are two ways to deal with the bigger interaction effects. First is to predict which experiments will meaningfully interact ahead of time and split the samples between them (making the tests take longer). Second is to run tests in parallel on all users and look at the data to determine interaction effects after the fact (and potentially need to invalidate some results).

In our experience, a mix of these approaches works best. It's really hard to predict meaningful interaction effects ahead of time, so save that for the really obvious cases (e.g. black text on a black background). For everything else, the benefit of running more experiments usually outweighs the cost of occasionally needing to throw out results because of interaction effects. It's much better to run 10 tests and need to throw out 1 than it is to run 5 tests.
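One common way to let many experiments run in parallel on the same users is to bucket each user with a hash salted by the experiment key, so assignments across experiments are effectively uncorrelated. This is a generic technique sketched for illustration, not necessarily GrowthBook's internal implementation:

```python
import hashlib

def assign_variant(experiment_key: str, user_id: str,
                   variants=("control", "treatment")):
    """Deterministic assignment: salting the hash with the experiment key
    means the same user gets independent buckets in different experiments."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # uniform in [0, 1)
    return variants[int(bucket * len(variants))]

# The same user can land in different arms of different experiments.
v1 = assign_variant("exp-checkout", "user-42")
v2 = assign_variant("exp-pricing", "user-42")
```

Because assignment is a pure function of (experiment, user), there is no shared state to coordinate, and any pairwise interaction analysis after the fact reduces to grouping metrics by the cross-product of assignments.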


So it looks like the plan to make money is offering a SaaS version of the open source product. This is an interesting trend, as I've seen it a few times.

Back in the day if you made open source software SaaS wasn't usually an option, but what you did was offer to install it on premise and offer support for a truckload of money. Often enterprises liked this because they wanted someone to call/blame when something went wrong.

I wonder if these new companies could still add that to their offering. Free open source, SaaS if you don't want to host it, or we'll install it for you on your premises and offer support for piles of cash.

(This isn't meant for you guys specifically since I see that you basically offer this under "Enterprise")


This looks awesome. I have previously struggled with developing an internal tool for A/B testing, and that gets complex really fast. At Miggos we have implemented GrowthBook and got feature flags and A/B tests in our Vue frontend in a few hours. Excited to see the progress of the platform.


Congrats on the launch, this looks like exactly what we need at our stage, combo of feature flags and experiments that fit my mental model of how they should work. Excited to get it integrated and our first feature up and running!


Thanks! Let us know if you run into any issues integrating and setting up your first feature.


Can you please say how the open source business model works for you? In your case, cloud-based services and enterprise support? Is this model profitable as a company?

And why MIT license not AGPL?


We are planning on an open core model similar to PostHog. So we would eventually monetize from cloud hosting and license keys for enterprise features. We wanted to use MIT to make it as easy to adopt as possible for all companies.


Looks promising!

Personally, I am a happy user of PostHog (github.com/posthog/posthog), which also supports experimentation and feature flags alongside product analytics.


We're big fans of PostHog ourselves. Any tool that encourages people to use data to make product decisions is a huge win in our book.


Oh, I remember your Show HN. Congrats on the launch!


Thanks! It's been a crazy 6 months, feels like years ago


How is this different than Eppo [1] or Optimizely?

[1] https://www.geteppo.com


Re: Eppo - they're similar in setup but Growthbook is open source with an optional cloud SaaS offering. Eppo is currently cloud SaaS only AFAIK.

Re: Optimizely - Optimizely is a walled garden, you need to push all of your metrics as events to Optimizely from your apps for any experiment analyses. Eppo and GrowthBook both connect to your data warehouse and support any metrics you can design in query form using your existing warehouse.


Congrats, this is something that makes a TON of sense and will be sharing with my team!


Congrats on the HN launch GrowthBook team! Love what you guys are building!


Congrats on the launch GrowthBook team! love what you guys are building :)


Congrats on the HN launch Graham and team! :)


Congrats on the launch Graham and Jeremy!


Off topic. @dang, may I suggest making the submission text a bit darker, something like #444 to #555, which is between the current rgb(130,130,130) and the black that's used for titles and links? Here are the reasons:

- why is the submission text lighter than comments?

- rgb(130,130,130) on #F6F6EF is a bit hard to read

- rgb(130,130,130) is or is too similar to downvoted text color

- rgb(130,130,130) is the same color as visited links, so it makes links users visited not appear as links, like the link in "We did a Show HN 6 months ago (https://news.ycombinator.com/item?id=28088882, ..." to me


Email that stuff in, there's no '@dang' notification or anything.


done


Congratulations on the launch, guys. Over at Statsig we have tremendous respect for what you have been doing.


Hey Vijaye, will Statsig support diff-in-diff / SEO Experiments in the future? I saw that you got Multi-armed bandit out the door and I'm hoping to see more kinds of experiment pop up on your platform.



