First Steps...

How we got started

The idea for Engauge started when we noticed a pain point while implementing an A/B testing framework for a client. Not only did the existing tools feel clunky and restrictive, but they also added serious overhead to the site we were working on. For example, Optimizely packages a specific version of jQuery and delivers it to your site along with the rest of their JavaScript library by default. Our requirements didn't allow for frivolous additional scripts to be loaded, and we were honestly just turned off by a number of similar factors: a lack of developer control leading to QA nightmares, no transparency into the algorithms being used, astronomically high pricing, and general jankiness, to name a few.

So we did what many developers in our situation do: we built our own framework to run and track experiments on our client's website. Problem solved, right? Of course not, especially since now we had to figure out how to analyze the experiments, ensure we had statistically significant results before declaring a positive lift, and turn off the experiment when it finished. So, basically, everything important was still left to do.

Well, in our eyes it was both too early to admit defeat and too late to give up on creating something really useful for our needs. We decided to try building our dream A/B testing platform, one that did just one thing well and automated everything from baseline calculation to winner declaration. We wanted very low overhead, anonymous tracking (like many developers we know, we value our users' privacy even if they don't), and a dashboard to view the results.

Hitting the Books

Armed with a plan and that burning desire to just go build the damn thing, we set out to research existing implementations of A/B testing. Having just shy of a math degree (I dropped out in my senior year to go Change The World, and plan to finish this year) was a great help in understanding the complex and varied approaches people have been taking to run A/B tests in software products, but it took a lot of time and effort to find something worth basing our entire product around. Classic methods tend to require a lot of samples to get significant results, and they almost always require a baseline conversion rate before starting an experiment. While these methods are fine for companies at a large enough scale, we needed something that would enable everyone, even small teams with low traffic, to reap the benefits of an analytical approach to UI improvements and conversion optimization.
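To make "a lot of samples" concrete, here's the standard back-of-the-envelope power calculation for a classic fixed-horizon, two-proportion test. This is just a sketch to illustrate the math (it assumes SciPy is available), not anything from our codebase:

```python
from scipy.stats import norm

def fixed_horizon_sample_size(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # critical value for the desired power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2

# Detecting a lift from a 5% to a 6% conversion rate:
print(round(fixed_horizon_sample_size(0.05, 0.06)))  # roughly 8,000+ visitors per arm
```

Detecting a one-point lift off a 5% baseline already demands roughly 8,000 visitors per arm, and notice that the baseline conversion rate has to be plugged in before the test even starts. That's exactly the pair of problems that keeps low-traffic teams from running classic tests at all.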

One article that stood out from the crowd was Audun M. Ƙygard's Rapid A/B-testing with Sequential Analysis, which takes a fantastic approach to getting useful results with less traffic, an issue that comes up all the time when revenue is on the line. We started off with a simple one-sided comparison of Bernoulli-distributed events, but we plan to expand the set of algorithms we expose while continuing to research alternatives (we're Bayesians at heart). Audun gives some great explanations of the math and motivations behind his stats library in that post, and overall it looked solid as a foundation for our initial experiment engine.
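To give a flavor of how sequential analysis gets away with less traffic, here's a minimal Wald-style sequential probability ratio test (SPRT) for Bernoulli outcomes. This is a textbook illustration of the early-stopping idea, not Ƙygard's exact test and not the algorithm we actually ship:

```python
import math

def sprt_bernoulli(observations, p0, p1, alpha=0.05, beta=0.20):
    """Wald's SPRT for Bernoulli data: H0: p = p0 vs. H1: p = p1 (p1 > p0).

    Walks through conversions one at a time and stops as soon as the
    log-likelihood ratio crosses a decision boundary.
    """
    upper = math.log((1 - beta) / alpha)  # cross above: accept H1
    lower = math.log(beta / (1 - alpha))  # cross below: accept H0
    llr = 0.0
    for n, converted in enumerate(observations, start=1):
        if converted:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)
```

Because the boundaries are checked after every single observation, a clearly winning (or clearly hopeless) variant can be called long before a fixed-horizon test would have finished collecting its sample.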

For all the speed this class of algorithms brings, it has a few drawbacks. We mitigate one of the main drawbacks of sequential hypothesis testing, the over-estimation of the true effect when an experiment stops early, by running a Whitehead bias correction after the experiment has finished. We'll be taking deep dives into all of our processes and assumptions soon, so stay tuned!
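In rough terms, the correction works like this (a sketch of the general idea; our exact procedure will get its own deep-dive): instead of reporting the raw maximum-likelihood estimate $\hat{\theta}$ at the stopping point, Whitehead's bias-adjusted estimate $\tilde{\theta}$ solves

$$\hat{\theta} = \tilde{\theta} + B(\tilde{\theta}), \qquad B(\theta) = \mathbb{E}_\theta[\hat{\theta}] - \theta,$$

where $B(\theta)$ is the expected bias of the estimator under the stopping rule. Since early stopping tends to happen on lucky streaks, $B$ tends to be positive there, and solving the equation numerically pulls the reported effect back toward reality.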

Building v0

Our next step was to build an MVP. We figured a webapp for experiment monitoring and account management, some slick AWS infrastructure to handle and analyze the incoming experiment data (backed by Redis), and Firebase to store longer-term and user-specific data would get us off the ground quickly while providing all the power and speed we needed to make this thing dope.
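For a taste of what the ingestion path looks like, here's a hypothetical handler that bumps anonymous, aggregate-only counters in Redis. The function name and key layout are made up for this sketch (it assumes the redis-py client), not our actual schema:

```python
import redis  # assumes the redis-py client is installed

r = redis.Redis(host="localhost", port=6379)

def record_event(experiment_id, variant, converted):
    """Increment per-variant counters for one visitor; no user IDs stored."""
    key = f"experiment:{experiment_id}:{variant}"
    r.hincrby(key, "visitors", 1)
    if converted:
        r.hincrby(key, "conversions", 1)
```

Keeping nothing but per-variant counters is what lets the tracking stay anonymous while still feeding everything the analysis engine needs.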

With our infrastructure and MVP requirements fleshed out, we had some fun hacking away at what would become the Engauge platform, but didn't want to get so far ahead of ourselves that we spent 6 months building something no one else wanted.

So we slowed down the coding and decided to start finding people who shared our vision and get their feedback on how that vision should be realized. We wanted to hear from the people who would use our software and make them an integral part of our product design process right from the beginning. Hopefully this will help us make sure we're pointing in the right direction as we finish up and launch our MVP. To that end, we made this blog to start a dialog (so hit us up on Twitter).

Reaching Out

With this blog, we hope to achieve a few things:

  • Spread awareness about what we're trying to achieve with Engauge, and why
  • Reach out to our potential users, and find out what they want so we can build it for them
  • Document our launch and growth for future entrepreneurs (especially fellow bootstrappers!)
  • Be transparent. We want to show you how everything works, tell you our motivations, and keep you posted on our progress, because we think that's a big part of what builds trust.

So please, reach out to us on Twitter @engaugeab and tell us what you think. Tell us why you're excited, or why you think we suck. We promise to take it all in stride. A sincere thank you for embarking on this journey with us as we march toward our goal of building something that enables you to grow right along with us!