For the latest series of Britain’s Got Talent, Live Talkback are providing the infrastructure behind the audience buzzer. If you haven’t come across Britain’s Got Talent, it’s a talent show where each of the judges has a buzzer, with which they can “buzz off” an act.
For the audience buzzer, everyone at home can buzz their own buzzer, and instantly see how many other buzzes the act has got. Since acts can be on stage for just 30 seconds or so, everything has to keep up with the TV show.
Oh, did I mention there are 10 million people or more watching, any of whom might decide to buzz along? Predicting the number of people who will choose to play along is notoriously hard, but we ended up with a target peak rate of 50,000 requests per second into the servers.
This is a scarily big number: 50K/s is 180 million requests/hr, or nearly 130 billion/month. For comparison, YouTube does 85 billion pages/month, and Twitter a mere 5.8 billion.
So how did a small startup with 4 guys in London build something that could scale to be bigger than Twitter? (OK, OK, I know page views != requests, the peak != sustained, and Twitter is more complicated than a buzzer. But still, the peak rate is *high*).
The underlying technology stack is pretty standard stuff – HAProxy, Django, mod_wsgi, Apache, MySQL, memcached, all running on Amazon EC2. But scaling this standard stack up to these sorts of levels exposed a few problems, and turned up some useful tools along the way.
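To make the shape of that stack concrete, here's a minimal sketch of what an HAProxy front end to a pool of Apache/mod_wsgi app servers might look like. This is illustrative only – the hostnames, ports, and timeouts are assumptions, not the actual production config:

```
# Hypothetical haproxy.cfg sketch (addresses and limits are
# illustrative, not the real Live Talkback configuration).
global
    maxconn 50000              # headroom for the target peak rate

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend buzzer_in
    bind *:80
    default_backend django_apps

backend django_apps
    balance roundrobin
    server app1 10.0.0.1:80 check   # Apache + mod_wsgi + Django
    server app2 10.0.0.2:80 check
```

The key idea is that HAProxy is the single cheap, fast entry point; the expensive Django work is spread across as many identical EC2-hosted app servers as the peak demands.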
The next few posts will detail some of the pitfalls, tools and techniques when scaling up to tens of thousands of requests per second.