Standing in line is for chumps. Consider instead what you do if you’re waiting for the bus in Cuba. It’s called el último:
- When you arrive at the stop, you announce “¿Quién es el último?”: Who is the last person in line?
- Someone else answers “Soy yo”: I am.
- That person is no longer el último. You are now el último. Eres tú.
- When the bus arrives, don’t sweat the details: just wait for the guy in front of you to go. Then you go.
The beautiful thing about this approach is that it frees everyone up to break ranks and mill about the area. Instead of standing in line like cattle, invading errbody’s personal space, you can sit down, find a shady spot, read the paper. It’s an elegant, real-world implementation of a singly-linked list, a classic data structure for queues. It works great if your queues fit in memory, but things become more complicated in the (virtual) real world, where we store data on spinning metal platters and our queue clients get up to all kinds of concurrent vagaries: pushing, popping, changing their minds about popping, peeking, flushing.
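The bus-stop protocol above can be sketched as a singly-linked queue. The stop only has to remember two people: el último (the tail, so new arrivals know who to stand behind) and the front (the head, so the next boarder is obvious). This is an illustrative sketch, not Darner's implementation:

```python
class Rider:
    def __init__(self, name):
        self.name = name
        self.behind = None  # the person who arrived after you, if any


class Ultimo:
    """A singly-linked FIFO queue, bus-stop style."""

    def __init__(self):
        self.front = None   # next to board the bus
        self.ultimo = None  # last to arrive

    def arrive(self, name):
        rider = Rider(name)
        if self.ultimo is None:        # empty stop: you're first and last
            self.front = rider
        else:
            self.ultimo.behind = rider  # the old último learns who's behind
        self.ultimo = rider             # eres tú: you are now el último

    def board(self):
        if self.front is None:
            raise IndexError("nobody at the stop")
        rider = self.front
        self.front = rider.behind       # whoever was behind goes next
        if self.front is None:          # that was the last rider
            self.ultimo = None
        return rider.name
```

Nobody holds the whole line; each rider tracks one pointer, and everyone is free to go find a shady spot in the meantime.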
So today we’re open-sourcing Darner: a lightweight, resource-friendly queue server. It’s similar to Kestrel, a queue server built by some smart folks at Twitter. But our backend architecture has some specific requirements that we could only account for with a new design:
- We needed a queue server that could run on commodity (read: gimpy) EC2 machines that were busy doing other things. This means using very little memory but still running very efficiently. Darner achieves remarkable benchmark figures, carefully journaling every message it receives, while occupying about 15MB of RAM: roughly a twentieth of Kestrel’s footprint.
- Given the resource limitations, we still needed the server to be blazing fast. Darner uses Boost.Asio and leveldb’s log-structured storage to push/pop ~25,000 times a second on each of our gimpy EC2 instances, close to an order of magnitude more throughput than Kestrel when the load is highly concurrent (many clients at once).
- Many of the messages passing through Wavii’s backend vary wildly in size. This can be tricky to handle fairly: a large message could cause smaller messages to time out. Think of it like ordering takeout: the guy in front of you makes it to the counter and pulls out a list. Turns out he’s ordering for all his buddies back at the office. Lame, right? In this case you don’t want Barth – you want a team of nimble teppanyaki chefs to efficiently work on everyone’s order at the same time, catching shrimp tails in their hats. Darner also handles this scenario an order of magnitude better than Kestrel.
There’s no shortage of queue servers out there – although we use Darner at Wavii, the software is at time of publishing only a week old, so caveat push-pop-ter. But it borrows features from a long line of queue servers, making it, for the time being, el último.