During my time at Amazon, I drank the performance-obsession Kool-Aid. Geniuses like John did the hard-core data analysis showing that milliseconds of page latency matter to whether customers buy things. Using the great toolset Amazon had built for analyzing page performance, my teams worked to improve response times at the average, the 99.9th percentile, and so on.
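To make the percentile idea concrete, here's a toy sketch (this is my own illustration, not Amazon's tooling) of summarizing a batch of page-load latencies at the average and the 99.9th percentile. The nearest-rank `percentile` helper and the synthetic sample data are assumptions for the example:

```python
import math
import random

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p percent of the samples are at or below it."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(ordered)) - 1)
    return ordered[min(k, len(ordered) - 1)]

# Hypothetical page-load times in milliseconds: mostly normal,
# plus a few slow outliers that only the tail percentiles reveal.
latencies = [random.gauss(250, 40) for _ in range(10_000)] + [2500.0, 3000.0]

avg = sum(latencies) / len(latencies)
p999 = percentile(latencies, 99.9)
print(f"average: {avg:.0f} ms, p99.9: {p999:.0f} ms")
```

The point of tracking the 99.9th percentile alongside the average is visible here: a handful of multi-second outliers barely move the mean but dominate the tail.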
Amazon’s challenges were very specific – pages were built from hundreds of service calls to hundreds of services running on thousands of boxes, so the page framework needed to respond quickly to failing or slowing services, with appropriate backoff and the like. These were fun and heady problems to work on.
When I left Amazon for WhitePages, I realized that the toolset I had grown to rely on both solved Amazon’s unique problems and stayed inside Amazon. Other large companies had built similar systems, and I could pay the usual suspects for some kinds of services, but nothing off-the-shelf could give me even 10% of the information Amazon had.
Smaller web publishers are thus hit with a double whammy: fewer tools to measure page performance, and fewer engineers to build those tools themselves.
That’s not right for the web, and so at WhitePages, we’ve built a toolset to help engineers build faster web sites, based on real-world data from real clients.
Today, we released that system as an open source project – Jiffy. Jiffy is an end-to-end system for measuring page and component performance using real client data.
I’ve blogged in detail on the WhitePages Developer Blog about Jiffy, and the code is immediately available for download. I announced the release at the O’Reilly Velocity conference this morning – here are the slides.
More about Velocity later – it has already had a few interesting moments, even if the room is very hard to make laugh. (I cut a few jokes before I started, likely a good call, since the ones I kept fell flat.)