Inside Myspace

Here’s an excellent article called Inside Myspace by David F. Carr. It traces the steps MySpace had to go through as they grew from a small website to a social network with technology that needs to handle almost 40 billion page requests a month. The article talks about the architectural decisions that were taken as the scaling requirements for the application grew and grew and grew.

The initial configuration for the website consisted of two web servers talking to a single database server. At first the site was scaled by adding more web servers, but eventually this maxed out the database. The redesign of the system spread the database load across three SQL Servers: a master database with two replicated instances. As the web servers were scaled out, the SQL tier was scaled out as well, but eventually I/O on the box hit a physical limit, and the site was also exhibiting problems caused by the time it took to replicate information to all the child databases.
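
To make the replication setup concrete, here's a minimal sketch (in Python, with hypothetical hostnames; the article doesn't describe MySpace's actual routing code) of the read/write split that a master/replica configuration implies:

```python
import random

# Hypothetical hostnames for illustration; the real setup used SQL Server.
MASTER = "master-db.example.com"
REPLICAS = ["replica1.example.com", "replica2.example.com"]

def get_connection(for_write: bool) -> str:
    """Route writes to the master and spread reads across the replicas.

    Replication is asynchronous, so a read from a replica may lag behind
    the master: that is the replication-delay problem described above.
    """
    if for_write:
        return MASTER
    return random.choice(REPLICAS)

print(get_connection(for_write=True))   # writes always hit the master
print(get_connection(for_write=False))  # reads are balanced across replicas
```

The scheme buys read capacity (add another replica, get more read throughput), but every write still funnels through one master, which is why the master's I/O became the ceiling.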

This spawned another application redesign, this time built around the concept of vertical partitioning: using different databases for different parts of the site. The performance improvements of this redesign were compounded by moving the databases onto a SAN, which is more performant than directly attached storage. This worked for some time, but as the performance requirements increased, the coupling between the databases started to become the bottleneck. This prompted another redesign, this time moving to a distributed computing architecture.
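
Vertical partitioning boils down to routing each functional area of the site to its own database server. A minimal sketch, with made-up feature names and hostnames (the article doesn't list MySpace's actual partitions):

```python
# Hypothetical feature-to-database map illustrating vertical partitioning:
# each functional area of the site owns its own database server.
PARTITIONS = {
    "profiles": "profiles-db.example.com",
    "messages": "messages-db.example.com",
    "blogs":    "blogs-db.example.com",
}

def db_for_feature(feature: str) -> str:
    """Return the database server that owns the given part of the site."""
    try:
        return PARTITIONS[feature]
    except KeyError:
        raise ValueError(f"no partition configured for feature {feature!r}")

print(db_for_feature("messages"))  # messages-db.example.com
```

The coupling problem the article describes shows up as soon as a single page needs data owned by several partitions at once: what used to be one query becomes several, against several servers.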

Another change that hit the project at this stage was the move from ColdFusion to .Net. This brought performance improvements, not only because .Net is more efficient than ColdFusion, but also because a rewrite forces developers to rethink their code and design for efficiency. By this time MySpace had over 10 million accounts and was starting to max out the SAN's I/O capacity, prompting a move to a virtualised storage architecture. The next improvement was a caching tier between the presentation layer and the database; in retrospect, the team acknowledged this was something they should have done sooner. The final improvement was to move to SQL Server 2005, which brought the benefit of a 64-bit architecture and the ability to address more RAM. By 2006, their standard SQL Server configuration consisted of boxes with 64GB of RAM, which brought about much better performance.
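
The caching tier mentioned above typically follows the cache-aside pattern: check the cache first, fall back to the database on a miss, and populate the cache on the way out. Here's a minimal sketch; the dict is a stand-in, since the article doesn't name the cache technology MySpace used:

```python
# A plain dict standing in for whatever distributed cache the tier used.
cache: dict[str, dict] = {}

def load_profile_from_db(user_id: str) -> dict:
    """Stand-in for the expensive database query the cache is shielding."""
    return {"id": user_id, "name": f"user-{user_id}"}

def get_profile(user_id: str) -> dict:
    profile = cache.get(user_id)
    if profile is None:                       # cache miss: hit the database
        profile = load_profile_from_db(user_id)
        cache[user_id] = profile              # populate for the next request
    return profile

get_profile("42")   # first call goes to the database
get_profile("42")   # second call is served from the cache
```

Every request served from the cache is a query the database never sees, which is why the team wished they had added this tier sooner.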

It’s an excellent article and makes for great reading. If you’re interested in system architecture, check it out.
