Define, Divide and Conquer: 5 Tips for Writing a Scalable App

Developing a scalable application can be a delicate balancing act, requiring you to carefully weigh your goals, resources and business needs.

The purpose of this article is to provide some helpful tips to guide you through the design side of this potentially harrowing process.

Scalability Defined

Before we can delve into making a scalable application, it’s worth spending some time to define just what scalability is and isn’t.

Contrary to what many novice developers think, scalability is not primarily about performance. Performance is virtually unrelated to the aggregate scalability of an application. The gains from any number of architectural decisions can dwarf the contributions that performance can make to scalability.

It’s almost a certainty that making your application scalable will make it slower. This is because scalability often means more layers, and more layers make for slower responses. As such, a monomaniacal focus on performance optimization gives low return, increases complexity and subsequently decreases productivity.

The essence of scalability is the ability to increase the throughput of your entire application. This can be contentious because increasing throughput means different things at different times during an application’s life-cycle. In the end, it usually comes down to your ability to spread the work across increasing amounts of physical equipment.
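The latency-versus-throughput tradeoff above can be illustrated with some back-of-the-envelope arithmetic. The numbers here are purely hypothetical: a single highly optimized server answers each request faster, but a fleet of slower, layered servers delivers far more aggregate throughput.

```python
# Hypothetical numbers: per-request latency vs. fleet-wide throughput.

def throughput(servers: int, latency_s: float) -> float:
    """Requests per second the whole fleet can sustain,
    assuming each server handles one request at a time."""
    return servers / latency_s

# One fast server: 50 ms per request.
fast_single = throughput(servers=1, latency_s=0.050)

# Ten slower servers: the extra layers cost 30 ms per request,
# yet the fleet's total throughput is still much higher.
slow_fleet = throughput(servers=10, latency_s=0.080)

print(f"{fast_single:.0f} req/s vs {slow_fleet:.0f} req/s")
```

The individual request got slower, but the application as a whole can do much more work — which is exactly the sense in which scalability is about throughput, not performance.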

The Request Path

Whenever your application receives a request, a number of pieces come into play. For example, when a user makes a request in a Web browser, the browser first consults DNS (domain name system) to resolve your domain name to a set of servers. Those servers may then contact a database server — and so on. The key here is to think of your application as starting at the users’ fingertips and moving down through a “stack” of components.

In this model of an application, performance is usually understood to be the time it takes a request to travel from the top of the stack to the bottom and back again. This has a modest effect on scalability, and this top-to-bottom measurement of performance is usually referred to as “vertical scalability.”

What’s critical is to consider the total load on your application. This makes the problem two-dimensional, and every layer in the stack becomes a brick in a wall. If done well, that wall can extend as far in either direction as you can afford to build it. Much like a wall, your “bricks” must be staggered in such a way as to support each other. In this transformed view, scalability has nothing to do with performance (i.e. the height of the wall), and everything to do with architecture. This is horizontal scalability.

That’s a clue as to why writing a scalable application costs performance. Performance is achieved by reducing layers and adding optimizing complexity, whereas horizontal scalability is achieved by using each layer as an opportunity to spread the problem out and put more power behind it.

Sharing the Load

Just as multiple bricks in a wall share the load, so do the elements in each layer. The key to sharing the load is sharing information. The nature of that sharing determines exactly how widely that layer can scale. The most scalable type of layer is “shared-nothing.” If each horizontal brick shares nothing with its neighbors, there’s no trouble scaling horizontally as far as you want.

It’s worth noting, though, that no application is entirely “shared-nothing.” If you don’t have shared data, then you really don’t have an application. Rather, you have to minimize the data you share and keep it where it’s most appropriate. Must the data be highly available or always consistent? What’s the impact of losing it? These factors determine how you store and access it.
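The shared-nothing idea can be sketched in a few lines. In this hedged example, all names are hypothetical, and a plain dictionary stands in for an external store such as memcached or a database: each application process holds no per-user state of its own, so any process can serve any request interchangeably.

```python
class SharedStore:
    """Stand-in for an external shared store (e.g. memcached, a database)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


def handle_request(store: SharedStore, session_id: str) -> int:
    """Count a user's visits without keeping state in the process itself."""
    visits = (store.get(session_id) or 0) + 1
    store.set(session_id, visits)
    return visits


store = SharedStore()
# Two "different processes" can serve the same user, because the only
# shared state lives in the store, not in either process.
handle_request(store, "user-42")
print(handle_request(store, "user-42"))
```

Because `handle_request` touches nothing but the external store, you can run as many copies of it as you like — that layer scales horizontally without its “bricks” knowing about each other.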

Now for the scalability tips:

  • Tip #1: Scale at the Client. The caches of Web browsers are literally a worldwide distributed storage system. Use it. Make as much of your site static as possible. Don’t dynamically generate most pages. Have static pages, but provide Ajax APIs (application programming interfaces) that let your pages load data via JavaScript. Use the browser cache to store both the pages and possibly even the data itself, if appropriate. Just recognize that this cache can disappear at any time and that the user can easily tamper with this data. While this advice seems specific to the Web browser, any client-server system can take advantage of client-side caching.
  • Tip #2: Build Tiers. While the first instinct is to find a single database “bucket” to put all of your data in, don’t do it. Plan on having different data in different databases. Even within a database, reduce interdependencies between your tables. This lets you scale different data sets separately.

    Know that tiers aren’t just for databases. Your static assets should have their own tier. Your dynamic assets should have their own tier. Do you have user profile images? Give them a tier. Even build tiers within tiers. When you start, you can host these tiers together. As you scale, you can spread them out onto more hardware.

  • Tip #3: Have an API. The existence of defined APIs provides signposts necessary for quality assurance, availability and deployment.

    Hide these tiers behind RESTful interfaces. When you need to scale or refactor a tier, a services-based implementation will give you the freedom to make those changes. Similarly, you can change the API without changing the data. This decoupling makes continuing development of a running application easier.

  • Tip #4: Use Hashing. When you have a lot of data, a single database instance eventually won’t cut it. Instead, spread the data out. Choose a key on your data that can be hashed into a uniformly distributed value, then use that value to pick a server. This provides a nice way to spread out a database and distribute it horizontally. The RESTful APIs you used for your tiers make this easier to implement.
  • Tip #5: Pick the Right Tools. Need to do massively parallel computation? Use a map-reduce framework like Hadoop. Need to generate a complex user interface? Use a shared-nothing application server and some good Ajax, maybe Rails and jQuery. Need ultimately consistent data with some heavy reporting needs? Use an SQL database and maybe that map-reduce framework. Need fast access to volatile data? Use a distributed memory cache, like memcached. Need to make this all fit together? Wrap it in REST — probably using Rails again.

    The key is that you are using tools, plural. No one tool will scale forever and do everything. You’re going to have to mix and match to make it work for you.
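Tip #4 can be sketched very compactly. This is a minimal illustration, with hypothetical shard hostnames: hash the record’s key into a uniformly distributed value, then map that value onto one of the database servers.

```python
import hashlib

# Hypothetical database shards.
SHARDS = ["db0.example.com", "db1.example.com",
          "db2.example.com", "db3.example.com"]


def shard_for(key: str, shards=SHARDS) -> str:
    """Hash the key uniformly, then map it onto one of the shards."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]


# The same key always lands on the same shard.
assert shard_for("user:1001") == shard_for("user:1001")
```

One caveat worth knowing: simple modulo hashing like this reassigns most keys whenever the number of shards changes. Schemes such as consistent hashing limit that reshuffling, so they’re worth considering if you expect to add shards often.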

Room to Grow

The beauty of a scalable architecture shows when your application must grow. If you need more throughput on a tier, just add more servers there. If you need more functionality, just refactor that tier. The APIs make it possible to do so without inflicting the refactoring on the rest of your application.

Scalable architectures don’t happen by accident. It isn’t about picking certain magic components, either. You have to clearly model your data, define your application, and determine how it will be used. It’s about defining the pieces and how they fit together. Only then can you really figure out how to divide and conquer the problem in a scalable way.

Jayson Vantuyl is founder of Engine Yard, a provider of managed Rails hosting and Rails deployment solutions.
