Application characteristics will change: The increasing importance of geolocation in apps will necessitate the ability to shift context and data sets rapidly. If I'm driving in a taxi, the "nearby" services change quickly as the car moves down the road. Being able to shunt data in and out of working sets quickly (not to mention being able to blend contexts as applications support multiple people sharing a "nearby" context) will become vital. Naturally, this requires high performance.
Application topologies will become more complex: As scale and variability increase, architecture designs must change. I hinted at this last week, when I mentioned the use of memcached as a data caching mechanism used to increase throughput.
Complex applications often incorporate asynchronous processing for compute-intensive tasks; message queues are often used as part of this approach. Therefore, application architectures need to change to incorporate new software components and new design patterns.
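The queue-based pattern described above can be sketched in a few lines. This is a minimal illustration using Python's standard-library `queue` module with a single background worker; in a real deployment the queue would be an external message broker, and the squaring step stands in for whatever compute-intensive task is being offloaded.

```python
import queue
import threading

# Web-facing code enqueues compute-intensive tasks; a background worker
# drains the queue asynchronously. In production this role is typically
# played by a message broker rather than an in-process queue.
task_queue = queue.Queue()
results = []

def worker():
    while True:
        task = task_queue.get()
        if task is None:               # sentinel value: shut the worker down
            task_queue.task_done()
            break
        results.append(task ** 2)      # stand-in for an expensive computation
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()

for n in range(5):                     # the "request handler" enqueues work
    task_queue.put(n)
task_queue.put(None)

task_queue.join()                      # wait until every task is processed
t.join()
print(results)                         # [0, 1, 4, 9, 16]
```

The request handler returns immediately after enqueueing; the caller never blocks on the expensive computation itself.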
What are some practical steps you can take to ensure your cloud-targeted application can support these new application requirements? Here are some suggestions:
1. Review software components that you plan to use in the application. Many software components were designed to be used in a static environment with manual configuration and occasional updating. A common design pattern for these components is the use of a "conf" file which is edited by hand to configure the component context.
Once the conf file is complete, the component is started (or restarted), reads the configuration information into memory, and goes into operation. In a cloud world, in which context changes constantly as new connections and integration points join and drop, the "edit and restart" model is unsustainable. Look for components that have online interfaces to update context and dynamically add or delete connections. Nothing is worse than rolling out an application and later realizing that some part of it can't really support dynamic topology shifts.
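The difference between "edit and restart" and an online configuration interface can be sketched as follows. This is an illustrative Python example, not any particular product's API: the component holds its settings behind a lock and exposes a `reload()` method that an admin endpoint or signal handler (assumptions for this sketch) could call while the component keeps serving.

```python
import threading

# Sketch of a component that updates its configuration at runtime,
# rather than requiring the "edit the conf file and restart" cycle.
# In practice reload() would be wired to an admin HTTP endpoint or a
# SIGHUP handler; that wiring is assumed, not shown.
class DynamicConfig:
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._settings = dict(initial)

    def get(self, key, default=None):
        with self._lock:
            return self._settings.get(key, default)

    def reload(self, new_settings):
        # Swap in the new settings atomically; running request handlers
        # pick up the change on their next get() call, with no restart.
        with self._lock:
            self._settings = dict(new_settings)

cfg = DynamicConfig({"db_host": "10.0.0.5", "pool_size": 8})
print(cfg.get("db_host"))              # 10.0.0.5

# A new database node replaces the old one; push the change live.
cfg.reload({"db_host": "10.0.0.9", "pool_size": 16})
print(cfg.get("db_host"))              # 10.0.0.9
```

The hostnames and keys here are hypothetical; the point is that the component's context changes without the process ever going down.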
2. Plan for load balancing throughout the application. Many applications support load balancing at the Web server layer, but assume a fixed number of application components (and fixed IP addresses) at the other layers. With very large load variability, those layers also need to be scalable and need to support load balancing to ensure consistent throughput. Don't design an application with the expectation that only two application components will reside at certain layers. Plan for dynamism and load balancing at all layers.
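One way to avoid hard-coding a fixed pair of servers at an internal tier is client-side load balancing over a membership list that can change at runtime. The sketch below, with illustrative hostnames, uses simple round-robin selection; a production system would add health checks and pull the backend list from a service registry.

```python
import threading

# A pool of backend addresses for an internal application tier. The
# membership can grow or shrink while the application runs, so callers
# never assume a fixed set of servers. Hostnames are hypothetical.
class BackendPool:
    def __init__(self, backends):
        self._lock = threading.Lock()
        self._backends = list(backends)
        self._index = 0

    def add(self, backend):
        with self._lock:
            self._backends.append(backend)

    def remove(self, backend):
        with self._lock:
            self._backends.remove(backend)

    def next(self):
        # Round-robin over whatever members are currently in the pool.
        with self._lock:
            backend = self._backends[self._index % len(self._backends)]
            self._index += 1
            return backend

pool = BackendPool(["app-1:8080", "app-2:8080"])
pool.add("app-3:8080")                 # a new instance scales up mid-flight
picks = [pool.next() for _ in range(3)]
print(picks)                           # ['app-1:8080', 'app-2:8080', 'app-3:8080']
```

Because callers ask the pool for a backend on every request, capacity added or removed at this layer is used immediately, without reconfiguration elsewhere.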
3. Plan for application scalability. Maybe this is hammering the point home too many times, but double or triple your capacity planning and application architecture assumptions — maybe even factor in a 10X growth possibility. When you plan for much larger scales, you pay attention to bottlenecks and plan how to relieve them dynamically. If you don't expect scalability, you don't examine your architecture assumptions critically. So review your architecture for scalability bottlenecks.
4. Plan for dynamic application upgrades. Forty years ago, auto manufacturers took two weeks to change over factories to prepare for new model manufacturing. Toyota figured out how to do it in two hours. That meant they had to design for dynamic factory upgrades. Cloud computing, with its 24-hour usage cycles, leaves no downtime window for application upgrades. Architecting applications so that their topologies can be changed while users continue to access the system requires Toyota-like planning. Likewise, upgrading database schemas (and data sets) to support new application versions necessitates Toyota-like approaches.
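For the database side of a zero-downtime upgrade, one common approach (my framing, not something the column prescribes) is the expand/contract pattern: every migration step stays compatible with both the old and new application versions, so servers can be upgraded one at a time while users stay connected. The table and column names below are illustrative.

```python
# Sketch of an "expand/contract" schema migration. Each phase is safe to
# run while a mix of old and new application versions is serving traffic.

EXPAND = [
    # 1. Expand: add the new column. Old code ignores it; new code uses it.
    "ALTER TABLE users ADD COLUMN full_name TEXT",
    # 2. Backfill existing rows while both versions run side by side.
    "UPDATE users SET full_name = first_name || ' ' || last_name",
]

CONTRACT = [
    # 3. Contract: only after every server runs the new version,
    #    drop the columns the old version depended on.
    "ALTER TABLE users DROP COLUMN first_name",
    "ALTER TABLE users DROP COLUMN last_name",
]

def run_migration(execute, statements):
    # execute() is assumed to be a DB-API cursor.execute-style callable.
    for stmt in statements:
        execute(stmt)

# Demonstration with a recording stub instead of a live database.
executed = []
run_migration(executed.append, EXPAND)
print(executed)
```

The key design choice is the gap between the two phases: the contract steps are held back until the rolling upgrade of the application tier is complete.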
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.