When vital systems go down, even briefly, the outage can damage a firm's services, productivity, reputation and more. A December 2013 study from the Ponemon Institute quantified these impacts, finding that unplanned data center downtime cost an average of $7,900 <em>per minute</em>, up 41 percent in just three years.

These figures underscore the importance of failover arrangements for any mission-critical application, whether it is a consumer-facing app or an enterprise system for internal users. That means using a combination of techniques to ensure that even if one server or facility suffers an outage, systems elsewhere can take over without a hitch.

The most difficult challenge is typically the database, if there is one, and especially a database that receives frequent updates. A strong failover plan typically includes continuous automatic replication of the database (along with any changes to the software and OS configurations) to a server in a remote location. If the main system goes down, the backup is immediately promoted to become the new master database server, minimizing data loss and interruptions in service.

If an application is hosted in the cloud, architectural choices still have a big impact on its failover response. For example, the system could not only incorporate redundant databases but also deploy every server image to multiple geographic regions and data centers, so service continues even if outages hit more than one region. You could even use multiple cloud providers, although very few organizations go that far.

In short, failover is a key component of the developer's art, even if some systems call for more elaborate arrangements than others. When unexpected downtime could have serious implications for your bottom line, there is no substitute for a well-crafted failover scheme, executed with skill and care.
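The database promotion step described above, where a remote backup takes over when the main server fails, can be sketched in a few lines. This is a minimal illustration only: the `DatabaseNode` class and the region names are hypothetical, and a production system (for example, PostgreSQL managed by Patroni, or MySQL with orchestrator) would also handle consensus, fencing, and replication-lag checks before promoting a replica.

```python
class DatabaseNode:
    """Hypothetical stand-in for a database server in one region."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.role = "replica"

    def is_healthy(self):
        # A real health check would probe the server over the network.
        return self.healthy


def elect_primary(nodes):
    """Promote the first healthy node to primary; demote the rest.

    This captures only the promotion decision; real failover tooling
    adds quorum, fencing of the old primary, and lag verification.
    """
    primary = None
    for node in nodes:
        if primary is None and node.is_healthy():
            node.role = "primary"
            primary = node
        else:
            node.role = "replica"
    return primary


# Usage: the primary in one region fails, and the remote replica is promoted.
nodes = [DatabaseNode("us-east-1"), DatabaseNode("eu-west-1")]
elect_primary(nodes)       # us-east-1 becomes primary
nodes[0].healthy = False   # simulate an outage in the primary's region
new_primary = elect_primary(nodes)
print(new_primary.name)    # eu-west-1
```

The design choice worth noting is that promotion is a pure function of current health, so re-running the election after any state change is safe and idempotent.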