The situation can be much worse than a bad hair day for the IT department: imagine that your cloud provider's power failed and you lost all your data. Not likely? Agreed, because cloud providers go to great lengths to maintain sufficient geographically separate sites that act as backups. If one site goes down, your data should be recovered quickly, as defined in your SLA.
While losing all your data is rather unlikely, a more probable scenario is this: while you have taken the required steps to secure your data, you may not have applied the same strict measures to your backups. If some or all of your data becomes temporarily or permanently unavailable and you have to bring your recovery site online, are you certain that this data is completely secure and that you are not bringing any malicious elements on board?
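One way to gain that certainty is to tag every backup with an integrity check so tampering is detected before a recovery site goes live. The sketch below is illustrative, not a complete solution: it uses only the Python standard library's `hmac` module, and the key handling and payload names are assumptions.

```python
import hashlib
import hmac

# Assumption: the key lives in a secrets manager and is delivered to the
# recovery site out of band, never stored next to the backups themselves.
SECRET_KEY = b"stored-in-a-secrets-manager"

def tag_backup(payload: bytes) -> bytes:
    """Compute an integrity tag to store alongside the backup."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify_backup(payload: bytes, tag: bytes) -> bool:
    """Constant-time check that the backup matches its tag."""
    return hmac.compare_digest(tag_backup(payload), tag)

backup = b"full-site-snapshot"
tag = tag_backup(backup)
print(verify_backup(backup, tag))               # True
print(verify_backup(backup + b"malware", tag))  # False
```

Verifying tags before failover is cheap insurance that the recovery data is the data you actually backed up.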
Many companies have chosen to place their Disaster Recovery (DR) sites in the cloud to reduce data center space, IT infrastructure, and IT resources. Migrating DR sites to the cloud has become increasingly attractive as WAN speeds approach LAN speeds, due primarily to software-only cloud backup solutions and improved bandwidth access. The cloud can be used for backup in several ways: in cloud-to-cloud backup, data is synchronized between multiple cloud data centers in real time; alternatively, or in parallel, private cloud offerings allow data to be synchronized across devices owned by the company.
With careful cloud capacity planning, companies can benefit from considerably lower costs. However, in determining your DR requirements, be sure to consider the bandwidth and network capacity required to redirect all users to your recovery site if disaster strikes. You may be restoring data from one cloud to another, or from a public or private cloud to on-premises infrastructure; each scenario has its own requirements. Priority and required Recovery Time Objectives (RTOs) will determine the disaster recovery approach.
Your data is only as available as its backup.
According to a GFI survey, 32% of IT admins do not test their backup solutions for effectiveness. Scary thought. To be fully prepared, you'll need to verify that you can meet your Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). The network's RPO determines the frequency of backups; in other words, how old the data will be if you have to revert to the recovery site. Will you have lost a minute of data, hours, or more? Obviously, mission-critical applications require continuous or near-continuous backup.
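The relationship between backup frequency and worst-case data loss can be made concrete. This is a minimal sketch under a simplifying assumption: the worst case occurs when disaster strikes just before a backup completes, so the newest restorable copy is one full interval old plus the time the previous backup took to replicate.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta,
                         backup_duration: timedelta) -> timedelta:
    """Worst-case age of the most recent restorable backup."""
    return backup_interval + backup_duration

def meets_rpo(backup_interval: timedelta,
              backup_duration: timedelta,
              rpo: timedelta) -> bool:
    """Does this backup schedule satisfy the stated RPO?"""
    return worst_case_data_loss(backup_interval, backup_duration) <= rpo

# Hourly backups that take 10 minutes to replicate cannot satisfy a 1-hour RPO:
print(meets_rpo(timedelta(hours=1), timedelta(minutes=10), timedelta(hours=1)))   # False
# Backing up every 30 minutes leaves comfortable headroom:
print(meets_rpo(timedelta(minutes=30), timedelta(minutes=5), timedelta(hours=1)))  # True
```

The point of the arithmetic: an RPO is not met by scheduling backups at exactly the RPO interval; replication time eats into the budget.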
The RTO defines how long it will take to restore the recovery site to full functionality for all employees and/or customers. Only by testing the backup and restore procedures at regular intervals to work out any glitches will you be prepared to meet the RTO when disaster strikes. Backing up and recovering directly to and from the cloud eliminates data movement staging, speeding up both processes.
Designing the network architecture for full recovery testing requires a comprehensive strategy. End-to-end testing means switching back and forth between the live and recovery sites, which not all organizations are prepared to do. Yet partial testing is a risk, especially if the recovery site does not contain your entire data set. In this situation, scalability is an important consideration: will you be able to scale your recovery site to support the entire deployment?
You'll also need to ensure network connectivity, guaranteeing continuous access to your resources and the ability to maintain them in their optimal state. Automatic program and operating system updates are important not only for smooth processing but also for security. Phishers and hackers are more likely to exploit unpatched software, which carries known vulnerabilities. After all:
Your data is only as secure as its backup.
What measures have you taken to ensure that your authentication and authorization safeguards extend throughout your deployment, including all backups? Is all data in your live and backup sites encrypted, both in transit and at rest? Have the same strict measures of user-access control, including authentication and authorization, been applied to all locations?
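Those questions amount to an audit: every site, backups included, must enforce the same controls. Here is a minimal, stdlib-only sketch of such a check; the site names and control flags are illustrative assumptions, not a real policy schema.

```python
# Controls every location must enforce (illustrative set).
REQUIRED = {"encrypt_at_rest", "encrypt_in_transit", "mfa", "rbac"}

# Assumed inventory of sites and the controls each currently enforces.
sites = {
    "live":     {"encrypt_at_rest", "encrypt_in_transit", "mfa", "rbac"},
    "backup-1": {"encrypt_at_rest", "mfa", "rbac"},  # missing in-transit encryption
}

# Report every control gap, per site.
gaps = {name: REQUIRED - controls
        for name, controls in sites.items()
        if REQUIRED - controls}

print(gaps)  # {'backup-1': {'encrypt_in_transit'}}
```

A backup site that fails this kind of check is exactly the weak link the questions above are probing for.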
Monitoring and managing both the production and recovery sites requires high visibility of all network elements including virtual servers and connectivity statuses, with automated alerts and notifications. Your cloud provider may not provide all these features out of the box. Especially if you are using more than one cloud provider for your production and recovery sites, you will likely require a third-party solution to encrypt the data-in-transit between the different providers and regions.
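The alerting half of that monitoring boils down to a simple rule applied across every site: one notification per unhealthy element. This stdlib-only sketch shows the shape of that logic; the site and element names are illustrative assumptions, and a real third-party tool would collect the statuses for you.

```python
from dataclasses import dataclass

@dataclass
class ElementStatus:
    site: str      # e.g. "production" or "recovery"
    element: str   # virtual server, VPN tunnel, ...
    healthy: bool

def alerts(statuses):
    """Return one notification per unhealthy element, across all sites."""
    return [f"ALERT: {s.site}/{s.element} is down"
            for s in statuses if not s.healthy]

snapshot = [
    ElementStatus("production", "vm-web-01", True),
    ElementStatus("recovery", "vpn-tunnel", False),
]
print(alerts(snapshot))  # ['ALERT: recovery/vpn-tunnel is down']
```

Note that the recovery site's elements are polled on equal footing with production's; a dead VPN tunnel to the recovery site is precisely the failure you want to learn about before a disaster, not during one.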
Incorporating data encryption and strict access control are crucial measures that protect your data at both the live and recovery sites. Taking these preventive steps helps ensure that bringing your recovery site online will not compromise the security of your organization.
This article was syndicated from Business 2 Community: Disaster Recovery – Worse than a Bad Hair Day