Nakivo 6.2 Enables AWS Backup

Background

Within the past few days Nakivo announced the release of version 6.2 of their backup software. For those not familiar, Nakivo is a backup product for cloud and VMware-based environments. Nakivo first announced support for Amazon EC2 instances in the v6 release, and the latest incarnation builds on this with a number of AWS-related enhancements.

Deployment

NAKIVO Backup & Replication can now be purchased from the AWS Marketplace and allows you to back up to the same region, across regions or back to on-premises. The product is intelligent in how it deploys transporters: each region requires a transporter to enable backups, and should a backup be requested in a region without an existing transporter, a new one will be spun up automatically.

Keeping costs down

As AWS is billed on an hourly basis, it is sensible to run servers only when needed in order to reduce costs. Transporter instances can be automatically powered off when not in use, and when a task is required of a powered-off transporter, Nakivo is able to power it back on to complete the task. Another new feature is the ability to harness native AWS instance replication to maintain identical copies, allowing for near-instant recovery.
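To put the hourly-billing point in numbers, here is a minimal back-of-the-envelope sketch. The hourly rate and running hours are illustrative assumptions, not NAKIVO or AWS pricing figures.

```python
# Why powering off idle transporters matters with hourly billing.
# The rate ($0.10/h) and usage pattern below are illustrative only.

def monthly_cost(hourly_rate: float, hours_running: float) -> float:
    """Cost of an instance billed only for the hours it runs."""
    return hourly_rate * hours_running

always_on = monthly_cost(0.10, 24 * 30)  # transporter left running all month
on_demand = monthly_cost(0.10, 2 * 30)   # powered on ~2 h/day for backup jobs

print(f"Always on: ${always_on:.2f}/month")  # $72.00
print(f"On demand: ${on_demand:.2f}/month")  # $6.00
```

Even at a modest rate, leaving a transporter running around the clock costs an order of magnitude more than powering it on only for the backup window.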

Like the VMware version, the AWS flavour continues to offer global deduplication across all jobs and application-level backups. The press release contains the full details.

Survey reveals 95% of companies suffered an outage in the last year

95% of companies suffered an outage in the last year. This is one of the surprising facts discovered by iLand in a recent DR survey, which questioned 250 firms of 500 employees or more to understand the state of DR in the UK.

The results are interesting and show some key trends that should enable all companies to tighten up their DR policies.

The headline figure, that most organisations suffered an outage in the past 12 months, must be a wake-up call to get DR policies in order. Those that had experienced outages were then asked the cause; the top two causes were system failure and human error. Ransomware and other cyber-attacks also featured in the survey, coming in as the fourth most common cause of outages. I have previously covered considerations regarding ransomware and backups in detail.

Of those questioned, 87% had initiated some kind of failover in the past 12 months, but offset against this was the fact that the majority had encountered issues during the process. This seems likely linked to another statistic: only 63% of respondents have a trained team that tests DR quarterly or twice a year.

Respondents were also asked about the amount of money their company was investing in DR: 57% believed the amount spent was correct, 26% said it was too little, while 17% believed that too much was actually being spent.

Conclusions

So what practical use can we make of these figures to allow organisations to learn from them and ensure they are ready for a DR situation?

The take-home appears to be that the need to fail over in some form in the next 12 months is extremely likely, yet most organisations are not able to do so with 100% confidence. Given the increasing need for organisations to be continually available, and the potential financial and reputational losses of downtime, this has to be a concern for most companies.

Planning and testing of DR have to be the number one and two priorities to come out of this survey. Any backup should be tested on a regular basis, and when that backup is intended for DR the need becomes even greater.

As a final comment, I think it is worth noting that in my opinion the figures were overly optimistic. It is very difficult for any organisation to admit that it has failings, and these will only be recognised if a company has undertaken the correct testing or been forced to invoke DR.

All IT departments have a limited budget, and the majority of it gets spent on end-user computing, where the focus of the business lies. However, a business today without its data is no longer a business. Organisations need to consider the appropriate spend on planning and testing to ensure that they are DR ready.

The survey results are summarised in this infographic.

Ransomware & Backups

Ransomware was first seen in the mid-2000s and has grown into a prevalent security threat, with Trend Micro reporting that they blocked more than 100 million threats between 2015 and 2016.

What is Ransomware?

Ransomware is essentially a hijack of the user's machine that renders it unusable, or operating at reduced capacity, unless a payment is made. The hijacks fall into two main types of attack. The first is a lock screen that stops the user accessing any element of the system until payment is made. In the second type of attack the user's files are encrypted, and again a ransom is demanded, this time to decrypt the files. The prevalence of these sorts of attacks is unfortunately directly linked to the fact that they have proved to be a highly effective business for the criminals behind them. We storage administrators have known for some time that data is critically important to both users and organisations; unfortunately, it now seems criminals know this too, and they are willing to cash in. ZDNet estimated, based on Bitcoin transaction information, that between 15 October and 18 December CryptoLocker had been used to extort $27 million from victims.

Infection and removal

The method of infection generally takes two forms: either a machine already compromised with another form of malware triggers the attack, or the attack arrives through email. Email-based attacks coerce the victim into releasing the payload via a link or attachment, often by pretending to be from a legitimate source, e.g. "click here to see your speeding fine".

In terms of removal, screen-locking ransomware is generally easier to deal with and can be removed using traditional malware-protection products. However, once a user's files have been encrypted this poses a significant challenge. The encryption is often based on a public and private key system, with the private key known only to the hijackers. It is generally impossible to crack these encryption keys, leaving the only options as paying the ransom or restoring from backup. The police and most security vendors advise against paying the ransom, since it only fuels the crime.
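To illustrate why encrypted files are effectively unrecoverable without the attacker's key, here is a deliberately simplified toy sketch (this is not real cryptography and not any real ransomware technique): the same keystream function both encrypts and decrypts, but only whoever holds the key can run it.

```python
# Toy illustration (NOT real cryptography): data encrypted with a key the
# victim never sees cannot be recovered without that key.
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream; calling it again decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

attacker_key = secrets.token_bytes(32)  # held only by the attacker
plaintext = b"Q3 finance report"
ciphertext = keystream_xor(plaintext, attacker_key)

assert ciphertext != plaintext
# Only the holder of the key can reverse the encryption:
assert keystream_xor(ciphertext, attacker_key) == plaintext
```

Real ransomware typically layers a per-file symmetric key under the attacker's public key, but the practical consequence is the same: without the private key, restoring from backup is the only reliable recovery route.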

Preventing Ransomware

Prevention is better than cure, and a multi-layered approach is suggested. This would include educating users about the threats and giving users the most restrictive rights possible, so that execution is not possible. More direct prevention methods include the use of firewalls, end-user protection software and, of course, keeping patch levels up to date.

Ransomware and backups

Given that this is a data protection focused blog, I wanted to look at the specific considerations around backup, as this is the predominant recovery method. It is an important consideration that the encrypting type of ransomware will look to encrypt all attached local and network drives. The behaviour of encrypting network shares can be particularly damaging to organisations, and is why it is important that users are given the most restrictive rights possible, so that the ransomware cannot execute.

Considerations specific to backup are:

Replication is not backup – Sometimes high availability and backup are confused. Replication is not backup, and ransomware is a good example of why not: if the primary end becomes infected, so will the target once replication is complete. Bear in mind this would also include automatic backup to cloud services.

Hold an offline copy of data – Whilst there have been no confirmed cases of backup software being hit by an attack, it is a sensible precaution against a future variant to keep a backup copy offline, or at least on a separate form of media. This is in accordance with the standard good practice laid out in the 3-2-1 rule: have 3 copies of your data, on 2 different types of media, with 1 copy offsite.

RPO becomes key – With the random nature of these attacks, and the potential level of destruction, with multiple key file shares potentially being rendered unusable by a single user, how much data can you afford to lose? For those shares which you consider to be at greater risk, perhaps due to the number of users, you could consider a shorter RPO. Read this article to learn more about selecting an effective RPO and RTO.

Number of recovery points – The number of recovery points and the retention policy also need to be considered. If you are using a simple 14-day policy, for example, it is possible that an infection on an infrequently used share, such as one containing monthly finance reporting, may only be noticed once all the backups also contain the encrypted files.

Endpoint backup – If users save files locally to their desktop or laptop, consider endpoint protection such as Mozy or Veeam Endpoint Backup to safeguard these devices.
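Two of the points above, the 3-2-1 rule and the retention-window problem, can be sketched as quick checks. The inventory format and the figures below are illustrative assumptions, not any product's API.

```python
# Quick sanity checks for a backup strategy. The inventory structure and
# numbers are illustrative only.

def meets_321(copies: list[dict]) -> bool:
    """3-2-1 rule: at least 3 copies, on 2 media types, with 1 offsite."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

def clean_points_left(retention_days: int, days_until_noticed: int) -> int:
    """Daily recovery points that still predate an unnoticed infection."""
    return max(0, retention_days - days_until_noticed)

inventory = [
    {"media": "disk", "offsite": False},  # primary data
    {"media": "disk", "offsite": False},  # local backup repository
    {"media": "tape", "offsite": True},   # offline tape held offsite
]
print(meets_321(inventory))       # True
print(clean_points_left(14, 7))   # 7 clean points remain
print(clean_points_left(14, 30))  # 0 - every recovery point is encrypted
```

The second function makes the retention point concrete: with a 14-day policy, an infection that goes unnoticed for a month leaves nothing clean to restore.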

Veeam’s next big thing

The good ol' days

As exciting as IT is now, things were simpler in the old days. You had a bunch of physical servers, a tape drive in the corner and some backup software. As long as you remembered to change your tapes you knew everything was being backed up, and if you really wanted to go the extra mile you took that tape offsite. Today's IT solutions, spanning on-site, public/private cloud and SaaS, mean that data is disparate and the backup situation complex. Whilst there are already data protection offerings to meet each individual requirement, there wasn't a single vendor with a vision that covered all the requirements of today's IT departments. With Veeam's next big thing, they have come to the table and put forward this complete vision for data protection.

Key Announcements

Let's look at this collection of announcements that form the vision. The diagram below summarises the proposal that Veeam can be used as a single tool to back up public and private clouds, physical machines and Office 365.

[Diagram: Veeam Availability Console overview]

Office 365 Backup – Arguably the most significant announcement was the integration with Office 365. This new functionality enables the backup of Office 365 data to a Veeam repository, allowing the recovery of individual mailbox items and the eDiscovery of Exchange items.

Agents – Veeam Endpoint has been available for some time, with a suggested use of backing up end-user workstations. Veeam have enhanced support for application-consistent backups, and the Veeam Agent for Microsoft Windows is now supported to protect your physical Microsoft servers. The agent for Linux, currently available in beta, has a similar use case and can also be used for all your physical servers.

It has been a frustration of mine for some time that agents were not available for physical workloads, so it is good to see that covered off. More significantly, Veeam has stated that these agents can also be used for the backup of VMs that live in the public cloud.

Veeam Availability Console – This is the one console that ties together all the components, and again nicely illustrates Veeam's vision of a single product to back up and control your company's dispersed data. The Availability Console comprises the Veeam Availability Suite, which is the console most of you are familiar with and probably think of when you think of Veeam. Plus, VAC also enables you to manage all your agents from a single console, which was not possible with Endpoint. This effectively means you can manage your traditional Veeam snapshot backups, plus physical and cloud backups, from a single console.

Further Details

Watch the announcement

Good overview from Anthony Spiteri

Nice summary of all the news by Michael Cade

Staying connected

Stay informed and keep your dataON247 by following on LinkedIn and Twitter, and subscribe to our mailing list.