Virtual data availability conference

A quick post to alert readers to an event which may be of interest to many of you. Do you like conferences but lack the time, or a boss kind enough to let you go? Fear not: Veeam are putting on a virtual conference tomorrow. Like a standard conference, there is a series of presentations starting at 1pm; I am particularly looking forward to the best practice recommendations and the sessions on the new agents feature.

To make it more conference-like, you will be able to explore different areas, including exhibitors and the Experts Lounge, a chat room where you can interact with bloggers.

I have been asked to take part in the Experts Lounge along with a number of fellow bloggers. Pop in if you fancy a chat.

You can still register for the event here and follow along on social media using #VeeamVirtual.

Veeam Copy Jobs Overview

Overview and Uses

Veeam copy jobs are a useful feature within the Veeam Backup & Replication suite. Copy jobs are not a backup of your primary data but rather a copy of the backup files themselves. Copy jobs can be used for a number of purposes:

  • Copying the data to another physical location, giving you an offsite copy of your data
  • Creating an archive backup with different retention settings to your primary backups
  • Creating a copy of the data on another media type; copy jobs can be used as a source for a tape job
  • Helping to create a tiered backup strategy. It may be that you want to keep two weeks of data on fast disks but several months’ worth on cheaper, slower storage

3-2-1 Rule

Those familiar with the 3-2-1 backup rule will have realised that copy jobs are a feature that enables its implementation. The rule states that you should hold at least 3 copies of your data, on 2 different media types, with at least 1 of those copies stored off site. It is good practice to adhere to the 3-2-1 rule.
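
As a rough illustration of the rule, here is a minimal PowerShell sketch (plain PowerShell, no Veeam cmdlets; the plan description is entirely hypothetical) that checks a set of backup copies against 3-2-1:

```powershell
# Hypothetical description of where copies of a dataset live
$copies = @(
    @{ Media = 'Disk'; Location = 'HQ' },          # primary backup
    @{ Media = 'Disk'; Location = 'DR site' },     # copy job target
    @{ Media = 'Tape'; Location = 'Offsite vault' }
)

$totalCopies   = $copies.Count
$mediaTypes    = ($copies.Media | Sort-Object -Unique).Count
$offsiteCopies = ($copies | Where-Object { $_.Location -ne 'HQ' }).Count

# 3 copies, on 2 media types, with 1 offsite
if ($totalCopies -ge 3 -and $mediaTypes -ge 2 -and $offsiteCopies -ge 1) {
    Write-Output '3-2-1 rule satisfied'
} else {
    Write-Output 'Backup plan falls short of 3-2-1'
}
```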

Considerations

Copy jobs are linked to backup jobs, since they are a copy of the primary backup. You can set them up through the GUI or by using PowerShell, as sketched below.
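
If you prefer PowerShell, here is a minimal sketch of creating a copy job, assuming the 9.x-era Veeam snap-in and the Get-VBRJob, Get-VBRBackupRepository and Add-VBRViBackupCopyJob cmdlets; names and parameters vary between versions, so verify against your installed help, and the job and repository names here are hypothetical:

```powershell
# Load the Veeam Backup & Replication snap-in
Add-PSSnapin VeeamPSSnapin

# The primary backup job the copy job will follow
$backupJob = Get-VBRJob -Name 'Daily VM Backup'

# The repository the copies will be written to, e.g. at a second site
$copyRepo = Get-VBRBackupRepository -Name 'DR Site Repository'

# Create the copy job linked to the primary backup job
Add-VBRViBackupCopyJob -Name 'Daily VM Backup - Copy' `
    -BackupJob $backupJob `
    -Repository $copyRepo
```

Remember the new job will start out disabled; Enable-VBRJob will switch it on.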

 

When you create a new copy job it will default to being scheduled at midnight. It might seem counterintuitive to have all your copy jobs starting at the same time, as this would impact bandwidth, storage and so on. However, copy jobs are effectively always running in the background, waiting for their associated backup job to complete. Once the backup job completes, the copy job kicks in. The scheduled time is simply a marker for the period in which the copy job will look for new backups: scheduled for midnight, it starts looking for a new backup at midnight and continues to do so until the next monitoring period kicks in 24 hours later.

When you set up copy jobs they will be disabled by default, and when you enable them they will immediately try to run. Should the initial sync not complete within the time window you specified, Veeam is smart enough to cope with this and will complete the initial sync on the next run. If you need to set up copy jobs across a slow link, you can use WAN accelerators to optimise transport of the data across the link.

Once you think you have completed the setup, if you have access to the Veeam ONE Reporter tool you can use the 'VMs with no archive job' report to highlight any VMs that do not have a copy job associated with them.
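
If you do not have Veeam ONE, a rough PowerShell approximation is to compare what your backup jobs protect with what your copy jobs copy. This is only a sketch: it assumes the same snap-in as above, the 'Backup' and 'BackupSync' job type values are assumptions worth verifying, and copy jobs that take a whole job as their source may not enumerate individual VMs.

```powershell
Add-PSSnapin VeeamPSSnapin

# VM names covered by primary backup jobs
$backedUp = Get-VBRJob | Where-Object { $_.JobType -eq 'Backup' } |
    ForEach-Object { Get-VBRJobObject -Job $_ } |
    Select-Object -ExpandProperty Name

# VM names covered by backup copy jobs
$copied = Get-VBRJob | Where-Object { $_.JobType -eq 'BackupSync' } |
    ForEach-Object { Get-VBRJobObject -Job $_ } |
    Select-Object -ExpandProperty Name

# VMs that are backed up but never copied
$backedUp | Where-Object { $copied -notcontains $_ } | Sort-Object -Unique
```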

Free Office 365 Backup from Veeam

SaaS & Backups

SaaS has to be the first move to the cloud for most companies. Rather than build your own, let someone else worry about the development and support costs; all you have to do is turn up and use it, right? Whilst many companies will take this approach and put complete trust in their provider, an often forgotten aspect is backup.

Native Office 365 Protection

It is important to understand what Microsoft offers you in terms of data protection, as it may vary from what you are used to internally. In a traditional on-premises data protection scenario you would back up the full environment, including the infrastructure and application layers. This would give you not only a DR position but also granular restore of all the applications within it. Microsoft assures the availability of Exchange data by using DAGs across a geographically dispersed area, and backups for the protection of their infrastructure. To recover deleted items from Exchange within Office 365, by default you have 30 days of recovery points to choose from.

Limitations

First of all, it’s clear this is an all-eggs-in-one-basket approach, relying on one vendor for backups within the same application. In the traditional world of backups this would be frowned upon; ideally backups should be offsite and in another format. The second clear shortcoming is the number of restore points available: what if you need to recover further back than 30 days for a legal case? Whilst this can be amended, it’s clear it still won’t reach the time window many companies have come to expect. Finally, the standard Office 365 settings do not protect you against the disgruntled or reckless admin who deletes the data and the restore points.

Salvation is free!

The solution, of course, is to take control of your backups again. With version 9.5, Veeam announced that backups of MS Office 365 would be supported. The good news is there is now an offer for free Office 365 e-mail backups: current Veeam Backup & Replication Enterprise Plus customers will get two years of Veeam Backup for Microsoft Office 365 free, and customers with Standard or Enterprise edition will get one year free.

Nakivo 6.2 Enables AWS Backup

Background

Within the past few days, Nakivo announced the release of version 6.2 of their backup software. For those not familiar, Nakivo is a backup product for cloud and VMware-based environments. Nakivo first announced support for Amazon EC2 instances in the v6 release, and the latest incarnation of the product builds on this by providing a number of enhancements related to AWS.

Deployment

NAKIVO Backup & Replication can now be purchased from the AWS Marketplace and allows you to back up within the same region, across regions or back to on-premises. The product is intelligent in its deployment of transporters: each region requires a transporter to enable backups, and should a backup be requested in a region without an existing transporter, a new one will be spun up automatically.

Keeping costs down

As AWS is billed on an hourly basis, it is sensible to run servers only when needed to reduce costs. Transporter instances can be automatically powered off when not needed, and when a task is required of a powered-off transporter, Nakivo is able to power it back on to allow the completion of the task. Another new feature is the ability to harness native AWS instance replication to maintain identical copies, allowing for near-instant recovery.

Like the VMware version, the AWS flavour continues to offer global deduplication across all jobs and application-level backups. The press release contains the full details.

Survey reveals 95% of companies suffered an outage in the last year

95% of companies suffered an outage in the last year. This is one of the surprising facts discovered by iLand in a recent DR survey, which questioned 250 firms of 500 employees or more to understand the state of DR in the UK.

The results are interesting and show some key trends that should enable all companies to tighten up their DR policies.

The headline figure, that most organisations suffered an outage in the past 12 months, must be a wake-up call to get DR policies in order. Those that had experienced outages were then asked the cause; the top two causes were system failure and human error. The prevalence of ransomware and other cyber-attacks also showed up in the survey, coming in as the fourth most common cause of outages. I have covered considerations regarding ransomware and backups in detail previously.

Of those questioned, 87% had initiated some kind of failover in the past 12 months, but offset against this was the fact that the majority of these had encountered issues during the process. This seems likely to be linked to another statistic, which showed that only 63% of respondents have a trained team that tests DR either quarterly or twice a year.

Respondents were also asked about the amount of money their company was investing in DR: 57% believed the amount spent was correct, 26% said it was too little, while 17% believed too much was being spent.

Conclusions

So what practical use can we make of these figures, to allow organisations to learn from them and ensure that they are ready for a DR situation?

The take-home appears to be that the need to fail over in some form or another in the next 12 months is extremely likely, yet most organisations are not able to do this with 100% confidence. Given the increasing need for organisations to be continually available, and the potential financial and reputational losses of downtime, this has to be a concern for most companies.

Planning and testing of DR have to be the number one and two priorities that come out of this survey. Any backup should be tested on a regular basis, and when the intention is that the backup will be used for DR, the need becomes even greater.

As a final comment, it is worth noting that in my opinion the figures are overly optimistic. It is very difficult for any organisation to admit that it has failings, and these will only be recognised if a company has undertaken the correct testing or been forced to invoke DR.

All IT departments have a limited budget, the majority of which gets spent on end-user computing, where the focus of the business lies. However, a business today without its data is no longer a business. Organisations need to consider the appropriate spend on planning and testing to ensure that they are DR ready.

The survey results are summarised in this infographic.

Ransomware & Backups

Ransomware was first seen in the mid-2000s and has grown into a prevalent security threat, with TrendMicro reporting that they blocked more than 100 million threats between 2015 and 2016.

What is Ransomware?

Ransomware is essentially a hijack of the user’s machine that renders it unusable, or operating at reduced capacity, unless a payment is made. The hijacks fall into two main types of attack. In the first, a lock-out screen stops the user accessing any element of the system until payment is made. In the second, the user’s files are encrypted and again a ransom is demanded, this time to decrypt the files. The prevalence of these sorts of attacks is unfortunately directly linked to the fact that they have proved to be a highly effective business for the criminals behind them. We storage administrators have known for some time that data is critically important to both users and organisations; unfortunately it now seems criminals know this too, and they are willing to cash in. ZDNet estimated, based on Bitcoin transaction information, that between 15 October and 18 December CryptoLocker had been used to extort $27 million from victims.

Infection and removal

The method of infection generally takes two forms: either a machine already compromised with another form of malware triggers the attack, or it arrives through e-mail. E-mail-based attacks coerce the victim into releasing the payload via a link or attachment, often by pretending to be from a legitimate source, e.g. “click here to see your speeding fine”.

In terms of removal, screen-locking ransomware is generally the easier of the two and can be removed using traditional malware protection products. However, once a user’s files have been encrypted this poses a significant challenge. The encryption is often based on a public and private key system, with the private key known only to the hijackers. It is generally impossible to crack these encryption keys, leaving the only options as paying the ransom or restoring from backup. The police and most security vendors advise against paying the ransom, since it only fuels the crime.

Preventing Ransomware

Prevention is better than cure, and a multi-layered approach is suggested. This would include user education about the threats, and giving users the most restrictive rights so that execution is not possible. More direct prevention methods include the use of firewalls, end-user protection software and, of course, keeping patch levels up to date.

Ransomware and backups

Given that this is a data protection focused blog, I wanted to look at the specific considerations around backup, given that this is the predominant recovery method. It is an important consideration that the encrypting type of ransomware will look to encrypt all attached local and network drives. The behaviour of encrypting network shares can be particularly damaging to an organisation, and is why it is important that users are given the most restrictive rights possible, so that the ransomware cannot execute.

Considerations specific to backup are:

Replication is not backup – Sometimes high availability and backup are confused. Replication is not backup, and ransomware is a good example of why not: if the primary end becomes infected, so will the target once replication is complete. Bear in mind this would include automatic backup-to-the-cloud services.

Hold an offline copy of data – Whilst there have been no confirmed cases of backup software being hit by an attack, it is a sensible precaution against a future variant to keep a backup copy offline, or at least in a separate media form. This is in accordance with the standard good practice laid out in the 3-2-1 rule: have 3 copies of your data, on 2 different types of media, with 1 offsite copy.

RPO becomes key – With the random nature of these attacks, and the potential level of destruction with multiple key file shares being rendered unusable by a single user, how much data can you afford to lose? For those shares which you consider to be at greater risk, perhaps due to the number of users, you could consider a shorter RPO. Read this article to learn more about selecting an effective RPO and RTO.

Number of recovery points – The number of recovery points and the retention policy also need to be considered. If you are using a simple policy of 14 days, for example, it is possible that an infection of an infrequently used share, such as one containing monthly finance reporting, may only be noticed once all the backups also contain the encrypted files; a simple illustration of this follows the list.

Endpoint backup – If users save files locally to their desktop or laptop, consider endpoint protection such as Mozy or Veeam Endpoint Backup to safeguard these devices.
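
As a simple illustration of the recovery points consideration above, here is a minimal PowerShell sketch (the dates and retention value are hypothetical) showing how a 14-day policy can leave no clean restore points for a share that is only opened monthly:

```powershell
$retentionDays = 14
$infected = Get-Date '2016-11-01'   # files silently encrypted
$detected = Get-Date '2016-12-01'   # next time the monthly share is opened

# Oldest restore point still held on the day the infection is noticed
$oldestPoint = $detected.AddDays(-$retentionDays)

if ($oldestPoint -gt $infected) {
    Write-Output 'Every remaining restore point already contains encrypted files'
} else {
    Write-Output "Clean restore points exist back to $oldestPoint"
}
```

In this example the oldest surviving restore point dates from 17 November, more than two weeks after the encryption, so every backup still on hand is already poisoned.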

 

 

How to choose the correct RPO & RTO for your business

RPO v RTO

It may seem pedestrian to be covering RPO and RTO; everyone knows this stuff, right? Whilst many readers will already know the dictionary definitions, it is commonplace not to fully understand the business implications of these decisions. This article will cover RPO and RTO, with a specific focus on how these terms relate to the business.

Definitions

RPO stands for Recovery Point Objective, whilst RTO stands for Recovery Time Objective. RPO measures the furthest point back from the current time that you must be able to roll back to, in other words the maximum window of data you can afford to lose; nightly backups, for example, give an effective RPO of up to 24 hours. RTO is possibly simpler to understand and measures how quickly the restoration of data must occur. Those are the definitions, but to understand the implications for a business a better way to think about them is with two questions:

1. What is the maximum amount of data loss that is acceptable to the business? = RPO

2. How long can the business afford to be without the data or the services that rely on that data? = RTO

So I want zero right?

Once most organisations understand RPO and RTO in terms of business impact, their next statement will be that they cannot afford any data loss or any downtime. Whilst there will be certain circumstances where this is necessary, it will come at a price: in general, the closer you get to zero for either measure, the higher the cost of the solution will be. The amount of money spent on the solution needs to be proportionate to the financial and other costs of downtime. There may even be circumstances where the cost of protecting the data is more than that of simply starting from scratch.

Not all data is equal

This leads nicely into the next point: not all data is equal. Whilst it may be required to assure minimum downtime and zero data loss for financial data, this may be less true for user file data. As well as the data important to the business, organisations also need to consider industry regulatory and legal requirements around data. For example, an organisation holding medical records would almost certainly be required to ensure the availability of those records for years to come. Organisations will also need to consider what I would call soft factors, which are harder to measure but would include things like impact on reputation, and the uniqueness of the data coupled with the ability to recreate it. An example of unique data would be a media company which stores movies.

Putting it all together

Work through the following checkpoints to ensure you select the correct RTO and RPO for your organisation.

  1. Understand the financial impact of downtime, to help develop a budget (a simple costing sketch follows this list)
  2. Understand the other impacts of downtime, to further inform the budget
  3. Understand regulatory and legal obligations
  4. Remember not all data is the same; classifying it will allow a more targeted RPO/RTO and potentially reduce the cost of the project
  5. Ask how much data we can afford to lose; this is your RPO
  6. Ask how long we can afford for the system to be down; this is your RTO
  7. Remember cost will generally increase the closer you aim to get your RPO & RTO to zero
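
As a rough worked example of point 1, here is a minimal PowerShell sketch (all figures hypothetical) comparing the expected annual cost of downtime at a proposed RTO against the cost of the protection solution:

```powershell
# Hypothetical inputs for a single critical system
$costPerHour      = 5000    # estimated financial impact of downtime, per hour
$targetRtoHours   = 4       # proposed recovery time objective
$outagesPerYear   = 2       # expected outages per year
$solutionCostYear = 20000   # annual cost of the proposed protection solution

# Expected annual downtime cost if every outage is recovered within the RTO
$downtimeCost = $costPerHour * $targetRtoHours * $outagesPerYear

if ($solutionCostYear -lt $downtimeCost) {
    Write-Output "Solution justified: downtime exposure is $downtimeCost per year"
} else {
    Write-Output "Exposure is only $downtimeCost per year; consider a cheaper solution or a longer RTO"
}
```

Here the exposure is 5000 × 4 × 2 = 40,000 a year, so spending 20,000 on protection is proportionate; tightening the RTO towards zero would push the solution cost up, which is the trade-off described above.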

Stay in touch for more data protection articles by following on LinkedIn and Twitter.

Veeam’s next big thing

The good ol’ days

As exciting as IT is now, things were simpler in the old days. You had a bunch of physical servers, a tape drive in the corner and some backup software. As long as you remembered to change your tapes, you knew everything was being backed up, and if you really wanted to go the extra mile you took that tape off site. Today’s IT solutions, including on-site, public/private cloud and SaaS, mean that data is disparate and the backup situation complex. Whilst there are already data protection offerings to meet each individual requirement, there wasn’t a single vendor with a vision that covered all the requirements of today’s IT departments. With Veeam’s next big thing, they have come to the table and put forward a complete vision for data protection.

Key Announcements

Let’s look at the collection of announcements that form the vision. The diagram below summarises the proposal that Veeam can be used as a single tool to back up public and private clouds, physical machines and Office 365.

[Diagram: Veeam Availability Console overview]

Office 365 Backup – Arguably the most significant announcement was the integration with Office 365. This new functionality allows the backup of Office 365 data to a Veeam repository, enabling the recovery of individual mailbox items and eDiscovery of Exchange items.

Agents – Veeam Endpoint has been available for some time, with a suggested use of backing up end-user workstations. Veeam have enhanced support for application-consistent backups, and the Veeam Agent for MS Windows is now supported to protect your physical MS servers. The agent for Linux, already available in beta, has a similar use case and can also be used for all your physical servers.

It has been a frustration of mine for some time that agents were not available for physical workloads, so it is good to see that covered off. More significantly, Veeam has stated that these agents can also be used for the backup of VMs that live in the public cloud.

Veeam Availability Console – This is the one console that ties together all the components, and again nicely illustrates Veeam’s vision of a single product to back up and control your company’s dispersed data. The Availability Console comprises the Veeam Availability Suite, which is the console most of you are familiar with and probably think of when you think of Veeam. VAC also enables you to manage all your agents from a single console, something that was not possible with Endpoint. This effectively means you can manage your traditional Veeam snapshot backups, plus physical and cloud backups, from a single console.

Further Details

Watch the announcement

Good overview from Anthony Spiteri

Nice summary of all the news by Michael Cade

Staying connected

Stay informed and keep your dataON247 by following on LinkedIn and Twitter, plus subscribe to our mailing list.