Planning for power blackouts has been a low priority for many businesses over the last couple of decades, because power supplies have been reliable.

But we now know there is a good chance of at least one outage this winter, despite government and energy suppliers doing what they can to keep supplies in place.

If your business has not already prepared and communicated a blackout continuity plan, then time is of the essence. Failure to have a plan could be seen as negligence in the event of an incident, and regulatory bodies are unlikely to be impressed.

Assessing risk

You need to review your risk register and continuity planning to reflect the raised likelihood of power outages. Part of this should include evaluating all key assets and making sure these are all logged.

This will raise your risk scores and the focus needed in these areas, but it will help you contain the prospect of an even bigger problem in the future.

Blackout preparation

Remote workers

The increase in remote working since the pandemic could make the loss of power at employee homes a disruptive and costly problem.

If energy supplies are diverted to major business districts and away from residential areas at set times, which is likely in a supply-issue scenario, remote working could become difficult.

Ensuring that employees keep their laptops on charge may seem like an answer, but it is useless if their broadband router has no power. Trying to connect from mobile hotspots could also prove unworkable if everyone in that area is doing the same.

An agreed process for blackouts at employees’ homes should be in place. It’s important to have agreement on what people do in the event of an outage, such as how long to wait before finding an alternative work location.

Workplace capacity

If you have a dispersed workforce, check whether you have office space to accommodate everyone during a power outage scenario.

If space is not sufficient, options include partnering with other organisations on other grids to allocate space. It’s unlikely that your people will be able to go to a coffee shop or hotel to work, as the chances are that these will already be full.

Many larger businesses have diverse power feeds from different grids. If you do not have this, you may want to look at the options and consider the cost vs impact.

For office-based IT equipment, consider having two firewalls, two routers and redundant switches. One of the biggest risks when power goes off and comes back on is losing a key element of a network, which can take a whole office down.

Protect against spikes

The big worry of power switching on and off is that this can blow electronic devices, such as switches, routers, servers, storage, and firewalls.

Devices like these should be on uninterruptible power supplies (UPS), both to protect against spikes and to allow any remaining on-premises server environments to shut down gracefully.

It’s important to ensure that all UPS devices have current warranties and have been load tested to ensure they have enough run time to shut down equipment gracefully.
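If you have technical staff in-house, that graceful shutdown can be automated. Below is a minimal sketch, assuming a Network UPS Tools (NUT) setup where the upsc client exposes ups.status and battery.charge for a UPS we have arbitrarily named officeups; the 40% threshold is illustrative, and many UPS vendors supply an agent that does this for you.

```python
# Minimal sketch: poll a UPS via NUT's `upsc` client and trigger a clean
# shutdown once the battery runs low. Run as a small service on the host.
import subprocess
import time

UPS_NAME = "officeups@localhost"  # hypothetical UPS name on the local NUT server
SHUTDOWN_AT_PERCENT = 40          # illustrative threshold

def upsc(variable: str) -> str:
    """Read one variable from the UPS via the NUT command-line client."""
    result = subprocess.run(["upsc", UPS_NAME, variable],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

while True:
    on_battery = "OB" in upsc("ups.status")   # "OB" = on battery, "OL" = on line
    charge = int(upsc("battery.charge"))      # remaining charge, percent
    if on_battery and charge <= SHUTDOWN_AT_PERCENT:
        # Shut the host down cleanly while there is still runtime left.
        subprocess.run(["shutdown", "-h", "now"], check=True)
        break
    time.sleep(30)
```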

Home workers also face the possibility of power outages, and of power-on spikes damaging their equipment. To mitigate this, keep spare wireless routers and laptops that can be sent out to employees at home. Although the loss of a home router is not the employer’s fault, it is still a problem if employees are unable to work.

Supply chain

Evaluate your supply chain. You need to understand how power outages may affect your partners and suppliers and in turn, your business. Check that suppliers have clear plans for dealing with power outages and ask for this in writing if possible.

You should also speak to your energy suppliers and ask for written confirmation of what could, will and will not happen if supply difficulties occur.

Do you know how to keep your business running successfully in a crisis?

Business continuity planning is the process of creating a strategy which identifies and documents risks to a company and outlines processes of prevention and recovery.

It ensures that, in the event of a disruption, disaster or accident, personnel and assets are protected and able to function normally. It is something many businesses and business leaders have had to adapt to, particularly in the 18 months since the Covid-19 pandemic began.

It should include steps to take before, during and after an event to maintain business operations and financial viability. Business continuity planning is essential for companies of all sizes. However, unfortunately, many still aren’t getting it right. Companies continue to suffer IT outages which should be avoidable or easy to recover from.

There are several reasons for weaknesses in current business continuity plans, such as underestimating risk and failing to test and review plans. Although nobody likes to think about “what if” situations, it is critical that business leaders understand the threats, prepare for them, and act accordingly.

Continuity Planning: 10 crucial areas business leaders must understand to create a successful plan

A 10-point basic business continuity plan checklist for business leaders:

  1. Which IT systems need protecting? Email, ERP, financial applications, CRM, etc.
  2. What data needs protecting from loss or corruption? Data within databases, standard documents, etc.
  3. What are the costs per hour of losing access to systems and/or data? Use a template to calculate.
  4. How long can the business tolerate the loss of particular systems/data? Consider potential financial and reputational damage.
  5. How much data can the business afford to lose without significant impact? None? A minute’s worth? An hour’s or a day’s worth?
  6. How long should saved data be retained? What are the regulatory requirements in the country and particular sector?
  7. How long will it take to restore services? Are the current technologies and systems in place to deliver the business’s operational needs?
  8. Do you have the skills in-house to ensure the business can recover from a disaster?
  9. Do you have a documented plan that is regularly reviewed?
  10. How often do you test recoveries? It should be as often as feasible, or at minimum, annually.

Your business’ Internet connection now means so much more than just being able to browse websites. So many programs, services and features rely on an Internet connection that if yours went down, you would feel an instantaneous impact.

Businesses constantly use the Internet to communicate with their clients, collaborate with colleagues and access cloud-based systems such as Office 365 and Salesforce. Using the Internet is so ingrained in our workday, there’s little you could do without it.

The only good thing which comes from an Internet outage is… well, there isn’t one.

How much does a lost Internet connection cost?

Internet outages cost UK businesses nearly £7 billion in 2016. Whilst that was a few years ago, don’t let the age of the stat lull you into a false sense of security: the figure has been rising year on year, so today’s total will be far higher.

The table below shows the impact downtime has on varying sizes of businesses based on both the productive time lost and the cost of an outage.

[Table: the productive time lost and cost of an outage for varying sizes of business, from the study investigating the cost of Internet outages.]

Businesses experience an average of 4.3 outages per year – each of which costs a mid-sized business an average of £3,644.

Internet outages are clearly expensive and so your business should be doing everything to prevent them. Luckily, it’s not difficult to reduce the chance of your business seeing an outage.

How to prevent an Internet outage?

[Image: the difference between broadband, leased lines and 4G services.]

Before considering getting two Internet connections, you must ensure your primary connection is the correct type for your business. There are three main types of connection: broadband, leased lines and 4G (mobile) services.

Getting a second Internet connection

Once your primary connection is suitable, the second step is to add redundancy to your Internet connection. Many businesses think a leased line gives them immunity to an outage. However, while you may still get limited connectivity during a wider network outage, you should aim for no loss of service at all.

Since we operate leased lines for many of our clients, there are a few best practices and common mistakes we’ve seen which you should be aware of when planning your own leased line.

The Last Mile Rule

The ‘Last Mile Rule’ states that the final mile of cabling which connects your business to the Internet should be physically separate between your two connections. This isn’t always possible due to external infrastructure and costs, but it’s worth aiming for.

Having the connections enter your office from alternate directions and cabinets means a physical disruption (perhaps caused by overzealous construction workers) only impacts one cable – allowing you to maintain connectivity.

To take it further, a secondary Internet connection should be run from a different telephony exchange – meaning that an issue at an exchange doesn’t bring down your connectivity.

Automatic switching

Manually reacting to an outage is not ideal. It’s stressful, confusing and results in unnecessary downtime for your business. Instead, you want to configure your connection to automatically switch over to the secondary circuit if the primary one is down.

Typically, this is achieved by intelligent firewalls or two carriers (ISPs) working in conjunction via a managed service.

It’s also worth considering using the second connection, rather than just having it sat idle. Many organisations push certain traffic over the secondary connection, such as backups or voice calls. If the second line fails (often more likely than the primary failing), that traffic can simply fail back to the primary connection.
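For illustration, here is a minimal sketch of the health-check logic behind automatic switching, written for a Linux gateway with two circuits. The gateway addresses are placeholders, and in practice your firewall’s built-in WAN failover (or the carriers’ managed service) would handle this for you.

```python
# Sketch: keep the default route pointed at whichever circuit is alive.
# Requires root privileges; run on a schedule (e.g. every minute from cron).
import subprocess

PRIMARY_GW = "192.0.2.1"       # placeholder primary circuit gateway
SECONDARY_GW = "198.51.100.1"  # placeholder secondary circuit gateway
PROBE_IP = "8.8.8.8"           # well-known address used to test reachability

def circuit_is_up(gateway: str) -> bool:
    """Ping the probe address via a host route pinned to one gateway."""
    subprocess.run(["ip", "route", "replace", PROBE_IP, "via", gateway],
                   check=True)
    result = subprocess.run(["ping", "-c", "3", "-W", "2", PROBE_IP],
                            capture_output=True)
    return result.returncode == 0

def set_default_route(gateway: str) -> None:
    subprocess.run(["ip", "route", "replace", "default", "via", gateway],
                   check=True)

if circuit_is_up(PRIMARY_GW):
    set_default_route(PRIMARY_GW)
else:
    # Primary looks dead: fail over to the secondary circuit.
    set_default_route(SECONDARY_GW)
```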

Diversify line providers

Rather than going straight to your current line provider for your secondary connection, consider diversifying to another provider instead.

In the UK, BT Openreach and Virgin Media are the two largest owners of cable infrastructure, so if you already have a connection with one, it’s worth diversifying into the other. This is so that if the network provider themselves experiences an outage, you don’t lose your primary and secondary connections because of it.

Another benefit to a diverse approach is that if one of the major providers goes down, you can be overwhelmingly smug that your operations keep humming along whilst your competitors are frantically putting out fires and incurring reputational damage.

What is the cost of two Internet connections?

The direct cost of a second Internet connection will vary depending on local pricing so research your providers. If an identical line is too expensive, you could consider purchasing a reduced capacity line instead, i.e. a broadband circuit to just allow critical services to run in a disaster.

Doing this ensures a primary line failure won’t completely take you down, but you may find it difficult to perform Internet-heavy actions. Consider how much bandwidth you use normally and your usage at peak times to help you choose a sufficiently effective backup line.

What is the ROI of a second connection?

A second Internet connection is a preventative investment, meaning you cannot calculate ROI in the traditional sense. Instead, look at how much money your business is losing from downtime, then map this against the cost of a backup line.

As medium-sized businesses typically lose £3,644 per outage and experience 4.3 outages per year, a secondary connection would save them £15,669.20 on average every year. This can be considered the yearly ROI.

To calculate this for your own business, use this simple downtime cost equation: find your cost of downtime per hour, multiply it by the average length of your outages, then multiply it again by your average number of outages per year.
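As a worked sketch of that equation, using this article’s averages (the two-hour outage length is an assumption, chosen purely so the per-outage figure matches the £3,644 average):

```python
# Downtime cost equation: hourly cost x average outage length x outages/year.
def annual_downtime_cost(hourly_cost: float, avg_outage_hours: float,
                         outages_per_year: float) -> float:
    return hourly_cost * avg_outage_hours * outages_per_year

# £1,822/hour for two-hour outages = £3,644 per outage, 4.3 outages a year.
print(annual_downtime_cost(1822, 2, 4.3))  # -> 15669.2
```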

It’s also important not to focus solely on the hard costs. You also need to consider soft costs, such as reputational damage. If your operation were offline for three days (very possible), how would that impact your reputation?

Does my business need two Internet connections?

This question is akin to asking, “does my business need to be accessible to clients and customers?” or “do my employees need to do their work?”. If you’re a micro-business, then you can probably get away with a single connection because downtime losses are minimal. But otherwise, it becomes not a question of if you should get two Internet connections, but when you should get your second connection.

Don’t make the mistake of thinking a disaster won’t happen to you. Too many businesses put off investing in their business continuity and then take a permanent blow to their reputation rather than enjoying business as usual. Don’t let that be you. 

Here’s an important question for you: Do you only eat your lunch after having died from starvation?

You will have likely answered “no” to that question, so here’s another one for you. Do you only fix your IT issues after they cause damage to your business?

You probably said “no” again. But unless you’re using proactive IT support, you should have said “yes”, because that is exactly what you’re doing: waiting until the worst has happened before addressing a problem.

What is reactive IT support?

Reactive support (sometimes called break/fix support) is where the focus is on fixing IT issues after they occur, instead of preventing them from occurring in the first place.

For a long time, reactive support was the only type of IT support possible. But with modern analytics and systems management tools, better monitoring and even the rise of AI-enhanced predictive models, proactive support is now not only possible but widely available.

Yet despite the proactive model being available, many businesses continue using reactive support – often unaware of the damage it’s causing them.

Their choice of IT provider is most often to blame, since the cheapest support rarely offers even a hint of proactivity. Instead, cheap providers favour the legacy break/fix approach as it allows them to make better margins from their clients.

Why is a reactive approach not good enough anymore?

1. Leaves the core of the business vulnerable

IT is vital to every department and process within a business. So, if there’s a problem with IT, there’s a direct business impact. This can range from a simple inconvenience right through to a complete halt of operations, accompanied by the usual reputational damage.

With a reactive approach, these problems, both large and small, can arise far more often. This isn’t necessarily because reactive support is worse at fixing problems, but because reactivity is worse at preventing them.

With a reactive approach, an issue needs to be actively causing pain before it’s addressed. And this results in far more issues reaching employees.

Compared to the proactive approach’s continual improvement mindset, reactivity is also lacking. For a start, a reactive approach has no way to stop issues before they begin impacting the business. Reactivity also lacks the ability to apply past experience from one client to another, which would eliminate most common issues completely.

With so many things going for proactivity, it seems like it should be the default. But it’s an approach that many IT providers only pay lip service to. Only with a focus on continual improvement, along with ensuring all systems are proactively monitored, can an IT provider call themselves proactive. Once they do, many problems can be fixed long before their effects become visible, reducing potential damage and minimising employee downtime.

2. Negligent to your clients/customers

By the time a reactive IT support provider begins addressing an issue, your customers or clients will already be feeling the negative effects. Perhaps a crypto-jacking infection on your web server is causing your website to become unresponsive, locking out customers. Or a failed piece of hardware has meant critical client assets are lost. These sorts of issues occur far more often with a reactive approach in place and can have major ramifications for your business.

The largest of these is that outages = lost clients. We live in a time where every business is commoditised. So if you experience frequent issues due to reliance on reactive IT support, your clients can and will switch to your competitors.

Additionally, if you have SLAs with clients, failing to meet them due to a spotty service can have direct financial repercussions. Needing to compensate your clients will not only cost you a great deal but will also erode trust, resulting in further problems.

3. Allows issues to grow out of hand

With a reactive approach, issues are only fixed once they’re having an impact on your business. This means that a problem which has no immediately visible impact can go unnoticed until it’s far too late. Here are a couple of examples of the sort of things which can go wrong.

A few hours before going out to meet a prospect, a director’s laptop locks up with a message stating she must pay a ransom to unencrypt her data. Clearly the victim of a ransomware attack, the director is dismayed to find it has encrypted the files she needs for her meeting.

Upon recovering the files from their nightly backup, the company finds that the latest snapshot was actually several weeks old. The nightly backup had been encountering an error and failing each night. Without proactive monitoring in place to spot this issue, weeks of data were lost including the files she needed for her prospect.

On Monday morning, the finance department finds they can’t access their finance software and are all seeing an identical error code. Upon calling their reactive support desk, they discover that the error code means the software’s licence key has expired. It takes a day to renew the licence key and costs a considerable amount to do so. Because licences were not proactively managed, a whole day of productivity is lost and the unexpected cost takes a chunk out of the department’s budget.

A final point here: with a reactive support provider, there’s no guarantee that an issue fixed once is fixed for good or across all systems. Without proactivity, the same issue can arise many times, needing to be fixed from the ground up each time.

A proactive IT provider will instead flag the example issues as non-conformances due to their impact. Then, by putting controls in place, they would ensure that the issue won’t happen again not only for the affected client but for any of their clients. This prevents wasting resources on readdressing issues whilst also ensuring you’re always becoming more resilient to issues.

4. Blind to vulnerabilities

When most businesses think of cyber-attacks, they think of ransomware or DDoS attacks, both of which are very visible. But most malware is designed to stay hidden on a network for as long as possible, stealing as much data as it can or working its way up the chain of permissions to deliver a catastrophic blow.

With the average compromised system staying undetected for 146 days, having no active monitoring due to reliance on reactive support is a dangerous choice to make. By leaving yourself blind to hidden vulnerabilities due to a lack of active monitoring, the impacts of a breach can also become far worse.

Lacking proactive monitoring and system vulnerability scanning allows these threats and more to stay on your network for far longer, putting your business at a much greater level of risk than it needs to be. With proactive monitoring and regular vulnerability scans, you can identify these risks and remediate them far quicker.

5. Normalises failure

When using reactive IT support, issues will often be common, recurring and irritating. The sheer volume of these small problems can easily overwhelm employees, causing them to either just get used to it or leave the company. Neither outcome is ideal.

In the case of employees who leave, a replacement must be found and retrained. Even then, the replacement may end up leaving the company for the same reasons.

Considering that the cost to replace a well-trained employee can exceed twice their yearly salary, high turnover can be catastrophic for your cashflow.

As for employees who get used to the issues, they may end up causing you more financial damage than those who leave…

6. Kills efficiency

With a reactive approach, each small issue needs an employee to take time out of their day to deal with it, instead of it being pre-emptively resolved.

Whether it’s through having to call the reactive service desk or through reduced productivity whilst dealing with the issue, even a few minutes of disturbance per issue makes the wasted time mount up.

For example, if each small problem takes five minutes to identify, diagnose and fix, and each employee experiences just one issue per day, a company with 40 employees will lose 16 hours and 40 minutes each workweek.

Extending this over a (four-week) month, the company will lose 66 hours and 40 minutes. Over a (48-week) year, 800 hours will be wasted – the same as having an employee lie on the floor all day, for 100 days, whilst on full pay.
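The arithmetic is easy to check, and to re-run with your own headcount:

```python
# Reproducing the example above: five minutes lost per employee per day.
EMPLOYEES = 40
MINUTES_PER_ISSUE = 5
WORKDAYS_PER_WEEK = 5

weekly_minutes = EMPLOYEES * MINUTES_PER_ISSUE * WORKDAYS_PER_WEEK  # 1,000
print(weekly_minutes / 60)       # ~16.7 hours lost per workweek
print(weekly_minutes * 4 / 60)   # ~66.7 hours per four-week month
print(weekly_minutes * 48 / 60)  # 800 hours per 48-week working year
```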

It’s also worth remembering that without proactive management, the same issues can keep recurring. From this, it should be easy to see how lost time can pile up, causing a significant impact on a business’s operations.

7. Proactivity is possible

This list could have consisted of this point alone because the simple fact that proactivity is possible should be enough of a reason to change to it. However, this wouldn’t have been very informative to you, the reader. Nor would it highlight the potential dangers of continuing to use the reactive model.

When comparing the two models, it’s not even a matter of weighing up the pros and cons. The proactive model is a direct upgrade. For one final analogy, it’s like determining whether to use a Palaeolithic hand axe (see: sharp rock) or a chainsaw to cut down a tree.

It’s also worth noting here that many IT support providers sell themselves as being proactive when in truth they’re not. It may be that their monitoring, or one other part of their operations, is proactive. But that alone does not make them a proactive provider.

You should aim to understand how your IT system is managed since this shows you what gains can be made with some quick initial changes.


It’s concerning how few businesses understand how much downtime costs them, be it for an hour, a day or a week.

Fortunately, understanding these costs at a notepad level is easy and having the figures on hand allows you to make measured business decisions about how much to spend to improve your operations and to mitigate risks.

Many businesses assume they could survive with a day’s worth of downtime. However, they don’t factor in the true cost in terms of lost revenue and fixed costs, such as salaries and utilities.

Here are some basic calculations to help you work out how much downtime would actually cost your business.

How to calculate lost revenue to downtime

Often, when calculating the cost of an IT outage or other disaster, businesses will just look at their fixed costs such as the cost per hour of staff. However, the real cost comes from the lost earnings and revenue. The calculation is simple at a basic level:

Lost Revenue = (Weekly Revenue / Weekly Work Hours) * Hours of Downtime

As an example, if your business usually makes £200,000 per week over 40 working hours, a single hour’s outage will result in a loss of £5,000. That would be £40,000 a day.
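Here is the same formula as a short script, reproducing the worked example:

```python
# Lost Revenue = (Weekly Revenue / Weekly Work Hours) * Hours of Downtime
def lost_revenue(weekly_revenue: float, weekly_hours: float,
                 downtime_hours: float) -> float:
    return (weekly_revenue / weekly_hours) * downtime_hours

print(lost_revenue(200_000, 40, 1))  # -> 5000.0 (£5,000 for one hour)
print(lost_revenue(200_000, 40, 8))  # -> 40000.0 (£40,000 for an 8-hour day)
```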

Of course, the type of business is a factor. If it’s a law firm, you’re likely looking at the flat calculation above. If you’re an estate agency, you may still be able to operate for a few days, as your diary and contacts have been synchronised to local devices. However, you will be losing money regardless of whether you can scrape by.

How to calculate fixed costs

During an outage you can’t send your employees home without pay, nor can you just skip the building’s rent for that day. In many business sectors, a serious IT outage will impact a large percentage of the workforce. A few will be fighting fires, but many will be idle and this is where the bulk of fixed costs will come from.

A simple calculation for fixed costs is:

Fixed Costs = Number of Employees * Hourly Wage * Hours of Downtime

As an example, if you have 50 staff and on average they are paid £20 an hour you’d be losing £1,000 an hour. That would be £8,000 for a day’s outage.
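And the fixed-cost side, again reproducing the example (the combined £6,000-an-hour figure is picked up in the next section):

```python
# Fixed Costs = Number of Employees * Hourly Wage * Hours of Downtime
def fixed_costs(employees: int, hourly_wage: float,
                downtime_hours: float) -> float:
    return employees * hourly_wage * downtime_hours

print(fixed_costs(50, 20, 1))          # -> 1000 (£1,000 an hour)
print(fixed_costs(50, 20, 8))          # -> 8000 (£8,000 for a day's outage)
print(5_000 + fixed_costs(50, 20, 1))  # -> 6000 (£6,000 total hourly loss)
```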

In short…

If you use the figures above, you’d be losing £6,000 an hour for a business turning over ~£10 million. Although the calculations are basic, they give insight into the fundamental costs which is enough to start informing your decision-making process regarding business continuity and disaster recovery.

You’d also need to look at other areas where you’d lose money – you could have reputational damage, recovery costs, etc. But it’s unlikely that you’d need to go into such detail to make measured decisions on how you’re going to control the areas of risks within your business.

You can certainly dive deeper and look at the cost per individual IT system, but these calculations are a good starting point to understand what you need to be doing – and should be doing – to protect your business.

Thanks to the rapid development in technology and the ever-decreasing costs, controlling these risks for a sensible cost is a reality. A £10 million business should be able to protect their IT systems for the cost of a few hours downtime or less.

When browsing for an IT service, it’s common to see a 99% uptime guarantee in the SLA. Occasionally you might spot a 99.9% uptime guarantee. And rarely you might even find a 99.9999% uptime guarantee, though that’s typically a sales ploy. Whilst these numbers sound good, what do they actually mean for your business? As it turns out, a 99% guarantee just isn’t good enough anymore. Here’s why…

[Infographic: the maths behind calculating downtime costs.]

How much downtime in a 99% guarantee?

If your business uses a service with a 99% uptime guarantee that means you should expect up to:

- 3 days, 15 hours and 22 minutes of downtime per year
- 7 hours and 17 minutes of downtime per month
- 1 hour and 41 minutes of downtime per week

It’s important to note that these figures assume downtime only ever occurs during opening hours. In the ‘real world’, about a third of downtime would occur during work hours (in the actual real world it’s more than a third, because higher loads on services at this time increase the chance of an outage).

But even if we take the more ‘realistic’ 1/3 value you still end up with 1 day, 5 hours, 7 minutes and 12 seconds of unplanned downtime a year. This would cost 98% of businesses over £2,233,000 in lost revenue according to data from an ITIC survey. And this is without mentioning the costs down the line if clients decide to leave based on poor performance.

For many companies, this level of downtime is unacceptable. Not only for financial reasons but for the impact it has on their image. Many customers expect a responsive service 24/7. So even a short period of downtime can permanently taint a user’s opinion of a company meaning that better guarantees are necessary.

How much downtime in a 99.9% guarantee?

99.9% uptime guarantees (referred to as “three nines”) have become the new standard for most digital services. They provide decent availability with only a small amount of unplanned downtime. With a ‘three nines’ plan you should expect up to:

- 8 hours and 44 minutes of downtime per year
- around 44 minutes of downtime per month
- around 10 minutes of downtime per week

It might be surprising to see the impact of an additional 0.9%. But the reason that the change is so drastic is that 90% of the 99% guarantee’s downtime is removed with the addition of another nine. Because each nine reduces downtime by 90%, uptime guarantees become exponentially more effective.
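You can see the pattern with a few lines of code. This sketch assumes the 52-week year that the figures in this piece are based on:

```python
# Annual downtime implied by an uptime guarantee. Each additional nine
# removes 90% of the remaining downtime.
HOURS_PER_YEAR = 52 * 7 * 24  # 8,736 hours in a 52-week year

def downtime_hours_per_year(uptime_percent: float) -> float:
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for guarantee in (99, 99.9, 99.99, 99.999):
    print(f"{guarantee}% -> {downtime_hours_per_year(guarantee):.4f} hours/year")
# 99%     -> 87.3600 hours (~3.6 days)
# 99.9%   ->  8.7360 hours
# 99.99%  ->  0.8736 hours (~52 minutes)
# 99.999% ->  0.0874 hours (~5 minutes)
```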

How much downtime in a 99.99% guarantee?

A “four nines” guarantee gives even better rates of uptime. If you use a service offering a 99.99% guarantee you should expect a maximum of:

- around 52 minutes of downtime per year
- around 4 minutes of downtime per month
- around 1 minute of downtime per week

Less than an hour of downtime a year sounds good (and it is good). But there’s the big factor of cost to remember. As strange as it sounds, at a certain point it becomes more cost-effective to allow downtime to occur because the cost of a more reliable service outstrips the losses caused by an outage.

There’s still the reputational damage to consider though so, as it is with everything, it’s a matter of balancing things correctly.

How much downtime in a 99.999% guarantee?

“Five nines” is currently regarded as the gold standard for uptime terms because of how small the margin for error is. A service running at exactly 99.999% uptime would take 11.45 years to accumulate a single hour of downtime.

What about a 100% uptime guarantee?

Unfortunately, it’s statistically impossible to guarantee 100% uptime, because something can always go wrong. Despite this, you still sometimes see a service offering it. Quite simply, whenever you see a 100% guarantee it is either a sales ploy, an overconfident service provider or a scam.

In a lot of cases, a 100% uptime guarantee is backed by little or nothing of substance in the service level agreement (SLA). This means that if the provider doesn’t reach the promised 100% uptime (which they won’t), you get nothing as compensation.

An example of this would be if the SLA has a strict classification of what can be claimed on. An SLA could state that for it to be claimed on, a period of downtime must be longer than 2 hours and be caused by a technical fault on their hardware. This means that an hour-long outage wouldn’t be claimable. Neither would an outage caused by bad weather disrupting your connection, even where it causes damage to your business.

It’s important to always read the terms and conditions thoroughly for any service provider you sign up to. It’s equally important to consider other points besides just the uptime percentage. Factors like the SLA, hardware, security and compensation can be just as important, if not more so.

How much downtime in a 99.9999999% guarantee?

Although a 100% guarantee is impossible, we could hypothetically get pretty close in the future. A “nine nines” uptime guarantee would allow you to enjoy a maximum of:

- around 31 milliseconds of downtime per year – roughly a third of a second per decade

Blink.

You just experienced a decade of downtime with a “nine-nines” uptime guarantee. In fact, if you’ve been reading this at an average speed, you’ve experienced about 9,000 years of downtime between the start of this blog and now. That’s pretty impressive.

Unfortunately, we’re a long way off getting hardware setups sophisticated enough to reliably give this level of availability. And even if we were, it would be ridiculously expensive because of the ludicrous amount of redundant hardware, backup, maintenance, monitoring and security systems, power infrastructure and planning that such a system would require.

That’s not to say it’s impossible, but it’s a long, long way from where we are right now.

Business continuity planning involves creating a strategy to prevent, reduce and recover from risks to an organisation.

Many organisations still have business-impacting IT outages that should be avoidable, or quick to recover from.

There are six key reasons why these types of IT outages continue to impact businesses.

1. Not understanding risk

Most businesses would be surprised if they listed out every asset or asset type within their business and then looked at every risk associated with it. What’s the likelihood of that risk type affecting the asset or the wider business? What would the impact be on the business? It’s impossible to protect against something you are unaware of. It’s critical that a business understands, at the very least, the IT assets they have and the associated risks to the business. However, when you’re talking business continuity it’s best to include other types of asset, such as key employees or sites.

2. Having no controls in place

Once you understand the risks, you can put controls in place to reduce or mitigate the risk. This can be something as simple as protecting a laptop from Trojan software with anti-virus protection, through to protecting against a systems outage by replicating all data and systems into the cloud, or into another site. Controls need to be sensible and considered, hence why it’s critical for a business to understand the true cost of a system outage.

3. No reviews

Business continuity must be a living entity within a business. Every new asset should be logged, have its associated risks identified and have applicable controls put in place. The controls, particularly around continuity, must be regularly reviewed and tested – and ‘regularly’ means testing as often as is feasible. If you’re waiting longer than a year between reviews, you’re leaving yourself highly vulnerable.

4. Not using the right technology

Over the last decade, technology has dramatically decreased outage windows and costs when it comes to business continuity. So it’s critical that you review requirements and evaluate the technology. This process takes time and experience to do correctly, so you may want to contact a consultant so you can keep focused on your own business and have confidence in your choice. You should be assessing technology every three years (at most) to look for continuity improvements, easier management and reduced costs.

5. Senior management don’t take responsibility

In businesses of all sizes, senior management, typically at the board level, do not take responsibility for business continuity. It’s usually up to IT to undertake this function, often with heads of departments. So when a disaster strikes, whatever happens, IT gets the blame – even though they’ve identified the risks and applied the controls. This is why it’s critical to get senior management to understand the risks to the business and to accept or reject controls.

Cost usually determines whether management accept or reject controls, and cost is typically driven by the controls’ stated Recovery Point Objective (RPO) – how much data the business can afford to lose. The Recovery Time Objective (RTO) is also crucial to understand: how long particular systems can be down for without serious consequences. You will often hear a board state that no downtime and no data loss is acceptable; however, this viewpoint often changes when viewing the budget.
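As a sketch of how RPO and RTO turn into a concrete accept-or-reject decision (all four figures below are illustrative assumptions):

```python
# Compare worst-case data loss (RPO) and measured recovery time (RTO)
# against the appetite the board has actually signed off.
BACKUP_INTERVAL_HOURS = 24   # nightly backups: worst-case data loss
MEASURED_RESTORE_HOURS = 8   # how long a full restore took in testing

BOARD_RPO_HOURS = 4          # data loss the board says it will accept
BOARD_RTO_HOURS = 12         # downtime the board says it will accept

if BACKUP_INTERVAL_HOURS > BOARD_RPO_HOURS:
    print("RPO gap: back up more often, or have the appetite re-stated.")
if MEASURED_RESTORE_HOURS > BOARD_RTO_HOURS:
    print("RTO gap: invest in faster recovery, or accept the downtime.")
```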

6. Thinking it’s just about IT

While IT is important, businesses will have a vast array of assets which will cause different levels of impact if unavailable. What happens if the Operations Manager disappears tomorrow? If a site burns down? Or if listeria from the onsite canteens takes out 30% of the workforce? There are so many scenarios that need to be understood, and suitable controls and processes need to be in place to deal with them if they arise.

Corporate emails are important records of business decisions, communications and information; and, just like paper documents, you must secure and store them properly. This is where an email archiving solution can assist, but many companies may believe they already store records correctly – by backing up their mailboxes on a regular basis.

There is often confusion between email archiving and email backup, with some believing they perform the same – or very similar – functions. In reality, they are different solutions, and businesses should use both.

What is Email Backup?

In simple terms, a backup is designed as a short-term insurance policy to facilitate disaster recovery. A classic backup application takes images of active data periodically in order to provide a method of recovering records that have been deleted or destroyed.

Backups are usually only retained for a short period – a few days or weeks – as more recent backup images replace previous versions. It is important to understand that emails can be deleted in between backups and would thus not be retained. Data is usually kept in a proprietary format which can cause problems for long-term retention.

What is Email Archiving?

In contrast, email archiving is designed to provide businesses with an ultra-secure repository for email records that need to be stored for a long period of time. This may be necessary in order to meet certain regulatory obligations. Email archiving provides businesses with a full record of communications, and additional security features like time-stamping and digital fingerprinting ensure that the email has not been tampered with or edited in any way – essential when providing emails as evidence to courts.

It is also far easier to find and retrieve records from an archiving solution than from a backup. Emails may be requested by an external auditor or as part of an internal investigation. Instead of asking your IT department to dig through volumes of saved data snapshots and format them to comply with the request, they can use the search facilities to locate the necessary records in their original and exact format.

Which solution should you choose?

In the short run, it may seem less expensive to back up your email data to a tape or local server. However, the volume of email data increases every day which results in greater storage requirements. In the long run, the cost of storing and protecting that data can far exceed the cost of implementing archiving.

However, this is not to say that an archiving solution should replace your backups. Both solutions fulfil important functions and should be used in tandem. It’s important to remember that it’s a legal obligation to provide copies of emails if asked by authorities or regulators – something that virtually no backup solution can do on its own. To choose the most effective and suitable solutions, companies should first distinguish between their backup and archiving needs, then explore the appropriate storage solutions to meet them.

Ransomware is a type of malware designed to block access to a user’s system or files until a ransom (usually paid in a cryptocurrency such as bitcoin) is given to the hacker.

There’s a multitude of strains of ransomware out there. Notable examples include CryptoLocker, Crowti (also known as CryptoWall), Tescrypt or Teslacrypt, Teerac, Critrioni, Reveton, Troldesh and WannaCry. But all of these can be simplified down to two main types: encryption ransomware and splash-screen ransomware.

What is encryption ransomware?

This is the type of ransomware most people are aware of. It works by encrypting any files it can discover – anything from important documents to corporate databases to personal photos and videos. Once encrypted, these files are rendered useless, and in most cases their extensions are changed to make them unopenable.

Encryption ransomware will typically use asymmetric encryption to lock your files. This means the encryption key that encrypts your files is different from the decryption key that decrypts them. If you have no backups, paying the ransom becomes the only way to get the decryption key and regain access to your files.
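The asymmetric property is easy to demonstrate. Below is a minimal sketch using the widely used Python cryptography package. Note that real strains typically encrypt each file with a fast symmetric key and use the asymmetric pair only to protect that key, but the principle is the same: only the attacker’s private key can undo the encryption.

```python
# Demonstrating asymmetric encryption: anyone with the public key can
# encrypt, but only the holder of the private key can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this is all an attacker needs to ship

ciphertext = public_key.encrypt(b"contents of an important file", oaep)
# Without the private key, the ciphertext is unrecoverable in practice.
print(private_key.decrypt(ciphertext, oaep))  # b'contents of an important file'
```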

What is splash-screen ransomware?

Splash-screen (sometimes called lock-screen) ransomware restricts access to your files by placing an unclosable, unmovable and persistent window on your screen. The only way to remove this screen is to pay the ransom; only then will you be able to access your files and programs again.

Sometimes these splash-screens just include a message telling you that you’ve been infected and need to pay a ransom. But sometimes hackers take things a step further, including the logo of the police, FBI or a similar organisation, claiming that your computer has been locked due to illegal or malicious activity and demanding you pay a ‘fine’ to have it unlocked. This, of course, is just another example of hackers using social engineering to try and coerce victims into paying up.

How can you protect yourself against ransomware?

If you’re simply relying on anti-virus to protect you from ransomware, that’s not a great business strategy. A new malware specimen emerges every 4.2 seconds, and while anti-virus protection can block some of these, other variants can slip past the filters. Luckily, there are lots of other tactics you can implement:

Invest in employee training

Employees are often the weak link when it comes to security. You must make them aware of the impact their day-to-day actions can have on the business. If your workforce is unable to spot a phishing scam, for example, then your company is vulnerable. Investing in security awareness training can be greatly beneficial for your business and for your employees.

Perform regular back-ups

You should already be doing backups as part of your business continuity. But if you’re not, then make it a priority to perform regular, point-in-time back-ups you can restore from. Continuous back-up is ideal but, at the very least, you need to be doing one backup every day.

Even then, a ransomware attack would mean losing a day’s worth of work – a significant loss for a small company, let alone a large one. So make backups as often as possible.

Ensure backup locations are not networked

It’s no good backing up all your data if a ransomware infection can locate and encrypt the backups too. Making sure the backup location is not a mapped drive is one way to prevent this. Don’t copy files to another location on the same PC; instead, back up to an external drive which does not have a drive letter, or which you only connect when performing the backup.
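As a minimal sketch of that advice: a point-in-time copy to an external drive that is only mounted for the backup run, so malware on the PC cannot reach old snapshots. The paths are placeholders, and dedicated backup software will do this more robustly.

```python
# Copy a folder to a timestamped snapshot on an external drive, refusing
# to run if the drive is not connected.
import shutil
import pathlib
from datetime import datetime

SOURCE = pathlib.Path.home() / "Documents"   # placeholder source folder
EXTERNAL = pathlib.Path("/mnt/backup-disk")  # placeholder external mount point

if not EXTERNAL.is_dir():
    raise SystemExit("External drive not connected - plug it in first.")

snapshot = EXTERNAL / datetime.now().strftime("snapshot-%Y%m%d-%H%M%S")
shutil.copytree(SOURCE, snapshot)
print(f"Backed up to {snapshot}; now disconnect the drive.")
```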

Regularly patch or update your software

Hackers often rely on people running outdated software with known vulnerabilities which they can exploit. To decrease the likelihood of ransomware infection, make a practice of regularly updating your software. Alternatively, look into patch management for your business as a way to automatically manage the installation of patches for you.

Layer your security

Whilst relying solely on anti-virus is a bad idea, that doesn’t mean you should get rid of it altogether. Anti-virus should be one layer of your overall security suite, alongside anti-malware software and a software firewall. A unified threat management (UTM) system can help you with this.

At a minimum though, ensure you are scanning at the email gateways, firewall and end-user devices if you are relying on anti-virus.

Filter .exe files in email

If your gateway email scanner has the ability to do so, you may wish to set up a rule to deny emails sent with the “.exe” extension. .exe files, or executable files, are capable of executing code which can be used to instigate a ransomware attack. There are other extensions capable of executing code to be aware of, but .exe is the most common.
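If your gateway can’t do this natively, the logic of the rule itself is simple. Here is a sketch using Python’s standard-library email parser; a real deployment would hook something like this into the mail flow rather than run it standalone.

```python
# Flag messages carrying .exe attachments, mirroring the gateway rule above.
import email
from email import policy

BLOCKED_EXTENSIONS = {".exe"}  # extend with other executable types as needed

def has_blocked_attachment(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if any(filename.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```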

Conclusion

Ransomware is a dangerous threat to both home and business users of technology and as a result, needs to be treated that way. The worst thing you can possibly do is think that it won’t affect you. Anyone with an Internet connection is a potential victim. And hackers don’t care if they’re targeting individuals, small businesses or multinational enterprises. All they care about is taking your money.

Security as a service (SECaaS) is the outsourced management of business security to a third-party contractor. While a cyber-security subscription may seem odd, it’s not much different from paying for your anti-virus license. The difference is that SECaaS is the combination of a lot of security products wrapped up into one more central service.

The range of security services provided is vast and goes down to a granular level. Examples range from simple SPAM filtering for email, all the way through to cloud-hosted anti-virus, remote automated vulnerability scanning, managed backups, cloud-based DR and business continuity systems and cloud-based MFA systems.

The services are either delivered directly from the vendor where the reseller takes a commission or they are delivered from specialist firms who have the in-house skills capable of building, integrating and managing specialist security services for their customers.

Just a note here: you may have heard of SaaS (software as a service). This is different to SECaaS.

1. Is SECaaS dangerous?

Putting your security in the hands of another business may seem like a big risk. And if done incorrectly, it’s almost guaranteed to have a less than ideal outcome. But plenty of businesses have had success with SECaaS, and there’s no reason you can’t either.

The most likely cause for an issue is choosing a supplier based solely on price. A business offering SECaaS that’s been around for a few years and has a range of clients but charges £50 per user per month is going to be very different from the business that offers “cloud-based security” for £10.99 per user per month.

Do not instantly go for the cheapest option when considering SECaaS.

Sure, you might be paying nearly 5 times as much. But if your SECaaS provider has the lowest price on the market they’re skimping on something. And if there’s one thing you don’t want to skimp on, it’s your cyber-security.

2. What are the advantages of SECaaS?

Cost-saving

Despite what was just said about avoiding cost-cutting when it comes to cyber-security, one of the main draws of SECaaS is the long-term saving it can offer. Because you don’t actually own the infrastructure, you don’t need to pay for its floor space or upkeep (costs which can fluctuate based on external factors). Instead, you pay a flat rate that is unlikely to change.

Fully managed

Your provider keeps up to date with the changing threat environment, not you. That means you can focus on your own business goals instead of diverting time towards understanding the various threats out there and ensuring your defences deal with them.

Greater expertise

A good SECaaS provider is going to consist of people who know everything there is to know about cyber-security and regularly keep up with trends and changes in that area. As a result, they’ll have a much greater range of expertise which you can utilise to keep your business safe. This also lets you keep your core employee focus on your own sector rather than branching out and getting a dedicated cyber-security expert.

Frees up time from repetitive tasks

Time-consuming admin tasks that need to be done can be performed by your SECaaS provider instead. This can be things like reading system logs or monitoring the overall network status.

3. What are the disadvantages of SECaaS?

Reliant on SECaaS provider acting

This is the main reason that you should be choosing a high-end SECaaS provider.

Because SECaaS providers hold a lot of data, they (and by extension, you) become lucrative targets for cyber-criminals. If they are breached, then you are breached, so ensuring they have made big investments in their security is paramount.

To make sure that your chosen provider is continually investing in their security, be sure to keep in regular contact with them. Ask questions about what they are doing to address the latest types of exploit or flaw and dig deep into the specifics of what type of security they have in place on their own systems. Is it minimal or is it high-grade and comprehensive?

Whilst in the decision stage, you should also be asking each provider exactly what kind of security they have in place and what their policy is around topics like staff training. If they can’t prove that they are taking their own security seriously, you can bet that they won’t be taking yours seriously either.

Increases vulnerability to large scale attacks

The uniform security measures SECaaS providers apply across multiple clients allow them to maintain a comprehensive level of security. But it also means that if a vulnerability is found at another business using the same SECaaS provider as you, that same vulnerability can be used against your security.

Because one vulnerability opens up so many potential attacks, probing the security of the SECaaS provider is much more rewarding for cyber-criminals. This means they put a more concerted effort into breaching the SECaaS provider’s security, which can inadvertently make you a prime target for cyber-attacks.

Be aware though, as a business (even a 2-10 employee one) you’re already a prime target for cyber-attacks. If done properly, the perceived increased danger of choosing SECaaS can be made negligible. Especially when compared to the increased overall security you would receive from a high-quality SECaaS provider.

4. Why is SECaaS being offered more often?

Security providers are becoming aware that, with the rise of small businesses, there’s a growing market for security services that don’t need expensive internal employees or risky infrastructure investments.

Many growing businesses also don’t have the up-front funds to develop a hardware heavy security system. Therefore, they find a monthly plan to be much more manageable for their finances. For example, implementation of two-factor authentication and disaster recovery may have cost £100K five years ago. But SECaaS can deliver the same project on a £1,000 budget with no CapEx.

Because of the flexible nature of SECaaS, many of these decisions can now be addressed head-on. There is no longer the same level of risk surrounding topics like setting up security infrastructure, and businesses can switch SECaaS providers far more easily. This ‘de-risking’ of cyber-security has made the SECaaS market ideal for businesses who want to avoid making a bad decision.

Finally, with the rise of the cloud and increased internet speeds, services offered over the internet are now on a par with in-house solutions. This has made cyber-security offered as a service both feasible and genuinely useful.

Conclusion

So, you may now be asking yourself whether you should consider SECaaS for your business. Unfortunately, there’s no one-size-fits-all answer. If you want to improve your security without draining your budget, it’s worth reviewing. If you already have a fairly comprehensive security setup in place, it may be better to verify that it really is as comprehensive as you think, then stick with what you have, upgrading and maintaining it as you already do. Alternatively, you could look into a UTM system for your business if you’re uncomfortable with SECaaS but want to make your security more comprehensive.