An IT infrastructure refresh can provoke mixed feelings in IT managers. On one hand, you have the cost, complexity and risk of migrating systems; on the other, a great opportunity to significantly enhance your environment.
Given that a typical refresh cycle is now 4-5 years – due to the financial climate and the increased reliability of hardware – it is likely that you can greatly increase value when refresh time comes around.
Server Virtualisation
One option is a like-for-like deployment. Server virtualisation makes the transition to new platforms fairly straightforward. This is because workloads are portable, utilisation is visible and IT managers are typically familiar with – and confident in – the underlying technology.
However, there are limitations to consider, particularly around the agility of this model. As with any traditional infrastructure, it is difficult to accommodate changes in capacity efficiently, whether that’s an increase or a decrease.
Hyper-Converged Platforms (HCI)
A second option is to go for new technology, like HCI. While hyper-converged platforms are still relative newcomers, the market is definitely developing and maturing.
HCI is a modular approach to IT infrastructure that allows rapid, large-scale growth through small form factor nodes providing integrated RAM, CPU and storage without complexity. This type of efficiency simply isn’t possible with traditional infrastructure. This option also offers advanced functionality and integrated services, such as deduplication, backup and disaster recovery.
Yet this doesn’t mean this approach is without its limitations. Relying on platform efficiencies can make sizing complicated and implementations more intricate. Furthermore, adopting a platform over hardware could mean you end up locked into a specific vendor – something which some would view as a risk. Finally, HCI also represents a CapEx approach to IT infrastructure refresh which, given the financial climate, may not be desirable.
Cloud Services
An alternative approach could be to opt for cloud services. There are certainly many benefits and it does address some of the shortfalls of traditional infrastructure. As a utility-based, OpEx approach to IT, the cloud offers greater agility, greater elasticity and relieves the pressure of “keeping the lights on”.
Despite this, it doesn’t mean it’s suited to all situations. While moving to the cloud can relieve the pressure on networking or application delivery, it does not guarantee cost savings. Furthermore, migrating to the cloud can be complex and time-consuming, so you need to ensure you have the resources on hand. At QuoStar our team specialise in zero-downtime migrations and can manage your migration project from end to end.
What option should you choose?
There’s no one right answer here. You need to weigh up your available options and see which one aligns best with your business objectives. While the variety of options may seem daunting, it doesn’t need to be. Prior to undertaking an infrastructure refresh, it is often a good idea to seek out an independent consultant who has experience in this area and can offer an unbiased perspective.
In order to comply with the GDPR, organisations must implement appropriate technical measures that ensure compliance. This is established under Article 32, which delineates the GDPR’s “security of processing standards”, and is required of both data controllers and data processors.
When implementing these measures the Regulation does state that “the state of the art and the costs of implementation” and “the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons” must be taken into account.
Due to the different ways organisations collect, store and process data, as well as the different levels of risk these present to users, there will not be one universal set of technical and organisational measures. However, the GDPR has set out some suggested methods for data protection.
Privacy by Design and Privacy by Default
Although supervisory authorities have typically advised that organisations take this approach, for the first time GDPR actually lays out “privacy by design” and “privacy by default” as specific obligations. Under this requirement, companies will need to design compliant policies and systems from the outset.
Under Article 25, a data controller is required to implement appropriate technical and organisational measures at the time of determining the means of processing and at the time of the actual processing. When determining what measures to implement, the controller should take into account “the state of the art, the cost of implementation and the nature, scope, context and purposes of the processing, as well as the likelihood and severity of risks to the individual posed by the processing of their data”.
In addition, organisations must give individuals the maximum privacy protection as a baseline – for example, explicit opt-ins, safeguards to protect consumer data, restricted sharing and minimal retention periods. If someone creates a new social media profile, for instance, the most privacy-friendly settings should be enabled by default, and it would then be up to the user to relax them if they wished. This approach directly lowers the data security risk profile. The less data you have, the less damaging a breach will be.
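To make the idea concrete, here is a minimal sketch of privacy by default in code, assuming a hypothetical account-settings object (the field names are illustrative, not from any real platform): the most privacy-friendly values are the defaults, and only a deliberate user action relaxes them.

```python
# A minimal sketch of "privacy by default": hypothetical account settings
# where the most privacy-friendly values are the defaults and only a
# deliberate user action relaxes them. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    profile_public: bool = False       # most restrictive default
    share_with_partners: bool = False  # no third-party sharing unless opted in
    marketing_emails: bool = False     # explicit opt-in required
    retention_days: int = 30           # keep data no longer than necessary

def relax(settings: PrivacySettings, **changes) -> PrivacySettings:
    """Apply user-initiated changes; the system never loosens defaults itself."""
    for key, value in changes.items():
        setattr(settings, key, value)
    return settings

# A new profile starts fully locked down; the user chooses what to open up.
settings = relax(PrivacySettings(), profile_public=True)
print(settings)
```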
Data Minimisation
An essential principle of data protection, data minimisation establishes that personal data should not be retained or further used unless it is necessary for purposes clearly stated at the time of collection. The principle applies to the entire lifecycle of personal data. This includes the amount collected, the extent of the processing and the period of storage and accessibility.
Data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. This means controllers need to make sure that they collect enough data to achieve their purpose but no more than that.
Privacy Impact Assessments
These are an integral part of the “privacy by design” approach and can help you identify and reduce the privacy risks of your projects. They allow organisations to find and fix problems at an early stage of any project, reducing the costs and reputational damage that might otherwise accompany a data breach.
Some situations where organisations should carry out a Privacy Impact Assessment (PIA) include:
- A new IT system for storing and accessing personal data
- A business acquisition
- A data-sharing initiative
- Using existing data for a new and unexpected or more intrusive purpose
- A new surveillance system
- A new database that consolidates information held by separate parts of an organisation
Under Article 35 of the GDPR, PIAs – referred to in the Regulation as Data Protection Impact Assessments (DPIAs) – are mandatory for organisations with technologies and processes that are likely to result in a high risk to the rights and freedoms of data subjects. However, they are a good strategic tool for any organisation which processes, stores or transfers personal data.
Pseudonymisation
Article 4(5) of the GDPR defines pseudonymisation as “the processing of data in such a way that it can no longer be attributed to a specific data subject without the use of additional information”. For a data set to be pseudonymised, organisations must keep the “additional information” separate and secure from the de-identified data.
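As an illustration of the principle, here is a minimal sketch of lookup-table pseudonymisation in Python. It is one simple approach among several (keyed hashing and encryption are others); the key point is that the token-to-identity mapping – the “additional information” – lives apart from the working data.

```python
# A minimal sketch of pseudonymisation via a lookup table: direct
# identifiers are replaced with random tokens, and the token-to-identity
# mapping (the "additional information") is stored separately from the
# working data set.
import secrets

identity_vault = {}  # must be kept separate and secured from the data set

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a token; store the mapping apart."""
    token = secrets.token_hex(8)
    identity_vault[token] = record["name"]
    safe = dict(record)
    safe["name"] = token
    return safe

record = {"name": "Jane Smith", "postcode": "BH1", "purchases": 12}
pseudonymised = pseudonymise(record)
# 'pseudonymised' can be analysed more freely; re-identification requires
# access to identity_vault, which should sit behind separate controls.
```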
The GDPR incentivises data handlers to implement this method because it allows them to use personal data more liberally without infringing on individuals’ rights. This is outlined in Article 6(4)(e), which states that pseudonymised data may be processed for uses beyond the purpose for which the data was originally collected. This is because the data only becomes identifiable when held with the “additional information”.
However, it is important to note that pseudonymisation is not a cast-iron guarantee of data protection. It does not mean organisations using this method would not need to report a data breach to their supervisory authority.
The effectiveness of pseudonymisation hinges on its ability to protect individuals from “re-identification”. This depends on a number of things, including:
- the techniques used for pseudonymisation;
- the location of the additional identifiable elements in relation to the pseudonymised data; and
- the likelihood that non-identifiable elements could uniquely identify a specific individual
Unfortunately, the GDPR is quite vague on the level of data protection pseudonymisation itself provides. Only in Recital 26 does it mention that data handlers should take into account whether re-identification is “reasonably likely”.
There are no official guidelines as to what constitutes “reasonably likely”; the GDPR merely advises that data handlers take into account “all objective factors”. For example, “the costs of and the amount of time required for identification, the available technology at the time of the processing and technological developments.”
What should organisations do?
The bottom line is that organisations should embed privacy into every process, procedure and system which handles data. Under GDPR organisations need a proactive approach to data privacy and protection. It should be an important part of the planning process and remain a consideration throughout the entire lifecycle.
There are many security measures that businesses can implement. Ideally, you should be looking at solutions that cover multiple angles. Relying solely on encryption or pseudonymisation won’t cut it.
IT is a critical part of almost every department, yet many businesses are not taking full advantage of new technology or realising the full potential of their IT investments. This is where an IT strategy comes in.
An IT strategy, when done right, is a powerful tool for driving growth, increasing efficiency, achieving goals and supporting staff.
Below we’ve provided an overview of the five key steps involved in creating an effective IT strategy. Whether you’re completely new to the process or an experienced IT professional, this guide is an ideal starting point.
1. Outline business goals and objectives
In order to create an effective IT strategy, you must make sure it is aligned with your overall business strategy. This is because the primary function of an IT strategy is to support your business and help you to achieve your goals.
You should begin by outlining your business needs, goals and high-level objectives. Key areas to look at will include:
- Your sales pipeline and targets
- Future plans regarding partnerships, mergers or acquisitions
- Growth strategies and plans for the company
- Any other “action plans” departments are working towards
2. Define your scope, stakeholders and timeline
Everyone must be clear about the purpose of your IT strategy, who is responsible for delivery and to whom it applies.
As part of this process, you should meet with key people from each department. They will be able to tell you how they’re currently using technology and their future business plans. With their input, you can ensure that your strategy provides the right IT support for each business unit.
Just as you would define a timeline for achieving specific goals, your IT strategy should also have a lifespan. Most IT strategies are long-term, but you might want to review and refine your strategy more frequently. For example, you may have a five-year roadmap, which gives a high-level overview of what you are aiming to achieve, but it is reviewed annually to define key phases and projects (e.g. implementations, integrations etc.).
Technology develops at a rapid rate, so it is important that your IT strategy is flexible and can adapt not only to new technologies but also to new organisational circumstances, changing business priorities, budgetary constraints and available skill sets.
3. Review your existing setup
When developing an IT strategy it is important to review your current IT infrastructure. This will help you to identify current problems, see what’s working and where resources are being used, all of which can be addressed by your strategy. Some key points to consider are:
- How are teams and departments using technology?
- What tools, software and systems do they use?
Think critically about how IT is being used, and analyse what is delivering the most value. This will enable you to plan a strategy which utilises resources you already have and ensure better allocation.
4. Create a roadmap
This may appear to be the largest, most difficult step, but as long as you have followed the previous steps it should actually be relatively easy to create a roadmap which defines resource allocation and architecture.
You should start by defining the overall technology architecture, which is made up of the major software, hardware and other tools you’ll be using. Then break it down into department-specific technology which may be required to meet business goals. Finally, consider how the different parts of your architecture fit together, and what processes govern their integration.
Keep all the information related to your technology architecture in a document or spreadsheet so you can review it easily.
5. Establish your metrics
Measurement is an essential part of any strategy, and without it, you will be unable to identify any gaps or weaknesses. You need to make sure that the IT strategy is functional and cost-effective. In order to do this, you should identify KPIs you can use to analyse performance over time. It is important to track a range of metrics as this can help your business to be more proactive in identifying and solving issues (e.g. resolving performance issues before they impact end-users).
Some examples of metrics you may wish to track are listed below; a short sketch of how a few of them might be computed follows the list:
- Budget variance – Actual costs vs. budgeted costs
- Resource cost – The average cost of a technology resource
- Project delivery – The percentage of projects delivered “on time”. You may also want to track project satisfaction by using a set survey to solicit feedback from business partners
- Project cost – The percentage of projects delivered within budget
- Production incidents – The number of incidents, ranked by severity
- SLAs met – The percentage of jobs which finish on time
- Application availability and performance – The percentage of time an application is functioning properly, and the average time it takes to render a screen or page
- Employee satisfaction, feedback and reviews – Constructive feedback from employees can be highly useful for increasing productivity
- Number of help desk calls
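As promised above, here is a small illustrative sketch of how a few of these KPIs might be computed from raw figures. The numbers are made up for the example, and your own definitions of the inputs may differ.

```python
# An illustrative sketch, not a prescribed toolset: computing three of the
# KPIs above from raw figures. All numbers are invented for the example.
def budget_variance(actual: float, budgeted: float) -> float:
    """Budget variance as a percentage of the budgeted cost."""
    return (actual - budgeted) / budgeted * 100

def on_time_delivery(projects_on_time: int, projects_total: int) -> float:
    """Percentage of projects delivered on time."""
    return projects_on_time / projects_total * 100

def availability(uptime_hours: float, total_hours: float) -> float:
    """Percentage of time an application was functioning properly."""
    return uptime_hours / total_hours * 100

print(f"Budget variance: {budget_variance(108_000, 100_000):+.1f}%")  # +8.0%
print(f"On-time delivery: {on_time_delivery(18, 24):.1f}%")           # 75.0%
print(f"Availability: {availability(719.2, 720):.2f}%")               # 99.89%
```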
If your leadership team is not particularly technology-savvy, then metrics are a simple and effective way to demonstrate the success of the company’s IT strategy, which will help to secure the confidence of management. Furthermore, being able to demonstrate that the IT strategy is aligned with the overall business strategy may assist in securing funding for future IT projects.
Business continuity planning involves creating a strategy to prevent, reduce and recover from risks to an organisation.
Many organisations still have business-impacting IT outages that should be avoidable, or quick to recover from.
There are six key reasons why these types of IT outages continue to impact businesses.
1. Not understanding risk
Most businesses would be surprised by what they found if they listed every asset or asset type within the business and then examined every risk associated with each one. What’s the likelihood of that risk type affecting the asset or the wider business? What would the impact be on the business? It’s impossible to protect against something you are unaware of. It’s critical that a business understands, at the very least, the IT assets it has and the associated risks to the business. However, when you’re talking business continuity, it’s best to include other types of asset, such as key employees or sites.
2. Having no controls in place
Once you understand the risks, you can put controls in place to reduce or mitigate them. This can be something as simple as protecting a laptop from Trojan software with anti-virus protection, through to protecting against a systems outage by replicating all data and systems into the cloud or into another site. Controls need to be sensible and considered, which is why it’s critical for a business to understand the true cost of a system outage.
3. No reviews
Business continuity must be a living entity within a business. Every new asset should be logged, have its associated risks identified and have applicable controls put in place. The controls, particularly around continuity, must be regularly reviewed and tested – and by “regularly” we mean as often as is feasible. If you’re waiting longer than a year between reviews, you’re leaving yourself highly vulnerable.
4. Not using the right technology
Over the last decade, technology has dramatically decreased outage windows and costs when it comes to business continuity. So it’s critical that you review requirements and evaluate the technology. This process takes time and experience to do correctly, so you may want to contact a consultant so you can keep focused on your own business and have confidence in your choice. You should be assessing technology every three years (at most) to look for continuity improvements, easier management and reduced costs.
5. Senior management don’t take responsibility
In businesses of all sizes, senior management, typically at the board level, do not take responsibility for business continuity. It’s usually up to IT to undertake this function, often with heads of departments. So when a disaster strikes, whatever happens, IT gets the blame – even though they’ve identified the risks and applied the controls. This is why it’s critical to get senior management to understand the risks to the business and to accept or reject controls.
Cost factors usually determine whether management accept or reject controls. These decisions typically hinge on each control’s Recovery Point Objective (RPO) – how much data the business can afford to lose – and its Recovery Time Objective (RTO) – how long certain systems can be down without serious consequences. You will often hear a board state that no downtime and no data loss is acceptable; however, this viewpoint often changes when viewing the budget.
6. Thinking it’s just about IT
While IT is important, businesses will have a vast array of assets which will cause different levels of impact if unavailable. What happens if the Operations Manager disappears tomorrow? If a site burns down? Or if listeria from the onsite canteens takes out 30% of the workforce? There are so many scenarios that need to be understood, and suitable controls and processes need to be in place to deal with them if they arise.
The continual running of IT operations in your business is essential for it to survive. This means you need to have a qualified team on hand to manage your systems. But the cost of hiring and retaining such a team internally is something only the largest companies have the time and budget for. To gain the competitive advantage big players get from their internal teams, smaller businesses have turned to IT outsourcing as a way to strike a balance between performance and cost. One of the most talked-about benefits of IT outsourcing is the cost savings it can bring, although the benefits extend far beyond this. However, as there’s no one set pricing model, it can be difficult to understand exactly what’s included and whether the service is priced appropriately.
Below are some of the most common pricing models you are likely to be presented with when exploring IT outsourcing providers.
Monitoring only pricing model
This pricing model typically provides network monitoring and alerting, but with different levels of service. For example, for a small business, it may include patch management, antivirus and anti-spam updates, disk optimisation and backup monitoring on a flat monthly fee. Additional remediation work, identified through monitoring, would be an additional charge.
For larger businesses, the internal IT team would receive monitoring alerts, with the provider responsible for all incident resolution.
Per-user pricing model
Most per-user pricing models charge a flat monthly fee per end-user to cover IT support across all devices. This is a very straightforward pricing model and ideal for those companies with a tight budget as it allows you to budget for your IT support exactly. It also makes it easy to forecast for any business growth. Planning to take on an extra 20 employees this year? You can see exactly how much that growth is going to cost you in terms of IT support.
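To illustrate how predictable this model is, here is a quick sketch using a hypothetical rate of £50 per user per month; your provider’s figure will differ.

```python
# A quick sketch of why per-user pricing is easy to forecast, using a
# hypothetical rate of £50 per user per month (illustrative only).
RATE_PER_USER = 50  # £ per user per month

def annual_support_cost(users: int) -> int:
    """Total yearly IT support cost for a given headcount."""
    return users * RATE_PER_USER * 12

current = annual_support_cost(80)        # £48,000 for 80 staff
with_growth = annual_support_cost(100)   # £60,000 after hiring 20 more
print(f"Cost of growth: £{with_growth - current:,} per year")  # £12,000
```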
Per-device pricing model
Another option is for IT support providers to charge per device, e.g. desktop, laptop, mobile, server. There would usually be one flat price per device type, which again makes it relatively easy to see exactly where your costs are coming from and allows you to budget for future additions, e.g. if you decide you want every member of the sales team to have a tablet for remote working. The per-device model will often come out marginally more expensive than the per-user model, owing to the fact a single user will likely have multiple devices which need covering.
Ad-hoc pricing model
The ad-hoc model means rather than paying a flat monthly fee you pay as and when you require support. This may sound good but, since prices can’t be normalised, you will likely end up paying far more overall. Additionally, as IT becomes increasingly critical, a purely reactive approach to IT support will leave you hurting after a major incident due to prolonged downtime and a large bill from your support provider. Because of this, the ad-hoc model is becoming increasingly rare with most businesses having transitioned to a fully managed service or “all-you-can-eat” model.
Tiered pricing model
Tiered pricing is where different bands of support are available. The higher the band, the more services or perks you’ll gain access to but at a greater cost. For example, you may see bronze, silver and gold tiers.
This is one of the most common pricing models but it does have its difficulties. As each tier includes its own services and limits, what can initially seem like great value can become a headache. For example, you take out a bronze level IT support contract which includes data backup. Imagine the worst happens and you lose your files. Then, on top of that, you find out your backup only covers a certain period – excluding the period you’ve lost.
It’s not to say that tiered pricing won’t work for some businesses, but if IT is critical for your operations it’s probably not something to gamble on. A fully managed service should be fully managed. There should be no limits on what your support includes.
‘All you can eat’ pricing model
The all you can eat model allows for an unlimited amount of support at a fixed rate each month. This makes it ideal for nearly every type of business looking to outsource their IT. It’s technically the same as the top level of a tiered pricing model, but without the artificial inflation from the lower tiers. This typically makes the all you can eat model better since it will include everything you need whilst being at a predictable cost.
When looking at this model, it’s important to check whether it includes out-of-hours support as standard. Depending on the provider, 24/7 support might be there by default or it might carry an additional charge. That said, it’s typically worth the extra money for full peace of mind and the ability to prevent a late-night incident impacting the following day.
We receive around 121 emails a day, on average, so it’s a wonder how we manage to keep up the constant communication!
Email can quickly become a drain on time if not managed correctly. The average worker now spends 28% of their time managing email. This means if you work Monday-Friday, 9am-5pm, over one whole workday is dedicated to your inbox.
There are many suggestions out there on how we can better manage our inbox and email communications, but some of them aren’t that practical for the majority of people to use.
Luckily by using a few of the inbuilt tools in your inbox and some time management skills, you can organise your email inbox, read and process incoming mail more effectively and become more productive.
1. Unsubscribe, unsubscribe, unsubscribe
Set aside time to blitz your inbox and unsubscribe from any irrelevant newsletters and communications. Fear of missing out (FOMO) on the latest news can make us reluctant to hit unsubscribe, but think: how often do you actually read those emails? Chances are you open many of them just to mark them as “read” because they don’t deliver any real value.
Of course, if there’s a weekly newsletter you love seeing in your inbox and enjoy reading as soon as it arrives then keep on subscribing, but if you keep receiving weekly “offers” from that stationery supply company you placed an order with once, then hit unsubscribe. Don’t forget there’s nothing to stop you from re-subscribing if you find yourself missing a newsletter.
2. Make use of rules and folders
Quickly scan your emails and create a list of “big” categories. Depending on which department you are in you may have categories like Vendors, Customer Service, Receipts, Recruitment etc. If you want you can also create subfolders within each category to further divide your emails, but don’t worry about being too specific. You just want recognisable categories which make it easier to manage your inbox.
Don’t forget you can use Microsoft Outlook’s search function if you need to find a particular email, so there’s no need to create sub-folders by sender name, date, subject etc.
Another feature you might want to take advantage of is “Rules”, which automatically file messages away into their correct folders as they arrive. You can choose whether you want these messages to be displayed in the New Alert window (ideal for high priority messages) or to play a selected sound.
There’s also a whole host of Advanced Options to choose from such as mark as important, mark as read, delete or send an automatic reply so you can ensure you prioritise important communications.
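Conceptually, a rule is just a sender or subject match mapped to a destination folder and an alert preference. The sketch below illustrates that logic in Python; it is not Outlook’s API, and the addresses and folders are invented for the example.

```python
# A conceptual sketch of what a mail rule does: match on the sender and
# return a destination folder plus an alert preference. This is not
# Outlook's API; addresses and folders are invented for the example.
RULES = [
    {"match": "@vendor-a.com",    "folder": "Vendors",     "alert": False},
    {"match": "@recruiter.io",    "folder": "Recruitment", "alert": False},
    {"match": "boss@company.com", "folder": "Inbox",       "alert": True},
]

def file_message(sender: str) -> tuple[str, bool]:
    """Return the destination folder and whether to raise a new-mail alert."""
    for rule in RULES:
        if rule["match"] in sender:
            return rule["folder"], rule["alert"]
    return "Inbox", False  # no rule matched: leave the message in the inbox

print(file_message("offers@vendor-a.com"))  # ('Vendors', False)
```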
3. Don’t check email so often
Checking email has become synonymous with work, but often it just distracts us from more important tasks. How often have you been in the middle of something only to be distracted by an email notification?
We immediately feel the need to check our inbox but it’s rarely urgent, and then it’s difficult to get refocused. Even if each new email only distracts you for 30 seconds, if you receive 100 emails a day that’s 50 minutes you’ve wasted on checking your inbox.
Luckily there’s an easy way to prevent emails from distracting you – simply turn off your audio and visual notifications. Log into your email account and go to File > Options > select Mail in the left-hand column > scroll down to Message Arrival > untick all the message alerts.
Worried about missing a specific email? You can set up a “Rule” that will override this setting. For example, you could have any emails from your manager play a selected sound.
If you’re getting distracted by the thought of new emails then try setting aside specific periods to check your inbox. You could check once when you first arrive at the office, once around lunchtime and once in the afternoon.
If checking three times a day doesn’t work for you then try once per hour. For example, 45 minutes of focused work and 15 minutes of email management. Chances are you’ll find it easier to focus if you have regular, allotted breaks.
4. Try to get to inbox zero every day
There’s nothing worse than logging in and finding your inbox overflowing with hundreds of messages. It can be tempting to just select all and hit delete but you never know what you might miss.
Instead of allowing emails to build up, try to set aside some time at the end of each day to review your inbox. Reply to important communications, file away emails in the relevant folders and unsubscribe from anything irrelevant as you go.
Tackling your email in a more strategic fashion should make it more manageable. Of course, you will get some emails overnight, but in the morning you’ll have significantly fewer unread ones than normal.
5. Try email archiving
Many people will recommend sorting emails into “keep” and “delete” and trashing any which are no longer relevant. Or declaring “email bankruptcy”. While this suggestion would probably work for your personal inbox, it can be a bit more tricky in a business situation. Emails are important records of business decisions, and if your company were to become involved in legal proceedings you may be required to present all related email conversations dating back as far as six years.
With a cloud-based email archiving solution you have long-term, ultra-secure and forensically compliant storage for your emails without clogging up your inbox. Emails are automatically archived and remain so even if deleted from your inbox. Every email will have a digital fingerprint and time stamp to ensure authenticity. You can even restore emails direct to your inbox if required.
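The “digital fingerprint” is essentially a cryptographic hash taken over the message and its archive timestamp. Here is a minimal sketch of the idea, assuming SHA-256; real archiving products implement this (and more, such as tamper-evident chaining) internally.

```python
# A minimal sketch of the "digital fingerprint" idea: hashing a raw email
# together with its archive timestamp so any later tampering is detectable.
# Illustrative only; real archiving products do this internally.
import hashlib
from datetime import datetime, timezone

def fingerprint(raw_email: bytes) -> dict:
    """Return a SHA-256 digest binding the message to its archive time."""
    stamped_at = datetime.now(timezone.utc).isoformat()
    digest = hashlib.sha256(raw_email + stamped_at.encode()).hexdigest()
    return {"sha256": digest, "archived_at": stamped_at}

record = fingerprint(b"From: a@example.com\r\nSubject: Q3 contract\r\n\r\nBody...")
# Recomputing the hash over the stored email and timestamp later should
# reproduce record["sha256"]; any difference means the copy was altered.
```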
Email is a necessary part of a business, but it doesn’t have to be a necessary evil. With a few simple tricks, you can prevent email from draining all your time and focus on your business instead. Don’t forget to encourage employees to pick up the phone, or speak to you in person, for urgent matters. Not everything needs to be done over email.
Email retention policies are all about decreasing the risk to your company. But for a truly successful policy, you need to strike the balance between a retention period which is too long and keeps useless mail around and one which is too short and loses mail that was important.
Your policy needs to take into account any applicable legal or industry regulations whilst not going overboard trying to store every email indefinitely. If your company does not yet have an email retention policy then it’s certainly worth drafting one, and here are five best-practice tips to get you started.
How do I create an email retention policy?
1. Start with the regulatory minimums
Every business will be subject to different regulations, so the first thing you should do when creating your policy is to review the regulations your company is subject to and the relevant document retention requirements involved in each one. Some of the regulations you may need to consider include:
- The Data Retention Regulations 2009
- Freedom of Information Act
- Financial Services Act
- Sarbanes-Oxley Act (for US-related firms)
- The Data Protection Act 1998
If the retention period is unknown, six years is often the safe default. This is because it’s possible to bring a breach-of-contract claim up to six years later. If your business is concerned about particular records then you should seek legal advice.
2. Segment your data by type of use
Once you have the regulatory minimums you will notice that the recommended periods vary widely. With this in mind, you may wish to segment emails by type, use or department to prevent having to store all content for the maximum retention period.
For specific documents like PAYE records, maternity pay or statutory pay, it is up to employers to assess retention periods based on business needs. If an employment tribunal may require the document as evidence, then a retention period of six years makes sense. If the document could be needed for HMRC reviews, then a minimum retention period of three years after the end of the tax year in which the payments were made would be necessary.
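One way to keep such a segmented schedule workable is to express it as data. The sketch below is illustrative only: the document types and periods are examples, and you should confirm your own obligations with legal counsel before enforcing anything.

```python
# An illustrative retention schedule expressed as data, using segments like
# those discussed above. Periods are examples only; confirm your own
# obligations with legal counsel before enforcing anything.
from datetime import date

RETENTION_YEARS = {
    "contract": 6,  # breach-of-contract claims possible for six years
    "payroll":  3,  # HMRC review window after the end of the tax year
    "general":  1,  # routine correspondence
}

def is_expired(doc_type: str, created: date, today: date) -> bool:
    """True when a document has passed its segment's retention period."""
    years = RETENTION_YEARS.get(doc_type, 6)  # unknown type: default to six
    expiry = created.replace(year=created.year + years)
    return today >= expiry

print(is_expired("payroll", date(2015, 4, 6), date(2019, 1, 1)))  # True
```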
3. Draft a real policy
Creating a policy, and getting it approved by senior management and legal professionals, will give you the ability and authority to implement all the IT, security and process controls you need to enforce your email retention requirements. Your policy should include the following sections:
- Purpose of the policy
- Retention time, including any segments you are using to define the retention periods. Durations are often listed as years or may be permanent
- Difference between paper and electronic documents – although ideally there should be none
- What constitutes destruction (e.g. shredding, deleting, overwriting, degaussing of media)
You do not have to include specific technologies and processes, but it is a good idea to refer to capabilities and requirements (e.g. offsite archival). You should also omit areas you will not or cannot support, such as the types of segmentation you are unable to determine or support. If you haven’t seen a full retention policy before, there are plenty of examples online for you to reference.
4. Review the preferred solutions
Once you have the main points of your policy established, you can estimate your minimum requirements for a solution based on the number of users, the expected volume of email and the expected rate of growth. With this information, you may be able to loosely price out a solution, but you may also wish to obtain indicative quotes from suppliers. You should also prepare for any changes to the email retention policy which may affect your pricing e.g. the minimum retention period increases from 18 months to three years.
5. Involve legal in the policy process
If it is the IT department’s responsibility to draft the email retention policy, then it is important to involve legal, whether that’s an internal legal team or an external law firm. The main reason for this is so they can review the viability of the policy and confirm it will meet your regulatory obligations.
Allowing legal to view the policy at this stage means you can present a unified front to the board. It also allows you to evaluate the options you have laid out, and remove any of the amendments legal have made that will drastically increase the price.
To conclude…
Given the number of different regulatory bodies and how they affect organisations, every business is likely to have an individual email retention policy. Following these best practice tips will help you to create a policy that is effective, sensible and which you can enforce.
Documents are a business asset. If an asset is lost, stolen or damaged, it becomes a risk – both for the business and for its clients.
This means having control systems in place to understand these risks is critical. And having the controls to counter them is equally as important.
It sounds simple. But after a decade of working with businesses, it’s clear that few of them have suitable controls in place. To address this, we’ve created 10 points to guide you through the process of creating your information classification policy.
1. Keeping it simple
When looking at security in any way, it’s important to keep it as simple as possible. This is particularly true when it’s something as routine as dealing with documents.
To make it simple, businesses need to invest in technology. In this case, there are three main technologies worth investing in:
- Document / Content management
- Data leak prevention
- Rights management solutions
A document getting into the wrong hands is going to cause your business, or a client’s business, damage. That is a fact. So aiming to implement all three is the best way to get a comprehensive solution.
2. Mapping your classifications
Before you get into classifying documents it’s important to ignore technology. Technology comes after you have decided the policies and processes you wish to follow.
What this means is that you need to map documents or types of documents into distinct groups. To do this, you should look at two key areas: the sensitivity of the document and its intended audience. This information will make up the foundation of your Information Classification Policy.
Many businesses already have classifications in place. But they’re often created, implemented and forgotten – quickly becoming unusable without weeks or months of additional work. You need to create an Information Classification Policy and not hide it away. It needs to be clear and easy for everyone to work with and conform to with little effort.
3. Building the Information Classification System
The foundation of any Information Classification Policy is categorising information. Here are a few example document classifications that will fit most business requirements:
- Public: Documents that are not sensitive and can be released to the general public, e.g. on a website
- Confidential: Documents only to be viewed internally or with third parties that have signed a non-disclosure agreement
- Employee Confidential: Documents only to be viewed by employees at the company
- Management Restricted: Documents only to be viewed by the senior management at the company
- Private: Documents which contain personal information (useful for managing GDPR compliance)
In general, you don’t want to go over 10 classifications because classification should be as simple as possible. If you find that you have too many classifications, consider only looking at sensitivity or only looking at intended audience to begin with then filling in any gaps.
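For illustration, here is how the classifications above might be represented in code, including the colour coding and company-name prefix suggested later in this piece. The company name and colours are hypothetical.

```python
# A sketch of the example classifications as code, with the colour coding
# and company-name prefix discussed later. "ACME LTD" and the colours are
# hypothetical, chosen only for the example.
from enum import Enum

class Classification(Enum):
    PUBLIC                = ("Public",                "green")
    CONFIDENTIAL          = ("Confidential",          "amber")
    EMPLOYEE_CONFIDENTIAL = ("Employee Confidential", "orange")
    MANAGEMENT_RESTRICTED = ("Management Restricted", "red")
    PRIVATE               = ("Private",               "purple")

    def label(self) -> str:
        """Render the document label, company name first."""
        name, colour = self.value
        return f"ACME LTD - {name} ({colour})"

print(Classification.MANAGEMENT_RESTRICTED.label())
# ACME LTD - Management Restricted (red)
```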
4. Assembling the Information Classification Team
A policy needs board-level support to ensure the business buys into and uses it. Once you have this, you should form a team which includes key departments in the business to enforce the policy.
This team may include people from technical, HR, legal and any other departments that are suitable for your industry. An appropriate team will be able to protect a business from security breaches whilst letting people access the information they need. And whilst it is important, the technical solution should be the last point to consider.
5. Designing the Information Classification Policy
Once you have your team assembled, you need to start going through your documents. In most organisations, it can be hard to know where to start.
To solve this, you should group documents at a high level, looking at the impact a data breach of each type could cause. Focus on the most sensitive document types first. And once those are locked down, you can move through the less sensitive list.
When going through this process there are a few tips you can follow.
For company documents, it’s advisable to put your company name first. This helps them stand out from any other classification, i.e. from a client or a partner business.
It’s also useful to colour code classifications to help distinguish documents by eye. This helps you identify a sensitive document that’s left on a screen, printer or vacant desk. The beauty of colour classification is that it aids you in taking action internally or externally. It’s simple to prove that the defendant knew the information was restricted.
It’s important that you make it easy for staff to label and classify documents. People naturally take the path of least resistance, so if labelling a document takes more than three clicks or the system is otherwise obtuse, staff will find ways to bypass it.
6. Enforcing control with automation
Once you’ve designed the Information Classification System, it’s finally time to look at the technology. Automation is very helpful to ensure enforcement. You shouldn’t rely on people alone as things will drop through the cracks.
It’s important that any technology links back into the core authentication system within a business. This will typically be Active Directory – the system you use to log in to your PC at the office.
Doing this simplifies things as you can use existing user groups to give access to certain classifications. There’s likely to already be an Active Directory group called “Board Members” for example, which you can use straight away.
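A simplified sketch of that mapping is shown below. The group names are hypothetical, and a real deployment would query Active Directory rather than hard-code membership.

```python
# A simplified mapping from classification to the existing directory groups
# allowed to view it. Group names are hypothetical; a real deployment would
# query Active Directory rather than hard-code membership.
ACCESS = {
    "Management Restricted": {"Board Members", "Senior Management"},
    "Employee Confidential": {"All Staff"},
    "Public":                set(),  # anyone may view, no group required
}

def can_view(classification: str, user_groups: set[str]) -> bool:
    """Return True if a user in the given groups may open the document."""
    allowed = ACCESS.get(classification)
    if allowed is None:
        return False  # unknown classification: deny by default
    if not allowed:
        return True   # Public: no group membership needed
    return bool(allowed & user_groups)

print(can_view("Management Restricted", {"All Staff"}))      # False
print(can_view("Management Restricted", {"Board Members"}))  # True
```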
Of course, grouping people doesn’t guarantee a user will know who they can and can’t send specific documents to. Nor will it prevent them from sending a document to a recipient by mistake.
This is why a business should be using a Rights Management system. Rights Management ensures that the systems know who has permission to access the document. So even if someone does send a restricted document, the recipient won’t be able to view it.
7. Educating employees
One of the biggest causes of data leakage is employees. Make sure to train them on how to use your systems, and refresh that training periodically.
Also educate them on any security risks to the business – known, current or potential. They need to understand why following policies is important and how not following them can impact the business and therefore them.
8. Controlling leavers
Many organisations fail to manage ex-employees. It’s important to disable their accounts once they leave the company. Even if they left on good terms, it’s best not to take the risk.
Loose accounts complicate the system at best and act as an open hole for attackers at worst. Hackers or insiders can hijack old accounts and make use of the access privileges. So you need to shut down accounts or strip them of all access rights to reduce the risk to your data.
9. Continually improving
It’s best if you adhere to common processes and document them somewhere accessible. To do this, you need robust information classification and risk policies that integrate with a wider standard. A good example to use as a framework is the ISO 27001 standard.
Doing this ensures that you assess and improve how you are controlling risks within the business, keeping you protected from an evolving threat landscape.
10. Widening the focus
It would be ridiculous to only focus on document security whilst ignoring the other risks to your business. So understanding all the risks your business faces and assigning suitable controls is something you must do.
Again, the ISO 27001 standard is a good framework to use for managing your information security on a wider basis. But this shouldn’t stop you going ahead and dealing with document security first. Getting this done will make things easier in the long term.
Summary
Businesses must control their risks, as failing to do so has catastrophic consequences. The key is to start simple and then improve. You don’t have to adopt everything at once.
A good starting point is to understand the sort of data you have and then to classify it. A good percentage of your business information could be used to extort or embarrass you. Or, even worse, a client.
Once you’ve got your classifications, tie them into document templates. Then automate management and workflow with technology. When done right, businesses can dramatically improve their security because protection is embedded into the asset itself. Rights Management can then control who can edit, copy, paste, print, email, transfer or view it at a later date.
Once in place, this can be overlaid with network controls such as Data Leak Prevention. This watches documents flow in and out of the business and can isolate, sandbox or alert relevant people that a breach may occur.
To take it further, systems at the perimeter, such as gateway encryption solutions, can identify sensitive information and encrypt it to ensure it never passes over the open Internet in clear text.
The list can go on, but it’s important you start at the beginning by creating an Information Classification System. You need to understand what you have, and what the risks and potential controls are, first.
Managed print and document solutions can bring a wealth of benefits, including increased employee productivity and efficiency, the ability to maximise billable hours, and greater document and data security. But in order to truly harness these benefits and enhance your operations, you need to choose the right print and document solutions partner.
Many companies will feel under pressure to simply pick the lowest-cost option, or are blinded by a dazzling list of benefits which seem impressive on paper but, in reality, don’t quite deliver after installation. This is why it’s critical to do your research, to ensure you’re choosing a solution that delivers a return on your investment beyond simply the cost per print.
How to choose a solution that suits
The Installation Process
If planned and executed correctly, the impact of installation on day-to-day management and activity should be insignificant. If the print and document solution takes days to install and is difficult to integrate with your other applications, then it’s only going to have a negative impact. In the short-term, it’ll be costing your firm on the bottom line and will damage the end user’s perception of the solution. Inefficiencies will swallow up any potential returns in the long-run as users try to find a workaround for the solution.
How Does It Integrate?
If the platform will only integrate with a few pieces of third-party software then it’s going to be a struggle. Or it will become more of an expense in the long run. You want a solution that fits your needs and operations. Not one that you have to work around, or which restricts future decisions. In order to truly integrate, you should be looking at firms who truly understand systems, and who can analyse your business and operations. You don’t want a provider who only looks at printer location and the cost per print. This is where so many traditional copier businesses fall down.
Flexibility
Your chosen print and document solution may integrate perfectly with your current infrastructure, but you don’t want it to affect software choices you make in the future. Otherwise, you could be left with an ineffective, cumbersome solution. Or have to pay out to start this whole costly process again. The right print and document solution should, as your IT infrastructure does, grow with you, allowing you to capitalise on new opportunities and changing markets.
Management & Long Term Planning
When most law firms receive a proposal from a print provider, the first thing they will notice is a lower cost per click, due to standardisation, and a drop in paper consumption and waste. Whilst these are positives, there is only so much you can gain without further optimisation. You can achieve greater productivity and efficiency through scanning solutions, but this takes time, planning and ongoing management. Many print providers simply don’t have the knowledge to deliver this properly.
You need a provider who is in it for the long-haul, who will take the time to learn end-user trends and revisit the solution to see where they can change processes and automate staff functions. These are the areas which will make the solution completely bespoke and will enhance your margins. The provider needs to stage the solution, with every step optimised before progressing to the next. If a print company tries to deliver everything in one big project then something is probably not quite right.
Ask yourself, what do you want to achieve from this process? Do you just want to achieve quick wins? Or do you want to also optimise processes for ongoing operational and margin improvement? The answer to that question should give you an idea of what sort of providers you should be engaging with.
Generally, you haven’t moved away from Windows Server 2003 either because a critical and extremely complex piece of internal software relies on it, or due to budget constraints. There are a few other reasons, but chances are that you are simply being negligent and putting your business at risk for the sake of saving a few pounds. If you are ignoring the end-of-support warning due to financial concerns, then you are playing a dangerous game. In fact, if you are unfortunate, a savage enough attack could cripple your business or even put it under – and that’s not scare-mongering.
You will notice a few security vendors stating that they can protect you whilst you still run Windows Server 2003, but generally, this isn’t really the case as the weak link often comes in a process or a person. Also, if they were all so good we wouldn’t have any viruses or exploits, would we?
So, if you are in a difficult situation, where do the real threats lie?
- The server faces the Internet directly, e.g. many hosting companies give a customer a server with a live Internet address (IP) on it, and the customer then installs a software firewall on top of the Windows Server 2003 operating system.
- The server indirectly faces the Internet, i.e. it’s connected through some sort of physical/virtual firewall, e.g. the server is acting as a web server, client portal, FTP server, etc. Even if the firewall has advanced intrusion prevention, the risk is significant.
- The server is not accessed from the outside world but initiates communications, e.g. it is a Terminal Server/Citrix server, proxy server, etc. The threat comes from the server hitting a website with malicious code, which fires an exploit that compromises the server and the LAN/WAN it sits on.
- The server sits on an open LAN with other network devices, such as PCs, laptops and other servers. Although these other machines may not be infected themselves, they can still potentially pass on ‘an infection’ to an unprotected Windows Server 2003 server.
- The server has other devices plugged into it at times, i.e. USB storage devices. The risks are lower here but still real.
There are other risks, but these are the main and most significant ones. Over the coming months, the risks to Windows Server 2003 are going to be pretty large as hackers and the like hold back exploits until the support ends. The flames will burn brightly for, say, 6-9 months and then slowly taper off as the easy prey is picked off and the bandits look for new pickings.
If you have left it too late to switch from Windows Server 2003 then what are the key things you can do to protect your environment?
- Don’t connect it to the Internet directly or indirectly.
- Segregate it from the normal LAN via a VLAN and/or a firewall device.
- Ensure any connections to it from internal systems pass through an intrusion-prevention firewall.
- Don’t plug any external devices into it.
- Plan to migrate services from Windows Server 2003.
The important thing is to plan to protect services as soon as possible. Depending on the size of your environment, migration is unlikely to be a straightforward task, so you should probably start planning now or bring in a consultant quickly. You need to take a number of factors into account as a bare minimum. Here are a few generic ones to get you thinking about the implications.
The implications
- Will your existing hardware support new operating systems and/or software?
- Do your IT staff need training to roll out and manage the new operating systems and/or software?
- How will you overcome any compatibility issues?
- Will your other applications work on the new operating systems and/or software?
- Will your 3rd party application vendors support their applications on a new platform?
- How long will it take to test everything?
- Will you need to train other employees to use the new operating systems and/or software?
- What resource will you need to roll out the new operating systems and/or software?
- How long will it take to roll the new software out?
- What are your other options? Could you go thin-client? Could you go to the cloud?
- What do you need to budget for?
If you’ve been avoiding a move due to expense then remember that everything can be turned into an OpEx. This does help financing and budgeting immensely. You can go for a fully managed cloud, your own private cloud, or simply replace servers and software in-house. You can also finance development work and consultancy and wrap it into a monthly payment.
Running Windows Server 2003 past the end of support will likely leave you open to regulatory issues. It will also leave you open to a lot of issues from an insurance perspective should a breach happen. And how about the embarrassment of your breach in the press? I know I’ve been quite strong in my views here, but this has been on the radar for years; there is no excuse.
Not taking action now is simply like knowing the spare bedroom window won’t close properly. Chances are at some point someone’s coming through it.