We left off last time at Risk Treatment. The previous post is here. Continuing with ISO 27001 as the base for our questions, the next section is Performance Evaluation.
In ISO 27001 speak, this is the monitoring of your security controls. You can learn a lot about the maturity of a Third Party's security controls from their use of monitoring.
To implement monitoring in line with the ISO 27001 standard, you need to have made a risk-based decision on which controls are important to monitor, and which controls are easy to monitor. Ease is definitely a secondary concern, but there is a level of maturity associated with picking monitoring controls that are both effective and cheap.
So a valid question would be:
“Please provide a list of the security controls you are actively monitoring and the checks being performed.”
No need for the results
You are not asking for any confidential information in terms of the actual results from the monitoring. We don’t need to see this detail. We should be inferring the maturity of the monitoring controls from both the security controls being monitored, and the tests that are being performed to provide that monitoring.
You can check whether you feel that these security controls are the right controls to be monitoring, and you can check if you think they are being monitored in the right way.
If a Third Party is performing a manual sample check on a control, this may indicate that they do not have a mechanism in place to do such a check automatically.
Monitoring Controls lifecycle
Monitoring over time usually looks something like this:
- Define a set of controls that you think are the most important.
- Instigate some form of manual checks on these controls.
- Review the results over a period (3-6 months) to confirm the controls are providing reliable data.
- Bin the controls that are not adding value and replace with your next best guess of new controls.
- Implement automated methods on a monitoring control once you know it is worth monitoring and when you have the tools available.
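The lifecycle above can be sketched as a small state machine. Everything here is an illustrative assumption on my part (the class names, the three-month threshold, the promotion rule), not anything the ISO 27001 standard prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    MANUAL = "manual checks"       # instigate manual checks on the control
    RETIRED = "binned"             # not adding value; replaced by a new guess
    AUTOMATED = "automated"        # proven worth monitoring; tooling in place

@dataclass
class MonitoringControl:
    name: str
    stage: Stage = Stage.MANUAL
    months_of_data: int = 0
    adding_value: bool = False

    def record_month(self, useful: bool) -> None:
        # One review period of manual results; 'useful' is the assessor's
        # judgement of whether the control provided reliable data.
        self.months_of_data += 1
        self.adding_value = useful

    def review(self) -> Stage:
        # After 3-6 months of results, either bin the control or
        # promote it to automation; before that, keep checking manually.
        if self.months_of_data < 3:
            return self.stage
        self.stage = Stage.AUTOMATED if self.adding_value else Stage.RETIRED
        return self.stage
```

A supplier following this pattern would, for example, run `record_month` each review cycle and only automate a control once `review()` promotes it.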
If you follow this pattern, the information you receive from your supplier will allow you to deduce:
- How long they have been actively doing this.
- How effective their monitoring environment is.
Red flags that we don’t want to see coming back from suppliers are:
“We monitor all of our security controls.”
“Please can you explain what you mean by security controls.”
Changes to Monitoring Controls are a good thing
Another associated question, which will give you an indication of how formalised the monitoring process is:
“When was the last time you changed a monitoring control and why?”
This is a good question because the right answer is not immediately obvious to a supplier with little security expertise. To the untrained eye, the right answer looks like something along these lines:
“We haven’t changed our monitoring controls for years. We have a very stable environment for monitoring”.
From a monitoring process perspective, this is the wrong answer.
You should be adding, modifying, and deleting monitoring controls on a regular basis. Looking at the same results from the same monitoring controls gives you tunnel vision. Check those common monitoring results through different indicators. You need to be able to confirm that a monitoring control is accurate, and you can only do that when multiple different views of the data point to the same conclusion.
What you are looking for from the supplier is something like:
“Our monitoring process dictates that we review our monitoring controls for effectiveness on a regular basis.”
You would then have evidence of a monitoring process that is formalising the behaviour. You have a supplier whose security controls are mature enough to understand that a monitoring process is a useful tool to ensure an effective monitoring regime.
Your Third Party Security Assurance questions should be looking to catch out the suppliers who simply respond with what they think the right answer should be, rather than what they actually do.
Never trust a first response without validation.
Some suppliers have to respond to so many questionnaires, that the work is delegated to a junior resource, who is encouraged to choose the most appropriate stock answer from some form of FAQ. This can lead to errors and misinterpretations, especially if you have been a bit clever with your question.
If the response you receive is not the one you were expecting, make sure you validate that this is indeed the correct answer that has been provided by the supplier as part of your initial review.
The effectiveness of an organisation's security controls should be subject to review by any third line of defence that the supplier operates.
If the supplier has an Internal Audit function, security should be one of the highest risk areas that they regularly review.
“Do you have an Internal Audit function and when was the last time they reviewed the effectiveness of your security controls?”
Having an Internal Audit function, but not conducting a review of security controls, points to one of two conclusions.
- The Internal Audit team are not comfortable reviewing security controls because they feel they lack the relevant skills.
- The Internal Audit team have bigger problems to sort out within the organisation.
Neither of these is a particularly positive outcome. An Internal Audit team should seek external assistance if they are reviewing anything outside their area of expertise. Not doing so could indicate a lack of budget for Internal Audit, which in turn could indicate they are not taken particularly seriously.
If there are bigger things burning within the supplier such that security is not a major risk, then I would have concerns about their more general ability to deliver.
Short but sweet
As a guide, you don’t want to be sending out a questionnaire with hundreds of questions. A supplier receiving such a questionnaire can be assured of one thing: nobody will be looking at the hundreds of answers and trying to make sense of them when they come back.
Nobody sets out to design a questionnaire that hinders review and risk evaluation, yet that is exactly what hundreds of questions achieves.
Similarly, any questionnaire with an automated scoring system is flawed. All context is lost if you rely on a system that automatically assigns a score to each question. The way to extract value from a questionnaire like this is to compare and contrast answers across different questions, building a picture from them to understand where the supplier sits on a spectrum of security risk at a holistic level.
Then, once you have assessed the questionnaire, you can assign a risk score. The risk score matrix held internally will be in a different format from the supplier questionnaire. In simple terms, it will ask the assessor questions like:
- Has the supplier evidenced security maturity through implemented security policies and controls?
- Are there specific security risk areas we should seek to improve over the course of the supplier relationship?
- Are there any mitigations we could recommend to the supplier to improve their security risk position?
The risk matrix will be a way of evaluating a supplier at inception, and then throughout the life of the contract, based on interactions through the supplier relationship manager.
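An internal risk matrix like this could be captured as a simple record of the assessor's judgements, made after reading the questionnaire in full rather than by scoring individual answers automatically. The field names and the risk banding below are hypothetical, purely to illustrate the shape of such a record:

```python
from dataclasses import dataclass

@dataclass
class SupplierAssessment:
    # The assessor's judgements, recorded at inception and revisited
    # throughout the contract via the supplier relationship manager.
    supplier: str
    evidenced_maturity: bool         # policies and controls evidenced?
    improvement_areas: list[str]     # risk areas to work on over the contract
    recommended_mitigations: list[str]

    def risk_rating(self) -> str:
        # Illustrative banding only; a real matrix will differ.
        if not self.evidenced_maturity:
            return "high"
        return "medium" if self.improvement_areas else "low"
```

Keeping the rating as a function of the assessor's conclusions, not of the supplier's raw answers, preserves the context that automated questionnaire scoring throws away.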
The aim of the Third Party Security Assurance questionnaire is to infer a risk score for the supplier, without asking many direct questions.
- Ask for evidence where it cannot be challenged as confidential.
- Apply significant analysis to the answers to infer the risk score.
The Assurance process does not stop at the point the risk score is determined. The Assurance process is about manoeuvring the supplier to a better risk position over time. Where this cannot be done successfully over the life of the contract, you should actively look to block any future contract renewal.