The BA GDPR Penalty notice in detail

Sometimes the greatest lessons are learnt from the failures of others. This is likely to be the case with the BA GDPR fine, which landed on the BA HQ doormat on 16 October 2020.

After a year of negotiation and legal representations, the ICO in the UK finally issued the much-awaited penalty. At £20m, it was substantially smaller than the original £183m the ICO had threatened the year before.

The penalty notice can be found in full here, but what can other businesses learn from the missteps of an industrial giant?

The actual technical security missteps are varied and plentiful, which makes for a bumper article. I will summarise the key failings and what BA should have done to avoid them.

The interesting point about this particular penalty notice, as opposed to virtually all others issued by the ICO in the UK, is the lack of consistent logic applied in the ICO's arguments to justify its position.

I will make this the subject of another post as this also has some merit for discussion.

In pretty much sequential order, here are the key screw-ups committed by BA as an organisation in its quest to mismanage its customers' personal data.

Coming in at number 1:

Not applying MFA to a remote access gateway intended for third party access.

The airline industry depends on a great deal of data sharing. In this instance, the attacker was able to gain access to the gateway designed to share BA data with its third-party partners and suppliers. The username and password of a Swissport employee in the West Indies were used to gain access.

Now ordinarily this is not necessarily terminal, but it exposed a cascade of deficiencies that let the attacker eventually roam the BA network at will, undetected, with privileged access rights.

You may argue, as BA did, that this remote access gateway was not intended to share confidential data and was restricted to a Citrix-hosted environment, where third-party access was sandboxed from the wider BA network.

The own goal, however, was scored by BA when its own policies were shown to clearly state that MFA must be used for all remote access into BA's network. Ouch!

Critical finding 1:

Be careful what you put in your policies, as it could backfire on you and cost you £20m. Despite the ICO's attempts to justify otherwise, this is the clear failing in this section. If your policies say you must do something and you then don't do it, you had better have the risk assessment and the risk acceptance clearly recorded.

From the missing redactions, it is reasonably clear that BA tried to argue that it had assessed and accepted the risk of this remote access gateway configuration, but had mislaid the evidence (a "dog ate my homework" moment for the BA legal team, and Teacher wasn't listening; Sections 6.21–6.22).

Failing number 2:

Not adequately securing the Citrix environment used as the landing point for the gateway. This allowed the attacker to “jailbreak” the Citrix environment, releasing them into the wider BA network.

It is interesting from a security point of view to see such partial security. Clearly someone had originally envisaged the need for security by design, as a Citrix environment was considered the appropriate solution for limiting third-party access to a discrete set of applications.

Why, then, do you not test the solution you have constructed to ensure it operates securely?

No pen-testing or similar was conducted on this environment to see what a third party could actually achieve with limited credentials once inside the gateway (Sections 6.53–6.56).

Why such a half-job? The most likely answer is that the original project either ran out of budget or never had this testing in scope in the first place. The redacted evidence points to an external pen-test with a very limited scope, most likely confined to checking for known OS vulnerabilities.

Critical finding 2:

If you are creating an environment or running a project with a specific security requirement, ensure that you thoroughly test whether the security features have been achieved by getting someone to try to break them.

Simply delivering a project on time and to budget does not equate to it meeting the original security requirements.
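To illustrate the kind of check that limited pen-test never ran: from inside the sandboxed session, simply probe whether the environment can open connections it should not be able to. A minimal sketch (any hostnames you test against would be your own internal targets, not anything from the penalty notice):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Can this environment open a TCP connection to host:port?

    Run from inside the sandbox, any True result against an internal
    host is a breakout path the design was supposed to prevent.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A tester with nothing but the limited third-party credentials, running this against a handful of internal hosts from the Citrix session, would have surfaced the breakout that the attacker later exploited.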

Failing number 3:

This was the killer moment. The attacker had gained access to the network outside the Citrix environment and was able to find the domain password in plaintext on the filesystem.

On the face of it, this seems incredibly negligent. It is, however, far worse than garden-variety incompetence.

A pinch of salt is required here, as a lot of the text in the penalty notice is redacted, but it is clear that the domain password was embedded in a script (Section 6.65). The script was run whenever an activity that needed privileged access had to be performed by a user without those privileges.

In its pleadings, BA claimed this was necessary and commonplace in the industry (again, this is heavily redacted, but it is how I have interpreted the section in the penalty notice; Section 6.74).

This was the golden key the attacker was looking for, and in conjunction with lousy monitoring of events, it allowed the attacker to roam the network at will from that point onwards.

Let that one sink in for a bit. You deliberately create a script to circumvent the privileged access controls of your systems, to be deliberately used by users without privileged credentials, to gain privileged access. A 90-degree face plant.

Critical finding 3:

Don’t be this stupid. If a task requires privileges and you want ordinary users to have access to it, think very, very hard about what you have done wrong. 

From basic security design principles, this should NEVER happen. You have not designed something properly. Writing a script to get around the immediate problem is not a solution. Anyone who had access to that filesystem and that script file could have accessed the domain password.
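If the task genuinely cannot be redesigned away, the least-bad option is to fetch the credential at runtime from a secrets source instead of baking it into a script on disk. A sketch of that idea; the variable name `SVC_ACCOUNT_PW` is made up for illustration:

```python
import os

def get_service_credential(var: str = "SVC_ACCOUNT_PW") -> str:
    """Fetch a privileged credential at runtime, never from a file.

    SVC_ACCOUNT_PW is an illustrative name; in practice the value would
    be injected into the process environment by a secrets manager
    (Vault, CyberArk, etc.) via a short-lived token, just before the
    privileged task runs.
    """
    value = os.environ.get(var)
    if not value:
        raise RuntimeError(f"credential {var!r} not provisioned; refusing to run")
    return value
```

The password never touches the filesystem, and anyone who reads the script learns nothing. Better still, redesign the task so ordinary users never need the domain credential at all.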

Failing number 4:

Not monitoring the network and its critical events to such a level as to be able to detect key attack signatures.

The attacker was able to reinstate the guest account on the system and then give that guest account administrator-level access. Both of these events should have triggered big flashing red lights within the BA network, but both went undetected. In fact, BA did not detect the breach itself; it was reliant on a third party for the detection. Without this external assistance, the attacker might have remained on the network for substantially longer than the agreed 103 days.

Critical finding 4:

Monitoring and alerting may be unglamorous, but it is essential. Set your systems up to generate alerts for critical network events. Even that is only sufficient if someone actually responds to the alerts.

Failing number 5:

As if handing over the domain password were not enough, BA then proceeded to shoot itself in the other foot by forgetting to turn off a test logging mechanism associated with its web application. This resulted in all credit card and transaction details (including CVV) being recorded in the application log in plaintext. The data had been written to the log since the application's inception in 2015, without detection. The only saving grace was that the log limited itself to 95 days of data; otherwise it would have contained every transaction since 2015 in plaintext.

Critical finding 5:

This is somewhat complementary to critical finding 4, in that logging was enabled here (incorrectly, and in breach of PCI DSS), but nobody looked at it.

Don't create logs that expose personal data. There is no business or technical justification for it. Testing can be done in ways that never expose live data, and a significant amount of security design should go into how you test something without doing so.
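One cheap backstop is a logging filter that masks card numbers before they reach any handler. Note this is a safety net, not a fix: under PCI DSS the CVV must never be stored post-authorisation at all. The regex below is a crude sketch assuming 16–19-digit PANs:

```python
import logging
import re

# Crude PAN pattern: keep first six and last four digits (16-19 digit cards).
PAN_RE = re.compile(r"\b(\d{6})\d{6,9}(\d{4})\b")

class RedactCardData(logging.Filter):
    """Mask card numbers in log messages before any handler sees them."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = PAN_RE.sub(r"\1******\2", str(record.msg))
        return True

# Attach to whichever logger the application uses; "payments" is illustrative.
logging.getLogger("payments").addFilter(RedactCardData())
```

Had anything like this been in front of BA's test logging mechanism, three years of plaintext card data would never have been sitting on disk waiting for the attacker.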

Failing number 6:

Once the attacker had found the source code for the web application on the BA network, they were able to change it without detection. This allowed them to modify the source code to send a duplicate of all the data processed by the application to a domain that the attacker controlled.

This funnelling of traffic to the domain was what triggered the third-party detection and breach reporting to BA.

Critical finding 6:

Don't let anyone arbitrarily change your source code. There should be controls in place so that this kind of activity is detected. A change management system should control all access to source code and detect any manipulation of it.
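At its simplest, detecting manipulation means keeping a hash baseline of the deployed code and diffing against it. A file-integrity sketch follows; a real deployment would use signed releases and a CI/CD pipeline rather than this hand-rolled check:

```python
import hashlib
import pathlib

def fingerprint(root: str) -> dict:
    """SHA-256 every file under root -- the integrity baseline."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(pathlib.Path(root).rglob("*"))
        if p.is_file()
    }

def modified(baseline: dict, current: dict) -> list:
    """Files added, removed, or changed since the baseline was taken."""
    return sorted(
        {p for p in current if baseline.get(p) != current[p]}
        | {p for p in baseline if p not in current}
    )
```

Run on a schedule against the deployed web application, any non-empty result might have flagged the injected script months before a third party did.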

In Summary

The final outcome of this breach is that BA is down £20m in fines, plus a significant amount in legal fees from fighting it over an entire year. The least we can do is learn from it and make sure it doesn't happen again.
