Who monitors employee monitoring when AI is in the driving seat?

Article by Davey Winder – SC Magazine

Enterprises are increasingly monitoring employees by way of their email and social media usage. Given that this kind of monitoring is increasingly being done by AI-powered technologies, there are ethical questions that have to be asked. So SC Media UK asked them.

When Gartner surveyed large organisations last year, it discovered that more than half were employing what it refers to as “non-traditional monitoring techniques” to spy on employees. That number is expected to climb to 80 percent by next year. Non-traditional, in Gartner-speak, includes “analysing the text of emails and social-media messages, scrutinising who’s meeting with whom, gathering biometric data and understanding how employees are utilising their workspace.”

Gartner found that in 2018 some 30 percent of employees were comfortable with their emails being monitored, up from just 10 percent in 2015. When the reasoning behind the monitoring was explained to employees, however, the number who were happy with it climbed to 50 percent. At the end of last month, research from Gurucul suggested the numbers are continuing to rise: it found that 62 percent of the IT professionals asked would not be deterred from accepting a job offer from an organisation engaged in this kind of user monitoring.

“Workplace monitoring is often viewed as a spying tactic, used by paranoid or nosy employers to keep an eye on staff behaviour,” says Saryu Nayyar, CEO of Gurucul, “but it depends on the type of monitoring being utilised.” Monitoring user behaviour for the purpose of identifying unusual or ‘risky’ actions isn’t the same as monitoring a particular employee “to snoop on their Internet browsing history,” Nayyar insists, adding: “User and entity behaviour analytics is there to detect threats that would otherwise remain unknown. It’s one of the most effective ways for organisations to defend against insider threats – both malicious and accidental insider incidents.”

Which is all well and good, viewed through the lens of security posture. But what if you swap the security-tinted glasses for ones that come in a nice shade of workplace ethics? Which got SC Media UK to thinking about how AI plays into the whole ethical privacy debate surrounding this topic.

Here’s the thing: as the AI (or ML, to be more precise) learns on the go, the programmers become increasingly separated from the criteria on which the “intelligent” algorithm is making decisions. Take the example of a correlation between insider threats and staff who work after 6.30 pm. The AI determines this to be risky behaviour and starts closely watching all of these people, without taking into account that some might only start work at 6 pm, or need to be in sync with the US West Coast for the occasional conference call. It’s not a huge reach to imagine even more dubious correlations, ones that could be more controversial in context and contravene the privacy of the individual employee. But if the AI determines who is being watched and when, based on some big-data ML analysis, the question becomes: who monitors the monitoring?
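To make that failure mode concrete, here is a deliberately simplified Python sketch. It is not any vendor’s actual algorithm: the employees, the times and the 12-hour heuristic are all invented for illustration. The point is only that a rule learned from one correlation (“activity after 6.30 pm is risky”) flags people whose own working pattern makes that activity entirely normal, while a rule given individual context does not.

```python
from datetime import time

# Invented activity log for illustration: (employee, usual start time, last activity seen today).
events = [
    ("alice", time(9, 0),  time(17, 30)),   # standard office hours
    ("bob",   time(18, 0), time(23, 45)),   # evening-shift worker
    ("carol", time(9, 0),  time(21, 15)),   # stayed late for a US West Coast call
]

CUTOFF = time(18, 30)  # the learned "activity after 6.30pm is risky" correlation

def naive_flag(last_activity):
    """Apply the learned correlation blindly, with no individual context."""
    return last_activity > CUTOFF

def context_aware_flag(start, last_activity):
    """Only flag activity well outside the employee's own working pattern.

    Crude heuristic: anything more than ~12 hours after the individual's
    usual start time is treated as anomalous.
    """
    hours_in = (last_activity.hour - start.hour) % 24
    return hours_in > 12

for name, start, last in events:
    # The naive rule flags bob and carol even though both are simply doing their jobs.
    print(f"{name}: naive={naive_flag(last)}, context_aware={context_aware_flag(start, last)}")
```

Neither toy rule is “right”; the sketch simply shows how much the outcome depends on context the model may never have been given.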

“Automated and iterative machine learning algorithms reveal patterns in big data, detect anomalies, and identify structures that may be new and previously unknown, and all data gathered from AI and machine learning can be viewed and analysed to see how decisions are being made,” Nayyar told SC Media UK. “It’s all about having the ability to aggregate, correlate and analyse data from multiple sources – SIEMs, CRMs, IAM systems, HR databases, AD and more,” she continued, “to provide a 360-degree view of user and entity behaviours so that you can know what your users are doing – where, when and with what entitlements.” This is something that simply cannot be done by human analysis in large enterprise environments, Nayyar concludes.
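At its simplest, that kind of aggregation means joining events from different systems on a common user identity and ordering them in time. The sketch below is a toy illustration of the idea only: the feed names, field names and sample records are invented, and real UEBA products layer scoring, baselining and far richer entity resolution on top.

```python
from collections import defaultdict

# Invented sample records standing in for feeds from a SIEM, an HR system and Active Directory.
siem_alerts = [{"user": "jdoe", "source": "SIEM", "event": "bulk_file_download",     "when": "2020-03-02T19:40"}]
hr_records  = [{"user": "jdoe", "source": "HR",   "event": "resignation_submitted",  "when": "2020-02-28T09:00"}]
ad_logons   = [{"user": "jdoe", "source": "AD",   "event": "logon_from_new_device",  "when": "2020-03-02T19:35"}]

def build_user_timelines(*feeds):
    """Correlate events from multiple feeds into a single chronological view per user."""
    timelines = defaultdict(list)
    for feed in feeds:
        for record in feed:
            timelines[record["user"]].append(record)
    for records in timelines.values():
        records.sort(key=lambda r: r["when"])  # ISO-style timestamps sort correctly as strings
    return timelines

timelines = build_user_timelines(siem_alerts, hr_records, ad_logons)
for event in timelines["jdoe"]:
    print(event["when"], event["source"], event["event"])
```

Seen together, three individually unremarkable events start to look like the kind of pattern that, as Nayyar argues, no human analyst could reliably spot at enterprise scale.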

Patricia Vella, founder of Resilience Matters, told SC that “this is one of those situations where there are concerns when AI shows clear bias and different concerns when it doesn’t.” We know, for example, that AI decisions can be skewed by a restricted training data set as well as by algorithmic bias. “This could lead to it regarding normal behaviour carried out by a subset of employees, such as taking a regular break for Muslim prayers,” Vella continues, “as suspicious behaviour if it wasn’t seen in the original training set.” Yet Vella concedes that broadening the training data set beyond the typical male dataset “could mean there’s a danger of it being able to deduce from subtle changes in behaviour quite personal information that the employee is not yet ready to share, such as pregnancy.”

Richard Piccone, a senior cyber-security consultant at ITC Secure, has seen employee monitoring carried out in two ways in the past. In the first, existing network monitoring was leveraged: if HR suspected an issue with an employee, they would ask IT for evidential data. The second method was blanket monitoring and reporting of all employee activity, with targeted monitoring of particular groups. “In both cases I felt as though my tools were being corrupted,” he says. “Snooping on employees wasn’t their purpose, and my position as their caretaker was being abused.”

When it comes to the AI question, Piccone admits things become a little grey in shade. “When you ask who monitors the monitoring,” he suggests, “in a sense you’re asking who’s responsible for the outcomes of that monitoring.” If the security team is using user behaviour analytics to monitor for anomalies that indicate a compromise, but the HR team want to use that data to find employees breaching company policies, then “the purpose and outcome of that monitoring has changed,” Piccone explains, “and the responsibility has shifted to the HR team because they’ve defined the purpose and control the response process.”

Piccone also reminded SC Media UK that the question is somewhat answered by GDPR, as Article 22 makes provision for automated profiling and decision-making; an organisation is required to give meaningful information about the logic involved in the processing, so if nobody can determine the criteria by which the AI is making decisions or profiling an employee, the organisation risks falling foul of data protection law.

“Ultimately, organisations now have to tread very carefully when using AI or machine learning for monitoring employee behaviour,” Piccone concludes, “anonymous network monitoring is one thing, but processing personal data in the same manner opens up a world of risk that might be too much for the organisation’s appetite…”