IT Security: Artificial Intelligence as Digital Trainee

To err is human, especially in highly complex work such as software development. And not every careless mistake in a line of code immediately leads to a security-relevant problem. But who checks and decides this in IT security teams, which, with their limited resources, need to concentrate on other priorities?

IT decision makers can delegate this task to an AI: when it comes to checking large quantities of data in a short amount of time, the technology is unbeatable. An example from Microsoft: with a machine learning system, the company aims to identify and prioritise the roughly 30,000 bugs produced by some 47,000 developers (see blog post from 16 April 2020). The objective: to classify bugs as security-relevant or not, and as critical or non-critical, with a level of accuracy that comes as close as possible to that of a security expert.
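To make the idea more concrete, here is a minimal sketch of such a two-stage classification. It is deliberately not Microsoft's actual system: the bug titles, labels and pipeline (scikit-learn) are invented purely for illustration.

```python
# Hypothetical two-stage bug triage: stage 1 decides whether a report is
# security-relevant, stage 2 decides whether a security-relevant bug is critical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented bug titles with expert-assigned labels.
titles = [
    "Buffer overflow in packet parser",                # security-relevant, critical
    "SQL injection in login form",                     # security-relevant, critical
    "Verbose error message reveals library version",   # security-relevant, not critical
    "Typo in settings dialog",                         # not security-relevant
    "Crash when window is resized",                    # not security-relevant
    "Wrong icon shown in dark mode",                   # not security-relevant
]
is_security = [1, 1, 1, 0, 0, 0]

# Stage 1: is the bug security-relevant at all?
stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage1.fit(titles, is_security)

# Stage 2: among the security-relevant bugs, is it critical?
security_titles = titles[:3]
is_critical = [1, 1, 0]
stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage2.fit(security_titles, is_critical)

new_bug = ["Heap corruption when parsing TLS handshake"]
if stage1.predict(new_bug)[0] == 1:
    label = "critical" if stage2.predict(new_bug)[0] == 1 else "non-critical"
    print(f"security-relevant, {label}")
else:
    print("not security-relevant")
```

In practice, such a model would of course be trained on many thousands of expert-labelled reports rather than a handful of examples, which is exactly where the training effort discussed below comes in.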

Organising "Hybrid" Security Departments

Achieving such a goal depends not only on the data pool, but also on how the IT security experts "train" the AI with their expertise. In the Microsoft project, too, this proved to be an important success factor for the accuracy of the AI's results.

It is precisely this experience that is so exciting, because it represents a development for which there is no historical precedent and no methodological blueprint: the creation of teams consisting of those responsible for IT security and "digital trainees", i.e. technologies that perform human tasks with a more or less high degree of "intelligence" and decision-making competence. This is not limited to error classification: a specialised, well-trained AI can, for example, identify previously undetected threats, such as zero-day attacks or advanced persistent threats, better than humans can.
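A small, purely illustrative sketch of this kind of support, with invented connection features and no claim to reflect any specific product: an anomaly detector is trained on "normal" traffic only, so it can flag deviations that signature-based rules would miss.

```python
# Hypothetical anomaly detection on per-connection features:
# [bytes sent, bytes received, duration in seconds]
import numpy as np
from sklearn.ensemble import IsolationForest

normal_traffic = np.array([
    [500, 1200, 0.4],
    [450, 1100, 0.3],
    [520, 1300, 0.5],
    [480, 1250, 0.4],
    [510, 1150, 0.3],
    [470, 1220, 0.4],
    [490, 1280, 0.5],
    [505, 1190, 0.4],
])

# Train only on "normal" behaviour; deviations from it are scored as anomalies.
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

# A connection that suddenly moves far more data than usual.
suspicious = np.array([[50_000, 200, 12.0]])
print(detector.predict(suspicious))   # -1 = anomaly, 1 = inconspicuous
```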

These and other tasks are generally the starting point for those responsible for IT security as they explore the potential of AI support. What needs to be taken into consideration when organising a "hybrid" security department that integrates human employees and artificial intelligence? From our projects, we can share the following experiences and guidelines:

1. Define a clear range of tasks!

What specific support should the AI provide? Should it identify threats, take over malware prevention, or be used in another area of IT security? The more precise the task description, the higher the probability that the solution will deliver optimum results. Products from genua that use artificial intelligence were designed and developed with an understanding of the limits of this technology. Our focus is on robustness against tampering and on the usability of the methods applied, so that the products continue to deliver the expected quality in the future, even against highly advanced attackers. A clear practical benefit is also important, for example when the AI makes suggestions for classifying network devices by their behaviour or for improving security policies.
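As an illustration of that last point, here is a hypothetical sketch, not genua's actual implementation: device names and features are invented, and a human would review the suggested grouping before any policy is changed.

```python
# Hypothetical behaviour-based grouping of network devices as a basis for
# classification suggestions (e.g. "printer-like" vs. "server-like").
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented per-device features:
# [distinct peers contacted, mean packets per minute, share of outbound traffic]
devices = {
    "printer-01":   [3,   20,  0.1],
    "printer-02":   [4,   25,  0.1],
    "web-server-1": [250, 900, 0.8],
    "web-server-2": [300, 950, 0.7],
    "camera-07":    [2,   60,  0.9],
}

X = StandardScaler().fit_transform(np.array(list(devices.values()), dtype=float))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Devices in the same cluster behave similarly; an analyst reviews the suggestion.
for name, label in zip(devices, labels):
    print(f"{name}: behaviour group {label}")
```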

2. Limit the risks!

Errors are not only human: as a statistical approach, AI will also always make a certain number of mistakes. Attackers will want to exploit this as well, to lead the AI, and thus its users, into false decisions that undermine security. This risk can be reduced, for example, by using multiple AIs that cooperate or run in parallel and cross-check each other's results (a simple sketch of this idea follows below). Even in the long term, there will be fields in which AI can only play a supporting role and the last word must lie with people. This is especially the case with tasks that require a person who not only makes decisions but is also responsible for them and, if necessary, can be held liable. These and other fields of application should be clearly differentiated from one another when designing a hybrid IT security team, so as to avoid unrealistic objectives.


Even in the long term, there are fields in which the last word must lie with people.


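The cross-checking idea mentioned above, in its simplest form and with invented feature values: two independently trained detectors vote, and whenever they disagree the event is escalated to a human analyst instead of being decided automatically.

```python
# Hypothetical cross-check between two detectors; disagreement goes to a human.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Invented baseline of "normal" events with two features each.
baseline = np.random.RandomState(0).normal(loc=[500, 1200], scale=[50, 100], size=(200, 2))

detector_a = IsolationForest(random_state=0).fit(baseline)
detector_b = LocalOutlierFactor(novelty=True).fit(baseline)

def triage(event):
    votes = (detector_a.predict([event])[0], detector_b.predict([event])[0])
    if votes == (-1, -1):
        return "block automatically"      # both detectors agree: anomaly
    if votes == (1, 1):
        return "allow"                    # both detectors agree: normal
    return "escalate to human analyst"    # the detectors disagree

print(triage([510, 1180]))    # a normal-looking event
print(triage([50000, 90]))    # a clearly anomalous event
```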
3. Take training time into account!

If the AI is to make correct decisions in a matter of seconds, it must first be trained appropriately. A suitable data set has to be selected: since the AI learns from examples, not only the "normal state" but also the kinds of deviations to be detected must be present and identifiable in it. Companies should allocate a longer period of time for this, possibly spanning several weeks. Furthermore, the AI must be retrained regularly, because external circumstances change over time. For the human colleagues, this is an automatic learning process in everyday work; for the AI, it must be included in the planning and design from the start. The time invested here pays off later, when the AI consistently delivers good results and supports its users over the long term.
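A minimal sketch of such regular retraining, with a hypothetical weekly job and invented data: the detector is re-fitted on a rolling window of recent, expert-reviewed baseline samples, so it keeps track of the changing "normal state".

```python
# Hypothetical retraining loop on a rolling window of reviewed baseline data.
from collections import deque
import numpy as np
from sklearn.ensemble import IsolationForest

WINDOW = 10_000                       # keep the most recent N reviewed samples
baseline = deque(maxlen=WINDOW)       # oldest samples fall out automatically
model = None

def add_reviewed_samples(samples):
    """Store new samples that an analyst has confirmed as normal."""
    baseline.extend(samples)

def retrain():
    """Re-fit the detector, e.g. from a weekly scheduled job."""
    global model
    model = IsolationForest(random_state=0).fit(np.array(baseline))

# Example: initial training, then a later refresh on fresher data.
add_reviewed_samples(np.random.RandomState(1).normal([500, 1200], [50, 100], size=(5000, 2)))
retrain()
add_reviewed_samples(np.random.RandomState(2).normal([650, 1400], [60, 120], size=(5000, 2)))
retrain()                             # now reflects the shifted "normal state"
```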

Mastering Dynamics and Complexity

At present, the focus of AI in hybrid security departments may still lie on detecting and automatically classifying complex information flows. But in our projects we have seen that the range of tasks is clearly continuing to develop. One clear direction is the further development of the AI's role from "tool" to "supporter", as in solutions that have already been implemented in practice, such as cognitix Threat Defender. In this role, the AI provides suggestions for implementing and improving security measures to ensure the best possible protection in a complex and dynamic environment. Whether AI will actually be accepted as a "co-worker" in the future depends, of course, on other factors, such as a high-performance combination with technologies like voice control, avatars or interfaces. And perhaps also on whether it occasionally makes a perfectly "human", and forgivable, mistake.