Artificial expectations of cyber intelligence

Can we improve cyber resilience?

I have racked my brain trying to understand how America, long regarded as a model of resilience, could allow itself to be picked apart digitally. We are taking a beating from nation states and criminal hackers. The damage tears at the fabric of democracy, which rests on systems of trust. Worse, citizens are quickly coming to expect that everything will eventually be breached, and to ask: so why should I care?

It’s serious. Cybersecurity experts predict that the growing success of illegal system access will have a serious impact on the global economy. Meanwhile, as a sign of what is coming, people are surrendering their privacy and won’t even change default passwords. That shifts system and network vulnerabilities into a more personal place: our daily lives. The question becomes how to stop the attacks and limit the severity of their consequences. And how do you minimize vulnerabilities and mount a defense if citizens won’t, or can’t, do their part?

If we applied all the best practices and tools currently available, we could eliminate an estimated 80% of breaches. The problem is actually getting government, industry and academia to use them. We can safely predict that such a broad change in behavior will never happen, for a multitude of reasons. Meanwhile, attackers’ capabilities improve daily. So perhaps our next best hope is to unleash computer systems that can artificially do some of the things we humans should be doing.

While the need for a well-trained workforce is evident, security professionals believe that artificial intelligence (AI) may be a viable way to thwart maturing cyberattacks. Experts believe smart computers will take on challenges that human professionals cannot, and that AI’s growing role in cybersecurity can help us understand emerging threats better and faster in an ever-changing security landscape. Yet, because of several kinds of bias, AI can also mislead us, making us feel safe when we are not.

AI, also known as machine intelligence, is a branch of computing that aims to generate human-like responses faster than any human: recognizing threats, detecting and solving problems through automated analysis, and learning to act accordingly. It plays, and will continue to play, a vital role in developing the cybersecurity technology needed to ensure that we are not one day sitting in the dark. AI can provide capabilities that a human workforce can only dream of.
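To make the idea of automated threat recognition concrete, here is a minimal sketch of the kind of rule an AI-driven monitor might apply. Everything in it (the event format, the IP addresses, the threshold) is hypothetical and chosen for illustration; a learning system would derive the threshold from historical traffic rather than hard-code it.

```python
from collections import Counter

def flag_suspicious_ips(login_events, threshold=10):
    """Flag source IPs whose failed-login count exceeds a threshold.

    login_events: list of (ip, succeeded) pairs from a hypothetical log.
    A real AI system would learn the threshold instead of fixing it.
    """
    failures = Counter(ip for ip, ok in login_events if not ok)
    return {ip for ip, count in failures.items() if count > threshold}

# Illustrative log: one IP hammering the login endpoint, one normal user.
events = [("10.0.0.5", False)] * 12 + [("10.0.0.9", True)] * 3
print(flag_suspicious_ips(events))  # {'10.0.0.5'}
```

The point is not the arithmetic but the pipeline: observe, count, compare against an expectation, act. AI systems automate and continually refine each of those steps.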

AI system designers literally seek to give a system “a mind of its own”. To get there, however, AI requires humans to write the algorithms and map the capabilities. They tell the AI mechanism what to look for, where to look for it, how to look for it, and what to do when a target has been identified. All of that promise can be corrupted from the start: when we do not get the results we expect, human biases are often implicated. The same is true when we apply AI to cybersecurity. These AI biases fall into three domains: the program, the data and the people.

Successfully identifying cybersecurity threats requires knowing what to look for. An AI cybersecurity program focused on the wrong vulnerabilities will surely fail to detect the real security hazards. No matter how accurate an algorithm is, if it is programmed to satisfy a faulty requirement, it will not identify a true solution, and a business or organization will remain at risk of falling victim to a cyberattack.

The success of AI in cybersecurity also depends on using representative data that paints the whole picture. Biased data gives the AI system only a partial understanding of the problem, so its response rests on a narrow perception that may not achieve the goal. Given our highly diverse risk ecosystem, it is important to have a comprehensive perspective of what is at stake. Increasingly sophisticated hackers use many different routes to penetrate our systems, so it is vital that the data captures all of them.
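The data-bias problem can be shown with a toy example. Suppose a detector is “trained” only on high-volume attacks; the numbers, the request-rate feature and the learning rule below are all invented for illustration, not a real model. The learned cutoff then lets a low-and-slow attack pass unnoticed:

```python
# Toy illustration of biased training data, not a production model.
def train_rule(samples):
    """Learn a naive cutoff: the lowest request rate labeled as an attack.

    samples: list of (requests_per_minute, is_attack) pairs.
    """
    attack_rates = [rate for rate, is_attack in samples if is_attack]
    return min(attack_rates)

def detect(cutoff, rate):
    """Flag traffic at or above the learned cutoff as an attack."""
    return rate >= cutoff

# Biased data: every labeled attack is a noisy, high-volume one.
biased_training = [(500, True), (800, True), (20, False), (30, False)]
cutoff = train_rule(biased_training)  # learns 500

print(detect(cutoff, 600))  # True  – the attack type it was trained on
print(detect(cutoff, 45))   # False – a slow, stealthy attack slips through
```

A more representative dataset, one that includes stealthy attacks, would force the system to learn a richer rule than a single volume threshold.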

Finally, human bias is one of the main factors inhibiting AI. Cybersecurity is an ever-changing field of threats, risks and technologies. People who share the same background or culture may not bring the full perspective that today’s diverse context demands. Their limited breadth of knowledge produces biased algorithms, and the vicious cycle begins again.

Eradicating all of these biases is not realistic, but minimizing them certainly is. Careful planning and testing are critical in charting the path forward for identifying cyber threats. Not everyone is tech-savvy, and most people do not choose to be. Training AI systems, however, will require experts from a multitude of fields to ensure that technologies like AI enhance our lives rather than erect artificial obstacles to our growth in cybersecurity.


About Donald J. Beadle
