This post captures my talking points from the recent conference panel on the “Use of AI for Cybersecurity” at the Intelligence and National Security Alliance (INSA) conference. You can find my musings on the term “AI” in my previous blog post.
Here is the list of topics I injected into the panel conversation:
- Algorithms (AI) are Dangerous
- Privacy by Design
- Expert Knowledge over algorithms
- The need for a Security Paradigm Shift
- Efficacy in AI is non-existent
- The need to learn how to work across disciplines
Please note that I am following in the vein of the conference and won’t define specifically what I mean by “AI”. Have a look at my older blog posts for further opinions. Following are some elaborations on the different topics:
- Algorithms (AI) are Dangerous – We allow software engineers to use algorithms (libraries) without understanding what results those algorithms produce. There is no demand for oversight – imagine the wrong algorithm being used to control an industrial control system. Also realize that it’s not about adopting the next innovation in algorithms. When deep learning entered the arena, everyone tried to apply it to their problems. Guess what: barely any problem could be solved by it. It’s not about the next algorithm; it’s about how these algorithms are used, and the process around them. Interestingly enough, one of the most pressing and oldest problems that every CISO is still wrestling with today is ‘visibility’: visibility into what devices and users are on a network. That has nothing to do with AI. It’s a simple engineering problem, and we still haven’t solved it.
- Privacy by Design – The entire conference day didn’t talk enough about this. In a perfect world, our personal data would never leave us. As soon as we give information away, it is exposed and can, and probably will, be abused. How do we build systems that honor this principle?
- Expert Knowledge – Expert knowledge is still more important than algorithms. We have this illusion that AI (whatever that is) will solve our problems by analyzing data, instead of using “AI” to augment human capabilities. In addition, we need experts who really understand the problems: domain experts, security experts, people with the experience to help us build better systems.
- Security Paradigm Shift – We have been doing security the wrong way. For two decades we have engaged in the security cat-and-mouse game. We need to break out of that, and only an approach based on understanding behaviors can get us there.
- Efficacy – There are no established approaches for describing how well an AI system works. Is my system better than someone else’s? How do we measure these things?
- Interdisciplinary Collaboration – As highlighted in my ‘expert’ point above, we need to focus on people, and especially on domain experts. We need multi-disciplinary teams: psychologists, counterintelligence people, security analysts, systems engineers, and so on, collaborating to come up with solutions to combat security issues. There are dozens of challenges with these teams, even with something as simple as terminology or a common understanding of the goals being pursued. And this is not security specific; every area has this problem.
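To make the ‘visibility’ point above concrete, here is a minimal sketch of what that engineering problem boils down to: diffing the asset inventory (what we think is on the network) against what is actually observed. All device identifiers below are made up, and real observation sources (DHCP leases, ARP tables, flow data) are far messier.

```python
# Toy illustration of the "visibility" problem: compare the devices we
# *think* are on the network (the asset inventory) against the devices
# actually observed (e.g., from DHCP leases or ARP tables).

def visibility_gap(inventory, observed):
    """Return (unknown, stale): devices seen but not inventoried,
    and inventory entries that were never observed."""
    inventory, observed = set(inventory), set(observed)
    unknown = observed - inventory   # devices nobody accounted for
    stale = inventory - observed     # entries that may be outdated
    return unknown, stale

# Hypothetical MAC addresses:
inventory = {"aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:03"}
observed  = {"aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", "bb:bb:bb:bb:bb:99"}

unknown, stale = visibility_gap(inventory, observed)
print(sorted(unknown))  # the device on the network that nobody knows about
print(sorted(stale))    # the inventoried device that was never seen
```

The hard part is not the set difference; it is reliably populating both sets in the first place.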
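As a sketch of what “understanding behaviors” could mean in practice (the data and threshold are entirely hypothetical), one can baseline an entity’s normal behavior and flag deviations, rather than matching known-bad signatures:

```python
# Toy behavior-based detection: baseline a user's historical login
# hours, then flag logins that deviate strongly from that baseline.
from statistics import mean, stdev

def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Flag a login hour more than `threshold` standard deviations
    away from the user's historical mean login hour."""
    mu = mean(history_hours)
    sigma = stdev(history_hours) or 1.0  # guard against zero variance
    return abs(new_hour - mu) / sigma > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]  # typical office-hours logins
print(is_anomalous(history, 9))   # False: within the baseline
print(is_anomalous(history, 3))   # True: a 3 a.m. login stands out
```

Real behavioral models are of course far richer than one feature and a z-score, but the shift is the same: model what is normal instead of enumerating what is bad.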
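On the efficacy point: even without agreed-upon benchmarks, two detection systems can at least be compared on the same labeled data with standard metrics. The confusion counts below are made up purely for illustration.

```python
# Comparing two hypothetical detection systems on identical labeled
# events using precision, recall, and F1 score.

def metrics(tp, fp, fn):
    """Compute (precision, recall, f1) from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Made-up counts: system A is precise but misses more attacks;
# system B catches more attacks but is noisier.
system_a = metrics(tp=80, fp=20, fn=40)
system_b = metrics(tp=100, fp=60, fn=20)
print(system_a)
print(system_b)
```

Which system is “better” depends on whether missed attacks or analyst time wasted on false alarms costs more, which is exactly why a shared way of talking about efficacy matters.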
The following was a fairly interesting point mentioned during one of the other conference panels. This is a non-verbatim quote:
AI is one of the poster children of bipartisanship. Ever want to drive bipartisanship? Engage on an initiative with a common economic enemy called China.
Oh, and just so I have written proof when the time comes: China will win the race on AI! Contrary to what some of the other panels suggested. Why? Let me list just four thoughts:
- No privacy laws or ethical barriers holding back any technology development
- The availability of many cheap and, in many cases, very sophisticated resources
- The vast and incredibly rich amount of data and experience already collected, from facial recognition to human interactions with social currencies
- A government that controls industry
I am not saying any of the above are good or bad. I am just listing arguments.