AI/Machine Learning & the future of CyberSecurity
Threats, Challenges and Opportunities.
--
Insights and excerpts from the IEEE Confluence report on Cybersecurity
The world has become a closely knit place due to fast-growing technology. Computers and the Internet have become indispensable. The growth of smart devices has given a huge push to wireless technologies like Bluetooth and Wi-Fi. We are in the era of smart devices, from smartphones and smart TVs to the many other devices that constitute the Internet of Things. However, as they say, with great power comes great responsibility; in the same way, with great smartness (smart devices) comes great vulnerability — vulnerability in terms of cybersecurity. The U.S. alone loses between $57 billion and $109 billion per year to malicious cyber activity, according to an estimate published by the White House Council of Economic Advisers.
The core function of cybersecurity is protecting information and systems from major cyberthreats. Keeping pace with cybersecurity strategy and operations can be a challenge, particularly in government and enterprise networks where, in their most disruptive form, cyberthreats often take aim at the secret, political, military, or infrastructural assets of a nation or its people. Due to the ever-evolving nature of cyberthreats, this has become an increasingly pressing issue. Researchers have been working hard to apply machine learning and AI principles to cybersecurity, and the results are promising. With more and more research underway, certain weak areas need to be identified and eliminated. A group of 19 IEEE experts in ML/AI got together to focus on a vital question:
Given the rapid evolution of AI/ML technologies and the enormous challenges we all face with respect to cybersecurity, what is needed from AI/ML, where can it be best applied and what must be done over the next ten years?
Cybersecurity threats have changed in three crucial ways in the recent past:
- MOTIVE: In the past, viruses were introduced by curious programmers. Today, cyberattacks are often the result of well-executed plans by trained militaries in support of cyber warfare.
- SPEED: The rate at which an attack spreads has increased; an attack can affect computers all over the globe in seconds.
- IMPACT: The potential impact has increased manifold due to the wide penetration of the Internet.
Keeping up with the rate and speed of attacks is impossible for humans, but not for computers. Hence, using ML/AI for cyber safety is more a necessity than a matter of choice. But there is a catch. Computers, unlike humans, are very good at repeating things: they can repeat a task millions of times without any hassle. But this can also include doing the wrong thing. It is pertinent to note that while it is good to increase the use of ML/AI, due diligence must also be paid to the fact that bad actors have these same algorithms at their disposal.
In their conference trend paper, the researchers addressed six facets of the intersection of AI/ML with cybersecurity.
1. Legal and Policy issues
AI and machine learning are buzzwords today, with implementations in almost every field. The results have been promising, leading to a belief that AI/ML applications will always succeed. While there is no denying that AI/ML promises to improve some aspects of defence through automation, great caution is needed when deploying such systems. Any small glitch or error may jeopardize national security and the social structure. In 2016, the Mirai botnet started the era of botnet attacks. It involved multiple distributed denial-of-service (DDoS) attacks targeting systems operated by Domain Name System (DNS) provider Dyn, which made major Internet platforms and services unavailable to large swathes of users in Europe and North America. Now imagine the catastrophe that could be caused by a highly sophisticated ML program.
The other downside is that if a developer loses control of an AI program and it causes a disaster, the general public's trust in the use of AI and ML for good will be broken. Hence, the use of ML/AI for cyber safety should have some legal binding, and adequate care should be taken in its creation, deployment, and use.
2. Human Factors
Stanislav Petrov, a Soviet officer, averted a nuclear war in 1983 through sheer presence of mind and experience. Petrov had been assigned to the Serpukhov-15 secret command centre outside Moscow. There, the attack-detection algorithms warned that the U.S. had launched five intercontinental ballistic missiles at the Soviet Union. The normal tendency for anybody in that situation would be to panic and report the situation to a superior. Petrov did just the opposite. With years of experience, he knew the loopholes of the system and could not trust it completely, and this eventually proved correct. The predictive algorithms had miscalculated: the alarm had been falsely triggered by the sun's reflection off clouds, a data input the system's programmers had apparently not adequately anticipated.
This incident clearly highlights that the future of security through ML/AI will require not only technical trust but human trust as well. Even the most sophisticated computer systems can fail. The question is whether they will "fail well", i.e., minimize harm. Indeed, it is these human trust factors in the operationalization of AI/ML systems that will dictate their adoption rates.
3. Data: New Information Frontiers
Data is a critical feature. Security ML/AI algorithms will realize their true potential only when trained on large, diverse training data sets. Here, not only quantity matters; quality is also a key player. Even though large and rapidly growing volumes of data are available today, most of that data lacks completeness. This is because:
a) Most devices in use were not primarily designed with instrumentation and measurement as integral features; the data available from them fails to capture critical points.
b) Most of the time, individuals and companies do not disclose data pertaining to cybersecurity events, either due to reputational concerns (as disclosure reduces stakeholders' confidence) or due to legal and privacy concerns.
Other concerns associated with data are integrity and relevance. It is easy to generate simulated data sets, but they do not capture reality. Data should also be updated regularly to reflect all recent attacks. Data collection techniques, by their very nature, often include unintended human and technical biases. Understanding, documenting, and sharing those biases is important to ensure AI/ML effectiveness and operation. Data integrity also affects human confidence in AI/ML. If the AI/ML training data set is incomplete, includes questionable biases, or is, in general, not fully understood, then confidence in the entire system is diminished. Preprocessing of the data prior to training can also alter data integrity and reduce confidence.
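As a toy illustration of documenting a data set's gaps and biases before training (not something the report itself specifies — the record fields here are hypothetical), a simple audit can report class balance and the fraction of incomplete records:

```python
from collections import Counter

def audit_dataset(records, label_key="label"):
    """Report class balance and incomplete records in a labelled data set."""
    total = len(records)
    labels = Counter(r.get(label_key, "<missing>") for r in records)
    incomplete = sum(1 for r in records if any(v is None for v in r.values()))
    return {
        "total": total,
        "class_fractions": {k: v / total for k, v in labels.items()},
        "incomplete_fraction": incomplete / total,
    }

# Hypothetical traffic records: mostly benign, few attacks, some with missing fields.
data = (
    [{"bytes": 100, "label": "benign"}] * 95
    + [{"bytes": None, "label": "attack"}] * 5
)
print(audit_dataset(data))
```

Run on real training data, a report like this makes the imbalance (here, 95% benign) and the missing-field rate explicit, so they can be shared alongside the model rather than discovered after deployment.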
4. Hardware for AI/ML and Cybersecurity
Today, a network implies human users connected by smart devices all over the globe; it is no longer merely defined by the electronic equipment in a room or building. Due to such vastness, it becomes an easy target for cyberattacks aided by AI bots. Many leading CISOs admit that a cyberattack is no longer a question of whether they will be hacked, but when.
Hardware holds the key in more than one way.
- by incorporating security into hardware designs
- by creating hardware network architectures that can intelligently monitor the network's security state
- by creating hardware that allows AI/ML systems to solve more complex problems by eliminating existing compute barriers
AI requires a great deal of computing hardware for training, which prevents the real-time threat assessment and response that cybersecurity demands for new threats. The solution is for computer hardware engineers to change their approach to computing: emphasis should be placed on how data flows through a processor rather than on how computations are done. Academia, funded by government agencies and industry, can lead the way by experimenting with new and novel outside-the-box architectures. Innovative approaches are the only way to shake up a field that hasn't effectively changed in the last 50 years. Without a new architecture, AI/ML will be unable to solve large-scale problems such as those in the cybersecurity application.
AI/ML can also be utilized to design and implement better hardware; AI can be incorporated into current design tools. Even plugging a few hardware bugs would go a long way toward making the network secure, because hardware faults and design errors are among the most reliable targets for exploits. Based on a 2015 study by MITRE, 2,800 cyberattacks could be traced back to seven classes of hardware bugs. Eliminating these bugs using AI/ML in the design process will close several attack avenues used by hackers. This effort should be mostly industry focused, with the government playing a supporting role in encouraging the development of these systems.
5. Software And Algorithms for AI/ML and Cybersecurity
Because typical cybersecurity data sets are extremely large, networks for data delivery and the processing of ML models must be capable of efficiently handling staggering amounts of diverse data. The scarcity of such networks today is a major hindrance to progress in the field. Achieving such networks for real-time analytics requires even more careful software design and algorithms.
Natural language processing (NLP) makes it possible to derive actionable insights from previously inaccessible data. Analyzing unstructured text with NLP enables the extraction of key actors from past cyber incidents, news stories, analysis reports, and many other similar text sources.
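As a minimal stand-in for the NLP pipelines described above (the report does not specify an implementation), the sketch below pulls two kinds of indicators out of free-form incident text with regular expressions — CVE identifiers and IPv4 addresses. A production system would use a full NLP toolkit for entity and actor extraction; this only illustrates the "unstructured text in, structured indicators out" idea:

```python
import re

# Simple patterns for two common indicator types in incident write-ups.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_indicators(text):
    """Extract CVE IDs and IPv4 addresses from unstructured incident text."""
    return {
        "cves": sorted(set(CVE_RE.findall(text))),
        "ips": sorted(set(IPV4_RE.findall(text))),
    }

report = ("Scanning from 203.0.113.7 exploited CVE-2017-0144; "
          "callbacks went to 198.51.100.23.")
print(extract_indicators(report))
# {'cves': ['CVE-2017-0144'], 'ips': ['198.51.100.23', '203.0.113.7']}
```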
Cybersecurity is highly dynamic because the underlying technologies are evolving rapidly, and the offense and defense are locked in a threat–response–threat coevolution. This dynamic and constantly evolving landscape requires constant vigilance and updates to threat classification, identification, and response.
Finally, the adversarial nature of the cyber domain presents a modeling challenge that is also an opportunity. Cyber competitions, in which teams act and react to others, are valuable laboratories to explore interactions. The goal of these experiments is to imitate processes by which an adversary learns of defensive measures and then preempts evasive measures. Understanding an adversary’s strategy, then, helps refine the models.
6. Operationalization: Putting It All Together
The world has finite resources to dedicate to improving cybersecurity, a fact that will inevitably lead to issues of resource allocation. A properly developed and deployed AI/ML system would be highly desirable to give the defenders an advantage over the attackers.
But every possibility holds an opportunity. Through hardware and software improvements, organisations will, over time, be better able to integrate AI/ML systems into their cybersecurity frameworks — something that was next to impossible even a few years ago. AI/ML will help create integrated meaning from hundreds of thousands of disparate data streams; support automated, real-time prevention platforms; and augment humans' decision-making ability.
Probabilistic AI/ML systems will need to learn while avoiding misclassification in terms of frequency or severity (in the eyes of the user, not the security specialist) that could lead to distrust and disbelief — electronic versions of the boy who cried wolf, in a sense. The punishment in the story was that the boy was eaten; the outcome in this discussion could be reduced business growth due to general distrust of computer technology.
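The "cried wolf" trade-off can be made concrete with a small, entirely synthetic threshold sweep: raising the alert threshold cuts false alarms on benign traffic but also misses more real attacks, and where that line sits should reflect the user's tolerance, not only the security specialist's. The scores below are invented for illustration:

```python
def alarm_rates(benign_scores, attack_scores, threshold):
    """False-alarm rate on benign traffic and detection rate on attacks
    for a given alert threshold over model risk scores in [0, 1]."""
    fps = sum(s >= threshold for s in benign_scores)
    tps = sum(s >= threshold for s in attack_scores)
    return fps / len(benign_scores), tps / len(attack_scores)

# Synthetic risk scores: benign traffic clusters low, attacks cluster high.
benign = [0.05, 0.10, 0.20, 0.30, 0.55, 0.60, 0.15, 0.25, 0.35, 0.40]
attack = [0.45, 0.70, 0.80, 0.90, 0.95]

for t in (0.3, 0.5, 0.7):
    fa, det = alarm_rates(benign, attack, t)
    print(f"threshold={t}: false-alarm rate={fa:.0%}, detection rate={det:.0%}")
```

At a threshold of 0.3 every attack is caught but half the benign traffic also raises alarms; at 0.7 false alarms vanish but one attack in five slips through. The system that "cries wolf" least is not automatically the one users should trust most.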
Repairing or mitigating vulnerabilities will remain a challenge. Most users either do not know or do not have a way to report discovered vulnerabilities. Current use cases, such as fraud detection in the banking industry and diagnosis in the health-care industry, serve as enablers for the future operationalization of AI/ML in the cybersecurity domain. Although not all use cases and current AI/ML algorithms are designed to be employed in real-time environments, they serve as foundations for real-time detect–defend or defend–attack situations in cybersecurity. For certain domains, the ability to consciously disable AI/ML actions or disregard recommendations is an enabler of AI/ML operationalization for cybersecurity. In such cases, it is important to have the ability to disable or alter specific system aspects without necessarily turning everything off while, at the same time, comprehending any repercussions.
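One way to realize the "disable specific system aspects without turning everything off" property described above — a sketch under assumed interfaces, not anything the report specifies — is to gate each automated action behind its own capability switch, so an operator can suspend, say, auto-blocking while detection keeps running:

```python
class GuardedResponder:
    """Runs automated responses only for capabilities an operator has enabled."""

    def __init__(self):
        self.enabled = {"detect": True, "auto_block": True}

    def disable(self, capability):
        # Operator override of one capability; the rest keep running.
        self.enabled[capability] = False

    def handle(self, alert):
        actions = []
        if self.enabled["detect"]:
            actions.append(f"logged:{alert}")
        if self.enabled["auto_block"]:
            actions.append(f"blocked:{alert}")
        return actions

r = GuardedResponder()
r.disable("auto_block")           # suspend one capability only
print(r.handle("198.51.100.23"))  # detection still logs the alert
# ['logged:198.51.100.23']
```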
AI/ML will become one of the key components of next-generation security, enabling elevated degrees of cybersecurity. At the same time, AI/ML can become a threat in the hands of attackers. So we need to act wisely, and act early, before the bad guys take over.
Reference: https://www.ieee.org/about/industry/confluence/feedback.html