Artificial intelligence and machine learning (AI/ML) have made inroads into enterprises for a variety of uses, including decision support, product recommendations and process control. These systems employ big-data concepts to train software algorithms that evaluate data and respond much as human decision-makers would.
These systems are trained on data collected in the problem domain, which is used to successively adjust the algorithms until they model that domain. For example, a retailer might use detailed data on sales experiences to recommend additional products for shoppers to purchase. By correlating purchases made by past customers, the retailer may be able to entice shoppers to make larger purchases than they had originally intended.
Increasingly, AI/ML systems are being employed in imaginative ways to provide intelligence to enterprise IT security systems. IT development and operations tend to produce large amounts of data, especially if all logging systems are engaged. Network and security systems can log and store data on users, systems and network activities at a very detailed level.
Learning To Recognize Potential Threats
How do AI/ML applications for cybersecurity make use of this data? Start with an ML model. This model incorporates a set of mathematical algorithms that manipulate the input data in successive layers. At the output layer, the result is a determination of whether a particular activity or set of activities represents a threat. The ML model “learns” by matching input data with known output results and adjusting the algorithms with each subsequent pass through the data to better predict the correct output.
It’s not really learning in the human sense; instead, we’re adapting a generic model to more accurately produce correct results for given inputs. It’s more of a curve-fitting exercise for complex combinations of data. But by training ML models on data passed through layers of algorithms, these systems can analyze large amounts of data in real time while mimicking the decision-making processes of trained professionals.
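The “repeated adjustment” described above can be sketched in miniature. This is a hypothetical example, not any vendor’s actual system: a tiny logistic model trained by gradient descent on made-up event features (failed logins and outbound data volume, scaled to 0–1), where each pass nudges the weights toward the known labels.

```python
import math

# Toy labeled events: (failed_logins, bytes_out) -> 1 if threat, 0 if benign.
# Values are purely illustrative, not drawn from any real dataset.
DATA = [
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.3), 0), ((0.2, 0.4), 0),
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((1.0, 0.7), 1), ((0.7, 1.0), 1),
]

def predict(w, b, x):
    """Logistic output: estimated probability that the event is a threat."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Each pass through the data adjusts the weights to better predict
    the known labels -- the curve-fitting 'learning' described above."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y          # prediction error on this example
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(DATA)
print(predict(w, b, (0.15, 0.2)))   # benign-looking event scores low
print(predict(w, b, (0.90, 0.9)))   # threat-like event scores high
```

Nothing here is specific to security; the same fit-and-adjust loop underlies the product-recommendation example earlier in the article.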
Data science gets part way there.
Much of the identification of anomalous events can be done through straightforward data science using correlations or standard deviations. These techniques are useful in identifying and classifying out-of-range events in real time, giving security analysts and IT professionals an idea of what’s anomalous in individual events.
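The standard-deviation approach is simple enough to show directly. A minimal sketch, using hypothetical daily outbound-transfer volumes for one user: anything more than two standard deviations from the mean is flagged as out-of-range.

```python
import statistics

# Hypothetical daily outbound-transfer volumes (MB) for one user; the
# final value is an out-of-range spike.
volumes = [12, 15, 11, 14, 13, 16, 12, 14, 13, 240]

mean = statistics.mean(volumes)
stdev = statistics.stdev(volumes)

def z_score(x):
    """Distance from the mean, measured in standard deviations."""
    return (x - mean) / stdev

# Flag anything more than 2 standard deviations from the mean.
outliers = [v for v in volumes if abs(z_score(v)) > 2]
print(outliers)
```

Note what this does and doesn’t tell you: the spike is statistically unusual, but as the next paragraphs explain, unusual is not the same as malicious.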
The problem with fundamental data science approaches is that correlation doesn’t imply causation. In other words, just because an activity is an outlier doesn’t make it a potential threat. There could be many reasons for such outliers, including random chance, new applications and workers filling in for one another. There may be a relationship between an unusual event and a threat, but the relationship may be spurious.
It’s possible for data science to generate many false positives such as these. That becomes an issue when security professionals are tasked with investigating every outlier — a process that can take more time than is available to them.
That’s where AI/ML models come in. These models look for more than just outliers from normal activity. They also assess, based on past learning experiences, the level of risk inherent in a particular activity. This lets security professionals look into the riskiest events, rather than having to spend time on all outliers. In other words, AI/ML helps narrow the playing field, determining the risk of specific events by taking history and experience into account.
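To make the triage idea concrete, here is a hedged sketch. The event kinds, the per-kind base risks and the after-hours multiplier are all invented stand-ins for what a trained model would learn from labeled history; the point is only the shape of the workflow, ranking flagged outliers by risk so analysts review the worst first.

```python
# Hypothetical outlier events already flagged by basic statistics.
events = [
    {"id": 1, "kind": "login_from_new_city", "after_hours": False},
    {"id": 2, "kind": "bulk_download",       "after_hours": True},
    {"id": 3, "kind": "new_application",     "after_hours": False},
]

# Stand-in for learned knowledge: base risk per event kind. A real
# system derives these from labeled history rather than hard-coding them.
BASE_RISK = {
    "login_from_new_city": 0.4,
    "bulk_download": 0.7,
    "new_application": 0.1,
}

def risk_score(event):
    """Combine the learned base risk with context (here, time of day)."""
    score = BASE_RISK.get(event["kind"], 0.5)
    if event["after_hours"]:
        score = min(1.0, score * 1.3)   # assumed context multiplier
    return score

# Analysts work the riskiest outliers first instead of investigating all.
triage = sorted(events, key=risk_score, reverse=True)
print([e["id"] for e in triage])
```

The benign new-application event sinks to the bottom of the queue, which is exactly the false-positive relief the article describes.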
Data access is the key ingredient.
One of the challenges is getting the data from logs and other data-collecting tools into cybersecurity tools in real time. It’s not good enough to store log files in a data lake and then perform individual queries on it. Even though organizations will likely want to keep log data for a period of time, any AI/ML system actively looking for potential threats has to be able to evaluate that data in close to real time.
It helps to have direct APIs from a variety of data sources connected to the AI/ML system. A direct connection makes it fast and easy to pull data from log files and other sources so it can be examined by the AI/ML cybersecurity system in near real time. You don’t have to wait for batch sources to accumulate and stream data to the AI/ML system; it happens as the events are recorded.
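The difference between batch queries and a direct feed can be sketched as follows. The event stream here is a hypothetical stand-in for an API connection, and `evaluate` is a placeholder for the model’s per-event scoring; the point is that each event is scored the moment it is recorded, with no data-lake round trip.

```python
import json

def log_stream():
    """Stand-in for a direct API feed: yields events as they are recorded,
    instead of waiting for a batch export to accumulate."""
    raw_events = [
        '{"user": "alice", "action": "login", "failures": 0}',
        '{"user": "bob", "action": "login", "failures": 7}',
        '{"user": "carol", "action": "download", "failures": 0}',
    ]
    for line in raw_events:
        yield json.loads(line)

def evaluate(event):
    """Placeholder for the AI/ML model's per-event scoring."""
    return event.get("failures", 0) >= 5

# Score each event as it arrives and alert on the risky ones.
alerts = [e["user"] for e in log_stream() if evaluate(e)]
print(alerts)
```

In practice the generator would be replaced by the log platform’s streaming API, but the near-real-time flow is the same.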
Overall, AI/ML models have the potential to pinpoint genuine threats by learning about the computing environment and using that knowledge to weed out false positives. Given the growing number of threats, and the sophistication of attacks in recent years, any intelligent assistance security professionals can draw on makes a difference to an organization.
Original article written by: Saryu Nayyar | Forbes
As your business grows, safeguarding the applications and systems it relies on requires an approach that balances accessibility with cybersecurity. At Raptor IT Consultants, our mission is to establish a foundation for your network resources that empowers users to work efficiently, while offering scalable, managed IT services that complement any business model, affordably. #raptoritnetwork