Machine Learning

Cyber Security and Machine Learning

To talk about the relationship between cyber security and machine learning, we first need to recognize a shift in mindset. In the past, cyber security focused on blocking intruders from outside our network; today, we have to assume that intruders are already among us. They have invaded our systems and are doing, or are about to do, damage. Whatever a compromised device or machine is doing, it is behaving abnormally. In that sense, cyber security becomes anomaly detection: by learning how machines normally behave, we can identify unusual behavior, find the intruders, and shut them down.

First, let’s take a look at the different kinds of cyber attacks. The major types of cyber attack that can benefit from machine learning include:

Malware – software installed from attachments in phishing emails or from web sites with malicious links. Natural language processing can help analyze the content of text distributed within the network, block malicious content, and alert users. Malware also tends to use resources intensively, so it can be pinpointed by monitoring CPU usage. Installing anti-malware software that maintains a library of known malicious files and IP addresses helps as well.

Zero-day attack – hackers exploit a software vulnerability that is not yet publicly known. To reduce the impact, patches need to be installed as soon as they are available, and unpatched or newly patched machines must be scanned more frequently. Understanding the nature of the vulnerability and using that knowledge in the feature engineering process makes machine learning more effective.

APT – Advanced Persistent Threat is the worst of all cyber attacks. Intruders do not do any immediate damage after they compromise a machine; instead, they hide in the network, slowly steal data, infect more machines, and wait for the perfect time to launch an attack. Without analytics, detecting an APT is almost impossible.

Academically, the use of machine learning and deep learning algorithms for anomaly detection started in the late 1990s, when the term deep learning did not even exist yet. It was simply neural networks, and we will use the two terms interchangeably in this blog.

So how do we detect anomalies? Before answering this question, we have to define normality. Every machine or device has a regular behavior, which can be analyzed and described using the logs and events collected from all the machines. Any activity, or sequence of activities, that differs from this normal behavior may be an anomaly. We can define the following three kinds of anomalies:

  • Point anomaly – in terms of machine activity, this could mean access to a restricted system, or any aggregated behavior that statistically exceeds a predefined threshold.
  • Contextual anomaly – unlike a point anomaly, a contextual anomaly may look normal by itself; only by comparing parameters within a timeframe can we find the irregularity.
  • Collective anomaly – here we need to look at a longer timespan and identify a collection of behaviors that, taken together, does not look normal.

We mentioned feature engineering earlier. Feature engineering means using domain knowledge of the data to create the features that will be used in machine learning. In the domain of cyber security, commonly used features are:

  • CPU usage
  • Login time
  • Systems accessed
  • File directory
  • Amount of data transferred in and out
  • Application logs
  • System logs
  • Database logs

Last but not least, let’s review machine learning algorithms for detecting anomalies. According to Chandola, Banerjee, and Kumar in their 2009 survey Anomaly Detection: A Survey, there are six categories of anomaly detection techniques.

Classification based 
Classification-based techniques build classifiers (models) by training on labeled data, and then classify test instances using the learned models. This could be single-class (one-class) classification, where the whole training set has only one normal class, or multi-class classification, where the training set has more than one normal class.

Common algorithms include rule-based algorithms, naive Bayes, support vector machines, and neural networks. Applying classification-based techniques to test instances can be fast and accurate, but it relies heavily on the availability of accurate labels for the classes.
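To make the one-class flavor concrete, here is a minimal scikit-learn sketch; the feature values (CPU utilization and megabytes transferred per hour) are made up for illustration, not taken from any real deployment.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical training data describing normal behavior only:
# columns are CPU utilization and megabytes transferred per hour.
normal = np.array([[0.20, 120], [0.30, 150], [0.25, 130], [0.22, 140], [0.28, 135]])
new = np.array([[0.24, 132], [0.95, 9000]])   # the second instance is clearly unusual

clf = OneClassSVM(nu=0.1, gamma="scale").fit(normal)
print(clf.predict(new))   # +1 means classified as normal, -1 means flagged as anomalous
```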

Nearest Neighbor based 
Nearest-neighbor-based anomaly detection techniques assume that normal data occurs in dense neighborhoods. They define a distance between two data instances based on their similarity, and then either 1) use the distance of a data instance to its kth nearest neighbor, or 2) compute the relative density around each data instance, to produce an anomaly score.

Nearest neighbor is an unsupervised technique which, given an appropriate distance measure for the data, is purely data driven.
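A minimal sketch of variant 1, using the distance to the kth nearest neighbor as the anomaly score, on made-up two-dimensional features (scikit-learn’s LocalOutlierFactor implements the density-based variant 2):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical instances; the last one sits far away from the dense neighborhood.
X = np.array([[0.20, 1.2], [0.30, 1.5], [0.25, 1.3], [0.22, 1.4], [0.28, 1.1],
              [5.00, 9.0]])

nn = NearestNeighbors(n_neighbors=3).fit(X)
distances, _ = nn.kneighbors(X)   # each point's neighbors, including itself at distance 0
scores = distances[:, -1]         # anomaly score: distance to the kth nearest neighbor
print(scores)                     # the isolated instance receives a much larger score
```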

Clustering based
Clustering is another unsupervised technique; it groups similar data instances into clusters, and an instance that falls outside every cluster is an anomaly. Like the nearest-neighbor technique, clustering also requires computing distances between instances. The main difference is that clustering evaluates each instance relative to the cluster centers it discovers, while nearest neighbor evaluates each instance relative to its immediate neighbors.

Clustering algorithms include k-means, self-organizing maps, and expectation maximization. Some argue that because clustering algorithms are optimized to find similarity and form clusters, anomaly detection is merely an unoptimized by-product.
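A minimal k-means sketch on synthetic data: the instance that is far from every cluster center gets the largest anomaly score. The three clusters, their locations, and the lone outlier are all made up.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
groups = [rng.normal(loc=c, scale=0.3, size=(40, 2)) for c in ([0, 0], [5, 0], [0, 5])]
X = np.vstack(groups + [np.array([[10.0, 10.0]])])   # one instance far from every cluster

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
scores = km.transform(X).min(axis=1)   # distance to the nearest cluster center
print(X[np.argmax(scores)])            # the isolated instance, roughly [10. 10.]
```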

Statistical
Statistical techniques fit a statistical model to historical data representing normal behavior and then apply a statistical inference test to determine whether a test instance belongs to that model. If it does not, it is an anomaly.

Statistical techniques can be parametric, such as Gaussian or regression models, or non-parametric, such as histogram-based models. The key to statistical techniques is the assumption that the data is generated from a particular distribution. When that assumption holds, they provide a statistically justifiable solution. Unfortunately, it does not always hold, especially for high-dimensional real data sets.
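A minimal parametric sketch: fit a Gaussian to historical values of a single feature (hypothetical daily login counts) and test how surprising a new observation is under that model.

```python
import numpy as np
from scipy import stats

history = np.array([12, 15, 11, 14, 13, 16, 12, 15, 14, 13])   # normal behavior
mu, sigma = history.mean(), history.std(ddof=1)

new_value = 41
z = (new_value - mu) / sigma
p_value = 2 * stats.norm.sf(abs(z))   # two-sided tail probability under the fitted model
print(z, p_value)                     # a tiny p-value flags the observation as an anomaly
```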

Information Theoretic
If anomalies in the data induce irregularities in the information content of the data set, we can use information theoretic techniques. They can be described as follows: given a data set D, let C(D) denote its complexity; find the minimal subset of instances I such that C(D) − C(D − I) is maximized. All instances in the subset thus obtained are deemed anomalous. Common complexity measures include Kolmogorov complexity, entropy, and relative entropy.
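A tiny sketch using entropy as the complexity measure C(D): removing the instances of the rare event type causes the largest drop in entropy, so they are the anomaly candidates. The event names are hypothetical.

```python
from collections import Counter
from scipy.stats import entropy

def complexity(events):
    # Shannon entropy (in bits) of the distribution of event types.
    return entropy(list(Counter(events).values()), base=2)

events = ["open_file"] * 95 + ["spawn_shell"] * 5   # hypothetical system-call log
without_rare = [e for e in events if e != "spawn_shell"]

print(complexity(events))         # C(D)
print(complexity(without_rare))   # C(D - I): removing the rare instances maximizes the drop
```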

Spectral
Spectral techniques can be used if the data can be embedded into a lower-dimensional subspace in which normal instances and anomalies appear significantly different. Such a technique finds that subspace and identifies the anomalies in it. A common method is Principal Component Analysis (PCA), which projects the data into a lower-dimensional space; an instance that deviates from the correlation structure of the data is an anomaly.

Spectral techniques automatically reduce the dimensionality of the data, but they usually have high computational complexity.
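A minimal PCA sketch on synthetic data: most instances follow a strong linear correlation, and the one that violates it has by far the largest reconstruction error after projecting onto a single principal component.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, 2.0 * t + rng.normal(scale=0.1, size=200)])  # correlated features
X = np.vstack([X, [[3.0, -6.0]]])   # this instance breaks the correlation structure

pca = PCA(n_components=1).fit(X)
reconstruction = pca.inverse_transform(pca.transform(X))
errors = np.linalg.norm(X - reconstruction, axis=1)   # per-instance reconstruction error
print(int(np.argmax(errors)))                         # index 200, the structure-breaking instance
```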

All of the above techniques deal with point anomalies. For contextual and collective anomalies, a common practice is to transform the sequences into a finite feature space and then apply a point anomaly detection technique in the new space.
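One simple way to do that transformation is a sliding window over the event sequence, so each window becomes an instance in a fixed-length feature space; a sketch on a made-up series of per-minute login counts:

```python
import numpy as np

sequence = np.array([3, 4, 3, 5, 4, 3, 40, 42, 4, 3, 5])   # hypothetical counts per minute
window = 3
features = np.array([sequence[i:i + window]
                     for i in range(len(sequence) - window + 1)])
print(features.shape)   # each row is one instance for a point anomaly detector
```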

All of the anomalies we have discussed so far focus on the machine. Another approach focuses on the users of the system; it tracks, collects, and assesses data about user activities. Analytical methods that focus on user behavior are called User Behavior Analytics.


User Behavior Analytics (UBA)

User Behavior Analytics analyzes the collected events and applies behavior modeling, peer group analytics, graph mining, and other techniques to find hidden threats, identifying anomalies and stitching them together into actionable threat patterns, for example:

  • Privileged account abuse
  • Suspicious login
  • Data exfiltration
  • Virtual machine/container breach
  • Unusual SaaS and remote user behavior
  • Rogue mobile device transmitting malware
  • Data theft from privileged app infiltration
  • Malware command and control (CnC)
  • Cloud compromise
  • System malware infection.

UBA utilizes the same anomaly detection algorithms. Slowly but surely, security tools equipped with machine learning are moving from providing insights to security operators to taking defensive action against threats. Will machine learning replace cyber security experts one day? Probably not, because humans and machines will inevitably remain allies on both sides of the cyber war.


Advanced Analytics Reference Architecture

 

Building data platforms and delivering advanced analytical services in the new age of data intelligence can be a daunting task, and the sheer number of tools and methodologies available does not make it easier. A reference architecture is therefore needed to provide guidelines for process design and best practices for advanced analytics, so that we not only meet the business requirements but also bring more value to the business.

1. Architectural Guidance

  • The architecture should cover all building blocks including the following: Data Infrastructure, Data Engineering, Traditional Business Intelligence, and Advanced Analytics. Within Advanced Analytics, we should include machine learning, deep learning, data science, predictive analytics, and the operationalization of models.
  • One of the first steps should be finding the gaps between current infrastructure, tools, technologies and the end state environment.
  • We need to create a unified approach to both structured and unstructured data. It is perfectly fine to maintain two different environments for structured and unstructured data, although the two will look more and more alike over time.
  • Rome was not built in a day. We need to first build a road map, with budget in mind, for how the organization gets to the end state, and adapt and/or pivot whenever needed along the way.

2. Best Practices

  • There is never one best solution for everyone; each scenario has its own best approach. However, we can create standard approaches for different categories. Creating best practices for different categories or industries and offering them as options is itself a best practice.
  • Things to consider when suggesting a best practice include company size, current infrastructure, and the skill sets of existing IT personnel.

3. Framework for Solutions

  • A reference architecture for Advanced Analytics is depicted in the following diagram. At the bottom of the picture are the data sources, divided into structured and unstructured categories. Structured data is mostly operational data from existing ERP, CRM, accounting, and any other systems that create the transactions of the business. It is handled by relational database management systems (RDBMS) such as Oracle, Teradata, and Microsoft SQL Server. These RDBMS serve as the backend for the applications that produce the transactions and are called OLTP (online transaction processing) systems. Periodically, the transactional data is copied over to data stores for analytical and reporting purposes. These data stores are also built on RDBMS and are called OLAP (online analytical processing) systems. On top of the data warehouse sit business intelligence and data visualization, where we have quite a few powerful tools to support this capability.
  • On the right side, where the unstructured data is processed, is the big data world. Just as with structured data, there is a variety of tools we can use for ETL (Extract-Transform-Load) of the data into the selected data platforms, which include Hadoop, NoSQL stores, and cloud-based storage systems. Data is ingested into these filesystem-based data stores and then processed by multiple analytical tools. The analytical results are either fed to the data visualization tools or operationalized through APIs built with various technologies.
  • Demand for stream processing is also growing tremendously; it requires real-time or near-real-time analytics of vast amounts of data to identify trends, find anomalies, and predict results. A few tools that can be used in this category are recommended as well.

[Diagram: Advanced Analytics reference architecture]

4. The tools I recommend for various data processing purposes are listed below (so they are search-engine friendly, even though they all appear in the diagram above):

  • Data Ingestion (Paxata, Pentaho, Talend, Informatica…)
  • Data Storage (Cloudera, Hortonworks, MapR Hadoop, Cassandra, HBase, MongoDB, S3, Google Cloud Platform…)
  • Data Analytics (Python, R, H2O (ML and DL), TensorFlow (GPU optional), Databricks/Spark, Caffe (GPU) and Torch (GPU) for deep learning on images and sound)
  • Data Visualization (Tableau, QlikView, Cisco DV…)

Machine Learning Tools

This is an incomplete list of machine learning tools available as of July 2016. I have categorized them into open source tools and commercial tools; however, the open source tools usually have a commercialized version with support, and the commercial tools tend to include a free version you can download and try. Click the product links to learn more.

Open Source


Spark MLlib

  • MLlib is Apache Spark’s scalable machine learning library.
    • Initial contribution from AMPLab, UC Berkeley
    • Shipped with Spark since version 0.8
    • Over 30 contributors
    • Includes many common machine learning and statistical algorithms
    • Supports Scala, Java and Python programming languages
  • Pros
    • Powerful processing performance from Spark (up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk).
    • Runs on Hadoop, Mesos, or standalone.
    • Easy to code (with Scala).
  • Cons
    • Spark requires experienced engineers.
  • Online Resources http://spark.apache.org/mllib/
  • Algorithms
    • Basic statistics
      • summary statistics, correlation, sampling, hypothesis testing, and random data generation
    • Classification and regression
      • linear regression with L1, L2, and elastic-net regularization
      • logistic regression and linear support vector machines (SVM)
      • decision trees, naive Bayes, random forests, and gradient-boosted trees
      • isotonic regression
    • Collaborative filtering / recommendation
      • alternating least squares (ALS)
    • Clustering
      • k-means, bisecting k-means, Gaussian mixture models (GMM)
      • power iteration clustering and latent Dirichlet allocation (LDA)
    • Dimensionality reduction
      • singular value decomposition (SVD) and QR decomposition
      • principal component analysis (PCA)
    • Frequent pattern mining
      • FP-growth, association rules, and PrefixSpan
    • Feature extraction and transformations
    • Optimization
      • limited-memory BFGS (L-BFGS)
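As a quick taste of MLlib from Python (PySpark’s DataFrame-based API), here is a minimal sketch that trains a logistic regression model; the file name and column names (events.csv, cpu, bytes_out, label) are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# Hypothetical event data: numeric feature columns plus a 0/1 label column.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# MLlib models expect the features packed into a single vector column.
assembler = VectorAssembler(inputCols=["cpu", "bytes_out"], outputCol="features")
train = assembler.transform(df)

lr = LogisticRegression(featuresCol="features", labelCol="label", regParam=0.01)
model = lr.fit(train)
model.transform(train).select("label", "prediction").show(5)

spark.stop()
```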


Scikit-learn

Scikit-learn is a Python module for machine learning

  • built on top of SciPy
  • Open source, commercially usable – BSD license
  • Started in 2007 as a Google Summer of Code project.
  • Built on NumPy, SciPy, and matplotlib

Git: https://github.com/scikit-learn/scikit-learn.git

  • Algorithms
    • classification: SVM, nearest neighbors, random forest
    • regression: support vector regression (SVR), ridge regression, Lasso, logistic regression
    • clustering: k-means, spectral clustering, …
    • decomposition: PCA, non-negative matrix factorization (NMF), independent component analysis (ICA), …
    • model selection: grid search, cross validation, metrics
    • preprocessing: preprocessing, feature extraction
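A minimal sketch of the typical scikit-learn workflow, combining a classifier with grid-search model selection and evaluation on held-out data, using the bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid search with 5-fold cross validation over a tiny hyperparameter grid.
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [50, 100]}, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)
print(grid.score(X_test, y_test))   # accuracy on the held-out test split
```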

 


H2O

  • H2O is open-source software for big-data analysis.
  • Built by the startup H2O.ai, founded in 2011 in Silicon Valley.
  • Users can throw models at data to find usable information, allowing H2O to discover patterns.
  • Provides data structures and methods suitable for big data.
  • Works in the cloud, on Hadoop, and on all major operating systems.
  • Written in Java, with supported APIs for Java, Python, and R.
  • Graphical interface works with all browsers.
  • Website: http://www.h2o.ai 
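A minimal sketch of the H2O Python API; the training file and the label column name (events.csv, label) are hypothetical placeholders.

```python
import h2o
from h2o.estimators import H2ORandomForestEstimator

h2o.init()   # starts, or connects to, a local H2O cluster

# Hypothetical CSV of machine events with a categorical "label" column.
frame = h2o.import_file("events.csv")
frame["label"] = frame["label"].asfactor()

model = H2ORandomForestEstimator(ntrees=50)
model.train(y="label", training_frame=frame)   # remaining columns are used as features

print(model.model_performance(frame))
```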

Pandas

  • pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
  • Python is good for data munging and preparation; pandas helps with data analysis and modeling.
  • Works great when combined with the IPython toolkit.
  • Good for linear and panel regression; for other models, look to scikit-learn.
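For instance, pandas makes it easy to turn raw logs into model-ready features; a small sketch, assuming a hypothetical syslog.csv with timestamp, host, and bytes_out columns:

```python
import pandas as pd

# Hypothetical export of system logs.
logs = pd.read_csv("syslog.csv", parse_dates=["timestamp"])

# Hourly traffic per host: a typical feature for the anomaly detection discussed above.
hourly = (logs.set_index("timestamp")
              .groupby("host")["bytes_out"]
              .resample("H")
              .sum())
print(hourly.describe())
```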


Google TensorFlow

  • Open source machine learning library developed by Google and used in many Google products such as Google Translate, Maps, and Gmail.
  • Uses data flow graphs for numeric computation. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
  • Extensive built-in support for deep learning
  • It is just a library; it does not include the trained models or the specific algorithms behind Google’s products.
  • Cloud offering – Google Cloud ML
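A minimal sketch of the data flow idea, where operations are nodes and tensors flow along the edges; recent TensorFlow 2.x releases execute this eagerly, while the original 1.x API ran the same graph inside an explicit session.

```python
import tensorflow as tf

# Two constant tensors; the graph edges carry these multidimensional arrays.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])

# A matrix multiplication node operating on the tensors above.
c = tf.matmul(a, b)
print(c)   # a 2x1 tensor: [[3.], [7.]]
```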

R

  • R is a free software environment for statistical computing and graphics.
  • Pros
    • Open source and enterprise ready with Rstudio.
    • Huge ecosystem, lots of libraries and packages.
    • Runs on all operating systems and reads files of all formats.
  • Cons
    • Algorithm implementations vary across packages and can produce different results.
    • Memory management is not great; performance worsens as data grows.
  • Most used R ML Packages
    • e1071 Functions for latent class analysis, short time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier
    • rpart Recursive Partitioning and Regression Trees.
    • igraph A collection of network analysis tools.
    • nnet Feed-forward Neural Networks and Multinomial Log-Linear Models.
    • randomForest Breiman and Cutler’s random forests for classification and regression.
    • caret package (short for Classification And Regression Training)
    • glmnet Lasso and elastic-net regularized generalized linear models.
    • ROCR Visualizing the performance of scoring classifiers.
    • gbm Generalized Boosted Regression Models.
    • party A Laboratory for Recursive Partitioning.
    • arules Mining Association Rules and Frequent Itemsets.
    • tree Classification and regression trees.
    • klaR Classification and visualization.
    • RWeka R/Weka interface.
    • ipred Improved Predictors.
    • lars Least Angle Regression, Lasso and Forward Stagewise.
    • earth Multivariate Adaptive Regression Spline Models.
    • CORElearn Classification, regression, feature evaluation and ordinal evaluation.
    • mboost Model-Based Boosting.


Theano

  • Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:
    • tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
    • transparent use of a GPU – Perform data-intensive calculations up to 140x faster than on a CPU (float32 only).
    • efficient symbolic differentiation – Theano does your derivatives for function with one or many inputs.
    • speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
    • dynamic C code generation – Evaluate expressions faster.
    • extensive unit-testing and self-verification – Detect and diagnose many types of errors.
  • Theano has been powering large-scale computationally intensive scientific investigations since 2007.
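A minimal sketch of the symbolic style Theano uses: build an expression, let Theano differentiate it, then compile it into a callable function.

```python
import theano
import theano.tensor as T

x = T.dscalar("x")          # a symbolic double-precision scalar
y = x ** 2                  # a symbolic expression built from it
dy_dx = T.grad(y, x)        # Theano derives the gradient symbolically

f = theano.function([x], dy_dx)   # compiled (optionally to GPU code) before use
print(f(3.0))                     # 6.0
```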

Weka

  • Waikato Environment for Knowledge Analysis (Weka) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand.
  • It is free software licensed under the GNU General Public License.
  • contains a collection of visualization tools and algorithms for data analysis and predictive modeling.
  • Weka’s main user interface is the Explorer.
  • Training models on very large datasets through the Weka Explorer graphical user interface is impractical.
  • For larger datasets, use the command-line interface (CLI) or write Java/Groovy/Jython code instead.
  • Supports some streaming.

Commercial

AWS Machine Learning

  • Provides visualization tools and wizards to create machine learning models.
  • Easy to obtain predictions for the built model using simple APIs.
  • Used by internal data scientist community.
  • Highly scalable; supports real-time predictions at high throughput.
  • Cloud based. Pay as you go.

Azure Machine Learning

  • Provides visualization tools and wizards to create machine learning models.
  • Easy to obtain predictions for the built model using simple APIs.
  • Used by internal data scientist community.
  • Highly scalable; supports real-time predictions at high throughput.
  • Cloud based. Pay as you go.

IBM Watson Analytics

  • IBM data analysis solution in the cloud.
  • Automated visualization.
  • Professional version: 10m rows, 500 columns, 100GB storage.
  • Connects to social media data.
  • Supports free-form text questions about the data (like a Google search box).
  • Supports easy and secure collaboration.

SAS Viya

  • Cloud ready analytics and visualization architecture from the leading analytics software company.
  • Can also be deployed on premises.
  • Supports following SAS platforms
    • SAS Visual Analytics
    • SAS Visual Statistics
    • SAS Visual Investigators (Search)
    • SAS Data Mining and Machine Learning
  • Supports Python, Lua, Java and all REST APIs.
  • Available third quarter of 2016

 

Statistics vs. Machine Learning

I have been asked the question “What’s the difference between statistics and machine learning?” quite a lot lately, almost as often as “What’s the difference between a data analyst and a data scientist?” (which I might write about in another blog). People wondering about the differences between these subjects probably see a lot of similarity between the two: they are both means to learn from data, and they share many of the same methods.

The fundamental difference between the two is this: statistics focuses on inference and conclusions, while machine learning emphasizes predictions and decisions.

Statisticians care deeply about the data collection process, the methodology, and the statistical properties of the estimator. They are interested in learning something about the data. Statistics may support or reject a hypothesis based on the noise in the data, validate models, or make forecasts, but overall the goal is to arrive at a new scientific insight from the data. In other words, it wants to draw valid and precise conclusions about the problems posed.

Machine learning is about making a prediction, and the algorithm is just a means to that end. The goal is to solve a complex computational task by feeding data to a machine so it will tell us what the outcome will be. Instead of figuring out cause and effect, we collect a large number of examples of what the mechanism should produce, and then run an algorithm that learns to perform the task from those examples. It builds a model to predict a result and uses data to improve its predictions.
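To make the contrast concrete, here is a small sketch on synthetic data, assuming statsmodels and scikit-learn are installed: the statistician studies the fitted coefficients, their uncertainty, and p-values, while the machine learning practitioner mostly cares that predict() works well on new inputs.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)

# Statistics: fit a model, then examine the estimates and their uncertainty.
ols = sm.OLS(y, sm.add_constant(x)).fit()
print(ols.summary())   # coefficients, confidence intervals, p-values

# Machine learning: fit a model, then use it to predict unseen inputs.
reg = LinearRegression().fit(x.reshape(-1, 1), y)
print(reg.predict(np.array([[1.5]])))
```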

You may have noticed that quite a few algorithms used in machine learning are statistical in nature, but as long as the prediction works well, statistical insight into the data is not strictly necessary.

A paper, Statistical Modeling: The Two Cultures, published by Leo Breiman in 2001, explains the differences between statistics and machine learning very well. I am going to quote the abstract here:

“Abstract

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.”

In this paper, two cultures are introduced, and we can treat the Data Modeling Culture as statistics and the Algorithmic Modeling Culture as machine learning. (The term machine learning still lived mostly in science fiction in 2001.) The following two figures show clearly the difference between the two cultures.

The data modeling culture assumes a stochastic data model and estimates the values of its parameters from the data.

[Figure: the data modeling culture]

The algorithmic modeling culture treats the true mechanism inside the box as complex and unknown; it builds another algorithm that operates on x to predict y.

[Figure: the algorithmic modeling culture]

Now I wonder whether anyone, unable to wait for my blog post about the other frequently asked question, will pop this one instead: “What about data science?”

Data science employs techniques and theories drawn from many fields, including mathematics, statistics, information science, and computer science, and it encompasses machine learning, data mining, predictive analytics, and more, all to extract knowledge or insights from data. Data scientist is not just a fancy new title on a business card; a data scientist is a true master of data.