
Machine Learning Tools

This is an incomplete list of the machine learning tools available as of July 2016. I have categorized them into open source tools and commercial tools; however, the open source tools usually have a commercial edition with support, and the commercial tools tend to include a free version you can download and try out. Click the product links to learn more.

Open Source


Spark MLlib

  • MLlib is Apache Spark’s scalable machine learning library.
    • Initial contribution from AMPLab, UC Berkeley
    • Shipped with Spark since version 0.8
    • Over 30 contributors
    • Includes many common machine learning and statistical algorithms
    • Supports Scala, Java and Python programming languages
  • Pros
    • Powerful processing performance from Spark (up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk).
    • Runs on Hadoop, Mesos, or standalone.
    • Easy to code (especially with Scala).
  • Cons
    • Spark requires experienced engineers.
  • Online resources: http://spark.apache.org/mllib/
  • Algorithms (a short PySpark example follows this list)
    • Basic statistics
      • summary statistics, correlation, sampling, hypothesis testing, and random data generation
    • Classification and regression
      • linear regression with L1, L2, and elastic-net regularization
      • logistic regression and linear support vector machines (SVM)
      • decision trees, naive Bayes, random forests, and gradient-boosted trees
      • isotonic regression
    • Collaborative filtering / recommendation
      • alternating least squares (ALS)
    • Clustering
      • k-means, bisecting k-means, Gaussian mixture models (GMM)
      • power iteration clustering and latent Dirichlet allocation (LDA)
    • Dimensionality reduction
      • singular value decomposition (SVD) and QR decomposition
      • principal component analysis (PCA)
    • Frequent pattern mining
      • FP-growth, association rules, and PrefixSpan
    • Feature extraction and transformations
    • Optimization
      • limited-memory BFGS (L-BFGS)
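
As a quick illustration, here is a minimal PySpark sketch of the MLlib API (a hedged example, assuming a Spark 1.6-era installation and the RDD-based pyspark.mllib package; the toy data is made up for illustration):

# Train a logistic regression classifier with MLlib's RDD-based API.
from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext("local[*]", "mllib-demo")

# Toy training data: a label followed by two features.
data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
    LabeledPoint(0.0, [0.1, 0.9]),
    LabeledPoint(1.0, [0.9, 0.2]),
])

model = LogisticRegressionWithLBFGS.train(data)
print(model.predict([0.9, 0.1]))  # expect a label of 1

sc.stop()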


Scikit-learn

Scikit-learn is a Python module for machine learning

  • built on top of SciPy
  • Open source, commercially usable – BSD license
  • Started in 2007 as a Google Summer of Code project.
  • Built on NumPy, SciPy, and matplotlib

Git: https://github.com/scikit-learn/scikit-learn.git

  • Algorithms (a short example follows this list)
    • classification: SVM, nearest neighbors, random forest, logistic regression
    • regression: support vector regression (SVR), ridge regression, Lasso
    • clustering: k-means, spectral clustering, …
    • decomposition: PCA, non-negative matrix factorization (NMF), independent component analysis (ICA), …
    • model selection: grid search, cross validation, metrics
    • preprocessing: preprocessing, feature extraction
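
A minimal sketch of the scikit-learn workflow (assuming scikit-learn is installed; it uses the bundled iris dataset and reports cross-validated accuracy):

# Random forest classification with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in pre-0.18 releases

iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)

scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print("mean accuracy: %.3f" % scores.mean())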

 


H2O

  • H2O is open-source software for big-data analysis.
  • Built by the startup H2O.ai, founded in 2011 in Silicon Valley.
  • Users can quickly try many models against their data, letting H2O discover patterns and surface usable information.
  • Provides data structures and methods suitable for big data.
  • Works in the cloud, on Hadoop, and on all major operating systems.
  • Written in Java; supports Java, Python, and R (a short Python sketch follows this list).
  • The graphical interface works in all browsers.
  • Website: http://www.h2o.ai
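
A minimal sketch of the H2O Python API (hedged: it assumes the h2o Python package is installed and can start a local H2O instance; the file name and column names are hypothetical):

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()  # start (or connect to) a local H2O cluster

# Hypothetical CSV with a binary "label" column and features f1..f3.
frame = h2o.import_file("my_data.csv")
frame["label"] = frame["label"].asfactor()

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=["f1", "f2", "f3"], y="label", training_frame=frame)
print(model.auc())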

Pandas

  • pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
  • Python is good for data munging and preparation; pandas adds the data analysis and modeling tools on top.
  • Works great when combined with the IPython toolkit.
  • Good for linear and panel regression; other models can be found in scikit-learn (see the sketch after this list).
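
A minimal sketch of that combination, cleaning data with pandas and handing it to scikit-learn (the CSV file and column names are hypothetical):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical CSV with "price" and "units" columns.
df = pd.read_csv("sales.csv")
df = df.dropna(subset=["price", "units"])   # drop incomplete rows
df["log_price"] = np.log(df["price"])       # derived feature

model = LinearRegression()
model.fit(df[["log_price"]], df["units"])
print(model.coef_, model.intercept_)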


Google TensorFlow

  • Open source machine learning library developed by Google and used in many Google products, such as Google Translate, Maps, and Gmail.
  • Uses data flow graphs for numeric computation. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
  • Extensive built-in support for deep learning
  • It is just the library; Google does not ship the trained models or algorithms behind its own products.
  • Cloud offering – Google Cloud ML. (A small graph example follows this list.)
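
A minimal sketch of the data flow graph idea, using the graph-based API of the TensorFlow releases current at the time of writing (tf.placeholder and tf.Session): the graph is built first, and nothing is computed until a session evaluates it.

import tensorflow as tf

# Build the graph: nodes are operations, edges carry tensors.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
total = a * b + 2.0        # a graph node; nothing is computed yet

# Run the graph in a session, feeding values for the placeholders.
with tf.Session() as sess:
    print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))  # 14.0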

R

  • R is a free software environment for statistical computing and graphics.
  • Pros
    • Open source and enterprise ready with RStudio.
    • Huge ecosystem with lots of libraries and packages.
    • Runs on all operating systems and reads files of virtually any format.
  • Cons
    • Algorithm implementations vary across packages, and results can differ.
    • Memory management is weak, and performance worsens as the data grows.
  • Most used R ML Packages
    • e1071: Functions for latent class analysis, short time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier.
    • rpart: Recursive Partitioning and Regression Trees.
    • igraph: A collection of network analysis tools.
    • nnet: Feed-forward Neural Networks and Multinomial Log-Linear Models.
    • randomForest: Breiman and Cutler’s random forests for classification and regression.
    • caret: Classification And Regression Training.
    • glmnet: Lasso and elastic-net regularized generalized linear models.
    • ROCR: Visualizing the performance of scoring classifiers.
    • gbm: Generalized Boosted Regression Models.
    • party: A Laboratory for Recursive Partitioning.
    • arules: Mining Association Rules and Frequent Itemsets.
    • tree: Classification and regression trees.
    • klaR: Classification and visualization.
    • RWeka: R/Weka interface.
    • ipred: Improved Predictors.
    • lars: Least Angle Regression, Lasso and Forward Stagewise.
    • earth: Multivariate Adaptive Regression Spline Models.
    • CORElearn: Classification, regression, feature evaluation and ordinal evaluation.
    • mboost: Model-Based Boosting.


Theano

  • Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:
    • tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
    • transparent use of a GPU – perform data-intensive calculations up to 140x faster than on a CPU (float32 only).
    • efficient symbolic differentiation – Theano computes your derivatives for functions with one or many inputs.
    • speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
    • dynamic C code generation – Evaluate expressions faster.
    • extensive unit-testing and self-verification – Detect and diagnose many types of errors.
  • Theano has been powering large-scale computationally intensive scientific investigations since 2007. (A small example follows this list.)
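
A minimal Theano sketch, defining a symbolic expression and letting Theano derive and compile its gradient (assuming the theano package is installed):

import theano
import theano.tensor as T

x = T.dscalar("x")
y = x ** 2 + T.log(1 + x)                  # symbolic expression

f = theano.function([x], y)                # compiled evaluation
df = theano.function([x], T.grad(y, x))    # symbolic differentiation

print(f(2.0))    # 4 + log(3)
print(df(2.0))   # 2*x + 1/(1+x), evaluated at x = 2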

Weka

  • Waikato Environment for Knowledge Analysis (Weka) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand.
  • It is free software licensed under the GNU General Public License.
  • It contains a collection of visualization tools and algorithms for data analysis and predictive modeling.
  • Weka’s main user interface is the Explorer.
  • Training models on large datasets through the Explorer graphical user interface is effectively impossible.
  • Instead, use the command-line interface (CLI) or write Java/Groovy/Jython code.
  • Offers some support for streaming data.

Commercial

AWS Machine Learning

  • Provides visualization tools and wizards to create machine learning models.
  • Easy to obtain predictions for the built model using simple APIs.
  • Used by Amazon’s internal data scientist community.
  • Highly scalable; supports real-time predictions at high throughput.
  • Cloud based; pay as you go. (A short API sketch follows this list.)
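
A hedged sketch of requesting a real-time prediction through boto3 (the model ID, endpoint URL, and record fields below are placeholders for illustration):

import boto3

client = boto3.client("machinelearning", region_name="us-east-1")

response = client.predict(
    MLModelId="ml-EXAMPLEMODELID",                  # hypothetical model ID
    Record={"feature1": "42", "feature2": "blue"},  # field values are strings
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(response["Prediction"])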

Azure Machine Learning

  • Provides visualization tools and wizards to create machine learning models.
  • Easy to obtain predictions for the built model using simple APIs.
  • Used by Microsoft’s internal data scientist community.
  • Highly scalable; supports real-time predictions at high throughput.
  • Cloud based. Pay as you go.

IBM Watson Analytics

  • IBM data analysis solution in the cloud.
  • Automated visualization.
  • Professional version: 10 million rows, 500 columns, 100 GB of storage.
  • Connects to social media data.
  • Supports free-form text questions about your data through a Google-style search box.
  • Supports easy and secure collaboration.

SAS Viya

  • Cloud-ready analytics and visualization architecture from the leading analytics software company.
  • Can also be deployed on-site.
  • Supports the following SAS platforms:
    • SAS Visual Analytics
    • SAS Visual Statistics
    • SAS Visual Investigator (search)
    • SAS Data Mining and Machine Learning
  • Supports Python, Lua, Java, and REST APIs.
  • Available in the third quarter of 2016.

 


Install Hadoop and Spark on a Mac

Hadoop performs best on a cluster of multiple nodes/servers; however, it runs perfectly well on a single machine, even a Mac, so we can use one for development. Spark is also a popular tool for processing data in Hadoop. The purpose of this blog post is to show you the steps to install Hadoop and Spark on a Mac.

Operating system: OS X El Capitan 10.11.3
Hadoop: 2.7.2
Spark: 1.6.1

Prerequisites

1. Install Java

Open a terminal window to check what Java version is installed.
$ java -version

If Java is not installed, go to https://java.com/en/download/ to download and install the latest JDK. If Java is installed, use the following command in a terminal window to find the Java home path:
$ /usr/libexec/java_home

Next, we need to set the JAVA_HOME environment variable on the Mac:
$ echo export "JAVA_HOME=$(/usr/libexec/java_home)" >> ~/.bash_profile
$ source ~/.bash_profile

2. Enable SSH, as Hadoop requires it.

Go to System Preferences -> Sharing, and check “Remote Login”.

Generate SSH keys:
$ ssh-keygen -t rsa -P ""
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Open a terminal window, and make sure you can SSH into localhost:
$ ssh localhost

Download Hadoop Distribution

Download the latest Hadoop distribution (2.7.2 at the time of writing):
http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

Create Hadoop Folder

Open a new terminal window, go to the download folder (let’s use ~/Downloads), and find hadoop-2.7.2.tar.gz.

$ cd ~/Downloads
$ tar xzvf hadoop-2.7.2.tar.gz
$ mv hadoop-2.7.2 /usr/local/hadoop

Hadoop Configuration Files

Go to the directory where your hadoop distribution is installed.
$ cd /usr/local/hadoop

Then edit the following files:

$ vi etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

$ vi etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

$ vi etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

$ vi etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Start Hadoop Services

Format HDFS
$ cd /usr/local/hadoop
$ bin/hdfs namenode -format

Start HDFS
$ sbin/start-dfs.sh

Start YARN
$ sbin/start-yarn.sh

Validation

Check HDFS file Directory
$ bin/hdfs dfs -ls /

If you don’t want to type the bin/ prefix every time you run a Hadoop command, you can do the following:

$ vi ~/.bash_profile
Append this line to the end of the file: export PATH=$PATH:/usr/local/hadoop/bin
$ source ~/.bash_profile

Now add the following two folders in HDFS, which are needed for MapReduce jobs, but this time don’t include the bin/:

$ hdfs dfs -mkdir /user
$ hdfs dfs -mkdir /user/{your username}

You can also open a browser and access Hadoop by using the following URL
http://localhost:50070/

Next: Spark

Installing Spark is a little easier. You can download the latest Spark here:
http://spark.apache.org/downloads.html

Choosing the package type is a little tricky. We want the “Pre-built with user-provided Hadoop [can use with most Hadoop distributions]” type, and the downloaded file name is spark-1.6.1-bin-without-hadoop.tgz.

After Spark is downloaded, we need to untar it. Open a terminal window and do the following:

$ cd ~/Downloads
$ tar xzvf spark-1.6.1-bin-without-hadoop.tgz
$ mv spark-1.6.1-bin-without-hadoop /usr/local/spark

Add spark bin folder to PATH

$ vi ~/.bash_profile
Append this line to the end of the file: export PATH=$PATH:/usr/local/spark/bin
$ source ~/.bash_profile
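
Note: because this is the “without Hadoop” build, spark-shell may complain about missing Hadoop classes. If it does, Spark’s Hadoop-free build instructions suggest pointing Spark at your Hadoop installation, for example by also adding export SPARK_DIST_CLASSPATH=$(hadoop classpath) to ~/.bash_profile.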

What about Scala?

Spark is written in Scala, so even though we can use Java to write Spark code, we want to install Scala as well.

Download Scala from here: http://www.scala-lang.org/download/
Choose the first link to download the Scala binaries; the downloaded file is scala-2.11.8.tgz.

Untar Scala and move it to a dedicated folder

$ cd ~/Downloads
$ tar xzvf scala-2.11.8.tgz
$ mv scala-2.11.8 /usr/local/scala

Add Scala bin folder to PATH

$ vi ~/.bash_profile
Append this line to the end of the file: export PATH=$PATH:/usr/local/scala/bin
$ source ~/.bash_profile

Now you should be able to run the following to access the Spark shell for Scala:

$ spark-shell
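
The Spark bin folder also includes a Python shell (pyspark). As a quick sanity check, you can run a tiny job in it; a minimal sketch (inside pyspark, where the SparkContext sc is predefined):

$ pyspark

# Inside the pyspark shell:
rdd = sc.parallelize(range(100))
print(rdd.filter(lambda x: x % 2 == 0).count())   # should print 50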

That’s it! Happy coding!

Big Data Everywhere

If your work is related to Big Data but you have not heard of the Big Data Everywhere Conference, don’t panic; chances are you are just not using MapR Hadoop. This is an event sponsored by MapR and its many partners. However, the topics cover all areas of Big Data, and you won’t feel discriminated against if you have only been using Cloudera or Hortonworks.

The conference is held in many cities several times a year, and the one I attended was in San Diego on April 12, 2016. Traffic was really bad on Interstate 5 from Orange County to San Diego that morning; I spent 2 hours on the road and arrived 45 minutes late. The breakfast provided was really good, so I decided to spend the 15 minutes eating instead of socializing with a full room of talented data professionals.

The full agenda is shown in the picture below, and I will summarize all the talks at this event based on my own written notes, since the organizer has not yet sent out the official presentation decks.

[Update 4/14/2016] The full presentation decks for the talks are available now.

[Photo: full agenda]

The first speaker was Jim Scott, director of enterprise strategy and architecture at MapR. His topic was Streaming in the Extreme. First he explained what enterprise architecture is, with a circular diagram he drew himself covering every area of a company’s data strategy, emphasizing that solution architecture is not the same as enterprise architecture. Later he introduced a streaming process he implemented using MapR streaming, which, according to the statistics provided, beats Apache Kafka. When asked whether he considers MapR streaming the best among all similar technologies, including Flink, Spark, Apex, Storm, etc., Jim gave the opinion that MapR streaming is definitely the best when used with MapR Hadoop.


Next on the stage was Alex Garbarini, an information technology engineer at Cisco, whose topic was Build and Operationalize an Enterprise Data Lake in a Big Enterprise. Being a technology company, Cisco was able to implement a data lake themselves using Hadoop that handles 2 billion records on a daily basis. The data lake is now a hub for multiple business uses, including the analysis of WebEx user activity.

Right after a talk about data lakes came a topic titled Going Beyond the Data Lake. Vik Kapoor, director of analytics technology architecture and platforms at Pfizer, talked about how they leverage the entire analytics ecosystem. They organized their practice around four steps (find, explore, understand, and share), which walk through data loading, data wrangling, data discovery, and evaluation, and build data products as a result. He also introduced the tools they are using for each step.

Coming up next was a panel discussion, in which Scott Saufferer and Robert Warner from ID Analytics answered questions from a host. The director of data operations and the director of engineering took turns telling the audience how they introduced Hadoop into their company, and how both teams collaborate to make the best of it.

Next on the stage was Alex Bates, the soft-spoken CTO of Mtell, talking about hardware and IoT. Mtell manufactures smart machines with sensors built in to transmit data about the machines’ status. The data is collected and processed by apps written in Spark. With the help of machine learning, they have learned a lot about the machines and created different agents to monitor anomalies and prevent failures. RESTful APIs were also created so that clients can integrate this with their own monitoring tools.

With data architects and data scientists fighting for the driver’s seat of big data groups within many organizations, it is only fair to invite speakers from both sides to any big data conference. Allen Day, chief scientist at MapR and a contributor to many open source projects and machine learning algorithm implementations, did an awesome job explaining how to build a genome analysis pipeline in simple words and diagrams that people with little knowledge of data science can understand. For those who want to dig deeper, he also provided the link to the source code: https://github.com/allenday/spark-genome-alignment-demo.

Last but not least, the energetic Stefan Groschupf, CEO of Datameer, jumped on the stage and gave a speech about how to jump-start a big data project in any organization. As a seasoned entrepreneur, he has a lot of experience running an organization, and his advice was simple and straightforward. Instead of spending a whole lot of money on the latest technology, he suggested forming a small team within the company. The members should be cross-functional, with different types of employees: the visionary, the reality check, the challengers, and the worker bees. The team should focus on problems within the company before bringing up innovative ideas or other people’s use cases. As the big data project goes on, the team will find a pain point, identify a few possible solutions, approach it from one small angle, and try out different tools to tackle it as a proof of concept. A process with a successful result will then be scaled up into a full solution that can bring even more value to the company, and the core team members will become the implementers and managers of the new process.

Believe it or not, this is exactly the kind of idea I approach my clients with as a big data consultant, and I’ve seen them become more and more confident and successful with what they are doing within a couple of years. “Great minds think alike!” That is a great feeling to go home with after a long half-day event.
