Integrate Tableau Data Visualization with Hive Data Warehouse and Apache Spark SQL


Components relevant to the subject:

Hive Warehouse

The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage, typically on a Hadoop cluster, using SQL. Structure can be projected onto data already in storage. A command-line tool and a JDBC driver are provided to connect users to Hive.
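
For example, structure can be projected onto files already sitting in HDFS by creating an external table. Below is a minimal HiveQL sketch; the table name, columns, and HDFS path are hypothetical:

CREATE EXTERNAL TABLE economy_ext (
  record_date STRING,
  record_year INT,
  metric STRING,
  metric_value DOUBLE)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION '/user/dv6/economy';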

SPARK SQL

Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both the data and the computation being performed. Internally, Spark SQL uses this extra information to perform extra optimizations.
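
As a small illustration of that interface, Spark SQL can run SQL directly over files without registering a table first; the Parquet file path below is a placeholder:

SELECT * FROM parquet.`/path/to/some/file.parquet` LIMIT 5;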

Apache Thrift Framework

The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.

Spark thrift server

Spark Thrift Server is Spark SQL's implementation of Apache Hive's HiveServer2. It allows JDBC/ODBC clients to execute SQL queries over the JDBC and ODBC protocols on Apache Spark; the default port is 10000. Spark Thrift Server is similar to HiveServer2 in that both allow running SQL queries against the Hive warehouse. The difference is that SQL (HQL) queries submitted to Hive run as MapReduce jobs, whereas SQL queries run against the Hive warehouse through Spark SQL execute on the Spark cluster.

Tableau

A data visualization tool used in business intelligence and analytics that turns rows of data into pictures.

Running a query on Hive

A query run on Hive is processed by MapReduce jobs executing on Hadoop, as the session below demonstrates:

(base) dv6@dv6:~$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/dv6/hive/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/dv6/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 2a7264c1-44d2-4b7c-b1ad-ae4b78059bf4
Logging initialized using configuration in jar:file:/home/dv6/hive/hive/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = d8354df2-7a05-4444-931e-566676b7e421
hive> show schemas;
OK
default
jentekllc
Time taken: 1.184 seconds, Fetched: 2 row(s)
hive> use jentekllc;
OK
Time taken: 0.032 seconds
hive> show tables;
OK
abc
economy
economy_data
economydata
Time taken: 0.037 seconds, Fetched: 4 row(s)
hive> select * from economydata order by 1 limit 5;
Query ID = dv6_20200425115847_384d118f-6420-4a24-a27a-2f44f9072a04
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
 set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
 set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
 set mapreduce.job.reduces=<number>
Starting Job = job_1587711146522_0008, Tracking URL = http://dv6:8088/proxy/application_1587711146522_0008/
Kill Command = /home/dv6/hadoop-2.7.7/bin/mapred job -kill job_1587711146522_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-04-25 11:59:05,337 Stage-1 map = 0%, reduce = 0%
2020-04-25 11:59:14,563 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.47 sec
2020-04-25 11:59:23,431 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.24 sec
MapReduce Total cumulative CPU time: 7 seconds 240 msec
Ended Job = job_1587711146522_0008
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 7.24 sec HDFS Read: 71037 HDFS Write: 319 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 240 msec
OK
1930-01-01 1930 3 Years -0.277691
1930-01-01 1930 10 Years -0.027198
1930-01-01 1930 P/E 16.682171
1930-01-01 1930 S&P Composite 21.520000
1930-01-01 1930 1 Year -0.241846
Time taken: 37.414 seconds, Fetched: 5 row(s)

The query response time seems high, at 37.414 seconds. Running the same query again launches the same MapReduce jobs again, with a similar query response time of 30.073 seconds:

hive> select * from economydata order by 1 limit 5;
Query ID = dv6_20200425120233_7e73b12f-669b-4118-8e5a-149da8bcad18
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1587711146522_0009, Tracking URL = http://dv6:8088/proxy/application_1587711146522_0009/
Kill Command = /home/dv6/hadoop-2.7.7/bin/mapred job  -kill job_1587711146522_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-04-25 12:02:45,600 Stage-1 map = 0%,  reduce = 0%
2020-04-25 12:02:53,714 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.45 sec
2020-04-25 12:03:02,632 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.89 sec
MapReduce Total cumulative CPU time: 6 seconds 890 msec
Ended Job = job_1587711146522_0009
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.89 sec   HDFS Read: 71150 HDFS Write: 319 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 890 msec
OK
1930-01-01      1930    3 Years -0.277691
1930-01-01      1930    10 Years        -0.027198
1930-01-01      1930    P/E     16.682171
1930-01-01      1930    S&P Composite   21.520000
1930-01-01      1930    1 Year  -0.241846
Time taken: 30.073 seconds, Fetched: 5 row(s)

Running the same query on the same Hive table on Spark

Spark and Spark SQL both use Spark Core as the processing engine to perform the task. Spark supports in-memory processing, which is usually 50-100 times faster than regular disk-based processing.

In practically any benchmark, Spark SQL is far faster than Hive running the same query on the same data persisted in the Hive warehouse.

(base) dv6@dv6:~/spark/spark$ $SPARK_HOME/bin/spark-sql
20/04/25 12:01:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark master: local[*], Application Id: local-1587841289677
spark-sql> show schemas;
default
jentekllc
Time taken: 4.109 seconds, Fetched 2 row(s)
spark-sql> use jentekllc;
Time taken: 0.113 seconds
spark-sql> show tables;
jentekllc       abc     false
jentekllc       economy false
jentekllc       economy_data    false
jentekllc       economydata     false
Time taken: 0.193 seconds, Fetched 4 row(s)
spark-sql> select * from economydata order by 1 limit 5;
1930-01-01      1930    1 Year  -0.241846
1930-01-01      1930    P/E     16.682171
1930-01-01      1930    10 Years        -0.027198
1930-01-01      1930    3 Years -0.277691
1930-01-01      1930    Inflation rate  -0.017544
Time taken: 4.356 seconds, Fetched 5 row(s)

It took 4.356 seconds, which includes fetching the rows from the Hive table economydata into a Spark DataFrame and caching them in Spark memory.

Run it again and it takes only 0.379 seconds, because the query is served from the cache:

spark-sql> select * from economydata order by 1 limit 5;
1930-01-01      1930    1 Year  -0.241846
1930-01-01      1930    P/E     16.682171
1930-01-01      1930    10 Years        -0.027198
1930-01-01      1930    3 Years -0.277691
1930-01-01      1930    Inflation rate  -0.017544
Time taken: 0.379 seconds, Fetched 5 row(s)
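
Caching can also be controlled explicitly from the spark-sql prompt; CACHE TABLE and UNCACHE TABLE are standard Spark SQL statements:

spark-sql> CACHE TABLE economydata;
spark-sql> UNCACHE TABLE economydata;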

It is worth mentioning that if the Hive table is changed, as a result of an insert for example, Spark SQL will update its cache incrementally to reflect the change.
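
If Spark ever appears to serve stale results after the table is modified outside of Spark, you can force it to re-read the table's metadata and data; REFRESH TABLE is a standard Spark SQL statement:

spark-sql> REFRESH TABLE economydata;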

spark-sql client

spark-sql is the command-line client for Spark SQL (note that it talks to the Hive metastore directly rather than going through the Thrift server); it basically executes the following on the Linux command line:

exec "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver "$@"

The source code for org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver is available if you are interested (see the links at the end of this page).

hive-site.xml

The hive-site.xml file is the global Hive configuration file. The file hive-default.xml.template contains the default values. You will need to copy hive-default.xml.template to hive-site.xml and edit it to set the right parameters. I have a working copy of hive-site.xml in my GitHub in case it is helpful; you will still need to modify it to fit your environment.
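
The parameters you are most likely to edit point Hive at its metastore database. For example, with a MySQL-backed metastore, the connection URL property looks like the following; the host, port, and database name are illustrative:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for the metastore database</description>
</property>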

Integrate with Spark

If Hive and Spark run on the same machine, integrating Hive with Spark is as simple as copying $HIVE_HOME/conf/hive-site.xml to $SPARK_HOME/conf, so that Spark and Spark SQL know where the Hive metastore, the data dictionary of the Hive database, is located, and access the same metastore in their queries:
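
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/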

Alternatively, you can create a soft link at $SPARK_HOME/conf/hive-site.xml pointing to $HIVE_HOME/conf/hive-site.xml:

ln -s $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/hive-site.xml

Connect Tableau to visualize business intelligence

Tableau can connect to both Hive and Spark SQL. Which one should you connect to? Spark SQL is the better option, because the same query on the same Hive table runs many times faster through Spark SQL than through Hive, as demonstrated earlier.

Start the Spark Thrift Server on the Spark cluster master node

$SPARK_HOME/sbin/start-thriftserver.sh

which runs:

exec "${SPARK_HOME}"/sbin/spark-daemon.sh submit $CLASS 1 — name "Thrift JDBC/ODBC Server" "$@"

CLASS is "org.apache.spark.sql.hive.thriftserver.HiveThriftServer2"

which, as a consequence, starts up a SparkSQLContext and a HiveThriftServer2 Thrift server that listens on port 10000.
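
Before pointing Tableau at the server, you can verify that it is listening and accepts JDBC connections using the beeline client that ships with Spark. Below is a minimal check, assuming the server runs on localhost; append ;auth=noSasl to the JDBC URL if hive.server2.authentication is set to NOSASL, as it is later on this page:

netstat -an | grep 10000
$SPARK_HOME/bin/beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" -n dv6

Once connected, running show schemas; should list the same databases you saw in spark-sql earlier.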

Download the Tableau to Spark SQL ODBC driver

Choose your OS, and make sure to download the 64-bit ODBC driver.

The first time you choose to connect to Spark SQL from Tableau, the sign-in button is greyed out, but you are provided with a download link on the sign-in screen.

If your Spark runs on a VirtualBox VM with a NAT network adapter, see the next section on how to connect through port forwarding.

Before you attempt to connect to Spark SQL from Tableau, you must download and install the Tableau driver for Spark SQL.

After installing the ODBC driver in the prior step, relaunch Tableau Desktop and choose to connect to Spark SQL.

Enter the host name or IP address of the Spark master node, and leave port 10000 as is.

Leave the Type field as is. For simplicity, I choose No Authentication in the Authentication field; Transport is Binary.

Do NOT check the Require SSL checkbox.

Hive authentication is defined in $HIVE_HOME/conf/hive-site.xml (which has been copied to $SPARK_HOME/conf/):

<property>
    <name>hive.server2.authentication</name>
    <value>NOSASL</value>
    <description>
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL:  Raw transport
    </description>
  </property>

It is important to note that for no authentication from Tableau, hive.server2.authentication needs to be set to NOSASL, not NONE: NONE still wraps the connection in SASL plain authentication, while NOSASL uses the raw transport that Tableau's No Authentication option expects.
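
If you change this setting, restart the Thrift server so it takes effect:

$SPARK_HOME/sbin/stop-thriftserver.sh
$SPARK_HOME/sbin/start-thriftserver.sh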

Transport is Binary because in hive-site.xml the transport mode is set to binary:

<property>
    <name>hive.server2.transport.mode</name>
    <value>binary</value>
    <description>
      Expects one of [binary, http].
      Transport mode of HiveServer2.
    </description>
  </property>

Now just click Sign In, specify the schema, and you will see your tables, such as economydata in schema jentekllc.

You can then do fancy things to visualize your query results.

Summary

Apache Spark is one of the great open source, all-in-one enterprise big data/analytics engines. It combines distributed/clustered computing with high availability, disruption resilience, and fault tolerance, and it computes in memory. It encapsulates sophisticated query capability over structured data, like relational database tables, and unstructured data, like NoSQL key-value pairs, along with robust streaming and rich machine learning and statistics features, paired with a graph computing engine for applications such as social networks and advertisement-revenue-driven search engines.


Because it performs in-memory, distributed computing, Spark and Spark SQL are fast. But also because it computes in memory, Spark SQL does not keep data persistently; it needs a database or file system to hold the data when Spark is shut down. In addition to Hive, Spark can easily connect to any database for which a JDBC driver is available and use that database as a "file server" to hold data and computed results, as in the Spark streaming use case mentioned below.
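
For example, from the spark-sql prompt you can expose any JDBC source as a table using Spark SQL's JDBC data source syntax. A sketch, assuming a PostgreSQL database; the URL, table, and credentials are placeholders, and the PostgreSQL JDBC driver jar must be on Spark's classpath (for example via --jars):

CREATE TEMPORARY VIEW jdbc_results
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:postgresql://localhost:5432/mydb',
  dbtable 'public.results',
  user 'dv6',
  password '********'
);

SELECT * FROM jdbc_results LIMIT 5;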

My YouTube presentation "Develop Spark Streaming get Twitter tweets save to Hive, sbt assembly build needed twitter util jar" is linked in the references below.

https://thrift.apache.org/
https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
https://github.com/geyungjen/jentekllc/blob/master/hive/hive-site.xml
https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala
https://george-jen.gitbook.io/data-science-and-apache-spark/connect-tableau-to-spark-sql-running-in-vm-with-virtualbox-with-nat
https://www.tableau.com/support/drivers
https://medium.com/@jentekllc8888/page-rank-with-apache-spark-graphx-a51964467c56
https://medium.com/@jentekllc8888/dremio-data-lake-engine-apache-arrow-flight-connector-with-spark-machine-learning-48a45499f114
https://www.youtube.com/watch?v=XBfL6Jx4U2I&t=532s