Integrate Tableau Data Visualization with Hive Data Warehouse and Apache Spark SQL

Components relevant to the subject:

Hive Warehouse

The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage, typically on a Hadoop cluster, using SQL. Structure can be projected onto data already in storage. A command-line tool and a JDBC driver are provided to connect users to Hive.
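As a minimal illustration of projecting structure onto data already in storage, a Hive external table can be declared over files that already exist on HDFS (the table name, path, and column names below are assumptions for illustration):
hive> CREATE EXTERNAL TABLE economydata_ext (
          obs_date STRING,
          obs_year INT,
          metric STRING,
          val DOUBLE)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      LOCATION '/data/economy';
Dropping an external table removes only its definition from the metastore; the underlying files remain in place.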

Spark SQL

Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both the data and the computation being performed. Internally, Spark SQL uses this extra information to perform extra optimizations.
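Because Spark SQL knows both the schema and the query, its Catalyst optimizer can rewrite a plan before execution. As a small sketch (the column names here are assumptions; the economydata table appears later in this article), EXPLAIN shows the optimized physical plan Spark will run:
spark-sql> EXPLAIN SELECT metric, avg(val) FROM economydata GROUP BY metric;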

Apache Thrift Framework

The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi, and other languages.
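As a brief sketch of the workflow, given an interface definition file (the file name here is illustrative), the Thrift compiler generates client and server stubs for each target language:
thrift --gen java service.thrift
thrift --gen py service.thrift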

Spark Thrift Server

Spark Thrift Server is Spark SQL's implementation of Apache Hive's HiveServer2. It allows JDBC/ODBC clients to execute SQL queries against Apache Spark over the JDBC and ODBC protocols; the default port is 10000. Spark Thrift Server is analogous to the HiveServer2 Thrift service in that both allow running SQL (HQL) queries against the Hive warehouse. The difference is execution: queries submitted to Hive run as MapReduce jobs, whereas queries submitted through Spark SQL run on the Spark cluster.
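For example, the beeline client that ships with Spark can connect to the Spark Thrift Server over JDBC (the host and user name below are assumptions for this environment):
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n dv6
If hive.server2.authentication is set to NOSASL, as configured later in this article, append ;auth=noSasl to the JDBC URL.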

Tableau

A data visualization tool used in business intelligence for analytics, turning rows of data into pictures.

Running a query on Hive

A query run on Hive is processed by MapReduce jobs executing on Hadoop, as the session below demonstrates:
(base) dv6@dv6:~$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/dv6/hive/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/dv6/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 2a7264c1-44d2-4b7c-b1ad-ae4b78059bf4
Logging initialized using configuration in jar:file:/home/dv6/hive/hive/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = d8354df2-7a05-4444-931e-566676b7e421
hive> show schemas;
OK
default
jentekllc
Time taken: 1.184 seconds, Fetched: 2 row(s)
hive> use jentekllc;
OK
Time taken: 0.032 seconds
hive> show tables;
OK
abc
economy
economy_data
economydata
Time taken: 0.037 seconds, Fetched: 4 row(s)
hive> select * from economydata order by 1 limit 5;
Query ID = dv6_20200425115847_384d118f-6420-4a24-a27a-2f44f9072a04
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1587711146522_0008, Tracking URL = http://dv6:8088/proxy/application_1587711146522_0008/
Kill Command = /home/dv6/hadoop-2.7.7/bin/mapred job -kill job_1587711146522_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-04-25 11:59:05,337 Stage-1 map = 0%, reduce = 0%
2020-04-25 11:59:14,563 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.47 sec
2020-04-25 11:59:23,431 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.24 sec
MapReduce Total cumulative CPU time: 7 seconds 240 msec
Ended Job = job_1587711146522_0008
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 7.24 sec HDFS Read: 71037 HDFS Write: 319 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 240 msec
OK
1930-01-01 1930 3 Years -0.277691
1930-01-01 1930 10 Years -0.027198
1930-01-01 1930 P/E 16.682171
1930-01-01 1930 S&P Composite 21.520000
1930-01-01 1930 1 Year -0.241846
Time taken: 37.414 seconds, Fetched: 5 row(s)
The query response time seems high at 37.414 seconds. Running the same query again repeats the same MapReduce jobs, with a similar response time of 30.073 seconds:
hive> select * from economydata order by 1 limit 5;
Query ID = dv6_20200425120233_7e73b12f-669b-4118-8e5a-149da8bcad18
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1587711146522_0009, Tracking URL = http://dv6:8088/proxy/application_1587711146522_0009/
Kill Command = /home/dv6/hadoop-2.7.7/bin/mapred job -kill job_1587711146522_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-04-25 12:02:45,600 Stage-1 map = 0%, reduce = 0%
2020-04-25 12:02:53,714 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.45 sec
2020-04-25 12:03:02,632 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.89 sec
MapReduce Total cumulative CPU time: 6 seconds 890 msec
Ended Job = job_1587711146522_0009
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 6.89 sec HDFS Read: 71150 HDFS Write: 319 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 890 msec
OK
1930-01-01 1930 3 Years -0.277691
1930-01-01 1930 10 Years -0.027198
1930-01-01 1930 P/E 16.682171
1930-01-01 1930 S&P Composite 21.520000
1930-01-01 1930 1 Year -0.241846
Time taken: 30.073 seconds, Fetched: 5 row(s)

Running the same query on the same Hive table on Spark

Spark and Spark SQL both use Spark Core as the processing engine to perform the task. Spark supports in-memory processing, which is usually 50-100 times faster than disk-based processing.
Indeed, in practically any benchmark Spark SQL is far faster than Hive running the same query on the same data persisted in the Hive warehouse.
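A table can also be pinned in Spark memory explicitly (a hedged sketch; the session below relies on Spark's own caching and does not require this):
spark-sql> CACHE TABLE economydata;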
(base) dv6@dv6:~/spark/spark$ $SPARK_HOME/bin/spark-sql
20/04/25 12:01:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark master: local[*], Application Id: local-1587841289677
spark-sql> show schemas;
default
jentekllc
Time taken: 4.109 seconds, Fetched 2 row(s)
spark-sql> use jentekllc;
Time taken: 0.113 seconds
spark-sql> show tables;
jentekllc abc false
jentekllc economy false
jentekllc economy_data false
jentekllc economydata false
Time taken: 0.193 seconds, Fetched 4 row(s)
spark-sql> select * from economydata order by 1 limit 5;
1930-01-01 1930 1 Year -0.241846
1930-01-01 1930 P/E 16.682171
1930-01-01 1930 10 Years -0.027198
1930-01-01 1930 3 Years -0.277691
1930-01-01 1930 Inflation rate -0.017544
Time taken: 4.356 seconds, Fetched 5 row(s)
It took about 4.35 seconds, which includes fetching the rows from the Hive table economydata into a Spark DataFrame and caching them in Spark memory.
Run it again and it takes only 0.379 seconds, because the query is served from the cache:
spark-sql> select * from economydata order by 1 limit 5;
1930-01-01 1930 1 Year -0.241846
1930-01-01 1930 P/E 16.682171
1930-01-01 1930 10 Years -0.027198
1930-01-01 1930 3 Years -0.277691
1930-01-01 1930 Inflation rate -0.017544
Time taken: 0.379 seconds, Fetched 5 row(s)
It needs mentioning that if the Hive table is changed, as a result of an insert for example, the Spark SQL cache must reflect the change before queries return the new rows.
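For changes made outside the current Spark session, a hedged note: Spark SQL provides REFRESH TABLE to invalidate and reload a table's cached metadata and data, so the next query sees the new rows:
spark-sql> REFRESH TABLE economydata;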

spark-sql client

spark-sql is Spark SQL's command-line interface, a counterpart to the Hive CLI; its launcher script basically executes the following on the Linux command line:
exec "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver "$@"
The source code for org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver is available in the Apache Spark repository if you are interested.

hive-site.xml

The hive-site.xml file is the global Hive configuration file. The file hive-default.xml.template contains the default values. You will need to copy hive-default.xml.template to hive-site.xml and edit it to set the right parameters. I have a working copy of hive-site.xml in my GitHub in case it is helpful; you will still need to modify it to fit your environment.
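The copy step itself is a one-liner:
cp $HIVE_HOME/conf/hive-default.xml.template $HIVE_HOME/conf/hive-site.xml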

Integrate with Spark

If Hive and Spark run on the same machine, integrating Hive with Spark is as simple as copying $HIVE_HOME/conf/hive-site.xml to $SPARK_HOME/conf, so that Spark and Spark SQL know where the Hive metastore, the data dictionary of the Hive database, is located and can access that same metastore in their queries.
Alternatively, you can create a soft link at $SPARK_HOME/conf/hive-site.xml pointing to $HIVE_HOME/conf/hive-site.xml:
ln -s $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/hive-site.xml
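As a quick sanity check (a sketch, assuming the spark-sql client is on this machine), Spark should now list the same schemas that Hive shows:
$SPARK_HOME/bin/spark-sql -e "show schemas;"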

Connect Tableau to visualize business intelligence

Tableau can connect to both Hive and Spark SQL; which one should you connect to? Spark SQL is the better option because, as demonstrated earlier, the same query on the same Hive table runs many times faster through Spark SQL than through Hive.
Start the Spark Thrift Server on the Spark cluster's master node:
$SPARK_HOME/sbin/start-thriftserver.sh
This script in turn runs:
exec "${SPARK_HOME}"/sbin/spark-daemon.sh submit $CLASS 1 --name "Thrift JDBC/ODBC Server" "$@"
CLASS is "org.apache.spark.sql.hive.thriftserver.HiveThriftServer2", which consequently starts up a SparkSQLContext and a HiveThriftServer2 Thrift server that listens on port 10000.
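Before pointing Tableau at the server, you can confirm it is listening (a hedged sketch; the port can also be overridden at startup):
# check that the Thrift server is listening on its default port
ss -ltn | grep 10000
# the port can be overridden when starting the server
$SPARK_HOME/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10000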
Download the Tableau Spark SQL ODBC driver

Before you attempt to connect to Spark SQL from Tableau, you must download and install the Tableau ODBC driver for Spark SQL. The first time you choose to connect to Spark SQL from Tableau, the sign in button is greyed out, but the sign in screen provides a download link for the driver. Choose your OS, and make sure you download the 64-bit ODBC driver.
If your Spark runs on a VirtualBox VM with a NAT network adapter, you will need to forward the Thrift server port to connect, as sketched below.
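For example (a hedged sketch; "SparkVM" is a placeholder for your VM's name), a NAT port-forwarding rule can map host port 10000 to the guest's Thrift server port:
VBoxManage modifyvm "SparkVM" --natpf1 "spark-thrift,tcp,,10000,,10000"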

After installing the ODBC driver in the prior step, relaunch Tableau Desktop and choose to connect to Spark SQL.

Enter the host name or IP address of the Spark master node and leave port 10000 as is.
Leave the Type field as is. For simplicity, I chose No Authentication in the Authentication field; Transport is Binary.
Do NOT check the Require SSL checkbox.
Hive authentication is defined in $HIVE_HOME/conf/hive-site.xml (which has been copied to $SPARK_HOME/conf/):
<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
  <description>
    Expects one of [nosasl, none, ldap, kerberos, pam, custom].
    Client authentication types.
      NONE: no authentication check
      LDAP: LDAP/AD based authentication
      KERBEROS: Kerberos/GSSAPI authentication
      CUSTOM: Custom authentication provider
              (Use with property hive.server2.custom.authentication.class)
      PAM: Pluggable authentication module
      NOSASL: Raw transport
  </description>
</property>
It is important to note that for "No Authentication" to work from Tableau, hive.server2.authentication needs to be set to NOSASL (raw transport), not NONE; despite its name, NONE still uses plain SASL, which performs a username/password handshake.
Transport is "Binary" because, in hive-site.xml, the transport mode is set to binary:
<property>
  <name>hive.server2.transport.mode</name>
  <value>binary</value>
  <description>
    Expects one of [binary, http].
    Transport mode of HiveServer2.
  </description>
</property>
Now just click sign in and specify the schema, and you will see your tables, such as economydata in schema jentekllc.
From here you can build visualizations on top of your queries.

Summary

Apache Spark is one of the great open source, all-in-one enterprise big data and analytics engines. It combines distributed/clustered computing with high availability, disruption resilience, and fault tolerance, and it computes in memory. It encapsulates sophisticated query capability over structured data, like relational database tables, and unstructured data, like NoSQL key/value pairs, along with robust streaming and rich machine learning and statistics features, paired with a graph computing engine for applications such as social networks and advertising-revenue-driven search engines.
For more Spark examples, see my articles "Page Rank with Apache Spark Graphx" and "Dremio Data Lake Engine Apache Arrow Flight Connector with Spark Machine Learning".
Because it is in-memory, distributed computing, Spark and Spark SQL are fast. But also because it is in-memory computing, Spark does not keep data persistently; it needs a database or file system to hold the data when Spark is shut down. In addition to Hive, Spark can easily connect to any database for which a JDBC driver is available and use that database as a "file server" to hold data and computed results, as in the Spark Streaming use case below.
My YouTube presentation "Develop Spark Streaming get Twitter tweets save to Hive, sbt assembly build needed twitter util jar": https://www.youtube.com/watch?v=XBfL6Jx4U2I&t=532s