Graphx Graph Traversal with Pregel Explained


Breadth First Graph Traversal with Graphx using Pregel

To demonstrate graph traversal using Pregel from GraphX, first construct the graph, called graph:

import org.apache.spark.graphx._

// Seven vertices and six directed edges, assuming sc is a live SparkContext (e.g., spark-shell)
val vertices = sc.parallelize(Seq((1L,1.0), (2L,2.0), (3L,3.0), (4L,4.0), (5L,5.0), (6L,6.0), (7L,7.0)))
val edges = sc.parallelize(Seq(Edge(1L,2L,1), Edge(1L,3L,2), Edge(2L,4L,3), Edge(2L,5L,4), Edge(3L,6L,5), Edge(3L,7L,6)))
val graph = Graph(vertices, edges)
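
The edge list links the seven vertices into a small binary tree rooted at vertex 1:

        1
       / \
      2   3
     / \ / \
    4   5 6  7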

Create a new graph called markedGraph by transforming the original graph:

// Traversal starts from the root of the tree, which is id 1
val start: VertexId = 1
// Set the start vertex attribute to 0.0 and all others to Double.PositiveInfinity
val markedGraph = graph.mapVertices((id, _) => if (id == start) 0.0 else Double.PositiveInfinity)

Thought Process:

markedGraph is the same as the original graph, except that every vertex attribute other than that of VertexId=1 is Infinity. The vertex attribute for VertexId=1 is 0.0.

We will visit every vertex whose attribute is Infinity. During the visit, the visited vertex's attribute is set to a non-Infinity value.

We will not visit vertices whose attributes are not Infinity.

When no vertex attribute is Infinity any longer, the visiting ends, which completes the graph traversal.

Then print all vertex ids after the traversal ends, when no vertex attribute is Infinity anymore.
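
Before looking at the Pregel implementation, here is a minimal serial sketch of this same thought process in plain Scala (no Spark; the function name and adjacency-map representation are assumptions for illustration only):

import scala.collection.mutable

// Infinity means "not yet visited"; after the loop each attribute is the BFS level
def bfsLevels(start: Long, neighbors: Map[Long, Seq[Long]]): Map[Long, Double] = {
  val allIds = neighbors.keySet ++ neighbors.values.flatten
  val attrs  = mutable.Map(allIds.map(_ -> Double.PositiveInfinity).toSeq: _*)
  attrs(start) = 0.0
  var frontier = List(start)
  while (frontier.nonEmpty) {
    // visit only neighbors whose attribute is still Infinity
    frontier = frontier.flatMap { v =>
      neighbors.getOrElse(v, Nil).filter(attrs(_) == Double.PositiveInfinity).map { n =>
        attrs(n) = attrs(v) + 1 // overwrite Infinity: mark visited with its level
        n
      }
    }
  }
  attrs.toMap
}

// bfsLevels(1L, Map(1L -> Seq(2L, 3L), 2L -> Seq(4L, 5L), 3L -> Seq(6L, 7L)))
// => Map(1 -> 0.0, 2 -> 1.0, 3 -> 1.0, 4 -> 2.0, 5 -> 2.0, 6 -> 2.0, 7 -> 2.0)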

Implementation of Pregel:

Calling <graph>.pregel returns a new graph. We will use Pregel to traverse markedGraph, which has the starting vertex (1, 0.0); the rest of the vertices are (VertexId, Double.PositiveInfinity).

Internally, pregel runs in a loop; each step of the loop is called a SuperStep. During each SuperStep, every vertex whose attribute is not Double.PositiveInfinity sends its attribute to connected neighboring vertices whose attributes are Double.PositiveInfinity. Each receiving vertex sets its attribute to the smallest of the received messages (if it receives more than one message from different connected neighboring sending vertices), overwriting its original value of Double.PositiveInfinity.

Vertices whose attributes are Double.PositiveInfinity do not send messages; they stay put.

Likewise, vertices do not send messages to connected neighboring vertices whose attributes are NOT Double.PositiveInfinity; those also stay put.

The loop ends when no vertex attribute is Double.PositiveInfinity anymore. At that point the traversal is complete.

// vprog is the vertex program: it acts on the message received, setting the vertex
// attribute to the smaller of the current attribute and the message. If the attribute
// was ∞, it is replaced by a value smaller than ∞.
val vprog = { (id: VertexId, attr: Double, msg: Double) => math.min(attr, msg) }

val sendMessage = { (triplet: EdgeTriplet[Double, Int]) =>
// Define the return variable "run" as an iterator, initialized during each SuperStep
  var run: Iterator[(VertexId, Double)] = Iterator.empty
// Only a vertex whose attribute is Double.PositiveInfinity has not been visited;
// skipping pairs where both ends are visited avoids an endless loop
  if(!(triplet.srcAttr != Double.PositiveInfinity && triplet.dstAttr != Double.PositiveInfinity)){
  if(triplet.srcAttr != Double.PositiveInfinity){
// Source attribute is not infinity and dest attribute is infinity: send source attribute + 1
// as the new dest vertex attribute, overwriting its original infinity value
      run = Iterator((triplet.dstId, triplet.srcAttr + 1))
    }else if(triplet.dstAttr != Double.PositiveInfinity) {
// Dest attribute is not infinity and source attribute is infinity: send dest attribute + 1
// as the new source vertex attribute, overwriting its original infinity value
      run = Iterator((triplet.srcId, triplet.dstAttr + 1))
    }
  }
// Return run, which holds the Iterator
  run
}

// mergeMessage is a reduce function: if the receiving vertex is connected to more than
// one sending vertex and receives more than one message in the same SuperStep, it
// chooses the smallest message by reduction
val mergeMessage = { (a: Double, b: Double) => math.min(a, b) }

val graphTraverse = markedGraph.pregel(Double.PositiveInfinity, 20)(vprog, sendMessage, mergeMessage)

How Pregel works

The pregel method takes two argument lists:

pregel(<argument list 1>)(<argument list 2>)

Argument list 1 is a list of value arguments; it sets the initial values.

Argument list 2 is a list of function arguments. This is an example of a Scala higher-order function: each function argument is free to have its own execution algorithm, its own logic.
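
For reference, the pregel signature in the GraphX API looks roughly like this (simplified; VD and ED are the graph's vertex and edge attribute types, A is the message type):

def pregel[A](
    initialMsg: A,
    maxIterations: Int = Int.MaxValue,
    activeDirection: EdgeDirection = EdgeDirection.Either)(
    vprog: (VertexId, VD, A) => VD,
    sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId, A)],
    mergeMsg: (A, A) => A): Graph[VD, ED]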

In this example

val graphTraverse = markedGraph.pregel(Double.PositiveInfinity, 20)(vprog, sendMessage, mergeMessage)

The first argument list is

(Double.PositiveInfinity, 20)

It defines:

  1. The initial message is Double.PositiveInfinity; it starts the computation. This message is passed to all the vertices in the vertexRDD for the first iteration, including the starting vertex whose attribute is 0.0. Since vprog keeps the smaller value, Double.PositiveInfinity never replaces a vertex attribute that is not greater than the message, so the initial message leaves every attribute unchanged.

  2. The maximum number of iterations is 20, so the loop ends after at most 20 SuperSteps. In this example, 20 is more than sufficient; it serves as a mechanism to break an endless loop if one happens.
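
To see why the initial message is harmless, apply vprog by hand with values from this example:

vprog(1L, 0.0, Double.PositiveInfinity) // math.min(0.0, ∞) => 0.0, unchanged
vprog(4L, Double.PositiveInfinity, Double.PositiveInfinity) // => ∞, still unvisited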

The second argument list is

(vprog, sendMessage, mergeMessage)

that contains 3 function arguments:

It is important to note that during each SuperStep, each sending vertex sends its message to its directly connected vertices in parallel. You can imagine multiple instances of sendMessage, mergeMessage, and vprog running in parallel across all worker nodes (computers) in the Spark cluster, updating vertex attributes at the same time across many vertices. This is not done serially.

vprog (short for vertex program) changes the vertex attribute when its condition is met:

// vprog is the vertex program: it acts on the message received, setting the vertex
// attribute to the smaller of the current attribute and the message. If the attribute
// was ∞, it is replaced by a value smaller than ∞.
val vprog = { (id: VertexId, attr: Double, msg: Double) => math.min(attr, msg) }


sendMessage

During each SuperStep of the loop, once the computation starts, each triplet of connected neighboring vertices is examined to decide whether a message should be sent: a vertex whose attribute is NOT Double.PositiveInfinity may send a message ONLY to a directly connected vertex whose attribute IS Double.PositiveInfinity.

The message, if sent, is the sending vertex's attribute plus 1.

The logic of this code is:

A vertex does not send a message if its own attribute is Double.PositiveInfinity, or if the attribute of the connected neighboring vertex is NOT Double.PositiveInfinity.

For each SuperStep, sendMessage returns an Iterator of (VertexId, message) pairs. (Again, the message is the sending vertex's attribute plus 1.)

val sendMessage = { (triplet: EdgeTriplet[Double, Int]) =>
// Define the return variable "run" as an iterator, initialized during each SuperStep
  var run: Iterator[(VertexId, Double)] = Iterator.empty
// Only a vertex whose attribute is Double.PositiveInfinity has not been visited;
// skipping pairs where both ends are visited avoids an endless loop
  if(!(triplet.srcAttr != Double.PositiveInfinity && triplet.dstAttr != Double.PositiveInfinity)){
  if(triplet.srcAttr != Double.PositiveInfinity){
// Source attribute is not infinity and dest attribute is infinity: send source attribute + 1
// as the new dest vertex attribute, overwriting its original infinity value
      run = Iterator((triplet.dstId, triplet.srcAttr + 1))
    }else if(triplet.dstAttr != Double.PositiveInfinity) {
// Dest attribute is not infinity and source attribute is infinity: send dest attribute + 1
// as the new source vertex attribute, overwriting its original infinity value
      run = Iterator((triplet.srcId, triplet.dstAttr + 1))
    }
  }
// Return run, which holds the Iterator
  run
}

mergeMessage

For the receiving vertex, if multiple messages are received, return the smallest message value.

// mergeMessage is a reduce function: if the receiving vertex is connected to more than
// one sending vertex and receives more than one message in the same SuperStep, it
// chooses the smallest message by reduction
val mergeMessage = { (a: Double, b: Double) => math.min(a, b) }
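
As a quick illustration of the reduce semantics (hypothetical message values, not from this graph), two messages arriving at the same vertex in one SuperStep collapse to the smaller one:

Seq(2.0, 3.0).reduce(mergeMessage) // => 2.0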

Following is visualization of each SuperStep in the loop:

Super Step 1: The root vertex (1, 0.0) sends a message of its vertex attribute 0.0+1=1.0 to its connected neighboring vertices (vertexId=2) and (vertexId=3), which previously had Double.PositiveInfinity, setting vertex (vertexId=2) to (2, 1.0) and (vertexId=3) to (3, 1.0).

Super Step 2: In this step, vertices (vertexId=2) and (vertexId=3) each send a message of their vertex attribute 1.0+1=2.0 to their connected neighboring vertices, setting the attributes of these receiving vertices (vertexId=4, 5, 6, and 7) to 2.0.

Super Step 3: No vertex attribute is Double.PositiveInfinity, so no messages are sent and the loop ends. Here is the end state of the graph returned by the pregel call.
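
The vertex attributes after each SuperStep (reconstructed from the steps above; ∞ stands for Double.PositiveInfinity):

Initial:     (1,0.0) (2,∞)   (3,∞)   (4,∞)   (5,∞)   (6,∞)   (7,∞)
SuperStep 1: (1,0.0) (2,1.0) (3,1.0) (4,∞)   (5,∞)   (6,∞)   (7,∞)
SuperStep 2: (1,0.0) (2,1.0) (3,1.0) (4,2.0) (5,2.0) (6,2.0) (7,2.0)
SuperStep 3: no messages sent, loop ends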

Summary:

Before the super loop starts:

markedGraph.vertices.collect

res2: Array[(org.apache.spark.graphx.VertexId, Double)] = Array((4,Infinity), (1,0.0), (5,Infinity), (6,Infinity), (2,Infinity), (3,Infinity), (7,Infinity))

After the super loop ends, every vertex attribute that was Infinity has been changed to a non-infinity value when the vertex was visited by a message:

graphTraverse.vertices.collect

res3: Array[(org.apache.spark.graphx.VertexId, Double)] = Array((4,2.0), (1,0.0), (5,2.0), (6,2.0), (2,1.0), (3,1.0), (7,2.0))

If you print the vertices in traversal order, the output starts from the vertex whose attribute is 0.0, then propagates to the vertices with 1.0, then to those with 2.0.
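
Sorting the returned vertices by their attribute produces exactly that order (the complete program below uses the same expression):

graphTraverse.vertices.sortBy(_._2).map(_._1).collect.foreach(println)
// prints 1 first (0.0), then 2 and 3 (1.0), then 4, 5, 6, 7 (2.0)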

Here is the entire graph traversal code; the result of running it is the path of the graph traversal:

import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD



def graphTraversal(startVertexId: VertexId, graph: org.apache.spark.graphx.Graph[Double,Int]): 
org.apache.spark.graphx.Graph[Double,Int]={

//graph.triangleCount.vertices.collect.foreach(println)
/*
Output is a tuple pair: the 1st element is the vertex id, the 2nd is the triangle count.
All zeros for this tree:
(4,0)
(6,0)
(2,0)
(1,0)
(3,0)
(7,0)
(5,0)
*/
//The following code traverses the tree graph.
//Traversal starts from the root of the tree, which is id 1
val start: VertexId = startVertexId
//Set the start vertex attribute to 0.0 and all others to Double.PositiveInfinity
val markedGraph = graph.mapVertices((id, _) => if (id == start) 0.0 else
  Double.PositiveInfinity)


//Alternative lambda form: val vprog = { (id: VertexId, attr: Double, msg: Double) => math.min(attr,msg) }
//vprog receives messages in parallel for messages in the same SuperStep; sendMessage always returns an Iterator
def vprog(id: org.apache.spark.graphx.VertexId, attr: Double, msg: Double): Double =
    math.min(attr, msg)
val sendMessage = { (triplet: EdgeTriplet[Double, Int]) => {
  var run: Iterator[(VertexId, Double)] = Iterator.empty
//Only a vertex whose attribute is Double.PositiveInfinity has not been visited;
//skipping pairs where both ends are visited avoids an endless loop
  if(!(triplet.srcAttr != Double.PositiveInfinity && triplet.dstAttr != Double.PositiveInfinity)){
    if(triplet.srcAttr != Double.PositiveInfinity && triplet.dstAttr == Double.PositiveInfinity){
      //Source is visited, dest is not: send source attribute + 1 to dest
      run = Iterator((triplet.dstId, triplet.srcAttr + 1))
    }else if(triplet.dstAttr != Double.PositiveInfinity && triplet.srcAttr == Double.PositiveInfinity){
      //Dest is visited, source is not: send dest attribute + 1 to source
      run = Iterator((triplet.srcId, triplet.dstAttr + 1))
    }
  }
  run
}
}
val mergeMessage = { (a: Double, b: Double) => math.min(a,b) }
markedGraph.pregel(Double.PositiveInfinity, 20)(vprog, sendMessage, mergeMessage)

}

//Now call the function graphTraversal

val vertices=sc.parallelize(Seq((1L,1.0),(2L,2.0),(3L,3.0),(4L,4.0),(5L,5.0),(6L,6.0),(7L,7.0)))
val edges=sc.parallelize(Seq(Edge(1L,2L,1),Edge(1L,3L,2), Edge(2L,4L,3), Edge(2L,5L,4), Edge(3L,6L, 5), Edge(3L,7L, 6)))

graphTraversal(1L, Graph(vertices,edges)).vertices.sortBy(_._2).map(x=>x._1).collect.foreach(println)

/*
Output: the traversal sequence:

1
2
3
4
5
6
7
*/

Summary

As you can see, graph traversal by sending messages from the starting vertex to its directly connected vertices, then propagating to the next layer of vertices, is breadth-first traversal.