Resilient Distributed Datasets (RDDs)

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel.

There are two ways to create RDDs:

Parallelizing an existing collection in your driver program, for example:

val rdd = sc.parallelize(Seq(1, 2, 3, 4, 5))

Referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.

val rddFromFile = spark.sparkContext.textFile("file:///home/dv6/spark/spark/data/graphx/followers.txt")
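Either way, the resulting RDD can then be operated on in parallel through transformations and actions. A minimal sketch, assuming the same Spark shell session and the rdd and rddFromFile values created above:

// transformation: build a new RDD by doubling each element
val doubled = rdd.map(_ * 2)

// actions: compute results and return them to the driver
doubled.collect()    // Array(2, 4, 6, 8, 10)
rdd.reduce(_ + _)    // 15, the sum of all elements
rddFromFile.count()  // number of lines in followers.txt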

You can think of an RDD as a table with a single column: each row holds one line. That line may contain logical columns separated by a delimiter, but you cannot query by those logical columns directly. You can only read the whole line out, then use a function such as map(func) to split the line on the delimiter and extract the logical columns.

For example:

val rdd=sc.parallelize(Seq("jack,student","mary,instructor","ann,researcher"))
rdd.collect.foreach(println)
/*
jack,student
mary,instructor
ann,researcher
*/
rdd.map(x=>x.split(",")(0)).collect
//res10: Array[String] = Array(jack, mary, ann)
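If you need more than one logical column, the same map(func) pattern can split each line and return a tuple. A sketch, assuming the comma-delimited rdd defined just above:

// split each line on the delimiter and keep (name, role) pairs
val pairs = rdd.map { line =>
  val cols = line.split(",")
  (cols(0), cols(1))
}
pairs.collect
// Array((jack,student), (mary,instructor), (ann,researcher))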
