RDD Operations

RDDs support two types of operations:

Transformations -- create a new RDD from an existing one.

All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program.

This design enables Spark to run more efficiently. For example, Spark can recognize that an RDD created through map will only be used in a reduce, and return just the result of the reduce to the driver rather than the larger mapped RDD.
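
A minimal sketch of this behavior, assuming sc is an available SparkContext (as in a spark-shell session):

// map is lazy: this line returns immediately and computes nothing
val squares = sc.parallelize(1 to 1000000).map(n => n.toLong * n)
// The action triggers the job; Spark pipelines map into reduce and
// returns a single Long to the driver instead of the mapped RDD
val sumOfSquares = squares.reduce(_ + _)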

Actions -- return a value to the driver program after running a computation on the RDD.

For example, map is a transformation that passes each RDD element through a function and returns a new RDD representing the results. On the other hand, reduce is an action that aggregates all the elements of the RDD using some function and returns the final result to the driver program (although there is also a parallel reduceByKey that returns a distributed dataset).
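
That parenthetical deserves a quick illustration: unlike reduce, reduceByKey is a transformation, so it yields another RDD rather than shipping a value to the driver. A minimal sketch, again assuming sc is an available SparkContext:

// reduceByKey returns a distributed dataset of (key, aggregated value) pairs
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val countsByKey = pairs.reduceByKey(_ + _)   // still an RDD[(String, Int)]
countsByKey.collect()                        // the action: Array((a,4), (b,2))

Putting map and reduce together, the following example reads a text file and sums the lengths of its lines: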

// Read a text file into an RDD of lines
val rddFromFile = sc.textFile("file:///home/dv6/spark/spark/data/graphx/followers.txt")
// Transformation: map each line to its length (lazy -- nothing runs yet)
val lineLengths = rddFromFile.map(s => s.length)
// Persist lineLengths in memory so later actions can reuse it;
// without persist(), the lineLengths transformation is recomputed on each action
lineLengths.persist()
// Action: reduce triggers the computation and returns the total to the driver
val totalLength = lineLengths.reduce((a, b) => a + b)
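
Note that persist() with no arguments uses the default MEMORY_ONLY storage level. An explicit level can be requested instead; a sketch of the alternative (an RDD's storage level can only be assigned once, so this replaces the plain persist() call above):

import org.apache.spark.storage.StorageLevel
// Spill partitions that don't fit in memory to disk rather than recomputing them
lineLengths.persist(StorageLevel.MEMORY_AND_DISK)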
