Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel.
There are two ways to create RDDs:
1. Parallelizing an existing collection in your driver program.
2. Referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat. For example, the following reads a local text file:
val rddFromFile = spark.sparkContext.textFile("file:///home/dv6/spark/spark/data/graphx/followers.txt")
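The first method, parallelizing a driver-side collection, can be sketched as follows. This is a minimal local-mode sketch; the master URL "local[*]" and the app name are illustrative assumptions, not part of the original example:

```scala
// Minimal sketch: create an RDD by parallelizing a collection in the driver.
// "local[*]" and the app name are illustrative assumptions.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("parallelize-demo")
  .getOrCreate()

val data = Seq(1, 2, 3, 4, 5)
val distData = spark.sparkContext.parallelize(data) // RDD[Int]

// Operations on distData now run in parallel across partitions.
val total = distData.reduce(_ + _)

spark.stop()
```

Once created, `distData` supports the same transformations and actions as an RDD loaded from external storage.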
You can think of an RDD of text lines as a table with a single column: each row holds one whole line. A line may contain logical columns separated by a delimiter, but you cannot query those columns directly. You can only retrieve the entire line and then apply a function such as map(func) to split it into its logical columns.
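To make this concrete, the sketch below extracts the logical columns from each raw line with map. It assumes each line of followers.txt holds two space-separated ids, such as "2 1" (an assumption about the file's format):

```scala
// Assumed line format: "2 1" — two ids separated by a space.
// split pulls the logical columns out of the raw line.
val edges = rddFromFile.map { line =>
  val cols = line.split(" ")        // e.g. Array("2", "1")
  (cols(0).toLong, cols(1).toLong)  // (follower, followee) — illustrative names
}

// With the logical columns extracted, you can filter or aggregate on them,
// e.g. count how many users follow user 1:
val followersOf1 = edges.filter { case (_, followee) => followee == 1L }.count()
```

The key point is that the per-column structure exists only after you parse it yourself; the RDD itself knows nothing about the delimiter.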