Class VertexRDDImpl<VD>

Object
 org.apache.spark.rdd.RDD<scala.Tuple2<Object,VD>>
  org.apache.spark.graphx.VertexRDD<VD>
   org.apache.spark.graphx.impl.VertexRDDImpl<VD>

All Implemented Interfaces:
java.io.Serializable, Logging

Methods:

<VD2> VertexRDD<VD2>	aggregateUsingIndex(RDD<scala.Tuple2<Object,VD2>> messages, scala.Function2<VD2,VD2,VD2> reduceFunc, scala.reflect.ClassTag<VD2> evidence$12)

Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
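A minimal spark-shell sketch (assuming an existing SparkContext sc; the RDD names are illustrative):

    import org.apache.spark.graphx._
    import org.apache.spark.rdd.RDD

    // A vertex set with ids 0-99, each carrying the attribute 1
    val setA: VertexRDD[Int] = VertexRDD(sc.parallelize(0L until 100L).map(id => (id, 1)))
    // A plain RDD of messages containing duplicate vertex ids
    val msgs: RDD[(VertexId, Double)] =
      sc.parallelize(0L until 100L).flatMap(id => List((id, 1.0), (id, 2.0)))
    // Sum the messages per id, reusing setA's index for the result
    val setB: VertexRDD[Double] = setA.aggregateUsingIndex(msgs, _ + _)
    // setB is co-indexed with setA, so later zip joins between them avoid a shuffle
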
VertexRDDImpl<VD>	cache()

Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
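A sketch, assuming a VertexRDD named verts that has not been persisted yet:

    import org.apache.spark.storage.StorageLevel

    verts.cache()          // persist at the target storage level (MEMORY_ONLY unless overridden)
    verts.count()          // the first action materializes and caches the vertex partitions
    verts.getStorageLevel  // StorageLevel.MEMORY_ONLY for the default target level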

void	checkpoint()

Mark this RDD for checkpointing.
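A sketch of the usual checkpointing flow (the checkpoint directory and the VertexRDD verts are illustrative):

    sc.setCheckpointDir("/tmp/graphx-checkpoints")  // illustrative path

    verts.cache()       // cache first so the checkpoint write does not recompute the lineage
    verts.checkpoint()  // only marks the RDD; nothing is written yet
    verts.count()       // the first action after checkpoint() triggers the write

    verts.isCheckpointed     // true once the partitions have been materialized and saved
    verts.getCheckpointFile  // Some(directory the partitions were written to)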

long	count()

The number of vertices in the RDD.

VertexRDD<VD>	diff(RDD<scala.Tuple2<Object,VD>> other)

For each vertex present in both this and other, diff returns only those vertices whose values differ, keeping the value from other.

VertexRDD<VD>	diff(VertexRDD<VD> other)

For each vertex present in both this and other, diff returns only those vertices whose values differ, keeping the value from other.
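For instance, given two co-indexed vertex sets (a sketch assuming an existing SparkContext sc):

    import org.apache.spark.graphx._

    val before: VertexRDD[Int] = VertexRDD(sc.parallelize(Seq((1L, 10), (2L, 20), (3L, 30))))
    val after: VertexRDD[Int]  = before.mapValues((id: VertexId, v: Int) => if (id == 2L) v + 1 else v)

    // Only vertex 2 changed, so diff should return just (2,21): the value taken from the other side
    val changed: VertexRDD[Int] = before.diff(after)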

scala.Option<String>	getCheckpointFile()

Gets the name of the directory to which this RDD was checkpointed.

StorageLevel	getStorageLevel()

Get the RDD's current storage level, or StorageLevel.NONE if none is set.

<U,VD2> VertexRDD<VD2>	innerJoin(RDD<scala.Tuple2<Object,U>> other, scala.Function3<Object,VD,U,VD2> f, scala.reflect.ClassTag<U> evidence$10, scala.reflect.ClassTag<VD2> evidence$11)

Inner joins this VertexRDD with an RDD containing vertex attribute pairs.
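A sketch joining a VertexRDD against a plain pair RDD (illustrative names, assuming sc):

    import org.apache.spark.graphx._
    import org.apache.spark.rdd.RDD

    val ranks: VertexRDD[Double] = VertexRDD(sc.parallelize(Seq((1L, 0.15), (2L, 0.3), (3L, 0.55))))
    val names: RDD[(VertexId, String)] = sc.parallelize(Seq((1L, "alice"), (3L, "carol"), (4L, "dave")))

    // Keeps only the ids present on both sides (1 and 3) and combines their attributes
    val joined: VertexRDD[(String, Double)] =
      ranks.innerJoin(names) { (id, rank, name) => (name, rank) }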

<U,VD2> VertexRDD<VD2>	innerZipJoin(VertexRDD<U> other, scala.Function3<Object,VD,U,VD2> f, scala.reflect.ClassTag<U> evidence$8, scala.reflect.ClassTag<VD2> evidence$9)

Efficiently inner joins this VertexRDD with another VertexRDD sharing the same index.
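When the other side already shares this VertexRDD's index, for example because it was derived from it, the join reduces to a per-partition zip. A sketch:

    import org.apache.spark.graphx._

    val ranks: VertexRDD[Double] = VertexRDD(sc.parallelize(Seq((1L, 0.15), (2L, 0.85))))
    val degrees: VertexRDD[Int]  = ranks.mapValues((r: Double) => 1)  // reuses ranks' index

    // No shuffle: partitions of the two co-indexed VertexRDDs are zipped pairwise
    val scaled: VertexRDD[Double] = ranks.innerZipJoin(degrees) { (id, rank, deg) => rank / deg }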

boolean	isCheckpointed()

Return whether this RDD is checkpointed and materialized, either reliably or locally.

<VD2,VD3> VertexRDD<VD3>	leftJoin(RDD<scala.Tuple2<Object,VD2>> other, scala.Function3<Object,VD,scala.Option<VD2>,VD3> f, scala.reflect.ClassTag<VD2> evidence$6, scala.reflect.ClassTag<VD3> evidence$7)

Left joins this VertexRDD with an RDD containing vertex attribute pairs.
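A sketch (illustrative names, assuming sc); every vertex of the left side is kept and missing right-side values arrive as None:

    import org.apache.spark.graphx._
    import org.apache.spark.rdd.RDD

    val users: VertexRDD[String] = VertexRDD(sc.parallelize(Seq((1L, "alice"), (2L, "bob"))))
    val ages: RDD[(VertexId, Int)] = sc.parallelize(Seq((1L, 34)))

    // Yields (1,(alice,Some(34))) and (2,(bob,None))
    val withAge: VertexRDD[(String, Option[Int])] =
      users.leftJoin(ages) { (id, name, ageOpt) => (name, ageOpt) }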

<VD2,VD3> VertexRDD<VD3>	leftZipJoin(VertexRDD<VD2> other, scala.Function3<Object,VD,scala.Option<VD2>,VD3> f, scala.reflect.ClassTag<VD2> evidence$4, scala.reflect.ClassTag<VD3> evidence$5)

Left joins this RDD with another VertexRDD with the same index.
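A sketch where both sides were derived from the same VertexRDD and therefore share an index:

    import org.apache.spark.graphx._

    val attrs: VertexRDD[String]  = VertexRDD(sc.parallelize(Seq((1L, "a"), (2L, "b"))))
    val flags: VertexRDD[Boolean] = attrs.mapValues((s: String) => s == "a")  // same index as attrs

    // Per-partition zip join; every vertex of attrs is kept, the right-side value is optional
    val tagged: VertexRDD[String] =
      attrs.leftZipJoin(flags) { (id, s, flag) => if (flag.getOrElse(false)) s.toUpperCase else s }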

<VD2> VertexRDD<VD2>	mapValues(scala.Function1<VD,VD2> f, scala.reflect.ClassTag<VD2> evidence$2)

Maps each vertex attribute, preserving the index.

<VD2> VertexRDD<VD2>	mapValues(scala.Function2<Object,VD,VD2> f, scala.reflect.ClassTag<VD2> evidence$3)

Maps each vertex attribute, additionally supplying the vertex ID.
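A sketch of the two overloads (illustrative names, assuming sc):

    import org.apache.spark.graphx._

    val degrees: VertexRDD[Int] = VertexRDD(sc.parallelize(Seq((1L, 3), (2L, 0))))

    // Attribute-only variant: the index is reused, only the values change
    val doubled: VertexRDD[Int] = degrees.mapValues((d: Int) => d * 2)

    // Variant that also receives the vertex id
    val labeled: VertexRDD[String] =
      degrees.mapValues((id: VertexId, d: Int) => s"vertex $id has degree $d")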

VertexRDD<VD>	minus(RDD<scala.Tuple2<Object,VD>> other)

minus acts as a set difference operation, returning only the vertices whose VertexIds are present in this but not in other.

VertexRDD<VD>	minus(VertexRDD<VD> other)

minus acts as a set difference operation, returning only the vertices whose VertexIds are present in this but not in other.
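A sketch (illustrative names, assuming sc); only the ids matter, attribute values are ignored:

    import org.apache.spark.graphx._

    val all: VertexRDD[Int]     = VertexRDD(sc.parallelize(Seq((1L, 1), (2L, 1), (3L, 1))))
    val visited: VertexRDD[Int] = VertexRDD(sc.parallelize(Seq((2L, 5))))

    // Keeps vertices 1 and 3; vertex 2 is dropped because its id is present in visited
    val remaining: VertexRDD[Int] = all.minus(visited)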

scala.Option<Partitioner>	partitioner()

Optionally overridden by subclasses to specify how they are partitioned.

RDD<org.apache.spark.graphx.impl.ShippableVertexPartition<VD>>	partitionsRDD()

The underlying RDD of shippable vertex partitions backing this VertexRDD.

VertexRDDImpl<VD>	persist(StorageLevel newLevel)

Persists the vertex partitions at the specified storage level, ignoring any existing target storage level.
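A sketch, assuming a VertexRDD named verts that has not yet been assigned a storage level:

    import org.apache.spark.storage.StorageLevel

    val onDisk = verts.persist(StorageLevel.MEMORY_AND_DISK)  // overrides the default target level
    onDisk.count()          // materializes and caches the vertex partitions
    onDisk.getStorageLevel  // StorageLevel.MEMORY_AND_DISK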

VertexRDD<VD>	reindex()

Construct a new VertexRDD that is indexed by only the visible vertices.
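Typically used after filter, which hides vertices behind a bitmask but keeps the original index. A sketch, where degrees is an assumed VertexRDD[Int]:

    // Hidden vertices still occupy slots in the original index
    val nonIsolated = degrees.filter { case (_, deg) => deg > 0 }

    // Build a fresh, compact index over only the visible vertices; the result can no
    // longer be cheaply zip-joined with VertexRDDs built on the old index
    val compacted = nonIsolated.reindex()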

VertexRDD<VD>	reverseRoutingTables()

Returns a new VertexRDD reflecting a reversal of all edge directions in the corresponding EdgeRDD.