Multilayer perceptron classifier

Multilayer perceptron classifier (MLPC) is a classifier based on the feedforward artificial neural network. MLPC consists of multiple layers of nodes. Each layer is fully connected to the next layer in the network. Nodes in the input layer represent the input data. All other nodes map inputs to outputs by a linear combination of the inputs with the node's weights w and bias b and applying an activation function. For MLPC with K+1 layers this can be written in matrix form as follows:

\[
\mathrm{y}(\mathbf{x}) = \mathrm{f}_K(\ldots \mathrm{f}_2(\mathbf{w}_2^T \mathrm{f}_1(\mathbf{w}_1^T \mathbf{x} + b_1) + b_2) \ldots + b_K)
\]
Nodes in intermediate layers use the sigmoid (logistic) function:

\[
\mathrm{f}(z_i) = \frac{1}{1 + e^{-z_i}}
\]
Nodes in the output layer use the softmax function:

\[
\mathrm{f}(z_i) = \frac{e^{z_i}}{\sum_{k=1}^{N} e^{z_k}}
\]
The number of nodes N in the output layer corresponds to the number of classes.
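To make the forward pass above concrete, here is a minimal sketch in plain Scala (not Spark's internal implementation; the weights and sizes are made up for illustration). Each layer computes w^T x + b and applies its activation; intermediate layers use the sigmoid, the output layer the softmax:

```scala
object ForwardPass {
  // Sigmoid activation used by intermediate layers.
  def sigmoid(z: Array[Double]): Array[Double] = => 1.0 / (1.0 + math.exp(-zi)))

  // Softmax used by the output layer; shifted by max(z) for numerical stability.
  // Outputs are positive and sum to 1, so they can be read as class probabilities.
  def softmax(z: Array[Double]): Array[Double] = {
    val m = z.max
    val exps = => math.exp(zi - m))
    val total = exps.sum / total)

  // One layer: each row of w holds the weights of one node; compute w x + b, then activate.
  def layer(x: Array[Double], w: Array[Array[Double]], b: Array[Double],
            f: Array[Double] => Array[Double]): Array[Double] = {
    val z = =>{ case (wi, xi) => wi * xi }.sum)
    f( { case (zi, bi) => zi + bi })

  def main(args: Array[String]): Unit = {
    val x  = Array(1.0, 0.5)                                              // 2 input features
    val w1 = Array(Array(0.1, -0.2), Array(0.4, 0.3))                     // hidden layer, 2 nodes
    val b1 = Array(0.0, 0.1)
    val w2 = Array(Array(0.5, -0.5), Array(-0.3, 0.8), Array(0.2, 0.1))   // output layer, 3 classes
    val b2 = Array(0.0, 0.0, 0.0)

    val hidden = layer(x, w1, b1, sigmoid)
    val out    = layer(hidden, w2, b2, softmax)
    println(out.mkString(", "))  // three class probabilities summing to 1
  }
}
```

Note how the shapes chain together exactly as in the matrix form: the output size of each layer must match the input size expected by the next.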
MLPC employs backpropagation for learning the model. We use the logistic loss function for optimization and L-BFGS as an optimization routine.
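As a sketch of the objective being minimized (plain Scala, not Spark's internal code): with a one-hot label y and softmax outputs p, the logistic (cross-entropy) loss for one example reduces to -log of the probability assigned to the true class:

```scala
object LogisticLoss {
  // L = -sum_k y_k * log(p_k); with one-hot y this is -log(p(label)).
  def loss(p: Array[Double], label: Int): Double = -math.log(p(label))

  def main(args: Array[String]): Unit = {
    // A confident, correct prediction incurs a small loss...
    println(loss(Array(0.9, 0.05, 0.05), label = 0))
    // ...while assigning low probability to the true class is penalized heavily.
    println(loss(Array(0.9, 0.05, 0.05), label = 2))
  }
}
```

L-BFGS then iteratively adjusts the weights and biases to reduce the average of this loss over the training set, using gradients obtained via backpropagation.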
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

// Load the data stored in LIBSVM format as a DataFrame.
val data =
  .load("data/mllib/sample_multiclass_classification_data.txt")

// Split the data into train and test.
val splits = data.randomSplit(Array(0.6, 0.4), seed = 1234L)
val train = splits(0)
val test = splits(1)

// Specify layers for the neural network:
// input layer of size 4 (features), two intermediate layers of size 5 and 4,
// and output layer of size 3 (classes).
val layers = Array[Int](4, 5, 4, 3)

// Create the trainer and set its parameters.
val trainer = new MultilayerPerceptronClassifier()

// Train the model.
val model =

// Compute accuracy on the test set.
val result = model.transform(test)
val predictionAndLabels ="prediction", "label")
val evaluator = new MulticlassClassificationEvaluator()

println(s"Test set accuracy = ${evaluator.evaluate(predictionAndLabels)}")
Test set accuracy = 0.9019607843137255
The number of neurons in the first layer needs to be equal to the number of features.