FeatureHasher
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick to map features to indices in the feature vector.
The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:
Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns using the categoricalCols parameter.
String columns: For categorical features, the hash value of the string “column_name=value” is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are “one-hot” encoded (similarly to using OneHotEncoder with dropLast=false).
Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as “column_name=true” or “column_name=false”, with an indicator value of 1.0.
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
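The per-column logic above can be sketched in a few lines. The helper below is an illustration only: it uses Python's hashlib in place of Spark's MurmurHash 3, so the indices it produces will not match FeatureHasher's, but the index/value rules (hash the column name for numeric features; hash “column=value” with an indicator of 1.0 for categorical and boolean features; skip nulls) are the ones described above.

```python
import hashlib

def _hash_to_index(key: str, num_features: int) -> int:
    # Illustrative stand-in for MurmurHash 3; Spark's actual indices differ.
    h = int.from_bytes(hashlib.md5(key.encode("utf-8")).digest()[:4], "big")
    return h % num_features

def hash_features(row: dict, num_features: int = 262144) -> dict:
    """Map a row of {column: value} to a sparse {index: value} vector."""
    vec = {}
    for col, value in row.items():
        if value is None:
            continue  # nulls are ignored (implicitly zero)
        if isinstance(value, bool):
            # booleans behave like strings: "col=true" / "col=false"
            key, feat = f"{col}={str(value).lower()}", 1.0
        elif isinstance(value, (int, float)):
            # numeric: hash the column name, keep the value itself
            key, feat = col, float(value)
        else:
            # categorical: hash "col=value", indicator value 1.0 (one-hot style)
            key, feat = f"{col}={value}", 1.0
        idx = _hash_to_index(key, num_features)
        vec[idx] = vec.get(idx, 0.0) + feat  # colliding features accumulate
    return vec

print(hash_features({"real": 2.2, "bool": True, "stringNum": "1", "string": "foo"}))
```

Running this on the first example row produces a sparse vector with one entry of 2.2 (the numeric feature) and three indicator entries of 1.0, mirroring the structure of FeatureHasher's output, though at different indices.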
The hash function used here is the same MurmurHash 3 used in HashingTF. Since a simple modulo of the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.
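Because MurmurHash 3 returns a signed 32-bit integer, a plain remainder can be negative in some languages; Spark folds the hash into a valid index with a non-negative modulo. The sketch below illustrates that idea (it is not Spark's source code).

```python
def non_negative_mod(x: int, mod: int) -> int:
    # Fold a possibly negative hash value into the range [0, mod),
    # so it is always a valid vector index.
    return ((x % mod) + mod) % mod

# A negative 32-bit hash still lands in a valid slot of the feature vector:
print(non_negative_mod(-1737124532, 262144))
```

With a power-of-two numFeatures, this modulo simply keeps the low bits of the hash, which is why it spreads well-hashed features evenly across the vector.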
Examples
Assume that we have a DataFrame with 4 input columns real, bool, stringNum, and string. These different input data types illustrate the behavior of the transform in producing a column of feature vectors.
+----+-----+---------+------+
|real|bool |stringNum|string|
+----+-----+---------+------+
|2.2 |true |1        |foo   |
|3.3 |false|2        |bar   |
|4.4 |false|3        |baz   |
|5.5 |false|4        |foo   |
+----+-----+---------+------+
Then the output of FeatureHasher.transform on this DataFrame is:
+----+-----+---------+------+-------------------------------------------------------+
|real|bool |stringNum|string|features                                               |
+----+-----+---------+------+-------------------------------------------------------+
|2.2 |true |1        |foo   |(262144,[51871,63643,174475,253195],[1.0,1.0,2.2,1.0]) |
|3.3 |false|2        |bar   |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.3])  |
|4.4 |false|3        |baz   |(262144,[24279,140467,174475,196810],[1.0,1.0,4.4,1.0])|
|5.5 |false|4        |foo   |(262144,[63643,140467,168512,174475],[1.0,1.0,1.0,5.5])|
+----+-----+---------+------+-------------------------------------------------------+
The resulting feature vectors could then be passed to a learning algorithm.
import org.apache.spark.ml.feature.FeatureHasher

val dataset = spark.createDataFrame(Seq(
  (2.2, true, "1", "foo"),
  (3.3, false, "2", "bar"),
  (4.4, false, "3", "baz"),
  (5.5, false, "4", "foo")
)).toDF("real", "bool", "stringNum", "string")

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")

val featurized = hasher.transform(dataset)
featurized.show(false)

/*
Output:
+----+-----+---------+------+--------------------------------------------------------+
|real|bool |stringNum|string|features                                                |
+----+-----+---------+------+--------------------------------------------------------+
|2.2 |true |1        |foo   |(262144,[174475,247670,257907,262126],[2.2,1.0,1.0,1.0])|
|3.3 |false|2        |bar   |(262144,[70644,89673,173866,174475],[1.0,1.0,1.0,3.3])  |
|4.4 |false|3        |baz   |(262144,[22406,70644,174475,187923],[1.0,1.0,4.4,1.0])  |
|5.5 |false|4        |foo   |(262144,[70644,101499,174475,257907],[1.0,1.0,5.5,1.0]) |
+----+-----+---------+------+--------------------------------------------------------+
*/