In the Spark ML library, TF-IDF is split into two parts: TF (with hashing) and IDF.

TF: HashingTF is a Transformer that, in text processing, takes sets of terms and converts them into fixed-length feature vectors. While hashing, the algorithm also counts the term frequency of each term.

IDF: IDF is an Estimator. Applying its fit() method to a dataset produces an IDFModel. The IDFModel takes feature vectors (produced by HashingTF) and rescales each feature: terms that appear in many documents of the corpus carry little discriminative information, so IDF reduces their weight.
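Concretely, spark.ml's IDF uses the smoothed formula idf(t) = ln((m + 1) / (df(t) + 1)), where m is the number of documents and df(t) the number of documents containing term t. A minimal sketch in plain Scala (no Spark required) reproduces the values that appear in the example output later in this article:

```scala
// Smoothed inverse document frequency, as used by spark.ml's IDF
// (with the default minDocFreq = 0):
//   idf(t) = ln((m + 1) / (df(t) + 1))
// where m is the number of documents and df(t) the number of
// documents containing term t.
def idf(numDocs: Long, docFreq: Long): Double =
  math.log((numDocs + 1.0) / (docFreq + 1.0))

// With the 3 example sentences below: a term occurring in exactly one
// document gets ln(4/2), a term occurring in two documents gets ln(4/3).
println(idf(3, 1)) // 0.6931471805599453
println(idf(3, 2)) // 0.28768207245178085
```

The final TF-IDF score of a term is its term frequency multiplied by this idf value, which is why a term appearing twice in one document and nowhere else shows up as 2 × 0.6931 ≈ 1.3863 in the output.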

Spark implements term-frequency counting with feature hashing: each raw feature (term) is mapped to an index by a hash function, and only the frequencies of those indices need to be counted to obtain the term frequencies. This avoids building a global one-to-one term-to-index map, which would take considerable time on a large corpus. Note, however, that different raw features may hash to the same index (a hash collision). The only way to reduce the probability of collisions is to raise the dimensionality of the feature vector, i.e. increase the number of hash buckets; the default feature dimension is 2^20 = 1,048,576.
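The hashing trick can be sketched in a few lines of plain Scala. This is only an illustration: spark.ml's HashingTF hashes the term's UTF-8 bytes with MurmurHash3 (seed 42), so the indices below will not match Spark's, but the mechanics are the same: hash the term, then take the value modulo the number of buckets.

```scala
import scala.util.hashing.MurmurHash3

// Map a term to a bucket index: hash it, then reduce modulo the number
// of buckets, keeping the index non-negative even for negative hashes.
// (Illustrative only: Spark's own hash function differs in detail.)
def termIndex(term: String, numFeatures: Int): Int = {
  val raw = MurmurHash3.stringHash(term) % numFeatures
  if (raw < 0) raw + numFeatures else raw
}

// Term-frequency vector as a sparse map from index to count.
// Colliding terms silently share a bucket and their counts add up.
def termFrequencies(words: Seq[String], numFeatures: Int): Map[Int, Double] =
  words.groupBy(termIndex(_, numFeatures)).map { case (idx, ws) =>
    idx -> ws.size.toDouble
  }
```

With only 2000 buckets, as in the example below, collisions are rare for a handful of words but become likely as the vocabulary grows, which is why the default dimension is as large as 2^20.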

The code below starts from a set of sentences. First, a Tokenizer splits each sentence into individual words. For each sentence (bag of words), HashingTF converts it into a feature vector; finally, IDF rescales the feature vectors. This transformation usually improves performance when working with text features.

import org.apache.spark.ml.feature.{HashingTF, IDF, IDFModel, Tokenizer}
import org.apache.spark.sql.{DataFrame, SparkSession}

object TfIdf {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local").getOrCreate()
    val sentenceData: DataFrame = spark.createDataFrame(Seq(
      (0, "I heard about Spark and I love Spark"),
      (0, "I wish Java could use case classes"),
      (1, "Logistic regression models are neat")
    )).toDF("label", "sentence")
    sentenceData.show()
    val tokenizer: Tokenizer = new Tokenizer()
      .setInputCol("sentence")
      .setOutputCol("words")
    //+-----+--------------------+--------------------+
    //|label|            sentence|               words|
    //+-----+--------------------+--------------------+
    //|    0|I heard about Spa...|[i, heard, about,...|
    //|    0|I wish Java could...|[i, wish, java, c...|
    //|    1|Logistic regressi...|[logistic, regres...|
    //+-----+--------------------+--------------------+
    //The Tokenizer's transform() method splits each sentence into individual words.
    //The result is still a DataFrame, with one extra "words" column (shown above).
    val wordsData: DataFrame = tokenizer.transform(sentenceData)
    wordsData.show()
    //Hash each sentence into a feature vector with HashingTF's transform(); here the number of hash buckets is set to 2000.
    val hashingTF: HashingTF = new HashingTF()
      .setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(2000)
    val featurizedData: DataFrame = hashingTF.transform(wordsData)
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+
    //|label|sentence                            |words                                        |rawFeatures                                                          |
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+
    //|0    |I heard about Spark and I love Spark|[i, heard, about, spark, and, i, love, spark]|(2000,[240,333,1105,1329,1357,1777],[1.0,1.0,2.0,2.0,1.0,1.0])       |
    //|0    |I wish Java could use case classes  |[i, wish, java, could, use, case, classes]   |(2000,[213,342,489,495,1329,1809,1967],[1.0,1.0,1.0,1.0,1.0,1.0,1.0])|
    //|1    |Logistic regression models are neat |[logistic, regression, models, are, neat]    |(2000,[286,695,1138,1193,1604],[1.0,1.0,1.0,1.0,1.0])                |
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+
    // Each word has been hashed to an index. Take "I heard about Spark and I love Spark" as an example:
    // 2000 is the number of hash buckets, [240,333,1105,1329,1357,1777] are the (sorted) hash indices
    // of the words, and [1.0,1.0,2.0,2.0,1.0,1.0] are the corresponding term frequencies:
    // "i" and "spark" each occur twice, hence the two 2.0 entries.
    featurizedData.show(false)
    //Rescale the feature vectors with IDF. idf is an Estimator; applying its fit() method to the feature vectors produces an IDFModel.
    val idf: IDF = new IDF().setInputCol("rawFeatures").setOutputCol("features")
    val idfModel: IDFModel = idf.fit(featurizedData)
    //Calling the IDFModel's transform() method yields the TF-IDF score of each word.
    val rescaledData: DataFrame = idfModel.transform(featurizedData)
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    //|label|sentence                            |words                                        |rawFeatures                                                          |features                                                                                                                                                                       |
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    //|0    |I heard about Spark and I love Spark|[i, heard, about, spark, and, i, love, spark]|(2000,[240,333,1105,1329,1357,1777],[1.0,1.0,2.0,2.0,1.0,1.0])       |(2000,[240,333,1105,1329,1357,1777],[0.6931471805599453,0.6931471805599453,1.3862943611198906,0.5753641449035617,0.6931471805599453,0.6931471805599453])                       |
    //|0    |I wish Java could use case classes  |[i, wish, java, could, use, case, classes]   |(2000,[213,342,489,495,1329,1809,1967],[1.0,1.0,1.0,1.0,1.0,1.0,1.0])|(2000,[213,342,489,495,1329,1809,1967],[0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453,0.28768207245178085,0.6931471805599453,0.6931471805599453])|
    //|1    |Logistic regression models are neat |[logistic, regression, models, are, neat]    |(2000,[286,695,1138,1193,1604],[1.0,1.0,1.0,1.0,1.0])                |(2000,[286,695,1138,1193,1604],[0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453])                                               |
    //+-----+------------------------------------+---------------------------------------------+---------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    rescaledData.show(false)
    spark.stop()
  }
}
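The three stages above (Tokenizer, HashingTF, IDF) can also be chained into a single Pipeline, which is the usual spark.ml idiom once the transformation becomes part of a larger workflow. A sketch, reusing the column names and `sentenceData` DataFrame from the example above:

```scala
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

// Chain the three stages; fit() applies the Transformers and fits the
// IDF Estimator in order, producing a reusable PipelineModel.
val pipeline = new Pipeline().setStages(Array(
  new Tokenizer().setInputCol("sentence").setOutputCol("words"),
  new HashingTF().setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(2000),
  new IDF().setInputCol("rawFeatures").setOutputCol("features")
))
// val model: PipelineModel = pipeline.fit(sentenceData)
// model.transform(sentenceData).select("features").show(false)
```

A fitted PipelineModel can then be applied to new sentences with the same transform() call, so tokenization, hashing, and IDF weighting stay consistent between training and inference.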

References

"Spark入门:特征抽取: TF-IDF — spark.ml", Xiamen University Database Lab blog
https://dblab.xmu.edu.cn/blog/1261-2/
