Asked by: 小点点

How can I parallelize/distribute queries/counts against a Spark DataFrame?


I have a DataFrame to which I have to apply a series of filter queries. For example, I load the DataFrame as follows:

val df = spark.read.parquet("hdfs://box/some-parquet")

I then have a bunch of "arbitrary" filters, like the following:

  • C0='true' AND C1='false'
  • C0='false' AND C3='true'
  • and so on…

I usually get these filters dynamically from a util method:

val filters: List[String] = getFilters()

What I do is apply these filters to the DataFrame to get the counts. For example:

val counts = filters.map(filter => {
  df.where(filter).count
})

I noticed that mapping over the filters is not a parallel/distributed operation; the counts are computed one after another. Sticking the filters into an RDD/DataFrame does not work either, since that would mean nested DataFrame operations, which (as I have read on SO) are not allowed in Spark because the df reference is only usable on the driver. Something like the following throws a NullPointerException (NPE):

val df = spark.read.parquet("hdfs://box/some-parquet")
val filterRDD = spark.sparkContext.parallelize(List("C0='false'", "C1='true'"))
val counts = filterRDD.map(df.filter(_).count).collect
Caused by: java.lang.NullPointerException
  at org.apache.spark.sql.Dataset.filter(Dataset.scala:1127)
  at $anonfun$1.apply(:27)
  at $anonfun$1.apply(:27)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
  at scala.collection.AbstractIterator.to(Iterator.scala:1336)
  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
  at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
  at org.apache.spark.scheduler.Task.run(Task.scala:86)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
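
One partial workaround I can see is to overlap the job submission from the driver with a Scala parallel collection, but that only parallelizes the scheduling of the jobs; each filter still does its own full pass over the data:

// Concurrent job submission from the driver: each count is still a
// separate Spark job with its own full scan of the input.
val counts = filters.par.map(filter => df.where(filter).count).toList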

Is there any way to parallelize/distribute these filtered counts over a DataFrame in Spark? By the way, I am using Spark v2.0.2.


1 Answer

Anonymous user

The only gain to expect from doing this (and it can be quite substantial) is a single pass over the input data.

I would do it like this (a programmatic solution, but equivalent SQL is possible):

  1. Convert your filters into UDFs that return 1 or 0
  2. Add one column for each such UDF
  3. Group by / sum over those columns

A sample Spark shell session would look like this:

scala> val data = spark.createDataFrame(Seq("A", "BB", "CCC").map(Tuple1.apply)).withColumnRenamed("_1", "input")

data: org.apache.spark.sql.DataFrame = [input: string]

scala> data.show
+-----+
|input|
+-----+
|    A|
|   BB|
|  CCC|
+-----+

scala> val containsBFilter = udf((input: String) => if(input.contains("B")) 1 else 0)
containsBFilter: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,IntegerType,Some(List(StringType)))

scala> val lengthFilter = udf((input: String) => if (input.length < 3) 1 else 0)
lengthFilter: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,IntegerType,Some(List(StringType)))

scala> data.withColumn("inputLength", lengthFilter($"input")).withColumn("containsB", containsBFilter($"input")).select(sum($"inputLength"), sum($"containsB")).show

+----------------+--------------+
|sum(inputLength)|sum(containsB)|
+----------------+--------------+
|               2|             1|
+----------------+--------------+
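
For completeness, the equivalent SQL mentioned above could look roughly like this, using conditional aggregation over the same data registered as a temp view (a sketch; it should produce the same two sums as the programmatic version):

data.createOrReplaceTempView("data")
spark.sql("""
  SELECT SUM(CASE WHEN length(input) < 3 THEN 1 ELSE 0 END) AS inputLength,
         SUM(CASE WHEN input LIKE '%B%' THEN 1 ELSE 0 END) AS containsB
  FROM data
""").show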
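
And to tie this back to the question: since the dynamically supplied filters are already SQL predicate strings, the UDF step can be skipped by turning each string into a 0/1 column with expr. A sketch, assuming the filter strings parse against df's actual schema:

import org.apache.spark.sql.functions.{expr, sum, when}

// Each predicate string becomes a 0/1 column; a single select then
// computes every count in one pass over df.
val aggs = filters.map(f => sum(when(expr(f), 1).otherwise(0)).alias(f))
val row = df.select(aggs: _*).first
// row(i) is the count for filters(i)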