Why does Spark SQL translate the String "null" to a null object for Float/Double types?

By himanshuIIITian

I have a DataFrame containing float and double values.

scala> val df = List((Float.NaN, Double.NaN), (1f, 0d)).toDF("x", "y")
df: org.apache.spark.sql.DataFrame = [x: float, y: double]
scala> df.show
+---+---+
|  x|  y|
+---+---+
|NaN|NaN|
|1.0|0.0|
+---+---+
scala> df.printSchema
root
 |-- x: float (nullable = false)
 |-- y: double (nullable = false)

To replace the NaN values with null values, I passed "null" as a String in the Map given to the fill operation.

scala> val map = df.columns.map((_, "null")).toMap
map: scala.collection.immutable.Map[String,String] = Map(x -> null, y -> null)
scala> df.na.fill(map).printSchema
root
 |-- x: float (nullable = true)
 |-- y: double (nullable = true)
scala> df.na.fill(map).show
+----+----+
|   x|   y|
+----+----+
|null|null|
| 1.0| 0.0|
+----+----+

And I got the correct result. But I was not able to understand how/why Spark SQL translates the String "null" into a null object.
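A plausible explanation, sketched here rather than traced through Spark's fill implementation: the String replacement value is cast to each column's data type, and Spark SQL's String-to-numeric cast (under the default, non-ANSI semantics) returns null for any string it cannot parse as a number. The literal "null" is not special; any non-numeric string such as "foo" would produce the same result. The following REPL snippet, reusing the df defined above, illustrates this cast behavior:

scala> import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.functions.lit
scala> import org.apache.spark.sql.types.FloatType
import org.apache.spark.sql.types.FloatType
scala> // Casting an unparseable String literal to float yields null, not an error
scala> df.select(lit("null").cast(FloatType).as("casted")).show
+------+
|casted|
+------+
|  null|
|  null|
+------+

Under this reading, the schema flipping to nullable = true after the fill is simply a consequence of the cast being able to produce nulls.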

Source: Stack Overflow
