Suppose I'm doing something like this:

    val df = sqlContext.load("com.databricks.spark.csv", Map("path" -> "cars.csv", "header" -> "true"))
    df.printSchema()

    root
     |-- year: string (nullable = true)
     |-- make: string (nullable = true)
     |-- model: string (nullable = true)
     |-- comment: string (nullable = true)
     |-- blank: string (nullable = true)

    df.show()
    year make  model comment              blank
    2012 Tesla S     No comment
    1997 Ford  E350  Go get one now th...

But I really want year as an Int (and perhaps to convert some other columns as well). The best I could come up with is:

    df.withColumn("year2", 'year.cast("Int")).select('year2 as 'year, 'make, 'model, 'comment, 'blank)
    org.apache.spark.sql.DataFrame = [year: int, make: string, model: string, comment: string, blank: string]

which is a bit convoluted. I come from R, where I'm used to writing, for example:

    df2 <- df %>%
      mutate(year = year %>% as.integer,
             make = make %>% toupper)

I'm probably missing something, since there should be a better way to do this.
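For context, here is a minimal, self-contained sketch of the withColumn + cast approach described above. It swaps cars.csv for a small in-memory DataFrame with the same column names and sample rows, and it assumes a Spark 1.x spark-shell session where sqlContext is already defined.

    // assumes a Spark 1.x spark-shell session where `sqlContext` is predefined;
    // the implicits enable the 'symbol column syntax and Seq#toDF
    import sqlContext.implicits._

    // small in-memory stand-in for cars.csv; every column starts out as a string
    val df = Seq(
      ("2012", "Tesla", "S", "No comment", ""),
      ("1997", "Ford", "E350", "Go get one now th...", "")
    ).toDF("year", "make", "model", "comment", "blank")

    // cast year to int via a temporary column, then rename it back in the select
    val df2 = df
      .withColumn("year2", 'year.cast("int"))
      .select('year2 as 'year, 'make, 'model, 'comment, 'blank)

    df2.printSchema()   // year is now int; the other columns remain strings

The string form of the cast mirrors the snippet in the question; Column.cast also accepts a DataType such as IntegerType (from org.apache.spark.sql.types) if you prefer the typed variant.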