Question: How do I change dataframe column names in PySpark?

I come from a pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful with this simple command:

df.columns = new_column_name_list

However, the same doesn’t work in pyspark dataframes created using sqlContext. The only solution I could figure out to do this easily is the following:

df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i,k in enumerate(oldSchema.fields):
  k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)

This is basically defining the variable twice: it first infers the schema, then renames the columns, and then loads the dataframe again with the updated schema.

Is there a better and more efficient way to do this, as in pandas?

My Spark version is 1.5.0.


Answer 0

There are many ways to do that:

  • Option 1. Using selectExpr.

    data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], 
                                      ["Name", "askdaosdka"])
    data.show()
    data.printSchema()
    
    # Output
    #+-------+----------+
    #|   Name|askdaosdka|
    #+-------+----------+
    #|Alberto|         2|
    #| Dakota|         2|
    #+-------+----------+
    
    #root
    # |-- Name: string (nullable = true)
    # |-- askdaosdka: long (nullable = true)
    
    df = data.selectExpr("Name as name", "askdaosdka as age")
    df.show()
    df.printSchema()
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    
    #root
    # |-- name: string (nullable = true)
    # |-- age: long (nullable = true)
    
  • Option 2. Using withColumnRenamed. Note that this method allows you to “overwrite” the same column. The code below uses range, which works on Python 3; on Python 2 you can use xrange instead.

    from functools import reduce
    
    oldColumns = data.schema.names
    newColumns = ["name", "age"]
    
    df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), range(len(oldColumns)), data)
    df.printSchema()
    df.show()
    
  • Option 3. Using alias; in Scala you can also use as.

    from pyspark.sql.functions import col
    
    data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
    data.show()
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    
  • Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.

    sqlContext.registerDataFrameAsTable(data, "myTable")
    df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
    
    df2.show()
    
    # Output
    #+-------+---+
    #|   name|age|
    #+-------+---+
    #|Alberto|  2|
    #| Dakota|  2|
    #+-------+---+
    

Answer 1

df = df.withColumnRenamed("colName", "newColName")\
       .withColumnRenamed("colName2", "newColName2")

The advantage of this approach: with a long list of columns you may want to change only a few column names, and this is very convenient in those scenarios. It is also very useful when joining tables with duplicate column names; rename the clashing column on one side before the join, as in the sketch below.
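
A minimal sketch of that join scenario (assuming a SparkSession named spark, as in the later answers; the tables and column names here are hypothetical):

left = spark.createDataFrame([(1, "a")], ["id", "value"])
right = spark.createDataFrame([(1, "b")], ["id", "value"])

# Rename the clashing "value" column on the right side before joining,
# so the joined result has unambiguous column names.
joined = left.join(right.withColumnRenamed("value", "value_right"), on="id")
joined.show()
# The result has columns: id, value, value_right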


Answer 2

If you want to rename all column names at once, try df.toDF(*cols), as sketched below.
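
A minimal sketch (the new names here are hypothetical, and it assumes df has exactly two columns; the list must supply one name for every existing column, in order):

new_names = ["name", "age"]  # one entry per existing column, in order
df = df.toDF(*new_names)
df.printSchema()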


Answer 3

In case you would like to apply a simple transformation to all column names, this code does the trick (here I am replacing all spaces with underscores):

new_column_name_list = list(map(lambda x: x.replace(" ", "_"), df.columns))

df = df.toDF(*new_column_name_list)

Thanks to @user8117731 for the toDF trick.


Answer 4

If you want to rename a single column and keep the rest as it is:

from pyspark.sql.functions import col
new_df = old_df.select(*[col(s).alias(new_name) if s == column_to_change else s for s in old_df.columns])
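
For example, binding the placeholders with hypothetical names (assuming old_df has a column "Amount" to be renamed to "Price"):

column_to_change = "Amount"  # hypothetical: the column to rename
new_name = "Price"           # hypothetical: its new name
new_df = old_df.select(*[col(s).alias(new_name) if s == column_to_change else s for s in old_df.columns])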

Answer 5

df.withColumnRenamed('age', 'age2')


Answer 6

This is the approach I used:

Create a PySpark session:

import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('changeColNames').getOrCreate()

Create a dataframe:

df = spark.createDataFrame(data=[('Bob', 5.62, 'juice'), ('Sue', 0.85, 'milk')], schema=["Name", "Amount", "Item"])

View the df with its column names:

df.show()
+----+------+-----+
|Name|Amount| Item|
+----+------+-----+
| Bob|  5.62|juice|
| Sue|  0.85| milk|
+----+------+-----+

Create a list with the new column names:

newcolnames = ['NameNew','AmountNew','ItemNew']

Change the column names of the df:

for c, n in zip(df.columns, newcolnames):
    df = df.withColumnRenamed(c, n)

View the df with the new column names:

df.show()
+-------+---------+-------+
|NameNew|AmountNew|ItemNew|
+-------+---------+-------+
|    Bob|     5.62|  juice|
|    Sue|     0.85|   milk|
+-------+---------+-------+

Answer 7

I made an easy-to-use function to rename multiple columns of a PySpark dataframe, in case anyone wants to use it:

def renameCols(df, old_columns, new_columns):
    for old_col,new_col in zip(old_columns,new_columns):
        df = df.withColumnRenamed(old_col,new_col)
    return df

old_columns = ['old_name1','old_name2']
new_columns = ['new_name1', 'new_name2']
df_renamed = renameCols(df, old_columns, new_columns)

Be careful: both lists must be the same length.
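
A minimal defensive variant (my own addition, not part of the original answer): since zip silently truncates to the shorter list, you can validate the lengths first.

def renameColsChecked(df, old_columns, new_columns):
    # zip() stops at the shorter list, so mismatched lengths would
    # silently skip some renames; fail loudly instead.
    if len(old_columns) != len(new_columns):
        raise ValueError("old_columns and new_columns must be the same length")
    for old_col, new_col in zip(old_columns, new_columns):
        df = df.withColumnRenamed(old_col, new_col)
    return df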


Answer 8

Another way to rename just one column (using import pyspark.sql.functions as F); note that this moves the renamed column to the end of the dataframe:

df = df.select( '*', F.col('count').alias('new_count') ).drop('count')

Answer 9

I use this one:

from pyspark.sql.functions import col
df.select(['vin',col('timeStamp').alias('Date')]).show()

Answer 10

You can use the following function to rename all the columns of your dataframe.

def df_col_rename(X, to_rename, replace_with):
    """
    :param X: spark dataframe
    :param to_rename: list of original names
    :param replace_with: list of new names
    :return: dataframe with updated names
    """
    import pyspark.sql.functions as F
    mapping = dict(zip(to_rename, replace_with))
    # Note: only the columns listed in to_rename are kept in the result.
    X = X.select([F.col(c).alias(mapping.get(c, c)) for c in to_rename])
    return X

In case you need to update only a few column names, you can reuse the same column name in the replace_with list.

To rename all columns:

df_col_rename(X,['a', 'b', 'c'], ['x', 'y', 'z'])

To rename only some columns:

df_col_rename(X,['a', 'b', 'c'], ['a', 'y', 'z'])

Answer 11

For a single column rename, you can still use toDF(). For example,

df1.selectExpr("SALARY*2").toDF("REVISED_SALARY").show()

Answer 12

We can use various approaches to rename the column name.

First, let's create a simple DataFrame.

df = spark.createDataFrame([("x", 1), ("y", 2)],
                           ["col_1", "col_2"])

Now let's try to rename col_1 to col_3. Below are a few approaches to do the same.

# Approach 1: using the withColumnRenamed function.
df.withColumnRenamed("col_1", "col_3").show()

# Approach 2: using the alias function.
df.select(df["col_1"].alias("col_3"), "col_2").show()

# Approach 3: using the selectExpr function.
df.selectExpr("col_1 as col_3", "col_2").show()

# Approach 4: using the toDF function to rename all columns.
# You need to pass the names of all columns present in the DataFrame.
df.toDF("col_3", "col_2").show()

Here is the output.

+-----+-----+
|col_3|col_2|
+-----+-----+
|    x|    1|
|    y|    2|
+-----+-----+

I hope this helps.

