Question: Filter a PySpark DataFrame column with None values

I’m trying to filter a PySpark dataframe that has None as a row value:

df.select('dt_mvmt').distinct().collect()

[Row(dt_mvmt=u'2016-03-27'),
 Row(dt_mvmt=u'2016-03-28'),
 Row(dt_mvmt=u'2016-03-29'),
 Row(dt_mvmt=None),
 Row(dt_mvmt=u'2016-03-30'),
 Row(dt_mvmt=u'2016-03-31')]

and I can filter correctly with a string value:

df[df.dt_mvmt == '2016-03-31']
# some results here

but this fails:

df[df.dt_mvmt == None].count()
0
df[df.dt_mvmt != None].count()
0

But there are definitely values in each category. What's going on?


Answer 0

You can use Column.isNull / Column.isNotNull:

df.where(col("dt_mvmt").isNull())

df.where(col("dt_mvmt").isNotNull())

If you want to simply drop NULL values, you can use na.drop with the subset argument:

df.na.drop(subset=["dt_mvmt"])

Equality-based comparisons with NULL won't work because in SQL NULL is undefined, so any attempt to compare it with another value returns NULL:

sqlContext.sql("SELECT NULL = NULL").show()
## +-------------+
## |(NULL = NULL)|
## +-------------+
## |         null|
## +-------------+


sqlContext.sql("SELECT NULL != NULL").show()
## +-------------------+
## |(NOT (NULL = NULL))|
## +-------------------+
## |               null|
## +-------------------+

The only valid way to compare a value with NULL is IS / IS NOT, which are equivalent to the isNull / isNotNull method calls.
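
For example, a minimal sketch (assuming the df and column from the question) of the SQL-expression form, which filter/where also accept and which behaves the same as the isNull / isNotNull calls above:

# IS NULL / IS NOT NULL in a SQL predicate string are equivalent to
# the isNull() / isNotNull() Column methods
df.filter("dt_mvmt IS NULL").count()
df.filter("dt_mvmt IS NOT NULL").count()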


Answer 1

Try just using the isNotNull function.

df.filter(df.dt_mvmt.isNotNull()).count()

Answer 2

To obtain entries whose values in the dt_mvmt column are not null, we have

df.filter("dt_mvmt is not NULL")

and for entries that are null, we have

df.filter("dt_mvmt is NULL")

Answer 3

If you want to stick with the Pandas syntax, this worked for me.

df = df[df.dt_mvmt.isNotNull()]

Answer 4

If the column contains None values, e.g.:

COLUMN_OLD_VALUE
----------------
None
1
None
100
20
------------------

Create a temp table on the data frame and use:

sqlContext.sql("select * from tempTable where column_old_value='None' ").show()

So use: column_old_value = 'None'
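
A minimal sketch of the temp-table step this answer assumes (registerTempTable is the Spark 1.x API; in Spark 2.x+ the equivalent is createOrReplaceTempView). Note this query only matches rows where the column holds the literal string 'None', not SQL NULL:

# Register the DataFrame under the name used in the query above (assumed step)
df.registerTempTable("tempTable")
sqlContext.sql("select * from tempTable where column_old_value = 'None'").show()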


Answer 5

There are multiple ways you can remove/filter the null values from a column in DataFrame.

Let's create a simple DataFrame with the code below:

from pyspark.sql.types import StringType
from pyspark.sql.functions import col

date = ['2016-03-27','2016-03-28','2016-03-29', None, '2016-03-30','2016-03-31']
df = spark.createDataFrame(date, StringType())

Now you can try one of the approaches below to filter out the null values.

# Approach - 1
df.filter("value is not null").show()

# Approach - 2
df.filter(col("value").isNotNull()).show()

# Approach - 3
df.filter(df["value"].isNotNull()).show()

# Approach - 4
df.filter(df.value.isNotNull()).show()

# Approach - 5
df.na.drop(subset=["value"]).show()

# Approach - 6
df.dropna(subset=["value"]).show()

# Note: You can also use the where function instead of filter.

You can also check the section “Working with NULL Values” on my blog for more information.

I hope it helps.


Answer 6

PySpark provides various filtering options based on arithmetic, logical, and other conditions. The presence of NULL values can hamper further processing. You can either remove them or impute them statistically.

The following code can be considered:

# Dataset is df
# Column name is dt_mvmt
# Before filtering make sure you have the right count of the dataset
df.count() # Some number

# Filter here
df = df.filter(df.dt_mvmt.isNotNull())

# Check the count again to confirm whether NULL values were present
# (this is important when dealing with a large dataset)
df.count() # The count should be reduced if NULL values were present
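
A minimal sketch of the imputation alternative mentioned above, using a hypothetical numeric column named amount; na.fill replaces NULLs with a supplied value instead of dropping the rows:

from pyspark.sql import functions as F

# Hypothetical example: replace NULLs in a numeric "amount" column with its mean
mean_amount = df.select(F.mean("amount")).first()[0]
df = df.na.fill({"amount": mean_amount})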

Answer 7

I would also try:

df = df.dropna(subset=["dt_mvmt"])


Answer 8

If you want to filter out records having a None value in a column, then see the example below:

df=spark.createDataFrame([[123,"abc"],[234,"fre"],[345,None]],["a","b"])

Now filter out the null value records:

df=df.filter(df.b.isNotNull())

df.show()

If you want to remove those records from the DF, then see below:

df1=df.na.drop(subset=['b'])

df1.show()

Answer 9

None/Null is an object of the class NoneType in PySpark/Python, so the following will not work, because you are trying to compare a NoneType object with a string object:

Wrong way of filtering:

df[df.dt_mvmt == None].count()   # 0
df[df.dt_mvmt != None].count()   # 0

Correct way:

df = df.where(col("dt_mvmt").isNotNull())  # returns all records where dt_mvmt is not None/Null
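
A minimal sketch illustrating the point, assuming the df from the question: comparing a Column with None builds a SQL NULL predicate that is never true, which is why both the == and != filters matched zero rows:

from pyspark.sql.functions import col

# The comparison itself evaluates to NULL for every row, so the filter keeps nothing
df.select((col("dt_mvmt") == None).alias("eq_none")).show()

# The NULL-aware methods are the reliable way to test for missing values
df.where(col("dt_mvmt").isNull()).count()
df.where(col("dt_mvmt").isNotNull()).count()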

