Tag Archives: pandas

Sort a pandas DataFrame by date

Question: Sort a pandas DataFrame by date


I have a pandas dataframe as follows:

Symbol  Date
A       02/20/2015
A       01/15/2016
A       08/21/2015

I want to sort it by Date, but the column is just an object.

I tried to make the column a date object, but I ran into an issue where that format is not the format needed. The format needed is 2015-02-20, etc.

So now I’m trying to figure out how to have numpy convert the ‘American’ dates into the ISO standard, so that I can make them date objects, so that I can sort by them.

How would I convert these American dates into the ISO standard, or is there a more straightforward method I'm missing within pandas?


Answer 0


You can use pd.to_datetime() to convert to a datetime object. It takes a format parameter, but in your case I don’t think you need it.

>>> import pandas as pd
>>> df = pd.DataFrame( {'Symbol':['A','A','A'] ,
    'Date':['02/20/2015','01/15/2016','08/21/2015']})
>>> df
         Date Symbol
0  02/20/2015      A
1  01/15/2016      A
2  08/21/2015      A
>>> df['Date'] =pd.to_datetime(df.Date)
>>> df.sort('Date') # This now sorts in date order
        Date Symbol
0 2015-02-20      A
2 2015-08-21      A
1 2016-01-15      A

For future search, you can change the sort statement:

>>> df.sort_values(by='Date') # This now sorts in date order
        Date Symbol
0 2015-02-20      A
2 2015-08-21      A
1 2016-01-15      A
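
If the automatic parsing ever guesses wrong for these US-style dates, pd.to_datetime also accepts an explicit format string. A minimal sketch reusing the column names from the question (not part of the original answer):

import pandas as pd

df = pd.DataFrame({'Symbol': ['A', 'A', 'A'],
                   'Date': ['02/20/2015', '01/15/2016', '08/21/2015']})

# Parse MM/DD/YYYY explicitly, then sort with the non-deprecated method.
df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%Y')
df = df.sort_values(by='Date')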

Answer 1


The sort method has been deprecated and replaced with sort_values. After converting to a datetime object using df['Date'] = pd.to_datetime(df['Date']), sort with:

df.sort_values(by=['Date'])

Note: to sort in-place and/or in a descending order (the most recent first):

df.sort_values(by=['Date'], inplace=True, ascending=False)

Answer 2


@JAB’s answer is fast and concise. But it changes the DataFrame you are trying to sort, which you may or may not want.

(Note: You almost certainly will want it, because your date columns should be dates, not strings!)

In the unlikely event that you don’t want to change the dates into dates, you can also do it a different way.

First, get the index from your sorted Date column:

In [25]: pd.to_datetime(df.Date).order().index
Out[25]: Int64Index([0, 2, 1], dtype='int64')

Then use it to index your original DataFrame, leaving it untouched:

In [26]: df.ix[pd.to_datetime(df.Date).order().index]
Out[26]: 
        Date Symbol
0 2015-02-20      A
2 2015-08-21      A
1 2016-01-15      A

Magic!

Note: for Pandas versions 0.20.0 and later, use loc instead of ix, which is now deprecated.
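
Note also that Series.order() was removed in later pandas versions; Series.sort_values() is the replacement, so the same trick on a recent pandas might look like this (a sketch, not part of the original answer):

df.loc[pd.to_datetime(df.Date).sort_values().index]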


Answer 3


The data containing the date column can be read by using the below code:

data = pd.read_csv(file_path, parse_dates=[date_column])

Once the data is read by using the above line of code, the column containing the information about the date can be converted with pd.to_datetime() like:

pd.to_datetime(data[date_column], format='%d/%m/%y')

to change the format of the date as per the requirement.


Convert python pandas DataFrame columns to dict keys and values

Question: Convert python pandas DataFrame columns to dict keys and values


I have a pandas data frame with multiple columns and I would like to construct a dict from two columns: one as the dict’s keys and the other as the dict’s values. How can I do that?

Dataframe:

           area  count
co tp
DE Lake      10      7
Forest       20      5
FR Lake      30      2
Forest       40      3

I need to define area as key, count as value in dict. Thank you in advance.


Answer 0


If lakes is your DataFrame, you can do something like

area_dict = dict(zip(lakes.area, lakes['count']))

(bracket indexing is used for the count column because DataFrame.count is also a method name, so attribute access would return the method instead of the column)
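
With the frame from the question, this yields a plain dict (a small self-contained sketch; the multi-index is omitted for brevity):

import pandas as pd

lakes = pd.DataFrame({'area': [10, 20, 30, 40], 'count': [7, 5, 2, 3]})
area_dict = dict(zip(lakes.area, lakes['count']))
print(area_dict)  # {10: 7, 20: 5, 30: 2, 40: 3}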

Answer 1


With pandas it can be done as:

If lakes is your DataFrame:

area_dict = lakes.to_dict('records')
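
Note that to_dict('records') returns a list with one dict per row rather than a single key-to-value mapping, so some post-processing is still needed for the asked-for dict. A sketch of what it returns (illustration only, not from the original answer):

import pandas as pd

lakes = pd.DataFrame({'area': [10, 20], 'count': [7, 5]})
print(lakes.to_dict('records'))
# [{'area': 10, 'count': 7}, {'area': 20, 'count': 5}]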

Answer 2


You can also do this if you want to play around with pandas. However, I like punchagan’s way.

# replicating your dataframe
lake = pd.DataFrame({'co tp': ['DE Lake', 'Forest', 'FR Lake', 'Forest'], 
                 'area': [10, 20, 30, 40], 
                 'count': [7, 5, 2, 3]})
lake.set_index('co tp', inplace=True)

# to get key value using pandas
area_dict = lake.set_index('area').T.to_dict('records')[0]
print(area_dict)

output: {10: 7, 20: 5, 30: 2, 40: 3}

python - pandas and databases like MySQL

Question: python - pandas and databases like MySQL


The documentation for Pandas has numerous examples of best practices for working with data stored in various formats.

However, I am unable to find any good examples for working with databases like MySQL for example.

Can anyone point me to links or give some code snippets of how to convert query results using mysql-python to data frames in Pandas efficiently ?


Answer 0


As Wes says, io/sql’s read_sql will do it, once you’ve gotten a database connection using a DBI compatible library. We can look at two short examples using the MySQLdb and cx_Oracle libraries to connect to Oracle and MySQL and query their data dictionaries. Here is the example for cx_Oracle:

import pandas as pd
import cx_Oracle

ora_conn = cx_Oracle.connect('your_connection_string')
df_ora = pd.read_sql('select * from user_objects', con=ora_conn)    
print 'loaded dataframe from Oracle. # Records: ', len(df_ora)
ora_conn.close()

And here is the equivalent example for MySQLdb:

import MySQLdb
mysql_cn= MySQLdb.connect(host='myhost', 
                port=3306,user='myusername', passwd='mypassword', 
                db='information_schema')
df_mysql = pd.read_sql('select * from VIEWS;', con=mysql_cn)    
print 'loaded dataframe from MySQL. records:', len(df_mysql)
mysql_cn.close()

Answer 1


For recent readers of this question: pandas has the following warning in its docs for version 0.14.0:

Warning: Some of the existing functions or function aliases have been deprecated and will be removed in future versions. This includes: tquery, uquery, read_frame, frame_query, write_frame.

And:

Warning: The support for the ‘mysql’ flavor when using DBAPI connection objects has been deprecated. MySQL will be further supported with SQLAlchemy engines (GH6900).

This makes many of the answers here outdated. You should use sqlalchemy:

from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('dialect://user:pass@host:port/schema', echo=False)
f = pd.read_sql_query('SELECT * FROM mytable', engine, index_col = 'ID')
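
For MySQL specifically, the dialect string could look like the following sketch (host, credentials and schema are placeholders, and a DBAPI driver such as pymysql has to be installed):

from sqlalchemy import create_engine
import pandas as pd

# 'mysql+pymysql' selects the pymysql driver; replace the placeholders with real connection details.
engine = create_engine('mysql+pymysql://myusername:mypassword@myhost:3306/information_schema', echo=False)
df_mysql = pd.read_sql_query('SELECT * FROM VIEWS', engine)
print('loaded dataframe from MySQL. records:', len(df_mysql))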

Answer 2


For the record, here is an example using a sqlite database:

import pandas as pd
import sqlite3

with sqlite3.connect("whatever.sqlite") as con:
    sql = "SELECT * FROM table_name"
    df = pd.read_sql_query(sql, con)
    print df.shape

Answer 3


I prefer to create queries with SQLAlchemy, and then make a DataFrame from it. SQLAlchemy makes it easier to combine SQL conditions Pythonically if you intend to mix and match things over and over.

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from pandas import DataFrame
import datetime

# We are connecting to an existing service
engine = create_engine('dialect://user:pwd@host:port/db', echo=False)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base()

# And we want to query an existing table
tablename = Table('tablename', 
    Base.metadata, 
    autoload=True, 
    autoload_with=engine, 
    schema='ownername')

# These are the "Where" parameters, but I could as easily 
# create joins and limit results
us = tablename.c.country_code.in_(['US','MX'])
dc = tablename.c.locn_name.like('%DC%')
dt = tablename.c.arr_date >= datetime.date.today() # Give me convenience or...

q = session.query(tablename).\
            filter(us & dc & dt) # That's where the magic happens!!!

def querydb(query):
    """
    Function to execute query and return DataFrame.
    """
    df = DataFrame(query.all());
    df.columns = [x['name'] for x in query.column_descriptions]
    return df

querydb(q)

Answer 4


MySQL example:

import MySQLdb as db
from pandas import DataFrame
from pandas.io.sql import frame_query

database = db.connect('localhost','username','password','database')
data     = frame_query("SELECT * FROM data", database)

Answer 5


The same syntax also works for MS SQL Server using pyodbc.

import pyodbc
import pandas.io.sql as psql

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=servername;DATABASE=mydb;UID=username;PWD=password') 
cursor = cnxn.cursor()
sql = ("""select * from mytable""")

df = psql.frame_query(sql, cnxn)
cnxn.close()

Answer 6


And this is how you connect to PostgreSQL using psycopg2 driver (install with “apt-get install python-psycopg2” if you’re on Debian Linux derivative OS).

import pandas.io.sql as psql
import psycopg2

conn = psycopg2.connect("dbname='datawarehouse' user='user1' host='localhost' password='uberdba'")

q = """select month_idx, sum(payment) from bi_some_table"""

df3 = psql.frame_query(q, conn)

Answer 7


For Sybase the following works (with http://python-sybase.sourceforge.net)

import pandas.io.sql as psql
import Sybase

df = psql.frame_query("<Query>", con=Sybase.connect("<dsn>", "<user>", "<pwd>"))

Answer 8


pandas.io.sql.frame_query is deprecated. Use pandas.read_sql instead.


Answer 9


import the module

import pandas as pd
import oursql

connect

conn=oursql.connect(host="localhost",user="me",passwd="mypassword",db="classicmodels")
sql="Select customerName, city,country from customers order by customerName,country,city"
df_mysql = pd.read_sql(sql,conn)
print df_mysql

That works just fine, and using pandas.io.sql.frame_query also works (with the deprecation warning). The database used is the sample database from the MySQL tutorial.


Answer 10


This should work just fine.

import MySQLdb as mdb
import pandas as pd
con = mdb.connect('127.0.0.1', 'root', 'password', 'database_name');
with con:
 cur = con.cursor()
 cur.execute("select random_number_one, random_number_two, random_number_three from randomness.a_random_table")
 rows = cur.fetchall()
 df = pd.DataFrame( [[ij for ij in i] for i in rows] )
 df.rename(columns={0: 'Random Number One', 1: 'Random Number Two', 2: 'Random Number Three'}, inplace=True);
 print(df.head(20))

Answer 11


This worked for me for connecting to AWS MySQL (RDS) from a Python 3.x based Lambda function and loading the result into a pandas DataFrame:

import json
import boto3
import pymysql
import pandas as pd
user = 'username'
password = 'XXXXXXX'
client = boto3.client('rds')
def lambda_handler(event, context):
    conn = pymysql.connect(host='xxx.xxxxus-west-2.rds.amazonaws.com', port=3306, user=user, passwd=password, db='database name', connect_timeout=5)
    df= pd.read_sql('select * from TableName limit 10',con=conn)
    print(df)
    # TODO implement
    #return {
    #    'statusCode': 200,
    #    'df': df
    #}

Answer 12


For Postgres users

import psycopg2
import pandas as pd

conn = psycopg2.connect("database='datawarehouse' user='user1' host='localhost' password='uberdba'")

customers = 'select * from customers'

customers_df = pd.read_sql(customers,conn)

customers_df

Frequency table for a single variable

Question: Frequency table for a single variable


One last newbie pandas question for the day: How do I generate a table for a single Series?

For example:

my_series = pandas.Series([1,2,2,3,3,3])
pandas.magical_frequency_function( my_series )

>> {
     1 : 1,
     2 : 2, 
     3 : 3
   }

Lots of googling has led me to Series.describe() and pandas.crosstabs, but neither of these does quite what I need: one variable, counts by categories. Oh, and it’d be nice if it worked for different data types: strings, ints, etc.


Answer 0


Maybe .value_counts()?

>>> import pandas
>>> my_series = pandas.Series([1,2,2,3,3,3, "fred", 1.8, 1.8])
>>> my_series
0       1
1       2
2       2
3       3
4       3
5       3
6    fred
7     1.8
8     1.8
>>> counts = my_series.value_counts()
>>> counts
3       3
2       2
1.8     2
fred    1
1       1
>>> len(counts)
5
>>> sum(counts)
9
>>> counts["fred"]
1
>>> dict(counts)
{1.8: 2, 2: 2, 3: 3, 1: 1, 'fred': 1}

Answer 1


You can use list comprehension on a dataframe to count frequencies of the columns as such

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]

Breakdown:

my_series.select_dtypes(include=['O']) 

Selects just the categorical data

list(my_series.select_dtypes(include=['O']).columns) 

Turns the columns from above into a list

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)] 

Iterates through the list above and applies value_counts() to each of the columns


Answer 2


The answer provided by @DSM is simple and straightforward, but I thought I’d add my own input to this question. If you look at the code for pandas.value_counts, you’ll see that there is a lot going on.

If you need to calculate the frequency of many series, this could take a while. A faster implementation would be to use numpy.unique with return_counts = True

Here is an example:

import pandas as pd
import numpy as np

my_series = pd.Series([1,2,2,3,3,3])

print(my_series.value_counts())
3    3
2    2
1    1
dtype: int64

Notice here that the item returned is a pandas.Series

In comparison, numpy.unique returns a tuple with two items, the unique values and the counts.

vals, counts = np.unique(my_series, return_counts=True)
print(vals, counts)
[1 2 3] [1 2 3]

You can then combine these into a dictionary:

results = dict(zip(vals, counts))
print(results)
{1: 1, 2: 2, 3: 3}

And then into a pandas.Series

print(pd.Series(results))
1    1
2    2
3    3
dtype: int64

Answer 3


For the frequency distribution of a variable with an excessive number of distinct values, you can collapse the values down into classes.

Here the employrate variable has an excessive number of distinct values, and a frequency distribution from a direct value_counts(normalize=True) is not very meaningful:

                country  employrate alcconsumption
0           Afghanistan   55.700001            .03
1               Albania   11.000000           7.29
2               Algeria   11.000000            .69
3               Andorra         nan          10.17
4                Angola   75.699997           5.57
..                  ...         ...            ...
208             Vietnam   71.000000           3.91
209  West Bank and Gaza   32.000000               
210         Yemen, Rep.   39.000000             .2
211              Zambia   61.000000           3.56
212            Zimbabwe   66.800003           4.96

[213 rows x 3 columns]

The frequency distribution from value_counts(normalize=True) with no classification has length 139 here (which seems meaningless as a frequency distribution):

print(gm["employrate"].value_counts(sort=False,normalize=True))

50.500000   0.005618
61.500000   0.016854
46.000000   0.011236
64.500000   0.005618
63.500000   0.005618

58.599998   0.005618
63.799999   0.011236
63.200001   0.005618
65.599998   0.005618
68.300003   0.005618
Name: employrate, Length: 139, dtype: float64

For the classification we map all values within a certain range onto one class label; as implemented in the code below, 0-20 becomes 1, 21-30 becomes 2, 31-40 becomes 3, and so forth up to 91-100 becoming 9:
gm["employrate"]=gm["employrate"].str.strip().dropna()  
gm["employrate"]=pd.to_numeric(gm["employrate"])
gm['employrate'] = np.where(
   (gm['employrate'] <=10) & (gm['employrate'] > 0) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=20) & (gm['employrate'] > 10) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=30) & (gm['employrate'] > 20) , 2, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=40) & (gm['employrate'] > 30) , 3, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=50) & (gm['employrate'] > 40) , 4, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=60) & (gm['employrate'] > 50) , 5, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=70) & (gm['employrate'] > 60) , 6, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=80) & (gm['employrate'] > 70) , 7, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=90) & (gm['employrate'] > 80) , 8, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=100) & (gm['employrate'] > 90) , 9, gm['employrate']
   )
print(gm["employrate"].value_counts(sort=False,normalize=True))

After classification we have a clear frequency distribution. Here we can easily see that 37.64% of countries have an employment rate between 51-60% and 11.79% of countries have an employment rate between 71-80%:

5.000000   0.376404
7.000000   0.117978
4.000000   0.179775
6.000000   0.264045
8.000000   0.033708
3.000000   0.028090
Name: employrate, dtype: float64
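
The chain of np.where calls above could arguably be written more compactly with pd.cut, which bins a numeric column in a single call. A sketch that reproduces the same class labels, applied to the numeric employrate column before the np.where chain (this was not part of the original answer):

import pandas as pd

# Bin edges chosen to match the np.where chain: (0, 20] -> 1, (20, 30] -> 2, ..., (90, 100] -> 9.
bins = [0, 20, 30, 40, 50, 60, 70, 80, 90, 100]
labels = [1, 2, 3, 4, 5, 6, 7, 8, 9]
gm['employrate'] = pd.cut(gm['employrate'], bins=bins, labels=labels)
print(gm['employrate'].value_counts(sort=False, normalize=True))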

How to set a cell to NaN in a pandas DataFrame

Question: How to set a cell to NaN in a pandas DataFrame


I’d like to replace bad values in a column of a dataframe by NaN’s.

mydata = {'x' : [10, 50, 18, 32, 47, 20], 'y' : ['12', '11', 'N/A', '13', '15', 'N/A']}
df = pd.DataFrame(mydata)

df[df.y == 'N/A']['y'] = np.nan

Though, the last line fails and throws a warning because it’s working on a copy of df. So, what’s the correct way to handle this? I’ve seen many solutions with iloc or ix but here, I need to use a boolean condition.


Answer 0

只需使用replace

In [106]:
df.replace('N/A',np.NaN)

Out[106]:
    x    y
0  10   12
1  50   11
2  18  NaN
3  32   13
4  47   15
5  20  NaN

您正在尝试的操作称为链索引:http : //pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy

您可以loc用来确保对原始dF进行操作:

In [108]:
df.loc[df['y'] == 'N/A','y'] = np.nan
df

Out[108]:
    x    y
0  10   12
1  50   11
2  18  NaN
3  32   13
4  47   15
5  20  NaN

just use replace:

In [106]:
df.replace('N/A',np.NaN)

Out[106]:
    x    y
0  10   12
1  50   11
2  18  NaN
3  32   13
4  47   15
5  20  NaN

What you’re trying is called chain indexing: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy

You can use loc to ensure you operate on the original df:

In [108]:
df.loc[df['y'] == 'N/A','y'] = np.nan
df

Out[108]:
    x    y
0  10   12
1  50   11
2  18  NaN
3  32   13
4  47   15
5  20  NaN

Answer 1


While using replace seems to solve the problem, I would like to propose an alternative. The problem with a mix of numeric and string values in a column is not just getting the strings replaced with np.nan, but making the whole column proper. I would bet that the original column is most likely of object type:

Name: y, dtype: object

What you really need is to make it a numeric column (it will have proper type and would be quite faster), with all non-numeric values replaced by NaN.

Thus, good conversion code would be

pd.to_numeric(df['y'], errors='coerce')

Specify errors='coerce' to force strings that can’t be parsed to a numeric value to become NaN. Column type would be

Name: y, dtype: float64
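
Applied to the frame from the question, that might look like this (a minimal sketch):

import pandas as pd

mydata = {'x': [10, 50, 18, 32, 47, 20], 'y': ['12', '11', 'N/A', '13', '15', 'N/A']}
df = pd.DataFrame(mydata)

# 'N/A' cannot be parsed as a number, so it becomes NaN and the column becomes float64.
df['y'] = pd.to_numeric(df['y'], errors='coerce')
print(df['y'].dtype)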

Answer 2


You can use replace:

df['y'] = df['y'].replace({'N/A': np.nan})

Also be aware of the inplace parameter for replace. You can do something like:

df.replace({'N/A': np.nan}, inplace=True)

This will replace all instances in the df without creating a copy.

Similarly, if you run into other types of unknown values such as empty string or None value:

df['y'] = df['y'].replace({'': np.nan})

df['y'] = df['y'].replace({None: np.nan})

Reference: Pandas Latest – Replace


Answer 3


df.loc[df.y == 'N/A',['y']] = np.nan

This solves your problem. With the double [], you are working on a copy of the DataFrame. You have to specify the exact location in one call to be able to modify it.


Answer 4


You can try these snippets.

In [16]:mydata = {'x' : [10, 50, 18, 32, 47, 20], 'y' : ['12', '11', 'N/A', '13', '15', 'N/A']}
In [17]:df=pd.DataFrame(mydata)

In [18]:df.y[df.y=="N/A"]=np.nan

Out[19]:df 
    x    y
0  10   12
1  50   11
2  18  NaN
3  32   13
4  47   15
5  20  NaN

Answer 5


As of pandas 1.0.0, you no longer need to use numpy to create null values in your dataframe. Instead you can just use pandas.NA (which is of type pandas._libs.missing.NAType), so it will be treated as null within the dataframe but will not be null outside dataframe context.
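
A minimal sketch of that, reusing the question's setup (assumes pandas >= 1.0):

import pandas as pd

mydata = {'x': [10, 50, 18, 32, 47, 20], 'y': ['12', '11', 'N/A', '13', '15', 'N/A']}
df = pd.DataFrame(mydata)

df.loc[df['y'] == 'N/A', 'y'] = pd.NA  # no numpy needed for the null value
print(df['y'].isna().sum())  # 2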


How do I import data from MongoDB into pandas?

Question: How do I import data from MongoDB into pandas?


I have a large amount of data in a collection in mongodb which I need to analyze. How do i import that data to pandas?

I am new to pandas and numpy.

EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype.

Sample Data:

{
"_cls" : "SensorReport",
"_id" : ObjectId("515a963b78f6a035d9fa531b"),
"_types" : [
    "SensorReport"
],
"Readings" : [
    {
        "a" : 0.958069536790466,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:26:35.297Z"),
        "b" : 6.296118156595,
        "_cls" : "Reading"
    },
    {
        "a" : 0.95574014778624,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:09.963Z"),
        "b" : 6.29651468650064,
        "_cls" : "Reading"
    },
    {
        "a" : 0.953648289182713,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:37.545Z"),
        "b" : 7.29679823731148,
        "_cls" : "Reading"
    },
    {
        "a" : 0.955931884300997,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:28:21.369Z"),
        "b" : 6.29642922525632,
        "_cls" : "Reading"
    },
    {
        "a" : 0.95821381,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:20.801Z"),
        "b" : 7.28956613,
        "_cls" : "Reading"
    },
    {
        "a" : 4.95821335,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:36.931Z"),
        "b" : 6.28956574,
        "_cls" : "Reading"
    },
    {
        "a" : 9.95821341,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:42:09.971Z"),
        "b" : 0.28956488,
        "_cls" : "Reading"
    },
    {
        "a" : 1.95667927,
        "_types" : [
            "Reading"
        ],
        "ReadingUpdatedDate" : ISODate("2013-04-02T08:43:55.463Z"),
        "b" : 0.29115237,
        "_cls" : "Reading"
    }
],
"latestReportTime" : ISODate("2013-04-02T08:43:55.463Z"),
"sensorName" : "56847890-0",
"reportCount" : 8
}

Answer 0


pymongo might give you a hand; the following is some code I'm using:

import pandas as pd
from pymongo import MongoClient


def _connect_mongo(host, port, username, password, db):
    """ A util for making a connection to mongo """

    if username and password:
        mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db)
        conn = MongoClient(mongo_uri)
    else:
        conn = MongoClient(host, port)


    return conn[db]


def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query)

    # Expand the cursor and construct the DataFrame
    df =  pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df
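
Hypothetical usage of the helper above, with placeholder database and collection names and a filter taken from the question's sample document:

# The db/collection names here are placeholders; adjust them to your deployment.
df = read_mongo('sensordb', 'SensorReport', query={'sensorName': '56847890-0'})
print(df.head())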

Answer 1


You can load your mongodb data to pandas DataFrame using this code. It works for me. Hopefully for you too.

import pymongo
import pandas as pd
from pymongo import MongoClient
client = MongoClient()
db = client.database_name
collection = db.collection_name
data = pd.DataFrame(list(collection.find()))

Answer 2


Monary does exactly that, and it’s super fast. (another link)

See this cool post which includes a quick tutorial and some timings.


Answer 3


As per PEP 20 (the Zen of Python), simple is better than complex:

import pandas as pd
df = pd.DataFrame.from_records(db.<database_name>.<collection_name>.find())

You can include conditions as you would working with regular mongoDB database or even use find_one() to get only one element from the database, etc.

and voila!
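
A sketch of what that could look like with a filter, assuming db is a pymongo MongoClient as implied above (the database and collection names are placeholders):

import pandas as pd
from pymongo import MongoClient

db = MongoClient()  # placeholder connection to localhost
df = pd.DataFrame.from_records(db.sensordb.SensorReport.find({'reportCount': {'$gt': 5}}))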


Answer 4

import pandas as pd
from odo import odo

data = odo('mongodb://localhost/db::collection', pd.DataFrame)

Answer 5


For dealing with out-of-core (not fitting into RAM) data efficiently (i.e. with parallel execution), you can try Python Blaze ecosystem: Blaze / Dask / Odo.

Blaze (and Odo) has out-of-the-box functions to deal with MongoDB.

A few useful articles to start off:

And an article which shows what amazing things are possible with Blaze stack: Analyzing 1.7 Billion Reddit Comments with Blaze and Impala (essentially, querying 975 Gb of Reddit comments in seconds).

P.S. I’m not affiliated with any of these technologies.


Answer 6


Another option I found very useful is:

from pandas.io.json import json_normalize

cursor = my_collection.find()
df = json_normalize(cursor)

this way you get the unfolding of nested mongodb documents for free.


Answer 7


Using

pandas.DataFrame(list(...))

will consume a lot of memory if the iterator/generator result is large

it is better to generate small chunks and concatenate them at the end:

def iterator2dataframes(iterator, chunk_size: int):
  """Turn an iterator into multiple small pandas.DataFrame

  This is a balance between memory and efficiency
  """
  records = []
  frames = []
  for i, record in enumerate(iterator):
    records.append(record)
    if i % chunk_size == chunk_size - 1:
      frames.append(pd.DataFrame(records))
      records = []
  if records:
    frames.append(pd.DataFrame(records))
  return pd.concat(frames)

Answer 8


http://docs.mongodb.org/manual/reference/mongoexport

export to csv and use read_csv or JSON and use DataFrame.from_records()


Answer 9


Following this great answer by waitingkuo, I would like to add the possibility of doing that using chunksize, in line with .read_sql() and .read_csv(). I enlarge the answer from Deu Leung by avoiding going one by one through each 'record' of the 'iterator'/'cursor'. I will borrow the previous read_mongo function.

def read_mongo(db,
               collection, query={},
               host='localhost', port=27017,
               username=None, password=None,
               chunksize=100, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    #db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)
    client = MongoClient(host=host, port=port)
    # Make a query to the specific DB and Collection
    db_aux = client[db]

    # Some variables to create the chunks
    skips_variable = range(0, db_aux[collection].find(query).count(), int(chunksize))
    if len(skips_variable) <= 1:
        skips_variable = [0, len(skips_variable)]

    # Iteration to create the dataframe in chunks.
    for i in range(1, len(skips_variable)):

        # Expand the cursor and construct the DataFrame
        #df_aux = pd.DataFrame(list(cursor_aux[skips_variable[i-1]:skips_variable[i]]))
        df_aux = pd.DataFrame(list(db_aux[collection].find(query)[skips_variable[i-1]:skips_variable[i]]))

        if no_id:
            del df_aux['_id']

        # Concatenate the chunks into a unique df
        if 'df' not in locals():
            df = df_aux
        else:
            df = pd.concat([df, df_aux], ignore_index=True)

    return df

Answer 10


A similar approach like Rafael Valero, waitingkuo and Deu Leung using pagination:

def read_mongo(
       # db, 
       collection, query=None, 
       # host='localhost', port=27017, username=None, password=None,
       chunksize = 100, page_num=1, no_id=True):

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Calculate number of documents to skip
    skips = chunksize * (page_num - 1)

    # Sorry, this is in spanish
    # https://www.toptal.com/python/c%C3%B3digo-buggy-python-los-10-errores-m%C3%A1s-comunes-que-cometen-los-desarrolladores-python/es
    if not query:
        query = {}

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query).skip(skips).limit(chunksize)

    # Expand the cursor and construct the DataFrame
    df =  pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df

Answer 11


You can achieve what you want with pdmongo in three lines:

import pdmongo as pdm
import pandas as pd
df = pdm.read_mongo("MyCollection", [], "mongodb://localhost:27017/mydb")

If your data is very large, you can do an aggregate query first by filtering data you do not want, then map them to your desired columns.

Here is an example of mapping Readings.a to column a and filtering by reportCount column:

import pdmongo as pdm
import pandas as pd
df = pdm.read_mongo("MyCollection", [{'$match': {'reportCount': {'$gt': 6}}}, {'$unwind': '$Readings'}, {'$project': {'a': '$Readings.a'}}], "mongodb://localhost:27017/mydb")

read_mongo accepts the same arguments as pymongo aggregate


Import a pandas DataFrame column as string instead of int

Question: Import a pandas DataFrame column as string instead of int


I would like to import the following csv as strings not as int64. Pandas read_csv automatically converts it to int64, but I need this column as string.

ID
00013007854817840016671868
00013007854817840016749251
00013007854817840016754630
00013007854817840016781876
00013007854817840017028824
00013007854817840017963235
00013007854817840018860166


df = read_csv('sample.csv')

df.ID
>>

0   -9223372036854775808
1   -9223372036854775808
2   -9223372036854775808
3   -9223372036854775808
4   -9223372036854775808
5   -9223372036854775808
6   -9223372036854775808
Name: ID

Unfortunately using converters gives the same result.

df = read_csv('sample.csv', converters={'ID': str})
df.ID
>>

0   -9223372036854775808
1   -9223372036854775808
2   -9223372036854775808
3   -9223372036854775808
4   -9223372036854775808
5   -9223372036854775808
6   -9223372036854775808
Name: ID

Answer 0


Just want to reiterate this will work in pandas >= 0.9.1:

In [2]: read_csv('sample.csv', dtype={'ID': object})
Out[2]: 
                           ID
0  00013007854817840016671868
1  00013007854817840016749251
2  00013007854817840016754630
3  00013007854817840016781876
4  00013007854817840017028824
5  00013007854817840017963235
6  00013007854817840018860166

I’m creating an issue about detecting integer overflows also.

EDIT: See resolution here: https://github.com/pydata/pandas/issues/2247


Answer 1


This probably isn’t the most elegant way to do it, but it gets the job done.

In[1]: import numpy as np

In[2]: import pandas as pd

In[3]: df = pd.DataFrame(np.genfromtxt('/Users/spencerlyon2/Desktop/test.csv', dtype=str)[1:], columns=['ID'])

In[4]: df
Out[4]: 
                       ID
0  00013007854817840016671868
1  00013007854817840016749251
2  00013007854817840016754630
3  00013007854817840016781876
4  00013007854817840017028824
5  00013007854817840017963235
6  00013007854817840018860166

Just replace '/Users/spencerlyon2/Desktop/test.csv' with the path to your file


Answer 2


Since pandas 1.0 it became much more straightforward. This will read column ‘ID’ as dtype ‘string’:

pd.read_csv('sample.csv',dtype={'ID':'string'})

As we can see in this Getting started guide, the 'string' dtype has been introduced (before that, strings were treated as dtype 'object').


pandas read_csv and filtering columns with usecols

Question: pandas read_csv and filtering columns with usecols


I have a csv file which isn’t coming in correctly with pandas.read_csv when I filter the columns with usecols and use multiple indexes.

import pandas as pd
csv = r"""dummy,date,loc,x
   bar,20090101,a,1
   bar,20090102,a,3
   bar,20090103,a,5
   bar,20090101,b,1
   bar,20090102,b,3
   bar,20090103,b,5"""

f = open('foo.csv', 'w')
f.write(csv)
f.close()

df1 = pd.read_csv('foo.csv',
        header=0,
        names=["dummy", "date", "loc", "x"], 
        index_col=["date", "loc"], 
        usecols=["dummy", "date", "loc", "x"],
        parse_dates=["date"])
print df1

# Ignore the dummy columns
df2 = pd.read_csv('foo.csv', 
        index_col=["date", "loc"], 
        usecols=["date", "loc", "x"], # <----------- Changed
        parse_dates=["date"],
        header=0,
        names=["dummy", "date", "loc", "x"])
print df2

I expect that df1 and df2 should be the same except for the missing dummy column, but the columns come in mislabeled. Also the date is getting parsed as a date.

In [118]: %run test.py
               dummy  x
date       loc
2009-01-01 a     bar  1
2009-01-02 a     bar  3
2009-01-03 a     bar  5
2009-01-01 b     bar  1
2009-01-02 b     bar  3
2009-01-03 b     bar  5
              date
date loc
a    1    20090101
     3    20090102
     5    20090103
b    1    20090101
     3    20090102
     5    20090103

Using column numbers instead of names gives me the same problem. I can work around the issue by dropping the dummy column after the read_csv step, but I'm trying to understand what is going wrong. I'm using pandas 0.10.1.

edit: fixed bad header usage.


Answer 0


The answer by @chip completely misses the point of two keyword arguments.

  • names is only necessary when there is no header and you want to specify other arguments using column names rather than integer indices.
  • usecols is supposed to provide a filter before reading the whole DataFrame into memory; if used properly, there should never be a need to delete columns after reading.

This solution corrects those oddities:

import pandas as pd
from StringIO import StringIO

csv = r"""dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5"""

df = pd.read_csv(StringIO(csv),
        header=0,
        index_col=["date", "loc"], 
        usecols=["date", "loc", "x"],
        parse_dates=["date"])

Which gives us:

                x
date       loc
2009-01-01 a    1
2009-01-02 a    3
2009-01-03 a    5
2009-01-01 b    1
2009-01-02 b    3
2009-01-03 b    5

回答 1

这段代码可以实现你想要的效果,不过它的行为很古怪,而且几乎可以肯定依赖于一个 bug:

我观察到它在以下情况下有效:

a) index_col 要按照你实际使用的列数来指定,因此本例中是三列而不是四列(先去掉 dummy,再从剩下的列开始计数)

b) parse_dates 同理

c) usecols 则不是这样,原因很明显 ;)

d) 这里我相应地调整了 names 来反映这一行为

import pandas as pd
from StringIO import StringIO

csv = """dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5
"""

df = pd.read_csv(StringIO(csv),
        index_col=[0,1],
        usecols=[1,2,3], 
        parse_dates=[0],
        header=0,
        names=["date", "loc", "", "x"])

print df

哪个打印

                x
date       loc   
2009-01-01 a    1
2009-01-02 a    3
2009-01-03 a    5
2009-01-01 b    1
2009-01-02 b    3
2009-01-03 b    5

This code achieves what you want, although it is weird and almost certainly relies on a bug:

I observed that it works when:

a) you specify index_col relative to the number of columns you really use, so it’s three columns in this example, not four (you drop dummy and start counting from then onwards)

b) same for parse_dates

c) not so for usecols ;) for obvious reasons

d) here I adapted the names to mirror this behaviour

import pandas as pd
from StringIO import StringIO

csv = """dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5
"""

df = pd.read_csv(StringIO(csv),
        index_col=[0,1],
        usecols=[1,2,3], 
        parse_dates=[0],
        header=0,
        names=["date", "loc", "", "x"])

print df

which prints

                x
date       loc   
2009-01-01 a    1
2009-01-02 a    3
2009-01-03 a    5
2009-01-01 b    1
2009-01-02 b    3
2009-01-03 b    5

回答 2

如果您的csv文件包含额外的数据,则可以在导入后从DataFrame中删除列。

import pandas as pd
from StringIO import StringIO

csv = r"""dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5"""

df = pd.read_csv(StringIO(csv),
        index_col=["date", "loc"], 
        usecols=["dummy", "date", "loc", "x"],
        parse_dates=["date"],
        header=0,
        names=["dummy", "date", "loc", "x"])
del df['dummy']

这给了我们:

                x
date       loc
2009-01-01 a    1
2009-01-02 a    3
2009-01-03 a    5
2009-01-01 b    1
2009-01-02 b    3
2009-01-03 b    5

If your csv file contains extra data, columns can be deleted from the DataFrame after import.

import pandas as pd
from StringIO import StringIO

csv = r"""dummy,date,loc,x
bar,20090101,a,1
bar,20090102,a,3
bar,20090103,a,5
bar,20090101,b,1
bar,20090102,b,3
bar,20090103,b,5"""

df = pd.read_csv(StringIO(csv),
        index_col=["date", "loc"], 
        usecols=["dummy", "date", "loc", "x"],
        parse_dates=["date"],
        header=0,
        names=["dummy", "date", "loc", "x"])
del df['dummy']

Which gives us:

                x
date       loc
2009-01-01 a    1
2009-01-02 a    3
2009-01-03 a    5
2009-01-01 b    1
2009-01-02 b    3
2009-01-03 b    5

回答 3

您只需添加index_col=False参数

df1 = pd.read_csv('foo.csv',
     header=0,
     index_col=False,
     names=["dummy", "date", "loc", "x"],
     usecols=["dummy", "date", "loc", "x"],
     parse_dates=["date"])
print df1

You have to just add the index_col=False parameter

df1 = pd.read_csv('foo.csv',
     header=0,
     index_col=False,
     names=["dummy", "date", "loc", "x"],
     usecols=["dummy", "date", "loc", "x"],
     parse_dates=["date"])
print df1

回答 4

先导入 csv 模块,然后使用 csv.DictReader,这样处理起来很容易…

Import csv first and use csv.DictReader; it’s easy to process that way…
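
A rough sketch of that suggestion, using the same foo.csv written above (the column selection and the strip()/int() calls are my own illustration, not part of the original answer):

import csv
import pandas as pd

rows = []
with open('foo.csv') as f:
    for row in csv.DictReader(f):
        # keep only the columns of interest, ignoring 'dummy'
        rows.append({'date': row['date'].strip(),
                     'loc': row['loc'].strip(),
                     'x': int(row['x'])})

df = pd.DataFrame(rows).set_index(['date', 'loc'])
print(df)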


CSV导入熊猫时跳过行

问题:CSV导入熊猫时跳过行

我正在尝试用 pandas.read_csv() 导入一个 .csv 文件,但我不想导入数据文件的第二行(按 0 开始计数,即 index = 1 的那一行)。

我看不到如何不导入它,因为与命令一起使用的参数似乎模棱两可:

从熊猫网站:

skiprows :类列表或整数

文件开头要跳过的行号(索引为0)或要跳过的行数(整数)。”

如果输入skiprows=1参数,它如何知道是跳过第一行还是跳过索引为1的行?

I’m trying to import a .csv file using pandas.read_csv(), however I don’t want to import the 2nd row of the data file (the row with index = 1 for 0-indexing).

I can’t see how not to import it because the arguments used with the command seem ambiguous:

From the pandas website:

skiprows : list-like or integer

Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file.”

If I put skiprows=1 in the arguments, how does it know whether to skip the first row or skip the row with index 1?


回答 0

您可以尝试:

>>> import pandas as pd
>>> from StringIO import StringIO
>>> s = """1, 2
... 3, 4
... 5, 6"""
>>> pd.read_csv(StringIO(s), skiprows=[1], header=None)
   0  1
0  1  2
1  5  6
>>> pd.read_csv(StringIO(s), skiprows=1, header=None)
   0  1
0  3  4
1  5  6

You can try yourself:

>>> import pandas as pd
>>> from StringIO import StringIO
>>> s = """1, 2
... 3, 4
... 5, 6"""
>>> pd.read_csv(StringIO(s), skiprows=[1], header=None)
   0  1
0  1  2
1  5  6
>>> pd.read_csv(StringIO(s), skiprows=1, header=None)
   0  1
0  3  4
1  5  6

回答 1

我还没有足够的声望来评论,但我想对 alko 的回答作一点补充,以供进一步参考。

文档

skiprows:文件中要跳过的行的数字集合。也可以是整数以跳过前n行

I don’t have reputation to comment yet, but I want to add to alko answer for further reference.

From the docs:

skiprows: A collection of numbers for rows in the file to skip. Can also be an integer to skip the first n rows
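
A small sketch of the difference between the two forms, using a made-up csv with a header row (io.StringIO is the Python 3 equivalent of the StringIO import used elsewhere in these answers):

import pandas as pd
from io import StringIO

s = """col_a,col_b
1,2
3,4
5,6"""

# list form: skip only the line at index 1, keeping the header line
print(pd.read_csv(StringIO(s), skiprows=[1]))

# integer form: skip the first line, so '1,2' becomes the header instead
print(pd.read_csv(StringIO(s), skiprows=1))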


回答 2

我在读取 csv 文件时使用 skiprows 也遇到了同样的问题。我当时写的是 skip_rows=1,这是行不通的。

下面这个简单示例说明了读取 csv 文件时如何使用 skiprows。

import pandas as pd

#skiprows=1 will skip first line and try to read from second line
df = pd.read_csv('my_csv_file.csv', skiprows=1)  ## pandas as pd

#print the data frame
df

I got the same issue with skiprows while reading a csv file. I was doing skip_rows=1, and this will not work.

A simple example gives an idea of how to use skiprows while reading a csv file.

import pandas as pd

#skiprows=1 will skip first line and try to read from second line
df = pd.read_csv('my_csv_file.csv', skiprows=1)  ## pandas as pd

#print the data frame
df

回答 3

所有这些答案都遗漏了一个重要点-第n行是文件中的第n行,而不是数据集中的第n行。我遇到从USGS下载一些过时的流量表数据的情况。数据集的开头用“#”注释,其后的第一行是标签,下一行是描述日期类型的行,最后是数据本身。我不知道有多少条注释行,但是我知道前几行是什么。例:

----------------------------- 警告 ----------------------------------

您从此美国地质调查局数据库中获得的一些数据

可能尚未获得董事的批准。… agency_cd site_no datetime tz_cd 139719_00065 139719_00065_cd

5s 15s 20d 6s 14n 10s USGS 08041780 2018-05-06 00:00 CDT 1.98 A

如果有一种方法可以自动跳过文件中的第 n 行以及数据集中的第 n 行,那就太好了。

作为说明,我能够通过以下方式解决问题:

import pandas as pd
ds = pd.read_csv(fname, comment='#', sep='\t', header=0, parse_dates=True)
ds.drop(0, inplace=True)

All of these answers miss one important point — the n’th line is the n’th line in the file, and not the n’th row in the dataset. I have a situation where I download some antiquated stream gauge data from the USGS. The head of the dataset is commented with ‘#’, the first line after that are the labels, next comes a line that describes the date types, and last the data itself. I never know how many comment lines there are, but I know what the first couple of rows are. Example:

----------------------------- WARNING ----------------------------------

Some of the data that you have obtained from this U.S. Geological Survey database

may not have received Director’s approval. … agency_cd site_no datetime tz_cd 139719_00065 139719_00065_cd

5s 15s 20d 6s 14n 10s USGS 08041780 2018-05-06 00:00 CDT 1.98 A

It would be nice if there was a way to automatically skip the n’th row as well as the n’th line.

As a note, I was able to fix my issue with:

import pandas as pd
ds = pd.read_csv(fname, comment='#', sep='\t', header=0, parse_dates=True)
ds.drop(0, inplace=True)

回答 4

skiprows=[1] 会跳过第二行,而不是第一行。

skiprows=[1] will skip the second line, not the first one.


回答 5

另外,请确保您的文件实际上是CSV文件。例如,如果您有一个.xls文件,并且只是将文件扩展名更改为.csv,则该文件将不会导入,并且会出现上述错误。要检查是否是您的问题,请在excel中打开文件,该文件可能会显示:

“’Filename.csv’的文件格式和扩展名不匹配。该文件可能已损坏或不安全。除非您信任它的来源,否则请不要打开它。是否仍要打开它?”

修复文件:在 Excel 中打开文件,单击“另存为”,选择要保存的文件格式(使用 .csv),然后替换现有文件。

这是我的问题,并为我修复了错误。

Also be sure that your file is actually a CSV file. For example, if you had an .xls file, and simply changed the file extension to .csv, the file won’t import and will give the error above. To check to see if this is your problem open the file in excel and it will likely say:

“The file format and extension of ‘Filename.csv’ don’t match. The file could be corrupted or unsafe. Unless you trust its source, don’t open it. Do you want to open it anyway?”

To fix the file: open the file in Excel, click “Save As”, choose the file format to save as (use .csv), then replace the existing file.

This was my problem, and fixed the error for me.
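
One programmatic way to catch this (my own sketch, not from the original answer) is to peek at the file’s first bytes before calling read_csv: legacy .xls files start with the OLE2 signature and .xlsx files are zip archives starting with 'PK', while a real CSV is plain text:

import pandas as pd

def looks_like_excel(path):
    # legacy .xls files start with the OLE2 magic bytes,
    # .xlsx files are zip archives and start with 'PK'
    with open(path, 'rb') as f:
        head = f.read(4)
    return head.startswith(b'\xd0\xcf\x11\xe0') or head.startswith(b'PK')

if looks_like_excel('data.csv'):
    raise ValueError("data.csv is actually an Excel workbook; re-save it as CSV first")
df = pd.read_csv('data.csv')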


将熊猫数据框转换为序列

问题:将熊猫数据框转换为序列

我对熊猫有些陌生。我有一个熊猫数据框,它是1行乘23列。

我想把它转换成一个 Series。我想知道最 pythonic 的做法是什么?

我已经尝试过了,pd.Series(myResults)但是抱怨ValueError: cannot copy sequence with size 23 to array axis with dimension 1。它还不够聪明,无法意识到它仍然是数学上的“向量”。

谢谢!

I’m somewhat new to pandas. I have a pandas data frame that is 1 row by 23 columns.

I want to convert this into a series? I’m wondering what the most pythonic way to do this is?

I’ve tried pd.Series(myResults) but it complains ValueError: cannot copy sequence with size 23 to array axis with dimension 1. It’s not smart enough to realize it’s still a “vector” in math terms.

Thanks!


回答 0

它还不够聪明,无法意识到它仍然是数学上的“向量”。

不如说它足够聪明,能识别出维度上的差异。:-)

我认为最简单的做法是用 iloc 按位置选择该行,这会得到一个 Series,原来的列名成为新索引,原来的值成为 Series 的值:

>>> df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])
>>> df
   a0  a1  a2  a3  a4
0   0   1   2   3   4
>>> df.iloc[0]
a0    0
a1    1
a2    2
a3    3
a4    4
Name: 0, dtype: int64
>>> type(_)
<class 'pandas.core.series.Series'>

It’s not smart enough to realize it’s still a “vector” in math terms.

Say rather that it’s smart enough to recognize a difference in dimensionality. :-)

I think the simplest thing you can do is select that row positionally using iloc, which gives you a Series with the columns as the new index and the values as the values:

>>> df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])
>>> df
   a0  a1  a2  a3  a4
0   0   1   2   3   4
>>> df.iloc[0]
a0    0
a1    1
a2    2
a3    3
a4    4
Name: 0, dtype: int64
>>> type(_)
<class 'pandas.core.series.Series'>

回答 1

你可以先转置这个单行数据框(结果仍是一个数据框),然后用 squeeze 把结果压缩成一个 Series(to_frame 的逆操作)。

df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])

>>> df.T.squeeze()  # Or more simply, df.squeeze() for a single row dataframe.
a0    0
a1    1
a2    2
a3    3
a4    4
Name: 0, dtype: int64

注意:为了兼顾 @IanS 提出的观点(即使这不在 OP 的问题范围内),需要先检查数据框的大小。我假设 df 是一个数据框,但边缘情况包括:空数据框、形状为 (1, 1) 的数据框,以及有多行的数据框;对于最后一种情况,使用者应自行实现所需的行为。

if df.empty:
    # Empty dataframe, so convert to empty Series.
    result = pd.Series()
elif df.shape == (1, 1):
    # DataFrame with one value, so convert to series with appropriate index.
    result = pd.Series(df.iat[0, 0], index=df.columns)
elif len(df) == 1:
    # Convert to series per OP's question.
    result = df.T.squeeze()
else:
    # Dataframe with multiple rows.  Implement desired behavior.
    pass

也可以按照@themachinist提供的答案进行简化。

if len(df) > 1:
    # Dataframe with multiple rows.  Implement desired behavior.
    pass
else:
    result = pd.Series() if df.empty else df.iloc[0, :]

You can transpose the single-row dataframe (which still results in a dataframe) and then squeeze the results into a series (the inverse of to_frame).

df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])

>>> df.T.squeeze()  # Or more simply, df.squeeze() for a single row dataframe.
a0    0
a1    1
a2    2
a3    3
a4    4
Name: 0, dtype: int64

Note: To accommodate the point raised by @IanS (even though it is not in the OP’s question), test for the dataframe’s size. I am assuming that df is a dataframe, but the edge cases are an empty dataframe, a dataframe of shape (1, 1), and a dataframe with more than one row, in which case the user should implement their desired functionality.

if df.empty:
    # Empty dataframe, so convert to empty Series.
    result = pd.Series()
elif df.shape == (1, 1):
    # DataFrame with one value, so convert to series with appropriate index.
    result = pd.Series(df.iat[0, 0], index=df.columns)
elif len(df) == 1:
    # Convert to series per OP's question.
    result = df.T.squeeze()
else:
    # Dataframe with multiple rows.  Implement desired behavior.
    pass

This can also be simplified along the lines of the answer provided by @themachinist.

if len(df) > 1:
    # Dataframe with multiple rows.  Implement desired behavior.
    pass
else:
    result = pd.Series() if df.empty else df.iloc[0, :]

回答 2

您可以使用以下两种方法之一对数据框进行切片来检索系列:

http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html

import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.random.randn(1,8))

series1=df.iloc[0,:]
type(series1)
pandas.core.series.Series

You can retrieve the series through slicing your dataframe using one of these two methods:

http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html

import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.random.randn(1,8))

series1=df.iloc[0,:]
type(series1)
pandas.core.series.Series

回答 3

其他方式 –

假设 myResult 是一个 DataFrame,包含 1 列 23 行的数据

# label your columns by passing a list of names
myResult.columns = ['firstCol']

# fetch the column in this way, which will return you a series
myResult = myResult['firstCol']

print(type(myResult))

以类似的方式,您可以从具有多个列的Dataframe中获得序列。

Another way –

Suppose myResult is the dataFrame that contains your data in the form of 1 col and 23 rows

# label your columns by passing a list of names
myResult.columns = ['firstCol']

# fetch the column in this way, which will return you a series
myResult = myResult['firstCol']

print(type(myResult))

In similar fashion, you can get series from Dataframe with multiple columns.
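
For example (the column names here are just illustrative), selecting a single column by name from a wider dataframe already gives you a Series:

import pandas as pd

df = pd.DataFrame({'first': [1, 2, 3], 'second': [4, 5, 6]})
col = df['first']      # a Series holding that column
print(type(col))       # <class 'pandas.core.series.Series'>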


回答 4

您也可以使用stack()

df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])

在您运行df之后,请运行:

df.stack()

你就得到了 Series 形式的数据

You can also use stack()

df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])

After you run df, then run:

df.stack()

You obtain your data as a Series.
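
A quick sketch of what that returns for a one-row dataframe: stack() gives a Series whose index combines the row label with the column labels:

import pandas as pd

df = pd.DataFrame([list(range(5))], columns=["a{}".format(i) for i in range(5)])
s = df.stack()
print(type(s))   # <class 'pandas.core.series.Series'>
print(s)
# 0  a0    0
#    a1    1
#    a2    2
#    a3    3
#    a4    4
# dtype: int64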


回答 5

data = pd.DataFrame({"a":[1,2,3,34],"b":[5,6,7,8]})
new_data = pd.melt(data)
new_data.set_index("variable", inplace=True)

这会得到一个 DataFrame,其索引是原数据的列名,所有数据都在 value 列中。

data = pd.DataFrame({"a":[1,2,3,34],"b":[5,6,7,8]})
new_data = pd.melt(data)
new_data.set_index("variable", inplace=True)

This gives a dataframe whose index holds the original column names, with all the data in the value column.
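
A short sketch of the result, and of how to pull a plain Series back out of it (variable and value are pandas’ default column names produced by melt):

import pandas as pd

data = pd.DataFrame({"a": [1, 2, 3, 34], "b": [5, 6, 7, 8]})
new_data = pd.melt(data)
new_data.set_index("variable", inplace=True)
print(new_data)

# the 'value' column, indexed by the original column names, is itself a Series
s = new_data["value"]
print(type(s))   # <class 'pandas.core.series.Series'>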