Question: Assigning pandas DataFrame column dtypes

I want to set the dtypes of multiple columns in pd.DataFrame (I have a file that I've had to manually parse into a list of lists, as the file was not amenable to pd.read_csv).

import pandas as pd
print(pd.DataFrame([['a','1'],['b','2']],
                   dtype={'x':'object','y':'int'},
                   columns=['x','y']))

I get

ValueError: entry not a 2- or 3- tuple

The only way I can set them is by looping over each column and recasting with astype.

dtypes = {'x':'object','y':'int'}
mydata = pd.DataFrame([['a','1'],['b','2']],
                      columns=['x','y'])
for c in mydata.columns:
    mydata[c] = mydata[c].astype(dtypes[c])
print(mydata['y'].dtype)   #=> int64

Is there a better way?


Answer 0

Since 0.17, you have to use the explicit conversions:

pd.to_datetime, pd.to_timedelta and pd.to_numeric

(As mentioned below, the "magic" is gone: convert_objects was deprecated in 0.17.)

df = pd.DataFrame({'x': {0: 'a', 1: 'b'}, 'y': {0: '1', 1: '2'}, 'z': {0: '2018-05-01', 1: '2018-05-02'}})

df.dtypes

x    object
y    object
z    object
dtype: object

df

   x  y           z
0  a  1  2018-05-01
1  b  2  2018-05-02

You can apply these to each column you want to convert:

df["y"] = pd.to_numeric(df["y"])
df["z"] = pd.to_datetime(df["z"])    
df

   x  y          z
0  a  1 2018-05-01
1  b  2 2018-05-02

df.dtypes

x            object
y             int64
z    datetime64[ns]
dtype: object

and confirm the dtype is updated.


OLD/DEPRECATED ANSWER for pandas 0.12 – 0.16: You can use convert_objects to infer better dtypes:

In [21]: df
Out[21]: 
   x  y
0  a  1
1  b  2

In [22]: df.dtypes
Out[22]: 
x    object
y    object
dtype: object

In [23]: df.convert_objects(convert_numeric=True)
Out[23]: 
   x  y
0  a  1
1  b  2

In [24]: df.convert_objects(convert_numeric=True).dtypes
Out[24]: 
x    object
y     int64
dtype: object

Magic! (Sad to see it deprecated.)


Answer 1

For those coming from Google (etc.), such as myself:

convert_objects has been deprecated since 0.17 – if you use it, you get a warning like this one:

FutureWarning: convert_objects is deprecated.  Use the data-type specific converters 
pd.to_datetime, pd.to_timedelta and pd.to_numeric.

You should do something like the following:
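The code block appears to have been dropped from the original answer; a minimal sketch of what is meant, using the type-specific converters named in the warning:

import pandas as pd

df = pd.DataFrame([['a', '1'], ['b', '2']], columns=['x', 'y'])

# Convert each column with the appropriate type-specific converter
df['y'] = pd.to_numeric(df['y'])    # object -> int64
# pd.to_datetime / pd.to_timedelta work the same way for date / timedelta columns

print(df.dtypes)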


Answer 2

You can set the types explicitly with pandas DataFrame.astype(dtype, copy=True, raise_on_error=True, **kwargs), passing a dictionary that maps column names to the dtypes you want as dtype.

Here's an example:

import pandas as pd
wheel_number = 5
car_name = 'jeep'
minutes_spent = 4.5

# set the columns
data_columns = ['wheel_number', 'car_name', 'minutes_spent']

# create an empty dataframe
data_df = pd.DataFrame(columns = data_columns)
df_temp = pd.DataFrame([[wheel_number, car_name, minutes_spent]],columns = data_columns)
data_df = data_df.append(df_temp, ignore_index=True) 

In [11]: data_df.dtypes
Out[11]:
wheel_number     float64
car_name          object
minutes_spent    float64
dtype: object

data_df = data_df.astype(dtype= {"wheel_number":"int64",
        "car_name":"object","minutes_spent":"float64"})

Now you can see that it has changed:

In [18]: data_df.dtypes
Out[18]:
wheel_number       int64
car_name          object
minutes_spent    float64

Answer 3

Another way to set the column types is to first construct a NumPy record array with your desired types, fill it in, and then pass it to the DataFrame constructor.

import pandas as pd
import numpy as np    

x = np.empty((10,), dtype=[('x', np.uint8), ('y', np.float64)])
df = pd.DataFrame(x)

df.dtypes ->

x      uint8
y    float64
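The sketch above leaves the record array uninitialized; a short continuation, with made-up values, showing the fill-and-construct step:

import numpy as np
import pandas as pd

x = np.empty((10,), dtype=[('x', np.uint8), ('y', np.float64)])

# Fill each field before handing the record array to the DataFrame constructor
x['x'] = np.arange(10)               # values here are made up
x['y'] = np.linspace(0.0, 1.0, 10)

df = pd.DataFrame(x)
print(df.dtypes)   # x: uint8, y: float64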

Answer 4

I was facing a similar problem. In my case I have thousands of Cisco log files that I need to parse manually.

To stay flexible with fields and types, I have successfully tested using StringIO + read_csv, which does indeed accept a dict for the dtype specification.

I usually read each of the files (5k-20k lines) into a buffer and create the dtype dictionaries dynamically.

Eventually I concatenate these dataframes (with categoricals... thanks to 0.19) into one large dataframe that I dump into HDF5 (see the sketch at the end of this answer).

Something along these lines:

import pandas as pd
import io 

output = io.StringIO()
output.write('A,1,20,31\n')
output.write('B,2,21,32\n')
output.write('C,3,22,33\n')
output.write('D,4,23,34\n')

output.seek(0)


df=pd.read_csv(output, header=None,
        names=["A","B","C","D"],
        dtype={"A":"category","B":"float32","C":"int32","D":"float64"},
        sep=","
       )

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 4 columns):
A    5 non-null category
B    5 non-null float32
C    5 non-null int32
D    5 non-null float64
dtypes: category(1), float32(1), float64(1), int32(1)
memory usage: 205.0 bytes
None

Not very pythonic... but it does the job.

Hope it helps.

JC
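The concatenation and HDF5 dump mentioned above are not shown in the original; a rough sketch under my own assumptions (a hypothetical parse_file helper, made-up data and filenames, PyTables installed) might look like this:

import io
import pandas as pd

def parse_file(raw_text, dtypes):
    # Hypothetical helper: each manually parsed log ends up as CSV text in a
    # buffer, and read_csv applies the per-file dtype dict, as shown above.
    return pd.read_csv(io.StringIO(raw_text), header=None,
                       names=list(dtypes), dtype=dtypes, sep=",")

dtypes = {"A": "category", "B": "float32", "C": "int32", "D": "float64"}
frames = [parse_file("A,1,20,31\nB,2,21,32\n", dtypes),
          parse_file("C,3,22,33\nD,4,23,34\n", dtypes)]

# pandas >= 0.19 can keep the category dtype through concat when the
# categories are compatible; otherwise the column falls back to object.
big = pd.concat(frames, ignore_index=True)

# Dump to HDF5 (requires the PyTables package).
big.to_hdf("cisco_logs.h5", key="logs", mode="w", format="table")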


Answer 5

You're better off using typed np.arrays, and then passing the data and column names as a dictionary.

import numpy as np
import pandas as pd
# Feature: np arrays are 1: efficient, 2: can be pre-sized
x = np.array(['a', 'b'], dtype=object)
y = np.array([ 1 ,  2 ], dtype=np.int32)
df = pd.DataFrame({
   'x' : x,    # Feature: column name is near data array
   'y' : y,
   }
 )
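A quick follow-up check (my addition): the dtypes travel with the arrays into the DataFrame, so no recasting is needed.

print(df.dtypes)
# Should print something like:
# x    object
# y     int32
# dtype: object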
