I have a dataframe in pandas which I would like to write to a CSV file. I am doing this using:
df.to_csv('out.csv')
And getting the error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u03b1' in position 20: ordinal not in range(128)
Is there any way to get around this easily (i.e., I have Unicode characters in my data frame)? And is there a way to write to a tab-delimited file instead of a CSV, using e.g. a 'to-tab' method (which I don't think exists)?
When you store a DataFrame object to a CSV file using the to_csv method, you probably won't need to store the index of each row of the DataFrame object.
You can avoid that by passing a False boolean value to the index parameter.
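For example (a minimal sketch, reusing the out.csv file name from the question):

df.to_csv('out.csv', index=False)  # omit the row index from the output file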
To write a pandas DataFrame to a CSV file, you will need DataFrame.to_csv. This function offers many arguments with reasonable defaults that you will more often than not need to override to suit your specific use case. For example, you might want to use a different separator, change the datetime format, or drop the index when writing. to_csv has arguments you can pass to address these requirements.
Here are some common scenarios when writing to CSV files, and the corresponding arguments you can use for them.
sep: The default separator is assumed to be a comma (','). Don't change this unless you know you need to.
index: By default, the index of df is written as the first column. If your DataFrame does not have a meaningful index (in other words, df.index is the default RangeIndex), you will want to set index=False when writing. Put differently, if your data DOES have an index, you can (and should) use index=True, or just leave the argument out completely (as the default is True).
encoding: It is wise to set this parameter if you are writing string data, so that other applications know how to read your data. It will also avoid any potential UnicodeEncodeError you might encounter while saving.
compression: Compression is recommended if you are writing large DataFrames (>100K rows) to disk, as it results in much smaller output files. On the other hand, the write time will increase (and, consequently, the read time, since the file will need to be decompressed).
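Putting a few of these together, here is a minimal sketch (the file name and sample data are only placeholders) that writes a tab-delimited, UTF-8 encoded, gzip-compressed file without the index:

import pandas as pd

df = pd.DataFrame({'name': [u'alpha \u03b1', u'beta \u03b2'], 'value': [1, 2]})

# sep: tab-delimited output, index: drop the RangeIndex,
# encoding: handle the non-ASCII characters, compression: shrink the file on disk
df.to_csv('out.tsv.gz', sep='\t', index=False, encoding='utf-8', compression='gzip')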
Something else you can try if you are having issues encoding to 'utf-8' and want to go cell by cell is the following.
Python 2
(Where “df” is your DataFrame object.)
for column in df.columns:
    for idx in df[column].index:
        x = df.get_value(idx, column)
        try:
            # re-encode the cell as unicode, dropping anything that cannot be encoded
            x = unicode(x.encode('utf-8', 'ignore'), errors='ignore') if type(x) == unicode else unicode(str(x), errors='ignore')
            df.set_value(idx, column, x)
        except Exception:
            print 'encoding error: {0} {1}'.format(idx, column)
            # blank out cells that could not be cleaned
            df.set_value(idx, column, '')
            continue
Then try:
df.to_csv(file_name)
You can check the encoding of the columns by:
for column in df.columns:
    print '{0} {1}'.format(str(type(df[column][0])), str(column))
Warning: errors='ignore' will just omit any character it cannot encode, rather than raising an error. The Python 3 version of the same loop is:
for column in df.columns:
    for idx in df[column].index:
        x = df.get_value(idx, column)
        try:
            # in Python 3 every str is already unicode; round-trip through UTF-8 to drop bad bytes
            x = x if type(x) == str else str(x).encode('utf-8', 'ignore').decode('utf-8', 'ignore')
            df.set_value(idx, column, x)
        except Exception:
            print('encoding error: {0} {1}'.format(idx, column))
            # blank out cells that could not be cleaned
            df.set_value(idx, column, '')
            continue
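Note that get_value and set_value were deprecated and later removed from pandas (1.0 onwards). On a current install, the same cell-by-cell clean-up can be sketched with an element-wise map instead; this is a hedged sketch, not the original author's code, and it assumes Python 3, a recent pandas, and that it is acceptable to coerce every cell to a string:

import pandas as pd

# placeholder frame and file name; in practice use your own df and file_name
df = pd.DataFrame({'text': [u'\u03b1 test', 42]})
file_name = 'out.csv'

def to_utf8_safe(x):
    # coerce the cell to str and round-trip through UTF-8, dropping anything un-encodable
    return str(x).encode('utf-8', 'ignore').decode('utf-8', 'ignore')

df = df.apply(lambda col: col.map(to_utf8_safe))  # element-wise; turns every cell into a str
df.to_csv(file_name, index=False, encoding='utf-8')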
Sometimes you face these problems even if you specify UTF-8 encoding.
I recommend specifying the encoding when reading the file and using the same encoding when writing to it.
This might solve your problem.
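For instance (a sketch with placeholder file names, assuming the source file really is UTF-8 encoded):

import pandas as pd

# read and write with the same, explicit encoding
df = pd.read_csv('input.csv', encoding='utf-8')
df.to_csv('output.csv', encoding='utf-8', index=False)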
An example of exporting a file with a full path on Windows; if the file has a header, do the following:
df.to_csv(r'C:\Users\John\Desktop\export_dataframe.csv', index=None, header=True)
It may not be the answer for this case, but since I had the same error message with .to_csv, I tried .toCSV('name.csv') and the error message was different ("'SparseDataFrame' object has no attribute 'toCSV'"). So the problem was solved by converting the DataFrame to a dense DataFrame:
df.to_dense().to_csv("submission.csv", index=False, sep=',', encoding='utf-8')