Question: Reading a huge .csv file
I'm currently trying to read data from .csv files in Python 2.7, with up to 1 million rows and 200 columns (the files range from 100 MB to 1.6 GB). I can do this (very slowly) for files with under 300,000 rows, but once I go above that I get memory errors. My code looks like this:
def getdata(filename, criteria):
    data = []
    for criterion in criteria:
        data.append(getstuff(filename, criterion))
    return data

def getstuff(filename, criterion):
    import csv
    data = []
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        for row in datareader:
            if row[3] == "column header":
                data.append(row)
            elif len(data) < 2 and row[3] != criterion:
                pass
            elif row[3] == criterion:
                data.append(row)
            else:
                return data
The reason for the else clause in the getstuff function is that all the elements which fit the criterion will be listed together in the csv file, so I leave the loop when I get past them to save time.
My questions are:
How can I manage to get this to work with the bigger files?
Is there any way I can make it faster?
My computer has 8 GB of RAM, runs 64-bit Windows 7, and the processor is 3.40 GHz (not certain what other information you need).
Answer 0
You are reading all rows into a list, then processing that list. Don’t do that.
Process your rows as you produce them. If you need to filter the data first, use a generator function:
import csv

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        count = 0
        for row in datareader:
            if row[3] == criterion:
                yield row
                count += 1
            elif count:
                # done when having read a consecutive series of rows
                return
I also simplified your filter test; the logic is the same but more concise.
Because you only match a single consecutive run of rows for the criterion, you could also use:
import csv
from itertools import dropwhile, takewhile

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        # first matching row, plus any subsequent rows that match, then stop
        # reading altogether.
        # Python 2: use `for row in takewhile(...): yield row`
        # instead of `yield from takewhile(...)`.
        yield from takewhile(
            lambda r: r[3] == criterion,
            dropwhile(lambda r: r[3] != criterion, datareader))
        return
You can now loop over getstuff() directly. Do the same in getdata():
def getdata(filename, criteria):
    for criterion in criteria:
        for row in getstuff(filename, criterion):
            yield row
Now loop directly over getdata() in your code:
for row in getdata(somefilename, sequence_of_criteria):
    # process row
You now only hold one row in memory, instead of your thousands of lines per criterion.
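As a concrete usage sketch, you could stream the matches straight to an output file so that nothing accumulates in memory (the file names and criteria below are placeholders, not from the original question):

import csv

criteria = ["criterion A", "criterion B"]  # placeholder criteria

with open("filtered_output.csv", "wb") as outfile:  # "wb" to match the Python 2 csv usage above
    writer = csv.writer(outfile)
    for row in getdata("somefilename.csv", criteria):
        writer.writerow(row)  # each row is written out and then discarded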
yield makes a function a generator function, which means it won't do any work until you start looping over it.
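A tiny illustration of that laziness, separate from the CSV code:

def numbers():
    print("starting work")  # runs only once iteration begins
    yield 1
    yield 2

gen = numbers()    # nothing is printed yet; the function body has not executed
first = next(gen)  # now "starting work" is printed and 1 is produced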
Answer 1
Although Martijn's answer is probably best, here is a more intuitive way for beginners to process large csv files. It lets you process groups of rows, or chunks, at a time.
import pandas as pd

chunksize = 10 ** 8
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)
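process() above is left undefined; one possible sketch, assuming you want the same kind of row[3] == criterion filter as in the question (the column name, criterion value, and chunk size are hypothetical):

import pandas as pd

def process(chunk, column="col_3", criterion="some value"):
    # keep only the rows of this chunk that match the criterion
    return chunk[chunk[column] == criterion]

chunksize = 10 ** 6  # hypothetical chunk size
matches = []
for chunk in pd.read_csv("somefilename.csv", chunksize=chunksize):
    matches.append(process(chunk))

result = pd.concat(matches, ignore_index=True)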
Answer 2
I do a fair amount of vibration analysis and look at large data sets (tens to hundreds of millions of points). My testing showed the pandas.read_csv() function to be 20 times faster than numpy.genfromtxt(), and genfromtxt() is in turn 3 times faster than numpy.loadtxt(). It seems that you need pandas for large data sets.
I posted the code and data sets I used in this testing on a blog discussing MATLAB vs Python for vibration analysis.
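If you want to reproduce that kind of comparison on your own data, here is a rough timing sketch (the file name is a placeholder, it assumes a purely numeric csv with a header row, and the relative speeds will depend on your file):

import time
import numpy as np
import pandas as pd

filename = "somefile.csv"  # placeholder

for label, loader in [
        ("pandas.read_csv", lambda f: pd.read_csv(f)),
        ("numpy.genfromtxt", lambda f: np.genfromtxt(f, delimiter=",", skip_header=1)),
        ("numpy.loadtxt", lambda f: np.loadtxt(f, delimiter=",", skiprows=1)),
]:
    start = time.time()
    loader(filename)
    print(label, "took", time.time() - start, "seconds")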
Answer 3
What worked for me, and was super fast, is:
import pandas as pd
import dask.dataframe as dd
import time

t = time.clock()
df_train = dd.read_csv('../data/train.csv', usecols=['col1', 'col2'])
df_train = df_train.compute()
print("load train: ", time.clock() - t)
Another working solution is:
import pandas as pd
from tqdm import tqdm

PATH = '../data/train.csv'
chunksize = 500000
traintypes = {
    'col1': 'category',
    'col2': 'str'}

cols = list(traintypes.keys())

df_list = []  # list to hold the batch dataframes

for df_chunk in tqdm(pd.read_csv(PATH, usecols=cols, dtype=traintypes, chunksize=chunksize)):
    # Can process each chunk of dataframe here,
    # e.g. clean_data(), feature_engineer(), fit()
    # Alternatively, append the chunk to the list and merge all at the end
    df_list.append(df_chunk)

# Merge all chunks into one dataframe
X = pd.concat(df_list)

# Delete the dataframe list to release memory
del df_list
del df_chunk
Answer 4
For anyone who lands on this question: using pandas with 'chunksize' and 'usecols' helped me read a huge compressed file faster than the other proposed options.
import pandas as pd

sample_cols_to_keep = ['col_1', 'col_2', 'col_3', 'col_4', 'col_5']

# First set up the dataframe iterator; 'usecols' filters the columns, and
# 'chunksize' sets the number of rows per chunk in the csv.
# (you can change these parameters as you wish)
df_iter = pd.read_csv('../data/huge_csv_file.csv.gz', compression='gzip',
                      chunksize=20000, usecols=sample_cols_to_keep)

# this list will store the filtered dataframes for later concatenation
df_lst = []

# Iterate over the file chunk by chunk, filter on the criteria, and append to the list
for df_ in df_iter:
    tmp_df = (df_.rename(columns={col: col.lower() for col in df_.columns})
              # filter, e.g. keep rows where the 'col_1' value is greater than zero
              .pipe(lambda x: x[x.col_1 > 0]))
    df_lst += [tmp_df.copy()]

# And finally combine the filtered frames in df_lst into the final, larger 'df_final' dataframe
df_final = pd.concat(df_lst)
Answer 5
Here's another solution for Python 3:
import csv

with open(filename, "r") as csvfile:
    datareader = csv.reader(csvfile)
    count = 0
    for row in datareader:
        if row[3] in ("column header", criterion):
            doSomething(row)
            count += 1
        elif count > 2:
            break
Here datareader is an iterator, so rows are only read from the file as the loop consumes them.
Answer 6
If you are using pandas and have lots of RAM (enough to read the whole file into memory), try pd.read_csv with low_memory=False, e.g.:
import pandas as pd

data = pd.read_csv('file.csv', low_memory=False)
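If RAM is actually the constraint, a related option, in the spirit of the dtype dictionary used in the earlier answer, is to declare the column types up front so pandas does not have to infer them; a minimal sketch with hypothetical column names:

import pandas as pd

# hypothetical column names and types; adjust to your file
dtypes = {'col1': 'category', 'col2': 'float32'}
data = pd.read_csv('file.csv', dtype=dtypes, usecols=list(dtypes))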