Tag Archives: text-files

How do I jump to a particular line in a huge text file?

Question: How do I jump to a particular line in a huge text file?

Are there any alternatives to the code below:

startFromLine = 141978 # or whatever line I need to jump to

urlsfile = open(filename, "rb", 0)

linesCounter = 1

for line in urlsfile:
    if linesCounter > startFromLine:
        DoSomethingWithThisLine(line)

    linesCounter += 1

I'm processing a huge text file (~15MB) with lines of unknown and varying length, and I need to jump to a particular line whose number I know in advance. I feel bad about processing the lines one by one when I know I could ignore at least the first half of the file. I'm looking for a more elegant solution, if there is one.


Answer 0

linecache:

The linecache module allows one to get any line from a Python source file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file. This is used by the traceback module to retrieve source lines for inclusion in the formatted traceback…
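For illustration, a minimal sketch of fetching one line with linecache (the file name and line number are placeholders):

import linecache

line = linecache.getline('huge_file.txt', 141978)  # line numbers are 1-based; returns '' if out of range
print(line)
linecache.clearcache()  # free the cached file contents when you are done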


Answer 1

You can’t jump ahead without reading in the file at least once, since you don’t know where the line breaks are. You could do something like:

# Read in the file once and build a list of line offsets
line_offset = []
offset = 0
for line in file:
    line_offset.append(offset)
    offset += len(line)
file.seek(0)

# Now, to skip to line n (with the first line being line 0), just do
file.seek(line_offset[n])

Answer 2

You don't really have that many options if the lines are of different length… you sadly need to process the line-ending characters to know when you've progressed to the next line.

You can, however, dramatically speed this up AND reduce memory usage by changing the last parameter to "open" to something other than 0.

0 means the file reading operation is unbuffered, which is very slow and disk intensive. 1 means the file is line buffered, which would be an improvement. Anything above 1 (say 8 KB, i.e. 8192, or higher) reads chunks of the file into memory. You still access it through for line in open(etc):, but Python only goes a bit at a time, discarding each buffered chunk after it's processed.
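As a hedged sketch of that suggestion, here is the loop from the question with a 64 KB buffer instead of an unbuffered file (filename, the line number and DoSomethingWithThisLine come from the question):

startFromLine = 141978

with open(filename, 'rb', 1024 * 64) as urlsfile:  # third argument is the buffer size
    for linesCounter, line in enumerate(urlsfile, start=1):
        if linesCounter > startFromLine:
            DoSomethingWithThisLine(line)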


Answer 3

I'm probably spoiled by abundant RAM, but 15 MB is not huge. Reading into memory with readlines() is what I usually do with files of this size. Accessing a line after that is trivial.
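A minimal sketch of that approach (the file name and line number are placeholders):

with open('huge_file.txt') as f:
    lines = f.readlines()

print(lines[141977])  # list indices are 0-based, so this is line 141978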


Answer 4

I am surprised no one mentioned islice:

import itertools

line = next(itertools.islice(Fhandle, index_of_interest, index_of_interest+1), None)  # just the one line

or if you want the whole rest of the file

rest_of_file = itertools.islice(Fhandle, index_of_interest, None)  # from index_of_interest to the end
for line in rest_of_file:
    print line

or if you want every other line from the file

rest_of_file = itertools.islice(Fhandle,index_of_interest,None,2)
for odd_line in rest_of_file:
    print odd_line

Answer 5

Since there is no way to determine the length of all lines without reading them, you have no choice but to iterate over all lines before your starting line. All you can do is make it look nice. If the file is really huge then you might want to use a generator-based approach:

from itertools import dropwhile

def iterate_from_line(f, start_from_line):
    return (l for i, l in dropwhile(lambda x: x[0] < start_from_line, enumerate(f)))

for line in iterate_from_line(open(filename, "r", 0), 141978):
    DoSomethingWithThisLine(line)

Note: the index is zero based in this approach.


Answer 6

If you don't want to read the entire file into memory, you may need to come up with some format other than plain text.

Of course, it all depends on what you're trying to do and how often you will jump around the file.

For instance, if you're going to be jumping to lines many times in the same file, and you know that the file does not change while you work with it, you can do this:
First, pass through the whole file once and record the "seek location" of some key line numbers (say, every 1000 lines).
Then, if you want line 12005, jump to the position of line 12000 (which you've recorded), read 5 lines, and you know you're at line 12005, and so on. A sketch of this idea follows.
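A hedged sketch of that idea; the helper names, the 1000-line step and the file name are illustrative, and line numbers here are 0-based:

def build_index(f, step=1000):
    # Remember the byte offset of the start of every step-th line.
    index = {0: 0}
    offset = 0
    for lineno, line in enumerate(f):
        offset += len(line)
        if (lineno + 1) % step == 0:
            index[lineno + 1] = offset
    return index

def get_line(f, index, wanted, step=1000):
    checkpoint = (wanted // step) * step   # nearest recorded line at or before the wanted one
    f.seek(index[checkpoint])
    for _ in range(wanted - checkpoint):
        f.readline()                       # skip forward from the checkpoint
    return f.readline()

with open('huge_file.txt', 'rb') as f:     # binary mode so offsets match seek positions
    index = build_index(f)
    print(get_line(f, index, 12004))       # 0-based, i.e. the 12005th line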


Answer 7

If you know in advance the position in the file (rather the line number), you can use file.seek() to go to that position.

Edit: you can use the linecache.getline(filename, lineno) function, which will return the contents of line lineno, but only after reading the entire file into memory. Good if you're randomly accessing lines from within the file (as Python itself might want to do to print a traceback), but not good for a 15MB file.


Answer 8

What generates the file you want to process? If it is something under your control, you could generate an index (which line is at which position) at the time the file is appended to. The index file can have a fixed line size (space-padded or zero-padded numbers) and will definitely be smaller, so it can be read and processed quickly. A sketch follows the list below.

  • Which line do you want?
  • Calculate the byte offset of the corresponding line number in the index file (possible because the index file's line size is constant).
  • Use seek or whatever to jump directly to that line in the index file.
  • Parse it to get the byte offset of the corresponding line in the actual file.
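A hedged sketch of this scheme; the record width, helper names and file names are illustrative, and line numbers are 0-based:

RECORD = 11  # each index record: a zero-padded 10-digit byte offset plus a newline

def write_index(data_path, index_path):
    with open(data_path, 'rb') as data, open(index_path, 'wb') as index:
        offset = 0
        for line in data:
            index.write(b'%010d\n' % offset)   # fixed-size records make the index itself seekable
            offset += len(line)

def read_line(data_path, index_path, lineno):
    with open(index_path, 'rb') as index:
        index.seek(lineno * RECORD)            # jump straight to the record for this line
        offset = int(index.read(RECORD))
    with open(data_path, 'rb') as data:
        data.seek(offset)
        return data.readline()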

Answer 9

I have had the same problem (needing to retrieve a specific line from a huge file).

Surely, I could run through all records in the file every time and stop when the counter equals the target line, but that does not work well when you want to obtain several specific rows. That leaves the main issue to be solved: how to seek directly to the necessary place in the file.

I found the following solution: first, I fill a dictionary with the start position of each line (the key is the line number, and the value is the cumulative length of the previous lines).

t = open(file, 'r')
dict_pos = {}

kolvo = 0
length = 0
for each in t:
    dict_pos[kolvo] = length
    length = length + len(each)
    kolvo = kolvo + 1

Finally, the lookup function:

def give_line(line_number):
    t.seek(dict_pos.get(line_number))
    line = t.readline()
    return line

t.seek(dict_pos[line_number]) moves the file position to the start of that line, so a subsequent readline() returns the target line.

Using this approach I saved a significant amount of time.


Answer 10

You may use mmap to find the offsets of the lines. mmap seems to be the fastest way to process a file.

example:

import mmap

with open('input_file', "r+b") as f:
    mapped = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    i = 1
    offset = 0
    for line in iter(mapped.readline, ""):
        if i == Line_I_want_to_jump:
            offsets = offset        # byte offset of the start of the wanted line
        offset = mapped.tell()
        i += 1

then use f.seek(offsets) to move to the line you need


Answer 11

Do the lines themselves contain any index information? If the content of each line was something like “<line index>:Data“, then the seek() approach could be used to do a binary search through the file, even if the amount of Data is variable. You’d seek to the midpoint of the file, read a line, check whether its index is higher or lower than the one you want, etc.
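A hedged sketch of that binary search, assuming the file is opened in binary mode, every line starts with an integer index followed by a colon, and the indices are in ascending order (the helper name is illustrative):

def find_indexed_line(f, wanted):
    f.seek(0, 2)                        # seek to the end to learn the file size
    lo, hi = 0, f.tell()
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid)
        if mid > 0:
            f.readline()                # discard the (probably partial) line containing mid
        pos = f.tell()
        line = f.readline()
        if not line:                    # ran off the end: search the lower half
            hi = mid
            continue
        index = int(line.split(b':', 1)[0])
        if index < wanted:
            lo = pos + len(line)        # the wanted line must start after this one
        elif index > wanted:
            hi = mid
        else:
            return line
    return None                         # no line with that index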

Otherwise, the best you can do is just readlines(). If you don’t want to read all 15MB, you can use the sizehint argument to at least replace a lot of readline()s with a smaller number of calls to readlines().


Answer 12

If you're dealing with a text file on a Linux-based system, you could use Linux commands.
For me, this worked well!

import commands

def read_line(path, line=1):
    return commands.getoutput('head -%s %s | tail -1' % (line, path))

line_to_jump = 141978
read_line("path_to_large_text_file", line_to_jump)

Answer 13

Here’s an example using ‘readlines(sizehint)’ to read a chunk of lines at a time. DNS pointed out that solution. I wrote this example because the other examples here are single-line oriented.

def getlineno(filename, lineno):
    if lineno < 1:
        raise TypeError("First line is line 1")
    f = open(filename)
    lines_read = 0
    while 1:
        lines = f.readlines(100000)
        if not lines:
            return None
        if lines_read + len(lines) >= lineno:
            return lines[lineno-lines_read-1]
        lines_read += len(lines)

print getlineno("nci_09425001_09450000.smi", 12000)

Answer 14

None of the answers are particularly satisfactory, so here’s a small snippet to help.

class LineSeekableFile:
    def __init__(self, seekable):
        self.fin = seekable
        self.line_map = list() # Map from line index -> file position.
        self.line_map.append(0)
        while seekable.readline():
            self.line_map.append(seekable.tell())

    def __getitem__(self, index):
        # NOTE: This assumes that you're not reading the file sequentially.  
        # For that, just use 'for line in file'.
        self.fin.seek(self.line_map[index])
        return self.fin.readline()

Example usage:

In: !cat /tmp/test.txt

Out:
Line zero.
Line one!

Line three.
End of file, line four.

In:
with open("/tmp/test.txt", 'rt') as fin:
    seeker = LineSeekableFile(fin)    
    print(seeker[1])
Out:
Line one!

This involves doing a lot of file seeks, but is useful for the cases where you can’t fit the whole file in memory. It does one initial read to get the line locations (so it does read the whole file, but doesn’t keep it all in memory), and then each access does a file seek after the fact.

I offer the snippet above under the MIT or Apache license at the discretion of the user.


Answer 15

You can use this function to return line n:

def skipton(infile, n):
    with open(infile,'r') as fi:
        for i in range(n-1):
            fi.next()
        return fi.next()

How do I determine the encoding of text?

Question: How do I determine the encoding of text?

I received some text that is encoded, but I don't know what charset was used. Is there a way to determine the encoding of a text file using Python? (The related question "How can I detect the encoding/codepage of a text file" deals with C#.)


Answer 0

Correctly detecting the encoding all the time is impossible.

(From chardet FAQ:)

However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will instantly recognize that that isn’t English (even though it is composed entirely of English letters). By studying lots of “typical” text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text’s language.

There is the chardet library that uses that study to try to detect encoding. chardet is a port of the auto-detection code in Mozilla.
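A minimal usage sketch (the file name is a placeholder; chardet.detect returns a dict with the guessed encoding and a confidence value):

import chardet

with open('unknown.txt', 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)
print(guess)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}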

You can also use UnicodeDammit. It will try the following methods:

  • An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
  • An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
  • An encoding sniffed by the chardet library, if you have it installed.
  • UTF-8
  • Windows-1252

Answer 1

Another option for working out the encoding is to use libmagic (which is the code behind the file command). There are a profusion of python bindings available.

The python bindings that live in the file source tree are available as the python-magic (or python3-magic) debian package. It can determine the encoding of a file by doing:

import magic

blob = open('unknown-file', 'rb').read()
m = magic.open(magic.MAGIC_MIME_ENCODING)
m.load()
encoding = m.buffer(blob)  # "utf-8" "us-ascii" etc

There is an identically named, but incompatible, python-magic pip package on pypi that also uses libmagic. It can also get the encoding, by doing:

import magic

blob = open('unknown-file', 'rb').read()
m = magic.Magic(mime_encoding=True)
encoding = m.from_buffer(blob)

Answer 2

Some encoding strategies, please uncomment to taste :

#!/bin/bash
#
tmpfile=$1
echo '-- info about file file ........'
file -i $tmpfile
enca -g $tmpfile
echo 'recoding ........'
#iconv -f iso-8859-2 -t utf-8 back_test.xml > $tmpfile
#enca -x utf-8 $tmpfile
#enca -g $tmpfile
recode CP1250..UTF-8 $tmpfile

You might like to check the encoding by opening and reading the file in a loop… but you might need to check the file size first:

import codecs

encodings = ['utf-8', 'windows-1250', 'windows-1252']  # ...etc
for e in encodings:
    try:
        fh = codecs.open('file.txt', 'r', encoding=e)
        fh.readlines()
        fh.seek(0)
    except UnicodeDecodeError:
        print('got unicode error with %s , trying different encoding' % e)
    else:
        print('opening the file with encoding:  %s ' % e)
        break

Answer 3

Here is an example of reading and taking at face value a chardet encoding prediction, reading n_lines from the file in the event it is large.

chardet also gives you a confidence value (i.e. a probability) for its encoding prediction (I haven't looked at how it is computed), which is returned alongside the encoding by chardet.detect(), so you could work that in somehow if you like.

def predict_encoding(file_path, n_lines=20):
    '''Predict a file's encoding using chardet'''
    import chardet

    # Open the file as binary data
    with open(file_path, 'rb') as f:
        # Join binary lines for specified number of lines
        rawdata = b''.join([f.readline() for _ in range(n_lines)])

    return chardet.detect(rawdata)['encoding']

Answer 4

# Function: OpenRead(file)

# A text file can be encoded using:
#   (1) The default operating system code page, Or
#   (2) utf8 with a BOM header
#
#  If a text file is encoded with utf8, and does not have a BOM header,
#  the user can manually add a BOM header to the text file
#  using a text editor such as notepad++, and rerun the python script,
#  otherwise the file is read as a codepage file with the 
#  invalid codepage characters removed

import sys
if int(sys.version[0]) != 3:
    print('Aborted: Python 3.x required')
    sys.exit(1)

def bomType(file):
    """
    returns file encoding string for open() function

    EXAMPLE:
        bom = bomtype(file)
        open(file, encoding=bom, errors='ignore')
    """

    f = open(file, 'rb')
    b = f.read(4)
    f.close()

    if (b[0:3] == b'\xef\xbb\xbf'):
        return "utf8"

    # Check utf-32 BOMs before utf-16: the utf-32 LE BOM (ff fe 00 00)
    # begins with the utf-16 LE BOM (ff fe)
    if (b[0:4] == b'\x00\x00\xfe\xff') or (b[0:4] == b'\xff\xfe\x00\x00'):
        return "utf32"

    # Python automatically detects endianness if a utf-16 BOM is present
    # (write endianness is generally determined by the endianness of the CPU)
    if (b[0:2] == b'\xfe\xff') or (b[0:2] == b'\xff\xfe'):
        return "utf16"

    # If BOM is not provided, then assume its the codepage
    #     used by your operating system
    return "cp1252"
    # For the United States its: cp1252


def OpenRead(file):
    bom = bomType(file)
    return open(file, 'r', encoding=bom, errors='ignore')


#######################
# Testing it
#######################
fout = open("myfile1.txt", "w", encoding="cp1252")
fout.write("* hi there (cp1252)")
fout.close()

fout = open("myfile2.txt", "w", encoding="utf8")
fout.write("\u2022 hi there (utf8)")
fout.close()

# this case is still treated like codepage cp1252
#   (User responsible for making sure that all utf8 files
#   have a BOM header)
fout = open("badboy.txt", "wb")
fout.write(b"hi there.  barf(\x81\x8D\x90\x9D)")
fout.close()

# Read Example file with Bom Detection
fin = OpenRead("myfile1.txt")
L = fin.readline()
print(L)
fin.close()

# Read Example file with Bom Detection
fin = OpenRead("myfile2.txt")
L =fin.readline() 
print(L) #requires QtConsole to view, Cmd.exe is cp1252
fin.close()

# Read CP1252 with a few undefined chars without barfing
fin = OpenRead("badboy.txt")
L =fin.readline() 
print(L)
fin.close()

# Check that bad characters are still in badboy codepage file
fin = open("badboy.txt", "rb")
fin.read(20)
fin.close()

Answer 5

Depending on your platform, I just opt to use the linux shell file command. This works for me since I am using it in a script that exclusively runs on one of our linux machines.

Obviously this isn’t an ideal solution or answer, but it could be modified to fit your needs. In my case I just need to determine whether a file is UTF-8 or not.

import subprocess
file_cmd = ['file', 'test.txt']
p = subprocess.Popen(file_cmd, stdout=subprocess.PIPE)
cmd_output = p.stdout.readlines()
# x will begin with the file type output as is observed using 'file' command
x = cmd_output[0].split(": ")[1]
return x.startswith('UTF-8')

Answer 6

This might be helpful

from bs4 import UnicodeDammit
with open('automate_data/billboard.csv', 'rb') as file:
   content = file.read()

suggestion = UnicodeDammit(content)
suggestion.original_encoding
#'iso-8859-1'

Answer 7

It is, in principle, impossible to determine the encoding of a text file, in the general case. So no, there is no standard Python library to do that for you.

If you have more specific knowledge about the text file (e.g. that it is XML), there might be library functions.
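For instance, a hedged sketch of reading the declared encoding from an XML file's declaration (the helper name and regular expression are illustrative, and many files have no declaration to find):

import re

def xml_declared_encoding(path):
    with open(path, 'rb') as f:
        first_line = f.readline()
    # Look for encoding="..." or encoding='...' in the XML declaration, if any
    match = re.search(br'encoding=["\']([^"\']+)["\']', first_line)
    return match.group(1).decode('ascii') if match else None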


Answer 8

If you know some of the content of the file, you can try to decode it with several encodings and see which one produces sensible text. In general there is no way, since a text file is a text file and those are stupid ;)
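A minimal sketch of that trial-and-error idea (the candidate list and file name are illustrative):

candidates = ['utf-8', 'cp1252', 'latin-1']  # note: latin-1 never fails, so keep it last

with open('unknown.txt', 'rb') as f:
    raw = f.read()

for enc in candidates:
    try:
        text = raw.decode(enc)
    except UnicodeDecodeError:
        continue
    print('decodes without errors as %s' % enc)  # still inspect the text to see if it makes sense
    break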


Answer 9

This site has python code for recognizing ascii, encoding with boms, and utf8 no bom: https://unicodebook.readthedocs.io/guess_encoding.html. Read file into byte array (data): http://www.codecodex.com/wiki/Read_a_file_into_a_byte_array. Here’s an example. I’m in osx.

#!/usr/bin/python                                                                                                  

import sys

def isUTF8(data):
    try:
        decoded = data.decode('UTF-8')
    except UnicodeDecodeError:
        return False
    else:
        for ch in decoded:
            if 0xD800 <= ord(ch) <= 0xDFFF:
                return False
        return True

def get_bytes_from_file(filename):
    return open(filename, "rb").read()

filename = sys.argv[1]
data = get_bytes_from_file(filename)
result = isUTF8(data)
print(result)


PS /Users/js> ./isutf8.py hi.txt                                                                                     
True

How do I get the line count of a large file cheaply in Python?

Question: How do I get the line count of a large file cheaply in Python?

I need to get a line count of a large file (hundreds of thousands of lines) in python. What is the most efficient way both memory- and time-wise?

At the moment I do:

def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1

Is it possible to do any better?


Answer 0

You can’t get any better than that.

After all, any solution will have to read the entire file, figure out how many \n you have, and return that result.

Do you have a better way of doing that without reading the entire file? Not sure… The best solution will always be I/O-bound, best you can do is make sure you don’t use unnecessary memory, but it looks like you have that covered.


Answer 1

One line, probably pretty fast:

num_lines = sum(1 for line in open('myfile.txt'))
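The file opened inline is never explicitly closed; an equivalent sketch that closes it deterministically:

with open('myfile.txt') as f:
    num_lines = sum(1 for _ in f)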

Answer 2

I believe that a memory mapped file will be the fastest solution. I tried four functions: the function posted by the OP (opcount); a simple iteration over the lines in the file (simplecount); readline with a memory-mapped file (mmap) (mapcount); and the buffer read solution offered by Mykola Kharechko (bufcount).

I ran each function five times, and calculated the average run-time for a 1.2 million-line text file.

Windows XP, Python 2.5, 2GB RAM, 2 GHz AMD processor

Here are my results:

mapcount : 0.465599966049
simplecount : 0.756399965286
bufcount : 0.546800041199
opcount : 0.718600034714

Edit: numbers for Python 2.6:

mapcount : 0.471799945831
simplecount : 0.634400033951
bufcount : 0.468800067902
opcount : 0.602999973297

So the buffer read strategy seems to be the fastest for Windows/Python 2.6

Here is the code:

from __future__ import with_statement
import time
import mmap
import random
from collections import defaultdict

def mapcount(filename):
    f = open(filename, "r+")
    buf = mmap.mmap(f.fileno(), 0)
    lines = 0
    readline = buf.readline
    while readline():
        lines += 1
    return lines

def simplecount(filename):
    lines = 0
    for line in open(filename):
        lines += 1
    return lines

def bufcount(filename):
    f = open(filename)                  
    lines = 0
    buf_size = 1024 * 1024
    read_f = f.read # loop optimization

    buf = read_f(buf_size)
    while buf:
        lines += buf.count('\n')
        buf = read_f(buf_size)

    return lines

def opcount(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1


counts = defaultdict(list)

for i in range(5):
    for func in [mapcount, simplecount, bufcount, opcount]:
        start_time = time.time()
        assert func("big_file.txt") == 1209138
        counts[func].append(time.time() - start_time)

for key, vals in counts.items():
    print key.__name__, ":", sum(vals) / float(len(vals))

Answer 3

I had to post this on a similar question until my reputation score jumped a bit (thanks to whoever bumped me!).

All of these solutions ignore one way to make this run considerably faster, namely by using the unbuffered (raw) interface, using bytearrays, and doing your own buffering. (This only applies in Python 3. In Python 2, the raw interface may or may not be used by default, but in Python 3, you’ll default into Unicode.)

Using a modified version of the timing tool, I believe the following code is faster (and marginally more pythonic) than any of the solutions offered:

def rawcount(filename):
    f = open(filename, 'rb')
    lines = 0
    buf_size = 1024 * 1024
    read_f = f.raw.read

    buf = read_f(buf_size)
    while buf:
        lines += buf.count(b'\n')
        buf = read_f(buf_size)

    return lines

Using a separate generator function, this runs a smidge faster:

def _make_gen(reader):
    b = reader(1024 * 1024)
    while b:
        yield b
        b = reader(1024*1024)

def rawgencount(filename):
    f = open(filename, 'rb')
    f_gen = _make_gen(f.raw.read)
    return sum( buf.count(b'\n') for buf in f_gen )

This can be done completely with generators expressions in-line using itertools, but it gets pretty weird looking:

from itertools import (takewhile,repeat)

def rawincount(filename):
    f = open(filename, 'rb')
    bufgen = takewhile(lambda x: x, (f.raw.read(1024*1024) for _ in repeat(None)))
    return sum( buf.count(b'\n') for buf in bufgen )

Here are my timings:

function      average, s  min, s   ratio
rawincount        0.0043  0.0041   1.00
rawgencount       0.0044  0.0042   1.01
rawcount          0.0048  0.0045   1.09
bufcount          0.008   0.0068   1.64
wccount           0.01    0.0097   2.35
itercount         0.014   0.014    3.41
opcount           0.02    0.02     4.83
kylecount         0.021   0.021    5.05
simplecount       0.022   0.022    5.25
mapcount          0.037   0.031    7.46

Answer 4

You could execute a subprocess and run wc -l filename

import subprocess

def file_len(fname):
    p = subprocess.Popen(['wc', '-l', fname], stdout=subprocess.PIPE, 
                                              stderr=subprocess.PIPE)
    result, err = p.communicate()
    if p.returncode != 0:
        raise IOError(err)
    return int(result.strip().split()[0])

Answer 5

Here is a Python program that uses the multiprocessing library to distribute the line counting across cores. My test improves counting a 20-million-line file from 26 seconds to 7 seconds using an 8-core, 64-bit Windows server. Note: not using memory mapping makes things much slower.

import multiprocessing, sys, time, os, mmap
import logging, logging.handlers

def init_logger(pid):
    console_format = 'P{0} %(levelname)s %(message)s'.format(pid)
    logger = logging.getLogger()  # New logger at root level
    logger.setLevel( logging.INFO )
    logger.handlers.append( logging.StreamHandler() )
    logger.handlers[0].setFormatter( logging.Formatter( console_format, '%d/%m/%y %H:%M:%S' ) )

def getFileLineCount( queues, pid, processes, file1 ):
    init_logger(pid)
    logging.info( 'start' )

    physical_file = open(file1, "r")
    #  mmap.mmap(fileno, length[, tagname[, access[, offset]]]

    m1 = mmap.mmap( physical_file.fileno(), 0, access=mmap.ACCESS_READ )

    #work out file size to divide up line counting

    fSize = os.stat(file1).st_size
    chunk = (fSize / processes) + 1

    lines = 0

    #get where I start and stop
    _seedStart = chunk * (pid)
    _seekEnd = chunk * (pid+1)
    seekStart = int(_seedStart)
    seekEnd = int(_seekEnd)

    if seekEnd < int(_seekEnd + 1):
        seekEnd += 1

    if _seedStart < int(seekStart + 1):
        seekStart += 1

    if seekEnd > fSize:
        seekEnd = fSize

    #find where to start
    if pid > 0:
        m1.seek( seekStart )
        #read next line
        l1 = m1.readline()  # need to use readline with memory mapped files
        seekStart = m1.tell()

    #tell previous rank my seek start to make their seek end

    if pid > 0:
        queues[pid-1].put( seekStart )
    if pid < processes-1:
        seekEnd = queues[pid].get()

    m1.seek( seekStart )
    l1 = m1.readline()

    while len(l1) > 0:
        lines += 1
        l1 = m1.readline()
        if m1.tell() > seekEnd or len(l1) == 0:
            break

    logging.info( 'done' )
    # add up the results
    if pid == 0:
        for p in range(1,processes):
            lines += queues[0].get()
        queues[0].put(lines) # the total lines counted
    else:
        queues[0].put(lines)

    m1.close()
    physical_file.close()

if __name__ == '__main__':
    init_logger( 'main' )
    if len(sys.argv) > 1:
        file_name = sys.argv[1]
    else:
        logging.fatal( 'parameters required: file-name [processes]' )
        exit()

    t = time.time()
    processes = multiprocessing.cpu_count()
    if len(sys.argv) > 2:
        processes = int(sys.argv[2])
    queues=[] # a queue for each process
    for pid in range(processes):
        queues.append( multiprocessing.Queue() )
    jobs=[]
    prev_pipe = 0
    for pid in range(processes):
        p = multiprocessing.Process( target = getFileLineCount, args=(queues, pid, processes, file_name,) )
        p.start()
        jobs.append(p)

    jobs[0].join() #wait for counting to finish
    lines = queues[0].get()

    logging.info( 'finished {} Lines:{}'.format( time.time() - t, lines ) )

Answer 6

A one-line solution similar to the wc answer above, using the modern subprocess.check_output function:

import subprocess

def line_count(filename):
    return int(subprocess.check_output(['wc', '-l', filename]).split()[0])

Answer 7

I would use Python’s file object method readlines, as follows:

with open(input_file) as foo:
    lines = len(foo.readlines())

This opens the file, creates a list of lines in the file, counts the length of the list, saves that to a variable and closes the file again.


Answer 8

def file_len(full_path):
  """ Count number of lines in a file."""
  f = open(full_path)
  nr_of_lines = sum(1 for line in f)
  f.close()
  return nr_of_lines

Answer 9

Here is what I use, seems pretty clean:

import subprocess

def count_file_lines(file_path):
    """
    Counts the number of lines in a file using wc utility.
    :param file_path: path to file
    :return: int, no of lines
    """
    num = subprocess.check_output(['wc', '-l', file_path])
    num = num.split(' ')
    return int(num[0])

UPDATE: This is marginally faster than using pure python but at the cost of memory usage. Subprocess will fork a new process with the same memory footprint as the parent process while it executes your command.


Answer 10

This is the fastest thing I have found using pure python. You can use whatever amount of memory you want by setting buffer, though 2**16 appears to be a sweet spot on my computer.

from functools import partial

buffer=2**16
with open(myfile) as f:
        print sum(x.count('\n') for x in iter(partial(f.read,buffer), ''))

I found the answer in "Why is reading lines from stdin much slower in C++ than Python?" and tweaked it just a tiny bit. It's a very good read for understanding how to count lines quickly, though wc -l is still about 75% faster than anything else.


Answer 11

I got a small (4-8%) improvement with this version which re-uses a constant buffer so it should avoid any memory or GC overhead:

lines = 0
buffer = bytearray(2048)
with open(filename, 'rb') as f:  # readinto() needs a binary-mode file
    n = f.readinto(buffer)
    while n > 0:
        lines += buffer[:n].count(b'\n')  # only count the bytes actually read
        n = f.readinto(buffer)

You can play around with the buffer size and maybe see a little improvement.


Answer 12

Kyle’s answer

num_lines = sum(1 for line in open('my_file.txt'))

is probably best; an alternative is

num_lines =  len(open('my_file.txt').read().splitlines())

Here is a comparison of the performance of both:

In [20]: timeit sum(1 for line in open('Charts.ipynb'))
100000 loops, best of 3: 9.79 µs per loop

In [21]: timeit len(open('Charts.ipynb').read().splitlines())
100000 loops, best of 3: 12 µs per loop

Answer 13

One line solution:

import os
os.system("wc -l  filename")  

My snippet:

>>> os.system('wc -l *.txt')

0 bar.txt
1000 command.txt
3 test_file.txt
1003 total

Answer 14

Just to complete the above methods I tried a variant with the fileinput module:

import fileinput as fi   
def filecount(fname):
        for line in fi.input(fname):
            pass
        return fi.lineno()

And passed a 60-million-line file to all the methods stated above:

mapcount : 6.1331050396
simplecount : 4.588793993
opcount : 4.42918205261
filecount : 43.2780818939
bufcount : 0.170812129974

It's a bit of a surprise to me that fileinput performs that badly and scales far worse than all the other methods…


Answer 15

For me, this variant was the fastest:

#!/usr/bin/env python

def main():
    f = open('filename')                  
    lines = 0
    buf_size = 1024 * 1024
    read_f = f.read # loop optimization

    buf = read_f(buf_size)
    while buf:
        lines += buf.count('\n')
        buf = read_f(buf_size)

    print lines

if __name__ == '__main__':
    main()

Reasons: buffered reading is faster than reading line by line, and string.count is also very fast.


Answer 16

This code is shorter and clearer. It’s probably the best way:

num_lines = open('yourfile.ext').read().count('\n')

Answer 17

I have modified the buffer case like this:

def CountLines(filename):
    f = open(filename)
    try:
        lines = 1
        buf_size = 1024 * 1024
        read_f = f.read # loop optimization
        buf = read_f(buf_size)

        # Empty file
        if not buf:
            return 0

        while buf:
            lines += buf.count('\n')
            buf = read_f(buf_size)

        return lines
    finally:
        f.close()

Now empty files and a final line without a trailing \n are also counted correctly.


Answer 18

What about this?

import itertools

def file_len(fname):
  counts = itertools.count()
  with open(fname) as f:
    for _ in f: counts.next()
  return counts.next()

Answer 19

count = max(enumerate(open(filename)))[0]


Answer 20

print open('file.txt', 'r').read().count("\n") + 1

Answer 21

def line_count(path):
    count = 0
    with open(path) as lines:
        for count, l in enumerate(lines, start=1):
            pass
    return count

Answer 22

If one wants to get the line count cheaply in Python in Linux, I recommend this method:

import os
print os.popen("wc -l file_path").readline().split()[0]

file_path can be either an absolute or a relative path. Hope this helps.


Answer 23

How about this?

import fileinput
import sys

counter=0
for line in fileinput.input([sys.argv[1]]):
    counter+=1

fileinput.close()
print counter

Answer 24

How about this one-liner:

file_length = len(open('myfile.txt','r').read().split('\n'))

It takes 0.003 seconds on a 3900-line file, timed with this method:

def c():
  import time
  s = time.time()
  file_length = len(open('myfile.txt','r').read().split('\n'))
  print time.time() - s

Answer 25

def count_text_file_lines(path):
    with open(path, 'rt') as file:
        line_count = sum(1 for _line in file)
    return line_count

Answer 26

Simple method:

1)

>>> f = len(open("myfile.txt").readlines())
>>> f
430

2)

>>> f = open("myfile.txt").read().count('\n')
>>> f
430
>>>

3)

num_lines = len(list(open('myfile.txt')))

Answer 27

the result of opening a file is an iterator, which can be converted to a sequence, which has a length:

with open(filename) as f:
   return len(list(f))

this is more concise than your explicit loop, and avoids the enumerate.


回答 28

您可以通过os.path以下方式使用该模块:

import os
import subprocess
Number_lines = int( (subprocess.Popen( 'wc -l {0}'.format( Filename ), shell=True, stdout=subprocess.PIPE).stdout).readlines()[0].split()[0] )

,其中Filename是文件的绝对路径。

You can use the subprocess module in the following way:

import os
import subprocess
Number_lines = int( (subprocess.Popen( 'wc -l {0}'.format( Filename ), shell=True, stdout=subprocess.PIPE).stdout).readlines()[0].split()[0] )

where Filename is the absolute path of the file.


回答 29

If the file can fit into memory, then

with open(fname, 'rb') as f:  # read as bytes so the b'\n' split works
    count = len(f.read().split(b'\n')) - 1