标签归档:text

如何使用Python将文本文件读取到列表或数组中

问题:如何使用Python将文本文件读取到列表或数组中

我正在尝试将文本文件的行读入python中的列表或数组中。创建后,我只需要能够单独访问列表或数组中的任何项目。

文本文件的格式如下:

0,0,200,0,53,1,0,255,...,0.

在上面 ... 所在的位置,实际的文本文件中还有数百或数千个项目。

我正在使用以下代码尝试将文件读入列表:

text_file = open("filename.dat", "r")
lines = text_file.readlines()
print lines
print len(lines)
text_file.close()

我得到的输出是:

['0,0,200,0,53,1,0,255,...,0.']
1

显然,它把整个文件读成了一个只含单个元素的列表,而不是一个由各个项目组成的列表。我究竟做错了什么?

I am trying to read the lines of a text file into a list or array in python. I just need to be able to individually access any item in the list or array after it is created.

The text file is formatted as follows:

0,0,200,0,53,1,0,255,...,0.

Where the ... is above, the actual text file has hundreds or thousands more items.

I’m using the following code to try to read the file into a list:

text_file = open("filename.dat", "r")
lines = text_file.readlines()
print lines
print len(lines)
text_file.close()

The output I get is:

['0,0,200,0,53,1,0,255,...,0.']
1

Apparently it is reading the entire file into a list of just one item, rather than a list of individual items. What am I doing wrong?


回答 0

您必须使用 split() 把字符串拆分成一个值的列表。

所以,

lines = text_file.read().split(',')

You will have to split your string into a list of values using split()

So,

lines = text_file.read().split(',')

EDIT: I didn’t realise there would be so much traction to this. Here’s a more idiomatic approach.

import csv
with open('filename.csv', 'r') as fd:
    reader = csv.reader(fd)
    for row in reader:
        # do something
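
If the goal is simply to access each value individually afterwards, here is a minimal follow-up sketch (not from the original answer, assuming the whole file is comma-separated as in the question) that flattens every row into one list:

import csv

values = []
with open('filename.dat', newline='') as fd:
    for row in csv.reader(fd):
        values.extend(row)          # flatten every row into one flat list of strings

print(values[0])        # individual items can now be accessed by index
print(len(values))      # total number of items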

回答 1

您也可以使用numpy loadtxt

from numpy import loadtxt
lines = loadtxt("filename.dat", comments="#", delimiter=",", unpack=False)

You can also use numpy loadtxt like

from numpy import loadtxt
lines = loadtxt("filename.dat", comments="#", delimiter=",", unpack=False)
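
For reference, the object returned by loadtxt is a NumPy array (assuming the file contains only numeric values), so individual items can then be indexed directly; a small usage sketch:

from numpy import loadtxt

lines = loadtxt("filename.dat", comments="#", delimiter=",", unpack=False)
print(lines[0])      # first value
print(lines.shape)   # overall dimensions of the loaded data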

回答 2

所以您想创建一个列表列表…我们需要从一个空列表开始

list_of_lists = []

接下来,我们逐行读取文件内容

with open('data') as f:
    for line in f:
        inner_list = [elt.strip() for elt in line.split(',')]
        # in alternative, if you need to use the file content as numbers
        # inner_list = [int(elt.strip()) for elt in line.split(',')]
        list_of_lists.append(inner_list)

一个常见的用例是列式数据,但我们的存储单位是文件中的行(我们刚才是逐行读取的),因此您可能需要把这个列表的列表转置。这可以用下面的惯用写法来完成

by_cols = zip(*list_of_lists)

另一个常见的用法是为每列命名

col_names = ('apples sold', 'pears sold', 'apples revenue', 'pears revenue')
by_names = {}
for i, col_name in enumerate(col_names):
    by_names[col_name] = by_cols[i]

这样您就可以对同类数据项进行操作

 mean_apple_prices = [money/fruits for money, fruits in
                     zip(by_names['apples revenue'], by_names['apples sold'])]

我上面写的大多数内容都可以用标准库中的csv模块来加速。另一个选择是第三方模块pandas,它可以让您自动化典型数据分析的大多数环节(但有不少依赖)。


更新:在Python 2中,zip(*list_of_lists)返回一个新的(转置后的)列表的列表;但在Python 3中情况发生了变化,zip(*list_of_lists)返回的是一个不可下标访问的zip对象。

如果您需要索引访问,则可以使用

by_cols = list(zip(*list_of_lists))

这在两个Python版本中都会给您一个列表的列表。

另一方面,如果您不需要索引访问,而您想要的只是构建一个按列名称索引的字典,那么zip对象就可以了。

file = open('some_data.csv')
names = get_names(next(file))
columns = zip(*((x.strip() for x in line.split(',')) for line in file))
d = {}
for name, column in zip(names, columns): d[name] = column

So you want to create a list of lists… We need to start with an empty list

list_of_lists = []

next, we read the file content, line by line

with open('data') as f:
    for line in f:
        inner_list = [elt.strip() for elt in line.split(',')]
        # in alternative, if you need to use the file content as numbers
        # inner_list = [int(elt.strip()) for elt in line.split(',')]
        list_of_lists.append(inner_list)

A common use case is that of columnar data, but our units of storage are the rows of the file, that we have read one by one, so you may want to transpose your list of lists. This can be done with the following idiom

by_cols = zip(*list_of_lists)

Another common use is to give a name to each column

col_names = ('apples sold', 'pears sold', 'apples revenue', 'pears revenue')
by_names = {}
for i, col_name in enumerate(col_names):
    by_names[col_name] = by_cols[i]

so that you can operate on homogeneous data items

 mean_apple_prices = [money/fruits for money, fruits in
                     zip(by_names['apples revenue'], by_names['apples sold'])]

Most of what I’ve written can be speeded up using the csv module, from the standard library. Another third party module is pandas, that lets you automate most aspects of a typical data analysis (but has a number of dependencies).


Update While in Python 2 zip(*list_of_lists) returns a different (transposed) list of lists, in Python 3 the situation has changed and zip(*list_of_lists) returns a zip object that is not subscriptable.

If you need indexed access you can use

by_cols = list(zip(*list_of_lists))

that gives you a list of lists in both versions of Python.

On the other hand, if you don’t need indexed access and what you want is just to build a dictionary indexed by column names, a zip object is just fine…

file = open('some_data.csv')
names = get_names(next(file))
columns = zip(*((x.strip() for x in line.split(',')) for line in file))
d = {}
for name, column in zip(names, columns): d[name] = column
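
As a hedged illustration of the csv-module speedup mentioned above (the file name and column names are the hypothetical ones already used in this answer), the by-column dictionary can be built directly:

import csv

col_names = ('apples sold', 'pears sold', 'apples revenue', 'pears revenue')

with open('some_data.csv', newline='') as f:
    # read every row, stripping whitespace around each field
    rows = [[x.strip() for x in row] for row in csv.reader(f)]

# transpose rows into columns and pair them with their names
by_names = dict(zip(col_names, zip(*rows)))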

回答 3

这个问题问如何将文件中的逗号分隔值内容读取到可迭代列表中:

0,0,200,0,53,1,0,255,...,0.

最简单的方法是使用以下csv模块:

import csv
with open('filename.dat', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')

现在,您可以像这样轻松地对spamreader进行迭代:

for row in spamreader:
    print(', '.join(row))

有关更多示例,请参见文档

This question is asking how to read the comma-separated value contents from a file into an iterable list:

0,0,200,0,53,1,0,255,...,0.

The easiest way to do this is with the csv module as follows:

import csv
with open('filename.dat', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')

Now, you can easily iterate over spamreader like this:

for row in spamreader:
    print(', '.join(row))

See documentation for more examples.


如何修改文本文件?

问题:如何修改文本文件?

我正在使用Python,并且想在不删除或复制文件的情况下将字符串插入文本文件。我怎样才能做到这一点?

I’m using Python, and would like to insert a string into a text file without deleting or copying the file. How can I do that?


回答 0

不幸的是,如果不重写文件,就无法在文件中间插入内容。正如前面回答者所指出的,您可以向文件追加内容,或者用 seek 覆盖文件的一部分;但如果要在文件开头或中间添加内容,就必须重写整个文件。

这是操作系统层面的限制,而不是Python的问题。所有语言都一样。

我通常的做法是:从原文件读取内容,进行修改,然后写到一个名为 myfile.txt.tmp 或类似名称的新文件中。这比把整个文件读入内存要好,因为文件可能太大。临时文件写完之后,我再把它重命名为原始文件的名字。

这是一种安全可靠的做法,因为即使写文件的过程因为某种原因崩溃或中止,您手里仍然保留着未被改动的原始文件。

Unfortunately there is no way to insert into the middle of a file without re-writing it. As previous posters have indicated, you can append to a file or overwrite part of it using seek but if you want to add stuff at the beginning or the middle, you’ll have to rewrite it.

This is an operating system thing, not a Python thing. It is the same in all languages.

What I usually do is read from the file, make the modifications and write it out to a new file called myfile.txt.tmp or something like that. This is better than reading the whole file into memory because the file may be too large for that. Once the temporary file is completed, I rename it the same as the original file.

This is a good, safe way to do it because if the file write crashes or aborts for any reason, you still have your untouched original file.
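
A minimal sketch of the read/modify/rename approach described above (the file name and the inserted line are illustrative, not from the question; os.replace performs the final rename):

import os

src = 'myfile.txt'          # hypothetical original file
tmp = src + '.tmp'          # temporary file written alongside it

with open(src) as fin, open(tmp, 'w') as fout:
    for i, line in enumerate(fin):
        if i == 0:                       # e.g. insert a new line before the first line
            fout.write('inserted text\n')
        fout.write(line)

os.replace(tmp, src)   # the original stays untouched until this point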


这是从哪里来的:# -*- coding: utf-8 -*-

问题:这是从哪里来的:# -*- coding: utf-8 -*-

Python将以下内容识别为定义文件编码的指令:

# -*- coding: utf-8 -*-

我以前确实见过这种形式的指令(-*- var: value -*-)。它从何而来?完整的规范是什么?例如,值可以包含空格、特殊符号、换行符,甚至 -*- 本身吗?

我的程序将编写纯文本文件,我想使用这种格式在其中包含一些元数据。

Python recognizes the following as instruction which defines file’s encoding:

# -*- coding: utf-8 -*-

I definitely saw this kind of instructions before (-*- var: value -*-). Where does it come from? What is the full specification, e.g. can the value include spaces, special symbols, newlines, even -*- itself?

My program will be writing plain text files and I’d like to include some metadata in them using this format.


回答 0

这种指定Python文件编码的方式来自 PEP 0263 – 定义Python源代码编码。

GNU Emacs也可以识别它(请参阅Python语言参考,2.1.4编码声明),尽管我不知道它是否是第一个使用该语法的程序。

This way of specifying the encoding of a Python file comes from PEP 0263 – Defining Python Source Code Encodings.

It is also recognized by GNU Emacs (see Python Language Reference, 2.1.4 Encoding declarations), though I don’t know if it was the first program to use that syntax.


回答 1

# -*- coding: utf-8 -*-是Python 2的东西。在Python 3+中,源文件默认编码已经是UTF-8,并且该行是无用的。

请参阅:我应该在Python 3中使用编码声明吗?

pyupgrade是一个可以在代码上运行的工具,用于从Python 2中删除这些注释和其他不再有用的遗留物,例如让所有类都继承自object

# -*- coding: utf-8 -*- is a Python 2 thing. In Python 3+, the default encoding of source files is already UTF-8 and that line is useless.

See: Should I use encoding declaration in Python 3?

pyupgrade is a tool you can run on your code to remove those comments and other no-longer-useful leftovers from Python 2, like having all your classes inherit from object.


回答 2

这就是所谓的文件局部变量(file local variables),Emacs能够理解它们并据此进行设置。请参阅Emacs手册中的相应章节;您可以在文件的开头或结尾定义它们。

This is so called file local variables, that are understood by Emacs and set correspondingly. See corresponding section in Emacs manual – you can define them either in header or in footer of file
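
Since the question mentions writing plain text files with metadata in this format, here is a small illustrative sketch (not from any answer above) that parses a "-*- key: value; ... -*-" line with a regular expression; the function name and sample line are made up for the example:

import re

def parse_local_variables(first_line):
    """Extract key/value pairs from a '-*- key: value; ... -*-' block."""
    m = re.search(r'-\*-\s*(.*?)\s*-\*-', first_line)
    if not m:
        return {}
    pairs = {}
    for item in m.group(1).split(';'):
        if ':' in item:
            key, value = item.split(':', 1)
            pairs[key.strip()] = value.strip()
    return pairs

print(parse_local_variables('# -*- coding: utf-8; mode: text -*-'))
# {'coding': 'utf-8', 'mode': 'text'}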


回答 3

在PyCharm中,我会省略这一行。保留它会关闭底部的UTF-8编码指示器,并提示该编码是硬编码的。我认为您并不需要上面提到的这条PyCharm注释。

In PyCharm, I’d leave it out. It turns off the UTF-8 indicator at the bottom with a warning that the encoding is hard-coded. Don’t think you need the PyCharm comment mentioned above.


BeautifulSoup抓取可见网页文本

问题:BeautifulSoup抓取可见网页文本

基本上,我想使用BeautifulSoup只抓取网页上的可见文本。例如,这个网页就是我的测试用例。我主要想获取正文文本(文章),或许再零星获取几个标签页名称。我尝试了这个SO问题中的建议,但它返回了很多我不想要的<script>标签和html注释。我搞不清楚findAll()函数需要哪些参数,才能只获取网页上的可见文本。

那么,我应该如何查找除脚本,注释,CSS等之外的所有可见文本?

Basically, I want to use BeautifulSoup to grab strictly the visible text on a webpage. For instance, this webpage is my test case. And I mainly want to just get the body text (article) and maybe even a few tab names here and there. I have tried the suggestion in this SO question that returns lots of <script> tags and html comments which I don’t want. I can’t figure out the arguments I need for the function findAll() in order to just get the visible texts on a webpage.

So, how should I find all visible text excluding scripts, comments, css etc.?


回答 0

试试这个:

from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request


def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    return True


def text_from_html(body):
    soup = BeautifulSoup(body, 'html.parser')
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)  
    return u" ".join(t.strip() for t in visible_texts)

html = urllib.request.urlopen('http://www.nytimes.com/2009/12/21/us/21storm.html').read()
print(text_from_html(html))

Try this:

from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request


def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    return True


def text_from_html(body):
    soup = BeautifulSoup(body, 'html.parser')
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)  
    return u" ".join(t.strip() for t in visible_texts)

html = urllib.request.urlopen('http://www.nytimes.com/2009/12/21/us/21storm.html').read()
print(text_from_html(html))

回答 1

@jbochi的那个被采纳的答案对我不起作用。str()函数调用会引发异常,因为它无法对BeautifulSoup元素中的非ASCII字符进行编码。下面是把示例网页过滤成可见文本的一种更简洁的方法。

html = open('21storm.html').read()
soup = BeautifulSoup(html)
[s.extract() for s in soup(['style', 'script', '[document]', 'head', 'title'])]
visible_text = soup.getText()

The approved answer from @jbochi does not work for me. The str() function call raises an exception because it cannot encode the non-ascii characters in the BeautifulSoup element. Here is a more succinct way to filter the example web page to visible text.

html = open('21storm.html').read()
soup = BeautifulSoup(html)
[s.extract() for s in soup(['style', 'script', '[document]', 'head', 'title'])]
visible_text = soup.getText()

回答 2

import urllib
from bs4 import BeautifulSoup

url = "https://www.yahoo.com"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out

# get text
text = soup.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text.encode('utf-8'))
import urllib
from bs4 import BeautifulSoup

url = "https://www.yahoo.com"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out

# get text
text = soup.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text.encode('utf-8'))

回答 3

我完全尊重使用Beautiful Soup获取呈现的内容,但是它可能不是获取页面上呈现的内容的理想软件包。

我遇到过类似的问题,需要获取渲染后的内容,也就是在典型浏览器中可见的内容。特别是,我有许多可能不太典型的情况要处理,例如下面这个简单的例子。在这个例子中,不可显示的标签嵌套在一个style标签里,在我检查过的许多浏览器中都是不可见的。还存在其他变体,例如定义一个把display设为none的class,然后在div上使用这个class。

<html>
  <title>  Title here</title>

  <body>

    lots of text here <p> <br>
    <h1> even headings </h1>

    <style type="text/css"> 
        <div > this will not be visible </div> 
    </style>


  </body>

</html>

上面发布的一种解决方案是:

html = Utilities.ReadFile('simple.html')
soup = BeautifulSoup.BeautifulSoup(html)
texts = soup.findAll(text=True)
visible_texts = filter(visible, texts)
print(visible_texts)


[u'\n', u'\n', u'\n\n        lots of text here ', u' ', u'\n', u' even headings ', u'\n', u' this will not be visible ', u'\n', u'\n']

这个解决方案在许多情况下当然适用,总体上也做得不错,但对于上面贴出的html,它保留了那些并不会被渲染的文本。在SO上搜索之后,我找到了这两个解决方案:BeautifulSoup get_text does not strip all tags and JavaScript,以及 Rendered HTML to plain text using Python。

我尝试了这两种方案:html2text和nltk.clean_html,并对计时结果感到惊讶,所以觉得值得把它们记录下来作为答案。当然,速度在很大程度上取决于数据的内容。

这里@Helge的一个答案居然用到了nltk。

import nltk

%timeit nltk.clean_html(html)
was returning 153 us per loop

它能很好地返回去除标记后的渲染文本字符串。这个nltk模块甚至比html2text还快,尽管html2text可能更健壮。

betterHTML = html.decode(errors='ignore')
%timeit html2text.html2text(betterHTML)
%3.09 ms per loop

I completely respect using Beautiful Soup to get rendered content, but it may not be the ideal package for acquiring the rendered content on a page.

I had a similar problem to get rendered content, or the visible content in a typical browser. In particular I had many perhaps atypical cases to work with such a simple example below. In this case the non displayable tag is nested in a style tag, and is not visible in many browsers that I have checked. Other variations exist such as defining a class tag setting display to none. Then using this class for the div.

<html>
  <title>  Title here</title>

  <body>

    lots of text here <p> <br>
    <h1> even headings </h1>

    <style type="text/css"> 
        <div > this will not be visible </div> 
    </style>


  </body>

</html>

One solution posted above is:

html = Utilities.ReadFile('simple.html')
soup = BeautifulSoup.BeautifulSoup(html)
texts = soup.findAll(text=True)
visible_texts = filter(visible, texts)
print(visible_texts)


[u'\n', u'\n', u'\n\n        lots of text here ', u' ', u'\n', u' even headings ', u'\n', u' this will not be visible ', u'\n', u'\n']

This solution certainly has applications in many cases and does the job quite well generally but in the html posted above it retains the text that is not rendered. After searching SO a couple solutions came up here BeautifulSoup get_text does not strip all tags and JavaScript and here Rendered HTML to plain text using Python

I tried both these solutions: html2text and nltk.clean_html and was surprised by the timing results so thought they warranted an answer for posterity. Of course, the speeds highly depend on the contents of the data…

One answer here from @Helge was about using nltk of all things.

import nltk

%timeit nltk.clean_html(html)
was returning 153 us per loop

It worked really well to return a string with rendered html. This nltk module was faster than even html2text, though perhaps html2text is more robust.

betterHTML = html.decode(errors='ignore')
%timeit html2text.html2text(betterHTML)
%3.09 ms per loop

回答 4

如果您关心性能,这是另一种更有效的方法:

import re

INVISIBLE_ELEMS = ('style', 'script', 'head', 'title')
RE_SPACES = re.compile(r'\s{3,}')

def visible_texts(soup):
    """ get visible text from a document """
    text = ' '.join([
        s for s in soup.strings
        if s.parent.name not in INVISIBLE_ELEMS
    ])
    # collapse multiple spaces to two spaces.
    return RE_SPACES.sub('  ', text)

soup.strings是一个迭代器,它返回NavigableString,因此您可以直接检查父节点的标签名,而无需经过多层循环。

If you care about performance, here’s another more efficient way:

import re

INVISIBLE_ELEMS = ('style', 'script', 'head', 'title')
RE_SPACES = re.compile(r'\s{3,}')

def visible_texts(soup):
    """ get visible text from a document """
    text = ' '.join([
        s for s in soup.strings
        if s.parent.name not in INVISIBLE_ELEMS
    ])
    # collapse multiple spaces to two spaces.
    return RE_SPACES.sub('  ', text)

soup.strings is an iterator, and it returns NavigableString so that you can check the parent’s tag name directly, without going through multiple loops.


回答 5

标题位于<nyt_headline>标签内,该标签嵌套在一个<h1>标签和一个id为"article"的<div>标签里。

soup.findAll('nyt_headline', limit=1)

应该管用。

文章正文位于<nyt_text>标签内,该标签嵌套在一个id为"articleBody"的<div>标签里。在<nyt_text>元素内部,文本本身包含在<p>标签中,图片不在这些<p>标签内。我很难直接试验这些语法,但我预计一段可用的抓取代码大致如下。

text = soup.findAll('nyt_text', limit=1)[0]
text.findAll('p')

The title is inside an <nyt_headline> tag, which is nested inside an <h1> tag and a <div> tag with id “article”.

soup.findAll('nyt_headline', limit=1)

Should work.

The article body is inside an <nyt_text> tag, which is nested inside a <div> tag with id “articleBody”. Inside the <nyt_text> element, the text itself is contained within <p> tags. Images are not within those <p> tags. It’s difficult for me to experiment with the syntax, but I expect a working scrape to look something like this.

text = soup.findAll('nyt_text', limit=1)[0]
text.findAll('p')
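
As a hedged continuation of the sketch above (assuming a local copy of the article HTML named 21storm.html, the file name used in another answer, and that the <nyt_text> tag is actually present), the paragraph texts could be joined like this:

from bs4 import BeautifulSoup

# parse a hypothetical local copy of the article
soup = BeautifulSoup(open('21storm.html').read(), 'html.parser')

article = soup.findAll('nyt_text', limit=1)[0]            # the article container
body_text = "\n".join(p.get_text(strip=True) for p in article.findAll('p'))
print(body_text[:300])                                     # first few hundred characters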

回答 6

虽然总体上我完全建议使用Beautiful Soup,但如果有人出于某种原因想要显示格式错误的html(例如您只有网页的一个片段或一行)中的可见部分,下面的代码会删除<和>标签之间的内容:

import re   ## only use with malformed html - this is not efficient
def display_visible_html_using_re(text):             
    return(re.sub("(\<.*?\>)", "",text))

While, I would completely suggest using beautiful-soup in general, if anyone is looking to display the visible parts of a malformed html (e.g. where you have just a segment or line of a web-page) for whatever reason, the following will remove content between < and > tags:

import re   ## only use with malformed html - this is not efficient
def display_visible_html_using_re(text):             
    return(re.sub("(\<.*?\>)", "",text))
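
For example, applied to a small made-up fragment (continuing directly from the function defined above):

snippet = '<p>Hello <b>world</b>!</p>'
print(display_visible_html_using_re(snippet))   # prints: Hello world!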

回答 7

使用BeautifulSoup,这是用最少的代码只获取字符串、不带空行和杂乱内容的最简单方法。

tag = <Parent_Tag_that_contains_the_data>
soup = BeautifulSoup(tag, 'html.parser')

for i in soup.stripped_strings:
    print repr(i)

Using BeautifulSoup the easiest way with less code to just get the strings, without empty lines and crap.

tag = <Parent_Tag_that_contains_the_data>
soup = BeautifulSoup(tag, 'html.parser')

for i in soup.stripped_strings:
    print repr(i)

回答 8

处理这种情况的最简单方法是使用getattr()。您可以根据需要调整此示例:

from bs4 import BeautifulSoup

source_html = """
<span class="ratingsDisplay">
    <a class="ratingNumber" href="https://www.youtube.com/watch?v=oHg5SJYRHA0" target="_blank" rel="noopener">
        <span class="ratingsContent">3.7</span>
    </a>
</span>
"""

soup = BeautifulSoup(source_html, "lxml")
my_ratings = getattr(soup.find('span', {"class": "ratingsContent"}), "text", None)
print(my_ratings)

当标记对象<span class="ratingsContent">3.7</span>存在时,它会在其中找到文本元素"3.7";当它不存在时,则默认返回None。

getattr(object, name[, default])

返回对象的命名属性的值。名称必须是字符串。如果字符串是对象属性之一的名称,则结果是该属性的值。例如,getattr(x,’foobar’)等同于x.foobar。如果命名属性不存在,则返回默认值(如果提供),否则引发AttributeError。

The simplest way to handle this case is by using getattr(). You can adapt this example to your needs:

from bs4 import BeautifulSoup

source_html = """
<span class="ratingsDisplay">
    <a class="ratingNumber" href="https://www.youtube.com/watch?v=oHg5SJYRHA0" target="_blank" rel="noopener">
        <span class="ratingsContent">3.7</span>
    </a>
</span>
"""

soup = BeautifulSoup(source_html, "lxml")
my_ratings = getattr(soup.find('span', {"class": "ratingsContent"}), "text", None)
print(my_ratings)

This will find the text element,"3.7", within the tag object <span class="ratingsContent">3.7</span> when it exists, however, default to NoneType when it does not.

getattr(object, name[, default])

Return the value of the named attribute of object. name must be a string. If the string is the name of one of the object’s attributes, the result is the value of that attribute. For example, getattr(x, ‘foobar’) is equivalent to x.foobar. If the named attribute does not exist, default is returned if provided, otherwise, AttributeError is raised.


回答 9

from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request
import re
import ssl

def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    if re.match(r"[\n]+",str(element)): return False
    return True
def text_from_html(url):
    body = urllib.request.urlopen(url,context=ssl._create_unverified_context()).read()
    soup = BeautifulSoup(body ,"lxml")
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)  
    text = u",".join(t.strip() for t in visible_texts)
    text = text.lstrip().rstrip()
    text = text.split(',')
    clean_text = ''
    for sen in text:
        if sen:
            sen = sen.rstrip().lstrip()
            clean_text += sen+','
    return clean_text
url = 'http://www.nytimes.com/2009/12/21/us/21storm.html'
print(text_from_html(url))
from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request
import re
import ssl

def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    if re.match(r"[\n]+",str(element)): return False
    return True
def text_from_html(url):
    body = urllib.request.urlopen(url,context=ssl._create_unverified_context()).read()
    soup = BeautifulSoup(body ,"lxml")
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)  
    text = u",".join(t.strip() for t in visible_texts)
    text = text.lstrip().rstrip()
    text = text.split(',')
    clean_text = ''
    for sen in text:
        if sen:
            sen = sen.rstrip().lstrip()
            clean_text += sen+','
    return clean_text
url = 'http://www.nytimes.com/2009/12/21/us/21storm.html'
print(text_from_html(url))

如何将文本分成句子?

问题:如何将文本分成句子?

我有一个文本文件。我需要得到一个句子清单。

如何实现呢?有很多微妙之处,例如缩写中使用了点。

我的旧正则表达式效果很差:

re.compile('(\. |^|!|\?)([A-Z][^;↑\.<>@\^&/\[\]]*(\.|!|\?) )',re.M)

I have a text file. I need to get a list of sentences.

How can this be implemented? There are a lot of subtleties, such as a dot being used in abbreviations.

My old regular expression works badly:

re.compile('(\. |^|!|\?)([A-Z][^;↑\.<>@\^&/\[\]]*(\.|!|\?) )',re.M)

回答 0

自然语言工具包(nltk.org)提供了您需要的功能。这篇邮件组帖子表明可以这样做:

import nltk.data

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
fp = open("test.txt")
data = fp.read()
print '\n-----\n'.join(tokenizer.tokenize(data))

(我还没有尝试过!)

The Natural Language Toolkit (nltk.org) has what you need. This group posting indicates this does it:

import nltk.data

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
fp = open("test.txt")
data = fp.read()
print '\n-----\n'.join(tokenizer.tokenize(data))

(I haven’t tried it!)
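
Note (not part of the original answer): the pre-trained Punkt model has to be available locally for nltk.data.load to find 'tokenizers/punkt/english.pickle'; if it is missing, it can usually be fetched once with:

import nltk
nltk.download('punkt')   # downloads the Punkt sentence tokenizer data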


回答 1

这个函数可以在大约0.1秒内把《哈克贝利·费恩历险记》的全文拆分成句子,并能处理许多让句子解析变得不那么简单的棘手情况,例如:"Mr. John Johnson Jr. was born in the U.S.A but earned his Ph.D. in Israel before joining Nike Inc. as an engineer. He also worked at craigslist.org as a business analyst."

# -*- coding: utf-8 -*-
import re
alphabets= "([A-Za-z])"
prefixes = "(Mr|St|Mrs|Ms|Dr)[.]"
suffixes = "(Inc|Ltd|Jr|Sr|Co)"
starters = "(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever)"
acronyms = "([A-Z][.][A-Z][.](?:[A-Z][.])?)"
websites = "[.](com|net|org|io|gov)"

def split_into_sentences(text):
    text = " " + text + "  "
    text = text.replace("\n"," ")
    text = re.sub(prefixes,"\\1<prd>",text)
    text = re.sub(websites,"<prd>\\1",text)
    if "Ph.D" in text: text = text.replace("Ph.D.","Ph<prd>D<prd>")
    text = re.sub("\s" + alphabets + "[.] "," \\1<prd> ",text)
    text = re.sub(acronyms+" "+starters,"\\1<stop> \\2",text)
    text = re.sub(alphabets + "[.]" + alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>\\3<prd>",text)
    text = re.sub(alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>",text)
    text = re.sub(" "+suffixes+"[.] "+starters," \\1<stop> \\2",text)
    text = re.sub(" "+suffixes+"[.]"," \\1<prd>",text)
    text = re.sub(" " + alphabets + "[.]"," \\1<prd>",text)
    if "”" in text: text = text.replace(".”","”.")
    if "\"" in text: text = text.replace(".\"","\".")
    if "!" in text: text = text.replace("!\"","\"!")
    if "?" in text: text = text.replace("?\"","\"?")
    text = text.replace(".",".<stop>")
    text = text.replace("?","?<stop>")
    text = text.replace("!","!<stop>")
    text = text.replace("<prd>",".")
    sentences = text.split("<stop>")
    sentences = sentences[:-1]
    sentences = [s.strip() for s in sentences]
    return sentences

This function can split the entire text of Huckleberry Finn into sentences in about 0.1 seconds and handles many of the more painful edge cases that make sentence parsing non-trivial e.g. “Mr. John Johnson Jr. was born in the U.S.A but earned his Ph.D. in Israel before joining Nike Inc. as an engineer. He also worked at craigslist.org as a business analyst.”

# -*- coding: utf-8 -*-
import re
alphabets= "([A-Za-z])"
prefixes = "(Mr|St|Mrs|Ms|Dr)[.]"
suffixes = "(Inc|Ltd|Jr|Sr|Co)"
starters = "(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever)"
acronyms = "([A-Z][.][A-Z][.](?:[A-Z][.])?)"
websites = "[.](com|net|org|io|gov)"

def split_into_sentences(text):
    text = " " + text + "  "
    text = text.replace("\n"," ")
    text = re.sub(prefixes,"\\1<prd>",text)
    text = re.sub(websites,"<prd>\\1",text)
    if "Ph.D" in text: text = text.replace("Ph.D.","Ph<prd>D<prd>")
    text = re.sub("\s" + alphabets + "[.] "," \\1<prd> ",text)
    text = re.sub(acronyms+" "+starters,"\\1<stop> \\2",text)
    text = re.sub(alphabets + "[.]" + alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>\\3<prd>",text)
    text = re.sub(alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>",text)
    text = re.sub(" "+suffixes+"[.] "+starters," \\1<stop> \\2",text)
    text = re.sub(" "+suffixes+"[.]"," \\1<prd>",text)
    text = re.sub(" " + alphabets + "[.]"," \\1<prd>",text)
    if "”" in text: text = text.replace(".”","”.")
    if "\"" in text: text = text.replace(".\"","\".")
    if "!" in text: text = text.replace("!\"","\"!")
    if "?" in text: text = text.replace("?\"","\"?")
    text = text.replace(".",".<stop>")
    text = text.replace("?","?<stop>")
    text = text.replace("!","!<stop>")
    text = text.replace("<prd>",".")
    sentences = text.split("<stop>")
    sentences = sentences[:-1]
    sentences = [s.strip() for s in sentences]
    return sentences
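
A quick usage sketch (my addition), feeding the function above the example sentence quoted in this answer:

text = ("Mr. John Johnson Jr. was born in the U.S.A but earned his Ph.D. in "
        "Israel before joining Nike Inc. as an engineer. He also worked at "
        "craigslist.org as a business analyst.")
for sentence in split_into_sentences(text):
    print(sentence)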

回答 2

除了使用正则表达式将文本拆分为句子外,还可以使用nltk库。

>>> from nltk import tokenize
>>> p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."

>>> tokenize.sent_tokenize(p)
['Good morning Dr. Adams.', 'The patient is waiting for you in room number 3.']

参考:https://stackoverflow.com/a/9474645/2877052

Instead of using regex for spliting the text into sentences, you can also use nltk library.

>>> from nltk import tokenize
>>> p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."

>>> tokenize.sent_tokenize(p)
['Good morning Dr. Adams.', 'The patient is waiting for you in room number 3.']

ref: https://stackoverflow.com/a/9474645/2877052


回答 3

您可以尝试使用Spacy代替正则表达式。我用它就可以了。

import spacy
nlp = spacy.load('en')

text = '''Your text here'''
tokens = nlp(text)

for sent in tokens.sents:
    print(sent.string.strip())

You can try using Spacy instead of regex. I use it and it does the job.

import spacy
nlp = spacy.load('en')

text = '''Your text here'''
tokens = nlp(text)

for sent in tokens.sents:
    print(sent.string.strip())

回答 4

这是一种折中的方法,不依赖任何外部库。我使用列表推导来排除缩写与句子终止符之间的重叠,以及排除终止符各种变体之间的重叠,例如:'.' 与 '."'。

abbreviations = {'dr.': 'doctor', 'mr.': 'mister', 'bro.': 'brother', 'bro': 'brother', 'mrs.': 'mistress', 'ms.': 'miss', 'jr.': 'junior', 'sr.': 'senior',
                 'i.e.': 'for example', 'e.g.': 'for example', 'vs.': 'versus'}
terminators = ['.', '!', '?']
wrappers = ['"', "'", ')', ']', '}']


def find_sentences(paragraph):
   end = True
   sentences = []
   while end > -1:
       end = find_sentence_end(paragraph)
       if end > -1:
           sentences.append(paragraph[end:].strip())
           paragraph = paragraph[:end]
   sentences.append(paragraph)
   sentences.reverse()
   return sentences


def find_sentence_end(paragraph):
    [possible_endings, contraction_locations] = [[], []]
    contractions = abbreviations.keys()
    sentence_terminators = terminators + [terminator + wrapper for wrapper in wrappers for terminator in terminators]
    for sentence_terminator in sentence_terminators:
        t_indices = list(find_all(paragraph, sentence_terminator))
        possible_endings.extend(([] if not len(t_indices) else [[i, len(sentence_terminator)] for i in t_indices]))
    for contraction in contractions:
        c_indices = list(find_all(paragraph, contraction))
        contraction_locations.extend(([] if not len(c_indices) else [i + len(contraction) for i in c_indices]))
    possible_endings = [pe for pe in possible_endings if pe[0] + pe[1] not in contraction_locations]
    if len(paragraph) in [pe[0] + pe[1] for pe in possible_endings]:
        max_end_start = max([pe[0] for pe in possible_endings])
        possible_endings = [pe for pe in possible_endings if pe[0] != max_end_start]
    possible_endings = [pe[0] + pe[1] for pe in possible_endings if sum(pe) > len(paragraph) or (sum(pe) < len(paragraph) and paragraph[sum(pe)] == ' ')]
    end = (-1 if not len(possible_endings) else max(possible_endings))
    return end


def find_all(a_str, sub):
    start = 0
    while True:
        start = a_str.find(sub, start)
        if start == -1:
            return
        yield start
        start += len(sub)

我从以下条目中使用了Karl的find_all函数: 查找Python中所有出现的子字符串

Here is a middle of the road approach that doesn’t rely on any external libraries. I use list comprehension to exclude overlaps between abbreviations and terminators as well as to exclude overlaps between variations on terminations, for example: ‘.’ vs. ‘.”‘

abbreviations = {'dr.': 'doctor', 'mr.': 'mister', 'bro.': 'brother', 'bro': 'brother', 'mrs.': 'mistress', 'ms.': 'miss', 'jr.': 'junior', 'sr.': 'senior',
                 'i.e.': 'for example', 'e.g.': 'for example', 'vs.': 'versus'}
terminators = ['.', '!', '?']
wrappers = ['"', "'", ')', ']', '}']


def find_sentences(paragraph):
   end = True
   sentences = []
   while end > -1:
       end = find_sentence_end(paragraph)
       if end > -1:
           sentences.append(paragraph[end:].strip())
           paragraph = paragraph[:end]
   sentences.append(paragraph)
   sentences.reverse()
   return sentences


def find_sentence_end(paragraph):
    [possible_endings, contraction_locations] = [[], []]
    contractions = abbreviations.keys()
    sentence_terminators = terminators + [terminator + wrapper for wrapper in wrappers for terminator in terminators]
    for sentence_terminator in sentence_terminators:
        t_indices = list(find_all(paragraph, sentence_terminator))
        possible_endings.extend(([] if not len(t_indices) else [[i, len(sentence_terminator)] for i in t_indices]))
    for contraction in contractions:
        c_indices = list(find_all(paragraph, contraction))
        contraction_locations.extend(([] if not len(c_indices) else [i + len(contraction) for i in c_indices]))
    possible_endings = [pe for pe in possible_endings if pe[0] + pe[1] not in contraction_locations]
    if len(paragraph) in [pe[0] + pe[1] for pe in possible_endings]:
        max_end_start = max([pe[0] for pe in possible_endings])
        possible_endings = [pe for pe in possible_endings if pe[0] != max_end_start]
    possible_endings = [pe[0] + pe[1] for pe in possible_endings if sum(pe) > len(paragraph) or (sum(pe) < len(paragraph) and paragraph[sum(pe)] == ' ')]
    end = (-1 if not len(possible_endings) else max(possible_endings))
    return end


def find_all(a_str, sub):
    start = 0
    while True:
        start = a_str.find(sub, start)
        if start == -1:
            return
        yield start
        start += len(sub)

I used Karl’s find_all function from this entry: Find all occurrences of a substring in Python


回答 5

对于简单的情况(句子正常终止),这应该起作用:

import re
text = ''.join(open('somefile.txt').readlines())
sentences = re.split(r' *[\.\?!][\'"\)\]]* *', text)

这个正则表达式本质上是 *\. +,它匹配左边有0个或多个空格、右边有1个或多个空格的句点(以防止像 re.split 中的句点那样的东西被当成句子分界)。

显然,这不是最健壮的解决方案,但在大多数情况下都能很好地工作。唯一无法覆盖的情况是缩写(也许可以遍历句子列表,检查sentences中的每个字符串是否以大写字母开头?)

For simple cases (where sentences are terminated normally), this should work:

import re
text = ''.join(open('somefile.txt').readlines())
sentences = re.split(r' *[\.\?!][\'"\)\]]* *', text)

The regex is *\. +, which matches a period surrounded by 0 or more spaces to the left and 1 or more to the right (to prevent something like the period in re.split being counted as a change in sentence).

Obviously, not the most robust solution, but it’ll do fine in most cases. The only case this won’t cover is abbreviations (perhaps run through the list of sentences and check that each string in sentences starts with a capital letter?)


回答 6

您还可以在NLTK中使用句子标记化功能:

from nltk.tokenize import sent_tokenize
sentence = "As the most quoted English writer Shakespeare has more than his share of famous quotes.  Some Shakespare famous quotes are known for their beauty, some for their everyday truths and some for their wisdom. We often talk about Shakespeare’s quotes as things the wise Bard is saying to us but, we should remember that some of his wisest words are spoken by his biggest fools. For example, both ‘neither a borrower nor a lender be,’ and ‘to thine own self be true’ are from the foolish, garrulous and quite disreputable Polonius in Hamlet."

sent_tokenize(sentence)

You can also use sentence tokenization function in NLTK:

from nltk.tokenize import sent_tokenize
sentence = "As the most quoted English writer Shakespeare has more than his share of famous quotes.  Some Shakespare famous quotes are known for their beauty, some for their everyday truths and some for their wisdom. We often talk about Shakespeare’s quotes as things the wise Bard is saying to us but, we should remember that some of his wisest words are spoken by his biggest fools. For example, both ‘neither a borrower nor a lender be,’ and ‘to thine own self be true’ are from the foolish, garrulous and quite disreputable Polonius in Hamlet."

sent_tokenize(sentence)

回答 7

@Artyom,

嗨!您可以使用这个函数为俄语(以及其他一些语言)创建一个新的分词器(tokenizer):

def russianTokenizer(text):
    result = text
    result = result.replace('.', ' . ')
    result = result.replace(' .  .  . ', ' ... ')
    result = result.replace(',', ' , ')
    result = result.replace(':', ' : ')
    result = result.replace(';', ' ; ')
    result = result.replace('!', ' ! ')
    result = result.replace('?', ' ? ')
    result = result.replace('\"', ' \" ')
    result = result.replace('\'', ' \' ')
    result = result.replace('(', ' ( ')
    result = result.replace(')', ' ) ') 
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.strip()
    result = result.split(' ')
    return result

然后以这种方式调用它:

text = 'вы выполняете поиск, используя Google SSL;'
tokens = russianTokenizer(text)

祝你好运,Marilena。

@Artyom,

Hi! You could make a new tokenizer for Russian (and some other languages) using this function:

def russianTokenizer(text):
    result = text
    result = result.replace('.', ' . ')
    result = result.replace(' .  .  . ', ' ... ')
    result = result.replace(',', ' , ')
    result = result.replace(':', ' : ')
    result = result.replace(';', ' ; ')
    result = result.replace('!', ' ! ')
    result = result.replace('?', ' ? ')
    result = result.replace('\"', ' \" ')
    result = result.replace('\'', ' \' ')
    result = result.replace('(', ' ( ')
    result = result.replace(')', ' ) ') 
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.replace('  ', ' ')
    result = result.strip()
    result = result.split(' ')
    return result

and then call it in this way:

text = 'вы выполняете поиск, используя Google SSL;'
tokens = russianTokenizer(text)

Good luck, Marilena.


回答 8

毫无疑问,NLTK最适合此目的。但是开始使用NLTK会很痛苦(但是一旦安装,您就可以得到回报)

因此,这是位于http://pythonicprose.blogspot.com/2009/09/python-split-paragraph-into-sentences.html的简单的基于re的代码

# split up a paragraph into sentences
# using regular expressions


def splitParagraphIntoSentences(paragraph):
    ''' break a paragraph into sentences
        and return a list '''
    import re
    # to split by multiple characters

    #   regular expressions are easiest (and fastest)
    sentenceEnders = re.compile('[.!?]')
    sentenceList = sentenceEnders.split(paragraph)
    return sentenceList


if __name__ == '__main__':
    p = """This is a sentence.  This is an excited sentence! And do you think this is a question?"""

    sentences = splitParagraphIntoSentences(p)
    for s in sentences:
        print s.strip()

#output:
#   This is a sentence
#   This is an excited sentence

#   And do you think this is a question 

No doubt that NLTK is the most suitable for the purpose. But getting started with NLTK is quite painful (But once you install it – you just reap the rewards)

So here is simple re based code available at http://pythonicprose.blogspot.com/2009/09/python-split-paragraph-into-sentences.html

# split up a paragraph into sentences
# using regular expressions


def splitParagraphIntoSentences(paragraph):
    ''' break a paragraph into sentences
        and return a list '''
    import re
    # to split by multiple characters

    #   regular expressions are easiest (and fastest)
    sentenceEnders = re.compile('[.!?]')
    sentenceList = sentenceEnders.split(paragraph)
    return sentenceList


if __name__ == '__main__':
    p = """This is a sentence.  This is an excited sentence! And do you think this is a question?"""

    sentences = splitParagraphIntoSentences(p)
    for s in sentences:
        print s.strip()

#output:
#   This is a sentence
#   This is an excited sentence

#   And do you think this is a question 

回答 9

我必须读取字幕文件并将其拆分成句子。经过预处理(例如删除.srt文件中的时间信息等)之后,变量fullFile包含了字幕文件的全部文本。下面这个粗糙的方法把它们干净利落地拆分成了句子。可能我比较幸运,句子总是(正确地)以空格结尾。可以先试试这个方法,如果出现例外情况,再添加更多的检查和处理。

# Very approximate way to split the text into sentences - Break after ? . and !
fullFile = re.sub("(\!|\?|\.) ","\\1<BRK>",fullFile)
sentences = fullFile.split("<BRK>");
sentFile = open("./sentences.out", "w+");
for line in sentences:
    sentFile.write (line);
    sentFile.write ("\n");
sentFile.close()

哦,好吧。我现在意识到,由于我的内容是西班牙语,所以我没有遇到处理"Mr. Smith"之类缩写的问题。不过,如果有人只是想要一个又快又糙的解析器……

I had to read subtitles files and split them into sentences. After pre-processing (like removing time information etc in the .srt files), the variable fullFile contained the full text of the subtitle file. The below crude way neatly split them into sentences. Probably I was lucky that the sentences always ended (correctly) with a space. Try this first and if it has any exceptions, add more checks and balances.

# Very approximate way to split the text into sentences - Break after ? . and !
fullFile = re.sub("(\!|\?|\.) ","\\1<BRK>",fullFile)
sentences = fullFile.split("<BRK>");
sentFile = open("./sentences.out", "w+");
for line in sentences:
    sentFile.write (line);
    sentFile.write ("\n");
sentFile.close()

Oh! well. I now realize that since my content was Spanish, I did not have the issues of dealing with “Mr. Smith” etc. Still, if someone wants a quick and dirty parser…


回答 10

我希望这对处理拉丁文、中文、阿拉伯文文本有帮助

import re

punctuation = re.compile(r"([^\d+])(\.|!|\?|;|\n|。|!|?|;|…| |!|؟|؛)+")
lines = []

with open('myData.txt','r',encoding="utf-8") as myFile:
    lines = punctuation.sub(r"\1\2<pad>", myFile.read())
    lines = [line.strip() for line in lines.split("<pad>") if line.strip()]

i hope this will help you on latin,chinese,arabic text

import re

punctuation = re.compile(r"([^\d+])(\.|!|\?|;|\n|。|!|?|;|…| |!|؟|؛)+")
lines = []

with open('myData.txt','r',encoding="utf-8") as myFile:
    lines = punctuation.sub(r"\1\2<pad>", myFile.read())
    lines = [line.strip() for line in lines.split("<pad>") if line.strip()]

回答 11

我在处理类似任务时遇到了这个问题;在跟着几个链接、用nltk做了一些练习之后,下面的代码对我来说就像魔法一样好用。

from nltk.tokenize import sent_tokenize 
  
text = "Hello everyone. Welcome to GeeksforGeeks. You are studying NLP article"
sent_tokenize(text) 

输出:

['Hello everyone.',
 'Welcome to GeeksforGeeks.',
 'You are studying NLP article']

资料来源:https://www.geeksforgeeks.org/nlp-how-tokenizing-text-sentence-words-works/

Was working on similar task and came across this query, by following few links and working on few exercises for nltk the below code worked for me like magic.

from nltk.tokenize import sent_tokenize 
  
text = "Hello everyone. Welcome to GeeksforGeeks. You are studying NLP article"
sent_tokenize(text) 

output:

['Hello everyone.',
 'Welcome to GeeksforGeeks.',
 'You are studying NLP article']

Source: https://www.geeksforgeeks.org/nlp-how-tokenizing-text-sentence-words-works/


如何将没有空格的文本分割成单词列表?

问题:如何将没有空格的文本分割成单词列表?

输入: "tableapplechairtablecupboard..."很多单词

将此类文本拆分为单词列表并获得以下内容的有效算法是什么?

输出: ["table", "apple", "chair", "table", ["cupboard", ["cup", "board"]], ...]

首先想到的是遍历所有可能的单词(从第一个字母开始),找到可能的最长单词,然后从 position=word_position+len(word) 继续。

PS:
我们有一个包含所有可能单词的列表。
单词"cupboard"可以拆成"cup"和"board",这时选最长的那个。
语言:python,但主要关注的是算法本身。

Input: "tableapplechairtablecupboard..." many words

What would be an efficient algorithm to split such text to the list of words and get:

Output: ["table", "apple", "chair", "table", ["cupboard", ["cup", "board"]], ...]

First thing that cames to mind is to go through all possible words (starting with first letter) and find the longest word possible, continue from position=word_position+len(word)

P.S.
We have a list of all possible words.
Word “cupboard” can be “cup” and “board”, select longest.
Language: python, but main thing is the algorithm itself.


回答 0

如果把朴素的算法应用到现实世界的数据上,效果不会好。下面是一个20行的算法,它利用相对词频为真实文本给出准确的结果。

(如果您想要一个不使用词频、针对原始问题的答案,那么需要先明确"最长单词"到底是什么意思:是一个20字母的单词加上十个3字母的单词更好,还是五个10字母的单词更好?一旦确定了精确的定义,只需修改定义wordcost的那一行来反映预期的含义即可。)

思路

最好的做法是对输出的分布进行建模。一个很好的初步近似是假设所有单词相互独立地分布。这样,您只需要知道所有单词的相对频率。可以合理地假设它们遵循Zipf定律,也就是说,词表中排名为n的单词出现的概率大约是1/(n log N),其中N是词典中的单词数。

确定模型之后,就可以用动态规划来推断空格的位置。最可能的句子是使各个单词概率之积最大的那一个,用动态规划很容易算出来。为避免溢出,我们不直接使用概率,而是使用定义为概率倒数的对数的代价。

代码

from math import log

# Build a cost dictionary, assuming Zipf's law and cost = -math.log(probability).
words = open("words-by-frequency.txt").read().split()
wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words))
maxword = max(len(x) for x in words)

def infer_spaces(s):
    """Uses dynamic programming to infer the location of spaces in a string
    without spaces."""

    # Find the best match for the i first characters, assuming cost has
    # been built for the i-1 first characters.
    # Returns a pair (match_cost, match_length).
    def best_match(i):
        candidates = enumerate(reversed(cost[max(0, i-maxword):i]))
        return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates)

    # Build the cost array.
    cost = [0]
    for i in range(1,len(s)+1):
        c,k = best_match(i)
        cost.append(c)

    # Backtrack to recover the minimal-cost string.
    out = []
    i = len(s)
    while i>0:
        c,k = best_match(i)
        assert c == cost[i]
        out.append(s[i-k:i])
        i -= k

    return " ".join(reversed(out))

您可以使用

s = 'thumbgreenappleactiveassignmentweeklymetaphor'
print(infer_spaces(s))

结果

我使用的是我从Wikipedia的一小部分内容中拼凑出来的这个快速粗糙的12.5万词词典。

之前: thumbgreenappleactiveassignmentweeklymetaphor.
之后: thumb green apple active assignment weekly metaphor.

之前: thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot.

之后: there is masses of text information of peoples comments which is parsed from html but there are no delimited characters in them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple etc in the string i also have a large dictionary to query whether the word is reasonable so what s the fastest way of extraction thx a lot.

之前: itwasadarkandstormynighttherainfellintorrentsexceptatoccasionalintervalswhenitwascheckedbyaviolentgustofwindwhichsweptupthestreetsforitisinlondonthatoursceneliesrattlingalongthehousetopsandfiercelyagitatingthescantyflameofthelampsthatstruggledagainstthedarkness.

之后: it was a dark and stormy night the rain fell in torrents except at occasional intervals when it was checked by a violent gust of wind which swept up the streets for it is in london that our scene lies rattling along the housetops and fiercely agitating the scanty flame of the lamps that struggled against the darkness.

如您所见,它基本上是完美的。最重要的是,要确保您的单词列表来自与您实际要处理的文本相似的语料库,否则结果会很糟糕。


优化

该实现消耗线性时间和内存,因此它是相当有效的。如果需要进一步的提速,则可以从单词列表中构建后缀树,以减少候选集的大小。

如果您需要处理非常大的连续字符串,则合理的做法是拆分字符串以避免过多的内存使用。例如,您可以按10000个字符的块处理文本,并在每侧加1000个字符的边距,以避免边界效应。这将使内存使用降至最低,并且几乎可以肯定不会对质量产生任何影响。

A naive algorithm won’t give good results when applied to real-world data. Here is a 20-line algorithm that exploits relative word frequency to give accurate results for real-word text.

(If you want an answer to your original question which does not use word frequency, you need to refine what exactly is meant by “longest word”: is it better to have a 20-letter word and ten 3-letter words, or is it better to have five 10-letter words? Once you settle on a precise definition, you just have to change the line defining wordcost to reflect the intended meaning.)

The idea

The best way to proceed is to model the distribution of the output. A good first approximation is to assume all words are independently distributed. Then you only need to know the relative frequency of all words. It is reasonable to assume that they follow Zipf’s law, that is the word with rank n in the list of words has probability roughly 1/(n log N) where N is the number of words in the dictionary.

Once you have fixed the model, you can use dynamic programming to infer the position of the spaces. The most likely sentence is the one that maximizes the product of the probability of each individual word, and it’s easy to compute it with dynamic programming. Instead of directly using the probability we use a cost defined as the logarithm of the inverse of the probability to avoid overflows.

The code

from math import log

# Build a cost dictionary, assuming Zipf's law and cost = -math.log(probability).
words = open("words-by-frequency.txt").read().split()
wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words))
maxword = max(len(x) for x in words)

def infer_spaces(s):
    """Uses dynamic programming to infer the location of spaces in a string
    without spaces."""

    # Find the best match for the i first characters, assuming cost has
    # been built for the i-1 first characters.
    # Returns a pair (match_cost, match_length).
    def best_match(i):
        candidates = enumerate(reversed(cost[max(0, i-maxword):i]))
        return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates)

    # Build the cost array.
    cost = [0]
    for i in range(1,len(s)+1):
        c,k = best_match(i)
        cost.append(c)

    # Backtrack to recover the minimal-cost string.
    out = []
    i = len(s)
    while i>0:
        c,k = best_match(i)
        assert c == cost[i]
        out.append(s[i-k:i])
        i -= k

    return " ".join(reversed(out))

which you can use with

s = 'thumbgreenappleactiveassignmentweeklymetaphor'
print(infer_spaces(s))

The results

I am using this quick-and-dirty 125k-word dictionary I put together from a small subset of Wikipedia.

Before: thumbgreenappleactiveassignmentweeklymetaphor.
After: thumb green apple active assignment weekly metaphor.

Before: thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot.

After: there is masses of text information of peoples comments which is parsed from html but there are no delimited characters in them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple etc in the string i also have a large dictionary to query whether the word is reasonable so what s the fastest way of extraction thx a lot.

Before: itwasadarkandstormynighttherainfellintorrentsexceptatoccasionalintervalswhenitwascheckedbyaviolentgustofwindwhichsweptupthestreetsforitisinlondonthatoursceneliesrattlingalongthehousetopsandfiercelyagitatingthescantyflameofthelampsthatstruggledagainstthedarkness.

After: it was a dark and stormy night the rain fell in torrents except at occasional intervals when it was checked by a violent gust of wind which swept up the streets for it is in london that our scene lies rattling along the housetops and fiercely agitating the scanty flame of the lamps that struggled against the darkness.

As you can see it is essentially flawless. The most important part is to make sure your word list was trained to a corpus similar to what you will actually encounter, otherwise the results will be very bad.


Optimization

The implementation consumes a linear amount of time and memory, so it is reasonably efficient. If you need further speedups, you can build a suffix tree from the word list to reduce the size of the set of candidates.

If you need to process a very large consecutive string it would be reasonable to split the string to avoid excessive memory usage. For example you could process the text in blocks of 10000 characters plus a margin of 1000 characters on either side to avoid boundary effects. This will keep memory usage to a minimum and will have almost certainly no effect on the quality.


回答 1

基于最高票答案中的出色工作,我做了一个便于使用的pip软件包。

>>> import wordninja
>>> wordninja.split('derekanderson')
['derek', 'anderson']

要安装,请运行pip install wordninja

差别只有很小的几处:它返回的是list而不是str;它支持python3;它自带了单词列表;即使存在非字母字符(例如下划线、破折号等),它也能正确拆分。

再次感谢 Generic Human!

https://github.com/keredson/wordninja

Based on the excellent work in the top answer, I’ve created a pip package for easy use.

>>> import wordninja
>>> wordninja.split('derekanderson')
['derek', 'anderson']

To install, run pip install wordninja.

The only differences are minor. This returns a list rather than a str, it works in python3, it includes the word list and properly splits even if there are non-alpha chars (like underscores, dashes, etc).

Thanks again to Generic Human!

https://github.com/keredson/wordninja


回答 2

这是使用递归搜索的解决方案:

def find_words(instring, prefix = '', words = None):
    if not instring:
        return []
    if words is None:
        words = set()
        with open('/usr/share/dict/words') as f:
            for line in f:
                words.add(line.strip())
    if (not prefix) and (instring in words):
        return [instring]
    prefix, suffix = prefix + instring[0], instring[1:]
    solutions = []
    # Case 1: prefix in solution
    if prefix in words:
        try:
            solutions.append([prefix] + find_words(suffix, '', words))
        except ValueError:
            pass
    # Case 2: prefix not in solution
    try:
        solutions.append(find_words(suffix, prefix, words))
    except ValueError:
        pass
    if solutions:
        return sorted(solutions,
                      key = lambda solution: [len(word) for word in solution],
                      reverse = True)[0]
    else:
        raise ValueError('no solution')

print(find_words('tableapplechairtablecupboard'))
print(find_words('tableprechaun', words = set(['tab', 'table', 'leprechaun'])))

输出:

['table', 'apple', 'chair', 'table', 'cupboard']
['tab', 'leprechaun']

Here is solution using recursive search:

def find_words(instring, prefix = '', words = None):
    if not instring:
        return []
    if words is None:
        words = set()
        with open('/usr/share/dict/words') as f:
            for line in f:
                words.add(line.strip())
    if (not prefix) and (instring in words):
        return [instring]
    prefix, suffix = prefix + instring[0], instring[1:]
    solutions = []
    # Case 1: prefix in solution
    if prefix in words:
        try:
            solutions.append([prefix] + find_words(suffix, '', words))
        except ValueError:
            pass
    # Case 2: prefix not in solution
    try:
        solutions.append(find_words(suffix, prefix, words))
    except ValueError:
        pass
    if solutions:
        return sorted(solutions,
                      key = lambda solution: [len(word) for word in solution],
                      reverse = True)[0]
    else:
        raise ValueError('no solution')

print(find_words('tableapplechairtablecupboard'))
print(find_words('tableprechaun', words = set(['tab', 'table', 'leprechaun'])))

yields

['table', 'apple', 'chair', 'table', 'cupboard']
['tab', 'leprechaun']

回答 3

使用一个trie 数据结构(该结构包含可能的单词的列表),执行以下操作不会太复杂:

  1. 前进指针(在串联字符串中)
  2. 查找并将相应的节点存储在trie中
  3. 如果trie节点有子节点(例如,单词较长),请转到1。
  4. 如果到达的节点没有子节点,就说明找到了一次最长单词匹配;把这个单词(存储在节点中,或者只是在遍历trie的过程中拼接出来)添加到结果列表,把trie中的指针重置(或重置引用),然后重新开始

Using a trie data structure, which holds the list of possible words, it would not be too complicated to do the following:

  1. Advance pointer (in the concatenated string)
  2. Lookup and store the corresponding node in the trie
  3. If the trie node has children (e.g. there are longer words), go to 1.
  4. If the node reached has no children, a longest word match happened; add the word (stored in the node or just concatenated during trie traversal) to the result list, reset the pointer in the trie (or reset the reference), and start over
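
A hedged sketch of the greedy longest-match idea described above (the helper names, the small word list and the '_end' marker are all illustrative, not from the answer):

def build_trie(words):
    """Build a nested-dict trie; '_end' marks the end of a complete word."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node['_end'] = True
    return root

def greedy_split(s, trie):
    """Repeatedly take the longest dictionary word starting at the current position."""
    result, i = [], 0
    while i < len(s):
        node, j, last = trie, i, None
        while j < len(s) and s[j] in node:   # walk as deep into the trie as possible
            node = node[s[j]]
            j += 1
            if '_end' in node:
                last = j                     # remember the longest match seen so far
        if last is None:
            raise ValueError('no dictionary word starts at position %d' % i)
        result.append(s[i:last])
        i = last                             # restart right after the matched word
    return result

trie = build_trie(['table', 'apple', 'chair', 'cup', 'cupboard', 'board'])
print(greedy_split('tableapplechairtablecupboard', trie))
# ['table', 'apple', 'chair', 'table', 'cupboard']
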

回答 4

Unutbu的解决方案已经非常接近,但我觉得那段代码难以阅读,而且它没有产生预期的结果。Generic Human的解决方案的缺点是需要词频数据,并不适用于所有用例。

这是一个使用分而治之算法的简单解决方案。

  1. 它试图使单词数量最少。例如,find_words('cupboard')会返回['cupboard']而不是['cup', 'board'](假设cupboard、cup和board都在字典中)。
  2. 最优解不是唯一的,下面的实现只返回其中一个解。find_words('charactersin')可能返回['characters', 'in'],也可能返回['character', 'sin'](如下所示)。您可以很容易地修改算法以返回所有最优解。
  3. 在这个实现中,解会被记忆化(memoize)缓存,以便算法能在合理的时间内运行。

代码:

words = set()
with open('/usr/share/dict/words') as f:
    for line in f:
        words.add(line.strip())

solutions = {}
def find_words(instring):
    # First check if instring is in the dictionnary
    if instring in words:
        return [instring]
    # No... But maybe it's a result we already computed
    if instring in solutions:
        return solutions[instring]
    # Nope. Try to split the string at all position to recursively search for results
    best_solution = None
    for i in range(1, len(instring) - 1):
        part1 = find_words(instring[:i])
        part2 = find_words(instring[i:])
        # Both parts MUST have a solution
        if part1 is None or part2 is None:
            continue
        solution = part1 + part2
        # Is the solution found "better" than the previous one?
        if best_solution is None or len(solution) < len(best_solution):
            best_solution = solution
    # Remember (memoize) this solution to avoid having to recompute it
    solutions[instring] = best_solution
    return best_solution

在我的3GHz机器上,这大约需要5秒:

result = find_words("thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot")
assert(result is not None)
print ' '.join(result)

the reis masses of text information of peoples comments which is parsed from h t m l but there are no delimited character sin them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple e t c in the string i also have a large dictionary to query whether the word is reasonable so whats the fastest way of extraction t h x a lot

Unutbu’s solution was quite close but I find the code difficult to read, and it didn’t yield the expected result. Generic Human’s solution has the drawback that it needs word frequencies. Not appropriate for all use case.

Here’s a simple solution using a Divide and Conquer algorithm.

  1. It tries to minimize the number of words. E.g. find_words('cupboard') will return ['cupboard'] rather than ['cup', 'board'] (assuming that cupboard, cup and board are in the dictionary).
  2. The optimal solution is not unique; the implementation below returns one solution. find_words('charactersin') could return ['characters', 'in'] or maybe it will return ['character', 'sin'] (as seen below). You could quite easily modify the algorithm to return all optimal solutions.
  3. In this implementation solutions are memoized so that it runs in a reasonable time.

The code:

words = set()
with open('/usr/share/dict/words') as f:
    for line in f:
        words.add(line.strip())

solutions = {}
def find_words(instring):
    # First check if instring is in the dictionary
    if instring in words:
        return [instring]
    # No... But maybe it's a result we already computed
    if instring in solutions:
        return solutions[instring]
    # Nope. Try to split the string at all positions to recursively search for results
    best_solution = None
    for i in range(1, len(instring) - 1):
        part1 = find_words(instring[:i])
        part2 = find_words(instring[i:])
        # Both parts MUST have a solution
        if part1 is None or part2 is None:
            continue
        solution = part1 + part2
        # Is the solution found "better" than the previous one?
        if best_solution is None or len(solution) < len(best_solution):
            best_solution = solution
    # Remember (memoize) this solution to avoid having to recompute it
    solutions[instring] = best_solution
    return best_solution

This will take about 5 seconds on my 3GHz machine:

result = find_words("thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot")
assert(result is not None)
print ' '.join(result)

the reis masses of text information of peoples comments which is parsed from h t m l but there are no delimited character sin them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple e t c in the string i also have a large dictionary to query whether the word is reasonable so whats the fastest way of extraction t h x a lot


回答 5

https://stackoverflow.com/users/1515832/generic-human的答案很棒。但是我见过的最好的实现是彼得·诺维格本人在他的《美丽的数据》一书中写的。

在粘贴他的代码之前,让我先解释一下为什么Norvig的方法更准确(尽管在代码方面会更慢一些,也更长一些)。

1) 数据更好一些:无论是规模还是精度(他使用词频计数而不是简单的排名)。
2) 更重要的是,n-gram 背后的逻辑才真正使这个方法如此准确。

他在书中给出的示例是拆分字符串 “sitdown” 的问题。一个不使用二元组(bigram)的字符串拆分方法会比较 p('sit') * p('down') 与 p('sitdown'):如果前者更小(这种情况经常发生),它就不会拆分;但我们(在大多数情况下)希望它拆分。

但是,当您使用 bigram 模型时,可以把 p('sit down') 作为一个二元组与 p('sitdown') 比较,这时前者获胜。基本上,如果不使用二元组,就等于把被拆分出来的各个单词的概率当作相互独立的,而实际情况并非如此:某些单词更有可能一个接一个地出现。不幸的是,这些词往往也正是经常被连写在一起、让拆分器感到困惑的词。

这是数据的链接(它是 3 个独立问题的数据,分词只是其中之一。详情请阅读该章):http://norvig.com/ngrams/

这是代码的链接:http://norvig.com/ngrams/ngrams.py

这些链接已经建立了一段时间,但是无论如何我都会在此处复制粘贴代码的分段部分

import re, string, random, glob, operator, heapq
from collections import defaultdict
from math import log10

def memo(f):
    "Memoize function f."
    table = {}
    def fmemo(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    fmemo.memo = table
    return fmemo

def test(verbose=None):
    """Run some tests, taken from the chapter.
    Since the hillclimbing algorithm is randomized, some tests may fail."""
    import doctest
    print 'Running tests...'
    doctest.testfile('ngrams-test.txt', verbose=verbose)

################ Word Segmentation (p. 223)

@memo
def segment(text):
    "Return a list of words that is the best segmentation of text."
    if not text: return []
    candidates = ([first]+segment(rem) for first,rem in splits(text))
    return max(candidates, key=Pwords)

def splits(text, L=20):
    "Return a list of all possible (first, rem) pairs, len(first)<=L."
    return [(text[:i+1], text[i+1:]) 
            for i in range(min(len(text), L))]

def Pwords(words): 
    "The Naive Bayes probability of a sequence of words."
    return product(Pw(w) for w in words)

#### Support functions (p. 224)

def product(nums):
    "Return the product of a sequence of numbers."
    return reduce(operator.mul, nums, 1)

class Pdist(dict):
    "A probability distribution estimated from counts in datafile."
    def __init__(self, data=[], N=None, missingfn=None):
        for key,count in data:
            self[key] = self.get(key, 0) + int(count)
        self.N = float(N or sum(self.itervalues()))
        self.missingfn = missingfn or (lambda k, N: 1./N)
    def __call__(self, key): 
        if key in self: return self[key]/self.N  
        else: return self.missingfn(key, self.N)

def datafile(name, sep='\t'):
    "Read key,value pairs from file."
    for line in file(name):
        yield line.split(sep)

def avoid_long_words(key, N):
    "Estimate the probability of an unknown word."
    return 10./(N * 10**len(key))

N = 1024908267229 ## Number of tokens

Pw  = Pdist(datafile('count_1w.txt'), N, avoid_long_words)

#### segment2: second version, with bigram counts, (p. 226-227)

def cPw(word, prev):
    "Conditional probability of word, given previous word."
    try:
        return P2w[prev + ' ' + word]/float(Pw[prev])
    except KeyError:
        return Pw(word)

P2w = Pdist(datafile('count_2w.txt'), N)

@memo 
def segment2(text, prev='<S>'): 
    "Return (log P(words), words), where words is the best segmentation." 
    if not text: return 0.0, [] 
    candidates = [combine(log10(cPw(first, prev)), first, segment2(rem, first)) 
                  for first,rem in splits(text)] 
    return max(candidates) 

def combine(Pfirst, first, (Prem, rem)): 
    "Combine first and rem results into one (probability, words) pair." 
    return Pfirst+Prem, [first]+rem 

The answer by https://stackoverflow.com/users/1515832/generic-human is great. But the best implementation of this I’ve ever seen was written by Peter Norvig himself in his book ‘Beautiful Data’.

Before I paste his code, let me expand on why Norvig’s method is more accurate (although a little slower and longer in terms of code).

1) The data is a bit better – both in terms of size and in terms of precision (he uses a word count rather than a simple ranking).
2) More importantly, it’s the logic behind n-grams that really makes the approach so accurate.

The example he provides in his book is the problem of splitting a string ‘sitdown’. Now a non-bigram method of string split would consider p(‘sit’) * p (‘down’), and if this less than the p(‘sitdown’) – which will be the case quite often – it will NOT split it, but we’d want it to (most of the time).

However when you have the bigram model you could value p(‘sit down’) as a bigram vs p(‘sitdown’) and the former wins. Basically, if you don’t use bigrams, it treats the probability of the words you’re splitting as independent, which is not the case, some words are more likely to appear one after the other. Unfortunately those are also the words that are often stuck together in a lot of instances and confuses the splitter.
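
To make that intuition concrete, here is a toy calculation with probabilities invented purely for illustration (they are not taken from Norvig's data):

# Made-up numbers, purely illustrative.
p_sit, p_down = 1e-4, 2e-4          # unigram probabilities
p_sitdown_unknown = 1e-7            # probability estimate assigned to the unseen word "sitdown"
p_down_given_sit = 0.1              # bigram: "down" is very likely right after "sit"

# Unigram (independence) scoring: 2e-08 < 1e-07, so the splitter keeps "sitdown".
print(p_sit * p_down > p_sitdown_unknown)            # False
# Bigram scoring: P(sit) * P(down | sit) = 1e-05, so "sit down" wins.
print(p_sit * p_down_given_sit > p_sitdown_unknown)  # True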

Here’s the link to the data (it’s data for 3 separate problems and segmentation is only one. Please read the chapter for details): http://norvig.com/ngrams/

and here’s the link to the code: http://norvig.com/ngrams/ngrams.py

These links have been up a while, but I’ll copy paste the segmentation part of the code here anyway

import re, string, random, glob, operator, heapq
from collections import defaultdict
from math import log10

def memo(f):
    "Memoize function f."
    table = {}
    def fmemo(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    fmemo.memo = table
    return fmemo

def test(verbose=None):
    """Run some tests, taken from the chapter.
    Since the hillclimbing algorithm is randomized, some tests may fail."""
    import doctest
    print 'Running tests...'
    doctest.testfile('ngrams-test.txt', verbose=verbose)

################ Word Segmentation (p. 223)

@memo
def segment(text):
    "Return a list of words that is the best segmentation of text."
    if not text: return []
    candidates = ([first]+segment(rem) for first,rem in splits(text))
    return max(candidates, key=Pwords)

def splits(text, L=20):
    "Return a list of all possible (first, rem) pairs, len(first)<=L."
    return [(text[:i+1], text[i+1:]) 
            for i in range(min(len(text), L))]

def Pwords(words): 
    "The Naive Bayes probability of a sequence of words."
    return product(Pw(w) for w in words)

#### Support functions (p. 224)

def product(nums):
    "Return the product of a sequence of numbers."
    return reduce(operator.mul, nums, 1)

class Pdist(dict):
    "A probability distribution estimated from counts in datafile."
    def __init__(self, data=[], N=None, missingfn=None):
        for key,count in data:
            self[key] = self.get(key, 0) + int(count)
        self.N = float(N or sum(self.itervalues()))
        self.missingfn = missingfn or (lambda k, N: 1./N)
    def __call__(self, key): 
        if key in self: return self[key]/self.N  
        else: return self.missingfn(key, self.N)

def datafile(name, sep='\t'):
    "Read key,value pairs from file."
    for line in file(name):
        yield line.split(sep)

def avoid_long_words(key, N):
    "Estimate the probability of an unknown word."
    return 10./(N * 10**len(key))

N = 1024908267229 ## Number of tokens

Pw  = Pdist(datafile('count_1w.txt'), N, avoid_long_words)

#### segment2: second version, with bigram counts, (p. 226-227)

def cPw(word, prev):
    "Conditional probability of word, given previous word."
    try:
        return P2w[prev + ' ' + word]/float(Pw[prev])
    except KeyError:
        return Pw(word)

P2w = Pdist(datafile('count_2w.txt'), N)

@memo 
def segment2(text, prev='<S>'): 
    "Return (log P(words), words), where words is the best segmentation." 
    if not text: return 0.0, [] 
    candidates = [combine(log10(cPw(first, prev)), first, segment2(rem, first)) 
                  for first,rem in splits(text)] 
    return max(candidates) 

def combine(Pfirst, first, (Prem, rem)): 
    "Combine first and rem results into one (probability, words) pair." 
    return Pfirst+Prem, [first]+rem 

回答 6

这是被采纳答案的 JavaScript 版本(需要 node.js,以及来自 https://github.com/keredson/wordninja 的文件 “wordninja_words.txt”):

var fs = require("fs");

var splitRegex = new RegExp("[^a-zA-Z0-9']+", "g");
var maxWordLen = 0;
var wordCost = {};

fs.readFile("./wordninja_words.txt", 'utf8', function(err, data) {
    if (err) {
        throw err;
    }
    var words = data.split('\n');
    words.forEach(function(word, index) {
        wordCost[word] = Math.log((index + 1) * Math.log(words.length));
    })
    words.forEach(function(word) {
        if (word.length > maxWordLen)
            maxWordLen = word.length;
    });
    console.log(maxWordLen)
    splitRegex = new RegExp("[^a-zA-Z0-9']+", "g");
    console.log(split(process.argv[2]));
});


function split(s) {
    var list = [];
    s.split(splitRegex).forEach(function(sub) {
        _split(sub).forEach(function(word) {
            list.push(word);
        })
    })
    return list;
}
module.exports = split;


function _split(s) {
    var cost = [0];

    function best_match(i) {
        var candidates = cost.slice(Math.max(0, i - maxWordLen), i).reverse();
        var minPair = [Number.MAX_SAFE_INTEGER, 0];
        candidates.forEach(function(c, k) {
            if (wordCost[s.substring(i - k - 1, i).toLowerCase()]) {
                var ccost = c + wordCost[s.substring(i - k - 1, i).toLowerCase()];
            } else {
                var ccost = Number.MAX_SAFE_INTEGER;
            }
            if (ccost < minPair[0]) {
                minPair = [ccost, k + 1];
            }
        })
        return minPair;
    }

    for (var i = 1; i < s.length + 1; i++) {
        cost.push(best_match(i)[0]);
    }

    var out = [];
    i = s.length;
    while (i > 0) {
        var c = best_match(i)[0];
        var k = best_match(i)[1];
        if (c != cost[i])
            console.log("Alert: " + c);   // sanity check: the recomputed cost should match

        var newToken = true;
        if (s.slice(i - k, i) != "'") {
            if (out.length > 0) {
                var last = out[out.length - 1];   // JS arrays have no negative indexing
                if (last == "'s" || (/^[0-9]$/.test(s[i - 1]) && /^[0-9]$/.test(last[0]))) {
                    out[out.length - 1] = s.slice(i - k, i) + last;
                    newToken = false;
                }
            }
        }

        if (newToken) {
            out.push(s.slice(i - k, i))
        }

        i -= k

    }
    return out.reverse();
}

Here is the accepted answer translated to JavaScript (requires node.js, and the file “wordninja_words.txt” from https://github.com/keredson/wordninja):

var fs = require("fs");

var splitRegex = new RegExp("[^a-zA-Z0-9']+", "g");
var maxWordLen = 0;
var wordCost = {};

fs.readFile("./wordninja_words.txt", 'utf8', function(err, data) {
    if (err) {
        throw err;
    }
    var words = data.split('\n');
    words.forEach(function(word, index) {
        wordCost[word] = Math.log((index + 1) * Math.log(words.length));
    })
    words.forEach(function(word) {
        if (word.length > maxWordLen)
            maxWordLen = word.length;
    });
    console.log(maxWordLen)
    splitRegex = new RegExp("[^a-zA-Z0-9']+", "g");
    console.log(split(process.argv[2]));
});


function split(s) {
    var list = [];
    s.split(splitRegex).forEach(function(sub) {
        _split(sub).forEach(function(word) {
            list.push(word);
        })
    })
    return list;
}
module.exports = split;


function _split(s) {
    var cost = [0];

    function best_match(i) {
        var candidates = cost.slice(Math.max(0, i - maxWordLen), i).reverse();
        var minPair = [Number.MAX_SAFE_INTEGER, 0];
        candidates.forEach(function(c, k) {
            if (wordCost[s.substring(i - k - 1, i).toLowerCase()]) {
                var ccost = c + wordCost[s.substring(i - k - 1, i).toLowerCase()];
            } else {
                var ccost = Number.MAX_SAFE_INTEGER;
            }
            if (ccost < minPair[0]) {
                minPair = [ccost, k + 1];
            }
        })
        return minPair;
    }

    for (var i = 1; i < s.length + 1; i++) {
        cost.push(best_match(i)[0]);
    }

    var out = [];
    i = s.length;
    while (i > 0) {
        var c = best_match(i)[0];
        var k = best_match(i)[1];
        if (c != cost[i])
            console.log("Alert: " + c);   // sanity check: the recomputed cost should match

        var newToken = true;
        if (s.slice(i - k, i) != "'") {
            if (out.length > 0) {
                var last = out[out.length - 1];   // JS arrays have no negative indexing
                if (last == "'s" || (/^[0-9]$/.test(s[i - 1]) && /^[0-9]$/.test(last[0]))) {
                    out[out.length - 1] = s.slice(i - k, i) + last;
                    newToken = false;
                }
            }
        }

        if (newToken) {
            out.push(s.slice(i - k, i))
        }

        i -= k

    }
    return out.reverse();
}

回答 7

如果将单词表预编译到DFA中(这将非常慢),则匹配输入所花费的时间将与字符串的长度成比例(实际上,这仅比遍历字符串慢一点)。

实际上,这是前面提到的 trie 算法的更通用的版本。我只是为了完整性才提到它:到目前为止,还没有可以直接拿来用的 DFA 实现。RE2 可以用,但我不知道它的 Python 绑定是否允许你调整 DFA 允许的大小上限,超过该上限后它会丢弃已编译的 DFA 数据,改用 NFA 搜索。

If you precompile the wordlist into a DFA (which will be very slow), then the time it takes to match an input will be proportional to the length of the string (in fact, only a little slower than just iterating over the string).

This is effectively a more general version of the trie algorithm which was mentioned earlier. I only mention it for completeness — as of yet, there’s no DFA implementation you can just use. RE2 would work, but I don’t know if the Python bindings let you tune how large you allow a DFA to be before it just throws away the compiled DFA data and does NFA search.


回答 8

看起来相当平凡的回溯可以做到。从字符串的开头开始。向右扫描,直到您有一个字。然后,在字符串的其余部分调用该函数。如果函数一直扫描到右边而没有识别出一个单词,则该函数返回“ false”。否则,返回找到的单词以及递归调用返回的单词列表。

示例:“tableapple”。先找到“tab”,再找到“leap”,但“ple”中没有单词,“leapple”中也没有其他词。于是改为找到“table”,再找到“app”;“le”不是一个词,所以尝试“apple”,识别成功并返回。

为了尽可能长的时间,请继续前进,只发出(而不是返回)正确的解决方案;然后,根据您选择的任何标准(最大值,最小值,平均值等)选择最佳的一种

It seems like fairly mundane backtracking will do. Start at the beginning of the string. Scan right until you have a word. Then, call the function on the rest of the string. Function returns “false” if it scans all the way to the right without recognizing a word. Otherwise, returns the word it found and the list of words returned by the recursive call.

Example: “tableapple”. Finds “tab”, then “leap”, but no word in “ple”. No other word in “leapple”. Finds “table”, then “app”. “le” not a word, so tries apple, recognizes, returns.

To get longest possible, keep going, only emitting (rather than returning) correct solutions; then, choose the optimal one by any criterion you choose (maxmax, minmax, average, etc.)
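
A small Python sketch of that backtracking scheme (the dictionary below is a made-up example; None plays the role of the “false” return value, and only the first segmentation found is returned rather than all of them):

def backtrack_split(s, words):
    # Empty remainder: the whole string has been segmented successfully.
    if not s:
        return []
    # Scan right until a dictionary word is found, then recurse on the rest.
    for i in range(1, len(s) + 1):
        prefix = s[:i]
        if prefix in words:
            rest = backtrack_split(s[i:], words)
            if rest is not None:
                return [prefix] + rest
    # No prefix leads to a full segmentation: report failure.
    return None

words = {'tab', 'table', 'app', 'apple', 'leap'}
print(backtrack_split('tableapple', words))   # ['table', 'apple']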


回答 9

基于unutbu的解决方案,我实现了Java版本:

private static List<String> splitWordWithoutSpaces(String instring, String suffix) {
    if(isAWord(instring)) {
        if(suffix.length() > 0) {
            List<String> rest = splitWordWithoutSpaces(suffix, "");
            if(rest.size() > 0) {
                List<String> solutions = new LinkedList<>();
                solutions.add(instring);
                solutions.addAll(rest);
                return solutions;
            }
        } else {
            List<String> solutions = new LinkedList<>();
            solutions.add(instring);
            return solutions;
        }

    }
    if(instring.length() > 1) {
        String newString = instring.substring(0, instring.length()-1);
        suffix = instring.charAt(instring.length()-1) + suffix;
        List<String> rest = splitWordWithoutSpaces(newString, suffix);
        return rest;
    }
    return Collections.EMPTY_LIST;
}

输入: "tableapplechairtablecupboard"

输出: [table, apple, chair, table, cupboard]

输入: "tableprechaun"

输出: [tab, leprechaun]

Based on unutbu’s solution I’ve implemented a Java version:

private static List<String> splitWordWithoutSpaces(String instring, String suffix) {
    if(isAWord(instring)) {
        if(suffix.length() > 0) {
            List<String> rest = splitWordWithoutSpaces(suffix, "");
            if(rest.size() > 0) {
                List<String> solutions = new LinkedList<>();
                solutions.add(instring);
                solutions.addAll(rest);
                return solutions;
            }
        } else {
            List<String> solutions = new LinkedList<>();
            solutions.add(instring);
            return solutions;
        }

    }
    if(instring.length() > 1) {
        String newString = instring.substring(0, instring.length()-1);
        suffix = instring.charAt(instring.length()-1) + suffix;
        List<String> rest = splitWordWithoutSpaces(newString, suffix);
        return rest;
    }
    return Collections.EMPTY_LIST;
}

Input: "tableapplechairtablecupboard"

Output: [table, apple, chair, table, cupboard]

Input: "tableprechaun"

Output: [tab, leprechaun]


回答 10

对于德语,有CharSplit,它使用机器学习,并且对几个单词的字符串非常有效。

https://github.com/dtuggener/CharSplit

For the German language there is CharSplit, which uses machine learning and works pretty well for strings of a few words.

https://github.com/dtuggener/CharSplit


回答 11

作为对 @miku 关于使用 Trie 的建议的扩展,用 Python 实现一个只支持追加(append-only)的 Trie 相对简单:

class Node:
    def __init__(self, is_word=False):
        self.children = {}
        self.is_word = is_word

class TrieDictionary:
    def __init__(self, words=tuple()):
        self.root = Node()
        for word in words:
            self.add(word)

    def add(self, word):
        node = self.root
        for c in word:
            node = node.children.setdefault(c, Node())
        node.is_word = True

    def lookup(self, word, from_node=None):
        node = self.root if from_node is None else from_node
        for c in word:
            try:
                node = node.children[c]
            except KeyError:
                return None

        return node

然后,我们可以根据一组单词构建一个基于 Trie 的字典:

dictionary = {"a", "pea", "nut", "peanut", "but", "butt", "butte", "butter"}
trie_dictionary = TrieDictionary(words=dictionary)

它将产生如下所示的树(*指示单词的开头或结尾):

* -> a*
 \\\ 
  \\\-> p -> e -> a*
   \\              \-> n -> u -> t*
    \\
     \\-> b -> u -> t*
      \\             \-> t*
       \\                 \-> e*
        \\                     \-> r*
         \
          \-> n -> u -> t*

我们可以把它与选择单词的启发式规则结合起来,组成一个完整的解决方案。例如,相比较短的单词,我们更偏好较长的单词:

def using_trie_longest_word_heuristic(s):
    node = None
    possible_indexes = []

    # O(1) short-circuit if whole string is a word, doesn't go against longest-word wins
    if s in dictionary:
        return [ s ]

    for i in range(len(s)):
        # traverse the trie, char-wise to determine intermediate words
        node = trie_dictionary.lookup(s[i], from_node=node)

        # no more words start this way
        if node is None:
            # iterate words we have encountered from biggest to smallest
            for possible in possible_indexes[::-1]:
                # recurse to attempt to solve the remaining sub-string
                end_of_phrase = using_trie_longest_word_heuristic(s[possible+1:])

                # if we have a solution, return this word + our solution
                if end_of_phrase:
                    return [ s[:possible+1] ] + end_of_phrase

            # unsolvable
            break

        # if this is a leaf, append the index to the possible words list
        elif node.is_word:
            possible_indexes.append(i)

    # empty string OR unsolvable case 
    return []

我们可以这样使用此功能:

>>> using_trie_longest_word_heuristic("peanutbutter")
[ "peanut", "butter" ]

由于我们在 trie 中搜索越来越长的单词时会一直保持当前位置,因此对每个可能的解,trie 最多只遍历一次(而不是像处理 peanut 那样遍历两次:先 pea,再 peanut)。最后的短路判断使我们在最坏情况下也不必逐字符地遍历整个字符串。

最终结果仅需进行少量检查:

'peanutbutter' - not a word, go charwise
'p' - in trie, use this node
'e' - in trie, use this node
'a' - in trie and edge, store potential word and use this node
'n' - in trie, use this node
'u' - in trie, use this node
't' - in trie and edge, store potential word and use this node
'b' - not in trie from `peanut` vector
'butter' - remainder of longest is a word

该方案的一个好处是,您可以很快知道给定前缀之后是否还存在更长的单词,从而无需针对字典穷举测试各种序列组合。与其他实现相比,它也能以相对低的代价得出“无解”(unsolvable)的结论。

该方案的缺点是 trie 会占用大量内存,而且预先构建 trie 也有成本。

Expanding on @miku’s suggestion to use a Trie, an append-only Trie is relatively straight-forward to implement in python:

class Node:
    def __init__(self, is_word=False):
        self.children = {}
        self.is_word = is_word

class TrieDictionary:
    def __init__(self, words=tuple()):
        self.root = Node()
        for word in words:
            self.add(word)

    def add(self, word):
        node = self.root
        for c in word:
            node = node.children.setdefault(c, Node())
        node.is_word = True

    def lookup(self, word, from_node=None):
        node = self.root if from_node is None else from_node
        for c in word:
            try:
                node = node.children[c]
            except KeyError:
                return None

        return node

We can then build a Trie-based dictionary from a set of words:

dictionary = {"a", "pea", "nut", "peanut", "but", "butt", "butte", "butter"}
trie_dictionary = TrieDictionary(words=dictionary)

Which will produce a tree that looks like this (* indicates beginning or end of a word):

* -> a*
 \\\ 
  \\\-> p -> e -> a*
   \\              \-> n -> u -> t*
    \\
     \\-> b -> u -> t*
      \\             \-> t*
       \\                 \-> e*
        \\                     \-> r*
         \
          \-> n -> u -> t*

We can incorporate this into a solution by combining it with a heuristic about how to choose words. For example we can prefer longer words over shorter words:

def using_trie_longest_word_heuristic(s):
    node = None
    possible_indexes = []

    # O(1) short-circuit if whole string is a word, doesn't go against longest-word wins
    if s in dictionary:
        return [ s ]

    for i in range(len(s)):
        # traverse the trie, char-wise to determine intermediate words
        node = trie_dictionary.lookup(s[i], from_node=node)

        # no more words start this way
        if node is None:
            # iterate words we have encountered from biggest to smallest
            for possible in possible_indexes[::-1]:
                # recurse to attempt to solve the remaining sub-string
                end_of_phrase = using_trie_longest_word_heuristic(s[possible+1:])

                # if we have a solution, return this word + our solution
                if end_of_phrase:
                    return [ s[:possible+1] ] + end_of_phrase

            # unsolvable
            break

        # if this is a leaf, append the index to the possible words list
        elif node.is_word:
            possible_indexes.append(i)

    # empty string OR unsolvable case 
    return []

We can use this function like this:

>>> using_trie_longest_word_heuristic("peanutbutter")
[ "peanut", "butter" ]

Because we maintain our position in the Trie as we search for longer and longer words, we traverse the trie at most once per possible solution (rather than 2 times for peanut: pea, peanut). The final short-circuit saves us from walking char-wise through the string in the worst-case.

The final result is only a handful of inspections:

'peanutbutter' - not a word, go charwise
'p' - in trie, use this node
'e' - in trie, use this node
'a' - in trie and edge, store potential word and use this node
'n' - in trie, use this node
'u' - in trie, use this node
't' - in trie and edge, store potential word and use this node
'b' - not in trie from `peanut` vector
'butter' - remainder of longest is a word

A benefit of this solution is in the fact that you know very quickly if longer words exist with a given prefix, which spares the need to exhaustively test sequence combinations against a dictionary. It also makes getting to an unsolvable answer comparatively cheap to other implementations.

The downsides of this solution are a large memory footprint for the trie and the cost of building the trie up-front.


回答 12

如果您有一份详尽列出字符串中所含单词的列表:

word_list = ["table", "apple", "chair", "cupboard"]

使用列表推导式遍历该列表,找到每个单词以及它出现的次数。

string = "tableapplechairtablecupboard"

def split_string(string, word_list):

    return ("".join([(item + " ")*string.count(item.lower()) for item in word_list if item.lower() in string])).strip()

该函数返回一个字符串输出,单词按列表顺序排列:table table apple chair cupboard

If you have an exhaustive list of the words contained within the string:

word_list = ["table", "apple", "chair", "cupboard"]

Using list comprehension to iterate over the list to locate the word and how many times it appears.

string = "tableapplechairtablecupboard"

def split_string(string, word_list):

    return ("".join([(item + " ")*string.count(item.lower()) for item in word_list if item.lower() in string])).strip()


The function returns a string output of words in the order of the list: table table apple chair cupboard


回答 13

非常感谢https://github.com/keredson/wordninja/中的帮助

这是我这边用 Java 实现的同样功能的一点小小贡献。

可以把公共方法 splitContiguousWords 与另外两个方法一起放进一个类中,并在同一目录下放置 ninja_words.txt(或按编码者的选择进行修改)。然后就可以使用 splitContiguousWords 方法来达成目的。

public List<String> splitContiguousWords(String sentence) {

    String splitRegex = "[^a-zA-Z0-9']+";
    Map<String, Number> wordCost = new HashMap<>();
    List<String> dictionaryWords = IOUtils.linesFromFile("ninja_words.txt", StandardCharsets.UTF_8.name());
    double naturalLogDictionaryWordsCount = Math.log(dictionaryWords.size());
    long wordIdx = 0;
    for (String word : dictionaryWords) {
        wordCost.put(word, Math.log(++wordIdx * naturalLogDictionaryWordsCount));
    }
    int maxWordLength = Collections.max(dictionaryWords, Comparator.comparing(String::length)).length();
    List<String> splitWords = new ArrayList<>();
    for (String partSentence : sentence.split(splitRegex)) {
        splitWords.add(split(partSentence, wordCost, maxWordLength));
    }
    log.info("Split word for the sentence: {}", splitWords);
    return splitWords;
}

private String split(String partSentence, Map<String, Number> wordCost, int maxWordLength) {
    List<Pair<Number, Number>> cost = new ArrayList<>();
    cost.add(new Pair<>(Integer.valueOf(0), Integer.valueOf(0)));
    for (int index = 1; index < partSentence.length() + 1; index++) {
        cost.add(bestMatch(partSentence, cost, index, wordCost, maxWordLength));
    }
    int idx = partSentence.length();
    List<String> output = new ArrayList<>();
    while (idx > 0) {
        Pair<Number, Number> candidate = bestMatch(partSentence, cost, idx, wordCost, maxWordLength);
        Number candidateCost = candidate.getKey();
        Number candidateIndexValue = candidate.getValue();
        if (candidateCost.doubleValue() != cost.get(idx).getKey().doubleValue()) {
            throw new RuntimeException("Candidate cost unmatched; This should not be the case!");
        }
        boolean newToken = true;
        String token = partSentence.substring(idx - candidateIndexValue.intValue(), idx);
        if (token != "\'" && output.size() > 0) {
            String lastWord = output.get(output.size() - 1);
            if (lastWord.equalsIgnoreCase("\'s") ||
                    (Character.isDigit(partSentence.charAt(idx - 1)) && Character.isDigit(lastWord.charAt(0)))) {
                output.set(output.size() - 1, token + lastWord);
                newToken = false;
            }
        }
        if (newToken) {
            output.add(token);
        }
        idx -= candidateIndexValue.intValue();
    }
    return String.join(" ", Lists.reverse(output));
}


private Pair<Number, Number> bestMatch(String partSentence, List<Pair<Number, Number>> cost, int index,
                      Map<String, Number> wordCost, int maxWordLength) {
    List<Pair<Number, Number>> candidates = Lists.reverse(cost.subList(Math.max(0, index - maxWordLength), index));
    int enumerateIdx = 0;
    Pair<Number, Number> minPair = new Pair<>(Integer.MAX_VALUE, Integer.valueOf(enumerateIdx));
    for (Pair<Number, Number> pair : candidates) {
        ++enumerateIdx;
        String subsequence = partSentence.substring(index - enumerateIdx, index).toLowerCase();
        Number minCost = Integer.MAX_VALUE;
        if (wordCost.containsKey(subsequence)) {
            minCost = pair.getKey().doubleValue() + wordCost.get(subsequence).doubleValue();
        }
        if (minCost.doubleValue() < minPair.getKey().doubleValue()) {
            minPair = new Pair<>(minCost.doubleValue(), enumerateIdx);
        }
    }
    return minPair;
}

Many Thanks for the help in https://github.com/keredson/wordninja/

A small contribution of the same in Java from my side.

The public method splitContiguousWords could be embedded with the other 2 methods in the class having ninja_words.txt in same directory(or modified as per the choice of coder). And the method splitContiguousWords could be used for the purpose.

public List<String> splitContiguousWords(String sentence) {

    String splitRegex = "[^a-zA-Z0-9']+";
    Map<String, Number> wordCost = new HashMap<>();
    List<String> dictionaryWords = IOUtils.linesFromFile("ninja_words.txt", StandardCharsets.UTF_8.name());
    double naturalLogDictionaryWordsCount = Math.log(dictionaryWords.size());
    long wordIdx = 0;
    for (String word : dictionaryWords) {
        wordCost.put(word, Math.log(++wordIdx * naturalLogDictionaryWordsCount));
    }
    int maxWordLength = Collections.max(dictionaryWords, Comparator.comparing(String::length)).length();
    List<String> splitWords = new ArrayList<>();
    for (String partSentence : sentence.split(splitRegex)) {
        splitWords.add(split(partSentence, wordCost, maxWordLength));
    }
    log.info("Split word for the sentence: {}", splitWords);
    return splitWords;
}

private String split(String partSentence, Map<String, Number> wordCost, int maxWordLength) {
    List<Pair<Number, Number>> cost = new ArrayList<>();
    cost.add(new Pair<>(Integer.valueOf(0), Integer.valueOf(0)));
    for (int index = 1; index < partSentence.length() + 1; index++) {
        cost.add(bestMatch(partSentence, cost, index, wordCost, maxWordLength));
    }
    int idx = partSentence.length();
    List<String> output = new ArrayList<>();
    while (idx > 0) {
        Pair<Number, Number> candidate = bestMatch(partSentence, cost, idx, wordCost, maxWordLength);
        Number candidateCost = candidate.getKey();
        Number candidateIndexValue = candidate.getValue();
        if (candidateCost.doubleValue() != cost.get(idx).getKey().doubleValue()) {
            throw new RuntimeException("Candidate cost unmatched; This should not be the case!");
        }
        boolean newToken = true;
        String token = partSentence.substring(idx - candidateIndexValue.intValue(), idx);
        if (token != "\'" && output.size() > 0) {
            String lastWord = output.get(output.size() - 1);
            if (lastWord.equalsIgnoreCase("\'s") ||
                    (Character.isDigit(partSentence.charAt(idx - 1)) && Character.isDigit(lastWord.charAt(0)))) {
                output.set(output.size() - 1, token + lastWord);
                newToken = false;
            }
        }
        if (newToken) {
            output.add(token);
        }
        idx -= candidateIndexValue.intValue();
    }
    return String.join(" ", Lists.reverse(output));
}


private Pair<Number, Number> bestMatch(String partSentence, List<Pair<Number, Number>> cost, int index,
                      Map<String, Number> wordCost, int maxWordLength) {
    List<Pair<Number, Number>> candidates = Lists.reverse(cost.subList(Math.max(0, index - maxWordLength), index));
    int enumerateIdx = 0;
    Pair<Number, Number> minPair = new Pair<>(Integer.MAX_VALUE, Integer.valueOf(enumerateIdx));
    for (Pair<Number, Number> pair : candidates) {
        ++enumerateIdx;
        String subsequence = partSentence.substring(index - enumerateIdx, index).toLowerCase();
        Number minCost = Integer.MAX_VALUE;
        if (wordCost.containsKey(subsequence)) {
            minCost = pair.getKey().doubleValue() + wordCost.get(subsequence).doubleValue();
        }
        if (minCost.doubleValue() < minPair.getKey().doubleValue()) {
            minPair = new Pair<>(minCost.doubleValue(), enumerateIdx);
        }
    }
    return minPair;
}

回答 14

这会有所帮助

from wordsegment import load, segment
load()
segment('providesfortheresponsibilitiesofperson')

This will help

from wordsegment import load, segment
load()
segment('providesfortheresponsibilitiesofperson')

回答 15

您需要确定您的词汇表-也许任何免费单词列表都可以。

完成后,使用该词汇表来构建后缀树,并将输入流与此匹配:http://en.wikipedia.org/wiki/Suffix_tree

You need to identify your vocabulary – perhaps any free word list will do.

Once done, use that vocabulary to build a suffix tree, and match your stream of input against that: http://en.wikipedia.org/wiki/Suffix_tree


如何使用Python删除非ASCII字符但保留句点和空格?

问题:如何使用Python删除非ASCII字符但保留句点和空格?

我正在使用.txt文件。我希望文件中的文本字符串不包含非ASCII字符。但是,我想留空格和句点。目前,我也正在剥离它们。这是代码:

def onlyascii(char):
    if ord(char) < 48 or ord(char) > 127: return ''
    else: return char

def get_my_string(file_path):
    f=open(file_path,'r')
    data=f.read()
    f.close()
    filtered_data=filter(onlyascii, data)
    filtered_data = filtered_data.lower()
    return filtered_data

我应该如何修改onlyascii()以保留空格和句点?我想这并不太复杂,但我无法弄清楚。

I’m working with a .txt file. I want a string of the text from the file with no non-ASCII characters. However, I want to leave spaces and periods. At present, I’m stripping those too. Here’s the code:

def onlyascii(char):
    if ord(char) < 48 or ord(char) > 127: return ''
    else: return char

def get_my_string(file_path):
    f=open(file_path,'r')
    data=f.read()
    f.close()
    filtered_data=filter(onlyascii, data)
    filtered_data = filtered_data.lower()
    return filtered_data

How should I modify onlyascii() to leave spaces and periods? I imagine it’s not too complicated but I can’t figure it out.


回答 0

您可以使用string.printable过滤字符串中所有不可打印的字符,如下所示:

>>> s = "some\x00string. with\x15 funny characters"
>>> import string
>>> printable = set(string.printable)
>>> filter(lambda x: x in printable, s)
'somestring. with funny characters'

我机器上的string.printable包含:

0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c

编辑:在 Python 3 上,filter 会返回一个可迭代对象。取回字符串的正确方法是:

''.join(filter(lambda x: x in printable, s))

You can filter all characters from the string that are not printable using string.printable, like this:

>>> s = "some\x00string. with\x15 funny characters"
>>> import string
>>> printable = set(string.printable)
>>> filter(lambda x: x in printable, s)
'somestring. with funny characters'

string.printable on my machine contains:

0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c

EDIT: On Python 3, filter will return an iterable. The correct way to obtain a string back would be:

''.join(filter(lambda x: x in printable, s))

回答 1

更改为其他编解码器的简单方法是使用encode()或decode()。在您的情况下,您想转换为ASCII并忽略所有不支持的符号。例如,瑞典字母å不是ASCII字符:

    >>>s = u'Good bye in Swedish is Hej d\xe5'
    >>>s = s.encode('ascii',errors='ignore')
    >>>print s
    Good bye in Swedish is Hej d

编辑:

Python3:str->字节-> str

>>>"Hej då".encode("ascii", errors="ignore").decode()
'Hej d'

Python2:unicode-> str-> unicode

>>> u"hej då".encode("ascii", errors="ignore").decode()
u'hej d'

Python2:str-> unicode-> str(以相反的顺序解码和编码)

>>> "hej d\xe5".decode("ascii", errors="ignore").encode()
'hej d'

An easy way to change to a different codec, is by using encode() or decode(). In your case, you want to convert to ASCII and ignore all symbols that are not supported. For example, the Swedish letter å is not an ASCII character:

    >>>s = u'Good bye in Swedish is Hej d\xe5'
    >>>s = s.encode('ascii',errors='ignore')
    >>>print s
    Good bye in Swedish is Hej d

Edit:

Python3: str -> bytes -> str

>>>"Hej då".encode("ascii", errors="ignore").decode()
'Hej d'

Python2: unicode -> str -> unicode

>>> u"hej då".encode("ascii", errors="ignore").decode()
u'hej d'

Python2: str -> unicode -> str (decode and encode in reverse order)

>>> "hej d\xe5".decode("ascii", errors="ignore").encode()
'hej d'

回答 2

根据@artfulrobot,这应该比filter和lambda更快:

re.sub(r'[^\x00-\x7f]',r'', your-non-ascii-string) 

在此处查看更多示例 http://stackoverflow.com/questions/20078816/replace-non-ascii-characters-with-a-single-space/20079244#20079244

According to @artfulrobot, this should be faster than filter and lambda:

re.sub(r'[^\x00-\x7f]',r'', your-non-ascii-string) 

See more examples here Replace non-ASCII characters with a single space
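
A minimal runnable version of that one-liner (the sample string below is just an illustration):

import re

s = "naïve café, 100°C"
print(re.sub(r'[^\x00-\x7f]', r'', s))   # prints: nave caf, 100C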


回答 3

您的问题不明确;前两句话合起来暗示您认为空格和“句点”是非 ASCII 字符。这是不对的。所有满足 ord(char) <= 127 的字符都是 ASCII 字符。例如,您的函数排除了这些字符 !"#$%&'()*+,-./ ,却包含了另外一些字符,例如 []{}。

请退后一步,想一想,然后编辑您的问题,告诉我们您想做什么(不要提到 ASCII 这个词),以及为什么您认为 ord(char) >= 128 的字符可以忽略。另外:您用的是哪个版本的 Python?输入数据的编码是什么?

请注意,您的代码将整个输入文件读取为单个字符串,并且您对另一个答案的注释(“最佳解决方案”)意味着您无需关心数据中的换行符。如果您的文件包含这样的两行:

this is line 1
this is line 2

结果将是'this is line 1this is line 2'……您真正想要的是什么?

更好的解决方案包括:

  1. 给过滤函数起一个比 onlyascii 更好的名字
  2. 认识到过滤函数只需在参数应被保留时返回真值:

    def filter_func(char):
        return char == '\n' or 32 <= ord(char) <= 126
    # and later:
    filtered_data = filter(filter_func, data).lower()

Your question is ambiguous; the first two sentences taken together imply that you believe that space and “period” are non-ASCII characters. This is incorrect. All chars such that ord(char) <= 127 are ASCII characters. For example, your function excludes these characters !”#$%&\'()*+,-./ but includes several others e.g. []{}.

Please step back, think a bit, and edit your question to tell us what you are trying to do, without mentioning the word ASCII, and why you think that chars such that ord(char) >= 128 are ignorable. Also: which version of Python? What is the encoding of your input data?

Please note that your code reads the whole input file as a single string, and your comment (“great solution”) to another answer implies that you don’t care about newlines in your data. If your file contains two lines like this:

this is line 1
this is line 2

the result would be 'this is line 1this is line 2' … is that what you really want?

A better solution would include:

  1. a better name for the filter function than onlyascii
  2. recognition that a filter function merely needs to return a truthy value if the argument is to be retained:

    def filter_func(char):
        return char == '\n' or 32 <= ord(char) <= 126
    # and later:
    filtered_data = filter(filter_func, data).lower()
    

回答 4

您可以使用以下代码删除非英语字母:

import re
str = "123456790 ABC#%? .(朱惠英)"
result = re.sub(r'[^\x00-\x7f]',r'', str)
print(result)

这将返回

123456790 ABC#%? .()

You may use the following code to remove non-English letters:

import re
str = "123456790 ABC#%? .(朱惠英)"
result = re.sub(r'[^\x00-\x7f]',r'', str)
print(result)

This will return

123456790 ABC#%? .()


回答 5

如果您需要可打印的ASCII字符,则可能应将代码更正为:

if ord(char) < 32 or ord(char) > 126: return ''

这等同于 string.printable(@jterrace 的答案),只是缺少回车符和制表符('\t'、'\n'、'\x0b'、'\x0c' 和 '\r'),但并不对应您问题中给出的范围。

If you want printable ascii characters you probably should correct your code to:

if ord(char) < 32 or ord(char) > 126: return ''

This is equivalent to string.printable (answer from @jterrace), except for the absence of returns and tabs ('\t', '\n', '\x0b', '\x0c' and '\r'), but it doesn’t correspond to the range in your question.
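
As a minimal sketch of how that range check plugs into the question's filter-based approach (the file name is just a placeholder):

def keep_printable_ascii(char):
    # keep ' ' (32) through '~' (126); spaces and periods are preserved
    return 32 <= ord(char) <= 126

with open("filename.txt") as f:   # placeholder path
    data = f.read()

filtered_data = "".join(filter(keep_printable_ascii, data)).lower()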


回答 6

我正在通读 Fluent Python(Ramalho),强烈推荐。下面是受第二章启发的单行列表推导式写法:

onlyascii = ''.join([s for s in data if ord(s) < 127])
onlymatch = ''.join([s for s in data if s in
              'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'])

Working my way through Fluent Python (Ramalho) – highly recommended. List comprehension one-ish-liners inspired by Chapter 2:

onlyascii = ''.join([s for s in data if ord(s) < 127])
onlymatch = ''.join([s for s in data if s in
              'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'])

如何在Python中检查文本是否为“空”(空格,制表符,换行符)?

问题:如何在Python中检查文本是否为“空”(空格,制表符,换行符)?

如何在Python中测试字符串是否为空?

例如,

"<space><space><space>" 是空的,所以是

"<space><tab><space><newline><space>",也是

"<newline><newline><newline><tab><newline>"

How can I test if a string contains only whitespace?

Example strings:

  • " " (space, space, space)

  • " \t \n " (space, tab, space, newline, space)

  • "\n\n\n\t\n" (newline, newline, newline, tab, newline)


回答 0

yourString.isspace()

“如果字符串中只有空格字符并且至少有一个字符,则返回true,否则返回false。”

结合特殊情况处理空字符串。

或者,您可以使用

strippedString = yourString.strip()

然后检查strippedString是否为空。

Use the str.isspace() method:

Return True if there are only whitespace characters in the string and there is at least one character, False otherwise.

A character is whitespace if in the Unicode character database (see unicodedata), either its general category is Zs (“Separator, space”), or its bidirectional class is one of WS, B, or S.

Combine that with a special case for handling the empty string.

Alternatively, you could use str.strip() and check if the result is empty.
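
A small sketch combining both suggestions, treating the empty string as blank as well:

def is_blank(s):
    # empty string, or whitespace-only string
    return s == "" or s.isspace()

def is_blank_strip(s):
    # equivalent check via strip()
    return s.strip() == ""

for s in ["   ", " \t \n ", "\n\n\n\t\n", "", "foo"]:
    print(repr(s), is_blank(s), is_blank_strip(s))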


回答 1

>>> tests = ['foo', ' ', '\r\n\t', '', None]
>>> [bool(not s or s.isspace()) for s in tests]
[False, True, True, True, True]
>>>
>>> tests = ['foo', ' ', '\r\n\t', '', None]
>>> [not s or s.isspace() for s in tests]
[False, True, True, True, True]

回答 2

您要使用的isspace()方法

str.isspace()

如果字符串中只有空格字符并且至少有一个字符,则返回true,否则返回false。

这是在每个字符串对象上定义的。这是您的特定用例的用法示例:

if aStr and (not aStr.isspace()):
    print aStr

You want to use the isspace() method

str.isspace()

Return true if there are only whitespace characters in the string and there is at least one character, false otherwise.

That’s defined on every string object. Here it is an usage example for your specific use case:

if aStr and (not aStr.isspace()):
    print aStr

回答 3

You can use the str.isspace() method.


回答 4

对于那些希望像Apache StringUtils.isBlank或Guava Strings.isNullOrEmpty这样的行为的用户:

if mystring and mystring.strip():
    print "not blank string"
else:
    print "blank string"

For those who expect a behaviour like the Apache StringUtils.isBlank or Guava Strings.isNullOrEmpty:

if mystring and mystring.strip():
    print "not blank string"
else:
    print "blank string"

回答 5

检查split()方法给定的列表的长度。

if len(your_string.split()) == 0:
     print("yes")

或者将 strip() 方法的输出与空字符串进行比较。

if your_string.strip() == '':
     print("yes")

Check the length of the list given by the split() method.

if len(your_string.split()) == 0:
     print("yes")

Or compare the output of the strip() method with the empty string.

if your_string.strip() == '':
     print("yes")

回答 6

这是在所有情况下都适用的答案:

def is_empty(s):
    "Check whether a string is empty"
    return not s or not s.strip()

如果变量为 None,它会在 not s 处短路,不再继续求值(因为 not None == True)。而 strip() 方法则能处理制表符、换行符等常见情况。

Here is an answer that should work in all cases:

def is_empty(s):
    "Check whether a string is empty"
    return not s or not s.strip()

If the variable is None, it will stop at not s and not evaluate further (since not None == True). Apparently, the strip() method takes care of the usual cases of tab, newline, etc.


回答 7

我假设在您的情况下,空字符串是真正为空的字符串或包含所有空白的字符串。

if(str.strip()):
    print("string is not empty")
else:
    print("string is empty")

请注意,这不会检查 None

I’m assuming in your scenario, an empty string is a string that is truly empty or one that contains all white space.

if(str.strip()):
    print("string is not empty")
else:
    print("string is empty")

Note this does not check for None


回答 8

我使用以下内容:

if str and not str.isspace():
  print('not null and not empty nor whitespace')
else:
  print('null or empty or whitespace')

I used following:

if str and not str.isspace():
  print('not null and not empty nor whitespace')
else:
  print('null or empty or whitespace')

回答 9

检查字符串只是空格还是换行符

使用这个简单的代码

mystr = "      \n  \r  \t   "
if not mystr.strip(): # The String Is Only Spaces!
    print("\n[!] Invalid String !!!")
    exit(1)
mystr = mystr.strip()
print("\n[*] Your String Is: "+mystr)

To check if a string is just a spaces or newline

Use this simple code

mystr = "      \n  \r  \t   "
if not mystr.strip(): # The String Is Only Spaces!
    print("\n[!] Invalid String !!!")
    exit(1)
mystr = mystr.strip()
print("\n[*] Your String Is: "+mystr)

回答 10

与 C# 字符串静态方法 isNullOrWhiteSpace 类似。

def isNullOrWhiteSpace(str):
  """Indicates whether the specified string is null or empty string.
     Returns: True if the str parameter is null, an empty string ("") or consists
     only of whitespace. Returns False otherwise."""
  if (str is None) or (str == "") or (str.isspace()):
    return True
  return False

isNullOrWhiteSpace(None) -> True // None equals null in c#, java, php
isNullOrWhiteSpace("")   -> True
isNullOrWhiteSpace(" ")  -> True

Resemblance with the C# string static method isNullOrWhiteSpace.

def isNullOrWhiteSpace(str):
  """Indicates whether the specified string is null or empty string.
     Returns: True if the str parameter is null, an empty string ("") or consists
     only of whitespace. Returns False otherwise."""
  if (str is None) or (str == "") or (str.isspace()):
    return True
  return False

isNullOrWhiteSpace(None) -> True // None equals null in c#, java, php
isNullOrWhiteSpace("")   -> True
isNullOrWhiteSpace(" ")  -> True

Matplotlib散点图在每个数据点具有不同的文本

问题:Matplotlib散点图在每个数据点具有不同的文本

我正在尝试绘制散点图,并用列表中不同的数字标注数据点。例如,我想绘制 y 对 x 的图,并用 n 中对应的数字进行标注。

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]
ax = fig.add_subplot(111)
ax1.scatter(z, y, fmt='o')

有任何想法吗?

I am trying to make a scatter plot and annotate data points with different numbers from a list. So, for example, I want to plot y vs x and annotate with corresponding numbers from n.

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]
ax = fig.add_subplot(111)
ax1.scatter(z, y, fmt='o')

Any ideas?


回答 0

我不知道有哪个绘图方法可以直接接受数组或列表,但您可以在迭代 n 中的值时使用 annotate()。

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()
ax.scatter(z, y)

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

annotate() 有很多格式设置选项,请参见 matplotlib 网站:

I’m not aware of any plotting method which takes arrays or lists but you could use annotate() while iterating over the values in n.

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()
ax.scatter(z, y)

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

There are a lot of formatting options for annotate(), see the matplotlib website:


回答 1

在 matplotlib 2.0 之前的版本中,绘制不带标记的文本并不需要 ax.scatter。在 2.0 版中,您需要用 ax.scatter 为文本设置合适的坐标范围和标记。

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

在此链接中,您可以找到3d中的示例。

In versions earlier than matplotlib 2.0, ax.scatter is not necessary to plot text without markers. In version 2.0 you’ll need ax.scatter to set the proper range and markers for text.

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

And in this link you can find an example in 3d.


回答 2

如果有人试图将上述解决方案应用于.scatter()而不是.subplot(),

我尝试运行以下代码

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.scatter(z, y)

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

但是遇到了错误,提示“cannot unpack non-iterable PathCollection object”,错误具体指向代码行 fig, ax = plt.scatter(z, y)

我最终使用以下代码解决了该错误

plt.scatter(z, y)

for i, txt in enumerate(n):
    plt.annotate(txt, (z[i], y[i]))

我原以为 .scatter() 和 .subplot() 之间不会有区别,我本该更清楚才对。

In case anyone is trying to apply the above solutions to a .scatter() instead of a .subplot(),

I tried running the following code

y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.scatter(z, y)

for i, txt in enumerate(n):
    ax.annotate(txt, (z[i], y[i]))

But ran into errors stating “cannot unpack non-iterable PathCollection object”, with the error specifically pointing at codeline fig, ax = plt.scatter(z, y)

I eventually solved the error using the following code

plt.scatter(z, y)

for i, txt in enumerate(n):
    plt.annotate(txt, (z[i], y[i]))

I didn’t expect there to be a difference between .scatter() and .subplot(); I should have known better.


回答 3

您也可以使用pyplot.text(请参阅此处)。

def plot_embeddings(M_reduced, word2Ind, words):
    """ Plot in a scatterplot the embeddings of the words specified in the list "words".
        Include a label next to each point.
    """
    for word in words:
        x, y = M_reduced[word2Ind[word]]
        plt.scatter(x, y, marker='x', color='red')
        plt.text(x+.03, y+.03, word, fontsize=9)
    plt.show()

M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)

You may also use pyplot.text (see here).

def plot_embeddings(M_reduced, word2Ind, words):
    """ 
        Plot in a scatterplot the embeddings of the words specified in the list "words".
        Include a label next to each point.
    """
    for word in words:
        x, y = M_reduced[word2Ind[word]]
        plt.scatter(x, y, marker='x', color='red')
        plt.text(x+.03, y+.03, word, fontsize=9)
    plt.show()

M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)


回答 4

Python 3.6及更高版本:

coordinates = [('a',1,2), ('b',3,4), ('c',5,6)]
for x in coordinates: plt.annotate(x[0], (x[1], x[2]))

Python 3.6+:

coordinates = [('a',1,2), ('b',3,4), ('c',5,6)]
for x in coordinates: plt.annotate(x[0], (x[1], x[2]))

回答 5

作为使用列表推导式和 numpy 的单行写法:

[ax.annotate(x[0], (x[1], x[2])) for x in np.array([n,z,y]).T]

设置与Rutger的答案相同。

As a one liner using list comprehension and numpy:

[ax.annotate(x[0], (x[1], x[2])) for x in np.array([n,z,y]).T]

setup is ditto to Rutger’s answer.


回答 6

我想补充一点,您甚至可以使用箭头/文本框来标注标签。这是我的意思:

import random
import matplotlib.pyplot as plt


y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()
ax.scatter(z, y)

ax.annotate(n[0], (z[0], y[0]), xytext=(z[0]+0.05, y[0]+0.3), 
    arrowprops=dict(facecolor='red', shrink=0.05))

ax.annotate(n[1], (z[1], y[1]), xytext=(z[1]-0.05, y[1]-0.3), 
    arrowprops = dict(  arrowstyle="->",
                        connectionstyle="angle3,angleA=0,angleB=-90"))

ax.annotate(n[2], (z[2], y[2]), xytext=(z[2]-0.05, y[2]-0.3), 
    arrowprops = dict(arrowstyle="wedge,tail_width=0.5", alpha=0.1))

ax.annotate(n[3], (z[3], y[3]), xytext=(z[3]+0.05, y[3]-0.2), 
    arrowprops = dict(arrowstyle="fancy"))

ax.annotate(n[4], (z[4], y[4]), xytext=(z[4]-0.1, y[4]-0.2),
    bbox=dict(boxstyle="round", alpha=0.1), 
    arrowprops = dict(arrowstyle="simple"))

plt.show()

将生成以下图形:

I would love to add that you can even use arrows /text boxes to annotate the labels. Here is what I mean:

import random
import matplotlib.pyplot as plt


y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199]
z = [0.15, 0.3, 0.45, 0.6, 0.75]
n = [58, 651, 393, 203, 123]

fig, ax = plt.subplots()
ax.scatter(z, y)

ax.annotate(n[0], (z[0], y[0]), xytext=(z[0]+0.05, y[0]+0.3), 
    arrowprops=dict(facecolor='red', shrink=0.05))

ax.annotate(n[1], (z[1], y[1]), xytext=(z[1]-0.05, y[1]-0.3), 
    arrowprops = dict(  arrowstyle="->",
                        connectionstyle="angle3,angleA=0,angleB=-90"))

ax.annotate(n[2], (z[2], y[2]), xytext=(z[2]-0.05, y[2]-0.3), 
    arrowprops = dict(arrowstyle="wedge,tail_width=0.5", alpha=0.1))

ax.annotate(n[3], (z[3], y[3]), xytext=(z[3]+0.05, y[3]-0.2), 
    arrowprops = dict(arrowstyle="fancy"))

ax.annotate(n[4], (z[4], y[4]), xytext=(z[4]-0.1, y[4]-0.2),
    bbox=dict(boxstyle="round", alpha=0.1), 
    arrowprops = dict(arrowstyle="simple"))

plt.show()

Which will generate the following graph:


如何替换一个字符串的多个子字符串?

问题:如何替换一个字符串的多个子字符串?

我想使用.replace函数替换多个字符串。

我目前有

string.replace("condition1", "")

但想有类似的东西

string.replace("condition1", "").replace("condition2", "text")

尽管那听起来不像是好的语法

正确的方法是什么?有点像如何在grep / regex中进行操作\1以及\2如何将字段替换为某些搜索字符串

I would like to use the .replace function to replace multiple strings.

I currently have

string.replace("condition1", "")

but would like to have something like

string.replace("condition1", "").replace("condition2", "text")

although that does not feel like good syntax

what is the proper way to do this? kind of like how in grep/regex you can do \1 and \2 to replace fields to certain search strings


回答 0

这是一个简短的示例,应该使用正则表达式来解决问题:

import re

rep = {"condition1": "", "condition2": "text"} # define desired replacements here

# use these three lines to do the replacement
rep = dict((re.escape(k), v) for k, v in rep.iteritems()) 
#Python 3 renamed dict.iteritems to dict.items so use rep.items() for latest versions
pattern = re.compile("|".join(rep.keys()))
text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)

例如:

>>> pattern.sub(lambda m: rep[re.escape(m.group(0))], "(condition1) and --condition2--")
'() and --text--'

Here is a short example that should do the trick with regular expressions:

import re

rep = {"condition1": "", "condition2": "text"} # define desired replacements here

# use these three lines to do the replacement
rep = dict((re.escape(k), v) for k, v in rep.iteritems()) 
#Python 3 renamed dict.iteritems to dict.items so use rep.items() for latest versions
pattern = re.compile("|".join(rep.keys()))
text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)

For example:

>>> pattern.sub(lambda m: rep[re.escape(m.group(0))], "(condition1) and --condition2--")
'() and --text--'

回答 1

您可以制作一个不错的小循环功能。

def replace_all(text, dic):
    for i, j in dic.iteritems():
        text = text.replace(i, j)
    return text

其中,text是完整的字符串,dic是一个字典—每个定义都是一个字符串,它将替换与该词匹配的字符串。

注意:在Python 3中,iteritems()已替换为items()


注意: Python字典没有可靠的迭代顺序。该解决方案仅在以下情况下解决您的问题:

  • 更换顺序无关紧要
  • 可以更改以前的替换结果

例如:

d = { "cat": "dog", "dog": "pig"}
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, d)
print(my_sentence)

可能的输出#1:

"This is my pig and this is my pig."

可能的输出#2

"This is my dog and this is my pig."

一种可能的解决方法是使用OrderedDict。

from collections import OrderedDict
def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text
od = OrderedDict([("cat", "dog"), ("dog", "pig")])
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, od)
print(my_sentence)

输出:

"This is my pig and this is my pig."

小心#2:如果您的text字符串太大或字典中有很多对,效率会很低。

You could just make a nice little looping function.

def replace_all(text, dic):
    for i, j in dic.iteritems():
        text = text.replace(i, j)
    return text

where text is the complete string and dic is a dictionary — each definition is a string that will replace a match to the term.

Note: in Python 3, iteritems() has been replaced with items()


Careful: Python dictionaries don’t have a reliable order for iteration. This solution only solves your problem if:

  • order of replacements is irrelevant
  • it’s ok for a replacement to change the results of previous replacements

Update: The above statement related to ordering of insertion does not apply to Python versions greater than or equal to 3.6, as standard dicts were changed to use insertion ordering for iteration.

For instance:

d = { "cat": "dog", "dog": "pig"}
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, d)
print(my_sentence)

Possible output #1:

"This is my pig and this is my pig."

Possible output #2

"This is my dog and this is my pig."

One possible fix is to use an OrderedDict.

from collections import OrderedDict
def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text
od = OrderedDict([("cat", "dog"), ("dog", "pig")])
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, od)
print(my_sentence)

Output:

"This is my pig and this is my pig."

Careful #2: Inefficient if your text string is too big or there are many pairs in the dictionary.
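
As noted in the update above, plain dicts iterate in insertion order on Python 3.7+ (and CPython 3.6), so the OrderedDict workaround is only needed on older interpreters. A quick sketch, assuming a modern Python:

# Python 3.7+: insertion order of a plain dict is guaranteed
d = {"cat": "dog", "dog": "pig"}
s = "This is my cat and this is my dog."
for old, new in d.items():
    s = s.replace(old, new)
print(s)  # This is my pig and this is my pig. (replacements still chain)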


回答 2

为什么不提供这样的解决方案?

s = "The quick brown fox jumps over the lazy dog"
for r in (("brown", "red"), ("lazy", "quick")):
    s = s.replace(*r)

#output will be:  The quick red fox jumps over the quick dog

Why not one solution like this?

s = "The quick brown fox jumps over the lazy dog"
for r in (("brown", "red"), ("lazy", "quick")):
    s = s.replace(*r)

#output will be:  The quick red fox jumps over the quick dog

回答 3

这是第一种解决方案的一个使用reduce的变体,如果你喜欢函数式风格的话。:)

repls = {'hello' : 'goodbye', 'world' : 'earth'}
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls.iteritems(), s)

martineau的更好版本:

repls = ('hello', 'goodbye'), ('world', 'earth')
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls, s)

Here is a variant of the first solution using reduce, in case you like being functional. 🙂

repls = {'hello' : 'goodbye', 'world' : 'earth'}
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls.iteritems(), s)

martineau’s even better version:

repls = ('hello', 'goodbye'), ('world', 'earth')
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls, s)
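
On Python 3, reduce lives in functools and iteritems() is gone; a runnable sketch of the same idea under those assumptions:

from functools import reduce  # reduce is no longer a builtin in Python 3

repls = (('hello', 'goodbye'), ('world', 'earth'))
s = 'hello, world'
print(reduce(lambda acc, kv: acc.replace(*kv), repls, s))  # goodbye, earth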

回答 4

这只是对F.J和MiniQuark出色答案的更简明扼要的总结。要实现多个同时进行的字符串替换,您所需要的就是以下函数:

import re

def multiple_replace(string, rep_dict):
    pattern = re.compile("|".join([re.escape(k) for k in sorted(rep_dict,key=len,reverse=True)]), flags=re.DOTALL)
    return pattern.sub(lambda x: rep_dict[x.group(0)], string)

用法:

>>>multiple_replace("Do you like cafe? No, I prefer tea.", {'cafe':'tea', 'tea':'cafe', 'like':'prefer'})
'Do you prefer tea? No, I prefer cafe.'

如果您愿意,可以从这个更简单的函数出发,编写自己专用的替换函数。

This is just a more concise recap of F.J and MiniQuark great answers. All you need to achieve multiple simultaneous string replacements is the following function:

import re

def multiple_replace(string, rep_dict):
    pattern = re.compile("|".join([re.escape(k) for k in sorted(rep_dict,key=len,reverse=True)]), flags=re.DOTALL)
    return pattern.sub(lambda x: rep_dict[x.group(0)], string)

Usage:

>>>multiple_replace("Do you like cafe? No, I prefer tea.", {'cafe':'tea', 'tea':'cafe', 'like':'prefer'})
'Do you prefer tea? No, I prefer cafe.'

If you wish, you can make your own dedicated replacement functions starting from this simpler one.


回答 5

我基于FJ的出色答案:

import re

def multiple_replacer(*key_values):
    replace_dict = dict(key_values)
    replacement_function = lambda match: replace_dict[match.group(0)]
    pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
    return lambda string: pattern.sub(replacement_function, string)

def multiple_replace(string, *key_values):
    return multiple_replacer(*key_values)(string)

一次性用法:

>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print multiple_replace(u"Do you like café? No, I prefer tea.", *replacements)
Do you love tea? No, I prefer café.

请注意,由于替换只进行一遍,"café"会变成"tea",但不会再变回"café"。

如果您需要多次进行同样的一组替换,可以轻松创建一个替换函数:

>>> my_escaper = multiple_replacer(('"','\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"',
                       u'Does this work?\tYes it does',
                       u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
...     print my_escaper(line)
... 
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"

改进之处:

  • 把代码变成函数
  • 增加了多行支持
  • 修复了转义中的错误
  • 易于为特定的多个替换创建函数

请享用!:-)

I built this upon F.J.s excellent answer:

import re

def multiple_replacer(*key_values):
    replace_dict = dict(key_values)
    replacement_function = lambda match: replace_dict[match.group(0)]
    pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
    return lambda string: pattern.sub(replacement_function, string)

def multiple_replace(string, *key_values):
    return multiple_replacer(*key_values)(string)

One shot usage:

>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print multiple_replace(u"Do you like café? No, I prefer tea.", *replacements)
Do you love tea? No, I prefer café.

Note that since replacement is done in just one pass, “café” changes to “tea”, but it does not change back to “café”.

If you need to do the same replacement many times, you can create a replacement function easily:

>>> my_escaper = multiple_replacer(('"','\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"',
                       u'Does this work?\tYes it does',
                       u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
...     print my_escaper(line)
... 
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"

Improvements:

  • turned code into a function
  • added multiline support
  • fixed a bug in escaping
  • easy to create a function for a specific multiple replacement

Enjoy! 🙂


回答 6

我想提出字符串模板的用法。只需将要替换的字符串放在字典中,就可以完成所有操作!来自docs.python.org的示例

>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'

I would like to propose the usage of string templates. Just place the string to be replaced in a dictionary and all is set! Example from docs.python.org

>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'
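
Note that Template only helps when the text already contains $-style placeholders, so it does not directly replace arbitrary substrings. A minimal sketch mapping the question's condition names onto placeholders (the placeholder names are my invention for illustration):

from string import Template

tmpl = Template("($condition1) and --$condition2--")
print(tmpl.substitute(condition1="", condition2="text"))  # () and --text--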

回答 7

就我而言,我需要用名称简单替换唯一键,所以我想到了:

a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x,y in b.items():
    a = a.replace(x, y)
>>> a
'ThIS IS a teSt StrIng.'

In my case, I needed a simple replacing of unique keys with names, so I thought this up:

a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x,y in b.items():
    a = a.replace(x, y)
>>> a
'ThIS IS a teSt StrIng.'

回答 8

从Python 3.8开始,随着赋值表达式(PEP 572,即:=运算符)的引入,我们可以在列表推导式中应用这些替换:

# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'

Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), we can apply the replacements within a list comprehension:

# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'

回答 9

这是我的$ 0.02。它基于安德鲁·克拉克(Andrew Clark)的回答,稍微清晰了一点,并且还涵盖了替换字符串是另一个替换字符串的子字符串(更长的字符串获胜)的情况。

import re

def multireplace(string, replacements):
    """
    Given a string and a replacement map, it returns the replaced string.

    :param str string: string to execute replacements on
    :param dict replacements: replacement dictionary {value to find: value to replace}
    :rtype: str

    """
    # Place longer ones first to keep shorter substrings from matching
    # where the longer ones should take place
    # For instance given the replacements {'ab': 'AB', 'abc': 'ABC'} against 
    # the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'
    substrs = sorted(replacements, key=len, reverse=True)

    # Create a big OR regex that matches any of the substrings to replace
    regexp = re.compile('|'.join(map(re.escape, substrs)))

    # For each match, look up the new string in the replacements
    return regexp.sub(lambda match: replacements[match.group(0)], string)

这段代码也放在这个gist里,如有任何建议,请随时修改。

Here's my $0.02. It is based on Andrew Clark's answer, just a little bit clearer, and it also covers the case when a string to replace is a substring of another string to replace (the longer string wins).

import re

def multireplace(string, replacements):
    """
    Given a string and a replacement map, it returns the replaced string.

    :param str string: string to execute replacements on
    :param dict replacements: replacement dictionary {value to find: value to replace}
    :rtype: str

    """
    # Place longer ones first to keep shorter substrings from matching
    # where the longer ones should take place
    # For instance given the replacements {'ab': 'AB', 'abc': 'ABC'} against 
    # the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'
    substrs = sorted(replacements, key=len, reverse=True)

    # Create a big OR regex that matches any of the substrings to replace
    regexp = re.compile('|'.join(map(re.escape, substrs)))

    # For each match, look up the new string in the replacements
    return regexp.sub(lambda match: replacements[match.group(0)], string)

It is in this gist; feel free to modify it if you have any proposals.
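
A short usage sketch showing the longest-match behaviour described in the docstring (import and a condensed copy of the function are restated here so the snippet runs on its own):

import re

def multireplace(string, replacements):
    substrs = sorted(replacements, key=len, reverse=True)
    regexp = re.compile('|'.join(map(re.escape, substrs)))
    return regexp.sub(lambda match: replacements[match.group(0)], string)

print(multireplace("hey abc", {"ab": "AB", "abc": "ABC"}))  # hey ABC, not hey ABc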


回答 10

我需要一种解决方案,其中要替换的字符串可以是正则表达式,例如把多个空白字符替换成单个空格,以帮助规范化长文本。在MiniQuark和mmj等其他人答案的基础上,我得出了下面的代码:

import re

def multiple_replace(string, reps, re_flags = 0):
    """ Transforms string, replacing keys from re_str_dict with values.
    reps: dictionary, or list of key-value pairs (to enforce ordering;
          earlier items have higher priority).
          Keys are used as regular expressions.
    re_flags: interpretation of regular expressions, such as re.DOTALL
    """
    if isinstance(reps, dict):
        reps = reps.items()
    pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
                                  for i, re_str in enumerate(reps)),
                         re_flags)
    return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)

它适用于其他答案中给出的示例,例如:

>>> multiple_replace("(condition1) and --condition2--",
...                  {"condition1": "", "condition2": "text"})
'() and --text--'

>>> multiple_replace('hello, world', {'hello' : 'goodbye', 'world' : 'earth'})
'goodbye, earth'

>>> multiple_replace("Do you like cafe? No, I prefer tea.",
...                  {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'

对我来说,最主要的是您还可以使用正则表达式,例如仅替换整个单词,或规范化空白:

>>> s = "I don't want to change this name:\n  Philip II of Spain"
>>> re_str_dict = {r'\bI\b': 'You', r'[\n\t ]+': ' '}
>>> multiple_replace(s, re_str_dict)
"You don't want to change this name: Philip II of Spain"

如果您想将字典键用作普通字符串,则可以使用以下函数在调用multiple_replace之前对它们进行转义:

def escape_keys(d):
    """ transform dictionary d by applying re.escape to the keys """
    return dict((re.escape(k), v) for k, v in d.items())

>>> multiple_replace(s, escape_keys(re_str_dict))
"I don't want to change this name:\n  Philip II of Spain"

以下函数可以帮助您在字典键中查找错误的正则表达式(因为来自multiple_replace的错误消息不是很清楚):

def check_re_list(re_list):
    """ Checks if each regular expression in list is well-formed. """
    for i, e in enumerate(re_list):
        try:
            re.compile(e)
        except (TypeError, re.error):
            print("Invalid regular expression string "
                  "at position {}: '{}'".format(i, e))

>>> check_re_list(re_str_dict.keys())

请注意,它不会链接替换项,而是同时执行它们。这样可以提高效率,而不会限制它可以做什么。为了模拟链接的效果,您可能只需要添加更多的字符串替换对并确保对的预期顺序即可:

>>> multiple_replace("button", {"but": "mut", "mutton": "lamb"})
'mutton'
>>> multiple_replace("button", [("button", "lamb"),
...                             ("but", "mut"), ("mutton", "lamb")])
'lamb'

I needed a solution where the strings to be replaced can be regular expressions, for example to help in normalizing a long text by replacing multiple whitespace characters with a single one. Building on a chain of answers from others, including MiniQuark and mmj, this is what I came up with:

import re

def multiple_replace(string, reps, re_flags = 0):
    """ Transforms string, replacing keys from re_str_dict with values.
    reps: dictionary, or list of key-value pairs (to enforce ordering;
          earlier items have higher priority).
          Keys are used as regular expressions.
    re_flags: interpretation of regular expressions, such as re.DOTALL
    """
    if isinstance(reps, dict):
        reps = reps.items()
    pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
                                  for i, re_str in enumerate(reps)),
                         re_flags)
    return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)

It works for the examples given in other answers, for example:

>>> multiple_replace("(condition1) and --condition2--",
...                  {"condition1": "", "condition2": "text"})
'() and --text--'

>>> multiple_replace('hello, world', {'hello' : 'goodbye', 'world' : 'earth'})
'goodbye, earth'

>>> multiple_replace("Do you like cafe? No, I prefer tea.",
...                  {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'

The main thing for me is that you can use regular expressions as well, for example to replace whole words only, or to normalize white space:

>>> s = "I don't want to change this name:\n  Philip II of Spain"
>>> re_str_dict = {r'\bI\b': 'You', r'[\n\t ]+': ' '}
>>> multiple_replace(s, re_str_dict)
"You don't want to change this name: Philip II of Spain"

If you want to use the dictionary keys as normal strings, you can escape those before calling multiple_replace using e.g. this function:

def escape_keys(d):
    """ transform dictionary d by applying re.escape to the keys """
    return dict((re.escape(k), v) for k, v in d.items())

>>> multiple_replace(s, escape_keys(re_str_dict))
"I don't want to change this name:\n  Philip II of Spain"

The following function can help in finding erroneous regular expressions among your dictionary keys (since the error message from multiple_replace isn’t very telling):

def check_re_list(re_list):
    """ Checks if each regular expression in list is well-formed. """
    for i, e in enumerate(re_list):
        try:
            re.compile(e)
        except (TypeError, re.error):
            print("Invalid regular expression string "
                  "at position {}: '{}'".format(i, e))

>>> check_re_list(re_str_dict.keys())

Note that it does not chain the replacements, instead performs them simultaneously. This makes it more efficient without constraining what it can do. To mimic the effect of chaining, you may just need to add more string-replacement pairs and ensure the expected ordering of the pairs:

>>> multiple_replace("button", {"but": "mut", "mutton": "lamb"})
'mutton'
>>> multiple_replace("button", [("button", "lamb"),
...                             ("but", "mut"), ("mutton", "lamb")])
'lamb'

回答 11

这是一个示例,在长字符串上有很多小替换项时,效率更高。

source = "Here is foo, it does moo!"

replacements = {
    'is': 'was', # replace 'is' with 'was'
    'does': 'did',
    '!': '?'
}

def replace(source, replacements):
    finder = re.compile("|".join(re.escape(k) for k in replacements.keys())) # matches every string we want replaced
    result = []
    pos = 0
    while True:
        match = finder.search(source, pos)
        if match:
            # cut off the part up until match
            result.append(source[pos : match.start()])
            # cut off the matched part and replace it in place
            result.append(replacements[source[match.start() : match.end()]])
            pos = match.end()
        else:
            # the rest after the last match
            result.append(source[pos:])
            break
    return "".join(result)

print replace(source, replacements)

关键是要避免长字符串的许多串联。我们将源字符串切成片段,在形成列表时替换一些片段,然后将整个内容重新组合成字符串。

Note: Test your case, see comments.

Here’s a sample which is more efficient on long strings with many small replacements.

source = "Here is foo, it does moo!"

replacements = {
    'is': 'was', # replace 'is' with 'was'
    'does': 'did',
    '!': '?'
}

def replace(source, replacements):
    finder = re.compile("|".join(re.escape(k) for k in replacements.keys())) # matches every string we want replaced
    result = []
    pos = 0
    while True:
        match = finder.search(source, pos)
        if match:
            # cut off the part up until match
            result.append(source[pos : match.start()])
            # cut off the matched part and replace it in place
            result.append(replacements[source[match.start() : match.end()]])
            pos = match.end()
        else:
            # the rest after the last match
            result.append(source[pos:])
            break
    return "".join(result)

print replace(source, replacements)

The point is in avoiding many concatenations of long strings. We chop the source string to fragments, replacing some of the fragments as we form the list, and then join the whole thing back into a string.
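
For comparison, the same single-pass idea can also be expressed with the compiled pattern's sub() method; a minimal sketch (my illustration, not the author's code):

import re

source = "Here is foo, it does moo!"
replacements = {"is": "was", "does": "did", "!": "?"}

finder = re.compile("|".join(re.escape(k) for k in replacements))
print(finder.sub(lambda m: replacements[m.group(0)], source))
# Here was foo, it did moo?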


回答 12

您真的不应该这样,但是我觉得它太酷了:

>>> replacements = {'cond1':'text1', 'cond2':'text2'}
>>> cmd = 'answer = s'
>>> for k,v in replacements.iteritems():
...     cmd += ".replace(%r, %r)" % (k,v)  # %r keeps the quotes around each string
>>> exec(cmd)

现在,answer是所有替换的结果

再说一次,这非常hacky,不是您应该经常使用的东西。但是,知道在需要时可以这样做也不错。

You should really not do it this way, but I just find it way too cool:

>>> replacements = {'cond1':'text1', 'cond2':'text2'}
>>> cmd = 'answer = s'
>>> for k,v in replacements.iteritems():
...     cmd += ".replace(%r, %r)" % (k,v)  # %r keeps the quotes around each string
>>> exec(cmd)

Now, answer is the result of all the replacements in turn

again, this is very hacky and is not something that you should be using regularly. But it’s just nice to know that you can do something like this if you ever need to.


回答 13

我也曾在这个问题上挣扎。当替换项很多时,正则表达式表现不佳,比循环调用string.replace慢大约四倍(在我的实验条件下)。

你绝对应该尝试使用Flashtext库(博客文章在这里,Github在这里)。就我而言,它快了两个数量级以上:每个文档的处理时间从1.8秒降到0.015秒(正则表达式则需要7.7秒)。

在上面的链接中很容易找到使用示例,但这是一个有效的示例:

    from flashtext import KeywordProcessor

    processor = KeywordProcessor(case_sensitive=False)
    for k, v in my_dict.items():  # my_dict maps each string to its replacement
        processor.add_keyword(k, v)
    new_string = processor.replace_keywords(string)  # string is the text to process

请注意,Flashtext会在一次遍历中完成替换(以避免 a -> b 和 b -> c 把'a'变成'c')。Flashtext还只匹配整个单词(因此'is'不会匹配'this'中的'is')。如果您的目标由几个单词组成(例如用'Hello'替换'This is'),它也能正常工作。

I was struggling with this problem as well. With many substitutions regular expressions struggle, and are about four times slower than looping string.replace (in my experiment conditions).

You should absolutely try using the Flashtext library (blog post here, Github here). In my case it was a bit over two orders of magnitude faster, from 1.8 s to 0.015 s (regular expressions took 7.7 s) for each document.

It is easy to find use examples in the links above, but this is a working example:

    from flashtext import KeywordProcessor

    processor = KeywordProcessor(case_sensitive=False)
    for k, v in my_dict.items():  # my_dict maps each string to its replacement
        processor.add_keyword(k, v)
    new_string = processor.replace_keywords(string)  # string is the text to process

Note that Flashtext makes substitutions in a single pass (to avoid a -> b and b -> c translating 'a' into 'c'). Flashtext also looks for whole words (so 'is' will not match 'this'). It works fine if your target is several words (replacing 'This is' by 'Hello').


回答 14

我觉得为了完整起见,这个问题需要一个单行递归lambda函数的答案,没有别的理由。所以来了:

>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.popitem()), d)

用法:

>>> mrep('abcabc', {'a': '1', 'c': '2'})
'1b21b2'

笔记:

  • 这消耗了输入字典。
  • Python字典从3.6开始保留键顺序;其他答案中的相应警告不再适用。为了向后兼容,可以采用基于元组的版本:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.pop()), d)
>>> mrep('abcabc', [('a', '1'), ('c', '2')])
'1b21b2'

注意:与python中的所有递归函数一样,太大的递归深度(即太大的替换字典)将导致错误。参见例如这里

I feel this question needs a single-line recursive lambda function answer for completeness, just because. So there:

>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.popitem()), d)

Usage:

>>> mrep('abcabc', {'a': '1', 'c': '2'})
'1b21b2'

Notes:

  • This consumes the input dictionary.
  • Python dicts preserve key order as of 3.6; corresponding caveats in other answers are not relevant anymore. For backward compatibility one could resort to a tuple-based version:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.pop()), d)
>>> mrep('abcabc', [('a', '1'), ('c', '2')])
'1b21b2'

Note: As with all recursive functions in python, too large recursion depth (i.e. too large replacement dictionaries) will result in an error. See e.g. here.


回答 15

我不清楚速度如何,但这是我日常使用的快速解决办法:

reduce(lambda a, b: a.replace(*b)
    , [('o','W'), ('t','X')] #iterable of pairs: (oldval, newval)
    , 'tomato' #The string from which to replace values
    )

…但是我喜欢上面的#1正则表达式答案。注意-如果一个新值是另一个值的子字符串,则该操作不是可交换的。

I don’t know about speed but this is my workaday quick fix:

reduce(lambda a, b: a.replace(*b)
    , [('o','W'), ('t','X')] #iterable of pairs: (oldval, newval)
    , 'tomato' #The string from which to replace values
    )

… but I like the #1 regex answer above. Note – if one new value is a substring of another one then the operation is not commutative.


回答 16

您可以使用支持完全匹配以及正则表达式替换的pandas库和replace函数。例如:

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November', 'I was born in 10/10/2010', 'I will be there at 20:00']})

to_replace=['Billy','Rome','January|February|March|April|May|June|July|August|September|October|November|December', '\d{2}:\d{2}', '\d{2}/\d{2}/\d{4}']
replace_with=['name','city','month','time', 'date']

print(df.text.replace(to_replace, replace_with, regex=True))

修改后的文本是:

0    name is going to visit city in month
1                      I was born in date
2                 I will be there at time

您可以在此处找到示例。请注意,文本上的替换是按照它们在列表中出现的顺序进行的

You can use the pandas library and the replace function which supports both exact matches as well as regex replacements. For example:

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November', 'I was born in 10/10/2010', 'I will be there at 20:00']})

to_replace=['Billy','Rome','January|February|March|April|May|June|July|August|September|October|November|December', '\d{2}:\d{2}', '\d{2}/\d{2}/\d{4}']
replace_with=['name','city','month','time', 'date']

print(df.text.replace(to_replace, replace_with, regex=True))

And the modified text is:

0    name is going to visit city in month
1                      I was born in date
2                 I will be there at time

You can find an example here. Notice that the replacements on the text are done with the order they appear in the lists


回答 17

如果只需要替换单个字符,使用translate配合str.maketrans是我最喜欢的方法。

tl;dr > result_string = your_string.translate(str.maketrans(dict_mapping))


演示

my_string = 'This is a test string.'
dict_mapping = {'i': 's', 's': 'S'}
result_good = my_string.translate(str.maketrans(dict_mapping))
result_bad = my_string
for x, y in dict_mapping.items():
    result_bad = result_bad.replace(x, y)
print(result_good)  # ThsS sS a teSt Strsng.
print(result_bad)   # ThSS SS a teSt StrSng.

To replace single characters only, translate combined with str.maketrans is my favorite method.

tl;dr > result_string = your_string.translate(str.maketrans(dict_mapping))


demo

my_string = 'This is a test string.'
dict_mapping = {'i': 's', 's': 'S'}
result_good = my_string.translate(str.maketrans(dict_mapping))
result_bad = my_string
for x, y in dict_mapping.items():
    result_bad = result_bad.replace(x, y)
print(result_good)  # ThsS sS a teSt Strsng.
print(result_bad)   # ThSS SS a teSt StrSng.
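
One detail worth noting (my addition, not from the answer): the keys passed to str.maketrans must be single characters, but the mapped values may be longer strings, so single-character markers can expand into whole substrings:

# keys must be single characters; values can be strings of any length
table = str.maketrans({"&": " and ", "@": " at "})
print("you&me@home".translate(table))  # you and me at home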

回答 18

从Andrew宝贵的答案出发,我开发了一个脚本:它从文件中加载映射字典,并处理当前文件夹中的所有文件来完成替换。脚本从一个外部文件加载映射,分隔符可以自行设置。我是初学者,但在对多个文件做多处替换时,我发现这个脚本非常有用。它能在几秒钟内加载包含1000多个条目的字典。它并不优雅,但对我有用。

import glob
import re

mapfile = input("Enter map file name with extension eg. codifica.txt: ")
sep = input("Enter map file column separator eg. |: ")
mask = input("Enter search mask with extension eg. 2010*txt for all files to be processed: ")
suff = input("Enter suffix with extension eg. _NEW.txt for newly generated files: ")

rep = {} # creation of empty dictionary

with open(mapfile) as temprep: # loading of definitions in the dictionary using input file, separator is prompted
    for line in temprep:
        (key, val) = line.strip('\n').split(sep)
        rep[key] = val

for filename in glob.iglob(mask): # recursion on all the files with the mask prompted

    with open (filename, "r") as textfile: # load each file in the variable text
        text = textfile.read()

        # start replacement
        #rep = dict((re.escape(k), v) for k, v in rep.items()) commented to enable the use in the mapping of re reserved characters
        pattern = re.compile("|".join(rep.keys()))
        text = pattern.sub(lambda m: rep[m.group(0)], text)

        # write the output file with the prompted suffix
        target = open(filename[:-4]+suff, "w")
        target.write(text)
        target.close()

Starting from the precious answer of Andrew, I developed a script that loads the dictionary from a file and processes all the files in the current folder to do the replacements. The script loads the mappings from an external file in which you can set the separator. I'm a beginner, but I found this script very useful when doing multiple substitutions in multiple files. It loaded a dictionary with more than 1000 entries in seconds. It is not elegant, but it worked for me.

import glob
import re

mapfile = input("Enter map file name with extension eg. codifica.txt: ")
sep = input("Enter map file column separator eg. |: ")
mask = input("Enter search mask with extension eg. 2010*txt for all files to be processed: ")
suff = input("Enter suffix with extension eg. _NEW.txt for newly generated files: ")

rep = {} # creation of empty dictionary

with open(mapfile) as temprep: # loading of definitions in the dictionary using input file, separator is prompted
    for line in temprep:
        (key, val) = line.strip('\n').split(sep)
        rep[key] = val

for filename in glob.iglob(mask): # recursion on all the files with the mask prompted

    with open (filename, "r") as textfile: # load each file in the variable text
        text = textfile.read()

        # start replacement
        #rep = dict((re.escape(k), v) for k, v in rep.items()) commented to enable the use in the mapping of re reserved characters
        pattern = re.compile("|".join(rep.keys()))
        text = pattern.sub(lambda m: rep[m.group(0)], text)

        # write the output file with the prompted suffix
        target = open(filename[:-4]+suff, "w")
        target.write(text)
        target.close()

回答 19

这是我解决这个问题的方法。我在一个聊天机器人中用它来一次性替换多个不同的单词。

def mass_replace(text, dct):
    new_string = ""
    old_string = text
    while len(old_string) > 0:
        s = ""
        sk = ""
        for k in dct.keys():
            if old_string.startswith(k):
                s = dct[k]
                sk = k
        if s:
            new_string+=s
            old_string = old_string[len(sk):]
        else:
            new_string+=old_string[0]
            old_string = old_string[1:]
    return new_string

print mass_replace("The dog hunts the cat", {"dog":"cat", "cat":"dog"})

这将成为 The cat hunts the dog

This is my solution to the problem. I used it in a chatbot to replace the different words at once.

def mass_replace(text, dct):
    new_string = ""
    old_string = text
    while len(old_string) > 0:
        s = ""
        sk = ""
        for k in dct.keys():
            if old_string.startswith(k):
                s = dct[k]
                sk = k
        if s:
            new_string+=s
            old_string = old_string[len(sk):]
        else:
            new_string+=old_string[0]
            old_string = old_string[1:]
    return new_string

print mass_replace("The dog hunts the cat", {"dog":"cat", "cat":"dog"})

This will become The cat hunts the dog


回答 20

另一个例子:输入列表

error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']

所需的输出将是

words = ['how', 'much', 'is', 'the', 'fish', 'no', 'really']

代码:

[n[0][0] if len(n[0]) else n[1] for n in [[[w.replace(e,"") for e in error_list if e in w],w] for w in words]] 

Another example : Input list

error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']

The desired output would be

words = ['how', 'much', 'is', 'the', 'fish', 'no', 'really']

Code :

[n[0][0] if len(n[0]) else n[1] for n in [[[w.replace(e,"") for e in error_list if e in w],w] for w in words]] 
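
The nested comprehension is hard to read; here is an explicit loop-based sketch that produces the same output for this input:

error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']

cleaned = []
for w in words:
    for e in error_list:
        if e in w:
            w = w.replace(e, "")
    cleaned.append(w)
print(cleaned)  # ['how', 'much', 'is', 'the', 'fish', 'no', 'really']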

回答 21

或者只是为了快速破解:

for line in to_read:
    read_buffer = line              
    stripped_buffer1 = read_buffer.replace("term1", " ")
    stripped_buffer2 = stripped_buffer1.replace("term2", " ")
    write_to_file = to_write.write(stripped_buffer2)

Or just for a fast hack:

for line in to_read:
    read_buffer = line              
    stripped_buffer1 = read_buffer.replace("term1", " ")
    stripped_buffer2 = stripped_buffer1.replace("term2", " ")
    write_to_file = to_write.write(stripped_buffer2)

回答 22

这是使用字典的另一种方法:

listA="The cat jumped over the house".split()
modify = {word:word for number,word in enumerate(listA)}
modify["cat"],modify["jumped"]="dog","walked"
print " ".join(modify[x] for x in listA)

Here is another way of doing it with a dictionary:

listA="The cat jumped over the house".split()
modify = {word:word for number,word in enumerate(listA)}
modify["cat"],modify["jumped"]="dog","walked"
print " ".join(modify[x] for x in listA)