Tag Archives: regex

How to input a regular expression in string.replace?

Question: How to input a regular expression in string.replace?


I need some help on declaring a regex. My inputs are like the following:

this is a paragraph with<[1> in between</[1> and then there are cases ... where the<[99> number ranges from 1-100</[99>. 
and there are many other lines in the txt files
with<[3> such tags </[3>

The required output is:

this is a paragraph with in between and then there are cases ... where the number ranges from 1-100. 
and there are many other lines in the txt files
with such tags

I’ve tried this:

#!/usr/bin/python
import os, sys, re, glob
for infile in glob.glob(os.path.join(os.getcwd(), '*.txt')):
    for line in reader: 
        line2 = line.replace('<[1> ', '')
        line = line2.replace('</[1> ', '')
        line2 = line.replace('<[1>', '')
        line = line2.replace('</[1>', '')

        print line

I’ve also tried this (but it seems like I’m using the wrong regex syntax):

    line2 = line.replace('<[*> ', '')
    line = line2.replace('</[*> ', '')
    line2 = line.replace('<[*>', '')
    line = line2.replace('</[*>', '')

I don't want to hard-code the replace from 1 to 99 . . .


Answer 0


This tested snippet should do it:

import re
line = re.sub(r"</?\[\d+>", "", line)

Edit: Here’s a commented version explaining how it works:

line = re.sub(r"""
  (?x) # Use free-spacing mode.
  <    # Match a literal '<'
  /?   # Optionally match a '/'
  \[   # Match a literal '['
  \d+  # Match one or more digits
  >    # Match a literal '>'
  """, "", line)

Regexes are fun! But I would strongly recommend spending an hour or two studying the basics. For starters, you need to learn which characters are special: “metacharacters” which need to be escaped (i.e. with a backslash placed in front – and the rules are different inside and outside character classes.) There is an excellent online tutorial at: www.regular-expressions.info. The time you spend there will pay for itself many times over. Happy regexing!


Answer 1


str.replace() does fixed replacements. Use re.sub() instead.
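
A minimal sketch of that swap, using the tag format from the question (the sample line is abbreviated):

import re

line = 'this is a paragraph with<[1> in between</[1>'
# str.replace('<[1>', '') only removes that exact text;
# re.sub takes a pattern, so one call covers <[1> through </[99>
print(re.sub(r'</?\[\d+>', '', line))
# this is a paragraph with in between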


Answer 2


I would do it like this (regex explained in the comments):

import re

# If you need to use the regex more than once it is suggested to compile it.
pattern = re.compile(r"</{0,}\[\d+>")

# <\/{0,}\[\d+>
# 
# Match the character “<” literally «<»
# Match the character “/” literally «\/{0,}»
#    Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «{0,}»
# Match the character “[” literally «\[»
# Match a single digit 0..9 «\d+»
#    Between one and unlimited times, as many times as possible, giving back as needed (greedy) «+»
# Match the character “>” literally «>»

subject = """this is a paragraph with<[1> in between</[1> and then there are cases ... where the<[99> number ranges from 1-100</[99>. 
and there are many other lines in the txt files
with<[3> such tags </[3>"""

result = pattern.sub("", subject)

print(result)

If you want to learn more about regex, I recommend reading Regular Expressions Cookbook by Jan Goyvaerts and Steven Levithan.


Answer 3


The easiest way

import re

txt='this is a paragraph with<[1> in between</[1> and then there are cases ... where the<[99> number ranges from 1-100</[99>.  and there are many other lines in the txt files with<[3> such tags </[3>'

out = re.sub("(<[^>]+>)", '', txt)
print out

Answer 4


The replace method of string objects does not accept regular expressions, only fixed strings (see the documentation: http://docs.python.org/2/library/stdtypes.html#str.replace).

You have to use re module:

import re
newline= re.sub("<\/?\[[0-9]+>", "", line)

Answer 5


You don't have to use a regular expression (for your sample string):

>>> s
'this is a paragraph with<[1> in between</[1> and then there are cases ... where the<[99> number ranges from 1-100</[99>. \nand there are many other lines in the txt files\nwith<[3> such tags </[3>\n'

>>> for w in s.split(">"):
...   if "<" in w:
...      print w.split("<")[0]
...
this is a paragraph with
 in between
 and then there are cases ... where the
 number ranges from 1-100
.
and there are many other lines in the txt files
with
 such tags
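
If you need the cleaned text back as one string instead of printed fragments, the same split trick can feed a join (a sketch, reusing the s from the transcript above, under the same assumption that < and > only appear in these tags):

cleaned = ''.join(w.split("<")[0] for w in s.split(">"))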

Answer 6

import os, sys, re, glob

# matches opening and closing tags such as <[1> and </[99>
pattern = re.compile(r"</?\[\d+>")

for infile in glob.glob(os.path.join(os.getcwd(), '*.txt')):
    with open(infile) as reader:
        for line in reader:
            retline = pattern.sub("", line)
            sys.stdout.write(retline)

Check if a string matches a pattern

Question: Check if a string matches a pattern


How do I check if a string matches this pattern?

Uppercase letter, number(s), uppercase letter, number(s)…

For example, these would match:

A1B2
B10L1
C1N200J1

These wouldn’t (‘^’ points to problem)

a1B2
^
A10B
   ^
AB400
^

Answer 0


import re
pattern = re.compile("^([A-Z][0-9]+)+$")
pattern.match(string)

Edit: As noted in the comments match checks only for matches at the beginning of the string while re.search() will match a pattern anywhere in string. (See also: https://docs.python.org/library/re.html#search-vs-match)
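
As a side note (not from the original answer): on Python 3.4+, re.fullmatch expresses the same whole-string check without the explicit anchors:

import re

pattern = re.compile("([A-Z][0-9]+)+")
print(bool(pattern.fullmatch("A1B2")))   # True
print(bool(pattern.fullmatch("A10B")))   # False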


Answer 1


One-liner: re.match(r"pattern", string) # No need to compile

import re
>>> if re.match(r"hello[0-9]+", 'hello1'):
...     print('Yes')
... 
Yes

You can evaluate it as bool if needed

>>> bool(re.match(r"hello[0-9]+", 'hello1'))
True

Answer 2


Please try the following:

import re

name = ["A1B1", "djdd", "B2C4", "C2H2", "jdoi","1A4V"]

# Match names.
for element in name:
     m = re.match("(^[A-Z]\d[A-Z]\d)", element)
     if m:
        print(m.groups())

Answer 3

import re
import sys

prog = re.compile('([A-Z]\d+)+')

while True:
  line = sys.stdin.readline()
  if not line: break

  if prog.match(line):
    print 'matched'
  else:
    print 'not matched'

Answer 4


Regular expressions make this easy…

[A-Z] will match exactly one character between A and Z

\d+ will match one or more digits

() group things (and also return things… but for now just think of them grouping)

+ selects 1 or more
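
Putting those pieces together on the strings from the question (a minimal sketch):

import re

pattern = re.compile(r"^([A-Z]\d+)+$")
for s in ("A1B2", "B10L1", "C1N200J1", "a1B2", "A10B", "AB400"):
    print(s, bool(pattern.match(s)))
# the first three print True, the last three False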


Answer 5

  

  
import re

ab = re.compile("^([A-Z]{1}[0-9]{1})+$")
ab.match(string)
  


I believe that should work for an uppercase, number pattern.


How can I find all matches to a regular expression in Python?

Question: How can I find all matches to a regular expression in Python?


In a program I’m writing I have Python use the re.search() function to find matches in a block of text and print the results. However, the program exits once it finds the first match in the block of text.

How do I do this repeatedly where the program doesn’t stop until ALL matches have been found? Is there a separate function to do this?


Answer 0


Use re.findall or re.finditer instead.

re.findall(pattern, string) returns a list of matching strings.

re.finditer(pattern, string) returns an iterator over MatchObject objects.

Example:

re.findall( r'all (.*?) are', 'all cats are smarter than dogs, all dogs are dumber than cats')
# Output: ['cats', 'dogs']

[x.group() for x in re.finditer( r'all (.*?) are', 'all cats are smarter than dogs, all dogs are dumber than cats')]
# Output: ['all cats are', 'all dogs are']

How to find all occurrences of a substring?

Question: How to find all occurrences of a substring?


Python has string.find() and string.rfind() to get the index of a substring in a string.

I’m wondering whether there is something like string.find_all() which can return all found indexes (not only the first from the beginning or the first from the end).

For example:

string = "test test test test"

print string.find('test') # 0
print string.rfind('test') # 15

#this is the goal
print string.find_all('test') # [0,5,10,15]

Answer 0


There is no simple built-in string function that does what you’re looking for, but you could use the more powerful regular expressions:

import re
[m.start() for m in re.finditer('test', 'test test test test')]
#[0, 5, 10, 15]

If you want to find overlapping matches, lookahead will do that:

[m.start() for m in re.finditer('(?=tt)', 'ttt')]
#[0, 1]

If you want a reverse find-all without overlaps, you can combine positive and negative lookahead into an expression like this:

search = 'tt'
[m.start() for m in re.finditer('(?=%s)(?!.{1,%d}%s)' % (search, len(search)-1, search), 'ttt')]
#[1]

re.finditer returns a generator, so you could change the [] in the above to () to get a generator instead of a list which will be more efficient if you’re only iterating through the results once.
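
For example, the generator form of the first snippet (a sketch):

import re

starts = (m.start() for m in re.finditer('test', 'test test test test'))
for pos in starts:   # positions are produced lazily, one at a time
    print(pos)       # 0, 5, 10, 15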


Answer 1


>>> help(str.find)
Help on method_descriptor:

find(...)
    S.find(sub [,start [,end]]) -> int

Thus, we can build it ourselves:

def find_all(a_str, sub):
    start = 0
    while True:
        start = a_str.find(sub, start)
        if start == -1: return
        yield start
        start += len(sub) # use start += 1 to find overlapping matches

list(find_all('spam spam spam spam', 'spam')) # [0, 5, 10, 15]

No temporary strings or regexes required.


Answer 2


Here’s a (very inefficient) way to get all (i.e. even overlapping) matches:

>>> string = "test test test test"
>>> [i for i in range(len(string)) if string.startswith('test', i)]
[0, 5, 10, 15]

Answer 3


Again, old thread, but here’s my solution using a generator and plain str.find.

def findall(p, s):
    '''Yields all the positions of
    the pattern p in the string s.'''
    i = s.find(p)
    while i != -1:
        yield i
        i = s.find(p, i+1)

Example

x = 'banananassantana'
[(i, x[i:i+2]) for i in findall('na', x)]

returns

[(2, 'na'), (4, 'na'), (6, 'na'), (14, 'na')]

Answer 4


You can use re.finditer() for non-overlapping matches.

>>> import re
>>> aString = 'this is a string where the substring "is" is repeated several times'
>>> print [(a.start(), a.end()) for a in list(re.finditer('is', aString))]
[(2, 4), (5, 7), (38, 40), (42, 44)]

but won’t work for:

In [1]: aString="ababa"

In [2]: print [(a.start(), a.end()) for a in list(re.finditer('aba', aString))]
Output: [(0, 3)]

Answer 5


Come, let us recurse together.

def locations_of_substring(string, substring):
    """Return a list of locations of a substring."""

    substring_length = len(substring)    
    def recurse(locations_found, start):
        location = string.find(substring, start)
        if location != -1:
            return recurse(locations_found + [location], location+substring_length)
        else:
            return locations_found

    return recurse([], 0)

print(locations_of_substring('this is a test for finding this and this', 'this'))
# prints [0, 27, 36]

No need for regular expressions this way.


Answer 6


If you’re just looking for a single character, this would work:

string = "dooobiedoobiedoobie"
match = 'o'
reduce(lambda count, char: count + 1 if char == match else count, string, 0)
# produces 7

Also,

string = "test test test test"
match = "test"
len(string.split(match)) - 1
# produces 4

My hunch is that neither of these (especially #2) is terribly performant.
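
For plain counting, the built-in str.count does the same job (a note beyond the original answer; it counts non-overlapping occurrences, not positions):

string = "test test test test"
print(string.count("test"))   # 4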


Answer 7


This is an old thread, but I got interested and wanted to share my solution.

def find_all(a_string, sub):
    result = []
    k = 0
    while k < len(a_string):
        k = a_string.find(sub, k)
        if k == -1:
            return result
        else:
            result.append(k)
            k += 1 #change to k += len(sub) to not search overlapping results
    return result

It should return a list of positions where the substring was found. Please comment if you see an error or room for improvement.


Answer 8


This does the trick for me using re.finditer

import re

text = 'This is sample text to test if this pythonic '\
       'program can serve as an indexing platform for '\
       'finding words in a paragraph. It can give '\
       'values as to where the word is located with the '\
       'different examples as stated'

#  find all occurrences of the word 'as' in the above text

find_the_word = re.finditer('as', text)

for match in find_the_word:
    print('start {}, end {}, search string \'{}\''.
          format(match.start(), match.end(), match.group()))

Answer 9


This thread is a little old but this worked for me:

numberString = "onetwothreefourfivesixseveneightninefiveten"
testString = "five"

marker = 0
while marker < len(numberString):
    try:
        print(numberString.index("five",marker))
        marker = numberString.index("five", marker) + 1
    except ValueError:
        print("String not found")
        marker = len(numberString)

Answer 10


You can try:

>>> string = "test test test test"
>>> for index,value in enumerate(string):
    if string[index:index+(len("test"))] == "test":
        print index

0
5
10
15

Answer 11


The solutions provided by others are all based on the available method find() (or other built-in methods).

What is the core, basic algorithm to find all the occurrences of a substring in a string?

def find_all(string, substring):
    """
    Function: Returning all the indexes of a substring in a string
    Arguments: String and the search string
    Return: Returning a list
    """
    length = len(substring)
    c = 0
    indexes = []
    while c < len(string):
        if string[c:c+length] == substring:
            indexes.append(c)
        c = c + 1
    return indexes

You can also subclass str and use the function as a method:

class newstr(str):
    def find_all(string, substring):
        """
        Function: Returning all the indexes of a substring in a string
        Arguments: String and the search string
        Return: Returning a list
        """
        length = len(substring)
        c = 0
        indexes = []
        while c < len(string):
            if string[c:c+length] == substring:
                indexes.append(c)
            c = c + 1
        return indexes

Calling the method:

newstr.find_all('Do you find this answer helpful? then upvote this!', 'this')


Answer 12


This function does not look at every position inside the string, so it does not waste compute resources. My attempt:

def findAll(string,word):
    all_positions=[]
    next_pos=-1
    while True:
        next_pos=string.find(word,next_pos+1)
        if(next_pos<0):
            break
        all_positions.append(next_pos)
    return all_positions

To use it, call it like this:

result=findAll('this word is a big word man how many words are there?','word')

Answer 13


When looking for a large number of keywords in a document, use flashtext:

from flashtext import KeywordProcessor
words = ['test', 'exam', 'quiz']
txt = 'this is a test'
kwp = KeywordProcessor()
kwp.add_keywords_from_list(words)
result = kwp.extract_keywords(txt, span_info=True)

Flashtext runs faster than regex on a large list of search words.
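
For comparison, a plain-regex equivalent of that keyword scan might look like this (a sketch, not part of the flashtext answer):

import re

words = ['test', 'exam', 'quiz']
txt = 'this is a test'
# re.escape guards against metacharacters inside the keywords
pattern = re.compile('|'.join(map(re.escape, words)))
print([(m.group(), m.start(), m.end()) for m in pattern.finditer(txt)])
# [('test', 10, 14)]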


Answer 14

src = input() # we will find substring in this string
sub = input() # substring

res = []
pos = src.find(sub)
while pos != -1:
    res.append(pos)
    pos = src.find(sub, pos + 1)

Answer 15


This is a solution to a similar question from HackerRank. I hope it helps you.

import re
a = input()
b = input()
if b not in a:
    print((-1,-1))
else:
    #create two list as
    start_indc = [m.start() for m in re.finditer('(?=' + b + ')', a)]
    for i in range(len(start_indc)):
        print((start_indc[i], start_indc[i]+len(b)-1))

Output:

aaadaa
aa
(0, 1)
(1, 2)
(4, 5)

Answer 16


By slicing, we find all the possible combinations, append them to a list, and find the number of occurrences using the count function:

s=input()
n=len(s)
l=[]
f=input()
print(s[0])
for i in range(0,n):
    for j in range(1,n+1):
        l.append(s[i:j])
if f in l:
    print(l.count(f))

Answer 17


Please look at the code below:

#!/usr/bin/env python
# coding:utf-8
'''黄哥Python'''


def get_substring_indices(text, s):
    result = [i for i in range(len(text)) if text.startswith(s, i)]
    return result


if __name__ == '__main__':
    text = "How much wood would a wood chuck chuck if a wood chuck could chuck wood?"
    s = 'wood'
    print get_substring_indices(text, s)

Answer 18


The pythonic way would be:

mystring = 'Hello World, this should work!'
find_all = lambda c,s: [x for x in range(c.find(s), len(c)) if c[x] == s]

# s represents the search string
# c represents the character string

find_all(mystring,'o')    # will return all positions of 'o'

[4, 7, 20, 26] 
>>> 

Answer 19

You can easily use:

string.count('test')

https://www.programiz.com/python-programming/methods/string/count

Cheers!


Case-insensitive regular expression without re.compile?

Question: Case-insensitive regular expression without re.compile?


In Python, I can compile a regular expression to be case-insensitive using re.compile:

>>> s = 'TeSt'
>>> casesensitive = re.compile('test')
>>> ignorecase = re.compile('test', re.IGNORECASE)
>>> 
>>> print casesensitive.match(s)
None
>>> print ignorecase.match(s)
<_sre.SRE_Match object at 0x02F0B608>

Is there a way to do the same, but without using re.compile? I can't find anything like Perl's i suffix (e.g. m/test/i) in the documentation.


Answer 0


Pass re.IGNORECASE to the flags param of search, match, or sub:

re.search('test', 'TeSt', re.IGNORECASE)
re.match('test', 'TeSt', re.IGNORECASE)
re.sub('test', 'xxxx', 'Testing', flags=re.IGNORECASE)

Answer 1


You can also perform case insensitive searches using search/match without the IGNORECASE flag (tested in Python 2.7.3):

re.search(r'(?i)test', 'TeSt').group()    ## returns 'TeSt'
re.match(r'(?i)test', 'TeSt').group()     ## returns 'TeSt'

Answer 2


The case-insensitive marker, (?i), can be incorporated directly into the regex pattern:

>>> import re
>>> s = 'This is one Test, another TEST, and another test.'
>>> re.findall('(?i)test', s)
['Test', 'TEST', 'test']

Answer 3


You can also make the pattern case-insensitive when compiling it:

pattern = re.compile('FIle:/+(.*)', re.IGNORECASE)

Answer 4


In the imports:

import re

In run time processing:

RE_TEST = r'test'
if re.match(RE_TEST, 'TeSt', re.IGNORECASE):

It should be mentioned that not using re.compile is wasteful. Every time the above match method is called, the regular expression will be compiled. This is also faulty practice in other programming languages. The practice below is better.

In app initialization:

self.RE_TEST = re.compile('test', re.IGNORECASE)

In run time processing:

if self.RE_TEST.match('TeSt'):

Answer 5

#'re.IGNORECASE' for case insensitive results short form re.I
#'re.match' returns the first match located from the start of the string. 
#'re.search' returns location of the where the match is found 
#'re.compile' creates a regex object that can be used for multiple matches

 >>> s = r'TeSt'   
 >>> print (re.match(s, r'test123', re.I))
 <_sre.SRE_Match object; span=(0, 4), match='test'>
 # OR
 >>> pattern = re.compile(s, re.I)
 >>> print(pattern.match(r'test123'))
 <_sre.SRE_Match object; span=(0, 4), match='test'>

Answer 6


To perform case-insensitive operations, supply re.IGNORECASE

>>> import re
>>> test = 'UPPER TEXT, lower text, Mixed Text'
>>> re.findall('text', test, flags=re.IGNORECASE)
['TEXT', 'text', 'Text']

and if we want to replace text matching the case…

>>> def matchcase(word):
        def replace(m):
            text = m.group()
            if text.isupper():
                return word.upper()
            elif text.islower():
                return word.lower()
            elif text[0].isupper():
                return word.capitalize()
            else:
                return word
        return replace

>>> re.sub('text', matchcase('word'), test, flags=re.IGNORECASE)
'UPPER WORD, lower word, Mixed Word'

Answer 7


If you would like to replace the matches but still keep the case style of the original str, that is possible.

For example: highlight the string “test asdasd TEST asd tEst asdasd”.

sentence = "test asdasd TEST asd tEst asdasd"
result = re.sub(
  '(test)', 
  r'<b>\1</b>',  # \1 here indicates first matching group.
  sentence, 
  flags=re.IGNORECASE)

<b>test</b> asdasd <b>TEST</b> asd <b>tEst</b> asdasd


Answer 8


For a case-insensitive regular expression (regex), there are two ways to do it in your code:

  1. flags=re.IGNORECASE

    Regx3GList = re.search("(WCDMA:)((\d*)(,?))*", txt, re.IGNORECASE)

  2. The case-insensitive marker (?i)

    Regx3GList = re.search("(?i)(WCDMA:)((\d*)(,?))*", txt)
    

What is the difference between re.search and re.match?

Question: What is the difference between re.search and re.match?


What is the difference between the search() and match() functions in the Python re module?

I’ve read the documentation (current documentation), but I never seem to remember it. I keep having to look it up and re-learn it. I’m hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I’ll have a better place to return with my question and it will take less time to re-learn it.


Answer 0


re.match is anchored at the beginning of the string. That has nothing to do with newlines, so it is not the same as using ^ in the pattern.

As the re.match documentation says:

If zero or more characters at the beginning of string match the regular expression pattern, return a corresponding MatchObject instance. Return None if the string does not match the pattern; note that this is different from a zero-length match.

Note: If you want to locate a match anywhere in string, use search() instead.

re.search searches the entire string, as the documentation says:

Scan through string looking for a location where the regular expression pattern produces a match, and return a corresponding MatchObject instance. Return None if no position in the string matches the pattern; note that this is different from finding a zero-length match at some point in the string.

So if you need to match at the beginning of the string, or to match the entire string use match. It is faster. Otherwise use search.

The documentation has a specific section for match vs. search that also covers multiline strings:

Python offers two different primitive operations based on regular expressions: match checks for a match only at the beginning of the string, while search checks for a match anywhere in the string (this is what Perl does by default).

Note that match may differ from search even when using a regular expression beginning with '^': '^' matches only at the start of the string, or in MULTILINE mode also immediately following a newline. The “match” operation succeeds only if the pattern matches at the start of the string regardless of mode, or at the starting position given by the optional pos argument regardless of whether a newline precedes it.

Now, enough talk. Time to see some example code:

# example code:
string_with_newlines = """something
someotherthing"""

import re

print re.match('some', string_with_newlines) # matches
print re.match('someother', 
               string_with_newlines) # won't match
print re.match('^someother', string_with_newlines, 
               re.MULTILINE) # also won't match
print re.search('someother', 
                string_with_newlines) # finds something
print re.search('^someother', string_with_newlines, 
                re.MULTILINE) # also finds something

m = re.compile('thing$', re.MULTILINE)

print m.match(string_with_newlines) # no match
print m.match(string_with_newlines, pos=4) # matches
print m.search(string_with_newlines) # also matches (note: a compiled pattern's search takes pos, not flags; flags belong in re.compile)

Answer 1


search ⇒ find something anywhere in the string and return a match object.

match ⇒ find something at the beginning of the string and return a match object.
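
A two-line illustration (a sketch):

import re

print(re.search('World', 'Hello World'))   # finds a match in the middle
print(re.match('World', 'Hello World'))    # None: 'World' is not at the start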


Answer 2


re.search searches for the pattern throughout the string, whereas re.match does not search; it has no choice but to match at the start of the string.


Answer 3


match is much faster than search, so instead of doing regex.search(“word”) you can do regex.match((.*?)word(.*?)) and gain tons of performance if you are working with millions of samples.

This comment from @ivan_bilan under the accepted answer above got me thinking about whether such a hack actually speeds anything up, so let's find out how many tons of performance you will really gain.

I prepared the following test suite:

import random
import re
import string
import time

LENGTH = 10
LIST_SIZE = 1000000

def generate_word():
    word = [random.choice(string.ascii_lowercase) for _ in range(LENGTH)]
    word = ''.join(word)
    return word

wordlist = [generate_word() for _ in range(LIST_SIZE)]

start = time.time()
[re.search('python', word) for word in wordlist]
print('search:', time.time() - start)

start = time.time()
[re.match('(.*?)python(.*?)', word) for word in wordlist]
print('match:', time.time() - start)

I made 10 measurements (1M, 2M, …, 10M words) which gave me the following plot:

The resulting lines are surprisingly (actually not that surprisingly) straight. And the search function is (slightly) faster given this specific pattern combination. The moral of this test: Avoid overoptimizing your code.


Answer 4


You can refer to the example below to understand how re.match and re.search work:

a = "123abc"
t = re.match("[a-z]+",a)
t = re.search("[a-z]+",a)

re.match will return None, but re.search will return abc.


Answer 5


The difference is, re.match() misleads anyone accustomed to Perl, grep, or sed regular expression matching, and re.search() does not. :-)

More soberly, as John D. Cook remarks, re.match() "behaves as if every pattern has ^ prepended." In other words, re.match('pattern') equals re.search('^pattern'). So it anchors a pattern's left side. But it also doesn't anchor a pattern's right side: that still requires a terminating $.

Frankly, given the above, I think re.match() should be deprecated. I would be interested to know the reasons it should be retained.
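
A quick sketch of the equivalence noted above:

import re

s = 'say hello'
print(bool(re.match('hello', s)))     # False: anchored at the start
print(bool(re.search('^hello', s)))   # False: same behavior
print(bool(re.match('say', s)))       # True
print(bool(re.search('^say', s)))     # True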


Answer 6


re.match attempts to match a pattern at the beginning of the string. re.search attempts to match the pattern throughout the string until it finds a match.


Answer 7


Much shorter:

  • search scans through the whole string.

  • match scans only the beginning of the string.

The following example shows it:

>>> a = "123abc"
>>> print(re.match("[a-z]+", a))
None
>>> re.search("[a-z]+", a).group()
'abc'

Is it worth using Python's re.compile?

Question: Is it worth using Python's re.compile?


Is there any benefit in using compile for regular expressions in Python?

h = re.compile('hello')
h.match('hello world')

vs

re.match('hello', 'hello world')

Answer 0


I’ve had a lot of experience running a compiled regex 1000s of times versus compiling on-the-fly, and have not noticed any perceivable difference. Obviously, this is anecdotal, and certainly not a great argument against compiling, but I’ve found the difference to be negligible.

EDIT: After a quick glance at the actual Python 2.5 library code, I see that Python internally compiles AND CACHES regexes whenever you use them anyway (including calls to re.match()), so you’re really only changing WHEN the regex gets compiled, and shouldn’t be saving much time at all – only the time it takes to check the cache (a key lookup on an internal dict type).

From module re.py (comments are mine):

def match(pattern, string, flags=0):
    return _compile(pattern, flags).match(string)

def _compile(*key):

    # Does cache check at top of function
    cachekey = (type(key[0]),) + key
    p = _cache.get(cachekey)
    if p is not None: return p

    # ...
    # Does actual compilation on cache miss
    # ...

    # Caches compiled regex
    if len(_cache) >= _MAXCACHE:
        _cache.clear()
    _cache[cachekey] = p
    return p

I still often pre-compile regular expressions, but only to bind them to a nice, reusable name, not for any expected performance gain.
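
That naming-only use of pre-compilation might look like this (a sketch; the pattern and names are illustrative, not from the answer):

import re

# compiled once, bound to a readable, reusable name
US_PHONE_RE = re.compile(r'\d{3}-\d{3}-\d{4}')

def is_us_phone(s):
    return bool(US_PHONE_RE.match(s))

print(is_us_phone('123-456-7890'))   # True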


Answer 1


For me, the biggest benefit to re.compile is being able to separate definition of the regex from its use.

Even a simple expression such as 0|[1-9][0-9]* (integer in base 10 without leading zeros) can be complex enough that you’d rather not have to retype it, check if you made any typos, and later have to recheck if there are typos when you start debugging. Plus, it’s nicer to use a variable name such as num or num_b10 than 0|[1-9][0-9]*.

It’s certainly possible to store strings and pass them to re.match; however, that’s less readable:

num = "..."
# then, much later:
m = re.match(num, input)

Versus compiling:

num = re.compile("...")
# then, much later:
m = num.match(input)

Though it is fairly close, the last line of the second feels more natural and simpler when used repeatedly.


Answer 2


FWIW:

$ python -m timeit -s "import re" "re.match('hello', 'hello world')"
100000 loops, best of 3: 3.82 usec per loop

$ python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
1000000 loops, best of 3: 1.26 usec per loop

so, if you’re going to be using the same regex a lot, it may be worth it to do re.compile (especially for more complex regexes).

The standard arguments against premature optimization apply, but I don’t think you really lose much clarity/straightforwardness by using re.compile if you suspect that your regexps may become a performance bottleneck.

Update:

Under Python 3.6 (I suspect the above timings were done using Python 2.x) and 2018 hardware (MacBook Pro), I now get the following timings:

% python -m timeit -s "import re" "re.match('hello', 'hello world')"
1000000 loops, best of 3: 0.661 usec per loop

% python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
1000000 loops, best of 3: 0.285 usec per loop

% python -m timeit -s "import re" "h=re.compile('hello'); h.match('hello world')"
1000000 loops, best of 3: 0.65 usec per loop

% python --version
Python 3.6.5 :: Anaconda, Inc.

I also added a case (notice the quotation mark differences between the last two runs) that shows that re.match(x, ...) is literally [roughly] equivalent to re.compile(x).match(...), i.e. no behind-the-scenes caching of the compiled representation seems to happen.


Answer 3


Here’s a simple test case:

~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 're.match("[0-9]{3}-[0-9]{3}-[0-9]{4}", "123-123-1234")'; done
1 loops, best of 3: 3.1 usec per loop
10 loops, best of 3: 2.41 usec per loop
100 loops, best of 3: 2.24 usec per loop
1000 loops, best of 3: 2.21 usec per loop
10000 loops, best of 3: 2.23 usec per loop
100000 loops, best of 3: 2.24 usec per loop
1000000 loops, best of 3: 2.31 usec per loop

with re.compile:

~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 'r = re.compile("[0-9]{3}-[0-9]{3}-[0-9]{4}")' 'r.match("123-123-1234")'; done
1 loops, best of 3: 1.91 usec per loop
10 loops, best of 3: 0.691 usec per loop
100 loops, best of 3: 0.701 usec per loop
1000 loops, best of 3: 0.684 usec per loop
10000 loops, best of 3: 0.682 usec per loop
100000 loops, best of 3: 0.694 usec per loop
1000000 loops, best of 3: 0.702 usec per loop

So, it would seem that compiling is faster in this simple case, even if you only match once.


Answer 4


I just tried this myself. For the simple case of parsing a number out of a string and summing it, using a compiled regular expression object is about twice as fast as using the re methods.

As others have pointed out, the re methods (including re.compile) look up the regular expression string in a cache of previously compiled expressions. Therefore, in the normal case, the extra cost of using the re methods is simply the cost of the cache lookup.

However, examination of the code shows the cache is limited to 100 expressions. This begs the question: how painful is it to overflow the cache? The code contains an internal interface to the regular expression compiler, re.sre_compile.compile. If we call it, we bypass the cache. It turns out to be about two orders of magnitude slower for a basic regular expression, such as r'\w+\s+([0-9_]+)\s+\w*'.

Here’s my test:

#!/usr/bin/env python
import re
import time

def timed(func):
    def wrapper(*args):
        t = time.time()
        result = func(*args)
        t = time.time() - t
        print '%s took %.3f seconds.' % (func.func_name, t)
        return result
    return wrapper

regularExpression = r'\w+\s+([0-9_]+)\s+\w*'
testString = "average    2 never"

@timed
def noncompiled():
    a = 0
    for x in xrange(1000000):
        m = re.match(regularExpression, testString)
        a += int(m.group(1))
    return a

@timed
def compiled():
    a = 0
    rgx = re.compile(regularExpression)
    for x in xrange(1000000):
        m = rgx.match(testString)
        a += int(m.group(1))
    return a

@timed
def reallyCompiled():
    a = 0
    rgx = re.sre_compile.compile(regularExpression)
    for x in xrange(1000000):
        m = rgx.match(testString)
        a += int(m.group(1))
    return a


@timed
def compiledInLoop():
    a = 0
    for x in xrange(1000000):
        rgx = re.compile(regularExpression)
        m = rgx.match(testString)
        a += int(m.group(1))
    return a

@timed
def reallyCompiledInLoop():
    a = 0
    for x in xrange(10000):
        rgx = re.sre_compile.compile(regularExpression)
        m = rgx.match(testString)
        a += int(m.group(1))
    return a

r1 = noncompiled()
r2 = compiled()
r3 = reallyCompiled()
r4 = compiledInLoop()
r5 = reallyCompiledInLoop()
print "r1 = ", r1
print "r2 = ", r2
print "r3 = ", r3
print "r4 = ", r4
print "r5 = ", r5
And here is the output on my machine:
$ regexTest.py 
noncompiled took 4.555 seconds.
compiled took 2.323 seconds.
reallyCompiled took 2.325 seconds.
compiledInLoop took 4.620 seconds.
reallyCompiledInLoop took 4.074 seconds.
r1 =  2000000
r2 =  2000000
r3 =  2000000
r4 =  2000000
r5 =  20000

The ‘reallyCompiled’ methods use the internal interface, which bypasses the cache. Note the one that compiles on each loop iteration is only iterated 10,000 times, not one million.
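
The cache size discussed above is exposed as an undocumented, version-dependent attribute, so you can inspect it directly (an aside, not from the original answer):

import re

# implementation detail: 100 in the Python 2.x era discussed here, larger on modern 3.x
print(re._MAXCACHE)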


Answer 5

我同意诚实的安倍晋三的观点,即match(...)所给的示例不同。它们不是一对一的比较,因此结果会有所不同。为了简化我的答复,我将A,B,C,D用于这些功能。哦,是的,我们正在处理4个函数,re.py而不是3个。

运行这段代码:

h = re.compile('hello')                   # (A)
h.match('hello world')                    # (B)

与运行此代码相同:

re.match('hello', 'hello world')          # (C)

因为,当查看源代码时re.py,(A + B)表示:

h = re._compile('hello')                  # (D)
h.match('hello world')

(C)实际上是:

re._compile('hello').match('hello world')

因此,(C)与(B)不同。实际上,(C)在调用(D)之后又调用(B),后者也被(A)调用。换句话说,(C) = (A) + (B)。因此,比较循环内的(A + B)与循环内的(C)具有相同的结果。

乔治regexTest.py为我们证明了这一点。

noncompiled took 4.555 seconds.           # (C) in a loop
compiledInLoop took 4.620 seconds.        # (A + B) in a loop
compiled took 2.323 seconds.              # (A) once + (B) in a loop

每个人的兴趣在于,如何获得2.323秒的结果。为了确保compile(...)仅被调用一次,我们需要将已编译的regex对象存储在内存中。如果使用的是类,则可以存储对象并在每次调用函数时重用。

class Foo:
    regex = re.compile('hello')   # compiled once, stored on the class
    def my_function(self, text):
        return self.regex.match(text)

如果我们不使用类(今天是我的要求),那么我无可奉告。我仍在学习在Python中使用全局变量,并且我知道全局变量是一件坏事。

还有一点,我认为使用(A) + (B)方法具有优势。这是我观察到的一些事实(如果我记错了,请纠正我):

  1. 调用A一次,它将先搜索一次,然后再搜索_cache一次sre_compile.compile()以创建正则表达式对象。调用A两次,它将进行两次搜索和一次编译(因为正则表达式对象已缓存)。

  2. 如果 _cache两者之间被刷新,则正则表达式对象将从内存中释放出来,Python需要再次编译。(有人建议Python不会重新编译。)

  3. 如果我们使用(A)保留regex对象,则regex对象仍将进入_cache并以某种方式刷新。但是我们的代码对此保留了引用,并且正则表达式对象不会从内存中释放。这些,Python无需再次编译。

  4. George 测试中 compiledInLoop 与 compiled 之间 2 秒的差异,主要是构建缓存键并搜索 _cache 所需的时间,并不代表正则表达式的编译时间。

  5. George的真正编译测试显示了每次真正重新进行编译会发生什么情况:它将慢100倍(他将循环从1,000,000减少到10,000)。

以下是(A + B)优于(C)的情况:

  1. 如果我们可以在一个类中缓存正则表达式对象的引用。
  2. 如果需要重复(在循环内或多次)调用(B),则必须在循环外缓存对regex对象的引用。

(C)足够好的情况:

  1. 我们无法缓存参考。
  2. 我们只会偶尔使用一次。
  3. 总的来说,我们没有太多的正则表达式(假设编译过的正则表达式永远不会被刷新)

回顾一下,这里是ABC:

h = re.compile('hello')                   # (A)
h.match('hello world')                    # (B)
re.match('hello', 'hello world')          # (C)

谢谢阅读。

I agree with Honest Abe that the match(...) calls in the given examples are different. They are not one-to-one comparisons and thus the outcomes vary. To simplify my reply, I use A, B, C, D for the functions in question. Oh yes, we are dealing with 4 functions in re.py instead of 3.

Running this piece of code:

h = re.compile('hello')                   # (A)
h.match('hello world')                    # (B)

is same as running this code:

re.match('hello', 'hello world')          # (C)

Because, when looked into the source re.py, (A + B) means:

h = re._compile('hello')                  # (D)
h.match('hello world')

and (C) is actually:

re._compile('hello').match('hello world')

So, (C) is not the same as (B). In fact, (C) calls (B) after calling (D), which is also called by (A). In other words, (C) = (A) + (B). Therefore, comparing (A + B) inside a loop gives the same result as (C) inside a loop.

George’s regexTest.py proved this for us.

noncompiled took 4.555 seconds.           # (C) in a loop
compiledInLoop took 4.620 seconds.        # (A + B) in a loop
compiled took 2.323 seconds.              # (A) once + (B) in a loop

The interesting question is how to get the result of 2.323 seconds. To make sure compile(...) gets called only once, we need to store the compiled regex object in memory. If we are using a class, we could store the object and reuse it every time our function gets called.

class Foo:
    regex = re.compile('hello')   # compiled once, stored on the class
    def my_function(self, text):
        return self.regex.match(text)

If we are not using a class (which is my case today), then I have no comment. I'm still learning to use global variables in Python, and I know global variables are considered a bad thing.
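
If a class is not available, a common alternative (a minimal sketch, not from the original answer) is a module-level constant, compiled once at import time and reused by every caller:

import re

# Compiled once when the module is imported; my_function reuses it on every call.
HELLO_REGEX = re.compile('hello')

def my_function(text):
    return HELLO_REGEX.match(text)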

One more point: I believe the (A) + (B) approach has the upper hand. Here are some facts as I observed them (please correct me if I'm wrong):

  1. Call A once, and it will do one search in the _cache followed by one sre_compile.compile() to create a regex object. Call A twice, and it will do two searches and one compile (because the regex object is cached).

  2. If the _cache gets flushed in between, then the regex object is released from memory and Python needs to compile again. (Someone suggested that Python won't recompile.)

  3. If we keep the regex object by using (A), the regex object will still get into the _cache and may get flushed somehow. But our code keeps a reference to it, so the regex object will not be released from memory. Thus, Python need not compile it again.

  4. The 2-second difference between George's compiledInLoop and compiled tests is mainly the time required to build the key and search the _cache. It doesn't reflect the compile time of the regex.

  5. George's reallyCompiledInLoop test shows what happens if it really re-does the compile every time: it is about 100x slower (he reduced the loop from 1,000,000 to 10,000).

Here are the only cases where (A + B) is better than (C):

  1. If we can cache a reference to the regex object inside a class.
  2. If we need to call (B) repeatedly (inside a loop or multiple times), we must cache the reference to the regex object outside the loop.

Cases where (C) is good enough:

  1. We cannot cache a reference.
  2. We only use it once in a while.
  3. Overall, we don't have too many regexes (assuming the compiled ones never get flushed).

Just a recap, here are the A B C:

h = re.compile('hello')                   # (A)
h.match('hello world')                    # (B)
re.match('hello', 'hello world')          # (C)

Thanks for reading.


回答 6

通常,是否使用re.compile几乎没有区别。在内部,所有功能都是通过编译步骤实现的:

def match(pattern, string, flags=0):
    return _compile(pattern, flags).match(string)

def fullmatch(pattern, string, flags=0):
    return _compile(pattern, flags).fullmatch(string)

def search(pattern, string, flags=0):
    return _compile(pattern, flags).search(string)

def sub(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).sub(repl, string, count)

def subn(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).subn(repl, string, count)

def split(pattern, string, maxsplit=0, flags=0):
    return _compile(pattern, flags).split(string, maxsplit)

def findall(pattern, string, flags=0):
    return _compile(pattern, flags).findall(string)

def finditer(pattern, string, flags=0):
    return _compile(pattern, flags).finditer(string)

另外,re.compile()绕过了额外的间接和缓存逻辑:

_cache = {}

_pattern_type = type(sre_compile.compile("", 0))

_MAXCACHE = 512
def _compile(pattern, flags):
    # internal: compile pattern
    try:
        p, loc = _cache[type(pattern), pattern, flags]
        if loc is None or loc == _locale.setlocale(_locale.LC_CTYPE):
            return p
    except KeyError:
        pass
    if isinstance(pattern, _pattern_type):
        if flags:
            raise ValueError(
                "cannot process flags argument with a compiled pattern")
        return pattern
    if not sre_compile.isstring(pattern):
        raise TypeError("first argument must be string or compiled pattern")
    p = sre_compile.compile(pattern, flags)
    if not (flags & DEBUG):
        if len(_cache) >= _MAXCACHE:
            _cache.clear()
        if p.flags & LOCALE:
            if not _locale:
                return p
            loc = _locale.setlocale(_locale.LC_CTYPE)
        else:
            loc = None
        _cache[type(pattern), pattern, flags] = p, loc
    return p

除了使用re.compile带来的小速度优势外,人们还喜欢命名潜在的复杂模式规范并将其与应用了业务逻辑的业务逻辑分开的可读性:

#### Patterns ############################################################
number_pattern = re.compile(r'\d+(\.\d*)?')    # Integer or decimal number
assign_pattern = re.compile(r':=')             # Assignment operator
identifier_pattern = re.compile(r'[A-Za-z]+')  # Identifiers
whitespace_pattern = re.compile(r'[\t ]+')     # Spaces and tabs

#### Applications ########################################################

if whitespace_pattern.match(s): business_logic_rule_1()
if assign_pattern.match(s): business_logic_rule_2()

请注意,另一位受访者错误地认为pyc文件直接存储了已编译的模式。但是,实际上,每次加载PYC时都会对其进行重建:

>>> from dis import dis
>>> with open('tmp.pyc', 'rb') as f:
        f.read(8)
        dis(marshal.load(f))

  1           0 LOAD_CONST               0 (-1)
              3 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (re)
              9 STORE_NAME               0 (re)

  3          12 LOAD_NAME                0 (re)
             15 LOAD_ATTR                1 (compile)
             18 LOAD_CONST               2 ('[aeiou]{2,5}')
             21 CALL_FUNCTION            1
             24 STORE_NAME               2 (lc_vowels)
             27 LOAD_CONST               1 (None)
             30 RETURN_VALUE

上面的反汇编来自PYC文件,其中tmp.py包含:

import re
lc_vowels = re.compile(r'[aeiou]{2,5}')

Mostly, there is little difference whether you use re.compile or not. Internally, all of the functions are implemented in terms of a compile step:

def match(pattern, string, flags=0):
    return _compile(pattern, flags).match(string)

def fullmatch(pattern, string, flags=0):
    return _compile(pattern, flags).fullmatch(string)

def search(pattern, string, flags=0):
    return _compile(pattern, flags).search(string)

def sub(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).sub(repl, string, count)

def subn(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).subn(repl, string, count)

def split(pattern, string, maxsplit=0, flags=0):
    return _compile(pattern, flags).split(string, maxsplit)

def findall(pattern, string, flags=0):
    return _compile(pattern, flags).findall(string)

def finditer(pattern, string, flags=0):
    return _compile(pattern, flags).finditer(string)

In addition, re.compile() bypasses the extra indirection and caching logic:

_cache = {}

_pattern_type = type(sre_compile.compile("", 0))

_MAXCACHE = 512
def _compile(pattern, flags):
    # internal: compile pattern
    try:
        p, loc = _cache[type(pattern), pattern, flags]
        if loc is None or loc == _locale.setlocale(_locale.LC_CTYPE):
            return p
    except KeyError:
        pass
    if isinstance(pattern, _pattern_type):
        if flags:
            raise ValueError(
                "cannot process flags argument with a compiled pattern")
        return pattern
    if not sre_compile.isstring(pattern):
        raise TypeError("first argument must be string or compiled pattern")
    p = sre_compile.compile(pattern, flags)
    if not (flags & DEBUG):
        if len(_cache) >= _MAXCACHE:
            _cache.clear()
        if p.flags & LOCALE:
            if not _locale:
                return p
            loc = _locale.setlocale(_locale.LC_CTYPE)
        else:
            loc = None
        _cache[type(pattern), pattern, flags] = p, loc
    return p

In addition to the small speed benefit from using re.compile, people also like the readability that comes from naming potentially complex pattern specifications and separating them from the business logic where they are applied:

#### Patterns ############################################################
number_pattern = re.compile(r'\d+(\.\d*)?')    # Integer or decimal number
assign_pattern = re.compile(r':=')             # Assignment operator
identifier_pattern = re.compile(r'[A-Za-z]+')  # Identifiers
whitespace_pattern = re.compile(r'[\t ]+')     # Spaces and tabs

#### Applications ########################################################

if whitespace_pattern.match(s): business_logic_rule_1()
if assign_pattern.match(s): business_logic_rule_2()

Note, one other respondent incorrectly believed that pyc files stored compiled patterns directly; however, in reality they are rebuilt each time the PYC is loaded:

>>> from dis import dis
>>> with open('tmp.pyc', 'rb') as f:
        f.read(8)
        dis(marshal.load(f))

  1           0 LOAD_CONST               0 (-1)
              3 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (re)
              9 STORE_NAME               0 (re)

  3          12 LOAD_NAME                0 (re)
             15 LOAD_ATTR                1 (compile)
             18 LOAD_CONST               2 ('[aeiou]{2,5}')
             21 CALL_FUNCTION            1
             24 STORE_NAME               2 (lc_vowels)
             27 LOAD_CONST               1 (None)
             30 RETURN_VALUE

The above disassembly comes from the PYC file for a tmp.py containing:

import re
lc_vowels = re.compile(r'[aeiou]{2,5}')
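
For reproduction, the pyc in question can be generated from tmp.py with the standard library (a sketch; note that on Python 3 the file lands in __pycache__ rather than next to the source):

import py_compile
py_compile.compile('tmp.py')   # writes tmp.pyc (Python 2 behavior, as disassembled above)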

回答 7

通常,我发现在编译模式时使用标志(例如 re.I)比内联使用标志更容易(至少更容易记住用法)。

>>> foo_pat = re.compile('foo',re.I)
>>> foo_pat.findall('some string FoO bar')
['FoO']

>>> re.findall('(?i)foo','some string FoO bar')
['FoO']

In general, I find it is easier to use flags (at least easier to remember how), like re.I when compiling patterns than to use flags inline.

>>> foo_pat = re.compile('foo',re.I)
>>> foo_pat.findall('some string FoO bar')
['FoO']

vs

>>> re.findall('(?i)foo','some string FoO bar')
['FoO']

回答 8

使用给定的示例:

h = re.compile('hello')
h.match('hello world')

上面的示例中的match方法与以下使用的方法不同:

re.match('hello', 'hello world')

re.compile()返回一个正则表达式对象,这意味着它h是一个正则表达式对象。

regex对象具有自己的match方法,该方法带有可选的posendpos参数:

regex.match(string[, pos[, endpos]])

位置

可选的第二个参数pos在开始搜索的字符串中给出一个索引;它默认为0。这并不完全等同于切片字符串;该'^'模式字符在字符串的真正开始,并在仅仅一个换行符后的位置相匹配,但不一定,其中搜索是启动索引。

端点

可选参数endpos限制了将搜索字符串的距离;就像字符串是endpos字符长一样,因此仅搜索从pos到的字符endpos - 1进行匹配。如果endpos小于pos,则不会找到匹配项;否则,如果rx是已编译的正则表达式对象,rx.search(string, 0, 50)则等效于rx.search(string[:50], 0)

regex对象的searchfindallfinditer方法也支持这些参数。

re.match(pattern, string, flags=0)如您所见,它不支持它们,
也不支持searchfindallfinditer对应项。

一个匹配对象有补充这些参数属性:

match.pos

传递给正则表达式对象的search()或match()方法的pos值。这是RE引擎开始寻找匹配项的字符串索引。

match.endpos

传递给正则表达式对象的search()或match()方法的endpos的值。这是字符串的索引,RE引擎将超出该索引。


一个正则表达式对象有两个独特的,可能有用的,属性:

正则表达式组

模式中的捕获组数。

正则表达式

字典,将由(?P)定义的任何符号组名映射到组号。如果模式中未使用任何符号组,则词典为空。


最后,match对象具有以下属性:

匹配

其match()或search()方法产生此match实例的正则表达式对象。

Using the given examples:

h = re.compile('hello')
h.match('hello world')

The match method in the example above is not the same as the one used below:

re.match('hello', 'hello world')

re.compile() returns a regular expression object, which means h is a regex object.

The regex object has its own match method with the optional pos and endpos parameters:

regex.match(string[, pos[, endpos]])

pos

The optional second parameter pos gives an index in the string where the search is to start; it defaults to 0. This is not completely equivalent to slicing the string; the '^' pattern character matches at the real beginning of the string and at positions just after a newline, but not necessarily at the index where the search is to start.

endpos

The optional parameter endpos limits how far the string will be searched; it will be as if the string is endpos characters long, so only the characters from pos to endpos - 1 will be searched for a match. If endpos is less than pos, no match will be found; otherwise, if rx is a compiled regular expression object, rx.search(string, 0, 50) is equivalent to rx.search(string[:50], 0).

The regex object’s search, findall, and finditer methods also support these parameters.

re.match(pattern, string, flags=0) does not support them as you can see,
nor do its search, findall, and finditer counterparts.
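
A small sketch (not from the original answer) of what pos and endpos do on a compiled pattern:

>>> import re
>>> word = re.compile(r'\w+')
>>> word.match('hello world', 6).group()     # matching starts at index 6
'world'
>>> word.match('hello world', 0, 5).group()  # as if the string were 5 chars long
'hello'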

A match object has attributes that complement these parameters:

match.pos

The value of pos which was passed to the search() or match() method of a regex object. This is the index into the string at which the RE engine started looking for a match.

match.endpos

The value of endpos which was passed to the search() or match() method of a regex object. This is the index into the string beyond which the RE engine will not go.


A regex object has two unique, possibly useful, attributes:

regex.groups

The number of capturing groups in the pattern.

regex.groupindex

A dictionary mapping any symbolic group names defined by (?P) to group numbers. The dictionary is empty if no symbolic groups were used in the pattern.


And finally, a match object has this attribute:

match.re

The regular expression object whose match() or search() method produced this match instance.
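
Putting these attributes together in a quick sketch (illustrative, not from the original answer):

>>> import re
>>> pat = re.compile(r'(?P<area>\d{3})-(?P<line>\d{4})')
>>> m = pat.search('call 555-0199 now')
>>> pat.groups          # number of capturing groups
2
>>> pat.groupindex      # symbolic names -> group numbers
{'area': 1, 'line': 2}
>>> m.re is pat         # the pattern object that produced this match
True
>>> m.pos, m.endpos     # defaults: the whole string
(0, 17)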


回答 9

除了性能差异外,使用re.compile和使用编译后的正则表达式对象进行匹配(无论与正则表达式相关的任何操作)都使语义在Python运行时更清晰。

我有一些调试一些简单代码的痛苦经历:

compare = lambda s, p: re.match(p, s)

后来我用比较

[x for x in data if compare(patternPhrases, x[columnIndex])]

其中patternPhrases被认为是含有正则表达式字符串变量,x[columnIndex]的变量是包含字符串的变量。

我遇到了patternPhrases与某些预期字符串不匹配的问题!

但是,如果我使用re.compile形式:

compare = lambda s, p: p.match(s)

然后在

[x for x in data if compare(patternPhrases, x[columnIndex])]

Python会抱怨“字符串没有匹配的属性”,如在位置参数映射comparex[columnIndex]作为正则表达式!当我真正的意思

compare = lambda p, s: p.match(s)

在我的情况下,使用re.compile可以更清楚地说明正则表达式的目的,当它的值被肉眼隐藏时,因此可以从Python运行时检查中获得更多帮助。

因此,我这堂课的寓意是,当正则表达式不仅仅是文字字符串时,我应该使用re.compile来让Python帮助我断言自己的假设。

Performance differences aside, using re.compile and using the compiled regular expression object to do the matching (or any other regex-related operation) makes the semantics clearer to the Python run-time.

I had some painful experience of debugging some simple code:

compare = lambda s, p: re.match(p, s)

and later I’d use compare in

[x for x in data if compare(patternPhrases, x[columnIndex])]

where patternPhrases is supposed to be a variable containing a regular expression string, and x[columnIndex] is a variable containing a string.

The trouble was that patternPhrases did not match some expected strings!

But if I used the re.compile form:

compare = lambda s, p: p.match(s)

then in

[x for x in data if compare(patternPhrases, x[columnIndex])]

Python would have complained that “string does not have attribute of match”, since by positional argument mapping in compare, x[columnIndex] is used as the regular expression, when I actually meant

compare = lambda p, s: p.match(s)

In my case, using re.compile makes the purpose of the regular expression more explicit when its value is hidden from the naked eye, so I could get more help from Python’s run-time checking.

So the moral of my lesson is that when the regular expression is not just a literal string, I should use re.compile to let Python help me assert my assumptions.


回答 10

使用re.compile()有一个额外的好处,即使用re.VERBOSE向我的正则表达式模式添加注释

pattern = '''
hello[ ]world    # Some info on my pattern logic. [ ] to recognize space
'''

re.search(pattern, 'hello world', re.VERBOSE)

尽管这不会影响运行代码的速度,但我喜欢这样做,因为它是我注释习惯的一部分。当我想进行修改时,我完全不喜欢花时间试图记住代码后面2个月的逻辑。

There is one additional perk of using re.compile(): adding comments to my regex patterns using re.VERBOSE.

pattern = '''
hello[ ]world    # Some info on my pattern logic. [ ] to recognize space
'''

re.search(pattern, 'hello world', re.VERBOSE)

Although this does not affect the speed of running your code, I like to do it this way as it is part of my commenting habit. I thoroughly dislike spending time trying to remember the logic behind my code 2 months down the line when I want to make modifications.
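
For reference, the same commented pattern can be precompiled with the flag baked in, so the comments travel with the compiled object (a sketch based on the snippet above):

verbose_pattern = re.compile('''
hello[ ]world    # Some info on my pattern logic. [ ] to recognize space
''', re.VERBOSE)

verbose_pattern.search('hello world')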


回答 11

根据Python 文档

序列

prog = re.compile(pattern)
result = prog.match(string)

相当于

result = re.match(pattern, string)

但是,当表达式在单个程序中多次使用时,使用 re.compile() 并保存生成的正则表达式对象以供重用会更高效。

所以我的结论是,如果您要为许多不同的文本匹配相同的模式,则最好对其进行预编译。

According to the Python documentation:

The sequence

prog = re.compile(pattern)
result = prog.match(string)

is equivalent to

result = re.match(pattern, string)

but using re.compile() and saving the resulting regular expression object for reuse is more efficient when the expression will be used several times in a single program.

So my conclusion is, if you are going to match the same pattern for many different texts, you better precompile it.


回答 12

有趣的是,编译对我来说确实更有效(Win XP上的Python 2.5.2):

import re
import time

rgx = re.compile('(\w+)\s+[0-9_]?\s+\w*')
str = "average    2 never"
a = 0

t = time.time()

for i in xrange(1000000):
    if re.match('(\w+)\s+[0-9_]?\s+\w*', str):
    #~ if rgx.match(str):
        a += 1

print time.time() - t

按原样运行一次上面的代码,再把两行 if 反过来注释运行一次,编译后的 regex 快一倍。

Interestingly, compiling does prove more efficient for me (Python 2.5.2 on Win XP):

import re
import time

rgx = re.compile('(\w+)\s+[0-9_]?\s+\w*')
str = "average    2 never"
a = 0

t = time.time()

for i in xrange(1000000):
    if re.match('(\w+)\s+[0-9_]?\s+\w*', str):
    #~ if rgx.match(str):
        a += 1

print time.time() - t

Running the above code once as is, and once with the two if lines commented the other way around, the compiled regex is twice as fast.


回答 13

在绊倒这里的讨论之前,我进行了此测试。但是,运行它后,我认为我至少会发布结果。

我偷了杰夫·弗里德尔(Jeff Friedl)的“精通正则表达式”(Mastering Regular Expressions)中的示例并将其混为一谈。这是在运行OSX 10.6(2Ghz Intel Core 2 duo,4GB ram)的Macbook上。Python版本是2.6.1。

运行1-使用re.compile

import re 
import time 
import fpformat
Regex1 = re.compile('^(a|b|c|d|e|f|g)+$') 
Regex2 = re.compile('^[a-g]+$')
TimesToDo = 1000
TestString = "" 
for i in range(1000):
    TestString += "abababdedfg"
StartTime = time.time() 
for i in range(TimesToDo):
    Regex1.search(TestString) 
Seconds = time.time() - StartTime 
print "Alternation takes " + fpformat.fix(Seconds,3) + " seconds"

StartTime = time.time() 
for i in range(TimesToDo):
    Regex2.search(TestString) 
Seconds = time.time() - StartTime 
print "Character Class takes " + fpformat.fix(Seconds,3) + " seconds"

Alternation takes 2.299 seconds
Character Class takes 0.107 seconds

运行2-不使用re.compile

import re 
import time 
import fpformat

TimesToDo = 1000
TestString = "" 
for i in range(1000):
    TestString += "abababdedfg"
StartTime = time.time() 
for i in range(TimesToDo):
    re.search('^(a|b|c|d|e|f|g)+$',TestString) 
Seconds = time.time() - StartTime 
print "Alternation takes " + fpformat.fix(Seconds,3) + " seconds"

StartTime = time.time() 
for i in range(TimesToDo):
    re.search('^[a-g]+$',TestString) 
Seconds = time.time() - StartTime 
print "Character Class takes " + fpformat.fix(Seconds,3) + " seconds"

Alternation takes 2.508 seconds
Character Class takes 0.109 seconds

I ran this test before stumbling upon the discussion here. However, having run it I thought I’d at least post my results.

I stole and bastardized the example in Jeff Friedl’s “Mastering Regular Expressions”. This is on a MacBook running OS X 10.6 (2 GHz Intel Core 2 Duo, 4 GB RAM). Python version is 2.6.1.

Run 1 – using re.compile

import re 
import time 
import fpformat
Regex1 = re.compile('^(a|b|c|d|e|f|g)+$') 
Regex2 = re.compile('^[a-g]+$')
TimesToDo = 1000
TestString = "" 
for i in range(1000):
    TestString += "abababdedfg"
StartTime = time.time() 
for i in range(TimesToDo):
    Regex1.search(TestString) 
Seconds = time.time() - StartTime 
print "Alternation takes " + fpformat.fix(Seconds,3) + " seconds"

StartTime = time.time() 
for i in range(TimesToDo):
    Regex2.search(TestString) 
Seconds = time.time() - StartTime 
print "Character Class takes " + fpformat.fix(Seconds,3) + " seconds"

Alternation takes 2.299 seconds
Character Class takes 0.107 seconds

Run 2 – Not using re.compile

import re 
import time 
import fpformat

TimesToDo = 1000
TestString = "" 
for i in range(1000):
    TestString += "abababdedfg"
StartTime = time.time() 
for i in range(TimesToDo):
    re.search('^(a|b|c|d|e|f|g)+$',TestString) 
Seconds = time.time() - StartTime 
print "Alternation takes " + fpformat.fix(Seconds,3) + " seconds"

StartTime = time.time() 
for i in range(TimesToDo):
    re.search('^[a-g]+$',TestString) 
Seconds = time.time() - StartTime 
print "Character Class takes " + fpformat.fix(Seconds,3) + " seconds"

Alternation takes 2.508 seconds
Character Class takes 0.109 seconds

回答 14

这个答案可能来得有点晚,但这是一个有趣的发现。如果您打算多次使用同一个正则表达式,使用 compile 确实可以节省时间(文档中也提到了这一点)。从下面可以看到,直接在已编译的正则表达式对象上调用 match 方法是最快的;把已编译的正则表达式传给 re.match 反而最慢,而把模式字符串传给 re.match 则介于两者之间。

>>> ipr = r'\D+((([0-2][0-5]?[0-5]?)\.){3}([0-2][0-5]?[0-5]?))\D+'
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.5077415757028423
>>> ipr = re.compile(ipr)
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.8324008992184038
>>> average(*timeit.repeat("ipr.match('abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
0.9187896518778871

This answer might be arriving late but is an interesting find. Using compile can really save you time if you are planning on using the regex multiple times (this is also mentioned in the docs). Below you can see that using a compiled regex is fastest when the match method is called directly on it. Passing a compiled regex to re.match is actually the slowest of the three, and passing re.match the pattern string falls somewhere in the middle.

>>> ipr = r'\D+((([0-2][0-5]?[0-5]?)\.){3}([0-2][0-5]?[0-5]?))\D+'
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.5077415757028423
>>> ipr = re.compile(ipr)
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.8324008992184038
>>> average(*timeit.repeat("ipr.match('abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
0.9187896518778871
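
The average helper above is not defined in the answer; a minimal stand-in consistent with how it is called might be:

def average(*values):
    # timeit.repeat returns a list of timings, unpacked here into positional args
    return sum(values) / len(values)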

回答 15

除了性能方面的好处,在我开始学习 regex 时,使用 compile 还帮助我区分了以下概念:

  1. 模块(re)
  2. regex 对象
  3. match 对象

#regex object
regex_object = re.compile(r'[a-zA-Z]+')
#match object
match_object = regex_object.search('1.Hello')
#matching content
match_object.group()
output:
Out[60]: 'Hello'
V.S.
re.search(r'[a-zA-Z]+','1.Hello').group()
Out[61]: 'Hello'

作为补充,我制作了一个详尽的模块速查表,re以供您参考。

regex = {
'brackets':{'single_character': ['[]', '.', {'negate':'^'}],
            'capturing_group' : ['()', '(?:)', '(?!)', '|', '\\', 'backreferences and named group'],
            'repetition'      : ['{}', '*?', '+?', '??', 'greedy v.s. lazy ?']},
'lookaround' :{'lookahead'  : ['(?=...)', '(?!...)'],
            'lookbehind' : ['(?<=...)','(?<!...)'],
            'capturing'  : ['(?P<name>...)', '(?P=name)', '(?:)'],},
'escapes':{'anchor'          : ['^', '\b', '$'],
          'non_printable'   : ['\n', '\t', '\r', '\f', '\v'],
          'shorthand'       : ['\d', '\w', '\s']},
'methods': [['search', 'match', 'findall', 'finditer'],  # a set of lists is unhashable, so use a list of lists
            ['split', 'sub']],
'match_object': ['group','groups', 'groupdict','start', 'end', 'span',]
}

Besides the performance benefits, using compile helped me to distinguish the concepts of

  1. the module (re),
  2. the regex object, and
  3. the match object

when I started learning regex.

#regex object
regex_object = re.compile(r'[a-zA-Z]+')
#match object
match_object = regex_object.search('1.Hello')
#matching content
match_object.group()
output:
Out[60]: 'Hello'
V.S.
re.search(r'[a-zA-Z]+','1.Hello').group()
Out[61]: 'Hello'

As a complement, I made an exhaustive cheatsheet of module re for your reference.

regex = {
'brackets':{'single_character': ['[]', '.', {'negate':'^'}],
            'capturing_group' : ['()', '(?:)', '(?!)', '|', '\\', 'backreferences and named group'],
            'repetition'      : ['{}', '*?', '+?', '??', 'greedy v.s. lazy ?']},
'lookaround' :{'lookahead'  : ['(?=...)', '(?!...)'],
            'lookbehind' : ['(?<=...)','(?<!...)'],
            'capturing'  : ['(?P<name>...)', '(?P=name)', '(?:)'],},
'escapes':{'anchor'          : ['^', '\b', '$'],
          'non_printable'   : ['\n', '\t', '\r', '\f', '\v'],
          'shorthand'       : ['\d', '\w', '\s']},
'methods': [['search', 'match', 'findall', 'finditer'],  # a set of lists is unhashable, so use a list of lists
            ['split', 'sub']],
'match_object': ['group','groups', 'groupdict','start', 'end', 'span',]
}

回答 16

我真的尊重上述所有答案。我认为是的!当然,值得一次使用re.compile而不是一次编译regex。

使用re.compile可以使您的代码更具动态性,因为您可以调用已编译的regex,而无需再次编译。在以下情况下,这件事会使您受益:

  1. 处理器的工作
  2. 时间复杂度。
  3. 使正则表达式通用。(可用于findall,search,match)
  4. 并使您的程序看起来很酷。

范例:

  example_string = "The room number of her room is 26A7B."
  find_alpha_numeric_string = re.compile(r"\b\w+\b")

在Findall中使用

 find_alpha_numeric_string.findall(example_string)

在搜索中使用

  find_alpha_numeric_string.search(example_string)

同样,您可以将其用于:匹配和替换

I really respect all the above answers. In my opinion, yes! It is definitely worth using re.compile instead of compiling the regex again and again, every time.

Using re.compile makes your code more dynamic, as you can call the already-compiled regex instead of compiling it again and again. This benefits you in these ways:

  1. Processor effort
  2. Time complexity
  3. Makes the regex universal (it can be used in findall, search, match)
  4. And makes your program look cool.

Example :

  example_string = "The room number of her room is 26A7B."
  find_alpha_numeric_string = re.compile(r"\b\w+\b")

Using in Findall

 find_alpha_numeric_string.findall(example_string)

Using in search

  find_alpha_numeric_string.search(example_string)

Similarly, you can use it for match and sub, as sketched below.
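
A sketch of those two calls with the same compiled object (the outputs in the comments assume the example_string defined above):

find_alpha_numeric_string.match(example_string).group()   # 'The' -- anchored at the start of the string
find_alpha_numeric_string.sub('***', example_string)      # replaces every word-like token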


回答 17

这是一个很好的问题。您经常看到人们无缘无故地使用re.compile。它降低了可读性。但是请确保在很多时候需要对表达式进行预编译。就像您在循环中重复使用它或类似方法时一样。

就像关于编程的一切(实际上生活中的一切)一样。应用常识。

This is a good question. You often see people use re.compile without reason; it lessens readability. But there are certainly plenty of times when pre-compiling the expression is called for, such as when you use it repeatedly in a loop.

It’s like everything about programming (everything in life actually). Apply common sense.


回答 18

(几个月后),您可以轻松地在re.match或与此相关的其他任何事情上添加自己的缓存-

""" Re.py: Re.match = re.match + cache  
    efficiency: re.py does this already (but what's _MAXCACHE ?)
    readability, inline / separate: matter of taste
"""

import re

cache = {}
_re_type = type( re.compile( "" ))

def match( pattern, str, *opt ):
    """ Re.match = re.match + cache re.compile( pattern ) 
    """
    if type(pattern) == _re_type:
        cpat = pattern
    elif pattern in cache:
        cpat = cache[pattern]
    else:
        cpat = cache[pattern] = re.compile( pattern, *opt )
    return cpat.match( str )

# def search ...

wibni,要是有这些就好了:cachehint( size= )、cacheinfo() -> size, hits, nclear …

(months later) it’s easy to add your own cache around re.match, or anything else for that matter —

""" Re.py: Re.match = re.match + cache  
    efficiency: re.py does this already (but what's _MAXCACHE ?)
    readability, inline / separate: matter of taste
"""

import re

cache = {}
_re_type = type( re.compile( "" ))

def match( pattern, str, *opt ):
    """ Re.match = re.match + cache re.compile( pattern ) 
    """
    if type(pattern) == _re_type:
        cpat = pattern
    elif pattern in cache:
        cpat = cache[pattern]
    else:
        cpat = cache[pattern] = re.compile( pattern, *opt )
    return cpat.match( str )

# def search ...

A wibni: wouldn’t it be nice if there were cachehint( size= ) and cacheinfo() -> size, hits, nclear …
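
Assuming the snippet above is saved as Re.py, usage might look like this (a sketch):

import Re

Re.match(r'\d+', '42 apples').group()   # compiles r'\d+' once and caches it
Re.match(r'\d+', '7 pears').group()     # cache hit: no recompilation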


回答 19

与动态编译相比,我有1000多次运行已编译的正则表达式的经验,并且没有注意到任何可察觉的差异

对已接受答案的投票导致一个假设,即@Triptych所说的在所有情况下都是正确的。这不一定是真的。一个很大的不同是何时必须决定是否接受正则表达式字符串或已编译的正则表达式对象作为函数的参数:

>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: x.match(y)       # accepts compiled regex as parameter
... h=re.compile('hello')
... """, stmt="f(h, 'hello world')")
0.32881879806518555
>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: re.compile(x).match(y)   # compiles when called
... """, stmt="f('hello', 'hello world')")
0.809190034866333

最好编译正则表达式,以防您需要重用它们。

请注意,上面timeit中的示例在导入时一次模拟了一个已编译的regex对象的创建,而在进行匹配时则模拟了“即时”的创建。

I’ve had a lot of experience running a compiled regex 1000s of times versus compiling on-the-fly, and have not noticed any perceivable difference

The votes on the accepted answer lead to the assumption that what @Triptych says is true for all cases. This is not necessarily true. One big difference is when you have to decide whether to accept a regex string or a compiled regex object as a parameter to a function:

>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: x.match(y)       # accepts compiled regex as parameter
... h=re.compile('hello')
... """, stmt="f(h, 'hello world')")
0.32881879806518555
>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: re.compile(x).match(y)   # compiles when called
... """, stmt="f('hello', 'hello world')")
0.809190034866333

It is always better to compile your regexes in case you need to reuse them.

Note the example in the timeit above simulates creation of a compiled regex object once at import time versus “on-the-fly” when required for a match.
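
One way to keep a single function signature while avoiding the recompile cost is to lean on the fact, visible in the _compile source quoted earlier, that re.compile() hands an already-compiled pattern back unchanged (a sketch, not from the original answer):

import re

def find_first(pattern, text):
    # Accepts either a pattern string or a compiled pattern object.
    return re.compile(pattern).search(text)

find_first('hello', 'hello world')              # compiles (with caching), then searches
find_first(re.compile('hello'), 'hello world')  # the compiled object passes straight through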


回答 20

作为一个替代的答案,如我所见,以前没有提到过,我将继续引用Python 3文档

您应该使用这些模块级功能,还是应该自己获取模式并调用其方法?如果要在循环中访问正则表达式,则对其进行预编译将节省一些函数调用。在循环之外,由于内部缓存,差异不大。

As an alternative answer, as I see that it hasn’t been mentioned before, I’ll go ahead and quote the Python 3 docs:

Should you use these module-level functions, or should you get the pattern and call its methods yourself? If you’re accessing a regex within a loop, pre-compiling it will save a few function calls. Outside of loops, there’s not much difference thanks to the internal cache.
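
A sketch of the loop case the docs describe (lines is a hypothetical list of strings):

import re

word_re = re.compile(r'\w+')        # hoisted out of the loop: compiled once

lines = ['alpha', 'beta', '42']     # hypothetical input
hits = sum(1 for line in lines if word_re.match(line))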


回答 21

这是一个示例,其中使用re.compile速度比要求快50倍以上。

这一点与我在上面的评论中提到的观点相同,即,re.compile当您的用法无法从编译缓存中获得太多好处时,使用可能会带来很大的好处。至少在一种特定情况下(我在实践中遇到过),即在满足以下所有条件时,才会发生这种情况:

  • 您有很多正则表达式模式(超过个re._MAXCACHE,当前默认值为512个),并且
  • 您经常使用这些正则表达式,并且
  • 同一模式的两次连续使用之间隔着超过 re._MAXCACHE 个其他正则表达式,因此每个正则表达式在两次连续使用之间都会被从缓存中清除。
import re
import time

def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                    for i in range(ord('a'), ord('z') + 1)
                    for j in range(ord('a'), ord('z') + 1)]
    # If this assertion below fails, just add more (distinct) patterns.
    # assert(re._MAXCACHE < len(patterns))
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)

def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count

def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count

def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count

start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')

start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')

start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')

print(f'Ratio: {d3/d1:.2f}')

我在笔记本电脑上得到的示例输出(Python 3.7.7):

With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.

Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.

Without re.compile:
searching
676000
-- That took 23.54 seconds.

Ratio: 70.89

我没有打扰,timeit因为差异是如此明显,但是每次我得到的定性数字都差不多。请注意,即使不re.compile使用,多次使用相同的regex并移至下一个也不是很糟糕(大约是的慢2倍re.compile),但以另一种顺序(遍历许多regexes),则更糟,正如预期的那样。另外,增加缓存大小也可以:仅re._MAXCACHE = len(patterns)setup()上面进行设置(当然,我不建议在生产环境中进行此类操作,因为带下划线的名称通常是“私有”的)将〜23秒降低为〜0.7秒,这也符合我们的理解。

Here is an example where using re.compile is over 50 times faster, as requested.

The point is just the same as what I made in the comment above, namely, using re.compile can be a significant advantage when your usage is such as to not benefit much from the compilation cache. This happens at least in one particular case (that I ran into in practice), namely when all of the following are true:

  • You have a lot of regex patterns (more than re._MAXCACHE, whose default is currently 512), and
  • you use these regexes a lot of times, and
  • your consecutive usages of the same pattern are separated by more than re._MAXCACHE other regexes in between, so that each one gets flushed from the cache between consecutive usages.
import re
import time

def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                    for i in range(ord('a'), ord('z') + 1)
                    for j in range(ord('a'), ord('z') + 1)]
    # If this assertion below fails, just add more (distinct) patterns.
    # assert(re._MAXCACHE < len(patterns))
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)

def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count

def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count

def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count

start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')

start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')

start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')

print(f'Ratio: {d3/d1:.2f}')

Example output I get on my laptop (Python 3.7.7):

With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.

Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.

Without re.compile:
searching
676000
-- That took 23.54 seconds.

Ratio: 70.89

I didn’t bother with timeit as the difference is so stark, but I get qualitatively similar numbers each time. Note that even without re.compile, using the same regex multiple times and moving on to the next one wasn’t so bad (only about 2 times as slow as with re.compile), but in the other order (looping through many regexes), it is significantly worse, as expected. Also, increasing the cache size works too: simply setting re._MAXCACHE = len(patterns) in setup() above (of course I don’t recommend doing such things in production as names with underscores are conventionally “private”) drops the ~23 seconds back down to ~0.7 seconds, which also matches our understanding.


回答 22

使用第二个版本时,正则表达式在使用前会先进行编译。如果要执行多次,最好先编译一下;如果不是,每次匹配时再编译,对一次性的场景来说也没问题。

Regular expressions are compiled before being used when using the second version. If you are going to execute it many times, it is definitely better to compile it first. If not, compiling every time you match is fine for one-offs.


回答 23

易读性/认知负荷偏好

对我来说,主要的收获是,我只需要记住并阅读复杂的正则表达式 API 语法的一种形式:<compiled_pattern>.method(xxx) 形式,而不用同时记住它和 re.func(<pattern>, xxx) 形式。

诚然,re.compile(<pattern>) 多了一点样板代码。

但是就正则表达式而言,额外的编译步骤不太可能是造成认知负担的主要原因。实际上,在复杂的模式上,您甚至可以通过将声明与随后在其上调用的任何regex方法分开来获得清晰度。

我倾向于首先在Regex101之类的网站中甚至在一个单独的最小测试脚本中调整复杂的模式,然后将它们引入我的代码中,因此将声明与使用分开也是适合我的工作流程的。

Legibility/cognitive load preference

To me, the main gain is that I only need to remember, and read, one form of the complicated regex API syntax: the <compiled_pattern>.method(xxx) form, rather than both it and the re.func(<pattern>, xxx) form.

The re.compile(<pattern>) is a bit of extra boilerplate, true.

But where regexes are concerned, that extra compile step is unlikely to be a big cause of cognitive load. And in fact, on complicated patterns, you might even gain clarity from separating the declaration from whatever regex method you then invoke on it.

I tend to first tune complicated patterns in a website like Regex101, or even in a separate minimal test script, then bring them into my code, so separating the declaration from its use fits my workflow as well.


回答 24

我想说明,预编译在概念上和“文学上”(如“文学式编程”)都是有利的。看一下下面的代码片段:

from re import compile as _Re

class TYPO:

  def text_has_foobar( self, text ):
    return self._text_has_foobar_re_search( text ) is not None
  _text_has_foobar_re_search = _Re( r"""(?i)foobar""" ).search

TYPO = TYPO()

在您的应用程序中,您将编写:

from TYPO import TYPO
print( TYPO.text_has_foobar( 'FOObar' ) )

就功能而言,这已经简单到不能再简单了。因为示例很短,我把获取 _text_has_foobar_re_search 的过程合并成了一行。这段代码的缺点是,在 TYPO 库对象的整个生存期内它都会占用一点内存;优点是,进行 foobar 搜索时,只需要两次函数调用和两次类字典查找。re 缓存了多少个正则表达式以及该缓存的开销,在这里都无关紧要。

将此与以下更常用的样式进行比较:

import re

class Typo:

  def text_has_foobar( self, text ):
    return re.compile( r"""(?i)foobar""" ).search( text ) is not None

在应用程序中:

typo = Typo()
print( typo.text_has_foobar( 'FOObar' ) )

我很容易承认,我的风格对于 Python 来说非常不寻常,甚至值得商榷。但是,在更贴近 Python 通常用法的示例中,为了进行一次匹配,我们必须实例化一个对象,执行三次实例字典查找,并执行三次函数调用;另外,当使用超过 100 个正则表达式时,我们可能会遇到 re 的缓存问题。同样,正则表达式被隐藏在方法体内,大多数情况下这并不是一个好主意。

可以说,这些措施的每个子集(有针对性的别名导入语句;适用场合下的方法别名;减少函数调用和对象字典查找)都可以帮助减少计算和概念上的复杂性。

I’d like to argue that pre-compiling is both conceptually and ‘literately’ (as in ‘literate programming’) advantageous. Have a look at this code snippet:

from re import compile as _Re

class TYPO:

  def text_has_foobar( self, text ):
    return self._text_has_foobar_re_search( text ) is not None
  _text_has_foobar_re_search = _Re( r"""(?i)foobar""" ).search

TYPO = TYPO()

In your application, you’d write:

from TYPO import TYPO
print( TYPO.text_has_foobar( 'FOObar' ) )

This is about as simple in terms of functionality as it can get. Because this example is so short, I conflated the way to get _text_has_foobar_re_search all into one line. The disadvantage of this code is that it occupies a little memory for the lifetime of the TYPO library object; the advantage is that when doing a foobar search, you get away with two function calls and two class dictionary lookups. How many regexes are cached by re and the overhead of that cache are irrelevant here.

Compare this with the more usual style, below:

import re

class Typo:

  def text_has_foobar( self, text ):
    return re.compile( r"""(?i)foobar""" ).search( text ) is not None

In the application:

typo = Typo()
print( typo.text_has_foobar( 'FOObar' ) )

I readily admit that my style is highly unusual for Python, maybe even debatable. However, in the example that more closely matches how Python is mostly used, in order to do a single match we must instantiate an object, do three instance dictionary lookups, and perform three function calls; additionally, we might get into re caching troubles when using more than 100 regexes. Also, the regular expression gets hidden inside the method body, which most of the time is not such a good idea.

Be it said that every subset of these measures (targeted, aliased import statements; aliased methods where applicable; reduction of function calls and object dictionary lookups) can help reduce computational and conceptual complexity.


回答 25

我的理解是,这两个示例实际上是等效的。唯一的区别是,在第一个实例中,您可以在其他地方重用已编译的正则表达式,而无需再次对其进行编译。

这是给您的参考:http : //diveintopython3.ep.io/refactoring.html

用字符串“ M”调用已编译模式对象的搜索功能与使用正则表达式和字符串“ M”调用re.search的操作相同。只有很多,更快。(实际上,re.search函数只是编译正则表达式并为您调用结果模式对象的search方法。)

My understanding is that those two examples are effectively equivalent. The only difference is that in the first, you can reuse the compiled regular expression elsewhere without causing it to be compiled again.

Here’s a reference for you: http://diveintopython3.ep.io/refactoring.html

Calling the compiled pattern object’s search function with the string ‘M’ accomplishes the same thing as calling re.search with both the regular expression and the string ‘M’. Only much, much faster. (In fact, the re.search function simply compiles the regular expression and calls the resulting pattern object’s search method for you.)


如何在Python中从字符串中提取数字?

问题:如何在Python中从字符串中提取数字?

我将提取字符串中包含的所有数字。哪个更适合于目的,正则表达式或isdigit()方法?

例:

line = "hello 12 hi 89"

结果:

[12, 89]

I would extract all the numbers contained in a string. Which is the better suited for the purpose, regular expressions or the isdigit() method?

Example:

line = "hello 12 hi 89"

Result:

[12, 89]

回答 0

如果只想提取正整数,请尝试以下操作:

>>> str = "h3110 23 cat 444.4 rabbit 11 2 dog"
>>> [int(s) for s in str.split() if s.isdigit()]
[23, 11, 2]

我认为这比正则表达式示例更好,原因有三点。首先,您不需要其他模块;其次,它更具可读性,因为您无需解析正则表达式迷你语言;第三,它更快(因此可能更pythonic):

python -m timeit -s "str = 'h3110 23 cat 444.4 rabbit 11 2 dog' * 1000" "[s for s in str.split() if s.isdigit()]"
100 loops, best of 3: 2.84 msec per loop

python -m timeit -s "import re" "str = 'h3110 23 cat 444.4 rabbit 11 2 dog' * 1000" "re.findall('\\b\\d+\\b', str)"
100 loops, best of 3: 5.66 msec per loop

这将无法识别浮点数、负整数或十六进制格式的整数。如果您不能接受这些限制,下面 slim 的回答可以解决问题。

If you want to extract only positive integers, try the following:

>>> str = "h3110 23 cat 444.4 rabbit 11 2 dog"
>>> [int(s) for s in str.split() if s.isdigit()]
[23, 11, 2]

I would argue that this is better than the regex example for three reasons. First, you don’t need another module; secondly, it’s more readable because you don’t need to parse the regex mini-language; and third, it is faster (and thus likely more pythonic):

python -m timeit -s "str = 'h3110 23 cat 444.4 rabbit 11 2 dog' * 1000" "[s for s in str.split() if s.isdigit()]"
100 loops, best of 3: 2.84 msec per loop

python -m timeit -s "import re" "str = 'h3110 23 cat 444.4 rabbit 11 2 dog' * 1000" "re.findall('\\b\\d+\\b', str)"
100 loops, best of 3: 5.66 msec per loop

This will not recognize floats, negative integers, or integers in hexadecimal format. If you can’t accept these limitations, slim’s answer below will do the trick.


回答 1

我会使用regexp:

>>> import re
>>> re.findall(r'\d+', 'hello 42 I\'m a 32 string 30')
['42', '32', '30']

这也会匹配 bla42bla 中的 42。如果只想要以单词边界(空格、句点、逗号)分隔的数字,则可以使用 \b:

>>> re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string 30')
['42', '32', '30']

要以数字列表而不是字符串列表结尾:

>>> [int(s) for s in re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string 30')]
[42, 32, 30]

I’d use a regexp :

>>> import re
>>> re.findall(r'\d+', 'hello 42 I\'m a 32 string 30')
['42', '32', '30']

This would also match 42 from bla42bla. If you only want numbers delimited by word boundaries (space, period, comma), you can use \b :

>>> re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string 30')
['42', '32', '30']

To end up with a list of numbers instead of a list of strings:

>>> [int(s) for s in re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string 30')]
[42, 32, 30]

回答 2

这已经有点晚了,但是您也可以扩展regex表达式以说明科学计数法。

import re

# Format is [(<string>, <expected output>), ...]
ss = [("apple-12.34 ba33na fanc-14.23e-2yapple+45e5+67.56E+3",
       ['-12.34', '33', '-14.23e-2', '+45e5', '+67.56E+3']),
      ('hello X42 I\'m a Y-32.35 string Z30',
       ['42', '-32.35', '30']),
      ('he33llo 42 I\'m a 32 string -30', 
       ['33', '42', '32', '-30']),
      ('h3110 23 cat 444.4 rabbit 11 2 dog', 
       ['3110', '23', '444.4', '11', '2']),
      ('hello 12 hi 89', 
       ['12', '89']),
      ('4', 
       ['4']),
      ('I like 74,600 commas not,500', 
       ['74,600', '500']),
      ('I like bad math 1+2=.001', 
       ['1', '+2', '.001'])]

for s, r in ss:
    rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", s)
    if rr == r:
        print('GOOD')
    else:
        print('WRONG', rr, 'should be', r)

一切都好!

此外,您可以查看AWS Glue内置正则表达式

This is more than a bit late, but you can extend the regex expression to account for scientific notation too.

import re

# Format is [(<string>, <expected output>), ...]
ss = [("apple-12.34 ba33na fanc-14.23e-2yapple+45e5+67.56E+3",
       ['-12.34', '33', '-14.23e-2', '+45e5', '+67.56E+3']),
      ('hello X42 I\'m a Y-32.35 string Z30',
       ['42', '-32.35', '30']),
      ('he33llo 42 I\'m a 32 string -30', 
       ['33', '42', '32', '-30']),
      ('h3110 23 cat 444.4 rabbit 11 2 dog', 
       ['3110', '23', '444.4', '11', '2']),
      ('hello 12 hi 89', 
       ['12', '89']),
      ('4', 
       ['4']),
      ('I like 74,600 commas not,500', 
       ['74,600', '500']),
      ('I like bad math 1+2=.001', 
       ['1', '+2', '.001'])]

for s, r in ss:
    rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", s)
    if rr == r:
        print('GOOD')
    else:
        print('WRONG', rr, 'should be', r)

Gives all good!

Additionally, you can look at the AWS Glue built-in regex


回答 3

我假设您想要的不仅是浮点数,所以我会做这样的事情:

l = []
for t in s.split():
    try:
        l.append(float(t))
    except ValueError:
        pass

请注意,此处发布的其他一些解决方案不适用于负数:

>>> re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string -30')
['42', '32', '30']

>>> '-3'.isdigit()
False

I’m assuming you want floats not just integers so I’d do something like this:

l = []
for t in s.split():
    try:
        l.append(float(t))
    except ValueError:
        pass

Note that some of the other solutions posted here don’t work with negative numbers:

>>> re.findall(r'\b\d+\b', 'he33llo 42 I\'m a 32 string -30')
['42', '32', '30']

>>> '-3'.isdigit()
False

回答 4

如果您知道字符串中只有一个数字,即“ hello 12 hi”,则可以尝试过滤。

例如:

In [1]: int(''.join(filter(str.isdigit, '200 grams')))
Out[1]: 200
In [2]: int(''.join(filter(str.isdigit, 'Counters: 55')))
Out[2]: 55
In [3]: int(''.join(filter(str.isdigit, 'more than 23 times')))
Out[3]: 23

但是要小心!:

In [4]: int(''.join(filter(str.isdigit, '200 grams 5')))
Out[4]: 2005

If you know there will be only one number in the string, e.g. ‘hello 12 hi’, you can try filter.

For example:

In [1]: int(''.join(filter(str.isdigit, '200 grams')))
Out[1]: 200
In [2]: int(''.join(filter(str.isdigit, 'Counters: 55')))
Out[2]: 55
In [3]: int(''.join(filter(str.isdigit, 'more than 23 times')))
Out[3]: 23

But be careful!!!:

In [4]: int(''.join(filter(str.isdigit, '200 grams 5')))
Out[4]: 2005

回答 5

# extract numbers from garbage string:
s = '12//n,_@#$%3.14kjlw0xdadfackvj1.6e-19&*ghn334'
newstr = ''.join((ch if ch in '0123456789.-e' else ' ') for ch in s)
listOfNumbers = [float(i) for i in newstr.split()]
print(listOfNumbers)
[12.0, 3.14, 0.0, 1.6e-19, 334.0]
# extract numbers from garbage string:
s = '12//n,_@#$%3.14kjlw0xdadfackvj1.6e-19&*ghn334'
newstr = ''.join((ch if ch in '0123456789.-e' else ' ') for ch in s)
listOfNumbers = [float(i) for i in newstr.split()]
print(listOfNumbers)
[12.0, 3.14, 0.0, 1.6e-19, 334.0]

回答 6

我一直在寻找一种解决方案,特别是从巴西的电话号码中删除字符串的掩码,这篇帖子没有得到回答,但给了我启发。这是我的解决方案:

>>> phone_number = '+55(11)8715-9877'
>>> ''.join([n for n in phone_number if n.isdigit()])
'551187159877'

I was looking for a solution to remove masks from strings, specifically from Brazilian phone numbers. This post didn’t answer it directly but inspired me. This is my solution:

>>> phone_number = '+55(11)8715-9877'
>>> ''.join([n for n in phone_number if n.isdigit()])
'551187159877'

回答 7

下面是使用正则表达式的一种方法:

lines = "hello 12 hi 89"
import re
output = []
#repl_str = re.compile('\d+.?\d*')
repl_str = re.compile('^\d+$')
#t = r'\d+.?\d*'
line = lines.split()
for word in line:
        match = re.search(repl_str, word)
        if match:
            output.append(float(match.group()))
print (output)

与findall re.findall(r'\d+', "hello 12 hi 89")

['12', '89']

re.findall(r'\b\d+\b', "hello 12 hi 89 33F AC 777")

 ['12', '89', '777']

Below is one way to do it using a regex:

lines = "hello 12 hi 89"
import re
output = []
#repl_str = re.compile('\d+.?\d*')
repl_str = re.compile('^\d+$')
#t = r'\d+.?\d*'
line = lines.split()
for word in line:
        match = re.search(repl_str, word)
        if match:
            output.append(float(match.group()))
print (output)

with findall re.findall(r'\d+', "hello 12 hi 89")

['12', '89']

re.findall(r'\b\d+\b', "hello 12 hi 89 33F AC 777")

 ['12', '89', '777']

回答 8

line2 = "hello 12 hi 89"
temp1 = re.findall(r'\d+', line2) # through regular expression
res2 = list(map(int, temp1))
print(res2)

嗨,

您可以使用findall表达式通过数字搜索字符串中的所有整数。

在第二步中,创建一个列表res2并将在字符串中找到的数字添加到此列表中

希望这可以帮助

此致Diwakar Sharma

line2 = "hello 12 hi 89"
temp1 = re.findall(r'\d+', line2) # through regular expression
res2 = list(map(int, temp1))
print(res2)

Hi,

You can find all the integers in the string by searching for digits with the findall expression.

In the second step, create a list res2 and add the numbers found in the string to it.

Hope this helps.

Regards, Diwakar Sharma


回答 9

此答案还包含数字在字符串中为浮点的情况

def get_first_nbr_from_str(input_str):
    '''
    :param input_str: strings that contains digit and words
    :return: the number extracted from the input_str
    demo:
    'ab324.23.123xyz': 324.23
    '.5abc44': 0.5
    '''
    if not input_str or not isinstance(input_str, str):  # 'or': reject empty or non-string input
        return 0
    out_number = ''
    for ele in input_str:
        if (ele == '.' and '.' not in out_number) or ele.isdigit():
            out_number += ele
        elif out_number:
            break
    return float(out_number) if out_number else 0  # guard against inputs with no digits

This answer also contains the case when the number is float in the string

def get_first_nbr_from_str(input_str):
    '''
    :param input_str: strings that contains digit and words
    :return: the number extracted from the input_str
    demo:
    'ab324.23.123xyz': 324.23
    '.5abc44': 0.5
    '''
    if not input_str or not isinstance(input_str, str):  # 'or': reject empty or non-string input
        return 0
    out_number = ''
    for ele in input_str:
        if (ele == '.' and '.' not in out_number) or ele.isdigit():
            out_number += ele
        elif out_number:
            break
    return float(out_number) if out_number else 0  # guard against inputs with no digits

回答 10

令我惊讶的是,还没有人提到使用itertools.groupby替代实现这一目标的方法。

您可以使用itertools.groupby()str.isdigit()来从字符串中提取数字,如下所示:

from itertools import groupby
my_str = "hello 12 hi 89"

l = [int(''.join(i)) for is_digit, i in groupby(my_str, str.isdigit) if is_digit]

保留的值l将是:

[12, 89]

PS:这只是出于说明的目的,以表明作为替代方案,我们也可以使用它groupby来实现此目的。但这不是推荐的解决方案。如果要实现此目的,则应基于将列表理解与as过滤器一起使用fmark可接受答案str.isdigit

I am amazed to see that no one has yet mentioned the usage of itertools.groupby as an alternative to achieve this.

You may use itertools.groupby() along with str.isdigit() in order to extract numbers from string as:

from itertools import groupby
my_str = "hello 12 hi 89"

l = [int(''.join(i)) for is_digit, i in groupby(my_str, str.isdigit) if is_digit]

The value hold by l will be:

[12, 89]

PS: This is just for illustration purposes, to show that as an alternative we could also use groupby to achieve this. But it is not the recommended solution. If you want to achieve this, you should use fmark’s accepted answer, based on a list comprehension with str.isdigit as the filter.


回答 11

我只是添加这个答案,因为没有人使用异常处理添加了一个答案,因为这也适用于浮点数

a = []
line = "abcd 1234 efgh 56.78 ij"
for word in line.split():
    try:
        a.append(float(word))
    except ValueError:
        pass
print(a)

输出:

[1234.0, 56.78]

I am just adding this answer because no one added one using Exception handling and because this also works for floats

a = []
line = "abcd 1234 efgh 56.78 ij"
for word in line.split():
    try:
        a.append(float(word))
    except ValueError:
        pass
print(a)

Output :

[1234.0, 56.78]

回答 12

要捕获不同的模式,使用不同的模式进行查询很有帮助。

设置捕获不同兴趣数字模式的所有模式:

(查找逗号)12,300 或 12,300.00

‘[\d]+[.,\d]+’

(查找浮点数)0.123 或 .123

‘[\d]*[.][\d]+’

(查找整数)123

‘[\d]+’

用管道符(|)将它们组合成一个带有多个“或”条件的模式。

(注意:把复杂模式放在前面,否则简单模式会返回复杂目标的片段,而不是由复杂模式返回完整的匹配。)

p = '[\d]+[.,\d]+|[\d]*[.][\d]+|[\d]+'

在下面,我们将确认存在的模式re.search(),然后返回捕获的可迭代列表。最后,我们将使用方括号符号打印每个捕获,以从匹配对象中选择匹配对象的返回值。

import re

s = 'he33llo 42 I\'m a 32 string 30 444.4 12,001'

if re.search(p, s) is not None:
    for catch in re.finditer(p, s):
        print(catch[0]) # catch is a match object

返回值:

33
42
32
30
444.4
12,001

To catch different patterns it is helpful to query with different patterns.

Setup all the patterns that catch different number patterns of interest:

(finds commas) 12,300 or 12,300.00

‘[\d]+[.,\d]+’

(finds floats) 0.123 or .123

‘[\d]*[.][\d]+’

(finds integers) 123

‘[\d]+’

Combine with pipe ( | ) into one pattern with multiple or conditionals.

(Note: Put complex patterns first else simple patterns will return chunks of the complex catch instead of the complex catch returning the full catch).

p = '[\d]+[.,\d]+|[\d]*[.][\d]+|[\d]+'

Below, we’ll confirm a pattern is present with re.search(), then return an iterable list of catches. Finally, we’ll print each catch using bracket notation to subselect the match object return value from the match object.

import re

s = 'he33llo 42 I\'m a 32 string 30 444.4 12,001'

if re.search(p, s) is not None:
    for catch in re.finditer(p, s):
        print(catch[0]) # catch is a match object

Returns:

33
42
32
30
444.4
12,001

回答 13

由于这些都不涉及我需要查找的excel和word docs中的真实财务数字,因此这里是我的变体。它处理整数,浮点数,负数,货币数字(因为它不会在拆分时回复),并且可以选择删除小数部分并仅返回整数或返回所有内容。

它还处理印第安拉克斯数字系统,其中逗号不规则出现,而不是每3个数字分开。

它不处理科学计数法,否则预算中括号内的负数将显示为正数。

它还不会提取日期。有更好的方法来查找字符串中的日期。

import re
def find_numbers(string, ints=True):            
    numexp = re.compile(r'[-]?\d[\d,]*[\.]?[\d{2}]*') #optional - in front
    numbers = numexp.findall(string)    
    numbers = [x.replace(',','') for x in numbers]
    if ints is True:
        return [int(x.replace(',','').split('.')[0]) for x in numbers]            
    else:
        return numbers

Since none of these dealt with real world financial numbers in excel and word docs that I needed to find, here is my variation. It handles ints, floats, negative numbers, currency numbers (because it doesn’t reply on split), and has the option to drop the decimal part and just return ints, or return everything.

It also handles the Indian lakh number system, where commas appear irregularly rather than every 3 digits apart.

It does not handle scientific notation, or negative numbers put inside parentheses in budgets — they will appear positive.

It also does not extract dates. There are better ways for finding dates in strings.

import re

def find_numbers(string, ints=True):
    # optional leading minus, digits with optional commas,
    # then an optional decimal point and decimal digits
    numexp = re.compile(r'[-]?\d[\d,]*[\.]?\d*')
    numbers = numexp.findall(string)
    numbers = [x.replace(',', '') for x in numbers]
    if ints is True:
        return [int(x.split('.')[0]) for x in numbers]
    else:
        return numbers
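A quick usage sketch (my own example values, assuming the function above):

print(find_numbers("Total: $1,234.56 and -78.9"))              # [1234, -78]
print(find_numbers("Total: $1,234.56 and -78.9", ints=False))  # ['1234.56', '-78.9']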

Answer 14

@jmnas, I liked your answer, but it didn’t find floats. I’m working on a script to parse code going to a CNC mill and needed to find both X and Y dimensions that can be integers or floats, so I adapted your code to the following. This finds int, float with positive and negative vals. Still doesn’t find hex formatted values but you could add “x” and “A” through “F” to the num_char tuple and I think it would parse things like ‘0x23AC’.

s = 'hello X42 I\'m a Y-32.35 string Z30'
xy = ("X", "Y")
num_char = (".", "+", "-")

l = []

tokens = s.split()
for token in tokens:

    if token.startswith(xy):
        num = ""
        for char in token:
            # print(char)
            if char.isdigit() or (char in num_char):
                num = num + char

        try:
            l.append(float(num))
        except ValueError:
            pass

print(l)
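The hex idea needs one extra step, though, since float() rejects hex strings; a hypothetical helper (my own sketch, not part of the original answer):

def parse_token_number(num):
    # route '0x...' hex tokens through int(..., 16), otherwise fall back to float
    try:
        if num.lower().lstrip('+-').startswith('0x'):
            return int(num, 16)
        return float(num)
    except ValueError:
        return None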

Answer 15

The best option I found is below. It will extract a single number and can eliminate any type of char. Note that digits from separate numbers get concatenated into one value.

def extract_nbr(input_str):
    if input_str is None or input_str == '':
        return 0

    out_number = ''
    for ele in input_str:
        if ele.isdigit():
            out_number += ele
    # mirror the empty-input behaviour when the string holds no digits at all
    return float(out_number) if out_number else 0

Answer 16

For phone numbers you can simply exclude all non-digit characters with \D in regex:

import re

phone_number = '(619) 459-3635'
phone_number = re.sub(r"\D", "", phone_number)
print(phone_number)
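If you need to keep a leading '+' for international numbers, a hypothetical variant (my own sketch, not part of the original answer):

import re

phone_number = '+1 (619) 459-3635'
# strip every non-digit except a plus sign at the very start of the string
print(re.sub(r"(?!^\+)\D", "", phone_number))  # +16194593635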

Is there a simple way to remove multiple spaces in a string?

Question: Is there a simple way to remove multiple spaces in a string?

Suppose this string:

The   fox jumped   over    the log.

Turning into:

The fox jumped over the log.

What is the simplest (1-2 lines) to achieve this, without splitting and going into lists?


Answer 0

>>> import re
>>> re.sub(' +', ' ', 'The     quick brown    fox')
'The quick brown fox'
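Note that this only collapses runs of spaces; single leading and trailing spaces survive. A quick check (my own example):

import re
print(repr(re.sub(' +', ' ', '  The   fox  ')))  # ' The fox '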

Answer 1

foo is your string:

" ".join(foo.split())

Be warned, though: this removes “all whitespace characters (space, tab, newline, return, formfeed)” (thanks to hhsaffar, see comments). I.e., "this is \t a test\n" will effectively end up as "this is a test".
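If you do need to keep the newlines, one workaround (my own sketch, not from the original answer) is to normalize each line separately:

text = "this is \t a test\nsecond  line"
cleaned = "\n".join(" ".join(line.split()) for line in text.splitlines())
print(repr(cleaned))  # 'this is a test\nsecond line'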


Answer 2

import re
s = "The   fox jumped   over    the log."
re.sub(r"\s\s+" , " ", s)

or

re.sub(r"\s\s+", " ", s)

since the space before comma is listed as a pet peeve in PEP 8, as mentioned by user Martin Thoma in the comments.


Answer 3

Using regexes with “\s” and doing simple string.split()’s will also remove other whitespace – like newlines, carriage returns, and tabs. If that isn’t desired, and you only want to collapse multiple spaces, I present these examples.

I used 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum to get realistic time tests and used random-length extra spaces throughout:

original_string = ''.join(word + (' ' * random.randint(1, 10)) for word in lorem_ipsum.split(' '))

The one-liner will essentially do a strip of any leading/trailing spaces, and it preserves a leading/trailing space (but only ONE ;-).

# setup = '''

import re

def while_replace(string):
    while '  ' in string:
        string = string.replace('  ', ' ')

    return string

def re_replace(string):
    return re.sub(r' {2,}' , ' ', string)

def proper_join(string):
    split_string = string.split(' ')

    # To account for leading/trailing spaces that would simply be removed
    beg = ' ' if not split_string[ 0] else ''
    end = ' ' if not split_string[-1] else ''

    # versus simply ' '.join(item for item in string.split(' ') if item)
    return beg + ' '.join(item for item in split_string if item) + end

original_string = """Lorem    ipsum        ... no, really, it kept going...          malesuada enim feugiat.         Integer imperdiet    erat."""

assert while_replace(original_string) == re_replace(original_string) == proper_join(original_string)

#'''

# while_replace_test
new_string = original_string[:]

new_string = while_replace(new_string)

assert new_string != original_string

# re_replace_test
new_string = original_string[:]

new_string = re_replace(new_string)

assert new_string != original_string

# proper_join_test
new_string = original_string[:]

new_string = proper_join(new_string)

assert new_string != original_string

NOTE: The “while version” made a copy of the original_string, as I believe once modified on the first run, successive runs would be faster (if only by a bit). As this adds time, I added this string copy to the other two so that the times showed the difference only in the logic. Keep in mind that the main stmt on timeit instances will only be executed once; the original way I did this, the while loop worked on the same label, original_string, thus the second run, there would be nothing to do. The way it’s set up now, calling a function, using two different labels, that isn’t a problem. I’ve added assert statements to all the workers to verify we change something every iteration (for those who may be dubious). E.g., change to this and it breaks:

# while_replace_test
new_string = original_string[:]

new_string = while_replace(new_string)

assert new_string != original_string # will break the 2nd iteration

while '  ' in original_string:
    original_string = original_string.replace('  ', ' ')

Tests run on a laptop with an i5 processor running Windows 7 (64-bit).

timeit.Timer(stmt = test, setup = setup).repeat(7, 1000)

test_string = 'The   fox jumped   over\n\t    the log.' # trivial

Python 2.7.3, 32-bit, Windows
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.001066 |   0.001260 |   0.001128 |   0.001092
     re_replace_test |   0.003074 |   0.003941 |   0.003357 |   0.003349
    proper_join_test |   0.002783 |   0.004829 |   0.003554 |   0.003035

Python 2.7.3, 64-bit, Windows
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.001025 |   0.001079 |   0.001052 |   0.001051
     re_replace_test |   0.003213 |   0.004512 |   0.003656 |   0.003504
    proper_join_test |   0.002760 |   0.006361 |   0.004626 |   0.004600

Python 3.2.3, 32-bit, Windows
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.001350 |   0.002302 |   0.001639 |   0.001357
     re_replace_test |   0.006797 |   0.008107 |   0.007319 |   0.007440
    proper_join_test |   0.002863 |   0.003356 |   0.003026 |   0.002975

Python 3.3.3, 64-bit, Windows
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.001444 |   0.001490 |   0.001460 |   0.001459
     re_replace_test |   0.011771 |   0.012598 |   0.012082 |   0.011910
    proper_join_test |   0.003741 |   0.005933 |   0.004341 |   0.004009

test_string = lorem_ipsum
# Thanks to http://www.lipsum.com/
# "Generated 11 paragraphs, 1000 words, 6665 bytes of Lorem Ipsum"

Python 2.7.3, 32-bit
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.342602 |   0.387803 |   0.359319 |   0.356284
     re_replace_test |   0.337571 |   0.359821 |   0.348876 |   0.348006
    proper_join_test |   0.381654 |   0.395349 |   0.388304 |   0.388193    

Python 2.7.3, 64-bit
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.227471 |   0.268340 |   0.240884 |   0.236776
     re_replace_test |   0.301516 |   0.325730 |   0.308626 |   0.307852
    proper_join_test |   0.358766 |   0.383736 |   0.370958 |   0.371866    

Python 3.2.3, 32-bit
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.438480 |   0.463380 |   0.447953 |   0.446646
     re_replace_test |   0.463729 |   0.490947 |   0.472496 |   0.468778
    proper_join_test |   0.397022 |   0.427817 |   0.406612 |   0.402053    

Python 3.3.3, 64-bit
                test |    minimum |    maximum |    average |     median
---------------------+------------+------------+------------+-----------
  while_replace_test |   0.284495 |   0.294025 |   0.288735 |   0.289153
     re_replace_test |   0.501351 |   0.525673 |   0.511347 |   0.508467
    proper_join_test |   0.422011 |   0.448736 |   0.436196 |   0.440318

For the trivial string, it would seem that a while-loop is the fastest, followed by the Pythonic string split/join, with regex pulling up the rear.

For non-trivial strings, it seems there’s a bit more to consider. 32-bit 2.7? Regex to the rescue! 64-bit 2.7? A while loop is best, by a decent margin. 32-bit 3.2, go with the “proper” join. 64-bit 3.3, go for a while loop. Again.

In the end, one can improve performance if/where/when needed, but it’s always best to remember the mantra:

  1. Make It Work
  2. Make It Right
  3. Make It Fast

IANAL, YMMV, Caveat Emptor!


Answer 4

I have to agree with Paul McGuire’s comment. To me,

' '.join(the_string.split())

is vastly preferable to whipping out a regex.

My measurements (Linux and Python 2.5) show the split-then-join to be almost five times faster than doing the “re.sub(…)”, and still three times faster if you precompile the regex once and do the operation multiple times. And it is by any measure easier to understand — much more Pythonic.
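A rough way to reproduce that kind of measurement (my own sketch; absolute numbers will vary by machine and Python version):

import re
import timeit

s = "The   fox jumped   over    the log."
compiled = re.compile(r' +')

print(timeit.timeit(lambda: ' '.join(s.split()), number=100000))    # split/join
print(timeit.timeit(lambda: re.sub(r' +', ' ', s), number=100000))  # re.sub, pattern looked up each call
print(timeit.timeit(lambda: compiled.sub(' ', s), number=100000))   # precompiled pattern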


Answer 5

Similar to the previous solutions, but more specific: replace two or more spaces with one:

>>> import re
>>> s = "The   fox jumped   over    the log."
>>> re.sub(r'\s{2,}', ' ', s)
'The fox jumped over the log.'

Answer 6

A simple solution

>>> import re
>>> s="The   fox jumped   over    the log."
>>> print re.sub('\s+',' ', s)
The fox jumped over the log.

Answer 7

You can also use the string splitting technique in a Pandas DataFrame without needing to use .apply(..), which is useful if you need to perform the operation quickly on a large number of strings. Here it is on one line:

df['message'] = (df['message'].str.split()).str.join(' ')
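A minimal sketch of it in context (my own toy DataFrame):

import pandas as pd

df = pd.DataFrame({'message': ['The   fox jumped   over    the log.']})
df['message'] = (df['message'].str.split()).str.join(' ')
print(df['message'][0])  # The fox jumped over the log.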

Answer 8

import re
string = re.sub('[ \t\n]+', ' ', 'The     quick brown                \n\n             \t        fox')

This will replace all tabs, newlines, and runs of multiple spaces with a single space.


Answer 9

I have tried the following method, and it even works with extreme cases like:

str1='          I   live    on    earth           '

' '.join(str1.split())

But if you prefer a regular expression, it can be done as:

re.sub(r'\s+', ' ', str1)

although some preprocessing has to be done to remove the leading and trailing spaces.


Answer 10

This also seems to work:

while "  " in s:
    s = s.replace("  ", " ")

Where the variable s represents your string.


Answer 11

In some cases it’s desirable to replace consecutive occurrences of every whitespace character with a single instance of that character. You’d use a regular expression with backreferences to do that.

(\s)\1{1,} matches any whitespace character, followed by one or more occurrences of that character. Now, all you need to do is specify the first group (\1) as the replacement for the match.

Wrapping this in a function:

import re

def normalize_whitespace(string):
    return re.sub(r'(\s)\1{1,}', r'\1', string)

>>> normalize_whitespace('The   fox jumped   over    the log.')
'The fox jumped over the log.'
>>> normalize_whitespace('First    line\t\t\t \n\n\nSecond    line')
'First line\t \nSecond line'

Answer 12

Another alternative:

>>> import re
>>> str = 'this is a            string with    multiple spaces and    tabs'
>>> str = re.sub('[ \t]+' , ' ', str)
>>> print str
this is a string with multiple spaces and tabs

Answer 13

One line of code to remove all extra spaces before, after, and within a sentence:

sentence = "  The   fox jumped   over    the log.  "
sentence = ' '.join(filter(None,sentence.split(' ')))

Explanation:

  1. Split the entire string into a list.
  2. Filter empty elements from the list.
  3. Rejoin the remaining elements* with a single space

*The remaining elements should be words or words with punctuation, etc. I did not test this extensively, but it should be a good starting point. All the best!


Answer 14

Solution for Python developers:

import re

text1 = 'Python      Exercises    Are   Challenging Exercises'
print("Original string: ", text1)
print("Without extra spaces: ", re.sub(' +', ' ', text1))

Output:

Original string:  Python      Exercises    Are   Challenging Exercises
Without extra spaces:  Python Exercises Are Challenging Exercises


Answer 15

def unPretty(S):
   # Given a dictionary, JSON, list, float, int, or even a string...
   # return a string stripped of CR, LF replaced by space, with multiple spaces reduced to one.
   return ' '.join(str(S).replace('\n', ' ').replace('\r', '').split())

Answer 16

The fastest you can get for user-generated strings is:

if '  ' in text:
    while '  ' in text:
        text = text.replace('  ', ' ')

The short circuiting makes it slightly faster than pythonlarry’s comprehensive answer. Go for this if you’re after efficiency and are strictly looking to weed out extra whitespaces of the single space variety.


Answer 17

Quite surprising – no one has posted a simple character-by-character function. Here it goes:

def compactSpaces(s):
    out = ""
    for c in s:
        # append c unless it is a space and the last character kept was one too
        if c != " " or not out.endswith(" "):
            out += c
    return out

Answer 18

If it’s whitespace you’re dealing with, splitting on None will not include an empty string in the returned value.

5.6.1. String Methods, str.split()
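For example (my own illustration of that default behaviour):

s = "  The   fox jumped   over    the log.  "
# str.split() with no argument splits on runs of whitespace and drops
# the empty strings, so joining the pieces collapses the extra spaces
print(" ".join(s.split()))  # The fox jumped over the log.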


Answer 19

string = 'This is a             string full of spaces          and taps'
string = string.split(' ')
while '' in string:
    string.remove('')
string = ' '.join(string)
print(string)

Results:

This is a string full of spaces and taps


Answer 20

To remove white space, considering leading, trailing and extra white space in between words, use:

(?<=\s) +|^ +(?=\s)| (?= +[\n\0])

The first alternative deals with extra spaces that follow other whitespace, the second with leading spaces at the start of the string, and the last with trailing spaces.

For proof of use, this link will provide you with a test.

https://regex101.com/r/meBYli/4

This is to be used with the re.split function.
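A usage sketch (my own example; note the trailing-space branch only fires when the string ends in \n or \0):

import re

pattern = r'(?<=\s) +|^ +(?=\s)| (?= +[\n\0])'
s = "  The   fox jumped   over    the log.  \n"
# re.split cuts out the matched runs of spaces; joining the pieces back
# together leaves single spaces between the words
print(repr(''.join(re.split(pattern, s))))  # 'The fox jumped over the log.\n'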


Answer 21

I have my simple method which I used in college.

line = "I     have            a       nice    day."

end = 1000
while end != 0:
    # str.replace returns a new string, so assign the result back
    line = line.replace("  ", " ")
    end -= 1

This will replace every double space with a single space, 1000 times over. It means you can have 2000 extra spaces and it will still work. :)


Answer 22

I’ve got a simple method without splitting:

a = "Lorem   Ipsum Darum     Diesrum!"
while a.find("  ") != -1:   # find returns -1 once no double space is left
    a = a.replace("  ", " ")

print(a)

Answer 23

import re

Text = " You can select below trims for removing white space!!   BR Aliakbar     "
# trims all white spaces
print('Remove all space:',re.sub(r"\s+", "", Text), sep='') 
# trims left space
print('Remove leading space:', re.sub(r"^\s+", "", Text), sep='') 
# trims right space
print('Remove trailing spaces:', re.sub(r"\s+$", "", Text), sep='')  
# trims both
print('Remove leading and trailing spaces:', re.sub(r"^\s+|\s+$", "", Text), sep='')
# replace more than one white space in the string with one white space
print('Remove more than one space:',re.sub(' +', ' ',Text), sep='') 

Result:

Remove all space:Youcanselectbelowtrimsforremovingwhitespace!!BRAliakbar
Remove leading space:You can select below trims for removing white space!!   BR Aliakbar
Remove trailing spaces: You can select below trims for removing white space!!   BR Aliakbar
Remove leading and trailing spaces:You can select below trims for removing white space!!   BR Aliakbar
Remove more than one space: You can select below trims for removing white space!! BR Aliakbar


Answer 24

I haven’t read a lot into the other examples, but I have just created this method for consolidating multiple consecutive space characters.

It does not use any libraries, and whilst it is relatively long in terms of script length, it is not a complex implementation:

def spaceMatcher(command):
    """
    Consolidate runs of multiple consecutive space characters in
    a string into a single space
    """
    new_command = ""
    space_match = 0  # counts the consecutive spaces seen so far
    for char in command:
        if char == " ":
            space_match += 1
            if space_match == 1:
                # keep only the first space of each run
                new_command += char
        else:
            space_match = 0
            new_command += char
    return new_command

command = str(input("Please enter a command ->"))
print(spaceMatcher(command))
print(list(spaceMatcher(command)))