Question: Stripping HTML from strings in Python

from mechanize import Browser
br = Browser()
br.open('http://somewebpage')
html = br.response().readlines()
for line in html:
  print line

When printing a line in an HTML file, I’m trying to find a way to only show the contents of each HTML element and not the formatting itself. If it finds '<a href="whatever.com">some text</a>', it will only print ‘some text’, '<b>hello</b>' prints ‘hello’, etc. How would one go about doing this?


Answer 0

I always used this function to strip HTML tags, as it requires only the Python stdlib:

For Python 3:

from io import StringIO
from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reset()
        self.strict = False
        self.convert_charrefs = True
        self.text = StringIO()
    def handle_data(self, d):
        self.text.write(d)
    def get_data(self):
        return self.text.getvalue()

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

For Python 2:

from HTMLParser import HTMLParser
from StringIO import StringIO

class MLStripper(HTMLParser):
    def __init__(self):
        self.reset()
        self.text = StringIO()
    def handle_data(self, d):
        self.text.write(d)
    def get_data(self):
        return self.text.getvalue()

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

Answer 1

I haven’t thought much about the cases it will miss, but you can do a simple regex:

import re
re.sub('<[^<]+?>', '', text)

For those that don’t understand regex, this searches for a string <...>, where the inner content is made of one or more (+) characters that aren’t a <. The ? means that it will match the smallest string it can find. For example, given <p>Hello</p>, it will match <p> and </p> separately thanks to the ?. Without it, it will match the entire string <..Hello..>.

If a non-tag < appears in the HTML (e.g. 2 < 3), it should be written as an escape sequence (&...) anyway, so the ^< may be unnecessary.
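A minimal, self-contained sketch of this approach (the function name is just for illustration), including the kind of malformed input that slips through:

```python
import re

def strip_tags_regex(text):
    # Non-greedy: match '<', then one or more non-'<' characters, then '>'
    return re.sub('<[^<]+?>', '', text)

print(strip_tags_regex('<a href="whatever.com">some text</a>'))  # some text
print(strip_tags_regex('<b>hello</b>'))                          # hello

# Nested brackets defeat it: only the inner <b> and </b> are removed,
# reassembling a brand-new <script> tag from the leftovers.
print(strip_tags_regex('<<b>script>alert(1)<</b>/script>'))      # <script>alert(1)</script>
```

This is why the regex approach should not be used for untrusted input.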


Answer 2

You can use BeautifulSoup’s get_text() feature.

from bs4 import BeautifulSoup

html_str = '''
<td><a href="http://www.fakewebsite.com">Please can you strip me?</a>
<br/><a href="http://www.fakewebsite.com">I am waiting....</a>
</td>
'''
soup = BeautifulSoup(html_str)

print(soup.get_text()) 
# or via the .text attribute of the soup object: print(soup.text)

It is advisable to explicitly specify the parser, for example as BeautifulSoup(html_str, features="html.parser"), for the output to be reproducible.


Answer 3

Short version!

import re, cgi
tag_re = re.compile(r'(<!--.*?-->|<[^>]*>)')

# Remove well-formed tags, fixing mistakes by legitimate users
no_tags = tag_re.sub('', user_input)

# Clean up anything else by escaping
ready_for_web = cgi.escape(no_tags)

Regex source: MarkupSafe. Their version handles HTML entities too, while this quick one doesn’t.

Why can’t I just strip the tags and leave it?

It’s one thing to keep people from <i>italicizing</i> things, without leaving stray i tags floating around. But it’s another to take arbitrary input and make it completely harmless. Most of the techniques on this page will leave things like unclosed comments (<!--) and angle-brackets that aren’t part of tags (blah <<<><blah) intact. The HTMLParser version can even leave complete tags in, if they’re inside an unclosed comment.

What if your template is {{ firstname }} {{ lastname }}? firstname = '<a' and lastname = 'href="http://evil.com/">' will be let through by every tag stripper on this page (except @Medeiros!), because they’re not complete tags on their own. Stripping out normal HTML tags is not enough.

Django’s strip_tags, an improved (see next heading) version of the top answer to this question, gives the following warning:

Absolutely NO guarantee is provided about the resulting string being HTML safe. So NEVER mark safe the result of a strip_tags call without escaping it first, for example with escape().

Follow their advice!

To strip tags with HTMLParser, you have to run it multiple times.

It’s easy to circumvent the top answer to this question.

Look at this string (source and discussion):

<img<!-- --> src=x onerror=alert(1);//><!-- -->

The first time HTMLParser sees it, it can’t tell that the <img...> is a tag. It looks broken, so HTMLParser doesn’t get rid of it. It only takes out the <!-- comments -->, leaving you with

<img src=x onerror=alert(1);//>

This problem was disclosed to the Django project in March, 2014. Their old strip_tags was essentially the same as the top answer to this question. Their new version basically runs it in a loop until running it again doesn’t change the string:

# _strip_once runs HTMLParser once, pulling out just the text of all the nodes.

def strip_tags(value):
    """Returns the given HTML with all tags stripped."""
    # Note: in typical case this loop executes _strip_once once. Loop condition
    # is redundant, but helps to reduce number of executions of _strip_once.
    while '<' in value and '>' in value:
        new_value = _strip_once(value)
        if len(new_value) >= len(value):
            # _strip_once was not able to detect more tags
            break
        value = new_value
    return value
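Django’s `_strip_once` isn’t shown above. A rough stdlib equivalent (my sketch based on the MLStripper pattern from the top answer, not Django’s actual code), combined with the loop:

```python
from io import StringIO
from html.parser import HTMLParser

class _MLStripper(HTMLParser):
    # Same idea as the MLStripper in the top answer: collect only text nodes
    def __init__(self):
        super().__init__()
        self.text = StringIO()
    def handle_data(self, d):
        self.text.write(d)

def _strip_once(value):
    s = _MLStripper()
    s.feed(value)
    s.close()
    return s.text.getvalue()

def strip_tags(value):
    """Returns the given HTML with all tags stripped."""
    while '<' in value and '>' in value:
        new_value = _strip_once(value)
        if len(new_value) >= len(value):
            # _strip_once was not able to detect more tags
            break
        value = new_value
    return value

# One pass leaves '<script>x</script>'; the loop strips it completely
print(strip_tags('<<b>script>x<</b>/script>'))  # x
```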

Of course, none of this is an issue if you always escape the result of strip_tags().

Update 19 March, 2015: There was a bug in Django versions before 1.4.20, 1.6.11, 1.7.7, and 1.8c1. These versions could enter an infinite loop in the strip_tags() function. The fixed version is reproduced above. More details here.

Good things to copy or use

My example code doesn’t handle HTML entities – the Django and MarkupSafe packaged versions do.

My example code is pulled from the excellent MarkupSafe library for cross-site scripting prevention. It’s convenient and fast (with C speedups to its native Python version). It’s included in Google App Engine, and used by Jinja2 (2.7 and up), Mako, Pylons, and more. It works easily with Django templates from Django 1.7.

Django’s strip_tags and other html utilities from a recent version are good, but I find them less convenient than MarkupSafe. They’re pretty self-contained, you could copy what you need from this file.

If you need to strip almost all tags, the Bleach library is good. You can have it enforce rules like “my users can italicize things, but they can’t make iframes.”

Understand the properties of your tag stripper! Run fuzz tests on it! Here is the code I used to do the research for this answer.

sheepish note – The question itself is about printing to the console, but this is the top Google result for “python strip html from string”, so that’s why this answer is 99% about the web.


Answer 4

I needed a way to strip tags and decode HTML entities to plain text. The following solution is based on Eloff’s answer (which I couldn’t use because it strips entities).

from HTMLParser import HTMLParser
import htmlentitydefs

class HTMLTextExtractor(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.result = [ ]

    def handle_data(self, d):
        self.result.append(d)

    def handle_charref(self, number):
        codepoint = int(number[1:], 16) if number[0] in (u'x', u'X') else int(number)
        self.result.append(unichr(codepoint))

    def handle_entityref(self, name):
        codepoint = htmlentitydefs.name2codepoint[name]
        self.result.append(unichr(codepoint))

    def get_text(self):
        return u''.join(self.result)

def html_to_text(html):
    s = HTMLTextExtractor()
    s.feed(html)
    return s.get_text()

A quick test:

html = u'<a href="#">Demo <em>(&not; \u0394&#x03b7;&#956;&#x03CE;)</em></a>'
print repr(html_to_text(html))

Result:

u'Demo (\xac \u0394\u03b7\u03bc\u03ce)'

Error handling:

  • Invalid HTML structure may cause an HTMLParseError.
  • Invalid named HTML entities (such as &apos;, which is valid in XML and XHTML, but not plain HTML) will cause a ValueError exception.
  • Numeric HTML entities specifying code points outside the Unicode range acceptable by Python (such as, on some systems, characters outside the Basic Multilingual Plane) will cause a ValueError exception.

Security note: Do not confuse HTML stripping (converting HTML into plain text) with HTML sanitizing (converting plain text into HTML). This answer will remove HTML and decode entities into plain text – that does not make the result safe to use in an HTML context.

Example: &lt;script&gt;alert("Hello");&lt;/script&gt; will be converted to <script>alert("Hello");</script>, which is 100% correct behavior, but obviously not sufficient if the resulting plain text is inserted as-is into an HTML page.

The rule is simple: any time you insert a plain-text string into HTML output, you should always HTML-escape it (using cgi.escape(s, True)), even if you “know” it doesn’t contain HTML (e.g. because you stripped HTML content).

(However, the OP asked about printing the result to the console, in which case no HTML escaping is needed.)
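On Python 3, cgi.escape is gone (deprecated since 3.2, removed in 3.8); html.escape is the stdlib replacement:

```python
import html

# The stripped "plain text" may still look like HTML; escape it before re-inserting
stripped = '<script>alert("Hello");</script>'
print(html.escape(stripped, quote=True))
# &lt;script&gt;alert(&quot;Hello&quot;);&lt;/script&gt;
```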

Python 3.4+ version: (with doctest!)

import html.parser

class HTMLTextExtractor(html.parser.HTMLParser):
    def __init__(self):
        super(HTMLTextExtractor, self).__init__()
        self.result = [ ]

    def handle_data(self, d):
        self.result.append(d)

    def get_text(self):
        return ''.join(self.result)

def html_to_text(html):
    """Converts HTML to plain text (stripping tags and converting entities).
    >>> html_to_text('<a href="#">Demo<!--...--> <em>(&not; \u0394&#x03b7;&#956;&#x03CE;)</em></a>')
    'Demo (\xac \u0394\u03b7\u03bc\u03ce)'

    "Plain text" doesn't mean result can safely be used as-is in HTML.
    >>> html_to_text('&lt;script&gt;alert("Hello");&lt;/script&gt;')
    '<script>alert("Hello");</script>'

    Always use html.escape to sanitize text before using in an HTML context!

    HTMLParser will do its best to make sense of invalid HTML.
    >>> html_to_text('x < y &lt z <!--b')
    'x < y < z '

    Unrecognized named entities are included as-is. '&apos;' is recognized,
    despite being XML only.
    >>> html_to_text('&nosuchentity; &apos; ')
    "&nosuchentity; ' "
    """
    s = HTMLTextExtractor()
    s.feed(html)
    return s.get_text()

Note that HTMLParser has improved in Python 3 (meaning less code and better error handling).


Answer 5

There’s a simple way to do this:

def remove_html_markup(s):
    tag = False
    quote = False
    out = ""

    for c in s:
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out

The idea is explained here: http://youtu.be/2tu9LTDujbw

You can see it working here: http://youtu.be/HPkNPcYed9M?t=35s

PS – If you’re interested in the course (about smart debugging with Python), here’s a link: http://www.udacity.com/overview/Course/cs259/CourseRev/1. It’s free!

You’re welcome! :)
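A self-contained check of the state machine above: the quote flag is what lets a quoted '>' inside an attribute pass through, but a bare '<' in running text still swallows everything up to the next '>':

```python
def remove_html_markup(s):
    tag = False
    quote = False
    out = ""
    for c in s:
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c
    return out

print(remove_html_markup('<a href="a>b">text</a>'))  # text
print(remove_html_markup('<b>2 < 3</b>'))            # 2  (rest is swallowed)
```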


Answer 6

If you need to preserve HTML entities (i.e. &amp;), I added a handle_entityref method to Eloff’s answer.

from HTMLParser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        self.reset()
        self.fed = []
    def handle_data(self, d):
        self.fed.append(d)
    def handle_entityref(self, name):
        self.fed.append('&%s;' % name)
    def get_data(self):
        return ''.join(self.fed)

def html_to_text(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()
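On Python 3 the same idea needs the renamed module, a super().__init__() call, and convert_charrefs=False (otherwise handle_entityref is never invoked). A sketch:

```python
from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        # convert_charrefs=False so entity/char refs reach our handlers intact
        super().__init__(convert_charrefs=False)
        self.fed = []
    def handle_data(self, d):
        self.fed.append(d)
    def handle_entityref(self, name):
        self.fed.append('&%s;' % name)
    def handle_charref(self, name):
        self.fed.append('&#%s;' % name)
    def get_data(self):
        return ''.join(self.fed)

def html_to_text(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

print(html_to_text('<b>Fish &amp; Chips</b>'))  # Fish &amp; Chips
```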

Answer 7

If you want to strip all HTML tags, the easiest way I found is using BeautifulSoup:

from bs4 import BeautifulSoup  # Or from BeautifulSoup import BeautifulSoup

def stripHtmlTags(htmlTxt):
    if htmlTxt is None:
        return None
    else:
        return ''.join(BeautifulSoup(htmlTxt).findAll(text=True))

I tried the code from the accepted answer but was getting “RuntimeError: maximum recursion depth exceeded”, which didn’t happen with the above block of code.


Answer 8

An lxml.html-based solution (lxml is a native library and can be more performant than a pure Python solution).

Remove ALL tags

from lxml import html


## from file-like object or URL
tree = html.parse(file_like_object_or_url)

## from string
tree = html.fromstring('safe <script>unsafe</script> safe')

print(tree.text_content().strip())

### OUTPUT: 'safe unsafe safe'

Remove ALL tags with pre-sanitizing HTML (dropping some tags)

from lxml import html
from lxml.html.clean import clean_html

tree = html.fromstring("""<script>dangerous</script><span class="item-summary">
                            Detailed answers to any questions you might have
                        </span>""")

## text only
print(clean_html(tree).text_content().strip())

### OUTPUT: 'Detailed answers to any questions you might have'

Also see http://lxml.de/lxmlhtml.html#cleaning-up-html for what exactly the lxml.cleaner does.

If you need more control over what exactly is sanitized before converting to text then you might want to use the lxml Cleaner explicitly by passing the options you want in the constructor, e.g:

cleaner = Cleaner(page_structure=True,
                  meta=True,
                  embedded=True,
                  links=True,
                  style=True,
                  processing_instructions=True,
                  inline_style=True,
                  scripts=True,
                  javascript=True,
                  comments=True,
                  frames=True,
                  forms=True,
                  annoying_tags=True,
                  remove_unknown_tags=True,
                  safe_attrs_only=True,
                  safe_attrs=frozenset(['src','color', 'href', 'title', 'class', 'name', 'id']),
                  remove_tags=('span', 'font', 'div')
                  )
sanitized_html = cleaner.clean_html(unsafe_html)

If you need more control over how plain text is generated then instead of text_content() you can use lxml.etree.tostring:

from lxml.etree import tostring

plain_bytes = tostring(tree, method='text', encoding='utf-8')
print(plain_bytes.decode('utf-8'))


Answer 9

Here is a simple solution that strips HTML tags and decodes HTML entities based on the amazingly fast lxml library:

from lxml import html

def strip_html(s):
    return str(html.fromstring(s).text_content())

strip_html('Ein <a href="">sch&ouml;ner</a> Text.')  # Output: Ein schöner Text.

Answer 10

The Beautiful Soup package does this immediately for you.

from bs4 import BeautifulSoup

soup = BeautifulSoup(html)
text = soup.get_text()
print(text)

Answer 11

Here’s my solution for Python 3.

import html
import re

def html_to_txt(html_text):
    ## unescape html
    txt = html.unescape(html_text)
    tags = re.findall("<[^>]+>",txt)
    print("found tags: ")
    print(tags)
    for tag in tags:
        txt=txt.replace(tag,'')
    return txt

Not sure if it is perfect, but solved my use case and seems simple.
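One caution, shown as a compact, self-contained version of the function above (prints removed): because unescape runs first, text that was deliberately escaped (e.g. &lt;b&gt;) turns into a real-looking tag and is then stripped away:

```python
import html
import re

def html_to_txt(html_text):
    # Unescape entities first, then remove anything that now looks like a tag
    txt = html.unescape(html_text)
    for tag in re.findall("<[^>]+>", txt):
        txt = txt.replace(tag, '')
    return txt

print(html_to_txt('<b>bold</b> text'))          # bold text
print(html_to_txt('tell me about &lt;b&gt;'))   # 'tell me about ' -- the escaped tag is lost
```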


Answer 12

You can use a different HTML parser (like lxml or Beautiful Soup) that offers functions to extract just text. Or you can run a regex on your line string that strips out the tags. See the Python docs for more.


Answer 13

I have used Eloff’s answer successfully for Python 3.1 [many thanks!].

I upgraded to Python 3.2.3, and ran into errors.

The solution, provided here thanks to the responder Thomas K, is to insert super().__init__() into the following code:

def __init__(self):
    self.reset()
    self.fed = []

… in order to make it look like this:

def __init__(self):
    super().__init__()
    self.reset()
    self.fed = []

… and it will work for Python 3.2.3.

Again, thanks to Thomas K for the fix and for Eloff’s original code provided above!


Answer 14

You can write your own function:

def StripTags(text):
    finished = 0
    while not finished:
        finished = 1
        start = text.find("<")
        if start >= 0:
            stop = text[start:].find(">")
            if stop >= 0:
                text = text[:start] + text[start+stop+1:]
                finished = 0
    return text
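A quick check of the function above (copied verbatim apart from indentation); note that each removal re-concatenates the whole string, so it can be quadratic on tag-heavy input:

```python
def StripTags(text):
    finished = 0
    while not finished:
        finished = 1
        start = text.find("<")
        if start >= 0:
            # Find the matching '>' after this '<' and cut the span out
            stop = text[start:].find(">")
            if stop >= 0:
                text = text[:start] + text[start+stop+1:]
                finished = 0
    return text

print(StripTags('<b>hello</b> world'))  # hello world
print(StripTags('a < b'))               # a < b  (an unmatched '<' is left alone)
```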

Answer 15

The HTMLParser-based solutions are all breakable if they run only once:

html_to_text('<<b>script>alert("hacked")<</b>/script>')

results in:

<script>alert("hacked")</script>

which is exactly what you intend to prevent. If you use an HTML parser, run it repeatedly until no more tags are found:

from HTMLParser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        self.reset()
        self.fed = []
        self.containstags = False

    def handle_starttag(self, tag, attrs):
       self.containstags = True

    def handle_data(self, d):
        self.fed.append(d)

    def has_tags(self):
        return self.containstags

    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    must_filtered = True
    while ( must_filtered ):
        s = MLStripper()
        s.feed(html)
        html = s.get_data()
        must_filtered = s.has_tags()
    return html
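A hypothetical Python 3 port of the loop above (same names as the original), showing it neutralizing the nested-tag trick from the start of this answer:

```python
from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fed = []
        self.containstags = False
    def handle_starttag(self, tag, attrs):
        # Any start tag seen means another stripping pass is needed
        self.containstags = True
    def handle_data(self, d):
        self.fed.append(d)
    def has_tags(self):
        return self.containstags
    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    must_filter = True
    while must_filter:
        s = MLStripper()
        s.feed(html)
        html = s.get_data()
        must_filter = s.has_tags()
    return html

print(strip_tags('<<b>script>alert("hacked")<</b>/script>'))  # alert("hacked")
```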

Answer 16

This is a quick fix and can be optimized further, but it works fine. This code replaces all non-empty tags with “” and strips all HTML tags from a given input text. You can run it using ./file.py input output

#!/usr/bin/python
import sys

def replace(strng, replaceText):
    rpl = 0
    while rpl > -1:
        rpl = strng.find(replaceText)
        if rpl != -1:
            strng = strng[0:rpl] + strng[rpl + len(replaceText):]
    return strng


lessThanPos = -1
count = 0
listOf = []

try:
    # output file
    writeto = open(sys.argv[2], 'w')

    # read input file and store its lines in a list
    f = open(sys.argv[1], 'r')
    for readLine in f.readlines():
        listOf.append(readLine)
    f.close()

    # remove all tags
    for line in listOf:
        count = 0
        lessThanPos = -1
        lineTemp = line

        for char in lineTemp:
            if char == "<":
                lessThanPos = count
            if char == ">":
                if lessThanPos > -1:
                    if line[lessThanPos:count + 1] != '<>':
                        lineTemp = replace(lineTemp, line[lessThanPos:count + 1])
                        lessThanPos = -1
            count = count + 1
        lineTemp = lineTemp.replace("&lt", "<")
        lineTemp = lineTemp.replace("&gt", ">")
        writeto.write(lineTemp)
    writeto.close()
    print "Write To --- >", sys.argv[2]
except:
    print "Help: invalid arguments or exception"
    print "Usage : ", sys.argv[0], " inputfile outputfile"

Answer 17

A Python 3 adaptation of søren-løvborg’s answer:

from html.parser import HTMLParser
from html.entities import html5

class HTMLTextExtractor(HTMLParser):
    """ Adaption of http://stackoverflow.com/a/7778368/196732 """
    def __init__(self):
        super().__init__()
        self.result = []

    def handle_data(self, d):
        self.result.append(d)

    def handle_charref(self, number):
        codepoint = int(number[1:], 16) if number[0] in ('x', 'X') else int(number)
        self.result.append(chr(codepoint))

    def handle_entityref(self, name):
        if name + ';' in html5:  # html5 keys include the trailing semicolon
            self.result.append(html5[name + ';'])

    def get_text(self):
        return u''.join(self.result)

def html_to_text(html):
    s = HTMLTextExtractor()
    s.feed(html)
    return s.get_text()
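A quick sanity check of the extractor. This sketch passes `convert_charrefs=False` so the reference handlers actually fire (on Python 3.5+ the base parser decodes references itself by default); that flag choice, plus using `chr` and `html5`'s semicolon-suffixed keys, are my adjustments:

```python
from html.parser import HTMLParser
from html.entities import html5

class HTMLTextExtractor(HTMLParser):
    def __init__(self):
        # convert_charrefs=False so handle_charref/handle_entityref are exercised
        super().__init__(convert_charrefs=False)
        self.result = []

    def handle_data(self, d):
        self.result.append(d)

    def handle_charref(self, number):
        codepoint = int(number[1:], 16) if number[0] in ('x', 'X') else int(number)
        self.result.append(chr(codepoint))

    def handle_entityref(self, name):
        if name + ';' in html5:  # keys in html5 carry the trailing semicolon
            self.result.append(html5[name + ';'])

    def get_text(self):
        return ''.join(self.result)

def html_to_text(html):
    s = HTMLTextExtractor()
    s.feed(html)
    return s.get_text()

print(html_to_text('<p>a &amp; b &#65;</p>'))  # a & b A
```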

Answer 18

For one project, I needed to strip HTML, but also CSS and JS. Thus, I made a variation of Eloff’s answer:

from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()  # required on Python 3 before reset()
        self.reset()
        self.strict = False
        self.convert_charrefs= True
        self.fed = []
        self.css = False
    def handle_starttag(self, tag, attrs):
        if tag == "style" or tag=="script":
            self.css = True
    def handle_endtag(self, tag):
        if tag=="style" or tag=="script":
            self.css=False
    def handle_data(self, d):
        if not self.css:
            self.fed.append(d)
    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()
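With the Python 3 import and `super().__init__()` call added (my adjustments), a quick check that `<style>`/`<script>` bodies are dropped:

```python
from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()  # required on Python 3 before reset()
        self.reset()
        self.strict = False
        self.convert_charrefs = True
        self.fed = []
        self.css = False
    def handle_starttag(self, tag, attrs):
        if tag == "style" or tag == "script":
            self.css = True  # suppress data until the matching end tag
    def handle_endtag(self, tag):
        if tag == "style" or tag == "script":
            self.css = False
    def handle_data(self, d):
        if not self.css:
            self.fed.append(d)
    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

print(strip_tags('<style>p {color: red}</style><p>hi</p>'))  # hi
```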

Answer 19

Here’s a solution similar to the currently accepted answer (https://stackoverflow.com/a/925630/95989), except that it uses the internal HTMLParser class directly (i.e. no subclassing), thereby making it significantly more terse:

from html.parser import HTMLParser

def strip_html(text):
    parts = []
    parser = HTMLParser()
    parser.handle_data = parts.append
    parser.feed(text)
    parser.close()  # flush any buffered trailing data
    return ''.join(parts)
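A quick check against the question’s own examples (assuming Python 3’s `html.parser`; the `parser.close()` call is my addition to flush data buffered by `convert_charrefs`):

```python
from html.parser import HTMLParser

def strip_html(text):
    parts = []
    parser = HTMLParser()
    parser.handle_data = parts.append  # route all text chunks into the list
    parser.feed(text)
    parser.close()  # flush any buffered trailing data
    return ''.join(parts)

print(strip_html('<a href="whatever.com">some text</a>'))  # some text
print(strip_html('<b>hello</b>'))                          # hello
```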

Answer 20

I’m parsing GitHub READMEs and I find that the following really works well:

import re
import lxml.html

def strip_markdown(x):
    links_sub = re.sub(r'\[(.+)\]\([^\)]+\)', r'\1', x)
    bold_sub = re.sub(r'\*\*([^*]+)\*\*', r'\1', links_sub)
    emph_sub = re.sub(r'\*([^*]+)\*', r'\1', bold_sub)
    return emph_sub

def strip_html(x):
    return lxml.html.fromstring(x).text_content() if x else ''

And then

readme = """<img src="https://raw.githubusercontent.com/kootenpv/sky/master/resources/skylogo.png" />

            sky is a web scraping framework, implemented with the latest python versions in mind (3.4+). 
            It uses the asynchronous `asyncio` framework, as well as many popular modules 
            and extensions.

            Most importantly, it aims for **next generation** web crawling where machine intelligence 
            is used to speed up the development/maintainance/reliability of crawling.

            It mainly does this by considering the user to be interested in content 
            from *domains*, not just a collection of *single pages*
            ([templating approach](#templating-approach))."""

strip_markdown(strip_html(readme))

Removes all markdown and html correctly.


Answer 21

Using BeautifulSoup, html2text or the code from @Eloff, most of the time some HTML elements and JavaScript code still remain.

So you can use a combination of these libraries and also delete the Markdown formatting (Python 3):

import re
import html2text
from bs4 import BeautifulSoup
def html2Text(html):
    def removeMarkdown(text):
        for current in ["^[ #*]{2,30}", "^[ ]{0,30}\d\\\.", "^[ ]{0,30}\d\."]:
            markdown = re.compile(current, flags=re.MULTILINE)
            text = markdown.sub(" ", text)
        return text
    def removeAngular(text):
        angular = re.compile("[{][|].{2,40}[|][}]|[{][*].{2,40}[*][}]|[{][{].{2,40}[}][}]|\[\[.{2,40}\]\]")
        text = angular.sub(" ", text)
        return text
    h = html2text.HTML2Text()
    h.images_to_alt = True
    h.ignore_links = True
    h.ignore_emphasis = False
    h.skip_internal_links = True
    text = h.handle(html)
    soup = BeautifulSoup(text, "html.parser")
    text = soup.text
    text = removeAngular(text)
    text = removeMarkdown(text)
    return text

It works well for me but it can be enhanced, of course…


Answer 22

Simple code! This will remove all kinds of tags and the markup inside them.

def rm(s):
    start=False
    end=False
    s=' '+s
    for i in range(len(s)-1):
        if i<len(s):
            if start!=False:
                if s[i]=='>':
                    end=i
                    s=s[:start]+s[end+1:]
                    start=end=False
            else:
                if s[i]=='<':
                    start=i
    if s.count('<') > 0:
        return rm(s)  # recurse on the partially cleaned string
    else:
        s = s.replace('&nbsp;', ' ')
        return s

But it won’t give the full result if the text itself contains < or > symbols.
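With the recursion corrected to `return rm(s)` (the original `self.rm(s)` fails outside a class and discards the result), a quick check. Note the leading space the `s = ' ' + s` padding leaves behind:

```python
def rm(s):
    start = False
    end = False
    s = ' ' + s  # pad so a tag at position 0 still gets a truthy start index
    for i in range(len(s) - 1):
        if i < len(s):  # s shrinks as tags are cut out
            if start != False:
                if s[i] == '>':
                    end = i
                    s = s[:start] + s[end + 1:]
                    start = end = False
            else:
                if s[i] == '<':
                    start = i
    if s.count('<') > 0:
        return rm(s)  # keep stripping until no tags remain
    else:
        s = s.replace('&nbsp;', ' ')
        return s

print(rm('<b>hello</b>').strip())  # hello
```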


Answer 23

# This is a regex solution.
import re

def removeHtml(html):
    if not html:
        return html
    # Remove comments first
    innerText = re.compile(r'<!--[\s\S]*?-->').sub('', html)
    while innerText.find('>') >= 0:  # loop through nested tags
        text = re.compile(r'<[^<>]+?>').sub('', innerText)
        if text == innerText:
            break
        innerText = text
    return innerText.strip()
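A quick check of the regex approach above (the sample input is mine):

```python
import re

def removeHtml(html):
    if not html:
        return html
    # Remove comments first, since their bodies may contain < and >
    innerText = re.compile(r'<!--[\s\S]*?-->').sub('', html)
    while innerText.find('>') >= 0:  # loop through nested tags
        text = re.compile(r'<[^<>]+?>').sub('', innerText)
        if text == innerText:
            break
        innerText = text
    return innerText.strip()

print(removeHtml('<!-- note --><div><b>hello</b> world</div>'))  # hello world
```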

Answer 24

This method works flawlessly for me and requires no additional installations:

import re
import htmlentitydefs

def convertentity(m):
    if m.group(1) == '#':
        try:
            return unichr(int(m.group(2)))
        except ValueError:
            return '&#%s;' % m.group(2)
    try:
        return htmlentitydefs.entitydefs[m.group(2)]
    except KeyError:
        return '&%s;' % m.group(2)

def converthtml(s):
    return re.sub(r'&(#?)(.+?);',convertentity,s)

html = converthtml(html)
html = html.replace("&nbsp;", " ")  # get rid of the remnants of certain formatting (subscript, superscript, etc.)
