Removing HTML tags from text using a regular expression in Python

Posted: 2022-10-16 09:01:00

I'm trying to look at an HTML file and remove all the tags from it so that only the text is left, but I'm having a problem with my regex. This is what I have so far:

import urllib.request
import re

def test(url):
    # urlopen() returns bytes; decode before running the regex over it.
    html = urllib.request.urlopen(url).read().decode('utf-8', errors='replace')
    print(re.findall(r'<[\w\/\.\w]*>', html))

The HTML is a simple page with a few links and text, but my regex won't pick up the <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> declaration or <a href="...."> tags. Can anyone explain what I need to change in my regex?
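For reference, the character class in the question only admits word characters, slashes, and dots, so any tag containing a space, quote, or equals sign (like the doctype or an anchor with an href attribute) cannot match. A small sketch of the difference, using a made-up sample string:

```python
import re

sample = '<!DOCTYPE html><a href="https://example.com">link</a>'

# The question's pattern: no spaces, quotes, or '=' allowed inside the tag,
# so only the bare closing tag matches.
print(re.findall(r'<[\w\/\.\w]*>', sample))  # ['</a>']

# Allowing any character except '>' matches every tag, attributes included.
print(re.findall(r'<[^>]*>', sample))
```

This still breaks on comments, CDATA, and a literal '>' inside an attribute value, which is one reason the answers below recommend a real parser instead.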

2 solutions

#1


Score: 14

Use BeautifulSoup. Use lxml. Do not use regular expressions to parse HTML.

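A minimal BeautifulSoup sketch of the same idea (assuming bs4 is installed; the sample HTML here is illustrative, not from the question):

```python
from bs4 import BeautifulSoup

html = """<html><head><title>Demo</title>
<style>p {color: red;}</style></head>
<body><p>Hello, <a href="https://example.com">world</a>!</p></body></html>"""

soup = BeautifulSoup(html, "html.parser")

# Drop script/style elements so their contents don't leak into the text.
for tag in soup(["script", "style"]):
    tag.decompose()

print(soup.get_text(" ", strip=True))  # Demo Hello, world !
```

get_text() walks the parse tree and concatenates only the character data, so attributes, doctypes, and comments never show up in the output, which is exactly what the naive regex struggled with.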


Edit 2010-01-29: This would be a reasonable starting point for lxml:


from lxml.html import fromstring
from lxml.html.clean import Cleaner
import requests

url = "https://*.com/questions/2165943/removing-html-tags-from-a-text-using-regular-expression-in-python"
html = requests.get(url).text

doc = fromstring(html)

# remove_tags unwraps these tags: their text survives, the markup goes.
tags = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'div', 'span',
        'img', 'area', 'map']
args = {'meta': False, 'safe_attrs_only': False, 'page_structure': False,
        'scripts': True, 'style': True, 'links': True, 'remove_tags': tags}
cleaner = Cleaner(**args)

path = '/html/body'
body = doc.xpath(path)[0]

print(cleaner.clean_html(body).text_content().encode('ascii', 'ignore').decode('ascii'))

You want the content, so presumably you don't want any JavaScript or CSS. Also, presumably you want only the content in the body, not the HTML from the head. Read up on lxml.html.clean to see what you can easily strip out. Way smarter than regular expressions, no?


Also, watch out for unicode encoding problems. You can easily end up with HTML that you cannot print.

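For example, a string with non-ASCII characters raises UnicodeEncodeError under a strict ASCII codec, while errors='ignore' silently drops them, which is what the snippet above relies on (a toy sketch, unrelated to any particular page):

```python
text = "caf\u00e9 r\u00e9sum\u00e9"  # "café résumé"

# Strict ASCII encoding fails on the accented characters.
try:
    text.encode('ascii')
except UnicodeEncodeError as exc:
    print("strict ascii failed:", exc.reason)

# errors='ignore' drops anything outside ASCII instead of raising.
print(text.encode('ascii', 'ignore').decode('ascii'))  # caf rsum
```

The trade-off is silent data loss; if the accented characters matter, write the output as UTF-8 instead of forcing it into ASCII.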


Edit 2012-11-08: Changed from urllib2 to requests. Just use requests!


#2


Score: -1

import re
import urllib.request

# Strip tags and a couple of common entities in one pass.
patjunk = re.compile(r"<.*?>|&nbsp;|&amp;", re.DOTALL | re.M)
url = "http://www.yahoo.com"

def test(url, pat):
    # urlopen() returns bytes; decode before applying the pattern.
    html = urllib.request.urlopen(url).read().decode('utf-8', errors='replace')
    return pat.sub("", html)

print(test(url, patjunk))
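If you'd rather avoid both the regex pitfalls and a third-party dependency, the standard library's html.parser can do the same stripping; this is a sketch of that alternative, not the answer's code:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the character data between tags."""

    def __init__(self):
        # convert_charrefs=True decodes entities like &amp; automatically.
        super().__init__(convert_charrefs=True)
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html):
    extractor = TextExtractor()
    extractor.feed(html)
    return "".join(extractor.parts)

print(strip_tags('<p>Tom &amp; Jerry</p>'))  # Tom & Jerry
```

Note the difference from the regex above: the pattern deletes "&amp;" outright, while the parser decodes it back to "&", preserving the original text.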
