3 Answers
Finding sentences in text is difficult. Normally you would look for characters that can end a sentence, such as '.' and '!'. But a period ('.') can also appear in the middle of a sentence, for example in an abbreviated person's name. I use a regular expression that looks for a period or exclamation mark followed by a single space or the end of the string; this works for the first three sentences here, but not for every arbitrary sentence.
import requests
from bs4 import BeautifulSoup
import re

url = 'https://www.troyhunt.com/the-773-million-record-collection-1-data-reach/'
res = requests.get(url)
html_page = res.content
soup = BeautifulSoup(html_page, 'html.parser')
paragraphs = soup.select('section.article_text p')

sentences = []
for paragraph in paragraphs:
    # A "sentence" is text ending in '.' or '!' followed by a space or the end of the string.
    matches = re.findall(r'(.+?[.!])(?: |$)', paragraph.text)
    needed = 3 - len(sentences)
    found = len(matches)
    n = min(found, needed)
    for i in range(n):
        sentences.append(matches[i])
    if len(sentences) == 3:
        break

print(sentences)
Prints:
['Many people will land on this page after learning that their email address has appeared in a data breach I\'ve called "Collection #1".', "Most of them won't have a tech background or be familiar with the concept of credential stuffing so I'm going to write this post for the masses and link out to more detailed material for those who want to go deeper.", "Let's start with the raw numbers because that's the headline, then I'll drill down into where it's from and what it's composed of."]
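As a quick check of the caveat above about periods inside abbreviations, here is a minimal sketch (the sample string is made up purely for illustration) showing how the same regex splits at an abbreviated name:

import re

# Made-up sample text: the period after the abbreviated middle name is
# treated as a sentence boundary, which is the limitation described above.
sample = 'Troy B. Hunt published the data. It made headlines!'
print(re.findall(r'(.+?[.!])(?: |$)', sample))
# ['Troy B.', 'Hunt published the data.', 'It made headlines!']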
Actually, with Beautiful Soup you can filter by the class "article_text post"; take a look at the page source:
myData = soup.find('section', class_="article_text post")
print(myData.p.text)
and get the inner text of the p element.
Use this instead, after soup = BeautifulSoup(html_page, 'html.parser').
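For reference, a self-contained sketch of this approach, assuming the page keeps the "article_text post" class used in the question:

import requests
from bs4 import BeautifulSoup

url = 'https://www.troyhunt.com/the-773-million-record-collection-1-data-reach/'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Find the <section> whose class is "article_text post" and print the
# inner text of its first <p> element.
myData = soup.find('section', class_="article_text post")
print(myData.p.text)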
To grab the first three sentences, just add these lines to your code:
section = soup.find('section',class_ = "article_text post") #Finds the section tag with class "article_text post"
txt = section.p.text #Gets the text within the first p tag within the variable section (the section tag)
print(txt)
Output:
Many people will land on this page after learning that their email address has appeared in a data breach I've called "Collection #1". Most of them won't have a tech background or be familiar with the concept of credential stuffing so I'm going to write this post for the masses and link out to more detailed material for those who want to go deeper.
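If you need exactly three sentences rather than the whole first paragraph, one option is to combine this lookup with the regex from the first answer; a sketch, assuming the page structure stays the same:

import re
import requests
from bs4 import BeautifulSoup

url = 'https://www.troyhunt.com/the-773-million-record-collection-1-data-reach/'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
section = soup.find('section', class_="article_text post")

# Collect sentence-like matches from each paragraph until three are found.
sentences = []
for p in section.find_all('p'):
    sentences.extend(re.findall(r'(.+?[.!])(?: |$)', p.text))
    if len(sentences) >= 3:
        break

print(sentences[:3])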
Hope this helps!