
Wrote this lesson's code in Python 3 using Eclipse; it ran without problems.

# -*- coding: gb2312 -*-
from bs4 import BeautifulSoup
import re
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')      # build the parse tree

print('-------------- get all links ----------------')      # extract every <a> link
links = soup.find_all('a')
for link in links:
    print(link.name, link['href'], link.get_text())
   
print('------------- get the link for Lacie -------------')
link_node = soup.find('a', href='http://example.com/lacie')
print(link_node.name, link_node['href'], link_node.get_text())

print('------------- fuzzy match / regex match -------------')
link_node = soup.find('a', href=re.compile(r"ill"))
print(link_node.name, link_node['href'], link_node.get_text())   # get_text needs to be called, not just referenced

print('------------- get the <p> paragraph text -------------')
p_node = soup.find("p", class_="title")
print(p_node.name, p_node.get_text())
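
For comparison, the same lookups can also be written with CSS selectors via soup.select() / soup.select_one(), continuing from the soup object built above. This is only a sketch I added on the side, not part of the lesson code; the selector strings are my own, and the trailing comments show the output expected for this html_doc.

# CSS-selector equivalents of the find()/find_all() calls above (sketch for comparison)
for link in soup.select('a.sister'):                          # every <a> with class "sister"
    print(link.name, link['href'], link.get_text())
# a http://example.com/elsie Elsie
# a http://example.com/lacie Lacie
# a http://example.com/tillie Tillie

lacie = soup.select_one('a[href="http://example.com/lacie"]')  # exact href match
print(lacie.name, lacie['href'], lacie.get_text())
# a http://example.com/lacie Lacie

tillie = soup.select_one('a[href*="ill"]')                     # substring match, like the regex above
print(tillie.name, tillie['href'], tillie.get_text())
# a http://example.com/tillie Tillie

title_p = soup.select_one('p.title')                           # the <p> with class "title"
print(title_p.name, title_p.get_text())
# p The Dormouse's story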

2 Answers

666

so cool
