1. urllib
urllib provides a set of functions for working with URLs.
A URL (Uniform Resource Locator) is a concise representation of the location of a resource available on the Internet and of the way to access it; it is the address of a standard resource on the Internet.
Every file on the Internet has a unique URL, which encodes where the file lives and tells the browser how to handle it.
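The two halves of that definition, location and access method, map directly onto the parts of a URL. A small illustrative sketch with the standard urllib.parse module (this example is not part of the original text):

from urllib.parse import urlparse

u = urlparse('https://api.douban.com/v2/book/2129650')
print(u.scheme)  # 'https' - the access method
print(u.netloc)  # 'api.douban.com' - the host where the resource lives
print(u.path)    # '/v2/book/2129650' - the resource's location on that host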
(1) GET
The request module of urllib makes it very easy to fetch the contents of a URL: it sends a GET request to the given page and returns the HTTP response.
# Fetch the Douban URL https://api.douban.com/v2/book/2129650 and print the response
from urllib import request

with request.urlopen('https://api.douban.com/v2/book/2129650') as f:
    data = f.read()
    print('Status:', f.status, f.reason)
    for k, v in f.getheaders():
        print('%s: %s' % (k, v))
    print('Data:', data.decode('utf-8'))

Result:

Status: 200 OK
Date: Sun, 09 Dec 2018 01:23:48 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 2138
Connection: close
Vary: Accept-Encoding
X-Ratelimit-Remaining2: 99
X-Ratelimit-Limit2: 100
Expires: Sun, 1 Jan 2006 01:00:00 GMT
Pragma: no-cache
Cache-Control: must-revalidate, no-cache, private
Set-Cookie: bid=fdBz3SLSf0s; Expires=Mon, 09-Dec-19 01:23:48 GMT; Domain=.douban.com; Path=/
X-DOUBAN-NEWBID: fdBz3SLSf0s
X-DAE-Node: brand55
X-DAE-App: book
Server: dae
X-Frame-Options: SAMEORIGIN
Data: {"rating":{"max":10,"numRaters":16,"average":"7.4","min":0},"subtitle":"","author":["廖雪峰"],...}
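Since the Content-Type header above says the body is JSON, the decoded data can be turned into Python objects directly. A minimal follow-up sketch using the standard json module (the 'rating' field is taken from the response shown above):

import json
from urllib import request

with request.urlopen('https://api.douban.com/v2/book/2129650') as f:
    book = json.loads(f.read().decode('utf-8'))  # parse the JSON body into a dict
print(book['rating']['average'])  # '7.4' in the response captured above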
If we want to simulate a browser sending a GET request, we need to use a Request object. By adding HTTP headers to the Request object, we can disguise the request as coming from a browser.
# Simulate an iPhone 6 requesting the Douban homepage
from urllib import request

req = request.Request('http://www.douban.com/')
req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
with request.urlopen(req) as f:
    print('Status:', f.status, f.reason)
    for k, v in f.getheaders():
        print('%s: %s' % (k, v))
    print('Data:', f.read().decode('utf-8'))

Result (excerpt):

<title>豆瓣(手机版)</title>
<meta name="google-site-verification" content="ok0wCgT20tBBgo9_zat2iAcimtN4Ftf5ccsh092Xeyw" />
<meta name="viewport" content="width=device-width, height=device-height, user-scalable=no, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0">
<meta name="format-detection" content="telephone=no">
<link rel="canonical" href="http://m.douban.com/">
<link href="https://img3.doubanio.com/f/talion/4b1de333c0e597678522bd3c3af276ba6c667b95/css/card/base.css" rel="stylesheet">
(2) POST
To send a request via POST, just pass the data parameter in as bytes.
# Simulate a Weibo login: first read the login email and password
from urllib import request, parse

print('Login to weibo.cn...')
email = input('Email: ')
passwd = input('Password: ')
login_data = parse.urlencode([
    ('username', email),
    ('password', passwd),
    ('entry', 'mweibo'),
    ('client_id', ''),
    ('savestate', '1'),
    ('ec', ''),
    ('pagerefer', 'https://passport.weibo.cn/signin/welcome?entry=mweibo&r=http%3A%2F%2Fm.weibo.cn%2F')
])
req = request.Request('https://passport.weibo.cn/sso/login')
req.add_header('Origin', 'https://passport.weibo.cn')
req.add_header('User-Agent', 'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25')
req.add_header('Referer', 'https://passport.weibo.cn/signin/login?entry=mweibo&res=wel&wm=3349&r=http%3A%2F%2Fm.weibo.cn%2F')
with request.urlopen(req, data=login_data.encode('utf-8')) as f:
    print('Status:', f.status, f.reason)
    for k, v in f.getheaders():
        print('%s: %s' % (k, v))
    print('Data:', f.read().decode('utf-8'))

Result:

Login to weibo.cn...
Email: email
Password: password
Status: 200 OK
Server: nginx/1.6.1
Date: Sun, 09 Dec 2018 02:01:40 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: close
Vary: Accept-Encoding
Cache-Control: no-cache, must-revalidate
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Access-Control-Allow-Origin: https://passport.weibo.cn
Access-Control-Allow-Credentials: true
DPOOL_HEADER: 85-144-160-aliyun-core.jpool.sinaimg.cn
Set-Cookie: login=9da7cd806ada2c22779667e8e1c039c2; Path=/
Data: {"retcode":50011002,"msg":"\u7528\u6237\u540d\u6216\u5bc6\u7801\u9519\u8bef","data":{"username":"email","errline":669}}
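One practical note: urlopen raises an exception for non-2xx status codes, so production code usually wraps the call. A minimal sketch of that pattern, reusing req and login_data from the example above (this error handling is not part of the original example):

from urllib import error, request

try:
    with request.urlopen(req, data=login_data.encode('utf-8')) as f:
        print('Data:', f.read().decode('utf-8'))
except error.HTTPError as e:
    # the server answered with an error status (4xx/5xx)
    print('HTTP Error:', e.code, e.reason)
except error.URLError as e:
    # the request never reached a server (DNS failure, refused connection, ...)
    print('URL Error:', e.reason)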
(3) Handler
If you need still more complex control, for example accessing a site through a proxy, you can use a ProxyHandler.
import urllib.request

# Route requests through an HTTP proxy, authenticating against it with basic auth
proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
proxy_auth_handler = urllib.request.ProxyBasicAuthHandler()
proxy_auth_handler.add_password('realm', 'host', 'username', 'password')
opener = urllib.request.build_opener(proxy_handler, proxy_auth_handler)
with opener.open('http://www.example.com/login.html') as f:
    pass
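If every subsequent request should go through the proxy, the opener can also be installed globally so that plain urlopen() picks it up. A small optional sketch (install_opener is standard urllib.request; the proxy address is the same placeholder as above):

import urllib.request

proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
urllib.request.install_opener(urllib.request.build_opener(proxy_handler))
# from here on, every urlopen() call is routed through the proxy
with urllib.request.urlopen('http://www.example.com/') as f:
    print(f.status)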
2. XML
There are two ways to work with XML: DOM and SAX.
DOM reads the whole XML document into memory and parses it into a tree; it therefore uses a lot of memory and parses slowly, but it lets you traverse the tree's nodes freely.
SAX is a streaming mode that parses as it reads; it uses little memory and parses fast, but we have to handle the parse events ourselves.
Under normal circumstances, prefer SAX, because DOM simply eats too much memory; a short DOM sketch follows below for comparison.
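Since the article only demonstrates SAX, here is a minimal DOM sketch for contrast, using the standard xml.dom.minidom module on the same <ol> document that appears in the SAX example below:

from xml.dom.minidom import parseString

xml = r'''<?xml version="1.0"?>
<ol>
    <li><a href="/python">Python</a></li>
    <li><a href="/ruby">Ruby</a></li>
</ol>'''

dom = parseString(xml)  # the entire document is now a tree in memory
for a in dom.getElementsByTagName('a'):
    # free traversal: read each <a>'s href attribute and its text child
    print(a.getAttribute('href'), a.firstChild.data)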
Parsing XML
Parsing XML with SAX in Python is very concise. The events we usually care about are start_element, end_element, and char_data; once these three handler functions are ready, we can parse the XML.
When the SAX parser reads a node such as <a href="/">python</a>, it generates three events: start_element when it reads <a href="/">, char_data when it reads python, and end_element when it reads </a>.
from xml.parsers.expat import ParserCreate

class DefaultSaxHandler(object):
    def start_element(self, name, attrs):
        print('sax:start_element: %s, attrs: %s' % (name, str(attrs)))
    def end_element(self, name):
        print('sax:end_element: %s' % name)
    def char_data(self, text):
        print('sax:char_data: %s' % text)

xml = r'''<?xml version="1.0"?>
<ol>
    <li><a href="/python">Python</a></li>
    <li><a href="/ruby">Ruby</a></li>
</ol>'''

# Wire the three handlers into an expat parser and run it
handler = DefaultSaxHandler()
parser = ParserCreate()
parser.StartElementHandler = handler.start_element
parser.EndElementHandler = handler.end_element
parser.CharacterDataHandler = handler.char_data
parser.Parse(xml)
Generating XML
The simplest and most effective way to generate XML is to concatenate strings.
from xml.sax.saxutils import escape

def build_xml():
    L = []
    L.append(r'<?xml version="1.0"?>')
    L.append(r'<root>')
    # escape() from the standard library stands in for the original's undefined
    # encode() helper: it escapes XML special characters such as & and <
    L.append(escape('some & data'))
    L.append(r'</root>')
    return ''.join(L)
If you need to generate complex XML, the advice is: don't. Switch to JSON instead.
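A minimal sketch of that alternative with the standard json module (the data here is made up for illustration):

import json

data = {'name': 'some & data', 'items': ['Python', 'Ruby']}
# json.dumps handles escaping and nesting for us; no hand-built markup required
print(json.dumps(data, ensure_ascii=False))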
3. HTMLParser
With HTMLParser, we can extract the text, images, and other content from a web page.
HTML looks much like XML, but its syntax is far less strict than XML's, so HTML cannot be parsed with the standard DOM or SAX approaches.
Fortunately, Python provides HTMLParser, which makes parsing HTML very convenient.
from html.parser import HTMLParser
from html.entities import name2codepoint

class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print('<%s>' % tag)
    def handle_endtag(self, tag):
        print('</%s>' % tag)
    def handle_startendtag(self, tag, attrs):
        print('<%s/>' % tag)
    def handle_data(self, data):
        print(data)
    def handle_comment(self, data):
        print('<!--', data, '-->')
    def handle_entityref(self, name):
        print('&%s;' % name)
    def handle_charref(self, name):
        print('&#%s;' % name)

parser = MyHTMLParser()
parser.feed('''<html>
<head></head>
<body>
<!-- test html parser -->
<p>Some <a href=\"#\">html</a> HTML tutorial...<br>END</p>
</body></html>''')

Result:

<html>
<head>
</head>
<body>
<!-- test html parser -->
<p>
Some 
<a>
html
</a>
 HTML tutorial...
<br>
END
</p>
</body>
</html>
The feed() method can be called more than once; in other words, you don't have to push the whole HTML string in at once, you can feed it in piece by piece, as in the sketch below.
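A minimal sketch of chunked feeding, reusing the MyHTMLParser class defined above (the split point is arbitrary; the parser keeps its state between calls):

parser = MyHTMLParser()
parser.feed('<p>Some <a href="#">ht')   # first chunk ends mid-text
parser.feed('ml</a> tutorial...</p>')   # parsing resumes seamlessly
parser.close()  # signal end of input so any buffered data is flushed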
There are two kinds of special characters: named entities such as &nbsp; and numeric character references such as &#1234;. Both kinds can be parsed out by the parser, as the sketch below shows.
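A small sketch with the MyHTMLParser class from above. One caveat worth knowing: since Python 3.5, HTMLParser's convert_charrefs argument defaults to True, which converts references to plain text before handle_data; it must be disabled for handle_entityref and handle_charref to fire:

parser = MyHTMLParser(convert_charrefs=False)
# &nbsp; triggers handle_entityref('nbsp'); &#1234; triggers handle_charref('1234')
parser.feed('A&nbsp;B and &#1234;')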
Author: finsom
Source: https://www.cnblogs.com/finsomway/p/10090378.html