https://github.com/Yan15/SimpleCrawSpider
This is source code I wrote myself; please give it a GitHub star (that is, bookmark it). Thanks.
2016-11-07
Some people complain the moment they don't understand something. Nobody is obliged to teach you; if you are willing to learn, put in the effort yourself, and if you are not, that is your own fault. Please stop leaving nasty comments.
2016-11-03
if __name__ = "__main__":
^
SyntaxError: invalid syntax
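The SyntaxError above comes from using a single `=` (assignment) where the comparison operator `==` is required. A minimal corrected form:

```python
def main():
    print("running as a script")

# An if condition needs the comparison operator ==;
# a single = is assignment and is a SyntaxError here.
if __name__ == "__main__":
    main()
```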
2016-11-02
Python, the third method
import urllib2
import cookielib
url = "http://www.baidu.com/"
print 'third'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
response3 = urllib2.urlopen(url)
print response3.getcode()
print cj
print response3.read()
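For readers on Python 3, where `urllib2` and `cookielib` no longer exist, the same cookie-handling approach can be sketched with the renamed standard-library modules `urllib.request` and `http.cookiejar` (the timeout value and the try/except guard are my additions, not part of the original):

```python
import urllib.request
import http.cookiejar

url = "http://www.baidu.com/"

# Build an opener that stores cookies in a CookieJar
# (http.cookiejar replaces cookielib; urllib.request replaces urllib2).
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)

try:
    response3 = urllib.request.urlopen(url, timeout=10)
    print(response3.getcode())    # HTTP status code, e.g. 200
    print(cj)                     # cookies collected during the request
    print(len(response3.read()))  # size of the response body
except OSError as exc:            # URLError subclasses OSError in Python 3
    print("request failed:", exc)
```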
2016-11-01
Python 2.7.12, the second method
————————————————————————————————
import urllib2
import cookielib
url = "http://www.baidu.com/"
print 'second'
request = urllib2.Request(url)
request.add_header('user-agent', 'Mozilla/5.0')
response2 = urllib2.urlopen(request)
print response2.getcode()
print len(response2.read())
————————————————————————————————
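On Python 3, the same User-Agent technique uses `urllib.request.Request` in place of `urllib2.Request`; a sketch (the timeout and the error guard are assumptions I added for robustness):

```python
import urllib.request

url = "http://www.baidu.com/"

# Attach a browser-like User-Agent header to the request.
request = urllib.request.Request(url)
request.add_header('User-Agent', 'Mozilla/5.0')

try:
    response2 = urllib.request.urlopen(request, timeout=10)
    print(response2.getcode())    # HTTP status code
    print(len(response2.read()))  # size of the response body
except OSError as exc:
    print("request failed:", exc)
```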
2016-11-01
Python 2.7.12
————————————————————————————————
import urllib2
import cookielib
url = "http://www.baidu.com/"
print 'first'
response1 = urllib2.urlopen(url)
print response1.getcode()
print len(response1.read())
————————————————————————————————
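The first method's plain `urllib2.urlopen` call maps directly to `urllib.request.urlopen` on Python 3; a sketch (the `cookielib` import in the original is unused here and is dropped; the timeout is my addition):

```python
import urllib.request

url = "http://www.baidu.com/"

try:
    # Fetch the page with no extra headers or cookie handling.
    response1 = urllib.request.urlopen(url, timeout=10)
    print(response1.getcode())    # HTTP status code
    print(len(response1.read()))  # size of the response body
except OSError as exc:
    print("request failed:", exc)
```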
2016-11-01
# Add a small improvement: write the output file as UTF-8.
# Python 2's built-in open() does not accept an encoding argument,
# so use codecs.open (or io.open) instead.
import codecs

def output_html(self):
    fount = codecs.open("output.html", "w", encoding='utf-8')
    fount.write("<meta charset='utf-8'>")
2016-10-31
My output is this:
C:\Python27\python.exe D:/pycharm/xiexie/baike_spider/spider_main.py
craw 1 : None
craw failed
Process finished with exit code 0
Why?
2016-10-27