I am trying to crawl pages 1 through 14 of this site: https://cross-currents.berkeley.edu/archives?author=&title=&type=All&issue=All&region=All

Here is my code:

```python
import requests as r
from bs4 import BeautifulSoup as soup
import pandas

# make a list of all web pages' urls
webpages = []
for i in range(15):
    root_url = 'https://cross-currents.berkeley.edu/archives?author=&title=&type=All&issue=All&region=All&page=' + str(i)
    webpages.append(root_url)
print(webpages)

# start looping through all pages
for item in webpages:
    headers = {'User-Agent': 'Mozilla/5.0'}
    data = r.get(item, headers=headers)
    page_soup = soup(data.text, 'html.parser')

    # find targeted info and put it into a list to be exported to a csv file via pandas
    title_list = [title.text for title in page_soup.find_all('div', {'class': 'field field-name-node-title'})]
    title = [el.replace('\n', '') for el in title_list]

    # export to csv file via pandas
    dataset = {'Title': title}
    df = pandas.DataFrame(dataset)
    df.index.name = 'ArticleID'
    df.to_csv('example31.csv', encoding='utf-8')
```

The output csv file only contains the targeted info from the last page. When I print `webpages`, it shows that all the pages' urls were put into the list correctly. What am I doing wrong? Thanks in advance!