
How do I get a nested href in Python?


ABOUTYOU 2022-09-13 15:08:30
Goal (I need to repeat this search hundreds of times):

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" for a protein accession, e.g. "WP_000177210.1" (i.e. https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1).
2. Select the first record in the second column of the table, "CDS region in nucleotide" (here "NC_011415.1 1997353-1998831 (-)", i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2).
3. Select "FASTA" under that sequence name.
4. Get the FASTA sequence (i.e. ">NC_011415.1:c1998831-1997353 Escherichia coli SE11, complete sequence ...").

Code so far:

Step 1 — search "https://www.ncbi.nlm.nih.gov/ipg/" for "WP_000177210.1":

import requests
from bs4 import BeautifulSoup

url = "https://www.ncbi.nlm.nih.gov/ipg/"
r = requests.get(url, params = "WP_000177210.1")
if r.status_code == requests.codes.ok:
    soup = BeautifulSoup(r.text,"lxml")

Step 2 — select the first record in the second column "CDS region in nucleotide" (in this case "NC_011415.1 1997353-1998831 (-)", i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2):

# try 1 (wrong)
# I tried this first, but it seemed like it only accessed the first level of the hrefs?!
for a in soup.find_all('a', href=True):
    if (a['href'][:8]) == "/nuccore":
        print("Found the URL:", a['href'])

# try 2 (not sure how to access the nested href)
# According to what I saw in the Developer Tools, I think I need to get the href
# inside the following nested structure. However, it didn't work.
soup.select("html div #maincontent div div div #ph-ipg div table tbody tr td a")

I'm stuck at this step. This is my first time dealing with HTML, and it's also my first time asking a question here, so I may not be expressing the problem well. If anything is unclear, please let me know.
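One small thing worth noting about the snippet above: `requests` expects `params` to be a dict (or list of pairs), so passing the bare string `"WP_000177210.1"` will not produce the `?term=...` query string the IPG page uses. A minimal sketch of building the search URL with the standard library, assuming the `term` parameter seen in the URL quoted above (with `requests`, the equivalent is `params={"term": "WP_000177210.1"}`):

```python
from urllib.parse import urlencode

# Build "?term=WP_000177210.1" explicitly instead of passing a bare string
base = "https://www.ncbi.nlm.nih.gov/ipg/"
query = urlencode({"term": "WP_000177210.1"})
search_url = f"{base}?{query}"
print(search_url)  # -> https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1
```

Even with the correct query string, however, the table itself is filled in by JavaScript, which is why plain `requests` + BeautifulSoup does not see the `/nuccore/...` links.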

1 Answer

慕雪6442864


Without using NCBI's REST API:


import time

from bs4 import BeautifulSoup
from selenium import webdriver

# Open a Firefox browser for scraping purposes
browser = webdriver.Firefox(executable_path=r'your\path\geckodriver.exe')  # Put your own path here

# Load the page completely (including the JS-rendered table)
browser.get('https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1')

# Delay turning the page into a soup so the newly fetched data has time to arrive
time.sleep(3)

# Create the soup ("html" is not a valid parser name; use "html.parser" or "lxml")
soup = BeautifulSoup(browser.page_source, "html.parser")

# Keep the links that point into /nuccore, filtering out the bare '/nuccore' link itself
links = [a['href'] for a in soup.find_all('a', href=True)
         if '/nuccore' in a['href'] and a['href'] != '/nuccore']
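Once you have an href of the shape quoted in the question, e.g. `/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2`, the accession and the coordinates can be pulled apart with the standard library. A sketch (the helper name `parse_nuccore_href` is mine, and the href shape is assumed from the URL in the question):

```python
from urllib.parse import urlparse, parse_qs

def parse_nuccore_href(href):
    """Split a /nuccore/... href into its accession and coordinate fields."""
    parts = urlparse(href)
    accession = parts.path.rsplit("/", 1)[-1]   # e.g. "NC_011415.1"
    qs = parse_qs(parts.query)                  # e.g. {"from": ["1997353"], ...}
    return accession, {k: v[0] for k, v in qs.items()}

acc, coords = parse_nuccore_href("/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2")
print(acc, coords)  # NC_011415.1 {'from': '1997353', 'to': '1998831', 'strand': '2'}
```

This gives you everything needed to construct the follow-up request for the sequence itself.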

Notes:

You will need to install selenium (e.g. via pip).

You will need to download geckodriver (the Firefox WebDriver) and point executable_path at it.


Answered 2022-09-13