1 Answer
(Even though I don't understand the images I am getting, since I can't find them on the website, it seems the crawler does not start from the website's start page.)
Yes, you are right. Your code does not download images from the start page, because the only thing it fetches from the start page is all of the anchor tag elements, and it then calls processElement() for each anchor element found on the start page -
response, err := http.Get(currWebsite)
if err != nil {
	log.Fatalln("error on searching website")
}
defer response.Body.Close()

document, err := goquery.NewDocumentFromReader(response.Body)
if err != nil {
	log.Fatalln("Error loading HTTP response body. ", err)
}

document.Find("a").Each(processElement) // Here
To download all images from the start page as well, you should define another function, processUrl(), that does the work of fetching the img elements and downloading the images. In processElement() you then only need to get the href link and call processUrl() on that link -
func processElement(index int, element *goquery.Selection) {
	href, exists := element.Attr("href")
	if exists && strings.HasPrefix(href, "http") {
		crawlWebsite = href
		processUrl(crawlWebsite)
	}
}
func processUrl(crawlWebsite string) {
	response, err := http.Get(crawlWebsite)
	if err != nil {
		log.Fatalf("error fetching %s: %v", crawlWebsite, err)
	}
	defer response.Body.Close()

	document, err := goquery.NewDocumentFromReader(response.Body)
	if err != nil {
		log.Fatal("Error loading HTTP response body. ", err)
	}

	document.Find("img").Each(func(index int, element *goquery.Selection) {
		imgSrc, exists := element.Attr("src")
		if exists && strings.HasPrefix(imgSrc, "http") {
			fileName := "./images/img" + strconv.Itoa(imageCount) + ".jpg"
			fmt.Println("[+]", imgSrc)
			DownloadFile(fileName, imgSrc)
			imageCount++
		}
	})
}
Now just crawl the images from the start page before processing all the links -
func main() {
	...
	document, err := goquery.NewDocumentFromReader(response.Body)
	if err != nil {
		log.Fatalln("Error loading HTTP response body. ", err)
	}

	// First crawl images from the start page URL
	processUrl(currWebsite)
	document.Find("a").Each(processElement)
}
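For completeness, the snippets above together assume roughly this package header and shared state (goquery is the third-party github.com/PuerkitoBio/goquery package; everything else is the standard library; the variable declarations are my assumption about how the original program is wired up):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

// Package-level state used by the snippets above (assumed layout).
var (
	currWebsite  string // start page URL, assumed to be set in main
	crawlWebsite string // last link handed to processUrl
	imageCount   int    // running counter for image file names
)
```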