It looks like some of the links are duplicated, so it is best to collect all of the links to the final pages first, deduplicate that list, and only then scrape the final pages. (You can also save the final-page links to a file for later use.) This script collected 5,395 links after deduplication.
'use strict';

const puppeteer = require('puppeteer');

(async function main() {
  try {
    const browser = await puppeteer.launch({ headless: false, defaultViewport: null });
    const [page] = await browser.pages();

    // Collect the category links from the top-level page.
    // Wrapping the array in a Set removes duplicates immediately.
    await page.goto('https://well.ca/categories/medicine-health_2.html');
    const hrefsCategoriesDeduped = new Set(await page.evaluate(
      () => Array.from(
        document.querySelectorAll('.panel-body-content a[href]'),
        a => a.href
      )
    ));

    // Visit each category page and gather the links to the final pages.
    const hrefsPages = [];
    for (const url of hrefsCategoriesDeduped) {
      await page.goto(url);
      hrefsPages.push(...await page.evaluate(
        () => Array.from(
          document.querySelectorAll('.col-lg-5ths.col-md-3.col-sm-4.col-xs-6 a[href]'),
          a => a.href
        )
      ));
    }

    // Deduplicate the collected final-page links.
    const hrefsPagesDeduped = new Set(hrefsPages);
    // hrefsPagesDeduped can be converted back to an array
    // and saved in a JSON file now if needed.

    for (const url of hrefsPagesDeduped) {
      await page.goto(url);
      // Scrape the page.
    }

    await browser.close();
  } catch (err) {
    console.error(err);
  }
})();
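For the optional save-to-file step mentioned above, here is a minimal sketch using Node's built-in fs module. The file name links.json and the helper names saveLinks/loadLinks are hypothetical, not part of the original script:

'use strict';

const fs = require('fs');

// Assumption: hrefsPagesDeduped is the Set built by the script above.
// A Set is not directly JSON-serializable, so convert it to an array first.
function saveLinks(hrefsPagesDeduped) {
  fs.writeFileSync('links.json', JSON.stringify([...hrefsPagesDeduped], null, 2));
}

// Returns the previously saved links as an array,
// so a later run can skip the link-collection phase and go straight to scraping.
function loadLinks() {
  return JSON.parse(fs.readFileSync('links.json', 'utf8'));
}

Saving the list this way lets you restart an interrupted scrape without re-crawling all of the category pages.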