1. Preparation
- The site we scrape is 彼岸图网 (pic.netbian.com), specifically its 4K beauty wallpaper section: 4K美女壁纸_高清4K美女图片_彼岸图网
- The tooling is the asyncio and aiohttp asynchronous request packages, plus XPath (via lxml) for extraction
- We also need the os module to create a folder for storing the downloaded images; a minimal setup sketch follows this list
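A minimal setup sketch (the folder name 美图 matches the full code later in this post; `os.makedirs` with `exist_ok=True` is an equivalent, slightly more robust alternative to the `os.path.exists` check used there):

```python
import os

# Create the output folder next to the script if it does not exist yet.
# exist_ok=True makes the call idempotent, so re-running the script is safe.
os.makedirs('./美图', exist_ok=True)
```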
2. Page Analysis
When paging through 彼岸图网 you will notice two URL patterns: the first page is https://pic.netbian.com/4kmeinv/, while every later page looks like https://pic.netbian.com/4kmeinv/index_2.html. From the second page onward the URL gains an index_N suffix, so we have to construct the two link forms separately; a sketch follows below.
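A minimal sketch of building the URL list under that assumption (`build_page_urls` and its `page_count` parameter are hypothetical names for illustration, not part of the original code):

```python
def build_page_urls(page_count):
    """Return the listing-page URLs; page 1 has no index_N suffix."""
    urls = ['https://pic.netbian.com/4kmeinv/']
    # Pages 2..page_count follow the index_{n}.html pattern.
    for n in range(2, page_count + 1):
        urls.append('https://pic.netbian.com/4kmeinv/index_{}.html'.format(n))
    return urls

print(build_page_urls(3))
# ['https://pic.netbian.com/4kmeinv/',
#  'https://pic.netbian.com/4kmeinv/index_2.html',
#  'https://pic.netbian.com/4kmeinv/index_3.html']
```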
As for the page structure, it is easy to see that each image link and name live inside a ul > li block on the listing page. An XPath query pulls them out easily; iterating over the matches and requesting each one yields the results we need, as the sketch below shows.
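A minimal, self-contained sketch of that XPath extraction (the HTML fragment is a made-up stand-in for the real listing page, but the `clearfix` class name and the site-relative `src` attributes match what the full code below expects):

```python
from lxml import etree

# A made-up fragment mirroring the listing page's ul.clearfix structure.
sample = '''
<ul class="clearfix">
  <li><a href="/tupian/1.html"><img src="/uploads/allimg/2021-1.jpg"><b>pic one</b></a></li>
  <li><a href="/tupian/2.html"><img src="/uploads/allimg/2021-2.jpg"><b>pic two</b></a></li>
</ul>
'''

tree = etree.HTML(sample)
for a in tree.xpath('//ul[@class="clearfix"]/li/a'):
    # The src attribute is site-relative, so prepend the domain.
    img_url = 'https://pic.netbian.com' + a.xpath('./img/@src')[0]
    print(img_url)
```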
3. About the Asynchronous Scraping
Here we use two coroutine functions: one scrapes the listing pages, the other downloads the images found on each page. Running both concurrently pushes the scrape close to its maximum speed (roughly 4-5 seconds to fetch every page of eye-pleasing wallpapers). The code is below; if you have studied asyncio and aiohttp, it should be easy to follow. The image folder is created automatically in the same directory as the .py file.
```python
async def request(url):
    async with aiohttp.ClientSession() as session:
        # Fetch one listing page asynchronously.
        async with session.get(url=url, headers=headers) as response:
            page_text = await response.text()
            tree = etree.HTML(page_text)
            li_list = tree.xpath('//ul[@class="clearfix"]/li/a')
            for li in li_list:
                # The img src is site-relative (starts with '/'), so prepend the domain.
                img_url = 'https://pic.netbian.com' + li.xpath('./img/@src')[0]
                img_urls.append(img_url)

async def request_img(img_url):
    # Use the tail of the URL (after the last '-') as the file name.
    img_name = img_url.split('-')[-1]
    async with aiohttp.ClientSession() as session:
        async with session.get(url=img_url, headers=headers) as response:
            # read() returns raw bytes, which is what an image file needs.
            img_res = await response.read()
            with open('./美图/' + img_name, 'wb') as fp:
                fp.write(img_res)
                print('done')
```
4. Full Code
```python
# -*- coding: utf-8 -*-
# @Time: 2021/12/12 10:03
# @Author: HYP
# @File: 03.aiohttp.py
# @Software: PyCharm
import asyncio
import aiohttp
from lxml import etree
import os

if not os.path.exists('./美图'):
    os.mkdir('./美图')

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36'
}

urls = ['https://pic.netbian.com/4kmeinv/']
other_url = 'https://pic.netbian.com/4kmeinv/index_{num}.html'
# Pagination: scrape as many pages as you like by widening this range.
for i in range(2, 3):
    new_other_url = other_url.format(num=i)
    urls.append(new_other_url)

img_urls = []

async def request(url):
    async with aiohttp.ClientSession() as session:
        # Fetch one listing page asynchronously.
        async with session.get(url=url, headers=headers) as response:
            page_text = await response.text()
            tree = etree.HTML(page_text)
            li_list = tree.xpath('//ul[@class="clearfix"]/li/a')
            for li in li_list:
                # The img src is site-relative (starts with '/'), so prepend the domain.
                img_url = 'https://pic.netbian.com' + li.xpath('./img/@src')[0]
                img_urls.append(img_url)

async def request_img(img_url):
    # Use the tail of the URL (after the last '-') as the file name.
    img_name = img_url.split('-')[-1]
    async with aiohttp.ClientSession() as session:
        async with session.get(url=img_url, headers=headers) as response:
            # read() returns raw bytes, which is what an image file needs.
            img_res = await response.read()
            with open('./美图/' + img_name, 'wb') as fp:
                fp.write(img_res)
                print('done')

# Phase 1: request all listing pages concurrently to collect image URLs.
tasks = []
for url in urls:
    c = request(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

# Phase 2: download all collected images concurrently, reusing the same loop.
tasks2 = []
for img_url in img_urls:
    c = request_img(img_url)
    task = asyncio.ensure_future(c)
    tasks2.append(task)
loop.run_until_complete(asyncio.wait(tasks2))
```
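For reference, on Python 3.7+ the same two-phase flow can be written with `asyncio.run` and `asyncio.gather`, which avoids fetching the event loop by hand (a pattern deprecated in recent Python versions). A minimal sketch, assuming the `request` and `request_img` coroutines above are already defined:

```python
async def main():
    # Phase 1: listing pages; Phase 2: images, once img_urls is filled.
    await asyncio.gather(*(request(url) for url in urls))
    await asyncio.gather(*(request_img(u) for u in img_urls))

asyncio.run(main())
```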
5. Notes
- If you keep hammering the site with requests, your IP may get banned, and working proxy IPs are fairly hard to find these days; capping concurrency helps, see the sketch after this list
- When writing an async crawler, make sure every step uses an async-aware method (aiohttp instead of requests, await on responses, and so on)
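One common mitigation for the ban risk (not part of the original code) is to cap simultaneous downloads with an `asyncio.Semaphore`. A minimal sketch, assuming the `request_img` coroutine from above; `polite_request_img` and the limit of 5 are hypothetical choices:

```python
# Allow at most 5 image downloads in flight at any moment.
semaphore = asyncio.Semaphore(5)

async def polite_request_img(img_url):
    async with semaphore:          # waits while 5 downloads are in flight
        await request_img(img_url)
        await asyncio.sleep(0.2)   # small per-slot delay between requests
```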