At work we often run into scenarios that call for large numbers of images. This is especially true in machine learning, where large image sets are needed as training samples. This article implements a tool that crawls images from Baidu Images using Python and Scrapy. In practice it turned out to be very fast. I had never used the Scrapy framework before, but after a bit of reading I found it remarkably easy to use.
Installing Scrapy
My environment is Python 3.9. Install the following tools globally:
- Install wheel support:
pip3 install wheel
- Install the Scrapy framework:
pip3 install scrapy
- Install Pillow (required by Scrapy's ImagesPipeline):
pip3 install pillow
Creating the project
The following command creates a scraper template project:
scrapy startproject reptile_picture
Creating the spider template
The following command creates a spider skeleton inside the project:
scrapy genspider -t basic book image.baidu.com
Analyzing Baidu Images' request pattern
Scraping Baidu Images' HTML page directly gets you nothing, because the images are loaded dynamically by follow-up requests. The strategy is to scrape Baidu Images' search API instead. The pattern of those API requests can be worked out in the browser's developer console; I won't go through it in detail here. If you're interested, open the Baidu Images page yourself and compare the requests shown in the console.
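For reference, the spider below drives pagination with three values taken from the captured request: a `pn` offset incremented by 30 per page, the same offset as a lowercase hex string in `gsm`, and a millisecond timestamp appended as the last query parameter. A minimal sketch of how those values can be computed (the parameter names come from the captured request; everything else is illustrative):

```python
import time

def to_hex(num: int) -> str:
    # Baidu's "gsm" parameter is the page offset in lowercase hex, no "0x" prefix
    return format(num, "x")

def timestamp_ms() -> str:
    # the trailing query parameter is just the current time in milliseconds
    return str(int(time.time() * 1000))

offset = 60  # second page, with 30 results per page (rn=30)
params = {"pn": str(offset), "gsm": to_hex(offset)}
print(params)  # {'pn': '60', 'gsm': '3c'}
```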
Writing the code
book.py
import re
import time

import scrapy

from ..items import ReptilePictureItem


def translate(num):
    # Baidu's "gsm" parameter is the offset as a hex string without the "0x" prefix
    return str(hex(num)).replace("0x", "", 1)


def get_timestamp():
    # current time in milliseconds, appended as the last query parameter
    return str(int(round(time.time() * 1000)))


class BookSpider(scrapy.Spider):
    name = 'book'
    allowed_domains = ['image.baidu.com']
    offset = 30          # results per page (matches rn=30 in the URL)
    current_offset = 30  # pn value of the next request
    max_offset = 300     # stop once the offset passes this limit
    url = 'https://image.baidu.com/search/acjson?tn=resultjson_com&logid=8605576373647774920&ipn=rj&ct=201326592&is=&fp=result&fr=&word=%E5%AD%99%E5%85%81%E7%8F%A0&queryWord=%E5%AD%99%E5%85%81%E7%8F%A0&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&expermode=&nojc=&isAsync=&pn={}&rn=30&gsm={}&{}='
    start_urls = [url.format(str(current_offset), translate(current_offset), get_timestamp())]

    def parse(self, response):
        # the response is JSON-like text; pull every "middleURL" field out of it
        pattern = re.compile(r'"middleURL":"(.*?)",', re.S)
        datas = re.findall(pattern, response.text)
        for data in datas:
            item = ReptilePictureItem()
            item['imageLink'] = data
            yield item
        # request the next page until the offset limit is reached
        self.current_offset = self.current_offset + self.offset
        if self.current_offset > self.max_offset:
            return
        yield scrapy.Request(
            self.url.format(str(self.current_offset), translate(self.current_offset), get_timestamp()),
            callback=self.parse,
        )
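The parse method above pulls links out of the raw response text with a regular expression rather than a JSON parser. A standalone sketch of that extraction against a made-up fragment shaped like Baidu's acjson payload (the URLs are placeholders):

```python
import re

# made-up fragment in the shape of Baidu's acjson response
body = ('{"data":[{"middleURL":"https://example.com/a.jpg",}'
        ',{"middleURL":"https://example.com/b.png",}]}')

# same pattern as in book.py: capture everything between "middleURL":" and ",
pattern = re.compile(r'"middleURL":"(.*?)",', re.S)
links = re.findall(pattern, body)
print(links)  # ['https://example.com/a.jpg', 'https://example.com/b.png']
```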
items.py
import scrapy


class ReptilePictureItem(scrapy.Item):
    # the remote URL of one image; the pipeline downloads it
    imageLink = scrapy.Field()
pipelines.py
import uuid

import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.project import get_project_settings


class ReptilePicturePipeline(ImagesPipeline):
    IMAGES_STORE = get_project_settings().get('IMAGES_STORE')

    def file_path(self, request, response=None, info=None, *, item=None):
        # name each file with a random UUID; guess the extension from the URL
        file_name = str(uuid.uuid4())
        if request.url.find("f=PNG") >= 0:
            file_name = file_name + ".png"
        else:
            file_name = file_name + ".jpg"
        return file_name

    def get_media_requests(self, item, info):
        # turn the scraped link into a download request for the images pipeline
        image_url = item["imageLink"]
        yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        # drop any item whose image failed to download
        image_paths = [x["path"] for ok, x in results if ok]
        if not image_paths:
            raise DropItem('Image Download Failed')
        return item
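The extension check in file_path is a heuristic: Baidu's middleURL carries a hint of the source format in an f= query parameter, so the pipeline only distinguishes PNG from everything else. The naming rule in isolation (the URL is a placeholder):

```python
import uuid

def pick_file_name(url: str) -> str:
    # a random UUID keeps names collision-free; "f=PNG" in the URL hints at a PNG source
    name = str(uuid.uuid4())
    return name + (".png" if "f=PNG" in url else ".jpg")

print(pick_file_name("https://example.com/img?f=PNG"))  # <uuid>.png
print(pick_file_name("https://example.com/img?f=JPEG"))  # <uuid>.jpg
```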
settings.py
BOT_NAME = 'reptile_picture'
SPIDER_MODULES = ['reptile_picture.spiders']
NEWSPIDER_MODULE = 'reptile_picture.spiders'
# spoof a desktop browser; Baidu blocks the default Scrapy user agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
ROBOTSTXT_OBEY = False
# directory where the images pipeline saves downloaded files
IMAGES_STORE = "./images"
ITEM_PIPELINES = {
    'reptile_picture.pipelines.ReptilePicturePipeline': 300,
}
Running the spider
scrapy crawl book
Results
Once the crawl finishes, the downloaded images appear in the ./images directory configured by IMAGES_STORE.