Introduction:
While learning web scraping, I used multithreading to speed up crawling. Recently I studied coroutines, which are often described as lightweight threads. In I/O-bound workloads, coroutines can shorten running time because they achieve genuinely asynchronous operation. Compared with threads, coroutines have no practical limit on how many can be created. In essence, coroutines are a single thread handling many tasks in one big loop: when a task blocks, control switches to the next task, and after traversing all tasks the loop returns to the first. As a result, coroutines avoid the overhead of creating and destroying threads. Below, using images from 校花网 as an example, we benchmark coroutines against multithreading.
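The "big loop that switches on blocking" idea can be illustrated with a minimal sketch (not from the original benchmark): three coroutines each "block" for one second, and because each `await` hands control back to the event loop, the waits overlap instead of running back to back.

```python
import asyncio
import time

async def task(name, delay):
    # await yields control to the event loop while "blocked",
    # so the other coroutines run in the meantime
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.time()
    # three 1-second waits overlap instead of taking 3 seconds total
    results = await asyncio.gather(task('a', 1), task('b', 1), task('c', 1))
    elapsed = time.time() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)  # all three finish in about 1 second, not 3
```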
1. First, scrape the image URLs from 20 pages (320 images)
URL of 校花网: http://www.521609.com/daxuexiaohua/list31.html
import json
from time import sleep

import requests
from lxml import etree

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
url = 'http://www.521609.com/daxuexiaohua/list3{}.html'
url_list = []
img_num = 0
for page in range(1, 21):
    new_url = url.format(page)
    sleep(1)  # be polite: pause between page requests
    res = requests.get(new_url, headers=headers)
    res.encoding = 'gbk'  # the site is GBK-encoded
    tree = etree.HTML(res.text)
    li_list = tree.xpath('//*[@id="content"]/div[2]/div[2]/ul/li')
    for li in li_list:
        img_url = 'http://www.521609.com' + li.xpath('./a[1]/img/@src')[0]
        img_name = li.xpath('./a[1]/img/@alt')[0]
        url_list.append((img_name, img_url))
        img_num += 1
        print((img_name, img_url))

js_file = './url.json'
with open(js_file, 'w', encoding='utf8') as f:
    json.dump(url_list, f)  # the with block closes the file; no f.close() needed
print('Fetched %d image URLs' % img_num)
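One subtlety worth noting about saving the `(img_name, img_url)` pairs: JSON has no tuple type, so `json.dump` writes tuples as arrays and they come back as lists on load. A small sketch with a hypothetical sample pair shows why the positional indexing used in the download scripts below is unaffected:

```python
import json

# hypothetical sample pair, shaped like the (img_name, img_url) tuples above
pairs = [('pic1', 'http://www.521609.com/uploads/1.jpg')]
text = json.dumps(pairs)
loaded = json.loads(text)
print(loaded)  # tuples round-trip as lists
# positional indexing is unchanged, so url[0] / url[1] still work
print(loaded[0][0], loaded[0][1])
```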
2. Single-thread performance test
import json
import time

import requests

def get_img(url):
    # url is an (img_name, img_url) pair loaded from url.json
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
    res = requests.get(url[1], headers=headers).content
    with open('./2/%s' % url[0], 'wb') as f:
        f.write(res)
    print('%s downloaded' % url[0])

with open('./url.json', 'r') as f:
    s = json.load(f)

print('Single-thread test: timing started')
start = time.time()
for url in s:
    get_img(url)
print('Done, elapsed: %d' % (time.time() - start))
3. Single thread + coroutines performance test
import json
import time
import asyncio

import aiohttp

async def get_img(url):
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
    async with aiohttp.ClientSession() as sess:
        # sess.get() is itself an async context manager; no extra await is needed
        async with sess.get(url[1], headers=headers) as res:
            data = await res.read()
            # note: the plain file write is blocking and briefly pauses the event loop
            with open('./1/%s' % url[0], 'wb') as f:
                f.write(data)
            print('%s downloaded' % url[0])

with open('./url.json', 'r') as f:
    s = json.load(f)

tasks = []
print('Coroutine test: timing started')
start = time.time()
for url in s:
    c = get_img(url)  # calling an async def returns a coroutine object; nothing runs yet
    task = asyncio.ensure_future(c)
    tasks.append(task)
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))
print('Done, elapsed: %d' % (time.time() - start))
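The script above launches all 320 downloads at once, which can overwhelm a server. A common refinement is to cap concurrency with an `asyncio.Semaphore`. Here is a minimal sketch of the pattern, using `asyncio.sleep` as a stand-in for the real aiohttp request so it runs without network access:

```python
import asyncio

async def download(sem, i):
    # at most 10 coroutines get past the semaphore at a time
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for the actual aiohttp download
        return i

async def main():
    sem = asyncio.Semaphore(10)
    tasks = [asyncio.ensure_future(download(sem, i)) for i in range(320)]
    # gather returns results in submission order
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(len(results))
```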
4. Multithread performance test
import requests
import time
import json
from multiprocessing.dummy import Pool  # a thread pool, despite the module name

def get_img(url):
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
    res = requests.get(url[1], headers=headers).content
    with open('./2/%s' % url[0], 'wb') as f:
        f.write(res)
    print('%s downloaded' % url[0])

with open('./url.json', 'r') as f:
    s = json.load(f)

print('Multithread test: timing started')
start = time.time()
pool = Pool(50)
pool.map(get_img, s)  # blocks until every download finishes
pool.close()
pool.join()
print('Done, elapsed: %d' % (time.time() - start))
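`multiprocessing.dummy.Pool` is a thread pool wearing the process-pool API; the standard library's `concurrent.futures.ThreadPoolExecutor` expresses the same pattern more explicitly. A sketch with a stand-in function (any I/O-bound callable like `get_img` fits the same shape):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # stand-in for get_img: any I/O-bound callable fits this pattern
    return x * x

# max_workers plays the same role as Pool(50) above
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(work, range(320)))

print(len(results), results[:3])
```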
10 threads:
50 threads:
100 threads:
160 threads:
320 threads:
5. Conclusion
The results show that coroutines outperform multithreading, and the advantage grows with the number of download tasks. Creating large numbers of threads consumes significant resources (per-thread stacks plus creation and teardown overhead), whereas coroutines have no hard limit on their count and consume only resources within the current thread, so their advantage becomes more pronounced at scale.