[Scrape Center - ssr2] Scraping a site with an invalid HTTPS certificate
Last edited by 三滑稽甲苯 on 2022-2-8 17:26. Scrape Center: https://scrape.center/
Today's challenge: https://ssr2.scrape.center/
Opening the URL, what greets you is... a warning? One that can't be dismissed? If the page won't even load, how are we supposed to scrape it?
After some searching, I found a workaround in an article: with the warning page focused, type "thisisunsafe" on the keyboard to skip the error and continue to the site.
Once the page loads, it turns out to be exactly the same as the one analyzed in challenge ssr1, so only minor changes to that code are needed.
Complete code:
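The two changes are `verify=False` on each request and `disable_warnings()` at the top. Before the full script, here is a small self-contained sketch of what `disable_warnings()` actually does: it installs an "ignore" filter for `InsecureRequestWarning`, the warning requests emits on every unverified HTTPS request.

```python
import warnings
from urllib3 import disable_warnings
from urllib3.exceptions import InsecureRequestWarning

# Prepend an "ignore" filter for InsecureRequestWarning, the warning
# category requests raises whenever verify=False is used.
disable_warnings()

with warnings.catch_warnings(record=True) as caught:
    # The ignore filter installed above is still active inside this
    # context, so the warning is swallowed instead of being recorded.
    warnings.warn('unverified HTTPS request', InsecureRequestWarning)

print(len(caught))  # 0: the warning was suppressed
```

Without the `disable_warnings()` call, every page fetched with `verify=False` would print a warning to stderr and clutter the output.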
from requests import Session
from bs4 import BeautifulSoup as bs
from time import time
from urllib3 import disable_warnings

start = time()
x = Session()
url = 'https://ssr2.scrape.center'
disable_warnings()  # suppress the InsecureRequestWarning emitted for unverified requests
for i in range(1, 11):  # the site has 10 pages in total
    r = x.get(f'{url}/page/{i}', verify=False)  # skip SSL certificate verification
    soup = bs(r.text, 'html.parser')
    cards = soup.find_all('div', class_='el-card__body')
    print(f'Page {i}/10')
    for card in cards:
        print()
        print(' ', card.h2.text)
        tags = card.find('div', class_='categories').find_all('span')
        print('Score:', card.find('p', class_='score').text[-3:])
        print('Tags:', ' '.join(t.text for t in tags))
        infos = card.find_all('div', class_='info')
        # print('Info:', ''.join(info.text for info in infos))
        spans = infos[0].find_all('span')
        print('Country:', spans[0].text)
        print('Duration:', spans[2].text)
        print('Release date:', infos[1].text.strip())
        print('Link:', url + card.find('a', class_='name')['href'])
print()
print(f'Time used: {time() - start}s')
input()
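For anyone who wants to check the parsing logic without hitting the site, the selectors above can be exercised against a hand-written HTML fragment. The markup below is my own assumption, modeled on the class names the scraper targets, not a verbatim copy of the site's HTML:

```python
from bs4 import BeautifulSoup

# Minimal fragment mimicking one movie card (assumed structure,
# built from the selectors used in the scraper above).
html = '''
<div class="el-card__body">
  <a class="name" href="/detail/1"><h2>霸王别姬 - Farewell My Concubine</h2></a>
  <div class="categories"><span>剧情</span><span>爱情</span></div>
  <div class="info"><span>中国内地、中国香港</span><span> / </span><span>171 分钟</span></div>
  <div class="info"><span>1993-07-26 上映</span></div>
  <p class="score">9.5</p>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
card = soup.find('div', class_='el-card__body')
tags = card.find('div', class_='categories').find_all('span')
infos = card.find_all('div', class_='info')

print(card.h2.text)                          # movie title
print(' '.join(t.text for t in tags))        # tag list joined with spaces
print(infos[0].find_all('span')[2].text)     # duration span
print(card.find('a', class_='name')['href']) # relative detail link
```

If the real page ever changes its class names, this kind of offline check makes it easy to see which selector broke.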
Screenshot of the output: