[Scrape Center - ssr3] Scraping data behind Basic Authentication
Scrape Center: https://scrape.center/
Today's challenge: https://ssr3.scrape.center/
First, log in; the username and password are both admin.
After that, the steps are exactly the same as in the earlier write-ups.
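Before writing the scraper, it is easy to confirm that the site really sits behind Basic Auth. A minimal check, assuming ssr3 behaves like a standard Basic Auth setup and answers unauthenticated requests with 401:

import requests

# Without credentials the server should reject the request.
r = requests.get('https://ssr3.scrape.center/')
print(r.status_code)                       # expected: 401
print(r.headers.get('WWW-Authenticate'))   # expected: Basic realm="..."

# With the credentials the same request should succeed.
r = requests.get('https://ssr3.scrape.center/', auth=('admin', 'admin'))
print(r.status_code)                       # expected: 200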
There are two ways to handle HTTP Basic Authentication in Python with requests:
1. Let requests handle it automatically by setting auth=(username, password)
2. Add the Authorization: Basic <base64 of 'username:password'> request header by hand
For simplicity we go with method 1; a short sketch of method 2 follows for reference.
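The header value in method 2 is just base64('username:password') prefixed with 'Basic '. A sketch of building it by hand, which is equivalent to what requests does internally when you pass the auth tuple:

from base64 import b64encode
import requests

# Method 2: construct the Authorization header manually.
token = b64encode(b'admin:admin').decode()   # 'YWRtaW46YWRtaW4='
headers = {'Authorization': f'Basic {token}'}
r = requests.get('https://ssr3.scrape.center/page/1', headers=headers)
print(r.status_code)

Setting x.auth = ('admin', 'admin') on a Session (method 1) makes requests attach exactly this header to every request sent through that session.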
Full code:
from requests import Session
from bs4 import BeautifulSoup as bs
from time import time

start = time()
x = Session()
x.auth = ('admin', 'admin')   # requests adds the Basic Auth header to every request of this session
url = 'https://ssr3.scrape.center'

for i in range(1, 11):        # 10 pages in total
    r = x.get(f'{url}/page/{i}')
    soup = bs(r.text, 'html.parser')
    cards = soup.find_all('div', class_='el-card__body')
    print(f'Page {i}/10')
    for card in cards:
        print()
        print(' ', card.h2.text)
        tags = card.find('div', class_='categories').find_all('span')
        print('Score:', card.find('p', class_='score').text.strip())
        print('Tags:', ' '.join(tag.text.strip() for tag in tags))
        infos = card.find_all('div', class_='info')   # first info div: country / duration, second: release date
        spans = infos[0].find_all('span')
        print('Country:', spans[0].text)
        print('Duration:', spans[-1].text)
        print('Release date:', infos[1].text.strip()[:-3])   # drop the trailing ' 上映' label
        print('Link:', url + card.find('a', class_='name')['href'])

print()
print(f'Time used: {time() - start}s')
input()   # keep the console window open

Analysis and discussion are welcome; 吾爱破解论坛 is better with you here!