Scrape Center: https://scrape.center/
Today's target: https://ssr3.scrape.center/
First, log in; the username and password are both admin.
The steps after that are exactly the same as in the earlier analyses.
There are two ways to handle HTTP Basic Authentication in Python:
1. Let the requests library handle it automatically by setting auth=(username, password)
2. Manually add an Authorization: Basic <base64 of 'username:password'> request header
For simplicity, the full script below uses method 1; a short sketch of method 2 follows here for comparison.
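The following is a minimal sketch of method 2, assuming you want to build the header yourself. It only uses the standard-library base64 module and a plain requests.get call; the credentials and URL are the demo site's admin/admin and ssr3.scrape.center from this post.

import base64

import requests

# Build the Basic Auth header by hand: base64-encode "username:password".
username, password = 'admin', 'admin'
token = base64.b64encode(f'{username}:{password}'.encode()).decode()
headers = {'Authorization': f'Basic {token}'}

# One request to the first listing page; a 200 response means the header was accepted.
r = requests.get('https://ssr3.scrape.center/page/1', headers=headers)
print(r.status_code)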
Full code:
from time import time

from bs4 import BeautifulSoup as bs
from requests import Session

start = time()

# A single Session carries the Basic Auth credentials on every request (method 1).
x = Session()
x.auth = ('admin', 'admin')
url = 'https://ssr3.scrape.center'

for i in range(1, 11):  # the site has 10 pages of movie cards
    r = x.get(f'{url}/page/{i}')
    soup = bs(r.text, 'html.parser')
    cards = soup.find_all('div', class_='el-card__body')
    print(f'Page {i}/10')
    for card in cards:
        print()
        print(' ', card.h2.text)
        tags = card.find('div', class_='categories').find_all('span')
        print(' Score:', card.find('p', class_='score').text.strip())
        print(' Tags:', ' '.join(tag.text for tag in tags))
        # The first info div holds country and duration, the second the release date.
        infos = card.find_all('div', class_='info')
        spans = infos[0].find_all('span')
        print(' Country:', spans[0].text)
        print(' Duration:', spans[2].text)
        # The text looks like '1993-07-26 上映'; keep only the date part.
        print(' Release date:', infos[1].text.strip()[:10])
        print(' Link:', url + card.find('a', class_='name')['href'])

print()
print(f'Time used: {time() - start}s')
input()  # keep the console window open until Enter is pressed