Scraping practice: the scrape.center challenges (ssr series)
Last edited by 鸣蜩十四 on 2021-11-2 10:53.

# Scenario
I've been learning web scraping recently, practicing against the https://scrape.center/ environments.
The site offers its challenges in series, and each level within a series tests a different small technique. This post covers the first series (ssr1–ssr4). The goal is to scrape every movie's detail-page URL, title, categories, score, and plot summary.
# Approach
The main tools are the requests library and BeautifulSoup; the results are exported to a CSV file with pandas.
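As a warm-up, here is a minimal self-contained sketch of the requests + BeautifulSoup pattern used throughout this post, run against an inline HTML fragment instead of the live site. The class names mirror the ones on scrape.center, but the fragment itself is made up:

```python
from bs4 import BeautifulSoup

# a made-up fragment shaped like one movie card on the site
html = '''
<div class="el-card">
  <a class="name" href="/detail/1"><h2 class="m-b-sm">霸王别姬 - Farewell My Concubine</h2></a>
  <div class="categories"><span>剧情</span><span>爱情</span></div>
  <p class="score m-t-md m-b-n-sm">9.5</p>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')  # 'lxml' also works if installed
link = soup.find(class_='name')['href']    # tag attributes are read like a dict
title = soup.find(class_='m-b-sm').text
# a multi-class string like this matches the exact class attribute value
score = soup.find(class_='score m-t-md m-b-n-sm').text.strip()
print(link, title, score)
```

The real scripts below do exactly this, just with `requests.get(...).content` as the input instead of a literal string.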
# Levels
### Level 1: ssr1
A movie data site with no anti-scraping measures; the data is server-side rendered, making it a good basic scraping exercise.
```
import pandas as pd
import urllib3
from bs4 import BeautifulSoup
import requests

urllib3.disable_warnings()  # silence the warning triggered by verify=False

url, title, theme, score, content = [], [], [], [], []
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/87.0.4280.141 Safari/537.36'}

# collect every detail-page URL from the 10 list pages
for i in range(1, 11):
    the_url = 'https://ssr1.scrape.center/page/' + str(i)
    html = requests.get(the_url, headers=headers, verify=False)
    soup = BeautifulSoup(html.content, 'lxml')
    for x in soup.find_all(class_='name'):
        url.append('https://ssr1.scrape.center' + x['href'])

# visit each detail page and pull out the fields
for a in url:
    html = requests.get(a, headers=headers, verify=False)
    soup = BeautifulSoup(html.content, 'lxml')
    title_list = soup.find_all(class_='m-b-sm')
    theme_list = soup.find_all(class_='categories')
    score_list = soup.find_all(class_='score m-t-md m-b-n-sm')
    content_list = soup.find_all('div', class_='drama')
    for y, z, s, x in zip(title_list, theme_list, score_list, content_list):
        title.append(y.text)
        theme.append(z.text.replace('\n', '').replace('\r', ''))
        score.append(s.text.strip())
        # drop the "剧情简介" (plot summary) heading inside the drama div
        content.append(x.text.replace('剧情简介', '').replace('\n', '').replace('\r', '').strip())

bt = {
    '链接': url,
    '标题': title,
    '主题': theme,
    '评分': score,
    '剧情简介': content
}
work = pd.DataFrame(bt)
work.to_csv('work.csv')
```
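One small note on the export: `to_csv` defaults to UTF-8 without a BOM and writes the row index as an extra first column. If the CSV is meant to be opened in Excel (especially with Chinese headers), `encoding='utf-8-sig'` and `index=False` are worth adding. A sketch with toy data:

```python
import pandas as pd

# toy stand-in for the scraped lists
bt = {'链接': ['https://ssr1.scrape.center/detail/1'],
      '标题': ['霸王别姬 - Farewell My Concubine'],
      '评分': ['9.5']}
work = pd.DataFrame(bt)
# utf-8-sig prepends a BOM so Excel detects the encoding;
# index=False drops the 0,1,2,... index column
work.to_csv('work.csv', encoding='utf-8-sig', index=False)
```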
### Level 2: ssr2
A movie data site with no anti-scraping measures and no valid HTTPS certificate, good for practicing certificate verification.
The code from the previous level can be reused here unchanged (it already passes `verify=False`) and scrapes the results fine; just point it at ssr2.scrape.center.
### Level 3: ssr3
A movie data site with no anti-scraping measures, protected by HTTP Basic Authentication; both the username and password are admin.
This level tests handling the site's login prompt: add the credentials to the request when accessing the site.
```
auth = HTTPBasicAuth('admin','admin')
html = requests.get(the_url,headers=headers,auth=auth,verify=False)
```
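For what it's worth, requests also accepts a plain tuple, `auth=('admin', 'admin')`, as shorthand for `HTTPBasicAuth`. Either way it just adds an `Authorization: Basic …` header carrying the base64-encoded `user:password` pair, which can be inspected offline by preparing a request without sending it:

```python
import requests
from requests.auth import HTTPBasicAuth

req = requests.Request('GET', 'https://ssr3.scrape.center/page/1',
                       auth=HTTPBasicAuth('admin', 'admin'))
prepared = req.prepare()  # builds the headers without any network traffic
print(prepared.headers['Authorization'])  # Basic YWRtaW46YWRtaW4=
```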
Full code:
```
import pandas as pd
import urllib3
from bs4 import BeautifulSoup
import requests
from requests.auth import HTTPBasicAuth

urllib3.disable_warnings()  # silence the warning triggered by verify=False

url, title, theme, score, content = [], [], [], [], []
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/87.0.4280.141 Safari/537.36'}
auth = HTTPBasicAuth('admin', 'admin')

# collect every detail-page URL from the 10 list pages
for i in range(1, 11):
    the_url = 'https://ssr3.scrape.center/page/' + str(i)
    html = requests.get(the_url, headers=headers, auth=auth, verify=False)
    soup = BeautifulSoup(html.content, 'lxml')
    for x in soup.find_all(class_='name'):
        # the detail pages are on ssr3 as well, behind the same auth
        url.append('https://ssr3.scrape.center' + x['href'])

# visit each detail page and pull out the fields
for a in url:
    html = requests.get(a, headers=headers, auth=auth, verify=False)
    soup = BeautifulSoup(html.content, 'lxml')
    title_list = soup.find_all(class_='m-b-sm')
    theme_list = soup.find_all(class_='categories')
    score_list = soup.find_all(class_='score m-t-md m-b-n-sm')
    content_list = soup.find_all('div', class_='drama')
    for y, z, s, x in zip(title_list, theme_list, score_list, content_list):
        title.append(y.text)
        theme.append(z.text.replace('\n', '').replace('\r', ''))
        score.append(s.text.strip())
        # drop the "剧情简介" (plot summary) heading inside the drama div
        content.append(x.text.replace('剧情简介', '').replace('\n', '').replace('\r', '').strip())

bt = {
    '链接': url,
    '标题': title,
    '主题': theme,
    '评分': score,
    '剧情简介': content
}
work = pd.DataFrame(bt)
work.to_csv('work.csv')
```
### Level 4: ssr4
A movie data site with no anti-scraping measures, but every response is delayed by 5 seconds, making it good for testing against slow sites or benchmarking scraping speed with network noise reduced.
This level is about latency: set a timeout on each request. The server adds a 5-second delay, and in practice there is extra network delay on top, so allow roughly 15 seconds or more per request, otherwise it will time out.
```
html = requests.get(the_url,headers=headers,verify=False,timeout=15)
```
Because of the delay, this version only scrapes the first page instead of all ten.
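A timeout alone just raises `requests.exceptions.Timeout`; on a deliberately slow site it can be worth wrapping the call in a small retry loop. A sketch of one way to do it (the helper name and retry count are my own choices, and the fetch function is injectable so the logic can be tested without a network):

```python
import requests

def get_with_retry(url, retries=3, timeout=15, fetch=requests.get, **kwargs):
    """Try the request up to `retries` times, re-raising the last timeout."""
    last_exc = None
    for attempt in range(retries):
        try:
            return fetch(url, timeout=timeout, **kwargs)
        except requests.exceptions.Timeout as exc:
            last_exc = exc  # response too slow; try again
    raise last_exc

# usage against ssr4 would look like:
# html = get_with_retry('https://ssr4.scrape.center/page/1',
#                       headers=headers, verify=False)
```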
Full code:
```
import pandas as pd
import urllib3
from bs4 import BeautifulSoup
import requests

urllib3.disable_warnings()  # silence the warning triggered by verify=False

url, title, theme, score, content = [], [], [], [], []
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/87.0.4280.141 Safari/537.36'}

# only the first list page, because each response takes 5+ seconds
the_url = 'https://ssr4.scrape.center/page/1'
html = requests.get(the_url, headers=headers, verify=False, timeout=15)
soup = BeautifulSoup(html.content, 'lxml')
for x in soup.find_all(class_='name'):
    url.append('https://ssr4.scrape.center' + x['href'])

# visit each detail page and pull out the fields
for a in url:
    html = requests.get(a, headers=headers, verify=False, timeout=15)
    soup = BeautifulSoup(html.content, 'lxml')
    title_list = soup.find_all(class_='m-b-sm')
    theme_list = soup.find_all(class_='categories')
    score_list = soup.find_all(class_='score m-t-md m-b-n-sm')
    content_list = soup.find_all('div', class_='drama')
    for y, z, s, x in zip(title_list, theme_list, score_list, content_list):
        title.append(y.text)
        theme.append(z.text.replace('\n', '').replace('\r', ''))
        score.append(s.text.strip())
        # drop the "剧情简介" (plot summary) heading inside the drama div
        content.append(x.text.replace('剧情简介', '').replace('\n', '').replace('\r', '').strip())

bt = {
    '链接': url,
    '标题': title,
    '主题': theme,
    '评分': score,
    '剧情简介': content
}
work = pd.DataFrame(bt)
work.to_csv('work.csv')
```