I finally snagged an account in the tail end of the Double Eleven registration window! Hahahaha!
Straight to the point: I just started learning Python for work, so I'm using BT电影天堂 (http://www.btbtdy.me) as practice.
I went to the site and searched for a movie I wanted, and the search URL turned out to be pretty simple.
But that only gets the first page of results; switch to page 2 and the URL changes!
A trip to the almighty Baidu turned up the quote function from urllib.parse, which solves the encoding problem. With the URL sorted out, grabbing the resources is easy!
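For instance, here's roughly what quote does to a Chinese search term (the movie title below is just an example, not from the original post):

from urllib.parse import quote

name = quote('泰坦尼克号')  # example title ("Titanic"); any Chinese query works the same way
print(name)  # %E6%B3%B0%E5%9D%A6%E5%B0%BC%E5%85%8B%E5%8F%B7
# the encoded text slots straight into the search URL
print('http://www.btbtdy.me/search/' + name + '.html?page=1&searchword=' + name + '&searchtype=')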
Since I don't know how many pages a search returns, a blind for loop would break. Luckily, the page-jump buttons at the bottom of the page reveal the total page count:
All_page_url = Html.xpath('/html/body/div[@class="pages"]//a')  # every pagination link
End_page_url = Html.xpath('/html/body/div[@class="pages"]/a[' + str(len(All_page_url)) + ']/@href')  # href of the last one
page_num = re.findall(r"\d+\.?\d*", End_page_url[0][6:9])[0]  # last page number
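Side note: slicing characters 6-9 out of the href works, but it's brittle. A sketch of a slightly sturdier variant, assuming the last pagination href carries a page= parameter the way the search URL does:

import re

last_href = End_page_url[0]              # e.g. something containing 'page=12' (format assumed)
m = re.search(r'page=(\d+)', last_href)  # grab the page number wherever it sits in the string
page_num = m.group(1) if m else '1'      # fall back to one page if the pattern is absent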
After that it's just a loop plus basic tag scraping.
The first thing that stumped me for ages: for some reason XPath would not give me the href inside this tag (the output was always a function object; I forgot to take a screenshot).
In the end I fell back on the clumsy approach of string concatenation (the URL is simple enough anyway):
player = Html1.xpath('//*[@id="nucms_downlist"]//div')  # one <div> per download section
for iii in range(1, len(player)+1):
    Down = 0
    Type = Html1.xpath('//*[@id="nucms_downlist"]/div[' + str(iii) + ']/h2/text()')[0]
    Number = len(Html1.xpath('//*[@id="nucms_downlist"]/div[' + str(iii) + ']/ul/li'))
    if iii == 1:
        if Type == "云播放":  # "cloud playback" section; uses a /play/ URL, no magnet link
            Down = 1
            FIN_URL = 'http://www.btbtdy.me/play/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-0-0.html'
        else:
            for x in range(0, Number):
                FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-0-' + str(x) + '.html'
    elif iii == 2:
        for x in range(0, Number):
            FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-1-' + str(x) + '.html'
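In hindsight, one common cause of this kind of thing (just a guess, since I no longer have the failing XPath): selecting the <a> element itself gives you lxml Element objects, not strings, and you have to ask for @href or call .get('href'). A sketch, reusing the Html1 tree from the snippet above:

links = Html1.xpath('//*[@id="nucms_downlist"]//a')        # returns Element objects
if links:
    print(links[0].get('href'))                            # read the attribute off the element
hrefs = Html1.xpath('//*[@id="nucms_downlist"]//a/@href')  # or select the attribute directly as strings
print(hrefs)
# If the href is filled in by JavaScript after page load, neither form will find it,
# because requests only ever sees the raw HTML.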
On the next page I went to grab the resource link directly. I thought that would be the end of it, but the output was always "加载中" (loading): the page loads its content lazily. So F12, dig through the Network tab, and there it was, the final address!
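The idea in miniature (the same approach the full script below takes): skip the lazy-loading page entirely, request the downlist address found in the Network tab, and read the magnet link out of it. The ID in the URL here is just a placeholder:

import requests
from lxml import etree

headers = {'User-Agent': 'Mozilla/5.0'}
fin_url = 'http://www.btbtdy.me/downlist/12345-0-0.html'  # placeholder ID; real ones come from the detail-page href
resp = requests.get(fin_url, headers=headers)
html = etree.HTML(resp.content.decode('UTF-8'))
magnet = html.xpath('/html/body/p[2]/text()')  # the magnet link sits in the second <p> of this page
print(magnet[0] if magnet else 'no resource available yet')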
Last step: print the output.
It's all basic code; I'm learning as I go.
Here's the full code:
# BT Movie Paradise (BT电影天堂)
import re
import requests
from lxml import etree
from urllib.parse import quote

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
print("===========")
find_name = input("Enter a movie title: ")
print("===========")
name = quote(find_name)  # URL-encode the query text
find_url = 'http://www.btbtdy.me/search/' + name + '.html?page=1&searchword=' + name + '&searchtype='  # first page of results
response = requests.get(find_url, headers=headers)
data = response.content.decode('UTF-8')
Html = etree.HTML(data)
All_page_url = Html.xpath('/html/body/div[@class="pages"]//a')  # every pagination link
End_page_url = Html.xpath('/html/body/div[@class="pages"]/a[' + str(len(All_page_url)) + ']/@href')  # href of the last one
page_num = re.findall(r"\d+\.?\d*", End_page_url[0][6:9])[0]  # last page number
for i in range(1, int(page_num)+1):
    final_all_url = 'http://www.btbtdy.me/search/' + name + '.html?page=' + str(i) + '&searchword=' + name + '&searchtype='
    response = requests.get(final_all_url, headers=headers)
    data = response.content.decode('UTF-8')
    Html = etree.HTML(data)
    Lists = Html.xpath('/html/body/div[@class="list_so"]//dl')  # one <dl> per search result
    for ii in range(1, len(Lists)+1):
        title = Html.xpath('/html/body/div[@class="list_so"]/dl[' + str(ii) + ']/dt/a/@title')
        href = Html.xpath('/html/body/div[@class="list_so"]/dl[' + str(ii) + ']/dt/a/@href')
        final_url = 'http://www.btbtdy.me' + href[0]  # detail page for this result
        response = requests.get(final_url, headers=headers)
        data = response.content.decode('UTF-8')
        Html1 = etree.HTML(data)
        print(title)
        player = Html1.xpath('//*[@id="nucms_downlist"]//div')  # one <div> per download section
        for iii in range(1, len(player)+1):
            Down = 0
            Type = Html1.xpath('//*[@id="nucms_downlist"]/div[' + str(iii) + ']/h2/text()')[0]
            Number = len(Html1.xpath('//*[@id="nucms_downlist"]/div[' + str(iii) + ']/ul/li'))
            if iii == 1:
                if Type == "云播放":  # "cloud playback" sections carry no magnet link
                    Down = 1
                    FIN_URL = 'http://www.btbtdy.me/play/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-0-0.html'
                else:
                    for x in range(0, Number):
                        FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-0-' + str(x) + '.html'
            elif iii == 2:
                for x in range(0, Number):
                    FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-1-' + str(x) + '.html'
            elif iii == 3:
                for x in range(0, Number):
                    FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-2-' + str(x) + '.html'
            elif iii == 4:
                for x in range(0, Number):
                    FIN_URL = 'http://www.btbtdy.me/downlist/' + str(re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')) + '-3-' + str(x) + '.html'
            if Down == 0:  # note: only the FIN_URL from the last x iteration gets fetched here
                response = requests.get(FIN_URL, headers=headers)
                data = response.content.decode('UTF-8')
                Html2 = etree.HTML(data)
                Magnet = Html2.xpath('/html/body/p[2]/text()')  # magnet link lives in the second <p>
                if not Magnet:
                    Magnet = 'no resource available yet'
                else:
                    Magnet = Magnet[0]
                print(' ' + Type + ' :' + Magnet)
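One cleanup I'd try next (a sketch with the same behavior, not part of the original script): the four elif branches differ only by the section index, so the ladder collapses into arithmetic on iii:

movie_id = re.findall(r"\d+\.?\d*", href[0])[0].replace('.', '')  # numeric ID from the detail-page href
if iii == 1 and Type == "云播放":
    Down = 1
    FIN_URL = 'http://www.btbtdy.me/play/' + movie_id + '-0-0.html'
else:
    for x in range(0, Number):
        FIN_URL = 'http://www.btbtdy.me/downlist/' + movie_id + '-' + str(iii - 1) + '-' + str(x) + '.html'

This version also makes the earlier quirk easier to see: if you want every link in a section rather than just the last one, move the request for FIN_URL inside the x loop.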