Batch downloading with a crawler for a certain video site
This post was last edited by 话痨司机啊 on 2022-6-7 11:08. Link address (Base64-encoded): aHR0cDovL2YxMDIwLndvcmthcmVhNS5saXZlL3YucGhwP2NhdGVnb3J5PWhvdCZ2aWV3dHlwZT1iYXNpYyZwYWdlPQ==
Tested, batch downloading works!! A proxy is required.
ffmpeg download link and M3U8 downloader instructions: https://github.com/hecoter/m3u8download_hecoter
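Before running the script below, it may help to confirm the environment is ready. A minimal check sketch, assuming m3u8download_hecoter has been cloned onto the import path per the link above and that ffmpeg (used by the downloader to merge segments) is on PATH:
import shutil
# ffmpeg must be resolvable on PATH for the m3u8 downloader to merge the .ts segments
assert shutil.which('ffmpeg') is not None, 'ffmpeg not found on PATH'
try:
    from m3u8download_hecoter import m3u8download  # noqa: F401
except ImportError:
    print('m3u8download_hecoter is not importable; clone the repo next to this script')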
"""
************************************
Description: Good stuff, parked here
Author: @
Github: https://github.com/jianala
Date: 2022-06-02 13:57:24
FilePath:on_download.py
LastEditors: @
LastEditTime: 2022-06-02 15:17:53
Many make a good start; few carry it through to the end.
************************************
"""
from lxml import etree
import re
import requests
import os
import urllib3
from m3u8download_hecoter import m3u8download
import base64
urllib3.disable_warnings()
# Ask whether to configure a proxy; direct access from mainland China is slow
def proxy_set():
    global my_proxies
    use_proxy = input('Do you want to use a proxy? (y/n) ')
    if use_proxy == 'y':
        proxies_set = input('Input your proxy address, e.g. "127.0.0.1:7890": ')
        # default proxy address, overridden below if the user typed one in
        my_proxies = {"http": "http://127.0.0.1:7890", "https": "https://127.0.0.1:7890"}
        if proxies_set != '':
            my_proxies['http'] = 'http://' + proxies_set
            my_proxies['https'] = 'https://' + proxies_set
    elif use_proxy == 'n':
        my_proxies = None
    else:
        # any other answer: ask again
        proxy_set()
def get_well(response):
    '''
    Extract the m3u8 video IDs and titles from the listing page.
    '''
    try:
        et = etree.HTML(response.text)
        well_list = et.xpath('//div[@class="thumb-overlay"]/@id')[:-1]
        well_title = et.xpath('//div[@class="thumb-overlay"]/../span/text()')[:-1]
    except Exception:
        print('Missing parameters')
        return []
    # build each m3u8 URL from the numeric part of the thumbnail id
    m3u8_list = [{'m3u8url': 'https://la.killcovid2021.com/m3u8/{num}/{num}.m3u8'.format(num=re.search(r'\d+', well_id).group()),
                  'title': title}
                 for title, well_id in zip(well_title, well_list)]
    return m3u8_list
# Main crawler; flag is the page number
def spider(flag):
    # If the link cannot be reached, replace page_url here with the standard address you know
    page_url = b'aHR0cDovL2YxMDIwLndvcmthcmVhNS5saXZlL3YucGhwP2NhdGVnb3J5PWhvdCZ2aWV3dHlwZT1iYXNpYyZwYWdlPQ=='
    page_url = base64.b64decode(page_url).decode('utf-8')
    page_url = page_url + str(flag)
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
        'Referer': 'http://91porn.com',
        'Accept-Language': 'zh-CN,zh;q=0.9'}
    # route the listing request through the proxy chosen in proxy_set()
    get_page = requests.get(url=page_url, headers=headers, proxies=my_proxies)
    m3u8_list = get_well(get_page)
    dir_download = os.path.join(os.path.dirname(__file__), '_download')
    for m3u8 in m3u8_list:
        m3u8download(m3u8url=m3u8['m3u8url'], title=m3u8['title'], enable_del=True,
                     proxies=my_proxies, work_dir=dir_download)
if __name__ == '__main__':
    proxy_set()
    for i in range(1, 4):
        spider(i)
Two points deserve special attention: first, the m3u8download source does not go through a proxy, so you need to modify it by hand; second, the urllib3 version must be urllib3==1.25.11.
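If you would rather not patch the downloader source, a rough workaround sketch: requests (and urllib) honour the standard proxy environment variables, so exporting them before the download has a similar effect. The address below is an example, not part of the original post, and the version pin is the one the author mentions:
import os
# example proxy address, adjust to your own setup
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
# and pin the dependency the author requires:
#   pip install urllib3==1.25.11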
Ace803 posted on 2022-6-7 11:00:
import requests.packages.urllib3
urllib3 is underlined in red, reported as no module named urllib3, which then also breaks reques ...
import urllib3
urllib3.disable_warnings()
话痨司机啊 posted on 2022-6-7 11:20:
If I didn't drop hints, how would you know this is a batch downloader that suits everyone's taste~
With the proxy running, choosing either y or n gives the same error:
Traceback (most recent call last):
File "D:\python******91爬取.py", line 70, in <module>
spider(i)
File "D:\python******91爬取.py", line 63, in spider
m3u8download(m3u8url=m3u8['m3u8url'], title=m3u8['title'], enable_del=True, proxies=my_proxies,
TypeError: m3u8download() got an unexpected keyword argument 'proxies'
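This TypeError matches the note above: the stock m3u8download() has no proxies parameter, so the installed copy has not been patched yet. Either modify the library as described, or drop the keyword and rely on an environment-level proxy (see the environment-variable sketch earlier). A fragment showing only the changed call inside spider(), as an illustration rather than the author's fix:
# inside spider(), the download call then becomes (no proxies keyword):
for m3u8 in m3u8_list:
    m3u8download(m3u8url=m3u8['m3u8url'], title=m3u8['title'],
                 enable_del=True, work_dir=dir_download)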
话痨司机啊 posted on 2022-6-7 10:56:
Change it to requests.urllib3
In the code:
requests.urllib3.disable_warnings()  # prevent H ...
import requests.packages.urllib3
urllib3 is underlined in red, reported as no module named urllib3, which then also makes requests.packages.urllib3.disable_warnings() fail
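In recent requests releases requests.packages.urllib3 is only a compatibility alias, and some IDEs flag it even though it resolves at runtime; importing the standalone urllib3 package (a hard dependency of requests) avoids the warning. A small compatibility sketch, an assumption about the environments involved rather than something from the thread:
try:
    import urllib3                          # the standalone package requests depends on
except ImportError:
    from requests.packages import urllib3  # fall back to the alias bundled with requests
urllib3.disable_warnings()                  # silence InsecureRequestWarning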
Python here is 3.10 and requests is 2.27.1. Is m3u8download_hecoter a local library?
Whenever I see a proxy requirement, I usually just walk on by~~~~~~~~~~~~~
The Referer stands out. 91 is really something, it actually got through. Impressive, thanks for sharing. Bookmarked.
iawyxkdn8 posted on 2022-6-2 16:51:
Whenever I see a proxy requirement, I usually just walk on by~~~~~~~~~~~~~
It's not that absolute. Thanks for sharing.