nianboy posted on 2021-11-12 13:11

Seeing everyone's novel-crawler posts, here's one from a crawler newbie

I'm a crawler newbie. I originally wrote this to download novels for my dad, so please go easy on me if you're an expert.

# -*- coding: utf-8 -*-
import requests
import re
import os
from lxml import etree
from urllib.parse import quote

if __name__ == '__main__':
    keyword = input("Enter the book title: ").encode("gb2312")  # the search endpoint expects a GB2312-encoded query
    url = "https://www.tingchina.com/search1.asp?keyword=" + quote(keyword)
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"
    }
    # requests guesses ISO-8859-1 for this site, so re-encode and decode the bytes as GBK
    page_text = requests.get(url=url, headers=headers).text.encode('iso-8859-1').decode('gbk')
    tree = etree.HTML(page_text)
    booksname = tree.xpath('/html/body/div/div/dl/dd/ul/li/a/text()')
    booknum = tree.xpath('/html/body/div/div/dl/dd/ul/li/a/@href')
    result = ''.join(booknum)
    booknumber = re.findall(r'yousheng/disp_(.*?).htm', result)
    for name, num in zip(booksname, booknumber):
        print("Title: " + name)
        print("ID: " + num)
    pagenum = input("Enter the book ID: ")
    page_text1 = requests.get(url="https://www.tingchina.com/yousheng/" + str(pagenum) + "/play_" + str(pagenum) + "_0.htm", headers=headers).text.encode('iso-8859-1').decode('gbk')
    # findall returns a list of page-index strings, so calling int() on the list
    # itself would raise TypeError; the largest index plus one is the chapter count
    pagenum1 = re.findall(r'play_\d+_(\d+)', page_text1)
    m = max(int(n) for n in pagenum1) + 1
    print("Total chapters: " + str(m))
    firstnum = input("Enter the first chapter to download: ")
    endnum = input("Enter the last chapter to download: ")
    for page in range(int(firstnum) - 1, int(endnum)):
        page = str(page)
        indexurl = "https://www.tingchina.com/yousheng/" + str(pagenum) + "/play_" + str(pagenum) + "_" + page + ".htm"
        url = "https://img.tingchina.com/play/h5_jsonp.asp?0.9091809774033375"
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36",
            "Referer": "https://www.tingchina.com/yousheng/" + str(pagenum) + "/play_" + str(pagenum) + "_5.htm",
        }
        getbook = requests.get(url=indexurl, headers=headers, verify=False).text.encode('iso-8859-1').decode('gbk')
        # re.findall returns lists, so take the first match of each pattern
        sonurl = re.findall(r'fileUrl= "(.*?)"', getbook)[0]
        # the "vote for this chapter" prompt on the page carries the chapter's file
        # name (the pattern matches the site's Chinese markup and must stay as-is)
        bookname = re.findall(r'如果您喜欢的话,请为(.*?).mp3投一票', getbook)[0]
        book = re.findall(r';"><strong>(.*?)</strong>', getbook)[0]
        getshu = requests.get(url=url, headers=headers, verify=False).text
        # the jsonp response holds the token that gets appended to the audio URL
        son = re.findall(r'"(.*?)";', getshu)[0].replace('" +"', "")
        bookurl = "https://t33.tingchina.com" + sonurl + son
        headers1 = {
            "Referer": "http://www.23ts.com/",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36",
        }
        getdj = requests.get(url=bookurl, headers=headers1, verify=False).content  # use the download headers, not the page headers
        file_name = "./" + book + "/"
        if not os.path.exists(file_name):
            os.mkdir(file_name)
        with open(file_name + "{}.mp3".format(bookname), "wb") as f:
            f.write(getdj)
            print(bookname + " ----------- downloaded!")

jiatao posted on 2021-11-12 13:51

As soon as crawlers come up, everyone reaches for Python. Looks like it's time to wrap up a reusable crawler function.
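A reusable helper along these lines would be a start. This is only a sketch of my own (`fetch` and its defaults are made up, not from the script above), folding in the explicit GBK decoding that the original repeats at every request:

```python
import requests

def fetch(url, headers=None, encoding="gbk", timeout=10):
    # fetch a page and decode it with a known charset instead of trusting
    # requests' guess, which is often ISO-8859-1 for GBK-encoded sites
    resp = requests.get(url, headers=headers, timeout=timeout)
    resp.raise_for_status()
    resp.encoding = encoding  # override the detected charset before reading .text
    return resp.text
```

Setting `resp.encoding` once replaces the repeated `.text.encode('iso-8859-1').decode('gbk')` round trip.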

ericzhao666 posted on 2021-11-12 14:52

New member reporting in. I once saw a web app someone built that auto-crawled novels into a bookshelf, cached them locally, saved books, bookmarks, and reading progress, and even supported Kindle. Sadly the site later became unreachable.

nianboy posted on 2021-11-12 13:25

I only registered on 52pojie yesterday, though I'd been reading its posts for a long time. My skills are limited and my code is a mess, so please don't mock me :lol

xiaoming123456 posted on 2021-11-12 13:53

I read 52pojie posts all the time.

lzn223568 posted on 2021-11-12 13:56

New member reporting in, here to learn.

lye123456 posted on 2021-11-12 14:16

How do I use this crawler you wrote?

zkz6969 posted on 2021-11-12 14:17

It would be even better with multithreading.
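The suggestion above could be sketched with concurrent.futures. Here `download_chapter` is a hypothetical stand-in for the per-chapter fetch-and-save body of the loop in the original script:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_chapter(page):
    # hypothetical placeholder for the per-chapter fetch/save logic
    return "chapter {} done".format(page)

def download_range(first, last, workers=4):
    # chapter downloads are I/O-bound, so a thread pool overlaps the waiting
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(download_chapter, p) for p in range(first - 1, last)]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```

Keeping `workers` small (around 4) avoids hammering the site with too many simultaneous requests.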

wsz12312 posted on 2021-11-12 14:18

New member reporting in.

testc0de posted on 2021-11-12 14:21

Impressive. I last wrote one a few years ago, to make reading Tianya novels easier.

52pojie66 posted on 2021-11-12 14:38

Which site does this download from? Would it work on Qidian?