吾爱破解 - 52pojie.cn


[Python repost] Biquge full-site novel scraper

Raohz520 posted on 2021-4-24 07:58
Last edited by Raohz520 on 2021-4-24 16:55

Biquge full-site novel scraper

1. The script uses six modules:

import time
import requests         # pip install requests
import os
import random
from lxml import etree  # pip install lxml
import webbrowser
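Of these, only `requests` and `lxml` are third-party packages; `time`, `os`, `random`, and `webbrowser` ship with Python. They can typically be installed in one step (package names as published on PyPI):

```shell
pip install requests lxml
```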
2. Source code:

# https://www.biquge.info/wanjiexiaoshuo/   Biquge full-site scraper for completed novels
import time
import requests
import os
import random
from lxml import etree
import webbrowser
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36 Edg/89.0.774.77"
}
noName = ['#','/','\\',':','*','?','\"','<','>','|']     # characters not allowed in Windows file names
filePath = './保存小说'   # output folder for downloaded novels
def strZ(_str): # replace illegal filename characters with spaces
    ret = ''
    for _ in _str:
        if _ in noName:
            ret += " "
        else:
            ret += _
    return ret
def main():
    webbrowser.open('https://www.biquwx.la/')
    if not os.path.exists(filePath):
        os.mkdir(filePath)
    print('1. Scrape one specified novel')
    print('2. Scrape the whole site')
    if input('Which mode do you want to use?  ') == '1':
        appintDown()
    else:
        allDown()
    input('Press Enter to exit')
def appintDown(): # scrape one specified novel; assumes the directory URL is valid
    page_url = input('Enter the novel directory URL (e.g. https://www.biquwx.la/10_10240/) :  ')
    page = requests.get(url=page_url, headers=header)
    if page.status_code == 200:  # scrape only on a 200 response
        page.encoding = 'utf-8'
        page_tree = etree.HTML(page.text)
        page_title = page_tree.xpath('//div[@id="info"]/h1/text()')[0]
        _filePath = filePath + '/' + page_title
        if not os.path.exists(_filePath):
            os.mkdir(_filePath)
        page_dl_list = page_tree.xpath('//div[@class="box_con"]/div[@id="list"]/dl/dd')
        for _ in page_dl_list:
            _page_url = page_url + _.xpath('./a/@href')[0]
            _page_title = _filePath + '/' + strZ(_.xpath('./a/@title')[0]) + '.txt'
            _page = requests.get(_page_url, headers=header)
            if _page.status_code == 200:
                _page.encoding = 'utf-8'
                _tree = etree.HTML(_page.text)
                _page_content = _tree.xpath('//div[@id="content"]/text()')
                fileContent = ''
                for _ in _page_content:
                    fileContent += _ + '\n'
                with open(_page_title, 'w', encoding='utf-8') as fp:
                    fp.write(fileContent)
                    print('%s saved locally' % (_page_title))
                time.sleep(random.uniform(0.05, 0.2))
def allDown(): # scrape every novel on the site
    url = 'https://www.biquge.info/wanjiexiaoshuo/'  # index of all completed novels
    page = requests.get(url=url, headers=header)
    if page.status_code == 200:  # scrape only on a 200 response
        page.encoding = 'utf-8'
        tree = etree.HTML(page.text)
        page_last = tree.xpath('//div[@class="pagelink"]/a[@class="last"]/text()')[0]
        for page_i in range(1, int(page_last) + 1):  # visit every listing page, including the last one
            url = 'https://www.biquge.info/wanjiexiaoshuo/' + str(page_i)
            page = requests.get(url=url, headers=header)
            if page.status_code == 200:  # scrape only on a 200 response
                page.encoding = 'utf-8'
                tree = etree.HTML(page.text)
                li_list = tree.xpath('//div[@class="novelslistss"]/ul/li')
                for li in li_list:
                    page_url = li.xpath('./span[@class="s2"]/a/@href')[0]  # novel directory link
                    page_title = strZ(li.xpath('./span[@class="s2"]/a/text()')[0])
                    page = requests.get(url=page_url, headers=header)
                    if page.status_code == 200:  # scrape only on a 200 response
                        page.encoding = 'utf-8'
                        page_tree = etree.HTML(page.text)
                        _filePath = filePath + '/' + page_title
                        if not os.path.exists(_filePath):
                            os.mkdir(_filePath)
                        page_dl_list = page_tree.xpath('//div[@class="box_con"]/div[@id="list"]/dl/dd')
                        for _ in page_dl_list:
                            _page_url = page_url + _.xpath('./a/@href')[0]
                            _page_title = _filePath + '/' + strZ(_.xpath('./a/@title')[0]) + '.txt'
                            _page = requests.get(_page_url, headers=header)
                            if _page.status_code == 200:
                                _page.encoding = 'utf-8'
                                _tree = etree.HTML(_page.text)
                                _page_content = _tree.xpath('//div[@id="content"]/text()')
                                fileContent = ''
                                for _ in _page_content:
                                    fileContent += _ + '\n'
                                with open(_page_title, 'w', encoding='utf-8') as fp:
                                    fp.write(fileContent)
                                    print('%s saved locally' % (_page_title))
                                time.sleep(random.uniform(0.05, 0.2))
if __name__ == '__main__':
    main()
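As a side note, the character-by-character loop in `strZ` can be written more idiomatically with `str.translate`, which maps characters via a table built once with `str.maketrans`. This sketch behaves the same as the original helper:

```python
# Characters the original noName list treats as illegal in file names.
BAD_CHARS = '#/\\:*?"<>|'
SANITIZE = str.maketrans({c: ' ' for c in BAD_CHARS})

def strZ(_str):
    """Replace characters that are illegal in file names with spaces."""
    return _str.translate(SANITIZE)

print(strZ('第1章: 重生?'))  # the ':' and '?' become spaces
```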



Natu posted on 2021-4-24 09:38
dr-pan posted on 2021-4-24 09:32
It would be best to give us newbies a packaged program or finished build

This is the programming section, not the quality-software sharing section — but you said exactly what I was thinking, haha
dr-pan posted on 2021-4-24 09:32
kylinwyz posted on 2021-4-24 08:03
jhcybb posted on 2021-4-24 08:09
I don't know how to use this, please advise!
haokonglin posted on 2021-4-24 08:12
Good stuff, I'll try it out first
ccwuax posted on 2021-4-24 08:13
I'm here to learn. This site isn't easy to scrape; bookmarked to study slowly. Thanks for sharing
xinyuguy posted on 2021-4-24 08:23
After reading it through, it still looks simpler to implement in Delphi
熊一只 posted on 2021-4-24 08:27
Thanks for sharing
a397555462 posted on 2021-4-24 08:45
How do I specify which novel to download?
ghy197674 posted on 2021-4-24 09:10
Thanks for sharing
爱吃鱼的有点帅 posted on 2021-4-24 09:12
nice nice