Newbie asking for help: why won't it write to the file? Searched Baidu for 2 hours and still can't fix it
Last edited by lihu5841314 on 2021-6-2 22:58

import requests
import re
import os
from urllib import parse
import time
from lxml import etree

for i in range(1, 15):
    url = "https://www.taiuu.com/book/quanbu/default-0-0-0-0-0-0-{}.html".format(i)
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
    }
    r = requests.get(url=url, headers=headers, timeout=200)
    r.encoding = "gb2312"
    tree = etree.HTML(r.text)
    dl_list = tree.xpath('//div[@class="sitebox"]/dl')
    book_page_url = tree.xpath('//div[@id="pages"]/a/@href')
    if not os.path.exists("./books"):
        os.mkdir("./books")
    for dl in dl_list:
        detail_url = dl.xpath('./dt/a/@href')[0]  # xpath returns a list; take the first href
        # print(detail_url)
        new_url = parse.urljoin("https://www.taiuu.com", detail_url)
        # print(new_url)
        r2 = requests.get(url=new_url, headers=headers)
        r2.encoding = "gb2312"
        tree2 = etree.HTML(r2.text)
        book_name = tree2.xpath('//div[@class="book_info"]//img/@title')[0] + ".txt"
        book_zuozhe = tree2.xpath('//div[@class="options"]/span/text()')   # author
        book_title = tree2.xpath('//h3[@class="bookinfo_intro"]//text()')  # intro
        path = "./books/" + book_name
        li_list = tree2.xpath('//div[@class="book_list"]/ul/li')
        # print(book_title)
        for li in li_list:
            book_detail_url = li.xpath('./a/@href')[0]
            book_url_mu = parse.urljoin(new_url, book_detail_url)
            book_mulu = li.xpath('./a/text()')  # chapter title
            # print(book_url_mu)
            r3 = requests.get(url=book_url_mu, headers=headers)
            tree3 = etree.HTML(r3.text)
            book_detail_nr = tree3.xpath('//div[@id="htmlContent"]//text()')
            # book_detail_nr = re.sub(r'(\s+)', '', book_detail_nr)  # why doesn't this work? It reports a type error. Sample text: \r\n\xa0\xa0\xa0\xa0吴仙师的脸色彻底阴沉了下来,
            # out = "".join(book_detail_nr.split())  # don't understand why, still doesn't work
            # for i in book_detail_nr:
            #     with open(path, "a", encoding="utf8") as pf:
            #         pf.write(i)  # unless \xa0 is replaced first, nothing gets written; a gbk error is reported
            print(book_mulu, "download finished")
        print(book_name, "download finished")
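The type error mentioned in the comments can be reproduced in isolation. The sample fragments below are made up, but they mirror what an `//text()` xpath query returns, a list of strings rather than one string:

```python
import re

# xpath('...//text()') returns a LIST of text nodes, not a single string,
# so passing the list straight to re.sub raises the type error seen above.
fragments = ['\r\n\xa0\xa0\xa0\xa0line one', '\r\n\xa0\xa0\xa0\xa0line two']

try:
    re.sub(r'\s+', '', fragments)
except TypeError as exc:
    error = exc
    print('TypeError:', exc)

# Joining the fragments first gives re.sub a string, and it works
# (\xa0 counts as Unicode whitespace, so \s+ removes it too):
joined = re.sub(r'\s+', '', ''.join(fragments))
print(joined)  # lineonelinetwo
```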
Last edited by lihu5841314 on 2021-6-2 22:59
The goal is to crawl every book. Where should the pagination go? I'm a bit lost. The first page did run through. With utf-8 the file writes, but it's all mojibake; after switching the decoding it won't write at all and reports a type error.
Pagination is solved, but the mojibake still defeats me: either it saves as mojibake, or it isn't mojibake but then it won't write.
# original
book_detail_nr = re.sub(r'(\xa0)', '', book_detail_nr)
# changed to
book_detail_nr = re.sub(r'(\xa0)', '', "".join(book_detail_nr))
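Put together, the corrected cleanup-and-write step might look like this; a minimal sketch, where the sample text nodes and the demo file path are made up:

```python
import os
import re
import tempfile

# Simulated result of tree3.xpath('//div[@id="htmlContent"]//text()'):
# a list of text nodes padded with \xa0, which the gbk codec cannot encode.
book_detail_nr = ['\r\n\xa0\xa0\xa0\xa0第一段内容。', '\r\n\xa0\xa0\xa0\xa0第二段内容。']

# 1) join the list into a single string, 2) strip the \xa0 characters.
text = re.sub(r'\xa0', '', ''.join(book_detail_nr))

# errors='ignore' lets the write survive any remaining unencodable characters.
path = os.path.join(tempfile.gettempdir(), 'demo_chapter.txt')
with open(path, 'w', encoding='gbk', errors='ignore') as pf:
    pf.write(text)
```

With the \xa0 padding stripped, plain Chinese text encodes to gbk without error; `errors='ignore'` is just a safety net for stray characters.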
book_detail_nr is a list; it needs to be converted to a string. Take a look at line 45.
The site you are crawling is encoded as GBK, but you are decoding it as UTF-8.
lihu5841314 posted on 2021-6-2 22:36:
The goal is to crawl every book. Where should the pagination go? I'm a bit lost. The first page did run through. With utf-8 the file writes, but it's all mojibake ...
For the book listing pages, you can construct the page id directly:
https://www.taiuu.com/book/quanbu/default-0-0-0-0-0-0-1081.html
That trailing 1081 is the page id; it runs from 1 to 1081.
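That id scheme can be sketched as a list comprehension; 1081 is taken from the post above, and the real total may of course change:

```python
# Build every listing-page url up front; the page id is the last number
# before ".html" in the path.
base = "https://www.taiuu.com/book/quanbu/default-0-0-0-0-0-0-{}.html"
page_urls = [base.format(i) for i in range(1, 1082)]  # ids 1..1081

print(len(page_urls))  # 1081
print(page_urls[0])
```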
For chapter-to-chapter paging, look for the "next chapter" element on a chapter page, take its url, and issue the next request.
This site's paging is broken.
hxh-linux posted on 2021-6-2 23:03:
The site you are crawling is encoded as GBK, but you are decoding it as UTF-8.
That is not the real problem. The page declares gbk, so just use gbk; if it runs into illegal characters, add errors="ignore" when you open the file for writing.
1039468583 posted on 2021-6-2 23:06:
This site's paging is broken.
This site has bigger problems: the full-book txt downloads all return 404. If it weren't for that, I would have downloaded the whole thing long ago.
chinapython posted on 2021-6-2 23:09:
This site has bigger problems: the full-book txt downloads all return 404. If it weren't for that, I would have downloaded the whole thing long ago.
The paging itself is fine; I'm crawling it page by page.
lihu5841314 posted on 2021-6-2 23:13:
The paging itself is fine; I'm crawling it page by page.
This is mainly scraping practice; scrapy would make it simpler. I'll take another crack at this site tomorrow.