Last edited by HaNnnng on 2018-11-14 00:51
My first post; I'm documenting my progress learning web scraping.
This is a very simple example that scrapes a novel from jingcaiyuedu.com (精彩阅读网). To scrape a specific novel, manually change the URL assigned to `url` near the top of the script.
This is the procedural version of the crawler; tomorrow I'll rework it into an object-oriented one.
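The core trick throughout the script is pulling fields out of raw HTML with `re.findall` and non-greedy capture groups. A minimal, self-contained demo of that technique on a made-up snippet (no network needed; the HTML below is invented for illustration):

```python
import re

# Toy chapter-list markup, mimicking the structure the crawler parses.
html = ('<dl class="panel-body panel-chapterlist">'
        '<dd><a href="/book/1.html">Chapter 1</a></dd>'
        '<dd><a href="/book/2.html">Chapter 2</a></dd>'
        '</dl>')

# Each non-greedy group captures one link's URL and title as a (url, title) pair.
chapters = re.findall(r'href="(.*?)">(.*?)<', html)
print(chapters)  # [('/book/1.html', 'Chapter 1'), ('/book/2.html', 'Chapter 2')]
```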
[Python]
# -*- coding: utf-8 -*-
__author__ = 'lwh'
__date__ = '2018/11/10 15:12'
import requests
import re
# Fetch the novel's index page
url = 'http://www.jingcaiyuedu.com/book/317834.html'
response = requests.get(url)
response.encoding = 'utf-8'
html = response.text
# Extract the novel's title
title = re.findall(r'<meta property="og:novel:book_name" content="(.*?)"/>', html)[0]
# Extract the chapter data: each chapter's title and URL
dl = re.findall(r'<dl class="panel-body panel-chapterlist"> <dd class="col-md-3">.*?</dl>', html, re.S)[0]
chapter_info_list = re.findall(r'href="(.*?)">(.*?)<', dl)
# Open the output file
f = open('%s.txt' % title, "w", encoding='utf-8')
# Download each chapter in turn
for chapter_url, chapter_title in chapter_info_list:
    chapter_url = 'http://www.jingcaiyuedu.com%s' % chapter_url
    response = requests.get(chapter_url)
    response.encoding = 'utf-8'
    html = response.text
    # Extract the chapter body
    chapter_content = re.findall(r' <div class="panel-body" id="htmlContent">(.*?)</div> ', html, re.S)[0]
    # Strip the HTML tags and stray spaces left in the text
    chapter_content = chapter_content.replace('<br />', '')
    chapter_content = chapter_content.replace('<br>', '')
    chapter_content = chapter_content.replace('<br/>', '')
    chapter_content = chapter_content.replace('<p>', '')
    chapter_content = chapter_content.replace('</p>', '')
    chapter_content = chapter_content.replace(' ', '')
    f.write(chapter_title)
    f.write('\n')
    f.write(chapter_content)
    f.write('\n\n\n\n\n')
    print(chapter_url)
f.close()
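The post above promises an object-oriented rewrite. Here is a minimal sketch of what that might look like; the class and method names are my own invention, but the parsing logic mirrors the procedural script:

```python
import re

import requests


class NovelDownloader:
    """Object-oriented sketch of the crawler above (names are illustrative)."""

    BASE_URL = 'http://www.jingcaiyuedu.com'

    def __init__(self, book_url):
        self.book_url = book_url

    def fetch(self, url):
        # Download one page and decode it as UTF-8.
        response = requests.get(url)
        response.encoding = 'utf-8'
        return response.text

    def parse_title(self, html):
        # The book name sits in an og:novel:book_name <meta> tag.
        return re.findall(
            r'<meta property="og:novel:book_name" content="(.*?)"/>', html)[0]

    def parse_chapters(self, html):
        # Grab the chapter-list block, then every (url, title) pair inside it.
        dl = re.findall(
            r'<dl class="panel-body panel-chapterlist">.*?</dl>', html, re.S)[0]
        return re.findall(r'href="(.*?)">(.*?)<', dl)

    @staticmethod
    def clean(content):
        # One regex replaces the chain of str.replace calls: drop <br>/<p> tags
        # and stray spaces.
        return re.sub(r'<br\s*/?>|</?p>| ', '', content)

    def parse_content(self, html):
        # Extract and clean one chapter's body text.
        body = re.findall(
            r'<div class="panel-body" id="htmlContent">(.*?)</div>', html, re.S)[0]
        return self.clean(body)

    def run(self):
        index_html = self.fetch(self.book_url)
        title = self.parse_title(index_html)
        with open('%s.txt' % title, 'w', encoding='utf-8') as f:
            for chapter_url, chapter_title in self.parse_chapters(index_html):
                chapter_html = self.fetch(self.BASE_URL + chapter_url)
                f.write(chapter_title + '\n')
                f.write(self.parse_content(chapter_html) + '\n\n\n\n\n')
                print(chapter_url)


# Usage (hits the network, so only run it deliberately):
# NovelDownloader('http://www.jingcaiyuedu.com/book/317834.html').run()
```

Using a `with` block means the file is closed even if a request fails partway through, and keeping the parsing in separate methods makes each step testable on plain strings without any network access.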