This post was last edited by carole1102 on 2019-11-30 22:39.
I'm learning Python here on 52pojie and have just started with web scraping. This is a practice scrape of Douban's Top 250 books; please go easy on me.
from lxml import etree
import requests
import csv

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}

df = open(r'e:\douban.csv', 'wt', newline='', encoding='utf-8-sig')
writer = csv.writer(df)
writer.writerow(('name', 'url', 'author', 'publisher', 'date', 'price', 'rate', 'comment'))

# Top 250 list is paginated 25 per page via the start= parameter
urls = ['https://book.douban.com/top250?start={}'.format(i) for i in range(0, 250, 25)]
for url in urls:
    html = requests.get(url, headers=headers)
    selector = etree.HTML(html.text)
    # XPath attribute names are case-sensitive: @class, not @Class
    infos = selector.xpath('//tr[@class="item"]')
    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        url = info.xpath('td/div/a/@href')[0]
        book_info = info.xpath('td/p/text()')[0]
        fields = [f.strip() for f in book_info.split('/')]
        author = fields[0]
        # index from the right so an extra middle field (e.g. a translator)
        # doesn't shift the publisher/date/price columns
        publisher = fields[-3]
        date = fields[-2]
        price = fields[-1]
        rate = info.xpath('td/div/span[2]/text()')[0]
        comments = info.xpath('td/p/span/text()')
        comment = comments[0] if len(comments) != 0 else '空'
        writer.writerow((name, url, author, publisher, date, price, rate, comment))
df.close()
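One fragile spot in this kind of scrape is the info line: Douban lays it out as "author / publisher / date / price", but entries for translated books add a translator field in the middle, so indexing publisher/date/price from the right end is safer than using fixed left-hand positions. A minimal standalone sketch of that split (the sample strings below are made up for illustration, not scraped data):

```python
def parse_book_info(book_info):
    """Split a Douban-style info line into (author, publisher, date, price).

    Indexing from the right keeps publisher/date/price correct even when
    an extra field (e.g. a translator) appears in the middle.
    """
    fields = [f.strip() for f in book_info.split('/')]
    return fields[0], fields[-3], fields[-2], fields[-1]

# Plain entry: author / publisher / date / price (illustrative sample)
print(parse_book_info('余华 / 作家出版社 / 2012-8-1 / 20.00元'))
# Translated entry with an extra translator field (illustrative sample)
print(parse_book_info('[哥伦比亚] 加西亚·马尔克斯 / 范晔 / 南海出版公司 / 2011-6 / 39.50元'))
```

Authors whose names themselves contain a slash would still break this, so for anything beyond practice it's worth wrapping the split in a try/except and logging the raw line.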