到底爱不爱我不 posted on 2024-5-9 15:44

Scraping images from 彼岸网 (pic.netbian.com) with Python's Scrapy


[*]Setup

pip install scrapy

# Create the project
scrapy startproject bian

# Generate a crawl-template spider inside the project
scrapy genspider -t crawl bian_pic pic.netbian.com

[*]Writing the spider

# settings.py
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"

ROBOTSTXT_OBEY = False

LOG_LEVEL = "ERROR"

CONCURRENT_REQUESTS = 32

ITEM_PIPELINES = {
    "bian.pipelines.BianPipeline": 300,
}
# items.py

class BianItem(scrapy.Item):
    href = scrapy.Field()
    title = scrapy.Field()
    src = scrapy.Field()
# bian_pic.py

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from bian.items import BianItem


class BianPicSpider(CrawlSpider):
    name = "bian_pic"
    # allowed_domains = ["pic.netbian.com"]
    base_url = "https://pic.netbian.com"
    start_urls = [
        "https://pic.netbian.com/4kdongman",
        "https://pic.netbian.com/4kyouxi",
        "https://pic.netbian.com/4kmeinv",
        "https://pic.netbian.com/4kfengjing",
        "https://pic.netbian.com/4kyingshi",
        "https://pic.netbian.com/4kqiche",
        "https://pic.netbian.com/4krenwu",
        "https://pic.netbian.com/4kdongwu",
        "https://pic.netbian.com/4kzongjiao",
        "https://pic.netbian.com/4kbeijing",
        "https://pic.netbian.com/pingban",
        "https://pic.netbian.com/shoujibizhi",
    ]

    link = LinkExtractor(restrict_xpaths='//*[@class="page"]/a')
    rules = (Rule(link, callback="parse_item", follow=True),)

    def parse_item(self, response):
        a_list = response.xpath('//*[@class="slist"]/ul/li/a')
        for a in a_list:
            # detail-page links carry target="_blank"; other links do not
            if a.xpath('./@target').extract_first():
                href = a.xpath('./@href').extract_first()
                item = BianItem()
                item["href"] = href
                # carry the partially filled item to the detail page,
                # otherwise the href field is lost before the item is yielded
                yield scrapy.Request(
                    url=self.base_url + href,
                    callback=self.parse_detail,
                    meta={"item": item},
                )

    def parse_detail(self, response):
        item = response.meta["item"]
        item["src"] = self.base_url + response.xpath('//*[@id="img"]/img/@src').extract_first()
        item["title"] = response.xpath('//*[@id="img"]/img/@title').extract_first()
        yield item
# pipelines.py

class BianPipeline:
    fp = None

    def open_spider(self, spider):
        print("Opening output file for scraped data")
        self.fp = open("pic.txt", "w", encoding="utf-8")

    def process_item(self, item, spider):
        self.fp.write(item["title"] + " | " + item["src"] + "\n")
        return item

    def close_spider(self, spider):
        print("Finished writing scraped data")
        self.fp.close()
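With the four files above in place, the crawl is started with Scrapy's standard `crawl` command:

```shell
# run from the project root (the directory containing scrapy.cfg);
# results accumulate in pic.txt via the pipeline above
scrapy crawl bian_pic
```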

[*]Closing notes

I wrote this out of boredom at the office, so the scraped data just gets written to a text file; I didn't dare actually download the images for fear of triggering unusual-traffic alarms. If you're interested, you can add a download method in pipelines.
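For anyone who does want to download, here is a minimal sketch of an extra pipeline using only the standard library. `BianDownloadPipeline` and `safe_filename` are hypothetical names, not part of the original project; it assumes items shaped like the `BianItem` above (with `title` and `src` fields) and would be registered in `ITEM_PIPELINES` with a priority after `BianPipeline` (e.g. 400).

```python
import os
import re
from urllib.request import Request, urlopen


def safe_filename(title, src):
    """Build a filesystem-safe file name from the image title, taking the
    extension from the source URL (hypothetical helper)."""
    ext = os.path.splitext(src)[1] or ".jpg"
    # replace characters that are illegal in Windows/Unix file names
    stem = re.sub(r'[\\/:*?"<>|]', "_", title or "").strip() or "untitled"
    return stem + ext


class BianDownloadPipeline:
    """Hypothetical extra pipeline that saves each image under ./images/."""

    def open_spider(self, spider):
        os.makedirs("images", exist_ok=True)

    def process_item(self, item, spider):
        # send a browser-like User-Agent, mirroring the one in settings.py
        req = Request(item["src"], headers={"User-Agent": "Mozilla/5.0"})
        with urlopen(req, timeout=10) as resp:
            data = resp.read()
        path = os.path.join("images", safe_filename(item["title"], item["src"]))
        with open(path, "wb") as f:
            f.write(data)
        return item
```

Keeping the filename logic in a separate helper makes it easy to sanity-check without hitting the network.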

甜萝 posted on 2024-5-9 15:47

I feel like OP could move this thread to the programming languages section

qwe5333515 posted on 2024-5-9 15:48

Can't understand a thing, just here to watch

pastorcd posted on 2024-5-9 15:51

Thanks for sharing, OP

到底爱不爱我不 posted on 2024-5-9 15:56

paypojie posted on 2024-5-9 15:47
I feel like OP could move this thread to the programming languages section

Do I need to repost it in that board, or can this thread be edited over?

modlive posted on 2024-5-9 16:09

Uh, I thought I'd walked into the wrong section; strolling past with hands behind my back and head held high, pretending to understand...

stone102 posted on 2024-5-9 16:10

A professional-grade fluff post?

niluelf posted on 2024-5-9 16:11

Actually you could @ a moderator to help move it ~ this thread clearly isn't just for padding post counts ~

八月初三 posted on 2024-5-9 16:16

It's fine to leave it here too; lets the fluff-posters see what real content looks like

xn2113 posted on 2024-5-9 16:25

paypojie posted on 2024-5-9 15:47
I feel like OP could move this thread to the programming languages section

Isn't this already the programming languages section?