Posted on 2021-12-5 16:28

LiangMo

Application title: Membership ID application: LiangMo

1. Requested ID: LiangMo
2. Personal email: 2434672036@qq.com

I wrote a Python web-scraping program. It downloads girl-photo galleries from a certain site and saves the images to D:/Python照片输出/meizi/.

Source code:
import os

import requests
from bs4 import BeautifulSoup

headers = {
    "referer": "https://www.mzitu.com/",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36"
}

save_dir = 'D:/Python照片输出/meizi/'
os.makedirs(save_dir, exist_ok=True)  # open() fails if the folder is missing

# Fetch the front page and collect a link to each gallery.
response = requests.get('https://www.mzitu.com/', headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
photos = soup.select('#pins>li>a')
name = 0
for photo in photos:
    name += 1
    dizhi = photo['href']  # gallery URL
    response = requests.get(dizhi, headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')
    # The second-to-last pagination link holds the last page number.
    # select() returns a list, so read the <span> via select_one().
    wei = int(soup.select('.pagenavi>a')[-2].select_one('span').text)
    for i in range(1, wei + 1):  # range(1, wei) would skip the last page
        response = requests.get('{}/{}'.format(dizhi, i), headers=headers)
        soup = BeautifulSoup(response.text, 'lxml')
        # select() also returns a list here; take the first <img>
        # before reading its 'src' attribute.
        img = soup.select('.main-image>p>a>img')[0]
        response = requests.get(img['src'], headers=headers)
        filename = '{}.{}'.format(name, i)
        with open('{}{}.jpg'.format(save_dir, filename), 'wb') as f:
            print('Downloading {}'.format(filename))
            f.write(response.content)
            print('{} downloaded'.format(filename))
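As posted, the script has no timeouts, error handling, or delay between requests, so one failed download stops the whole run. Below is a minimal hardening sketch; the helper name download_image, the retry count, and the one-second pause are my own assumptions, not part of the original post.

import time

import requests

def download_image(url, path, headers, retries=3, timeout=10):
    """Download one image to `path`, retrying on transient failures."""
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, headers=headers, timeout=timeout)
            response.raise_for_status()  # raise on HTTP 4xx/5xx
            with open(path, 'wb') as f:
                f.write(response.content)
            return True
        except requests.RequestException as err:
            print('Attempt {} for {} failed: {}'.format(attempt, url, err))
            time.sleep(1)  # brief pause before retrying
    return False

The inner loop would then call download_image(img['src'], '{}{}.jpg'.format(save_dir, filename), headers) instead of fetching and writing inline.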




Hmily replied on 2021-12-6 10:26

Sorry, this does not meet the application requirements, so the application is not approved. You can follow the forum's official WeChat account (吾爱破解论坛) and wait for the announcement when registration reopens.