Crawling Weibo posts by keyword with Python:
import json
import time

import requests
from fake_useragent import UserAgent

comments_ID = []

def get_title_id():
    for page in range(2, 45):  # each page holds roughly 9 topics
        headers = {
            "User-Agent": UserAgent().chrome  # random Chrome User-Agent string
        }
        time.sleep(1)
        # This URL was obtained by packet capture (replace it with your own topic URL)
        api_url = 'https://m.weibo.cn/api/container/getIndex?containerid=231583&page_type=searchall=' + str(page)
        print(api_url)
        rep1 = requests.get(url=api_url, headers=headers)
        rep = json.loads(rep1.text)
        # Extract each ID and append it to the comments_ID list
        for json1 in rep['data']['cards']:
            comment_ID = json1["card_group"][0]['mblog']['id']
            comments_ID.append(comment_ID)
The error output is as follows:
D:\software\python\python.exe D:/BaiduNetdiskDownload/weibo/weibotopic.py
https://m.weibo.cn/api/container/getIndex?containerid=231583&page_type=searchall=2
Traceback (most recent call last):
File "D:/BaiduNetdiskDownload/weibo/weibotopic.py", line 278, in <module>
get_title_id()
File "D:/BaiduNetdiskDownload/weibo/weibotopic.py", line 80, in get_title_id
comment_ID = json1["card_group"][0]['mblog']['id']
KeyError: 'card_group'
Process finished with exit code 1
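From the first KeyError it looks like not every element of rep['data']['cards'] actually contains a 'card_group' key (possibly some cards in this container are of a different type). A quick, untested check would be to print each card's keys before indexing into it:

for card in rep['data']['cards']:
    print(card.keys())  # see which cards actually contain 'card_group'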
Then I changed the loop to
for json1 in rep['data']['cards']:
    comment_ID = json1[0]["card_group"][0]['mblog']['id']
    comments_ID.append(comment_ID)
which raised another error:
D:\software\python\python.exe D:/BaiduNetdiskDownload/weibo/weibotopic.py
https://m.weibo.cn/api/container/getIndex?containerid=231583&page_type=searchall=2
Traceback (most recent call last):
File "D:/BaiduNetdiskDownload/weibo/weibotopic.py", line 278, in <module>
get_title_id()
File "D:/BaiduNetdiskDownload/weibo/weibotopic.py", line 80, in get_title_id
comment_ID = json1[0]["card_group"][0]['mblog']['id']
KeyError: 0
Process finished with exit code 1
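The second KeyError: 0 seems to happen because json1 is a dict, so json1[0] looks up a key named 0 rather than the first element. If the real problem is just that some cards lack 'card_group', then skipping those cards might work; this is only a rough sketch, not tested against the actual API response, and the 'mblog' / 'id' field names are taken from my original code:

for json1 in rep['data']['cards']:
    card_group = json1.get('card_group')  # None if the card has no 'card_group'
    if not card_group:
        continue
    comments_ID.append(card_group[0]['mblog']['id'])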
I'm a beginner, so any pointers would be much appreciated.