Scraper still getting 403 after adding headers and a proxy IP? Cloudflare may be to blame

When your scraper gets a 403, the most likely causes are:

1. Your User-Agent gave you away. Solution: add request headers.

import requests

# A common desktop Chrome User-Agent; replace it with one copied from your own browser.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36"
}

# The URL must include a scheme, otherwise requests raises MissingSchema.
target_url = "https://www.baidu.com"
resp = requests.get(target_url, headers=headers)
print(resp)
print(resp.status_code)
print(resp.text)

How to get a User-Agent:

1. Automatically: use an off-the-shelf library (see the sketch after this list),

https://github.com/hellysmile/fake-useragent

2. Manually: open the page you want to scrape, right-click → Inspect, reload the page, click any request under the Network tab, and copy the value from its Request Headers.
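
For the automatic route, here is a minimal sketch using fake-useragent; the UserAgent().random attribute shown here is from that project's README:

# pip install fake-useragent
import requests
from fake_useragent import UserAgent

ua = UserAgent()

# ua.random returns a randomly chosen real-world User-Agent string
# on every access, so repeated requests vary their fingerprint.
headers = {"User-Agent": ua.random}
resp = requests.get("https://www.baidu.com", headers=headers)
print(resp.status_code)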

2. Your scraper is hitting the site too frequently and your IP has been banned. Use a proxy IP.

import requests

target_url = "https://www.baidu.com"
# Replace host:port with your proxy's actual address.
proxyMeta = "http://host:port"
proxies = {
    "http": proxyMeta,
    "https": proxyMeta
}
resp = requests.get(target_url, proxies=proxies)
print(resp)
print(resp.status_code)
print(resp.text)

You can buy proxy IPs, or use free ones; see for example:

GitHub – constverum/ProxyBroker: Proxy [Finder | Checker | Server]. HTTP(S) & SOCKS
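
As a rough sketch of finding free proxies with ProxyBroker, adapted from that project's README (the project is no longer actively maintained, so treat this as illustrative rather than guaranteed to run on current Python versions):

# pip install proxybroker
import asyncio
from proxybroker import Broker

async def show(proxies):
    # Drain the queue; the Broker puts None when it has finished.
    while True:
        proxy = await proxies.get()
        if proxy is None:
            break
        print('Found proxy: %s' % proxy)

proxies = asyncio.Queue()
broker = Broker(proxies)
tasks = asyncio.gather(
    broker.find(types=['HTTP', 'HTTPS'], limit=10),
    show(proxies),
)
loop = asyncio.get_event_loop()
loop.run_until_complete(tasks)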

3. Bypass the Cloudflare check

If you save the contents of resp.text and open it in a browser, and it shows "Please stand by, while we are checking your browser.", then you need this approach:

# pip install cloudscraper
import cloudscraper

target_url = "https://www.baidu.com"
# create_scraper() returns a requests.Session subclass that solves
# Cloudflare's JavaScript challenge before returning the response.
scraper = cloudscraper.create_scraper()
ret = scraper.get(target_url)
print(ret)
print(ret.status_code)
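
Since create_scraper() returns a requests.Session subclass, the header and proxy fixes from steps 1 and 2 still compose with it. A minimal sketch, with the proxy address left as a placeholder:

import cloudscraper

scraper = cloudscraper.create_scraper()
# The scraper accepts the same keyword arguments as requests.Session.get.
proxies = {"http": "http://host:port", "https": "http://host:port"}
ret = scraper.get("https://www.baidu.com", proxies=proxies)
print(ret.status_code)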


Reference: GitHub – VeNoMouS/cloudscraper: A Python module to bypass Cloudflare’s anti-bot page.



Copyright notice: This is an original article by SuperYR_210, licensed under CC 4.0 BY-SA. When reposting, please include a link to the original source and this notice.