I reworked the code to use a proxy IP, but it still fails with:
uk:2518160999 error to fetch files,try again later
getShareLists errno:-55
The code is as follows:
import sys
import time
import urllib2

def getHtml(url, ref=None, reget=5):
    try:
        proxies = {'http': '222.194.14.130:808'}
        proxy_support = urllib2.ProxyHandler(proxies)
        # Build the opener and install it; with install_opener()
        # commented out, the proxy handler is never used and
        # urlopen() connects directly.
        opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
        urllib2.install_opener(opener)
        request = urllib2.Request(url)
        request.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36')
        if ref:
            request.add_header('Referer', ref)
        page = urllib2.urlopen(request, timeout=10)
        html = page.read()
    except Exception:
        if reget >= 1:
            # If the fetch fails, retry (up to 5 attempts in total)
            print 'getHtml error,reget...%d' % (6 - reget)
            time.sleep(2)
            return getHtml(url, ref, reget - 1)
        else:
            print 'request url:' + url
            print 'failed to fetch html'
            sys.exit()
    else:
        return html
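For comparison, here is a minimal Python 3 sketch of the same retry-through-proxy logic using urllib.request. This is not the project's code: the function name get_html is mine, the proxy address is the one from the post (assumed reachable), and the opener is made injectable so the fetch path can be exercised without a network.

```python
import time
import urllib.request

def get_html(url, ref=None, retries=5, opener=None):
    """Fetch `url` through an HTTP proxy, retrying on failure.

    `opener` can be passed in for testing; by default one is built
    around the proxy address from the post.
    """
    if opener is None:
        proxy = urllib.request.ProxyHandler({'http': '222.194.14.130:808'})
        opener = urllib.request.build_opener(proxy)
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    if ref:
        req.add_header('Referer', ref)
    for attempt in range(retries):
        try:
            with opener.open(req, timeout=10) as page:
                return page.read()
        except Exception:
            time.sleep(2)  # back off briefly before the next attempt
    raise RuntimeError('failed to fetch html: ' + url)
```

Using opener.open() on an opener you built (or calling urllib.request.install_opener() first) is the key difference from the posted code, where the opener was constructed but never used.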