Error when downloading lxml
# Specific problem encountered
How do I upgrade pip?
# Error message
```
E:\python课件\入门主流框架Scrapy与爬虫项目\初探网络爬虫\爬虫进阶与实战\第二课python爬虫进阶与实战\第二课 python爬虫进阶与实战>pip install lxml
Collecting lxml
  WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)'))': /packages/99/cd/7aecddd5e0290379cf40e47490f3dc04a2e6ea8317c1d21ff2caad24f284/lxml-4.6.1-cp38-cp38-win_amd64.whl
  WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)'))': /packages/99/cd/7aecddd5e0290379cf40e47490f3dc04a2e6ea8317c1d21ff2caad24f284/lxml-4.6.1-cp38-cp38-win_amd64.whl
  Downloading lxml-4.6.1-cp38-cp38-win_amd64.whl (3.5 MB)
     |█                               | 133 kB 37 kB/s eta 0:01:32
ERROR: Exception:
Traceback (most recent call last):
  File "d:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 425, in _error_catcher
    yield
  File "d:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 507, in read
    data = self._fp.read(amt) if not fp_closed else b""
  File "d:\python\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 62, in read
    data = self.__fp.read(amt)
  File "d:\python\lib\http\client.py", line 454, in read
    n = self.readinto(b)
  File "d:\python\lib\http\client.py", line 498, in readinto
    n = self.fp.readinto(b)
  File "d:\python\lib\socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "d:\python\lib\ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "d:\python\lib\ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "d:\python\lib\site-packages\pip\_internal\cli\base_command.py", line 188, in _main
    status = self.run(options, args)
  File "d:\python\lib\site-packages\pip\_internal\cli\req_command.py", line 185, in wrapper
    return func(self, options, args)
  File "d:\python\lib\site-packages\pip\_internal\commands\install.py", line 332, in run
    requirement_set = resolver.resolve(
  File "d:\python\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 179, in resolve
    discovered_reqs.extend(self._resolve_one(requirement_set, req))
  File "d:\python\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 362, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "d:\python\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 314, in _get_abstract_dist_for
    abstract_dist = self.preparer.prepare_linked_requirement(req)
  File "d:\python\lib\site-packages\pip\_internal\operations\prepare.py", line 467, in prepare_linked_requirement
    local_file = unpack_url(
  File "d:\python\lib\site-packages\pip\_internal\operations\prepare.py", line 255, in unpack_url
    file = get_http_url(
  File "d:\python\lib\site-packages\pip\_internal\operations\prepare.py", line 129, in get_http_url
    from_path, content_type = _download_http_url(
  File "d:\python\lib\site-packages\pip\_internal\operations\prepare.py", line 281, in _download_http_url
    for chunk in download.chunks:
  File "d:\python\lib\site-packages\pip\_internal\cli\progress_bars.py", line 166, in iter
    for x in it:
  File "d:\python\lib\site-packages\pip\_internal\network\utils.py", line 15, in response_chunks
    for chunk in response.raw.stream(
  File "d:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 564, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "d:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 529, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "d:\python\lib\contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "d:\python\lib\site-packages\pip\_vendor\urllib3\response.py", line 430, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
WARNING: You are using pip version 20.1.1; however, version 20.2.4 is available.
You should consider upgrading via the 'd:\python\python.exe -m pip install --upgrade pip' command.
```
# Screenshot of relevant course content
# Attempted solutions and their results
# Paste all relevant code (remember to add code comments; please do not paste screenshots)
Hello. This is a connection timeout error, most likely caused by network problems. You can switch to a domestic PyPI mirror for the download.
For example: pip install lxml -i http://pypi.douban.com/simple/
Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Aliyun: http://mirrors.aliyun.com/pypi/simple/
University of Science and Technology of China: https://pypi.mirrors.ustc.edu.cn/simple/
Huazhong University of Science and Technology: http://pypi.hustunique.com/
Shandong University of Technology: http://pypi.sdutlinux.org/
Douban: http://pypi.douban.com/simple/
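As a sketch of how the advice above fits together (using the Tsinghua mirror URL from the list; any of the others would work the same way), you can use a mirror once with `-i`, make it the default with `pip config`, and then upgrade pip itself, which also answers the original question:

```shell
# One-off: install lxml from a mirror for this command only (-i overrides the index URL)
pip install lxml -i https://pypi.tuna.tsinghua.edu.cn/simple

# Persistent: make the mirror the default index for all future pip commands
# (pip writes this into its per-user config file, e.g. %APPDATA%\pip\pip.ini on Windows)
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

# Upgrade pip itself; run it via "python -m pip" as the log suggests, so that on
# Windows the running pip.exe can be replaced
python -m pip install --upgrade pip
```

Note that `pip config set` changes the default for every later `pip install`, so the mirror only needs to be set once.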