Teacher, could you please take a look at why this error occurs?

2020-06-08 18:09:45 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: guazi_project)
2020-06-08 18:09:45 [scrapy.utils.log] INFO: Versions: lxml 4.5.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Windows-10-10.0.18362-SP0
2020-06-08 18:09:45 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-06-08 18:09:45 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'guazi_project',
 'NEWSPIDER_MODULE': 'guazi_project.spiders',
 'SPIDER_MODULES': ['guazi_project.spiders'],
 'USER_AGENT': ('Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like '
                'Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',)}
2020-06-08 18:09:45 [scrapy.extensions.telnet] INFO: Telnet Password: f31d8abdc97bf266
2020-06-08 18:09:46 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2020-06-08 18:09:46 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'guazi_project.middlewares.Guazi_Pro_Dow_Mid',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-06-08 18:09:46 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-06-08 18:09:46 [scrapy.middleware] INFO: Enabled item pipelines:
['guazi_project.pipelines.GuaziProjectPipeline']
2020-06-08 18:09:46 [scrapy.core.engine] INFO: Spider opened
2020-06-08 18:09:46 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-06-08 18:09:46 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-06-08 18:09:46 [scrapy.core.engine] INFO: Closing spider (finished)
2020-06-08 18:09:46 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.01464,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 6, 8, 10, 9, 46, 892060),
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2020, 6, 8, 10, 9, 46, 877420)}
2020-06-08 18:09:46 [scrapy.core.engine] INFO: Spider closed (finished)

2 Answers

Hello, classmate. The error shown is a KeyError. Check whether the key is written in the style underlined in the image:

http://img1.sycdn.imooc.com//climg/5edf36ee092aa33409000291.jpg

Happy learning~

  • 慕勒1399825 (asker) #1
    That is how I wrote it, so why do I still get a KeyError? if task['item_type'] == 'info_type':
    2020-06-09 15:41:14
  • 好帮手慕笑蓉 replied to asker 慕勒1399825 #2
    Hello, classmate. The database may have no data left after one full run, and the missing key then raises this error. Run the handle_guazi_task file first before executing the spider. Happy learning~
    2020-06-09 19:05:50
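The teacher's diagnosis can be sketched with plain dictionaries standing in for the task documents pulled from MongoDB (the field name `item_type` comes from the thread; the sample values are hypothetical). Once the queue is drained, the document handed back can be empty, and `task['item_type']` raises the KeyError, while `dict.get` degrades gracefully:

```python
# Hypothetical stand-ins for documents read from the guazi_task queue.
full_task = {"item_type": "info_type", "url": "https://example.com/1"}
empty_task = {}  # what an exhausted queue can hand back

# Square-bracket access raises KeyError on the empty document;
# dict.get returns None instead, so the spider can skip or stop cleanly.
assert full_task.get("item_type") == "info_type"
assert empty_task.get("item_type") is None  # no KeyError raised
```

This only masks the symptom; the queue still needs to be re-seeded (see the second answer below) for the spider to do useful work.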
好帮手慕笑蓉 2020-06-08 18:58:29

Hello, classmate. There is no error in this output. Is there some functionality that is not working? Please describe the problem in detail and paste the relevant code into the Q&A area, and a teacher will help you resolve it.

Happy learning~

  • (asker) 慕勒1399825 #1
    After running it there is no output at all, and no data is saved to the database.
    2020-06-08 19:55:25
  • 好帮手慕笑蓉 replied to asker 慕勒1399825 #2
    Hello, classmate. First check whether the guazi_task collection in the database contains data. If it does not, run the handle_guazi_task file first to save the data into MongoDB, and then run the spider again. Happy learning~
    2020-06-09 10:30:07
  • (asker) 慕勒1399825 replied to 好帮手慕笑蓉 #3
    Why does it stop after running once, and then report a KeyError when run again?
    2020-06-09 11:02:15 [scrapy.core.engine] ERROR: Error while obtaining start requests
    Traceback (most recent call last):
      File "D:\Program Files\Learning\python37\lib\site-packages\scrapy\core\engine.py", line 129, in _next_request
        request = next(slot.start_requests)
      File "D:\Program Files\Learning\pycharm\pachong\gaoji\shizhan\guazi_project\guazi_project\spiders\guazi.py", line 29, in start_requests
        if task['item_type'] == 'info_type':
    KeyError: 'item_type'
    2020-06-09 11:04:08
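The "works once, then KeyError" behavior described above is exactly what a queue consumed as it is read looks like. A minimal sketch, with a plain list standing in for the MongoDB guazi_task collection (the names follow the thread; the real spider would remove documents via pymongo rather than `list.pop`):

```python
def drain_tasks(queue):
    """Consume tasks from the queue and collect URLs of info-type tasks.

    Each task is removed as it is read, so after one full run the
    queue is empty and a second run finds nothing to process.
    """
    urls = []
    while queue:
        task = queue.pop(0)  # consuming the task empties the queue
        if task.get("item_type") == "info_type":
            urls.append(task["url"])
    return urls

# Hypothetical seed data, as handle_guazi_task would insert it.
queue = [{"item_type": "info_type", "url": "https://example.com/1"}]

first_run = drain_tasks(queue)   # -> ["https://example.com/1"]
second_run = drain_tasks(queue)  # queue already drained -> []
```

This is why the teacher's advice is to re-run handle_guazi_task between spider runs: it re-seeds the collection that the spider's start_requests drains.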