Please credit the source when reposting!!!
Subject: the Douban Movies page for In the Name of the People (人民的名义)
Objective: use the Scrapy framework to collect reviews of In the Name of the People, and get a better feel for the information-retrieval process along the way.
Process: analyze the entities to collect -> choose a collection method -> define crawling rules -> write and debug the code -> obtain the data
![img_c3cae20ccfa9b7015f9d0d1412ac6d0a.png](https://yqfile.alicdn.com/img_c3cae20ccfa9b7015f9d0d1412ac6d0a.png?x-oss-process=image/resize,w_1400/format,webp)
PS: Douban's recently released API v2 beta requires OAuth2 authorization, but key applications are currently closed, so I scraped the web pages directly.
--------------------------------- Corrections and questions welcome! Online 24/7, never closed!! ---------------------
Contents
- Analyze the entities to collect
- Choose a collection method
- Define crawling rules
- Write and debug the code
- Obtain the data
- Analyze the data with a word-segmentation toolkit
- Summary and reflections
1. Analyze the Entities to Collect
The page offers many kinds of review-related content; we analyze the options and pick the most representative data to collect.
1.1 IMDb (backup)
Douban provides a link to the show's IMDb page.
![img_3454ae437cd8118920430c9ecabe87f2.png](https://yqfile.alicdn.com/img_3454ae437cd8118920430c9ecabe87f2.png?x-oss-process=image/resize,w_1400/format,webp)
![img_45b7e35f33d1324dd0fb67c638c92b67.png](https://yqfile.alicdn.com/img_45b7e35f33d1324dd0fb67c638c92b67.png?x-oss-process=image/resize,w_1400/format,webp)
IMDb only carries 5 English reviews.
![img_83fa240f7008146b824bd01daacc3d61.png](https://yqfile.alicdn.com/img_83fa240f7008146b824bd01daacc3d61.png?x-oss-process=image/resize,w_1400/format,webp)
Recording the URL for later use: http://www.imdb.com/user/ur70913446/comments?ref_=tt_urv
1.2 All reviews (not collected)
![img_fc73f1529de8bd2e96bcfad89ba23777.png](https://yqfile.alicdn.com/img_fc73f1529de8bd2e96bcfad89ba23777.png?x-oss-process=image/resize,w_1400/format,webp)
This link points to all reviews with no categorization, so we pass on it.
![img_98acea7396a29ff8366483ea884925ec.png](https://yqfile.alicdn.com/img_98acea7396a29ff8366483ea884925ec.png?x-oss-process=image/resize,w_1400/format,webp)
1.3 Per-episode short comments (not collected)
Per-episode short comments are available here, but they are not representative, so we pass on them.
![img_a1d13eef603fc204a574bfbf8844f24e.png](https://yqfile.alicdn.com/img_a1d13eef603fc204a574bfbf8844f24e.png?x-oss-process=image/resize,w_1400/format,webp)
1.4 All short comments (partially collected)
All short comments on the show are available here; we plan to collect the first 50 under Watched/Popular.
![img_0f057a60113da313ce56b7c82dead309.png](https://yqfile.alicdn.com/img_0f057a60113da313ce56b7c82dead309.png?x-oss-process=image/resize,w_1400/format,webp)
1.5 All full reviews (partially collected)
For the full reviews, we plan to collect the 50 most popular.
![img_0991562495ad66265bb7f7408013614c.png](https://yqfile.alicdn.com/img_0991562495ad66265bb7f7408013614c.png?x-oss-process=image/resize,w_1400/format,webp)
1.6 Determine the entities to collect
Douban provides some reviews in XML format.
![img_0e816dec54c3169aad4ea2d220d84dc5.png](https://yqfile.alicdn.com/img_0e816dec54c3169aad4ea2d220d84dc5.png?x-oss-process=image/resize,w_1400/format,webp)
![img_7fcd5d3862f4873a772c2f37b55e7706.png](https://yqfile.alicdn.com/img_7fcd5d3862f4873a772c2f37b55e7706.png?x-oss-process=image/resize,w_1400/format,webp)
The collected content is quite comprehensive; using this official example as a reference, we settle on these fields:
- title (full reviews only)
- description
- star
- creator
- pubDate
2. Choose a Collection Method
2.1 Collecting short comments
start_urls: https://movie.douban.com/subject/26727273/comments?status=P
Content: collected within the current page
![img_a386ff98ade2fa69777ea694d7ae4bcb.png](https://yqfile.alicdn.com/img_a386ff98ade2fa69777ea694d7ae4bcb.png?x-oss-process=image/resize,w_1400/format,webp)
Pagination: the 【后页】 (Next) link jumps to the next page
![img_a50665bf41ca77ef88e2b49b3bef0335.png](https://yqfile.alicdn.com/img_a50665bf41ca77ef88e2b49b3bef0335.png?x-oss-process=image/resize,w_1400/format,webp)
2.2 Collecting full reviews
start_urls: https://movie.douban.com/subject/26727273/reviews
Content: the complete review text can be scraped from the current page
![img_3ebfc289ef8abf834b9cac326c5f32e6.png](https://yqfile.alicdn.com/img_3ebfc289ef8abf834b9cac326c5f32e6.png?x-oss-process=image/resize,w_1400/format,webp)
![img_3f7ecf04c1df7320ee7f5f790b1fb2f7.png](https://yqfile.alicdn.com/img_3f7ecf04c1df7320ee7f5f790b1fb2f7.png?x-oss-process=image/resize,w_1400/format,webp)
![img_69cbd387f6dea6e52815b2f01b44a539.png](https://yqfile.alicdn.com/img_69cbd387f6dea6e52815b2f01b44a539.png?x-oss-process=image/resize,w_1400/format,webp)
![img_f25319c55c9ebcc8770ff5e6f94cd236.png](https://yqfile.alicdn.com/img_f25319c55c9ebcc8770ff5e6f94cd236.png?x-oss-process=image/resize,w_1400/format,webp)
As the screenshots show, the page uses JavaScript to toggle classes that show or hide content, and fills in values dynamically via AJAX.
3. Define Crawling Rules
3.1 Short-comment rules
3.1.1 description
![img_64c76193067212f083dadbf1ee59c4c8.png](https://yqfile.alicdn.com/img_64c76193067212f083dadbf1ee59c4c8.png?x-oss-process=image/resize,w_1400/format,webp)
div#comments div.comment-item div.comment p::text
3.1.2 star
![img_11f0f358906a4481240a638ca8c053df.png](https://yqfile.alicdn.com/img_11f0f358906a4481240a638ca8c053df.png?x-oss-process=image/resize,w_1400/format,webp)
div#comments div.comment-item div.comment h3 span.comment-info span.rating::attr(title)
3.1.3 creator
![img_c292faf01c9ee606dbb02f0dfc2cab71.png](https://yqfile.alicdn.com/img_c292faf01c9ee606dbb02f0dfc2cab71.png?x-oss-process=image/resize,w_1400/format,webp)
div#comments div.comment-item div.comment h3 span.comment-info a::attr(href)
3.1.4 pubDate
![img_ebc7ef9805edf98968f284ef16f9e5b2.png](https://yqfile.alicdn.com/img_ebc7ef9805edf98968f284ef16f9e5b2.png?x-oss-process=image/resize,w_1400/format,webp)
div#comments div.comment-item div.comment h3 span.comment-info span.comment-time::text
3.1.5 next_page
![img_182f74ea1e2771180097a44bfc3c8c41.png](https://yqfile.alicdn.com/img_182f74ea1e2771180097a44bfc3c8c41.png?x-oss-process=image/resize,w_1400/format,webp)
div#paginator a.next::attr(href)
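Before wiring these selectors into a spider, each one can be sanity-checked in the Scrapy shell. A quick session might look like this (if Douban rejects the default user-agent, append -s USER_AGENT="Mozilla/5.0"):

```
$ scrapy shell "https://movie.douban.com/subject/26727273/comments?status=P"
>>> response.css('div#comments div.comment-item div.comment p::text').extract_first()
>>> response.css('div#paginator a.next::attr(href)').extract_first()
```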
3.2 Full-review rules (see the sketch after this list)
3.2.1 title
3.2.2 description
3.2.3 star
3.2.4 creator
3.2.5 pubDate
3.2.6 next_page
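The selectors for these fields follow the same approach as the short comments. As an illustration only, here is a minimal sketch of a review spider; every CSS selector in it (review-item, review-content, and so on) is a hypothetical placeholder to be verified against the live page, not a confirmed rule:

```python
import scrapy
from douban.items import DoubanItem


class MyReviewSpider(scrapy.Spider):
    name = "my_review"
    allowed_domains = ["douban.com"]
    start_urls = [
        'https://movie.douban.com/subject/26727273/reviews',
    ]

    def parse(self, response):
        # NOTE: every selector below is a hypothetical placeholder --
        # check each one against the live page in the Scrapy shell first.
        for review in response.css('div.review-item'):
            item = DoubanItem()
            item['title'] = review.css('h2 a::text').extract_first()
            item['description'] = ' '.join(
                review.css('div.review-content ::text').extract()).strip()
            item['star'] = review.css('span.rating::attr(title)').extract_first()
            item['creator'] = review.css('header a.name::attr(href)').extract_first()
            item['pubDate'] = review.css('header span.main-meta::text').extract_first()
            yield item
        # same pagination pattern as the short-comment spider
        next_page = response.css('span.next a::attr(href)')
        if next_page:
            yield scrapy.Request(response.urljoin(next_page.extract_first()),
                                 callback=self.parse)
```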
4. Write and Debug the Code
4.1 Scraping short comments
Create a new project, douban (e.g. with scrapy startproject douban)
![img_8ec98b989bf45b9eaaebe012e7aa5dc7.png](https://yqfile.alicdn.com/img_8ec98b989bf45b9eaaebe012e7aa5dc7.png?x-oss-process=image/resize,w_1400/format,webp)
Write items.py:

```python
import scrapy


class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    description = scrapy.Field()
    star = scrapy.Field()
    creator = scrapy.Field()
    pubDate = scrapy.Field()
```
Write my_short.py:

```python
import scrapy
from douban.items import DoubanItem


class MyShortSpider(scrapy.Spider):
    name = "my_short"
    allowed_domains = ["douban.com"]
    start_urls = [
        'https://movie.douban.com/subject/26727273/comments?status=P',
    ]

    def parse(self, response):
        for comment in response.css('div#comments div.comment-item div.comment'):
            item = DoubanItem()
            # no trailing commas here -- they would wrap each value in a tuple
            item['description'] = comment.css('p::text').extract_first()
            item['star'] = comment.css('h3 span.comment-info span.rating::attr(title)').extract_first()
            item['creator'] = comment.css('h3 span.comment-info a::attr(href)').extract_first()
            item['pubDate'] = comment.css('h3 span.comment-info span.comment-time::text').extract_first()
            yield item
        # response.css() returns a (possibly empty) SelectorList, never None,
        # so test its truthiness rather than comparing against None
        next_page = response.css('div#paginator a.next::attr(href)')
        if next_page:
            next_urls = response.urljoin(next_page.extract_first())
            yield scrapy.Request(next_urls, callback=self.parse)
```
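With the spider in place, it can be run and its items exported through Scrapy's built-in feed export; the output filename below is arbitrary:

```
scrapy crawl my_short -o shorts.xml
```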
The crawl fails with HTTP 403
![img_ea6b1fd82e68a7b6700e1bb505230390.png](https://yqfile.alicdn.com/img_ea6b1fd82e68a7b6700e1bb505230390.png?x-oss-process=image/resize,w_1400/format,webp)
Possible workarounds:
- set the user-agent dynamically
- disable cookies
- add a download delay
- use the Google cache
- use proxy IPs
- use Crawlera
Trying Scrapy Cloud / Crawlera
Log in to Scrapy Cloud, create your own project, and obtain your API key
![img_6ec3c0da9ead6e8d7c445e76ade9b485.png](https://yqfile.alicdn.com/img_6ec3c0da9ead6e8d7c445e76ade9b485.png?x-oss-process=image/resize,w_1400/format,webp)
Install crawlera on your own server
![img_e82f3528e7e5b4236d4b120b36f0e8db.png](https://yqfile.alicdn.com/img_e82f3528e7e5b4236d4b120b36f0e8db.png?x-oss-process=image/resize,w_1400/format,webp)
Modify settings.py:
Locate the settings.py file
![img_6309c4ad360787ddcbe647dd051142dd.png](https://yqfile.alicdn.com/img_6309c4ad360787ddcbe647dd051142dd.png?x-oss-process=image/resize,w_1400/format,webp)
Add the Crawlera proxy middleware
![img_8897cabba8287a3ab1b170f00f3af0a1.png](https://yqfile.alicdn.com/img_8897cabba8287a3ab1b170f00f3af0a1.png?x-oss-process=image/resize,w_1400/format,webp)
Configure it and fill in your own key
The pass field can be left blank
![img_e59216de46941234bfe641cdd3b6f31f.png](https://yqfile.alicdn.com/img_e59216de46941234bfe641cdd3b6f31f.png?x-oss-process=image/resize,w_1400/format,webp)
If your spider keeps cookies enabled, add the following to the request headers
![img_0e848138725547e6e28376a8da794660.png](https://yqfile.alicdn.com/img_0e848138725547e6e28376a8da794660.png?x-oss-process=image/resize,w_1400/format,webp)
The 407 error looked like this:
![img_6b4debbf4b88cda9a0ee9871f1a2f61d.png](https://yqfile.alicdn.com/img_6b4debbf4b88cda9a0ee9871f1a2f61d.png?x-oss-process=image/resize,w_1400/format,webp)
Install shub
![img_43a3bac67dffd9833742a69ad19a065f.png](https://yqfile.alicdn.com/img_43a3bac67dffd9833742a69ad19a065f.png?x-oss-process=image/resize,w_1400/format,webp)
Log in to shub with your own key
![img_818c0e2e6343ed7d0adb8c9015b8432f.png](https://yqfile.alicdn.com/img_818c0e2e6343ed7d0adb8c9015b8432f.png?x-oss-process=image/resize,w_1400/format,webp)
Deploy the project
![img_af6fe7db61583275f865a183ea1a5636.png](https://yqfile.alicdn.com/img_af6fe7db61583275f865a183ea1a5636.png?x-oss-process=image/resize,w_1400/format,webp)
After running it, still a 407
![img_69b2851df393c893a2c46120bed440d1.png](https://yqfile.alicdn.com/img_69b2851df393c893a2c46120bed440d1.png?x-oss-process=image/resize,w_1400/format,webp)
What was supposed to be free now appears to be paid... abandoning this route.
![img_4aeb90b2e25eabb07ef200b6b440c7c6.png](https://yqfile.alicdn.com/img_4aeb90b2e25eabb07ef200b6b440c7c6.png?x-oss-process=image/resize,w_1400/format,webp)
Using proxy IPs
After running into a string of 403, 503, 111, and 400 errors and trying many proxy IPs, I finally scraped some data.
![img_01842b6f085d58c9f439b539206d11aa.png](https://yqfile.alicdn.com/img_01842b6f085d58c9f439b539206d11aa.png?x-oss-process=image/resize,w_1400/format,webp)
Before long, though, it died again...
![img_a95df51ebc77ac4118fea24da547e56d.png](https://yqfile.alicdn.com/img_a95df51ebc77ac4118fea24da547e56d.png?x-oss-process=image/resize,w_1400/format,webp)
In the end the data can be scraped; as long as you swap in a fresh proxy IP promptly, there is no problem.
The modified settings.py code
Whether to obey robots.txt
![img_1b8c5aefaf40b60f7b86c4a692c6dfd6.png](https://yqfile.alicdn.com/img_1b8c5aefaf40b60f7b86c4a692c6dfd6.png?x-oss-process=image/resize,w_1400/format,webp)
Set the download delay
![img_18498b43cbdcc82cff12598a9898fbfd.png](https://yqfile.alicdn.com/img_18498b43cbdcc82cff12598a9898fbfd.png?x-oss-process=image/resize,w_1400/format,webp)
Do not keep cookies
![img_2c4ddab0fd08426449f435b3397f292c.png](https://yqfile.alicdn.com/img_2c4ddab0fd08426449f435b3397f292c.png?x-oss-process=image/resize,w_1400/format,webp)
This registers the downloader middleware; 543 is an arbitrary priority, any value works as long as it does not collide with another middleware
![img_722aa714b0accd811afb69ad3c9233e0.png](https://yqfile.alicdn.com/img_722aa714b0accd811afb69ad3c9233e0.png?x-oss-process=image/resize,w_1400/format,webp)
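The screenshots above amount to only a few lines. Here is a minimal sketch of the relevant settings, with an assumed delay value and an assumed middleware path (match it to your own project layout):

```python
# settings.py -- anti-ban settings sketch (values are assumptions)
ROBOTSTXT_OBEY = False    # do not obey robots.txt
DOWNLOAD_DELAY = 2        # seconds between requests; tune to taste
COOKIES_ENABLED = False   # do not keep cookies

DOWNLOADER_MIDDLEWARES = {
    # 543 is an arbitrary, non-conflicting priority
    'douban.middlewares.ProxyMiddleware': 543,
}
```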
The user-agent header can be grabbed from the Chrome developer tools
![img_018e3e68bd6e720640f6adaf5a2f08f4.png](https://yqfile.alicdn.com/img_018e3e68bd6e720640f6adaf5a2f08f4.png?x-oss-process=image/resize,w_1400/format,webp)
![img_688db9009dd79ae2275d60dea15c1204.png](https://yqfile.alicdn.com/img_688db9009dd79ae2275d60dea15c1204.png?x-oss-process=image/resize,w_1400/format,webp)
Add the following code to middlewares.py
![img_5c573f3fc1e0be3d7db2de0d6a613e0f.png](https://yqfile.alicdn.com/img_5c573f3fc1e0be3d7db2de0d6a613e0f.png?x-oss-process=image/resize,w_1400/format,webp)
Here, the URL in quotes is the proxy IP. Free proxy lists:
- 国内高匿代理IP (domestic high-anonymity proxies)
- 西刺免费代理IP (Xici free proxies)
Even so, the crawl still dies partway through with a single proxy.
A better approach is to keep a pool of proxy IPs and switch to the next one when the current one fails, so the crawl can pick up where it left off; a sketch follows.
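A minimal sketch of such a middleware, assuming a hand-maintained proxy pool; the proxy addresses and the user-agent string below are placeholders, not working values:

```python
# middlewares.py -- rotate through a pool of proxies (sketch)
import random


class ProxyMiddleware(object):
    # placeholder addresses: fill these in from the proxy lists above
    PROXIES = [
        'http://127.0.0.1:8000',
        'http://127.0.0.1:8001',
    ]

    def process_request(self, request, spider):
        # pick a random proxy per request; prune dead entries promptly
        request.meta['proxy'] = random.choice(self.PROXIES)
        # fixed user-agent copied from Chrome DevTools (placeholder string here)
        request.headers.setdefault('User-Agent', 'Mozilla/5.0')
```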
5. Obtain the Data
One run scraped 180 records
![img_3a528015299fc5a92baa467a343422bb.png](https://yqfile.alicdn.com/img_3a528015299fc5a92baa467a343422bb.png?x-oss-process=image/resize,w_1400/format,webp)
A sample of the XML data
![img_d7983f28ba0a67298075ec4006762dad.png](https://yqfile.alicdn.com/img_d7983f28ba0a67298075ec4006762dad.png?x-oss-process=image/resize,w_1400/format,webp)
6. Analyze the Data with a Word-Segmentation Toolkit
7. Summary and Reflections
To be continued.
References:
- 如何让你的scrapy爬虫不再被ban (how to keep your Scrapy crawler from being banned)
- scrapy爬虫代理——利用crawlera神器,无需再寻找代理IP (Scrapy crawler proxies: with Crawlera you no longer need to hunt for proxy IPs)