Selenium makes it easy to fetch AJAX-rendered page content and to simulate user actions such as clicking and typing text, which is very useful when crawling pages with Scrapy.
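As a minimal sketch of those Selenium operations (the URL and element selectors are placeholders, and PhantomJS is assumed to be on the PATH):

# Sketch only: the page URL and the "q"/"submit" selectors are hypothetical.
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get("http://example.com/search")      # page whose content is filled in by AJAX

box = driver.find_element_by_name("q")       # hypothetical input field
box.send_keys("scrapy")                      # simulate typing
driver.find_element_by_id("submit").click()  # simulate a click

html = driver.page_source                    # the DOM after the AJAX calls have run
driver.quit()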
There are plenty of articles online about integrating Selenium into Scrapy, but very few of them manage to keep the crawl asynchronous. The code below rewrites Scrapy's download handler, integrating Selenium while keeping downloads asynchronous.
To use it, register PhantomJSDownloadHandler under the DOWNLOAD_HANDLERS setting in your project's configuration file, as sketched below.
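A minimal sketch of that configuration, assuming the handler is saved in a module named myproject.handlers (a placeholder; adjust the path to wherever the class actually lives):

# settings.py -- register the handler for both http and https;
# the module path 'myproject.handlers' is an assumption.
DOWNLOAD_HANDLERS = {
    'http': 'myproject.handlers.PhantomJSDownloadHandler',
    'https': 'myproject.handlers.PhantomJSDownloadHandler',
}

# optional knobs read by the handler's __init__
PHANTOMJS_MAXRUN = 10    # size of the PhantomJS instance pool
PHANTOMJS_OPTIONS = {}   # keyword arguments passed to webdriver.PhantomJS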
# encoding: utf-8
from __future__ import unicode_literals

from scrapy import signals
from scrapy.signalmanager import SignalManager
from scrapy.responsetypes import responsetypes
from scrapy.xlib.pydispatch import dispatcher  # available in older Scrapy releases
from selenium import webdriver
from six.moves import queue
from twisted.internet import defer, threads
from twisted.python.failure import Failure


class PhantomJSDownloadHandler(object):

    def __init__(self, settings):
        self.options = settings.get('PHANTOMJS_OPTIONS', {})

        max_run = settings.get('PHANTOMJS_MAXRUN', 10)
        # the semaphore caps concurrent downloads; the queue holds idle drivers
        self.sem = defer.DeferredSemaphore(max_run)
        self.queue = queue.LifoQueue(max_run)

        SignalManager(dispatcher.Any).connect(self._close, signal=signals.spider_closed)

    def download_request(self, request, spider):
        """use semaphore to guard a phantomjs pool"""
        return self.sem.run(self._wait_request, request, spider)

    def _wait_request(self, request, spider):
        # reuse an idle driver from the pool, or spawn a new one
        try:
            driver = self.queue.get_nowait()
        except queue.Empty:
            driver = webdriver.PhantomJS(**self.options)

        driver.get(request.url)
        # ghostdriver won't respond to a window switch until the page has
        # loaded, so deferring it to a thread blocks until the load completes
        dfd = threads.deferToThread(
            lambda: driver.switch_to.window(driver.current_window_handle))
        dfd.addCallback(self._response, driver, spider)
        return dfd

    def _response(self, _, driver, spider):
        body = driver.execute_script("return document.documentElement.innerHTML")
        if body.startswith("<head></head>"):  # cannot access response headers in Selenium
            body = driver.execute_script("return document.documentElement.textContent")
        url = driver.current_url
        respcls = responsetypes.from_args(url=url, body=body[:100].encode('utf8'))
        resp = respcls(url=url, body=body, encoding="utf-8")

        response_failed = getattr(spider, "response_failed", None)
        if response_failed and callable(response_failed) and response_failed(resp, driver):
            driver.close()
            return defer.fail(Failure())
        else:
            # return the driver to the pool for the next request
            self.queue.put(driver)
            return defer.succeed(resp)

    def _close(self):
        while not self.queue.empty():
            driver = self.queue.get_nowait()
            driver.close()
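Note that the handler looks for an optional response_failed(response, driver) hook on the spider: when it returns True, the PhantomJS instance is discarded and the request fails instead of the driver going back into the pool. A minimal sketch of a spider using that hook (the spider name, URL, and failure condition are all placeholders):

# Hypothetical spider illustrating the response_failed hook.
from scrapy import Spider

class AjaxSpider(Spider):
    name = 'ajax_example'
    start_urls = ['http://example.com/']

    def response_failed(self, response, driver):
        # return True to discard this PhantomJS instance and fail the request;
        # the condition here is a placeholder
        return 'login required' in response.text

    def parse(self, response):
        # response here carries the AJAX-rendered DOM captured by PhantomJS
        self.logger.info('got %d bytes from %s', len(response.body), response.url)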
That is everything in this post on implementing asynchronous crawling with PhantomJS in Scrapy. We hope it serves as a useful reference, and we hope you will continue to support 服务器之家.
Original link: https://blog.csdn.net/whueratsjtuer/article/details/79198863