Preface
The pyquery library is a Python take on jQuery: it lets you parse and query HTML documents with jQuery-style syntax, and it is both easy to use and fast at parsing. BeautifulSoup serves the same purpose. Compared with BeautifulSoup's thorough documentation, pyquery's docs are rather thin, but the library itself is pleasant to work with and often more concise.
Installation
For installing pyquery, see this article: http://www.zzvips.com/article/95860.html
This article covers:
- Initializing a PyQuery object
- Common CSS selectors
- Pseudo-class selectors
- Finding tags
- Extracting tag information
1. Initializing a PyQuery object
```python
html = """
<html lang="en">
  <head>
    简单好用的
    <title>PyQuery</title>
  </head>
  <body>
    <ul id="container">
      <li class="object-1">Python</li>
      <li class="object-2">大法</li>
      <li class="object-3">好</li>
    </ul>
  </body>
</html>
"""
```
This corresponds to BeautifulSoup's initialization step, which turns the HTML into a BeautifulSoup object:
```python
bsObj = BeautifulSoup(html, 'html.parser')
```
pyquery has its own initialization as well.
1.1 Initializing from a string
```python
from pyquery import PyQuery as pq

# initialize as a PyQuery object
doc = pq(html)
print(type(doc))
print(doc)
```
Output:

```
<class 'pyquery.pyquery.PyQuery'>
<html lang="en">
  <head>
    <title>PyQuery学习</title>
  </head>
  <body>
    <ul id="container">
      <li class="object-1"/>
      <li class="object-2"/>
      <li class="object-3"/>
    </ul>
  </body>
</html>
```
1.2 Initializing from an HTML file
```python
# the filename parameter is the path to an HTML file
test_html = pq(filename='test.html')
print(type(test_html))
print(test_html)
```
Output:

```
<class 'pyquery.pyquery.PyQuery'>
<html lang="en">
  <head>
    <title>PyQuery学习</title>
  </head>
  <body>
    <ul id="container">
      <li class="object-1"/>
      <li class="object-2"/>
      <li class="object-3"/>
    </ul>
  </body>
</html>
```
1.3 Initializing from a URL response
```python
response = pq(url='https://www.baidu.com')
print(type(response))
print(response)
```
Output:

```
<class 'pyquery.pyquery.PyQuery'>
<html><head><meta http-equiv="content-type" content="text/html;charset=utf-8"/>...
```

(The rest of the Baidu homepage HTML follows; in the original capture the Chinese text was garbled because the response bytes were decoded with the wrong charset.)
2. Common CSS selectors
Print the tag whose id is container:
```python
print(doc('#container'))
print(type(doc('#container')))
```
Output:

```
<ul id="container">
  <li class="object-1"/>
  <li class="object-2"/>
  <li class="object-3"/>
</ul>
<class 'pyquery.pyquery.PyQuery'>
```
Print the tag whose class is object-1:
```python
print(doc('.object-1'))
```
Output:

```
<li class="object-1"/>
```
Print the tag whose tag name is body:
```python
print(doc('body'))
```
Output:

```
<body>
  <ul id="container">
    <li class="object-1"/>
    <li class="object-2"/>
    <li class="object-3"/>
  </ul>
</body>
```
Combining several CSS selectors:
```python
print(doc('html #container'))
```
Output:

```
<ul id="container">
  <li class="object-1"/>
  <li class="object-2"/>
  <li class="object-3"/>
</ul>
```
3. Pseudo-class selectors
The :nth-child pseudo-class:
```python
# pseudo_doc is never defined in the original article; judging from the
# outputs below, it is a PyQuery object built from an HTML fragment with
# six <li> tags (object-1 ... object-6)

# print the second li tag
print(pseudo_doc('li:nth-child(2)'))
# print the first li tag
print(pseudo_doc('li:first-child'))
# print the last li tag
print(pseudo_doc('li:last-child'))
```
Output:

```
<li class="object-2">大法</li>
<li class="object-1">Python</li>
<li class="object-6">好玩</li>
```
The :contains pseudo-class:
```python
# find the li tags containing "Python"
print(pseudo_doc("li:contains('Python')"))
# find the li tags containing "好"
print(pseudo_doc("li:contains('好')"))
```
Output:

```
<li class="object-1">Python</li>
<li class="object-3">好</li>
<li class="object-4">好</li>
<li class="object-6">好玩</li>
```
4. Finding tags
Search a PyQuery object for tags matching given conditions, similar to BeautifulSoup's find method.
4.1 The find method
Print the tag whose id is container:
```python
print(doc.find('#container'))
```
Output:

```
<ul id="container">
  <li class="object-1"/>
  <li class="object-2"/>
  <li class="object-3"/>
</ul>
```

Print all li tags:

```python
print(doc.find('li'))
```
Output:

```
<li class="object-1"/>
<li class="object-2"/>
<li class="object-3"/>
```
4.2 Child tags: the children method
```python
# child tags of the tag with id="container"
container = doc.find('#container')
print(container.children())
```
Output:

```
<li class="object-1"/>
<li class="object-2"/>
<li class="object-3"/>
```
4.3 Parent tags: the parent method
```python
object_2 = doc.find('.object-2')
print(object_2.parent())
```
Output:

```
<ul id="container">
  <li class="object-1"/>
  <li class="object-2"/>
  <li class="object-3"/>
</ul>
```
4.4 Sibling tags: the siblings method
```python
object_2 = doc.find('.object-2')
print(object_2.siblings())
```
Output:

```
<li class="object-1"/>
<li class="object-3"/>
```
5. Extracting tag information
Once the target tag has been located, we usually want the text inside it or one of its attribute values, which means extracting text or attributes.
5.1 Extracting attribute values
.attr() takes an attribute name and returns that attribute's value:
```python
object_2 = doc.find('.object-2')
print(object_2.attr('class'))
```
Output:

```
object-2
```
5.2 Text inside a tag
.text():
```python
html_text = """
<html lang="en">
  <head>
    简单好用的
    <title>PyQuery</title>
  </head>
  <body>
    <ul id="container">
      Hello World!
      <li class="object-1">Python</li>
      <li class="object-2">大法</li>
      <li class="object-3">好</li>
    </ul>
  </body>
</html>
"""
docs = pq(html_text)
print(docs.text())
```
Output:

```
简单好用的 PyQuery Hello World! Python 大法 好
```
```python
object_1 = docs.find('.object-1')
print(object_1.text())
container = docs.find('#container')
print(container.text())
```
Output:

```
Python
Hello World! Python 大法 好
```
Tip: if you only want "Hello World!" and none of the other text, you can use the remove method to strip out the li tags first, then call text:
```python
container = docs.find('#container')
container.remove('li')
print(container.text())
```
Output:

```
Hello World!
```
Some less conventional uses of pyquery
Requesting a URL
Comparing pyquery with BeautifulSoup, one difference stands out: pyquery can issue a request to a URL itself. For example:
```python
from pyquery import PyQuery

PyQuery(url='https://www.baidu.com')
```
The opener parameter
The call above requests the Baidu homepage and turns the response into a PyQuery object. By default pyquery uses urllib for the request; if you want to use selenium or requests instead, you can supply a custom opener parameter.
The opener parameter tells pyquery which request mechanism to use to fetch the URL. Common choices are urllib, requests, and selenium. Here we define a selenium-based opener:
```python
from pyquery import PyQuery
from selenium.webdriver import PhantomJS

# fetch the URL with selenium
def selenium_opener(url):
    # PhantomJS is not on my PATH, so its path has to be given each time
    driver = PhantomJS(executable_path='path/to/phantomjs')
    driver.get(url)
    html = driver.page_source
    driver.quit()
    return html

# note: opener takes the function itself, without parentheses!
PyQuery(url='https://www.baidu.com/', opener=selenium_opener)
Now we can operate on the PyQuery object and extract whatever we need; see the previous post for details. The pyquery documentation is not very detailed, but since the library closely mirrors jQuery, the jQuery documentation is the best place to look when you want to use pyquery well.
cookies and headers
With requests, we usually pass headers, and when necessary cookies, so that the request looks more like it came from a real browser. pyquery supports the same parameters, so it too can masquerade as a browser:
```python
from pyquery import PyQuery

cookies = {'Cookie': 'your cookie'}
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
PyQuery(url='https://www.baidu.com/', headers=headers, cookies=cookies)
```
Giving selenium pyquery powers
Make the page fetched by the driver directly available as a PyQuery object, for easier data extraction:
```python
from pyquery import PyQuery
from selenium.webdriver import PhantomJS

class Browser(PhantomJS):
    @property
    def dom(self):
        """@property turns the method below it into an attribute of the
        class, so browser.dom reads like a plain attribute of browser."""
        return PyQuery(self.page_source)

browser = Browser(executable_path='path/to/phantomjs')
browser.get(url='https://www.baidu.com/')
print(type(browser.dom))
```
Output:

```
<class 'pyquery.pyquery.PyQuery'>
```
Summary
That is all for this article. I hope the content is of some reference value for your study or work; if you have questions, feel free to leave a comment. Thank you for supporting 服务器之家.
Original link: https://www.jianshu.com/p/770c0cdef481