BS4

BeautifulSoup is a Python library for extracting data from HTML or XML
documents. It parses a document into a tree structure (like the DOM), in which
every node is one of the following four kinds of Python objects:

  1. BeautifulSoup <class 'bs4.BeautifulSoup'>
  2. Tag <class 'bs4.element.Tag'>
  3. NavigableString <class 'bs4.element.NavigableString'>
  4. Comment <class 'bs4.element.Comment'>

The relationships among these four classes, viewed as sets (a conceptual
approximation, not a strict class hierarchy):

  • BeautifulSoup is the universal set (created by passing the document to
    the constructor), and contains Tag as a subset
  • Tag contains NavigableString as a subset
  • Comment is a special kind of NavigableString
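The set relationships above can be checked directly with isinstance on a tiny inline document (a minimal sketch; the markup and variable names are made up for illustration):

```python
from bs4 import BeautifulSoup
from bs4.element import Tag, NavigableString, Comment

# a tiny document containing a tag, a string, and a comment
soup = BeautifulSoup("<p>hello<!--hidden--></p>", "html.parser")

p = soup.p
text, comment = p.contents  # the string node and the comment node inside <p>

print(isinstance(soup, Tag))                 # True: BeautifulSoup subclasses Tag
print(isinstance(p, Tag))                    # True
print(isinstance(text, NavigableString))     # True
print(isinstance(comment, Comment))          # True
print(isinstance(comment, NavigableString))  # True: Comment subclasses NavigableString
```

The last line is the "special set" relationship: every Comment is also a NavigableString.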

Usage

BeautifulSoup's first argument is the document; the second argument selects the document parser.

from bs4 import BeautifulSoup
import requests
import re

url = 'http://m.kdslife.com/club/'
# get the whole HTTP response
response = requests.get(url)
# args[0] is the HTML document, args[1] selects the lxml parser; returns a BeautifulSoup object
soup = BeautifulSoup(response.text, 'lxml')
print(soup.name)
# [document]
print(type(soup))
# <class 'bs4.BeautifulSoup'>
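The same calls can be tried without a network connection by passing an inline document instead of a live response (a sketch; the markup is made up, and the stdlib 'html.parser' is used so nothing extra needs to be installed):

```python
from bs4 import BeautifulSoup

# inline document instead of a live HTTP response, so this runs offline;
# 'html.parser' ships with Python, so no lxml install is needed
html = "<html><head><title>KDS Life</title></head><body></body></html>"
soup = BeautifulSoup(html, "html.parser")

print(soup.name)   # [document]
print(type(soup))  # <class 'bs4.BeautifulSoup'>
```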

Sample codes for Tag objects

# BeautifulSoup --> Tag
# get the Tag object (title)
res = soup.title
print(res)
# <title>KDS Life</title>

res = soup.title.name
print(res)
# title

# attributes of a Tag object
res = soup.section
print(type(res))
# <class 'bs4.element.Tag'>

print(res['class'])
# ['forum-head-hot', 'clearfix']

# all the attributes of the section Tag object, returned as a dict
print(res.attrs)
# {'class': ['forum-head-hot', 'clearfix']}
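Note that subscript access raises KeyError when the attribute is missing; Tag.get() is the safe variant, and multi-valued attributes like class come back as lists (a sketch on a made-up inline fragment):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<section class="forum-head-hot clearfix"></section>', "html.parser")
tag = soup.section

print(tag["class"])   # ['forum-head-hot', 'clearfix'] -- multi-valued attribute -> list
print(tag.get("id"))  # None -- .get() avoids KeyError for a missing attribute
print(tag.attrs)      # {'class': ['forum-head-hot', 'clearfix']}
```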

Sample codes for NavigableString object

# a NavigableString object represents the string inside a Tag object
res = soup.title
print(res.string)
# KDS Life
print(type(res.string))
# <class 'bs4.element.NavigableString'>
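A NavigableString keeps a reference back to the whole parse tree; wrapping it in str() is the usual way to keep just the text (sketch on a made-up inline fragment):

```python
from bs4 import BeautifulSoup
from bs4.element import NavigableString

soup = BeautifulSoup("<title>KDS Life</title>", "html.parser")
s = soup.title.string

print(type(s))   # <class 'bs4.element.NavigableString'>
plain = str(s)   # detach into an ordinary Python str
print(plain)     # KDS Life
```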

Sample codes for Comment object

# Comment is a special NavigableString object
markup = "<b><!--Hey, buddy. Want to buy a used parser?--></b>"
soup = BeautifulSoup(markup, 'lxml')
comment = soup.b.string
print(type(comment))
# <class 'bs4.element.Comment'>

BS4 Parser

When no parser is named, BS4 picks one automatically in order of priority: 'lxml' -> 'html5lib' -> 'html.parser'.
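Naming the parser explicitly keeps results reproducible across machines, since different parsers repair broken markup differently (a sketch using the always-available stdlib parser; the markup is made up):

```python
from bs4 import BeautifulSoup

# 'html.parser' ships with Python, so it always works; 'lxml' and 'html5lib'
# must be installed separately, but are faster / more lenient respectively.
soup = BeautifulSoup("<p>hi", "html.parser")  # unclosed tag: the parser repairs it
print(soup.p.get_text())  # hi
```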


常用Tag对象方法

find_all()

find_all(name, attrs, recursive, text, **kwargs) — rather than explain each parameter, see the code:

# filters return a list of matches
# returns [] if nothing matches
title = soup.find_all('title')
print(title)
# [<title>Google</title>]

res = soup.find_all('div', 'topAd')
print(res)

# find all elements whose id is 'topAd'
res = soup.find_all(id='topAd')
print(res)
# [<div id="topAd">...</div>]

# find all 'img' elements whose 'src' attribute matches the given pattern
res = soup.find_all('img', src=re.compile(r'^http://club-img', re.I))
print(res)
# [<img src="http://club-img...">, ...]
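Beyond tag names and keyword filters, find_all also accepts a limit and an arbitrary function over each Tag (a sketch on a made-up inline document):

```python
import re
from bs4 import BeautifulSoup

html = '<div><a href="http://a.example">one</a><a>two</a><a href="/b">three</a></div>'
soup = BeautifulSoup(html, "html.parser")

# limit= stops searching after N matches
print(soup.find_all("a", limit=2))

# a function filter receives each Tag and returns True/False
links = soup.find_all(lambda tag: tag.name == "a" and tag.has_attr("href"))
print(len(links))  # 2

# string/text filter matches against the tag's text content
print(soup.find_all("a", string=re.compile("^t")))  # text starting with 't'
```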

select()

# CSS selectors
# select the element whose id is 'wrapperto'
res = soup.select('#wrapperto')
print(res)
# [<div class="swiper-wrapper clearfix" id="wrapperto"></div>]

# select the 'img' tags that have a 'src' attribute
res = soup.select('img[src]')
print(res)
# [..., <img src="http://club-img.kdslife.com/attach/1k0/gs/a/o41gty-1coa.png@0o_1l_600w_90q.src"/>]

# select the 'img' tags whose 'src' attribute equals the given URL
res = soup.select('img[src="http://icon.pch-img.net/kds/club_m/club/icon/user1.png"]')
print(res)
# [<img src="http://icon.pch-img.net/kds/club_m/club/icon/user1.png"/>]
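select() supports the usual CSS combinators as well, including class selectors and descendant selectors, and select_one() returns just the first match (a sketch on a made-up inline fragment):

```python
from bs4 import BeautifulSoup

html = '<div class="swiper-wrapper clearfix"><ul><li class="hot">a</li><li>b</li></ul></div>'
soup = BeautifulSoup(html, "html.parser")

print(soup.select("li.hot"))             # class selector
print(soup.select("div.clearfix li"))    # descendant combinator: all <li> under the div
print(soup.select_one("li").get_text())  # first match only (or None if nothing matches)
```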

Other

# get_text()
markup = '<a href="http://example.com/">\n a link to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup, 'lxml')
res = soup.get_text()
print(res)
#  a link to example.com

res = soup.i.get_text()
print(res)
# example.com

# .stripped_strings
res = soup.stripped_strings
print(list(res))
# ['a link to', 'example.com']
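get_text() also takes a separator and a strip flag, which often replaces a manual loop over stripped_strings (sketch, reusing the same markup as above):

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">\n a link to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup, "html.parser")

# strip=True trims whitespace around each string and drops empty ones;
# the first argument joins the remaining pieces
print(soup.get_text(" ", strip=True))  # a link to example.com
```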

Finally, a simple KDS image spider:

A KDS image spider


Note

  • BeautifulSoup detects the document encoding and converts it to Unicode
    automatically; the soup.original_encoding attribute holds the detected
    encoding.
  • Input is converted to Unicode; output is encoded as UTF-8.
  • BeautifulSoup itself does not evaluate XPath; to use XPath expressions,
    combine it with lxml.
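The detected encoding is available right after parsing bytes input, and from_encoding skips detection when the charset is already known (a sketch; the sample text is made up):

```python
from bs4 import BeautifulSoup

raw = "<p>编码检测</p>".encode("utf-8")  # bytes input triggers encoding detection
soup = BeautifulSoup(raw, "html.parser")

print(soup.original_encoding)  # the guessed encoding, e.g. 'utf-8'
print(soup.p.get_text())       # 编码检测

# when the server's charset is known, override the guess instead:
soup2 = BeautifulSoup(raw, "html.parser", from_encoding="utf-8")
print(soup2.p.get_text())      # 编码检测
```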
