For this example we will use Douban's movie Top 250. Scraping the Top 250 is really just a multi-page crawl, and there is nothing scary about it. Start by writing a scraper for the first page:
from bs4 import BeautifulSoup
import requests
import time
url = 'https://movie.douban.com/top250?start=0&filter='
wb_data = requests.get(url)
soup = BeautifulSoup(wb_data.text,'lxml')
imgs = soup.select('#content div.pic > a > img')
titles = soup.select('#content div.info > div.hd > a > span')
rates = soup.select('#content span.rating_num')
for img, title, rate in zip(imgs, titles, rates):
    data = {
        'img': img.get('src'),
        'title': title.get_text(),
        'rate': rate.get_text()
    }
    print(data)
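The `zip(...)` call above pairs the three selector result lists item by item, so each movie's image, title, and rating line up in one dict. A minimal illustration of that pairing with plain lists (the values here are made up, not real Douban data):

```python
# Stand-ins for the lists that soup.select() would return
imgs = ['a.jpg', 'b.jpg']
titles = ['Movie A', 'Movie B']
rates = ['9.7', '9.6']

# zip() walks the three lists in lockstep, one movie per iteration
records = [{'img': i, 'title': t, 'rate': r}
           for i, t, r in zip(imgs, titles, rates)]
print(records[0])  # {'img': 'a.jpg', 'title': 'Movie A', 'rate': '9.7'}
```

Note that `zip()` stops at the shortest list, so if one selector matches fewer elements than the others, the extra items are silently dropped.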
OK, with one page done, most of the work is finished; only small changes remain.
(Screenshots: the URLs of page 1 and page 2 of the Douban Top 250.)
The two screenshots above show the links of the first and second pages of the Douban Top 250. It is easy to see that only the number after `start` changes: it is the page offset, and each page loads 25 movies. Having found this pattern, we can build the full set of page URLs with a list comprehension, changing the `url` line above as follows:
urls = ['https://movie.douban.com/top250?start={}&filter='.format(str(i)) for i in range(0,250,25)]
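A quick sanity check of what this comprehension produces: `range(0, 250, 25)` yields the ten offsets 0, 25, ..., 225, one per page of 25 movies:

```python
# Same comprehension as above; str(i) is not strictly needed,
# since format() converts the integer itself
urls = ['https://movie.douban.com/top250?start={}&filter='.format(i)
        for i in range(0, 250, 25)]
print(len(urls))   # 10 pages
print(urls[0])     # https://movie.douban.com/top250?start=0&filter=
print(urls[-1])    # https://movie.douban.com/top250?start=225&filter=
```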
Then wrap the code in a function and call it in a `for` loop. The final code looks like this:
from bs4 import BeautifulSoup
import requests
import time
urls = ['https://movie.douban.com/top250?start={}&filter='.format(str(i)) for i in range(0,250,25)]
def get_attractions(url, data=None):
    wb_data = requests.get(url)
    time.sleep(2)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    imgs = soup.select('#content div.pic > a > img')
    titles = soup.select('#content div.info > div.hd > a > span')
    rates = soup.select('#content span.rating_num')
    if data is None:
        for img, title, rate in zip(imgs, titles, rates):
            data = {
                'img': img.get('src'),
                'title': title.get_text(),
                'rate': rate.get_text()
            }
            print(data)

for single_url in urls:
    get_attractions(single_url)
Here we import Python's `time` module and use its `sleep()` method to pause the calling thread, so the crawler waits two seconds between requests. This helps keep sites from banning our IP for requesting too frequently.
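The fixed `time.sleep(2)` after every request is the simplest form of throttling. The same idea can be packaged as a small helper that only sleeps as long as needed to keep a minimum interval between requests; this `Throttle` class is a hypothetical sketch, not part of the original code:

```python
import time

class Throttle:
    """Sleep just enough so that successive wait() calls are at
    least `interval` seconds apart (hypothetical helper)."""
    def __init__(self, interval=2.0):
        self.interval = interval
        self._last = 0.0  # timestamp of the previous wait()

    def wait(self):
        elapsed = time.time() - self._last
        if elapsed < self.interval:
            # Only sleep for the remaining part of the interval
            time.sleep(self.interval - elapsed)
        self._last = time.time()

throttle = Throttle(interval=2.0)
# Usage in the crawl loop would look like:
# for single_url in urls:
#     throttle.wait()              # pause before each request
#     get_attractions(single_url)
```

Compared with an unconditional `sleep(2)`, this does not add extra delay when parsing the previous page already took longer than the interval.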