Web Scraping Study Notes (1) -- A Summary of urllib

Basics:

1. URL (Uniform Resource Locator): the standard address of a resource on the Internet, commonly known as a "web address".

2. In Python 3.x the urllib2 library no longer exists; there is only the single urllib package.

3. URL encoding is also called percent-encoding.

4. What was urllib2 in Python 2.7 is essentially urllib.request in Python 3,

and robotparser became a module within the urllib package (urllib.robotparser).


According to the official documentation, urllib is a package for working with URLs:

It contains four modules:

1. urllib.request, for opening and reading URLs

   1.1 The urlopen function is the usual way to open a URL.

   1.2 Building an opener with the build_opener function is the advanced way to open pages.

2. urllib.error, which contains the exceptions raised while running urllib.request

3. urllib.parse, for parsing URLs

4. urllib.robotparser, for parsing robots.txt files
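As a quick sketch of how urllib.request and urllib.error work together; the target URL here is purely illustrative, and the guard against both exception types keeps the sketch safe to run even offline:

```python
from urllib import request, error

# A minimal sketch of opening a URL defensively; any real script
# should be prepared for either kind of failure.
try:
    with request.urlopen('http://example.com', timeout=10) as resp:
        body = resp.read()          # bytes of the response body
        print(resp.getcode())       # e.g. 200
except error.HTTPError as e:        # the server replied with an error status
    print('HTTP error:', e.code)
except error.URLError as e:         # network-level failure (DNS, refused, timeout, ...)
    print('URL error:', e.reason)
```

Note that HTTPError is a subclass of URLError, so it must be caught first.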



I. Commonly used functions in urllib.request

urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

1. The urllib.request module uses HTTP/1.1 and includes a Connection: close header in its HTTP requests.

2. The optional timeout parameter specifies a timeout in seconds for blocking operations such as the connection attempt; if the connection has not been established after timeout seconds, a timeout exception is raised. If it is not set, the global default timeout is used.

3. For HTTP and HTTPS URLs, this function returns an http.client.HTTPResponse object (slightly modified), which has the following methods:

- The object is file-like, so the usual file methods all work (read, readline, fileno, close)

- geturl(): returns the URL of the resource actually retrieved (useful for detecting redirects)

- getcode(): returns the HTTP status code of the response; 200 means the request succeeded, 404 means the requested resource was not found

- info(): returns an http.client.HTTPMessage object containing the headers returned by the remote server
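These response methods can be sketched as follows; example.com is only an illustrative target, and the URLError guard keeps the sketch from crashing when offline:

```python
from urllib import request, error

# Inspect the HTTPResponse object returned by urlopen.
try:
    with request.urlopen('http://example.com') as resp:
        print(resp.geturl())                 # URL actually retrieved (after redirects)
        print(resp.getcode())                # HTTP status code, e.g. 200
        print(resp.info()['Content-Type'])   # one header from the HTTPMessage
        first_line = resp.readline()         # file-like access to the body
except error.URLError as e:
    print('could not reach the server:', e.reason)
```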


II. Commonly used functions in urllib.parse:

1.urllib.parse.urlparse(url, scheme='', allow_fragments=True):

- Parses a URL and splits it into 6 component parts

- Returns a 6-element tuple (scheme, netloc, path, params, query, fragment) as a urllib.parse.ParseResult object,

which also exposes each of the 6 components as an attribute

e.g.:

>>> from urllib import parse
>>> url = r'https://docs.python.org/3.5/search.html?q=parse&check_keywords=yes&area=default'
>>> parseResult = parse.urlparse(url)
>>> parseResult  # the address parsed into components
ParseResult(scheme='https', netloc='docs.python.org', path='/3.5/search.html', params='', query='q=parse&check_keywords=yes&area=default', fragment='')
>>> parseResult.query
'q=parse&check_keywords=yes&area=default'

The output makes the meaning of each component clear.


2.urllib.parse.urlunparse(parts)

- The inverse of urlparse

- Takes a 6-element tuple as input and returns the complete URL string
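A short sketch of the round trip: for a well-formed URL, urlunparse(urlparse(url)) reproduces the original string.

```python
from urllib import parse

url = 'https://docs.python.org/3.5/search.html?q=parse&area=default'
parts = parse.urlparse(url)        # 6-tuple: scheme, netloc, path, params, query, fragment
rebuilt = parse.urlunparse(parts)  # the inverse operation
assert rebuilt == url              # the round trip restores the original URL
print(rebuilt)
```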


3.urllib.parse.urljoin

urljoin(base, url, allow_fragments=True)

        Join a base URL and a possibly relative URL to form an absolute
        interpretation of the latter.

- base is the base URL

- The (possibly relative) address in the second argument is combined with base to produce an absolute URL


e.g.:

>>> scheme = 'http'
>>> netloc = 'www.python.org'
>>> path = 'lib/module-urlparse.html'
>>> modlist = ('urllib', 'urllib2', 'httplib')
>>> unparsed_url = parse.urlunparse((scheme, netloc, path, '', '', ''))
>>> unparsed_url
'http://www.python.org/lib/module-urlparse.html'
>>> for mod in modlist:
...     url = parse.urljoin(unparsed_url, 'module-%s.html' % mod)
...     print(url)
...
http://www.python.org/lib/module-urllib.html
http://www.python.org/lib/module-urllib2.html
http://www.python.org/lib/module-httplib.html
>>>

Note that urljoin replaces everything after the last "/" in the base URL's path.


4.urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace'):

- Parses a query string. (Parse a query given as a string argument)

- The qs parameter is a percent-encoded query string (as sent in a GET request).

- Returns a dict mapping each parameter name in the query to a list of its values

e.g.:

Continuing from the urlparse example above,

>>> param_dict = parse.parse_qs(parseResult.query)
>>> param_dict
{'area': ['default'], 'check_keywords': ['yes'], 'q': ['parse']}
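A closely related function, parse_qsl, returns the same parameters as an ordered list of (name, value) pairs rather than a dict, which preserves the order they appear in the query string:

```python
from urllib import parse

qs = 'q=parse&check_keywords=yes&area=default'
print(parse.parse_qs(qs))   # dict of lists, e.g. {'q': ['parse'], ...}
print(parse.parse_qsl(qs))  # ordered list of pairs:
# [('q', 'parse'), ('check_keywords', 'yes'), ('area', 'default')]
```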


5.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=quote_plus)

# merges the query parameters into a single percent-encoded query string

>>> from urllib import parse
>>> query = {'name': 'walker', 'age': 99}
>>> parse.urlencode(query)
'name=walker&age=99'
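When a value is itself a sequence, pass doseq=True so that each element becomes a separate parameter (otherwise the whole sequence is quoted as one string):

```python
from urllib import parse

query = {'tag': ['python', 'urllib'], 'page': 2}
print(parse.urlencode(query, doseq=True))
# tag=python&tag=urllib&page=2
```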


Summary:

Functions 1 and 2 operate on the URL as a whole, splitting it apart and putting it back together.

Functions 4 and 5 operate on the query component of a URL.


6.urllib.parse.quote(string, safe='/', encoding=None, errors=None)

# percent-encodes a string

1. If a URL contains Chinese characters, percent-encode them with quote() before using the URL; by default the string is encoded to UTF-8 bytes first (pass encoding='gbk' for sites that expect GBK). Only then can the URL be opened normally; otherwise the encoding goes wrong.

2. Likewise, to recover a Chinese field from a URL, unquote() it first; the percent-escapes are decoded as UTF-8 by default, or as GBK via encoding='gbk', depending on the site.

e.g.:

>>> from urllib import parse
>>> parse.quote('a&b/c')       # the slash is not encoded
'a%26b/c'
>>> parse.quote_plus('a&b/c')  # the slash is encoded as well
'a%26b%2Fc'

7.unquote(string, encoding='utf-8', errors='replace')

>>> parse.unquote('1+2')
'1+2'
>>> parse.unquote_plus('1+2')
'1 2'
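For example, percent-encoding a Chinese string and recovering it (UTF-8 is the default; pass encoding='gbk' for GBK sites):

```python
from urllib import parse

word = '中文'
encoded = parse.quote(word)              # encoded as UTF-8 by default
print(encoded)                           # %E4%B8%AD%E6%96%87
assert parse.unquote(encoded) == word    # round trip back to the original

gbk_encoded = parse.quote(word, encoding='gbk')
print(gbk_encoded)                       # %D6%D0%CE%C4
assert parse.unquote(gbk_encoded, encoding='gbk') == word
```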


III. urllib.robotparser

Parses a robots.txt file to check whether a given crawler is allowed to fetch a URL

e.g.:

>>> from urllib import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url('http://example.webscraping.com/robots.txt')  # point at the robots.txt file
>>> rp.read()  # fetch and parse it
>>> url = 'http://example.webscraping.com'
>>> user_agent = 'GoodCrawler'
>>> rp.can_fetch(user_agent, url)
True


For details, see the function documentation below (the output of help(urllib.parse)):

FUNCTIONS
    parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace')
        Parse a query given as a string argument.

        Arguments:

        qs: percent-encoded query string to be parsed

        keep_blank_values: flag indicating whether blank values in
            percent-encoded queries should be treated as blank strings.
            A true value indicates that blanks should be retained as
            blank strings.  The default false value indicates that
            blank values are to be ignored and treated as if they were
            not included.

        strict_parsing: flag indicating what to do with parsing errors.
            If false (the default), errors are silently ignored.
            If true, errors raise a ValueError exception.

        encoding and errors: specify how to decode percent-encoded sequences
            into Unicode characters, as accepted by the bytes.decode() method.

    parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace')
        Parse a query given as a string argument.

        Arguments:

        qs: percent-encoded query string to be parsed

        keep_blank_values: flag indicating whether blank values in
            percent-encoded queries should be treated as blank strings.  A
            true value indicates that blanks should be retained as blank
            strings.  The default false value indicates that blank values
            are to be ignored and treated as if they were not included.

        strict_parsing: flag indicating what to do with parsing errors. If
            false (the default), errors are silently ignored. If true,
            errors raise a ValueError exception.

        encoding and errors: specify how to decode percent-encoded sequences
            into Unicode characters, as accepted by the bytes.decode() method.

        Returns a list, as G-d intended.

    quote(string, safe='/', encoding=None, errors=None)
        quote('abc def') -> 'abc%20def'

        Each part of a URL, e.g. the path info, the query, etc., has a
        different set of reserved characters that must be quoted.

        RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax lists
        the following reserved characters.

        reserved    = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |
                      "$" | ","

        Each of these characters is reserved in some component of a URL,
        but not necessarily in all of them.

        By default, the quote function is intended for quoting the path
        section of a URL.  Thus, it will not encode '/'.  This character
        is reserved, but in typical usage the quote function is being
        called on a path where the existing slash characters are used as
        reserved characters.

        string and safe may be either str or bytes objects. encoding and errors
        must not be specified if string is a bytes object.

        The optional encoding and errors parameters specify how to deal with
        non-ASCII characters, as accepted by the str.encode method.
        By default, encoding='utf-8' (characters are encoded with UTF-8), and
        errors='strict' (unsupported characters raise a UnicodeEncodeError).

    quote_from_bytes(bs, safe='/')
        Like quote(), but accepts a bytes object rather than a str, and does
        not perform string-to-bytes encoding.  It always returns an ASCII string.
        quote_from_bytes(b'abc def?') -> 'abc%20def%3f'

    quote_plus(string, safe='', encoding=None, errors=None)
        Like quote(), but also replace ' ' with '+', as required for quoting
        HTML form values. Plus signs in the original string are escaped unless
        they are included in safe. It also does not have safe default to '/'.

    unquote(string, encoding='utf-8', errors='replace')
        Replace %xx escapes by their single-character equivalent. The optional
        encoding and errors parameters specify how to decode percent-encoded
        sequences into Unicode characters, as accepted by the bytes.decode()
        method.
        By default, percent-encoded sequences are decoded with UTF-8, and invalid
        sequences are replaced by a placeholder character.

        unquote('abc%20def') -> 'abc def'.

    unquote_plus(string, encoding='utf-8', errors='replace')
        Like unquote(), but also replace plus signs by spaces, as required for
        unquoting HTML form values.

        unquote_plus('%7e/abc+def') -> '~/abc def'

    unquote_to_bytes(string)
        unquote_to_bytes('abc%20def') -> b'abc def'.

    urldefrag(url)
        Removes any existing fragment from URL.

        Returns a tuple of the defragmented URL and the fragment.  If
        the URL contained no fragments, the second element is the
        empty string.

    urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=<function quote_plus>)
        Encode a dict or sequence of two-element tuples into a URL query string.

        If any values in the query arg are sequences and doseq is true, each
        sequence element is converted to a separate parameter.

        If the query arg is a sequence of two-element tuples, the order of the
        parameters in the output will match the order of parameters in the
        input.

        The components of a query arg may each be either a string or a bytes type.

        The safe, encoding, and errors parameters are passed down to the function
        specified by quote_via (encoding and errors only if a component is a str).

    urljoin(base, url, allow_fragments=True)
        Join a base URL and a possibly relative URL to form an absolute
        interpretation of the latter.

    urlparse(url, scheme='', allow_fragments=True)
        Parse a URL into 6 components:
        <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
        Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
        Note that we don't break the components up in smaller bits
        (e.g. netloc is a single string) and we don't expand % escapes.

    urlsplit(url, scheme='', allow_fragments=True)
        Parse a URL into 5 components:
        <scheme>://<netloc>/<path>?<query>#<fragment>
        Return a 5-tuple: (scheme, netloc, path, query, fragment).
        Note that we don't break the components up in smaller bits
        (e.g. netloc is a single string) and we don't expand % escapes.

    urlunparse(components)
        Put a parsed URL back together again.  This may result in a
        slightly different, but equivalent URL, if the URL that was parsed
        originally had redundant delimiters, e.g. a ? with an empty query
        (the draft states that these are equivalent).

    urlunsplit(components)
        Combine the elements of a tuple as returned by urlsplit() into a
        complete URL as a string. The data argument can be any five-item iterable.
        This may result in a slightly different, but equivalent URL, if the URL that
        was parsed originally had unnecessary delimiters (for example, a ? with an
        empty query; the RFC states that these are equivalent).

DATA
    __all__ = ['urlparse', 'urlunparse', 'urljoin', 'urldefrag', 'urlsplit...

FILE
    d:\python3\lib\urllib\parse.py
