Reading notes on OReilly.Web.Scraping.with.Python.2015.6 --- Crawl
1. The function calls itself, so the crawl forms a loop: each page leads to the next:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

pages = set()
def getLinks(pageUrl):
    global pages
    html = urlopen("http://en.wikipedia.org" + pageUrl)
    bsObj = BeautifulSoup(html, "lxml")
    try:
        print(bsObj.h1.get_text())
        # Find the element with id="mw-content-text", then look up the <p> tags
        # inside it; [0] selects the 0th one
        print(bsObj.find(id="mw-content-text").findAll("p")[0])
        # Inside id="ca-edit", find the <span>, then the <a> inside it,
        # then read the value of its href attribute
        print(bsObj.find(id="ca-edit").find("span").find("a").attrs['href'])
    except AttributeError:
        print("This page is missing something! No worries though!")
    for link in bsObj.findAll("a", href=re.compile("^(/wiki/)")):
        if 'href' in link.attrs:
            if link.attrs['href'] not in pages:
                # We have encountered a new page
                newPage = link.attrs['href']
                print(newPage)
                pages.add(newPage)
                getLinks(newPage)
getLinks("")
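The filter in the loop above keeps only internal article links, i.e. hrefs that begin with "/wiki/". A minimal sketch of what that regex accepts and rejects (the example hrefs here are made up for illustration):

```python
import re

# Same pattern as in getLinks: anchored at the start of the href
wiki_link = re.compile("^(/wiki/)")

hrefs = [
    "/wiki/Python_(programming_language)",  # internal article link
    "/w/index.php?title=Special:Search",    # internal, but not an article
    "https://example.com/wiki/External",    # absolute URL, not matched
]

internal = [h for h in hrefs if wiki_link.match(h)]
print(internal)  # only the first href matches
```

Because `match` anchors at the beginning of the string, an absolute URL that merely contains "/wiki/" somewhere in the middle is never followed.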
2. Processing a URL by splitting the address string on "/":
def splitAddress(address):
    addressParts = address.replace("http://", "").split("/")
    return addressParts

addr = splitAddress("https://hao.360.cn/?a1004")
print(addr)
Output:
runfile('C:/Users/user/Desktop/chensimin.py', wdir='C:/Users/user/Desktop')
['https:', '', 'hao.360.cn', '?a1004']  # there is nothing between the two slashes of "https://", so it appears as ''
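The 'https:' and '' parts survive because `replace` only strips "http://", not "https://". A small variant (my own sketch, not from the book) that strips either scheme before splitting:

```python
import re

def splitAddress(address):
    # Strip "http://" or "https://" from the front, then split on "/",
    # so the scheme never shows up in the parts list
    addressParts = re.sub("^https?://", "", address).split("/")
    return addressParts

print(splitAddress("https://hao.360.cn/?a1004"))
```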
def splitAddress(address):
    addressParts = address.replace("http://", "").split("/")
    return addressParts

addr = splitAddress("http://www.autohome.com.cn/wuhan/#pvareaid=100519")
print(addr)
Output:
runfile('C:/Users/user/Desktop/chensimin.py', wdir='C:/Users/user/Desktop')
['www.autohome.com.cn', 'wuhan', '#pvareaid=100519']
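For anything beyond quick experiments, the standard library's `urllib.parse.urlparse` splits a URL into scheme, host, path, query and fragment without manual string surgery, a sketch using the same address as above:

```python
from urllib.parse import urlparse

# urlparse returns a named tuple with the URL's components
parts = urlparse("http://www.autohome.com.cn/wuhan/#pvareaid=100519")
print(parts.netloc)    # host
print(parts.path)      # path
print(parts.fragment)  # fragment
```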
Original post: http://www.cnblogs.com/chensimin1990/p/7213933.html