
Python Crawler Example: A Multithreaded Scraper for Qiushibaike Jokes



Learning to write crawlers is endless fun!
Today's example crawls funny jokes from Qiushibaike (糗事百科).
We will scrape Qiushibaike jokes; assume the page URL is: http://www.qiushibaike.com/8hr/page/1

I. Scraping requirements:

  1. Use requests to fetch the pages, and XPath / re to extract the data (a small XPath illustration follows this list).
  2. For each post, extract the user's avatar link, username, joke content, vote count, and comment count.
  3. Save the results to a JSON file.
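
Before the full script, here is a tiny, self-contained illustration of the lxml XPath pattern these requirements call for. This is a sketch for demonstration only: the HTML snippet, the attribute values, and the id below are made up, not taken from the real site.

# A minimal XPath demonstration (the HTML snippet is made up).
from lxml import etree

snippet = '''
<div id="qiushi_tag_100">
  <a title="someuser"><img src="http://example.com/avatar.jpg"/></a>
  <div class="content"><span> a short joke </span></div>
</div>
'''
html = etree.HTML(snippet)
# find each post container, then extract fields relative to it
site = html.xpath('//div[contains(@id,"qiushi_tag")]')[0]
print site.xpath('./a/@title')[0]       # someuser
print site.xpath('.//img/@src')[0]      # the avatar URL
print site.xpath('.//div[@class="content"]/span')[0].text.strip()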

II. The single-threaded version first

Reference code:

#qiushibaike.py
#import urllib
#import re
#import chardet
import requests
from lxml import etree

page = 1
url = 'http://www.qiushibaike.com/8hr/page/' + str(page)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    'Accept-Language': 'zh-CN,zh;q=0.8'}
try:
    response = requests.get(url, headers=headers)
    resHtml = response.text
    html = etree.HTML(resHtml)
    result = html.xpath('//div[contains(@id,"qiushi_tag")]')
    for site in result:
        item = {}
        imgUrl = site.xpath('./div/a/img/@src')[0].encode('utf-8')
        username = site.xpath('./div/a/@title')[0].encode('utf-8')
        #username = site.xpath('.//h2')[0].text
        content = site.xpath('.//div[@class="content"]/span')[0].text.strip().encode('utf-8')
        # vote count
        vote = site.xpath('.//i')[0].text
        #print site.xpath('.//*[@class="number"]')[0].text
        # comment count
        comments = site.xpath('.//i')[1].text
        print imgUrl, username, content, vote, comments
except Exception, e:
    print e

Demo output: (screenshot omitted)
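
Note that the script above only prints each record. To actually satisfy requirement 3 (saving to a JSON file), one minimal approach is the sketch below; the field values are placeholders standing in for the variables extracted in the loop:

# A minimal sketch of requirement 3: append each record as one JSON line.
import json

# placeholder values for imgUrl, username, content, vote, comments
item = {'imgUrl': u'http://example.com/avatar.jpg', 'username': u'someuser',
        'content': u'a short joke', 'vote': u'10', 'comments': u'2'}
with open('qiushibaike.json', 'a') as f:
    # one JSON object per line, UTF-8 encoded (Python 2)
    f.write(json.dumps(item, ensure_ascii=False).encode('utf-8') + '\n')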

 

III. The multithreaded version

Queue (the queue object):
Queue is a Python standard-library module; in Python 2 it is pulled in with "import Queue". A queue is the most common way to exchange data between threads. A note on multithreading in Python: locking shared resources matters, because Python's built-in list, dict, and so on are not thread-safe. Queue, however, is thread-safe, so where it fits the use case, a queue is the recommended choice.

    1. Initialization: class Queue.Queue(maxsize), a FIFO (first in, first out) queue
    2. Commonly used methods:
Queue.qsize() returns the size of the queue
Queue.empty() returns True if the queue is empty, otherwise False
Queue.full() returns True if the queue is full, otherwise False
Queue.full corresponds to the maxsize setting
Queue.get([block[, timeout]]) takes an item from the queue; timeout is how long to wait
  1. Create a "queue" object: import Queue; myqueue = Queue.Queue(maxsize=10)
  2. Put a value into the queue: myqueue.put(10)
  3. Take a value out of the queue: myqueue.get()
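
To make the pattern concrete, here is a minimal, runnable producer/consumer sketch in the same Python 2 style as the code below. The names q and worker are just placeholders, and the None sentinel is one common convention for telling a consumer to stop:

# A minimal producer/consumer sketch using the thread-safe Queue.
from Queue import Queue
import threading

q = Queue(maxsize=10)

def worker():
    while True:
        item = q.get()      # blocks until an item is available
        if item is None:    # a None sentinel tells the worker to stop
            break
        print 'processing', item

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    q.put(i)                # put() and get() are thread-safe; no lock needed
q.put(None)                 # send the stop sentinel
t.join()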

 

Multithreading flow diagram: (image omitted)


 

Reference code:

# -*- coding:utf-8 -*-
import requests
from lxml import etree
from Queue import Queue
import threading
import time
import json


class thread_crawl(threading.Thread):
    '''
    Crawler thread: takes page numbers off a queue, downloads each page,
    and pushes the raw HTML onto data_queue.
    '''
    def __init__(self, threadID, q):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.q = q

    def run(self):
        print "Starting " + self.threadID
        self.qiushi_spider()
        print "Exiting ", self.threadID

    def qiushi_spider(self):  # page = 1
        while True:
            if self.q.empty():
                break
            else:
                page = self.q.get()
                print 'qiushi_spider=', self.threadID, ',page=', str(page)
                url = 'http://www.qiushibaike.com/hot/page/' + str(page) + '/'
                headers = {
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
                    'Accept-Language': 'zh-CN,zh;q=0.8'}
                # retry a few times, then give up, to avoid an infinite loop
                timeout = 4
                success = False
                while timeout > 0:
                    timeout -= 1
                    try:
                        content = requests.get(url, headers=headers)
                        data_queue.put(content.text)
                        success = True
                        break
                    except Exception, e:
                        print 'qiushi_spider', e
                if not success:
                    print 'timeout', url


class Thread_Parser(threading.Thread):
    '''
    Parser thread: takes raw HTML off data_queue, extracts the fields,
    and writes each record to the shared output file.
    '''
    def __init__(self, threadID, queue, lock, f):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.queue = queue
        self.lock = lock
        self.f = f

    def run(self):
        print 'starting ', self.threadID
        global total, exitFlag_Parser
        while not exitFlag_Parser:
            try:
                '''
                The queue's get() method removes and returns an item from
                the head of the queue. The optional block argument defaults
                to True.
                If the queue is empty and block is True, get() suspends the
                calling thread until an item becomes available.
                If the queue is empty and block is False, get() raises the
                Empty exception.
                '''
                item = self.queue.get(False)
                if not item:
                    pass
                self.parse_data(item)
                self.queue.task_done()
                print 'Thread_Parser=', self.threadID, ',total=', total
            except:
                pass
        print 'Exiting ', self.threadID

    def parse_data(self, item):
        '''
        Parse one page.
        :param item: the page's HTML
        :return:
        '''
        global total
        try:
            html = etree.HTML(item)
            result = html.xpath('//div[contains(@id,"qiushi_tag")]')
            for site in result:
                try:
                    imgUrl = site.xpath('.//img/@src')[0]
                    title = site.xpath('.//h2')[0].text
                    content = site.xpath('.//div[@class="content"]/span')[0].text.strip()
                    vote = None
                    comments = None
                    try:
                        vote = site.xpath('.//i')[0].text
                        comments = site.xpath('.//i')[1].text
                    except:
                        pass
                    result = {
                        'imgUrl': imgUrl,
                        'title': title,
                        'content': content,
                        'vote': vote,
                        'comments': comments,
                    }
                    with self.lock:
                        # print 'write %s' % json.dumps(result)
                        self.f.write(json.dumps(result, ensure_ascii=False).encode('utf-8') + "\n")
                except Exception, e:
                    print 'site in result', e
        except Exception, e:
            print 'parse_data', e
        with self.lock:
            total += 1


data_queue = Queue()
exitFlag_Parser = False
lock = threading.Lock()
total = 0


def main():
    output = open('qiushibaike.json', 'a')
    # fill the page queue with page numbers 1 through 10
    pageQueue = Queue(50)
    for page in range(1, 11):
        pageQueue.put(page)
    # start the crawler threads
    crawlthreads = []
    crawlList = ["crawl-1", "crawl-2", "crawl-3"]
    for threadID in crawlList:
        thread = thread_crawl(threadID, pageQueue)
        thread.start()
        crawlthreads.append(thread)
    # start the parser threads
    parserthreads = []
    parserList = ["parser-1", "parser-2", "parser-3"]
    for threadID in parserList:
        thread = Thread_Parser(threadID, data_queue, lock, output)
        thread.start()
        parserthreads.append(thread)
    # wait for the page queue to drain
    while not pageQueue.empty():
        pass
    # wait for all crawler threads to finish
    for t in crawlthreads:
        t.join()
    while not data_queue.empty():
        pass
    # tell the parser threads it is time to exit
    global exitFlag_Parser
    exitFlag_Parser = True
    for t in parserthreads:
        t.join()
    print "Exiting Main Thread"
    with lock:
        output.close()


if __name__ == '__main__':
    main()
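
One design wrinkle worth noting: main() busy-waits with "while not pageQueue.empty(): pass", which burns a CPU core, and Thread_Parser polls the queue under a bare except. A common alternative is to block on Queue.join() and shut the workers down with sentinels. The sketch below shows that pattern; it is not the original author's code, and the fake page strings are placeholders:

# An alternative shutdown pattern (sketch): block instead of spinning.
from Queue import Queue
import threading

q = Queue()

def parser():
    while True:
        item = q.get()        # blocks; no busy-waiting
        if item is None:      # sentinel: time to exit
            q.task_done()
            break
        # ... parse `item` here ...
        q.task_done()

threads = [threading.Thread(target=parser) for _ in range(3)]
for t in threads:
    t.start()
for page_html in ['<html>page 1</html>', '<html>page 2</html>']:
    q.put(page_html)
q.join()                      # returns once task_done() was called per item
for _ in threads:
    q.put(None)               # one sentinel per parser thread
for t in threads:
    t.join()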


 

IV. Homework

1. Scrape every page of a given novel on Biquge (笔趣阁).
2. China Food and Drug Administration, http://app1.sfda.gov.cn: collect all product information under the category "domestic drug trade names (6994)".
Give these a try yourself.

 


Original post: https://www.cnblogs.com/shsxt/p/13560682.html
