
Youdao Corpus Crawler

Posted: 2017-10-13 17:53:42


from bs4 import BeautifulSoup
import urllib.parse
import re
import requests
import time

index = 0
session = requests.session()
f = open('C:\\Users\\Administrator\\Desktop\\dictionary\\words.txt', 'r', encoding='gb2312')
output = open("C:\\Users\\Administrator\\Desktop\\dictionary\\output.txt", "w")
for line in f.readlines():
    time.sleep(10)  # throttle: pause between requests so the site does not block the crawler
    if line != '\n':
        m = re.match(r'([a-zA-Z ]+)', line)  # keep only the leading English word/phrase
        if m:
            index += 1
            url = 'http://dict.youdao.com/example/blng/eng/' + m.group(1) + '/#keyfrom=dict.main.moreblng'
            s = '(' + str(index) + ')。' + line + "\n\r"  # numbered header line for this word
            print("%s" % (s), file=output)
            soup = BeautifulSoup(session.get(url).text, 'html.parser')
            blingual = soup.findAll(id='bilingual')  # the bilingual example-sentence section
            if blingual:
                ol = blingual[0].ul
                if ol:
                    ul = ol.findAll('li')
                    # take at most six example sentences per word
                    if len(ul) < 6:
                        num = len(ul)
                    else:
                        num = 6
                    for i in range(num):
                        li = ul[i]
                        if li.p:
                            if li.p.a:
                                if li.p.a.get('data-rel'):
                                    # the English sentence is URL-encoded in the link's data-rel attribute
                                    s1 = urllib.parse.unquote(li.p.a['data-rel'].replace('+', ' ').split('.')[0])
                                    print(s1 + "\n\r")
                                    print("%s" % (s1), file=output)
                        if len(li.findAll('p')) > 1:
                            # the second <p> in the <li> holds the Chinese translation
                            s2 = urllib.parse.unquote(li.findAll('p')[1].get_text().split()[0]).encode("gb18030").decode('gbk', 'ignore')
                            print("%s" % (s2), file=output)
                        print("\n\r\n\r", file=output)
f.close()
output.close()
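The least obvious step above is recovering the English sentence from the data-rel attribute of the link inside each li element: the value appears to be URL-encoded, with '+' standing in for spaces and extra metadata after the first '.'. Below is a minimal, standalone sketch of just that decoding step; the sample attribute value is made up for illustration, and the real format returned by dict.youdao.com may differ.

import urllib.parse

# Hypothetical data-rel value, for illustration only.
data_rel = "He+likes+to+read+%22classic%22+novels.extra_metadata"

# Same transformation as in the crawler: '+' -> space, drop everything
# after the first '.', then percent-decode the rest.
sentence = urllib.parse.unquote(data_rel.replace('+', ' ').split('.')[0])
print(sentence)  # He likes to read "classic" novels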


Original article: http://www.cnblogs.com/Alex0111/p/7662242.html
