

A Baidu Image Crawler (in Python)

Ethan815 / 3,381 reads

Abstract: Alright, I won't say much more. I have shared the crawler's code on Quzhuanpan (去转盘网); if you want to download it, click the download link there, and if that does not work, use the alternate link.

In the previous post I wrote about crawling Baidu Netdisk. As a quick refresher, here is the link:

http://www.cnblogs.com/huangx...

This post covers a crawler for Baidu Images. It is also the crawler behind 搜搜gif (an online GIF maker; see the site for the tool). The overall crawler framework is much the same as before, but this one involves some handling of the images themselves, which cost me quite a bit of time, so please read carefully; being a programmer is not easy. Alright, I won't say much more. I have shared the crawler's code on Quzhuanpan (去转盘网); if you want to download it, click the download link there, and if that does not work, use the alternate link.

The code:

PS: If you don't know Python yet, go brush up on the basics first.
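Before the full source, here is a minimal, standalone sketch of the request scheme the crawler uses: Baidu's JSON endpoint is paged with `pn` (result offset) and `rn` (results per page), and the number of pages is derived from the reported result total, mirroring the `how_many()` / `prepare_request()` logic below. `TOP_URL` is copied from the crawler source; `page_urls` is an illustrative helper of my own, not part of the original code.

```python
import math

# URL template taken verbatim from the crawler source below.
TOP_URL = "http://image.baidu.com/i?tn=resultjsonavatarnew&ie=utf-8&word={word}&pn={pn}&rn={rn}"

def page_urls(word, total_results, rn=24):
    """Yield one request URL per page of rn results covering total_results hits."""
    pages = int(math.ceil(total_results / float(rn)))
    for page in range(pages):
        # pn is a result offset, not a page index, so it advances by rn.
        yield TOP_URL.format(word=word, pn=page * rn, rn=rn)

urls = list(page_urls("gif", 100))
```

With 100 reported results and 24 per page this yields 5 URLs, with `pn` stepping 0, 24, 48, 72, 96.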

#coding:utf-8
"""
Created on 2015-9-17

@author: huangxie
"""
import time, math, os, urllib2
import uuid
import json
from bs4 import BeautifulSoup
from threading import Thread
from Queue import Queue
import MySQLdb as mdb
import sys
import threading
import utils            # the author's helper module (get_extension, etc.)
import imitate_browser  # the author's browser wrapper

reload(sys)
sys.setdefaultencoding("utf-8")

DB_HOST = "127.0.0.1"
DB_USER = "root"
DB_PASS = "root"
proxy = {u"http": u"222.39.64.13:8118"}
TOP_URL = "http://image.baidu.com/i?tn=resultjsonavatarnew&ie=utf-8&word={word}&pn={pn}&rn={rn}"
KEYWORD_URL = "https://www.baidu.com/s?ie=utf-8&f=8&tn=baidu&wd={wd}"

"""
i_headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
              "Accept":"json;q=0.9,*/*;q=0.8",
              "Accept-Charset":"utf-8;q=0.7,*;q=0.3",
              "Accept-Encoding":"gzip",
              "Connection":"close",
              "Referer":None  # if requests are still blocked, set this to the target site's host
            }
"""
i_headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.48"}

def GetDateString():
    x = time.localtime(time.time())
    return "%d-%d-%d" % (x.tm_year, x.tm_mon, x.tm_mday)

class BaiduImage(threading.Thread):

    def __init__(self):
        Thread.__init__(self)
        self.browser = imitate_browser.BrowserBase()
        self.chance = 0    # failed page requests since the last proxy switch
        self.chance1 = 0   # failed image downloads since the last proxy switch
        self.request_queue = Queue()
        self.wait_ana_queue = Queue()
        self.count = 0
        self.mutex = threading.RLock()  # re-entrant lock: the same thread may re-acquire it
        self.commit_count = 0
        self.ID = 500
        self.next_proxy_set = set()
        self.dbconn = mdb.connect(DB_HOST, DB_USER, DB_PASS, "sosogif", charset="utf8")
        self.dbconn.autocommit(False)
        self.dbcurr = self.dbconn.cursor()
        self.dbcurr.execute("SET NAMES utf8")

    def work(self, item):
        print "start thread", item
        while True:
            self.get_pic()
            self.prepare_request()

    def format_keyword_url(self, keyword):
        return KEYWORD_URL.format(wd=keyword).encode("utf-8")

    def generateSeed(self, url):
        # Scrape Baidu's "related searches" table to seed new keywords.
        html = self.browser.openurl(url).read()
        if html:
            try:
                soup = BeautifulSoup(html)
                trs = soup.find("div", id="rs").find("table").find_all("tr")  # all rows
                for tr in trs:
                    ths = tr.find_all("th")
                    for th in ths:
                        a = th.find_all("a")[0]
                        keyword = a.text.strip()
                        if "动态图" in keyword or "gif" in keyword:
                            print "keyword", keyword
                            self.dbcurr.execute("select id from info where word=%s", (keyword,))
                            y = self.dbcurr.fetchone()
                            if not y:
                                self.dbcurr.execute("INSERT INTO info(word,status,page_num,left_num,how_many) VALUES(%s,0,0,0,0)", (keyword,))
                    self.dbconn.commit()
            except:
                pass

    def prepare_request(self):
        self.lock()
        self.dbcurr.execute("select * from info where status=0")
        result = self.dbcurr.fetchone()
        if result:
            id, word, status, page_num, left_num, how_many = result
            self.request_queue.put((id, word, page_num))
            if page_num == 0 and left_num == 0 and how_many == 0:
                url = self.format_keyword_url(word)
                self.generateSeed(url)
                html = ""
                try:
                    url = self.format_top_url(word, page_num, 24)
                    html = self.browser.openurl(url).read()
                except Exception as err:
                    print "err", err
                if html != "":
                    how_many = self.how_many(html)
                    print "how_many", how_many
                    if how_many is None:
                        how_many = 0
                    t = math.ceil(how_many / 24.0 / 100)  # only the first 1/100 of the result pages is needed
                    num = int(t)
                    for i in xrange(0, num - 1):
                        self.dbcurr.execute("INSERT INTO info(word,status,page_num,left_num,how_many) VALUES(%s,%s,%s,%s,%s)", (word, 0, i * 24, num - i, how_many))
                    self.dbcurr.execute("update info SET status=1 WHERE id=%s", (id,))  # mark as visited
                    self.dbconn.commit()
        self.unlock()

    def start_work(self, req_max):
        for item in xrange(req_max):
            t = threading.Thread(target=self.work, args=(item,))
            t.setDaemon(True)
            t.start()

    def lock(self):
        self.mutex.acquire()

    def unlock(self):
        self.mutex.release()

    def get_para(self, url, key):
        # Return the value of a query-string parameter, or None if absent.
        values = url.split("?")[-1]
        for key_value in values.split("&"):
            value = key_value.split("=")
            if value[0] == key:
                return value[1]
        return None

    def makeDateFolder(self, par, child):
        # Create par/<today's date>/<child> if needed and return that path.
        if os.path.isdir(par):
            path = os.path.join(par, GetDateString())
            newFolderName = os.path.join(path, child)
            if not os.path.isdir(path):
                os.mkdir(path)
            if not os.path.isdir(newFolderName):
                os.mkdir(newFolderName)
            return newFolderName
        else:
            return par

    def parse_json(self, data):
        ipdata = json.loads(data)
        try:
            if ipdata["imgs"]:
                for n in ipdata["imgs"]:  # one entry per image
                    if n["objURL"]:
                        try:
                            proxy_support = urllib2.ProxyHandler(proxy)
                            opener = urllib2.build_opener(proxy_support)
                            urllib2.install_opener(opener)
                            self.lock()
                            self.dbcurr.execute("select ID from pic_info where objURL=%s", (n["objURL"],))
                            y = self.dbcurr.fetchone()
                            if y:
                                print "database exist"
                                self.unlock()  # unlock before continue
                                continue
                            else:
                                real_extension = utils.get_extension(n["objURL"])
                                req = urllib2.Request(n["objURL"], headers=i_headers)
                                resp = urllib2.urlopen(req, None, 5)
                                dataimg = resp.read()
                                name = str(uuid.uuid1())
                                if len(real_extension) > 4:
                                    real_extension = ".gif"
                                real_extension = real_extension.lower()
                                if real_extension == ".gif":
                                    folder = self.makeDateFolder("E:/sosogif", "d" + str(self.count % 60))
                                else:
                                    folder = self.makeDateFolder("E:/sosogif", "o" + str(self.count % 20))
                                filename = os.path.join(folder, name + "-www.sosogif.com-搜搜gif贡献" + real_extension)
                                self.count += 1
                                try:
                                    if not os.path.exists(filename):
                                        file_object = open(filename, "w+b")
                                        file_object.write(dataimg)
                                        file_object.close()
                                        self.analysis_info(n, filename, real_extension)  # record in the database
                                    else:
                                        print "file exist"
                                except IOError as e1:
                                    print "e1=", e1
                            self.unlock()
                        except IOError as e2:
                            self.chance1 += 1  # failed download: may trigger a proxy switch
        except Exception as parse_error:
            print "parse_error", parse_error

    def title_dealwith(self, title):
        # Strip the highlight markup Baidu inserts into fromPageTitle.
        # Note: the literal tag strings were lost in the published source;
        # <strong>/</strong> match the 8- and 9-character offsets used here.
        a = title.find("<strong>")
        temp1 = title[0:a]
        b = title.find("</strong>")
        temp2 = title[a + 8:b]
        temp3 = title[b + 9:len(title)]
        return (temp1 + temp2 + temp3).strip()

    def analysis_info(self, n, filename, real_extension):
        # Record one downloaded image's metadata in pic_info.
        print "success."
        objURL = n["objURL"]                # image URL
        fromURLHost = n["fromURLHost"]      # source site
        width = n["width"]
        height = n["height"]
        di = n["di"]                        # unique identifier
        type_ = n["type"]                   # image format (renamed: shadows the builtin)
        fromPageTitle = n["fromPageTitle"]  # title of the source page
        keyword = self.title_dealwith(fromPageTitle)
        cs = n["cs"]                        # meaning unknown
        os_ = n["os"]                       # meaning unknown (renamed: shadows the os module)
        acTime = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))  # crawl time
        self.dbcurr.execute("select ID from pic_info where cs=%s", (cs,))
        y = self.dbcurr.fetchone()
        if not y:
            print "add pic", filename
            self.commit_count += 1
            self.dbcurr.execute("INSERT INTO pic_info(objURL,fromURLHost,width,height,di,type,keyword,cs,os,acTime,filename,real_extension) VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",
                                (objURL, fromURLHost, width, height, di, type_, keyword, cs, os_, acTime, filename, real_extension))
            if self.commit_count == 10:  # commit in batches of 10
                self.dbconn.commit()
                self.commit_count = 0

    def format_top_url(self, word, pn, rn):
        return TOP_URL.format(word=word, pn=pn, rn=rn).encode("utf-8")

    def how_many(self, data):
        # Total number of results Baidu reports for the query.
        try:
            ipdata = json.loads(data)
            if ipdata["displayNum"] > 0:
                return int(ipdata["displayNum"])
            else:
                return 0
        except Exception:
            return None

    def get_pic(self):
        try:
            global proxy
            print "size of queue", self.request_queue.qsize()
            if self.request_queue.qsize() != 0:
                id, word, page_num = self.request_queue.get()
                u = self.format_top_url(word, page_num, 24)
                self.lock()
                self.dbcurr.execute("update info SET status=1 WHERE id=%s", (id,))
                self.dbconn.commit()
                if self.chance > 0 or self.chance1 > 1:  # either kind of failure triggers a proxy switch
                    if self.ID % 100 == 0:
                        self.dbcurr.execute("select count(*) from proxy")
                        for r in self.dbcurr:
                            count = r[0]
                        if self.ID > count:
                            self.ID = 50
                    self.dbcurr.execute("select * from proxy where ID=%s", (self.ID,))
                    results = self.dbcurr.fetchall()
                    for r in results:
                        protocol = r[1]
                        ip = r[2]
                        port = r[3]
                        pro = (protocol, ip + ":" + port)
                        if pro not in self.next_proxy_set:
                            self.next_proxy_set.add(pro)
                    self.chance = 0
                    self.chance1 = 0
                    self.ID += 1
                self.unlock()
                proxy_support = urllib2.ProxyHandler(proxy)
                opener = urllib2.build_opener(proxy_support)
                urllib2.install_opener(opener)
                html = ""
                try:
                    req = urllib2.Request(u, headers=i_headers)
                    response = urllib2.urlopen(req, None, 5)
                    html = response.read()
                    if html:
                        self.parse_json(html)
                except Exception as ex1:
                    self.chance += 1  # failed page request
                    if self.chance > 0 or self.chance1 > 1:
                        if len(self.next_proxy_set) > 0:
                            protocol, socket = self.next_proxy_set.pop()
                            proxy = {protocol: socket}
                            print "change proxy finished<<", proxy, self.ID
        except Exception as e:
            print "error1", e

if __name__ == "__main__":

    app = BaiduImage()
    app.start_work(80)
    while True:
        time.sleep(1)  # keep the main thread alive without busy-waiting
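The fiddliest part of the image handling above is choosing a safe filename and extension. The following standalone sketch isolates that step for Python 3 (the crawler itself targets Python 2 / urllib2). `get_extension` here is a simple `os.path.splitext` stand-in for the author's `utils.get_extension`, whose exact behavior is not shown in the post; `make_filename` is an illustrative helper, not the crawler's own function.

```python
import os
import uuid

def get_extension(url):
    # Crude stand-in for the author's utils.get_extension():
    # take the extension of the URL path, ignoring any query string.
    return os.path.splitext(url.split("?")[0])[1]

def make_filename(obj_url, folder):
    """Build a unique local filename for a downloaded image."""
    ext = get_extension(obj_url).lower()
    if len(ext) > 4 or not ext:  # unusual or missing extension:
        ext = ".gif"             # fall back to .gif, as the crawler does
    return os.path.join(folder, str(uuid.uuid1()) + ext)

name = make_filename("http://example.com/pic.JPG", "downloads")
```

A `uuid1`-based name avoids collisions across threads without needing a lock around the filename choice.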

I have set up a QQ group for technical discussion; everyone is welcome. Group number: 512245829. If you prefer Weibo, follow 转盘娱乐.

The copyright of this article belongs to the author. Do not reproduce it without permission. If this article violates any rules, you may contact the administrator to have it removed.

When reprinting, please cite the original URL: http://systransis.cn/yun/38133.html

