

Scrapy Spider: Collecting Zhihu User Data

Miyang

Abstract: A Scrapy spider that collects Zhihu user data. How to install Python and the Scrapy framework is not covered here; please search online.

2016-04-10

Installing the Scrapy framework

How to install Python and the Scrapy framework is not covered here; please search online.
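For reference, installation is usually just a pip command or two; a minimal sketch, assuming pip is available (MySQL-python is the package that provides the MySQLdb driver used later in the pipeline):

pip install Scrapy
pip install MySQL-python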

Initialization

After installing Scrapy, run scrapy startproject myspider.
You will then see a myspider folder with the following directory structure:

scrapy.cfg
myspider/
    items.py
    pipelines.py
    settings.py
    __init__.py
    spiders/
        __init__.py

Writing the spider

Create a new file users.py under the spiders directory:

# -*- coding: utf-8 -*-
import scrapy
import os
import time
from myspider.items import UserItem
from myspider.myconfig import UsersConfig # spider configuration

class UsersSpider(scrapy.Spider):
    name = "users"
    domain = "https://www.zhihu.com"
    login_url = "https://www.zhihu.com/login/email"
    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8",
        "Connection": "keep-alive",
        "Host": "www.zhihu.com",
        "Upgrade-Insecure-Requests": "1",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.109 Safari/537.36"
    }

    def __init__(self, url = None, *args, **kwargs):
        super(UsersSpider, self).__init__(*args, **kwargs)
        self.user_url = url

    def start_requests(self):
        yield scrapy.Request(
            url = self.domain,
            headers = self.headers,
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": 1
            },
            callback = self.request_captcha
        )

    def request_captcha(self, response):
        # pull the _xsrf token out of the login form
        _xsrf = response.css('input[name="_xsrf"]::attr(value)').extract()[0]
        # build the captcha image URL (cache-busted with a timestamp)
        captcha_url = "http://www.zhihu.com/captcha.gif?r=" + str(time.time() * 1000)
        # fetch the captcha image
        yield scrapy.Request(
            url = captcha_url,
            headers = self.headers,
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": response.meta["cookiejar"],
                "_xsrf": _xsrf
            },
            callback = self.download_captcha
        )

    def download_captcha(self, response):
        # save the captcha image to disk
        with open("captcha.gif", "wb") as fp:
            fp.write(response.body)
        # open the image with the system default viewer
        # ("start" is Windows-only; use "open" on macOS or "xdg-open" on Linux)
        os.system("start captcha.gif")
        # prompt for the captcha on the command line
        print "Please enter captcha: "
        captcha = raw_input()

        yield scrapy.FormRequest(
            url = self.login_url,
            headers = self.headers,
            formdata = {
                "email": UsersConfig["email"],
                "password": UsersConfig["password"],
                "_xsrf": response.meta["_xsrf"],
                "remember_me": "true",
                "captcha": captcha
            },
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": response.meta["cookiejar"]
            },
            callback = self.request_zhihu
        )

    def request_zhihu(self, response):
        yield scrapy.Request(
            url = self.user_url + "/about",
            headers = self.headers,
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": response.meta["cookiejar"],
                "from": {
                    "sign": "else",
                    "data": {}
                }
            },
            callback = self.user_item,
            dont_filter = True
        )

        yield scrapy.Request(
            url = self.user_url + "/followees",
            headers = self.headers,
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": response.meta["cookiejar"],
                "from": {
                    "sign": "else",
                    "data": {}
                }
            },
            callback = self.user_start,
            dont_filter = True
        )

        yield scrapy.Request(
            url = self.user_url + "/followers",
            headers = self.headers,
            meta = {
                "proxy": UsersConfig["proxy"],
                "cookiejar": response.meta["cookiejar"],
                "from": {
                    "sign": "else",
                    "data": {}
                }
            },
            callback = self.user_start,
            dont_filter = True
        )

    def user_start(self, response):
        sel_root = response.xpath('//h2[@class="zm-list-content-title"]')
        # skip if the followee/follower list is empty
        if len(sel_root):
            for sel in sel_root:
                people_url = sel.xpath("a/@href").extract()[0]

                yield scrapy.Request(
                    url = people_url + "/about",
                    headers = self.headers,
                    meta = {
                        "proxy": UsersConfig["proxy"],
                        "cookiejar": response.meta["cookiejar"],
                        "from": {
                            "sign": "else",
                            "data": {}
                        }
                    },
                    callback = self.user_item,
                    dont_filter = True
                )

                yield scrapy.Request(
                    url = people_url + "/followees",
                    headers = self.headers,
                    meta = {
                        "proxy": UsersConfig["proxy"],
                        "cookiejar": response.meta["cookiejar"],
                        "from": {
                            "sign": "else",
                            "data": {}
                        }
                    },
                    callback = self.user_start,
                    dont_filter = True
                )

                yield scrapy.Request(
                    url = people_url + "/followers",
                    headers = self.headers,
                    meta = {
                        "proxy": UsersConfig["proxy"],
                        "cookiejar": response.meta["cookiejar"],
                        "from": {
                            "sign": "else",
                            "data": {}
                        }
                    },
                    callback = self.user_start,
                    dont_filter = True
                )

    def user_item(self, response):
        def value(lst):
            # return the first match, or an empty string if there is none
            return lst[0] if len(lst) else ""

        sel = response.xpath('//div[@class="zm-profile-header ProfileCard"]')

        item = UserItem()
        item["url"] = response.url[:-6]  # strip the trailing "/about"
        item["name"] = sel.xpath('//a[@class="name"]/text()').extract()[0].encode("utf-8")
        item["bio"] = value(sel.xpath('//span[@class="bio"]/@title').extract()).encode("utf-8")
        item["location"] = value(sel.xpath('//span[contains(@class, "location")]/@title').extract()).encode("utf-8")
        item["business"] = value(sel.xpath('//span[contains(@class, "business")]/@title').extract()).encode("utf-8")
        item["gender"] = 0 if sel.xpath('//i[contains(@class, "icon-profile-female")]') else 1
        item["avatar"] = value(sel.xpath('//img[@class="Avatar Avatar--l"]/@src').extract())
        item["education"] = value(sel.xpath('//span[contains(@class, "education")]/@title').extract()).encode("utf-8")
        item["major"] = value(sel.xpath('//span[contains(@class, "education-extra")]/@title').extract()).encode("utf-8")
        item["employment"] = value(sel.xpath('//span[contains(@class, "employment")]/@title').extract()).encode("utf-8")
        item["position"] = value(sel.xpath('//span[contains(@class, "position")]/@title').extract()).encode("utf-8")
        item["content"] = value(sel.xpath('//span[@class="content"]/text()').extract()).strip().encode("utf-8")
        item["ask"] = int(sel.xpath('//div[contains(@class, "profile-navbar")]/a[2]/span[@class="num"]/text()').extract()[0])
        item["answer"] = int(sel.xpath('//div[contains(@class, "profile-navbar")]/a[3]/span[@class="num"]/text()').extract()[0])
        item["agree"] = int(sel.xpath('//span[@class="zm-profile-header-user-agree"]/strong/text()').extract()[0])
        item["thanks"] = int(sel.xpath('//span[@class="zm-profile-header-user-thanks"]/strong/text()').extract()[0])

        yield item
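The login flow above hinges on Scrapy's cookiejar meta key: tagging the first request with a jar id and forwarding response.meta["cookiejar"] on every subsequent request keeps the whole crawl inside one logged-in session. A stripped-down sketch of just that pattern (hypothetical URLs and spider name, not part of this project):

import scrapy

class SessionDemoSpider(scrapy.Spider):
    name = "session_demo"

    def start_requests(self):
        # tag the session with jar id 1; the cookies middleware stores
        # the cookies it receives under that id
        yield scrapy.Request(
            "https://example.com/login",    # hypothetical URL
            meta = {"cookiejar": 1},
            callback = self.after_login
        )

    def after_login(self, response):
        # forward the same jar id so this request carries the session cookies
        yield scrapy.Request(
            "https://example.com/profile",  # hypothetical URL
            meta = {"cookiejar": response.meta["cookiejar"]},
            callback = self.parse_profile
        )

    def parse_profile(self, response):
        self.logger.info("fetched %s inside the same session", response.url)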
Adding the spider configuration file

Create myconfig.py in the myspider directory and add the following, filling in your own details where indicated:

# -*- coding: utf-8 -*-
UsersConfig = {
    # proxy (leave empty for none)
    "proxy": "",

    # Zhihu email and password
    "email": "your email",
    "password": "your password",
}

DbConfig = {
    # db config
    "user": "db user",
    "passwd": "db password",
    "db": "db name",
    "host": "db host",
}
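Before wiring the config into the spider, it can help to confirm the module imports cleanly; a hypothetical one-off check (the script name and assertions are mine, not the article's), run from the project root:

# -*- coding: utf-8 -*-
from myspider.myconfig import UsersConfig, DbConfig

# fail early if the placeholders were never filled in
assert UsersConfig["email"] != "your email", "set your Zhihu email in myconfig.py"
assert DbConfig["db"] != "db name", "set your database name in myconfig.py"
print "config looks filled in"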
Modifying items.py
# -*- coding: utf-8 -*-
import scrapy

class UserItem(scrapy.Item):
    # define the fields for your item here like:
    url = scrapy.Field()
    name = scrapy.Field()
    bio = scrapy.Field()
    location = scrapy.Field()
    business = scrapy.Field()
    gender = scrapy.Field()
    avatar = scrapy.Field()
    education = scrapy.Field()
    major = scrapy.Field()
    employment = scrapy.Field()
    position = scrapy.Field()
    content = scrapy.Field()
    ask = scrapy.Field()
    answer = scrapy.Field()
    agree = scrapy.Field()
    thanks = scrapy.Field()
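scrapy.Item subclasses act like dictionaries restricted to their declared fields, which is why the spider fills them with item["name"] = ... assignments; anything undeclared is rejected. A small illustration (hypothetical, not from the article):

from myspider.items import UserItem

item = UserItem()
item["name"] = "example"
print dict(item)            # {'name': 'example'}
try:
    item["nickname"] = "x"  # "nickname" is not declared on UserItem
except KeyError as e:
    print "rejected: %s" % e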
Storing user data in MySQL

Modify pipelines.py:

# -*- coding: utf-8 -*-
import MySQLdb
import datetime
from myspider.myconfig import DbConfig

class UserPipeline(object):
    def __init__(self):
        self.conn = MySQLdb.connect(user = DbConfig["user"], passwd = DbConfig["passwd"], db = DbConfig["db"], host = DbConfig["host"], charset = "utf8", use_unicode = True)
        self.cursor = self.conn.cursor()
        # optionally clear the table before a fresh crawl
        # self.cursor.execute("truncate table users;")
        # self.conn.commit()

    def process_item(self, item, spider):
        curTime = datetime.datetime.now()
        try:
            self.cursor.execute(
                """INSERT IGNORE INTO users (url, name, bio, location, business, gender, avatar, education, major, employment, position, content, ask, answer, agree, thanks, create_at)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
                (
                    item["url"],
                    item["name"],
                    item["bio"],
                    item["location"],
                    item["business"],
                    item["gender"],
                    item["avatar"],
                    item["education"],
                    item["major"],
                    item["employment"],
                    item["position"],
                    item["content"],
                    item["ask"],
                    item["answer"],
                    item["agree"],
                    item["thanks"],
                    curTime
                )
            )
            self.conn.commit()
        except MySQLdb.Error as e:
            print "Error %d: %s" % (e.args[0], e.args[1])

        return item
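The pipeline assumes a users table already exists, but the article never shows its schema. Below is a plausible one-off setup script; the column types and the primary key are my assumptions, sized to match the INSERT above. The unique key on url is what lets INSERT IGNORE silently skip users that were already stored.

# -*- coding: utf-8 -*-
import MySQLdb
from myspider.myconfig import DbConfig

DDL = """
CREATE TABLE IF NOT EXISTS users (
    url        VARCHAR(255) NOT NULL,
    name       VARCHAR(255),
    bio        TEXT,
    location   VARCHAR(255),
    business   VARCHAR(255),
    gender     TINYINT,
    avatar     VARCHAR(255),
    education  VARCHAR(255),
    major      VARCHAR(255),
    employment VARCHAR(255),
    position   VARCHAR(255),
    content    TEXT,
    ask        INT,
    answer     INT,
    agree      INT,
    thanks     INT,
    create_at  DATETIME,
    PRIMARY KEY (url)
) DEFAULT CHARSET=utf8;
"""

conn = MySQLdb.connect(user = DbConfig["user"], passwd = DbConfig["passwd"],
                       db = DbConfig["db"], host = DbConfig["host"], charset = "utf8")
conn.cursor().execute(DDL)
conn.commit()
conn.close()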
Modifying settings.py

Find ITEM_PIPELINES and change it to:

ITEM_PIPELINES = {
   "myspider.pipelines.UserPipeline": 300,
}

At the end of the file, add a setting that caps how many hops from the seed user the crawl will follow:

DEPTH_LIMIT = 10
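While you are in settings.py, a couple of other knobs are worth considering for a login-gated site like Zhihu (suggestions of mine, not part of the original article):

DOWNLOAD_DELAY = 1        # pause between requests to lower the ban risk
CONCURRENT_REQUESTS = 8   # halve Scrapy's default concurrency of 16
COOKIES_ENABLED = True    # the default, but the cookiejar meta key depends on it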
Crawling Zhihu user data

Make sure MySQL is running, open a terminal in the project root, and run scrapy crawl users -a url=https://www.zhihu.com/people/ (append the profile slug of the first user to crawl). Starting from that user, the spider fans out through their followees and followers.
The captcha image is then downloaded; if it does not open automatically, open captcha.gif in the project root and type the code into the terminal.
[Screenshot: data being crawled]

Source code

The source code is available here: github


