Abstract: LogParser is an open-source log-parsing tool published on GitHub. Install it via pip or from source, make sure Scrapyd is installed and running on the current host, start LogParser with the logparser command, then visit http://127.0.0.1:6800/logs/stats.json (assuming Scrapyd runs on port 6800), or the per-job JSON endpoint for the parsed details of a single crawl job. Combined with ScrapydWeb it enables crawler progress visualization; it can also be used directly in Python code.
Open source on GitHub
my8100 / logparser
Install
Via pip:
pip install logparser
Via git:
git clone https://github.com/my8100/logparser.git
cd logparser
python setup.py install

Usage
Run as a service
First, make sure that Scrapyd has been installed and started on the current host.
Start LogParser with the command logparser.
Visit http://127.0.0.1:6800/logs/stats.json (assuming Scrapyd runs on port 6800).
Visit http://127.0.0.1:6800/logs/projectname/spidername/jobid.json to get the parsed log details of a particular crawl job.
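As a quick check, both endpoints can be fetched programmatically. Below is a minimal sketch using the third-party requests library; the project name demo, spider name test, and job id are hypothetical placeholders, and the exact keys in the per-job JSON follow the parse() output shown further down:

import requests  # third-party: pip install requests

# Assumes Scrapyd and LogParser are both running on this host, port 6800.
base = "http://127.0.0.1:6800"

# Aggregated stats for all log files parsed so far.
stats = requests.get(f"{base}/logs/stats.json", timeout=10)
stats.raise_for_status()
print(sorted(stats.json().keys()))

# Parsed details of one job (hypothetical project/spider/jobid).
job = requests.get(f"{base}/logs/demo/test/2018-10-23_182826.json", timeout=10)
if job.ok:
    d = job.json()
    print(d.get("pages"), d.get("items"), d.get("finish_reason"))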
To visualize crawler progress with ScrapydWeb, see my8100 / scrapydweb.
Use in Python code

In [1]: from logparser import parse

In [2]: log = """2018-10-23 18:28:34 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: demo)
   ...: 2018-10-23 18:29:41 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
   ...: {"downloader/exception_count": 3,
   ...: "downloader/exception_type_count/twisted.internet.error.TCPTimedOutError": 3,
   ...: "downloader/request_bytes": 1336,
   ...: "downloader/request_count": 7,
   ...: "downloader/request_method_count/GET": 7,
   ...: "downloader/response_bytes": 1669,
   ...: "downloader/response_count": 4,
   ...: "downloader/response_status_count/200": 2,
   ...: "downloader/response_status_count/302": 1,
   ...: "downloader/response_status_count/404": 1,
   ...: "dupefilter/filtered": 1,
   ...: "finish_reason": "finished",
   ...: "finish_time": datetime.datetime(2018, 10, 23, 10, 29, 41, 174719),
   ...: "httperror/response_ignored_count": 1,
   ...: "httperror/response_ignored_status_count/404": 1,
   ...: "item_scraped_count": 2,
   ...: "log_count/CRITICAL": 5,
   ...: "log_count/DEBUG": 14,
   ...: "log_count/ERROR": 5,
   ...: "log_count/INFO": 75,
   ...: "log_count/WARNING": 3,
   ...: "offsite/domains": 1,
   ...: "offsite/filtered": 1,
   ...: "request_depth_max": 1,
   ...: "response_received_count": 3,
   ...: "retry/count": 2,
   ...: "retry/max_reached": 1,
   ...: "retry/reason_count/twisted.internet.error.TCPTimedOutError": 2,
   ...: "scheduler/dequeued": 7,
   ...: "scheduler/dequeued/memory": 7,
   ...: "scheduler/enqueued": 7,
   ...: "scheduler/enqueued/memory": 7,
   ...: "start_time": datetime.datetime(2018, 10, 23, 10, 28, 35, 70938)}
   ...: 2018-10-23 18:29:42 [scrapy.core.engine] INFO: Spider closed (finished)"""

In [3]: d = parse(log, headlines=1, taillines=1)

In [4]: d
Out[4]:
OrderedDict([("head",
              "2018-10-23 18:28:34 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: demo)"),
             ("tail",
              "2018-10-23 18:29:42 [scrapy.core.engine] INFO: Spider closed (finished)"),
             ("first_log_time", "2018-10-23 18:28:34"),
             ("latest_log_time", "2018-10-23 18:29:42"),
             ("elapsed", "0:01:08"),
             ("first_log_timestamp", 1540290514),
             ("latest_log_timestamp", 1540290582),
             ("datas", []),
             ("pages", 3),
             ("items", 2),
             ("latest_matches",
              {"resuming_crawl": "",
               "latest_offsite": "",
               "latest_duplicate": "",
               "latest_crawl": "",
               "latest_scrape": "",
               "latest_item": "",
               "latest_stat": ""}),
             ("latest_crawl_timestamp", 0),
             ("latest_scrape_timestamp", 0),
             ("log_categories",
              {"critical_logs": {"count": 5, "details": []},
               "error_logs": {"count": 5, "details": []},
               "warning_logs": {"count": 3, "details": []},
               "redirect_logs": {"count": 1, "details": []},
               "retry_logs": {"count": 2, "details": []},
               "ignore_logs": {"count": 1, "details": []}}),
             ("shutdown_reason", "N/A"),
             ("finish_reason", "finished"),
             ("last_update_timestamp", 1547559048),
             ("last_update_time", "2019-01-15 21:30:48")])

In [5]: d["elapsed"]
Out[5]: "0:01:08"

In [6]: d["pages"]
Out[6]: 3

In [7]: d["items"]
Out[7]: 2

In [8]: d["finish_reason"]
Out[8]: "finished"
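The same parse function also works on a log file read from disk. A minimal sketch, assuming a hypothetical Scrapyd-style log path:

from logparser import parse

# Hypothetical path to a Scrapyd job log; adjust to your own layout.
logfile = "logs/demo/test/2018-10-23_182826.log"

with open(logfile, encoding="utf-8") as f:
    d = parse(f.read(), headlines=5, taillines=5)

# Key progress indicators extracted from the log text.
print("elapsed:", d["elapsed"])
print("pages:", d["pages"], "items:", d["items"])
print("finish_reason:", d["finish_reason"])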