
Saving data crawled with Scrapy to MySQL (preventing duplicates)

Updated: 2018-03-31 13:03:13   Author: Waiting For You
This article explains how to save data crawled with Scrapy to MySQL while preventing duplicate rows. The example code is covered in detail and should be a useful reference for study or work; read on if you need it.

Preface

This article covers saving data crawled by Scrapy to MySQL while preventing duplicates, shared here for reference. Without further ado, let's look at the details.

1. Environment setup

     1. Install PHP, MySQL and phpMyAdmin with XAMPP

     2. Install Python 3 and pip

     3. Install pymysql (example install commands below)

     4. (Skip this step on Windows) I'm on a Mac, so I installed Homebrew and used brew to install Scrapy
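For reference, the Python-side dependencies can be installed with pip; treat these as a sketch, since the exact commands depend on your Python setup:

pip3 install scrapy
pip3 install pymysql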

2. Overall workflow

     1. Create the database and table, ready to receive the data

     2. Write the target URLs into the spider and make the network requests

     3. Process the returned data to extract the specific fields

     4. Save the extracted data to the database

2.1 Create the database

First create a database called scrapy, then create a table named article. We add a unique index on the body column to prevent duplicate rows from being inserted.

--
-- Database: `scrapy`
--
 
-- --------------------------------------------------------
 
--
-- Table structure for table `article`
--
 
CREATE TABLE `article` (
 `id` int(11) NOT NULL,
 `body` varchar(200) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
 `author` varchar(50) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
 `createDate` datetime NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
--
-- Indexes for table `article`
--
ALTER TABLE `article`
 ADD PRIMARY KEY (`id`),
 ADD UNIQUE KEY `uk_body` (`body`);
 
--
-- AUTO_INCREMENT for table `article`
-- (needed because the pipeline below inserts rows without an explicit id)
--
ALTER TABLE `article`
 MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;

Once that's done, the article table is ready.
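To see the deduplication at work before wiring up the spider, here is a minimal sketch (assuming the database above exists locally with the same root/123456 credentials used later in settings.py): running the identical insert twice leaves only one row.

import datetime
import pymysql

# connect with the same credentials that settings.py uses later
connect = pymysql.connect(host='localhost', user='root', passwd='123456',
                          db='scrapy', charset='utf8')
cursor = connect.cursor()

sql = ("insert into article (body, author, createDate) values (%s, %s, %s) "
       "on duplicate key update author=(author)")

# run the same insert twice; the unique index on `body` turns the
# second insert into a no-op update instead of a duplicate row
for _ in range(2):
    cursor.execute(sql, ('the same quote', 'someone', datetime.datetime.now()))
connect.commit()

cursor.execute("select count(*) from article where body = %s", ('the same quote',))
print(cursor.fetchone()[0])  # prints 1, not 2
connect.close()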

2.2 The overall structure of the crawler project


quotes_spider.py is the core of the project: it handles the network requests and processes the responses, then hands the assembled items to the pipeline, which takes care of saving them to the database so that crawling speed isn't affected.

The remaining files are described in the project layout below.
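For reference, a project generated with scrapy startproject tutorial looks roughly like this (quotes_spider.py is added by hand under spiders/):

tutorial/
    scrapy.cfg                # project configuration entry point
    tutorial/
        __init__.py
        items.py              # TutorialItem (section 2.4)
        pipelines.py          # TutorialPipeline (section 2.5)
        settings.py           # ITEM_PIPELINES and the MySQL settings (section 2.6)
        spiders/
            __init__.py
            quotes_spider.py  # the spider shown in section 2.3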

2.3 Write the target URLs into the spider and make the requests

import scrapy
from tutorial.items import TutorialItem


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        # the starting URL to crawl
        url = 'http://quotes.toscrape.com/tag/humor/'
        yield scrapy.Request(url)

    def parse(self, response):
        item = TutorialItem()
        # extract the quote text and author from each quote block
        for quote in response.css('div.quote'):
            item['body'] = quote.css('span.text::text').extract_first()
            item['author'] = quote.css('small.author::text').extract_first()
            yield item
        # follow the "next page" link, if there is one
        next_page = response.css('li.next a::attr("href")').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

start_requests is where you put the specific URLs you want to crawl.

parse is the core of the spider: it processes the returned response, yields the results as items, and then queues up the next page to crawl.
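If you want the spider to start from more than one page, start_requests can simply yield one Request per URL; a small sketch (the second tag URL is just a hypothetical example):

    def start_requests(self):
        # every yielded Request is scheduled by Scrapy and handed to parse()
        urls = [
            'http://quotes.toscrape.com/tag/humor/',
            'http://quotes.toscrape.com/tag/life/',  # hypothetical extra tag
        ]
        for url in urls:
            yield scrapy.Request(url, callback=self.parse)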

2.4 items

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class TutorialItem(scrapy.Item):
    # the two fields the spider fills in and the pipeline saves
    body = scrapy.Field()
    author = scrapy.Field()

2.5 pipelines

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import datetime
import logging

import pymysql

from tutorial import settings


class TutorialPipeline(object):
    def __init__(self):
        # open the MySQL connection once, when the pipeline is created
        self.connect = pymysql.connect(
            host=settings.MYSQL_HOST,
            port=settings.MYSQL_PORT,
            db=settings.MYSQL_DBNAME,
            user=settings.MYSQL_USER,
            passwd=settings.MYSQL_PASSWD,
            charset='utf8',
            use_unicode=True
        )
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        try:
            # the unique index on `body` plus ON DUPLICATE KEY UPDATE
            # keeps repeated quotes from being inserted twice
            self.cursor.execute(
                "insert into article (body, author, createDate) "
                "values (%s, %s, %s) "
                "on duplicate key update author=(author)",
                (item['body'],
                 item['author'],
                 datetime.datetime.now()
                 ))
            self.connect.commit()
        except Exception as error:
            # log any database error and keep processing items
            logging.error(error)
        return item

    def close_spider(self, spider):
        self.connect.close()
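If you would rather skip duplicates silently than touch the existing row, an INSERT IGNORE variant of process_item also works with the same unique index; this is a sketch, not the original code:

    def process_item(self, item, spider):
        try:
            # INSERT IGNORE silently drops the row when the unique
            # index on `body` is violated
            self.cursor.execute(
                "insert ignore into article (body, author, createDate) "
                "values (%s, %s, %s)",
                (item['body'], item['author'], datetime.datetime.now()))
            self.connect.commit()
        except Exception as error:
            logging.error(error)
        return item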

2.6 Configuration (settings.py)

ITEM_PIPELINES = {
 'tutorial.pipelines.TutorialPipeline':300
}
MYSQL_HOST = 'localhost'
MYSQL_DBNAME = 'scrapy'
MYSQL_USER = 'root'
MYSQL_PASSWD = '123456'
MYSQL_PORT = 3306

3. Run the spider

scrapy crawl quotes
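Once the crawl has finished, a quick way to confirm that no duplicates were written is to compare the total row count with the number of distinct bodies (a small sketch using the same connection settings as above):

import pymysql

connect = pymysql.connect(host='localhost', user='root', passwd='123456',
                          db='scrapy', charset='utf8')
cursor = connect.cursor()
# the two counts should match if deduplication worked
cursor.execute("select count(*), count(distinct body) from article")
print(cursor.fetchone())
connect.close()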

Summary

That's all for this article. I hope it's useful for your study or work. If you have any questions, feel free to leave a comment. Thanks for supporting 脚本之家.
