2023 Data Collection and Fusion Technology Practice: Assignment 3

Published: 2023-10-20 22:09:13  Author: 失重漂流

  • Assignment ①:

    • Requirement:

      Specify a website and crawl all of the images on it, for example the China Weather Network (

      http://www.weather.com.cn

      ). Use the Scrapy framework to implement both single-threaded and multi-threaded crawling.

      – Be sure to limit the crawl, e.g. cap the total number of pages crawled (last 2 digits of your student ID) and the total number of downloaded images (last 3 digits); see the settings sketch below the output note.

    • Output: print the downloaded image URLs to the console, store the downloaded images in an images subfolder, and provide screenshots.
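
      One way to enforce those caps, as referenced above, is a minimal sketch using Scrapy's built-in CloseSpider extension; this is not part of the project code below, and the numbers are placeholders for your own student-ID digits:

      # settings.py (sketch): stop the crawl automatically once the limits are reached
      CLOSESPIDER_PAGECOUNT = 20    # max pages crawled (last 2 digits of the student ID)
      CLOSESPIDER_ITEMCOUNT = 120   # max items (image URLs) scraped (last 3 digits)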

    • Gitee

      weather

    • Code

      Single-threaded:

      weather.py

      import scrapy
      from weather.items import WeatherItem
      
      
      class WeatherSpider(scrapy.Spider):
          name = 'weather'
          start_urls = ['http://p.weather.com.cn/zrds/index.shtml']
      
          def start_requests(self):
              num_pages = int(getattr(self, 'pages', 4))  # crawl 4 pages by default; can be overridden with -a pages=N
      
              for page in range(1, num_pages + 1):
                  if page == 1:
                      start_url = f'http://p.weather.com.cn/zrds/index.shtml'
                  else:
                      start_url = f'http://p.weather.com.cn/zrds/index_{page}.shtml'
                  yield scrapy.Request(start_url, callback=self.parse)
              
          def parse(self, response):
              # grab the src of every <img> on the page, make it absolute, and emit one item per URL
              image_urls = response.css('img::attr(src)').extract()
              for image_url in image_urls:
                  item = WeatherItem()
                  item['image_url'] = response.urljoin(image_url)
                  yield item
      

      items.py

      import scrapy
      
      class WeatherItem(scrapy.Item):
          image_url = scrapy.Field()
          image_path = scrapy.Field()  # set by the pipeline once the image has been downloaded
      

      pipelines.py

      import os
      import scrapy
      import logging
      from scrapy.pipelines.images import ImagesPipeline
      
      class WeatherImagesPipeline(ImagesPipeline):
          IMAGES_STORE = 'images'  # local directory for saved images (also set in settings.py)
          def get_media_requests(self, item, info):
              yield scrapy.Request(url=item['image_url'])
      
          def file_path(self, request, response=None, info=None, *, item=None):
              image_name = os.path.basename(request.url)
              return f'{image_name}'
      
          def item_completed(self, results, item, info):
              for success, file_info in results:
                  if success:
                      item['image_path'] = file_info['path']
                  else:
                      # pipelines have no self.logger attribute; use the logging module imported above
                      logging.getLogger(__name__).error(f"Image download failed for {item['image_url']}")
              return item
      

      settings.py

      CONCURRENT_REQUESTS = 16  
      
      ITEM_PIPELINES = {
          'weather.pipelines.WeatherImagesPipeline': 1,
      }
      IMAGES_STORE = 'images'  # local directory for saved images
      
      USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Edg/86.0.622.38'
      

      run.py

      #run.py
      from scrapy import cmdline
      
      cmdline.execute('scrapy crawl weather'.split())
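
      Since the spider reads its page count via getattr(self, 'pages', 4), the limit can also be passed on the command line instead of editing the code; a small variant of run.py using Scrapy's -a spider-argument option:

      # run.py variant: override the default page count via a spider argument
      from scrapy import cmdline

      cmdline.execute('scrapy crawl weather -a pages=4'.split())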
      

      Multi-threaded:

      Simply change the CONCURRENT_REQUESTS value in settings.py to 32:

      CONCURRENT_REQUESTS = 32  
      
      • Results

      • Reflections

        At first the images never ended up in the expected folder; I could only print the image URLs in the terminal (and they did open as valid images). After looking into the cause, I added the following to the pipeline:

        def file_path(self, request, response=None, info=None, *, item=None):
                image_name = os.path.basename(request.url)
                return f'{image_name}'
        

        After adding this, the crawled images showed up in the folder as expected. (By default, ImagesPipeline stores files under IMAGES_STORE/full/ with SHA1-hash file names, which is probably why they did not appear where I was looking until file_path was overridden.)

        As for the multi-threaded version, it is not done with threading the way urllib-based crawlers do it; Scrapy itself is already asynchronous, so it is enough to change the CONCURRENT_REQUESTS value.
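
        For reference, the relevant concurrency knobs in Scrapy are all plain settings; a sketch with arbitrary values, of which only CONCURRENT_REQUESTS is actually changed in this project:

        # settings.py (sketch): settings that govern Scrapy's asynchronous concurrency
        CONCURRENT_REQUESTS = 32             # global cap on concurrent requests
        CONCURRENT_REQUESTS_PER_DOMAIN = 16  # cap per target domain
        DOWNLOAD_DELAY = 0.25                # pause (seconds) between requests to the same site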

        Overall, this task gave me a basic understanding of single-threaded and multi-threaded (concurrent) crawling with Scrapy.

  • Assignment ②

    • Requirement: master the serialization and output of Item and Pipeline data in Scrapy; crawl stock information using the Scrapy framework + XPath + MySQL storage.

    • Candidate site: Eastmoney: https://www.eastmoney.com/

    • Output: store the data in MySQL; the storage and output format is as follows:

    • Column names should be in English and of your own design, e.g. 序号 as id, 股票代码 as bStockNo, ……

    • No.  Code    Name    Latest price  Change %  Change amount  Volume  Amplitude  High  Low    Open   Prev close
      1    688093  N世华    28.47         10.92     26.13万        7.6亿   22.34      32.0  28.08  30.20  17.55
      2……
    • Gitee

      stock

      • Code

        stock.py

        import scrapy
        import json
        from stock.items import StockItem
        
        class StockSpider(scrapy.Spider):
            name = "stock"
            allowed_domains = ["eastmoney.com"]
            
            def start_requests(self):
                base_url1 = 'http://95.push2.eastmoney.com/api/qt/clist/get?cb=jQuery112404577990037157569_1696660645140'
                base_url2 = '&pz=20&po=1&np=1&ut=bd1d9ddb04089700cf9c27f6f7426281&fltt=2&invt=2&wbp2u=|0|0|0|web&fid=f3&fs=m:0+t:6,m:0+t:80,m:1+t:2,m:1+t:23,m:0+t:81+s:2048&fields=f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f22,f11,f62,f128,f136,f115,f152&_=1696660645141'
                total_pages = 4  # crawl the first 4 pages
        
                for page_number in range(1, total_pages + 1):
                    page_url = f"{base_url1}&pn={page_number}{base_url2}"
                    yield scrapy.Request(page_url, callback=self.parse)
        
            def parse(self, response):
                # the API returns JSONP (wrapped in a jQuery callback), so slice out
                # the JSON object between the first '{' and the last '}' before parsing
                data = response.text
                json_data = json.loads(data[data.find('{'):data.rfind('}') + 1])
                stock_list = json_data['data']['diff']
        
                for stock in stock_list:
                    item = StockItem()
                    item['code'] = stock['f12']
                    item['name'] = stock['f14']
                    item['latest_price'] = stock['f2']
                    item['change_percent'] = stock['f3']
                    item['change_amount'] = stock['f4']
                    item['volume'] = stock['f5']
                    item['turnover'] = stock['f6']
                    item['amplitude'] = stock['f7']
                    item['highest'] = stock['f15']
                    item['lowest'] = stock['f16']
                    item['open_price'] = stock['f17']
                    item['close_price'] = stock['f18']
                    yield item
        
        

        items.py

        import scrapy
        
        class StockItem(scrapy.Item):
            code = scrapy.Field()
            name = scrapy.Field()
            latest_price = scrapy.Field()
            change_percent = scrapy.Field()
            change_amount = scrapy.Field()
            volume = scrapy.Field()
            turnover = scrapy.Field()
            amplitude = scrapy.Field()
            highest = scrapy.Field()
            lowest = scrapy.Field()
            open_price = scrapy.Field()
            close_price = scrapy.Field()
        

        pipelines.py

        import sqlite3
        
        class StockPipeline:
            def __init__(self):
                self.create_database()
        
            def create_database(self):
                self.conn = sqlite3.connect('stock.db')
                self.cursor = self.conn.cursor()
                self.cursor.execute('''
                    CREATE TABLE IF NOT EXISTS stocks (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        code TEXT,
                        name TEXT,
                        latest_price REAL,
                        change_percent REAL,
                        change_amount REAL,
                        volume INTEGER,
                        turnover REAL,
                        amplitude REAL,
                        highest REAL,
                        lowest REAL,
                        open_price REAL,
                        close_price REAL
                    )
                ''')
                self.conn.commit()
        
            def process_item(self, item, spider):
                self.save_stock_data_to_database(item)
                return item
        
            def save_stock_data_to_database(self, item):
                self.cursor.execute('''
                    INSERT INTO stocks (
                        code, name, latest_price, change_percent, change_amount,
                        volume, turnover, amplitude, highest, lowest, open_price, close_price
                    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                ''', (
                    item['code'], item['name'], item['latest_price'], item['change_percent'],
                    item['change_amount'], item['volume'], item['turnover'], item['amplitude'],
                    item['highest'], item['lowest'], item['open_price'], item['close_price']
                ))
                self.conn.commit()
        
                return item
        
            def close_spider(self, spider):
                self.conn.close()
        
        class TerminalOutputPipeline(object):
            def process_item(self, item, spider):
                # print each record to the terminal
                print(f"Stock code: {item['code']}")
                print(f"Stock name: {item['name']}")
                print(f"Latest price: {item['latest_price']}")
                print(f"Change %: {item['change_percent']}")
                print(f"Change amount: {item['change_amount']}")
                print(f"Volume: {item['volume']}")
                print(f"Turnover: {item['turnover']}")
                print(f"Amplitude: {item['amplitude']}")
                print(f"High: {item['highest']}")
                print(f"Low: {item['lowest']}")
                print(f"Open: {item['open_price']}")
                print(f"Prev close: {item['close_price']}")
                return item
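
        The assignment calls for MySQL storage, while the pipeline above uses SQLite for simplicity. A minimal sketch of the same pipeline against MySQL via pymysql; the host, credentials and database name are placeholders, not part of the project:

        import pymysql

        class StockMySQLPipeline:
            def open_spider(self, spider):
                # placeholder connection parameters; adjust to your own MySQL instance
                self.conn = pymysql.connect(host='localhost', user='root', password='password',
                                            database='stocks_db', charset='utf8mb4')
                self.cursor = self.conn.cursor()
                self.cursor.execute('''
                    CREATE TABLE IF NOT EXISTS stocks (
                        id INT AUTO_INCREMENT PRIMARY KEY,
                        code VARCHAR(16), name VARCHAR(64),
                        latest_price DOUBLE, change_percent DOUBLE, change_amount DOUBLE,
                        volume BIGINT, turnover DOUBLE, amplitude DOUBLE,
                        highest DOUBLE, lowest DOUBLE, open_price DOUBLE, close_price DOUBLE
                    )
                ''')
                self.conn.commit()

            def process_item(self, item, spider):
                self.cursor.execute('''
                    INSERT INTO stocks (code, name, latest_price, change_percent, change_amount,
                                        volume, turnover, amplitude, highest, lowest, open_price, close_price)
                    VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
                ''', (item['code'], item['name'], item['latest_price'], item['change_percent'],
                      item['change_amount'], item['volume'], item['turnover'], item['amplitude'],
                      item['highest'], item['lowest'], item['open_price'], item['close_price']))
                self.conn.commit()
                return item

            def close_spider(self, spider):
                self.conn.close()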
        

        settings.py

        USER_AGENT = 'Your User-Agent String'
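
        Only the USER_AGENT line is shown here; for the two pipelines above to run at all, ITEM_PIPELINES presumably also has to be enabled in settings.py, along these lines:

        # settings.py (sketch): enable both pipelines; lower number runs first
        ITEM_PIPELINES = {
            'stock.pipelines.StockPipeline': 300,
            'stock.pipelines.TerminalOutputPipeline': 400,
        }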
        

        run.py

        from scrapy import cmdline
        
        cmdline.execute(['scrapy', 'crawl', 'stock'])
        
        • Results (20 per page × 4 pages = 80 records)

        • Reflections

          This task took a very long time. In the lab session I first tried to crawl the dynamic page directly with Scrapy and kept getting no results. Following the teacher's hint, I fell back on the approach from the previous Eastmoney exercise: capture the underlying API request with the browser's developer tools and then rebuild it in Scrapy as a plain (static) request, which finally worked (the endpoint can also be checked outside Scrapy, as sketched below). A lot of time went into dead ends, but once I had the captured API the code changes went smoothly; the task consolidated how to store crawled data in a database and was a good refresher on packet capturing.
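
          A throwaway sketch for sanity-checking the captured endpoint before wiring it into the spider; it assumes the requests library is installed and reuses the URL fragments from stock.py above:

          import json
          import requests

          # same two URL fragments as in stock.py, joined around the page number
          base_url1 = 'http://95.push2.eastmoney.com/api/qt/clist/get?cb=jQuery112404577990037157569_1696660645140'
          base_url2 = '&pz=20&po=1&np=1&ut=bd1d9ddb04089700cf9c27f6f7426281&fltt=2&invt=2&wbp2u=|0|0|0|web&fid=f3&fs=m:0+t:6,m:0+t:80,m:1+t:2,m:1+t:23,m:0+t:81+s:2048&fields=f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f22,f11,f62,f128,f136,f115,f152&_=1696660645141'

          raw = requests.get(f'{base_url1}&pn=1{base_url2}', timeout=10).text
          payload = json.loads(raw[raw.find('{'):raw.rfind('}') + 1])  # strip the JSONP wrapper
          print(len(payload['data']['diff']))  # expect 20 stocks per page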

  • Assignment ③:

    • Requirement: master the serialization and output of Item and Pipeline data in Scrapy; crawl foreign-exchange data using the Scrapy framework + XPath + MySQL storage.
    • Candidate site: Bank of China: https://www.boc.cn/sourcedb/whpj/
    • Output:
      Currency      TBP     CBP     TSP     CSP     Time
      阿联酋迪拉姆   198.58  192.31  199.98  206.59  11:27:14
  • Gitee

bank

  • Code

    bank.py

    import scrapy
    from bank.items import BankItem
    
    class BankSpider(scrapy.Spider):
        name = 'bank'
        start_urls = ['https://www.boc.cn/sourcedb/whpj/index.html']
        
        def start_requests(self):
            num_pages = int(getattr(self, 'pages', 4))  
    
            for page in range(1, num_pages + 1):
                if page == 1:
                    start_url = f'https://www.boc.cn/sourcedb/whpj/index.html'
                else:
                    start_url = f'https://www.boc.cn/sourcedb/whpj/index_{page-1}.html'
                yield scrapy.Request(start_url, callback=self.parse)
        
        def parse(self, response):
            # skip the header row (the <tr> at position 1); each remaining row is one currency quotation
            bank_list = response.xpath('//tr[position()>1]')
            for bank in bank_list:
                item = BankItem()
                item['Currency'] = bank.xpath('.//td[1]/text()').get()
                item['TBP'] = bank.xpath('.//td[2]/text()').get()
                item['CBP'] = bank.xpath('.//td[3]/text()').get()
                item['TSP'] = bank.xpath('.//td[4]/text()').get()
                item['CSP'] = bank.xpath('.//td[5]/text()').get()
                item['Time'] = bank.xpath('.//td[8]/text()').get()
                
                yield item
    

    items.py

    import scrapy
    class BankItem(scrapy.Item):
        Currency = scrapy.Field()
        TBP = scrapy.Field()
        CBP = scrapy.Field()
        TSP = scrapy.Field()
        CSP = scrapy.Field()
        Time = scrapy.Field()
    

    pipelines.py

    from itemadapter import ItemAdapter
    import sqlite3
    
    class BankPipeline:
        def open_spider(self, spider):
            self.conn = sqlite3.connect('bank.db')
            self.cursor = self.conn.cursor()
            self.create_database()
    
        def create_database(self):
            self.cursor.execute('''
                CREATE TABLE IF NOT EXISTS banks (
                    Currency TEXT ,
                    TBP REAL,
                    CBP REAL,
                    TSP REAL,
                    CSP REAL,
                    Time TEXT
                )
            ''')
            self.conn.commit()
    
        def process_item(self, item, spider):
            self.cursor.execute('''
                INSERT INTO banks (
                    Currency,
                    TBP,
                    CBP,
                    TSP,
                    CSP,
                    Time
                ) VALUES (?, ?, ?, ?, ?, ?)
            ''', (item['Currency'],item['TBP'],item['CBP'],item['TSP'],item['CSP'],item['Time'] ))
            self.conn.commit()
            return item
    
        def close_spider(self, spider):
            self.conn.close()
    

    settings.py

    CONCURRENT_REQUESTS = 1
    DOWNLOAD_DELAY = 1
    
    ITEM_PIPELINES = {
        'bank.pipelines.BankPipeline': 300,
    }
    

    run.py

    from scrapy import cmdline
    
    cmdline.execute(['scrapy', 'crawl', 'bank'])
    
    • Results

    • Reflections

      This task had a few small XPath hiccups. At first the bank_list XPath copied from the browser, "/html/body/div/div[5]/div[1]/div[2]/table/tbody/tr[position()>1]", returned nothing, but switching to "//tr[position()>1]" worked. In hindsight the likely reason is that the <tbody> element is inserted by the browser when it renders the page and is not present in the raw HTML that Scrapy downloads, so the copied path matches nothing (see the snippet below). Another thing to watch is that the row at position 1 is the table header, so the position()>1 condition is needed to drop it; otherwise the row does not match the table schema and cannot be stored in the database. Also, the number after index_ in the URL for page 2 is 1 rather than 2, so pages above 1 need a page-1 adjustment. Compared with the first two tasks this one went fairly smoothly, since I reused the framework from task 2 with small changes; that said, every page of this site seems to show the same content, so crawling 4 pages just repeats the first page four times. It was also a good refresher on pagination.
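
      A small self-contained illustration of the suspected cause: browsers insert an implied <tbody> when rendering a table, while Scrapy's lxml-based parser typically keeps the HTML as it was sent.

      from scrapy.selector import Selector

      # raw HTML as a server typically sends it: a <table> with no <tbody> tag
      html = '<table><tr><th>Currency</th></tr><tr><td>USD</td></tr></table>'
      sel = Selector(text=html)

      # the browser-style path that assumes a <tbody> matches nothing here...
      print(sel.xpath('//table/tbody/tr[position()>1]').getall())   # []
      # ...while the looser selector (the one used in bank.py) finds the data row:
      print(sel.xpath('//tr[position()>1]/td/text()').getall())     # ['USD']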