• Run 'pip install logparser' on host 'scrapyd-1:6800' and start the 'logparser' command, or wait until LogParser parses the log.
  • Failed to request logfile from http://scrapyd-1:6800/logs/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2 with extensions ['.log', '.log.gz', '.txt']
  • Using backup stats: LogParser v0.8.2, last updated at 2025-07-17 11:01:07, /var/lib/scrapydweb/data/stats/scrapyd-1_6800/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2.json
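When the logfile itself cannot be fetched, the page falls back to LogParser's per-job backup stats JSON (path shown above). A minimal sketch of reading such a file with the stdlib, assuming field names like those displayed on this page (the exact JSON schema is an assumption; inspect a real file under /var/lib/scrapydweb/data/stats/ before relying on it). The sample is written to a temp file purely for illustration:

```python
import json
import tempfile

# Hypothetical subset of LogParser's per-job stats JSON; field names are
# taken from this page, the nesting is an assumption.
sample = {
    "project": "sourcing_v2",
    "spider": "bca_login_check.gb",
    "runtime": "0:01:21",
    "finish_reason": "finished",
    "log_categories": {"error_logs": {"count": 1}},
}

# Stand-in for the real backup stats file on the ScrapydWeb host.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

with open(path) as f:
    stats = json.load(f)

print(stats["spider"], stats["runtime"], stats["log_categories"]["error_logs"]["count"])
```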

PROJECT (sourcing_v2), SPIDER (bca_login_check.gb)

  • project: sourcing_v2
    spider: bca_login_check.gb
    job: 23e82ba862fd11f0808ee62340b0abf2
    first_log_time: 2025-07-17 10:59:45
    latest_log_time: 2025-07-17 11:01:06
    runtime: 0:01:21
    crawled_pages: 0
    scraped_items: 0
    shutdown_reason: N/A
    finish_reason: finished
    log_critical_count: 0
    log_error_count: 1
    log_warning_count: 0
    log_redirect_count: 0
    log_retry_count: 0
    log_ignore_count: 0
    latest_crawl:
    latest_scrape:
    latest_log:
    current_time:
    latest_item: N/A
    • WARNING+

    • error_logs
      1 in total

      2025-07-17 11:01:06 [bca_login_check.gb] ERROR: Failed to load login page: Page.goto: Timeout 60000ms exceeded.
      Call log:
      navigating to "https://www.bca.co.uk/", waiting until "load"
      

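The error_logs category above is built by scanning the raw logfile for standard Scrapy log records. A minimal sketch of the same extraction with a stdlib regex, assuming Scrapy's default `<timestamp> [<name>] LEVEL: message` line format; the sample text is copied from this job's log:

```python
import re

# Two real lines from this job's log, plus the Playwright call-log
# continuation lines that carry no timestamp of their own.
log_text = """\
2025-07-17 11:00:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-07-17 11:01:06 [bca_login_check.gb] ERROR: Failed to load login page: Page.goto: Timeout 60000ms exceeded.
Call log:
navigating to "https://www.bca.co.uk/", waiting until "load"
"""

# Matches Scrapy's default log line layout: "<ts> [<name>] LEVEL: msg".
LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<name>[^\]]+)\] (?P<level>[A-Z]+): (?P<msg>.*)$"
)

# Keep only ERROR records; continuation lines fail the match and are skipped.
errors = [
    m.groupdict()
    for line in log_text.splitlines()
    if (m := LINE.match(line)) and m.group("level") == "ERROR"
]

print(len(errors), errors[0]["name"])
```

A fuller parser would attach the un-timestamped continuation lines ("Call log: …") to the preceding record, which is how the multi-line error above should be read.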
    • latest_stat

      2025-07-17 11:00:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    • scrapy_version

      2.11.2
    • telnet_console

      127.0.0.1:6023
    • telnet_password

      e131d1b67403f266
    • Tail

      2025-07-17 10:59:45 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: SourcingV2)
      2025-07-17 10:59:45 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.12.6, cssselect 1.2.0, parsel 1.9.1, w3lib 2.1.2, Twisted 24.3.0, Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0], pyOpenSSL 24.1.0 (OpenSSL 3.2.1 30 Jan 2024), cryptography 42.0.7, Platform Linux-5.15.0-1090-azure-x86_64-with-glibc2.36
      2025-07-17 10:59:45 [urllib3.connectionpool] DEBUG: Starting new HTTPS connection (1): api.alx.test-cluster.alx.tech:443
      2025-07-17 10:59:45 [urllib3.connectionpool] DEBUG: https://api.alx.test-cluster.alx.tech:443 "POST /car-purchase/debug/supplier-credential-verification-task HTTP/1.1" 204 0
      2025-07-17 10:59:45 [scrapy.addons] INFO: Enabled addons:
      []
      2025-07-17 10:59:45 [asyncio] DEBUG: Using selector: EpollSelector
      2025-07-17 10:59:45 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
      2025-07-17 10:59:45 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
      2025-07-17 10:59:45 [scrapy.extensions.telnet] INFO: Telnet Password: e131d1b67403f266
      2025-07-17 10:59:45 [scrapy.middleware] INFO: Enabled extensions:
      ['scrapy.extensions.corestats.CoreStats',
       'scrapy.extensions.telnet.TelnetConsole',
       'scrapy.extensions.memusage.MemoryUsage',
       'scrapy.extensions.feedexport.FeedExporter',
       'scrapy.extensions.logstats.LogStats',
       'scrapy.extensions.closespider.CloseSpider']
      2025-07-17 10:59:45 [scrapy.crawler] INFO: Overridden settings:
      {'BOT_NAME': 'SourcingV2',
       'CLOSESPIDER_TIMEOUT': 7200,
       'FEED_EXPORT_ENCODING': 'utf-8',
       'LOG_FILE': '/var/log/scrapyd/logs/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2.log',
       'LOG_FORMATTER': 'crawlers.log_formatter.SourcingLogFormatter',
       'NEWSPIDER_MODULE': 'spiders',
       'REQUEST_FINGERPRINTER_CLASS': 'scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter',
       'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
       'SPIDER_MODULES': ['spiders', 'auth_check'],
       'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
       'USER_AGENT': ''}
      2025-07-17 10:59:45 [scrapy_zyte_api.handler] INFO: Using a Zyte API key starting with '5857011'
      2025-07-17 10:59:45 [scrapy_zyte_api.handler] INFO: Using a Zyte API key starting with '5857011'
      2025-07-17 10:59:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
      ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
       'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
       'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
       'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
       'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
       'scrapy_zyte_api.ScrapyZyteAPIDownloaderMiddleware',
       'scrapy.downloadermiddlewares.retry.RetryMiddleware',
       'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
       'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
       'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
       'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
       'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
       'scrapy.downloadermiddlewares.stats.DownloaderStats']
      2025-07-17 10:59:45 [scrapy.middleware] INFO: Enabled spider middlewares:
      ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
       'scrapy.spidermiddlewares.referer.RefererMiddleware',
       'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
       'scrapy.spidermiddlewares.depth.DepthMiddleware']
      2025-07-17 10:59:45 [scrapy.middleware] INFO: Enabled item pipelines:
      []
      2025-07-17 10:59:45 [scrapy.core.engine] INFO: Spider opened
      2025-07-17 10:59:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
      2025-07-17 10:59:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
      2025-07-17 10:59:45 [scrapy-playwright] INFO: Starting download handler
      
      2025-07-17 10:59:45 [scrapy-playwright] INFO: Starting download handler
      2025-07-17 11:00:05 [scrapy-playwright] INFO: Launching browser firefox
      2025-07-17 11:00:05 [scrapy-playwright] INFO: Browser firefox launched
      2025-07-17 11:00:05 [scrapy-playwright] DEBUG: Browser context started: 'default' (persistent=False, remote=False)
      2025-07-17 11:00:06 [scrapy-playwright] DEBUG: [Context=default] New page created, page count is 1 (1 for all contexts)
      2025-07-17 11:00:06 [scrapy-playwright] DEBUG: [Context=default] Request: <GET https://www.bca.co.uk/> (resource type: document)
      2025-07-17 11:00:06 [scrapy-playwright] DEBUG: [Context=default] Response: <407 https://www.bca.co.uk/>
      
      2025-07-17 11:00:06 [scrapy-playwright] DEBUG: [Context=default] Request: <GET https://www.bca.co.uk/> (resource type: document)
      
      2025-07-17 11:00:12 [scrapy-playwright] DEBUG: [Context=default] Response: <403 https://www.bca.co.uk/>
      
      2025-07-17 11:00:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
      2025-07-17 11:01:06 [bca_login_check.gb] ERROR: Failed to load login page: Page.goto: Timeout 60000ms exceeded.
      Call log:
      navigating to "https://www.bca.co.uk/", waiting until "load"
      
      2025-07-17 11:01:06 [scrapy.core.engine] INFO: Closing spider (finished)
      2025-07-17 11:01:06 [scrapy.extensions.feedexport] INFO: Stored jsonlines feed (0 items) in: file:///var/lib/scrapyd/items/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2.jl
      2025-07-17 11:01:06 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
      {'downloader/exception_count': 1,
       'downloader/exception_type_count/playwright._impl._errors.TimeoutError': 1,
       'downloader/request_bytes': 169,
       'downloader/request_count': 1,
       'downloader/request_method_count/GET': 1,
       'elapsed_time_seconds': 81.35733,
       'feedexport/success_count/FileFeedStorage': 1,
       'finish_reason': 'finished',
       'finish_time': datetime.datetime(2025, 7, 17, 11, 1, 6, 770784, tzinfo=datetime.timezone.utc),
       'log_count/DEBUG': 9,
       'log_count/ERROR': 1,
       'log_count/INFO': 18,
       'memusage/max': 115372032,
       'memusage/startup': 111591424,
       'playwright/context_count': 1,
       'playwright/context_count/max_concurrent': 1,
       'playwright/context_count/persistent/False': 1,
       'playwright/context_count/remote/False': 1,
       'playwright/page_count': 1,
       'playwright/page_count/max_concurrent': 1,
       'playwright/request_count': 2,
       'playwright/request_count/method/GET': 2,
       'playwright/request_count/navigation': 2,
       'playwright/request_count/resource_type/document': 2,
       'playwright/response_count': 2,
       'playwright/response_count/method/GET': 2,
       'playwright/response_count/resource_type/document': 2,
       'scheduler/dequeued': 1,
       'scheduler/dequeued/memory': 1,
       'scheduler/enqueued': 1,
       'scheduler/enqueued/memory': 1,
       'start_time': datetime.datetime(2025, 7, 17, 10, 59, 45, 413454, tzinfo=datetime.timezone.utc)}
      2025-07-17 11:01:06 [scrapy.core.engine] INFO: Spider closed (finished)
      
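Note that the job ends with finish_reason 'finished' even though zero pages were crawled and the only download attempt raised a Playwright TimeoutError, so finish_reason alone is a poor health signal. A minimal sketch of deriving a healthier verdict from a subset of the stats dict dumped above (values copied from this log; treating a missing item count as zero is an assumption based on Scrapy omitting zero-valued counters):

```python
from datetime import datetime, timezone

# Subset of the Scrapy stats dict dumped in the log tail above.
stats = {
    "downloader/exception_count": 1,
    "downloader/exception_type_count/playwright._impl._errors.TimeoutError": 1,
    "elapsed_time_seconds": 81.35733,
    "finish_reason": "finished",
    "log_count/ERROR": 1,
    "start_time": datetime(2025, 7, 17, 10, 59, 45, 413454, tzinfo=timezone.utc),
    "finish_time": datetime(2025, 7, 17, 11, 1, 6, 770784, tzinfo=timezone.utc),
}

# Wall-clock runtime, recomputed from the timestamps rather than trusted
# from elapsed_time_seconds.
runtime = stats["finish_time"] - stats["start_time"]

# 'finished' only means the spider closed cleanly; call the run healthy
# only if it logged no errors and actually scraped something.
healthy = (
    stats["log_count/ERROR"] == 0
    and stats.get("item_scraped_count", 0) > 0  # counter absent here -> 0 items
)

print(f"runtime={runtime} healthy={healthy}")
```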
    • Log

      /1/log/utf8/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2/?job_finished=True

    • Source

      http://scrapyd-1:6800/logs/sourcing_v2/bca_login_check.gb/23e82ba862fd11f0808ee62340b0abf2.log

  • sourcelog
    last_update_time: 2025-07-17 11:01:06
    last_update_timestamp: 1752750066
    downloader/exception_count: 1
    downloader/exception_type_count/playwright._impl._errors.TimeoutError: 1
    downloader/request_bytes: 169
    downloader/request_count: 1
    downloader/request_method_count/GET: 1
    elapsed_time_seconds: 81.35733
    feedexport/success_count/FileFeedStorage: 1
    finish_reason: finished
    finish_time: datetime.datetime(2025, 7, 17, 11, 1, 6, 770784, tzinfo=datetime.timezone.utc)
    log_count/DEBUG: 9
    log_count/ERROR: 1
    log_count/INFO: 18
    memusage/max: 115372032
    memusage/startup: 111591424
    playwright/context_count: 1
    playwright/context_count/max_concurrent: 1
    playwright/context_count/persistent/False: 1
    playwright/context_count/remote/False: 1
    playwright/page_count: 1
    playwright/page_count/max_concurrent: 1
    playwright/request_count: 2
    playwright/request_count/method/GET: 2
    playwright/request_count/navigation: 2
    playwright/request_count/resource_type/document: 2
    playwright/response_count: 2
    playwright/response_count/method/GET: 2
    playwright/response_count/resource_type/document: 2
    scheduler/dequeued: 1
    scheduler/dequeued/memory: 1
    scheduler/enqueued: 1
    scheduler/enqueued/memory: 1
    start_time: datetime.datetime(2025, 7, 17, 10, 59, 45, 413454, tzinfo=datetime.timezone.utc)