Make Concurrent Requests in Ruby

  • Author: Josselin Liebe, 4 months ago
    Our API is designed to let you perform multiple concurrent scraping operations. This allows you to speed up scraping for hundreds, thousands, or even millions of pages per day, depending on your plan.

    The higher your concurrent requests limit, the more calls you can run in parallel, and the faster you can scrape.
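    If you need to stay under your plan's concurrency limit, one common pattern is a fixed-size pool of worker threads pulling URLs from a shared queue. The sketch below illustrates the idea; the limit of 4 and the page URLs are made up, and the real HTTP call is stubbed out so the example runs offline:

    ```ruby
    # Hypothetical concurrency cap: set this to your plan's limit.
    MAX_CONCURRENCY = 4

    # Example URLs (invented for illustration).
    urls = (1..10).map { |i| "https://www.piloterr.com/page-#{i}" }

    queue = Queue.new
    urls.each { |u| queue << u }
    results = Queue.new

    workers = MAX_CONCURRENCY.times.map do
      Thread.new do
        # pop(true) is non-blocking and raises ThreadError when the queue
        # is empty; the rescue turns that into nil, which ends the loop.
        while (url = (queue.pop(true) rescue nil))
          # In real code you would call your request helper here; we stub it.
          results << "fetched #{url}"
        end
      end
    end

    workers.each(&:join)
    puts "Fetched #{results.size} pages with at most #{MAX_CONCURRENCY} threads"
    ```

    With this shape, no more than MAX_CONCURRENCY requests are ever in flight at once, however many URLs you enqueue.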

    Making concurrent requests in Ruby is simple: just create threads for our scraping functions! The code below demonstrates how to make two concurrent requests to www.piloterr.com and display the HTML content of each page:

    require 'net/http'
    require 'net/https'
    require 'addressable/uri' # third-party gem: gem install addressable

    def send_request(query)
      uri = Addressable::URI.parse("https://piloterr.com/api/v2/website/crawler")
      x_api_key = "YOUR-X-API-KEY" # ⚠️ Don't forget to add your API token here!
      uri.query_values = { 'x_api_key' => x_api_key, 'query' => query }
      uri = URI(uri)

      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = true
      http.verify_mode = OpenSSL::SSL::VERIFY_PEER

      puts "Sending the request to " + query
      req = Net::HTTP::Get.new(uri)
      res = http.request(req)
      puts res.body
    rescue StandardError => e
      puts "HTTP Request failed (#{e.message})"
    end

    first_request = Thread.new { send_request("https://www.piloterr.com") }
    second_request = Thread.new { send_request("https://www.piloterr.com/blog") }
    first_request.join
    second_request.join
    puts "Process Ended"
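    If you want the threads to return the response bodies instead of printing them, `Thread#value` joins the thread and hands back the block's result. A minimal sketch of that pattern; the `fetch` lambda is a stand-in for a real HTTP call so the example runs offline:

    ```ruby
    # Stand-in for a real request helper: returns a fake body.
    fetch = ->(url) { "<html>#{url}</html>" }

    threads = [
      "https://www.piloterr.com",
      "https://www.piloterr.com/blog"
    ].map { |url| Thread.new { fetch.call(url) } }

    # Thread#value joins each thread and returns the block's result,
    # re-raising any exception that was raised inside the thread.
    bodies = threads.map(&:value)
    puts bodies.length # → 2
    ```

    This keeps error handling in the calling thread: an exception raised inside a worker surfaces when you call `value`, rather than being silently swallowed.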

    This guide showed how to run concurrent requests efficiently in Ruby with your x_api_key, using www.piloterr.com as the target of your scraping operations.