Senior Backend Developer – Web Crawling (5 to 9 yrs)

Company: AIMLEAP
Job Description:
Experience: 5+ Years

Location: Remote / Bengaluru

Mode of Engagement: Full-time

Number of Positions: 2 to 5

Educational Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field

Industry: IT Services / Data Engineering / AI & Automation

Notice Period: Immediate to 30 Days Preferred

What We Are Looking For

  • 5+ years of strong backend development experience with deep expertise in Python and scalable system design.
  • Proven hands-on experience in large-scale web crawling using Scrapy, Requests, Playwright, and Selenium, including handling dynamic and JavaScript-heavy websites.
  • Strong understanding of proxy/IP rotation, anti-bot mechanisms, session & cookie management, and enterprise-grade data extraction workflows.
  • Ability to design, optimize, and maintain high-performance, distributed crawling infrastructures for structured and reliable data pipelines.
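The proxy/IP rotation called out above can be sketched as a minimal round-robin pool. This is an illustrative sketch only: the proxy URLs are hypothetical placeholders, and a production rotator would add health checks, ban tracking, and per-domain throttling.

```python
import itertools

class ProxyRotator:
    """Cycle through a fixed proxy pool, one proxy per request."""

    def __init__(self, proxies):
        self._pool = itertools.cycle(proxies)

    def next_proxy(self):
        # Return the next proxy URL in round-robin order.
        return next(self._pool)

# Hypothetical proxy endpoints, for illustration only.
rotator = ProxyRotator([
    "http://proxy-a:8080",
    "http://proxy-b:8080",
    "http://proxy-c:8080",
])

for _ in range(4):
    print(rotator.next_proxy())
```

With a client such as `requests`, the returned URL would typically be passed per request as `proxies={"http": rotator.next_proxy()}`.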

Responsibilities

  • Design, develop, and maintain large-scale web crawling and scraping systems.
  • Build scalable backend services and crawling architectures using Python.
  • Implement and optimize crawling frameworks using Scrapy and Requests.
  • Handle dynamic and JavaScript-rendered websites using Playwright and Selenium.
  • Develop and manage proxy rotation, IP management, and anti-bot bypass mechanisms.
  • Implement authentication flows, cookie persistence, and session handling strategies.
  • Build distributed crawling systems for enterprise-scale data extraction.
  • Ensure data validation, normalization, and structured storage pipelines.
  • Monitor crawler performance, logging, retries, error handling, and system stability.
  • Collaborate with data engineering and AI teams for downstream data processing and automation.
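The retry and error-handling responsibility above can be illustrated with a small exponential-backoff wrapper. This is a minimal sketch, not the team's actual implementation; `fetch` stands in for any network call, and the flaky fetch below only simulates transient failures.

```python
import random
import time

def fetch_with_retries(fetch, max_attempts=4, base_delay=0.5):
    """Call fetch() until it succeeds, backing off exponentially with jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise  # Retries exhausted: surface the last error to the caller.
            # Delay doubles each attempt, plus jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky fetch: fails twice, then returns a page.
attempts = {"n": 0}
def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "<html>ok</html>"

print(fetch_with_retries(flaky_fetch, base_delay=0.01))  # <html>ok</html>
```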

Qualifications

  • 5+ years of backend development experience with strong Python expertise.
  • Proven experience with Scrapy, Requests, Playwright, and Selenium.
  • Strong understanding of HTTP protocols, headers, sessions, cookies, and browser behavior.
  • Experience implementing proxy rotation, IP management, and rate-limit handling.
  • Familiarity with CAPTCHA handling and anti-detection strategies.
  • Experience building large-scale or distributed crawling systems.
  • Strong knowledge of databases such as PostgreSQL or MongoDB.
  • Experience deploying and managing applications on AWS or similar cloud platforms.
  • Strong analytical, debugging, and performance optimization skills.
  • Excellent logical thinking and problem-solving ability.
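The sessions-and-cookies qualification above can be sketched with the standard library's `http.cookiejar`. This is an offline illustration: the cookie is built by hand (normally the server would set it via `Set-Cookie`), and `example.com` is a placeholder domain.

```python
from http.cookiejar import Cookie, CookieJar
import urllib.request

def make_cookie(name, value, domain):
    # Hand-built Cookie for demonstration; servers normally set these.
    return Cookie(
        version=0, name=name, value=value, port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True, secure=False, expires=None,
        discard=True, comment=None, comment_url=None, rest={},
    )

jar = CookieJar()
jar.set_cookie(make_cookie("session_id", "abc123", "example.com"))

# Attach the jar to a request: the Cookie header is added automatically,
# which is the persistence mechanism crawlers rely on across page fetches.
req = urllib.request.Request("http://example.com/data")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))  # session_id=abc123
```

In practice a crawler would let the jar capture cookies from responses and replay them on subsequent requests to keep an authenticated session alive.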

Posted: February 24th, 2026