HEADLESS CHROME

HIGH RISK ⚡ AUTOMATION AGENT

Automated headless Chrome browsers — commonly used for scraping, testing, and bot activity

ORGANIZATION
Unknown
FIRST SEEN
2017-06
RESPECTS ROBOTS.TXT
✗ NO
DOCUMENTATION
Not published

📡 HEADLESS CHROME USER-AGENT STRING

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/120.0.0.0 Safari/537.36

This is the User-Agent header sent by Headless Chrome in HTTP requests. Use this to identify Headless Chrome in your server access logs.
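A minimal sketch of log-side identification, matching the "HeadlessChrome/&lt;version&gt;" token shown above (the helper name is illustrative, not part of any published API):

```javascript
// Flag requests from Headless Chrome by User-Agent string.
// Matches the "HeadlessChrome/<version>" token from the UA above.
// Note: the UA is trivially spoofed, so treat this as a first-pass filter.
const HEADLESS_RE = /HeadlessChrome\/[\d.]+/;

function isHeadlessChromeUA(userAgent) {
  return HEADLESS_RE.test(userAgent || "");
}

console.log(isHeadlessChromeUA(
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/120.0.0.0 Safari/537.36"
)); // true
```

Run this over the User-Agent column of your access logs to get a baseline count of self-identifying headless traffic.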

📋 ABOUT HEADLESS CHROME

Headless Chrome refers to Google Chrome running without a visible graphical interface, commonly used for browser automation. While headless Chrome has many legitimate uses — including automated testing, server-side rendering, PDF generation, and web monitoring — it is also the primary tool used for large-scale web scraping, credential stuffing attacks, and other forms of automated abuse.

Detecting headless Chrome is one of the most challenging aspects of bot management because it fully renders JavaScript, executes page scripts, and can be configured to closely mimic real user behavior. While basic detection can check for the 'HeadlessChrome' User-Agent string or navigator.webdriver property, sophisticated operators spoof these signals using tools like puppeteer-extra-plugin-stealth.

NORAD.io classifies headless Chrome traffic as high risk due to its association with scraping and abuse scenarios. The NORAD radar network uses behavioral analysis, browser fingerprinting, and network-level signals to identify headless Chrome traffic even when common fingerprinting countermeasures are deployed. Sites experiencing automated access patterns should consider implementing NORAD's detection capabilities.

🎯 HOW TO DETECT HEADLESS CHROME

  • Check for 'HeadlessChrome' in the User-Agent string (easily spoofed)
  • Test the navigator.webdriver property, which is true in automated Chrome
  • Check for missing browser plugins (navigator.plugins.length === 0)
  • Detect Chrome DevTools Protocol connections
  • Analyze the window.chrome object for missing properties
  • Check for the default automation viewport size (800x600 in headless Chrome)
  • Behavioral analysis: unrealistically fast page interactions, no mouse movements
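The client-side checks above can be sketched as a single function. This is an illustrative sketch, not NORAD's detection code: the `nav` and `win` parameters stand in for the browser's navigator and window objects so the logic is testable outside a browser, and the signal names are made up for this example.

```javascript
// Collect simple headless-Chrome signals from navigator/window-like objects.
// More signals returned means the session is more likely automated; any one
// signal alone can be spoofed or can appear in a real browser.
function headlessSignals(nav, win) {
  const signals = [];
  if (nav.webdriver === true) signals.push("webdriver");          // automated Chrome sets this
  if ((nav.plugins || []).length === 0) signals.push("no-plugins");
  if (!win.chrome) signals.push("no-window-chrome");              // window.chrome missing/stripped
  if (win.innerWidth === 800 && win.innerHeight === 600) {
    signals.push("default-viewport");                             // common automation default
  }
  return signals;
}
```

In a real page you would call `headlessSignals(navigator, window)` and report the result to your server; stealth plugins patch several of these properties, which is why the behavioral and network-level signals in the list above matter.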

🔄 CRAWL BEHAVIOR

Varies widely — can range from polite single-page requests to aggressive high-volume scraping. Fully renders JavaScript and executes page scripts. Often used from cloud infrastructure (AWS, GCP, Azure). No standardized crawl behavior since it's a tool rather than a specific bot.

PURPOSE

General-purpose browser automation. Legitimate uses include testing, monitoring, and screenshot generation. Illegitimate uses include content scraping, credential stuffing, ad fraud, and web attack automation.

🤖 ROBOTS.TXT CONFIGURATION

# Headless Chrome does not check robots.txt by default.
# Detection requires JavaScript-based challenges or behavioral analysis.
# Consider implementing bot detection middleware.

⚠ Headless Chrome may not fully respect robots.txt. Consider supplementing with IP-level blocking or bot detection middleware.
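One possible shape for such middleware, sketched in the Express-style (req, res, next) convention; this is a hypothetical example, not NORAD's middleware, and it only catches traffic that self-identifies in its User-Agent:

```javascript
// Express-style middleware sketch: reject requests whose User-Agent
// declares HeadlessChrome. Spoofed UAs pass through, so pair this with
// behavioral or fingerprint-based detection rather than relying on it alone.
function blockHeadlessChrome(req, res, next) {
  const ua = req.headers["user-agent"] || "";
  if (/HeadlessChrome\//.test(ua)) {
    res.statusCode = 403;
    res.end("Forbidden");
    return;
  }
  next();
}
```

Mounted early in the request chain (e.g. `app.use(blockHeadlessChrome)` in an Express app), this gives an IP-log-friendly 403 for the naive case while letting spoof-aware layers handle the rest.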

PROTECT YOUR WEBSITE

Deploy SiteTrust to monitor and control AI bot access to your site with the Agent Passport Standard.

INSTALL SITETRUST →