You moved your mouse. You clicked. Then a wall appeared. You did nothing wrong, yet the page doubts you today.
Across major UK news sites, including titles under News Group Newspapers, readers now face tougher human checks as publishers push back against automated scraping. The message looks abrupt. The logic behind it runs deeper: keep bots out, keep journalism funded, and stop unauthorised AI training runs.
What triggered the roadblock
The screen you saw signals that the site flagged your activity as automated. That can happen for ordinary reasons: you loaded many pages quickly, you used a VPN, or your browser blocked scripts or cookies. Security filters read those signals and, in a split second, decide whether to ask for proof that you are human.
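To make that "split second" concrete, here is a minimal sketch of one common building block, a sliding-window rate check. Everything in it is an assumption for illustration: the window, the threshold, and the function names are invented, and production filters weigh many more signals than request rate.

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate check. The window and threshold
# are invented numbers; real filters combine many more signals.
WINDOW_SECONDS = 10
MAX_REQUESTS = 20

_history: dict[str, deque] = defaultdict(deque)

def needs_human_check(client_ip: str) -> bool:
    now = time.monotonic()
    log = _history[client_ip]
    log.append(now)
    # Discard timestamps older than the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    # Too many recent requests -> ask for proof of humanity.
    return len(log) > MAX_REQUESTS
```

A reader who loads many pages quickly crosses a threshold like this exactly as described above, and waiting a minute empties the window, which is why "pause and retry" works.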
Automated scraping and text or data mining, including activity for AI, machine learning and LLMs, violate News Group Newspapers’ terms.
The publisher’s policy bans bots from collecting or mining its content by any automated means, whether directly or through a third-party service. The terms also cover commercial uses. If a legitimate reader gets caught in the net, support exists. If a company wants licensed access, a dedicated address handles requests.
The policy in plain words
News Group Newspapers states that it prohibits automated access, collection, and text/data mining of its content. That includes AI training runs, machine learning pipelines, and LLM ingestion. The message repeats a core clause found in its terms and conditions. For commercial licensing enquiries, the company lists [email protected]. For customer support, it gives [email protected].
See a verification screen? It doesn’t always mean you did something wrong. It means the system needs a quick signal that you’re a person.
How to prove you are human in 90 seconds
You can usually clear the block in a minute or two. These steps tend to work.
- Enable JavaScript and allow essential cookies, then refresh the page.
- Complete any on-screen challenge, such as a simple puzzle or checkbox.
- Pause rapid reloading. Wait 30–60 seconds before trying again.
- Disable VPN or proxy temporarily. Some ranges look suspicious to filters.
- Close background tools that automate browsing, even benign extensions.
- Open a fresh window or try a clean browser profile to rule out conflicts.
If the problem persists, contact the support team at [email protected] and describe what you were doing, the time, and any error code shown. Keep a screenshot. That context helps the team clear a false positive more quickly.
Why publishers fight bots now
Robots don’t read headlines, but they still consume bandwidth, scrape entire sections, and undercut paid licences. Industry reports suggest bots account for roughly half of global web traffic, with “bad bots” — tools designed to scrape or spam — making up a growing share. For a large news site, that can mean millions of unwanted requests a day. The costs hit in three places: infrastructure load, lower ad quality, and content misuse.
AI models have escalated the stakes. Automated systems can pull thousands of articles in minutes, store them, and reuse the text in ways the publisher never approved. Rights holders have started to assert opt-outs, tighten filters, and seek commercial agreements. The verification pages you see are one layer in that defensive stack.
Common triggers and quick fixes
| Trigger | What the system sees | What you can do | Time to clear |
|---|---|---|---|
| Rapid page requests | Dozens of hits in seconds | Slow down, wait 60 seconds, refresh once | 1–2 minutes |
| VPN or data centre IP | Address linked to automation | Turn off VPN or switch to a residential connection | Immediate after reconnect |
| Disabled scripts/cookies | Missing signals needed for human checks | Enable JavaScript and essential cookies, then reload | Under 1 minute |
| Headless browser traits | Automation fingerprints in headers | Use a standard browser profile; remove automation flags | 2–5 minutes |
| Unusual extensions | Auto-refresh or scraping add-ons | Disable suspect extensions; try a private window | 1–3 minutes |
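As a hedged sketch of how triggers like those in the table might combine, here is a toy weighted check. The signal names, weights, and threshold are all assumptions for illustration, not any real vendor's logic.

```python
# Hypothetical weighted triggers mirroring the table above.
TRIGGER_WEIGHTS = {
    "rapid_requests": 3,    # dozens of hits in seconds
    "datacentre_ip": 2,     # VPN or hosting-provider address
    "no_cookies": 1,        # scripts or essential cookies blocked
    "headless_traits": 4,   # automation fingerprints in headers
    "scraper_extension": 2, # auto-refresh or scraping add-ons
}

def should_challenge(signals: set[str], threshold: int = 4) -> bool:
    score = sum(TRIGGER_WEIGHTS.get(s, 0) for s in signals)
    return score >= threshold

# One trigger alone rarely trips the check; stacked triggers do,
# which is why fixing a single cause often clears the block.
assert not should_challenge({"datacentre_ip"})                              # score 2
assert should_challenge({"datacentre_ip", "no_cookies", "rapid_requests"})  # score 6
```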
What if you run an AI or data project
Do not scrape the site. The policy bans automated collection and text/data mining for AI, machine learning, and LLMs. If your organisation needs lawful access, you must ask for a licence. Send a concise brief — scope, volume, frequency, and intended use — to [email protected]. Expect the publisher to set limits, request safeguards, and charge for access. Many newsrooms now watermark their content and monitor for reuse. A licence protects you from takedowns and legal disputes later.
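For context, the bare minimum any automated pipeline should do before touching a site is honour its robots.txt, and even that is not a substitute for the licence described above. A minimal sketch using Python's standard urllib.robotparser; the URL and user agent here are placeholders, not real endpoints.

```python
from urllib import robotparser

# Placeholder URL and user agent; adapt to your own project.
ROBOTS_URL = "https://example.com/robots.txt"
USER_AGENT = "ExampleResearchBot/1.0"
TARGET = "https://example.com/news/some-article"

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()

if not rp.can_fetch(USER_AGENT, TARGET):
    raise SystemExit("Disallowed by robots.txt; stop and seek a licence.")

# Honour any declared crawl delay as well.
print("Crawl delay:", rp.crawl_delay(USER_AGENT) or "none declared")
```

Note that passing a robots.txt check does not override a publisher's terms: where the terms ban automated collection outright, as here, the licence route is the only compliant one.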
Your privacy and what the check looks at
The verification process typically inspects technical signals: your IP range, request rate, browser features, cookie support, and basic behaviour patterns like scroll and click cadence. It does not ask for personal identity. It looks for signs that a browser runs like a script rather than a person. You retain control: you can withhold cookies, but that choice may trigger more checks.
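As a sketch of what "technical signals, not identity" can mean in practice, here is a hypothetical signal bundle and a naive behaviour test. The field names and cut-offs are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical technical signals only: no name, email, or account data.
@dataclass
class Signals:
    ip_type: str            # "residential" or "datacentre"
    requests_per_minute: int
    cookies_enabled: bool
    javascript_enabled: bool
    click_interval_ms: int  # rough cadence between interactions

def looks_scripted(s: Signals) -> bool:
    # Scripts tend to skip cookies/JS and act at machine speed.
    return (not s.cookies_enabled
            or not s.javascript_enabled
            or s.click_interval_ms < 50
            or s.requests_per_minute > 120)
```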
Behind the scenes: how false positives happen
Security systems balance precision and protection. When attacks spike, filters tighten and catch more humans by mistake. Travel patterns add noise: you might log in from a hotel network known for heavy automated traffic. Mobile devices switch between Wi-Fi and cellular mid-session, changing their network route and IP address. Even daylight saving time shifts can briefly skew behaviour models. Engineers adjust thresholds, but the web constantly changes, so no filter stays perfect for long.
Five practical steps if you keep getting blocked
- Switch device and network to isolate the cause.
- Update your browser; older versions misreport features that checks rely on.
- Clear site data for the affected domain and restart the browser.
- Remove auto-refresh settings; set refresh to manual while reading.
- Email [email protected] with timestamp, IP (if you can see it), and any error code.
Why this matters to you
Human checks feel frustrating when they interrupt a quick read on your lunch break. They also protect the value of the journalism you came to read. When bots drain pages, advertisers pull back and subscriptions carry more of the weight. Robust verification helps keep reporting visible and sustainable. It also curbs the silent copying that feeds synthetic summaries with no credit or licence.
Extra context you can use
Think of verification as a traffic light. Green flows for normal reading. Amber flashes when patterns look risky. Red stops when requests cross a threshold. Most readers pass with a single prompt. You can test your setup: open a new profile with default settings, connect via your home network, and compare the result. If the clean profile loads while your main one stalls, an extension likely caused the block.
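The analogy reduces to a pair of thresholds. A toy version, with cut-offs invented purely for illustration:

```python
# Toy traffic-light classifier; the cut-offs are invented.
def traffic_light(risk_score: float) -> str:
    if risk_score < 0.3:
        return "green"   # read on, no prompt
    if risk_score < 0.7:
        return "amber"   # lightweight challenge (checkbox or puzzle)
    return "red"         # hard stop until verification succeeds

for score in (0.1, 0.5, 0.9):
    print(f"{score:.1f} -> {traffic_light(score)}")
```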
For researchers, consider a compliant route. Many publishers now offer data products with capped frequency, delayed feeds, and watermark checks. Those feeds cost money, but they reduce legal risk and improve reliability. For readers, one habit helps most: read at a human cadence, keep scripts on, and avoid aggressive refreshes. If a screen still stops you, you now know where to ask for a hand — and why the question “Are you real?” appears in the first place.