In September, Google quietly removed the num=100 parameter, a small technical update with big implications for every team that relies on search data.
What once took a single request to collect now requires multiple. This shift is reshaping how SEO, eCommerce, and AI research teams think about data depth, cost, and performance at scale.
Here’s what changed, why it matters, and how Traject Data is helping teams adapt with confidence.
What Changed
Google’s num=100 parameter previously allowed up to 100 search results per request. Now, each request returns 10 or fewer results per page. To access deeper results, teams must paginate manually using page parameters.
This shift has affected every SERP API provider. Collecting the same depth of data now requires multiple requests, which increases request volume, latency, and cost.
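For illustration, here is a minimal sketch of what paginated collection can look like. The endpoint, parameter names (q, page), and response fields are assumptions made for the example, not any specific provider’s interface; check your provider’s documentation for the exact details.

```python
import requests

API_URL = "https://api.example-serp-provider.com/search"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def collect_top_results(query, depth=100, per_page=10):
    """Collect up to `depth` organic results by paging roughly 10 at a time."""
    results = []
    pages_needed = (depth + per_page - 1) // per_page  # e.g. 100 results -> 10 requests
    for page in range(1, pages_needed + 1):
        resp = requests.get(
            API_URL,
            params={"api_key": API_KEY, "q": query, "page": page},  # names vary by provider
            timeout=30,
        )
        resp.raise_for_status()
        results.extend(resp.json().get("organic_results", []))
    return results[:depth]

top_100 = collect_top_results("running shoes")
print(f"Collected {len(top_100)} results across multiple requests")
```

Where one request used to return the full top 100, the same depth now takes on the order of ten paged requests per query.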
Why It Matters
This change forces every data team to rethink how they collect, process, and evaluate SERP data at scale. Three key challenges have emerged as a result:
1. Efficiency
Teams are balancing request counts, cost, and infrastructure load as volume increases. What used to be a single call may now require 10 or more requests, changing how organizations measure efficiency and budget for data collection.
2. Accuracy
Deeper pagination can introduce variability in results. Teams are navigating how to manage duplicates, handle shifting result sets, and ensure clean, reliable datasets across paged queries.
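As a simple illustration of the duplicate problem, the sketch below merges paged results by URL so that a listing that shifts across a page boundary between requests isn’t counted twice. The link field name is an assumption for the example, not a specific provider’s schema.

```python
def merge_pages(pages):
    """Merge paged SERP results, keeping the first occurrence of each URL.

    `pages` is a list of result lists, one per page, where each result is a
    dict with at least a `link` field (field name assumed for illustration).
    """
    seen = set()
    merged = []
    for page in pages:
        for result in page:
            url = result.get("link")
            if url and url not in seen:
                seen.add(url)
                merged.append(result)
    return merged
```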
3. Speed & Volume
Latency and throughput have become critical considerations. For high-frequency or large-scale workloads, even small inefficiencies compound quickly, impacting everything from SEO monitoring to AI model training.
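Where latency matters, one common pattern is to issue page requests concurrently rather than one after another. The sketch below is a generic illustration using Python’s standard library; the endpoint, parameters, and response shape are the same assumptions as in the earlier example, not a specific provider’s interface.

```python
from concurrent.futures import ThreadPoolExecutor
import requests

API_URL = "https://api.example-serp-provider.com/search"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def fetch_page(query, page):
    """Fetch one page of results (parameter names are provider-specific)."""
    resp = requests.get(
        API_URL,
        params={"api_key": API_KEY, "q": query, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("organic_results", [])

def collect_concurrently(query, pages=10, max_workers=5):
    """Fetch all pages for a query in parallel to reduce wall-clock latency."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Returns one list of results per page, in page order.
        return list(pool.map(lambda p: fetch_page(query, p), range(1, pages + 1)))
```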
For SEO, eCommerce, and AI research teams, the question isn’t just how deep to scrape; it’s whether the added visibility of deeper ranks is worth the additional cost, complexity, and time to collect.
How Traject Data Helps
At Traject Data, we’ve built our infrastructure to help teams stay flexible, visible, and dependable as Google’s behavior evolves.
- Pagination Support: Teams can define their own pagination parameters to control depth and coverage across search types.
- Adaptive Infrastructure: Designed to handle high request volumes and maintain stable performance as workloads increase.
- Transparent Usage Controls: Our team works directly with customers to fine-tune query settings, frequency, and cost efficiency as needs evolve, ensuring stability without unexpected spend or performance tradeoffs.
- Flexible Coverage Options: Supports multi-page collection for teams that need deeper visibility, without sacrificing predictability or reliability.
Built for Data Teams at Scale
Trusted by data-intensive organizations across SEO, eCommerce, and AI, Traject Data’s infrastructure supports high request volumes and consistent performance across regions.
Our systems are built to stay stable through changes like Google’s num=100 update, giving teams confidence that their workflows and data quality remain dependable as the SERP landscape continues to shift.
Traject Data’s Take
At Traject Data, we believe stability is a product feature.
As Google continues to evolve its search results behavior, our focus remains on helping customers maintain reliable, efficient, and transparent data pipelines — no matter what changes next.
If you’d like to discuss how these changes might affect your data workflows, contact our team.
Ready to See What Traject Data Can Help You Do?
We’re your premier partner in web scraping for SERP data. Get started with one of our APIs for free and see what data you can start collecting.

