This is a submission for the Bright Data Web Scraping Challenge: Build a Web Scraper API to Solve Business Problems
What I Built
I crea...
Hello brother, it isn't able to return any results when I use it against amazon.com. For your info, Amazon is one of the world's hardest websites to scrape, and the same goes for Walmart; both of them implement more than five anti-bot captchas on their websites.
I see. But you aren't giving it the unique product URL, I guess. Let me try it out both ways.
As expected.
For the next unique product URL example, I'm using this product: a.co/d/jdUb6sa
And yes, it works!
On a single product it works, I agree, but you are talking about using it for real business needs. In those scenarios, crawling is done for more than 100 million products at once, and there it won't work.
Here, you can try it yourself:
URL: walmart.com/all-departments
Input prompt: give me all category URLs, titles, and SKUs down to the maximum level of subcategories available; the final output would be a huge list of around 16k categories/subcategories, going up to 5 nested levels.
It works, brother.
But yes, this issue lies:
Any suggestions for this? (using Gemini Pro here)
You basically need to tell Gemini to perform pagination with some notion of "max entries per JSON string" and specify some delimiter token you can use to find JSON string boundaries, then split the response on your specified delimiter and attempt to decode each section and merge them, until you find something invalid within the Gemini context window.
Gemini can't count exactly as it generates, but it will be approximately close, and this allows the API to at least return a valid and usable response, even if it doesn't contain literally every product on the page.
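That chunk-and-merge idea could be sketched roughly like this. The delimiter token and the chunk shape (JSON arrays) are assumptions; you would instruct the model to emit them in your prompt:

```python
import json

# Hypothetical delimiter you'd ask the model to emit between JSON chunks.
DELIM = "<<<END>>>"

def merge_chunks(response: str, delim: str = DELIM) -> list:
    """Split model output on the delimiter, keep every chunk that
    decodes as JSON, and merge them; a truncated/invalid tail
    (where the context window ran out) is simply dropped."""
    merged = []
    for chunk in response.split(delim):
        chunk = chunk.strip()
        if not chunk:
            continue
        try:
            part = json.loads(chunk)
        except json.JSONDecodeError:
            continue  # generation was cut off mid-chunk
        merged.extend(part if isinstance(part, list) else [part])
    return merged
```

This way the API returns whatever subset decoded cleanly instead of failing on one giant, possibly truncated JSON string.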
Thanks for the suggestion!
I'll look into implementing this method to improve the functionality.
Appreciate your insight, this is super helpful!
This is awesome @arjuncodess
Thank you so much! Means a lot!
Nice 🔥
Thanks, man!
Let me try it again:
Walmart Product Link for testing: https://www.walmart.com/ip/Star-Wars-Force-N-Telling-Vader-Star-Wars-Toys-for-Kids-Ages-4-and-Up-Walmart-Exclusive/5254334148?classType=REGULAR&athbdg=L1600&sid=9f74642d-e12d-4e30-970b-914104b1f54b
Response:
First try:
Second try:
So yes, scraping some large websites doesn't really work on the first or even second try, but eventually, it does.
You are still not getting my point, brother; don't take it as offensive. I have been working in large-scale web scraping, mostly in Python, and I have a pretty good understanding of what companies look for when it comes to large-scale web scraping. Anyway, it's a great project, and I appreciate your efforts. I just found out about this challenge; let me come up with my submission for all three prompts, and you can also review my submission.
Got it, brother, and no offence taken at all!
Really appreciate you sharing your expertise.
Hello again, brother.
Just reviewed your project codebase. You have simply given the HTML content to the Gemini Pro model, and this isn't going to work for large-scale web scraping. No matter which LLM you use, paid or free, none of them is capable of performing complex web scraping on its own; scraping is not a straightforward software engineering task, it's quite complicated. AI is not yet capable of parsing complicated information out of HTML. It can definitely work for websites that have a clear DOM structure, to be exact, those websites with clear names in div classes such as product_description/description, product_title/title, product_price/price. But for large-scale and sophisticated web scraping, you have to write generic scraping code in core Python.
You have to understand that any complex web scraping system has these components:
1. Loading the Web Page:
For this, one can use a dynamic browser such as Selenium, the Bright Data Browser and other private browsers, or Playwright, or directly use raw HTTP libraries: "requests", "urllib3", "httpx", and the holy "curl" CLI.
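As a rough illustration of the raw-HTTP path from that list (the header values are generic placeholders, not what any particular site requires), using only the standard library:

```python
import urllib.request

# Plausible browser-like headers; real sites may require more (cookies, etc.).
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
}

def build_request(url: str) -> urllib.request.Request:
    """Raw-HTTP path: cheap and fast, but only works when the page
    doesn't require JavaScript rendering or anti-bot challenges."""
    return urllib.request.Request(url, headers=HEADERS)

# For JS-heavy pages you'd switch to a real browser instead, e.g. Playwright:
#   from playwright.sync_api import sync_playwright
#   with sync_playwright() as p:
#       page = p.chromium.launch().new_page()
#       page.goto(url)

if __name__ == "__main__":
    with urllib.request.urlopen(build_request("https://example.com")) as resp:
        print(resp.status)
```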
Possible Workarounds:
2.1. Use high-availability proxies such as Bright Data proxies. I have used them in one of my large-scale web scraping projects, and they are really good.
2.2. Bypass captchas using specifically crafted scripts for common captchas such as Cloudflare, Imperva, Google reCAPTCHA (v1, v2, v3, v4), GeeTest slide captchas (v1 to v4), the recent puzzle-piece-based captchas, and lastly those where you move objects according to a static image's direction.
2.3. Solve them using paid captcha-solving APIs.
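For 2.1, wiring a proxy into an HTTP client usually just means building a proxy URL with the credentials embedded. The host, port, and zone credentials below are placeholders, not real Bright Data values:

```python
def proxy_url(user: str, password: str, host: str, port: int) -> str:
    """Build the user:password@host:port proxy URL that HTTP clients expect."""
    return f"http://{user}:{password}@{host}:{port}"

# Hypothetical zone credentials and endpoint -- substitute your own.
PROXIES = {
    scheme: proxy_url("brd-customer-XXX-zone-YYY", "PASSWORD",
                      "proxy.example.com", 22225)
    for scheme in ("http", "https")
}

# With the third-party `requests` library you'd then pass:
#   requests.get(url, proxies=PROXIES, timeout=30)
```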
3. Parse the HTML Source and get the required Data:
One can parse the HTML source using the standard bs4 library, as well as the lxml library for parsing XML-based pages, for example parsing robots.txt.
This parsing can be done using two methods:
For specific-website scrapers, one can use static parsing techniques where the sections to extract are predefined, with known class names and HTML tag names that may or may not change over time.
For generic-use-case websites, one must use dynamic parsing techniques that implement a "parent-child-sibling" relationship-based scraping approach.
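A minimal sketch of that parent-child-sibling idea, using only the standard library on a made-up snippet (the class names "x-title"/"x-price" are invented; with bs4, navigation like `.find_next_sibling()` plays the same role):

```python
import xml.etree.ElementTree as ET

# Invented markup with obfuscated-looking class names, as many sites use.
SNIPPET = """
<div>
  <div class="a1-b2">
    <span class="x-title">Star Wars Figure</span>
    <span class="x-price">$19.99</span>
  </div>
</div>
"""

def extract_pairs(html, key_hint="title", value_hint="price"):
    """Walk the tree; inside each parent, pair a 'title-ish' child
    with the first 'price-ish' sibling that follows it, instead of
    hard-coding exact class names."""
    root = ET.fromstring(html)
    pairs = []
    for parent in root.iter():
        children = list(parent)
        for i, child in enumerate(children):
            if key_hint in child.get("class", ""):
                for sib in children[i + 1:]:
                    if value_hint in sib.get("class", ""):
                        pairs.append((child.text, sib.text))
                        break
    return pairs
```

The point is that the scraper keys on structural relationships and fuzzy hints rather than exact selectors, which is what makes it survive markup changes.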
So, in this scenario, one must write different section-specific generic code using the dynamic parsing approach. It won't be done with a few lines of Python code; this type of generic API requires several months of dedicated coding across different market domains. Only then can one develop a truly generic crawl API.
***Note: Large-scale web scraping, i.e., real-world scraping scenarios, often requires speedy execution. If one always uses a dynamic browser for every page load, it's going to take weeks to scrape even millions of pages, so one must know when to use dynamic web browsers and when to use raw HTTP-based browsing capabilities.
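One common way to act on that note is to try the cheap raw-HTTP fetch first and escalate to a real browser only when the response looks like a JS shell or a challenge page. The markers and the length threshold below are guesses you would tune per site, not established constants:

```python
import re

def needs_browser(html: str, min_text_len: int = 500) -> bool:
    """Heuristic: escalate to a dynamic browser if the raw-HTTP response
    shows challenge markers or has almost no visible text (a JS shell)."""
    low = html.lower()
    markers = ("enable javascript", "captcha", "are you a robot")
    if any(m in low for m in markers):
        return True
    # Crude visible-text estimate: strip tags and measure what's left.
    text = re.sub(r"<[^>]+>", " ", html)
    return len(text.strip()) < min_text_len
```

The fast path then handles the bulk of pages, and the slow browser pool only sees the pages that genuinely need it.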
I hope this helps you see the big picture behind large-scale web scraping systems, and that anybody reading this huge comment will learn the key aspects of real-world scraping.
If you need any more clarification, let me know.
Here is the snapshot of your code method:
Thanks & Regards
Tanuj Sharma
Thank you so much for providing such a detailed and insightful explanation.
You’re right - my current implementation is quite basic.
While it works for simple, well-structured websites, I now see how it falls short for more complex or large-scale use cases.
Thanks again for sharing your knowledge!
I tried extracting my name on Google, as well as case information on Spokanecounty.org.
I couldn't get it to work at all.
It would be nice to pull the case info on a dirty lawyer operating in Spokane county. Thoughts ?
I don’t think this aligns with the ethical or intended use of my project.
Thanks for the comment. Have a good one.
great tool for web scraping!!
Thank you! Appreciate the support!
Great Job!
Thanks a lot!
AI web scraper idea is awesome. Where's the GitHub?
Thank you! Glad you liked the idea!
Oh, yeah, thanks for reminding me - just added it in the article.
Here is the GitHub - github.com/ArjunCodess/WebCrawlAI
Thank you for this useful sharing.
Welcome! Glad you liked it!
Cool 🧊
Thanks, bro!
Wow! Thank You!
You're very welcome!
Super, man! Maybe there are some issues, but I know you will fix those. Great, brother, keep it up!
Thanks, brother!
You're right, there is an issue right now: the scraper isn't working at the moment.
I lost all my credits due to the unexpected attraction this post received. I'm working on getting it back up and running soon.