Web 3.0 has given the AI community the opportunity to work with huge amounts of data, which data miners always need.
Web 3.0 can behave like the human mind, where we store information in neurons that are linked to one another with weights of relevance.
The same concept applies in Web 3.0: website publishers will organize extra information about data, articles, photos, videos, etc. in XML format, so that it can later be used by data-analysis tools and software, or more specifically by crawlers.
Crawlers will also need to change so they can read that XML, not only page contents. Whoever evolves the crawler first will be a pioneer in new search-engine techniques, and it could be someone other than Google, which currently monopolizes search on the internet.
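To make the idea concrete, here is a minimal sketch of what such a crawler step might look like. The XML element names (`article`, `title`, `topic`, `related`) are purely illustrative assumptions, not any real standard; the point is that the crawler reads structured metadata directly instead of scraping raw page text.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata a publisher might attach alongside an article.
# The schema below is invented for illustration only.
SAMPLE_METADATA = """
<article>
  <title>Web 3.0 and Crawlers</title>
  <topic>semantic web</topic>
  <related href="https://example.com/web30">Background</related>
</article>
"""

def extract_metadata(xml_text):
    """Read structured facts from publisher-supplied XML
    instead of guessing them from rendered page content."""
    root = ET.fromstring(xml_text)
    return {
        "title": root.findtext("title"),
        "topic": root.findtext("topic"),
        # Linked resources carry the 'weight-age of relevance' idea:
        # explicit edges the crawler can follow.
        "links": [rel.get("href") for rel in root.findall("related")],
    }

print(extract_metadata(SAMPLE_METADATA))
```

A crawler built this way could index meaning (topics, relations) rather than just keywords, which is the advantage the paragraph above describes.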