Google Search Now Has a Limited Lifespan

Twoogle

Google has now been the King of Search for about a decade and a half, and most people assume the organisation’s supremacy cannot be challenged. However, Google’s basic foundations are old and well out of date. Although the Mighty G has continued to tackle attempts to game its system over time, we’re now close to the end of the road. The key problems with Google Search are no longer about controlling spam tactics per se. They’re about the fact that Google is a machine, built on an outdated concept, whilst the Internet is increasingly human, and behaves very differently from the way it behaved 20 years ago.

In the late 1990s, Google revolutionised the search market by introducing PageRank – a system that automatically gauges the perceived quality of indexed items based on the number and type of links pointing at them.
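For anyone who likes to see the mechanics, a rough sketch of that idea is below – the damping factor, the iteration count and the toy link graph are my own illustrative assumptions, not a description of Google’s actual implementation:

```python
# Minimal power-iteration sketch of the PageRank idea: a page's score is fed
# by the scores of the pages linking to it. The parameters and the toy web
# below are illustrative assumptions only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}            # start with equal scores
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:                             # dangling page: share its score evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share            # each link passes authority to its target
        rank = new_rank
    return rank

toy_web = {
    "popular-blog": ["wikipedia"],
    "wikipedia": ["popular-blog", "obscure-site"],
    "obscure-site": ["wikipedia"],
}
print(pagerank(toy_web))  # heavily linked pages end up with the highest scores
```

The point to take from the sketch is simply that a page’s score is driven by who links to it – not by whether any human actually found the page useful.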

SHARING FOR ATTENTION

In its day that system was genius. But it’s much less compatible with the modern Internet, where human behaviour distorts the qualitative grading of content. For example, social media has created a culture in which people will share content not because it’s inherently good, but because they want to ingratiate themselves with, or be noticed by, the influential person who posted it. People are sharing and rating for attention.

This isn’t a serious problem on social media itself, because people can recognise why specific posts are being rated or shared. Literally everything some celebrities say will be met with a wall of digital applause, however stupid or pointless it is. We can see why a picture of a celebrity’s breakfast, as tweeted by the celebrity, is being shared thousands of times, and we know it’s not because the photo is a work of art, or because the information: “Egg and fries!” is something the world urgently needs to know.

But a machine can’t inherently see why material gets shared, and therefore, the Google Search results can be heavily perverted by instances of mass sharing which are motivated by reasons other than content quality. Making things worse, on Google, the user can’t see the reasons something got to the top of the results. We just have to trust that the top results will be the right ones to click. But even after nearly 20 years, the disappointment rate is extremely high.

RELATED: The Google Monopoly
How the mighty Google monopolised the search market.

MORE KEY PROBLEMS WITH PAGERANK

That’s almost entirely down to Google’s PageRank system, and its assumption that real people only share content when it’s worth sharing. That’s never been less true, so simply shutting out spammers isn’t enough to ensure good search results. Other major drawbacks directly related to the PageRank system include…

  • The context of search terms is poorly observed. Especially when one context of a word or phrase is hugely more popular in search than others, the less popular contexts are crowded out of the picture. Searches on obscure people’s names can be swamped by celebrity-related posts in a similar manner – to the point where you can’t find the person you’re looking for, even if their name is only similar to that of a celebrity. The Melanie Spice example I gave in How To Find Everything On The Internet perfectly sums this up. PageRank can usually tell what’s popular, but it doesn’t understand that the most popular answer is not always the correct one.
  • Google has severe difficulty in correctly handling positive and negative search intents. For instance, if you ask how to stop something, but a much larger number of people want to know how to start it, you’re almost certain to see results relating to the latter. Even using Advanced Search, it can be quite difficult to combat this problem.
  • Google gives far too much weight to preferred sites. Wikipedia, YouTube, certain news sites, etc, will rank on the front page regardless of the quality, substance or completeness of their content, and Wikipedia will frequently rank for completely irrelevant searches. It may have been cool in 1999, but in 2015 this is a very lazy (if not unfairly biased) way of delivering search results, and it perfectly highlights the problem with global PageRank (as applied to whole sites rather than individual posts). There’s also still a huge problem with high-ranking sites being able to spin other people’s work and then outrank the originator. Laziness breeds laziness.
  • Google persistently falls prey to exploitation of its mechanical system. Ultimately, Google’s system is a machine, and no matter what the organisation does to update it, human beings will always be able to think their way round a machine. That’s why we STILL see sites that have gamed the system, right at the top of the results. It should not be possible for an “SEO plan” to place inferior and user-unfriendly content above superior and user-friendly content. But it very obviously is possible, and it happens all the time.

There are numerous other problems associated with PageRank. The above list is nothing like exhaustive.

GOOGLE’S VULNERABILITY

But how vulnerable is Google to a new search system as we speak? Well, in one way it’s extremely vulnerable, because the basics of Google Search have stood still for so long. A social media-style, user-generated search system would be much more valid in 2015 than the old concept of PageRank. It’s come to the point where human input is required for reliable Web search. I believe this type of search is where the real threat to Google lies, and I believe it’s the primary reason the Big G has wanted to club together with Twitter.

But that, of course, hints at why Google, in another way, is not vulnerable. The organisation has so much money and power, that even when a potential threat comes along, Google can just buy it up and incorporate it into its own search model.

Not that Twitter could have threatened Google as a search engine the way things currently stand, obviously. Twitter doesn’t index any substantial content – it only indexes tweets. But the concept of Twitter, using millions of human beings to verbally recommend and point, rather than mechanical computer code à la Google, makes more sense as a means for people to find their way round the Web in the future. There will always be a place for thorough, mechanical indexing of the entire Internet, but ‘user-generated’ search is going to take over the populist ground eventually, I think.

PEOPLE POWER

I strongly feel that advancements in search relevance are going to come through the addition of human input features. If it were possible for users of a search engine to have profiles on the site (anonymous if required by the user), and to leave a sort of search ‘tweet’ after doing a search, I think many would. The search engine could then build a separate index of those short snippets of feedback, and compile trends from that feedback, for a much more effective, human-controlled recommendation system than outdated backlinks. Machines are never going to be people. For me, involving people in search is the only way to move on from the 1990s.
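To make that concrete, here’s a rough sketch of how such a ‘search note’ index might work – the class, the note format and the trend scoring are entirely my own assumptions, not a description of any existing engine:

```python
# Hypothetical sketch of a 'search note' index: users leave a short snippet of
# feedback after a search, and the engine aggregates those snippets into
# per-query recommendations. All names and structures here are assumptions.
from collections import defaultdict

class FeedbackIndex:
    def __init__(self):
        # query -> url -> list of (user, note) entries
        self._notes = defaultdict(lambda: defaultdict(list))

    def add_note(self, user, query, url, note):
        """Store one user's short 'search tweet' about a result."""
        self._notes[query.lower()][url].append((user, note))

    def trends(self, query, top=5):
        """Rank URLs for a query by how many distinct users recommended them."""
        counts = {
            url: len({user for user, _ in entries})
            for url, entries in self._notes[query.lower()].items()
        }
        return sorted(counts.items(), key=lambda item: item[1], reverse=True)[:top]

index = FeedbackIndex()
index.add_note("anna", "fix squeaky door hinge", "diy-forum.example/hinges", "Actually solved it")
index.add_note("bob", "fix squeaky door hinge", "diy-forum.example/hinges", "Worked for me too")
index.add_note("carl", "fix squeaky door hinge", "content-farm.example/10-tips", "Useless listicle")
print(index.trends("fix squeaky door hinge"))
```

The recommendation signal here comes from what searchers actually said after searching, rather than from backlinks.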

There would still be attempts to spam and game the system, obviously. But when you let people decide whom they trust and whom they don’t, spam becomes impotent. Look at Twitter. It’s full of spam. But if you go on there and only follow interesting or entertaining people, will you see any of it? Apart from paid adverts, no. In user-generated search, you’d only be guided by those you trust to guide you.
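Continuing the hypothetical, a trust filter along those lines might look something like this – again, the function, the note format and the example accounts are purely illustrative assumptions on my part:

```python
# Hypothetical trust filter for user-generated search: only notes left by
# people the searcher has chosen to follow count, so spam accounts go unheard.
def trusted_recommendations(notes, followed, top=5):
    """notes: list of (user, url) recommendation pairs for one query."""
    counts = {}
    for user, url in notes:
        if user in followed:                    # ignore everyone the searcher doesn't trust
            counts[url] = counts.get(url, 0) + 1
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)[:top]

notes = [
    ("anna", "diy-forum.example/hinges"),
    ("bob", "diy-forum.example/hinges"),
    ("spambot42", "content-farm.example/10-tips"),  # never followed, so never counted
]
print(trusted_recommendations(notes, followed={"anna", "bob"}))
```

The spam is still sitting in the index; it just never reaches anyone who hasn’t chosen to listen to it.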

It would be good to see one of the privacy-orientated search engines taking up the idea of user profiles and ‘search notes’. I’d have an anonymous (or, in my case, probably semi-anonymous) profile on Ixquick, and I think such a measure would be a really cool way to draw new users to the site. Everyone wants to be part of something – especially when it’s different.

Probably the most worrying thing about this post is that the organisation in possession of by far the most social data is Facebook. If social search did get really serious as an entity in its own right, then we could see a transfer of power from the Big G to the Big F. The thing I think would stop Facebook’s progress in its current form is the brand’s insistence on users providing their real identities. That is ABSOLUTELY incompatible with serious web search, which is why I think the privacy-focused search engines could build good momentum with human input features. You shouldn’t underestimate the volume of data Facebook holds, though, or its significance in any future project.

CONCLUSION

None of the above is going to happen overnight, but I would be very surprised indeed if, ten years down the line, there wasn’t a much greater union between social media and serious web search. Can Google be toppled from its perch? If it doesn’t seek to move away from PageRank, and someone else devises a more reliable system of qualitative recommendation, then yes – very easily.

Author: Bob Leggitt
