Search Engines

Analyzing Google

Let’s analyse Google, the “center” of the web.

Next month, on 15 September 2017 to be precise, will fall the 20th anniversary of the registration of Google’s domain. It has been twenty years since two Stanford University students, Larry Page and Sergey Brin, completed the first step in the implementation of Google. Twenty years that brought all of us online; years in which we have learned to entrust our curiosities and our search needs to a specific tool, based on a single field and on specific algorithms.

Analysing the course Google has taken, we can identify three distinct stages across these two decades:

Stage 1: The idea 

In a few years our grandchildren will read in history books about Page and Brin and their idea of building “something” to instantly collect the web’s flow of information, something like a “shopping list”.

Legend tells of their wandering from one company to another, meeting smug, short-sighted executives who did nothing but mock their ideas and their vision.

What remains today of this first stage is their visionary, truly original idea.


Google, the search engine, is a tool that feeds on the (online) product of humankind. It connects information and people. It is the automaton, the scribe who observes history and takes notes while everything happens.

All of it within a model of use based on a white page, without invasive advertising and without useless waiting time.

Google doesn’t invade our personal or visual space; on the contrary, it suggests to us what to do.

Stage 2: The expansion

In the early years of its existence Google, like a child, keeps growing and learning very fast, indexing billions of web pages. It analyses them and improves its own ranking algorithms. The impression during this stage of technological ferment is that of a system able to support our knowledge: computers placed all over the world, able to store information, to rapidly acquire and index it, and to draft lists of contents.

At the same time it provides us with a personal email service, always available, endowed with generous storage space and free of advertising breaks: Gmail.

This stage of data acquisition and supply, through a single search field, has made a crucial contribution to the evolution of the Internet. It has become easy, very easy, to search and to be found.

Stage 3: The regression. Missed goals

The third stage began with amazing announcements: a universal translator able to bring together people of different origins, glasses able to augment our reality with additional information, modular and personalised smartphones, self-driving cars.

Is this the beginning of a new era thanks to Google? Absolutely not! At least for now…

To this day Google Translate, Google Glass, Project Ara and the Google car attest to a stalled stage of growth. Probably these research projects were aiming too high, with elaborations too ingenious to be carried out by the computers in use so far.

Not even the social network “Google+” has reached the expected popularity and use, compared to those of Facebook, just to be clear.

These science-fiction visions failed to materialize in this second decade of the 21st century. There was no further improvement in any of the possible directions. Today’s Internet is not so dissimilar from the one of five or ten years ago.

And what about Google’s search engine? 

Nor has it had the evolution we had hoped for. A few years ago rumours circulated on the network about a semantic web, Web 3.0, able to offer correlated contents, suggestions, information and intelligent support. None of this has happened so far, at least as far as Google is concerned.

Google’s result list now proposes, in its central part, a greater number of advertisements than some years ago. It promotes geo-located and popular contents, in certain cases at the expense of original and cultural contents. It applies an “automatic” logical assessment of the contents.

The amount of information available online, grown out of all proportion, certainly doesn’t help: in response to each query the search engine offers millions and millions of results.

In fact, it is a completely useless list beyond the first page or two: ten or twenty results, each with a short description.

The search engine has not become more intelligent or more accurate, except in a few trivial aspects of minor importance.

It follows predefined schemes, assigning scores based on rules that computer scientists and copywriters have largely worked out, carefully replicating or bypassing them. This leads to fierce competition between genuine information and promotional information aimed at sales, or worse, surrogate information, false and flashy.

Nor have network users become more intelligent.

They are probably faster and more connected, but in front of a result list they tend to rely on what Google proposes and to think less. Paradoxically, this does not improve knowledge; on the contrary, it reduces it to popular elements already chosen by others: pages and texts that follow the syntactic and semantic rules appreciated by the algorithms. So much data, so many opportunities, but only one list of ten results from which the increasingly passive cyberuser may choose.

Criticising Google

Multimedia texts and contents shared online are centrifuged and freeze-dried down to their essence. Billions of ideas, words, images and videos become a short list with minimal descriptions. All the information resulting from our searches is contained inside a “small postage stamp” on which all of us want to leave our signature: the first page of Google’s result list, the page on which today everyone wants, and has, to be.

Online information grows and evolves. Cyberusers are faster and faster, and more and more compulsive. Google is becoming the absolute judge of this virtual universe.

For this reason, for the central role it has assumed in our virtual society, Google can and must do better. Its algorithms must become more sensitive and less automatic; more careful in promoting contents useful to the community; more explanatory in their result lists, so as to provide the most receptive users with more information; more careful in analysing user behaviour, and not predominantly for commercial purposes. It has to be able to understand users’ requests and raise their level of knowledge and self-awareness.

It is a very difficult task, almost impossible; but if there is anyone who can do it, it is Google.



The Google Doodle

The Google Doodle is one of the symbols of the Internet, seen online every day by hundreds of millions of people; it is a real icon of our time, the emblem of a different way of communicating, of providing information, of highlighting or reminding us of something.

But why do we like Google Doodle so much? 

First of all because it is simple: it is an image of well-defined size, never excessive, always introduced into the same context, placed in the same spot on the same white page; simple and at the same time efficient.

Its sound, in complete harmony with the main brand “Google”, and its literal meaning make the doodle rather informal, contemporary and close to the new way of living the network.


The more we use Google, the better we understand its interface and its minimalist, intuitive instructions for use; in terms of communication and of visual or technological interaction, it represents not a point of departure but a point of arrival.

Year after year portals, social networks, apps and blogs follow a process of simplification aimed at usability, intended to provide more and more information, services and multimedia contents while at the same time becoming easier and easier to use. Step by step they get closer to “that white page”, now so familiar, but which not too long ago seemed to us almost empty.

Google is a global search engine but, above all, it remains the forerunner in terms of simplicity of use and visual impact.

Strangely enough, we also like the Google Doodle for the opposite reason: on an extremely white page it represents a touch of colour, an element of the highest standard, pleasant and never excessive, that renews the page and catches our attention, diverting it from the main objective, which is searching for something online.

It is the antagonist of the single search field.

The Doodle is the yin in complete chromatic and functional opposition to the yang of Google. It has the difficult task of softening the role of the single search field, the web’s real “center” and binder of thought.


We like the Google Doodle even more because it has no direct, explicit promotional purpose (except to remind us that we are on Google, but we already know that). That is no small thing in a world, real and virtual, swamped with banners, messages and advertising spots.

The Google Doodle reminds us of the birthdays of eminent people who gave value to science, community, culture, human rights and so on. It reminds us of recent and less recent events, holidays, anniversaries… It is a “bridge”, a “clasp” linking us with the past and with traditions.

While older people remember, the young and the very young, attracted to the image and its colours like bears to honey, may discover some interesting character or event of the past and casually learn something.

But the real reason why we like the Google Doodle “so much” is that it has no borders: just as the internet is transnational, it shows and tells “a little bit of everything”. At a time when, in the real world, the right of citizenship stands above human and asylum rights, the Google Doodle tells us of situations and events which often go beyond our “virtual” geographical boundaries.

Furthermore, for those who do not know, Google provides an archive of Doodles, available at the following address:

To older people it may seem impossible, but in a few decades the archive of Google Doodles will be seen as an archive of memory, a database of remembrance, an element of union between nations which goes beyond borders.

Are we going too far? Maybe, but if we try to imagine a world where everyone is connected to the network, it isn’t difficult to imagine the next step: hypothesizing where knowledge will be stored, and how new generations will refer to that source of knowledge. We challenge you to think of a place, other than Google or its natural evolution, in which to keep our past: events, anniversaries, eminent people and so on…

What is missing? A Doodle dedicated to Worldtwodotzero. We are certain that Google, after having read and appreciated this wonderful article, will decide to dedicate a Doodle to Worldtwodotzero, given that we write about the internet without any commercial purpose. You think it won’t happen? It doesn’t matter; we like the Google Doodle anyway.


Me, you, them, Google and the first three search results

This article takes its cue from the news, released a few weeks ago, revealing an important milestone achieved by Google: 100 billion searches per month.
The first result was the impressive spread of this news across the web, a phenomenon we had already noticed on other similar occasions. After all, it was a piece of news just waiting to be released, and one of huge impact, with Google as protagonist: perfect for capturing readers’ attention.
In the following days I found this news practically everywhere, proving once again that one of the web’s main roles is that of a sounding board: well-known journalists, editors, bloggers, computer scientists, not one of whom managed to resist the temptation to repeat the terms “Google”, “100” and “billion”.
The paradox I would like to underline is that the news, almost like a tape duplicated over and over, hundreds and hundreds of times, began to lose its identity copy after copy, and in a short time was reduced to a mere synthetic announcement. Unfortunately, as often happens, almost all the “loudspeakers” limited themselves to announcing the fact with only a few quick comments, a tam-tam which I personally found banal. In short, this led me to check a few aspects and, indirectly, to write this article.
The oxymoron lies in the fact that the news of Google’s growth has itself produced thousands of clones, so much so that if you search Google today for the three terms “Google 100 billion” you will find 1,040,000 potential results. A short circuit: the news stating that Google has billions of monthly contacts itself has over a million occurrences, traceable precisely through a Google search and consequently generating tens of millions of clicks from users. Clicks which produce further clicks: the internet really is a self-supporting system.
From the numerical point of view I agree with Larry Page: the goals Google can achieve go above and beyond this one. With at least a billion potential internet users per day, an efficient and transnational engine, and unlimited space at its disposal, Google could be used by everyone on this planet several times a day. With the number of mobile cyberusers (smartphone and tablet) constantly increasing, I believe the milestone reached is significant, but also extremely easy to surpass.
We know from statistics that 93% of “digital natives” (age group 16–24) use the internet at least once a week (source: Eurostat 2011). Therefore, not only do all of us grown-ups use search engines, but most probably young people and school-age children use these tools daily too.
At this point I wonder: where is all this potential channelled? Are we progressively abandoning other search patterns, limiting ourselves to a single tool? Do we entrust ourselves unconditionally to these search engines?
More precisely, how do we behave in front of a list of search results? Are we able to choose the most appropriate information? Do we have enough analytical capacity and patience?
I searched online for statistical data that could confirm my suppositions. I have to say that my search turned out to be long and less gratifying than I expected. Maybe, once again, we find ourselves facing an oxymoron: I was unable to find, through the main search engines, reliable and complete statistics certifying their use. The internet is not always a transparent universe.
I did, however, find three examples.
The first takes its cue from the leak of data by AOL, an “accidental” mistake, as it was described on the web: some tens of millions of search queries made by over 600,000 users. This “data pack” is probably the only dataset available on the internet that is significant in terms of numbers.
I have briefly summarized the gathered data in the following table, which records about 20 million clicks from the AOL sample: a good number, I would say.


What can I say, other than that I am truly impressed! In 42.3% of cases we select the first result on the list; the second result, with only 11.92% of clicks, shows a relative drop of 71.82%; the third has 8.44%, while the following results have minimal selection percentages.
I also find it interesting that, quantitatively, positions such as the 21st, 31st and 41st are not so far apart from one another; this shows that only an insignificant percentage of users is willing to search carefully for what they need and therefore to scroll down the list beyond the first or second page.
The first page receives 89.68% of the clicks: basically, 9 out of 10 users stay on the first page. I find it curious that position no. 10 is the only one that does not respect the decreasing trend, obtaining more clicks than the 9th. It is possible that our eye (or our curiosity) tends to pay more attention to what sits at the bottom of the list.
These data not only confirm my “feeling” that if, after a search, you are not on the first page, your chances of being selected are reduced to a minimum; they also show that 62.66% of selections, almost two thirds of all clicks, are concentrated on the first three results.
The “relevance ranking” of search engines, that is, the algorithm that determines the position of the search results, seems to guide our clicks inexorably.
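The arithmetic behind the figures quoted above can be cross-checked in a few lines of Python. The per-position shares are the ones reported for the AOL sample; everything else is simple percentage calculation:

```python
# Click shares (%) by result position, as reported for the AOL sample.
shares = {1: 42.30, 2: 11.92, 3: 8.44}

# Relative drop from the first to the second result.
drop = (shares[1] - shares[2]) / shares[1] * 100
print(f"drop from 1st to 2nd position: {drop:.2f}%")  # 71.82%

# Cumulative share of the first three results.
top3 = sum(shares.values())
print(f"share of the top three results: {top3:.2f}%")  # 62.66%
```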
My second source is even more “unusual”: data from BrandSoftech, a supplier of software solutions for online games and casinos. The sample consists of 5,357,519 clicks on 29,327 different key phrases typed into Google, extracted from 63 different betting sites.
Only the data concerning the first page, and therefore the first ten results, are available online. It is not clear whether the remaining pages capture less than 1% of the clicks, or whether the study analyses exclusively the distribution of clicks made on the first page, with the total falling short of 100% only because of rounding to two decimal places. I personally favour the second hypothesis! In my opinion we have an important sample of clicks, but one relative only to the first ten results.


The data are very similar to those in the first table; in fact they strengthen the role of the list’s first entries. Here the first three entries reach 71.86%, and since the percentages are computed only over the first ten values, about 7% should be deducted in absolute terms. Here too it is better to be tenth than ninth; indeed, the gap between the ninth and the tenth place is even more evident, confirming that we most probably trust our visual perception (first and last places) more than what we read; or rather, that we pay more attention to certain points of the screen.
If about two thirds of users choose one of the first three results proposed by the search engine, I can well understand why Google, in response to a search, always places at most three sponsored links at the top of the list: the three links which, statistically, are the most looked at.
Positions no. 1, no. 2 and no. 3 are the ones cyberusers prefer.
If it is true that most users “prefer”, or are satisfied with, the first results of the list, we nevertheless have, based on the AOL data shown earlier, about 10% of users who go on to the second page. So, if from a certain point of view the phenomenon of the “easy click” on the first three results may seem worrisome, it is important to underline that not all is lost. There is a category of people capable of selecting what follows on the subsequent pages and, I would add from my experience, an increasingly skilful audience able to repeat the same search with additional terms, thereby reducing the number of results it returns.
These two evolved behaviours produce on the web a phenomenon called the “Long Tail”, from the expression coined by Chris Anderson.
This phenomenon covers the 80% of less popular needs (of any type or kind): basically, it allows us to track down that film, that book, that piece of “niche” information which is of no interest to most users and would therefore be hard to find without a search engine.
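The long-tail effect can be illustrated with a toy Zipf-like demand curve, where interest in the item at rank r falls off as 1/r; the numbers are purely illustrative, not taken from any of the sources above:

```python
# Toy "long tail": demand for the item at rank r decays as 1/r.
N = 10_000
demand = [1.0 / r for r in range(1, N + 1)]
total = sum(demand)

head = sum(demand[:3]) / total   # the three most popular items
tail = sum(demand[3:]) / total   # everything else

print(f"head share: {head:.1%}, tail share: {tail:.1%}")
```

Even with demand concentrated at the top, the aggregate of the thousands of niche items dominates (here the tail takes over 80% of total demand), which is exactly why a search engine able to reach them is so valuable.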
In addition to these two sources which, as I mentioned, are far from institutional, I recommend reading the research carried out at Cornell University entitled “Eye-Tracking Analysis of User Behavior in WWW Search”.
In this case the number of clicks and the user sample are less important; instead, keen attention is paid to users’ perception of the page through the study of eye tracking, in other words how long, and in what manner, the user observes the page. Again, the first three results play a leading role compared to the others.
For the most curious of you, I also recommend a look at the “Google Trends” service, which monitors the web and proposes daily statistics on what interests users most. Unfortunately, clicks in such large numbers normally reflect generic interests, and they end up highlighting popular national events such as gossip, football and politics… From the social point of view, I believe the web is much more complex, and definitely much more receptive, than these “trends” pages suggest.
In conclusion, there is reason to be both optimistic and pessimistic, because the web gives us the opportunity to communicate, to express our opinions, to post a video or a photo, material which certifies a fact unequivocally, to look for something very hard to find, present only at the end of the “long tail”… But, at the same time, we often tend to be satisfied with what the web offers us, without fully using the tools at our disposal, first of all our brain.
Do not be satisfied with the first three results given by the search engines; you may find something more interesting on the second or third page.


Website popularity: an ephemeral love between algorithms and culture

Imagine putting all our personal and cultural references into a large jar: photographs, books, notes, stories, poems, birthday and Christmas greeting cards, drawings made by our children when they were little, and so on and so forth.
Imagine that, just after filling the jar to the top, we realize that its mouth is a bit narrow, so we can pull out only one object at a time. At this point we have to overcome the impasse and establish a picking rule, otherwise all our valuable objects will become unreachable.
Well, this is what happened on the Internet! The network has become our multimedia knowledge container, but not only that. While we were putting our information into the global container, we discovered that, without appropriate search tools and without a network of relationships, the information was often unreachable. After all, how many times in the past has the treasure map been lost, and with it the hidden treasure?
Today the success of a search engine is a direct consequence of its ability to collect and display information from the network, which means, to use the metaphor above, having the jar completely full. The search engine has to show this information in a logical sequence but, above all, it has to guess the user’s request from the few terms it receives.
The growing amount of information, along with the growing number of people on the network, has pushed to the top the best web search engines, able to satisfy our demands in less than a few instants. At the top sit the “algorithms”, methods of calculation designed by humans and run obsessively by computers: computers able to judge, by themselves, the information spread across the network.
Among these algorithms, probably the most famous is “PageRank”. This algorithm assigns a different importance to web pages according to their popularity, or rather to the number of sites linking to them: the higher the number of sites that link to “our” web page, the greater our popularity.
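The idea can be sketched in a few lines of Python. This is a minimal, illustrative version of the PageRank iteration on a made-up three-page web, not Google’s actual implementation; the damping factor of 0.85 is the value commonly cited in the literature:

```python
# A minimal sketch of the PageRank iteration on a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page starts with a small base score...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...then receives a share of the rank of every page linking to it.
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Three pages: A and C both link to B, so B ends up the most "popular".
toy_web = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # B receives the highest rank
```

Because A and C both link to B, B ends up with the highest score: popularity here is nothing more than being linked to by others, weighted by how popular the linkers themselves are.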
Today there are many other intelligent algorithms that analyse “our” web page: the contents it expounds, every single term used, the frequency with which we update it, the depth of navigation offered by our portal/site/blog, the interest expressed on social networks, how cleverly the page is written (title, subtitle, …), and so on.
As in the real world, so in the virtual one: reaching high popularity means hard work. We must be well known by lots of people on the network, and therefore clicked, linked, posted, mentioned… And of course, just as in the real world, keeping up our popularity is extremely difficult; it requires efficiency and talent even from the most tenacious.
Sometimes, however, popularity generates further popularity without any new strategy being applied: being on the “front page” means being featured in the “global showcase”, selected and mentioned, and this will increase our ranking; as a result, our new position will bring us further to the fore, raising our ranking higher and higher.
Popularity on the web, the aspiration of every blogger or writer, depends on an extremely rational, algorithmic selection process, I dare say similar to the one described by Darwin: a process of evolutionary selection that assigns notoriety to those already famous, yet is no less interested in new, original, lasting forms of information and culture. We are talking about a rational, cynical process, unceasingly active, open to innovation, a “mutant”, sometimes complicated and sometimes incredibly superficial.
Given these premises, who knows what kind of love will blossom between the ranking algorithms and our culture: a true one, or just an ephemeral one?