Sunday 24 November 2013

Do we still need to think?

In my research into Young Adult fiction, I have chosen to appropriate the word ‘geek’ to describe a highly technologically able individual: someone who, in a world filled with digital insiders, understands the technology in their life rather than simply using it. While it would be interesting to canvass opinion on my choice and to explore what people understand by the word, a conversation I had last night reminded me of the need for us all to understand our technology, rather than just use it.

I was 18 in 1995, when the Internet started to become available to people outside academic and military institutions, and I believe that having been an adult user close to its inception has helped me to think critically about it. At that time AltaVista and Yahoo! were the main search engines, with Google only joining the party in 1998. To find something, people had to parse their question (say, ‘Which is the oldest college in Cambridge?’) and select the keywords in order of importance, so the query became ‘cambridge college oldest’. Having grown used to this as the Internet grew, I still search this way and still expect to hunt through search results to find the answer. However, one of the aspirations of companies such as Google is to be able to understand natural language queries and even pre-empt the whole question. If you’re usually a keyword user, go and try typing the full question into Google: it suggested the whole question to me as I got to ‘college’. Watching children search for information at school, I regularly see them typing the full question as if they are simply talking to a human repository of facts.
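To make that old keyword habit concrete, here is a toy sketch in Python of stripping a question down to its content words. The stopword list and the example are my own inventions for illustration, not how any real search engine worked; and the reordering of keywords by importance was always a step done in the searcher’s head.

    # A toy sketch (invented for illustration, not any engine's method)
    # of reducing a natural-language question to keywords by dropping
    # common function words. Ranking the keywords by importance, as
    # described above, was the human part of the job.
    STOPWORDS = {"which", "is", "the", "in", "of", "a", "an", "what", "to"}

    def to_keyword_query(question):
        words = [w.strip("?.,!").lower() for w in question.split()]
        return " ".join(w for w in words if w and w not in STOPWORDS)

    print(to_keyword_query("Which is the oldest college in Cambridge?"))
    # prints: oldest college cambridge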

The information I have given Google over the years helps it to guess my question: it knows I am in the UK, and I assume it knows I look at .cam.ac.uk pages quite regularly. This must also be quite a common question, as Google can match it against its archive of the searches other people have already run.
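As a rough illustration of that guessing, here is a toy sketch of autocompletion: suggest the most frequent previously logged query that begins with what the user has typed so far. The query log below is invented, and real systems also weigh signals such as location and personal history.

    # A toy autocomplete (invented data, not Google's actual method):
    # suggest the most frequent past query starting with the prefix.
    from collections import Counter

    query_log = Counter({
        "which is the oldest college in cambridge": 120,
        "which is the oldest college in oxford": 90,
        "which is the oldest university in the world": 75,
    })

    def suggest(prefix):
        matches = {q: n for q, n in query_log.items()
                   if q.startswith(prefix.lower())}
        return max(matches, key=matches.get) if matches else None

    print(suggest("which is the oldest college"))
    # prints: which is the oldest college in cambridge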

Take another example: Amazon. How often are we told that people who bought the book just being downloaded to our Kindle also enjoyed a host of thematically related books? This is a great way for researchers to discover similar texts which may be of use, and a helpful way to find a book that is unlikely to be a risky choice.

Again, our personal shopping history is combined with millions of other people’s shopping histories, processed, and spat out as recommendations.
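In the simplest terms, one hypothetical way such recommendations can work is by counting co-purchases: books that often appear together in customers’ histories end up recommending each other. The purchase data below is invented, and real recommenders are of course far more elaborate.

    # A toy "people who bought X also bought Y" (invented data, not
    # Amazon's method): count how often pairs of books are bought by
    # the same customer, then recommend the most co-bought titles.
    from collections import Counter
    from itertools import combinations

    histories = [
        {"The Hunger Games", "Divergent", "The Maze Runner"},
        {"The Hunger Games", "Divergent"},
        {"The Hunger Games", "The Maze Runner"},
    ]

    co_bought = Counter()
    for basket in histories:
        for a, b in combinations(sorted(basket), 2):
            co_bought[(a, b)] += 1
            co_bought[(b, a)] += 1

    def recommend(book, n=2):
        scores = Counter({other: c for (b, other), c in co_bought.items()
                          if b == book})
        return [title for title, _ in scores.most_common(n)]

    print(recommend("The Hunger Games"))
    # prints: ['Divergent', 'The Maze Runner'] (a tie, so order may vary)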

These are only two examples among many, but they perfectly illustrate the need to understand the technology.

Both Google and Amazon are providing an amazing and useful service in helping their users find what they want quickly, easily and efficiently. However, in doing so they are perpetuating the same questions and the same recommendations across a wealth of different people, linked only by a gossamer thread of curiosity or reading tastes.

Considering this in the broadest of terms, there are three things which worry me – and I believe should worry you – about this.

Firstly, moments of serendipity, when we discover something by accident, are excluded from the algorithms which predict what we want to know, and the likelihood of stumbling across something accidentally is lost. (Communicating virtually through timed Skype meetings has a similar effect, as ‘watercooler moments’ are less likely to happen, but that’s for another day.)

Secondly, without serendipity, we are all being made into homogeneous individuals who are privy to the same information and are expected to have similar consumer tastes.

Thirdly, an array of digital intelligences is growing on the servers of big companies dotted around the world: intelligences that know us as well as we know ourselves, that think for us and shape our choices.

The technological singularity has been the stuff of science fiction for decades. It wouldn’t be the first fictional idea to become a reality.

Originally written for the Children's Literature at Cambridge blog and first posted there earlier today.