Does artificial intelligence have a language problem?

Technology loves a bandwagon. The current one, fuelled by academic research, startups and attention from all the big names in technology and beyond, is artificial intelligence (AI).

AI is commonly defined as the ability of a machine to perform tasks associated with intelligent beings. And that’s where our first problem with language appears.

Intelligence is a highly subjective phenomenon. Often the tasks machines struggle with most, such as navigating a busy station, are ones people perform effortlessly, seemingly without much intelligence at all.

Understanding intelligence

We tend to anthropomorphise AI based on our own understanding of “intelligence” and cultural baggage, such as the portrayal of AI in science fiction.

In 1983, the American developmental psychologist Howard Gardner proposed his theory of multiple intelligences, which grew to describe nine types of human intelligence – naturalist (nature smart), musical (sound smart), logical-mathematical (number/reasoning smart), existential (life smart), interpersonal (people smart), bodily-kinaesthetic (body smart), linguistic (word smart), intra-personal (self smart) and spatial (picture smart).

If AI were truly intelligent, it should have equal potential in all these areas, but we instinctively know machines would be better at some than others.

Even when technological progress appears to be made, the language can mask what is actually happening. In the field of affective computing, where machines both recognise and reflect human emotions, the machine's processing of emotion is entirely different from the biological process in people, and from the interpersonal emotional intelligence Gardner described.

So, having established that the term “intelligence” can be somewhat problematic in describing what machines can and can’t do, let’s now focus on machine learning – the domain within AI that offers the greatest attraction and benefits to businesses today.

Read more: Computer Weekly
