Artificial Intelligence for Users – The Fundamentals – Part 1


Olaf Lipinski

Be it self-driving cars or face recognition at the airport, “Artificial Intelligence” (AI) seems to be everywhere in the media these days. While self-driving cars are still a long way off, there are many practical examples of “Artificial Intelligence” in our daily lives already: sorting letters, evaluating CVs in HR departments, or the ever-friendly chatbot on the service hotline.

In this series of articles, we will examine the opportunities and changes AI has brought to the daily lives of Product Owners, DevOps and IT professionals – and let’s not forget the users! We will also focus specifically on testing: which testing tools already work with AI (article #3), and what is different when testing AI-controlled software (article #4).

As soon as AI is mentioned, many acronyms and terms follow: AI, ML, Data Science, Neural Networks, etc. So let’s start by defining the terminology:

  • Artificial Intelligence (AI): A subdomain of computer science which focuses on the simulation of intelligent behaviour.
  • Machine Learning (ML): A subdomain of AI which uses computer algorithms that analyse data and improve based on what they have learnt, without being explicitly programmed to do so.
  • Neural Networks (NN): The technical emulation of biological neural networks. Just as neurons react to the sensory inputs they receive by passing on signals to the brain, so too do artificial neurons pass numerical signals on to the next layer of the network.
  • Deep Learning (DL): A specialised subdomain of “Machine Learning”, which uses layered Neural Networks to simulate the human decision-making process.
  • Data Science: An interdisciplinary science which includes all methods of data processing. Data Science makes use of AI, but is not part of it. A data scientist is therefore a person who is well-versed in AI, ML, DL, NN and much more.

 

And while we’re talking about definitions, here are the ways in which AI is able to learn:

  1. Supervised Learning: During training, the correct answer is supplied for every sample shown (labeled data). For example, for handwriting recognition, a picture showing a “3” also carries the label “3” (see the short sketch after this list).
  2. Unsupervised Learning: The AI has to discover patterns and rules in the training data (unlabeled data) on its own. The focus is on clustering and correlations, for example for the analysis of customer behaviour.
  3. Reinforcement Learning: The “carrot and stick” principle. The AI is taught the difference between right and wrong answers with “rewards” and “punishments”. It is up to the AI to work out what the correct answers have in common.
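
To make the distinction a little more concrete, here is a minimal Python sketch (assuming scikit-learn is installed) that contrasts supervised and unsupervised learning on the classic handwritten-digit dataset. The dataset, the model choices and the parameters are illustrative assumptions, not taken from the article itself.

    # Minimal sketch: supervised vs. unsupervised learning on handwritten digits.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    digits = load_digits()                      # 8x8 images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    # Supervised learning: every training image comes with its label (a "3" is labelled "3").
    clf = LogisticRegression(max_iter=5000)
    clf.fit(X_train, y_train)
    print("Supervised accuracy:", clf.score(X_test, y_test))

    # Unsupervised learning: the labels are withheld; the algorithm has to
    # discover structure in the pixel data (here: 10 clusters) on its own.
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
    clusters = kmeans.fit_predict(digits.data)
    print("Cluster sizes:", [int((clusters == c).sum()) for c in range(10)])

In the supervised case the quality of the result can be measured directly against the known labels; in the unsupervised case the discovered clusters still have to be interpreted by a human.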

A short walk through history

It might feel like “Artificial Intelligence” appeared just a few years ago, but in reality AI is nothing new. It all started in the 1950s with the early work of Alan Turing, but for decades it remained a rather small insider niche. This changed all of a sudden in 1997, when IBM’s “Deep Blue” managed to beat then-world chess champion Garry Kasparov in a six-game match under regular tournament conditions.

In the 21st century, the continual increase in hardware processing speed, coupled with decreasing costs, meant that it was suddenly possible to build AIs with a computing power that had not been feasible before. In 2011, Apple’s “Siri” piped up – AI for Joe Bloggs suddenly became reality. And while “Computer, lights on” had been pure science fiction in Star Trek, these days it is perfectly normal for little kids to talk to Amazon’s Alexa.

Artificial Intelligence – what is different?

What distinguishes “Artificial Intelligence” from a Cloud computer or the PC on your desk? The main difference is that “classical” computer science works with various forms of if-else statements – as a decision tree, a taxonomy or in some other form. Every reaction is predetermined, and every case and the decision that follows from it have to be declared in advance. This method works fairly well with structured data in databases or forms. But as soon as a rule is forgotten or is ambiguous, this system reaches its limits, resulting in error messages or wrong results.
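
To make the limitation tangible, here is a deliberately simplified, hypothetical Python rule set in the classical if-else style; the ticket-routing scenario and all of its rules are made up purely for illustration.

    # Classical, rule-based approach: every case must be anticipated in advance.
    def route_ticket(subject: str) -> str:
        subject = subject.lower()
        if "invoice" in subject:
            return "billing"
        elif "password" in subject:
            return "account support"
        elif "delivery" in subject:
            return "logistics"
        else:
            # A forgotten or ambiguous rule ends here: error message or wrong result.
            return "ERROR: no matching rule"

    print(route_ticket("Question about my invoice"))   # -> billing
    print(route_ticket("Parcel never arrived"))        # -> ERROR: no matching rule

The second ticket clearly belongs to logistics, but because nobody declared a rule for “parcel”, the system fails in exactly the way described above.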

Enter AI. AI is able to handle unstructured data and interpret ambiguous expressions in context. That is why a Google search for “Golf wheel” shows you car parts, while “Golf green” brings up golf courses and equipment.

Google’s AI is also pretty good at Natural Language Processing (NLP) – think of the automatic subtitles in YouTube videos. With personal names it can still be pretty hilarious at times, but in general the accuracy is impressive. This technology suddenly opens up a whole new world to the hearing-impaired (keyword: accessibility).

Another example, which applies to nearly all of us, is the spam filter. Not too long ago, a large portion of incoming email was spam. Do you remember?

In 2015, only 0.1% of all spam emails destined for a Gmail account actually made it into the user’s inbox, thanks to an AI-based spam filter. The rate of false positives was only 0.05%.[i]

So what is different about AI that it can deliver such a result? Its set-up emulates the human brain (keyword: neural networks). There is no longer an explicitly pre-programmed path; instead, the AI “grasps” the rules and is able to deduce them itself.

It detects these rules through training. The more high-quality training data is available, the higher the accuracy of the final result will be. In this context, “quality” means heterogeneous data: if your training data for handwriting recognition is supplied only by right-handed year one pupils, the recognition accuracy for a text written by a 60-year-old left-hander is probably rather random.

In contrast to the deterministic approach, AI is all about probabilities. The result for a letter identified in an image might be “Letter A, with 93% probability”, which of course also means there is a 7% chance that it is something different.
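
As a hedged illustration of this probabilistic output, here is a toy spam-filter sketch in Python (assuming scikit-learn, with made-up training messages). It is of course not Gmail’s actual filter; it only shows the principle of a classifier that returns probabilities instead of a hard yes/no.

    # Toy probabilistic spam filter: the output is a probability, not a verdict.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = ["win a free prize now", "cheap pills online",
                   "meeting agenda for Monday", "please review the attached report"]
    train_labels = ["spam", "spam", "ham", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)

    # An ambiguous message: the filter reports how sure it is, e.g. "spam: 93%".
    probs = model.predict_proba(["free prize meeting"])[0]
    for label, p in zip(model.classes_, probs):
        print(f"{label}: {p:.0%}")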

To give you an idea of how these artificial neurons work together, here is an example from image recognition: the image is first split into sections of 3×3 pixels. Each section is evaluated by a neuron as “light” or “dark”, and the resulting output value is handed over to the next neural layer. The neurons in this layer might, for example, compare light and dark sections relative to their neighbours and hand their result over to the next layer, and so on, until in the end a result is delivered which might read “Letter A, with 93% probability”.
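
The following toy numpy sketch mimics only that first step, with made-up values; real networks use learned numerical weights rather than a simple light/dark threshold, but the layer-by-layer hand-over works as described.

    # Toy first "layer": split an image into 3x3 sections and rate each one light/dark.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((9, 9))               # stand-in for a 9x9 grayscale image

    first_layer_outputs = []
    for row in range(0, 9, 3):
        for col in range(0, 9, 3):
            section = image[row:row + 3, col:col + 3]
            first_layer_outputs.append(1 if section.mean() > 0.5 else 0)  # 1 = dark, 0 = light

    print("First-layer outputs:", first_layer_outputs)
    # These nine values would be handed to the next layer, which compares
    # neighbouring sections, and so on, until the final layer reports
    # something like "Letter A, with 93% probability".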

After this short introduction to the field of “Artificial Intelligence”, the next article in this series will give you some ideas of where AI-controlled software is already in use today.

This article is a collaboration of Yannick Galeuchet, Wilhelm Kapp, Olaf Lipinski and Dejan Husrefovic, SwissQ Consulting, June 2020. This English version is based on the German version, released in September 2020.

 

Sources:

[i] https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence, accessed 24.05.2020

[ii] https://ec.europa.eu/futurium/en/ai-alliance-consultation, accessed 24.05.2020

[iii] https://expandedramblings.com/index.php/gmail-statistics, accessed 24.05.2020

 
