The 10 biggest AI misconceptions


Introduction

Nice to have you with us again for another episode of Digital for Productivity. In this episode, we take a look at the ten biggest AI misconceptions.

Misconception number one: AI is an autopilot. AI is definitely not that; AI is a co-pilot. What do I mean by that? I like to use the analogy of the self-driving car. These systems are getting better and better, but it is certainly not a stupid idea to have a real driver at the wheel. Or take an airplane: it is a good idea to have a human pilot on board, even though 90% of the flight is on autopilot. When dealing with AI in practice, I often find that it is either not used at all or that there is a kind of blind trust, meaning it is simply assumed that everything the AI spits out is immediately correct. That is a bit like asking a random stranger on the street about an important topic and taking their answer at face value, no matter who they are or where they come from.

If you had a serious illness, you would probably get a second or third opinion and use your common sense. I recommend the same when it comes to AI. Second misconception: ChatGPT replaces Google. I see a lot of people who used to google and now say: Okay, great, ChatGPT is much more intelligent, I'll use that instead. The problem is that ChatGPT does not browse the internet by default: the free 3.5 model was trained on data up to September 2021, GPT-4 up to October 2023. With add-ons you can also access current information, but that is not what it was built for. I always say: Google is the librarian. Google's mission is to catalog the world's knowledge, while ChatGPT is your intelligent colleague or your new apprentice. He brings a lot to the table, but just as an employee can't always say where he knows something from, you don't get any sources either. There is a wonderful solution to this, and it is called Perplexity. Perplexity is a bit like the combination of Google and ChatGPT.

With Google you get millions of search results, and with ChatGPT you get a single answer; neither is always useful. With Perplexity, you have access to the internet, and you get a well-structured answer with four or five main sources plus additional sources, so you can see exactly how the answer came about. In addition, you receive suggested follow-up questions so you can delve even deeper into a topic. In my opinion, Perplexity is much better suited to research. The next misconception: AI answers questions. We always ask AI questions, and I like to stick with the analogy of asking an employee a question. But perhaps it is also a good idea to turn this around and let the AI ask you questions. For example, if you or your children are preparing for an exam, it might be a good idea not just to ask questions about the material, but to say: please ask me questions, evaluate my answers, and then ask me the next question. You can build wonderful custom GPTs for this; a minimal code sketch of such a sparring partner follows below.
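
As a concrete illustration, here is a minimal sketch of such an exam sparring partner using the OpenAI Python library. The model name, the subject, and the exact prompt wording are assumptions for illustration, not a fixed recipe; inside ChatGPT itself you would achieve the same by putting this system prompt into the instructions of a custom GPT.

```python
# Minimal sketch of an exam "sparring partner", assuming the openai Python
# package is installed and OPENAI_API_KEY is set. Model name and prompt
# wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# The system prompt turns the usual direction around: the AI asks, you answer.
messages = [{
    "role": "system",
    "content": (
        "You are an examiner preparing a student for an oral history exam. "
        "Ask one question at a time. After each answer, grade it briefly "
        "and then ask the next, slightly harder question."
    ),
}]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})
    answer = input("> ")  # your answer to the examiner's question
    if answer.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": answer})
```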

I did my history A-levels as an oral exam, for example, and I would have liked a bot like this: one that asks me questions in preparation for the oral school-leaving exam, assesses my answers, and acts as a sparring partner. So AI doesn't just answer questions; it can also be used wonderfully as a sparring partner, just like a human. The next misconception: I need 1,000 tools. No. Here I always use the analogy of a toolbox. If you only have a hammer, and you are on the fifth floor like me while the screwdriver is in the basement, then maybe you don't always want to go to the basement, and you can use the hammer to drive the screw into the wall. But that is not always the most elegant solution. That's why I do go down to the cellar from time to time, or keep a small, well-stocked toolbox to hand. However, if you have too many tools in a toolbox, and you may be familiar with this, it becomes expensive, as if you had bought half the DIY store, and secondly, you can't find anything. So a well-organized toolbox is always a good idea. And a very specific tip: before you look at five, six, seven different tools, first try the GPTs in ChatGPT.

The good news: since the release of GPT-4o, you can even use GPTs for free. You still need a paid account to create your own GPTs, but you can at least use a large number of existing GPTs, and there are some very, very good ones.

The next misconception: AI knows everything. No, AI is only as good as its data, and it is always important to look at what data was used to train a model. For example, if you have questions about the current presidential election campaign in the USA, it might not be a good idea to ask the free 3.5 model of ChatGPT, which is still available, because its training data only runs until the end of September 2021. You may also have heard that Amazon tried to support recruitment with AI and ran into the problem that the AI clearly favored men. And why was that? Simply because, in the past, it was mostly men who were hired. If the historical examples of successful software developers are all men, you shouldn't be surprised that men are favored when candidates are evaluated. So AI is, as I always say, statistics on steroids.
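
To make the data-bias point concrete, here is a minimal, entirely synthetic sketch (the dataset, numbers, and feature names are assumptions for illustration): a model trained on skewed historical hiring data simply learns the skew.

```python
# Minimal sketch: a model trained on skewed historical data inherits the skew.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)   # 1 = male, 0 = female
skill = rng.normal(0, 1, n)      # equally distributed across genders
# Historical hiring: skill mattered, but men were also favored outright.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
print(model.coef_)  # large positive weight on gender: the bias is now "learned"
```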

Coming back to the bigger picture: ultimately, AI is a probability machine that determines the next most likely step on the basis of past or currently entered data. It is simply important to understand what data you have just fed into the AI, because that is both a strength and a weakness. That's why sometimes less is more; sometimes I deliberately leave the internet out of the options. Which leads me to the next misconception: the more data, the better. No. I always say: data quality before data junk. It is better to have a compact, good set of data than a large pile of junk. Of course, the law of large numbers still matters: the more data you have, the more accurate your estimates tend to become. But if all you have is junk, then more of it is no good either. And logically, the old law of statistics still applies: correlation does not mean causality. Just because two data series move in the same way does not mean there is a causal relationship between them, even if it looks like one at first glance. Just as common sense was important in the days of classical statistics, it is still, and almost more, important in the age of AI.
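
As a quick illustration of the correlation trap (the two series here are invented for the example): two quantities that both simply grow over time will show a high correlation with no causal link whatsoever.

```python
# Two unrelated, upward-trending series correlate strongly: a classic
# spurious correlation. The numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2024)
ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 3, len(years))
smartphone_users = 10 + 8 * (years - 2000) + rng.normal(0, 5, len(years))

r = np.corrcoef(ice_cream_sales, smartphone_users)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1.0, yet neither causes the other
```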

The next misconception: all data is public. I often hear: no, we can't use AI, because all our data goes out. You can prevent that, and you can now prevent it in several places. As a user, you can object to your data being used to train AI, for example on Facebook (Meta); you can opt out, although unfortunately the default is that your data is made available first. And in ChatGPT, you can now even start a temporary chat in the free version; it is not saved, and the data is not used by OpenAI to train the model. In the paid version, and also in the settings of the free version, you can switch off the use of your conversations for training. The default, of course, is that the data is transferred first, because these models naturally live on data, but you always have the option of restricting this. And if you use Microsoft 365 and log in to the Edge browser with your Microsoft 365 account, your data is protected there.

This means that the data you enter when using GPT-4 in the Edge browser is not transferred to OpenAI. The question, as always, is: to what extent do you trust whom? I always say I have a three-point scale of trust: I trust Microsoft the most, because it does not pass things on to OpenAI; then comes OpenAI itself; and at the bottom of the list is certainly Facebook (Meta). Especially in conjunction with Microsoft 365, this is a good option. And of course you can always anonymize data before asking a question, by replacing names with variables and mapping them back afterwards. That is an option some of my customers use.
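
As a small sketch of this anonymization trick (the names and the placeholder scheme are illustrative assumptions, not a specific product feature): replace sensitive names with variables before sending a prompt, and substitute them back into the answer.

```python
# Minimal sketch of prompt pseudonymization: swap real names for variables
# before sending text to an AI service, then map them back afterwards.
# Names and placeholder scheme are illustrative assumptions.

def pseudonymize(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"PERSON_{i}"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

prompt, mapping = pseudonymize(
    "Summarize the complaint from Anna Schmidt about Max Weber.",
    ["Anna Schmidt", "Max Weber"],
)
print(prompt)  # names are gone before anything leaves your machine
# ... send `prompt` to the AI service, receive `answer` back ...
answer = "PERSON_1's complaint concerns PERSON_2's late delivery."
print(restore(answer, mapping))
```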

The next misconception: AI builds my PowerPoint slides fully automatically. I have tried all sorts of things: Beautiful.ai, Gamma, Microsoft 365's Copilot integrated into PowerPoint in the paid version. My experience is: yes, they can do slides, the kind you mostly see, for "supervised reading". What you see in 90% of all slide decks today, yes, you can do that with AI. But really good slides, in the sense of a proper stage set or really great animations, are in my opinion still a long way off for AI. However, if it has to be done quickly, or if the standard slides you have today are completely sufficient, then you can use it wonderfully. As you know, I am on the road as a keynote speaker and lecturer on smaller and larger stages, so a standard slide is not enough for me, and that is why I currently don't use AI for this. Next: customers don't want to be looked after by AI. Oh yes, they do. Klarna, for example, has an AI assistant that does the work of 700 customer service agents, and it has not only saved significant costs but also produced higher customer satisfaction. The bots are now so much better than they were six months ago. And I keep hearing: these bots are terrible, because everyone has had the experience somewhere of "press one for this, press two for that, press three for that", with endless waiting and poor quality.

And then people often say: I've tried it, it's no good. Yes, that's like a toddler falling on their face once while learning to run and saying: I tried running once, doesn't work, never again. Presumably you all walk upright today because you said: I'll just try it again. And that is exactly the case with these systems. They are now so good that they often even get better ratings, and that is due to several factors. First, the systems are reachable 24/7 and respond immediately. Second, you usually get an answer more quickly, because the AI identifies problems much faster than humans do. And third, the AI doesn't get stroppy. You may have noticed that if you or the customer service employee is having a bad day, a conversation like this can escalate quickly, and at a time when skilled workers are in short supply, it is also becoming increasingly difficult to find well-trained call center agents. So you should be careful with the statement "no, my customers want to be looked after by people"; what you actually hear is that customers want to be looked after well.

And if an AI can do this better, then customers don't want humans, they want AI. The relevant question is: can I get my problem solved quickly and competently? And if a competitor with AI can solve it better, with 40% or even more in cost savings, then you have a competitive problem. If you call Vodafone, for example, 100% of fault reports are already taken by AI, and 56% of all inquiries are already resolved by it. Against this background, you shouldn't make the mistake of saying: no, people want people. No, people want competent and empathetic support, and systems with artificial intelligence can now deliver this even better than humans in some cases. And the last misconception: AI is the future. No, AI is the present. AI is already here, and in Germany we always have two problems: either we totally overestimate a technology, self-driving cars being the classic example: wow, soon we'll all be driven around; even back in the 1950s there were videos of the whole family drinking coffee in the car. Or then a Tesla drives into a white truck, and opinion swings to the exact opposite: no, that doesn't work at all, and the technology is completely demonized, instead of simply taking a realistic view of these things and using them realistically. As I always say: don't lament, try it out.

Conclusion

In other words, here is a summary of the biggest misconceptions: AI is a co-pilot. Google is the librarian and AI is the new apprentice. AI can also ask questions. Test the GPTs first and use a few good tools. AI is only as good as the data it has been trained with: data quality before data junk. You can protect your data. So far, AI only builds slides for "supervised reading". Customers are happier with good AI bots. And AI is the present. These are my ten statements on the subject of AI. By the way, I am always very happy to receive feedback on how you see things and what your experiences are.

And of course, if you haven't done so yet, I am always happy if you subscribe to this podcast and give it a positive review; that way even more people can discover it and benefit from it.

With this in mind, I wish you a successful week with AI and common sense.
