We’ll learn to ask AI the right questions, just like we’ve learned to google

4 min read

2023 was the year when businesses began to grasp the usefulness of AI chatbots in all sorts of processes, incorporating them into more and more workflows. New technology always comes with a learning curve, though. What have we learned so far, and what is the way forward?

The British Collins Dictionary chose AI as its word of the year for 2023, recognizing how ChatGPT has transformed the way we approach many of our everyday tasks and challenges. Dictionary.com, on the other hand, singled out another AI-related term as its word of the year: hallucinate. To hallucinate, in the AI sense of the word, means ‘to produce false information contrary to the intent of the user and present it as if true and factual’.

As this contrast demonstrates, AI undoubtedly gives you advantages, but users are increasingly conscious of, and concerned about, its hidden traps and workarounds. As a company that works a lot with AI technology, we at Neticle also keep a close eye on how these new possibilities shape our world, both in business and in our everyday lives. Back in the summer, we analyzed some of the online opinions surrounding AI in Europe, and we frequently discuss the topic at roundtables, conferences, and even in our expert interviews.

Péter Szekeres discusses AI chatbots at the University of Pécs, with members of the CoRe lab

AI technology has a myriad of uses: it can help us translate podcasts, create new product designs, write better email subject lines, keep up with incoming emails, and free us from repetitive tasks. A typical example is employing chatbots so we no longer have to answer clients’ frequently asked questions over and over. Alongside all this, however, there is growing alarm that AI will take over too much, including fundamental jobs and schoolwork. As the term ‘hallucinate’ suggests, people also fear that the information presented by AI is too often false.


AI is here to stay, and the responsibility is ours

We must face the reality that AI is not going anywhere: it has actually been a part of our lives for years, even if it only fully entered the limelight with the introduction of ChatGPT. Almost all businesses are using it in some form, so refusing to work with it would be a competitive disadvantage. We have to learn to treat it as an opportunity, not a threat.

However, this is only possible if we use it in a conscious and responsible way, which involves several things. First, companies need to make sure that all colleagues have a set of clear ethical guidelines and are aware of the legal implications in their country as well. This requires continuously monitoring recent developments.

The second, but equally important, aspect of conscious AI use is training employees: the use of AI chatbots is only seemingly self-explanatory! In reality, users should understand how these chatbots pull information from the online world and, above all, they must be able to ask the right questions.

This means using clear and concise language and giving the chatbot plenty of context to work with. A lack of this skill is what leads to most of the wrong results and false information. Neticle’s CEO Péter Szekeres likens the learning process to how we’ve had to master the use of Google to find what we’re looking for effectively.
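To make the contrast concrete, here is a minimal sketch (not from the article) of what a vague versus a well-specified prompt looks like when a chatbot is used programmatically. It assumes the OpenAI Python SDK with an API key set in the OPENAI_API_KEY environment variable; the model name and the prompts themselves are purely illustrative.

```python
# Minimal illustration: a vague prompt vs. a specific one.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# A vague prompt gives the model very little to work with.
vague = "Write something about our product."

# A specific prompt states the audience, goal, constraints and content.
specific = (
    "Write a 3-sentence product update email for existing customers of our "
    "text-analytics tool. Mention the new multilingual sentiment feature, "
    "keep the tone friendly, and end with a link placeholder."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The same principle applies whether you type into a chat window or call an API: the more clearly the task, audience, and constraints are spelled out, the less room the model has to hallucinate.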

We’ll continue to follow the developments of AI technology in 2024 as well, and do our best to apply and present best practices through our work. We hope you’ll join us next year, too!
