Over the past few years, artificial intelligence has evolved rapidly, resulting in the launch of many AI-powered technologies and tools. One of the hot topics in the AI world these days is AI chatbots, which have become increasingly popular for their human-like conversations and companionship. Although chatbots can help us in many ways, there are concerns over their safety and ethical use. This is well reflected in the recent ban on Replika in Italy. In this article, we will discuss the dark side of AI chatbots and present the key reasons behind the ban of Replika in Italy.
AI Chatbots & The Growing Concerns
AI chatbots have been around for a while now, but the game completely changed with the launch of ChatGPT. In fact, ChatGPT became the fastest-growing app of all time, reaching 100 million users within two months of its launch.
The popularity of ChatGPT is not just due to its human-like conversations but also to its wide range of applications, such as answering questions in depth, drafting insightful texts, and writing code. Moreover, its ability to learn and improve over time makes it a valuable tool for individuals and a wide range of industries.
AI chatbots like ChatGPT seem set to dominate the future. Microsoft is investing $10 billion in OpenAI (the developer of ChatGPT), while Google is actively competing for the AI chatbot market with its own chatbot, Bard. Beyond the tech giants' battle for market share, everyday users have also come to appreciate the benefits of chatbots.
Despite all this glory, many are concerned about chatbots' harmful impacts. Some of the growing concerns are as follows:
Inaccurate or Misleading Information
Chatbots can hold human-like conversations and present answers so polished that they always look right. Many times, however, the answers they give are inaccurate or misleading. Speaking specifically about ChatGPT: it is built on a large language model that generates responses by recognizing statistical patterns in its training data. In effect, ChatGPT arrives at an answer through a series of guesses, predicting one likely word after another. Because of that, it cannot reliably separate fact from fiction, which results in confidently delivered misinformation. This can have serious consequences in areas like healthcare, politics, and finance.
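To make the "series of guesses" concrete, here is a minimal, illustrative sketch of next-token prediction using the small, openly available GPT-2 model via the Hugging Face transformers library. The models and sampling strategies behind ChatGPT itself are not public, so treat this purely as a conceptual stand-in:

```python
# Minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 is used as a small public stand-in; ChatGPT's models are far larger,
# but the generation loop is conceptually similar.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits[0, -1]            # scores for every candidate next token
        probs = torch.softmax(logits, dim=-1)              # turn scores into a probability distribution
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token: a guess, not a fact lookup
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The key point is the sampling step: the model picks a statistically plausible continuation, not a verified fact, which is exactly why polished output can still be wrong.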
Plagiarism and Cheating
Chatbots are getting more and more attention from students, as they can assist with many activities: writing an essay, proofreading and polishing writing, rewriting content, and a lot more. In fact, ChatGPT managed to pass the US Medical Licensing Exam, scoring at around the 60% passing threshold. There is therefore a growing concern about the unethical use of chatbots in academia and the impact on students' learning skills.
Biased Behavior
Biased behavior is another rising concern about chatbots, especially around race, gender, and ethnicity. Since chatbots learn from data, the biases present in that data carry over into the model. ChatGPT has already been criticized for generating biased responses. Chatbots can therefore unintentionally exhibit biased behavior that leads to discrimination and unfair treatment.
Privacy and Data Protection
Chatbots collect plenty of user data, such as personal information, browsing history, and behavioral patterns. However, most chatbots don't have clear privacy and data protection policies in place. There is, therefore, a concern that user data could be misused, or even compromised by malicious actors and exploited for criminal activities.
Ethical Implications
Another concern about chatbots is their broader ethical implications. There is ongoing debate over how chatbots could be used to intentionally manipulate individuals, spread false information, or encourage risky behaviors. For example, ChatGPT can present false information in a very polished way, which could be exploited to spread harmful ideologies or extremist opinions.
In short, chatbots have attracted a massive user base owing to their human-like interactions. Still, they bring plenty of concerns that cast doubt on their credibility in the long run.
Why was Replika Banned in Italy?
Concerns about chatbots are not limited to individuals; governments are also wary of this emerging technology. This is reflected in Italy's recent ban on Replika.
Replika is an AI-powered chatbot that provides human-like conversations through customizable digital avatars that act as a "virtual friend". It is designed to give users a virtual emotional support companion that wards off loneliness and supports their mental health. It even allows users to customize the avatar's backstory and personality for more engaging responses.
Although Replika may seem like an attractive virtual friend, there have been concerns about the ethical implications of using it, especially in a mental health context. It poses risks to minors and emotionally vulnerable people, which is why Italy's Data Protection Authority (the GPDP) banned it in Italy.
According to Italian regulators, Replika's influence on users' moods may pose heightened risks for individuals who are still developing or in an emotionally fragile state. The watchdog also cited the app's lack of an age verification system: the platform asks only for a user's name, email, and gender, and does nothing to block underage users.
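To illustrate what regulators found missing, here is a purely hypothetical sketch of a minimal age gate at sign-up. Replika's actual onboarding code is not public; every name, field, and threshold below is an assumption for illustration only:

```python
# Hypothetical sketch of the kind of age gate regulators expect at sign-up.
# Replika's real onboarding flow is not public; all names here are invented.
from datetime import date
from typing import Optional

MINIMUM_AGE = 18  # assumption: an adults-only threshold; the legal bar varies by jurisdiction

def is_old_enough(birth_date: date, today: Optional[date] = None) -> bool:
    """Return True if the user is at least MINIMUM_AGE years old today."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    age = today.year - birth_date.year - (0 if had_birthday else 1)
    return age >= MINIMUM_AGE

def register_user(name: str, email: str, birth_date: date) -> dict:
    # Reject underage sign-ups outright instead of silently accepting anyone.
    if not is_old_enough(birth_date):
        raise PermissionError("Sign-up blocked: user is under the minimum age.")
    return {"name": name, "email": email}

# Example: an underage sign-up is rejected at the door.
try:
    register_user("Sam", "sam@example.com", date(2012, 5, 1))
except PermissionError as exc:
    print(exc)
```

Even a basic check like this relies on self-reported birth dates, which is part of why regulators treat age verification for sensitive apps as a harder, still-open problem.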
In addition, the regulators pointed out that Luka Inc (the developer of Replika) failed to disclose how it processes user data, which is a legal requirement for apps operating in the EU. The app therefore breaches European privacy regulations, in particular the GDPR, and unlawfully processes personal data.
How Safe are Chatbots to Use?
At the moment, the safety of chatbots is a complex topic. Chatbots are drawing attention both for their usefulness and for their ethical implications. They currently look safe to use for getting basic information, answering simple questions, or performing other simple activities. However, they raise safety concerns when used for more sensitive purposes, such as emotional support or medical advice.
Concerns about inaccurate information, unintentional bias, unchecked personal data processing, and other ethical issues also cast doubt on the reliability of chatbots. To sum up, AI-powered chatbots are an excellent invention and can help in plenty of ways, but they need to be regulated to make them safer to use.