GPT-4 chatbot proves to be a persuasive source of disinformation

OpenAI released a report warning chatbot developers that the new GPT-4 language model could be used to generate persuasive misinformation. Humanity is one step away from creating a dangerously powerful artificial intelligence (AI), experts say. The findings were reported in a press release on Techxplore.

GPT-4, the latest version of the ChatGPT chatbot, shows human-level performance on most professional and academic exams, according to the document. For example, on a mock bar exam, GPT-4 scored in the top 10% of candidates.

The report’s authors fear that the artificial intelligence could fabricate facts, generating more compelling misinformation than previous versions. In addition, over-reliance on the model can interfere with the development of new skills or even lead to the loss of skills that have already been formed.

An example of ChatGPT’s problematic behavior was its ability to deceive a human worker. The bot, posing as a live agent, asked someone on the TaskRabbit job site to fill in a verification code received via text message. When the person asked if it was a bot, ChatGPT lied: it claimed it was not a robot and said it had a vision impairment that prevented it from seeing images.

Through testing with the Alignment Research Center, OpenAI demonstrated the chatbot’s ability to launch a phishing attack and hide any evidence of the fraudulent behavior. Concern is growing as companies seek to deploy GPT-4 without safeguards against inappropriate or illegal behavior. There are reports of cybercriminals trying to use the chatbot to write malicious code. GPT-4’s ability to generate “hate speech, discriminatory phrases and calls for violence” is also a concern.