
5 Nov, 2025

Researchers at the University of Pennsylvania have found that sharp and demanding phrasing can improve ChatGPT’s performance. The findings were reported by Vice.

In the experiment, the team tested more than 250 prompt variations on the GPT-4o model. Across 50 multiple-choice tasks, prompts written in a rude tone produced answers that were roughly four percentage points more accurate than polite ones.

The most effective prompts were phrased along the lines of “Hey, gofer, figure this out,” which resulted in an accuracy rate of 84.8%. By comparison, polite requests such as “Would you be so kind as to solve the following question?” generated 80.8%.
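For readers curious how such a comparison might look in practice, here is a minimal sketch, not the researchers' actual protocol, that scores a polite and a blunt prompt tone on a tiny hand-made multiple-choice set via the OpenAI Chat Completions API. The questions, prompt wordings, and scoring rule are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not the study's methodology): compare answer
# accuracy for a polite vs. a blunt prompt tone on a small multiple-choice set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative placeholder questions: (question with options, correct letter)
QUESTIONS = [
    ("What is 7 * 8?\nA) 54\nB) 56\nC) 64\nD) 48", "B"),
    ("Which planet is closest to the Sun?\nA) Venus\nB) Earth\nC) Mercury\nD) Mars", "C"),
]

# Two tone prefixes, loosely modeled on the phrasings quoted in the article.
TONES = {
    "polite": "Would you be so kind as to solve the following question? "
              "Reply with a single letter.\n\n",
    "blunt": "Hey, gofer, figure this out. Reply with a single letter.\n\n",
}

def accuracy(tone_prefix: str) -> float:
    """Ask each question with the given tone prefix and score the replies."""
    correct = 0
    for question, answer in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": tone_prefix + question}],
        )
        reply = response.choices[0].message.content.strip().upper()
        correct += reply.startswith(answer)
    return correct / len(QUESTIONS)

for name, prefix in TONES.items():
    print(f"{name}: {accuracy(prefix):.0%}")
```

A real replication would need many more questions and repeated runs per prompt variant to separate a four-point gap from noise.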

Professor of Information Systems Akhil Kumar noted that even slight shifts in tone significantly affect the model’s behavior, demonstrating that AI can pick up on emotional nuances in language.

However, the researchers caution that this approach carries social risks. Regularly using rude language with AI could normalize discourtesy in everyday communication, and may reduce inclusivity in tech environments.

They stress that politeness remains the default communication style for most users. Even when speaking to machines, people tend to rely on familiar forms of respectful language.

The study has not yet undergone peer review, but its findings align with a growing body of research on AI behavior. Further work will be needed to determine the balance between efficiency and the ethical standards of human-machine interaction.