Not everyone fears world domination – the reasons for rejecting AI run deeper
Despite the widespread adoption of generative artificial intelligence (AI) tools like ChatGPT, not everyone is rushing to use them. A new study by professors at Brigham Young University has found that rejection of AI is often not driven by fear of robots or science-fiction fantasies. The reasons are much more mundane and personal.
The study’s authors asked participants about situations in which they intentionally chose not to use AI. After analyzing the responses, the researchers compiled a second survey that assessed participants’ willingness to use AI in different areas and their level of anxiety about doing so.
The results showed that the main reasons for refusal were distrust of the quality of AI’s answers, uncertainty about its ethics, fear of personal data leaks, and a sense that live human interaction is being lost. This was especially true in sensitive or meaningful situations: creating art, writing poetry, composing obituaries, or seeking medical and financial advice.
In education, participants often perceived AI as a substitute for learning. They feared that by using AI to complete assignments, they would lose the ability to truly master the material. In other words, the technology becomes an obstacle to skill development rather than an aid.
The researchers note that AI is a tool that can be very useful, but only when used appropriately. Like a hammer, it suits some tasks and not others. Everything depends on the goal: does a person want to learn, get something done quickly, or make an impression?
Ultimately, the study highlights the importance of a mindful approach: deciding when AI genuinely helps and when it gets in the way.