Humans have spent thousands of years looking for ways to make life easier. We turned from hunting and gathering to agriculture, then we invented machines to help with labor. Now we have artificial intelligence so we don't have to bother thinking anymore. Recent research has identified what is called "cognitive surrender": abandoning one's own critical reasoning in favor of trusting an algorithm's output.
The experiments used Cognitive Reflection Tests, which are designed to separate people who make quick, intuitive decisions from those who slowly deliberate over the details and arrive at a different answer. In other words, the tasks are designed to be somewhat misleading on the surface. Participants (except for a control group) were given the option to consult an AI, which was programmed to give the correct solution only half of the time. Many people took that option; some double-checked the algorithm's advice, while others blithely accepted whatever it told them.
Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time.
Significantly, those who trusted artificial intelligence with their answers were more confident in their decisions, regardless of whether those decisions were correct. And adding a time limit only increased the number who trusted the AI. Read more about this research at Ars Technica.