ChatGPT has a neat little trick to convince people it's smart: a kind of style-over-substance approach.
Researchers from Purdue University analyzed ChatGPT's replies to 517 questions posted to Stack Overflow, an essential Q&A site for software developers and engineers. After assessing the bot's responses for "correctness, consistency, comprehensiveness, and conciseness," the researchers found that 52% of the answers were flat-out incorrect, and 77% committed the writing sin of being verbose. Even so, study participants preferred ChatGPT's responses to the human answers on Stack Overflow a startling 40% of the time, despite all the errors it throws up.
"When asked why they preferred ChatGPT answers even when they were incorrect, participants suggested the comprehensiveness and articulated language structures of the answers to be some reason for their preference," the researchers noted. The study also compared ChatGPT's answers with those written by humans on Stack Overflow across 2,000 randomly sampled questions. OpenAI itself has warned that the bot can write "plausible-sounding but incorrect or nonsensical answers."
OpenAI didn't respond to Insider's request for comment on the research findings outside regular working hours.
Source: Business Insider