ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.
It doesn’t check what it generates for anything beyond grammatical and orthographic errors. It isn’t intelligent, and it has no knowledge beyond how to produce text. The text looks useful, but the model doesn’t know what it contains the way something intelligent would.
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.
It seems like it could check for that, though, which is what ChatGPT doesn’t do but what many of us assumed it would. I’m sure there are AI programs that can and do verify generated claims against only information we know to be true.
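For illustration, here is a minimal (and deliberately naive) sketch of what such a post-hoc check might look like. Everything here is hypothetical: the `APPROVED_TREATMENTS` list, the `extract_treatments` extraction step, and the `verify_plan` pass/fail rule are stand-ins, not any real system’s logic.

```python
# Hypothetical sketch: check a generated treatment plan against a
# curated list of known-good facts. Real grounding (e.g. retrieval-
# augmented generation with clinical guidelines) is far more involved.

APPROVED_TREATMENTS = {  # hypothetical curated knowledge base
    "cisplatin", "paclitaxel", "radiation therapy",
}

def extract_treatments(plan: str) -> list[str]:
    """Naive extraction: look for known treatment names in the text."""
    text = plan.lower()
    return [t for t in APPROVED_TREATMENTS if t in text]

def verify_plan(plan: str) -> bool:
    """Flag a plan unless at least one claim can be grounded.

    A real verifier would also need to validate dosages, sequencing,
    and contraindications against authoritative sources.
    """
    return len(extract_treatments(plan)) > 0

generated = "Begin with paclitaxel, then homeopathic moonlight therapy."
print(verify_plan(generated))  # True: the naive check is easily fooled,
                               # which is exactly why this is hard.
```

The toy example passes a plan containing nonsense, which is the point: verifying that every claim in free-form text is true is much harder than generating the text in the first place.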
People who understand the technology did not assume that, but yes, the general public has a lot of misconceptions about it.