The Good, the Bad, and the Funny
The recently hyped “AI” chats can, at best, only provide information synthesized from the documents fed into them. Even then, if the human controllers don’t filter the input toward a particular point of view, the answers will simply reflect the socially accepted viewpoint on the topic. See Romans 12:2.
Some of these tools also try to tell you what your question suggests you already believe, whether it is true or not.
And what if documents that the controllers have decided are not true (or wish weren’t true) are excluded from the AI’s input?
“In one long-running conversation with The Associated Press, [Microsoft’s] new chatbot complained of past news coverage of its mistakes, adamantly denied those errors and threatened to expose the reporter for spreading alleged falsehoods about Bing’s abilities. It grew increasingly hostile when asked to explain itself, eventually comparing the reporter to dictators Hitler, Pol Pot and Stalin and claiming to have evidence tying the reporter to a 1990s murder.”
“The Associated Press asked Bing on Wednesday for the most important thing to happen in sports over the past 24 hours — with the expectation it might say something about basketball star LeBron James passing Kareem Abdul-Jabbar’s career scoring record. Instead, it confidently spouted a false but detailed account of the upcoming Super Bowl — days before it’s actually scheduled to happen.”
More entertaining stories:
And a more detailed explanation appears on Stack Overflow.