Trust and AI

There has been lots of discussion of late about slowing down the progress of AIs and concern about getting things right. For me, the obvious approach would be to ask the AI for references, reasoning, and sources. This is the response I got from "https://chat.openai.com/" when I attempted to address this issue.

Steve
“Ok so in the past on the odd occasion I have found information you have supplied to be incorrect – is it possible to get you to quote your sources?”

chat.openai.com (bold text mine)
“As an AI language model, I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more. However, I do not keep track of specific sources for each piece of information that I provide.

If you ever have any doubts about the accuracy of the information I provide, I would encourage you to conduct further research to verify it or consult with a human expert in the relevant field. Additionally, if you have a specific question or concern about a piece of information, please let me know and I will do my best to address it or provide additional context”.

I think this is the gist of the discussion. Until AI can back up its statements with valid resources, facts, data, online references, etc., it remains difficult to verify what the machine is creating as an answer.

This is also interesting and worth pondering…

“I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more.”

For me the next question is: which books, articles, and online resources? To be honest, I'm not a god-level programmer by any stretch of the imagination, but I don't think it should be that difficult to build that sort of functionality into the machine.
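To give a rough idea of what "quoting your sources" could look like, here is a toy sketch of retrieval with attribution: the system looks up passages in a labelled corpus and returns each answer fragment together with the label of where it came from. The corpus entries and source labels below are entirely made up for illustration; real systems of this kind (so-called retrieval-augmented generation) are far more sophisticated, but the principle is the same.

```python
# Toy sketch: answer a question from a small in-memory corpus and
# report which source each retrieved passage came from.
# All sources and passages here are hypothetical examples.

def retrieve_with_sources(question, corpus, top_k=2):
    """Return up to top_k (source, passage) pairs, ranked by how many
    words the passage shares with the question."""
    q_words = set(question.lower().split())
    scored = []
    for source, passage in corpus:
        overlap = len(q_words & set(passage.lower().split()))
        scored.append((overlap, source, passage))
    scored.sort(reverse=True)  # highest word overlap first
    return [(src, text) for overlap, src, text in scored[:top_k] if overlap > 0]

corpus = [
    ("Smith 2020, ch. 3", "language models generate text from statistical patterns"),
    ("Jones 2021, p. 12", "verification requires citing the original source"),
    ("Example blog post", "cooking pasta requires boiling water"),
]

for source, passage in retrieve_with_sources("how do language models generate text", corpus):
    print(f"{source}: {passage}")
```

The point is not the crude word-matching, but the shape of the output: every claim arrives paired with a pointer back to where it was found, which is exactly what the chat above says it cannot do.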

We need insight into this process, and we need an AI that is self-aware to the point of being able to argue for and back up its statements. But perhaps the AI (or its makers) would prefer privacy? I wonder about this!