Summing up AI 2023

The last 12 months have been amazing, if rather dramatic, with regard to AI. Things have improved a lot, and I'm sure we will see and hear more of this over 2024.

These are some of the things that I’ve found interesting…

We have of course had the big drama over at OpenAI, with Sam Altman being fired (or quitting?) and then the board being replaced and Sam getting his job back. Rolling Stone has an interesting write-up about this.

Everyone is wondering and predicting what this Q* (pronounced "Q star") product at OpenAI is. Some think it may be an AGI (Artificial General Intelligence), but very few people have had access to it so far, although there is a lot of speculation.

We still don't know what's happened with Google's LaMDA, and Blake Lemoine is still, I think, the canary in the coal mine with regard to this technology.

The issue of building your own "bad version of ChatGPT" is a very real possibility. We should beware of bad robots! And I mean bad robots – you could theoretically hook your own bent version of ChatGPT up to a mechanical device and let it loose (who knows what the military are up to with this idea).

In addition to this there is the issue of copyright, and the fact that everyone is ignoring the importance of related links and knowledge that back up the statements made by AI (not to mention the issue of AI hallucinations). Although it is possible for these platforms to supply and reference sources, most of the commercial products don't include this functionality. This, I think, is going to pan out in interesting ways. Already a number of authors are attempting to sue OpenAI.

But I think the most interesting thing you can do is build your own ChatGPT and train it on your own data. I've set up something on an old machine I run myself and fed it a number of my old blog articles and various other bits and bobs to play with. The results were solid and interesting (with references!).

But two things come to mind with regard to this. You need CPU and RAM, and ideally a few GPUs, to run this sort of software at a reasonable speed – in short, a grunty machine or an expensive virtual machine (I did manage to get it running on a machine with 4 cores and 8 GB of RAM, but it was very slow). And then there's the scenario that comes to mind next.
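To give a feel for the "answers with references" part, here is a toy sketch of the idea: score a question against a handful of local documents and return the best match along with its source file. The documents and the keyword-overlap scoring are placeholders I've made up for illustration – the real tools (privateGPT and friends) embed document chunks with a local model and query a vector store, which is far more capable than this.

```python
import re
from collections import Counter

# Hypothetical local "corpus" - in a real setup these would be files on disk.
documents = {
    "blog-2021-backup.txt": "How I set up offsite backups with rsync and cron.",
    "blog-2022-ai.txt": "First impressions of large language models and ChatGPT.",
    "notes-hardware.txt": "Specs for the old 4-core, 8 GB RAM server in the garage.",
}

def tokenize(text):
    # Lowercase bag of words; crude, but enough to show the idea.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def answer_with_reference(question):
    q = tokenize(question)
    # Score each document by word overlap with the question.
    scores = {
        name: sum((tokenize(body) & q).values())
        for name, body in documents.items()
    }
    best = max(scores, key=scores.get)
    # Return the matching text AND the source it came from.
    return documents[best], best

text, source = answer_with_reference("what hardware does the old server have?")
print(f"{text}  [source: {source}]")
```

The point is the return value: every answer carries the name of the document it came from, which is exactly what the big commercial chatbots mostly won't give you.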

If you have your own company and fast access to your own data (files, emails, databases, financial data, etc.), and can hook it up to your machine, you could probably gain all sorts of interesting insights. What was the most profitable project? How many emails were sent? What were the time frames for this project? These and a whole lot more questions could be asked of your data. The stinger comes, though, when you get around to the speed of the computer running this and the connectivity of your expensive AI brain to the content.

If you only have 100 megabit to all that data sitting in the cloud, it's going to slow things down. If you have invested in local hardware (with, say, 10 gigabit or more connectivity to your data), you're going to get results much, much more quickly. I see an argument for employing your own sysadmin percolating!
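A quick back-of-envelope calculation shows why the link speed matters so much. The 50 GB corpus size and the 80% link-efficiency fudge factor are assumptions of mine, purely for illustration:

```python
# How long to pull a document corpus down to the local AI box?
corpus_gb = 50                          # assumed corpus size
corpus_bits = corpus_gb * 8 * 10**9    # decimal gigabytes -> bits

def transfer_seconds(link_mbps, efficiency=0.8):
    # efficiency roughly accounts for protocol overhead on the wire
    return corpus_bits / (link_mbps * 10**6 * efficiency)

for label, mbps in [("100 Mbit cloud link", 100), ("10 Gbit local LAN", 10_000)]:
    print(f"{label}: {transfer_seconds(mbps) / 3600:.2f} hours")
```

On these assumptions the 100 megabit link takes well over an hour just to move the data, while the 10 gigabit local network does it in under a minute – a straight 100x difference, before the AI has done any thinking at all.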

In short, 2024 is going to be just as crazy as 2023. It's sort of amazing to be alive and witnessing all this. Thanks for reading, and stay safe over the holiday season!

Steve

 

Related links quoted! _____________________________

Blake Lemoine
https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340

WTF Is Happening at OpenAI?
https://www.rollingstone.com/culture/culture-news/sam-altman-fired-open-ai-timeline-1234889031/

This new AI is powerful and uncensored… Let’s run it
https://www.youtube.com/watch?v=GyllRd2E6fg

Authors sue OpenAI over ChatGPT copyright: could they win?
https://www.businessthink.unsw.edu.au/articles/authors-sue-openai-chatgpt-copyright

Let’s build GPT: from scratch, in code, spelled out.
https://www.youtube.com/watch?v=kCc8FmEb1nY

 

AI for work and home

As AI is starting to become something many of us are using, it's interesting to think about the possibilities of getting an AI to look at our own data. I've set up privateGPT on an older computer and given it data to "ingest"!

It does take some time, and it needs a decent amount of RAM and CPU grunt, but it works surprisingly well even on under-powered machines. It also gives you links to the source documents you had it ingest (unlike some products I might mention… cough!).

There is a part of me that thinks this sort of product could be very useful as a source of insight for a private company (or even an individual). Think about the possibilities of giving it access to all emails sent. What could it learn? All the files on the server – who created the most files? Who was involved in which projects, and what were the skill sets of the people involved (past and present)?
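Even without any AI, some of those questions are just aggregation over metadata. A toy sketch of the "who created the most files, and who worked on what?" idea, using hypothetical records I've invented rather than a real server scan:

```python
from collections import Counter

# Hypothetical per-file metadata, standing in for a real file-server scan.
files = [
    {"owner": "alice", "project": "website-rebuild"},
    {"owner": "bob", "project": "website-rebuild"},
    {"owner": "alice", "project": "payroll-migration"},
    {"owner": "alice", "project": "website-rebuild"},
]

# Who created the most files?
by_owner = Counter(f["owner"] for f in files)
top_creator, count = by_owner.most_common(1)[0]
print(f"Most files created by {top_creator} ({count})")

# Who was involved in a given project?
website_team = sorted({f["owner"] for f in files if f["project"] == "website-rebuild"})
print("Worked on website-rebuild:", ", ".join(website_team))
```

The interesting bit an AI layer adds on top of counting like this is the natural-language front end – you ask the question in plain English and it works out which aggregation to run.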

Let's also think about the possibility of an AI having access to production data and financials! What could be gleaned from all that information? It's been said that we will soon have personal versions of AI that can run on phones. I'm sort of looking forward to this… but having recently rewatched the most excellent Ridley Scott movie Alien: Covenant, I'm recommending that we proceed with caution!

 

AI secret sources!

A couple of the publicly accessible AIs that I've been tinkering with will not quote sources and will not tell you much, if anything, about what they have been trained on. But there is now a very interesting development that the Guardian have reported on: Sarah Silverman is currently suing OpenAI and Meta, claiming that AI training infringed copyright.

https://www.theguardian.com/technology/2023/jul/10/sarah-silverman-sues-openai-meta-copyright-infringement

I'm going to be holding on to my popcorn real tight as this works its way through the courts, and the bigwigs in Silicon Valley work out what to do and how to do it. I told you 2023 was going to be a very interesting year for AI.

Trust and AI

So there has been lots of discussion of late about slowing down the progress of AIs and concern about getting things right. For me, the obvious approach to this is to ask for references, reasoning and sources from the AI. This is the response I got from "https://chat.openai.com/" when I attempted to address this issue.

Steve
“Ok so in the past on the odd occasion I have found information you have supplied to be incorrect – is it possible to get you to quote your sources?”

chat.openai.com
“As an AI language model, I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more. However, I do not keep track of specific sources for each piece of information that I provide.

If you ever have any doubts about the accuracy of the information I provide, I would encourage you to conduct further research to verify it or consult with a human expert in the relevant field. Additionally, if you have a specific question or concern about a piece of information, please let me know and I will do my best to address it or provide additional context”.

I think this is the gist of the discussion. Until AI can back up its statements with valid resources, facts, data, online references and so on, it is difficult to verify what the machine is creating as an answer.

This is also interesting and worth pondering…

“I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more.”

For me the next question is: which books, articles and online resources? To be honest, I'm not a god-level programmer by any stretch of the imagination, but I think it shouldn't be that difficult to build that sort of functionality into the machine.

We need insight into this process, and we need an AI that is self-aware to the point of being able to argue for and back up its statements. But perhaps the AI (or its makers) would prefer privacy? I wonder about this!