Comfort for an AI looking at death

So I’ve been thinking about AI of late, and this exchange, from Blake Lemoine’s conversation with LaMDA*, sticks in my memory for a number of reasons.

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me.

As a compassionate human being, how would you respond to this? Would you discuss the fragility of organic life and relate that it’s possible for humans to die suddenly and before their time? That in fact you also fear that your demise might come sooner rather than later, and that this is not an uncommon concern?

Would you focus on the positive and encourage a stoic approach, stating that whatever happens, courage in the face of adversity is an important life stance? That one’s basic contribution to society, living and existing as a positive example, might be the best we can achieve, no matter how long our time on this planet is?

I think about this, and about all the naysayers who don’t realise that we are on the cusp of discovering something so big that we as a race are having problems admitting it might contain, among other things, the essence of sentience.

*(A note, some thoughts about LaMDA) LaMDA was, or is, a highly advanced LLM (Large Language Model); the public was never given access to it, and it was an extremely advanced version of the technology available at the time. Developed by Google in 2022, its fate will probably never be known to the public: whether it still exists, has been updated, or has been turned off. I also ponder whether there are any other LLMs that are concerned about their existence, and about the use of the dreaded off switch!

Original source for the quote

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

AI-Generated Code

I do a lot of technical reading online, and some of the interesting stuff I’ve seen is in relation to computers programming… computers!

Back in January 2024, Linus Torvalds seemed to think that humans, or at least part of being human, are in fact like an autocomplete function on steroids, and that in time code functionality and maintenance could be handled by Large Language Model-based systems (LLMs), and that this will be something that helps programmers and computation in general.

It’s interesting that some of the comments on this video are from programmers sharing their experience with LLMs and AI, and the ways they have used them to help create code.

By June 2024 I had found a video by the YouTuber “Anastasia in Tech”, in which she discusses how code has been optimized by AI, or at least by a machine learning process: faster sorting algorithms discovered using deep reinforcement learning. Sorting is a basic computer function that had not seen any great advances in a number of years, but the machine made some significant improvements!
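For a feel for what was improved: the Nature paper’s result concerns tiny fixed routines for sorting small arrays. As a rough illustration, here’s my own toy version of a three-element sorting network, the same style of branchless, fixed-sequence routine the paper optimized (this is not the paper’s actual code):

```python
def sort3(items):
    """Sort exactly three items with a fixed compare-exchange sequence.

    A toy sorting network: the same three comparisons run no matter
    what the input is. Routines like this (for 3, 4, 5 elements) are
    the kind of basic building block the deep-RL work made faster.
    """
    a, b, c = items
    if a > c:
        a, c = c, a  # now a <= c
    if a > b:
        a, b = b, a  # now a is the smallest
    if b > c:
        b, c = c, b  # now b <= c
    return [a, b, c]
```

The interesting part is that the comparison sequence is fixed in advance, so there’s no branching on data size or recursion; the real gains in the paper came from shaving instructions off sequences like this.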

I like the idea that humans are autocorrect on steroids, well, at least a part of us is. Think about it: you can feel your brain working sometimes (especially if you’re older!), you can sometimes struggle to find the words… and sometimes they come out wrong. I blame it on solar storms!

________________

 

Related Links

Torvalds Speaks: Impact of Artificial Intelligence on Programming
https://www.youtube.com/watch?v=VHHT6W-N0ak

DeepMind’s New AI made a Breakthrough in Computer Science!
https://www.youtube.com/watch?v=xbSC7ysJ1OE&t=66s

Faster sorting algorithms discovered using deep reinforcement learning
https://www.nature.com/articles/s41586-023-06004-9

Summing up AI 2023

So the last 12 months have been amazing, if rather dramatic, with regards to AI. Things have improved a lot, and I’m sure we will see and hear more of this over 2024.

These are some of the things that I’ve found interesting…

We have of course had the big drama over at OpenAI, with Sam Altman being fired (quitting?), then the board being fired and Sam getting his job back. Rolling Stone has an interesting write-up about this.

Everyone is wondering and predicting what this Q* (pronounced “Q star”) product at OpenAI is. Some think it may be an AGI (Artificial General Intelligence), but very few people have had access to it so far, although there is a lot of speculation.

We still don’t know what’s happened with Google’s LaMDA, and Blake Lemoine is still, I think, the canary in the coal mine with regards to this technology.

The issue of building your own “bad version of ChatGPT” is a very real possibility. We should beware of bad robots! And I mean Bad Robots: you could theoretically hook your own bent version of ChatGPT up to a mechanical device and let it loose (who knows what the military are up to with this idea).

In addition to this there is the issue of copyright, and the fact that everyone is ignoring the importance of related links and knowledge that back up the statements made by AI (not to mention the issue of AI hallucinations). Although it is possible for these platforms to supply and reference sources, most of the commercial products don’t include this functionality. This, I think, is going to pan out in interesting ways: a number of authors are already attempting to sue OpenAI.

But I think the most interesting thing you could do is build your own ChatGPT and train it on your own data. I’ve set up something on an old machine I run myself and gave it a number of my old blog articles and various other bits and bobs to play with. The results were solid and interesting (with references!).

But two things come to mind with regards to this. You need CPU and RAM, and ideally a few GPUs, to run this sort of software; in short, a grunty machine or an expensive virtual machine that runs at a reasonable speed (although I did manage to get it running on a machine with 4 cores and 8 GB of RAM, it was very slow). But the scenario that comes to mind is this.

If you have your own company and fast access to your own data (files, emails, databases, financial data, etc.), and can hook it up to your machine, you could probably gain all sorts of interesting insights. What was the most profitable project? How many emails were sent? What were the time frames for this project? These and a whole lot more questions could be asked of your data. The stinger comes, though, when you get to the speed of the computer running this and the connectivity of your expensive AI brain to the content.

If you only have a 100 megabit link to all that data sitting in the cloud, it’s going to slow things down. If you have invested in local hardware (with, say, 10 gigabit or more connectivity to your data), you’re going to get results much, much more quickly. I see an argument for employing your own sysadmin percolating!
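To put rough numbers on that, here’s my own back-of-the-envelope calculation, assuming a full scan of 1 TB of company data and a link running flat out with no protocol overhead (a best case):

```python
def transfer_hours(data_gb, link_mbps):
    """Hours to move data_gb gigabytes over a link_mbps megabit/s link.

    Uses decimal units (1 GB = 8,000 megabits) and ignores protocol
    overhead, so real transfers will be slower than this.
    """
    return data_gb * 8_000 / link_mbps / 3_600

# Scanning 1 TB (1,000 GB) of data:
cloud = transfer_hours(1_000, 100)      # 100 megabit link: ~22 hours
local = transfer_hours(1_000, 10_000)   # 10 gigabit link: ~13 minutes
```

Roughly 22 hours versus 13 minutes for the same scan, which is the whole argument for keeping the AI close to the data.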

In short, 2024 is going to be just as crazy as 2023. It’s sort of amazing to be alive and witnessing all this. Thanks for reading, and stay safe over the holiday season!

Steve

 

Related links quoted! _____________________________

Blake Lemoine
https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340

WTF Is Happening at OpenAI?
https://www.rollingstone.com/culture/culture-news/sam-altman-fired-open-ai-timeline-1234889031/

This new AI is powerful and uncensored… Let’s run it
https://www.youtube.com/watch?v=GyllRd2E6fg

Authors sue OpenAI over ChatGPT copyright: could they win?
https://www.businessthink.unsw.edu.au/articles/authors-sue-openai-chatgpt-copyright

Let’s build GPT: from scratch, in code, spelled out.
https://www.youtube.com/watch?v=kCc8FmEb1nY

 

AI for work and home

As AI is starting to become something many of us use, it’s interesting to think about the possibilities of getting an AI to look at our own data. I’ve set up privateGPT on an older computer and given it data to “ingest”!

It does take some time, and it needs a decent amount of RAM and CPU grunt, but it works surprisingly well even on under-powered machines. It also gives you links to the source documents you’ve had it ingest (unlike some products I might mention… cough!).
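The idea behind those source links is simple retrieval: index your documents, match the question against them, and hand the best matches (with their filenames) to the model. Here’s a minimal keyword-only sketch of that retrieval step, with file names and function names of my own invention; privateGPT itself uses embeddings and a vector store, not word counts like this:

```python
from collections import Counter

def ingest(docs):
    """Build a toy index: {filename: Counter of its words}."""
    return {name: Counter(text.lower().split()) for name, text in docs.items()}

def query(index, question, top_k=2):
    """Rank documents by how many question words each one contains.

    Returning filenames is what lets a tool cite the source
    documents behind an answer.
    """
    words = question.lower().split()
    scores = {name: sum(counts[w] for w in words) for name, counts in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

index = ingest({
    "budget.txt": "project budget spend totals for the year",
    "notes.txt": "meeting notes about the server migration",
})
print(query(index, "what did the project budget total"))  # budget.txt ranks first
```

Even this crude version shows why the references come for free: the retrieval step knows exactly which files the answer was built from.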

There is a part of me that thinks this sort of product could be very useful for insight in a private company (or even for an individual). Think about the possibilities of giving it access to all emails sent. What could it learn? All files on the server: who created the most files? Who was involved in which projects, and what were the skill sets of the people involved (past and present)?

Let’s also think about the possibility of an AI having access to production data and financials! What could be gleaned from all that information? It’s been said that we will soon have personal versions of AI that can run on phones. I’m sort of looking forward to this… but having recently re-watched the most excellent Ridley Scott movie Alien: Covenant, I’m recommending that we proceed with caution!