AI and 2023

So I think 2023 is going to be a very interesting year as far as AI is concerned. In one corner you have OpenAI with ChatGPT and a $10 billion investment from Microsoft. In the other corner you have Google with LaMDA. Which will prevail is going to be very interesting, although at the present time it looks as though Microsoft, if they don't mess it up, may be able to overtake Google in the search field (they are close to integrating their products with ChatGPT, and they also own a big chunk of OpenAI).

Google, on the other hand, may have a different problem in that LaMDA may be too advanced already – they may have created something that is difficult to monetise.

I’ve been thinking more about the emergence of sentience and consciousness.
There has been a lot of argument about whether, and how, a sentient being could be developed or evolved in a computational environment. I’ve touched on the concept of the Chinese Room in the past, but there are two things I find interesting.

That AI is based on structures similar to the human brain, and that it can learn (although as far as we know it can only do repetitive things well – we are not sure about the process of it evolving … yet!).

But think about the process of learning, or perhaps the process of growing up as a child. Becoming a human, in some ways we ourselves start in, and grow past, the state of the “Chinese Room”.

Think of a child and how it learns. It all starts with people and basic communication – words like NO, MUMMA, DADDA, etc. In the beginning the child does not understand, and in fact just answers in a manner that it thinks may be correct – it starts with noise and imitation, but in time (and with feedback) that understanding grows.

Do you remember when you became aware of yourself? Do you remember when you discovered what feelings were? Computational feedback loops are not uncommon. Could they evolve into structures similar to human emotion, or something like it?
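The child-learning idea above can be sketched as a toy feedback loop. This is just an illustration I’ve put together (the function and names are made up for this post, not from any real AI system): the learner starts by guessing at random, like a child babbling, and shifts its preferences toward whatever response earns positive feedback.

```python
import random

# Toy "imitation with feedback" loop. The learner has no initial
# understanding - just equal preference for every possible response.
# Positive feedback reinforces whichever response happened to work.
def learn_by_feedback(responses, correct, rounds=200, seed=0):
    rng = random.Random(seed)
    weights = {r: 1.0 for r in responses}   # start with no understanding
    for _ in range(rounds):
        # Guess a response, weighted by how well each has done so far
        pick = rng.choices(list(weights), weights=list(weights.values()))[0]
        if pick == correct:                 # the "caregiver's" feedback
            weights[pick] += 1.0            # reinforce what worked
    return max(weights, key=weights.get)

# After enough rounds, the reinforced response dominates the guessing.
print(learn_by_feedback(["mumma", "dadda", "no"], correct="mumma"))
```

It’s crude, of course – real learning (human or machine) is far richer than reinforcing one word – but it captures the point that behaviour can start as noise and end up looking like understanding, purely through feedback.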

 

Additional thoughts about AI

Vegetable, Animal or Mineral?

I’ve found an interesting article about AI by research psychologist Robert Epstein over at Aeon.co. It’s a good read and a breath of fresh air with regards to a lot of the noise being generated of late about AI running amok (yes, I’m guilty of jumping on that bandwagon!).

Mr Epstein’s basic premise is that the brain does not work like a computer – it’s different because it contains 86 billion neurons with 100 trillion interconnections. This big hunk of humanity changes with each unique experience, and we can’t just reduce human consciousness down to a big bag of self-learning algorithms. That, and the computational model that a lot of people rely on is flawed.

I sort of agree with him on a lot of this, but I am also still a little concerned about things like people with photographic memories, how studies in childhood development will influence our understanding of the creation of human consciousness, and I’m still worried about self-learning. But if you want a fresh perspective on this stuff, the article is well worth a read. Maybe we are not doomed after all!

Ridley Scott movies, AI and humanity!

Interesting things happen at the men’s shed!

So I’ll start with a spoiler alert. If you haven’t seen the Ridley Scott movies “Prometheus” or “Alien: Covenant” you may like to stop reading now!

One of the things that I liked about Prometheus was that Ridley Scott starts to look at the concept of “Bad robot” and by the time we get to “Alien: Covenant” we are talking about a megalomaniacal monster who, because he has access to so much knowledge and power compared to these lesser humans, makes the decision that we are not worthy – of existence!

This of course evolves into a plot that will have you sitting on the edge of your seat right to the moment you leave the theatre. Your brain will still be doing backflips many hours or days later. The technical ramifications of the plot twists are brilliant.

These movies got me thinking about AI and robotics and Isaac Asimov’s “Three Laws of Robotics“. I have always thought that these laws have influenced a lot of science fiction writing, in that the robot is usually a force for good. When I think of my own existence as an IT person, and someone rather fond of cables & chips… and software, the concept of an AI gone wrong upsets me. This is because we are human and we are all flawed on some level, but we also have hope. And the cynic in me asks, “So how the heck could we create an AI and not get it wrong?”

As a byproduct of watching these movies, I went searching for more information about humanity and AI, whereupon I came across an interesting interview between two of the biggest supporters / brains in the business of AI development… “Marvin Minsky & Ray Kurzweil“.

The late Mr Minsky is arguably the granddaddy of AI. He’s interesting – but I also feel he could be considered somewhat of an intellectual snob. I would not want him programming an AI.

But you might say – Robots? Artificial intelligence? That could never happen! Well, let’s just look at the facts, shall we? Take the common-knowledge game of Jeopardy: back in 2011, yep, a computer beat two of the best humans in the world at this game.

There is of course our diminished standing with regards to the game of Chess, and the even more complex game of Go. The physical presence of an AI may be expressed in a robotic format such as this…

We might also take into consideration related developments in robotics (not just the type that walk), including improvements in things like brain surgery. I shudder to think what the military are up to, but this is something we need to think about.

An additional issue is that the AI will presumably design the next generation of AI! It’s a very deep rabbit hole.

If someone ever does get around to creating an AI, we would need a management and review process, in addition to programmers who can create something with the wisdom and compassion of Buddha and the patience of a saint.

We need to think about this, and talk about this, a lot. Not to mention act carefully!