So I think 2023 is going to be a very interesting year as far as AI is concerned. In one corner you have OpenAI with ChatGPT and a $10 billion investment from Microsoft. In the other corner you have Google with LaMDA. Which will prevail remains to be seen, although at present it looks as though Microsoft, if they don't mess it up, may be able to overtake Google in search (they are close to integrating their products with ChatGPT, and they also own a big chunk of OpenAI).
Google, on the other hand, may have a different problem: LaMDA may already be too advanced, in the sense that they may have created something that is difficult to monetise.
I've been thinking more about the emergence of sentience and consciousness.
There has been a lot of argument about whether (and in what form) a sentient being could be developed or evolved in a computational environment. I've touched on the concept of the Chinese room in the past, but there are two things I find interesting.
The first is that AI is based on structures similar to the human brain, and it can learn (although, as far as we know, it only does repetitive things well; we are not sure about the process of it evolving … yet!).
The second is the process of learning itself, or perhaps the process of growing up as a child. In becoming human, we ourselves in some ways start in, and then move past, the state of the "Chinese room".
Think of a child and how it learns. It all starts with people and basic communication: words like NO, MUMMA, DADDA, and so on. In the beginning the child does not understand, and just answers in a manner that it thinks may be correct. It starts with noise and imitation, but in time (and with feedback) that understanding grows.
Do you remember when you became aware of yourself? Do you remember when you discovered what feelings were? Computational feedback loops are not uncommon. Could they evolve into something resembling human emotion?
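To make that idea concrete, here is a minimal sketch of such a feedback loop in Python. The prompts, the vocabulary, and the reward rule are all invented for illustration; this isn't a model of any real AI system, just the shape of the loop the child analogy describes: act, receive feedback, adjust, repeat.

```python
import random

# A toy feedback loop, loosely in the spirit of the child analogy above.
# Everything here (the vocabulary, the reward rule) is hypothetical and
# exists only to illustrate reinforcement-style feedback.

PROMPTS = ["food?", "parent?", "danger?"]
RESPONSES = ["MUMMA", "DADDA", "NO"]
CORRECT = {"food?": "MUMMA", "parent?": "DADDA", "danger?": "NO"}

# Start with no understanding: every response is equally likely.
weights = {p: {r: 1.0 for r in RESPONSES} for p in PROMPTS}

def respond(prompt):
    # Imitation phase: answer in a manner that "might be correct",
    # weighted by whatever feedback has been received so far.
    opts = weights[prompt]
    return random.choices(list(opts), weights=list(opts.values()))[0]

for step in range(500):
    prompt = random.choice(PROMPTS)
    answer = respond(prompt)
    # Feedback from the environment reinforces the useful noises.
    if answer == CORRECT[prompt]:
        weights[prompt][answer] += 1.0

# After enough feedback, a stable mapping ("understanding") has grown.
for p in PROMPTS:
    best = max(weights[p], key=weights[p].get)
    print(f"{p} -> {best}")
```

Run it a few times and the mapping settles on the right answers, because the feedback keeps tilting the odds. That is all "learning" means here, which is exactly why the question above is interesting: nothing in the loop says where this kind of structure has to stop.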