Mr Epstein’s basic premise is that the brain does not work like a computer: it’s different because it contains 86 billion neurons with 100 trillion interconnections. This big hunk of humanity changes with each unique experience, and we can’t just reduce human consciousness down to a big bag of self-learning algorithms. On top of that, the computational model that a lot of people rely on is flawed.
I sort of agree with him on a lot of this, but I am also still a little concerned about things like people with photographic memories, how studies in childhood development will influence our understanding of the creation of human consciousness, and self-learning. But if you want a fresh perspective on this stuff, the article is well worth a read. Maybe we are not doomed after all!
So I’ll start with a spoiler alert: if you haven’t seen the Ridley Scott movies “Prometheus” or “Alien: Covenant”, you may like to stop reading now!
One of the things that I liked about “Prometheus” was that Ridley Scott starts to look at the concept of the “bad robot”, and by the time we get to “Alien: Covenant” we are talking about a megalomaniacal monster who, because he has access to so much knowledge and power compared to these lesser humans, decides that we are not worthy – of existence!
This of course evolves into a plot that will have you sitting on the edge of your seat right up to the moment you leave the theatre. Your brain will still be doing back flips hours or days later. The technical ramifications of the plot twists are brilliant.
These movies got me thinking about AI and robotics, and Isaac Asimov’s “Three Laws of Robotics“. I have always thought that these laws have influenced a lot of science fiction writing, in that the robot is usually a force for good. When I think of my own existence as an IT person – someone rather fond of cables, chips and software – the concept of an AI gone wrong upsets me. This is because we are human: we are all flawed on some level, but we also have hope. But the cynic in me asks, “So how the heck could we create an AI and not get it wrong?”
As a byproduct of watching these movies, I went searching for more information about humanity and AI, whereupon I came across this interesting interview between two of the biggest supporters and brains in the business of AI development: “Marvin Minsky & Ray Kurzweil“.
The late Mr Minsky is arguably the granddaddy of AI. He’s interesting – but I also feel he could be considered somewhat of an intellectual snob. I would not want him programming an AI.
But you might say – robots, artificial intelligence, that could never happen! Well, let’s just look at the facts, shall we? Back in 2011, IBM’s Watson beat two of the best human players in the world at the common-knowledge game of Jeopardy!
There is of course our diminished standing with regards to the game of chess, and the even more complex game of Go. The physical presence of an AI may be expressed in a robotic format such as this…
We might also take into consideration related developments in robotics (not just the type that walk), including improvements in things like brain surgery. I shudder to think what the military are up to, but this is something we need to think about.
An additional issue is that the AI will presumably design the next generation of AI! It’s a very deep rabbit hole.
If someone ever does get round to creating an AI, we would need a management and review process, in addition to programmers who can create something with the wisdom and compassion of Buddha and the patience of a saint.
We need to think about this, and talk about this, a lot. Not to mention act carefully!
So, I got a nice email from the Google AI? Maybe not… Anyway, it was an invite to submit the site to https://testmysite.thinkwithgoogle.com/ and so I did. I’m happy with some of the initial results, one of which was the mobile-friendly test. Have a look at this!