AI secret source!

A couple of the publicly accessible AIs that I’ve been tinkering with will not quote sources and will not tell you much, if anything, about what they have been trained on. But there is now a very interesting development that the Guardian has reported on: Sarah Silverman is currently suing OpenAI and Meta, claiming that AI training infringed her copyright.

I’m going to be holding on to my popcorn real tight as this works its way through the courts, and the bigwigs in Silicon Valley work out what to do and how to do it. I told you 2023 was going to be a very interesting year for AI.

Zuck doesn’t care about 14,000 people

Although I hate FB I do use it some of the time. So I’m in this Facebook group, and from what we can work out the owner has passed away.

All the admin accounts have been hacked. A number of people have attempted to appeal to the criminal a-hole who usurped the group, a number of us have written to the so-called FB support, and I even sent an email to Zuck himself.

We have been met by a huge wave of silence. Nothing, nada. And it really makes me mad: this is a community that has/had 14,000 people and probably ten or more years’ worth of data in it. It has been usurped and abused, and the only thing one can really do is abandon it.

Part of me thought it might be a good idea to hack back. But you know, is it worth it?

In life I have learnt that one must choose one’s battles with care – or, to put it another way, “Never wrestle with pigs: you both get covered in mud and the pig likes it.” For now I’ll put this on the back burner… but if I ever meet Mr Zuckerberg we are going to have an interesting and probably rather short conversation.


One ring – not a good idea!

There was a fairly famous post once about how sysadmins are lazy by nature and that automating things is a good habit to have – thus supporting the lazy label. But have we got too lazy, and is that laziness creating environments that are vulnerable?

The reason for this post is that over the last few years I’ve been reading about systems that are designed to “run the whole network” or “manage all users and applications”. The problem is, when these systems go wrong you have a huge clean-up job. Let’s look at a couple of examples.

SolarWinds Hack
This was what is known as a “supply chain breach”: the software was hacked at the source level (in other words, at the SolarWinds development space) and the compromised software was then distributed in good faith by the company. This created a situation where one piece of software, which gave some users “God” access to the network, all the machines and all the users, was compromised – affecting more than 30,000 public and private organisations! You can read about it in detail on this page.

One of the problems is this attitude of “single sign-on”: one place to do everything. It’s a lovely concept, isn’t it? It means that one machine can control all the services and all the users and all the groups and all the network and… well, you get the picture. It means that you can have fewer staff, fewer skill sets to train, and, you know, just be lazy.

But let’s look at the NotPetya attack. If you want to have a few nightmares, read up on what happened to the international company that had 150 domain controllers compromised, and the insane amount of money it took to clean this up. It’s one of the few times when slow internet probably saved a multinational company billions.

As someone who is interested in IT security and sysadmin, I keep coming back to the responsibility of an inquisitive person who knows the network, the machines and what they are all up to.

AI, automation, constant monitoring, automated alerts, etc. are all important and, on one level, mind-blowing. But we still need human intelligence and awareness. How much space is on that server? What is the network throughput on a day-to-day basis? Has there been a change? If so – go investigate it. NOW!
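Those day-to-day checks can be scripted without giving up the human in the loop. Here is a minimal sketch in Python using only the standard library – the threshold value and the mount point are my own illustrative choices, not anything prescribed:

```python
import shutil

# Hypothetical alert threshold -- tune this for your own environment.
DISK_ALERT_PERCENT = 90

def disk_usage_percent(path="/"):
    """Return the percentage of disk space used at the given mount point."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    pct = disk_usage_percent("/")
    print(f"Root filesystem is {pct:.1f}% full")
    if pct > DISK_ALERT_PERCENT:
        # The script raises the flag; the human goes and investigates.
        print("Alert: investigate now!")
```

The point of the sketch is the division of labour: the machine notices the change, the sysadmin asks why it happened.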

Trust and AI

So there has been a lot of discussion of late about slowing down the progress of AIs and concern about getting things right. For me the obvious approach might be to ask the AI for references, reasoning and sources. This is the response I got from “” when I attempted to address this issue.

“Ok so in the past on the odd occasion I have found information you have supplied to be incorrect – is it possible to get you to quote your sources?” (bold text mine)
“As an AI language model, I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more. However, I do not keep track of specific sources for each piece of information that I provide.

If you ever have any doubts about the accuracy of the information I provide, I would encourage you to conduct further research to verify it or consult with a human expert in the relevant field. Additionally, if you have a specific question or concern about a piece of information, please let me know and I will do my best to address it or provide additional context”.

I think this is the gist of the discussion. Until AI can back up its statements with valid sources, facts, data, online references, etc., it is difficult to verify what the machine is creating as an answer.

This is also interesting and worth pondering…

“I use a wide range of sources to generate responses to your questions, including books, articles, online resources, and more.”

For me the next question is: which books, articles and online resources? To be honest, I’m not a god-level programmer by any stretch of the imagination, but I don’t think it should be that difficult to build that sort of functionality into the machine.
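To make the idea concrete, here is a deliberately tiny sketch of what “answers with sources” could look like: every snippet of knowledge carries a tag saying where it came from, so any reply can cite its origin. The data and source labels here are entirely made up for illustration, and a real system would of course be vastly more sophisticated:

```python
# Toy knowledge base: each entry remembers its own provenance.
KNOWLEDGE = [
    {"text": "NotPetya spread via a compromised software update.",
     "source": "security-blog"},
    {"text": "SolarWinds was a supply chain breach.",
     "source": "news-article"},
]

def answer_with_sources(query):
    """Return snippets that share a word with the query, each with its source."""
    words = set(query.lower().split())
    hits = [k for k in KNOWLEDGE
            if words & set(k["text"].lower().split())]
    return [(k["text"], k["source"]) for k in hits]
```

The design choice is the point: if provenance travels with the data from the start, citing sources is trivial; if it is thrown away during training, it may be impossible to recover afterwards.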

We need insight into this process, and we need an AI that is self-aware to the point of being able to argue for and back up its statements. But perhaps the AI (or its makers) would prefer privacy? I wonder about this!