Should we be afraid of the next Carrington event?

The last Carrington-class event happened in 1859. In a nutshell, it was a huge solar storm that dumped a massive amount of energy into the Earth's upper atmosphere. Long conductors on the ground, the telegraph lines of the day (and, in today's world, power lines), acted like giant antennas and absorbed a lot of that electrical energy.

Much doom and gloom has been written about what this might mean in today's highly electronic world. Replacing a power station transformer that has exploded is an expensive and time-consuming process; in some situations it could take years.

Carrington-class storms are thought to occur roughly every 150 years, so we are due for a big one soon! But apart from all the doom and gloom, there are some positives.

Humanity is starting to develop techniques to manage these storms. Lloyd's of London, the insurance market, published this paper back in 2013. It's an interesting read (the executive summary alone is food for thought!) and it raises the issues of money, insurance and the culpability of large power companies.

Power companies are aware of the need to harden and prepare their networks for these potentially dangerous situations, and we have some recent examples of responsible management.

This article from May 2024 makes interesting reading: in effect, a New Zealand grid operator (Transpower) working with a university to manage solar storms.

But many power companies and grid managers don't like to talk about this sort of thing, because it could involve criticism, cost and culpability. Not to mention the share price!

Another positive is that we have NOAA's Space Weather Prediction Center (SWPC), which uses various satellites and ground-based stations to monitor the sun and forecast its output.

Solar storms don't only affect us on the ground. Up in space things can get tricky: satellites often have to power down and manoeuvre themselves to avoid the worst of a storm, and that is tricky to get right. As recently as February 2022, Elon Musk's Starlink lost 38 newly launched satellites to a geomagnetic storm.

One also needs to consider the effect on GPS, with some flight, farming and radio systems being affected. This is a very sobering thought, and you might ask whether modern jets can fly and navigate without GPS (makes mental note to check the SWPC site next time I fly)!

So you might ask: how might small and medium-sized businesses (SMBs) prepare for such an event?

Well, the short answer is to back up! A Faraday cage for an off-site copy of that backup would be a very good investment, as would knowing how long it would take you to rebuild your server from scratch.

Also, if you depend on the cloud to host your data, do you know where that data is physically located? It might be a good idea to have at least one backup located on the other side of the planet, if not a data mirror or failover option, again on the other side of the planet.
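To make that concrete, here is a minimal Python sketch of the idea: archive the data you care about and push a copy to a second, geographically distant location. The directory paths and the remote destination are hypothetical placeholders (and it assumes rsync and SSH access to that far-away host), so treat it as a starting point rather than a finished backup system.

```python
# Minimal off-site backup sketch.
# SOURCE_DIR and REMOTE are hypothetical placeholders; adjust for your own setup.
import datetime
import shutil
import subprocess

SOURCE_DIR = "/srv/company-data"                 # the data you want to keep
REMOTE = "backup@offsite.example.com:/backups/"  # hypothetical host on the other side of the planet

def run_backup() -> None:
    # 1. Build a dated archive locally. This is also the copy you might
    #    keep on disconnected media inside a Faraday cage.
    stamp = datetime.date.today().isoformat()
    archive = shutil.make_archive(f"/var/backups/company-data-{stamp}", "gztar", SOURCE_DIR)

    # 2. Ship the archive to the distant location over SSH.
    subprocess.run(["rsync", "-av", archive, REMOTE], check=True)

if __name__ == "__main__":
    run_backup()
```

The point is less the particular tools and more the habit: a regular, automated copy that ends up somewhere a single regional disaster (or storm) cannot reach.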

The internet was famously designed to keep working even if large parts of it fail (nuclear war included), so some connectivity may still be available, but don't count on it.

In a worst-case scenario you would probably lose some data, but not all of it, and you would be able to re-establish systems once the storm had cleared.

Another option, although expensive, is traditional insurance, but that may be a very deep and litigious rabbit hole!

It's at moments like this that I like to quote my favourite character from The Incredibles, Edna Mode: “Luck favours the prepared, darling!” Also, if you're running something that's very important… it may be a good idea to keep an eye on the SWPC site!
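If you'd rather not reload the site by hand, SWPC also publishes machine-readable data feeds. Here's a tiny Python sketch that pulls recent alerts; the exact endpoint and field names are my assumption about one of those feeds, so check the SWPC site for the products that suit you.

```python
# Tiny "keep an eye on space weather" sketch.
# The endpoint and field names are assumptions about SWPC's public JSON feeds.
import json
import urllib.request

ALERTS_URL = "https://services.swpc.noaa.gov/products/alerts.json"  # assumed feed

def latest_alerts(limit: int = 5) -> None:
    with urllib.request.urlopen(ALERTS_URL, timeout=10) as response:
        alerts = json.load(response)
    for alert in alerts[:limit]:
        # Fall back gracefully if a field is missing or named differently.
        issued = alert.get("issue_datetime", "unknown time")
        message = alert.get("message", "")
        first_line = message.splitlines()[0] if message else "(no message)"
        print(f"{issued}: {first_line}")

if __name__ == "__main__":
    latest_alerts()
```

Run from cron every hour or so and piped into an email or chat notification, something like this could give you advance warning to take the important backups offline.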

Related links in full:

Lloyd's of London solar storm risk paper
https://assets.lloyds.com/assets/pdf-solar-storm-risk-to-the-north-american-electric-grid/1/pdf-Solar-Storm-Risk-to-the-North-American-Electric-Grid.pdf

Space Weather Prediction Center
https://www.swpc.noaa.gov/

Transpower Link
https://www.transpower.co.nz/news/transpower-restores-electricity-transmission-circuits-after-solar-storm-subsides

Dear George!

So I've been waiting for this sort of insane situation to happen for about 20 years now: a major piece of international IT infrastructure failing. I'm of course talking about the recent CrowdStrike failure.

With regard to cyber security, large companies like to pay someone else to take care of the problem for them. It means they can avoid accountability and pass the buck to someone else. Strange how the CrowdStrike stock price is tanking!

https://www.google.com/search?q=crowdstrike+stock+price

The CrowdStrike issue was, and is, an example of humans loving the fact that they can pass the buck. It is also about environments that lack diversity, which creates or leaves open a single point of failure. Many, many machines will have to be physically restarted and modified to sort this problem out, and the answer to all this is something IT tends to shy away from: an environment with variation can be a lot more robust.

What do I mean by this? The answer is a variety of operating systems (i.e. macOS, Linux, BSD, even Android or ChromeOS) at both the server and user level. Not to mention a variety of routers and switches, all carefully constructed to operate together. It's an interesting thought experiment that few people want to consider, because we are all about the bottom dollar! We want things to be as easy as possible!

The CrowdStrike insanity is a huge own goal, an embarrassing hiccup for a number of companies and even for your humble author (I had to go and find cash to try to pay for my weekend vino! By the time I came back the store had closed, because even their cash drawers could no longer function)!

It will be interesting to see if we learn anything from this large hiccup, which took out about 8.5 million machines, many of which were part of important infrastructure, including airlines, banks and hospitals.

A few interesting observations. George Kurtz, the CEO of CrowdStrike, has been through something similar before: in 2010, while he was working at McAfee, a faulty update caused a similar global IT meltdown. The CTO of McAfee at the time? George Kurtz! Who would have thought!

In a number of companies I have worked for, we had a rule: never, ever roll out a major update on a Friday! Unless the client is willing to pay for out-of-hours weekend support. I guess George is still working on that lesson.
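For what it's worth, that rule is easy enough to automate. Here is a tongue-in-cheek Python sketch of a guard you could put in front of a release step; deploy() is a hypothetical placeholder for whatever your real rollout process does.

```python
# "Never roll out a major update on a Friday" as a pre-deploy guard.
# deploy() is a hypothetical placeholder for your real release step.
import datetime
import sys

def deploy() -> None:
    print("Rolling out the update...")

def main() -> None:
    today = datetime.date.today()
    if today.weekday() >= 4:  # 4 = Friday, 5 and 6 = the weekend
        sys.exit("Refusing to deploy this close to the weekend. Try again on Monday.")
    deploy()

if __name__ == "__main__":
    main()
```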

Now I want you to put your thinking caps on and go read about what NotPetya did. Think about what you might do if we ever have total cyber warfare!