Dear George!

So I’ve been waiting for this sort of insane situation to happen for about 20 years now. That is, a major piece of international IT infrastructure failing. I’m of course talking about the recent CrowdStrike failure.

With regard to cyber security, large companies like to pay someone else to take care of the problem for them. It means they can avoid accountability and pass the buck to someone else. Strange how the CrowdStrike stock price is tanking!

https://www.google.com/search?q=crowdstrike+stock+price

The CrowdStrike issue was, and is, an example of humans loving the fact that they can pass the buck. It’s also about environments that lack diversity, which creates or leaves open a single point of failure. Many, many machines are going to have to be physically restarted and modified to sort this problem, and the answer to all this is something IT tends to shy away from: an environment of variation can be a lot more robust.

What do I mean by this? The answer is a variety of operating systems (i.e. Mac, Linux, BSD, even Android or ChromeOS) at both the server and user level. Not to mention a variety of routers and switches, all carefully constructed to operate together. It’s an interesting thought experiment that few people want to consider because we are all about the bottom dollar! We want things to be as easy as possible!

The CrowdStrike insanity is a huge own goal, an embarrassing hiccup for a number of companies and even for your humble author (I had to go and find cash to try to pay for my weekend vino! By the time I came back the store had closed, because even their cash drawers could no longer function)!

It will be interesting to see if we learn anything from this large hiccup that took out about 8.5 million machines, many of which were part of important infrastructure, including airlines, banks and hospitals.

A few interesting observations. George Kurtz, the CEO of CrowdStrike, has been through something similar before. In 2010, McAfee had a similar problem: a faulty update that caused a global IT meltdown. The CTO at the time was one George Kurtz! Who would have thought!

In a number of companies I have worked for we had a rule: never, ever roll out a major update on a Friday! Unless the client is willing to pay for out-of-hours weekend support. Guess George is still working on that lesson.

Now I want you to put your thinking caps on and go read about what NotPetya did. Think about what you might do if we ever have total cyber warfare!

Tech note for certbot!

Did some testing this morning on the new certs and realised that things were not working in Firefox, and at one point I think I saw an error in Chrome!
The problem was that Firefox needed both the www and non-www versions of the site name on the cert. Re-issuing the cert sorted this in no time!

This is how the process worked out…!

sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated) (Enter 'c' to cancel): www.gingercatsoftware.com gingercatsoftware.com
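
If you want to double-check the result afterwards, here’s a quick sanity check (a minimal sketch, assuming the new cert is already being served on port 443): pull the certificate with openssl and confirm the Subject Alternative Names list both hostnames.

echo | openssl s_client -connect gingercatsoftware.com:443 -servername gingercatsoftware.com 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# Expect to see: DNS:gingercatsoftware.com, DNS:www.gingercatsoftware.com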


Recent outage and snowflake servers!

This is a wombat, not a snowflake!

My server hasn’t been working too well over the last 24 hours due to it becoming a bit of a snowflake, that and the fact that the plumber always has leaky pipes! Not to mention that I was running a rather old version of Debian.

What’s a snowflake server, you may ask? It’s what all system admins should avoid! It’s a server that does all sorts of things (often rather well) and as such is a precious little snowflake! The problem is that such a server is not easy to manage, update or improve, due to lack of documentation, configuration issues and/or, as was my issue, software and hardware conflicts.

There are a number of ways to manage production machines and developer working environments. These include approaches such as blue-green deployments, configuration management with products like Puppet and Ansible, a VM approach with products like Vagrant, or a software container product like Docker.

What’s also interesting is that with good old-fashioned tools like passwordless, key-managed SSH access and shell scripting, you can control a lot of the process that the above products like to take credit for. Something like the sketch below, for instance.
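
This is only a minimal sketch (the hostname and config path are placeholders, and it assumes a Debian-style box with key-based SSH already set up), but it shows the idea: a short script that installs nginx, pushes a locally kept config, syntax-checks it and reloads.

#!/bin/sh
# provision.sh - hypothetical example of configuring a box over key-based SSH
set -e                              # stop on the first error
HOST=admin@myserver.example.com     # placeholder hostname

# install the basics on the remote box
ssh "$HOST" 'sudo apt-get update && sudo apt-get install -y nginx'

# push a locally kept config, syntax-check it, then reload
scp ./nginx.conf "$HOST":/tmp/nginx.conf
ssh "$HOST" 'sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf && sudo nginx -t && sudo service nginx reload'

Keep the script and the config in version control and you are a long way towards being able to rebuild the machine from scratch.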

I’m going to think about this snowflake problem quite a bit more in the coming weeks. I shall probably write more about how I, as someone with a “production server” and a number of other needs, keep all the ducks in a row. The end result is that I hope I can create a machine from scratch in a very short space of time. Or at least learn a few things.

Stay tuned!


Notes on nginx config


This is just a collection of nginx config notes. I’ll update it on occasion… So without further ado!
_________________

Getting upload problems
When using the old uploader you may get
“413 Error: Request Entity Too Large”

Modify the nginx.conf file: add or increase client_max_body_size in the http section:
http {
    client_max_body_size 32m;
    # (other settings will be here too)
}
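
After any change like this it’s worth syntax-checking and reloading; a minimal sketch, assuming a Debian-style setup with the service command:

sudo nginx -t              # syntax-check the config before touching the running server
sudo service nginx reload  # pick up the change without dropping connections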

Also look at the php.ini file, in the /etc/php5/fpm directory if you’re using php-fpm.

php.ini

Check and/or increase the following:
upload_max_filesize = 32M
post_max_size = 32M

Optionally increase:
max_execution_time = 300
max_input_time = 300
memory_limit = 128M
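
A quick way to sanity-check which values are in effect (a hedged aside: the CLI and FPM often read separate php.ini files, so this only confirms what the CLI sees):

php -i | grep -E 'upload_max_filesize|post_max_size'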

_________________

If the server is not generating php pages

add the following line to fastcgi_params within the /etc/nginx directory:

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;

restart both nginx and php5-fpm
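
For completeness, the restart commands (assuming the same Debian-style setup as above):

sudo service nginx restart
sudo service php5-fpm restart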

__________________________

The “server_names_hash_bucket_size” error

This usually shows up when a server name is too long (or there are too many of them) to fit nginx’s default hash bucket. To fix it, add this line

server_names_hash_bucket_size  64;

or

server_names_hash_bucket_size  128;

into the

/etc/nginx/nginx.conf file 

after the http declaration:

http {

    ##
    # Basic Settings
    ##

    server_names_hash_bucket_size  64;

    …
}

That should fix things!

______________________________

Feb 10 2017

Added this to

/etc/nginx/nginx.conf

to fix a file size upload issue that became apparent in a new WordPress install:

        # set client body size to 10M #

        client_max_body_size 10M;
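
A quick way to confirm the new limit actually took (a hedged sketch; the URL is just the site root, since nginx rejects an over-sized request body regardless of the endpoint): generate a file just over 10M and POST it.

dd if=/dev/zero of=/tmp/big.bin bs=1M count=11     # make an ~11 MB test file
curl -s -o /dev/null -w "%{http_code}\n" --data-binary @/tmp/big.bin https://gingercatsoftware.com/
# 413 means the limit is still being hit; any other status means nginx accepted the body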