Because Perl modules built against the old binary refused to load ("ListUtil.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xeb00080)" whenever perl or cpan was run), all Perl modules had to be reinstalled. In addition, there was also a boot problem.
Thus the server was completely or partially inaccessible from about 2022-02-11 17:50 EEST to 2022-02-12 03:00 EEST. Over the next two days more problems with public and private services were tackled, and some problems with hosted applications and sites were fixed as well.
Both Slackware 15.0 and MySQL 8.0 were much-needed upgrades and I'm happy to be running both now.
]]>The new drive is 4 times the size of the previous one, so more mirrors and/or other content may be added eventually. The GNU and Slackware mirrors are available again and in sync, so happy downloading!
]]>Therefore, since about 12:00 UTC today the hosted sites no longer support TLS v1.0 and v1.1 clients. I also disabled weaker cipher modes like CBC. As SSL Labs' report shows, this effectively cuts off the following browsers:
If you are still using any of these for anything other than testing in isolated environments (like me), then too bad for you. It is really time to upgrade!
This is a necessary step for improving the security of the web sites, and something that perhaps should have been done earlier. The changes follow Mozilla's Server Side TLS recommendations. Enabling TLS v1.3 is not possible for now, because it requires OpenSSL 1.1.1 or later, which should become available with Slackware 15.0, hopefully later this year.
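For reference, assuming an Apache httpd front end (the web server in use is not named here), Mozilla's intermediate profile for this OpenSSL generation boils down to something like the following; the cipher list keeps only forward-secret AEAD suites, which is what drops CBC:

```apache
# Sketch per Mozilla's "intermediate" profile for OpenSSL 1.0.2 (no TLS 1.3 yet)
SSLProtocol         all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite      ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
SSLHonorCipherOrder off
```

SSLHonorCipherOrder is off in that profile because every remaining suite is considered strong, so the client may pick.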
Happy surfing and stay safe!
Update 2021-01-26: Apparently, the NSA urged the same just three days ago :-)
]]>Unfortunately, due to the change of service I also had to change the IP address (it is now 46.10.161.161), which caused some unexpected unavailability, though for no more than 5-10 minutes as the DNS records updated quickly.
Cheers!
]]>Today, between 18:40 and 20:00 EEST, the server was not accessible because I was finally able to move the system to the new machine (a Dell PowerEdge R340). The operation took so long because I wanted to copy the bootable SSD to the new drive that came with the server; I was not able to install the SSD itself, as a special bracket is required to fit 2.5'' SSDs into Dell's 14th-generation 3.5'' carriers.
After I moved all the drives I got into cabling, which took some more time even though my new rack is well organized. I first chose patch cables of the wrong length, and then had to reorganize some existing cables so I could fit the new ones around them.
The first boot was smooth with the default kernel, as expected, but I had forgotten to comment out udev's persistent interface rules for the network adapters of the previous machine, so the server came up without network. After fixing that I rebooted once more to verify that everything would come up properly.
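In case anyone faces the same after a hardware move: on older Slackware releases udev records the generated NIC names in /etc/udev/rules.d/70-persistent-net.rules (path assumed from the 14.x defaults). Each line pins a NAME to a MAC address, so entries for the old machine's adapters leave the new ones unconfigured. Commenting the stale lines out is enough; udev appends fresh entries for the new hardware on the next boot. A sketch, demonstrated on a scratch file so it is safe to run anywhere:

```shell
# Scratch copy standing in for /etc/udev/rules.d/70-persistent-net.rules:
rules=70-persistent-net.rules.demo
echo 'SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"' > "$rules"
sed -i 's/^SUBSYSTEM/#&/' "$rules"   # comment out the stale rule
cat "$rules"
```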
I had to tune several other things after the server was up, but nothing that would disrupt its normal work. The server is now ready to serve its purpose.
Cheers!
]]>Today between 10:40 EEST and 11:40 EEST the server was unavailable while I moved my equipment to a new server rack. I could have done this faster, and in the future I plan for less downtime, but the important thing is that all my equipment except the server itself is now in the rack. I'm waiting for the new server to arrive in about 10 days, so that I can move the current tower server into the rack as well.
Earlier this year I noticed that the server was shutting down even though the UPS still had charge to support it. So, three weeks ago I changed the batteries, but as everything is connected to this UPS, the expected runtime is currently about 40 minutes at best. I have had some trouble with the power recently, and in one case there was no line power for about 2 hours (due to an accident). I'm now considering increasing the battery runtime of my systems, in other words buying a new and more powerful UPS. I'll need more than one to sustain my two servers and network devices for at least 2 hours, which is the kind of power interruption I can expect from time to time.
Cheers until I have better news!
]]>After I added a new 8 GB DDR3 module the server started normally. I thought everything was OK and left it, but then I tried to connect and wasn't able to. So I went back to check the server, and the console was full of EXT4 error messages. I had to stop the machine again and check the devices in the BIOS. One of the hard drives was missing, so I immediately checked the connectors and solved the problem.
I should have done this upgrade long ago, but until recently I thought the memory was enough for all the running services. Anyway, the server has enough memory now.
]]>The expected benefit of the replacement is better network performance for the increasing external and internal traffic.
]]>The dd copy took about 5 hours, which is why the server was offline between 2017-12-08 22:00 EET and 2017-12-09 05:00 EET. The disk started failing in the beginning of September, but recently the number of reallocated sectors became extremely high and I started detecting bad sectors in some system files. The read performance had also dropped, falling to 5 MB/s (!) during the copy, which explains the aforementioned slow copy of just 64 GB between the old and the new SSD. The disk failed after only about 24 000 power-on hours (i.e. about 2 years and 9 months), which is rather strange, but maybe this is the normal life span of consumer SSDs?
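For the record, a disk-to-disk dd of this sort looks roughly like the sketch below (device names are hypothetical; the exact invocation used is not stated above):

```shell
# The real clone would target whole devices, e.g.:
#   dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress
# conv=noerror keeps going past unreadable sectors and conv=sync zero-pads
# them, so offsets on the copy stay aligned with the failing source.
# Demonstrated here on image files instead of real disks:
dd if=/dev/urandom of=old.img bs=64K count=4 status=none   # fake 256 KiB "disk"
dd if=old.img of=new.img bs=64K conv=noerror,sync status=none
cmp old.img new.img && echo "copy verified"
```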
Anyway, the drive is now replaced with a brand new ADATA SU800 128 GB, which unfortunately is not yet in the smartctl database (see ticket 954).
The server is back online and fully operational.
Yesterday evening the database server was also upgraded to MySQL 5.7.13, so now it only remains to upgrade the kernel to an LTS version from the 4.4 series.
]]>