If you prefer you can simply follow the RSS feed.
Looks like my IPv6 entries were removed from the public list. So I cleaned up all of my stamps to include port numbers and requested that the IPv6 entries be reinstated. Hopefully this will result in more traffic. To celebrate, I changed the web site to a dark theme.
This morning while I slept OVH bombed their own network and the resolvers were down between about 3:30am and 4:30am. According to their statement, everything is working again.
Looks like my VPS provider's hardware had a bad barf while I slept last night. When I woke up this morning I found an email (thanks Mark) letting me know both servers were down. I opened a ticket with ULayer and they were back up by about 10:30am this morning. Apologies for the outage folks!
Some time yesterday both servers started having a problem where some queries became excessively slow and some were timing out completely. I opened a support ticket with my VPS provider to see if they could spot any problem upstream. My concern was that OVH may be dropping traffic that it sees as an amplification attack. During the troubleshooting though I saw that AppArmor was choking Unbound for some reason. I was able to get both servers going again by setting them to listen on 127.0.0.55 instead of 127.0.0.1, and things seemed normal. The bad news is that I went to bed and about an hour later they were barfing again.
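For anyone wanting to try the same workaround, the loopback change above is a one-line edit in the Unbound server config. This is a minimal sketch; the drop-in file path is an assumption, and 127.0.0.55 is just the alternate loopback address mentioned above:

```
# /etc/unbound/unbound.conf.d/listen.conf  (file path is an assumption)
server:
    # Bind to an alternate loopback address instead of 127.0.0.1
    # to sidestep the AppArmor interference described above.
    interface: 127.0.0.55
    port: 53
```

After editing, restart Unbound (e.g. `systemctl restart unbound`) and point the upstream forwarder at 127.0.0.55 instead of 127.0.0.1.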
Today I was able to upgrade Unbound by using buster-backports and both servers seem to have remained stable for the last few hours. Apologies for the downtime, and I'll keep monitoring in case it happens again.
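In case it helps anyone on the same setup, upgrading via buster-backports looks roughly like this, assuming Debian 10 (buster) and that the backports repo isn't already enabled:

```shell
# Add the buster-backports repository (skip if already present)
echo 'deb http://deb.debian.org/debian buster-backports main' \
    | sudo tee /etc/apt/sources.list.d/backports.list

# Refresh package lists and install the newer Unbound from backports
sudo apt update
sudo apt install -t buster-backports unbound
```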
Sorry folks, server #2 went down this morning around 1am and had to be moved to a different physical host. The IPv4 address stays the same and is up right now. The IPv6 address has changed and I have to make a request [on ms-github] to have the stamp updated in the public resolver list.
I heard some crappy stuff about PHP yesterday, including some systemd dependence. I already get annoyed with minor PHP issues, so I overreacted and migrated the web site from PHP to SSI.
DNSCrypt services crashed on Server #2 around 11am and were down for about half an hour. DoH services were unaffected. Sorry for the downtime folks!
I noticed recently that when using the DoH services I would occasionally see failed queries, but that it was not happening when I used only the DNSCrypt services. Tonight I switched Server #2 from RouteDNS back to the m13253 DoH server with Nginx providing the "s" part of https. So far it seems to be performing better, but it is still too early to tell whether or not the problem is fixed. I'll be monitoring and testing over the next few days, and if it has indeed solved the problem then I'll switch Server #1 back as well.
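The Nginx-in-front arrangement above can be sketched as a plain TLS-terminating reverse proxy. This is a hypothetical config, not my actual one: the hostname and certificate paths are placeholders, and port 8053 is an assumption for where the m13253 doh-server listens on localhost:

```
# Hypothetical Nginx server block: terminate TLS and proxy /dns-query
# to a local DoH server (backend port 8053 is an assumption).
server {
    listen 443 ssl http2;
    server_name doh.example.net;              # placeholder hostname

    ssl_certificate     /etc/ssl/doh.crt;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/doh.key;

    location /dns-query {
        proxy_pass http://127.0.0.1:8053/dns-query;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The design point is simply that Nginx handles the TLS side of DoH while the DoH daemon only ever speaks plain HTTP on loopback.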
Curious eh? Well, have a look at the 2020 News.