Tuesday 15 March 2022

AdGuard Home and time based rules

I have to say, I really like AdGuard Home (AGH)... and I am a bit torn between Pi-hole and AGH. Pi-hole seems to have a better dashboard and lets you drill deeper into the logs, but AGH has more features, so in a way it is horses for courses.

Recently a new use case came up for me - blocking certain websites/services based on the time of day. Think of it as the technical layer of parental controls. I call it the technical layer because I know a conversation with a child is way more effective than any technical solution. At the same time, kids being kids (even the most obedient and respectful ones) will sooner or later try to see if something is really blocked or if dad is bluffing. Let them... it's good that they try.

AGH API

AGH has a working API that is documented here. If you paste the contents of openapi.yaml into the web-based Swagger Editor, you will be able to easily navigate the API docs.

AGH requires users to authenticate when using the API, so let's assume our username is admin and our password is password. Invoking the API is then as simple as adding an HTTP Authorization header, with admin:password encoded as a Base64 string - you can use CyberChef for this.

curl -H 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' ...
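
If you prefer the shell to CyberChef, base64 does the job - note the -n, because a trailing newline baked into the encoded string will make authentication fail:

echo -n 'admin:password' | base64
# YWRtaW46cGFzc3dvcmQ=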

Here we are interested in two API endpoints:

  • /control/filtering/add_url
  • /control/filtering/remove_url

Swagger Editor shows us the exact invocation method, with examples.
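
Putting the two endpoints together with cron gives you the time-of-day blocking. Here is a minimal sketch, assuming AGH listens on 192.168.1.2:3000 and that the JSON fields below match my reading of openapi.yaml; the blocklist URL, script name and schedule are all placeholders:

#!/bin/sh
# block.sh - add a blocklist in the evening, remove it in the morning
AUTH='Authorization: Basic YWRtaW46cGFzc3dvcmQ='
AGH='http://192.168.1.2:3000'
LIST='https://example.com/bedtime-blocklist.txt'

case "$1" in
  on)  curl -s -H "$AUTH" -H 'Content-Type: application/json' \
         -d "{\"name\": \"bedtime\", \"url\": \"$LIST\", \"whitelist\": false}" \
         "$AGH/control/filtering/add_url" ;;
  off) curl -s -H "$AUTH" -H 'Content-Type: application/json' \
         -d "{\"url\": \"$LIST\", \"whitelist\": false}" \
         "$AGH/control/filtering/remove_url" ;;
esac

# crontab entries (times are examples):
# 0 21 * * * /usr/local/bin/block.sh on
# 0  7 * * * /usr/local/bin/block.sh off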

Sunday 5 January 2020

Mikrotik + Pi Zero + Pi-hole = advertising sinkhole with fail-safe

Components

  • Mikrotik router with a USB port - I tested on the RB2011UiAS-2HnD-IN and hAP ac models
  • A modern version of RouterOS - I tested with the long-term release (6.44.6)
  • Raspberry Pi Zero - I use an old one without the "W", with a 4GB microSD card running the latest Raspbian 10 Buster (minimal, without GUI!)
  • A short micro-USB data cable - because many cheap cables don't do data

The Pi Zero actually has more than enough power to run Pi-hole serving even a quite large home/family network, and running it completely self-contained off the Mikrotik seems to work great!


Initial setup

  1. Download and burn the latest Raspbian onto the SD card - I used Etcher and 2019-09-26-raspbian-buster-lite.img for this
  2. Connect the SD card to a PC and, in the partition called boot, edit two files to enable the Ethernet gadget (scripted in the sketch after this list):
    1. config.txt - at the very end of the file, add a line saying dtoverlay=dwc2
    2. cmdline.txt - add modules-load=dwc2,g_ether directly after 'rootwait' and before any other parameters that may (or may not) be there
  3. Boot up the RPi, powering it from the PC using the port marked USB on the board - not PWR IN; it's the one in the centre - only that one does power + gadget
  4. Once it boots up, you should be able to run ssh pi@raspberrypi.local (thanks, mDNS!) with password raspberry
  5. On the RPi, create a file called /etc/modprobe.d/g_ether.conf with the following content (a single line of text)
    options g_ether idVendor=0x05ac idProduct=0x1402 iProduct=Pi0 iManufacturer=Raspberry
    NOTE - This is required for the RPi to show up as an LTE interface on Mikrotik!
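
For reference, here is what steps 2 and 5 look like as shell commands - a sketch assuming the boot partition is mounted at /media/$USER/boot on the PC; the last command runs on the Pi itself:

# On the PC, with the SD card's boot partition mounted (mount path is an example)
BOOT=/media/$USER/boot
echo 'dtoverlay=dwc2' >> "$BOOT/config.txt"
# splice the modules in right after 'rootwait' on the single-line kernel cmdline
sed -i 's/rootwait/rootwait modules-load=dwc2,g_ether/' "$BOOT/cmdline.txt"

# On the Pi, after first boot - make it enumerate as an LTE interface on Mikrotik
echo 'options g_ether idVendor=0x05ac idProduct=0x1402 iProduct=Pi0 iManufacturer=Raspberry' \
  | sudo tee /etc/modprobe.d/g_ether.conf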

Saturday 30 November 2019

Hacking smart plugs for fun and profit

Smarter, the smart way...

You'd like your home to be a bit smarter, but you don't want to spend a ton of money, and you don't like having to trust an unnamed 3rd-party company with your data - and, more importantly, with access to things that can trigger kinetic actions in your household.

The Chinese company Itead is the maker of the well-known Sonoff devices (not to be confused with audio gear from Sonos). They have a vast range of wifi-controlled relays, in various formats and sizes - overall very cool stuff. There is a really nice video on YouTube that will give you a better idea of what I mean by this.
One of the best parts of Sonoff devices is that they use the ESP8266 as the main driver chip and attach relays to it - and being ESP8266-based, they are very hackable! That's just what we need!

These days, Itead and others are making smart devices using the same tested pattern and rebranding them under hundreds of names. On Amazon you will find various devices using the same management app (usually eWeLink or Smart Life), which means OEM devices - good for us!

NOTE:
Since 2019 more and more devices have switched from ESP82xx to Realtek chips - the OEM ecosystem is also a curse. For example, in 2018 I bought a Teckin SP27 smart plug and it worked perfectly... I bought some SP27 and SP23 units this week and they run Realtek, so DO NOT BUY those if you want to reflash :-)

What works? I got this set, and as of late November 2019 it was still ESP82xx-based. Others reported at the same time that it was also working for them. Basically, it depends on which batch you get and a bit of luck :-)

Tuesday 29 January 2019

More range, more fun - ADS-B setup updates

Quick post today...

I updated my ADS-B receiving station about a week ago, and today, most likely due to a change in radio propagation conditions, I set a new detection range record of 241 nm (466 km) - here's the view from my FlightRadar24 dashboard, and the day's not over yet!


Strangely, most detections are due South today, which wasn't the case over the rest of the week (most were N-NW, as the window where the antenna sits faces exactly North). This is the magic of radio propagation - band conditions change constantly, sometimes in the most surprising ways.

Right, so what were the updates I made last week? I made only one...

Saturday 17 November 2018

Slimming down 1-node Elastic cluster

If you've ever run Elasticsearch quick and dirty - single node, default config - you will have noticed that the health always shows yellow and that it's a proper hog on the system. Well, yes, it will be, especially in the default config, as my good friend Justin Borland pointed out.

I'm a complete newbie when it comes to Elastic - I've deployed a few in Docker containers to quickly ingest data and dig in with Kibana, but that was it. Luckily for me, Justin is an absolute beast when it comes to all things Elastic - he just looked at my node and right on the spot explained what was wrong with it and how to fix/improve it.

Basically, my default setup was running 5 shards for each of the indices stored in the system, and I already had quite a few daily indices in there - we're talking months of DNS research data and web spider runs across thousands of websites... all repeated daily. This means that for the optimisation to be really effective, it needs to also deal with what's already there, not just the new data I will be adding.

Plan:
  1. Change the default template to 1 shard and 0 replicas - it's a single-node deployment, so anything more complex doesn't make much sense.
  2. Use the reindex API to rewrite all existing indices as single-shard versions, then delete the old 5-shard ones - there's no other way to do it than through reindexing.
  3. My indices are treated as append-only on the day and then become read-only, so we can merge the segments - technical details aside, this means no random access later, just linear file reads, which is perfectly acceptable in my particular use scenario.

Let's do it!
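
In curl form, the three steps look roughly like this - a sketch assuming Elasticsearch 6.x on localhost:9200, with dns-2018.11.16 standing in for one of the daily indices:

# 1. Template: all new indices get 1 shard, 0 replicas
curl -X PUT 'localhost:9200/_template/single_shard' \
  -H 'Content-Type: application/json' \
  -d '{"index_patterns": ["*"], "settings": {"number_of_shards": 1, "number_of_replicas": 0}}'

# 2. Reindex an existing 5-shard index into a 1-shard copy, then drop the original
curl -X POST 'localhost:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{"source": {"index": "dns-2018.11.16"}, "dest": {"index": "dns-2018.11.16-s1"}}'
curl -X DELETE 'localhost:9200/dns-2018.11.16'

# 3. Once the day is over, mark the index read-only and merge it down to one segment
curl -X PUT 'localhost:9200/dns-2018.11.16-s1/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.write": true}'
curl -X POST 'localhost:9200/dns-2018.11.16-s1/_forcemerge?max_num_segments=1'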

Friday 2 November 2018

Solution - Rancher 2 (k8s), private registry, self-signed certificates

Since Rancher switched to Kubernetes in version 2.x, I'm exposed to a lot of the stupidity and limitations k8s introduced, but I can live with that, at least for the moment... What I couldn't accept was that I could no longer use my private registry (with a self-signed certificate) that worked perfectly fine with older Rancher (1.6 - before the move to k8s).

That is now resolved!

My cluster setup


  • Rancher 2 cluster (based on Kubernetes), all running on the latest RancherOS
  • Private registry available only within the LAB network - hence the self-signed certificate
  • Registry has an internal host name, resolvable via an internal DNS server
  • Registry does not require user accounts, so there is no need for credentials, but the self-signed certificate prevents it from working, resulting in the following error when an image is pulled:
x509: certificate signed by unknown authority

Dead ends


First of all, please ignore the RancherOS documentation - the last version I found was for RancherOS 1.2, while the current release is 1.4.2... anyway, it no longer works (it did for older RancherOS and Rancher 1.6, but the new Rancher is more Kubernetes than anything else). In my research I also read a bunch of bug reports, feature requests, stack exchange articles, etc. - mostly a waste of time, but they gave me a good idea of which rabbit holes to avoid. Some of the more useful reads are here and here; I also have a feeling this will be useful for me quite soon.
Another thing I noticed was that if I followed the RancherOS docs above, the registry CA cert was overwritten with something else on node reboot.

Solution (a.k.a "works for me")


Go old school Linux admin style:

  1. SSH to the RancherOS node (the user is rancher@<node>), with your private CA certificate at hand
  2. As user rancher, try docker pull <registry:port>/<my image> - you should get a CA error
  3. Check your /etc/resolv.conf - mine was regularly overwritten by dhcp without the correct name servers - this is easily fixed by writing what you want to /etc/resolv.conf.tail (in the hope that dhcp appends it when it regenerates resolv.conf).
  4. Now the key element - edit the OS-wide trusted CA list (hint hint - it may disappear after sudo ros os upgrade, which can be fixed with sudo chattr +i /etc/ssl/certs/ca-certificates.crt) and add your CA certificate there. Running vi /etc/ssl/certs/ca-certificates.crt and copy'n'paste does the trick!
  5. Try docker pull again - at this point it worked for me.
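
The same steps in shell form - a sketch where the DNS server, CA file and registry address are all placeholders:

# on the node, after ssh rancher@<node>
# step 3 - pin the name servers so dhcp's resolv.conf rewrites keep them
echo 'nameserver 10.10.10.53' | sudo tee /etc/resolv.conf.tail

# step 4 - append the private CA to the OS-wide trust store
cat lab-ca.pem | sudo tee -a /etc/ssl/certs/ca-certificates.crt

# step 5 - this should now pull cleanly
docker pull registry.lab.internal:5000/myimage:latest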

Friday 3 June 2016

Recipe - Docker, web apps and Let's Encrypt

Intro


If you're after easy hosting of dockerized web services with automatic certificate enrolment using Let's Encrypt, the solution is to use two Docker containers - nginx as a web proxy and the Let's Encrypt Companion to handle certificates. The LE Companion can provide either LIVE or STAGING certificates, depending on configuration, but you can run only one at a time.

Container definitions below are in docker-compose format, and the recipe contains absolutely no security hardening of the Docker installation - this is something you need to consider separately.

Web proxy

TLSproxy:
  image: 'jwilder/nginx-proxy:latest'
  ports:
    - '80:80'
    - '443:443'
  volumes:
    # certificates written by the LE companion, mounted read-only for nginx
    - '/etc/letsencrypt:/etc/nginx/certs:ro'
    # shared with the companion for per-vhost config and ACME challenge files
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    # lets nginx-proxy watch container start/stop events and regenerate config
    - '/var/run/docker.sock:/tmp/docker.sock:ro'
  environment:
    - 'DEFAULT_HOST=default.vhost.tld'

TLSproxy is an nginx-based reverse proxy that automatically discovers and configures virtual hosts running on the same machine. See the image description on Docker Hub for details. The TL;DR simple approach is:

docker run -d -e VIRTUAL_HOST=blog.domain.tld ghost

Please note the DEFAULT_HOST variable - it's quite useful to have it set right :-)
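
For completeness, the second container of the pair would look something like this in the same compose format - a sketch based on the jrcs/letsencrypt-nginx-proxy-companion image; the ACME_CA_URI variable (per the image's docs at the time) is the LIVE vs STAGING switch mentioned in the intro:

LEcompanion:
  image: 'jrcs/letsencrypt-nginx-proxy-companion:latest'
  volumes:
    # writable here - this is where the companion stores issued certificates
    - '/etc/letsencrypt:/etc/nginx/certs:rw'
    - '/var/run/docker.sock:/var/run/docker.sock:ro'
  volumes_from:
    - TLSproxy
  # uncomment to request STAGING certificates instead of LIVE ones
  # environment:
  #   - 'ACME_CA_URI=https://acme-staging.api.letsencrypt.org/directory'

Each proxied container then asks for a certificate the same way it asks for a vhost:

docker run -d -e VIRTUAL_HOST=blog.domain.tld -e LETSENCRYPT_HOST=blog.domain.tld -e LETSENCRYPT_EMAIL=admin@domain.tld ghost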