My Wireguard Setup

Disclaimer

Someone has been submitting my recent posts to online tech news aggregators, where they are criticized for not being cutting edge or paradigm shifting enough. If you’ve been led to believe that this post will awe and amaze you, complain to the person who submitted it, not me. This is just my personal blog where I write about stuff that I’m doing, mostly technology based. It will not change your life. That said…

Background

I’ve had a “home server” for close to ten years now. It’s a Linux-based desktop PC. It acts as a file server, media server, backup server and a place to try out different things. I guess it’s what is now popularly called a “home lab”. All that’s great when I’m at home on my home network. I can stream movies and music, get files, ssh into the server and do whatever I need to do.

But when I’m out and about, traveling, working (when we used to go out and do stuff like that), I’d also like to have that same access. That’s all simple enough. You go into your router settings, do some port forwarding to that box and then you can stream, ssh, ftp, vnc, whatever. I’ve certainly done just that often enough. But as I became more security conscious, this started to worry me more and more. Having all those ports open into my main machine made me nervous. Yeah, they are behind passwords, or hopefully keys. I locked down ssh pretty tightly, but still worried about it, and all those other services. When I was on Xfinity for home internet, their management app provided a security section which listed all the various attempts to access different ports on the network with their IPs and locations. It was shocking. It became something that was not just theoretical. People were (and are) actually trying to hack into my network. That’s when I shut everything down.

Enter Wireguard

I’d heard quite a bit about Wireguard and it sounded like what I needed. I came upon this tutorial which described exactly what I wanted to do and in pretty clear terms:

https://zach.bloomqu.ist/blog/2019/11/site-to-site-wireguard-vpn.html

This all went together really well. It took a bit of learning and messing things up and fixing them, but I eventually got it all working really nicely and doing exactly what I need. Here’s my current setup:

  • Main wireguard server hosted on an inexpensive VPS in the cloud.
    • ufw set up to block all traffic other than specific ports from specific wireguard clients.
    • rinetd to forward any needed ports to my home server. Currently, that’s just the port that my airsonic server is running on.
  • wireguard client running on my home server.
    • airsonic music streaming server running there.
  • wireguard clients running on a couple of laptops, my Android phone and tablet. Each client has its own private key and the public key of the server. The server has its own private key and the public keys of each client. (Skeleton configs are sketched below.)
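
To give a sense of the shape of those configs, here’s a stripped-down sketch. None of these keys or addresses are real; the 10.0.0.x subnet, the wg0 interface name and the 51820 port are just common placeholders, not necessarily what I use.

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server private key>

# one [Peer] block per client
[Peer]
PublicKey = <home server public key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on a client (home server, laptop, phone)
[Interface]
Address = 10.0.0.2/24
PrivateKey = <client private key>

[Peer]
PublicKey = <server public key>
Endpoint = <VPS public IP>:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25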

With this setup I can ssh into the VPS from anywhere in the world, provided I’m doing it from one of the configured clients. Once I’m into the VPS, I can then ssh into any one of the other clients that has an ssh server running. I could use rinetd to forward ssh on specific ports to specific clients. But for now, that use case is not that common. When the world gets back to normal and I’m out of the house more, that will be useful.

I’ve got my airsonic server running on a specific port of my home server, let’s say it’s 1234. rinetd is set up to forward port 1234 on the VPS to port 1234 on the home server. So I can access my music in the browser from any wireguard client, or I can use any one of many subsonic-compatible Android apps and have my music streaming to my phone or tablet no matter where I am.
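
For reference, the rinetd side of that is a single line in /etc/rinetd.conf. Something like this, where the addresses and port are placeholders rather than my real values:

# /etc/rinetd.conf on the VPS
# bindaddress  bindport  connectaddress  connectport
0.0.0.0        1234      10.0.0.2        1234

The bind is wide open at the rinetd level, which is why the ufw rules described next matter.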

This setup is pretty flexible, and I will be able to add other services to it just by opening up a port in ufw and forwarding it as needed using rinetd. Important thing to remember is that when I say “opening up a port in ufw” I mean a wireguard client accessible port. Nothing is open on the VPS except via wireguard. Nothing is open on my home server except via the VPS or local LAN.
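
The ufw rules boil down to something like the following. The wireguard subnet, interface name and ports here are placeholders, not my actual values:

ufw default deny incoming
ufw allow in on wg0 from 10.0.0.0/24 to any port 22 proto tcp
ufw allow in on wg0 from 10.0.0.0/24 to any port 1234 proto tcp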

Monitoring and Recovery

One downside to this setup is that to access my music, for example, I’m relying on a chain of multiple links: wireguard on VPS, ufw, rinetd, wireguard on home server, airsonic. If any one of those doesn’t function just right, I’m listening to silence. This has happened a couple of times, especially when I first set things up and had some things not quite right. Actually, if ufw goes down, I’ll still be able to listen to my music, but my VPS will be open. So I wanted to get some monitoring in place. When things were down early on, I’d be making assumptions on which piece was broken and spending time trying to fix it, only to find out it was one of the other links. With correct monitoring, I can now tell exactly what is up and down.

Monitoring with Healthchecks

I’ve been a big fan of Healthchecks.io. You set up “checks” which provide you with a url to ping. If a check doesn’t get a ping within a specified time period, it notifies you via email, sms, or through more than twenty other integrated services. I’ve been using it to monitor my daily backups. If a backup doesn’t happen at a specified time, I know about it.

So I set up a cron job that runs a script every 10 minutes on my VPS, and a similar one on my home server. This script first checks the status of wireguard. If it’s up, it pings Healthchecks. It does the same for rinetd and ufw. My home server checks wireguard and airsonic. Each of these five services is set up as a separate check in Healthchecks so I can see the status of each of them separately. The cron job runs every 10 minutes, so I give it one extra minute leeway – if Healthchecks doesn’t get a new ping after 11 minutes, that service is marked as down.
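
Here’s a minimal sketch of what the VPS script might look like. The systemd unit names and the Healthchecks UUIDs are placeholders; adjust them for whatever your services are actually called:

#! /bin/bash
# Called from cron every 10 minutes, e.g.:
# */10 * * * * /home/me/bin/checks.sh

if systemctl is-active --quiet wg-quick@wg0; then
  curl -fsS -m 10 --retry 3 https://hc-ping.com/WIREGUARD-CHECK-UUID > /dev/null
fi

if systemctl is-active --quiet rinetd; then
  curl -fsS -m 10 --retry 3 https://hc-ping.com/RINETD-CHECK-UUID > /dev/null
fi

if systemctl is-active --quiet ufw; then
  curl -fsS -m 10 --retry 3 https://hc-ping.com/UFW-CHECK-UUID > /dev/null
fi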

Recovery

Eventually I realized that if a particular service was down, once I became aware of it, I’d just go to whatever machine and restart it. So why not just do that automatically? I built that into each of my checks.

If, say, wireguard is down on the VPS, it will NOT send the ping to Healthchecks. So a minute or so later it will be flagged as being down. But in this case, the script will also automatically try to restart wireguard. The next time it runs (10 minutes later), hopefully it sees that wireguard is up and sends the ping.
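
In the script, that’s just an else branch on each check. Again, the unit name and UUID are placeholders:

if systemctl is-active --quiet wg-quick@wg0; then
  curl -fsS -m 10 --retry 3 https://hc-ping.com/WIREGUARD-CHECK-UUID > /dev/null
else
  # no ping this round; just try to bring the service back up
  systemctl restart wg-quick@wg0
fi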

Healthchecks also has a “grace period” configuration. Once it notices something is down, it will not alert you until that grace period is done. I set this to 10 minutes. This results in the following sequence if something goes down:

  1. Service X is up and Healthchecks gets pinged at 10:00 pm.
  2. Service X goes down at 10:05 pm.
  3. At 10:10 pm, the script sees that Service X is down and fails to ping Healthchecks.
  4. The script also attempts to restart Service X.
  5. At 10:11 pm Healthchecks has not had a ping in 11 minutes and marks Service X as down.
  6. At 10:20 pm, the script runs again. Service X is up so it pings Healthchecks, which marks Service X as up again.
  7. Alternately, the restart didn’t work and at 10:20 pm no ping is sent.
  8. In this alternate case, at 10:21 pm, Healthchecks emails and texts me about the fact that Service X is down.

A potential improvement to this is that after step 4, when Service X is restarted, I could verify that it’s now working and ping Healthchecks immediately. This way, if the restart works, nothing is marked as down. But I’m going to run it as is for a while and see how this works out. So far, so good.

I’ve gone through and tested each one of these checks, turning the service off and leaving it off. Within 11 minutes it was marked as down and restarted. And shortly thereafter marked as back up. All automatically.

If this were some kind of public service or mission critical workflow, I could easily set up the pings for every minute or so. But the 10 minutes seems perfectly adequate for my purposes.

More Details?

This post is pretty high level. Most of what went into the wireguard setup is covered in the above link. If you want to set up something similar, I’d be happy to go into more detail on any specific points. Just let me know.

version 1.3

Me again, talking about this silly version program still.

Actually, there are some pretty cool updates over the past few point releases. They came fast and on the heels of each other. The idea was posed to use the Linux package manager – apt or pacman or whatever – to get data on a program instead of relying on a hard-coded list.

Background

After some back and forth I warmed up to the idea, but as a backup to the known program list, not as a replacement. My reasoning is that you might have multiple versions of foo installed. Maybe one was through the default package manager, one through some download-and-run-an-install-script method. They might get installed to different locations in your PATH. But when you call foo on the command line, you’ll only get one of them.

If you query the package manager, it’s going to tell you about the one that it knows, which may or may not be the default. But when you run foo -v on the command line, you will get the one that’s going to be actually run in most cases. So that should be the first place we look. If version doesn’t know about foo then it can turn to the package manager.

Details

I decided to tackle two of the major Linux package managers first – apt (used on Ubuntu and most other Debian derivatives) and pacman (used on Manjaro and other Arch derivatives).

On apt, to find info about a package, say neovim, you’d type:

apt list neovim --installed

This will give you something like:

neovim/focal,now 0.4.3-3 amd64 [installed]

That 0.4.3-3 is the version number that we’re looking for. It took a bit of regex trickery, but I was able to parse that bit out of it.
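
Not the exact code from version, but the parsing boils down to something like this:

# pull "0.4.3-3" out of "neovim/focal,now 0.4.3-3 amd64 [installed]"
apt list neovim --installed 2>/dev/null | sed -En 's|^[^/]+/[^ ]+ ([^ ]+).*|\1|p'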

On pacman you’d type pacman -Qi neovim and the result would look something like:

Name : neovim
Version : 0.4.4-1
Description : Fork of Vim aiming to improve user experience, plugins, and GUIs
Architecture : x86_64
URL : https://neovim.io
Licenses : custom:neovim
Groups : None
Provides : vim-plugin-runtime
Depends On : libtermkey libuv msgpack-c unibilium libvterm luajit libluv
Optional Deps : python-neovim: for Python 3 plugin support (see :help python)
xclip: for clipboard support on X11 (or xsel) (see :help clipboard) [installed]
xsel: for clipboard support on X11 (or xclip) (see :help clipboard) [installed]
wl-clipboard: for clipboard support on wayland (see :help clipboard)
Required By : None
Optional For : None
Conflicts With : None
Replaces : None
Installed Size : 20.45 MiB
Packager : Sven-Hendrik Haase svenstaro@gmail.com
Build Date : Wed 05 Aug 2020 04:16:43 AM EDT
Install Date : Fri 21 Aug 2020 07:37:52 AM EDT
Install Reason : Explicitly installed
Install Script : No
Validated By : Signature

So we can use grep and/or sed to find the one line of that which starts with Version: and grab the 0.4.4-1 part of it.
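
Roughly like this (again, not necessarily the exact expression used in version):

# grab "0.4.4-1" from the "Version : 0.4.4-1" line
pacman -Qi neovim | sed -n 's/^Version[[:space:]]*:[[:space:]]*//p'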

I then did basically the same thing for dnf, which is the package manager on Red Hat, Fedora, and derivatives.

So the process is:

  1. Check to see if version already knows about the program. If so, just do what it already does.
  2. If not, check apt, pacman and dnf. First we can just check to see which one of those exists and only run that one (see the sketch after this list). It’s unlikely that many people will have more than one of them installed. If we find one, we do the parsing and spit out the version it tells us about.
  3. If those all fail, then we can just tell the user we couldn’t find any information on that command.
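
That detection step is just a matter of checking which package manager binary is on the PATH. A rough sketch of the idea, not the literal code from version:

get_version_from_package_manager() {
  if command -v pacman > /dev/null; then
    pacman -Qi "$1" | sed -n 's/^Version[[:space:]]*:[[:space:]]*//p'
  elif command -v apt > /dev/null; then
    apt list "$1" --installed 2>/dev/null | sed -En 's|^[^/]+/[^ ]+ ([^ ]+).*|\1|p'
  elif command -v dnf > /dev/null; then
    # dnf's output has a header line; this parsing is rough
    dnf list installed "$1" 2>/dev/null | awk 'NR>1 {print $2}'
  fi
}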

Can we do more?

There are all kinds of other package managers on both Linux and Mac. I started making a list of the different ways you can find and install software and came up with

  • snaps
  • flatpaks
  • pip
  • npm
  • homebrew / linuxbrew

There are others, but those all cover a huge amount of ground. And it turns out that most of them could be handled with the same general strategy:

  • Does this package manager exist?
  • Does it know about this program and what info does it have?
  • Parse out the version number from the info it returns.

So, now version supports all of those. It just looks at each one of them in turn until it finds one that gives an answer.
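
To give a sense of it, these are roughly the kinds of queries involved. The exact commands and parsing in version may differ, and the output formats vary:

pip show requests | sed -n 's/^Version: //p'
brew list --versions wget
snap list core
flatpak list --app
npm list -g typescript --depth=0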

This also has the added functionality of being able to return the version of more than just executable programs. Package managers know about various libraries and other assets that aren’t directly executable or don’t have any way of querying them directly for their version. But version can tell you about them. Want to know what version of libusb you have installed? Typing version libusb will tell you.

A personal perk of doing this project is that I was forced to really learn grep and sed. Two programs that ranged from confusing to very mysterious in my mind. Now I get them and really like them. I wrote something up about them too: https://www.bit-101.com/blog/2020/09/grep-and-sed-demystified/

Catalina VM

I’m not a big fan of Apple. Their products themselves are fine, for the most part. Not to my preference in a lot of ways, but that’s fine. I know plenty of people who love their iPhones and Macbooks and watches. I’m not going to argue. I definitely don’t like the company though. I perceive them as being overly controlling, developer hostile, and incredibly narcissistic. All this is just a pretext for saying that I don’t want to spend any money on Apple hardware.

But now and then I do want to be able to test something on Macos. Some program or script or utility I’m working on, like version or my C or Go based graphics and animation libraries. It’s not my main target, but since command line tools developed on Linux are usually trivial to get working on Mac, I’m happy to test them out and make some minor tweaks so they work there.

I’ve been considering building a “Hackintosh” system. But with Apple’s plans of going to ARM possibly by the end of this year, I don’t want to invest a lot in hardware that’s going to be obsolete soon.

This led me to see if it was possible to get Macos running in a VM. And I found this project:

https://github.com/foxlet/macOS-Simple-KVM

This is a git repo with a couple of scripts that create a qemu VM, download the official Macos installer image and run that installer in that VM. As far as I know, it only runs on Linux.

  1. Install the dependencies.
  2. Check out the repo.
  3. Run the jumpstart.sh script. This even allows you to choose which version you want. Defaults to Catalina.
  4. Create a virtual disk image and add that to the basic.sh script.
  5. Run the basic.sh script and choose install. (The commands for these steps are sketched just below.)
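
In terms of commands, the whole flow is roughly this. Check the project’s README for the exact dependency list and the -drive lines to add; I’m paraphrasing from memory:

# install qemu and the other dependencies listed in the repo's README
git clone https://github.com/foxlet/macOS-Simple-KVM.git
cd macOS-Simple-KVM
./jumpstart.sh --catalina
qemu-img create -f qcow2 MyDisk.qcow2 64G
# add a -drive entry for MyDisk.qcow2 to basic.sh as shown in the README
./basic.sh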

The UI that comes up is called “Clover”.

The UI actually confused me at first. I was trying to click things, but you navigate around with the arrow keys and use Enter to choose things. Choose the default option shown here to install.

This boots up a Macos system right off the install media. The first thing you’ll need to do is format that virtual disk you created. Then run the installer and tell it to install Macos to that disk you just formatted.

The install takes a long time. Like close to an hour. At points it says it has one minute remaining and hangs there forever, before saying it’s calculating the remaining time, and hanging there forever. But be patient, it will finish after rebooting a couple times.

When it’s done, it will boot right into the OS, asking you to set up a username and password. And then you’re in a full Macos install. Use Control-Alt-F to toggle full screen and Control-Alt-G to toggle capture of the keyboard and mouse. You can shut down as you’d usually shut down a Mac or just close the VM window.

When you boot back in, you’ll get the Clover screen again. This time choose the last option in the top row to boot.

This is the one that messed me up the first time. I just hit enter and wound up in the install flow again.

Performance

By default, the basic.sh script allocates 2 GB RAM and a minimal amount of CPU resources. It’s also hard-coded to 1280×720 resolution I think. Read through the documentation to find out how to beef up the VM. I gave mine 8GB and a lot more CPU. I also got it running at full resolution on my 2560×1440 (or something like that) monitor. With the extra resources, it’s surprisingly performant. I mean, I’m not going to be doing gaming or video editing or trying to run XCode on it, but for browsing the web, regular apps, anything console-based, it’s perfectly adequate.
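
For the record, beefing it up amounts to editing a couple of qemu flags in basic.sh. The stock values may differ from what I show here, and how far you can push them depends on your hardware:

# in basic.sh
-m 8G \
-smp 8,cores=4,threads=2 \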

Once you go full screen with it, it’s honestly hard to tell it’s not the real thing. It is a bit laggy on my Thinkpad with an i5 CPU, but pretty zippy on my Ryzen 5 3600 desktop. I’ve installed various tools, utilities and other programs, as well as Homebrew and a bunch of packages from there. I haven’t had any problems with it so far at all.

Although the install was slow as hell, it now boots up fully in under a minute for me. That’s good because it’s not something I want running all the time. Giving it all those resources means the underlying Linux OS does not have access to them. You can pause the VM though, which stops it from using any CPU power. I haven’t checked if that affects the memory use though. I’d guess not so much.

Here is Catalina at 1920×1080 on my Thinkpad

Legality

Who knows. Use at your own risk. Just to avoid any problems, I did not sign in with my Apple ID. I’ve never heard of anyone getting sued for running an OS in a VM.

Summary

I’m pretty excited about this. This is perfect for my use cases of testing things out here and there. I’m not expecting it to be a full replacement for actual Apple hardware running Macos. And I don’t need that.

I imagine once Apple switches over to ARM, this project will be obsolete. Hopefully someone figures out a way to continue it with their new architecture.

A Cooler Cooler from Cooler Master

Some weeks ago I shared my new PC build. It’s been wonderful. Working perfectly. One thing I did recently was put in another 500GB drive to hold my VM images. While I was in there, I moved the front mounted drives around to the back and did some better cable arranging.

One thing I had planned to do for a while was add a new CPU cooler. After reading some reviews and watching some Youtube videos, I settled on the Cooler Master Hyper 212 Black Edition.

I probably didn’t really need this. But I think it’s a good investment. I’d been using the stock cooler that came with my Ryzen 5 3600. It did the job adequately. I’m not into any kind of crazy overclocking or anything, and wasn’t having any real heat problems. In general use, with a browser and a few tabs open, terminal and some music playing, maybe Slack open, the CPU would be in the high 30s to mid 40s C. Unless it was completely idle it wouldn’t stay in the 30s much. More in the low 40s. With Slack and some more active tabs open, it’d get into the 50s, maybe some short peaks into the 60s. Completely idle with nothing running, it would be mid-to-high 30s.

With the new cooler, I can see an immediate difference. Idle with just a browser and several tabs, it will settle down to 31-32. I never saw it go that low with the stock cooler. So it’s promising. I’ve only put it in this morning, so I’ll give it a few days to see how it performs under daily loads, but seems like a good improvement so far.

Installation wasn’t too bad. Took off the old cooler and cleaned up the thermal paste. You have to use their custom back plate, so I removed the old one and set up the new one. This was probably the most complex part. The cooler is designed to fit on a number of different socket types. You have to install various posts or screws and clips and move them to various positions depending on which socket you have, as well as a couple different types of brackets that go on the cooler itself. There’s a decent manual with Ikea-like diagrams for each configuration. I managed to get it right the first time without too much difficulty.

I came very close to forgetting to put on fresh thermal paste, but caught myself before tightening any screws. Getting the four screws on the brackets started was a bit tricky. They’re all spring mounted and wobble around. You have to apply a little pressure and get them at the right angle to get them going. But once they were started, they were easy to tighten up, alternating corners.

The cooler comes with a single 120mm Silencio fan. The instructions have you mount it in a configuration that pushes air across the cooler and towards the back fan, which makes sense. But you can also purchase and add a second fan to mount on the other side of the cooler, for a push-pull configuration. There’s also an RGB version, but I think I have enough RGB going on in there as it is.

The cooler itself is pretty tall and the reviews all said to make sure you have enough room in your case. My case is ludicrously cavernous, so there’s plenty of room to spare, but it’s good advice to check.

version 1.0

A while back I posted about a script I wrote called version.

https://github.com/bit101/version

You pass it the name of a program and it tells you what version of that program you have installed. Example:

version java

This saves you from having to remember if it’s java -v, java --version, java -V or something else (no spoilers).

version now knows how to get the version of 156 different programs (including itself). It has 9 contributors and 15 stars. Not exactly React, but it’s cool to have people contributing.

In the original proof of concept, I was using bash case statements. In fact, this was the entire first iteration:

#! /bin/bash

case $1 in
  java)
    $1 -version
    ;;
  gcc | rustc)
    $1 --version
    ;;
  node | perl | lua)
    $1 -v
    ;;
  python)
    $1 -V
    ;;
  go)
    $1 version
    ;;
esac

Once I started adding more programs though, it became obvious that this wasn’t going to work. I discovered that bash and zsh support a form of associative arrays. I thought that would be the perfect thing. It would look something like:

declare -A tools
tools[gcc]=--version
tools[java]=-version
tools[node]=-v

Sadly, these are not supported in bash 3, which is still in use. In fact, my MacBook Pro has bash 3 on it.

Plan C was to fake associative arrays. Essentially, you just make a bunch of variables, one for each tool, with a common prefix:

tools_gcc=--version
tools_java=-version
tools_node=-v

I just had to do a bit of fancy regex with grep and sed to get the argument from the name of the tool that was passed as an argument. The initial pass with this method was pretty ugly, and I didn’t really understand what I had done. One of the contributors made some nice changes, and this led me to learning a lot more about grep and sed and I was finally able to get rid of grep altogether and do it all in sed. I was pretty happy with that. sed has always seemed like one of those arcane tools that only wizards knew how to use.
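
I won’t reproduce the real thing here, but one way the lookup can work is to dump the shell’s variables and let sed pick out the one with the matching prefix. A sketch of the idea, not the actual code in version:

# $1 is the tool name, e.g. "node"; this prints "-v"
flag=$(set | sed -n "s/^tools_$1=//p")
$1 $flag

Bash’s indirect expansion (${!varname}) would be another way to do the lookup without sed.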

I also learned how to make man pages. And I made an install and uninstall script. One of the other contributors has been working on making a snap package, but from what I can tell that’s probably not going to work too well due to the strict confinement of snaps.

Anyway, I don’t think there’s a whole lot more to be done with the simple tool. Hopefully people will still find new programs to add to it. But I figured it was done enough to slap a 1.0.0 sticker on it.

Grep and Sed, Demystified

I’ve kind of half understood grep for a while, but assumed that I didn’t really get it at all. I thought I knew nothing at all about sed. I took some time this weekend to sit down and actually learn about these two commands and discovered I already knew a good deal about both of them and filled in some of what I didn’t know pretty easily. Both are a lot simpler and more straightforward than I thought they were.

Grep

grep comes from “global regular expression print”. This is not really an acronym, but comes from the old-time ed line editor tool. In that tool, if you wanted to globally search a file you were editing and print the lines that matched, you’d type g/re/p (where re is the regular expression you are searching with). The functionality got pulled out of ed and made into a standalone tool, grep.

Basics

grep, in its simplest use, searches all the lines of a given file or files and prints all the lines that match a regular expression. Syntax:

grep <options> <expression> <file(s)>

So if you want to search the file animals.txt for all the lines that contain the word dog, you just type:

grep dog animals.txt

It’s usually suggested that you include the expression in single quotes. This prevents a lot of potential problems, such as misinterpretations of spaces or unintentional expansion:

grep 'flying squirrel' animals.txt

It’s also a good idea to use the -e flag before the expression. This explicitly tells grep that the thing coming next is the expression. Say you had a file that was a list of items, each preceded with a dash, and you wanted to search for -dog:

grep '-dog' animals.txt

Even with the quotes, grep will try to parse -dog as a command line flag. This handles it:

grep -e '-dog' animals.txt

You can search multiple files at the same time with wildcards:

grep -e 'dog' *

This will find all the lines that contain dog in any file in the current directory.

You can also recurse directories using the -r (or -R) flag:

grep -r -e 'dog' *

You can combine flags, but make sure that e is the last one before the expression:

grep -re 'dog' *

A very common use of grep is to pipe the output of one command into grep.

cat animals.txt | grep -e 'dog'

This simple example is exactly the same as just using grep with the file name, so is an unnecessary use of cat, but if you have some other command that generates a bunch of text, this is very useful.

grep simply outputs its results to stdout – the terminal. You could pipe that into another command or save it to a new file…

grep -e 'dog' animals.txt > dogs.txt

Extended

When you get into more complex regular expressions, you’ll need to start escaping the special characters you use to construct them, like parentheses and brackets:

grep -e '\(flying \)\?squirrel' animals.txt

This can quickly become a pain. Time for extended regular expressions, using the -E flag:

grep -Ee '(flying )?squirrel' animals.txt

Much easier. Note that -E has nothing at all to do with -e. That confused me earlier. You can still use both together, as in the example above. You may have heard of the tool egrep. This is simply grep -E. In some systems egrep is literally a shell script that calls grep -E. In others it’s a separate executable, but it’s just grep -E under the hood.

egrep -e '(flying )?squirrel' animals.txt

Other Stuff

The above covers most of what you need to know to use basic grep. There are some other useful flags you can check into as well:

-o prints only the text that matches the expression, instead of the whole line

-h suppresses the file name from printing

-n prints the line number of each printed line.

Simple Text Search

If the thing you are searching for is simple text, you can use grep -F or fgrep. These work the same way, but the pattern is treated as a plain string rather than a regular expression.

grep -Fe 'dog' animals.txt
fgrep -e 'dog' animals.txt

Perl

There’s also grep with Perl syntax for regular expressions. This is a lot more powerful than normal grep regex syntax, but a lot more complex, so only use it if you really need it. It’s also not supported on every system. To use it, use the -P flag.

grep -Pe 'dog' animals.txt

In this simple case, using Perl syntax gives us nothing beyond the usual syntax. Also note that pgrep is NOT an alternate form of grep -P. So much for consistency.

Sed

I thought that I knew next to nothing about sed, but it turns out that I’ve been using it for a few years for text replacement in vim! sed stands for “stream editor” and also hails from the ed line editor program. The syntax is:

sed <options> <command> <file(s)>

The two main options you’ll use most of the time are -e and -E which work the same way they do in grep.

The most common use of sed is to replace text in files. There are other uses which can edit the files in other ways, but I’ll stick to the basic replacement use case.

Like grep, sed reads each line of text in a file or files and looks for a match. It then performs a replacement on the matched text and prints out the resulting lines. The expression to use for replacement is

s/x/y/

where x is the text you are looking for, and y is what to replace it with. So to replace all instances of cat with the word feline in animals.txt

sed -e 's/cat/feline/' animals.txt

Note that sed will print every line of text in the file, whether or not it found a match. But the lines that it matched will be changed the way you specified.

After the final slash in the expression, you can add other regex flags like g for global or i for case insensitivity.
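
For example, with GNU sed this replaces every cat, Cat or CAT on each line:

sed -e 's/cat/feline/gi' animals.txt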

Like grep, sed just outputs to stdout. You can redirect that to another file using > or pipe it to another process using |. But do NOT redirect the output back to the original file – the shell truncates the file before sed even gets to read it, so you’ll end up wiping it out. Try it some time on a test file that you don’t care about and see what happens.

There are lots of other options you can use with sed but the above will probably get you by for a while to come. As you need more, just read up.

Summary

The biggest thing I took away from the couple of hours I spent on this was actually how easy these two commands were to learn. I’d been avoiding doing so for a long time and now wish that I had spent the effort on this much earlier.

New Shell Script Shortcut

I’m often making shell scripts for various things – sometimes just a quick one for a specific task, sometimes just to test something out, sometimes for some kind of workflow task that I plan to keep around.

It’s always the same steps:

  1. Create the file.
  2. Add the header: #! /bin/bash
  3. Write the code.
  4. Save it.
  5. Exit the editor.
  6. Make it executable with chmod +x <filename>
  7. Run, test, edit, etc.

When you’re repeating yourself, time for some automation. So I wrote a shell script that creates shell scripts.

#! /bin/bash
if [ -f "$1" ]
then
  echo "$1 already exists"
  exit 1
fi
echo '#! /bin/bash' > "$1"
chmod +x "$1"
$EDITOR "$1"

Create a file, add that code, make it executable (sound familiar?) and put it somewhere in your path. I named mine newss for new shell script.

Now you can just say:

newss test.sh

And you are editing an executable shell script. As you can see, it checks to see if the file exists as a minimum level of safety. It will follow any paths, such as:

newss foo/bar/test.sh

But only if the directories exist. Creating non-existent directories wouldn’t be too hard if that’s important to you.
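
If you wanted that, one extra line before writing the file would do it:

mkdir -p "$(dirname "$1")"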

Git-based Wiki

For many years I’ve bounced around using different tools to save information that I might need later. I’ve used MS OneNote, Evernote, Workflowy, Dynalist, Notion, several other hosted and self-hosted wiki systems, and probably many other things.

If I had to name a favorite out of all those, I’d go with Workflowy. It’s a super simple text outliner. You start with a single top level page. Each page is a list of items, and can each have a nested sub-list, with effectively unlimited depth. But you can also focus on any node so that it becomes a page in itself. Dynalist is very much the same, but you get multiple lists and can add images, fancy formatting, check boxes, all kinds of other groovy features. On the surface it sounds a million times better, but with all those bells and whistles, I felt like I was losing the elegant simplicity of Workflowy.

But I digress.

One thing I didn’t like about most of those systems is that someone else owns your data. And if it’s an online system, it might be hard to access offline. Some of the wiki systems do run locally, but then you might lose the online functionality.

Some months ago I came up with a system that I am now totally sold on. It’s super simple and involves only github (or gitlab or bitbucket or a self-hosted git repo or whatever) and markdown files.

The System

Honestly, the system itself is so simple that as I start to describe it, it seems almost so obvious I’m second guessing why I even have to describe it. But I did go through a few iterations before I got it down just how I like it.

Top Level

Start by creating a git repo (either online or locally). Create a folder called docs and a README.md file. In that main file create and maintain a bullet list of links. This is your index to all the top level pages in your wiki. Each link should be to another markdown document that lives in the docs folder. Here’s a sample:

# My Personal Wiki

- [Stuff to Buy](docs/stuff_to_buy.md)
- [Movies to Watch](docs/movies_to_watch.md)
- [Birthdays](docs/birthdays.md)
- [Projects](docs/projects.md)

The reason for the top level docs folder is that when you first go to your wiki, it’s going to show a list of all the files and then render your README.md file below that. If you have a ton of documents in the root of your project, your readme is going to scroll right off the page. With this setup, there will always only be two top level items, the folder and your readme. This is what it looks like when you go directly to the repo:

Main Documents

Now, within your docs folder, create a markdown file for each item in the index. A nice trick I do here is to create a link back to the main readme file as the first line in the file. For example:

[Parent](../README.md)

# Stuff to Buy

- Food
- Drink
- Pencils
- Paper
- Lamborghini

Now you’ll have a link back to where you came from. Here’s that in action:

Of course, you can link out to other content here too:

[Parent](../README.md)

# Movies to Watch

- [The Godfather](https://www.imdb.com/title/tt0068646/?ref_=fn_al_tt_1)
- [The Big Lebowski](https://www.imdb.com/title/tt0118715/?ref_=fn_al_tt_1)

You can also do fancy stuff like include images, or even make tables:

[Parent](../README.md)

# Birthdays

| Who   | When     | Gift      |
|-------|----------|-----------|  
| Paw   | 01/01/20 | Whiskey   |
| Maw   | 05/08/23 | Flowers   |
| Sis   | 08/09/42 | Gift Card |

Which gives you:

Sub-categories

Now you can start making folders for sub-categories. For example, the projects page looks like this:

[Parent](../README.md)

# Projects

- [Project A](project_a/index.md)
- [Project B](project_b/index.md)

The Project A link goes to project_a/index.md and similarly for Project B. And both of those files will link back to docs/projects.md as their parent:

[Parent](../projects.md)

# Project A

This is all about Project A

Naming these index.md rather than README.md breaks the symmetry between the top level and lower levels, but that’s fine with me.

And of course, those projects can contain additional documents or even nested folders. And as long as you include those parent links at the top of the page and use those for navigating back, you’ll always be viewing rendered markdown files.

This is just one way of setting up sub-categories. Totally up to you how you want to organize it. That’s the beauty of it.

Benefits

Things I love about this system:

  1. Your content is available on line from wherever you can log into github. But at the same time, you can always keep a complete local backup on any computer you want. Changes are just a pull away. There’s no PITA “export your data in json format” kind of process. Keep it up to date all the time. You can even do a local push/pull via a cronjob once a day or more or less often.
  2. Online content is nicely rendered in HTML on the web, but also completely readable and editable on your local system. Of course, if you are using a simple text editor, you might miss out on the clickable links – and images.
  3. Editing content on line is one click away. Just click the pencil icon as you’re viewing the markdown file and you’re in edit mode. Save and you’re back to an HTML view.
  4. If you do end up in an offline situation you can still edit your local files. If you wind up with a conflict, you’re not at the mercy of the tool in how it’s going to resolve it. It’s a straight up git merge, conflict resolution workflow that you’re already familiar with. Full control.
  5. Of course, it’s git, so every single change to every file you make is a commit, tracked, revert-able, never lost.
  6. Get sick of github? Pull locally, push it to gitlab instead. Or wherever.
  7. No special tools, frameworks, or other systems needed. Just markdown files.

Overall, I’ve really come to rely on this system. I’m constantly thinking of things to add to it. Every time I figure something out or find some interesting bit of data I know I’ll want to come back to, it goes in the wiki. I find myself actively using it all the time.

Potential Improvements

One of the things I’ve thought about is automating some of the work. The main readme and sub-category index files could potentially be auto-generated. It wouldn’t be rocket science. Iterate over each markdown file and make a listing. And for each folder, look for the index file in that folder and add that to the listing. Then re-write that readme/index file with the updated listing. Recursively travel through the folders. If you had a script or tool that did this, it could be run via github actions perhaps.
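
I haven’t built this, but a rough sketch of the top-level part might look like:

#! /bin/bash
# Regenerate the link list in README.md from whatever is in docs/.
# Just a sketch of the idea, not something I'm actually running.
{
  echo "# My Personal Wiki"
  echo
  for f in docs/*.md; do
    # use the first markdown heading in the file as the link text
    title=$(sed -n 's/^# //p' "$f" | head -n 1)
    echo "- [$title]($f)"
  done
} > README.md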

If I were just editing locally, I could easily set up an editor command to create the file and link to it. But I find myself using the wiki more often online than locally. So I have to keep thinking about all this.

Example

Here’s a link to the sample wiki I created just for this post. (My real personal wiki is private.)

https://github.com/bit101/wiki_example

Feel free to fork it or just copy the idea and make it better. If you come up with any bright ideas on how to improve it, please share back.

Pinebook Pro Tips

For those of you who actually have, or are thinking about getting a Pinebook Pro and would like to know how to avoid the stupid mistakes I made, here are some specific details I learned.

Get the Right Image

I wanted to install the Manjaro XFCE version. So I wanted the image for that. This was my first point of confusion. I knew I wanted an eMMC installer image. There are links here: https://wiki.pine64.org/index.php?title=Pinebook_Pro_Software_Release#Manjaro_ARM

If you get an image direct from Manjaro, it will NOT be an eMMC boot image. You have to go to osdn.net. And even there, it’s confusing. First you’ll see a list of releases.

Naturally, you’ll go for the latest one, currently 20.08, and you’ll see this:

None of those are eMMC installers though. If you use any of those, it will only install Manjaro onto the SD card that you booted from. You have to go back, in this case, to 20.04. Then you’ll see this:

Oho! An actual eMMC installer image. This would have saved me many hours had I looked beyond the latest release. And don’t forget that Manjaro is a rolling release, so it doesn’t really matter which one you get, it’s going to be fully up to date as soon as you do your first update. At least that’s mostly true. There may be minor differences that I don’t know about, but effectively, it’s fine.

Run the Installer. Hit Escape!

Burn the installer onto at least an 8GB SD card and boot the PBP with that card inserted. It’s going to hang on the loader screen. At least at the time of this writing anyway. There’s a bug in the eMMC image that causes this. Just hit escape and you’ll be in the installer.

It will walk you through some choices that should be mostly obvious. You want to choose to install to the eMMC. That should be mmcblk2. Whereas the SD card you booted from should be mmcblk1.

Note: Once you’ve got everything installed, you can put a non-bootable SD card in the slot and use it for extra storage. It won’t affect the boot process.

The Manjaro ARM Installer

The Pine64 wiki also mentions this script:

https://gitlab.manjaro.org/manjaro-arm/applications/manjaro-arm-installer

It sounds lovely. Install the script, run it, choose your distro, choose the target, sit back and it does everything. This is what soft-bricked my PBP though and caused me hours of despair. I may have just been unlucky. I haven’t seen reports of others having the same trouble. But personally, I won’t try that method again.

Fixing a Soft Brick

As I have come to understand it, the PBP will boot first from a bootable SD card if one is inserted. Otherwise it will try to boot from the eMMC module. However, you can wind up messing up your eMMC to the point where the PBP won’t boot at all. In some cases this means the power light won’t even turn on. This is what happened to me. It sure seems like a hard, fatal brick situation, but don’t despair. It’s fixable.

It has something to do with something called U-Boot, which I guess is kind of like a boot block / GRUB kind of concept, and which controls the boot process. If that gets messed up, you ain’t booting nothing.

So you have to bypass that completely. You do that by completely disabling the eMMC.

  1. Open up the PBP. This is a simple matter of undoing 10 screws. The PBP is made to be easily open-able and internally upgrade-able. There’s a small switch there that disables the eMMC. Consult https://wiki.pine64.org/index.php?title=Pinebook_Pro#Pinebook_Pro_Internal_Layout to locate it. You want to move it to the position away from the hinge, towards the inside of the computer.
  2. Note: be VERY careful when using the computer with the bottom cover off. Pine warns that opening the lid without the cover on can damage the hinge. Of course you’ll need to open the lid a bit to boot it and see what’s going on. Just be very careful.
  3. Now you should be able to boot from a bootable eMMC installer image on an SD card. Verify this.
  4. Next you need to reboot and during the boot process, flip that switch back to enabled. If you do it too soon, you won’t be able to boot. Apparently, if you do it at exactly the right moment, 2 seconds into the boot process, the PBP will boot and the eMMC will mount. But if you are too late, the eMMC won’t mount. That’s fine though. The next step will handle that.
  5. As mentioned above, the eMMC installer image will hang. Just hit escape and you’ll be in the installer.
  6. You can try the install process at this point. The eMMC may have mounted. In the step where it asks you where you want to flash the image, if you see mmcblk2 you are good to go!
  7. There’s a good chance that the eMMC will not be mounted, so you’ll need to mount it manually. Choose “No” in the first screen of the installer and you should be dropped back into a root command prompt:

    [root@manjaro-arm ~]# _
  8. There are two commands you can type here that should mount the eMMC:

    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/unbind
    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/bind

    These are covered here: https://wiki.pine64.org/index.php?title=Pinebook_Pro#eMMC_information
  9. To verify that this worked, you can type lsblk and you should see mmcblk2 in that list.
  10. From there, type exit and you should be back to the installer, which should now work fine.

Resources

Read this entire page: https://wiki.pine64.org/index.php?title=Pinebook_Pro

Use the forums: https://forum.pine64.org/ – particularly the Linux on PBP forum.

The official Pine64 subreddit: https://www.reddit.com/r/PINE64official/

Pinebook Pro Saga and Review

Back Story

A while back I bought a Pinebook laptop. This was the original 11-inch, white plastic model. It cost $99 plus shipping.

Pinebook (definitely not Pro)

The idea was great – a cheap ARM laptop that you could grab and go and not worry too much what happened with it. But in practice, it was not even remotely usable. The specs on processor and network speed and memory were just not there. You couldn’t watch YouTube on it. Just loading any non-trivial web page was painfully slow. The touchpad was meh, but the keyboard was just atrocious. Not so much the feel, but the layout. Way too cramped. Keys for common symbols like | and " required a modifier key AND a shift key, turning them into a three-key combo. And the right hand shift key wasn’t even the size of a regular key. I could NEVER hit that one correctly.

It was just unpleasant to use. So it’s been sitting in my unused hardware pile ever since. Occasionally I take it out and try to do something with it, but it’s not worth it.

The Upgrade

Then the Pinebook Pro came out and I started hearing some good reviews. The first thing I did was check out the keyboard layout. It actually looked OK. This version costs $199 plus shipping. But has a much better processor, more memory and all around decent specs. It’s still a cheap ARM machine, but from all reports it seemed to be usable. It’s been in the back of my mind to give it a try for a while, but each time I checked it out, they were out of stock. They get made in batches, so sometimes you just can’t order. Recently I got interested again and saw that pre-orders were open and shipping on August 25. I pulled the trigger. Amazingly, not only did it actually ship on the 25th, but it got from Hong Kong to Boston in just two days. I was stunned.

It Arrives! I Brick It!

If you just want a review, skip to the end of this post. Stick around if you want to hear an epic tale of stupidity.

I opened up the package. I’ll get more into the build, etc. later. But first, my tale of woe. I booted it up. It had 86% battery and comes preinstalled with Manjaro KDE ARM. Got into it, set up my user, all good. First of all I love Manjaro. That’s what I run these days on any Linux desktop. But I like the XFCE version. KDE is nice. If XFCE vanished tomorrow, I’d go to KDE. But since there’s Pinebook Pro Manjaro XFCE image available, I wanted to use that.

A little understanding of PBP hardware. Its main “hard drive” is an eMMC memory module. An eMMC is flash memory that’s better than an SD card, but not as great as an SSD. The PBP comes with a 64GB eMMC. It also has an SD slot that you can boot from. If there’s a bootable card in the slot, it will boot from that, otherwise it goes to the eMMC. So you can get a supported OS image and burn it onto an SD card and boot from that. But really you want to install the new OS to the eMMC. So there are eMMC install images specially made to do that. But every image I tried just installed onto the SD card. So I had to try an alternate way to install to the eMMC. Spoiler alert: this is where my stupidity enters in.

I found another installer method here: https://gitlab.manjaro.org/manjaro-arm/applications/manjaro-arm-installer

This seems to download the image of your choice, burn it to an SD card and then flash it back to the eMMC all in one script. Ran that and it seemed to be fine. Until it ran into an error and the power light started flashing red and green and everything froze up.

I rebooted. No. I tried to reboot. Nothing. Not even a power light. I knew the battery wasn’t dead. So this looked bad. I read some help, which gave some advice that involved opening up the computer. Well… that escalated quickly. But in fairness, the PBP is designed to be hackable. It opens very easily. There are add-on items you can put in there, etc. There’s also a switch in there to disable the eMMC, a reset button, and a few other physical controls.

https://wiki.pine64.org/index.php?title=Pinebook_Pro#Pinebook_Pro_Internal_Layout

All the connections were good, the eMMC was enabled. No joy.

Unbricking

I was pretty sure I had hard bricked it. But the more I read, the more I saw people saying that you CANNOT brick a PBP via flashing images. You can soft brick it, which is what I did, but it’s recoverable. Promising. What you do is disable the eMMC with that switch. Then it should power up and boot off the SD card. After probably a couple of hours of despair, I got it booting off the SD card! Whew.

OK, back in business, sort of. I still have to flash the eMMC, but now I have a catch 22. I can’t flash the eMMC because it’s disabled. But if I enable it, then the machine won’t boot.

Option one: apparently, you can do this thing where you boot the machine with eMMC disabled, and exactly two seconds into the boot, you flip the switch to enable it. That allows you to boot from the SD card and then have access to the eMMC. I had no luck with this method.

Option two: someone figured out that you could boot disabled, later enable it, and then run a few commands to mount it. This one worked!

OK, I was officially unbricked. Now I’m back to the point where I needed to flash the OS.

Another Failed Flash, But No Brick

There’s not a lot of magic in the flashing process. It comes down to using the dd tool to write the image to the eMMC. You can just do that manually as long as the eMMC is mounted. There are also some scripts out there that will run the command correctly for you. I got one of those, got the right image and ran it. It seemed to go fine. Reported success. Rebooted, and it tried to load, but would not fully boot.
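
The manual version is basically a single dd command. The image name here is just an example, and getting of= wrong will ruin your day, so double check the device with lsblk first:

sudo dd if=Manjaro-ARM-xfce-pbpro.img of=/dev/mmcblk2 bs=1M status=progress conv=fsync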

The plus point is that the eMMC itself seemed fixed. I didn’t have to disable it to boot off the SD card. So I could close up the back of the computer again.

Back to Square One… and Success!

OK, now I was back to my original question of how to flash an image onto the eMMC. More reading up on people dding images led to more discussions about the eMMC installer images that never worked for me in the first place. But they seemed to work for others. It turns out that I was always going for the latest release. When I looked back a couple of releases, I found one that was very explicitly labeled emmc-installer-image. OMG. Slap on the forehead moment.

Well this sounded like exactly what I wanted originally. Downloaded it, burnt it to an SD card and booted it up. And it sat there on the loading screen forever. Sigh…

Tried some other images, other SD cards, all the same.

More reading and I see this little mention of a bug in the eMMC installer where it seems to hang. Oh! That’s me! The complicated fix is to… hit the escape key. Surely it couldn’t be that simple. But, yes, yes, it was. That was the final piece of the puzzle. Finally I was able to flash Manjaro XFCE Arm onto the Pinebook Pro. And it was only a little after midnight. Played around with it for a few minutes and then went to bed with a headache.

The Review

OK, so what is this thing like, now that I finally had it running with the OS I wanted?

First, let me say that most of the above is my fault. If I’d only looked around a little bit and found the eMMC installer image, I would have saved several hours. I’d even seen a message about the hang and escape fix, but had forgotten all about it hours later. But the documentation on all of this could have been a lot better. Most of what I learned was on various forum posts. So that’s the first takeaway. This is NOT a consumer device. If you want something that you can just plug in and have it work, this aint it. You gotta be ready and willing to dig in and search and experiment and tweak things. I’m fine with all that, personally.

OK, onto the review…

Build-wise, it’s surprisingly nice for a $200 machine. It’s got a full metal case. Magnesium alloy I think. Feels solid, heavier than I expected. Maybe like Macbook Air thickness. The hinge is good. The body itself is a bit flexible though.

The screen is pretty decent. 1920×1080. In fact it’s on par, if not better than the screen on my ThinkPad T480. Granted, that’s a low bar, but again, for the price point, that’s impressive.

ThinkPad vs. PBP

The keyboard is not bad at all. Not the best ever, but I’m comfortable typing on it. No problems with the layout. Everything is where it should be and the size it should be. No funky modifiers needed for common keys.

The touchpad is one of the weak points. Out of the box at least. I hated it. Sluggish, unresponsive, hard to get it on the thing you want to click. But with some research and tweaking of settings, I’ve now got it to the point where it’s workable for me. I was actually pretty happy with how much I was able to improve it with settings. Luckily I’d been through this plenty of times in the past with Linux on laptops. I know my way around all those settings.

Sound is awful. Not just quality but volume. Cranked all the way up, I literally can’t understand anything if the air conditioner is on nearby. Even with headphones plugged in, the volume on those is really low. I might be able to find some settings to improve that eventually. But I don’t plan on using this for media anyway.

Webcam is meh. Better than I thought it would be actually.

Ports: on the right, SD card, headphones, USB A.

On the left, USB C, USB A, power barrel connector. You can also power and charge over USB C, which is great.

Performance is not too bad. It doesn’t compare with a full x86 laptop or desktop of course, but it’s functional. Web pages load decently. A bit slower than I’m used to, but workable. I played some full screen 1080p video from YouTube and it was fine. No lag, nice and crisp, audio in sync with video. I was impressed. Currently, services like Netflix and Hulu do not work. I assume that’s due to the DRM software not working on ARM. There might be ways around that. I haven’t had a chance to test it out yet.

Mostly I intend to use the computer for some lightweight dev stuff. The price, size and low power consumption make it a great machine to grab and go with. So I’ve been testing some of my existing code projects on it.

I got my blgo framework up and running. I had to download the ARM version of Golang, but that was easy enough. Definitely compiles and runs stuff slower than on my other machines, but perfectly fine for on-the-go hacking on stuff. It also definitely has some hard limits on memory. I can create a 300 frame animated gif. But ImageMagick crashes on anything longer than that. And for creating still images, I was able to do 1920×1080, and double that – 3840×2160, but beyond that things don’t render right. Once it even crashed me right out of my session, which I don’t think I’ve ever seen before. The computer was still running, but it kicked me back to the login screen.

I also got my bitlib_c library projects running with no problem whatsoever. I didn’t stress test those, but I assume they’ll have the same hard limits.

And I’ve currently got a Java project I started working on. Was able to install the latest OpenJDK and got that project running without a hitch.

Summary

Overall, I’m very happy with it. It’s a fun machine and completely functional once you know its limits. It’s a perfect machine for throwing in a bag and commuting with, or maybe traveling with or just bringing with you somewhere where you might want to use a computer but don’t want to bring your main machine. If you want a cheap Linux machine to hack on, this is way better than getting a Chromebook and trying to force Linux to work on that. I speak from experience.