Raspberry Pi Pico – What I’ve Learned

misc

So the new Raspberry Pi Pico board came out a few weeks back and there’s all kinds of news about it. For the uninformed, it’s a shift from other RPi boards. Most Pis are single board computers. They run an operating system – usually some Linux-based variant. The Pico is a microcontroller board. Basically an Arduino alternative. But a pretty damn powerful one. Two cores, fast, good memory, 26 GPIO pins, an analogue-to-digital converter, a real-time clock, a temperature sensor, etc. All for $4. So I grabbed a couple. By the way, if you’re ordering online, make sure that you buy the header pins. They are not always included by default.

There are a ton of tutorials online already, most of which are nearly exact clones of each other: solder the pins on, plug it in while holding down the BOOTSEL button, drag the MicroPython UF2 file onto the mounted drive, let it reboot, install Thonny, write a script to make an LED blink. So I’m not going to go through all that. Or you could say I just went through all that in one sentence.

One thing I highly recommend is the “Get Started with MicroPython on Raspberry Pi Pico” book put out by the Pi people themselves. It’s a bit cartoony and starts off really basic. In fact, going all the way through chapter 4 will get you through what most of the online tutorials cover, but with a lot more depth. There are several pages on how to solder the pins on. But further chapters get into some pretty good stuff, including various sensors and controls and I2C and SPI control of an LCD panel. Good starter stuff.

Stuff I’ve learned

Ran into lots of snags going through the process of learning this board and figuring out what I even want to do with it. A lot of this has to do with the fact that it’s really in its infancy. I’m sure that things will get better as time goes on, but there are a lot of rough edges right now. And beyond the book, there’s very little searchable info out there. Unfortunately, there are other boards/technologies out there named “pico”, so that clouds your results. Throw “Raspberry Pi” into the search and you’re mostly going to get other RPi stuff. But even when you craft a good search, mostly what you’re going to find is the multiple cloned tutorials mentioned above. There is also a scattering of C/C++ tutorials and resources for the Pico. They look dauntingly complex, so I haven’t dived into those yet.

Another problem is that the version of MicroPython that was made for the Pico is a fork of the official version. And it’s very definitely a subset. The official MicroPython documentation is fantastic, but huge swaths of it are just inapplicable to the version that works with the Pico.

For example, MicroPython has a machine module. In the official version, machine contains the following classes:

class Pin – control I/O pins
class Signal – control and sense external I/O devices
class ADC – analog to digital conversion
class UART – duplex serial communication bus
class SPI – a Serial Peripheral Interface bus protocol (master side)
class I2C – a two-wire serial protocol
class RTC – real time clock
class Timer – control hardware timers
class WDT – watchdog timer
class SD – secure digital memory card (cc3200 port only)
class SDCard – secure digital memory card

Several of these are not available on the Pico right now. Since the Pico doesn’t have an SD card slot, the biggest miss is the RTC class – the Pico has a real-time clock, but no way to access it directly. That doesn’t seem too bad, but it extends from there. On the Pico version of MicroPython, the Pin class is missing almost half of its methods. Other classes are missing methods as well, and there are several standard Python modules that are part of MicroPython that are missing from the Pico’s version.
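
An easy way to see exactly what your firmware exposes is to poke at the module from the REPL. A quick sketch – the output will vary with whatever build you’ve flashed:

import machine

# list the classes and functions this port actually provides
print(dir(machine))

# classes the port doesn't ship simply aren't there
try:
    rtc = machine.RTC()
except AttributeError:
    print("no RTC class on this port")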

All that said though, there is enough there to get you started on most common projects, and as I said, I’m sure this will grow and become more expansive in the coming months.

Tips and Tricks

A few random tricks and tips for using MicroPython on the Pico:

When you install Thonny, the recommended MicroPython editor, it will ask you if you want to run in regular mode or in Raspberry Pi mode. Naturally, I chose Pi mode. And naturally I was wrong. Although it seems to work fine, you wind up missing a lot from the UI.

Here’s the Pi mode:


Thonny in Raspberry Pi mode, aka “simple”

And here it is in regular mode:

Thonny in regular mode

I was running in Pi mode for a while and thought it was pretty damn lame. Then I discovered you could go into the options and change it to regular mode. (Also that Pi mode is really called “simple” mode, and there is also an “expert” mode.)

Regular mode gives you full menus, which opens up a massive wealth of features I did not realize even existed. For example, I could find no way to copy files onto the Pico other than opening them up in Thonny and then saving them to the device. But the View menu lets you open a files panel (and a lot more) where you can access files both on your device and your local file system, and “upload” and “download” between the two. Brilliant.

Another gotcha from the book. The book states that GPIO input pins are set to be pulldown by default. Pulldown inputs are attached to ground with a resistor, and connecting them to positive voltage triggers them. Pullup inputs are connected to a positive voltage with a resistor, and connecting them to ground triggers them. If you didn’t pull them one way or the other, stray charges could trigger the inputs. Anyway, the book says that input pins are pulldown and that you can create a pin by typing:

pin = machine.Pin(15, machine.Pin.IN)

I did this while testing physical buttons to make an LED turn on. And I was getting all kinds of crazy behavior. If I touched a wire or even just tilted the whole breadboard in a certain direction, the LED was coming on as if I’d pressed the button. I honestly thought that I’d accidentally tapped into some internal accelerometer at one point – I was totally able to control the LED by tilting the board back and forth. After looking up the official MicroPython docs for creating a pin, I found there was a third parameter for controlling pullup/pulldown. So I changed it to:

pin = machine.Pin(15, machine.Pin.IN, machine.Pin.PULL_DOWN)

When I did that – explicitly setting the pin to be pulldown – it was rock solid and worked 100% as expected.
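
For reference, here’s a minimal sketch of the kind of test I was doing – assuming a button wired from GP15 to 3V3, and using the onboard LED on GP25:

import machine
import utime

# pull the input down so it reads 0 until the button connects it to 3V3
button = machine.Pin(15, machine.Pin.IN, machine.Pin.PULL_DOWN)
led = machine.Pin(25, machine.Pin.OUT)

while True:
    led.value(button.value())
    utime.sleep(0.01)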

Another question I finally answered for myself is how to have a program run as soon as the Pico boots up. Simple enough – you just name the Python script main.py.

An Alternative Python

There is an alternative to MicroPython though, and that is CircuitPython. As I understand it, CircuitPython is another fork of MicroPython and it is supported on a multitude of microcontroller boards, including the Pico. Because it’s been around a lot longer than the Pico, it is way more expansive in what it supports and has a ton (like 280+) of libraries that work with it. CircuitPython is supported by Adafruit, and they’ve also created some really useful libraries, many specifically designed to work with the hardware they sell. So all in all, this is a really great option.

You install CircuitPython the same way you flash the Pico with MicroPython – download a UF2 file, start the Pico while holding down the BOOTSEL button, and drag the UF2 to the mounted drive. It will reboot and be a CircuitPython device from then on.

One thing to say about CircuitPython is that it is quite different from MicroPython. Although it may be a fork, they forked the hell out of it.

For example, a simple program to blink an LED in MicroPython:

import machine
import utime

led = machine.Pin(25, machine.Pin.OUT, machine.Pin.PULL_DOWN)

while True:
    led.value(1)
    utime.sleep(0.5)
    led.value(0)
    utime.sleep(0.5)

And the same program in CircuitPython:

import board
import digitalio
import time
 
led = digitalio.DigitalInOut(board.D13)
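# (board.D13 above is from Adafruit's generic example; on the Pico itself the onboard LED should be board.LED or board.GP25)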
led.direction = digitalio.Direction.OUTPUT
 
while True:
    led.value = True
    time.sleep(0.5)
    led.value = False
    time.sleep(0.5)

Not a huge deal, but I think it might be a good idea to choose one and roll with it. You’re not going to be able to run your MicroPython projects on your Pico after you flash it to CircuitPython (and vice versa – though you can simply reflash between the two systems relatively easily).

Personally, I’m torn at the moment. CircuitPython has way more stuff in it and so many existing libraries. But it’s much further removed from the standard language, and I’m hoping that the Pi folks will continue to make their implementation more expansive and that people will come up with new libraries for it as well.

On the other hand, CircuitPython already has some libraries allowing the Pico to act as an HID device. In other words, you can use it to send keyboard and mouse events to the host computer it’s connected to via USB, which is good for making custom keyboards or other controllers. And with 26 GPIO pins, this winds up being way more useful for complex projects like this than the Arduino Pro Micro clones that I’ve been using. I haven’t found anything in MicroPython to do the same thing, though there is some C++ code out there that seems to support this. I may just end up using both MicroPython and CircuitPython for a while.
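
To give a sense of it, here’s roughly what a one-button media-key script looks like using Adafruit’s adafruit_hid library. The wiring (a button from GP15 to ground) and the library being copied onto the device are assumptions on my part – I haven’t built this one yet:

import time
import board
import digitalio
import usb_hid
from adafruit_hid.consumer_control import ConsumerControl
from adafruit_hid.consumer_control_code import ConsumerControlCode

# button wired from GP15 to ground, using the internal pull-up
button = digitalio.DigitalInOut(board.GP15)
button.direction = digitalio.Direction.INPUT
button.pull = digitalio.Pull.UP

cc = ConsumerControl(usb_hid.devices)

while True:
    if not button.value:  # reads low when pressed
        cc.send(ConsumerControlCode.PLAY_PAUSE)
        time.sleep(0.3)  # crude debounce
    time.sleep(0.01)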

More about CircuitPython – the recommended editor is not Thonny, but the Mu Editor. For the most part it seems pretty basic, more like the simple mode of Thonny. No menus or extra panels or anything like the regular mode of Thonny – at least not that I could find. But as an editor, it does have some nice autocompletion and other features that I didn’t see in Thonny.

Something that confused me about Mu: I created a new file and saved it to the Pico and hit the run button, but the other default code.py hello-world type file ran instead. Nothing I could do would make my new file run. It turns out that’s by design. As mentioned, MicroPython devices will auto-run main.py on boot. And CircuitPython will auto-run either code.py or main.py on boot. But Thonny will run whatever active script you have open when you hit run, whereas Mu will always run code.py (or presumably main.py) when you hit run. This can be good or bad. It mimics what will happen when the device boots, but makes it harder to test some other script. The solution recommended is to have code.py or main.py be a simple launcher script that in turn executes some other script. Anyway, good to know.
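
For example, code.py can be nothing more than a hand-off to whatever you’re actually working on (test_script.py is just a stand-in name here):

# code.py - runs automatically at boot and whenever Mu hits run
import test_script  # importing the module executes it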

Another big plus about CircuitPython is that when the device is connected to your computer it also mounts as an external drive. So if you want to add additional libraries to it, you can just drag and drop them to that drive. A bit easier than Thonny’s upload feature.

Summary

So that’s some of what I’ve managed to learn so far. As there is very little data out there at the moment, I hope some of this helps people just getting into this new, fun little board.

Edits/Additions

In the first version of this post I used the word “robust” several times, indicating that the version of MicroPython for the Pico was not as robust as it hopefully would be in the future and that CircuitPython was more robust. That was the wrong choice of word. Robust, in the context of code, means that it is able to handle and recover from various unexpected conditions. To say that something is not robust implies that it is unstable. That’s not what I meant to imply.

I changed the word to “expansive”. My intention was to say that the current version of MicroPython on the Pico does not support as many features as it hopefully will and to say that CircuitPython supports more features. Examples are, as I described earlier, the missing RTC class for the real time clock that is in standard MicroPython but not on the Pico. Also, stuff like HID support in the usb_hid library of CircuitPython, but nowhere to be seen in MicroPython (that I can find), as well as all kinds of libraries for working with specific sensors and peripherals that are available for CircuitPython. MicroPython has some of that, but nowhere near what I see in CircuitPython.

Learning CNC and Making a MediaBox

misc

Earlier this year I talked about my “Bit-Box” – a custom keyboard / program launcher / Stream Deck clone device. https://www.bit-101.com/blog/2020/07/bit-box/

The box was handmade, but I had purchased a 3D printed plate to hold the switches. A little later I had the idea of making my own plate out of wood. Initial tests – chiseling out a square hole for a single switch – worked pretty well, but as soon as I tried to cut out several adjacent holes, the wood between the holes kept chipping out.

I started thinking about using a CNC to do this, and eventually picked up a SainSmart 3018 PROVer.

It took a couple of hours to assemble. Pretty easy actually. And it only came with some relatively useless v-shaped engraving bits, so I ordered a set of flat endmills in different sizes. Since then I’ve picked up a bunch of different bits.

In terms of software, I’ve tried a few different options.

One is Inventables’ Easel. This is a web app made for the Inventables X-Carve CNC machine, but it can export gcode that can be used with the 3018. Easel has some decent features for free, but you have to pay for full functionality.

The other one I’ve used is Carbide Create. This is made for Carbide3D’s Shapeoko machines. It’s desktop software and is totally free. It also exports gcode. I like Carbide Create a lot better.

The basic flow is to create a set of simple 2d vector shapes – rectangles, circles, paths – then apply tool paths to each shape. For example, you’d specify that you want to use this rectangle as an outline shape that is cut 1/4″ deep. Or you want to use this circle as a pocket, 1/8″ deep. A pocket cut cuts the entire inner area of a shape to a certain depth. You can also do boolean operations to combine or subtract different shapes. It’s super basic, but really does most of what you’d need.

If you want to really go crazy, you can get into 3d modeling with something like FreeCAD or Fusion360, and then create tool paths from those models. A much bigger learning curve and probably overkill until you get into some really complex stuff.

I use Candle to send the gcode to the machine itself.

MediaBox

My goal was to create a “MediaBox”. This is just what I call a custom mini keyboard with media keys – play/pause, next, previous tracks, volume up/down/mute. Six keys in all. Here’s an overview of all my attempts from one of the original hand-cut versions, some test cuts, a couple of failed attempts, through the final working build:

The initial holes for the keys worked perfectly. A 0.555 inch square hole is all you need. Spacing is something like 0.205 inches between keys.

The main design issue beyond that was where to fit the Arduino board and how to route the usb cable. I was initially using 1/2″ black walnut. On the top were the holes for the keys. I then flipped it over and created a recess on the bottom. But the half inch depth was really too shallow. And my original design was just too small once I attached the cable.

So I switched over to 3/4″ walnut and made the whole thing just a bit larger.

Wired it up much the same as I did for the BitBox. Did some finish sanding and applied some tung oil, glued on a leather bottom.

The software presented a bit of a problem. The Arduino keyboard library does not provide a way to send media key codes. Luckily there is another 3rd party library, HID-Project.

You can add this library to your project by going to Sketch / Include Library / Manage Libraries and searching for “hid project”.

Here’s the code I came up with:


#include <HID-Settings.h>
#include <HID-Project.h>

// Define Arduino pin numbers for buttons and LEDs
#define VOL_DOWN 2
#define VOL_MUTE 4
#define VOL_UP 3
#define PLAY_PREV 5
#define PLAY_PAUSE 6
#define PLAY_NEXT 7

const long debounceTime = 30;
unsigned long lastPressed = 0;
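// uppercase = current reading for each button, lowercase = state from the previous pass (so each press only fires once)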
boolean A, a, B, b, C, c, D, d, E, e, F, f;

void setup() {
  pinMode(VOL_DOWN, INPUT_PULLUP);
  pinMode(VOL_MUTE, INPUT_PULLUP);
  pinMode(VOL_UP, INPUT_PULLUP);
  pinMode(PLAY_PREV, INPUT_PULLUP);
  pinMode(PLAY_PAUSE, INPUT_PULLUP);
  pinMode(PLAY_NEXT, INPUT_PULLUP);

  a = b = c = d = e = f = false;
  Consumer.begin();
  BootKeyboard.begin();
}

void loop() {
  if (millis() - lastPressed  <= debounceTime) {
    return;
  }

  lastPressed = millis();

  A = digitalRead(VOL_DOWN) == LOW;
  B = digitalRead(VOL_MUTE) == LOW;
  C = digitalRead(VOL_UP) == LOW;
  D = digitalRead(PLAY_PREV) == LOW;
  E = digitalRead(PLAY_PAUSE) == LOW;
  F = digitalRead(PLAY_NEXT) == LOW;
  if (A && !a) {
    Consumer.write(MEDIA_VOL_DOWN);
  }
  if (B && !b) {
    Consumer.write(MEDIA_VOL_UP);
  }
  if (C && !c) {
    Consumer.write(MEDIA_VOL_MUTE);
  }
  if (D && !d) {
    Consumer.write(MEDIA_PREV); // alternately MEDIA_REWIND
  }
  if (E && !e) {
    Consumer.write(MEDIA_PLAY_PAUSE);
  }
  if (F && !f) {
    Consumer.write(MEDIA_NEXT); // alternately MEDIA_FAST_FORWARD
  }
  a = A;
  b = B;
  c = C;
  d = D;
  e = E;
  f = F;
}

This was adapted from a few other sample projects I found, as well as the code I had for the BitBox. It works great.

Want one?

I made this for myself, but I’d love to make some more. The materials aren’t cheap though. Well over $30 for the wood, leather, Arduino, keys and key caps. Then the time for cutting, finishing, soldering. I’ve got to work out pricing and different options, and the best way to sell them, but contact me if you’re interested.

I’d also be open to selling just the wooden box, either finished or straight off the mill and you can buy the other parts and put it together yourself. It’s a fun project.

Or… if you have a CNC already, I’m going to post the Carbide Create file I used, with instructions, for free. Check back soon for that.

I Think Bluetooth is Finally OK

misc

Development of what became Bluetooth started way back in 1989. I think I first heard of it in the mid-2000s. People would use it to try to send contact info or other files between feature phones. As I recall, it had about a 50% chance of actually working. All of my attempts fell squarely in the failing 50%. So I ignored it for a few more years.

Then there were smartphones with Bluetooth and laptops with Bluetooth. There were Bluetooth mice and eventually Bluetooth fitness devices and smart(ish) watches. And they all SUCKED.

Bluetooth and Me: A History

Mice

Every Bluetooth mouse I had was slammed down on the desk in frustration at least once. And only very narrowly avoided being hurled across the room. When you’re using something all day every day, 99% uptime is unacceptable. I’d be in the middle of something and the mouse would just stop responding and I’d have to spend a minute or so reconnecting it. Then it might be fine for several more hours. I tried several and finally quit. I’m firmly in the wireless USB dongle camp now as far as mice go. Logitech’s MX Master 3 is glorious. It actually supports Bluetooth AND wireless. I think I tried an earlier version of the MX Master on Bluetooth and quit the first time it disconnected. The wireless dongle has never once failed me, and I’ve used many.

Headphones

Specifically, I’m talking about “earbuds” or what the kids call “IEMs” (in-ear monitors) these days. I’ve had multiple sets of these. Historically, they suffer from five issues:

  1. Poor audio quality.
  2. Discomfort due to weight.
  3. Poor battery life.
  4. Connectivity issues.
  5. Cost.

You could probably frame it as only getting to pick three out of those five points. Maybe. The point is, they play off each other. Better battery life means more weight and cost. Anyway, I never had a pair that I was happy with. In the end, the hassle of a cord (and these days a USB-C adapter) has always been less than the hassle of battery, discomfort, poor sound, and connection problems.

Speakers

I’ve also had multiple Bluetooth speakers. And I’ll even throw my car stereo system into this category. These have been so-so. Connectivity has often been an issue. Some good, some not so good. My car in particular is really bad. It always takes a minute or so and at least two tries to actually connect my phone.

The other thing that has killed me with Bluetooth speakers is that they’ve always had horrible performance with voice audio sources. Music is OK, but just about every one I’ve had cuts out in the silence between words. It will pick up again when it hears the next set of words, but routinely a few words will be lost from almost every sentence. I listen to a lot of podcasts and audiobooks, and that was always impossible with every Bluetooth speaker I had.

Fitness Devices / Smartwatches

I’ve had multiple running watches that had Bluetooth, as well as several Fitbits and an Android Wear watch. Generally, the Bluetooth has worked great. Until it stopped working great. When they decided to stop connecting via Bluetooth, it seemed like there was nothing I could do to get them to reconnect. Even rebooting the device and whatever device it was trying to connect to. But then at some point it would just start working again for however many days.

All this is to say that I’m not just someone who hates something they’ve never tried. I’ve had dozens of Bluetooth devices and every single one of them has caused me some level of frustration. And yet, I keep buying them, holding out hope. (Except mice. I’ve eternally given up on Bluetooth mice.)

But wait!

In the last couple of months, I’ve purchased three Bluetooth devices that I’m actually quite happy with!

Galaxy Buds Plus

For some reason, I decided to take another leap of faith and got another set of Bluetooth ear buds. I checked out a ton of reviews on these things and these seemed like a solid buy. The cost was $139 on Amazon, which isn’t cheap, but not exorbitant. I’ve been amazed at how happy I am with these things. There’s nothing I can say about these that is negative.

Battery life is great. They have the charging case, which itself can be charged wirelessly. I already have wireless chargers scattered around the house, so it’s super easy to just toss it on one of them.

Connectivity has been flawless. They connect instantly, never lose the connection.

They are comfortable. I use them with foam tips, which I always get for any earbuds. Never get uncomfortable. I’ve used them while running and they stay put and feel fine.

Sound is quite good. Most of the time I’m listening to podcasts and audiobooks on my phone. They sound great for that. To be honest, for music, I stick with my Sony Walkman NW-A55 and wired Ikko OH-1 IEMs. That’s been a life changing combination. But if I’m running with my phone and want to listen to music, I’ll use the Buds for that, as the music is just background at that point.

I’ve had these for two and a half months now and I can’t say enough good about them. These are the items that have finally sold me on the idea that Bluetooth has made it.

JBL Flip 5

Speaking of sound, I recently picked up a Bluetooth speaker. To be completely transparent, I got this for free. A while back I switched to Verizon Fios and out of the blue they sent me this $100 coupon for the Verizon store as thanks for switching. Lots of phones and phone cases, chargers and headphones, none of which I really needed. I didn’t really need a Bluetooth speaker, but this had pretty good reviews and came to $95 with tax, so why not?

It sounds good, connectivity even on multiple devices has been great, and it works flawlessly with audiobooks and podcasts. Huge battery with lots of listening time. Also, you can turn off the power on/off and Bluetooth connect/disconnect sounds, which has been a big annoyance on every other speaker I’ve had.

Garmin Forerunner 235

In the last month I started running again. I pulled out my old Garmin running watch, which I hadn’t used in … sadly, years. After a full day of charging and trying to get it running, with no success, I ordered a new Garmin watch, the Forerunner 235.

It’s very nice. It’s a full-on smartwatch (not Android), which you can add apps and watch faces to. I did set up a better watch face, but I’m not really interested in other apps. It does all-day heart rate and sleep tracking. The battery lasts a week if you’re not running. GPS while running will suck it down faster, but will still let you run for many hours without a problem.

It connects to the Garmin Connect phone app via Bluetooth and that’s been nearly perfect. When I finish a run, if I have my phone on me, it almost instantly syncs to the cloud via Bluetooth and the phone. If I don’t have my phone on me, it often syncs as soon as I walk into my driveway, with my phone inside the house. Downright impressive.

Summary

Bluetooth may have won me over. I look forward to seeing other quality implementations, though I’m not holding my breath on the mouse situation.

My Wireguard Setup

misc

Disclaimer

Someone has been submitting my recent posts to online tech news aggregators, where they are criticized for not being cutting edge or paradigm shifting enough. If you’ve been led to believe that this post will awe and amaze you, complain to the person who submitted it, not me. This is just my personal blog where I write about stuff that I’m doing, mostly technology based. It will not change your life. That said…

Background

I’ve had a “home server” for close to ten years now. It’s a Linux-based desktop PC. It acts as a file server, media server, backup server and a place to try out different things. I guess it’s what is now popularly called a “home lab”. All that’s great when I’m at home on my home network. I can stream movies and music, get files, ssh into the server and do whatever I need to do.

But when I’m out and about, traveling, working (when we used to go out and do stuff like that), I’d also like to have that same access. That’s all simple enough. You go into your router settings, do some port forwarding to that box and then you can stream, ssh, ftp, vnc, whatever. I’ve certainly done just that often enough. But as I became more security conscious, this started to worry me more and more. Having all those ports open into my main machine made me nervous. Yeah, they are behind passwords, or hopefully keys. I locked down ssh pretty tightly, but still worried about it, and all those other services. When I was on Xfinity for home internet, their management app provided a security section which listed all the various attempts to access different ports on the network with their IPs and locations. It was shocking. It became something that was not just theoretical. People were (and are) actually trying to hack into my network. That’s when I shut everything down.

Enter Wireguard

I’d heard quite a bit about Wireguard and it sounded like what I needed. I came upon this tutorial, which described exactly what I wanted to do in pretty clear terms:

https://zach.bloomqu.ist/blog/2019/11/site-to-site-wireguard-vpn.html

This all went together really well. It took a bit of learning and messing things up and fixing them, but I eventually got it all working really nicely and doing exactly what I need. Here’s my current setup:

  • Main wireguard server hosted on an inexpensive VPS in the cloud.
    • ufw set up to block all traffic other than specific ports from specific wireguard clients.
    • rinetd to forward any needed ports to my home server. Currently, that’s just the port that my airsonic server is running on.
  • wireguard client running on my home server.
    • airsonic music streaming server running there.
  • wireguard clients running on a couple of laptops, my Android phone and tablet. Each client has its own private key and the public key of the server. The server has its own private key and the public keys of each client.

With this setup I can ssh into the VPS from anywhere in the world, provided I’m doing it from one of the configured clients. Once I’m into the VPS, I can then ssh into any one of the other clients that has an ssh server running. I could use rinetd to forward ssh on specific ports to specific clients. But for now, that use case is not that common. When the world gets back to normal and I’m out of the house more, that will be useful.

I’ve got my airsonic server running on a specific port of my home server, let’s say it’s 1234. rinetd is set up to forward port 1234 on the VPS to port 1234 on the home server. So I can access my music in the browser from any wireguard client, or I can use any one of many subsonic-compatible Android apps and have my music streaming to my phone or tablet no matter where I am.

This setup is pretty flexible, and I will be able to add other services to it just by opening up a port in ufw and forwarding it as needed using rinetd. The important thing to remember is that when I say “opening up a port in ufw” I mean a port accessible only to wireguard clients. Nothing is open on the VPS except via wireguard. Nothing is open on my home server except via the VPS or local LAN.

Monitoring and Recovery

One downside to this setup is that to access my music, for example, I’m relying on a chain of multiple links: wireguard on the VPS, ufw, rinetd, wireguard on the home server, airsonic. If any one of those doesn’t function just right, I’m listening to silence. This has happened a couple of times, especially when I first set things up and had some things not quite right. Actually, if ufw goes down, I’ll still be able to listen to my music, but my VPS will be open. So I wanted to get some monitoring in place. When things were down early on, I’d be making assumptions about which piece was broken and spending time trying to fix it, only to find out it was one of the other links. With correct monitoring, I can now tell exactly what is up and down.

Monitoring with Healthchecks

I’ve been a big fan of Healthchecks.io. You set up “checks”, each of which provides you with a URL to ping. If a check doesn’t get a ping within a specified time period, it notifies you via email, SMS, or through more than twenty other integrated services. I’ve been using it to monitor my daily backups. If a backup doesn’t happen at a specified time, I know about it.

So I set up a cron job that runs a script every 10 minutes on my VPS, and a similar one on my home server. This script first checks the status of wireguard. If it’s up, it pings Healthchecks. It does the same for rinetd and ufw. My home server checks wireguard and airsonic. Each of these five services is set up as a separate check in Healthchecks so I can see the status of each of them separately. The cron job runs every 10 minutes, so I give it one extra minute leeway – if Healthchecks doesn’t get a new ping after 11 minutes, that service is marked as down.

Recovery

Eventually I realized that if a particular service was down, once I became aware of it, I’d just go to whatever machine and restart it – so why not just do that automatically? So I built that into each of my checks.

If, say, wireguard is down on the VPS, it will NOT send the ping to Healthchecks. So a minute or so later it will be flagged as being down. But in this case, the script will also automatically try to restart wireguard. The next time it runs (10 minutes later), hopefully it sees that wireguard is up and sends the ping.
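
The actual checks are just small scripts run from cron, but the logic is simple enough to sketch out. Something like this – shown in Python purely for illustration, with the service name and ping URL as placeholders:

import subprocess
import urllib.request

SERVICE = "wg-quick@wg0"  # placeholder: whatever unit wireguard runs under
PING_URL = "https://hc-ping.com/your-check-uuid"  # placeholder Healthchecks URL

def service_is_up(name):
    # `systemctl is-active` exits 0 only if the unit is active
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

if service_is_up(SERVICE):
    # healthy: ping Healthchecks so the check stays green
    urllib.request.urlopen(PING_URL, timeout=10)
else:
    # down: skip the ping so Healthchecks flags it, and try a restart
    subprocess.run(["sudo", "systemctl", "restart", SERVICE])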

Healthchecks also has a “grace period” configuration. Once it notices something is down, it will not alert you until that grace period is done. I set this to 10 minutes. This results in the following sequence if something goes down:

  1. Service X is up and Healthchecks gets pinged at 10:00 pm.
  2. Service X goes down at 10:05 pm.
  3. At 10:10 pm, the script sees that Service X is down and fails to ping Healthchecks.
  4. The script also attempts to restart Service X.
  5. At 10:11 pm Healthchecks has not had a ping in 11 minutes and marks Service X as down.
  6. At 10:20 pm, the script runs again. Service X is up so it pings Healthchecks, which marks Service X as up again.
  7. Alternately, the restart didn’t work and at 10:20 pm no ping is sent.
  8. In this alternate case, at 10:21 pm, Healthchecks emails and texts me about the fact that Service X is down.

A potential improvement to this is that after step 4, when Service X is restarted, I could verify that it’s now working and ping Healthchecks immediately. This way, if the restart works, nothing is marked as down. But I’m going to run it as is for a while and see how this works out. So far, so good.

I’ve gone through and tested each one of these checks, turning the service off and leaving it off. Within 11 minutes it was marked as down and restarted. And shortly thereafter marked as back up. All automatically.

If this were some kind of public service or mission critical workflow, I could easily set up the pings for every minute or so. But the 10 minutes seems perfectly adequate for my purposes.

More Details?

This post is pretty high level. Most of what went into the wireguard setup is covered in the above link. If you want to set up something similar, I’d be happy to go into more detail on any specific points. Just let me know.

version 1.3

misc

Me again, talking about this silly version program still.

Actually, there are some pretty cool updates over the past few point releases. They came fast, one on the heels of another. The idea was posed to use the Linux package manager – apt or pacman or whatever – to get data on a program instead of relying on a hard-coded list.

Background

After some back and forth I warmed up to the idea, but as a backup to the known program list, not as a replacement. My reasoning is that you might have multiple versions of foo installed. Maybe one was through the default package manager, one through some download-and-run-an-install-script method. They might get installed to different locations in your PATH. But when you call foo on the command line, you’ll only get one of them.

If you query the package manager, it’s going to tell you about the one that it knows, which may or may not be the default. But when you run foo -v on the command line, you will get the one that’s going to be actually run in most cases. So that should be the first place we look. If version doesn’t know about foo then it can turn to the package manager.

Details

I decided to tackle two of the major Linux package managers first – apt (used on Ubuntu and most other Debian derivatives) and pacman (used on Manjaro and other Arch derivatives).

With apt, to find info about an installed package, say neovim, you’d type:

apt list neovim --installed

This will give you something like:

neovim/focal,now 0.4.3-3 amd64 [installed]

That 0.4.3-3 is the version number that we’re looking for. It took a bit of regex trickery, but I was able to parse that bit out of it.

On pacman you’d type pacman -Qi neovim and the result would look something like:

Name : neovim
Version : 0.4.4-1
Description : Fork of Vim aiming to improve user experience, plugins, and GUIs
Architecture : x86_64
URL : https://neovim.io
Licenses : custom:neovim
Groups : None
Provides : vim-plugin-runtime
Depends On : libtermkey libuv msgpack-c unibilium libvterm luajit libluv
Optional Deps : python-neovim: for Python 3 plugin support (see :help python)
xclip: for clipboard support on X11 (or xsel) (see :help clipboard) [installed]
xsel: for clipboard support on X11 (or xclip) (see :help clipboard) [installed]
wl-clipboard: for clipboard support on wayland (see :help clipboard)
Required By : None
Optional For : None
Conflicts With : None
Replaces : None
Installed Size : 20.45 MiB
Packager : Sven-Hendrik Haase svenstaro@gmail.com
Build Date : Wed 05 Aug 2020 04:16:43 AM EDT
Install Date : Fri 21 Aug 2020 07:37:52 AM EDT
Install Reason : Explicitly installed
Install Script : No
Validated By : Signature

So we can use grep and/or sed to find the one line that starts with Version : and grab the 0.4.4-1 part of it.

I then did basically the same thing for dnf, which is the package manager on Red Hat, Fedora, and derivatives.

So the process is:

  1. Check to see if version already knows about the program. If so, just do what it already does.
  2. If not, check apt, pacman and dnf. First we can just check to see which of those exists and only run that one. It’s unlikely that many people will have more than one of them. If we find one, we do the parsing and spit out the version it tells us about.
  3. If those all fail, then we can just tell the user we couldn’t find any information on that command.
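
version itself is a bash script, but the package-manager fallback is easy to sketch out. Here’s the pacman case in Python, purely as an illustration of the idea (the regex here is my own, not the one the script actually uses):

import re
import shutil
import subprocess

def version_from_pacman(pkg):
    # only try pacman if it actually exists on this system
    if shutil.which("pacman") is None:
        return None
    out = subprocess.run(["pacman", "-Qi", pkg], capture_output=True, text=True).stdout
    # grab the value from the "Version : 0.4.4-1" line
    match = re.search(r"^Version\s*:\s*(\S+)", out, re.MULTILINE)
    return match.group(1) if match else None

print(version_from_pacman("neovim"))  # e.g. 0.4.4-1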

Can we do more?

There are all kinds of other package managers on both Linux and Mac. I started making a list of the different ways you can find and install software and came up with:

  • snaps
  • flatpaks
  • pip
  • npm
  • homebrew / linuxbrew

There are others, but those cover a huge amount of ground. And it turns out that most of them could be handled with the same general strategy:

  • Does this package manager exist?
  • Does it know about this program and what info does it have?
  • Parse out the version number from the info it returns.

So, now version supports all of those. It just looks at each one of them in turn until it finds one that gives an answer.

This also has the added functionality of being able to return the version of more than just executable programs. Package managers know about various libraries and other assets that aren’t directly executable or don’t have any way of querying them directly for their version. But version can tell you about them. Want to know what version of libusb you have installed? Typing version libusb will tell you.

A personal perk of doing this project is that I was forced to really learn grep and sed. Two programs that ranged from confusing to very mysterious in my mind. Now I get them and really like them. I wrote something up about them too: https://www.bit-101.com/blog/2020/09/grep-and-sed-demystified/

Catalina VM

misc

I’m not a big fan of Apple. Their products themselves are fine, for the most part. Not to my preference in a lot of ways, but that’s fine. I know plenty of people who love their iPhones and Macbooks and watches. I’m not going to argue. I definitely don’t like the company though. I perceive them as being overly controlling, developer hostile, and incredibly narcissistic. All this is just a pretext for saying that I don’t want to spend any money on Apple hardware.

But now and then I do want to be able to test something on Macos. Some program or script or utility I’m working on, like version or my C or Go based graphics and animation libraries. It’s not my main target, but since command line tools developed on Linux are usually trivial to get working on Mac, I’m happy to test them out and make some minor tweaks so they work there.

I’ve been considering building a “Hackintosh” system. But with Apple’s plans of going to ARM possibly by the end of this year, I don’t want to invest a lot in hardware that’s going to be obsolete soon.

This led me to see if it was possible to get Macos running in a VM. And I found this project:

https://github.com/foxlet/macOS-Simple-KVM

This is a git repo with a couple of scripts that create a qemu VM, download the official Macos installer image and run that installer in that VM. As far as I know, it runs on Linux only.

  1. Install the dependencies.
  2. Check out the repo.
  3. Run the jumpstart.sh script. This even allows you to choose which version you want. Defaults to Catalina.
  4. Create a virtual disk image and add that to the basic.sh script.
  5. Run the basic.sh script and choose install.

The UI that comes up is called “Clover”.

The UI actually confused me at first. I was trying to click things, but you navigate around with the arrow keys and use Enter to choose things. Choose the default option shown here to install.

This boots up a Macos system right off the install media. The first thing you’ll need to do is format that virtual disk you created. Then run the installer and tell it to install Macos to the disk you just formatted.

The install takes a long time. Like close to an hour. At points it says it has one minute remaining and hangs there forever, before saying it’s calculating the remaining time, and hanging there forever. But be patient, it will finish after rebooting a couple of times.

When it’s done, it will boot right into the OS, asking you to set up a username and password. And then you’re in a full Macos install. Use Control-Alt-F to toggle full screen and Control-Alt-G to toggle capture of the keyboard and mouse. You can shut down as you’d usually shut down a Mac or just close the VM window.

When you boot back in, you’ll get the Clover screen again. This time choose the last option in the top row to boot.

This is the one that messed me up the first time. I just hit enter and wound up in the install flow again.

Performance

By default, the basic.sh script allocates 2 GB RAM and a minimal amount of CPU resources. It’s also hard-coded to 1280×720 resolution I think. Read through the documentation to find out how to beef up the VM. I gave mine 8GB and a lot more CPU. I also got it running at full resolution on my 2560×1440 (or something like that) monitor. With the extra resources, it’s surprisingly performant. I mean, I’m not going to be doing gaming or video editing or trying to run XCode on it, but for browsing the web, regular apps, anything console-based, it’s perfectly adequate.

Once you go full screen with it, it’s honestly hard to tell it’s not the real thing. It is a bit laggy on my Thinkpad with an i5 CPU, but pretty zippy on my Ryzen 5 3600 desktop. I’ve installed various tools, utilities and other programs, as well as Homebrew and a bunch of packages from there. I haven’t had any problems with it so far at all.

Although the install was slow as hell, it now boots up fully in under a minute for me. That’s good because it’s not something I want running all the time. Giving it all those resources means the underlying Linux OS does not have access to them. You can pause the VM though, which stops it from using any CPU power. I haven’t checked if that affects the memory use though. I’d guess not so much.

Here is Catalina at 1920×1080 on my Thinkpad

Legality

Who knows. Use at your own risk. Just to avoid any problems, I did not sign in with my Apple ID. I’ve never heard of anyone getting sued for running an OS in a VM.

Summary

I’m pretty excited about this. This is perfect for my use cases of testing things out here and there. I’m not expecting it to be a full replacement for actual Apple hardware running Macos. And I don’t need that.

I imagine once Apple switches over to ARM, this project will be obsolete. Hopefully someone figures out a way to continue it with their new architecture.

A Cooler Cooler from Cooler Master

misc

Some weeks ago I shared my new PC build. It’s been wonderful. Working perfectly. One thing I did recently was put in another 500GB drive to hold my VM images. While I was in there, I moved the front mounted drives around to the back and did some better cable arranging.

One thing I had planned to do for a while was add a new CPU cooler. After reading some reviews and watching some Youtube videos, I settled on the Cooler Master Hyper 212 Black Edition.

I probably didn’t really need this. But I think it’s a good investment. I’d been using the stock cooler that came with my Ryzen 5 3600. It did the job adequately. I’m not into any kind of crazy overclocking or anything, and wasn’t having any real heat problems. In general use, with a browser and a few tabs open, terminal and some music playing, maybe Slack open, the CPU would be in the high 30s to mid 40s C. Unless it was completely idle it wouldn’t stay in the 30s much. More in the low 40s. With Slack and some more active tabs open, it’d get into the 50s, maybe some short peaks into the 60s. Completely idle with nothing running, it would be mid-to-high 30s.

With the new cooler, I can see an immediate difference. Idle with just a browser and several tabs, it will settle down to 31-32. I never saw it go that low with the stock cooler. So it’s promising. I only put it in this morning, so I’ll give it a few days to see how it performs under daily loads, but it seems like a good improvement so far.

Installation wasn’t too bad. Took off the old cooler and cleaned up the thermal paste. You have to use their custom back plate, so I removed the old one and set up the new one. This was probably the most complex part. The cooler is designed to fit on a number of different socket types. You have to install various posts or screws and clips and move them to various positions depending on which socket you have, as well as a couple different types of brackets that go on the cooler itself. There’s a decent manual with Ikea-like diagrams for each configuration. I managed to get it right the first time without too much difficulty.

I came very close to forgetting to put on fresh thermal paste, but caught myself before tightening any screws. Getting the four screws on the brackets started was a bit tricky. They’re all spring mounted and wobble around. You have to apply a little pressure and get them at the right angle to get them going. But once they were started, it was easy to tighten them up by alternating corners.

The cooler comes with a single 120mm Silencio fan. The instructions have you mount it in a configuration that pushes air across the cooler and towards the back fan, which makes sense. But you can also purchase and add a second fan to mount on the other side of the cooler, for a push-pull configuration. There’s also an RGB version, but I think I have enough RGB going on in there as it is.

The cooler itself is pretty tall and the reviews all said to make sure you have enough room in your case. My case is ludicrously cavernous, so there’s plenty of room to spare, but it’s good advice to check.

version 1.0

misc

A while back I posted about a script I wrote called version.

https://github.com/bit101/version

You pass it the name of a program and it tells you what version of that program you have installed. Example:

version java

This saves you from having to remember if it’s java -v, java --version, java -V or something else (no spoilers).

version now knows how to get the version of 156 different programs (including itself). It has 9 contributors and 15 stars. Not exactly React, but it’s cool to have people contributing.

In the original proof of concept, I was using bash case statements. In fact, this was the entire first iteration:

#!/bin/bash

case $1 in

  java)
    $1 -version
    ;;

  gcc | rustc)
    $1 --version
    ;;

  node | perl | lua)
    $1 -v
    ;;

  python)
    $1 -V
    ;;

  go)
    $1 version
    ;;

esac

Once I started adding more programs though, it became obvious that this wasn’t going to work. I discovered that bash and zsh support a form of associative arrays. I thought that would be the perfect thing. It would look something like:

declare -A tools
tools[gcc]=--version
tools[java]=-version
tools[node]=-v

Sadly, these are not supported in bash 3, which is still in use. In fact, my MacBook Pro has bash 3 on it.

Plan C was to fake associative arrays. Essentially, you just make a bunch of variables, one for each tool, with a common prefix:

tools_gcc=--version
tools_java=-version
tools_node=-v

I just had to do a bit of fancy regex with grep and sed to look up the right flag based on the name of the tool that was passed as an argument. The initial pass with this method was pretty ugly, and I didn’t really understand what I had done. One of the contributors made some nice changes, and this led me to learning a lot more about grep and sed, and I was finally able to get rid of grep altogether and do it all in sed. I was pretty happy with that. sed has always seemed like one of those arcane tools that only wizards knew how to use.

I also learned how to make man pages. And I made an install and uninstall script. One of the other contributors has been working on making a snap package, but from what I can tell that’s probably not going to work too well due to the strict confinement of snaps.

Anyway, I don’t think there’s a whole lot more to be done with the simple tool. Hopefully people will still find new programs to add to it. But I figured it was done enough to slap a 1.0.0 sticker on it.

Git-based Wiki

misc

For many years I’ve bounced around using different tools to save information that I might need later. I’ve used MS OneNote, Evernote, Workflowy, Dynalist, Notion, several other hosted and self-hosted wiki systems, and probably many other things.

If I had to name a favorite out of all those, I’d go with Workflowy. It’s a super simple text outliner. You start with a single top level page. Each page is a list of items, and can each have a nested sub-list, with effectively unlimited depth. But you can also focus on any node so that it becomes a page in itself. Dynalist is very much the same, but you get multiple lists and can add images, fancy formatting, check boxes, all kinds of other groovy features. On the surface it sounds a million times better, but with all those bells and whistles, I felt like I was losing the elegant simplicity of Workflowy.

But I digress.

One thing I didn’t like about most of those systems is that someone else owns your data. And if it’s an online system, it might be hard to access offline. Some of the wiki systems do run locally, but then you might lose the online functionality.

Some months ago I came up with a system that I am now totally sold on. It’s super simple and involves only github (or gitlab or bitbucket or a self-hosted git repo or whatever) and markdown files.

The System

Honestly, the system itself is so simple that as I start to describe it, it seems almost so obvious I’m second guessing why I even have to describe it. But I did go through a few iterations before I got it down just how I like it.

Top Level

Start by creating a git repo (either online or locally). Create a folder called docs and a README.md file. In that main file create and maintain a bullet list of links. This is your index to all the top level pages in your wiki. Each link should be to another markdown document that lives in the docs folder. Here’s a sample:

# My Personal Wiki

- [Stuff to Buy](docs/stuff_to_buy.md)
- [Movies to Watch](docs/movies_to_watch.md)
- [Birthdays](docs/birthdays.md)
- [Projects](docs/projects.md)

The reason for the top level docs folder is because when you first go to your wiki, it’s going to show a list of all the files and then render your README.md file below that. If you have a ton of documents in the root of your project, your readme is going to scroll right off the page. With this setup, there will always only be two top level items, the folder and your readme. This is what it looks like when you go directly to the repo:

Main Documents

Now, within your docs folder, create a markdown file for each item in the index. A nice trick I do here is to create a link back to the main readme file as the first line in the file. For example:

[Parent](../README.md)

# Stuff to Buy

- Food
- Drink
- Pencils
- Paper
- Lamborghini

Now you’ll have a link back to where you came from. Here’s that in action:

Of course, you can link out to other content here too:

[Parent](../README.md)

# Movies to Watch

- [The Godfather](https://www.imdb.com/title/tt0068646/?ref_=fn_al_tt_1)
- [The Big Lebowski](https://www.imdb.com/title/tt0118715/?ref_=fn_al_tt_1)

You can also do fancy stuff like include images, or even make tables:

[Parent](../README.md)

# Birthdays

| Who   | When     | Gift      |
|-------|----------|-----------|  
| Paw   | 01/01/20 | Whiskey   |
| Maw   | 05/08/23 | Flowers   |
| Sis   | 08/09/42 | Gift Card |

Which gives you:

Sub-categories

Now you can start making folders for sub-categories. For example, the projects page looks like this:

[Parent](../README.md)

# Projects

- [Project A](project_a/index.md)
- [Project B](project_b/index.md)

The Project A link goes to project_a/index.md and similarly for Project B. And both of those files will link back to docs/projects.md as their parent:

[Parent](../projects.md)

# Project A

This is all about Project A

Naming these index.md rather than README.md breaks the symmetry between the top level and lower levels, but that’s fine with me.

And of course, those projects can contain additional documents or even nested folders. And as long as you include those parent links at the top of the page and use those for navigating back, you’ll always be viewing rendered markdown files.

This is just one way of setting up sub-categories. Totally up to you how you want to organize it. That’s the beauty of it.

Benefits

Things I love about this system:

  1. Your content is available online from wherever you can log into github. But at the same time, you can always keep a complete local backup on any computer you want. Changes are just a pull away. There’s no PITA “export your data in json format” kind of process. Keep it up to date all the time. You can even do a local push/pull via a cronjob once a day, or more or less often.
  2. Online content is nicely rendered in HTML on the web, but also completely readable and editable on your local system. Of course, if you are using a simple text editor, you might miss out on the clickable links – and images.
  3. Editing content online is one click away. Just click the pencil icon as you’re viewing the markdown file and you’re in edit mode. Save and you’re back to an HTML view.
  4. If you do end up in an offline situation you can still edit your local files. If you wind up with a conflict, you’re not at the mercy of the tool in how it’s going to resolve it. It’s a straight up git merge, conflict resolution workflow that you’re already familiar with. Full control.
  5. Of course, it’s git, so every single change to every file you make is a commit, tracked, revert-able, never lost.
  6. Get sick of github? Pull locally, push it to gitlab instead. Or wherever.
  7. No special tools, frameworks, or other systems needed. Just markdown files.

Overall, I’ve really come to rely on this system. I’m constantly thinking of things to add to it. Every time I figure something out or find some interesting bit of data I know I’ll want to come back to, it goes in the wiki. I find myself actively using it all the time.

Potential Improvements

One of the things I’ve thought about is automating some of the work. The main readme and sub-category index files could potentially be auto-generated. It wouldn’t be rocket science. Iterate over each markdown file and make a listing. And for each folder, look for the index file in that folder and add that to the listing. Then re-write that readme/index file with the updated listing. Recursively travel through the folders. If you had a script or tool that did this, it could be run via github actions perhaps.
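
A minimal sketch of what that generator might look like – this is just the idea, not something I’ve actually built or run against my wiki:

import os

DOCS = "docs"

def title_of(md_path):
    # use the first markdown heading as the link text, fall back to the filename
    with open(md_path) as f:
        for line in f:
            if line.startswith("#"):
                return line.lstrip("#").strip()
    return os.path.basename(md_path)

lines = ["# My Personal Wiki", ""]
for name in sorted(os.listdir(DOCS)):
    if name.endswith(".md"):
        lines.append("- [{}]({}/{})".format(title_of(os.path.join(DOCS, name)), DOCS, name))

with open("README.md", "w") as f:
    f.write("\n".join(lines) + "\n")

# the same idea could be applied recursively to each sub-folder's index.md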

If I were just editing locally, I could easily set up an editor command to create the file and link to it. But I find myself using the wiki more often online than locally. So I have to keep thinking about all this.

Example

Here’s a link to the sample wiki I created just for this post. (My real personal wiki is private.)

https://github.com/bit101/wiki_example

Feel free to fork it or just copy the idea and make it better. If you come up with any bright ideas on how to improve it, please share back.

Pinebook Pro Tips

misc

For those of you who actually have, or are thinking about getting a Pinebook Pro and would like to know how to avoid the stupid mistakes I made, here are some specific details I learned.

Get the Right Image

I wanted to install the Manjaro XFCE version. So I wanted the image for that. This was my first point of confusion. I knew I wanted an eMMC installer image. There are links here: https://wiki.pine64.org/index.php?title=Pinebook_Pro_Software_Release#Manjaro_ARM

If you get an image direct from Manjaro, it will NOT be an eMMC boot image. You have to go to osdn.net. And even there, it’s confusing. First you’ll see a list of releases.

Naturally, you’ll go for the latest one, currently 20.08, and you’ll see this:

None of those are eMMC installers though. If you use any of those, it will only install Manjaro onto the SD card that you booted from. You have to go back, in this case, to 20.04. Then you’ll see this:

Oho! An actual eMMC installer image. This would have saved me many hours had I looked beyond the latest release. And don’t forget that Manjaro is a rolling release, so it doesn’t really matter which one you get, it’s going to be fully up to date as soon as you do your first update. At least that’s mostly true. There may be minor differences that I don’t know about, but effectively, it’s fine.

Run the Installer. Hit Escape!

Burn the installer onto at least an 8GB SD card and boot the PBP with that card inserted. It’s going to hang on the loader screen. At least at the time of this writing anyway. There’s a bug in the eMMC image that causes this. Just hit escape and you’ll be in the installer.

It will walk you through some choices that should be mostly obvious. You want to choose to install to the eMMC. That should be mmcblk2. Whereas the SD card you booted from should be mmcblk1.

Note: Once you’ve got everything installed, you can put a non-bootable SD card in the slot and use it for extra storage. It won’t affect the boot process.

The Manjaro ARM Installer

The Pine64 wiki also mentions this script:

https://gitlab.manjaro.org/manjaro-arm/applications/manjaro-arm-installer

It sounds lovely. Install the script, run it, choose your distro, choose the target, sit back and it does everything. This is what soft-bricked my PBP though and caused me hours of despair. I may have just been unlucky. I haven’t seen reports of others having the same trouble. But personally, I won’t try that method again.

Fixing a Soft Brick

As I have come to understand it, the PBP will boot first from a bootable SD card if one is inserted. Otherwise it will try to boot from the eMMC module. However, you can wind up messing up your eMMC to the point where the PBP won’t boot at all. In some cases this means the power light won’t even turn on. This is what happened to me. It sure seems like a hard, fatal brick situation, but don’t despair. It’s fixable.

It has something to do with something called U-Boot, which I guess is kind of like a boot block / GRUB kind of concept, which controls the boot process. If that gets messed up, you ain’t booting nothing.

So you have to bypass that completely. You do that by completely disabling the eMMC.

  1. Open up the PBP. This is a simple matter of undoing 10 screws. The PBP is made to be easily open-able and internally upgrade-able. There’s a small switch there that disables the eMMC. Consult https://wiki.pine64.org/index.php?title=Pinebook_Pro#Pinebook_Pro_Internal_Layout to locate it. You want to move it to the position away from the hinge, towards the inside of the computer.
  2. Note: be VERY careful when using the computer with the bottom cover off. Pine warns that opening the lid without the cover on can damage the hinge. Of course you’ll need to open the lid a bit to boot it and see what’s going on. Just be very careful.
  3. Now you should be able to boot from a bootable eMMC installer image on an SD card. Verify this.
  4. Next you need to reboot and during the boot process, flip that switch back to enabled. If you do it too soon, you won’t be able to boot. Apparently, if you do it at exactly the right moment, 2 seconds into the boot process, the PBP will boot and the eMMC will mount. But if you are too late, the eMMC won’t mount. That’s fine though. The next step will handle that.
  5. As mentioned above, the eMMC installer image will hang. Just hit escape and you’ll be in the installer.
  6. You can try the install process at this point. The eMMC may have mounted. In the step where it asks you where you want to flash the image, if you see mmcblk2 you are good to go!
  7. There’s a good chance that the eMMC will not be mounted, so you’ll need to mount it manually. Choose “No” in the first screen of the installer and you should be dropped back into a root command prompt:

    [root@manjaro-arm ~]# _
  8. There are two commands you can type here that should mount the eMMC:

    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/unbind
    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/bind

    These are covered here: https://wiki.pine64.org/index.php?title=Pinebook_Pro#eMMC_information
  9. To verify that this worked, you can type lsblk and you should see mmcblk2 in that list.
  10. From there, type exit and you should be back to the installer, which should now work fine.

Resources

Read this entire page: https://wiki.pine64.org/index.php?title=Pinebook_Pro

Use the forums: https://forum.pine64.org/ – particularly the Linux on PBP forum.

The official Pine64 subreddit: https://www.reddit.com/r/PINE64official/