Catalina VM

I’m not a big fan of Apple. Their products themselves are fine, for the most part. Not to my preference in a lot of ways, but that’s fine. I know plenty of people who love their iPhones and MacBooks and watches. I’m not going to argue. I definitely don’t like the company though. I perceive them as being overly controlling, developer hostile, and incredibly narcissistic. All this is just a preamble to saying that I don’t want to spend any money on Apple hardware.

But now and then I do want to be able to test something on macOS. Some program or script or utility I’m working on, like version or my C- or Go-based graphics and animation libraries. It’s not my main target, but since command line tools developed on Linux are usually trivial to get working on Mac, I’m happy to test them out and make some minor tweaks so they work there.

I’ve been considering building a “Hackintosh” system. But with Apple’s plans of going to ARM possibly by the end of this year, I don’t want to invest a lot in hardware that’s going to be obsolete soon.

This led me to see if it was possible to get macOS running in a VM. And I found this project:

https://github.com/foxlet/macOS-Simple-KVM

This is a git repo with a couple of scripts that create a QEMU VM, download the official macOS installer image, and run that installer in the VM. As far as I know, it runs on Linux only. The basic flow (sketched as commands after the list):

  1. Install the dependencies.
  2. Check out the repo.
  3. Run the jumpstart.sh script. This even allows you to choose which version you want. Defaults to Catalina.
  4. Create a virtual disk image and add that to the basic.sh script.
  5. Run the basic.sh script and choose install.
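
Roughly, the commands look like this. This is a sketch from memory, not a copy of the repo’s README; the dependency package names, the --catalina flag, and the disk size are my assumptions, so check the repo docs before running any of it.

# dependencies (Debian/Ubuntu package names assumed)
sudo apt install qemu-system qemu-utils

git clone https://github.com/foxlet/macOS-Simple-KVM.git
cd macOS-Simple-KVM

# fetch the official installer media (defaults to Catalina)
./jumpstart.sh --catalina

# create a virtual disk, then wire it into basic.sh per the README
qemu-img create -f qcow2 MyDisk.qcow2 64G

./basic.sh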

The UI that comes up is called “Clover”.

The UI actually confused me at first. I was trying to click things, but you navigate around with the arrow keys and use Enter to choose things. Choose the default option to install.

This boots up a macOS system right off the install media. The first thing you’ll need to do is format that virtual disk you created. Then run the installer and tell it to install macOS to the disk you just formatted.

The install takes a long time. Like close to an hour. At points it says it has one minute remaining and hangs there forever, before saying it’s calculating the remaining time, and hanging there forever. But be patient; it will finish after rebooting a couple of times.

When it’s done, it will boot right into the OS, asking you to set up a username and password. And then you’re in a full macOS install. Use Control-Alt-F to toggle full screen and Control-Alt-G to toggle capture of the keyboard and mouse. You can shut down as you’d usually shut down a Mac, or just close the VM window.

When you boot back in, you’ll get the Clover screen again. This time choose the last option in the top row to boot.

This is the one that messed me up the first time. I just hit enter and wound up in the install flow again.

Performance

By default, the basic.sh script allocates 2 GB of RAM and a minimal amount of CPU resources. It’s also hard-coded to 1280×720 resolution, I think. Read through the documentation to find out how to beef up the VM. I gave mine 8 GB and a lot more CPU. I also got it running at full resolution on my 2560×1440 (or something like that) monitor. With the extra resources, it’s surprisingly performant. I mean, I’m not going to be doing gaming or video editing or trying to run Xcode on it, but for browsing the web, regular apps, anything console-based, it’s perfectly adequate.
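
The beefing up happens in the qemu-system-x86_64 invocation inside basic.sh. It ends up being something like the following two flags, though the exact values here are my guesses, not the script’s defaults, so read the docs:

-m 8G \
-smp 8,cores=4 \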

Once you go full screen with it, it’s honestly hard to tell it’s not the real thing. It is a bit laggy on my ThinkPad with an i5 CPU, but pretty zippy on my Ryzen 5 3600 desktop. I’ve installed various tools, utilities and other programs, as well as Homebrew and a bunch of packages from there. I haven’t had any problems with it so far at all.

Although the install was slow as hell, it now boots up fully in under a minute for me. That’s good because it’s not something I want running all the time. Giving it all those resources means the underlying Linux OS does not have access to them. You can pause the VM though, which stops it from using any CPU power. I haven’t checked if that affects the memory use though. I’d guess not so much.

Here is Catalina at 1920×1080 on my ThinkPad.

Legality

Who knows. Use at your own risk. Just to avoid any problems, I did not sign in with my Apple ID. I’ve never heard of anyone getting sued for running an OS in a VM.

Summary

I’m pretty excited about this. This is perfect for my use cases of testing things out here and there. I’m not expecting it to be a full replacement for actual Apple hardware running Macos. And I don’t need that.

I imagine once Apple switches over to ARM, this project will be obsolete. Hopefully someone figures out a way to continue it with their new architecture.

A Cooler Cooler from Cooler Master

Some weeks ago I shared my new PC build. It’s been wonderful. Working perfectly. One thing I did recently was put in another 500GB drive to hold my VM images. While I was in there, I moved the front mounted drives around to the back and did some better cable arranging.

One thing I had planned to do for a while was add a new CPU cooler. After reading some reviews and watching some YouTube videos, I settled on the Cooler Master Hyper 212 Black Edition.

I probably didn’t really need this, but I think it’s a good investment. I’d been using the stock cooler that came with my Ryzen 5 3600, and it did the job adequately. I’m not into any kind of crazy overclocking, and I wasn’t having any real heat problems. Completely idle with nothing running, the CPU would sit in the mid-to-high 30s C. In general use, with a browser and a few tabs open, a terminal, some music playing, and maybe Slack, it would be in the low to mid 40s. With Slack and some more active tabs, it’d get into the 50s, with short peaks into the 60s.

With the new cooler, I can see an immediate difference. Idle with just a browser and several tabs, it will settle down to 31-32. I never saw it go that low with the stock cooler. So it’s promising. I only put it in this morning, so I’ll give it a few days to see how it performs under daily loads, but it seems like a good improvement so far.

Installation wasn’t too bad. I took off the old cooler and cleaned up the thermal paste. You have to use Cooler Master’s custom backplate, so I removed the old one and set up the new one. This was probably the most complex part. The cooler is designed to fit a number of different socket types. You have to install various posts, screws, and clips and move them to various positions depending on which socket you have, as well as a couple of different types of brackets that go on the cooler itself. There’s a decent manual with Ikea-like diagrams for each configuration. I managed to get it right the first time without too much difficulty.

I came very close to forgetting to put on fresh thermal paste, but caught myself before tightening any screws. Getting the four screws on the brackets started was a bit tricky. They’re all spring mounted and wobble around. You have to apply a little pressure and get them at the right angle to get them going. But once they were started, they were easy to tighten up by alternating corners.

The cooler comes with a single 120mm Silencio fan. The instructions have you mount it in a configuration that pushes air across the cooler and towards the back fan, which makes sense. But you can also purchase and add a second fan to mount on the other side of the cooler, for a push-pull configuration. There’s also an RGB version, but I think I have enough RGB going on in there as it is.

The cooler itself is pretty tall and the reviews all said to make sure you have enough room in your case. My case is ludicrously cavernous, so there’s plenty of room to spare, but it’s good advice to check.

version 1.0

A while back I posted about a script I wrote called version.

https://github.com/bit101/version

You pass it the name of a program and it tells you what version of that program you have installed. Example:

version java

This saves you from having to remember if it’s java -v, java --version, java -V or something else (no spoilers).

version now knows how to get the version of 156 different programs (including itself). It has 9 contributors and 15 stars. Not exactly React, but it’s cool to have people contributing.

In the original proof of concept, I was using bash case statements. In fact, this was the entire first iteration:

#! /bin/bash

case $1 in
  java)
    $1 -version
    ;;
  gcc | rustc)
    $1 --version
    ;;
  node | perl | lua)
    $1 -v
    ;;
  python)
    $1 -V
    ;;
  go)
    $1 version
    ;;
esac

Once I started adding more programs though, it became obvious that this wasn’t going to work. I discovered that bash and zsh support a form of associative arrays. I thought that would be the perfect thing. It would look something like:

declare -A tools
tools[gcc]=--version
tools[java]=-version
tools[node]=-v

Sadly, these are not supported in bash 3, which is still in use. In fact, my MacBook Pro has bash 3 on it.
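
If you’re curious which bash you’re on, the shell will tell you directly:

echo $BASH_VERSION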

Plan C was to fake associative arrays. Essentially, you just make a bunch of variables, one for each tool, with a common prefix:

tools_gcc=--version
tools_java=-version
tools_node=-v

I just had to do a bit of fancy regex work with grep and sed to turn the tool name passed in as an argument into a lookup of the right flag. The initial pass with this method was pretty ugly, and I didn’t really understand what I had done. One of the contributors made some nice changes, and this led me to learning a lot more about grep and sed, and I was finally able to get rid of grep altogether and do it all in sed. I was pretty happy with that. sed has always seemed like one of those arcane tools that only wizards knew how to use.
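
Here’s a minimal sketch of the idea, not the actual code from version. The variable names come from the examples above, and the real script’s sed expression is different, so treat this as illustration only:

#! /bin/bash
# fake associative array: one prefixed variable per tool
tools_gcc=--version
tools_java=-version
tools_node=-v

# dig the right flag out of the shell's variable listing with sed
flag=$(set | sed -n "s/^tools_$1=//p")

if [ -n "$flag" ]; then
  "$1" "$flag"
else
  echo "don't know how to get the version of $1"
fi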

I also learned how to make man pages. And I made an install and uninstall script. One of the other contributors has been working on making a snap package, but from what I can tell, that’s probably not going to work too well due to the strict confinement of snaps.

Anyway, I don’t think there’s a whole lot more to be done with this simple tool. Hopefully people will still find new programs to add to it. But I figured it was done enough to slap a 1.0.0 sticker on it.

Grep and Sed, Demystified

I’ve kind of half understood grep for a while, but assumed that I didn’t really get it at all. I thought I knew nothing at all about sed. I took some time this weekend to sit down and actually learn about these two commands, and discovered I already knew a good deal about both of them, and filled in some of what I didn’t know pretty easily. Both are a lot simpler and more straightforward than I thought they were.

Grep

grep comes from “global regular expression print”. This is not really an acronym, but comes from the old ed line editor. In that tool, if you wanted to globally search a file you were editing and print the lines that matched, you’d type g/re/p (where re is the regular expression you are searching with). The functionality got pulled out of ed and made into a standalone tool, grep.

Basics

grep, in its simplest use, searches all the lines of a given file or files and prints all the lines that match a regular expression. Syntax:

grep <options> <expression> <file(s)>

So if you want to search the file animals.txt for all the lines that contain the word dog, you just type:

grep dog animals.txt

It’s usually suggested that you include the expression in single quotes. This prevents a lot of potential problems, such as misinterpretations of spaces or unintentional expansion:

grep 'flying squirrel' animals.txt

It’s also a good idea to use the -e flag before the expression. This explicitly tells grep that the thing coming next is the expression. Say you had a file that was a list of items, each preceded with a dash, and you wanted to search for -dog:

grep '-dog' animals.txt

Even with the quotes, grep will try to parse -dog as a command line flag. This handles it:

grep -e '-dog' animals.txt

You can search multiple files at the same time with wildcards:

grep -e 'dog' *

This will find all the lines that contain dog in any file in the current directory.

You can also recurse directories using the -r (or -R) flag:

grep -r -e 'dog' *

You can combine flags, but make sure that e is the last one before the expression:

grep -re 'dog' *

A very common use of grep is to pipe the output of one command into grep.

cat animals.txt | grep -e 'dog'

This simple example is exactly the same as just using grep with the file name, so it’s an unnecessary use of cat. But if you have some other command that generates a bunch of text, this is very useful.

grep simply outputs its results to stdout – the terminal. You could pipe that into another command or save it to a new file…

grep -e 'dog' animals.txt > dogs.txt

Extended

When you get into more complex regular expressions, you’ll need to start escaping the special characters you use to construct them, like parentheses and brackets:

grep -e '\(flying \)\?squirrel' animals.txt

This can quickly become a pain. Time for extended regular expressions, using the -E flag:

grep -Ee '(flying )?squirrel' animals.txt

Much easier. Note that -E has nothing at all to do with -e. That confused me earlier. You should use both in this case. You may have heard of the tool egrep. This is simply grep -E. On some systems egrep is literally a shell script that calls grep -E. On others it’s a separate executable, but it’s just grep -E under the hood.

egrep -e '(flying )?squirrel' animals.txt

Other Stuff

The above covers most of what you need to know to use basic grep. There are some other useful flags you can check into as well:

-o prints only the text that matches the expression, instead of the whole line

-h suppresses the file name from printing

-n prints the line number of each printed line.
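
For example, to print each matching line along with its line number:

grep -ne 'dog' animals.txt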

Simple Text Search

If the thing you are searching for is simple text, you can use grep -F or fgrep. It works the same, but you can’t use regular expressions; it just searches for a literal string.

grep -Fe 'dog' animals.txt
fgrep -e 'dog' animals.txt

Perl

There’s also grep with Perl syntax for regular expressions. This is a lot more powerful than normal grep regex syntax, but a lot more complex, so only use it if you really need it. It’s also not supported on every system. To use it, use the -P flag.

grep -Pe 'dog' animals.txt

In this simple case, using Perl syntax gives us nothing beyond the usual syntax. Also note that pgrep is NOT an alternate form of grep -P. So much for consistency.

Sed

I thought that I knew next to nothing about sed, but it turns out that I’ve been using it for a few years for text replacement in vim! sed stands for “stream editor” and also hails from the ed line editor program. The syntax is:

sed <options> <command> <file(s)>

The two main options you’ll use most of the time are -e and -E which work the same way they do in grep.

The most common use of sed is to replace text in files. There are other uses which can edit the files in other ways, but I’ll stick to the basic replacement use case.

Like grep, sed reads each line of text in a file or files and looks for a match. It then performs a replacement on the matched text and prints out the resulting lines. The expression to use for replacement is

s/x/y/

where x is the text you are looking for, and y is what to replace it with. So to replace the first occurrence of cat on each line with the word feline in animals.txt:

sed -e 's/cat/feline/' animals.txt

Note that sed will print every line of text in the file, whether or not it found a match. But the lines that it matched will be changed the way you specified.

After the final slash in the expression, you can add other regex flags, like g for global or i for case insensitivity.
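
For example, to replace every occurrence of cat on every line, ignoring case:

sed -e 's/cat/feline/gi' animals.txt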

Like grep, sed just outputs to stdout. You can redirect that to another file using > or pipe it to another process using |. But do NOT redirect the output back to the original file. (Try it some time on a test file you don’t care about: the shell truncates the file before sed ever reads it, so you’ll end up with nothing.) If you really need in-place editing, that’s what sed’s -i flag is for.

There are lots of other options you can use with sed but the above will probably get you by for a while to come. As you need more, just read up.

Summary

The biggest thing I took away from the couple of hours I spent on this was actually how easy these two commands were to learn. I’d been avoiding doing so for a long time and now wish that I had spent the effort on this much earlier.

New Shell Script Shortcut

I’m often making shell scripts for various things – sometimes just a quick one for a specific task, sometimes just to test something out, sometimes for some kind of workflow task that I plan to keep around.

It’s always the same steps:

  1. Create the file.
  2. Add the header: #! /bin/bash
  3. Write the code.
  4. Save it.
  5. Exit the editor.
  6. Make it executable with chmod +x <filename>
  7. Run, test, edit, etc.

When you’re repeating yourself, time for some automation. So I wrote a shell script that creates shell scripts.

#! /bin/bash
if [ -f "$1" ]
then
  echo "$1 already exists"
  exit 1
fi
echo '#! /bin/bash' > "$1"
chmod +x "$1"
$EDITOR "$1"

Create a file, add that code, make it executable (sound familiar?) and put it somewhere in your path. I named mine newss for new shell script.

Now you can just say:

newss test.sh

And you are editing an executable shell script. As you can see, it checks whether the file already exists, as a minimum level of safety. It will also handle paths, such as:

newss foo/bar/test.sh

But only if the directories exist. Creating non-existent directories automatically wouldn’t be too hard, if that’s important to you, as sketched below.
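
One extra line near the top of the script would do it. A one-line sketch, untested:

mkdir -p "$(dirname "$1")"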

Git-based Wiki

For many years I’ve bounced around using different tools to save information that I might need later. I’ve used MS OneNote, Evernote, Workflowy, Dynalist, Notion, several other hosted and self-hosted wiki systems, and probably many other things.

If I had to name a favorite out of all those, I’d go with Workflowy. It’s a super simple text outliner. You start with a single top-level page. Each page is a list of items, and each item can have a nested sub-list, with effectively unlimited depth. You can also focus on any node so that it becomes a page in itself. Dynalist is very much the same, but you get multiple lists and can add images, fancy formatting, check boxes, and all kinds of other groovy features. On the surface it sounds a million times better, but with all those bells and whistles, I felt like I was losing the elegant simplicity of Workflowy.

But I digress.

One thing I didn’t like about most of those systems is that someone else owns your data. And if it’s an online system, it might be hard to access offline. Some of the wiki systems do run locally, but then you might lose the online functionality.

Some months ago I came up with a system that I am now totally sold on. It’s super simple and involves only github (or gitlab or bitbucket or a self-hosted git repo or whatever) and markdown files.

The System

Honestly, the system itself is so simple that as I start to describe it, it seems so obvious that I’m second-guessing why I even have to describe it. But I did go through a few iterations before I got it down just how I like it.

Top Level

Start by creating a git repo (either online or locally). Create a folder called docs and a README.md file. In that main file, create and maintain a bullet list of links. This is your index to all the top-level pages in your wiki. Each link should be to another markdown document that lives in the docs folder. Here’s a sample:

# My Personal Wiki

- [Stuff to Buy](docs/stuff_to_buy.md)
- [Movies to Watch](docs/movies_to_watch.md)
- [Birthdays](docs/birthdays.md)
- [Projects](docs/projects.md)

The reason for the top-level docs folder is that when you first go to your wiki, it’s going to show a list of all the files and then render your README.md file below that. If you have a ton of documents in the root of your project, your readme is going to scroll right off the page. With this setup, there will always be only two top-level items: the folder and your readme. This is what it looks like when you go directly to the repo:

Main Documents

Now, within your docs folder, create a markdown file for each item in the index. A nice trick I do here is to create a link back to the main readme file as the first line in the file. For example:

[Parent](../README.md)

# Stuff to Buy

- Food
- Drink
- Pencils
- Paper
- Lamborghini

Now you’ll have a link back to where you came from. Here’s that in action:

Of course, you can link out to other content here too:

[Parent](../README.md)

# Movies to Watch

- [The Godfather](https://www.imdb.com/title/tt0068646/?ref_=fn_al_tt_1)
- [The Big Lebowski](https://www.imdb.com/title/tt0118715/?ref_=fn_al_tt_1)

You can also do fancy stuff like include images, or even make tables:

[Parent](../README.md)

# Birthdays

| Who   | When     | Gift      |
|-------|----------|-----------|  
| Paw   | 01/01/20 | Whiskey   |
| Maw   | 05/08/23 | Flowers   |
| Sis   | 08/09/42 | Gift Card |

Which gives you:

Sub-categories

Now you can start making folders for sub-categories. For example, the projects page looks like this:

[Parent](../README.md)

# Projects

- [Project A](project_a/index.md)
- [Project B](project_b/index.md)

The Project A link goes to project_a/index.md and similarly for Project B. And both of those files will link back to docs/projects.md as their parent:

[Parent](../projects.md)

# Project A

This is all about Project A

Naming these index.md rather than README.md breaks the symmetry between the top level and lower levels, but that’s fine with me.

And of course, those projects can contain additional documents or even nested folders. And as long as you include those parent links at the top of the page and use those for navigating back, you’ll always be viewing rendered markdown files.

This is just one way of setting up sub-categories. Totally up to you how you want to organize it. That’s the beauty of it.
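
Putting it all together, the example layout from this post ends up looking like this:

.
├── README.md
└── docs
    ├── birthdays.md
    ├── movies_to_watch.md
    ├── projects.md
    ├── stuff_to_buy.md
    ├── project_a
    │   └── index.md
    └── project_b
        └── index.md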

Benefits

Things I love about this system:

  1. Your content is available online from wherever you can log into github. But at the same time, you can always keep a complete local backup on any computer you want. Changes are just a pull away. There’s no PITA “export your data in json format” kind of process. It’s kept up to date all the time. You can even automate a local pull via a cron job, as often as you like.
  2. Online content is nicely rendered in HTML on the web, but also completely readable and editable on your local system. Of course, if you are using a simple text editor, you might miss out on the clickable links – and images.
  3. Editing content online is one click away. Just click the pencil icon as you’re viewing the markdown file and you’re in edit mode. Save and you’re back to an HTML view.
  4. If you do end up in an offline situation you can still edit your local files. If you wind up with a conflict, you’re not at the mercy of the tool in how it’s going to resolve it. It’s a straight up git merge, conflict resolution workflow that you’re already familiar with. Full control.
  5. Of course, it’s git, so every single change to every file you make is a commit, tracked, revert-able, never lost.
  6. Get sick of github? Pull locally, push it to gitlab instead. Or wherever.
  7. No special tools, frameworks, or other systems needed. Just markdown files.

Overall, I’ve really come to rely on this system. I’m constantly thinking of things to add to it. Every time I figure something out or find some interesting bit of data I know I’ll want to come back to, it goes in the wiki. I find myself actively using it all the time.

Potential Improvements

One of the things I’ve thought about is automating some of the work. The main readme and sub-category index files could potentially be auto-generated. It wouldn’t be rocket science. Iterate over each markdown file and make a listing. And for each folder, look for the index file in that folder and add that to the listing. Then re-write that readme/index file with the updated listing. Recursively travel through the folders. If you had a script or tool that did this, it could be run via github actions perhaps.
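
Here’s a rough sketch of what such a script could look like. This is just me thinking out loud: the layout assumptions (a docs folder, an index.md per subfolder) are the ones from this post, I haven’t actually built or run this, and it only handles a single folder rather than recursing:

#! /bin/bash
# regenerate the link listing for one folder of the wiki (sketch, untested)
dir=${1:-docs}
out="$dir/index.md"

{
  echo "[Parent](../README.md)"
  echo
  echo "# $(basename "$dir")"
  echo
  # one bullet per document in this folder
  for f in "$dir"/*.md; do
    name=$(basename "$f" .md)
    [ "$name" = "index" ] && continue
    echo "- [$name]($name.md)"
  done
  # one bullet per sub-category folder
  for d in "$dir"/*/; do
    [ -d "$d" ] || continue
    name=$(basename "$d")
    echo "- [$name]($name/index.md)"
  done
} > "$out"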

If I were just editing locally, I could easily set up an editor command to create the file and link to it. But I find myself using the wiki more often online than locally. So I have to keep thinking about all this.

Example

Here’s a link to the sample wiki I created just for this post. (My real personal wiki is private.)

https://github.com/bit101/wiki_example

Feel free to fork it or just copy the idea and make it better. If you come up with any bright ideas on how to improve it, please share back.

Pinebook Pro Tips

For those of you who actually have, or are thinking about getting a Pinebook Pro and would like to know how to avoid the stupid mistakes I made, here are some specific details I learned.

Get the Right Image

I wanted to install the Manjaro XFCE version. So I wanted the image for that. This was my first point of confusion. I knew I wanted an eMMC installer image. There are links here: https://wiki.pine64.org/index.php?title=Pinebook_Pro_Software_Release#Manjaro_ARM

If you get an image direct from Manjaro, it will NOT be an eMMC boot image. You have to go to osdn.net. And even there, it’s confusing. First you’ll see a list of releases.

Naturally, you’ll go for the latest one, currently 20.08, and you’ll see this:

None of those are eMMC installers though. If you use any of those, it will only install Manjaro onto the SD card that you booted from. You have to go back, in this case, to 20.04. Then you’ll see this:

Oho! An actual eMMC installer image. This would have saved me many hours had I looked beyond the latest release. And don’t forget that Manjaro is a rolling release, so it doesn’t really matter which one you get; it’s going to be fully up to date as soon as you do your first update. At least that’s mostly true. There may be minor differences that I don’t know about, but effectively, it’s fine.

Run the Installer. Hit Escape!

Burn the installer onto at least an 8GB SD card and boot the PBP with that card inserted. It’s going to hang on the loader screen. At least at the time of this writing anyway. There’s a bug in the eMMC image that causes this. Just hit escape and you’ll be in the installer.
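
For the burning step itself, dd works fine. The image filename here is made up (use whichever image you downloaded), and /dev/sdX is a placeholder, so double-check the device name with lsblk before running anything like this:

sudo dd if=manjaro-emmc-installer.img of=/dev/sdX bs=4M status=progress conv=fsync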

It will walk you through some choices that should be mostly obvious. You want to choose to install to the eMMC. That should be mmcblk2, whereas the SD card you booted from should be mmcblk1.

Note: Once you’ve got everything installed, you can put a non-bootable SD card in the slot and use it for extra storage. It won’t affect the boot process.

The Manjaro ARM Installer

The Pine64 wiki also mentions this script:

https://gitlab.manjaro.org/manjaro-arm/applications/manjaro-arm-installer

It sounds lovely. Install the script, run it, choose your distro, choose the target, sit back and it does everything. This is what soft-bricked my PBP though and caused me hours of despair. I may have just been unlucky. I haven’t seen reports of others having the same trouble. But personally, I won’t try that method again.

Fixing a Soft Brick

As I have come to understand it, the PBP will boot first from a bootable SD card if one is inserted. Otherwise it will try to boot from the eMMC module. However, you can wind up messing up your eMMC to the point where the PBP won’t boot at all. In some cases this means the power light won’t even turn on. This is what happened to me. It sure seems like a hard, fatal brick situation, but don’t despair. It’s fixable.

It has something to do with U-Boot, which I guess is kind of like a boot block / GRUB kind of concept, which controls the boot process. If that gets messed up, you ain’t booting nothing.

So you have to bypass that completely. You do that by completely disabling the eMMC.

  1. Open up the PBP. This is a simple matter of undoing 10 screws. The PBP is made to be easily open-able and internally upgrade-able. There’s a small switch there that disables the eMMC. Consult https://wiki.pine64.org/index.php?title=Pinebook_Pro#Pinebook_Pro_Internal_Layout to locate it. You want to move it to the position away from the hinge, towards the inside of the computer.
  2. Note: be VERY careful when using the computer with the bottom cover off. Pine warns that opening the lid without the cover on can damage the hinge. Of course you’ll need to open the lid a bit to boot it and see what’s going on. Just be very careful.
  3. Now you should be able to boot from a bootable eMMC installer image on an SD card. Verify this.
  4. Next you need to reboot and during the boot process, flip that switch back to enabled. If you do it too soon, you won’t be able to boot. Apparently, if you do it at exactly the right moment, 2 seconds into the boot process, the PBP will boot and the eMMC will mount. But if you are too late, the eMMC won’t mount. That’s fine though. The next step will handle that.
  5. As mentioned above, the eMMC installer image will hang. Just hit escape and you’ll be in the installer.
  6. You can try the install process at this point. The eMMC may have mounted. In the step where it asks you where you want to flash the image, if you see mmcblk2 you are good to go!
  7. There’s a good chance that the eMMC will not be mounted, so you’ll need to mount it manually. Choose “No” in the first screen of the installer and you should be dropped back into a root command prompt:

    [root@manjaro-arm ~]# _
  8. There are two commands you can type here that should mount the eMMC:

    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/unbind
    echo fe330000.sdhci >/sys/bus/platform/drivers/sdhci-arasan/bind

    These are covered here: https://wiki.pine64.org/index.php?title=Pinebook_Pro#eMMC_information
  9. To verify that this worked, you can type lsblk and you should see mmcblk2 in that list.
  10. From there, type exit and you should be back to the installer, which should now work fine.

Resources

Read this entire page: https://wiki.pine64.org/index.php?title=Pinebook_Pro

Use the forums: https://forum.pine64.org/ – particularly the Linux on PBP forum.

The official Pine64 subreddit: https://www.reddit.com/r/PINE64official/

Pinebook Pro Saga and Review

Back Story

A while back I bought a Pinebook laptop. This was the original 11-inch, white plastic model. It cost $99 plus shipping.

Pinebook (definitely not Pro)

The idea was great – a cheap ARM laptop that you could grab and go and not worry too much what happened with it. But in practice, it was not even remotely usable. The specs on processor and network speed and memory were just not there. You couldn’t watch YouTube on it. Just loading any non-trivial web page was painfully slow. The touchpad was meh, but the keyboard was just atrocious. Not so much the feel, but the layout. Way too cramped. Keys for common symbols like | and " required a modifier key AND a shift key, turning them into a three-key combo. And the right hand shift key wasn’t even the size of a regular key. I could NEVER hit that one correctly.

It was just unpleasant to use. So it’s been sitting in my unused hardware pile ever since. Occasionally I take it out and try to do something with it, but it’s not worth it.

The Upgrade

Then the Pinebook Pro came out and I started hearing some good reviews. The first thing I did was check out the keyboard layout. It actually looked OK. This version costs $199 plus shipping, but has a much better processor, more memory, and all-around decent specs. It’s still a cheap ARM machine, but from all reports it seemed to be usable. It’s been in the back of my mind to give it a try for a while, but each time I checked it out, they were out of stock. They get made in batches, so sometimes you just can’t order. Recently I got interested again and saw that pre-orders were open and shipping on August 25. I pulled the trigger. Amazingly, not only did it actually ship on the 25th, but it got from Hong Kong to Boston in just two days. I was stunned.

It Arrives! I Brick It!

If you just want a review, skip to the end of this post. Stick around if you want to hear an epic tale of stupidity.

I opened up the package. I’ll get more into the build, etc. later. But first, my tale of woe. I booted it up. It had 86% battery and comes preinstalled with Manjaro KDE ARM. Got into it, set up my user, all good. First of all, I love Manjaro. That’s what I run these days on any Linux desktop. But I like the XFCE version. KDE is nice. If XFCE vanished tomorrow, I’d go to KDE. But since there’s a Pinebook Pro Manjaro XFCE image available, I wanted to use that.

A little understanding of PBP hardware: its main “hard drive” is an eMMC memory module. An eMMC is flash memory that’s better than an SD card, but not as great as an SSD. The PBP comes with a 64GB eMMC. It also has an SD slot that you can boot from. If there’s a bootable card in the slot, it will boot from that; otherwise it goes to the eMMC. So you can get a supported OS image and burn it onto an SD card and boot from that. But really you want to install the new OS to the eMMC. So there are eMMC install images specially made to do that. But every image I tried just installed onto the SD card. So I had to try an alternate way to install to the eMMC. Spoiler alert: this was where my stupidity entered in.

I found another installer method here: https://gitlab.manjaro.org/manjaro-arm/applications/manjaro-arm-installer

This seems to download the image of your choice, burn it to an SD card and then flash it back to the eMMC all in one script. Ran that and it seemed to be fine. Until it ran into an error and the power light started flashing red and green and everything froze up.

I rebooted. No. I tried to reboot. Nothing. Not even a power light. I knew the battery wasn’t dead. So this looked bad. I read some help. It gave some advice that involved opening up the computer. Well… that escalated quickly. But in fairness, the PBP is designed to be hackable. It opens very easily. There are add-on items you can put in there, etc. There’s also a switch in there to disable the eMMC, a reset button, and a few other physical controls.

https://wiki.pine64.org/index.php?title=Pinebook_Pro#Pinebook_Pro_Internal_Layout

All the connections were good, the eMMC was enabled. No joy.

Unbricking

I was pretty sure I had hard bricked it. But the more I read, the more I saw people saying that you CANNOT brick a PBP via flashing images. You can soft brick it, which is what I did, but it’s recoverable. Promising. What you do is disable the eMMC with that switch. Then it should power up and boot off the SD card. After probably a couple of hours of despair, I got it booting off the SD card! Whew.

OK, back in business, sort of. I still had to flash the eMMC, but now I had a catch-22. I couldn’t flash the eMMC because it was disabled. But if I enabled it, then the machine wouldn’t boot.

Option one: apparently, you can do this thing where you boot the machine with eMMC disabled, and exactly two seconds into the boot, you flip the switch to enable it. That allows you to boot from the SD card and then have access to the eMMC. I had no luck with this method.

Option two: someone figured out that you could boot disabled, later enable it, and then run a few commands to mount it. This one worked!

OK, I was officially unbricked. Now I’m back to the point where I needed to flash the OS.

Another Failed Flash, But No Brick

There’s not a lot of magic in the flashing process. It comes down to using the dd tool to write the image to the eMMC. You can just do that manually as long as the eMMC is mounted. There are also some scripts out there that will run the command correctly for you. I got one of those, got the right image and ran it. It seemed to go fine. Reported success. Rebooted, and it tried to load, but would not fully boot.

The plus point is that the eMMC itself seemed fixed. I didn’t have to disable it to boot off the SD card. So I could close up the back of the computer again.

Back to Square One… and Success!

OK, now I was back to my original question of how to flash an image onto the eMMC. More reading up on people dding images led to more discussions about the eMMC installer images that never worked for me in the first place. But they seemed to work for others. It turns out that I was always going for the latest release. When I looked back a couple of releases, I found one that was very explicitly labeled emmc-installer-image. OMG. Slap on the forehead moment.

Well this sounded like exactly what I wanted originally. Downloaded it, burnt it to an SD card and booted it up. And it sat there on the loading screen forever. Sigh…

Tried some other images, other SD cards, all the same.

More reading and I see this little mention of a bug in the eMMC installer where it seems to hang. Oh! That’s me! The complicated fix is to… hit the escape key. Surely it couldn’t be that simple. But, yes, yes, it was. That was the final piece of the puzzle. Finally I was able to flash Manjaro XFCE Arm onto the Pinebook Pro. And it was only a little after midnight. Played around with it for a few minutes and then went to bed with a headache.

The Review

OK, so what is this thing like, now that I finally had it running with the OS I wanted?

First, let me say that most of the above is my fault. If I’d only looked around a little bit and found the eMMC installer image, I would have saved several hours. I’d even seen a message about the hang-and-escape fix, but had forgotten all about it hours later. But the documentation on all of this could have been a lot better. Most of what I learned was in various forum posts. So that’s the first takeaway. This is NOT a consumer device. If you want something that you can just plug in and have it work, this ain’t it. You gotta be ready and willing to dig in and search and experiment and tweak things. I’m fine with all that, personally.

OK, onto the review…

Build-wise, it’s surprisingly nice for a $200 machine. It’s got a full metal case. Magnesium alloy, I think. Feels solid, heavier than I expected. Maybe like MacBook Air thickness. The hinge is good. The body itself is a bit flexible though.

The screen is pretty decent. 1920×1080. In fact, it’s on par with, if not better than, the screen on my ThinkPad T480. Granted, that’s a low bar, but again, for the price point, that’s impressive.

ThinkPad vs. PBP

The keyboard is not bad at all. Not the best ever, but I’m comfortable typing on it. No problems with the layout. Everything is where it should be and the size it should be. No funky modifiers needed for common keys.

The touchpad is one of the weak points. Out of the box at least. I hated it. Sluggish, unresponsive, hard to get it on the thing you want to click. But with some research and tweaking of settings, I’ve now got it to the point where it’s workable for me. I was actually pretty happy with how much I was able to improve it with settings. Luckily I’d been through this plenty of times in the past with Linux on laptops. I know my way around all those settings.

Sound is awful. Not just quality but volume. Cranked all the way up, I literally can’t understand anything if the air conditioner is on nearby. Even with headphones plugged in, the volume on those is really low. I might be able to find some settings to improve that eventually. But I don’t plan on using this for media anyway.

Webcam is meh. Better than I thought it would be actually.

Ports: on the right, SD card, headphones, USB A.

On the left, USB C, USB A, power barrel connector. You can also power and charge over USB C, which is great.

Performance is not too bad. It doesn’t compare with a full x86 laptop or desktop of course, but it’s functional. Web pages load decently. A bit slower than I’m used to, but workable. I played some full screen 1080p video from YouTube and it was fine. No lag, nice and crisp, audio in sync with video. I was impressed. Currently, services like Netflix and Hulu do not work. I assume that’s due to the DRM software not working on ARM. There might be ways around that. I haven’t had a chance to test it out yet.

Mostly I intend to use the computer for some lightweight dev stuff. The price, size and low power consumption make it a great machine to grab and go with. So I’ve been testing some of my existing code projects on it.

I got my blgo framework up and running. I had to download the ARM version of Golang, but that was easy enough. It definitely compiles and runs stuff slower than on my other machines, but it’s perfectly fine for on-the-go hacking on stuff. It also definitely has some hard limits on memory. I can create a 300-frame animated gif, but ImageMagick crashes on anything longer than that. And for creating still images, I was able to do 1920×1080, and double that – 3840×2160, but beyond that things don’t render right. Once it even crashed me right out of my session, which I don’t think I’ve ever seen before. The computer was still running, but it kicked me back to the login screen.

I also got my bitlib_c library projects running with no problem whatsoever. I didn’t stress test those, but I assume they’ll have the same hard limits.

And I’ve currently got a Java project I started working on. Was able to install the latest OpenJDK and got that project running without a hitch.

Summary

Overall, I’m very happy with it. It’s a fun machine and completely functional once you know its limits. It’s a perfect machine for throwing in a bag and commuting with, or maybe traveling with or just bringing with you somewhere where you might want to use a computer but don’t want to bring your main machine. If you want a cheap Linux machine to hack on, this is way better than getting a Chromebook and trying to force Linux to work on that. I speak from experience.

How to Interview

A couple weeks ago I wrote about resumes. The obvious follow up is about interviews. Again, this post reflects my own experiences and opinions, not any official policies or procedures of my company.

I’ve done more technical interviews in the last few years than I could count. But I’m not going to talk about technical interviews here. What I’ve been doing for the past month or so is the initial screening calls, where I just have a conversation with the person, do introductions, ask some general questions about background, skills, likes, dislikes, experience, etc. It’s been a really nice break from the tech interviews. And I’ve learned so much from doing these.

Like resumes, there’s no real magic in passing these interviews. But there are all kinds of things that you can do really easily to mess it up and ruin any chance of going further in the process. We go into interviews with the principle of “start with yes.” We start with the idea that we’re going to hire you, and it’s up to you to convince us not to do so. Sadly, people are really good at convincing us not to hire them.

The way I do these screening interviews is I give a quick introduction to myself, my background and what I do now. 30-45 seconds max. Then I invite the person to do the same. I figure they’ll kind of mirror what I did and give a brief intro. It’s probably more important for them to tell me about themselves than vice versa, so they usually go a bit longer than I do – maybe a minute or two. But some people go on and on and on, giving me a life history, every job they had, everything they did and every technology they used in every job. Some people just take control of the interview. I don’t encourage them to do so, but I don’t try too hard to stop them either. How they relate with me in the interview is probably how they’re going to relate to their coworkers on the job. It tells me a lot.

Generally though, people don’t screw that part up too badly. They might talk a bit much, but I’ll attribute that to nerves. No problem.

Then I get to my first question.

What interested you in this position?

It might sound harsh, but 10-20% of candidates blow the interview right then and there. Some of the worst answers I’ve had, but which I hear surprisingly often:

  • I’m just applying to every front end job I see.
  • I don’t remember.
  • I just need a job.

If that’s how someone answers my first question, and they just leave it at that, I’m done. I’ll go through the rest of the interview, but unless they do something amazing, I’m totally checked out and my mind is made up. If that’s the level of effort someone puts into getting a job, I have no interest in finding out how much effort they’ll put into doing that job if they get it.

Table stakes for this kind of question is something about how it’s a great match for your skill set, the tech stack, etc. You haven’t particularly impressed me with that, but you haven’t annoyed me either.

Even better is, “I really like the idea of your business and what you’re doing…” or a personal anecdote that shows some connection with you and the business.

Then there’s the really good ones. “Well, I was reading on your site about how recently you…” or “I was doing some research and read about your partnership with…” Yes! This person spent a few minutes reading our site, or did a Google search and read something and remembered it. You’re interested in us. Now I’m interested in you.

Tell me about your current or most recent job.

And what you do / did there. What you liked / didn’t like there. Interesting things you worked on, etc.

It’s hard to screw this part of the interview up. I think the only way you can really destroy it is by being totally bored, giving an “I built some apps” kind of answer. Or you could totally trash your previous company. Surprisingly few people actually do that though.

This part of the interview is a good place to make some points though. Talk about a really cool project you got to work on and tell me how interesting it was and what the challenges were and what you learned doing it. As you are talking about your last company in the past, I’m envisioning you working at our company in the future. What are you going to be like? Are you going to be into your job and get excited about challenges? Or are you going to be bored and lifeless?

What are your strengths (and weaknesses)?

To be honest, this part is like a well-choreographed dance. You’re going to state some strengths, whether they’re true or not. Then when I ask about your weaknesses, you’ll reiterate those strengths as “weaknesses”.

“It’s a double edged sword…”

“Attention to detail, but I sometimes get too caught up in making things perfect.”

Or a million variations.

If you have less experience, you’re going to tell me that you’re a fast learner.

Of course, not everyone plays along. Someone recently told me that one of their weaknesses was that they were a slow learner. I was stunned. Nobody says that in an interview, ever. Everyone says they’re a fast learner.

Sometimes more senior developers have just gotten beyond that game and will just flat out tell me what they are good at and what they are not so good at. It’s refreshing.

Usually, I don’t get too much out of this question anyway. So play along or be honest. But not too honest. 🙂

What are you looking for in your next job?

Worst answer: parrot back the job title you are interviewing for, or the tech stack.

What are you looking for? A front end React position.

No kidding. I want to know what you are hoping for, what you are envisioning.

Less experienced developers will very often talk about learning, mentorship, growth, etc. More senior candidates will talk about taking on more responsibility, new challenges, and also growth.

I think the learning and mentorship answer from newer candidates is fine. But way too many candidates go too deep on this theme, mentioning it several times throughout the interview. Again, it’s not horrible. If you’re junior, we’ll definitely be mentoring you, and helping you learn and grow. But even if you’re junior, I want to know what you are going to bring to the table. You could earn some points here by talking about how you want to take on responsibility, contribute to the team, mentor people with less experience than you, or whatever value you think you can offer.

What questions do you have for me?

This question forms a pair of bookends with the first question. If you really don’t want the job, just say, “none.” You won’t get the job. If you want the job, ask at least 3-4 good questions. The more, the better, but don’t go over schedule. Like the first question, it shows that you are interested and prepared. One good example a few people have asked:

“I saw that your competitors are companies X, Y, and Z. How do you differentiate yourself from them?” Great again because it shows you prepared and have an interest in the company.

Almost everyone asks about the tech stack we use. But some follow that up by asking why we chose that tech stack, are we happy with it, do we think we’ll be changing it, and why / why not? That shows interest.

Sometimes I get quirky questions like, “Tell me one fact about the company that I couldn’t find on Google.” I actually had fun answering that one, but it didn’t do anything in particular for my feelings about the candidate. It was just a gimmicky question, and it felt like it came from some article like “10 quirky questions to ask to nail that job interview.”

I won’t give any more examples, as this is not a “list of questions to ask” article.

Summary

A huge part of passing a screening interview is showing interest in the company, interest in the job, interest in your career, interest in your previous work. The more interest you show, the more the interviewer is going to be interested in you. Well, maybe it’s better to say that the less interest you show, the less interested the interviewer will be in you. Hopefully they are starting from yes. Don’t push them towards no.

Skill and experience don’t have much of a part at this stage. I’ve already seen your resume. We’ll do some technical interviews later to see if you actually know how to do what you say you know how to do. Of course, sometimes the resume is inflated and that becomes obvious during the screening call, but that is not very common.

Do some research, show some interest. If you aren’t interested, why are you applying for this job anyway?

Introducing “version”

Yesterday I picked up a book on writing compilers and interpreters. This particular book’s code is written in Java. It’s been a while since I’ve coded in Java and I had no idea what Java dev tools I had on my system. So I created a simple hello world Java class and ran javac Main.java and java Main and got the result I was hoping for. Yay! Then I figured I’d check what version of Java I had installed. So I did what I thought was obvious:

java -v

Unrecognized option. Oh, I know some language uses a capital V instead. Must be Java.

java -V

Nope. OK. I guess I just have to use the long version.

java --version

No luck. All right. We’ll just start going through every possible iteration…

java --Version
java -Version
java -version

Finally! That last one told me I had version 1.8.0_262.

It got me thinking that finding the version of various tools and programs is something you have to do now and then and I rarely get it right on the first try. Some other examples:

gcc --version
node -v
python -V
perl -v
go version
lua -v
rustc --version

Then I got to thinking that it would be pretty easy to write a utility script to capture a bunch of this stuff. Here’s the meat of what I did, just capturing a few of the above examples:

case $1 in
java)
    $1 -version
    ;;

gcc | rustc)
    $1 --version
    ;;

node | perl | lua)
    $1 -v
    ;;

esac

I fleshed it out a bit, added a bunch more programs to it, threw it up on github and tweeted about it. I’ve already had a couple of PRs adding additional tools to it. Hoping to get some more. As of this writing, it can recognize 36 different programs. It checks to see if you have the program installed at all before trying to find its version, and displays a message if it doesn’t yet know about the program you are checking. It also supports a -h flag for help, -c to display the count of how many programs it recognizes, and of course -v for displaying its own version.

And yes, it can be used to check its own version “recursively”!

version version

If it sounds useful, grab it here:

https://github.com/bit101/version

Just put the version script somewhere in your path and you should be good to go. You can add whatever other tools you use pretty easily. If you do, I’d love to know about them, either through a PR or just by letting me know the tool and how you find the version.

I’ve only actually tested it on a couple of Linux machines. It should work on Mac as well, but I’ve got to put in the time to test it later today.