Catalina VM

misc

I’m not a big fan of Apple. Their products themselves are fine, for the most part. They’re not to my preference in a lot of ways, but that’s fine. I know plenty of people who love their iPhones and MacBooks and watches. I’m not going to argue. I definitely don’t like the company though. I perceive them as overly controlling, developer hostile, and incredibly narcissistic. All this is just a preamble to saying that I don’t want to spend any money on Apple hardware.

But now and then I do want to be able to test something on Macos. Some program or script or utility I’m working on, like version or my C- and Go-based graphics and animation libraries. It’s not my main target, but since command line tools developed on Linux are usually trivial to get working on Mac, I’m happy to test them out and make some minor tweaks so they work there.

I’ve been considering building a “Hackintosh” system. But with Apple possibly moving to ARM by the end of this year, I don’t want to invest a lot in hardware that’s going to be obsolete soon.

This led me to see if it was possible to get Macos running in a VM. And I found this project:

https://github.com/foxlet/macOS-Simple-KVM

This is a git repo with a couple of scripts that create a qemu VM, download the official Macos installer image, and run that installer in the VM. It runs on Linux, as far as I know. The basic steps (sketched as commands below):

  1. Install the dependencies.
  2. Check out the repo.
  3. Run the jumpstart.sh script. This even allows you to choose which version you want. Defaults to Catalina.
  4. Create a virtual disk image and add that to the basic.sh script.
  5. Run the basic.sh script and choose install.
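
In command form, the process looks something like this. I’m sketching from memory here; the exact flags and the drive lines you add to basic.sh are spelled out in the project’s README, so treat the disk name and the -drive/-device lines as illustrative:

git clone https://github.com/foxlet/macOS-Simple-KVM.git
cd macOS-Simple-KVM
./jumpstart.sh --catalina                    # downloads the Catalina install media
qemu-img create -f qcow2 MyDisk.qcow2 64G    # the virtual disk to install onto

# then add lines like these to the qemu command in basic.sh:
#   -drive id=SystemDisk,if=none,file=MyDisk.qcow2
#   -device ide-hd,bus=sata.4,drive=SystemDisk

./basic.sh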

The UI that comes up is called “Clover”.

The UI actually confused me at first. I was trying to click things, but you navigate with the arrow keys and press Enter to choose things. Choose the default option shown here to install.

This boots up a Macos system right off the install media. The first thing you’ll need to do is format the virtual disk you created. Then run the installer and tell it to install Macos to the disk you just formatted.

The install takes a long time. Like close to an hour. At points it says it has one minute remaining and hangs there forever, then says it’s calculating the remaining time and hangs there forever again. But be patient; it will finish after rebooting a couple of times.

When it’s done, it will boot right into the OS, asking you to set up a username and password. And then you’re in a full Macos install. Use Control-Alt-F to toggle full screen and Control-Alt-G to toggle capture of the keyboard and mouse. You can shut down as you’d usually shut down a Mac or just close the VM window.

When you boot back in, you’ll get the Clover screen again. This time choose the last option in the top row to boot.

This is the one that messed me up the first time. I just hit enter and wound up in the install flow again.

Performance

By default, the basic.sh script allocates 2 GB of RAM and a minimal amount of CPU resources. I think it’s also hard-coded to 1280×720 resolution. Read through the documentation to find out how to beef up the VM. I gave mine 8 GB and a lot more CPU. I also got it running at full resolution on my 2560×1440 (or something like that) monitor. With the extra resources, it’s surprisingly performant. I mean, I’m not going to be doing gaming or video editing or trying to run Xcode on it, but for browsing the web, regular apps, and anything console-based, it’s perfectly adequate.
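
For reference, the memory and CPU settings are ordinary qemu flags inside basic.sh. The values below are the sort of thing I bumped mine to, not the project’s defaults, so check your copy of the script and the docs:

# in basic.sh, adjust the qemu flags (values here are illustrative):
-m 8G \
-smp 8,cores=4 \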

Once you go full screen with it, it’s honestly hard to tell it’s not the real thing. It is a bit laggy on my Thinkpad with an i5 CPU, but pretty zippy on my Ryzen 5 3600 desktop. I’ve installed various tools, utilities and other programs, as well as Homebrew and a bunch of packages from there. I haven’t had any problems with it so far at all.

Although the install was slow as hell, it now boots up fully in under a minute for me. That’s good because it’s not something I want running all the time. Giving it all those resources means the underlying Linux OS does not have access to them. You can pause the VM, though, which stops it from using any CPU power. I haven’t checked whether that affects memory use; I’d guess not so much.

Here is Catalina at 1920×1080 on my Thinkpad

Legality

Who knows. Use at your own risk. Just to avoid any problems, I did not sign in with my Apple ID. I’ve never heard of anyone getting sued for running an OS in a VM.

Summary

I’m pretty excited about this. This is perfect for my use cases of testing things out here and there. I’m not expecting it to be a full replacement for actual Apple hardware running Macos. And I don’t need that.

I imagine once Apple switches over to ARM, this project will be obsolete. Hopefully someone figures out a way to continue it with their new architecture.

A Cooler Cooler from Cooler Master

misc

Some weeks ago I shared my new PC build. It’s been wonderful. Working perfectly. One thing I did recently was put in another 500GB drive to hold my VM images. While I was in there, I moved the front mounted drives around to the back and did some better cable arranging.

One thing I had planned to do for a while was add a new CPU cooler. After reading some reviews and watching some Youtube videos, I settled on the Cooler Master Hyper 212 Black Edition.

I probably didn’t really need this. But I think it’s a good investment. I’d been using the stock cooler that came with my Ryzen 5 3600, and it did the job adequately. I’m not into any kind of crazy overclocking, and I wasn’t having any real heat problems. Completely idle with nothing running, the CPU would sit in the mid-to-high 30s C. In general use, with a browser and a few tabs open, a terminal, and some music playing, it would be more in the low 40s, with occasional dips into the high 30s. With Slack and some more active tabs open, it’d get into the 50s, maybe with some short peaks into the 60s.

With the new cooler, I can see an immediate difference. Idle with just a browser and several tabs, it settles down to 31-32C. I never saw it go that low with the stock cooler. So it’s promising. I only put it in this morning, so I’ll give it a few days to see how it performs under daily loads, but it seems like a good improvement so far.

Installation wasn’t too bad. Took off the old cooler and cleaned up the thermal paste. You have to use their custom back plate, so I removed the old one and set up the new one. This was probably the most complex part. The cooler is designed to fit on a number of different socket types. You have to install various posts or screws and clips and move them to various positions depending on which socket you have, as well as a couple different types of brackets that go on the cooler itself. There’s a decent manual with Ikea-like diagrams for each configuration. I managed to get it right the first time without too much difficulty.

I came very close to forgetting to put on fresh thermal paste, but caught myself before tightening any screws. Getting the four bracket screws started was a bit tricky. They’re all spring mounted and wobble around. You have to apply a little pressure and get them at the right angle to get them going. But once they were started, they were easy to tighten up in alternating corners.

The cooler comes with a single 120mm Silencio fan. The instructions have you mount it in a configuration that pushes air across the cooler and towards the back fan, which makes sense. But you can also purchase and add a second fan to mount on the other side of the cooler, for a push-pull configuration. There’s also an RGB version, but I think I have enough RGB going on in there as it is.

The cooler itself is pretty tall and the reviews all said to make sure you have enough room in your case. My case is ludicrously cavernous, so there’s plenty of room to spare, but it’s good advice to check.

version 1.0

misc

A while back I posted about a script I wrote called version.

https://github.com/bit101/version

You pass it the name of a program and it tells you what version of that program you have installed. Example:

version java

This saves you from having to remember if it’s java -v, java --version, java -V or something else (no spoilers).

version now knows how to get the version of 156 different programs (including itself). It has 9 contributors and 15 stars. Not exactly React, but it’s cool to have people contributing.

In the original proof of concept, I was using bash case statements. In fact, this was the entire first iteration:

#! /bin/bash

case $1 in
  java)
    $1 -version
    ;;
  gcc | rustc)
    $1 --version
    ;;
  node | perl | lua)
    $1 -v
    ;;
  python)
    $1 -V
    ;;
  go)
    $1 version
    ;;
esac

Once I started adding more programs though, it became obvious that this wasn’t going to work. I discovered that bash and zsh support a form of associative arrays. I thought that would be the perfect thing. It would look something like:

declare -A tools
tools[gcc]=--version
tools[java]=-version
tools[node]=-v

Sadly, these are not supported in bash 3, which is still in use. In fact, my MacBook Pro has bash 3 on it.

Plan C was to fake associative arrays. Essentially, you just make a bunch of variables, one for each tool, with a common prefix:

tools_gcc=--version
tools_java=-version
tools_node=-v

I just had to do a bit of fancy regex work with grep and sed to look up the right flag from the tool name passed in as an argument. The initial pass with this method was pretty ugly, and I didn’t really understand what I had done. One of the contributors made some nice changes, and this led to me learning a lot more about grep and sed. I was finally able to get rid of grep altogether and do it all in sed. I was pretty happy with that. sed has always seemed like one of those arcane tools that only wizards knew how to use.
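
To give the flavor of the approach, here’s a minimal sketch of the fake associative array lookup. This is not the actual version source (the real sed expression is more involved), just the general idea:

#! /bin/bash
# one prefixed variable per tool
tools_gcc=--version
tools_java=-version
tools_node=-v

# scan the shell's own variable listing with sed to find the flag
# for the tool named in $1 (works in bash 3)
flag=$(set | sed -n "s/^tools_$1=//p")
"$1" $flag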

I also learned how to make man pages. And I made an install and an uninstall script. One of the other contributors has been working on making a snap package, but from what I can tell that’s probably not going to work too well due to the strict confinement of snaps.

Anyway, I don’t think there’s a whole lot more to be done with this simple tool. Hopefully people will still find new programs to add to it. But I figured it was done enough to slap a 1.0.0 sticker on it.

Grep and Sed, Demystified

tutorial

I’ve kind of half understood grep for a while, but assumed that I didn’t really get it at all. I thought I knew nothing at all about sed. I took some time this weekend to sit down and actually learn about these two commands, and discovered I already knew a good deal about both of them. Filling in what I didn’t know was pretty easy. Both are a lot simpler and more straightforward than I thought they were.

Grep

grep comes from “global regular expression print”. This is not really an acronym, but comes from the old-time ed line editor. In that tool, if you wanted to globally search the file you were editing and print the lines that matched, you’d type g/re/p (where re is the regular expression you are searching with). The functionality got pulled out of ed and made into a standalone tool, grep.
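
You can still see this in ed today. Open a file with ed animals.txt, then inside the editor type:

g/dog/p

That prints every line containing dog. (q quits the editor.)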

Basics

grep, in its simplest use, searches all the lines of a given file or files and prints all the lines that match a regular expression. Syntax:

grep <options> <expression> <file(s)>

So if you want to search the file animals.txt for all the lines that contain the word dog, you just type:

grep dog animals.txt

It’s usually suggested that you include the expression in single quotes. This prevents a lot of potential problems, such as misinterpretations of spaces or unintentional expansion:

grep 'flying squirrel' animals.txt

It’s also a good idea to explicitly use the -e flag before the expression. This tells grep that the thing coming next is the expression. Say you had a file that was a list of items, each preceded with a dash, and you wanted to search for -dog:

grep '-dog' animals.txt

Even with the quotes, grep will try to parse -dog as a command line flag. This handles it:

grep -e '-dog' animals.txt

You can search multiple files at the same time with wildcards:

grep -e 'dog' *

This will find all the lines that contain dog in any file in the current directory.

You can also recurse directories using the -r (or -R) flag:

grep -r -e 'dog' *

You can combine flags, but make sure that e is the last one before the expression:

grep -re 'dog' *

A very common use of grep is to pipe the output of one command into grep.

cat animals.txt | grep -e 'dog'

This simple example is exactly the same as just using grep with the file name, so it’s an unnecessary use of cat. But if you have some other command that generates a bunch of text, this pattern is very useful.

grep simply outputs its results to stdout – the terminal. You could pipe that into another command or save it to a new file…

grep -e 'dog' animals.txt > dogs.txt

Extended

When you get into more complex regular expressions, you’ll need to start escaping the special characters you use to construct them, like parentheses and brackets:

grep -e '\(flying \)\?squirrel' animals.txt

This can quickly become a pain. Time for extended regular expressions, using the -E flag:

grep -Ee '(flying )?squirrel' animals.txt

Much easier. Note that -E has nothing at all to do with -e. That confused me early on. You should use both in this case. You may have heard of the tool egrep. This is simply grep -E. On some systems egrep is literally a shell script that calls grep -E. On others it’s a separate executable, but it’s just grep -E under the hood.

egrep -e '(flying )?squirrel' animals.txt

Other Stuff

The above covers most of what you need to know to use basic grep. There are some other useful flags you can check into as well:

-o prints only the text that matches the expression, instead of the whole line.

-h suppresses the file name from printing.

-n prints the line number of each printed line.

Simple Text Search

If the thing you are searching for is plain text, you can use grep -F or fgrep. These work the same as regular grep, but you can’t use regular expressions; they just search for a literal string.

grep -Fe 'dog' animals.txt
fgrep -e 'dog' animals.txt

Perl

There’s also grep with Perl syntax for regular expressions. This is a lot more powerful than normal grep regex syntax, but a lot more complex, so only use it if you really need it. It’s also not supported on every system. To use it, use the -P flag.

grep -Pe 'dog' animals.txt

In this simple case, Perl syntax gives us nothing beyond the usual syntax. Also note that pgrep is NOT an alternate form of grep -P; it’s an unrelated tool for finding process IDs. So much for consistency.

Sed

I thought that I knew next to nothing about sed, but it turns out that I’ve been using it for a few years for text replacement in vim! sed stands for “stream editor” and also hails from the ed line editor program. The syntax is:

sed <options> <command> <file(s)>

The two main options you’ll use most of the time are -e and -E which work the same way they do in grep.

The most common use of sed is to replace text in files. There are other uses which can edit the files in other ways, but I’ll stick to the basic replacement use case.

Like grep, sed reads each line of text in a file or files and looks for a match. It then performs a replacement on the matched text and prints out the resulting lines. The expression to use for replacement is

s/x/y/

where x is the text you are looking for, and y is what to replace it with. So to replace the first instance of cat on each line of animals.txt with the word feline:

sed -e 's/cat/feline/' animals.txt

Note that sed will print every line of text in the file, whether or not it found a match. But the lines that it matched will be changed the way you specified.

After the final slash in the expression, you can add other regex flags like g for global or i for case insensitivity.
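
For example, this replaces every instance of cat on each line, not just the first, ignoring case (note that the i flag is a GNU sed extension):

sed -e 's/cat/feline/gi' animals.txt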

Like grep, sed just outputs to stdout. You can redirect that to another file using > or pipe it to another process using |. But do NOT save the output back to the original file. Try it out some time on a test file that you don’t care about and see what happens.
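
To spell out why that’s a bad idea: the shell truncates the output file before sed ever reads it, so you end up with an empty file. If you want in-place editing, most seds offer a -i flag for exactly that (another extension, and its syntax differs between GNU and BSD sed):

sed -e 's/cat/feline/' animals.txt > animals.txt   # BAD: animals.txt is now empty
sed -i -e 's/cat/feline/' animals.txt              # GNU sed: edits the file in place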

There are lots of other options you can use with sed but the above will probably get you by for a while to come. As you need more, just read up.

Summary

The biggest thing I took away from the couple of hours I spent on this was how easy these two commands were to learn. I’d been avoiding them for a long time and now wish that I had spent the effort much earlier.

New Shell Script Shortcut

tutorial

I’m often making shell scripts for various things – sometimes just a quick one for a specific task, sometimes just to test something out, sometimes for some kind of workflow task that I plan to keep around.

It’s always the same steps:

  1. Create the file.
  2. Add the header: #! /bin/bash
  3. Write the code.
  4. Save it.
  5. Exit the editor.
  6. Make it executable with chmod +x <filename>
  7. Run, test, edit, etc.

When you’re repeating yourself, time for some automation. So I wrote a shell script that creates shell scripts.

#! /bin/bash
# bail out rather than overwrite an existing file
if [ -f "$1" ]
then
  echo "$1 already exists"
  exit 1
fi
echo '#! /bin/bash' > "$1"
chmod +x "$1"
$EDITOR "$1"

Create a file, add that code, make it executable (sound familiar?) and put it somewhere in your path. I named mine newss for new shell script.

Now you can just say:

newss test.sh

And you are editing an executable shell script. As you can see, it checks to see if the file exists as a minimum level of safety. It will follow any paths, such as:

newss foo/bar/test.sh

But only if the directories exist. Creating non-existent directories wouldn’t be too hard if that’s important to you.
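
If you wanted that, adding a line like this before the echo in the script should do it:

mkdir -p "$(dirname "$1")"   # create any missing parent directories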

Git-based Wiki

misc

For many years I’ve bounced around using different tools to save information that I might need later. I’ve used MS OneNote, Evernote, Workflowy, Dynalist, Notion, several other hosted and self-hosted wiki systems, and probably many other things.

If I had to name a favorite out of all those, I’d go with Workflowy. It’s a super simple text outliner. You start with a single top level page. Each page is a list of items, and can each have a nested sub-list, with effectively unlimited depth. But you can also focus on any node so that it becomes a page in itself. Dynalist is very much the same, but you get multiple lists and can add images, fancy formatting, check boxes, all kinds of other groovy features. On the surface it sounds a million times better, but with all those bells and whistles, I felt like I was losing the elegant simplicity of Workflowy.

But I digress.

One thing I didn’t like about most of those systems is that someone else owns your data. And if it’s an online system, it might be hard to access offline. Some of the wiki systems do run locally, but then you might lose the online functionality.

Some months ago I came up with a system that I am now totally sold on. It’s super simple and involves only github (or gitlab or bitbucket or a self-hosted git repo or whatever) and markdown files.

The System

Honestly, the system itself is so simple that as I start to describe it, it seems so obvious that I’m second-guessing why I even have to describe it. But I did go through a few iterations before I got it down just how I like it.

Top Level

Start by creating a git repo (either online or locally). Create a folder called docs and a README.md file. In that readme, create and maintain a bullet list of links. This is your index to all the top level pages in your wiki. Each link should point to another markdown document that lives in the docs folder. Here’s a sample:

# My Personal Wiki

- [Stuff to Buy](docs/stuff_to_buy.md)
- [Movies to Watch](docs/movies_to_watch.md)
- [Birthdays](docs/birthdays.md)
- [Projects](docs/projects.md)

The reason for the top level docs folder is that when you first go to your wiki, github is going to show a list of all the files and then render your README.md file below that. If you had a ton of documents in the root of your project, your readme would scroll right off the page. With this setup, there will only ever be two top level items: the docs folder and your readme. This is what it looks like when you go directly to the repo:

Main Documents

Now, within your docs folder, create a markdown file for each item in the index. A nice trick I do here is to create a link back to the main readme file as the first line in the file. For example:

[Parent](../README.md)

# Stuff to Buy

- Food
- Drink
- Pencils
- Paper
- Lamborghini

Now you’ll have a link back to where you came from. Here’s that in action:

Of course, you can link out to other content here too:

[Parent](../README.md)

# Movies to Watch

- [The Godfather](https://www.imdb.com/title/tt0068646/?ref_=fn_al_tt_1)
- [The Big Lebowski](https://www.imdb.com/title/tt0118715/?ref_=fn_al_tt_1)

You can also do fancy stuff like include images, or even make tables:

[Parent](../README.md)

# Birthdays

| Who   | When     | Gift      |
|-------|----------|-----------|  
| Paw   | 01/01/20 | Whiskey   |
| Maw   | 05/08/23 | Flowers   |
| Sis   | 08/09/42 | Gift Card |

Which gives you:

Sub-categories

Now you can start making folders for sub-categories. For example, the projects page looks like this:

[Parent](../README.md)

# Projects

- [Project A](project_a/index.md)
- [Project B](project_b/index.md)

The Project A link goes to project_a/index.md and similarly for Project B. And both of those files will link back to docs/projects.md as their parent:

[Parent](../projects.md)

# Project A

This is all about Project A

Naming these index.md rather than README.md breaks the symmetry between the top level and lower levels, but that’s fine with me.

And of course, those projects can contain additional documents or even nested folders. And as long as you include those parent links at the top of the page and use those for navigating back, you’ll always be viewing rendered markdown files.

This is just one way of setting up sub-categories. Totally up to you how you want to organize it. That’s the beauty of it.

Benefits

Things I love about this system:

  1. Your content is available online from wherever you can log into github. But at the same time, you can always keep a complete local backup on any computer you want. Changes are just a pull away. There’s no PITA “export your data in json format” kind of process. Keep it up to date all the time. You can even do a local pull via a cronjob once a day, or more or less often (see the sketch after this list).
  2. Online content is nicely rendered in HTML on the web, but also completely readable and editable on your local system. Of course, if you are using a simple text editor, you might miss out on the clickable links and images.
  3. Editing content online is one click away. Just click the pencil icon as you’re viewing the markdown file and you’re in edit mode. Save and you’re back to an HTML view.
  4. If you do end up in an offline situation you can still edit your local files. If you wind up with a conflict, you’re not at the mercy of the tool in how it’s going to resolve it. It’s a straight up git merge, conflict resolution workflow that you’re already familiar with. Full control.
  5. Of course, it’s git, so every single change to every file you make is a commit, tracked, revert-able, never lost.
  6. Get sick of github? Pull locally, push it to gitlab instead. Or wherever.
  7. No special tools, frameworks, or other systems needed. Just markdown files.
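
For item 1, the cron entry can be as simple as this (the path is wherever you cloned the wiki; adjust the schedule to taste):

# pull the latest wiki changes every morning at 6:00
0 6 * * * cd "$HOME/wiki" && git pull --quiet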

Overall, I’ve really come to rely on this system. I’m constantly thinking of things to add to it. Every time I figure something out or find some interesting bit of data I know I’ll want to come back to, it goes in the wiki. I find myself actively using it all the time.

Potential Improvements

One of the things I’ve thought about is automating some of the work. The main readme and sub-category index files could potentially be auto-generated. It wouldn’t be rocket science. Iterate over each markdown file and make a listing. And for each folder, look for the index file in that folder and add that to the listing. Then re-write that readme/index file with the updated listing. Recursively travel through the folders. If you had a script or tool that did this, it could be run via github actions perhaps.
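
As a starting point, a non-recursive version of that generator is only a few lines of shell. This is a sketch under my own layout assumptions (top level pages live in docs/ and each starts with a # Title heading), not something I actually run yet:

#! /bin/bash
# regenerate the README.md index from the files in docs/
{
  echo "# My Personal Wiki"
  echo
  for f in docs/*.md; do
    # use the file's first "# " heading as the link title
    title=$(sed -n 's/^# //p' "$f" | head -n 1)
    echo "- [$title]($f)"
  done
} > README.md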

If I were just editing locally, I could easily set up an editor command to create the file and link to it. But I find myself using the wiki more often online than locally. So I have to keep thinking about all this.

Example

Here’s a link to the sample wiki I created just for this post. (My real personal wiki is private.)

https://github.com/bit101/wiki_example

Feel free to fork it or just copy the idea and make it better. If you come up with any bright ideas on how to improve it, please share back.