RAM, Bit-Box, Attractors, Interpreters

What have I been up to over the last week? A few things.


I got my next 16GB batch of RAM in for my desktop. Out with the old, in with the new. It works. With the upgraded CPU and double the RAM, this 9-year-old computer feels like new.

16 GB of shiny new (old DDR3) RAM

Now I have an extra 32 GB of DDR3 RAM that I will probably never use.

4x4GB, 4x2GB, 2x4GB

The four on the left didn’t work in my computer, but I suspect they might work in some other motherboard. The rest all works fine. It’s going to end up in a drawer somewhere, so I’d love to get it to someone who can use it. Open to any offers: shipping, plus whatever you want to pay. I’m not looking to profit, but if I could recoup some fraction of what I spent, that’d be nice.


I was really enjoying my Bit-Box for the last week or so. Until I was messing around with the USB cable for some reason and managed to tear the connector right off.


The problem, I think, was that I used way-too-heavy solid wire to connect everything. That made it all far too stiff, so when I put a bit too much strain on the USB cable, the connector was the first thing to go. Also, soldering everything directly to the board wasn’t the greatest idea either. So the first step was to remove the old one…

Not ideal

Then I soldered on the pins that came with the Arduino boards. I had some ribbon cable with connectors on it, so I cut some of that to size and plugged it onto the pins.

Much better

Now it’s much more flexible and in the event that something does go wrong with this board, I can easily pull it and put in a new one.

I still have plans on making a larger one, but haven’t started anything on that front yet.


I’ve been continuing to learn GTK3 desktop Linux app development, working on making an app that does live drawing based on changeable UI parameters. It’s been fun and I’ve made some good progress. I figured out enough of the architecture to feel some confidence in how to set something like this up, and started in on making a real app using this idea.

One app that I’ve done multiple times over the years is what I call “Strange Explorer”. It presents different formulas for strange attractors, allows you to change their parameters in real time and see and save the outcome. I’ve done it in Flash and JavaScript, and even made a native iOS version. Here’s what I’ve come up with so far:

Two things about this project that I’m really happy about so far. One is that the different formulas are created as external configuration files that get read in at run time. They look like this:

name Hopalong Attractor
x_formula y-sqrt(abs(b*x-c))*sign(x)
y_formula a-x
z_formula n/a
scale 10
iter 50000
init_x 0
init_y 0
offset_x 0
offset_y 0
rotation 0
a -0.537
b 2.797
c 2.973
d 0

The code reads all the files in the formulas directory at startup. So to create a new attractor, I just have to create the new config file for it and start the app. It will automatically populate and function.

The other part is the formula parsing itself. Note that the formulas in the above configuration file are just lines of text that populate text fields in the UI. And those text fields are editable. And yet, it all winds up as runnable code. To accomplish that, I’m using a library called tinyexpr (https://github.com/codeplea/tinyexpr). You just feed it a string of text and it gets evaluated. There are also ways to bind variables, so a, b, c, x and y are all plugged into the formula at run time. It handles all the standard C math library functions, but you can also define your own, like I had to for the sign operation in that example.
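To make that concrete, here’s a minimal sketch of how tinyexpr gets wired up, based on my reading of its README (te_compile, te_eval, te_free, and the te_variable table). The sign helper and the iteration loop are my own illustration, not the app’s actual code, and you need tinyexpr.c/tinyexpr.h from the repo to build it:

```c
#include <stdio.h>
#include "tinyexpr.h"  /* https://github.com/codeplea/tinyexpr */

/* tinyexpr has no sign(), so define one and bind it below */
static double sign(double n) { return n > 0 ? 1 : (n < 0 ? -1 : 0); }

int main(void) {
    double a = -0.537, b = 2.797, c = 2.973, x = 0, y = 0;

    /* bind parameters, state variables, and the custom function */
    te_variable vars[] = {
        {"a", &a}, {"b", &b}, {"c", &c}, {"x", &x}, {"y", &y},
        {"sign", sign, TE_FUNCTION1},
    };

    int err = 0;
    te_expr *fx = te_compile("y-sqrt(abs(b*x-c))*sign(x)", vars, 6, &err);
    te_expr *fy = te_compile("a-x", vars, 6, &err);
    if (!fx || !fy) { printf("parse error at %d\n", err); return 1; }

    /* iterate: the compiled expressions see the updated x and y each pass */
    for (int i = 0; i < 5; i++) {
        double nx = te_eval(fx);
        double ny = te_eval(fy);
        x = nx;
        y = ny;
        printf("%f, %f\n", x, y);
    }

    te_free(fx);
    te_free(fy);
    return 0;
}
```

Recompiling the expression is only needed when the text changes; changing a bound variable and calling te_eval again is enough for live parameter tweaking.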

Eventually, I’ll create a way to bookmark settings and define brand new formulas right in the UI. Having a lot of fun with this project though.


To be honest, the attractors project has been a bit sidelined during the latter half of the week, as I’ve started down a whole new giant rat hole.

When I was creating the configuration files for that project, I needed a way to read and parse them at runtime. I thought about JSON and YAML. There are C libraries that handle those. There are also libraries designed specifically for reading configuration files of various formats. In the end, I decided all of those were overkill for what I needed. I just needed to read lines from a file, separate the first word (the name of a property) from the rest of the line (the property value), and add that to a model object.

So I wound up making my own parser. It does just that: reads each line, splits it on whitespace, takes the first element as the property name, and joins the remaining elements as the value. If it’s a numeric property, it converts it to a number. Super simple, but it works fine. I’ll need to bolster it with some error checking, but it’s all I really need.

But it reignited my interest in building an interpreter. And reading up on tinyexpr fanned those flames a bit more.

I’ve had this idea for a while now to build my own DSL (Domain Specific Language) for creating graphics. It would sit on top of something like Cairo Graphics, parse a simplified source file and create a picture. Mainly so I could do away with a ton of setup and boilerplate and just write the commands that make the pretty shapes. It would probably wind up looking something like Processing, but even simpler.

Now before you start throwing links at me to projects that already exist like that, or ask why don’t I just use ____? … that’s not the point. The point is that I want to make something like this. Even if it’s only for myself.

A while back, when I was doing more with Go, I picked up two books:

Writing an Interpreter in Go

Writing a Compiler in Go

I got a little ways into the interpreter book, but never finished. Not a criticism on the book, which seemed pretty good actually, but I got distracted.

Anyway, I decided to go back into this idea and found a few really good online resources. The one I’m working through now is Let’s Build a Simple Interpreter.

When I first started this series, I assumed it was maybe up to five parts and I saw that it was started 5 years ago, so figured I’d get through it fairly quickly. It turns out that it’s now up to 19 parts and the last installment was earlier this year.

And it is amazing.

As an author of technical books myself, I really appreciate it when someone can explain complex subjects clearly. There are so many poorly written technical books and tutorials that when you find one that is really well done, it is a joy. This is one of those.

Writing an interpreter is a complex thing. He takes it one step at a time, in bite-sized chunks that you really can wrap your head around. The overall goal is to write an interpreter that will handle most of the Pascal language. I’m on lesson 9 right now, and I don’t recall being this excited over a piece of technology in a while.

This series focuses on Python as the language that the interpreter is being developed in. But I’ve been re-writing it in C. Porting it while learning it has been a great way to force myself to really fully understand what each line of code is actually doing. Much better than just copying and pasting. In fact, I think from now on, if I want to learn something that is not totally language dependent, I’m going to seek out tutorials in a language other than the one I’ll be using.

If you’ve ever thought about learning more about this subject, this particular series is a great place to start. Along the way I’ve found some other resources that I plan on looking into when I work my way through this one. I’ll list them here.



(These were all taken from the simple interpreter series I’m working through now)


I’ve been working on a custom keypad build. I think I’ll call mine a Bit-Box. In my post last week I shared the initial proof of concept breadboard build and the second prototype build. This past week I finished the “final” version. I put “final” in quotes because I’m already thinking of how to make a better one. Anyway, here’s how it came out:

The previous iteration used an Arduino Micro clone, a plastic project box and some push-button switches. For this one, I wanted a nicer enclosure and better switches. I got some Cherry MX keyboard switches from Amazon. A dozen for around $10. And ten X-Keys customizable key caps for about the same amount. These are clear key caps with a top cover that snaps on. You can pop the top off and put a slip of paper in there to create a custom key.

I found some templates here: https://xkeys.com/xkeys/accessories/customprintedlegends.html Just go down to the section entitled “Templates for Pre-cut Legends”. Pull one of those templates into your favorite image creation program and create the keys however you want them to look and print out at 100%. Cut them out and they should fit just right.

Next, I needed some way to mount these switches and keep them in place. There are two methods generally. One is to get or create a custom circuit board with holes drilled in it. The bottom of each switch has plastic nubs as well as the leads sticking down. Those go in the holes and you solder each switch in place. The other method is to have a plate with square holes in it. The entire key fits through the hole and snaps in place. I couldn’t find anywhere to buy such a switch plate, but I did find a model for one at Thingiverse: https://www.thingiverse.com/thing:2789684/files

I downloaded the model and had it printed at Shapeways. I had it done in the cheapest plastic and the price was very reasonable. They kill you on the shipping though; it would probably be better to order in bulk. Here’s the plate I got, with the switches inserted:

They snapped right in perfectly. Then, I put the key caps on…

And then printed up the inserts and inserted them.

Top row is to switch to workspace 1, 2 and 3, close the current app, close current window. Bottom row is terminal, firefox, slack, clementine (music player) and file manager. I was pretty excited about how this was looking so far. Now on to wiring it up.

I got a few more Arduino Micro clones, soldered wires to one of them, and connected those leads to the switches. One leg of each switch went to a common ground terminal on the Arduino, and the other legs went to ten separate input pins.

At this point I was able to plug it in, send my program to it and test it out. It works! Now I wanted to create a nicer enclosure.

I got some 1/2″ oak and set about fabrication. I’m a bit rusty on the woodworking skills, but it all started coming back to me. I ruined a few pieces by measuring wrong, but got there in the end. This was a bit more challenging than a lot of my other woodworking projects because the inside of the box has to be exactly 2 3/4 x 1 1/2 inches, with fairly tight tolerances, in order to fit the switch plate. That took some careful math to get the outside dimensions of each piece so that the inside would come out right. Not rocket science, but easy to mess up, as I proved.

I cut some grooves in the side pieces and inserted some rails in them. The switch plate goes down into the top of the box and sits on these rails. It worked perfectly. I left the bottom open and filed a groove on the bottom of one of the sides for the cable to pass through. All very low tech, but I like that there are no screws or other fasteners holding the guts in place, and nothing to disassemble to get the switches out if I need to. Just turn it over and they come right out. I didn’t put any finish on the wood yet. I might do some shellac later, but I like the look of raw oak.

Post Mortem

What went well:

  • Key caps, switches, switch plate, Arduino, box build.

In short, I’m really happy with how it turned out.

What could have been better:

  • I used solid wire, probably too heavy for the purpose. Made it hard to arrange everything. If I do another one of these, I’ll use lighter weight stranded wire. Maybe think about using some connectors.
  • The box came out a bit chunkier than I want. It could easily have been half the height. (I am thinking about cutting this one down in fact.) And I’d probably go with 1/4 or 3/8 inch rather than 1/2 inch wood next time.
  • The graphics look great, but maybe a bit washed out. I might try printing them on photo paper to have them be a bit more vibrant.

Thoughts for next time:

  • I want to make a larger model, with maybe 15-20 keys. This will mean designing a custom switch plate, which might need more support in the middle. It also means altering the circuitry and software to use a matrix rather than having each key tied to its own pin. That will be a fun learning process.
  • I might also look into white LED switches and print the graphics with a white background so the light shines through.
  • It might be fun to think about some other controls. A rotary control for volume or brightness or whatever. Toggle switches for… something? Just go all out steam punk on it.

Computer Upgrades

A lot of things going on this week, so I’m probably going to have to make more than one post.

This one will be about some general computer upgrades I’ve been making.

Dell XPS 13 Battery Replacement

So, I bought a Dell XPS 13 back in 2016. It’s a sweet little computer. Really thin, lightweight, almost no bezel. But in 2018 I was really yearning for another Thinkpad and got myself a T-480, which I really love. I haven’t been using the XPS much but felt bad about that since it really is a great machine. So I pulled it out and discovered the battery was totally dead. It would run fine if it was plugged in, but the battery would not charge at all.

My wife had a Dell laptop some years ago and exactly the same thing happened to it. And I’ve heard similar stories from others. Seems to be a thing with Dell.

Well, I went on Amazon and found a replacement battery. I checked out some videos online that made the replacement look easy. Turns out, it was really easy. Nice job on that, Dell. Half a dozen screws to take the bottom panel off, then 4-5 screws to remove the battery. I took some photos so I’d know I was putting it back together right.

Yeah, it’s dusty in there.

It all worked just fine and now it’s fully operational. If I ever get back to going into the office, this will be the perfect device for creative commuting.

Like new.

Desktop Upgrade

OK, that was the one that went well. Next project was my desktop. I built it from scratch in 2011. It had an i3 540, 4 GB RAM and a 2TB spinning rust disk.

The original build. 2011.

Here’s the post on my old blog from when I first put this machine together:

Over the years, it’s seen a lot of upgrades. I boosted the memory to 8 GB. I think the power supply died at one point. I know there’s a newer one in there now. And recently I added an Nvidia GeForce 1050 graphics card. All kinds of storage in there now. At this point it boots Manjaro Linux XFCE from a 256 GB SSD, with a 1TB drive mounted as the home directory, and another 2TB drive as a backup. Then another drive that dual boots into Windows 10. Despite its aging specs, it’s been totally functional and I don’t really have any complaints about it for what it is. Mostly a home server, file storage and backup (also have offsite backup), sometimes media server, general browsing and email, etc.

But things can always be better and I’ve been thinking for a while about upgrading it completely. New motherboard, CPU, RAM. That was going to run $300-400 for what I wanted. But I realized I could go from the i3 540 to an i7 870 and upgrade the memory to its max of 16 GB, for just around $120 total. So I bought those items and they arrived this week.

The memory arrived first and that was easy to pop in. The computer booted right up and I’m pretty sure it was notably faster than before. Until it rebooted itself about a minute in. Repeatedly. Ran a memory test and that crashed it consistently. Tried swapping out different sticks to see if maybe it was just one bad one, but no luck. From what I can tell, it’s either bad RAM or just incompatible. I’ve read something about low density vs. high density RAM and apparently my old motherboard likes the low density stuff. So out with the new and back in with the old, at least for the time being, because my CPU was now here.

I popped off the fan, took out the old CPU, cleaned things up, put in the new CPU, got some thermal paste on there, and struggled to get the fan back in. It’s the fan I bought 9 years ago. It’s cheap and has those expanding plastic tabs that go through holes in the board and snap in. Well, in my case, they snap OFF. Two of them broke. I was hoping I still had good enough contact with the heat sink.

It booted up OK, but it got real hot real quick and shut itself right down. So, I spent a day without a desktop, while I waited for a new fan to arrive. I was just hoping I hadn’t fried the CPU.

The new fan came the next day (thanks Amazon!) and has a back plate that goes behind the motherboard and the heat sink/fan screws into that. This meant I had to take the whole computer apart and get the motherboard out. More than I was hoping to have to do, but OK.

Luckily, I’ve held on to this for 9 years. It seriously came in handy.

I gave it a good cleaning and did some better cable management while I was at it. Put it all back together and booted it up. I mixed up the SATA cables a bit, so had to go into BIOS and let it know which drive I wanted to boot from. Once that was done, it booted right up. I also mixed up one of the audio connections, but it was easy to fix. All running good now, and nice and cool! Can’t say definitively that it feels a lot faster. But I feel good knowing that it’s theoretically faster. 🙂

New CPU, new fan. Same old RAM.

Now that I knew I hadn’t totally melted the CPU, I did a bit of research on RAM and ordered another 16GB which is specifically compatible with my system. That will arrive next week.

But later the same day I was rummaging through a box of old cables and gadgets and found two 4 GB sticks of RAM. I vaguely recall getting them some years ago and remember that they were causing the computer to crash, which is why they wound up in a box. But I figured I’d try them out again. So far, so good. I’ve been running with 12 GB (4, 4, 2, 2) for a few days now and it’s been rock solid. Did some stress testing seeing how much RAM I could use at once, and it was all just fine. I’m not sure what was wrong before. Maybe I had put them in the wrong slots or something.

Next Tuesday, I have another 16 GB coming in. I’ll own 48 GB of RAM for this computer – three times as much as it can hold. And it’s all old DDR3, which is kind of useless in any new system I might build. Oh well.

So, a rough start, but pretty happy with where things wound up so far – even though with the second batch of RAM and the new fan, I’m closer to $200 in on this round of upgrades.

Of course now after taking out the motherboard and putting it all back together again, I’m going back to the idea of getting a modern board and CPU. And more RAM (weep). And possibly a new case. Maybe a project for later in the year.

I’m looking at the B550 boards that have come out recently. I used to build my own computers and always went for AMD. But I’ve been all Intel for the last decade.

Weekend Update June 26, 2020

I’m going to try to do some kind of weekly update about different things that are going on, things I’m working on, etc. Maybe just rambling thoughts.

Lately, there are two major projects I’ve been keeping busy with.

  • One is a custom Arduino-based keypad.
  • The other is a GTK based live algorithmic drawing tool.

The Keypad

I’ve wanted something like this for a while. I did a bunch of searching but couldn’t find exactly what I was looking for. But then I found the Elgato Stream Deck and that looks amazing. It’s also not cheap at all.

Strangely though, after I found that, I found all kinds of similar devices. I don’t know where they were hiding during my earlier searches. Then I found this post:

And again, once I found that, I found a ton of similar projects. I decided to do it and bought a 3-pack of Arduino Nano clones. When I got them, I discovered that they would not work for this purpose. You need a device with a 32u4 chip in it, which can act as a keyboard/mouse interface. But while I waited to get the right Arduino, I was able to mess around with the code from the above article on the Nano, figure out how it all worked, and start customizing it to my needs. It wouldn’t connect as a keyboard, but I could trace things out to the serial port and see what happened when I pressed buttons. Here’s the Nano, ready to go:

In the meantime, I got the right board, which is an Arduino Pro Micro clone, hooked that up on the breadboard, plugged it in, and actually got it sending keys to my computer.

Then I got some pushbutton switches and plastic cases, soldered it all up, drilled some holes in the case, plugged it in and set up my shortcuts. And YES! It all works! Here’s prototype 2, which I’ve been using daily for the past week:

One of the ways I changed the software is that I created a separate callback function for each button, and a library of functions to call. It makes it easy to swap around buttons by just changing what they do. The article’s code hard-coded each button to a function key. My approach allows me to send other combos like Super-F to open my file manager, Super-W to open my browser, Super-T for terminal, etc. Also, Alt-F4 to close a program or Ctrl-W to close a window. All this is super easy on Linux, because it’s really simple to set up keyboard shortcuts to do virtually anything there. On macOS, you have to jump through additional hoops, using Automator or Karabiner. But I’m not using it on a Mac, so I’m good.

The worst part right now is remembering which button does what. I now have a sticky note next to it with a legend. Not ideal. So the next phase will include Cherry MX keyboard switches with customized key caps.

Here’s a sneak peek at the final build in progress:

I hope to finish this up this weekend and I’ll post more details next week.

GTK Drawing App

The other project I have going on is a desktop app which will display live graphics generated from code, with various controls in the UI to update the code in real time.

When I was working mainly in JavaScript, I had built up a library of useful drawing functions that sat on top of the Canvas drawing API. I called this “bitlib”. I ported that to Go a couple of years ago, calling it “blgo” and have done a whole lot of work on it since. The Go version sits on top of Go wrappers to the C-based CairoGraphics library, which is part of GTK. Some weeks back I decided to try and go native with it, skip the wrappers, work directly with the C libraries, writing C code. I ported all of blgo to C and named it bitlib_c.

My reasons for getting away from JavaScript were mainly to get out of the browser. There are various limitations, security restrictions, performance considerations, etc. With Go and now C, I can write code, compile it, run the app, save an image, and view that image with a press of a key. I can create an animation by saving hundreds or even thousands of frames directly to disk and then convert them to a gif using ImageMagick, or even a video using ffmpeg. Again, all with one key press. I really love the setup that I’ve created over the last couple of years.

But one thing I missed from the browser is the immediacy and live feedback. In the browser, I was using my QuickSettings panel to alter the parameters for a drawing or animation in real time. This is not possible just running a program from the command line and saving to images. So I decided to build something myself.

One of the two major UI frameworks on Linux is GTK (the other being Qt). Since Cairo is part of GTK, it made sense to build a desktop app with GTK. I could display a Cairo-based image surface in the app and then alter it with controls in real time. It was a bit of a learning curve, but I now have a proof of concept working.

Here you can see an image panel on the right and a control panel on the left. The sliders control the parameters of the drawing code and the drawing is updated in real time. This is a trivial example, but eventually I’ll be able to use this for any level of complexity of algorithmic drawing. My goals so far have been to get something working, learn GTK, and learn the various patterns needed to create a non-trivial application in C. I’m starting to get a decent MVC-type setup going, which is nice. I want an easy way to create parameters and have that create the sliders or other controls, much like QuickSettings. Something more declarative, and less imperative than creating and initializing each control by hand like it is now.

I’ll be continuing to work on this project and will share my progress. It’s still very much in the proof of concept phase, and I’m not sure it’s ever going to be something I release outright. But who knows.

A Cool Tool

I’m always looking for something cool to improve my computing life. What’s the one command you probably use most often? ls is one of mine (though I usually alias ll to ls -la). I recently found a mostly drop-in replacement called exa. Check it out here:


It makes great use of color coding, metadata and attributes, and has better defaults, a built-in tree view, and git integration. It’s built in Rust, open source, MIT licensed, and has executables for Linux and Mac OS (with Windows on the horizon). Here’s a screenshot from exa’s github repo:

I just alias ls to exa and that mostly does all I need. Note that I said it’s a “mostly” drop-in replacement. There are some differences, but IMO, most of those differences are improvements.

Is it life changing? No. But it’s a nice little quality of life improvement. Now if I ever see a non-exa ls output, it just looks lame.

Animated Sinusoidal Cardioids

I think I just made up a thing. Usually when I think that, it just means fewer than a few hundred people have thought about it before me, so who knows.

Let’s start with cardioids. A cardioid is a heart-shaped curve. One way to create a cardioid is to roll a circle around another circle of the same size, tracing the path of a single point on the moving circle. Like so:


I discovered another neat way to create a cardioid while checking out the math art challenge.

#MathArtChallenge Day 7: Cardioids!

In this method, you divide a circle into an arbitrary number of points around its circumference. Then, for each point n, you draw a line from point n to point n*2. Point 1 to 2, point 2 to 4, point 3 to 6, etc.

Here’s some JavaScript/Canvas code showing this in action:

context.translate(400, 400);
const radius = 350;
const res = 100;
const slice = Math.PI * 2 / res;
const mult = 2;

context.beginPath();
for (let i = 1; i < res; i++) {
  let a1 = slice * i;
  let a2 = slice * i * mult;
  context.moveTo(Math.cos(a1) * radius, Math.sin(a1) * radius);
  context.lineTo(Math.cos(a2) * radius, Math.sin(a2) * radius);
}
context.stroke();

And here’s what that gives you:

You’ll notice in the code that I’ve divided the circle into 100 points, which is rather low-res. If I up that to 360, we get something nicer:

So I’m calculating the points by getting two angles, a1 and a2, computed as slice * i and slice * i * mult as described above, with slice being a full circle divided by res, and mult equal to 2 for now.

let a1 = slice * i;
let a2 = slice * i * mult;

What if I change that to mult = 3 instead?

Or, mult = 4 ?

You see that for each multiplier, m, you get m-1 nodes in the cardioid. Let’s just go crazy and see what happens when we set res to 1440 points and mult to 25:

Here’s the code for those of you following along at home:

context.translate(400, 400);
const radius = 350;
const res = 1440;
const slice = Math.PI * 2 / res;
const mult = 25;

context.lineWidth = 0.25;
context.beginPath();
for (let i = 1; i < res; i++) {
  let a1 = slice * i;
  let a2 = slice * i * mult;
  context.moveTo(Math.cos(a1) * radius, Math.sin(a1) * radius);
  context.lineTo(Math.cos(a2) * radius, Math.sin(a2) * radius);
}
context.stroke();

All very interesting, but I wanted to start changing things up even more. I decided that rather than using a simple circle, what if I varied the radius of the circle with a sine wave? Here’s the code I came up with:

context.translate(400, 400);
const radius = 300;
const res = 1440;
const slice = Math.PI * 2 / res;
const mult = 5;
const waves = 6;

context.lineWidth = 0.25;
context.beginPath();
for (let i = 1; i < res; i++) {
  let a1 = slice * i;
  let a2 = slice * i * mult;
  let r1 = radius + Math.sin(a1 * waves) * 100;
  let r2 = radius + Math.sin(a2 * waves) * 100;
  context.moveTo(Math.cos(a1) * r1, Math.sin(a1) * r1);
  context.lineTo(Math.cos(a2) * r2, Math.sin(a2) * r2);
}
context.stroke();

First I created a waves constant that controls how many sine waves will go around the circle. Then an r1 variable: the base radius plus the sine of a1 * waves, multiplied by 100. And an r2 variable doing the same with a2. So each point’s radius gets larger and smaller as it progresses around the circle. The result (setting mult back to 5):

You can get all kinds of interesting shapes by varying how many nodes and how many waves and the size of the waves and the resolution.

Of course, I had to have a go at animating these. The first idea was to vary the height of that radial wave. Here, it’s going back and forth from -80 to +80:

I’m not going to give the source for the animation examples, because it was written in another system entirely, but if you’ve followed along so far, you’ll be able to figure it out.

Next, I thought about varying the phase of that radial wave, so that the wave itself seemed to be animating around in a circle. This produced some really striking animations. I’ll close the article by posting animations for 2, 3, 4, 5 and 6 wave animated sinusoidal cardioids. Enjoy!

On the Road to Becoming an Audiophile, Part I

Over the past several months, I have been on the road to becoming an audiophile. The saying goes that the journey is more important than the destination. In this case, I totally agree. In fact, I’m fairly certain that I don’t even want to become an audiophile. But taking the first few steps toward improving the quality of my audio experience has increased my appreciation and enjoyment of the music I listen to.

Of course, appreciation and enjoyment are very subjective things. I can say for sure that during the times of my life when I was most into music, the quality of audio equipment I was using was often pretty poor. That didn’t matter. Simply having better equipment and media doesn’t mean that you’ll enjoy it more. But in my case, diving into this subject has been the impetus for me to listen to way more music and get more into it than I have been in quite some years.

The way I see it, high quality music playback is made up of three points:

  1. High quality playback equipment (players, amplifiers, converters).
  2. High quality media (music files).
  3. High quality output devices (speakers, headphones).

Any one of these can be a weak link in the chain. If you have a crappy player, it doesn’t matter what your media is or what you’re listening to it on. If you have crappy, low quality mp3 files, your hardware can’t make up for that. And you can have perfect media on a perfect system, but if you have crappy headphones, it’s gonna sound like crap.

In the past few months, I’ve put attention on all three of these areas: trying out new playback equipment, upgrading my digital music library, and learning about and trying different headphones (in-ear monitors, to be exact).

Actually, I went into this, not with the thought of becoming an audiophile, but just wanting a better way of organizing and accessing my existing music library. Of course, this led to a cycle of improving one of the above three points, and then seeing the shortcomings in the other two and upgrading those. This has definite potential to become one of those continuous loops of sinking more and more money into the next thing to get that extra 0.01% improvement in quality. I haven’t gone too far down this path and I’m pretty much happy with where I’m at and what I have at this point… though I can’t rule out one or two more purchases in the coming months. But after that, I’ll be totally happy. Really. Yeah… I might be in a tiny bit of denial.

Anyway, I plan on discussing each of the above points in future posts, defining some basic terminology, describing the paths I went down and pointing out what’s further on down the roads I have not traveled yet (yeah, “yet”). Again, I’m not an audiophile, and if I ever start referring to myself as such, someone please do an intervention. But learning more about this technology has been a lot of fun and I think I know enough to point out some practical basics to any newcomers.

Next up, I’ll talk about playback equipment – DACs and DAPs, etc. How I got into it, what I looked at, what I got, what else is out there, and what I’d think about getting in the future.

2019 in Review, and a new project

I don’t blog much these days, but usually do manage to get in a year-in-review post every December or January.

2019 was a tough year. Not bad, but lots of changes, lots of stress.


I’ve probably been more focused on work this past year than I have been in many years. To be honest, work has often been somewhat of “what I do in between my side projects” rather than the other way around. This year, the balance was very much the opposite. In fact, I had no real large side projects at all in 2019.

As mentioned in last year’s update, at the end of 2018 I became the Director of Engineering for Notarize. I took on this role with quite a bit of trepidation. I was actually offered the VP of Engineering role, but naively thought that taking on a lesser title would mean less responsibility and stress. In retrospect, it’s obvious that when you’re on top of the pyramid, it doesn’t really matter what title you go by. I could be the VP Eng or CTO or Director of Engineering or whatever else, and I’d still be doing what I’m doing now and stressing out just as much.

About three weeks into this new job, I got my first big challenge. We needed to do a significant, company-wide layoff, and I had to come up with a list of engineers to let go. Like, a dozen of them. I was very tempted to hand the list in containing just a single name – my own. But that wouldn’t have changed the outcome and would have just been taking the easy way out and giving the problem to someone else. We got through it, but it was one of the most painful things I’ve had to do in my career. Every one of the names on the list stuck me like a knife. They were all good people, many of them friends. I did what I could for each and every one of them, giving recommendations on LinkedIn, letting them know of other opportunities as they came up, and even acting as a personal/professional reference for several.

The rest of the year has gone a lot better. The remaining team members came together, got more focused and we’ve done some great work this year. We had two hackathons, agile training, a stability week, did some reorganization, sent people to some conferences, started warming up to the idea of having multiple remote engineers, and lots of other good stuff.

But there have been lots of ups and downs for me, personally. I’ve always tried to avoid management positions and remain a hands-on coder. That is all done. Apart from the occasional special projects, I’m not doing any regular coding at all. And I’m pretty much trying to figure out what I’m supposed to be doing day by day. It’s very different than being an engineer and having a project, feature or a bunch of tickets assigned to you. Every week, every day I have to decide what my job is going to consist of. It might be dealing with legal, personnel or code quality issues, coordinating requests across teams, arranging meetings and presentations or just about anything else. The panic points come when I’ve done everything that has been a pressing issue and I’m sitting there thinking, “now what the hell am I supposed to do???” And don’t talk to me about your imposter syndrome. You don’t want to know how bad mine is.

But overall it’s been great. A major learning and growing opportunity.

Other Projects?

Like I said, I haven’t really had any major public facing projects of any kind this past year. I still love geeking out over my multiple Thinkpads, trying different Linux distros, tweaking my home server and installing various media servers and other services on the home network and an external VPS and a few Raspberry Pis. Always tinkering with something.

But there has been one project slowly brewing, which I think I’m ready to make public…


I’ve been getting the urge to go back to writing books. I’ve written or contributed to about fifteen books with various different publishers. And a few years back I self-published Playing With Chaos. There are pros and cons both to going through a publisher and to self-publishing.

And then there’s a third option that I have not tried yet – open source publishing. Enter bitbooks. I’ll be posting more about this soon, but basically, I’ll be creating books on github. Not a new idea, but one that appeals to me right now. You can check in any time and see the progress of any current books. When they are complete, they’ll be available in various finished formats – epub, mobi, azw3, pdf. Free as in freedom and free as in beer. They’ll have a Creative Commons license. No DRM. No charge to download or read, but I’ll set up ways to donate for those that find the material useful.

One part that I’m most excited about is that because of the open, flexible publishing format, I’ll be able to provide code samples in multiple languages. That’s one huge problem with coding books – they are almost always tied to a specific programming language. Of course, the text of the book and the examples given in the book will have to use a specific language, but I’ll be providing the actual code samples on github in multiple languages. And I’m hoping to get some community involvement as well, with people submitting examples in their own favorite languages as pull requests.

Again, more on this coming soon, but go ahead and check out the site and see what’s up there so far.

Onyx Nova Pro update

Back in March I wrote a review of the Onyx Boox Nova Pro Ereader: https://www.bit-101.com/blog/2019/03/onyx-nova-pro-ereader/

Now, nearly 5 months later, I thought it would be good for a quick update.

tldr: I still love it. Like, really love it.


PDF reading

I’ve had more of a chance to consume PDF content on the device. As mentioned in the first review, most PDFs are utterly unreadable on a 6″ Kindle. Because each page has a fixed layout, when rendered full screen on a 6″ screen, the text is generally way too small. On a Kindle, you can zoom in and scroll around, but it was always way more painful than it was worth.

On the 7.8″ screen, things are much better, but it’s still a bit too small for full page reading. I’ve eyed the Onyx Boox Note Pro, which has a 10.3 inch screen and would be amazing for this purpose. But at $599, I don’t think I’m going that route. The plus point is that the Onyx software has some great PDF reading features that actually make even the 7.8″ screen functional for reading PDFs. It can automatically crop to the width or the height of the actual text on the page, perfectly getting rid of the extraneous margins. This allows most PDFs to be zoomed in enough to be workable. It also allows for different layouts. For example, if you have a document with two columns, you can set the reader view to a two-column layout and see the first column zoomed to width – the top half anyway. Press next page and you see the bottom half of the first column, then the top right, bottom right, and onto the next page. I’ve read some things this way and it’s very workable.

PDF reflow options

Drawing / Sketching / Notes

I don’t use this feature a ton, but I still love it. Plenty of times that I’m trying to work out something or help my daughter with math or programming and it’s great to sketch out ideas or work out some formulas. And I’ve used it quite a bit for sketching notes of things I might want to build (physically or in software).

Content Management

Nothing much new here other than saying my system continues to work really well. My Calibre library is in Dropbox and all de-DRMed, so I can access it on any device, including on Android using another ereader app. I’m currently using Moon+ reader, which works really well if I’m somewhere without the Nova Pro and want to read a few pages of something. You don’t get the syncing like you would on Kindle apps/devices, but that’s fine.

The UI

The device had the v2.0 firmware when I got it. Shortly after that it got v2.1 and since then v2.1.2. The first update had huge UI improvements, and v2.1.2 brought a bunch of bug fixes and stability improvements. Other than the UI having so many features that the learning curve is fairly steep, I have no real complaints about it at this point.

Battery Life

In my first review I said that I had the power options set up to completely shut down the device after one hour of sleeping. I thought this was saving a ton of battery life, but I switched over to letting it stay powered up 100% of the time to avoid the 30-40 second startup time, and I don’t see a huge loss of battery time. I get many days to a week of battery life. I don’t really know, because I’ll just plug it in at night when I go to sleep if it’s under, say, 60%. Also, it charges really fast. There have been times I plugged the reader into my computer to transfer some files and do some other file management and noticed that the battery jumped 10% or so after just that short time being plugged in. In general, battery life is just a non-issue.

Android Apps

Not a huge change there. Pocket, Instapaper, Dropbox are still my go-to apps. I did start using IAReader as well, syncing it to Dropbox. My workflow for that is if I see something in a book I want to save or use elsewhere, I can copy it, open IAReader and paste it into a document. As long as the Nova Pro is online, that text is then available from wherever else I have Dropbox installed.

I’ve also tried the Moon+ reader Android app. Many Onyx users swear by it. It’s nice, and has some great additional features that the built in reader does not have, but it hasn’t totally won me over yet.

Onyx Nova Pro Ereader

Check out my update to this post, after using the Nova Pro for more than 4 months: https://www.bit-101.com/blog/2019/08/onyx-nova-pro-update

Let’s talk about ereaders. And first, let’s talk about Kindles.

I got my first Kindle in April 2009. Almost exactly 10 years ago. Goddaaaaamn, time flies. I didn’t buy into the whole Kindle hype at first. But I decided to try out the Kindle 2, skeptically. And I was hooked. I wound up buying several Kindles over the years, upgrading not at every new version, but every 2-3 years. I got the PaperWhite 4 late last year – as soon as it was available for preorder. And I loved it. Then I started thinking about getting the Oasis 2, mainly for the screen size. Most Kindles have stuck with the 6″ screen, whereas the Oasis went up to 7″. But the Oasis 2 is already pretty old, so I was waiting to see what might come out later this year.

And in the meantime, started looking around at other solutions. A couple of lines of alternative ereaders caught my eye:

Likebook and Onyx Boox

They both make several different models. E-ink, a lot of large screen sizes, and, most interestingly, with a stylus for sketching or taking notes. They also run a version of Android, which means you can install additional apps on them. Oh, and yeah, they are $$$$$. Way more than Kindles in general.

After researching what was available, I went for the newly released Onyx Nova Pro. Sadly, it’s currently unavailable on Amazon, but I think you can get it straight from Onyx, and other sources. You might have to dig around. [It’s back in stock.]

This has a 7.8″ display, larger than the Kindle Oasis, which is one of the first things I was looking for. The screen is beautiful, high dpi, front lit with two colors of LEDs which you can control independently or sync them for a neutral light.

Reading Books

It has a built-in ebook reader which supports a ton of file formats: PDF, ePub, MOBI, Doc, Docx, Docm, TXT, DjVu, FB2, HTML, CHM, AZW, AZW3, FBZ, ODT, PRC, RTF, SXW, TRC, JPG, PNG, BMP, TIFF, CBR, CBZ.

Compare that to the Kindle, which supports AZW3, AZW, TXT, PDF, MOBI, PRC. Anything else has to be converted to one of those formats.

Your reading experience can be customized to an amazing degree. Seriously, after two weeks, I’m still learning what you can do. Fonts, light temperature, boldness of text, text encoding, indentation, columns, text direction, text contrast, image contrast, line spacing, paragraph spacing, vertical margins, horizontal margins. The Kindle has some of that, but not nearly to the same degree. You can have several books open at once, in tabs. You can do text-to-speech. You can customize what happens when you tap any area of the screen, setting up different actions for each of nine screen zones. It does timed auto page turns and does a grid preview of either one, four, or nine pages at a time, plus table of contents as a sidebar.

Rather than the arbitrary Kindle “location” numbers, it always gives you the total pages and current page based on one page being one screenful of text. Of course this changes if you change the font size. And to be honest, it’s a little bit janky. It’s often a page or two off, but in general, it’s a fine estimate and easier to understand than Kindle’s locations.

Quick note about PDFs: while the Kindle supports PDFs, the 6″ screen size makes 95% of PDFs completely unreadable. The extra screen real estate on the Nova pretty much handles that. It’s also got a ton of other PDF-specific tools that I haven’t mentioned, that make things a lot easier, such as auto-cropping of margins to make reading PDFs even better.

For PDFs, you can use the stylus to draw or write directly on the PDF. For other formats, you can choose the side note feature, which opens the doc in split-screen landscape mode, one side for the book, one side for you to scribble on.

It has built-in dictionaries and you can add others and switch between them easily. Unlimited highlighting and notes, with different types of highlighting and underlining.

I could go on. It’s overwhelming. But overall, it’s just a great reading experience as well. The screen size and resolution, lighting, all the formatting options and navigation just make it a joy to read books on.

The Stylus

The other main thing that made me buy this was the stylus. Or in reality, the Wacom drawing layer that lets you draw on the screen with the stylus.

I already mentioned being able to draw on PDFs. But more useful for me is the built-in notes app. Again, crazy options: fixed width pencil mode, or pressure-sensitive brush mode, a few simple shapes, erase entire lines or just portions, lasso tool – select, scale, rotate, move drawings, add text, OCR what you’ve written, choose line width and color (black, white, red, green, blue – colors show up on exported PDFs). It comes with a bunch of templates – grids and lines of all shapes and sizes, calendars, todo lists, etc. Back up to Dropbox, OneNote or some other services. Each note can have multiple pages and you can organize notes into folders. Notes are natively saved as PDF files, but you can also export them as PNGs.

The quality of the notes app blew me away. Exceeded all my expectations. They look great, really smooth, zero lag. The pressure sensitivity is great. The feel of the pen on the surface is perfect. So much like just writing on paper. Notes only work with the pen, so palm rejection is just automatic.

Again, I could go on about this feature forever. I really like it.

Things you won’t find, though, are things like zooming in and out or layers. It’s not a serious art tool, more for handwritten notes and quick sketches.


Android Apps

To be honest, this aspect of the tablet I could have done without. But now that I have it, there are a few cool aspects. Although it has a built-in app store with a few staple apps, you can also add the Google Play Store and download anything that will work on Android 6. Really the only things I’ve found particularly useful are Pocket, Instapaper and Dropbox.

I mean, it comes with an email client, but you can also add Gmail. It has a full featured browser. You can install Youtube! And games. Yes, videos and animation in e-ink. It works and it’s fun to try out, but seriously… no. I have way too many other devices to hand that just do all those things much better.

You can also do audiobooks and play music via Bluetooth headphones or speaker. But again. I have other devices.

Oddly, you can also install other ereader apps, including the Kindle Android app, Nook, Google Play Books, etc. Personally, I use Calibre to manage my ebook library and put it directly on the device anyway. More on that next.

Content Management

I use Calibre to manage my ebooks. It handles just about any known format of ebook and does a good job of converting between formats. It has a plugin system and a rich ecosystem of plugins that deal with all kinds of things. One I use often is a metadata plugin that searches the internet for metadata and cover images for any book that is missing those.

I also use a DeDRM plugin so when I download my ebooks from Amazon, it strips the DRM from them and lets me read them anywhere, including the Nova. Is this legal? Probably not. Is it ethical? I feel like it is. I’m not uploading my books to sharing sites, or otherwise making them available to others. I buy books for my personal use, and I use them personally. But I want to have backup copies and want to read them on devices and apps that are not made by Amazon.

Another great thing is that when I connect the Nova to my computer with USB, Calibre recognizes it. I can then select whichever books I want and just say “Send to device” and they instantly appear there.

The Nova has 32 GB of storage, which is huge if you are just storing books there. Audiobooks, music, and manga will take up a lot more space, but that’s as much as the best Kindle has.

In the reading app you can view your whole library, filtered and sorted by various criteria, or toggle over to just recently active books, which is what I mostly use. You can also sort books into folders.

If you just want to shoot a single document over and you don’t have a USB cable to hand, the Nova has a wifi transfer mode. This launches a local server on the device and shows you its address. You go there on your computer and get a web page that allows you to select a file and send it over to the device. I use this often.

The Dropbox app is also super useful for accessing content on the device. And as mentioned before, I use the Pocket and Instapaper apps to save web articles and read them later on the Nova. It’s a pretty good experience.

The system also includes a simple but totally functional file manager that lets you move files around, rename, delete, create folders, all the usual stuff.

OK, the not-so-good parts…

The UI

As I’ve been saying, the number of options on this thing is overwhelming and I’m still learning all that is possible. The UI is not nearly as slick as the Kindle. In fact, it is very much a UI that some enthusiastic developers put together. In addition, all the software is created in China, and it’s created in Chinese. You can set the language to be English and you’ll be fine, but you will see some strange grammar here and there in the UI, and the occasional misspelling. The good news is that the firmware is under very active development. In fact, shortly after I got my Nova, they announced version 2.1 of the firmware. I was able to update easily and it brought a ton of improvements.

On the plus side, the system has been rock solid. It’s never once crashed or frozen up or done anything weird at all in the couple of weeks I’ve had it.

Battery Life

The Kindle advertises six weeks of battery life. When you drill in on that, that’s with no Bluetooth or wifi and 30 minutes of reading per day. Even then, I’d say that’s optimistic. In real life, you’ll probably get at least a couple of weeks on a Kindle charge though.

The Nova is a different story. It’s an Android device. But, that said, it’s pretty good at conserving power. I’ve gone a full weekend with a lot of reading and use and still wound up at 60-something percent. During the week, with a bit less use, I made it through four days and was down to 30-40% I think. So you’re talking days, not weeks. I’ve got a big USB charger next to my bed, so it’s easy enough to just plug it in before I go to sleep.

A Kindle just sleeps and when you want to read it just instantly wakes up. The Nova can do the same, but when it’s sleeping, it’s going to be using a lot more power than the Kindle would be. So there are two power settings. One will put the device to sleep after so many minutes of inactivity, another will turn it off after a certain time. Options are sleep after 3, 5, 10, 30 minutes or never. And power off after 15, 30, 60 minutes or never.

I have these set to sleep after 5 minutes and turn off after an hour. And with that, I get several days off a battery charge. You can disable the power off timer, but that’s going to affect the battery life. I haven’t experimented with that, so I’m not sure what the impact would be. The downside to turning it off is that it takes a little while to boot up. About 35 seconds. A bit of an annoyance, but since it stays on for an hour after your last activity, it’s livable.

And obviously if you install a lot of apps that are doing stuff in the background, that’s going to affect battery life. There is a power saver mode, which apparently throttles background app power use. I haven’t tested that much either.

Overall, with my current settings, I’m completely satisfied with the battery life at this point.


Price

The Onyx Nova Pro costs $299 USD. Plus tax, shipping, whatever. I threw in $40 for a case that does the auto-on/off when you open/close it, and has a holder for the stylus. That’s the same as a 32 GB Kindle Oasis without special offers. But SO many more features.

Update: The $299 was what I paid on Amazon, where it is currently not available. It will probably cost a bit more elsewhere. Hopefully Amazon will get some more soon.

Second Update: It’s back on Amazon, but currently $319.

More info

From the horse’s mouth: Onyx Boox Nova Pro

Some video reviews:

Note: the first video shows the device running the older firmware, while the second shows the updated 2.1 firmware with many improvements.

Perlinized Hexagons

In my last post, Hexagons, I talked a bit about how cool hexagons are, and gave one method of creating a hex grid. I also used this image as a header:

This was a bit of a tease, as I didn’t show how to create such an image. But I promised I’d come back around and give some code for it, so here we go.

First, some refactoring

As I’m sure most of you guessed, this image uses Perlin noise, but you could use any other function to get different textures. Check out my articles on flow fields (parts one and two) for more ideas along these lines. I’ll also be using the same Perlin noise library mentioned in the second of those articles.

I’m going to take the basic code from the tiled hexagons in the last article and change it up a bit. Mainly all I’m doing here is removing the translation code and calculating each x, y drawing point directly. Here’s what that looks like:

const canvas = document.getElementById("canvas");
const context = canvas.getContext("2d");
context.lineWidth = 0.5;

function hexagon(x, y, r) {
  let angle = 0;
  context.beginPath();
  for(let i = 0; i < 6; i++) {
    context.lineTo(x + Math.cos(angle) * r, y + Math.sin(angle) * r);
    angle += Math.PI / 3;
  }
  context.closePath();
}

const radius = 10;
const ydelta = Math.sin(Math.PI / 3) * radius;
let even = true;

for(let y = 0; y < 900; y += ydelta) {
  let offset = 0;
  if(even) {
    offset = radius * 1.5;
  }
  for(let x = 0; x < 900; x += radius * 3) {
    hexagon(x + offset, y, radius);
    context.stroke();
  }
  even = !even;
}
Here’s what that gives us. Notice that I made the radius much smaller and stroked it instead of filling it.

Again, this all builds off of the last article. I had to remove the translation code because I need the exact x, y coordinates of every point of every hexagon, so I can feed that into a Perlin noise function to know how much to shift it.

Adding some noise

The Perlin function is going to make use of this library: https://github.com/josephg/noisejs. Feel free to add that to your project however you’d like. Or add a different noise function. It doesn’t really matter which one you use.

I need a function that’s going to take an x, y coordinate, calculate an offset based on Perlin noise, and return a new, shifted x, y coordinate. Since JavaScript doesn’t let us return multiple values, I’ll return an object with x, y properties. Here’s the function:

function perlinize(x, y) {
  const scale = 0.01;
  const strength = 20;
  const angle = noise.perlin2(x * scale, y * scale) * Math.PI;
  return {
    x: x + Math.cos(angle) * strength,
    y: y + Math.sin(angle) * strength,
  };
}
To revisit how a Perlin flow field works, we’re going to use 2-dimensional Perlin noise, feeding it an x, y value. Since the numbers we’re using will be in the hundreds or even thousands for pixel values, we’ll scale that down a bit with a scale value. This function returns a value from -1 to +1. Other Perlin noise functions sometimes return values from 0 to 1. Either will work just fine, but might need some tweaking. I take that value and multiply it by PI, which will give me a number from -PI to +PI. We’ll call that an angle in radians. In degrees, it would be -180 to +180. And we’ll use the sine and cosine of that angle to create an offset with a customizable strength. Then, we return the original x, y coordinate plus that x, y offset.
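If the noise function you end up with returns values in the 0 to 1 range instead of -1 to +1, the tweak is just a remap before multiplying by PI. Here’s a quick sketch of that idea (the function and parameter names here are just made up for illustration, and I’m passing the noise function in as a parameter so the example stands alone):

```javascript
// Variant of perlinize for noise functions that return 0..1 instead of -1..+1.
// noise01 is any two-argument noise function with a 0..1 output range.
function perlinize01(x, y, noise01) {
  const scale = 0.01;
  const strength = 20;
  const n = noise01(x * scale, y * scale); // 0..1
  const angle = (n * 2 - 1) * Math.PI;     // remap to -PI..+PI
  return {
    x: x + Math.cos(angle) * strength,
    y: y + Math.sin(angle) * strength,
  };
}
```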

Now we just need to alter our code to use the perlinize function. I’ll just show the relevant function:

function hexagon(x, y, r) {
  let angle = 0;
  context.beginPath();
  for(let i = 0; i < 6; i++) {
    const p = perlinize(x + Math.cos(angle) * r, y + Math.sin(angle) * r);
    context.lineTo(p.x, p.y);
    angle += Math.PI / 3;
  }
  context.closePath();
}

And this is where this gets us.

Each hexagon still perfectly tiles because the Perlinized offset for each shared point of each hexagon should be exactly the same, at least down to an adequate degree of precision.


Now let’s add some shading. Basically we just want a grayscale value for each hexagon. To enhance the effect, the shading should be coordinated with the offset flow field values, so we’ll use the same Perlin noise settings. Here’s a function that will accomplish that:

function getShading(x, y) {
  const scale = 0.01;
  const value = (noise.perlin2(x * scale, y * scale) + 1) / 2;
  const shade = 255 - value * 255;
  return "rgb(" + shade + "," + shade + "," + shade + ")";
}
This takes an x, y coordinate and returns an rgb color string. Remember that this particular noise function returns -1 to +1. I’ll add 1 to that to get a range of 0 to 2, then divide by 2 to get 0 to 1. Then I’ll calculate a shade value by multiplying that normalized value by 255 and subtracting the result from 255. A bit convoluted, I know. This is all made a lot easier if you have a map range function. It would look something like this:

const n = noise.perlin2(x * scale, y * scale);
const shade = map(n, -1, 1, 255, 0);

To see how a map function would work, check out this video.
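If you don’t have a map function handy, a minimal version is just one line of linear interpolation. This is my own sketch, not from any particular library:

```javascript
// Map value from the range [inMin, inMax] to the range [outMin, outMax].
// The output range can be inverted, e.g. mapping -1..1 onto 255..0 so that
// higher noise values come out darker.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}
```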

Finally, we just need to alter our drawing code to set the fill color based on the x, y of each hexagon, and do a fill before the stroke:

  for(let x = 0; x < 900; x += radius * 3) {
    hexagon(x + offset, y, radius);
    context.fillStyle = getShading(x + offset, y);
    context.fill();
    context.stroke();
  }

And here is the result:


Don’t stop here. There are all kinds of variables and different rendering techniques to experiment with here. This is all just about taking two different concepts – hex grids and Perlin noise – and combining them in a creative way. This worked out pretty well, but there are infinitely many other combinations waiting for you to discover.

Late breaking comment…

One thing I had in mind while writing this code, but totally spaced out on while writing the article, is that this code is extremely unoptimized. Every single x, y point here is calculated and then “perlinized” three times. OK, some of the ones on the edges get calculated only once or twice, but you get the point.

Strategies for optimizing would probably involve creating a single point grid one time and then using a loop to draw using the pre-calculated points in that grid.

This is left as an exercise for the reader.™
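If building a full point grid sounds like too much homework, a cheaper stopgap is to memoize the point function so each shared corner only gets computed once. This is just a sketch of the idea; memoizePoint is a made-up helper name:

```javascript
// Wrap any point-transforming function (like perlinize above) in a cache so
// that shared hexagon corners are only computed once.
function memoizePoint(fn) {
  const cache = new Map();
  return function(x, y) {
    // Round the key to a tenth of a pixel so floating point jitter in
    // shared corners still lands on the same cache entry.
    const key = Math.round(x * 10) + "," + Math.round(y * 10);
    let p = cache.get(key);
    if (p === undefined) {
      p = fn(x, y);
      cache.set(key, p);
    }
    return p;
  };
}
// Usage would be something like:
// const fastPerlinize = memoizePoint(perlinize);
```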