ANSI Escape Codes

misc

Yesterday I created a little library to manage ANSI escape codes in Go.

I’ve been working on a command line app that will have the user selecting items from a list and answering a few questions in order to configure how the app runs. I realized that a bit of color would add a ton of context and make the experience much easier to navigate.

It turns out that there are quite a few libraries that do this kind of thing. Initially I used https://github.com/fatih/color which is pretty nice. But as always there were one or two things that didn’t work exactly the way I wanted them to. I started looking into how colors are defined in shells and just kind of wound up going down a rabbit hole on the subject.

I started by just making a few functions that did the few things I wanted to do. These actually were sufficient to replace all of what I was doing with the library. But I kept working on it and kept seeing ways to improve it. And I was learning a lot and having fun. I started in the morning and banged on it off and on all day.

Basics

ANSI escape codes can do all kinds of things in the terminal. You’re already familiar with a few, like \n for a new line, and \t for a tab character, and maybe \r for a carriage return (moves to the beginning of the current line without adding a new line).

Most people take a deep dive into escape codes when, like me, they want to set colors in the terminal. I’ve messed with this before when setting up my custom prompt a few times, but I “promptly” forget how it works each time. (Come on, that was good!)

But they are also useful for setting properties like bold, underline, and reversed; moving to different locations in the terminal; and clearing existing text from a line or the whole screen.

Most of these sequences start by printing the escape character to the terminal. That character is 27 in decimal, but you need to write it as an escape sequence in your string. This can be done in a few different ways.

The most common way is octal:

\033

But some people prefer hex:

\x1b

If you’re really into unicode, you can even go with:

\u001b

I prefer \033 so that’s what I’ve used throughout.

After the escape code, you need to print an opening square bracket for most of the commands we’ll be using. This is the Control Sequence Introducer or CSI.

\033[

Now you’re set up to enter a code that actually does something.

Colors

Basic colors are defined by a number from 30 to 37, followed by the letter “m”.

Some references will tell you that you need two numbers, separated by a semicolon, and followed by the letter “m”. The first number can be 0 or 1 and specifies the brightness/boldness of the color. The second number is a number from 30 to 37 inclusive. This gives you eight base colors with two shades for a total of 16.

This is actually pretty inaccurate and I’ll cover the truth below, but for now let’s just look at the eight base colors.

To set a regular red color, you’d do this:

\033[31m

Try it in a terminal like so:

echo "\033[31mHello, world, I am red."

If this doesn’t work for you, you might have to pass -e to the echo command to enable it to interpret backslash escape sequences.

echo -e "\033[31mHello, world, I am red."

These colors may look slightly different depending on your OS, terminal emulator and colorscheme in use.

Here are the color codes:

30: Black
31: Red
32: Green
33: Yellow
34: Blue
35: Magenta
36: Cyan
37: White
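
If you want to see them all at once, here’s a little Go sketch (using the raw codes directly, not any library) that prints each code in its own color:

package main

import "fmt"

func main() {
	names := []string{"Black", "Red", "Green", "Yellow", "Blue", "Magenta", "Cyan", "White"}
	for i, name := range names {
		code := 30 + i
		// Set the foreground color, print the code and name, then reset with 0.
		fmt.Printf("\033[%dm%d: %s\033[0m\n", code, code, name)
	}
}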

The Shocking Truth about that Leading 0/1

First, let’s learn about another ANSI code – bold. The code for bold is just 1. And if you want to turn it off, or explicitly say not bold, use 22.

echo "\033[1mThis is bold!"

You can even insert the sequence in the middle of a string and turn it off again later.

echo "The word \033[1mbold\033[22m is in bold"

It’s subtle there, but yeah.

We can combine multiple sequences into one. Just set the color code, a semicolon, and the bold code, all followed by the “m”. For example, we can add a color and the bold attribute. Let’s compare “regular yellow” with “yellow plus bold”.

echo "\033[33myellow"
echo "\033[33;1mbold yellow"

So as I said, some references tell you that you need the 0 or 1, followed by the color code, to fully define a color. Like this:

Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37

This is wrong and misleading. There are eight basic colors and 1 is the code to print in bold. Period. Yes, unbolded yellow can be pretty dim and look more orange or brown, but these are non-standard, random names that just confuse things.

So what about 0? Well, it’s not the “not bold” code. In fact, we already saw that 22 is the code to undo bold.

Actually, 0 is the reset code. It resets all styles. So it’s pretty useful to have as the first code, especially when you are trying to create a new style while other styles might already be in play.

But thinking that the first code should be “0 or 1” is misleading. Here’s a use case:

Say I wanted some text in regular green, underlined, and then more text in bold red – not underlined. If I’m fixated on “0 or 1”, then I’ll do something like this (4 is the code for underline):

echo "\033[0;32;4munderlined regular green \033[1;31mbold red"

But now the red is still underlined. If I change the last 1 to a 0, then I’ll get rid of the underline, but I’ll lose the bold. I actually need both! And there’s no problem with doing that.

echo "\033[0;32;4munderlined regular green \033[0;1;31mbold red"

In fact, you could move the 1 later, like this:

echo "\033[0;32;4munderlined regular green \033[0;31;1mbold red"

The first version is saying “clear it, then make it bold and red” and the second one is saying “clear it, then make it red and bold”. Same thing.

Thinking that colors are a two-part code with a leading 0 or 1 is just incorrect. Saying you have to prefix a 0 or 1 is literally saying, “reset all styles OR add a bold style to whatever style is there already.” Illogical.

It took me a long time to work through the logic of all this, but now it makes a lot more sense. Hopefully this helps you down the line.

Actual Bright Colors

There’s one more color/shading alternative: another set of actual “bright” colors from 90 to 97. These are brighter than the regular colors, but aren’t quite as bright as the bold versions.

Below you can see 36m, 96m, 1;36m and 1;96m.

A subtle difference, but good to know. (Actually I don’t see any difference in the last two, but maybe you do.)

Background Colors

You can also use ANSI escapes to set background colors. These again follow the same sequence but go from 40 to 47.

echo "\033[41mRed background"

Now you can combine a background and foreground color:

echo "\033[0;1;32;41mGreen on red, my favorite"

There are other codes for making text dim, italic or strikethrough, but these have much less support in terminal emulators than the ones I’ve mentioned.

And if you have a terminal that supports it, you can specify up to 256 colors with a slightly different syntax that I’m not going to cover here because it’s just beyond what I need.

There’s a whole lot of other stuff you can do with these codes, including moving the cursor up or down or left or right or to a specific row and column, and clearing part or all of a line or part or all of the terminal window.

This page is one of the better one-stop references I’ve found:

https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797

The library!

So anyway, back to that library I created… 🙂

It just incorporates all of this into a Go module giving you functions you can call rather than trying to remember all those codes.

It’s here: https://github.com/bit101/go-ansi

It’s described pretty well there, but basically you can do things like:

ansi.SetFg(ansi.Red)
ansi.SetBg(ansi.Black)
ansi.SetBold(true)
ansi.SetUnderline(false)
ansi.SetReversed(false)

fmt.Println("Hello, world!")

And this will print in bold red on a black background. One of the cool things about using these sequences in code is that they are “sticky”, i.e. once you set some of these properties, they apply to anything else you print to the console until you change or reset them. This is unlike using echo in the terminal itself, where each escape is one-shot.
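
If you want to see that stickiness with the raw codes, outside of the library, a minimal sketch looks like this:

fmt.Print("\033[1;31m")       // bold red, nothing visible printed yet
fmt.Println("this is bold red")
fmt.Println("still bold red") // nothing re-applied, the style sticks
fmt.Print("\033[0m")          // reset everything
fmt.Println("back to normal")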

In addition to these sticky property settings, I also created a few print helper functions that mirror the built in Go print functions: ansi.Print, ansi.Printf, and ansi.Println. These just add an ANSI color constant as a first argument.

ansi.Println(ansi.Red, "this will be red")

Like echo, these are one-shot functions, which is useful when you want to print one message in a color and not have to worry about resetting things back to default.

It also has functions for several of those cursor movement and screen clearing codes.

As I said there are plenty of other libs out there that do similar things, but I built this to work just the way I want it to. So I’m keeping it!

My Raytracing Journey

misc

Anyone who follows me on Twitter has seen what I’ve been up to in the last month and a half or so. But to hop on a meme from last year…

How it started:

September 18, 2022

How it’s going:

October 23, 2022

Not bad for 35 days of nights and weekends, if I do say so myself, but let’s go back to the start and take an image-filled journey.

It started in a book store

My wife, daughter and I are all book addicts. Our idea of someplace fun to go on the weekend is Barnes and Noble, which is about a ten minute drive down the highway. We were there one Saturday and I saw this book and started looking through it:

The first half of the book is about ray tracing and the second half is about rasterized 3D. The content looked really accessible and even just skimming through it, it seemed like something I could follow along with and code. I recently got an oreilly.com subscription, so I was able to access the book there, and had the first image you see above rendered in no time. And I understood what was going on with the code. I was hooked!

What is Raytracing?

I’m absolutely not going to try to teach you raytracing, but I’ll try to give you a 10,000 foot view.

The two major schools in 3D rendering are ray tracing and rasterization. Rasterization usually involves creating a bunch of triangles or other polygons out of a bunch of 3D points, figuring out which pixels those triangles cover, and filling them in. I’ve coded that kind of thing from scratch multiple times at different levels of thoroughness over the last 20 years.

Raytracing, though, is something I’ve never touched. It involves making a model of 3D primitives and materials and lights, and then shooting out a ray through every pixel in the image, seeing what that ray hits, if anything, and coloring that pixel accordingly.

A good analogy from the book: imagine holding a screen out in front of you and looking through each hole in the screen from a fixed viewpoint, left to right, top to bottom. When you look through one hole, what do you see? Color a corresponding point on a canvas with that color paint. You might see nothing but sky through the top row of the screen, so you’d be making a lot of blue dots on the canvas. Eventually you’d hit some clouds or trees and make some white or green dots. Down lower you might hit other objects – buildings, a road, grass, etc. When you’d worked through all the holes in the screen, you’d have a completed painting. If you understand that, you understand the first level of raytracing.

So you model three spheres and say they are red, green and blue. You shoot a ray from a fixed “camera” point, through each pixel position in your image. Does it hit one of the spheres? If so, color that pixel with the color of the sphere it hit. If not, color it black. That’s exactly what you have here:

A ray is a mathematical construct consisting of a 3D point (x, y, z) and a direction – a 3D vector (also x, y, z). So the first step is to get or create a library of functions for manipulating 3D points and vectors and eventually matrices. There’s a fairly simple formula for finding out if a ray intersected a sphere. It will return 0, 1 or 2 points of intersection. Zero means it missed entirely, one means it just skimmed the surface, and two means it hit the sphere, entered, and exited.
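
Here’s a minimal Go sketch of that ray-sphere test, using my own bare-bones Vec3 type rather than anything from the book. The math is the standard quadratic: plug the ray into the sphere equation and look at the discriminant.

package main

import (
	"fmt"
	"math"
)

// Vec3 is a bare-bones 3D vector type for the sketch.
type Vec3 struct{ X, Y, Z float64 }

func (a Vec3) Sub(b Vec3) Vec3    { return Vec3{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func (a Vec3) Dot(b Vec3) float64 { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }

// intersectSphere returns the distances along the ray where it hits the
// sphere: an empty result for a miss, one value for a skim, two for enter/exit.
func intersectSphere(origin, dir, center Vec3, radius float64) []float64 {
	oc := origin.Sub(center)
	a := dir.Dot(dir)
	b := 2 * oc.Dot(dir)
	c := oc.Dot(oc) - radius*radius
	disc := b*b - 4*a*c
	switch {
	case disc < 0:
		return nil // missed entirely
	case disc == 0:
		return []float64{-b / (2 * a)} // just skims the surface
	default:
		sq := math.Sqrt(disc)
		return []float64{(-b - sq) / (2 * a), (-b + sq) / (2 * a)}
	}
}

func main() {
	// A ray from the origin straight down the z axis, toward a unit sphere at z=5.
	// Prints [4 6]: the ray enters the sphere at z=4 and exits at z=6.
	fmt.Println(intersectSphere(Vec3{0, 0, 0}, Vec3{0, 0, 1}, Vec3{0, 0, 5}, 1))
}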

Of course a single ray may hit multiple objects. So the algorithm has to find the first one it hit – the intersection closest to the origin of the ray. But… it’s entirely possible there could be objects behind the camera, so you need to filter those out.

Lighting, shadows, reflection

The first image looks a bit flat, but lighting, shadows and reflection take care of that. Add to your world model one or more lights. There are different types of lights, but point lights have a point and an intensity. The intensity can be a single number, or it could be an RGB value.

When you find your point of intersection for a given pixel, you then need to shoot another ray from that intersection point to each light. Can the ray reach the light without being blocked by another object? If so, at what angle is the light hitting the object at that point? If it’s hitting straight on, that part of the object will be brighter. If it’s hitting at nearly 90 degrees, it’s just barely lighting it.
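
Here’s a rough sketch of just that diffuse calculation, reusing the Vec3 type from the earlier sketch plus a Normalize method. The names are mine, not the book’s, but the idea is the standard Lambert falloff: brightness scales with the cosine of the angle between the surface normal and the direction to the light.

func (a Vec3) Normalize() Vec3 {
	l := math.Sqrt(a.Dot(a))
	return Vec3{a.X / l, a.Y / l, a.Z / l}
}

// diffuse returns the diffuse contribution of one light at one surface point.
// Straight on (normal and light direction aligned) gives full brightness;
// at 90 degrees or beyond, the light contributes nothing.
func diffuse(point, normal, lightPos Vec3, lightIntensity, materialDiffuse float64) float64 {
	lightDir := lightPos.Sub(point).Normalize()
	factor := normal.Dot(lightDir) // cosine of the angle between them
	if factor < 0 {
		return 0 // light is behind the surface
	}
	return lightIntensity * materialDiffuse * factor
}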

And that’s just for diffuse material. But that gives you this:

You can tell that my light in this picture is off to the right and a bit higher than the spheres. You’ll also notice that there seems to be a floor here, even though I’ve only mentioned spheres. The trick is that the yellow floor is just a very large sphere. But it also illustrates that closest intersection point. For some pixels the ray hits the yellow floor sphere first, so you don’t see the red sphere, but in other areas it hits the red sphere first, so it blocks out the yellow one.

In order to figure out that light/angle part, you need to know the “normal” of the surface. That’s a vector pointing out perpendicular to the surface at that point. I knew from previous dabbles in 3D graphics that if you start messing with that normal, it changes how light reacts with the surface. So I took a bit of a diversion and used a Simplex noise algorithm to alter the normal at each point of intersection. I just hacked this together on my own, but I was pretty much on the right track.

But getting back on track, some materials are more shiny and the light that reflects off of them depends on the angle you are looking at them from. So there’s another calculation that takes into account the surface normal, the angle to the light, and the angle to the camera or eye. This gives you specular lighting.

Getting better. But then there are shadows. When you are shooting that ray out from the intersection point to each light, you have to see if it intersects any other object. If so, that light does not affect the color of that pixel.

Here, there are multiple lights, so you see shadows going off in different directions. Already things are starting to look pretty cool.

Finally, reflections. When a ray hits an object, and that object is reflective, it’s going to bounce off and hit some other object, which is going to affect the final colorization of that pixel. It can be confusing because this is all being calculated in reverse of the way light works in the real world. We’re going from the destination and working back to the source.

If you have multiple reflective objects, the light might wind up reflecting back and forth between them for quite a while. This is not only very costly, but it has quickly diminishing returns, so you usually set a limit on how many levels of reflection you want. So now you are figuring out the color of a given pixel by factoring in the surface color, each light and its angle, what kind of material you have, and all possible reflections. Sounds intimidating, but when you figure out each piece one by one, they all fit together way too logically and just work to create something like this:

And that is about as far as I got with the first book. Spheres, lights, shadows, materials, reflections. I could change the size of the spheres and move them around, but couldn’t deform them in any way. Still, with all that, I was able to have a jolly good bit of fun.
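
Before moving on, here’s a toy sketch of how that reflection limit usually works. The types and values are made up and it doesn’t trace any actual rays; it just shows the “remaining bounces” budget that keeps two reflective surfaces from recursing forever.

package main

import "fmt"

// Color is a bare-bones stand-in for a real color type.
type Color struct{ R, G, B float64 }

func (c Color) Add(o Color) Color     { return Color{c.R + o.R, c.G + o.G, c.B + o.B} }
func (c Color) Scale(s float64) Color { return Color{c.R * s, c.G * s, c.B * s} }

// colorAt would normally find the nearest intersection and shade it. Here it
// just returns a flat surface color plus whatever the reflection adds.
func colorAt(surface Color, reflective float64, remaining int) Color {
	return surface.Add(reflectedColor(surface, reflective, remaining))
}

// reflectedColor spends one bounce and recurses, stopping when the bounce
// budget runs out or the surface isn't reflective at all.
func reflectedColor(surface Color, reflective float64, remaining int) Color {
	if remaining <= 0 || reflective == 0 {
		return Color{} // black: no further contribution
	}
	return colorAt(surface, reflective, remaining-1).Scale(reflective)
}

func main() {
	// Without the remaining counter, a highly reflective surface facing
	// another one would recurse forever.
	fmt.Println(colorAt(Color{0.2, 0.2, 0.2}, 0.9, 5))
}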

Phase 2 – The Next Book

Getting this far took me just about a week. Could have been faster, but every time I coded a new feature I’d spend an hour or several playing with it. I was excited but I needed more than simple spheres. I wanted to mess with those spheres, squish them and stretch them and apply images and patterns and textures to them. I wanted a real floor and cubes and cylinders and cones and whatever else I could get.

The Computer Graphics from Scratch book was great and I highly recommend it if you want a quick jump into the subject. One thing I particularly loved about it is that it wasn’t the kind of book that just dumps a lot of code on you and explains it. It gives you the concepts, the formulas and maybe some pseudocode, and it’s up to you to choose a language and figure out the implementation details. I wound up doing mine in Go because it’s the language I am currently most comfortable with. But I think the author does have some sample code somewhere that is done in JavaScript.

But I was ready for the next part of the journey. So I found my next book:

Oh yes, this is the one! This one goes deep and long and it took me almost four weeks to get through, but I could not put it down. Again, I’d learn something new in the first hour or so of an evening, and spend the rest of the evening messing around with it and rendering all kinds of new things using that concept.

This is honestly probably one of the best written technical books I have ever read. Like the first one, it gives you no source code and is not tied to any language. Again the author provides concepts, algorithms and some pseudocode where needed. But as the cover says, it’s a test driven approach. I cringed at first, but I was so happy for this approach as I got deep into it. For each new concept the author describes what you need to do and then gives you a test spec. Like, “given this set of objects with this set of inputs, calling this method should give you these values…” Very often it is as specific as, “the color value should be red: 0.73849, green: 0.29343, blue: 0.53383”. I just made those numbers up, but yeah, it’s like down to 5 digits. I was skeptical when I first saw this. Like no way can you let me choose the language and platform and implementation details and expect that I’m going to be accurate down to 5 digits across three color channels. But goddamn! It was accurate in almost every case. I only saw some slight divergence when I got down into transparency and refraction. And then I was still good down to 4 digits. Any time I was off by more than that, I eventually found a bug in my own code, which, when fixed, brought it back to the expected values. Amazing! These tests caught SOOOOOO many minor bugs that I would have been blissfully ignorant of otherwise. It really sold me on the value of testing graphical code, something I never really considered was possible. Brilliant approach to teaching!

The first few chapters were slow. It was building up that whole library of points and vectors, rays and matrices, and transformation functions. And then finally the camera and world and spheres and intersections. It wasn’t until Chapter 5 that I could render my first sphere! And I was back to this:

But we move pretty quickly from there to lighting things up:

And then right away into transforming those spheres!

And then into shadows and finally beyond spheres into a real plane object!

Then we got to an exciting part for me: patterns. Algorithmic ways of varying a surface. The author explained a few – stripes, checkers and a gradient – but I went off on a wild pattern tangent of my own.

Eventually I got back on track and got back through reflection and then on to transparency with refraction!

The refraction part was the hardest so far. The code itself got pretty involved but beyond that it’s really hard to compose a compelling scene with transparent, refractive objects. It’s way too easy to overdo it and it winds up looking unrealistic. Best used with a light touch.

I took another short diversion into trying to model some simple characters. This one cracked me up.

It wasn’t intended, but it wound up being a dead ringer for this classic:

Finally, we got to new object types. Cubes, cylinders, cones:

And I took some diversions into combining these in interesting ways.

Then we created triangles. And built shapes up from them.

There was a good chunk of that chapter devoted to loading, parsing and rendering object files and smoothing triangles out, etc. This was one of the few parts of the book I jumped over because I’m not really interested in loading in pre-built models. The other part I jumped over was bounding boxes. This is mostly an optimization technique to limit the number of objects you have to test for collisions. I’ll have to get back to that eventually.

But the next exciting topic was groups and CSG groups – constructive solid geometry. This is where you take two shapes and combine them. The combination can be a union (you get the outline of both shapes), an intersection (you just get the parts of both shapes that overlap), or a difference (the second shape takes a bite out of the first). Although you can only combine two shapes at a time, a CSG group is a shape itself, which can be combined with other shapes, winding up with a binary tree kind of structure that can create some very complex forms.

This is a sphere with another sphere taking a bite out of it, and then punched through with a cylinder. I didn’t spend nearly enough time with this, but will surely do so.
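
A rough sketch of the structure this implies, with my own naming rather than the book’s. The key point is that a CSG node satisfies the same interface as a primitive, so combinations can themselves be combined:

// Shape stands in for whatever interface the primitives implement.
type Shape interface{}

// CSGOp is the operation joining the two children of a CSG node.
type CSGOp int

const (
	Union      CSGOp = iota // the outline of both shapes
	Intersect               // only where the two shapes overlap
	Difference              // the right shape takes a bite out of the left
)

// CSG is itself a Shape, so a tree of pairwise combinations can be built up
// into arbitrarily complex forms.
type CSG struct {
	Op          CSGOp
	Left, Right Shape // primitives or other CSG nodes
}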

That wrapped up the book, but I continued to explore. I was still intrigued with patterns. A pattern is essentially a function that takes an x, y, z point and returns a color. Hmm… what could we do with that? I know! Fractals!

These are not fractal images mapped onto surfaces. The Mandelbrots and Julias are computed at render time. Very fun.
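
Since a pattern is just a point-in, color-out function, here’s a little sketch of the idea, my own and not from the book: treat the x and z of the intersection point as the complex plane and shade by Mandelbrot escape time.

package main

import "fmt"

// mandelbrotPattern is a pattern in the sense described above: it takes an
// x, y, z point and returns a color. Here it treats x and z as the complex
// plane and returns a grayscale value based on how quickly the Mandelbrot
// iteration escapes.
func mandelbrotPattern(x, y, z float64) (r, g, b float64) {
	const maxIter = 100
	cr, ci := x, z
	zr, zi := 0.0, 0.0
	for i := 0; i < maxIter; i++ {
		zr, zi = zr*zr-zi*zi+cr, 2*zr*zi+ci
		if zr*zr+zi*zi > 4 {
			t := float64(i) / maxIter
			return t, t, t // escaped: gray based on escape time
		}
	}
	return 0, 0, 0 // inside the set: black
}

func main() {
	fmt.Println(mandelbrotPattern(-0.5, 0, 0.5))
}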

From there, I started working out image mapping on my own.

I did pretty damn well working image mapping out by myself. *Pats self on back* But it wasn’t perfect. There were some concepts I was missing and things got funky now and then. These images are the ones that worked out well. You won’t see all the ones that were just a mess.

I also started exploring normal perturbation more, with noise and images – normal maps and bump maps.

Again, these look good, but I was missing some concepts.

As I did more research, I eventually discovered that the author of The Ray Tracer Challenge had published a few bonus chapters on his site.

http://raytracerchallenge.com/#bonus

One of these was about texture mapping. This gave me the final pieces that I was missing in image and bump mapping. And I was able to do stuff like this.

Part of that chapter was about cube mapping, which was super complex and contained the only actual errors I found in the author’s work. I confirmed it on the book’s forum site with a few other people who ran into the same issue.

Once you have cube mapping, you can make what’s called a skybox. You make a huge cube and map images to its sides. The images are specially warped so that no matter how you view them, you don’t actually see the cube. It just looks like a 3D environment. That’s the image you see at the top of this post.

Here you can see the render minus the skybox:

And here is the skybox by itself:

Though it looks like it could just be a flat background image, I could actually pan around that image and view it from any angle. Note the reflections in the full image, where you can see buildings that are behind the camera reflected in the sphere.

That particular skybox image set was from here:

http://www.humus.name/index.php?page=Textures

And there you can see some live, interactive demos of those skyboxes where you can pan around the image in real time.

The final thing I’ve been working on recently is creating a description format for a full scene. I tried JSON and TOML, but settled on YAML as the best one for hand-coding a scene descriptor. Now I have an executable that I just point at a YAML file and it creates the scene, renders it and outputs the image.

Here’s another image using that same skybox with some other objects:

This was rendered completely with this new executable. I only wrote this YAML to describe it:

yokohamabox: &yokohamabox
  - "./yokohama/negz.png"
  - "./yokohama/posz.png"
  - "./yokohama/negx.png"
  - "./yokohama/posx.png"
  - "./yokohama/posy.png"
  - "./yokohama/negy.png"

spherematerial: &spherematerial
  color: [0.9, 0.9, 0.5]
  reflective: 0.9
  specular: 1
  shininess: 100
  diffuse: 0.2

# =====================
# START Shapes
# =====================
shape: &skybox
  kind: "cube"
  transforms:
    - rotateX: -0.1
    - rotateZ: -0.2
    - scale: [100, 100, 100]
  material:
    ambient: 1
    reflective: 0
    diffuse: 0
    pattern:
      kind: "cube_map"
      faceImages: *yokohamabox

sphere1: &sphere1
  kind: "sphere"
  material: *spherematerial
  transforms:
    - scale: [0.5, 0.5, 0.5]

sphere2: &sphere2
  kind: "sphere"
  material: *spherematerial
  transforms:
    - translate: [1, -1, 1]
    - scale: [0.75, 0.75, 0.75]

sphere3: &sphere3
  kind: "sphere"
  material: *spherematerial
  transforms:
    - translate: [-1, 1, 1]

sphere4: &sphere4
  kind: "sphere"
  material: *spherematerial
  transforms:
    - translate: [1.5, 0.5, -0.5]

sphere5: &sphere5
  kind: "sphere"
  material: *spherematerial
  transforms:
    - translate: [-1.5, -0.9, -0.5]
    - scale: [0.5, 0.5, 0.5]

plate: &plate
  kind: "cylinder"
  min: 0
  max: 0.2
  closed: true
  material: *spherematerial
  transforms:
    - rotateX: -0.1
    - rotateZ: -0.2
    - translate: [0, -2.5, 0]
    - scale: [3, 1, 3]
# =====================
# END Shapes
# =====================

world:
  width: 800
  height: 800

camera:
  fov: 0.6
  antialias: "med"
  from: [0, 0.0, -20]
  to: [0, 0, 0]

shapes:
  - *skybox
  - *sphere1
  - *sphere2
  - *sphere3
  - *sphere4
  - *sphere5
  - *plate

One other thing I worked on was antialiasing. The way this is done is that instead of just getting the color of a pixel with a single ray, you take multiple samples at fractional positions within that pixel and average them. Some references say to use up to 100 samples per pixel. I’ve found that’s way too many. Actually, 16 looks pretty good – it makes a HUGE difference in quality. I can’t see any further improvement past 64 samples though. But it might be different for high res images.
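
Here’s a rough sketch of that sampling idea, with a stubbed-out tracer standing in for the real thing; n=4 gives the 16 samples mentioned above:

package main

import (
	"fmt"
	"math/rand"
)

type Color struct{ R, G, B float64 }

func (c Color) Add(o Color) Color     { return Color{c.R + o.R, c.G + o.G, c.B + o.B} }
func (c Color) Scale(s float64) Color { return Color{c.R * s, c.G * s, c.B * s} }

// colorForRay stands in for the real tracer: given a fractional pixel
// coordinate it would build a camera ray and trace it through the scene.
func colorForRay(x, y float64) Color {
	return Color{0.5, 0.5, 0.5} // placeholder
}

// antialiasedPixel shoots n*n jittered rays inside the pixel and averages
// them, instead of a single ray through the pixel center.
func antialiasedPixel(px, py, n int) Color {
	var sum Color
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			// Offset each sample into a different cell of an n*n grid
			// inside the pixel, with a little random jitter within the cell.
			dx := (float64(i) + rand.Float64()) / float64(n)
			dy := (float64(j) + rand.Float64()) / float64(n)
			sum = sum.Add(colorForRay(float64(px)+dx, float64(py)+dy))
		}
	}
	return sum.Scale(1 / float64(n*n))
}

func main() {
	fmt.Println(antialiasedPixel(100, 100, 4))
}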

The Future

After 5 solid weeks of working on this in every spare moment, I needed to step back a bit and breathe. Which for me, meant creating a vim plugin. 🙂 But I’ll be back to this before long. There is still a lot to explore in this realm.

My First Vim Plugin: bufkill

misc

https://github.com/bit101/bufkill

Background

I’ve been using vim full time for about 5 years now, and was semi-comfortable with it for a while before that. It’s one of those technologies that you can go really deep on. And once you get used to the vim way of doing things, it’s hard to do things any other way.

For the record, I actually use neovim, a fork of vim. But I couldn’t give a thorough list of the differences between the two of them, to be honest. For the most part they are functionally equivalent… I think. So I’m just going to continue to refer to both editors as vim, unless I’m talking about something that is actually different between them.

One of the fun things about vim is configuring it. It’s very easy to go down a very deep rabbit hole and spend hours (days) tweaking your configuration – trying out new plugins, creating mappings and functions that automate things. Then going back months later and realizing you never use half the stuff you added and removing it.

These tweaks usually go in your vim config – a file known as .vimrc in vim and init.vim in neovim. This config file is generally written in a language usually called Vimscript, but also referred to as VimL, which I guess means “vim language”. Neovim also has a Lua API, which I will probably talk about in another post.

So your config is usually a bunch of statements that set up plugins, set options, and define key mappings for different actions. As you evolve more complex actions, you might start adding functions to your config to build up that logic. Eventually you might want to move those functions out of your config and package them in a distributable way. That’s basically all a plugin is.

The Problem

In vim, like most editors, you can open multiple files at once. These are represented by buffers. But buffers are more than just representations of file content. You can create a new buffer and use it to display information that has nothing to do with a file, like maybe a list of search results. There are also terminal buffers, which are just full-featured terminal emulators. Or buffers that represent a view of your file system. Anything, really.

You can view buffers one at a time, or in horizontal or vertical splits, or in tabs. You can have many, many buffers open at any one time, even if you only see one or a handful at a time. You can switch between buffers and you can close any buffer, which is known, somewhat unfortunately imo, as deleting a buffer. Deleting a buffer does not delete the file it represents on the file system. It just deletes that buffer from memory.

So here’s where the problem starts. Deleting a buffer is simple. You type :bd for “delete buffer” and it goes away.

But… if the buffer is modified, i.e. you’ve added, removed or changed some text and haven’t saved it back to the file it represents, you can’t delete it like that. You either need to save it first by calling :w for “write” and then delete it, or discard the changes by calling :bd! to override the default behavior.

Calling :bd on a modified buffer.

But some buffers are read only. And it doesn’t make sense to save buffers that don’t represent files, like search results or terminal buffers. And some buffers are meant to represent files but haven’t been saved as an actual file yet, so saving them means providing a path and file name for them.

The Solution

So all of that complexity is the first big part of what bufkill solves. The plugin provides a command that you can map to a key shortcut (I use ,d) which will delete buffers intelligently.

If the buffer is readonly, or a terminal, or doesn’t have any changes that need to be saved, it just deletes the buffer directly.

If the buffer has been modified, it prompts you for an action – Save, Discard changes, or Cancel.

Isn’t this a lot more helpful than that message above?

If you press S, the plugin attempts to save the file (more on that in a second). Pressing D calls :bd! to override the delete, discarding the changes. And pressing C gives you a way to back out and think about what you want to do.

Saving

As described earlier, a buffer may represent an existing file on the system, in which case it’s easy to just write the changes back to that file. And that’s exactly what the plugin does. And after saving, it deletes the buffer.

But a buffer may not have been saved yet, so it’s going to need you to tell it where to save it. bufkill has you covered here. It prompts you for a path and file name to save as:

But you’re not left to your own devices trying to remember your file structure and typing in some long path by hand. Vim is an editor and knows about file systems. Once you start typing a path, completion is available, so you can tab your way down to the location you want and give your file a final name.

(not a paid advertisement for any of these products)

Defaults

Me, I like being able to decide what to do with each buffer I’m deleting. So I like the Save/Discard/Cancel prompts. But maybe you like feeling lucky and always going for the save. Or always going for the discard. I got your back. You can set up a default action in your config, so that every time you run the KillBuffer command, it automatically attempts to save without prompting you. Or you can make it always discard changes and immediately delete the buffer if that suits your workflow. Just add one of the following to your vim config file:

let g:bufkill_default_action = 'save'
let g:bufkill_default_action = 'discard'
let g:bufkill_default_action = 'prompt'

Actually, prompt is the default, so you don’t really need to add that, but if you want to be explicit about it, go for it.

There are a few other options you can set, but that’s the most important one.

Splits

This is the other big chunk of functionality and one of the main reasons I started working on this plugin. I’m going to assume you’re familiar with the concept of a split in vim – you split your screen horizontally or vertically and show a different buffer in each one. You might be comparing two files, or you might have a header file in one part of the split and the implementation in another. Or some non-file buffer like a file system tree plugin or search results in one split.

My personal workflow while working on a project is to have a vertical split, with my code buffers in the left panel, and a narrow panel on the right with a terminal buffer. A key mapping will run the build process in that terminal whenever I want.

Now, say I have a bunch of code buffers open, with one of them visible in that left pane. I’ve just opened this file to check something and I’m done with it so I want to delete that buffer and go back to the other files I was working on. When I delete that buffer, I get this:

Not really what I wanted…

All my other open buffers are still there in the background, but I’ve lost my split and now the terminal buffer has taken over. That’s not what I wanted. I wanted the split to remain and the most recent buffer to replace the one in the left pane. Like this:

That’s better.

So, that’s basically what bufkill does. If you have a split open and you delete a buffer from one pane, it will keep that split there and pull up the previous buffer in the pane you were working in. It will continue doing that until the buffer in the other pane is the last open buffer – at that point it will kill the split and just show that buffer.

Though the example shows a terminal buffer being pinned in a vertical split on the right, and that was exactly my original use case, none of that matters. Whatever pane you delete a buffer from will be replaced with the next buffer, and whatever pane is not active will remain pinned. You can delete buffers in the left pane and the right pane will remain pinned. Then you can jump to the right pane and delete buffers there and whatever is in the left pane will remain pinned. Same if you do a horizontal split with top and bottom panes.

Caveats

The split functionality of this plugin was really only designed with splits in mind. Vim “tabs” are another way of representing multiple open buffers on your screen. I don’t use vim tabs, so I haven’t done much testing in this area, but they very well might break the functionality.

Also, the plugin was designed to work best when you just have a single split open. In vim it is possible to have multiple splits and even nested splits. The plugin is certainly going to have issues with those kinds of layouts, but it should just mean that it won’t delete a buffer that you want to delete because it thinks it should keep it open in another split. Fortunately, you can just revert to the built-in :bd or :bd! in that case.

I have no idea what’s going to happen here.

Ignore Splits

Also, if you just aren’t interested in the split handling stuff, or it’s causing problems for you, you can disable that part of the plugin with the ignore splits option. Just add this to your config:

let g:bufkill_ignore_splits = 1

You’ll still have all the Save/Discard/Cancel and file naming functionality, but it won’t get fancy about trying to preserve splits.

Side note…

For those analyzing the screenshots, yes, I did start writing this post around 6:00 am on a Sunday. That’s how I roll.

BIT-101 in 2021

misc

I don’t always do a year in review post, but I like to do them when I remember to. Looking back over 2021, I’m surprised by how much I did and how much I posted here. Things really got quiet during the last few months, so I forgot how much stuff I cranked out earlier.

Some highlights:

  • 75 blog posts this year! Whoa. Most in a LONG time. And more views on this blog than in well over ten years (thanks HackerNews!)
  • I did a fair number of tutorials on math, graphics and creative coding techniques.
  • I finally created a solid version of MinimalComps for the web! Renamed Minicomps (https://minicomps.org/).
  • I did a bunch of general visual, interactive experiments, even some interactive audio generation stuff.
  • I participated in 7 days of code in July.
  • I created an animated gif every day in August for #awegif2021
  • I bought a Mac. And liked it. But Linux is still my personal daily driver.
  • I did a deep dive into Perlin, Simplex, and Curl noise, which was pretty interesting for me and spurred a lot of conversation elsewhere. https://www.bit-101.com/blog/2021/07/the-noise-series/.
  • I broke down and experimented with NFTs.
  • I did a deep dive into gif and video making tools, several posts worth of tips and tricks.
  • I started a mailing list.
  • I celebrated 20 years of BIT-101!

Sometimes I get a bit down on myself, thinking I’m not doing anything interesting. But when I list it all out, it looks pretty impressive. There was quite a bit going on in my personal and work life the last few months, so things toned down recently. Nothing particularly bad, just stuff pulling my attention away and leaving me with not much energy for writing either code or much else at the end of the day.

I’m hoping to get back into the swing of things in the coming year though. I think I’ve inspired myself to write more after taking a look at the above.

One thing I definitely feel the need to follow up on is the whole subject of NFTs. I’m still very conflicted over them. I created more than a handful and was surprised how much money I was able to earn from the ones I created. Then I took a little break from it for a few weeks and came back and did some more. Then took a more permanent break. I still don’t feel very good about the whole thing right now. I still don’t understand what people are paying for and what they think they are getting. There’s a whole lot of people cranking out a whole lot of low quality, low cost NFTs. And there are established, well known artists making fewer pieces and getting very high prices for them. And there is everything in between. The system as it stands seems like a fad right now and it feels like it will all come crashing down before very long. On the other hand, I believe there is something there that could evolve into a very stable and useful system of rewarding artists for their work. I’ll keep watching the space. I expect a lot of change in it in the coming year. I’m intrigued where it might go, but I’m not 100% comfortable participating in it as things stand. Which sucks, because I like money as much as the next person, and it’s a relatively easy way to make money doing what I like to do. I’m not drawing a line in the sand, but for now I’d rather just code cool pieces and let people look at them without “collecting” them.

Thoughts on Art

misc

I’ve been thinking a lot about “art” recently. Specifically, what people call “generative art”, “algorithmic art”, “code art”, “math art”, etc. Here are some random thoughts.

Message

I’ve never been one to try to communicate some message through the things I create. I really only try to create things that are visually interesting. I can get very excited about the way a piece looks and I just want to show others and hope that they get a taste of that excitement too. Sometimes things I create can evoke various emotions – they can look ominous, dark, scary, energetic, fun, etc. I often find that when I feel a certain way about a piece, others tend to experience that same feeling. I guess you could call that a message if you want. But I don’t know that I’ve ever sat down to create some digital art with the thought, “I feel like creating something dark and ominous today,” or intentionally tried to create any emotion before I started.

Discovery

For me, creating art with code is way more about discovery. I play around with some technique, or combine one technique with another unrelated technique just to see what happens. Then I see something interesting and I zoom in on it. Sometimes this means literally zooming into some detail in an image. More often it means focusing on some combination of parameters that have created a particular result. There’s a promise of something even more interesting there. I start tweaking numbers to see if I can bring out more of that something. I increment a parameter a few times and that thing I saw goes away. So I decrement it and maybe that thing becomes bigger, or more clear, or more detailed. It’s more like mining than creating.

Math

I love math, or “maths” if that’s how you think of it. I love finding some new interesting formula. I subscribe to recreational math blogs, YouTube channels, Twitter feeds. I go to the math section of bookstores and libraries. I scroll through Wolfram and Wikipedia looking for new ideas. Old copies of Scientific American, Omni, Quantum and other math and science magazines. I get sucked in by anything that has a graph or an interesting diagram. It’s got to be visual. For me, math is the ultimate creative tool. It’s the canvas, it’s the paint, it’s the brush. Really, it’s the artist. All the images are already there. I’m just carefully extracting a few of them out of the sea of numbers. If I have to have a message in my art it’s “Look how amazing math is.”

Random

Random is evil. Random is lazy. Random is OK when you’re starting a new piece. It’s OK when you have a formula and you’re searching for an interesting range of input parameters. But once you find something interesting, lock in those parameters and start focusing. My code framework is set up to allow me to easily create animations by changing parameters over each of many frames. Sometimes I’ll generate several hundred frames all with random parameters. I’ll print the parameters right on the piece. Then I’ll sift through the frames one by one and find those that have something I like. I’ll grab those parameters and hard code them and start tweaking them as described above. But random should come at the start, not at the end. I never continue to use random parameters to generate a finished piece. I’m totally lying. I do it often enough. But I feel lazy when I do it.

Technology

People often ask me what technology I used to create my images. I think they expect that I’m using some app or framework that has all these kinds of image generation tools built into it. I don’t. I write mostly in Go (Golang). I have some custom Go bindings for the C library cairographics. Cairo is a pretty powerful drawing API, but that’s all it is – it draws lines and arcs and curves and rectangles, sets colors, etc. It’s nearly identical in most important ways to the HTML Canvas drawing API. That’s all I need – those 2D drawing primitives. I have a framework of my own that I’ve built up over many years that does all the complex fancy stuff. But it’s all based on those 2D primitive drawing actions.

Color

I love monochrome. Black and white. I love to be able to bring out form and emotion just by the relative brightness of black and white pixels. Sometimes I’ll experiment with color. I have a random RGB function that makes me feel guilty when I use it. I like using HSV colors better. You can create nice gradients. You can keep one hue and vary the saturation or value. You can create a range of colors with a similar hue. I’d like to take a deep dive into learning color theory some time. But I’m very comfortable with black and white.

Combination

One of my best techniques is a meta-technique. I mentioned it above. Every time I learn some new technique or formula, I do a mash up with that and some other technique I already know. I know I’ve talked about that in other posts in more detail. It’s the best way to discover something that has a chance of the elusive “nobody has done this before”.

Dislikes

  • AI/ML/GAN stuff. It just does nothing for me. To me it just looks like Photoshop filters on steroids. I know there’s a lot of impressive stuff going on behind the scenes, but the result is not interesting to me.
  • Glitch art. I think this is due to the fact that I grew up in the 60s and 70s and 80s. Everything was glitchy. I didn’t like it then and I don’t like it now. I like that things are not glitchy now. I like that I turn on the TV and see a clear picture without bending the coat hanger stuck in the back into some new shape.
  • 3D. By this I mean slick, realistic 3D stuff done in professional 3D rendering packages. I like occasionally hand-coding lofi 3D stuff.
  • Shaders. In either 2D or 3D. I probably should like shaders. You can do some impressive math-based stuff with them. I’ve used them. I’ve even written about them and touched on them in some videos, but they never stuck as something I wanted to continue working with.

OK, now here’s where you’re going to say, “You should check out _____ (AI/Glitch/3D/Shader thing).” And I’ll say, “Thanks, that looks cool. Maybe I’ll check it out.” But I never will. But really, thanks! I appreciate your enthusiasm. I’m not saying any of these things are inherently bad. Just sharing my tastes.

My Music Appreciation Project

misc

Not a big fan of music streaming.

I have a good-sized, custom-curated, very well organized library of music on my hard drive. It’s taken me years to create, and it’s all backed up. I have a Subsonic server behind Wireguard so I can listen to it from my phone or other devices, and it’s all also synced to my Sony Walkman NW-A55 hi-res digital audio player that I use with my Ikko OH1 IEMs.

I’m really very satisfied with this setup.

But still, I find myself not being able to decide what to listen to way too often. The paradox of choice. Fairly often I’ll just throw the whole thing, or maybe some specific genre on shuffle, and that’s pretty good. But I also like listening to albums. When I’m just choosing an album myself, I’ll gravitate to the same few dozen or so artists. Other stuff might go months or years without being listened to.

I think my devices and software let me shuffle albums, but I wanted to create a long range plan to get to all of it over time. Here’s what I did:

  1. Printed out a list of all artists. Literally just went to the terminal and did an ls in my music folder, directing it to a file.
  2. Pulled that file into a Google spreadsheet, one artist per line.
  3. Shuffled them.

Now I’m just working down the list. I just grab the next artist in the list and choose an album and start playing.

If I’m not feeling the vibe with that particular artist or album, I just move on. But I try to listen to at least one or two songs anyway.

When I move on to the next artist, I mark the previous one as done.

Here’s what’s on my playlist for today:

  • Ministry
  • Van Morrison
  • Harry Nilsson
  • The White Stripes
  • John Lee Hooker

I’ve really been enjoying this strategy. I’ve been listening to music that I haven’t in a long time, not just the same albums over and over. And it’s all content that I’ve chosen to put in my library, so it’s all artists I generally like to some degree. And sometimes I get totally different genres up against each other, which is fun.

This is going to take months to go through, but it’s been keeping things fresh.

Sketching in eInk

misc

In my last post I talked about my newest eInk tablets from Onyx Boox. Beyond reading ebooks and saving articles, the Note Air is a fantastic sketching tool. The Wacom layer and stylus are among the big features that led me to look beyond Kindle devices. It was nice on the 7.8″ Nova Pro, but on the 10.3″ Note Air, I’ve really fallen in love with the drawing feature.

Preliminary Pen Points

The Note Air comes with a stylus that attaches to the side of the device via a magnet. All I can say is that it makes a good backup stylus.

Issues with the stylus:

  1. The magnet is not strong enough to trust, so you wind up having to figure out your own way of keeping track of the stylus. Because it WILL fall off and get lost.
  2. The stylus does not have an eraser feature. Some styluses put an eraser function in the top of the stylus, where you’d find an eraser on a regular pencil. Others add a button that switches the stylus into erase mode. The default Note Air stylus does neither. So when you want to erase something, you have to choose the eraser tool from the toolbar, erase, then re-choose the drawing tool you were drawing with.

I wound up getting a Wacom One stylus.

It looks nice, feels nice, has an eraser button, and doesn’t break the bank.

I read a lot of reviews about the Remarkable 2 Marker stylus that is made for the Remarkable 2 eInk tablet. It’s apparently really good, but pretty pricey. It does work on the Boox devices. Maybe I’ll spring for one some day, but I’m happy with the Wacom One for now. One of the pros of the Remarkable stylus is that the tips are made of a different substance than a lot of other styluses, which gives it a good amount of drag and makes it feel like a regular pen, rather than sliding a piece of plastic across glass. Remarkable sells replacement tips, which fit the Wacom One, so I got a set of them, and I can attest it does add to the experience.

To hold the pen when I’m not using it, I got some of these Ringke pen holders.

These just stick to the cover and have an elastic loop. Been working out perfectly.

Seriously Sketching Stuff

Now onto the sketching functionality of the device. The built-in notes app is where you do the sketching and it’s one of the main apps on the Note Air. This is what it looks like:

You can name your notes, organize them in folders, rename them, view them as a list or thumbnails, etc.

Notes can have multiple pages. You can see above that Notepad1(1) has a little “6” in the corner. This means that this note has 6 pages. While in the note, you can choose to see an overview of all of the pages.

You can delete, add, rearrange, export, and share pages here.

There are multiple sketch tools you can use:

I most often just use the default pen tool. All of the tools allow you to adjust their width. All but the ballpoint pen and marker are also pressure sensitive. You can change colors – various shades of gray mostly, but also a few real colors. Though you won’t be able to see these on a black and white eInk device until you export the note.

There’s all the usual other tooling, if a bit basic. A few basic shapes, several different kinds of erasers, a lasso tool for selecting and moving parts of your sketch, zoom, text, handwriting recognition.

It even has layers, which is pretty damn cool.

You can add layers, rearrange them, hide them, delete them. The bottom layer is a background template. It can be blank, or you can set it to just about anything. Above you can see a basic grid template. There are a bunch built in:

Three pages’ worth, for drawing, writing, penmanship, accounting, whatever. There are also ones you can download from the Boox cloud:

And you can add your own, just by putting them into the right folder on the device. Just do an internet search for PDF graph paper or PDF templates and you’ll find all kinds of useful stuff. Or you can just create any PDF you want and put it in there. Here are some I added:

You can turn the template’s visibility on and off from the layers tool, which is really quite nice.

On for drawing:

And then off:

I’m constantly using this to figure out stuff that I want to code. I have pages and pages of sketches like this. Most are just brainstorming or figuring out the math about something.

The touch layer of the device is totally separate from the Wacom drawing layer, so there’s no need to worry about putting your hand on the device as you’re drawing. When you want a new page in a particular sketch, just swipe with your finger, right to left, and you’re on a new page. Go back and forth to existing pages by swiping right or left.

You can set up notes to automatically sync to the cloud, using Onyx’s own cloud sync, Youdao (a Chinese service), Evernote, Dropbox and/or OneNote. Synced notes are available in the cloud as PDFs, but you can also export them as PNGs.

I always have the Note Air nearby when I’m doing some creative coding, so it’s easy to just grab it and work out some idea. I just create note after note, page after page. Space is no worry. They’re backed up and easily available on Dropbox, not floating around on pieces of dead tree pulp.

While the sketch/notes app is probably not something you’re going to create amazing art with, it’s super useful for sketching and notes. I also use it to take notes in meetings and interviews. Very nice for that.

I have to give a warning though. While the built-in notes app is super responsive and a joy to use, the Wacom layer is not optimized for third party drawing, sketching or note-taking applications on the web or in Android apps. It will be a bad experience using apps like that. It would be nice if that got fixed, but I’m not holding my breath. I did see where someone has put together their own libraries that make the experience much better in third party apps, but I have not tried it myself.

[Update 01/01/22]

The latest firmware for the Boox tablets has apparently improved the third party app sketching functionality. At least for popular apps like Evernote. I’ve not tried it out myself. Pretty happy with the internal note app.

20 years

misc

TWENTY

DAMN

YEARS

September 11, 2001 is quite a memorable day for the obvious reasons. But for me, it holds an additional significance. Because on September 10, 2001 I registered the domain bit-101.com and on the morning of September 11 the first version of the site went live, only to be massively overshadowed by other events just a couple of hours later.

Initially the site was a single page with a Flash application containing a calendar that linked to various interactive experimental pieces. I’d started doing the experiments in late August, so I was able to launch BIT-101 with fourteen experiments. It ultimately grew to over 600.

This was the previous site that was retired on 9/11/01, also fully Flash:

That KP logo came in with a really cool animation and there was a funky 5-second free music loop that I snagged off of FlashKit, which got really annoying after roughly 10 seconds.

A later version of BIT-101:

Yeah, I liked the Papyrus font back then. Also… what are lower case letters? All those sections were draggable and closable windows. Peak 2002 “web design”.

BIT-101 lasted in this general form, with various interface changes up until the end of 2005. There were many months I posted something new every day. Towards the end, it got a bit slower.

While all this was going on, near the end of 2003, I started the first BIT-101 blog. I say the “first” one because in late 2017 I did a blog reboot, to the new blog that you are reading here. The old one had a good 14 year run though. And is immortalized here: http://www.bit-101.com/old/. Amazing to think that the blog reboot is now almost 4 years old, which is about as long as the first old Flash site lasted. Time keeps moving faster.

Changes

Things sure have changed since that first site 20 years ago. Back then it was all about Flash for me. I was not working full time as a programmer, but I had a steady flow of side jobs doing Flash work. I’d written a few Flash tutorials on the KP Web Design site and those had done really well. In fact, that led to me contributing to my first book, Flash Math Creativity.

This led to many more books, mostly with Friends of ED and Apress, but also O’Reilly.

In 2003 I was invited to FlashForward in NYC where I was a finalist for their Flash awards ceremony in the Experimental category. I remember being so starstruck meeting all my Flash heroes there – many of whom I consider good friends to this day. As it turns out I won the award, which was amazing. I went back to my day job the following Monday. I was working in the estimation department of a mechanical contracting company. I hated that job. I was thinking, “Why am I here? I am a published author and I just won an award for Flash. That’s what I should be doing.” Amazingly, when the boss came in, he called me into his office. Apparently I had screwed up delivering an estimate the previous week and he fired me. What I remember most clearly about that conversation was trying not to smile as I realized I was free. The next day I went to talk to a company in Boston that I had been talking to about doing some Flash work on the side and said I was ready to go full time. They hired me and thus began my official career as a “professional” developer.

Of course, Flash officially died earlier this year. But I had really moved on from it in early 2011, when I did my “31 days of JavaScript” series on the old blog. The inaugural post here: http://www.bit-101.com/old/?p=3030. This series got a lot of attention and by the end of it I had personally switched over to doing all my personal creative coding using HTML5 Canvas.

In 2018 I started looking for some other platforms for creative code. I discovered Cairo Graphics, a C library that is pretty similar to the canvas api in JavaScript. It has bindings for many other languages. I tried it with Python and liked it, but wanted to learn a new language. I’d been interested in both Rust and Golang. I converted my JS library over to Rust and got it working well. But Rust is a pretty exacting language. I found it hard to work with for something like creative coding. I spent more time trying to satisfy the compiler than I did writing any interesting code. So I tried Go and that really hit the spot. It’s been the mainstay language for my creative work for the last three and a half years, though I still keep active in JavaScript as well.

Work-wise, starting from my first job in 2003:

  • Exit 33 / Xplana Learning
  • Flash Composer
  • Brightcove
  • Infrared5
  • Disney
  • Dreamsocket
  • Notarize

I started all of those jobs as a Senior Developer/Engineer/Programmer. At Notarize I am now an Engineering Manager, managing 10 other engineers and not really doing any hands-on coding myself. That’s fine with me. It’s a totally new challenge and I’m enjoying it, especially seeing and helping new grads out of school grow into amazing engineers. Interestingly, only two of those jobs required a formal interview. The rest of them were almost straight to offer from people I had gotten to know well through the Flash community.

Summary

It’s been an amazing 20 years. I had no idea where this was going when I randomly came up with “bit-101” and registered the name back then. But it’s worked out pretty damn well. What about the next 20 years? If I’m still breathing and able to type coherent code, I’ll be cranking out something for sure.

More gif-making tips and tools

misc, tutorial

I’ve been continuing my search for the ultimate gif-making workflow and came across two more tools.

gifsicle

and

gifski

Both of these are command line tools available across platforms.

gifsicle

I first heard about gifsicle a while ago as a tool to optimize gifs. I tried it on some of the larger ffmpeg-created gifs and it didn’t seem to do a whole lot. You can specify three levels of optimization. The lowest level didn’t have any effect. The highest level took a single megabyte off of a 27 mb gif. Not really worth it.

You can also use gifsicle to reduce colors in an existing gif. This got considerable results, but at a loss of quality. I think it would be better to do this in the palette creating stage.

But gifsicle can also create gifs from a sequence of images, just like ImageMagick and ffmpeg. Big drawback on this workflow though: the source images have to be gifs themselves. OK, I was able to put together a quick ImageMagick script to convert all my pngs to gifs. But that took quite a while. I didn’t time it, but I feel like it was more than a minute for 300 frames. As for size, the result was almost right in between the sizes produced by ImageMagick and ffmpeg.

But the additional conversion process put this one out of the running for me.

gifski

I think this is a very new solution. And it’s written in Rust, which tends to give projects a lot of street cred these days.

After using it, I am impressed.

The syntax is pretty straightforward:

gifski --fps 30 frames/*.png -o out.gif

I don’t think I really need to explain any of that.

Performance-wise, it hits a pretty sweet spot. Not as fast as ffmpeg, but image sizes are way smaller. Not as small as ImageMagick’s output, but way faster.

Here’s the results I got:

Time:

FFMPEG:       5.780 s
gifski:      19.341 s
ImageMagick: 43.809 s
gifsicle:    with image conversion needed, way too long

Size:

ImageMagick: 13 mb
gifski:      16 mb
gifsicle:    18 mb
FFMPEG:      27 mb

Summary

I feel like it’s pretty straightforward.

If you are going for size, nothing beats ImageMagick, but it takes forever.

If you are going for speed, nothing beats ffmpeg.

If you are dealing with gifs as your source image sequence, gifsicle might be a good compromise.

But I think the overall winner is gifski in terms of hitting that sweet spot. I’ll be using it a lot more in coming weeks and days and update with any new findings.

I should also note that all my tests have been on grayscale animations. Full color source images could change everything.

A note on quality and duration

All of the gifs produced seemed to me to be of very comparable quality. I didn’t see any quality issues in any of them. To my eye, they seemed like the same gif, with one exception – duration.

Actually, I discovered today that ImageMagick’s delay property will truncate decimal arguments. Or maybe round them off. I’ve gotten conflicting info. Anyway, I’ve been using a delay of 3.33 to make them run at 30 fps. But it turns out it just applies a delay of 3/100ths of a second. So they’ve actually been running a bit faster than 30 fps. Somehow, the gifs created with ffmpeg and gifski do seem to run at the exact fps specified. Specifically, a 300 frame animation set to run at 30 fps should run for 10 seconds, as the ffmpeg and gifski gifs do. But ImageMagick’s finishes in 9 seconds.

I tried some other formats for the delay parameter. Apparently you can specify units precisely, like -delay 33,1000 for 33/1000ths of a second, or even -delay 333,10000 for 333/10000ths of a second. But this doesn’t make a difference. From what I understand, the gif format itself does the 100th of a second rounding. If so, I’m not sure what ffmpeg and gifski are doing to make it work correctly.
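
For what it’s worth, the gif format does store each frame’s delay as a whole number of hundredths of a second, and Go’s standard image/gif package exposes exactly that. Here’s a tiny sketch that writes a blank 30-frame gif; the Delay field simply can’t hold 3.33:

package main

import (
	"image"
	"image/color/palette"
	"image/gif"
	"log"
	"os"
)

func main() {
	anim := &gif.GIF{}
	for i := 0; i < 30; i++ {
		frame := image.NewPaletted(image.Rect(0, 0, 100, 100), palette.Plan9)
		anim.Image = append(anim.Image, frame)
		// Delay is an int in 100ths of a second, so a true 30 fps frame time
		// (3.33 hundredths) can only be stored as 3 or 4.
		anim.Delay = append(anim.Delay, 3)
	}
	f, err := os.Create("out.gif")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := gif.EncodeAll(f, anim); err != nil {
		log.Fatal(err)
	}
}

(My guess is that a tool could average out to 30 fps by alternating delays of 3 and 4, but I haven’t verified that’s what ffmpeg or gifski actually do.)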

Big deal? Maybe, maybe not. Worth noting though.