20 years

misc

TWENTY

DAMN

YEARS

September 11, 2001 is quite a memorable day for the obvious reasons. But for me, it holds an additional significance. Because on September 10, 2001 I registered the domain bit-101.com and on the morning of September 11 the first version of the site went live, only to be massively overshadowed by other events just a couple of hours later.

Initially the site was a single page with a Flash application containing a calendar that linked to various interactive experimental pieces. I’d started doing the experiments in late August, so I was able to launch BIT-101 with fourteen experiments. It ultimately grew to over 600.

This was the previous site that was retired on 9/11/01, also fully Flash:

That KP logo came in with a really cool animation and there was a funky 5-second free music loop that I snagged off of FlashKit, which got really annoying after roughly 10 seconds.

A later version of BIT-101:

Yeah, I liked the Papyrus font back then. Also… what are lower case letters? All those sections were draggable and closable windows. Peak 2002 “web design”.

BIT-101 lasted in this general form, with various interface changes, up until the end of 2005. There were many months where I posted something new every day. Towards the end, the pace slowed a bit.

While all this was going on, near the end of 2003, I started the first BIT-101 blog. I say the “first” one because in late 2017 I did a blog reboot, moving to the new blog that you are reading here. The old one had a good 14-year run though, and is immortalized here: http://www.bit-101.com/old/. Amazing to think that the blog reboot is now almost four years old, which is about as long as the first old Flash site lasted. Time keeps moving faster.

Changes

Things sure have changed since that first site 20 years ago. Back then it was all about Flash for me. I was not working full time as a programmer, but I had a steady flow of side jobs doing Flash work. I’d written a few Flash tutorials on the KP Web Design site and those had done really well. In fact, they led to my contributing to my first book, Flash Math Creativity.

This led to many more books, mostly with Friends of ED and Apress, but also O’Reilly.

In 2003 I was invited to FlashForward in NYC where I was a finalist for their Flash awards ceremony in the Experimental category. I remember being so starstruck meeting all my Flash heroes there – many of whom I consider good friends to this day. As it turns out I won the award, which was amazing. I went back to my day job the following Monday. I was working in the estimation department of a mechanical contracting company. I hated that job. I was thinking, “Why am I here? I am a published author and I just won an award for Flash. That’s what I should be doing.” Amazingly, when the boss came in, he called me into his office. Apparently I had screwed up delivering an estimate the previous week and he fired me. What I remember most clearly about that conversation was trying not to smile as I realized I was free. The next day I went to talk to a company in Boston that I had been talking to about doing some Flash work on the side and said I was ready to go full time. They hired me and thus began my official career as a “professional” developer.

Of course, Flash officially died earlier this year. But I had really moved on from it in early 2011, when I did my “31 days of JavaScript” series on the old blog. The inaugural post is here: http://www.bit-101.com/old/?p=3030. This series got a lot of attention, and by the end of it I had switched over to doing all my personal creative coding in HTML5 Canvas.

In 2018 I started looking for some other platforms for creative code. I discovered Cairo Graphics, a C library that is pretty similar to the Canvas API in JavaScript. It has bindings for many other languages. I tried it with Python and liked it, but wanted to learn a new language. I’d been interested in both Rust and Golang. I converted my JS library over to Rust and got it working well. But Rust is a pretty exacting language. I found it hard to work with for something like creative coding. I spent more time trying to satisfy the compiler than I did writing any interesting code. So I tried Go, and that really hit the spot. It’s been the mainstay language for my creative work for the last three and a half years, though I still keep active in JavaScript as well.

Work-wise, starting from my first job in 2003:

  • Exit 33 / Xplana Learning
  • Flash Composer
  • Brightcove
  • Infrared5
  • Disney
  • Dreamsocket
  • Notarize

I started all of those jobs as a Senior Developer/Engineer/Programmer. At Notarize I am now an Engineering Manager, managing 10 other engineers and not really doing any hands-on coding myself. That’s fine with me. It’s a totally new challenge and I’m enjoying it, especially seeing and helping new grads fresh out of school grow into amazing engineers. Interestingly, only two of those jobs required a formal interview. The rest of them were almost straight to offer from people I had gotten to know well through the Flash community.

Summary

It’s been an amazing 20 years. I had no idea where this was going when I randomly came up with “bit-101” and registered the name back then. But it’s worked out pretty damn well. What about the next 20 years? If I’m still breathing and able to type coherent code, I’ll be cranking out something for sure.

More gif-making tips and tools

misc, tutorial

I’ve been continuing my search for the ultimate gif-making workflow and came across two more tools.

gifsicle

and

gifski

Both of these are command line tools available across platforms.

gifsicle

I first heard about gifsicle a while ago as a tool to optimize gifs. I tried it on some of the larger ffmpeg-created gifs and it didn’t seem to do a whole lot. You can specify three levels of optimization. The lowest level didn’t have any effect. The highest level took a single megabyte off of a 27 MB gif. Not really worth it.

You can also use gifsicle to reduce colors in an existing gif. This shrank the files considerably, but at a visible loss of quality. I think it would be better to handle that in the palette-creation stage.
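For reference, here’s roughly what those two invocations look like (-O1 through -O3 are the optimization levels, and --colors reduces the palette; both are documented gifsicle flags):

gifsicle -O3 in.gif -o out.gif
gifsicle --colors 64 in.gif -o out.gif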

But gifsicle can also create gifs from a sequence of images, just like ImageMagick and ffmpeg. There’s a big drawback to this workflow though: the source images have to be gifs themselves. OK, I was able to put together a quick ImageMagick script to convert all my pngs to gifs. But that took quite a while. I didn’t time it, but I feel like it was more than a minute for 300 frames. As for size, the result was almost exactly in between the sizes produced by ImageMagick and ffmpeg.
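That workflow looks something like this, assuming ImageMagick’s mogrify for the batch conversion (not necessarily the exact script I used; gifsicle’s --delay is in hundredths of a second, so 3 is roughly 30 fps):

mogrify -format gif frames/*.png
gifsicle --delay 3 --loopcount=forever frames/*.gif -o out.gif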

But the additional conversion process put this one out of the running for me.

gifski

I think this is a very new solution. And it’s written in Rust, which tends to give projects a lot of street cred these days.

After using it, I am impressed.

The syntax is pretty straightforward:

gifski --fps 30 frames/*.png -o out.gif

I don’t think I really need to explain any of that.

Performance-wise, it hits a pretty sweet spot. Not as fast as ffmpeg, but the file sizes are way smaller. Not as small as ImageMagick’s output, but way faster.

Here are the results I got:

Time:

ffmpeg:       5.780 s
gifski:      19.341 s
ImageMagick: 43.809 s
gifsicle:    with image conversion needed, way too long

Size:

ImageMagick: 13 MB
gifski:      16 MB
gifsicle:    18 MB
ffmpeg:      27 MB

Summary

I feel like it’s pretty straightforward.

If you are going for size, nothing beats ImageMagick, but it takes forever.

If you are going for speed, nothing beats ffmpeg.

If you are dealing with gifs as your source image sequence, gifsicle might be a good compromise.

But I think the overall winner is gifski in terms of hitting that sweet spot. I’ll be using it a lot more in the coming days and weeks, and will update with any new findings.

I should also note that all my tests have been on grayscale animations. Full color source images could change everything.

A note on quality and duration

All of the gifs produced seemed to me to be of very comparable quality. I didn’t see any quality issues in any of them. To my eye, they seemed like the same gif, with one exception – duration.

Actually, I discovered today that ImageMagick’s delay property will truncate decimal arguments. Or maybe round them off. I’ve gotten conflicting info. Anyway, I’ve been using a delay of 3.33 to make them run at 30 fps. But it turns out that just sets a delay of 3/100ths of a second. So they’ve actually been running a bit faster than 30 fps. Somehow, the gifs created with ffmpeg and gifski do seem to run at the exact fps specified. Specifically, a 300 frame animation set to run at 30 fps should run for 10 seconds, as the ffmpeg and gifski gifs do. But ImageMagick’s finishes in 9 seconds.

I tried some other formats for the delay parameter. Apparently you can specify units precisely, like -delay 33,1000 for 33/1000ths of a second, or even -delay 333,10000 for 333/10000ths of a second. But this doesn’t make a difference. From what I understand, the gif format itself does the hundredth-of-a-second rounding. If so, I’m not sure what ffmpeg and gifski are doing to make it work correctly.
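If you want to verify what actually got written into a file, gifsicle’s info mode prints the per-frame delay stored in the gif, which makes the truncation easy to spot:

gifsicle --info out.gif | grep delay | head -3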

Big deal? Maybe, maybe not. Worth noting though.

More ffmpeg tips

tutorial

Palettes

In a recent post, I shared how to make animated gifs with ffmpeg. I hadn’t been doing it very long, so I wasn’t sure how it would work in the long run. Lo and behold, I ran into a problem. I was making black and white (actually monochrome grayscale) gifs and for the most part it was going well. But then I saw some yellow getting in there!

None of the still images had anything but grayscale values. So how was I getting yellow? To be honest, I’m not 100% sure of the details, but it’s got to do with palettes. Gifs generally have just 256 available colors. There are tricks to make animated gifs use more than that, but let’s stick with the basic case. 256. The frames you create for your animations will likely be pngs, which means they can have millions of colors. Somehow, ffmpeg needs to take all those millions of colors and choose just 256 for the gif.

Last time, I posted this command:

ffmpeg -framerate 30 -i frames/frame_%04d.png out.gif

This was working pretty well for my grayscale gifs, but if you tried using it for full color animations, there’s a good chance you ended up with a mess. Because you basically got a random palette. I don’t know how the palette is chosen in that case, but there’s a damn good chance it’s not going to be right. So you need to create a decent palette. That’s done with a palettegen filter. First you run ffmpeg to generate a new 16×16 pixel image (256 pixels) that contains the palette it thinks it should use. That looks like this:

ffmpeg -i frames/frame_%04d.png -vf palettegen palette.png

I’ll keep the same flow of going through each parameter:

-i frames/frame_%04d.png – the input frames that will be analyzed to build the palette.

-vf palettegen – a video filter. The filter is palettegen which generates a palette.

palette.png – the output image holding the palette.

The result, as I said, is just a 16×16 png image. You can open it up and see your palette. This has been done by analyzing all the images in the sequence and determining which 256 colors can be used across the entire animation to make things look decent.

Now you use this palette image in another call to ffmpeg:

ffmpeg -framerate 30 -i frames/frame_%04d.png -i palette.png -filter_complex paletteuse out.gif

Once again, I’ll break it down.

-framerate 30 – the fps just like before.

-i frames/frame_%04d.png – the input frames, just like before.

-i palette.png – yet another input, this time, the palette image.

-filter_complex paletteuse – this is where the magic happens.

out.gif – the output image, just like before.

So the filter_complex one is pretty complex, especially if you try to look up the documentation or examples. You’ll find examples like this (IGNORE THESE!):

ffmpeg -i input.mp4 -filter_complex "[0:v]scale=iw:2*trunc(iw*16/18), \
boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20: \
chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,setsar=1" output.mp4

or…

ffmpeg -i bg.mp4 -i video1.mp4 -i video2.mp4 -filter_complex \
"[0:v][1:v]setpts=PTS-STARTPTS,overlay=20:40[bg]; \
 [bg][2:v]setpts=PTS-STARTPTS,overlay=(W-w)/2:(H-h)/2[v]; \
 [1:a][2:a]amerge=inputs=2[a]" \
-map "[v]" -map "[a]" -ac 2 output.mp4

You’ll generally find these in Stackoverflow, with instructions like, “Just do this…” and no explanations at all on why you should JUST do that.

If you’re lucky, you’ll at least find something like…

 -filter_complex "fps=24,scale=${SIZE}:-1:flags=lanczos[x];[x][1:v]paletteuse"

I stripped away everything but the filter_complex part there. This one actually came from a good friend, Kenny Bunch, who saw my last article and happened to be digging into the exact same stuff at the same time, but with full color animations.

This one was still more complex than I needed, but it was the simplest example I found, so I was very thankful.

So back to basics. filter_complex is a way of defining, a… well, a complex filter. You define the filter in a string and can chain together multiple actions to do all kinds of fancy things. The way it works is a series of filter actions that result in an output, and then you can use that output in further actions. Like this, broken down per action:

do first filter action[a]; does the first filter action and stores the output in a variable a.

[a] do another filter action[b]; feeds the output a into the next action, and saves that as b.

[b] yet another action[x]; you get the idea.

So in the above example:

fps=24,scale=${SIZE}:-1:flags=lanczos[x]; Sets the fps to 24, scales the gif to a given size on x and keeps the aspect ratio on y (the -1 param), uses Lanczos resampling to do the scaling. Stores the result in x.

[x][1:v]paletteuse Takes the data in x and uses input 1 (the palette image) as a palette using the paletteuse filter. In short, uses the palette image as the palette for the animation.

In my case, I’d already set the framerate. And I wasn’t scaling anything, so I could get rid of that whole first action. And apparently, ffmpeg was smart enough to figure out that input 0 was the frames and input 1 was the palette, so I could shorten the entire thing down to:

-filter_complex paletteuse

Magical.
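For completeness, my understanding is that the shorthand expands to the fully labeled form, something like:

ffmpeg -framerate 30 -i frames/frame_%04d.png -i palette.png -filter_complex "[0:v][1:v]paletteuse" out.gif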

Using these two commands, I was able to get a correctly paletted animation, with no yellow:

I still don’t know exactly why I was getting yellow, since all the source frames were completely grayscale. I guess it was just the complexity of calculating the values of all the channels of all the pixels from all the frames. Somewhere along the line, it wound up with a bit less in the blue channel for some reason. And once it started, it just multiplied.

Anyway, couldn’t help thinking of this…

Speed and Size

Other considerations when deciding whether to use ffmpeg or ImageMagick are rendering speed and file size.

ffmpeg will create gifs way faster than ImageMagick. A quick test:

Input: 300 frames, 500×500 pixels each.

ImageMagick: 29.985 seconds

ffmpeg: 4.301 seconds

And the ffmpeg test included generating the palette as well as the animation.

File size is not so good a story though. Same animation:

ImageMagick: 13 MB

ffmpeg: 27 MB

Other examples weren’t quite that bad, but ImageMagick always wins handily in the size category.

Of course, the other thing I mentioned last time was that ImageMagick will consistently use so much memory that the process just crashes and fails, whereas ffmpeg never has a problem in that area.

Just Say Yes

One last trick: if you’re scripting these commands and your script constantly stops to ask whether you want to overwrite your previous output gif with a new animation, just add the -y parameter to your command and it won’t bother you anymore.
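For example, the final gif command from earlier becomes:

ffmpeg -y -framerate 30 -i frames/frame_%04d.png -i palette.png -filter_complex paletteuse out.gif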

Animation Cookbook for ffmpeg and ImageMagick

tutorial

I create a lot of animated gifs and I’m getting into creating videos as well. All of these are initially created by rendering a series of png files into a folder called frames. Over the last few years I’ve figured out some pretty good recipes for converting these frames into animated gifs and videos. So I’m sharing.

Both ImageMagick and ffmpeg are extremely complex programs with tons of parameters that allow you to do all kinds of image, video and audio manipulation. I’m not any kind of an expert on any of these. And this post is not meant as thorough documentation on these programs. It’s more of a cookbook of commands that handle specific use cases well. I’ll also try to explain what each of these parameters does, which might help you to come up with your own commands.

Nothing beats thoroughly reading the official docs, but if you just want to get something done quickly, these recipes should help you.

ImageMagick

It confused me at first, but ImageMagick is not a single executable. It’s more of a suite of graphics utilities. Maybe under the hood it’s all one executable, but it presents as different commands you can call. For generating an animated gif, you’ll be using the convert command.

As I said, I keep my frames in a folder called frames. They are named sequentially like so:

  • frame_0000.png
  • frame_0001.png
  • frame_0002.png

I use four digits, which gives me up to 10,000 frames. At 30 fps, that gives me about 333 seconds worth of video, or about five and a half minutes. Good enough for my needs now. If you go with three digits, you’ll only get 33 seconds worth. If I ever need to do more than five minutes in a single clip, five digits will give me almost an hour.

convert uses simple wildcard matching, so my input is simply frames/*.png and the output can be something like out.gif

Next you need to figure out the frame rate. I usually go with 30 fps. You don’t enter frame rate directly with convert though. You need to specify a delay between each frame. And here’s where things get really weird. The unit you use for the delay parameter is hundredths of a second. So if you want 30fps, you’d specify 100 / 30 or 3.333… Not exactly intuitive, but you can get used to anything. Put it all together and you get:

convert -delay 3.333 frames/*.png out.gif

But there’s one more parameter I put in there, which is -layers Optimize

convert has a whole ton of stuff you can do with it to affect how the gif is put together. -layers Optimize combines a few of the most useful ones into a single command. I found that before I started using this, I’d get an occasional animation that just totally glitched out for a frame or two. Actually, it was more than occasional. Not every time, but often enough to be a problem. Using this parameter completely solved it, so I highly recommend it. If you want to read more up on it: https://legacy.imagemagick.org/Usage/anim_opt/#optimize

So the final command I use is:

convert -delay 3.333 -layers Optimize frames/*.png out.gif

This has been my bread and butter command for a long time now.

But lately, I’ve been running into problems using ImageMagick to create longer, larger format gifs. This hasn’t been an issue at all when I was just posting to Twitter, which has something like a 15 MB size limit. But as I’ve been posting gifs in other places that can be up to 100 MB, I’ve started making larger and longer animations. And ImageMagick has been crashing pretty regularly while generating these. I’ve watched a graph of memory use and I’m pretty sure that it’s just running out of memory. It uses up all my physical memory, then uses up all my swap, then crashes. On at least one occasion I got it to work by closing down every other program that was running on the computer. That freed up just enough memory to get through. But it doesn’t always work. So that’s… less than ideal.

ffmpeg for gifs

[Update] Before using the command below, read this post, which has to do with color palettes: More ffmpeg Tips

I’ve been using ffmpeg for a while now to create videos, and I knew it could create gifs as well, so I’ve recently given it a try and it seems to do just as good a job as ImageMagick. And it can handle those high res, long gifs as well. So I’m pretty happy so far. Creating gifs with ffmpeg is also pretty straightforward, except for the input filename specification, which I’ll cover next. Here’s the command I use:

ffmpeg -framerate 30 -i frames/frame_%04d.png out.gif

Breaking that down: the framerate param is just the number of frames per second. Simple. -i is for input, and you point it to your sequential frames. And finally, the output filename.

The only complexity is the frame_%04d.png part. This is a printf type of string formatting convention. The %04d part means that you have four digits (4d), padded with zeros (04). So frame 37 becomes frame_0037.png.
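It’s the same convention your shell’s printf uses, if you want to see it in action:

printf "frame_%04d.png\n" 37

That prints frame_0037.png.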

As I said, this has been working really well for me so far, but I haven’t been using it very long, so if there are any issues, I might come upon them later.

Videos

Video creation is what ffmpeg is usually used for. It’s an amazingly powerful and complicated tool. Even just figuring out all of the possible parameters can be daunting. So I’ll go over some of the common use cases I have and what each parameter does.

Mostly I’ve been optimizing the videos I create for use on YouTube. This uses h264 encoding and a specific pixel format. Here’s the basic command I use for creating a video from a sequence of frames:

ffmpeg -framerate 30 -i frames/frame_%04d.png -s 1920x1080 -c:v libx264 \
 -preset veryslow -crf 20 -pix_fmt yuv420p out.mp4

(That’s a single command. I’ve wrapped it with a backslash to make it more readable.)

Yeah, a lot to take in there. So let’s go through it param-by-param.

-framerate 30 is just the fps setting.

-i frames/frame_%04d.png specifies the input files as described earlier.

-s 1920x1080 sets the size of the video in the format of widthxheight. I believe that if you leave this param off, it will create the video based on the size of the source images. But you can use this to scale up or down as well.

-c:v libx264 OK, this will take a little background. ffmpeg can be used for creating or processing both audio and video – or both at the same time. Some of the parameters can be applied to audio or video. In this case, -c is letting you specify what codec to use for encoding. But is it a video codec or an audio codec? By adding :v we’re specifying that it’s a video codec. Fairly obvious here because there is no audio involved in this case, but explicit is always good. I’ve seen examples where people use -s:v to specify the size of the video, which seems like overkill to me. But ffmpeg can also handle things like subtitles, and maybe those can be sized? Again, explicit is good. Anyway, here we are saying to use the libx264 codec for encoding the video. Which is good for YouTube.

-preset veryslow Here we have a tradeoff – speed vs size. These are the presets you can use:

  • ultrafast
  • superfast
  • veryfast
  • faster
  • fast
  • medium – default preset
  • slow
  • slower
  • veryslow

The faster you go, the larger your file size will be. It’s recommended that you use the slowest preset you have patience for.

-crf 20 is the constant rate factor. There are two ways of encoding h264 – constant rate factor (CRF) and average bit rate (ABR). Apparently CRF gives better quality generally. This parameter can go from 0 to 51, where 0 is lossless and 51 is crap. 23 is default and anything from around 17 to 28 is probably going to be generally acceptable. The CRF setting also affects file size. Lower CRF means a higher file size. So if you need the lowest possible file size, use a high CRF and a slow preset. It will take a long time and look like crap, but it will be small! More info on all this here: http://trac.ffmpeg.org/wiki/Encode/H.264

-pix_fmt yuv420p is the pixel format. You might be more used to RGB888, where you have red, green and blue channels, and 8 bits per each of those channels. YUV is just an alternate way of formatting colors and pixels. Mind-numbingly detailed description over at Wikipedia: https://en.wikipedia.org/wiki/YUV . But this is a good format for YouTube.

out.mp4 is the file that is created after applying everything else.

So there you go: my bread-and-butter video creation command.

But there are a few more tidbits I use:

Fades

I usually try to make my animated gifs loop smoothly. But for longer form videos this is not always what you are going for. Still, you usually don’t want the video to just get to the end and stop. A fade out to black is a nice ending effect. You could build that into your source animation code, or you could do it in post of course. But ffmpeg has fades built in. Here’s the altered command:

ffmpeg -framerate 30 -i frames/frame_%04d.png -s 1920x1080 -c:v libx264 \
 -preset veryslow -crf 24 -pix_fmt yuv420p -vf "fade=t=out:st=55:d=5" out.mp4

Here, I’ve added the param -vf "fade=t=out:st=55:d=5"

The vf is for video filter, I believe. This takes a string in a very compact format. It starts by specifying the type of filter and its definition: "fade=...". After the equals sign is a list of parameters in the format: a=x:b=y:c=z. We’ll step through those:

t=out sets the fade type to out.

st=55 tells the fade to start at 55 seconds into the video.

d=5 means the fade will last 5 seconds.

In most cases the st and d parameters will add up to the total video length, but you can do all kinds of things here. Like fading out mid video and then fading back in later.

Obviously t=in would execute a fade in. You might want to do that at the start of your video.

Although I have not had the need for it, I assume you can use -af to create an audio fade.
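For example, chaining a two-second fade-in at the start with the fade-out at the end, plus the audio equivalent (afade is ffmpeg’s audio fade filter; I haven’t needed it myself, so consider it untested):

-vf "fade=t=in:st=0:d=2,fade=t=out:st=55:d=5"
-af "afade=t=out:st=55:d=5"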

Adding Audio

One of the big benefits of video over animated gifs is the possibility of adding music or sound effects to the animation. As mentioned, ffmpeg can do that too.

I believe it’s possible to add the soundtrack to the video as you are creating it from the source frames, but this does not seem like a good workflow to me. I might generate dozens of videos before coming up with one I decide to go with. So it makes more sense to me to add the audio when I’m done. Here’s my command for adding audio to an existing video:

ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a -c:v copy output.mp4

Again, let’s go through each param.

-i video.mp4 is the video you’ve already created that you want to add audio to.

-i audio.wav is the sound file you want to add to the video. Yes, we have two inputs here. That’s how it works.

-map 0:v The map parameter tells ffmpeg which input is what. Here we’re saying the first input (0) is video (v).

-map 1:a As you can probably guess, this says that input 1 is audio.

-c:v copy As covered before, this is the video codec. In this case though, we don’t want to encode the video all over again. We just want to copy the video over and add the audio to it.

output.mp4 is the final combined audio/video file.

So that’s pretty straightforward and works great. But this kind of assumes that your video and audio are the same length. They might be, which is nice, but they might not be, which raises the question: how long should the final video be? Do you make it the length of the video or the length of the audio?

I believe by default, it will choose the longest option. So if your video is one minute long and you throw a five minute song on top of it, you’ll wind up with a five minute video. Honestly, I’ve never tried this so I don’t know what happens. I assume you get four minutes of blackness. Or maybe four minutes of the last frame of the source video. Or maybe it loops?

On the other hand, if your video is longer than your audio, you will probably wind up with some amount of silence after the audio finishes.

So the other option here is the shortest param:

ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a -c:v copy -shortest output.mp4

All this does is make the output video the same length as the shortest input.

I’m sure you can figure out all the different use cases here. And you probably want to think about fading your audio out along with a video fade so the sound/music doesn’t just suddenly stop.

NFT follow up

misc

Well it’s been a few days since I gave in and decided to check out the world of NFTs. It feels like a few months. Figured I’d just post some thoughts and observations.

What I thought

I think my biggest confusion when I started this whole thing was: why the hell are people buying NFTs?

Initially I thought there was some kind of confusion going on where buyers thought they were actually buying rights to the artwork in some way. In fact, buying an NFT by itself gives you no rights to the original work. But as I talked to people, I discovered that while they generally understood this, nobody really cared. It’s not like people were buying NFTs in order to use them in their corporate marketing campaigns or something. They just wanted to “collect” the art.

This just caused more confusion for me. Surely there is no inherent value in buying them, I thought. People must either be buying them as investments or to support artists they like.

While I assumed that people paying thousands of dollars (or even millions) for big name NFTs were doing it as investments of some sort, I didn’t believe that the average hic et nunc user was doing that.

So it had to be about supporting artists, which I thought was nice, but frustrating because I’ve had a donation link up for ages and I get $5 or $10 a couple of times a year. So why were people so happy to support artists via NFTs, but not directly?

I was wrong

Just about everything I thought was wrong.

  1. People do find (massive) inherent value in buying NFTs.
  2. People do use NFTs as short/mid-term investments.
  3. The whole supporting-the-artist thing exists, but it’s a pretty minor aspect.

Value

Yeah, people are crazy about collecting NFTs. Especially from a known artist. Honestly it’s been a bit of an ego trip, because people seem to know who I am in this community (getting a shout out from my old friend Mario Klingemann does not hurt either). I’m feeling a taste of the (very low level) rock star vibes I had in the ’00s and early ’10s on the Flash conference speaking tour. It’s nice.

Anyway, yeah, everything I’ve put up on hic et nunc has sold out in minutes. I’ve just been experimenting with prices and amounts and it doesn’t matter, it just goes like that. But it goes beyond that. People are DMing me on Twitter begging me to let them know the next time I mint something. Or asking if they can buy something from me directly. When people manage to buy something before it sells out, they’re over the moon about it, like they just won the lottery. Get that – they gave me money and they feel like they won something huge. People are HUGELY passionate about collecting stuff.

I’m still trying to wrap my head around it because it seems really, really bizarre to me. It makes absolutely no sense in my brain. But I’m trying to roll with it. It’s just all very surreal.

Investments

This is a way bigger part of the system than I thought. For my first NFT I minted 10 editions at 1 tez each. No idea what to expect. Within a couple of hours of them selling out, one was re-sold at 150 tez! There’s one of those still for sale at 3000 tez!!!

Yeah, so IF that sells (I can’t imagine it will), the seller will have made a 3000x return. Currently 1 tez is around $4 USD. You do the math.

Of course, once something gets into the realm of capitalism, all kinds of ugly stuff starts cropping up. I discovered there are bots people set up to buy NFTs in bulk from popular artists at low cost and immediately resell them at a huge markup. This generates a bunch of anger in the community from individual collectors, who are mainly just into collecting work from artists they like. The bots make it hard for them to get the pieces they are trying to collect.

tl;dr: it’s a complex and complicated space.

Supporting the Artists

Like I said, this is there for sure, but it’s not a huge part of it, from what I can see. I think the music industry is a good example. Yeah, there are lesser known indie groups who have the support of loyal fans hoping they’ll eventually make it big. But you wouldn’t say that most people buy music because they want to support the artist. They buy it because they want the music.

The Future

I have no idea where this is all headed. On one hand I feel like this can’t last. It’s like a gold rush. Pokemon cards, Beanie Baby stuff. The bubble is gonna burst some day. On the other hand, this probably will pave the way for the future of how art is bought and sold… maybe. I have no clue. I guess I’m just going to surf this wave for a while. And not quit my day job just yet.

NFT

misc

OK, y’all wore me down. After multiple messages per week from friends and strangers urging me to put my work on hic et nunc, and many long, drawn out debates with people I respect, I finally gave in and created an NFT.

At some level I feel like a sellout. On the other hand, I was feeling way too stubborn and dogmatic about my resistance to try it. It was a serious internal struggle for the past few months.

Long story short, people seem to want to give money to digital artists via NFTs. Lots of money. But not any other way. I still have misgivings, but I’m going to assume that people know what they are doing and if they want to support me in this way, I’ll accept it.

So here’s my very first NFT:

https://www.hicetnunc.xyz/objkt/215806

Make it rain, fans. 🙂

BIT-101 News

misc

I’m starting a newsletter. I’ve been hearing more and more buzz about newsletters these days. I always assumed these were just cheap marketing gimmicks. “Sign up for my newsletter so I can spam you with info about this thing I’m trying to sell.” But apparently it’s become the replacement for RSS for a lot of people. Who knew? Probably everyone but me.

Anyway, I’m going to start one. For the most part it’s going to be quick summaries and links to stuff that I post right here, with maybe one or two other links or items of interest. I thought about doing more complete articles in the newsletter, but I’d rather keep my content centralized here, so summaries and links it is.

For now, it’s totally free. If it takes off and people seem interested, I might experiment with doing some kind of additional paid content. Undecided at this point. One step at a time. Anyway, sign up here:

https://bit101.substack.com/p/coming-soon

First issue should go out Wednesday morning, 8/18/21.

Code Toggling Plugins

misc

I’m a firm believer that creative coding is a very different activity than the kind of coding that most people do for their day jobs. In feature / application / systems programming, there’s usually a fair amount of planning, scoping and architecting that comes before you start coding. Ideally, when you start coding, you have a pretty good idea of what you are going to make and how you’re going to make it.

But creative coding, for me at least, starts with “what would happen if I did this…?” and generally follows that line of logic all the way until I have something published.

So I’m always going to this value or that, deleting “true” and typing “false” or vice versa. Or, I’m using a sine function and want to see what happens if I use cosine. Or tangent. Or I want to swap greater than with less than and see what that does. Or change plus to minus or minus to plus.

After way too many times doing this type of thing, I figured there must be some plugins that would make this easier. And so there are! I use vim, and found several. I tried a few and had almost settled on vim-toggle and then checked out switch.vim, which I found to be the most powerful of the lot.

I recognize that most people probably use VS Code or Sublime Text or other editors. A quick search informed me that there are similar plugins for those editors as well (all links below). I haven’t tried the non-vim ones, so I can’t vouch for their quality, only their existence. There may be better ones, so do your research.

The way these work is you put your cursor on the word or symbol you want to change and hit some keyboard shortcut. For switch.vim, that’s gs. Each time you hit that shortcut the word will toggle back and forth between, say, “true” and “false”. Or “on” and “off” or “1” and “0”.

Most of the plugins come with several obvious definitions pre-defined. But make sure you find one that will also allow you to set up custom toggle sets. Some of the plugins only support binary switching between two options, but some, like switch.vim, will let you specify a list of as many items as you want. It will cycle through all of those options when you hit the shortcut key over any one of them. Here are just a few of the custom toggle sets I set up:

["width", "height"]
["moveTo", "lineTo"]
["x", "y"]
["-", "+"]
[">", "<"]
["cos", "sin", "tan"]

And there are more niche ones I use in my day-to-day creative coding with the Cairo graphics bindings for Golang (blgo). Sometimes I’ll be making a piece where the background is white and the foreground is black. I use a function, ClearWhite, to clear the surface to white, and set the drawing color to black using SetSourceRGB(0, 0, 0). But occasionally I want to invert these to use a black background with white shapes. So I set up some toggles like so:

["ClearWhite", "ClearBlack"]
["0, 0, 0", "1, 1, 1"]

This lets me easily make the change in just a few key strokes.
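If you’re using switch.vim, those custom sets live in your vimrc in the g:switch_custom_definitions list (that’s the plugin’s documented mechanism, but double-check its help for specifics):

let g:switch_custom_definitions = [
    \ ['width', 'height'],
    \ ['moveTo', 'lineTo'],
    \ ['cos', 'sin', 'tan'],
    \ ['ClearWhite', 'ClearBlack'],
    \ ]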

switch.vim even allows you to set up toggling rules using regex, which seems super powerful, though I haven’t dug into that so far.

So far, I’ve found these immensely useful. Maybe you will too. Here are the links:

Vim / Neovim:

VS Code:

Sublime Text 3: