Archive for the 'Technology' category

SublimeText2 Plugin: RunBuild

Jan 16 2012 Published by under General, Technology

I was talking to my coworker, Todd Anderson, today about running build systems in Sublime Text 2. It's a great editor, very powerful and configurable, but documentation on all those features is somewhat lacking. One problem we both had was setting up multiple build systems on a project: say, one for running unit tests, one for running JSLint, one for deployment, etc. You can specify build systems right in a project file as per this reference: For a drop-dead simple example, you could have one build system that launches your index.html file in Chrome, and another that launches your unit tests in tests.html. That would look something like this:

[php lang="JavaScript"]"build_systems":
[
    {
        "name": "preview",
        "cmd": ["/usr/bin/google-chrome", "$project_path/index.html"]
    },
    {
        "name": "test",
        "cmd": ["/usr/bin/google-chrome", "$project_path/tests.html"]
    }
][/php]

Now, in your Tools / Build System menu you should have two entries: preview and test. You can choose one, and the next time you build it will run what you specified in that system. The problem is switching between them. Grab the mouse and go to Tools / Build System / test (or preview). Then F7 or Ctrl-B to build. Or maybe you forgot which one you were on, so when you build it builds the wrong thing and you have to cancel, go back and check, and redo it. A minor pain, but it gets annoying.

Ideally, you could assign a specific keyboard command to each build system. So I started digging in to see how to do this.

First Part of the Solution: set_build_system

After some digging around, I found that there’s an internal command called “set_build_system”. You pass it a single argument called “file”. The value is one of two things:

1. The path to a sublime-build file, such as “Packages/Java/Ant.sublime-build”


2. The name of a custom build system in your project file (like what we just created as "test" and "preview").

You can use this command in a keyboard shortcut. Go to the Preferences menu, Key Bindings – User. This will open up a Default sublime-keymap file, which should contain an empty JSON array like this: [], unless you have already added some shortcuts. You can create shortcuts that will change your build system like so:

[php lang="JavaScript"][
    { "keys": ["ctrl+shift+p"], "command": "set_build_system", "args": { "file": "preview" } },
    { "keys": ["ctrl+shift+t"], "command": "set_build_system", "args": { "file": "test" } }
][/php]

With this, Control-Shift-P will set your build system to your custom preview build system, and Control-Shift-T will set it to test. But it's still a two-step process: set the system, then build. And again, there's no visual indication of which system is currently set, and none when it changes, so you'll still find yourself checking the menu to see which one is checked.

The ideal scene, again, would be one shortcut that chooses the system and then builds.

Second Part of the Solution: plugins

A keyboard shortcut can only run a single command. But we need to run two commands: choose the build system and then build. To do this, you can create a Sublime Text 2 plugin. This is far easier than it sounds. In fact, Todd and I were both pretty shocked at how easy it turned out to be and how quickly we got it working.

A plugin is a Python class that creates a custom command. It extends one of three plugin classes: WindowCommand, TextCommand, or ApplicationCommand. This post was very helpful in getting me started:

Here’s the command that Todd and I came up with virtually at the same time:

[php lang="Python"]import sublime, sublime_plugin

class RunBuildCommand(sublime_plugin.WindowCommand):
    def run(self, build_system):
        self.window.run_command("set_build_system", {"file": build_system})
        self.window.run_command("build")[/php]

Running through it, this imports a couple of sublime packages, defines a class that extends WindowCommand, and a single method called run.

The run method takes a single parameter (besides “self” that all python methods get) which will be the name of the build system to run.

It then does the two things we need to do by making calls to self.window.run_command, passing in the command to run and any arguments.

First we call “set_build_system” like we did in the first keymap example. This passes through the build_system argument.

Then we call build, with no args.

So we’ve accomplished our two actions in a single stroke: set the build system, then build.

You can save this file anywhere in your Packages folder in Sublime's config folder (check the documentation to see where that is on your OS). I suggest you give it its own folder, so it becomes /Packages/RunBuild/.

Now we just need to change the keymap to call this custom command instead of set_build_system. The name of the class, RunBuildCommand, will be mapped to a command called “run_build”. CamelCase becomes camel_case in this setup. So we do this:

[php lang="JavaScript"][
    { "keys": ["ctrl+shift+p"], "command": "run_build", "args": { "build_system": "preview" } },
    { "keys": ["ctrl+shift+t"], "command": "run_build", "args": { "build_system": "test" } }
][/php]

And we are done! Control-Shift-P launches index.html in the browser, and Control-Shift-T launches tests.html. Of course, your build systems could be far more complex than the simple ones I showed here, but there you go.
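As an aside, Sublime derives the "run_build" command name from the RunBuildCommand class name mechanically. The conversion rule (strip the Command suffix, then CamelCase to snake_case) can be sketched in a few lines of Python; note this `command_name` helper is my own illustration, not part of Sublime's API:

```python
import re

def command_name(class_name):
    # Sublime derives a command name from a plugin class name:
    # drop a trailing "Command", then turn CamelCase into snake_case.
    base = re.sub(r"Command$", "", class_name)
    return re.sub(r"(?<!^)(?=[A-Z])", "_", base).lower()

print(command_name("RunBuildCommand"))  # run_build
```

So whatever you name your class, this is how you figure out what to put in the "command" field of your keymap.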

5 responses so far

Making Tools Presentation at Flash and the City

Dec 01 2011 Published by under Conferences, Extensibility, Technology

In case you didn't get to see my Making Tools presentation this year (or if you tried to see it at RIA Unleashed and suffered through my technical difficulties), you can now see it online:

You can hear the guy who corrected me by pointing out that apparently crows use tools to make tools at 4:00. 🙂

One response so far

Kindle Fire First Impressions

Nov 16 2011 Published by under Kindle, Technology

I got my Kindle Fire last night and wanted to post some initial impressions of the device.

First of all, I love it. Great form factor, feels very solidly built, great display, does all that I need it to do. I could not put it down for more than 2 seconds last night. Take that as a 95% awesome. The rest of the stuff I say here will be in that 5% maybe not so awesome category, so read it with that in mind.

Two big caveats with this device.

One, do not buy a Kindle Fire expecting to get an Android tablet. I would not call this device an Android tablet. If you want or need an Android tablet, go out and buy an Android tablet. This is a Kindle tablet that happens to use a version of Android as its operating system. If you can’t let go of that expectation, you will have a hard time with this device.

Second, the Fire is every bit as tied to the Amazon ecosphere as the iPad is tied to the Apple ecosphere. Pretty much every top level navigation leads to an Amazon service – Kindle books, Kindle cloud music and mp3 store, Amazon video portal, Audible audio books, etc. I’m a fan of all those services. If you are too, you will be in heaven. If you’re not, you’re going to feel like you wandered into the wrong party. Of course, I’m sure what Amazon is hoping for is that people will become fans of those services when they get the device. And that’s probably a pretty good bet. When you click on the Videos tab and see thousands of movies and tv shows you can watch for “free” if you have a Prime membership, you’re going to be awfully tempted to fork over $79 a year for that.

There are a few things that detract slightly from the awesomeness of the device, in my eyes. A lot of it has to do with it not being a real Android tablet.

For one, there's no standard home / launcher screen like you have in other Android devices. Instead you have a bookshelf metaphor. The top shelf is huge and holds the "carousel". This is a history of all the books you've read, apps you've used, and web pages you've looked at. Problem is, at this point it's permanent. Read the latest teen vampire romance novel, and its cover is going to be loud and large the next time you turn on your Fire. Guilty music addiction? Be careful, or everyone's going to know that the last thing you listened to was Celine Dion. Same goes for web pages. Enough said. I'm sure eventually Amazon will allow us to edit those, or someone will figure out where that history is stored and how to edit it.

Lack of standard home / launch also means no widgets. Under the carousel, you can save other “favorites”, which can be apps, books, web pages, etc. But widgets have no place on this device.

There is only a single hardware user control on the device: a power button. All your standard Android controls (menu, back button, home button, search) are in a bottom bar that can sometimes be collapsed to give an app more screen space. Likewise, the volume controls are all in software, accessible through another settings menu in the top status bar. This is very much in line with the original Kindle philosophy, which is to make the device disappear when you are viewing content. I don't hate it, but if you're doing a lot of navigating and jumping around from app to app, it is noticeably slower.

Amazon's app store is pretty good. Unfortunately, you don't get the full app store on the Fire. Due to various hardware limitations and the fact that it's built on a forked version of an earlier Android build, it looks like the Fire needs specifically built Fire apps. Sideloading is possible, but I've seen reports that this is a hit-and-miss situation, with some sideloaded apps working fine, and others crashing. Again, you have to think that this is not an Android tablet but a Kindle Fire, and the apps it runs are not Android apps but Kindle apps.

Finally, in what I consider possibly the biggest oversight, there is no external storage on this device. It has 8GB internal storage, of which 6 something is available. For me, that’s just about the point where you can store a decent amount of things, but you DO have to pay attention to what’s on there and remove things you don’t think you’re going to need for a while.

Of course there’s lots of other things that people will complain about. No 3G, no camera or microphone, etc. I don’t miss any of those things and don’t really expect them on a device in this price range. And anyone arguing about whether this is or is not an “iPad Killer”, just shut up. You’re an idiot.


So it’s got some imperfections, but again, in my mind all the bad points fall into that 5% not awesome space. If you can accept it for what it is, and you like Amazon services, it’s an amazing device, especially considering the $199 price tag.

20 responses so far

Recent Events and my non-reaction

Nov 14 2011 Published by under General, Technology

I’ve been using Flash since 1999 or so. It’s responsible for my career and any small bit of fame and fortune I might have. Most of the people I count as friends have come from the Flash community. Flash will always be something more than just a technology to me. I think Flash has seen its golden age and is probably unlikely to see such times again. But it will continue to fill several gaps, I’d guess for several years to come. The areas that Flash is focusing on now – 3d gaming, advanced video, and mobile apps – are not areas that interest me very much. So I am currently more interested in other technologies and activities. And that’s all I really have to say on the subject.

In the past week, the layoffs at Adobe and the dropping of the Flash player on mobile browsers have caused a huge storm in the Flash community, and in the wider technology community as well. Of course there are the people who have always been anti-Flash, now predictably dancing in the streets and tearing down statues, rejoicing that Flash is dead. Then there are many in the Flash community who feel personally betrayed by Adobe by this move, shouting things like, "Flash is dead, and Adobe killed it!" Then there are those insisting this is a great move for Flash that will help it move forward.

To my ears, this is all a whole lot of noise and whining. Two things I read last week resonated with me. One was a friend who said, “Twitter is like a license to be an asshole.” The other was this graphic that’s been floating around:

In the first part of my life, I was painfully shy. I don’t consider myself shy anymore, but I think most people who know me personally would say I am in general a quiet person, at least until I know you really well. Having a blog and twitter and other social networks has given me a voice that is louder than one I’ve ever had before. Like many others, I’ve used that voice carelessly a lot. But more and more I find myself consciously and purposely being more reserved about it. Holding back on tweeting the first snappy remark that comes to mind, or banging off a blog post that expresses my outrage at some situation. I find that later I’m always very glad that I held back. The world isn’t going to fall apart if it doesn’t hear my take on recent events right this minute.

Anyway, I’m taking some time off twitter and most other social networks. It all just sounds like nails scraping on a chalkboard to me. I’m going to try to blog more, as that’s something I’ve found I’ve been doing a lot less of. And I intend those blog posts to be some useful bit of information, not just noise and opinions on the latest technical scandal.

19 responses so far

Design Tactics – Select Single

Nov 04 2011 Published by under General, Technology

The other day I was coding a particular UI implementation and realized that I had coded the same thing in multiple languages multiple times. I knew exactly how I was going to go about it and did what I usually do and, as usual, it worked just right. I started wondering how many examples like that exist, and that it would be good to occasionally document them, if not for my own sake, then for the sake of others.

These things aren’t necessarily so broad in scope that I’d call them design patterns. I might call them a design strategy, but that still has the connotation of being broad in scope, and could be confused with the strategy pattern. So I thought of naming them design tactics. Kind of like hand-to-hand combat with your code.

The first one, and the one that sparked my interest in the subject, I call the Select Single Tactic. I’m sure you’ve done this plenty of times yourself. It’s basically the functionality of a radio button, a list, a menu or other navigation. You have a number of items, of which only one can be selected. When the user selects one, it usually changes its state to show that it is selected, and the other associated items will change their state as needed to show that they are unselected.

The most common scenario is that the user will click on an item to select it, so we’ll go with that idea. A common first start is to code the item so that it responds to the click directly, changing its state to selected. See the following snippet, kept in pseudocode as it can apply to just about any language:

// constructor
Item() {
    this.addEventListener(click, this.onClick);
}

void onClick() {
    this.setSelected(true);
}

Here, the item responds to its own click by changing its visual state to show that it has been selected. In some cases, this is fine, but there’s probably a more elegant way. However, we’ll leave it like this for now.

The next part is to deselect all the other items. A first pass at this might be to store all the items in an array, and when any item is clicked, set all the items in the array to show unselected. This would be done in some code external to the items themselves.

void onItemClicked(clickedItem) {
    for(item in itemList) {
        item.setSelected(false);
    }
}

The problem with this is that often the code internal to the item will run first, setting the clicked item to selected. Then this external code will run, setting ALL the items to unselected, including the one that was just set as selected. So we have to check that we are not unselecting the item that was clicked:

void onItemClicked(clickedItem) {
    for(item in itemList) {
        if(item != clickedItem) {
            item.setSelected(false);
        }
    }
}

Now, all the items are set to unselected EXCEPT the one that was just clicked. So this works, but it’s just starting to get a bit ugly. Another issue is that we now have some code INSIDE the items setting the clicked one to selected, and some other code OUTSIDE the items setting the others to unselected. It’d be a lot cleaner if all the selection/unselection code was in one place.

So another option is to remove the onClick and selection code from the items themselves, and in the external click handler: A. unselect all items, B. select the clicked item.

void onItemClicked(clickedItem) {
    for(item in itemList) {
        item.setSelected(false);
    }
    clickedItem.setSelected(true);
}

This is better. Now items just dispatch clicks and have their state set externally. But we can do better.

First of all, why the hell are we looping through this array of items at all? The use case specifies that only one item will be selected at any given time. So why go through 2 or 5 or 100 items, setting them all to unselected, when at most we only need to do it for one?

Second, for most implementations, we will need to keep track of which item is selected, so that we can show some specific data, go to a particular section, or whatever. So when an item is clicked, we’ll probably want to store it as the selected item. This kills two birds with one stone. The only item we need to unselect is the one that is currently selected. Actually, when the system first initializes, it could be the case that no items are selected, so we’ll check for that case too.

void onItemClicked(clickedItem) {
    if(selectedItem) selectedItem.setSelected(false);
    selectedItem = clickedItem;
    selectedItem.setSelected(true);
}

That’s about as elegant as it gets. At this point, unless you are using it for some other purpose, we don’t even need the array of items any more. As long as we know which item is selected, we just need to unselect it.
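Here's roughly what that final version looks like as runnable code. I'll use Python for the sketch; the minimal Item class and the module-level selectedItem variable are stand-ins for whatever your UI framework gives you:

```python
class Item:
    """Minimal stand-in for a selectable UI item."""
    def __init__(self, name):
        self.name = name
        self.selected = False

    def set_selected(self, state):
        self.selected = state

selected_item = None  # tracks the single currently selected item

def on_item_clicked(clicked_item):
    global selected_item
    # Only the currently selected item ever needs unselecting.
    if selected_item:
        selected_item.set_selected(False)
    selected_item = clicked_item
    selected_item.set_selected(True)

a, b = Item("a"), Item("b")
on_item_clicked(a)
on_item_clicked(b)
print(a.selected, b.selected)  # False True
```

No loop, no array: clicking b automatically unselects a, because a was the tracked selected item.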

For something where you might have multiple groups of objects, like radio button groups, you can extend this fairly easily. Each item would have something like a group name, and you would have a map of different groups, each keeping track of the currently selected item in that group:

void onItemClicked(clickedItem) {
    var group = groups[clickedItem.groupName];
    if(group.selectedItem) group.selectedItem.setSelected(false);
    group.selectedItem = clickedItem;
    clickedItem.setSelected(true);
}
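A runnable sketch of the grouped version, again in Python for illustration (the Item, Group, and groups-map names are my own; the point is one tracked selection per group):

```python
class Item:
    """A selectable item that belongs to a named group."""
    def __init__(self, group_name):
        self.group_name = group_name
        self.selected = False

    def set_selected(self, state):
        self.selected = state

class Group:
    """Tracks the currently selected item within one group."""
    def __init__(self):
        self.selected_item = None

groups = {}  # group name -> Group

def on_item_clicked(clicked_item):
    # Only this item's group is affected; other groups keep their selection.
    group = groups.setdefault(clicked_item.group_name, Group())
    if group.selected_item:
        group.selected_item.set_selected(False)
    group.selected_item = clicked_item
    clicked_item.set_selected(True)

r1, r2 = Item("colors"), Item("colors")
s1 = Item("sizes")
on_item_clicked(r1)
on_item_clicked(s1)
on_item_clicked(r2)  # unselects r1, leaves s1 alone
```

Selecting r2 bumps r1 out of the "colors" group, but the "sizes" selection is untouched, which is exactly how radio button groups behave.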

Moving back out of groups for the last example, we’ll go back to just a single selection. If we really want to compartmentalize this, we can move it all back into the Item class itself, storing the selected item as a static member of the class, removing the need for any external code to run at all. We’ll go back to having items listen for their own clicks.

// constructor
Item() {
    this.addEventListener(click, this.onClick);
}

void onClick() {
    if(Item.selectedItem) Item.selectedItem.setSelected(false);
    Item.selectedItem = this;
    this.setSelected(true);
}

Now, rather than the external view having to worry about this logic, the item class itself takes care of it all. You’d still want to listen for clicks on the items externally most likely though, to update other aspects of the UI such as showing the data related to an item, changing sections, etc. Another twist on this would be to move the selection code into a static method like so:

// constructor
Item() {
    this.addEventListener(click, this.onClick);
}

void onClick() {
    Item.selectItem(this);
}

static selectItem(item) {
    if(Item.selectedItem) Item.selectedItem.setSelected(false);
    Item.selectedItem = item;
    item.setSelected(true);
}

This feels a little cleaner to me, as the class method is doing the classy stuff and the instance method is just telling the class what to do.
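The static-member twist translates to Python as a class attribute plus a classmethod. Here's a runnable sketch, with on_click standing in for a real event handler wired up in the constructor:

```python
class Item:
    selected_item = None  # class-level: the single selected instance

    def __init__(self, name):
        self.name = name
        self.selected = False

    def set_selected(self, state):
        self.selected = state

    def on_click(self):
        # The instance just tells the class what to do.
        Item.select_item(self)

    @classmethod
    def select_item(cls, item):
        # All selection logic lives in one place, on the class.
        if cls.selected_item:
            cls.selected_item.set_selected(False)
        cls.selected_item = item
        item.set_selected(True)

a, b = Item("a"), Item("b")
a.on_click()
b.on_click()  # unselects a, selects b
```

External code never touches selection state directly; it can just read Item.selected_item when it needs to know which item is active.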


No rocket science here. But nice to think these things out logically. I think even in just describing this stuff, I’ve streamlined it a bit in my own mind. If you have any improvements, or other tactics you use or would like to see discussed, let me know.

15 responses so far

Beta Culture

Jun 30 2011 Published by under Technology

Technology has entered a phase where pretty much everything is a beta.

Take GMail. The first beta appeared in 2004. It remained in beta status until 2009. In fact, people so missed the “BETA” label on the GMail logo that there’s an official Google extension that allows you to add it back in!

Virtually all open source software projects start out in beta. Like GMail, most stay in varying stages of beta for years, if they ever make it to a stable release.

Even for commercial software, it used to be that beta testing programs were tough to get into. They were the domain of the tech elite. Serious bragging rights, although often the first rule of being accepted into a beta was that you don’t talk about the beta. Eventually, alpha became the new beta. “Oh, you got on the beta? Yeah, I’ve been on the alpha for months.” Now, “pre-alpha engineering drops” are the new bleeding edge. And many commercial software vendors have public beta programs for many of their products. And even public alphas.

But more and more, a released software product is no longer in the kind of shape that one used to call release-worthy. So many final releases seem more like advanced betas these days. This blog having a strong Flash background, I’m sure you can name one or two notorious releases of the Flash authoring tool that were nowhere near ready for prime time when they went on sale. Even operating systems are not very trustworthy in their first releases.

A lot of this has to do with the ease of updating software these days. Even if you actually buy a shrink-wrapped box, the first thing that happens when you install something is that it checks for updates, patches, service packs, hotfixes. Whatever you want to call them, the first few of them often are required just to bring the product up to basic usability.

This is not just limited to software either. Most hardware these days is powered by some sort of software or firmware. All too often, the first release of hardware has some nasty issues. And nobody is exempt from this. Even the midas-touch everything-is-magical Apple had their antenna-gate moment. Often, the bugs in a hardware device are caused by the firmware, or can at least be handled or mitigated by updates in firmware. But not always.

How a company publicly handles such issues says a lot about the company, I think. I've seen a few different tactics: acknowledgement, denial, and silence.

When Amazon announced the latest generation Kindle, I pre-ordered immediately. When I got it, I noticed that it would occasionally crash or reboot. Maybe once or twice a day. This was surprising, as my first Kindle was almost mystically 100% perfect. Never saw it do anything that could remotely be called a bug. It turned out that others were having the same problem as well with the latest Kindle. I called Amazon support, prepared to answer a raft of “have you tried turning it off and back on?” questions, maybe get sent a prerelease firmware update or be given some keyboard shortcut to put it into diagnostic mode or something. I started explaining the problem and the rep interrupted me and asked for my address. The next day, I had a brand new Kindle in my hands, with a postage paid envelope to ship back the faulty one. The new Kindle has been 100% flawless, by the way. If I didn’t love Amazon before that, I did then.

Then take Apple’s approach with the antenna problems. First, “you’re holding it wrong”. Then a video tour of their antenna testing facility to prove that they couldn’t possibly have an antenna problem. Finally, a software update that didn’t address the problem, but updated the algorithm for showing the strength of the signal. And more videos showing how their competition has similar problems. I lost track after that, but I don’t think they ever admitted there was an issue. I’m not bashing Apple here. I’m just saying that particular issue was handled really poorly, IMHO.

One more example. Many of you know I’m a runner. For runners, Garmin GPS watches are the ultimate gadget, showing your time, distance, pace, route, elevation, heart rate, calories, etc. while you’re out for a workout. I had a Garmin 305 for a year and a half, which is this large, boxy, device that straps to your wrist and does all of the above. It’s big and ugly but it is the classic workhorse of Garmin’s line. It’s an older model so you can get it cheap and it has been the mainstay of many runners for years. Other than its looks, I can only say positive things about it. However, this spring, Garmin came out with the 610, an amazing sleek touchscreen device that does everything the 305 does and more. And it has the slimness and style of a watch you could wear every day. And did I mention, it’s a touch screen?

Again, I pre-ordered this baby. It's an awesome device. I love it to death. But there was one problem. An important aspect of a GPS watch is accurate GPS. Without that, you've just got a very expensive stopwatch. The 610's GPS is, in itself, probably the most accurate of all similar watches. But there was a bug where certain functions of the watch would somehow disrupt the GPS reception and cause the accuracy to go off for up to several seconds at the start of a run and at each lap. So while it was extremely accurate most of the time, it could be extremely inaccurate for those few seconds, which could throw your whole run off. In my case, although I saw the issue, it didn't affect the overall accuracy that horribly. Others, though, seemed to be hit harder by it.

Garmin’s response: nothing. There’s an official Garmin forum where this was all being ranted and raved about. Garmin reps are active on the forum but none would say a word about the issue. No acknowledgement, no denial, just deaf to it. After several weeks, a Garmin rep came on the forum and suggested a workaround which would somewhat mitigate the problem. Basically, “tap the screen after you start your workout and at each lap.” This was not quite a “you’re holding it wrong” but kind of a “it’s not quite as bad if you hold it this way.” Then silence again. And a few weeks later, a new firmware arrived, which totally fixed the issue. I’m not sure why they couldn’t just say, “thanks, we’re aware of the issue and acknowledge it and we have a fix for it that will be out soon.” A simple statement like that would have gone a long, long way in their favor.

So back to beta culture.

I found it interesting that in the Garmin forums, several people said things like, "Didn't they test this thing before they shipped it? We just paid hundreds of dollars to be beta testers!"

While that statement was slung out in anger, it's actually pretty spot on. And not just targeting Garmin. The fact is, in this day and age, if you are an early adopter, you are essentially a beta tester. Software and even hardware is not going to be perfect on first release. And the situation is probably going to get worse over time: open source bleeding-edge alternatives, competitors' open betas, fiscal deadlines, and accelerated technology ramp-up are all pushing companies to release faster, earlier, more often. It's got bugs? We'll fix them in an update. Just get it out the door!

This has led many to say things like, "never buy a v1 product." Well, sure, if perfection and stability are key, and you can afford to wait, let the early adopters bang their heads on it for a few months and get v1.1, which will likely have a few less rough edges. But if you can deal with the speed bumps, being an early adopter means you can be the first one at the party with the new iDroid ZX G7 MegaAwesome. Chicks will dig you. Really. Maybe. Not.

But don’t pre-order some ultra-bleeding edge new product and be all self-righteously indignant because it has bugs.

“I’m shocked, shocked to find that this brand new cutting edge device is not perfect!”

I’m not saying that it SHOULD be that way, or that we shouldn’t demand higher standards, but it’s a fact of life. Be the girl with the shiny new, potentially imperfect gear, or the guy with last season’s model that works. Choose your poison.

15 responses so far

March is JavaScript Month. And… Introducing WireLibJS

Mar 01 2011 Published by under JavaScript, Technology

Over the last few weeks I've taken a deep dive into JavaScript and Canvas. It's been a blast, a real eye opener and paradigm shift. As with a lot of other Flash developers, whenever I've thought about "HTML5" I've thought something along the lines of "Ewww… JavaScript! I don't want to go back to loose typing and prototype. I left all that behind with AS1!" I think part of the presumption is that because JavaScript doesn't have many of the more hardcore "developer-y" features like classes, inheritance, data types, etc., it is somehow "less professional" than similar languages that do have these features, like ActionScript. Perhaps some Flash developers look back at the code they wrote when they were writing AS1 and assume that JS devs are doing the same thing. Not taking into account that the AS1 they were writing years ago was perhaps pretty early in their coding career. At least, that's how I was viewing things to some degree, and I'm sure I'm not alone.

The fact, as I have discovered, is that JavaScript development is every bit as serious as AS3 dev. While the language itself may be a lot looser, this winds up requiring that developers be very careful what they do. Since all JS programs on a web page essentially play in the same big sandbox with the window object as its root, you have to be very careful about what you let creep into that global scope so that you don’t wind up polluting it for any other programs that might be running on the page at the same time. With this in mind, there are a bunch of patterns and conventions in use that are pretty fascinating to study. I HIGHLY recommend the book, JavaScript Patterns:

There’s a lot of JavaScript that looks pretty strange to AS programmers, and this book will explain why it is all written that way.

Next up, I needed something cool to program. Anyone who knows me knows that I wouldn’t be too excited about coding up some sort of form based Rich Internet Application. YAAAAAAWN! No, I need to make magic! I need to make things move! So next up was Canvas. Since I’m recommending books, the Canvas Pocket Reference is also very helpful.

If you’re not familiar with Canvas, just think of it as an alternate flash.display.Graphics. It has a drawing API that’s somewhat similar, but different in a lot of ways. I got tripped up quite a bit by the differences in the beginning, but finally got the hang of it. It’s not rocket science, but don’t go into it with any preconceptions.

I decided that for the month of March, I’m going to try to do a post each day about JavaScript and / or Canvas. These will probably be most interesting to those coming from an ActionScript background or otherwise beginning to learn about the subject. 31 posts! Yes, I can do it! Some may be nothing more than simple experiments, others more beefy. No idea just yet.

To start with, allow me to introduce what I’ve come up with over the last few weeks. It’s a pretty simple framework for drawing 3D lines in Canvas, called WireLibJS. Let’s start off with a simple demo. Click to see it in action:

WireLibJS March 01 Demo

And here’s the JavaScript Code that would go into creating that:

[php lang="JavaScript"]$(function() {
    var i, points = [];

    for(i = 0; i < 20; i += 1) {
        points.push(Math.random() * 500 - 250,
                    -400 + Math.random() * 500 - 250,
                    Math.random() * 500 - 250);
    }

    wirelib.addLine(points);
    wirelib.addBox(0, 0, 0, 300, 300, 300);
    wirelib.addCircle(-200, 400, 0, 200, 32);
    wirelib.addRect(200, 400, 0, 400, 250);

    wirelib.loop(24, function() {
        wirelib.rotateY(0.01);
        wirelib.draw();
    });
});[/php]

I'll save the explanations for future posts, since I've now committed to so many. But the code above should be largely self-explanatory, and you can always view source to get at everything. Note, the library itself may change and evolve over the coming weeks. I'm going to do my best to make sure it doesn't break earlier demos, but don't consider it complete until the end of the month. OK, day one done!

47 responses so far

Ubuntu and Me, Happy Together.

Feb 02 2011 Published by under ActionScript, Flash, General, Technology

I tweeted today about this being my second full day on Ubuntu and got a bunch of responses from people wanting to know how I did this or that, what my experience was, etc. So here’s the story.

For a while, I’ve been wanting to set up a home server. For backups, file storage, and to learn a bit more about server administration and server side programming. It’s a big gap in my knowledge base. I also wanted to get back into Linux. I’d first messed around with Linux back in the mid ’90s, downloading RedHat onto something like 18 floppy disks via a 14.4 modem in order to install it. Back in the good old days when you had to install your own window manager and configure it by editing config files. I know you can still do that, but back then it was the only way to do it. Finally, I wanted to get back into hardware. From my very first PC (after graduating from a Commodore 128 and Amiga 500) up until my first laptop, I had built all my own PCs from scratch. I’d start with some cheap or free used PC and upgrade it bit by bit until it was a nice machine. It would often wind up with some friend or family member and I’d start over.

The Linux portion of this whole thing got rekindled recently, in my switch over from MediaTemple to Dreamhost. I was messing with DNS settings and MX records and SSHing into both servers and running SQL queries to back up and restore this blog’s database, since it was too big to handle by the WordPress export/import system. I amazed myself with what I accomplished and learned in that whole process, and wanted to dive into it more.

So at some point a few weeks ago, I pulled up an old box I had rusting in the basement and tried to boot it up. Apparently it had rusted a bit too long and either the CPU or something on the board was fried. I went onto Amazon and bought a motherboard and CPU, along with some RAM, a 2 TB hard drive, and a DVD player/recorder. I had a bunch of unused gift certificate money on account, so it technically didn’t cost me a dime, but it came to something over $300 total. The components trickled in and I sadly realized that I was the owner of a micro ATX case and a full ATX motherboard. So back to Amazon for a nice case.

The case arrived and I realized that my power supply had only the IDE type connectors, no SATA connectors, which I needed for my drives. I picked up a new power supply locally and I was in business. Since then, I added a new fan, a fan control unit, and a memory card reader. Here’s the building in progress:

The empty case with power supply:

The motherboard installed:

And the CPU and fan:

All the front panel stuff wired up:

Everything in place:

Wide shot:

Cover on:

And a front view:

I had a real blast building this, shopping around for parts, learning about different boards and chips, formats and standards. Stuff that I’d forgotten a lot about. For those who are interested, the specs are:

MB: Intel Media Series ATX Motherboard BOXDH55HC
CPU: Intel Core i3 Processor i3-540 3.06GHz 4MB LGA1156 CPU BX80616I3540
RAM: Corsair 4GB Dual Channel Corsair DDR3 Memory for Intel Core i5 Processors (CMX4GX3M2A1600C9)
HD: Western Digital 2 TB Caviar Green SATA Intellipower 64 MB Cache Bulk/OEM Desktop Hard Drive WD20EARS
Case: Cooler Master RC-310-BWN1-GP Elite 310 ATX, MATX Mid Tower Case with Window (Black/Blue)
Fan Controller: Scythe “KAZE MASTER ACE” 5.25″ Bay Fan Controller – Black (KM02-BK)

Not a top of the line system, but decent enough, with lots of room to grow.


Now that I had a box up and running, time to install some stuff. I went with Ubuntu 10.10. Installed VirtualBox and pulled out my old Windows install disks and set that up with Windows 2000, Windows XP, and Windows 7. The first two, mostly just because I could, and because I have 2TB of disk space to play with. Actually 3TB, because I salvaged another 1TB drive out of another device I wasn’t using.

I opened up ports for VNC, SSH, and HTTP, and set things up so that I can now access the machine from anywhere and do just about anything with it. Really having a blast exploring all the options.

Back to Ubuntu

All right, this is what I started to write about in the first place. I really enjoyed using Ubuntu on my server. I’d sit in the living room, SSHed or VNCed into the server, exploring stuff, installing new programs, finding out what I could do. It was great, but VNC is VNC. I set up VirtualBox on my PC and ran Ubuntu from there as well. After a couple of weeks of doing both, I decided to make the jump to a full dual boot Ubuntu system. That’s what I did over this past weekend. It went very smoothly. I now have a dual boot system that defaults to Ubuntu, but also has Windows 7 as an option.

My next goal was to see if I could use Ubuntu full time as my main OS. For playing around, it was just fine, but if I was going to use it day in and day out, I needed to have a bunch of stable, viable, usable applications for doing all my day-to-day stuff. What amazed me was just how many of the apps I use on a regular basis have Linux versions. I don’t mean similar apps that do the same basic thing, but actual Linux versions of the exact same programs.

For example, I use Launchy as a task launcher, RapidSVN as an SVN UI, TweetDeck for Twitter, KeePassX for passwords, Skype for … Skype. All of these have Linux versions that I was able to install right off the bat. Made me feel quite at home and comfortable.

I’ve been making heavy use of Google Apps, having switched my email over to it a while back. So mail, calendar, RSS, and documents were a no-brainer and completely seamless. Chrome works fine on Ubuntu, too. With all that down, I felt I was almost ready to make the switch. The big thing was going to be Flash/Flex/AIR development.

Obviously, CS5 does not exist on Linux yet. Nor does Flash Builder. But the Flex SDK and MXMLC work just fine there. You just need something to write your code in. There were two main choices – FDT and IntelliJ IDEA, both of which I happened to have licenses for. While IntelliJ is pretty awesome, it’s so different from Flash Builder, which I am used to, that I found it hard to really get comfortable with. FDT is Eclipse-based, like Flash Builder. The devil you know… as the old saying starts. So I decided to go with FDT.

Getting it set up was fairly easy, and I was hello-worlding before I knew it. But the project I’m in the middle of presented a much bigger challenge – not a straight AS3 web project, but a Flex based, full screen AIR application that will be deployed on a kiosk. If I could get this going with FDT on Linux, I was home free. Some serious problems lay ahead. One was that I needed a debug player for 64 bit Linux. I dug one of those up somewhere – maybe labs? Then I was having problems with my embeds. Something about paths is handled differently on Windows and Linux. I finally figured out I just needed to add a “../” in front of the embedded asset path. Not sure why it’s different, but that path works in both Windows and Linux, so it’s probably the “right” way. Next up was run/debug configurations. In FDT, if you are editing any class and you do a run or debug, it will create a run/debug configuration with that class as the main application. This is very different from Flash Builder, where you set the main application class and it will always use that. I finally worked through a couple of tutorials on the FDT site that straightened me out. Basically, you need to create your own configuration, and then, in Preferences, set Eclipse to always use the last configuration if one is unspecified. Handled.

The final problem was with AIR. All kinds of weird errors going on there, and no AIR file launching. After much searching, I finally realized that the AIR SDK you get when you download a Flex SDK is only good for Mac and PC. You have to go here and download the AIR SDK for Linux. I’m not sure how you are supposed to integrate the two, but what I did was go into the bin folder of my Flex SDK folder, remove adl, adl.exe, adt, and adt.bat, and replace them with the adl and adt files from the Linux SDK. This is probably a very half-assed solution, but it actually worked. If anyone has more robust directions, let me know, but with all of the above, I am now compiling and debugging my app without a hitch. I’ve also got MinimalComps compiling its swc using FDT, and the next release will be produced that way.

As for the timeline on all this, I started trying to compile the app on Monday morning, but ran into all the problems mentioned above, so I had to retreat back to Windows to get some work done for the day. I went home Monday night and worked through all the solutions I just mentioned. Tuesday I went to work and booted up Linux and haven’t looked back. That’s two full, very productive work days on Ubuntu alone, building a Flex based AIR app. I’d call that a success. I’ll definitely be finishing out the week on Ubuntu and heading into the sunset with it. 🙂

A bunch of people on twitter asked me about Flash, Photoshop, and the Creative Suite in general. The answer is that I don’t use Flash CS5 all that much. Well, actually I do use it fairly often, but not for long periods. I’ll fire it up to test out some snippet of code or spike out some idea. It’s great for that. But once the idea moves into something serious, it always gets moved into a more serious development environment. I will kind of miss that. Currently I’m looking for something to fill that gap. I’m not saying this is a solution for everyone. If you do a lot of timeline or library stuff, you really need Flash. For me, I just need a way to fire up something and lay down some fast code and see what it does without creating a workspace, package, classes, etc. wonderfl may serve that purpose in many cases.

As for graphics stuff, I haven’t run into a need for that yet in the last few days. Fireworks has been my weapon of choice for several years. I’m not sure what my solution will be for that yet. Naturally, when you think “Linux” and “graphics”, you think “Gimp”. It’s been years since I touched that, but I imagine when the need arises, I’ll look at it again, and then look around to see what else is out there too.

There’s also the possibility of running Windows on VirtualBox and installing CS5 in there. I think VirtualBox is great for some stuff, but large, resource heavy programs like Flash and Photoshop… the idea of those running in a VM scares me. I’ve tried running Visual Studio 2010 for Windows Phone stuff there, and it’s pretty slow. Also, the Windows Phone emulator refuses to work in VirtualBox. An emulator in a virtual machine… go figure. Then there’s Wine, which may be able to run some CS programs. I can’t speak for how successful that might be. My bottom line thought is that if you are REALLY dependent on Creative Suite programs and use them heavily day to day, Ubuntu is not the OS for you. You’re just setting yourself up for frustration.

On the PC I’d been using PuTTY for SSH and TightVNC for VNC. Ubuntu has SSH and VNC capabilities built in, so that was easy. For IM, I’m using the built in Empathy client, which seems to handle my multitude of IM accounts perfectly well. Finally, you can’t work without some sounds of one kind or another emerging from your computer and into your ear holes. I go back and forth between podcasts and music. Ubuntu has Rhythmbox preinstalled. It’s OK, but I’ve been checking out others. I’m pretty happy with Banshee at the moment, but will probably keep exploring.

After just two days, I’m already looking at editing my partitions to give Ubuntu some more room, and moving more of my files over to the Linux side. I need to keep the dual boot setup at least for my Windows Phone development, as I have one project half done, and will probably do some more stuff with that. And it’s just not happening virtually.

At any rate, as I also mentioned on twitter, if you are a professional developer, you really need to have access to a Windows box and a Mac, one way or the other, virtual or physical. I think any professional developer who bitches about this or that operating system or platform or language or device and sticks to one single platform, refusing to touch anything else, is being childish and unprofessional. At the very least, it’s a pretty poor long term career attitude. Learn more. Experiment. Try something new. You’ll be a better person for it. It’s fine to be an expert in a specific area, but don’t paint yourself into a coffin. (You like that one?) In addition to my home server and my dual boot laptop, I have a MacBook at home and a MacBook Pro at work, both plugged in and ready to go. I have three fully charged phones within arm’s reach – my iPhone, Nexus One, and Samsung Focus (WP7). I don’t like to get too comfortable with any one thing; if I do, I feel like I’m stagnating. I used my Windows Phone for the last 3 months. Now I’m carrying the iPhone again, just for a change. What makes you more valuable – the fact that you can make a good snide remark about how much platform X sucks compared to the platform you use? Or the fact that you can sit down and be comfortable and productive on any platform? Which makes you feel more confident?

Sorry. Got a little preachy there. Just get tired of hearing about technology wars when there is so much awesome and exciting stuff to learn and play with. Now more than ever. End of rant. Try Ubuntu. You might like it.

38 responses so far

SWFSheet – create sprite sheets from SWFs

[EDIT: Just released a beta of SWFSheet 1.1 here:]
[EDIT: Version 1.1 final released:]

SWFSheet is a program I created in most of a day back in late December. I finally polished it up this week and it’s now ready for release. The idea is to take an animation created in Flash and generate a sprite sheet from it. A sprite sheet, for those of you who may not be familiar, is a single large bitmap containing several frames of an animation, usually laid out in a grid. These can be loaded in very efficiently by games, and each frame shown in turn to recreate the animation.

I had the idea for this program while attempting to port some Flash stuff to the iPhone. And later, while making other mobile games, I found that Flash was still the best tool to create animations. It has a powerful timeline, easy to use drawing tools, tweens, 3D, and of course, powerful scripting with ActionScript. However, getting a nice looking Flash animation into a sprite sheet that could be used with cocos2d on the iPhone/iPad or with XNA for Windows Phone 7 was not so easy. I did it by hand a couple of times, and it wasn’t very fun. Thus, SWFSheet was born.

SWFSheet is an AIR application and has been tested on Windows and Mac. You create your SWF however you want. Flash CS5 or earlier, Flash Builder, or anything else that outputs a SWF. It doesn’t matter how it’s created. Then you load the SWF into SWFSheet.

swfsheet screenshot

Immediately, you’ll see the live loaded SWF running in the upper left panel. The program will then capture an image of the SWF on each frame for the number of frames you have specified (default 15) and arrange them in a grid on the bitmap. Once that is done, it will then animate this bitmap using the same techniques you would use to animate a sprite sheet in a real game. This is seen in the lower left panel. You can adjust how many frames you want to capture to make sure you get your whole animation and have it loop smoothly. And you can adjust exactly how much area is captured in each frame, to maximize space on the bitmap. If there is not enough space to capture all frames, you can choose a larger bitmap. After any changes, you need to click “Capture” to re-capture the frames based on the new settings.
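To make the grid layout concrete, here’s a rough sketch of how a game typically maps a frame index back to a source rectangle on a sheet like this. The function name and layout assumptions are mine, not SWFSheet’s:

```javascript
// Given a frame index and a sheet laid out left-to-right, top-to-bottom,
// compute the source rectangle to blit for that frame.
function frameRect(index, frameW, frameH, sheetW) {
  const cols = Math.floor(sheetW / frameW); // frames per row
  return {
    x: (index % cols) * frameW,             // column -> x offset
    y: Math.floor(index / cols) * frameH,   // row -> y offset
    w: frameW,
    h: frameH
  };
}

// In a browser you’d blit the current frame each tick, e.g.:
// const r = frameRect(frame, 64, 64, sheet.width);
// ctx.drawImage(sheet, r.x, r.y, r.w, r.h, dx, dy, r.w, r.h);
```

The preview panel is doing essentially this, which is why it’s a good predictor of how the sheet will behave in an actual game engine.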

Often when scripting animations, you will have various transformations or other changes happening in an onEnterFrame type of loop. This can sometimes cause a glitch, as the first frame is captured before the first enterFrame handler fires, and thus does not have the initial transformations applied. There is a “Skip first frame” checkbox which handles this situation. There are also options for smoothing, which may or may not make any difference in a specific animation, and for transparency. By default, a loaded in SWF will have a transparent background, but you can override this to make an opaque bitmap with any color background you want. And you can change the preview frame rate – of course this doesn’t change the bitmap at all, but can give you an idea what your animation will look like at your target frame rate.

Note that there are a limited number of sizes of bitmaps. Sprite sheets can almost always take advantage of extra efficiency when created in power-of-two sized squares – 64×64, 128×128, 256×256, etc. Thus, these are the only choices. A future version may allow custom sizes if enough people ask for it.
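As a sketch of how those fixed sizes interact with your frame count, here’s one way you might pick the smallest power-of-two sheet that holds a given animation. This is my own illustrative helper, not SWFSheet’s actual logic:

```javascript
// Find the smallest power-of-two square sheet that fits `count` frames
// of frameW x frameH, packed in a grid.
function sheetSizeFor(frameW, frameH, count) {
  const sizes = [64, 128, 256, 512, 1024, 2048];
  for (const s of sizes) {
    const cols = Math.floor(s / frameW);
    const rows = Math.floor(s / frameH);
    if (cols * rows >= count) {
      return s; // first (smallest) size with enough grid cells
    }
  }
  return null; // animation won’t fit even the largest sheet
}
```

For example, fifteen 64×64 frames won’t fit on a 128×128 sheet (only 4 cells), so you’d need to step up to 256×256, which is exactly the kind of jump you see in the program when a capture runs out of room.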

Here’s the AIR installer:

SWFSheet Installer

And here are some test files to get started with:

Test Files


P.S. Another tool you might be interested in is Mike Jones’ Sprite Sheet Maker, which is more geared to making sprite sheets from a series of separate image files. Similar outcome, different use cases, depending on what kind of input you are starting from.

76 responses so far

Hello DreamHost

Jan 08 2011 Published by under General, Technology

As of now, this blog, as well as Art From Code, MinimalComps, Wicked Pissah Games, and my personal site (and a few sites run by my wife), are now hosted on DreamHost. Previously, I’d been using Media Temple since 2006, but it was time for a switch.

I don’t want to totally tear apart my former host, but I will explain the reasons why I chose to switch. I had signed up for the Grid Server on MT, paying $170 per year, or about $14.17 per month, though it’s currently advertised as $20 per month. For that, you get 100 GB storage, 1 TB transfer per month, 100 domains, and 1000 email addresses. It sounds like that should be enough to handle just about anything I could throw at it. And per actual stats, I had, at my peak, fewer than a dozen domains, most of which had little or no traffic, and a couple of email addresses, only one of which was really used. I used an average of 5 GB storage and 60 GB bandwidth, with a peak month of 150 GB bandwidth (my iPad vs Kindle display post). So by all stats, I was way, WAY below any limits. Really just a small fraction of the limits. And that pretty much worked up until the beginning of 2010.

Occasionally I would get a message about a “MySQL Container Burst” but never got charged anything for whatever that was. Then in February of 2010, without my really noticing, I got put on a “MySQL Container Lite”. I’m not sure if I was not informed of this, or I just didn’t take notice, thinking it was the same as the container burst. But later in the year, I realized that I was being charged an extra $20 a month for this lite container. I finally found out what this means. When your databases start doing too many queries, they get switched out of the shared container so they don’t slow everyone else down. This is free. After a few days, they check again and if it’s back to normal, you go back to normal. But if they’re still using a lot, you get automatically switched over to the $20 a month lite container. Once you are switched to that, it’s permanent.

I also started getting the occasional “GPU overage” alert. This means that your “grid processing unit” on the grid server is using more than its share of processing power. At first it was just a few dollars here and there. But with the iPad vs. Kindle post, it started going through the roof, peaking at $186 in September. Since August of this year alone, I’ve paid over $267 in GPU overages, in addition to $220 for the SQL container.

In short, my hosting bill went from exactly $170 the previous three years, to almost $700 this year ($676.61 to be exact), all while staying within a small fraction of the advertised limits. When I contacted Media Temple about this, they explained the container stuff, and suggested I look at a document about optimizing database tables. That was it. At that point, I started looking for other hosts.

I finally settled on DreamHost. I’ve spent the last couple of weeks transferring 7 domains, cancelling several others. So far, I really like DreamHost. I feel like I have a lot more control over virtually every aspect of server administration.

I really learned a lot in the whole switching process. Perhaps the biggest lesson was the final switchover itself. With the other blogs, I used the WordPress export feature to save out an xml file, and when I had the new one set up, imported that xml file in. Worked like a charm. But this blog has been around a lot longer and the database was a lot bigger. No matter what I tried, the import failed. I was almost ready to just trash the old posts and start over. But I decided to roll up my sleeves and directly export the MySQL database and import it into a database on the new server. I couldn’t get this to work through phpMyAdmin, so I rolled up my sleeves even further, and SSHed into each server and did it all by hand. After many failed attempts, I got all the permissions and syntaxes right and here we are, like nothing ever happened. I was pretty proud of myself for pulling that off.

After 4 years on one server, there was a hell of a lot of garbage on there. All kinds of folders with files, many of which I have no idea what they are for. In order to not break any links, I just uploaded the whole damn thing to the new server. I’ll be going through and cleaning it up bit by bit, so if you notice any dead links, let me know.

36 responses so far
