Archive for the 'ActionScript' category

MinimalComps 0.9.6

Nov 07 2010 Published under ActionScript, Components, Flash

It’s been a while, but I finally got around to doing some work on MinimalComps. I went through all the issues that people had entered in Google Code. Some were older and already handled. Some were requests for new features, which I’ve noted but am not acting on just now. Several I could not reproduce and closed. But if you entered one of those and are still seeing an issue and can give reproducible steps for it, please reopen it with those steps. And then there were a fair number of real bugs. Many of these were related to the List and ComboBox controls, and they wound up exposing several issues in lower level controls, down to PushButton. I think I have them pretty well cleaned up.

So, no new features, but you should find List and ComboBox work much better now. You can get the URL to the SVN repository, or download the SWC or the zipped source here: http://code.google.com/p/minimalcomps/

A couple of other things I want to note. First I want to acknowledge that the ComboBox is misnamed. It should be a Dropdown. A ComboBox COMBINES an editable field with a dropdown list. I’m not sure the best way to handle this. I’m thinking of just changing the name to Dropdown and then creating an empty ComboBox class that extends Dropdown just to ensure I don’t break existing stuff. Does that seem like a decent fix?
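
If I do go that route, the compatibility shim itself would be trivial. Something along these lines (just a sketch of the idea; the constructor arguments here are placeholders for whatever ComboBox actually takes):

[as3]package com.bit101.components
{
import flash.display.DisplayObjectContainer;

// Sketch only: Dropdown would get all of the current ComboBox code, and
// ComboBox would stick around as an empty subclass so existing projects
// don't break. The constructor signature below is illustrative.
public class ComboBox extends Dropdown
{
public function ComboBox(parent:DisplayObjectContainer = null, xpos:Number = 0, ypos:Number = 0, defaultLabel:String = "", items:Array = null)
{
super(parent, xpos, ypos, defaultLabel, items);
}
}
}[/as3]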

The other issue to address is that several people have been bugging me to move the repository over to GitHub. I’ve personally used Git and got to like it, but despite the zeal that converts express for it, I think SVN is a much more popular method of source control. Pretty much anyone these days knows how to use SVN, either by command line or via some client. Git does have a serious learning curve, even for those who have used SVN or CVS. A lot of people have not made the jump yet. I don’t want to limit people’s access to the source and I don’t want to try to maintain two repositories. So for the near future, I’m sticking with Google Code SVN.

10 responses so far

AS3 Sound Synthesis IV – Tone Class

Jul 23 2010 Published under ActionScript, Flash

In order to make the code so far a little more reusable, I moved it over into its own class, called Tone. I also implemented some optimizations and other little tricks. The most important is that instead of calculating the next batch of samples along with the envelope on every SAMPLE_DATA event, I precalculate all the samples within the envelope right up front, storing them in a Vector of Numbers. Here’s the class:

[as3]package
{
import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.Event;

public class Tone
{
protected const RATE:Number = 44100;
protected var _position:int = 0;
protected var _sound:Sound;
protected var _numSamples:int = 2048;
protected var _samples:Vector.<Number>;
protected var _isPlaying:Boolean = false;

protected var _frequency:Number;

public function Tone(frequency:Number)
{
_frequency = frequency;
_sound = new Sound();
_sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
_samples = new Vector.<Number>();
createSamples();
}

protected function createSamples():void
{
var amp:Number = 1.0;
var i:int = 0;
var mult:Number = frequency / RATE * Math.PI * 2;
while(amp > 0.01)
{
_samples[i] = Math.sin(i * mult) * amp;
amp *= 0.9998;
i++;
}
_samples.length = i;
}

public function play():void
{
if(!_isPlaying)
{
_position = 0;
_sound.play();
_isPlaying = true;
}
}

protected function onSampleData(event:SampleDataEvent):void
{
for (var i:int = 0; i < _numSamples; i++)
{
if(_position >= _samples.length)
{
_isPlaying = false;
return;
}
event.data.writeFloat(_samples[_position]);
event.data.writeFloat(_samples[_position]);
_position++;
}
}

public function set frequency(value:Number):void
{
_frequency = value;
createSamples();
}
public function get frequency():Number
{
return _frequency;
}
}
}[/as3]

Note that in the constructor I call createSamples(). This creates the Vector with all the samples needed for the duration of the note, including the amplitude of the pseudo-envelope. In the frequency setter, the samples are re-created. The result is that in the onSampleData handler method, I just fill the byte array with the next chunk of values from the _samples vector, stopping when I reach the end of that Vector.

Note also that the amplitude is decreased per sample, rather than per SAMPLE_DATA event, thus it needs to be reduced by a much smaller amount each time. This should also give a smoother envelope, though I’m not sure how noticeable it is.
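
If you want to match the feel of the old per-event decay, a rough rule of thumb (my own back-of-the-envelope math, not anything in the class) is to take the 2048th root of the per-event factor:

[as3]// A factor applied once per 2048-sample event is roughly equivalent to its
// 2048th root applied once per sample.
var perEventDecay:Number = 0.7; // the value used in the last post
var perSampleDecay:Number = Math.pow(perEventDecay, 1 / 2048);
trace(perSampleDecay); // about 0.99983, in the same ballpark as the 0.9998 above[/as3]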

Here’s a brief bit of code that shows it in action:

[as3]import flash.events.MouseEvent;

var tone:Tone = new Tone(800);
stage.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent):void
{
tone.frequency = 300 + mouseY;
tone.play();
}[/as3]

It creates a tone. Whenever you click on the stage, it calculates a new frequency for the tone based on the y position of the mouse and plays the tone. Simple enough.

I don’t consider this class anywhere near “complete”. It’s just the start of an evolution toward something. I’d like to add support for more flexible and/or complex envelopes, a stop method, and some other parameters to change the sound. But even so, it’s relatively useful as is, IMHO.
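
For example, a stop method would probably just mean hanging onto the SoundChannel that play() returns and killing it. Something like this, maybe (untested sketch, not in the class yet; it assumes a new protected _channel variable and an import of flash.media.SoundChannel):

[as3]// Untested sketch of additions to the Tone class above.
protected var _channel:SoundChannel;

public function play():void
{
if(!_isPlaying)
{
_position = 0;
_channel = _sound.play();
_isPlaying = true;
}
}

public function stop():void
{
if(_isPlaying && _channel != null)
{
_channel.stop();
_isPlaying = false;
}
}[/as3]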

27 responses so far

AS3 Sound Synthesis III – Visualization and Envelopes

Jul 21 2010 Published under ActionScript, Flash

In Part I and Part II of this series, we learned how to use the Sound object to synthesize sound, and how to create sounds of various frequencies. This post is just a quick detour into a couple of tricks you can implement.

The first one is visualizing the wave you are playing. In the SAMPLE_DATA event handler, you are already generating 2048 samples to create a wave form. While you’re creating these, it’s a piece of cake to go ahead and draw some lines based on their values. Look here:

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
graphics.clear();
graphics.lineStyle(0, 0x999999);
graphics.moveTo(0, stage.stageHeight / 2);
for(var i:int = 0; i < 2048; i++)
{
var phase:Number = position / 44100 * Math.PI * 2;
position++;
var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12));
event.data.writeFloat(sample); // left
event.data.writeFloat(sample); // right
graphics.lineTo(i / 2048 * stage.stageWidth, stage.stageHeight / 2 - sample * stage.stageHeight / 8);
}
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();

function onTimer(event:TimerEvent):void
{
n = Math.floor(Math.random() * 20 - 5);
timer.delay = 125 * (1 + Math.floor(Math.random() * 7));
}[/as3]

All I’ve done here is clear the graphics, set a line style, and move to the center left of the screen. Then, with each sample, move across the screen a bit and up or down depending on the value of the sample. This gives you something looking like this:

You can see the wave change its frequency with each new note.

The next trick is something I learned from Andre Michelle a very short while ago. You’ll notice that the sine wave as is feels very flat and bland. Quite obviously computer generated. That’s because the amplitude, or height, of the wave is always constant: -1.0 to 1.0. That’s just not natural for real world things that make sounds. If you strike a piano key, the note starts out very loud, then settles down to a steady level as you hold the key, and when you release the key, it fades out. These changes in volume are known as the envelope of a sound. It generally has four phases, known as ADSR. From Wikipedia:

Attack time is the time taken for initial run-up of level from nil to peak.
Decay time is the time taken for the subsequent run down from the attack level to the designated sustain level.
Sustain level is the amplitude of the sound during the main sequence of its duration.
Release time is the time taken for the sound to decay from the sustain level to zero after the key is released.
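
Just to make those four phases concrete, here’s a rough sketch of what a per-sample ADSR amplitude function could look like (my own illustration, with all times measured in samples; as you’ll see in a second, you can get away with something far simpler):

[as3]// Illustration only. attack, decay and release are durations in samples,
// sustain is a level from 0.0 to 1.0, noteLength is how long the key is held.
function adsrAmplitude(i:int, attack:int, decay:int, sustain:Number, release:int, noteLength:int):Number
{
if(i < attack) return i / attack; // ramp up from 0 to full
if(i < attack + decay) return 1 - (1 - sustain) * (i - attack) / decay; // fall to sustain level
if(i < noteLength) return sustain; // hold while the key is down
if(i < noteLength + release) return sustain * (1 - (i - noteLength) / release); // fade out
return 0;
}
// Each sample would then get multiplied by adsrAmplitude(...) for its index.[/as3]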

Many of Andre Michelle’s sound experiments and toys have a very nice, pleasing bell sound to them, so I knew he was using some kind of envelope, but I know that envelopes can be pretty complex to code. So I asked him about it. He gave me a one or two sentence answer which just made me say, “OH! Of course!” Basically, all you need to do is start the sound at full amplitude and reduce it over time. So simple. Essentially, you are throwing away the attack, decay, and sustain and just programming in a release.

In this version of the project, we just set up an amp variable and set it to 1.0. On each SAMPLE_DATA event, reduce the amplitude by a fraction. And multiply the sample value by that amplitude. When a new note begins, reset amp to 1.0.

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var amp:Number = 1.0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
graphics.clear();
graphics.lineStyle(0, 0x999999);
graphics.moveTo(0, stage.stageHeight / 2);
for(var i:int = 0; i < 2048; i++)
{
var phase:Number = position / 44100 * Math.PI * 2;
position++;
var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12)) * amp;
event.data.writeFloat(sample); // left
event.data.writeFloat(sample); // right
graphics.lineTo(i / 2048 * stage.stageWidth, stage.stageHeight / 2 - sample * stage.stageHeight / 8);
}
amp *= 0.7;
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();

function onTimer(event:TimerEvent):void
{
amp = 1.0;
n = Math.floor(Math.random() * 20 - 5);
timer.delay = 125 * (1 + Math.floor(Math.random() * 7));
}[/as3]

Here, I’m multiplying amp by 0.7 on each event. This gives a pretty pleasing bell sound. Change that value around to get different characters. Or you could even do some kind of funky vibrato thing like this:

[as3]amp = 0.5 + Math.cos(position * 0.001) * 0.5;[/as3]

OK, that’s all for this time.

17 responses so far

AS3 Sound Synthesis II – Waves

Jul 21 2010 Published under ActionScript, Flash

This post will show you how to generate sine waves at specific frequencies using the AS3 Sound object. It assumes you have read, or are at least familiar with, the material in Part I of this series.

Basics of Sound

Sound itself is essentially a change in the pressure of the air. Extremely simple layman’s terms here. Air is composed of various molecules, and they are not uniformly, smoothly distributed. There can be areas where they are under more pressure and packed more tightly together, and other areas where they are more spread out. When something like a guitar string vibrates, it moves quickly back and forth at a specific speed. When it moves in one direction, it pushes the air molecules in front of it closer together. This creates a dense pocket of air. Then the string moves back in the opposite direction, creating a bit of a vacuum. Not a real vacuum, but an area where there are fewer molecules. It then moves back again, creating another dense pocket.

These areas of dense and less dense air move out across the room and eventually hit your ears. The dense air pushes your eardrum in, and the less dense pocket causes it to move out. The result is that your eardrum starts vibrating at roughly the same frequency as the guitar string. This causes some bones to vibrate, which stimulate nerves at the same frequency, which send signals to your brain, saying “C sharp”.

When you record sound, you use a microphone as a sort of electronic ear. It has some kind of diaphragm or other moving part that vibrates and creates an electrical signal, which is recorded one way or another. For playback, this electrical signal is regenerated and causes a speaker to vibrate at the same frequency. This pushes the air the same way the original guitar string did, and you hear the same sound.

Synthesizing Sound

However, when we talk about synthesizing sound, we are doing it all from scratch. Flash, your computer’s sound card, and your headphones or speakers will handle generating the correct electrical signal and vibrating the air. But you need to do the math to figure out how much and how fast to make things vibrate.

In Part I of this tutorial, we created random values which caused the speaker or headphones to vibrate at a completely chaotic pace, resulting in a radio-static-like fuzz. Creating an actual tone requires a bit more work, and hopefully some understanding of what you are doing.

Digital Sound

In analog sound, such as vinyl records or 8-track tapes (showing my age here), the sound is encoded smoothly as bumps in the groove of the record, or changes in a magnetic field on the tape. Digital sound takes discrete samples of the sound pressure at specific intervals.

Taking one of the simplest sound forms, a sine wave, here is a smooth analog version:

[image: smooth sine wave]

And here is the same wave, represented as 50 samples:

[image: the same sine wave as 50 discrete samples]

As you can see, the sampled version is not quite as accurate as the smooth wave. However, in high quality digital sound, these intervals are numerous enough that it is virtually impossible for most of the population to notice any difference. When you are synthesizing sound in Flash, you will be dealing with 44,100 samples per second. Remember that number, we’ll be doing some calculations with it.

Now, what we need to do is generate our samples with a series of values that wind up forming a sine wave like you see above. The top peak of the sine wave will be 1.0, the bottom will be -1.0 and the middle 0.0. To start simply, we’ll generate a single sine wave over the course of a full second. To keep track of where we’re at, we’ll use a variable called position. We’ll initialize it to 0 and increment it each time we create a new sample. Thus position will range from 0 to 44100 over the course of the first second of audio.

If we then divide position by 44100, we’ll get values that range from 0.0 up to 1.0 over the course of one second. And if we multiply that by 2 * PI, we’ll get values from 0 to 2 * PI, just what we need to generate a sine wave with the Math.sin function. Here’s the code so far:

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;

var position:int = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++)
{
var phase:Number = position / 44100 * Math.PI * 2;
position++;
var sample:Number = Math.sin(phase);
event.data.writeFloat(sample); // left
event.data.writeFloat(sample); // right
}
}[/as3]

If you run that file, you’ll be generating a sine wave that does one full cycle each second. Of course, this, being a 1 Hz sound wave, is far too low for the human ear to hear. To get a specific frequency of sound, simply multiply phase by the frequency you want to hear. Humans can generally hear frequencies in the range of roughly 20 to 20,000 Hz. The A above middle C on the standard musical scale is 440 Hz, so let’s try that. Change the line that calculates the sample to:

[as3]var sample:Number = Math.sin(phase * 440);[/as3]

That gives you A. You can find charts like this all over the net:

A: 440
B flat: 466
B: 494
C: 523
C sharp: 554
D: 587
D sharp: 622
E: 659
F: 698
F sharp: 740
G: 784
A flat: 831
A: 880

Or, if you want to get more mathematical about it, the formula for each note, n, above or below 440 is:

440 * 2^(n / 12)

We can implement scales then by setting up an n variable, incrementing it on a timer, and using the above formula to calculate our frequency:

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++)
{
var phase:Number = position / 44100 * Math.PI * 2;
position++;
var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12));
event.data.writeFloat(sample); // left
event.data.writeFloat(sample); // right
}
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();

function onTimer(event:TimerEvent):void
{
n++;
}[/as3]

Alternately, we can make a poor man’s generative music composer with a little help from Math.random:

[as3]function onTimer(event:TimerEvent):void
{
n = Math.floor(Math.random() * 20 - 5);
timer.delay = 125 * (1 + Math.floor(Math.random() * 8));
}[/as3]

This generates a different note, and a different duration (from 1/8th of a second up to one full second) for each note. Armed with this alone, you are on your way to making your own sequencer or mini piano or other type of instrument. Later, I’ll try to post some stuff on other wave forms, combining waves, envelopes, and other topics.

18 responses so far

Sound Synthesis in AS3 Part I – The Basics, Noise

Jul 21 2010 Published under ActionScript, Flash

I’ve been meaning to write something up on this for quite a while. It recently struck me that there still wasn’t a whole lot of good material on this out there already. So I figured I’d throw something together.

We’ll start by looking at the basic mechanics of the Sound object, how to code it up, and create some random noise. Later, we’ll start generating some real wave forms and start mixing them together, etc.

Diving right in

Flash 10 has the ability to synthesize sounds. Actually, there was a hack that could be used in Flash 9 to do the same thing, but it became standardized in 10.

Here’s how it works. You create a new Sound object and add an event listener for the SAMPLE_DATA event (SampleDataEvent.SAMPLE_DATA). This event will fire when there is no more sound data for the Sound to play. Then you start the sound playing.

[as3]var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();[/as3]

At this point, because you have not loaded any actual sound, such as an MP3, WAV, etc. or attached it to any streaming sound data, there is nothing to play and the SAMPLE_DATA event will fire right away. So we’ll need that handler function:

[as3]function onSampleData(event:SampleDataEvent):void
{
}[/as3]

Our goal here is to give the Sound object some more sound data to play. So how do we do that? Well, the SampleDataEvent that gets passed to this function has a data property, which is a ByteArray. We need to fill that ByteArray with some values that represent some sound to play. We do that using the ByteArray.writeFloat method. Generally you want to write values from -1.0 to 1.0 in there. Each float value you write in there is known as a sample. Hence the “SampleDataEvent”. How many samples should you write? Generally between 2048 and 8192.

OK, that’s a big range of values. What’s best? Well, if you stick to a low number like 2048, the Sound will rip through those values very quickly and another SAMPLE_DATA event will fire very quickly, requiring you to fill it up again. If you use a larger number like 8192, the Sound will take 4 times as long to work through those values and thus you’ll be running your event handler function 4 times less often.

So more samples can mean better performance. However, if you have dynamically generated sounds, more samples means more latency. Latency is the time between some change in the UI or program and when that change results in a change in the actual sound heard. For example, say you want to change from a 400 Hz tone to an 800 Hz tone when a user presses a button. The user presses the button, but the Sound has 8000 samples of the 400 Hz tone in the buffer, and will continue to play them until they are gone. Only then will it call the SAMPLE_DATA event handler and ask for more data. This is the only point where you can change the tone to 800 Hz. Thus, the user may notice a slight lag between pressing the button and hearing the tone change. If you use a smaller number of samples – 2048 – the latency or lag will be shorter and less noticeable.
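
To put some rough numbers on that (just the arithmetic; nothing you need in your actual code): at 44,100 samples per second, the two buffer sizes work out to something like this:

[as3]// buffer length in milliseconds = samples per event / sample rate * 1000
trace(2048 / 44100 * 1000); // about 46 ms of audio per buffer
trace(8192 / 44100 * 1000); // about 186 ms of audio per buffer[/as3]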

For now, let’s just generate some noise. We’ll write 2048 samples of random values from -1.0 to 1.0. One thing you need to know first is that you’ll actually be writing twice as many floats. For each sample you need to write a value for the left channel and a value for the right channel. Here’s the whole program:

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++)
{
var sample:Number = Math.random() * 2.0 - 1.0; // -1 to 1
event.data.writeFloat(sample); // left
event.data.writeFloat(sample); // right
}
}[/as3]

If you run that, you should hear some fuzzy static like a radio tuned between stations. Note that we are generating a single sample and using that same value for left and right. Because both channels have exactly the same value for each sample, we’ve generated monophonic sound. If we want stereo noise, we could do something like this:

[as3]function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++)
{
var sampleA:Number = Math.random() * 2.0 - 1.0; // -1 to 1
var sampleB:Number = Math.random() * 2.0 - 1.0; // -1 to 1
event.data.writeFloat(sampleA); // left
event.data.writeFloat(sampleB); // right
}
}[/as3]

Here we are writing a different random value for each channel, each sample. Running this, especially using headphones, you should notice a bit more “space” in the noise. It’s subtle and may be hard to discern between runs of the program, so let’s alter it so we can switch quickly.

[as3]import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

var mono:Boolean = true;
stage.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent):void
{
mono = !mono;
}

function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++)
{
var sampleA:Number = Math.random() * 2.0 - 1.0; // -1 to 1
var sampleB:Number = Math.random() * 2.0 - 1.0; // -1 to 1
event.data.writeFloat(sampleA); // left
if(mono)
{
event.data.writeFloat(sampleA); // left again
}
else
{
event.data.writeFloat(sampleB); // right
}
}
}[/as3]

Here we have a Boolean variable, mono, that toggles true/false on a mouse click. If true, we write sampleA to the left and right channels. If mono is not true, then we write sampleA to the left channel and sampleB to the right channel. Run this and click the mouse. Again, the change is subtle but you should be able to notice it.

To see, or rather, to hear, the results of latency, change the 2048 in the for loop to 8192. Now when you click, you’ll notice a significant delay in the time between the click and the change from mono to stereo or vice versa.

One other note about the number of samples. I said to “generally” use between 2048 and 8192. The fact is, if you try to use more than 8192, you’ll get a runtime error saying one of the parameters is invalid, so 8192 is a pretty hard limit. You can use fewer than 2048, but if you do, the Sound object will work through those samples and then consider the sound complete. It will not generate another SAMPLE_DATA event when it is done. Instead, it will generate a complete event. So if you want the sound to keep playing, you need to keep it supplied with at least 2048 samples at all times.
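
If memory serves, that complete notification arrives as a SOUND_COMPLETE event on the SoundChannel returned by play(), so catching it would look something like this (a sketch from memory; double-check against the docs):

[as3]import flash.events.Event;
import flash.media.SoundChannel;

var channel:SoundChannel = sound.play();
channel.addEventListener(Event.SOUND_COMPLETE, onSoundComplete);

function onSoundComplete(event:Event):void
{
// The dynamically generated sound ran out of samples and finished playing.
trace("sound complete");
}[/as3]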

In the next installment, we’ll start creating some simple waves.

18 responses so far

Scientific American

May 28 2010 Published under ActionScript, Flash


In the June 2010 issue of Scientific American, on page 58, there is an article entitled, “Is Time an Illusion?” by Craig Callender. You can see it here:


The large artwork on the first and last pages of the story, and a bit more subtly in some of the in between pages, is by yours truly.


This began about two months ago when I was contacted by Scientific American, asking if I would be interested in contributing some artwork for an article. They were interested in some of the pieces on my other site, Art From Code, in particular a few pieces I had entitled Space Time Color. Of course, I said I would be interested, and they sent over the article and asked me to come up with some rough ideas within a couple of weeks, and shortly after that, some high-res images for print.

Amazingly, I was able to dig up the source code that had created the Space Time Color images. The thing was, I now needed to create four separate pieces in both low res and later high res, save them out, and have the ability to reproduce and tweak each piece. Random code on the timeline of an FLA would just not do in this case. So I extracted the code out into classes and created an AIR application in Flash Builder 4.

The app is essentially a particle generator with a number of invisible attractors that affect the particles’ paths. A number of particles appear at the bottom of the screen and have an initial upward velocity. Here’s what it looks like:

Each circle is an attractor and can be dragged anywhere on the canvas. Each has a numeric stepper attached to it to adjust its strength. Of course, this number can be negative, which makes it repel particles. As each particle moves, it draws a line onto a bitmap.

Although the bitmap is scaled on the stage to 600×600, internally it is 4000×4000 pixels, and you can zoom into the image full size, at which point you can drag it around within its window.

Other things you can see in the UI there are options to change the background color, change the number of particles and number of attractors, show or hide the attractors, and draw in a lower resolution preview mode. When I got a picture that looked good, I could hit save. I modified the default PNGEncoder class to be asynchronous (I think I posted about that at the time), which allowed me to throw in a saving progress bar.

The cool thing is that when an image is saved, a configuration file with all the important properties is also saved with the same name. The file names for both are based on the time stamp of the moment they were saved. So in addition to the image file, “space_time_2010-5-28_22.16.34.png”, it saves a file called “space_time_2010-5-28_22.16.34.txt” that looks like this:

seed:1
numAttr:4
attractor:2860|920|200
attractor:2920|2946.666666666667|200
attractor:600|959|200
attractor:1754|1992|200
numpix:1000

This allowed me to load back in the exact configuration for any specific image that had been saved at any time. Although the app itself took a few days to get done, it then allowed me to quickly generate dozens of different images, then go back through them, choose the ones I liked, reload them, and tweak them a bit more before saving them out again.
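
Parsing that format back in is about as simple as it looks. Roughly along these lines (a sketch with made-up function names, not the actual app code):

[as3]// Sketch only. One "key:value" pair per line; attractors are encoded as "x|y|strength".
function parseConfig(text:String):void
{
for each(var line:String in text.split("\n"))
{
if(line == "") continue;
var key:String = line.substr(0, line.indexOf(":"));
var value:String = line.substr(line.indexOf(":") + 1);
if(key == "attractor")
{
var parts:Array = value.split("|");
// addAttractor is hypothetical -- whatever recreates an attractor in the app.
addAttractor(Number(parts[0]), Number(parts[1]), Number(parts[2]));
}
else
{
trace(key, "=", value); // seed, numAttr, numpix, etc.
}
}
}[/as3]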

Again, the images were exported as 32-bit PNGs at 4000×4000. Only the trails themselves were rendered; I left the background transparent, then opened each final image in Photoshop and added a white background there. I thought they might want to experiment with different background colors, but as it turned out, they liked the white anyway. While I was in Photoshop, I played with some different filters and effects and got some other cool results, but what wound up in the magazine was pretty much straight out of Flash.

Anyway, I’m pretty excited to have some of my work in such a prestigious magazine as Scientific American. Another notch in the keyboard. What’s next?

21 responses so far

A few MinimalComp updates

Apr 16 2010 Published under ActionScript, Components, Flash

Addressed all reported bugs and added a few graphical goodies.

[kml_flashembed publishmethod=”static” fversion=”10.0.0″ movie=”http://www.bit-101.com/blog/wp-content/uploads/2010/04/Updates.swf” width=”420″ height=”320″ targetclass=”flashmovie”]


[/kml_flashembed]

First is grids. The Panel now has a few new properties. Panel.showGrid turns on or off a grid drawn in the background. Panel.gridColor and Panel.gridSize let you control the color of the grid lines and how far apart they are. Grids also apply to the chart classes and work the same way.

Next is alternating rows for Lists and ComboBoxes. These now have an alternatingRow property which is false by default. Set it to true and every other row will be colored differently. You also have alternateColor, which along with defaultColor allows you to set the colors of the rows.

Finally, I didn’t like the way the Window’s title bar was inset, so it is no longer inset. And it now has a property called grips. This is a Shape object. It’s invisible by default, but if you set Window.grips.visible to true, you’ll see lines there that give the title bar a bit of a tactile feel for dragging. I left it as a Shape so you can draw your own graphics in there if you don’t like the lines. Be warned though, it scales according to the width of the window, the size of the label, and whether or not there is a close button. I’m open to suggestions on how to make that all better.
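
In code, the new stuff looks roughly like this (going by the property names described above; the constructor arguments follow the usual MinimalComps parent, x, y pattern, so double-check the docs for exact spellings):

[as3]var panel:Panel = new Panel(this, 10, 10);
panel.showGrid = true; // draw a grid in the background
panel.gridColor = 0xcccccc; // color of the grid lines
panel.gridSize = 10; // spacing between grid lines

var list:List = new List(this, 120, 10);
list.alternatingRow = true; // color every other row differently
list.defaultColor = 0xffffff;
list.alternateColor = 0xf0f0f0;

var window:Window = new Window(this, 240, 10, "Drag me");
window.grips.visible = true; // show the grip lines in the title bar[/as3]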

15 responses so far

MinimalComps: RangeSlider

Apr 02 2010 Published under ActionScript, Components

I know I’m supposed to stop making new components and clean things up for 1.0, but this got in my head and I had to bang it out. It’s basically a slider with two handles. You get a lowValue and a highValue. Good for specifying a range with a low and high boundary. I thought it was pretty important to have labels for the two values, but wasn’t sure of the best way to do it. I finally came up with these sliding labels that match the position of each handle. They can be always on, always off, or just show up when you move the handles. You can also specify the position of the labels.

[kml_flashembed publishmethod=”static” fversion=”10.0.0″ movie=”http://www.bit-101.com/blog/wp-content/uploads/2010/04/RangeSlider.swf” width=”401″ height=”200″ targetclass=”flashmovie”]


[/kml_flashembed]

Shown here is the HRangeSlider. There’s also a VRangeSlider which works about as you’d suspect.
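
Basic usage looks something like this (lowValue and highValue as described above; the constructor arguments just follow the usual MinimalComps pattern of parent, x, y, and a change handler, so treat that part as an assumption):

[as3]import flash.events.Event;

var range:HRangeSlider = new HRangeSlider(this, 10, 10, onRangeChange);
range.lowValue = 20;
range.highValue = 80;

function onRangeChange(event:Event):void
{
trace("range: " + range.lowValue + " to " + range.highValue);
}[/as3]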

The code is checked into SVN. Will update the site, SWC, docs, and code download later, probably tomorrow.

10 responses so far

Minimal NumericStepper

Mar 27 2010 Published under ActionScript, Components

I wasn’t planning on doing this before 1.0, but I needed one and put a couple of buttons and an input text together for the project I’m doing. Then I needed another one elsewhere in the project. So I extracted what I made, cleaned it up and here you go. 🙂

[kml_flashembed publishmethod=”static” fversion=”10.0.0″ movie=”http://www.bit-101.com/blog/wp-content/uploads/2010/03/NumericStepper.swf” width=”100″ height=”36″ targetclass=”flashmovie”]


[/kml_flashembed]

The buttons are a bit different than other Numeric Steppers, but I kind of like them. You have max, min, value, step, labelPrecision, and of course a CHANGE event.
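
So using it is about as basic as it gets. Something like this (the min/max spellings here are my guess at the actual property names, so check the docs):

[as3]import flash.events.Event;

var stepper:NumericStepper = new NumericStepper(this, 10, 10, onStepperChange);
stepper.minimum = 0;
stepper.maximum = 100;
stepper.step = 0.5;
stepper.labelPrecision = 1; // number of decimal places shown
stepper.value = 50;

function onStepperChange(event:Event):void
{
trace(stepper.value);
}[/as3]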

Enjoy

http://www.minimalcomps.com

11 responses so far

Encoding PNGs in AS3, asynchronously

Mar 27 2010 Published under ActionScript

Occasionally I make apps that create bitmaps and save them. To do so, you need to use an encoder to turn the BitmapData bits into a byte array that can be saved in some image format. The Flex framework has a PNGEncoder class (mx.graphics.codec.PNGEncoder), but the main problem with it is that it’s pretty slow. I’m saving a 4000×4000 bitmap and it sometimes takes well over 30 seconds, during which time the app completely freezes up.

Some time last year I was looking around to see if anyone had created an asynchronous version, i.e. one where you could tell it to encode your bitmap and it would do a little bit each frame and tell you when it was done. I wasn’t able to find one. At the time, I took a quick look at the idea of converting the PNGEncoder to do this, but never followed through. Yesterday I started an app that really needed this functionality, and I took another look at it.

Basically, the encoder writes some header stuff into a byte array, then loops from y = 0 to y = height in an outer loop, and from x = 0 to x = width in an inner loop, where it deals with each pixel, writing it to the byte array. Finally, it writes a few closing bytes and finishes up.

What I did was extract the inner loop into its own method, writeRow, and the stuff after the loop into a method called completeWrite. This required making a lot of local variables into class variables. Finally, I converted the outer loop into an onEnterFrame function that listens to the ENTER_FRAME event of a Sprite that I create for no other purpose than to have an ENTER_FRAME event. It’s pretty ugly, I know, but it seems enter frame got much better performance than a timer. With a timer, whatever your delay is will be inserted between each loop, whereas enter frame will run ASAP. You could make a really small delay, like 1 millisecond, but that seems like it’s open to some bad side effects. I felt more comfortable with the magic sprite.

Then I found that rather than doing just a single row on each frame, I got better results if I did a chunk of rows. I’m getting pretty good results at 20 rows at a time for a 4000×4000 bitmap, but I didn’t do any kind of benchmarking or testing. This could (should) probably be exposed as a settable parameter.

Anyway, each time it encodes a chunk of rows, it dispatches a progress event, and when it’s done, it dispatches a complete event. I also made a progress property that is just the current row divided by the total height. And of course a png property that lets you get at the byte array when it’s complete.

Originally, I tried extending the original PNGEncoder class and changing the parts I needed to. But everything in there is private, and I needed it to extend EventDispatcher to be able to dispatch events. So it’s a pure copy, paste, and change job.

[as3]////////////////////////////////////////////////////////////////////////////////
//
// ADOBE SYSTEMS INCORPORATED
// Copyright 2007 Adobe Systems Incorporated
// All Rights Reserved.
//
// NOTICE: Adobe permits you to use, modify, and distribute this file
// in accordance with the terms of the license agreement accompanying it.
//
////////////////////////////////////////////////////////////////////////////////

package mx.graphics.codec
{

import flash.display.BitmapData;
import flash.display.Sprite;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.events.ProgressEvent;
import flash.utils.ByteArray;

/**
* The PNGEncoder class converts raw bitmap images into encoded
* images using Portable Network Graphics (PNG) lossless compression.
*
* For the PNG specification, see http://www.w3.org/TR/PNG/.
*
* @langversion 3.0
* @playerversion Flash 9
* @playerversion AIR 1.1
* @productversion Flex 3
*/
public class PNGEncoderAsync extends EventDispatcher
{
// include "../../core/Version.as";

//————————————————————————–
//
// Class constants
//
//————————————————————————–

/**
* @private
* The MIME type for a PNG image.
*/
private static const CONTENT_TYPE:String = "image/png";

//————————————————————————–
//
// Constructor
//
//————————————————————————–

/**
* Constructor.
*
* @langversion 3.0
* @playerversion Flash 9
* @playerversion AIR 1.1
* @productversion Flex 3
*/
public function PNGEncoderAsync()
{
super();

initializeCRCTable();
}

//————————————————————————–
//
// Variables
//
//————————————————————————–

/**
* @private
* Used for computing the cyclic redundancy checksum
* at the end of each chunk.
*/
private var crcTable:Array;
private var IDAT:ByteArray;
private var sourceBitmapData:BitmapData;
private var sourceByteArray:ByteArray;
private var transparent:Boolean;
private var width:int;
private var height:int;
private var y:int;
private var _png:ByteArray;
private var sprite:Sprite;

//————————————————————————–
//
// Properties
//
//————————————————————————–

//———————————-
// contentType
//———————————-

/**
* The MIME type for the PNG encoded image.
* The value is "image/png".
*
* @langversion 3.0
* @playerversion Flash 9
* @playerversion AIR 1.1
* @productversion Flex 3
*/
public function get contentType():String
{
return CONTENT_TYPE;
}

//————————————————————————–
//
// Methods
//
//————————————————————————–

/**
* Converts the pixels of a BitmapData object
* to a PNG-encoded ByteArray object.
*
* @param bitmapData The input BitmapData object.
*
* @return Returns a ByteArray object containing PNG-encoded image data.
*
* @langversion 3.0
* @playerversion Flash 9
* @playerversion AIR 1.1
* @productversion Flex 3
*/
public function encode(bitmapData:BitmapData):void
{
internalEncode(bitmapData, bitmapData.width, bitmapData.height,
bitmapData.transparent);
}

/**
* Converts a ByteArray object containing raw pixels
* in 32-bit ARGB (Alpha, Red, Green, Blue) format
* to a new PNG-encoded ByteArray object.
* The original ByteArray is left unchanged.
*
* @param byteArray The input ByteArray object containing raw pixels.
* This ByteArray should contain
* 4 * width * height bytes.
* Each pixel is represented by 4 bytes, in the order ARGB.
* The first four bytes represent the top-left pixel of the image.
* The next four bytes represent the pixel to its right, etc.
* Each row follows the previous one without any padding.
*
* @param width The width of the input image, in pixels.
*
* @param height The height of the input image, in pixels.
*
* @param transparent If false, alpha channel information
* is ignored but you still must represent each pixel
* as four bytes in ARGB format.
*
* @return Returns a ByteArray object containing PNG-encoded image data.
*
* @langversion 3.0
* @playerversion Flash 9
* @playerversion AIR 1.1
* @productversion Flex 3
*/
public function encodeByteArray(byteArray:ByteArray, width:int, height:int,
transparent:Boolean = true):void
{
internalEncode(byteArray, width, height, transparent);
}

/**
* @private
*/
private function initializeCRCTable():void
{
crcTable = [];

for (var n:uint = 0; n < 256; n++)
{
var c:uint = n;
for (var k:uint = 0; k < 8; k++)
{
if (c & 1)
c = uint(uint(0xedb88320) ^ uint(c >>> 1));
else
c = uint(c >>> 1);
}
crcTable[n] = c;
}
}

/**
* @private
*/
private function internalEncode(source:Object, width:int, height:int,
transparent:Boolean = true):void
{
// The source is either a BitmapData or a ByteArray.
sourceBitmapData = source as BitmapData;
sourceByteArray = source as ByteArray;
this.transparent = transparent;
this.width = width;
this.height = height;

if (sourceByteArray)
sourceByteArray.position = 0;

// Create output byte array
_png = new ByteArray();

// Write PNG signature
_png.writeUnsignedInt(0x89504E47);
_png.writeUnsignedInt(0x0D0A1A0A);

// Build IHDR chunk
var IHDR:ByteArray = new ByteArray();
IHDR.writeInt(width);
IHDR.writeInt(height);
IHDR.writeByte(8); // bit depth per channel
IHDR.writeByte(6); // color type: RGBA
IHDR.writeByte(0); // compression method
IHDR.writeByte(0); // filter method
IHDR.writeByte(0); // interlace method
writeChunk(_png, 0x49484452, IHDR);

// Build IDAT chunk
IDAT = new ByteArray();
y = 0;

sprite = new Sprite();
sprite.addEventListener(Event.ENTER_FRAME, onEnterFrame);
}

protected function onEnterFrame(event:Event):void
{
for(var i:int = 0; i < 20; i++)
{
writeRow();
y++;
if(y >= height)
{
sprite.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
completeWrite();
return; // stop here -- otherwise the loop would keep writing rows past the end
}
}
}

private function completeWrite():void
{
IDAT.compress();
writeChunk(_png, 0x49444154, IDAT);

// Build IEND chunk
writeChunk(_png, 0x49454E44, null);

// return PNG
_png.position = 0;
dispatchEvent(new Event(Event.COMPLETE));
}

private function writeRow():void
{
IDAT.writeByte(0); // no filter

var x:int;
var pixel:uint;

if (!transparent)
{
for (x = 0; x < width; x++)
{
if (sourceBitmapData)
pixel = sourceBitmapData.getPixel(x, y);
else
pixel = sourceByteArray.readUnsignedInt();
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << 8) | 0xFF));
}
}
else
{
for (x = 0; x < width; x++)
{
if (sourceBitmapData)
pixel = sourceBitmapData.getPixel32(x, y);
else
pixel = sourceByteArray.readUnsignedInt();
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << 8) | (pixel >>> 24)));
}
}
dispatchEvent(new ProgressEvent(ProgressEvent.PROGRESS));
}

/**
* @private
*/
private function writeChunk(png:ByteArray, type:uint, data:ByteArray):void
{
// Write length of data.
var len:uint = 0;
if (data)
len = data.length;
png.writeUnsignedInt(len);

// Write chunk type.
var typePos:uint = png.position;
png.writeUnsignedInt(type);

// Write data.
if (data)
png.writeBytes(data);

// Write CRC of chunk type and data.
var crcPos:uint = png.position;
png.position = typePos;
var crc:uint = 0xFFFFFFFF;
for (var i:uint = typePos; i < crcPos; i++)
{
crc = uint(crcTable[(crc ^ png.readUnsignedByte()) & uint(0xFF)] ^ uint(crc >>> 8));
}
crc = uint(crc ^ uint(0xFFFFFFFF));
png.position = crcPos;
png.writeUnsignedInt(crc);
}

public function get png():ByteArray
{
return _png;
}

public function get progress():Number
{
return y / height;
}
}

}[/as3]
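
Using it looks something like this (a sketch; the BitmapData here is a stand-in for whatever you’re actually saving, and FileReference.save is just one way to get the bytes onto disk):

[as3]import flash.display.BitmapData;
import flash.events.Event;
import flash.events.ProgressEvent;
import flash.net.FileReference;
import mx.graphics.codec.PNGEncoderAsync;

var myBitmapData:BitmapData = new BitmapData(400, 400, true, 0x00000000); // stand-in for your own bitmap

var encoder:PNGEncoderAsync = new PNGEncoderAsync();
encoder.addEventListener(ProgressEvent.PROGRESS, onEncodeProgress);
encoder.addEventListener(Event.COMPLETE, onEncodeComplete);
encoder.encode(myBitmapData);

function onEncodeProgress(event:ProgressEvent):void
{
trace("encoding: " + Math.round(encoder.progress * 100) + "%");
}

function onEncodeComplete(event:Event):void
{
// encoder.png is the finished ByteArray; save it however you like.
new FileReference().save(encoder.png, "image.png");
}[/as3]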

I’m not putting this out here as any kind of proof of my brilliance, as it’s not very pretty code at all. I did the bare minimum refactoring to get the thing to work in my project. But it does work, and works damn well. Well enough to call it done for the project I need it for. I mentioned it on Twitter and found out that, as opposed to the last time I checked, a few other people have created similar classes. A few I am aware of now:

http://blog.inspirit.ru/?p=378

http://pastebin.sekati.com/?id=Pngencoderhack@5d892-84c96899-t

And apparently the hype framework has one built in too, that you might be able to steal.

But, there can never be enough free code out there, right? If this helps anyone, or they can use it as a launching point to create something better, great. If not, well, I wasted a few minutes posting this, so be it. 🙂

17 responses so far
