Archive for the 'Flash' category

Good bye 2010

As usual, it’s time to make my year end post. I’ll keep it relatively brief.

A few changes this year. This spring, I got kind of fed up with Apple, their control-happy policies, and the general direction they are heading. After 3 years of being 100% Mac, I switched back to Windows. It is an action that I not only do not regret the tiniest bit, but as Apple continues to evolve in the same direction, I’m happier than ever that I switched when I did. This is not to say that I’ve abandoned all iOS development and have thrown away my Mac. I still own two Apple computers. Both are plugged in and booted up and ready for action at all times. I have an iPhone, an iPad, an iPod Touch and a 5G iPod. They aren’t going anywhere. But the machine I open up in the morning and use all day long is my Sony Vaio, and I’m very happy with it. I’m not shoving it down your throat. If you’re happy with Apple, far be it from me to try to change your mind. I’m OK, you’re OK, right?

Around the same time I switched back to Windows, I also came into the ownership of a Google Nexus One. It took a while to really get used to it, as it’s definitely not the polished experience that the iPhone is. But I forced myself to stick with it for a week or so and really started to love it. From my viewpoint, the main difference was that it was MY phone, not Steve Jobs’. I could do pretty much whatever I wanted with it. Change the lock screen, change the task switcher, add memory, change the battery, put my icons where I want them, install unsigned apps, have live gadgets on the home screen, etc. etc. Once I got used to it, the iPhone just seemed unbearably sterile. Unfortunately, the model I had was a T-Mobile version, so I couldn’t get 3G on it with my AT&T SIM. I suffered with Edge for several months, but finally the wifi connection and even the Edge connection started getting really flaky. One day in September, it just couldn’t connect to anything, so it was back to the iPhone.

Coming back to the iPhone, I have to admit, I really did appreciate the slickness of the UI. But I didn’t fall back in love with it. To be honest, I knew it was only a stopgap until the new Windows Phones came out. I got a Samsung Focus as soon as they came out and I absolutely love it. It is without a doubt the best phone I’ve owned. Note – it’s far from perfect. It’s a v1 product and it shows in many ways. But regardless of all that, there is so much RIGHT about what Microsoft did with it. I’m really excited to see where it goes in the coming years. I don’t expect it to overtake or even match Android or iOS any time in the near future, if at all, and I don’t really care. As long as I can continue to own one and see it improve, I’m a happy camper.

As for mobile development, I didn’t do much at all most of the year. But this autumn and winter I worked on one major and one minor iOS project at Infrared5. After being away from Objective-C for so long, it was pretty bizarre trying to get back into it. It took a couple of days before it stopped feeling like I was typing with my toes, but eventually I got back in the groove. I played with Android dev briefly, but never really dove into it that much. But in October, I got my hands dirty with Windows Phone dev, with both XNA and Silverlight, and it has blown me away. I might even say it’s revitalized me as a developer. For a large part of the year I was on a very tough, frustrating project. It wore me down quite a bit. But with Visual Studio and C#, it’s like starting from scratch – in a good way! All the excitement without the learning curve. After many years of Flash development, writing ActionScript is almost second nature to me. But after just a couple of months in Visual Studio, I feel like I’m more at home with C# than I ever was with ActionScript. It’s a very, very similar language. If you took AS3 and removed all the little things that annoy or distract you and pull you out of the “flow” of coding, and replaced them with a whole bunch of little things that just work exactly the way you would expect them to, you’d have C#. And if you took Flash Builder and … no, that’s just not going to work. There’s no comparing Eclipse to Visual Studio.

Speaking of IDEs, after working in VS for a few months, and then going back to Xcode… it really dawned on me just how bizarre an IDE it is. I’m really trying not to bash any particular technology, but I can’t help feeling like Xcode was designed on an anti-matter planet in an alternate universe by some bizarre aliens on really strong acid. I’m not even talking about the language – just the IDE. I sometimes find it hard to believe that it was created by and for programmers. I know it’s not “wrong”, just different. Most IDEs are relatively similar, like most western human languages are pretty similar. I may not speak Spanish, but I can see it and read the words even if I don’t know their meaning, and can catch a bit of a hint of what’s being said. Same with most IDEs – you can quickly find your way around them for the most part. But diving into Xcode is like being dropped in an Asian or Middle Eastern country where everything just looks like random scratchings or scribbles to your unfamiliar eye. That’s what Xcode is like – just a completely foreign programming paradigm. Again, not saying it’s bad or wrong. You live with it long enough and you become fluent in it. But boy is it different.

Also in the summer I got into Processing quite a bit. Far more than I ever had before. I’ve kind of drifted from it again, but it was a great experience. I’m sure I’ll drift back around to it again before long. This largely came about from my conference session for 2010, “Programming Art”, in which I covered a bunch of different tools and languages for creating algorithmic and generative art, including Context Free Art, Structure Synth, Processing, the Hype Framework, and others. I also really enjoyed getting my head around Structure Synth, and got a bit revived on it just recently with the newly released integrated raytracer. Fun stuff!

On a personal basis, it was a year of health. I ran over 1000 miles, lost a good deal of weight, and reversed the trend of my blood sugar and blood pressure, which were edging into borderline problem areas. I think I also did more travelling this year than in any previous year, with trips to San Francisco, Minneapolis, Kortrijk, Belgium, Toronto, Japan, back to San Francisco, and Edmonton.

Well, so much for keeping it brief. In summary, it was a year of trying new things and going back to old things, learning new platforms and languages. Going forward, I don’t think it’s possible, at least not for me, to be a “Flash Developer” or an “iPhone Developer”, or to be stuck in any single platform. Now more than ever, there is just too much diversity and you have to have a foot in every camp. If someone needs a game or an app these days, they can’t really just release a single version of it. They’re going to need an iPhone version, an Android version, eventually a Windows Phone version, and some kind of web presence with it. Are you going to just ask for one slice of that pie? Are they going to farm out their app to 4-5 different shops, one for each platform? As a company, at the very least, you need to be able to do it all. Ideally as a developer as well, you need to be able to do as many of those as possible. I know that’s where Adobe is trying to be strong with the iPhone and Android packagers for Flash. I’m still not convinced those are the solutions for most projects though. Native will always win.

As for 2011, I assume the first good chunk of the year I’ll be doing a lot more WP7 dev. And since the XNA codebase is 99% the same for WP7, Windows, and Xbox games, I look forward to releasing some stuff for Windows desktop and Xbox as well. I’m sure I’ll also play with the new Mac App Store stuff, and more iOS stuff too. The WP7 game I’m working on now will definitely need an iOS port. But who knows where I’ll go from there?


What is Flash?

Nov 09 2010 Published under Flash, Technology

This weekend I was at FITC Edmonton, where I presented my Programming Art session for the last time. I’ll be working on something new for next year. It was a fun conference. Very relaxed, and for the first time ever, I actually attended every single session in the entire conference.

In addition to my own session, I was part of a discussion panel entitled “Staying Lithe in a Changing Rich Media Climate,” along with Skye Boyes, Mike Chambers, and Grant Skinner, moderated by Owen Brierley. The description of the panel was:

BIFF, POW, BAM! “Holy Shifting Platforms, Batman!” Just when we thought the ubiquity of the Flash Player was strong enough to keep the evil chaos of various mobile platforms at bay, on the eve of the launch of one compiler to rule them all, Flash developers everywhere got a punch in the nose that shocked a lot of us. Much has been said about this. Now it is time to look forward. This panel will discuss the strategies we all need to keep our heads above the rising tide of increased challenges and varieties of platform choices. How does Flash fit into your future? How can we learn from Sitespring? Central?

It was rather fun to air a lot of the feelings we had about all the recent controversy over Flash, HTML5, iOS, Android, etc. But one thing Mike Chambers said blew my mind to some degree. It really changed the way I see Flash. Up until now, if you had asked me what Flash (on the web) is, I probably would have come up with some kind of canned statement like, “Flash is a browser plugin that allows you to do vector graphics, animation, sound, and video. It’s very useful for creating online experiences, games, and Rich Internet Applications.”

But Mike gave a definition something like (probably paraphrased), “Flash is what drives innovation on the web,” and went on to explain it further. What I got out of it is as follows. It expands a bit on what Mike said, so I may be going beyond what he meant, but I think I captured the spirit of it.

The browser has certain native capabilities. These are ideally based on standards and don’t change radically over short periods of time. HTML, CSS, JavaScript, etc. evolve slowly. HTML5 won’t be fully ratified for another 11 or 12 years. This is a good thing. It gives a solid foundation and prevents complete browser chaos and anarchy. But it doesn’t really foster innovation. That’s where Flash comes in. Flash has an 18 month release cycle. It can try things out. Not all of those things work. If they don’t, it can fix them or even get rid of them in the next release. It can change and evolve rapidly, and innovate a hundred times faster than something like HTML.

Eventually though, the native browser capabilities will catch up to the capabilities that Flash has established. HTML5 may eventually be able to do many of the things which, up to now, were best done in Flash. Vector graphics, animation, video. And that is fine. That, too, is a good thing. It is expected – not something to freak out about or get defensive about. Flash has gone out into the wilderness and blazed a trail. HTML can come along a few years later and build the cities. If Flash stays where it is, sure, it’s going to be crowded out and its users are going to feel defensive and argumentative.

Flash’s job now is to be back out in the wilderness blazing more trails. As Mike also said (again paraphrased), “Flash will either keep innovating or it won’t. If it does, it will be fine. If not, it will die. We think it will continue to innovate.” Thus, what Flash is 10 years from now may be so different from what it is now that you may not recognize it. But think of it – if someone who was using Flash 4 back in 1999 fell into a coma and woke up in 2009 to see people creating Flex apps using MXML and AS3 classes in Eclipse, would they recognize it as Flash? I don’t think so. So I can’t imagine what Flash might look like or be used for in 2020.


MinimalComps 0.9.6

Nov 07 2010 Published under ActionScript, Components, Flash

It’s been a while, but I finally got around to doing some work on MinimalComps. I went through all the issues that people had entered in Google Code. Some were older and already handled. Some were requests for new features, which I’ve noted, but am not acting on just now. Several I could not reproduce and closed. But if you entered one of those and are still seeing an issue and can give reproducible steps for it, please reopen it with those steps. And then there were a fair number of real bugs. Many of these were related to the List and ComboBox controls. These wound up exposing several issues in lower-level controls, all the way down to PushButton. I think I have them pretty well cleaned up.

So, no new features, but you should find List and ComboBox work much better now. You can get the URL to the SVN repository, or download the SWC or the zipped source here: http://code.google.com/p/minimalcomps/

A couple of other things I want to note. First I want to acknowledge that the ComboBox is misnamed. It should be a Dropdown. A ComboBox COMBINES an editable field with a dropdown list. I’m not sure the best way to handle this. I’m thinking of just changing the name to Dropdown and then creating an empty ComboBox class that extends Dropdown just to ensure I don’t break existing stuff. Does that seem like a decent fix?
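
To make that concrete, the shim could be as simple as this – just a sketch, assuming the usual com.bit101.components package; the constructor parameters here are only illustrative, not necessarily the exact signature:

package com.bit101.components
{
	import flash.display.DisplayObjectContainer;

	// Sketch only: an (essentially) empty ComboBox that simply extends the
	// renamed Dropdown class, so existing code keeps compiling and working.
	// The parameter list below is illustrative, not the exact signature.
	public class ComboBox extends Dropdown
	{
		public function ComboBox(parent:DisplayObjectContainer = null, xpos:Number = 0, ypos:Number = 0, defaultLabel:String = "", items:Array = null)
		{
			super(parent, xpos, ypos, defaultLabel, items);
		}
	}
}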

The other issue to address is that several people have been bugging me to move the repository over to GitHub. I’ve personally used Git and got to like it, but despite the zeal that converts express for it, I think SVN is a much more popular method of source control. Pretty much anyone these days knows how to use SVN, either by command line or via some client. Git does have a serious learning curve, even for those who have used SVN or CVS. A lot of people have not made the jump yet. I don’t want to limit people’s access to the source and I don’t want to try to maintain two repositories. So for the near future, I’m sticking with Google Code SVN.


On the “death” of Silverlight

Oct 31 2010 Published under Flash, Silverlight, Technology

This week, Microsoft announced their changing strategy regarding Silverlight. You can read more about that here:

http://www.zdnet.com/blog/microsoft/microsoft-our-strategy-with-silverlight-has-shifted/7834

[Edit: 10/01/2010]
Note, this post just came out today, which clarifies things a lot. http://team.silverlight.net/announcement/pdc-and-silverlight/
[/Edit]

The key points are that going forward, Silverlight’s focus will be as the framework with which you will create Windows Phone 7 applications. As for Rich Internet/Interactive Applications on the web, Microsoft is going to start pushing HTML 5 as the solution.

A number of my friends on Twitter and elsewhere, members of the Flash community, were virtually high-fiving and toasting to the death of Silverlight. It’s certainly nice to see Flash alive and kicking as yet another “Flash Killer” leaves the ring. But I saw things in a bit of a different light.

I don’t think there was any meeting where Microsoft execs sat around saying, “You know, Flash is just too good and popular. We’re never going to be able to compete with it. Let’s just give up. They win.”

I think it was probably a bit closer to this: “You know, in terms of RIAs, HTML 5 does just about everything you need to do. All the best RIAs are made in HTML. And it’s only going to get better. It doesn’t make sense to have a heavy, proprietary web plugin that tries to do the same thing. Let’s just embrace HTML 5.”

I’m talking specifically about applications here. Although they tried a bit in the beginning, Silverlight never really made it into the gaming or more creative types of applications. If anything, it was really a contender to be a Flex killer more than a Flash killer. And while I think HTML 5 has a long way to go in terms of being a real contender for games and more creative types of Flash apps, I think for most common web applications, it’s the real answer. I think every web application I currently use is HTML based. I’m writing this blog post in WordPress, a very complex HTML based app. I make heavy use of Google Documents and Windows Live Office docs. I use GMail and Google Reader, Google Calendar and Google Maps. I use Flickr for photos, Garmin Connect and Daily Mile to log my running, BaseCamp, Bugzilla, and Pivotal Tracker for software projects, etc., etc. All are completely or almost completely HTML. I can’t think of any straight up Flex or Silverlight apps that I use on any kind of regular basis.

Of course, there are video sites, in which Flash and Silverlight are still pretty strong. I’m the furthest thing from an expert in video, so I’m in no position to evaluate how close HTML 5 video is to being a real competitor to Flash / Silverlight video. According to some, it’s there; according to others, not close. But I imagine that any weaknesses it has will soon be shored up.

Again, I still think HTML 5 has a way to go to catch up with much of what Flash can do in terms of rich interactivity. But I feel that in the world of everyday apps, it has won. Rather than taking Silverlight’s “death” as a victory, I think the Flash world, particularly RIA devs, should take it as a warning.


Apple Crumbles on 3rd Party Tools

Sep 09 2010 Published under Flash, iPhone

This just in, though the Twitterverse probably makes this old news already…

Apple has just announced that it is “relaxing all restrictions on the development tools used to create iOS apps”. In other words, it looks like the Flash CS5 iPhone publishing flow is now actually usable. Full announcement here:

http://www.apple.com/pr/library/2010/09/09statement.html

The only mentioned restriction is the requirement that the “resulting apps do not download any code”. I’m pretty sure the CS5 flow doesn’t cross that boundary.

Didn’t see that one coming. OK Adobe, now update that bad boy so we can make iPad apps. I see some cool stuff coming.


AS3 Sound Synthesis IV – Tone Class

Jul 23 2010 Published under ActionScript, Flash

In order to make the code so far a little more reusable, I moved it over into its own class, called Tone. I also implemented some optimizations and other little tricks. The most important is that instead of calculating the next batch of samples along with the envelope on every SAMPLE_DATA event, I precalculate all the samples within the envelope right up front, storing them in a Vector of Numbers. Here’s the class:

package
{
	import flash.media.Sound;
	import flash.events.SampleDataEvent;
	import flash.events.Event;

	public class Tone
	{
		protected const RATE:Number = 44100;
		protected var _position:int = 0;
		protected var _sound:Sound;
		protected var _numSamples:int = 2048;
		protected var _samples:Vector.<Number>;
		protected var _isPlaying:Boolean = false;

		protected var _frequency:Number;

		public function Tone(frequency:Number)
		{
			_frequency = frequency;
			_sound = new Sound();
			_sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
			_samples = new Vector.<Number>();
			createSamples();
		}

		// Precalculate every sample for the entire note, applying a simple
		// decaying amplitude envelope, and store them in the _samples vector.
		protected function createSamples():void
		{
			var amp:Number = 1.0;
			var i:int = 0;
			var mult:Number = frequency / RATE * Math.PI * 2;
			while(amp > 0.01)
			{
				_samples[i] = Math.sin(i * mult) * amp;
				amp *= 0.9998;
				i++;
			}
			_samples.length = i;
		}

		public function play():void
		{
			if(!_isPlaying)
			{
				_position = 0;
				_sound.play();
				_isPlaying = true;
			}
		}

		// Feed the Sound the next block of precalculated samples, writing each
		// value twice (once for the left channel, once for the right).
		protected function onSampleData(event:SampleDataEvent):void
		{
			for (var i:int = 0; i < _numSamples; i++)
			{
				if(_position >= _samples.length)
				{
					_isPlaying = false;
					return;
				}
				event.data.writeFloat(_samples[_position]);
				event.data.writeFloat(_samples[_position]);
				_position++;
			}
		}

		public function set frequency(value:Number):void
		{
			_frequency = value;
			createSamples();
		}
		public function get frequency():Number
		{
			return _frequency;
		}
	}
}

Note that in the constructor I call createSamples(). This creates the Vector with all samples needed for the duration of the note, including the amplitude of the pseudo-envelope. In the frequency setter, the samples are re-created. The result is that in the onSampleData handler method, I just fill up the byte array with the next so many values out of the _samples vector, stopping when I reach the end of that Vector.

Note also that the amplitude is decreased per sample, rather than per SAMPLE_DATA event, thus it needs to be reduced by a much smaller amount each time. This should also give a smoother envelope, though I’m not sure how noticeable it is.
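
For the curious, the numbers line up nicely: a factor of 0.9998 per sample works out to roughly 0.9998^2048 ≈ 0.66 per 2048-sample block, which is in the same ballpark as the 0.7-per-event decay from the last post. And since the loop runs until amp drops below 0.01, each note lasts about 23,000 samples, or roughly half a second of audio.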

Here’s a brief bit of code that shows it in action:

import flash.events.MouseEvent;

var tone:Tone = new Tone(800);
stage.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent):void
{
	tone.frequency = 300 + mouseY;
	tone.play();
}

It creates a tone. Whenever you click on the stage, it calculates a new frequency for the tone based on the y position of the mouse and plays the tone. Simple enough.

I don’t consider this class anywhere near “complete”. Just a beginning evolution in something. I’d like to add support for more flexible and/or complex envelopes, a stop method, and some other parameters to change the sound. But even so, this is relatively useful as is, IMHO.
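
For example, a stop method could probably be bolted on by keeping the SoundChannel that Sound.play() returns. This is just a sketch of the idea, not part of the release; it would also need an import for flash.media.SoundChannel:

		protected var _channel:SoundChannel;

		public function play():void
		{
			if(!_isPlaying)
			{
				_position = 0;
				_channel = _sound.play();
				_isPlaying = true;
			}
		}

		public function stop():void
		{
			// stopping the channel cuts playback off immediately
			if(_isPlaying && _channel != null)
			{
				_channel.stop();
				_isPlaying = false;
			}
		}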


AS3 Sound Synthesis III – Visualization and Envelopes

Jul 21 2010 Published under ActionScript, Flash

In Part I and Part II of this series, we learned how to utilize the Sound object to synthesize sound, and how to create sounds of various frequencies. This post will just be a quick detour into a couple of tricks you can implement.

The first one is visualizing the wave you are playing. In the SAMPLE_DATA event handler, you are already generating 2048 samples to create a wave form. While you’re creating these, it’s a piece of cake to go ahead and draw some lines based on their values. Look here:

import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
	graphics.clear();
	graphics.lineStyle(0, 0x999999);
	graphics.moveTo(0, stage.stageHeight / 2);
	for(var i:int = 0; i < 2048; i++)
	{
		var phase:Number = position / 44100 * Math.PI * 2;
		position ++;
		var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12));
		event.data.writeFloat(sample); // left
		event.data.writeFloat(sample); // right
		graphics.lineTo(i / 2048 * stage.stageWidth, stage.stageHeight / 2 - sample * stage.stageHeight / 8);
	}
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();
function onTimer(event:TimerEvent):void
{
	n = Math.floor(Math.random() * 20 - 5);
	timer.delay = 125 * (1 + Math.floor(Math.random() * 7));
}

All I've done here is clear the graphics, set a line style, and move to the center left of the screen. Then with each sample, move across the screen a bit and up or down depending on the value of the sample. This gives you something looking like this:

You can see the wave change its frequency with each new note.

The next trick is something I learned from Andre Michelle a very short while ago. You notice that the sine wave as is feels very flat and bland. Quite obviously computer generated. That's because the amplitude, or height, of the wave is always constant: -1.0 to 1.0. That's just not natural for real world things that make sounds. If you strike a piano key, you'll notice that it's very loud at first, then settles down to a steady value as you hold the key, and then when you release it, it fades out. These changes in volume are known as the envelope of a sound. It generally has four phases, known as ADSR (I'll sketch it in code after the definitions). From Wikipedia:

Attack time is the time taken for initial run-up of level from nil to peak.
Decay time is the time taken for the subsequent run down from the attack level to the designated sustain level.
Sustain level is the amplitude of the sound during the main sequence of its duration.
Release time is the time taken for the sound to decay from the sustain level to zero after the key is released.
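
Just for reference, computing a full ADSR envelope might look something like this – a rough sketch I'm including purely for illustration, with made-up parameter names; it's not something we'll actually need below:

// Rough sketch of a full ADSR envelope, for illustration only.
// t is the time in seconds since the note started; holdTime is how long the
// "key" stays down (it should be at least attack + decay).
// Returns an amplitude between 0.0 and 1.0.
function adsrAmplitude(t:Number, attack:Number, decay:Number, sustain:Number, release:Number, holdTime:Number):Number
{
	if(t < attack)
	{
		// attack: ramp from silence up to full volume
		return t / attack;
	}
	if(t < attack + decay)
	{
		// decay: ramp from full volume down to the sustain level
		return 1.0 - (1.0 - sustain) * (t - attack) / decay;
	}
	if(t < holdTime)
	{
		// sustain: hold steady while the key is down
		return sustain;
	}
	// release: fade from the sustain level down to silence after the key is released
	return Math.max(0.0, sustain * (1.0 - (t - holdTime) / release));
}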

Many of Andre Michelle's sound experiments and toys have a very nice, pleasing bell sound to them, so I knew he was using some kind of envelope, but I know that envelopes can be pretty complex to code. So I asked him about it. He gave me a one or two sentence answer which just made me say, "OH! Of course!" Basically, all you need to do is start the sound at full amplitude and reduce it over time. So simple. Essentially, you are throwing away the attack, decay, and sustain and just programming in a release.

In this version of the project, we just set up an amp variable and set it to 1.0. On each SAMPLE_DATA event, reduce the amplitude by a fraction. And multiply the sample value by that amplitude. When a new note begins, reset amp to 1.0.

import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var amp:Number = 1.0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
	graphics.clear();
	graphics.lineStyle(0, 0x999999);
	graphics.moveTo(0, stage.stageHeight / 2);
	for(var i:int = 0; i < 2048; i++)
	{
		var phase:Number = position / 44100 * Math.PI * 2;
		position ++;
		var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12)) * amp;
		event.data.writeFloat(sample); // left
		event.data.writeFloat(sample); // right
		graphics.lineTo(i / 2048 * stage.stageWidth, stage.stageHeight / 2 - sample * stage.stageHeight / 8);
	}
	amp *= 0.7;
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();
function onTimer(event:TimerEvent):void
{
	amp = 1.0;
	n = Math.floor(Math.random() * 20 - 5);
	timer.delay = 125 * (1 + Math.floor(Math.random() * 7));
}

Here, I'm multiplying amp by 0.7 on each event. This gives a pretty pleasing bell sound. Change that value around to get different characters. Or you could even do some kind of funky vibrato thing like this:

amp = 0.5 + Math.cos(position * 0.001) * 0.5;
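
A note on that line: whether you compute it per sample inside the loop or once per event in place of the amp *= 0.7 line, position advances 44,100 steps per second, so cos(position * 0.001) cycles at about 7 Hz (0.001 × 44100 / 2π ≈ 7), giving you a roughly 7 Hz tremolo on top of the tone.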

OK, that's all for this time.


AS3 Sound Synthesis II – Waves

Jul 21 2010 Published under ActionScript, Flash

This post will show you how to generate sine waves at specific frequencies using the AS3 Sound object. It assumes you have read, or are already familiar with, the material in Part I of this series.

Basics of Sound

Sound itself is essentially a change in the pressure of the air. Extremely simple layman’s terms here. Air is composed of various molecules. They are not uniformly or smoothly distributed. There can be areas where they are under more pressure and packed more tightly together, and other areas where they are more spaced out. When something like a guitar string vibrates, it moves quickly back and forth at a specific speed. When it moves in one direction, it pushes the molecules of air closer to other molecules in that direction. This creates a dense pocket of air. Then the string moves back in the opposite direction, creating a bit of a vacuum. Not a real vacuum, but an area where there are fewer molecules. It then moves back again, creating another dense pocket.

These areas of dense and less dense air move out across the room and eventually hit your ears. The dense air pushes your eardrum in, and the less dense pocket causes it to move out. The result is that your eardrum starts vibrating at roughly the same frequency as the guitar string. This causes some bones to vibrate, which stimulate nerves at the same frequency, which send signals to your brain, saying “C sharp”.

When you record sound, you use a microphone as a sort of electronic ear. It has some kind of diaphragm or other moving part that vibrates and creates an electrical signal, which is recorded one way or another. For playback, this electrical signal is regenerated and causes a speaker to vibrate at the same frequency. This pushes the air the same way the original guitar string did, and you hear the same sound.

Synthesizing Sound

However, when we talk about synthesizing sound, we are doing it all from scratch. Flash, your computer’s sound card, and your headphones or speakers will handle generating the correct electrical signal and vibrating the air. But you need to do the math to figure out how much and how fast to make things vibrate.

In Part I of this tutorial, we created random values which caused the speaker or headphones to vibrate at a completely chaotic pace, resulting in a radio-static-like fuzz. Creating an actual tone requires a bit more work, and hopefully some understanding of what you are doing.

Digital Sound

In analog sound, such as vinyl records or 8-track tapes (showing my age here), the sound is encoded smoothly as bumps in the groove of the record, or changes in a magnetic field on the tape. Digital sound takes discrete samples of the sound pressure at specific intervals.

Taking one of the simplest sound forms, a sine wave, here is a smooth analog version:

[figure: smooth sine wave]

And here is the same wave, represented as 50 samples:

[figure: the same sine wave as 50 discrete samples]

As you can see, the sampled version is not quite as accurate as the smooth wave. However, in high quality digital sound, the samples are frequent enough that it is virtually impossible for most of the population to notice any difference. When you are synthesizing sound in Flash, you will be dealing with 44,100 samples per second. Remember that number; we’ll be doing some calculations with it.

Now, what we need to do is generate our samples with a series of values that wind up forming a sine wave like you see above. The top peak of the sine wave will be 1.0, the bottom will be –1.0 and the middle 0.0. To start simply, we’ll generate a single sine wave over the course of a full second. To keep track of where we’re at, we’ll use a variable called position. We’ll initialize it to 0 and increment it each time we create a new sample. Thus position will range from 0 to 44100 over the course of the first second of audio.

If we then divide position by 44100, we’ll get values that range from 0.0 up to 1.0 over the course of one second. And if we multiply that by 2PI, we’ll get values from 0 to 2PI, just what we need to generate a sine wave with the Math.sin function. Here’s the code so far:

import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;

var position:int = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
	for(var i:int = 0; i < 2048; i++)
	{
		var phase:Number = position / 44100 * Math.PI * 2;
		position ++;
		var sample:Number = Math.sin(phase);
		event.data.writeFloat(sample); // left
		event.data.writeFloat(sample); // right
	}
}

If you run that file, you'll be generating a sine wave that does one full cycle each second. Of course, this, being a 1 Hz sound wave, is far too low for the human ear to hear. To get a specific frequency, simply multiply phase by the frequency you want to hear. Humans can generally hear frequencies in the range of 20 to 20,000 Hz. The A above middle C on the standard musical scale is 440 Hz. So let's try that. Change the line that calculates the sample to:

var sample:Number = Math.sin(phase * 440);

That gives you A. You can find charts like this all over the net:

A 440
B flat 466
B 494
C 523
C sharp 554
D 587
D sharp 622
E 659
F 698
F sharp 740
G 784
A flat 831
A 880

Or, if you want to get more mathematical about it, the formula for the frequency of a note n semitones above or below 440 is:

440 * 2^(n / 12)
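
To sanity check it against the chart: three semitones above A is C, and 440 * 2^(3/12) ≈ 440 * 1.189 ≈ 523 Hz, which matches the C listed above.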

We can implement scales then by setting up an n variable, incrementing it on a timer, and using the above formula to calculate our frequency:

import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
import flash.utils.Timer;
import flash.events.TimerEvent;

var position:int = 0;
var n:Number = 0;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
	for(var i:int = 0; i < 2048; i++)
	{
		var phase:Number = position / 44100 * Math.PI * 2;
		position ++;
		var sample:Number = Math.sin(phase * 440 * Math.pow(2, n / 12));
		event.data.writeFloat(sample); // left
		event.data.writeFloat(sample); // right
	}
}

var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, onTimer);
timer.start();
function onTimer(event:TimerEvent):void
{
	n++;
}

Alternately, we can make a poor man's generative music composer with a little help from Math.random:

function onTimer(event:TimerEvent):void
{
	n = Math.floor(Math.random() * 20 - 5);
	timer.delay = 125 * (1 + Math.floor(Math.random() * 8));
}

This generates a different note, and a different duration (from 1/8th of a second up to one full second) for each note.

Armed with this alone, you are on your way to making your own sequencer or mini piano or other type of instrument. Later, I'll try to post some stuff on other wave forms, combining waves, envelopes, and other topics.


Sound Synthesis in AS3 Part I – The Basics, Noise

Jul 21 2010 Published under ActionScript, Flash

I’ve been meaning to write something up on this for quite a while. It recently struck me that there still isn’t a whole lot of good material on this out there. So I figured I’d throw something together.

We’ll start by looking at the basic mechanics of the Sound object, how to code it up, and create some random noise. Later, we’ll start generating some real wave forms and start mixing them together, etc.

Diving right in

Flash 10 has the ability to synthesize sounds. Actually, there was a hack that could be used in Flash 9 to do the same thing, but it became standardized in 10.

Here’s how it works. You create a new Sound object and add an event listener for the SAMPLE_DATA event (SampleDataEvent.SAMPLE_DATA). This event will fire when there is no more sound data for the Sound to play. Then you start the sound playing.

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

At this point, because you have not loaded any actual sound, such as an MP3, WAV, etc. or attached it to any streaming sound data, there is nothing to play and the SAMPLE_DATA event will fire right away. So we’ll need that handler function:

function onSampleData(event:SampleDataEvent):void
{
}

Our goal here is to give the Sound object some more sound data to play. So how do we do that? Well, the SampleDataEvent that gets passed to this function has a data property, which is a ByteArray. We need to fill that ByteArray with some values that represent some sound to play. We do that using the ByteArray.writeFloat method. Generally you want to write values from –1.0 to 1.0 in there. Each float value you write in there is known as a sample. Hence the “SampleDataEvent”. How many samples should you write? Generally between 2048 and 8192.

OK, that’s a big range of values. What’s best? Well, if you stick to a low number like 2048, the Sound will rip through those values very quickly and another SAMPLE_DATA event will fire very quickly, requiring you to fill it up again. If you use a larger number like 8192, the Sound will take 4 times as long to work through those values and thus you’ll be running your event handler function 4 times less often.

So more samples can mean better performance. However, if you have dynamically generated sounds, more samples means more latency. Latency is the time between some change in the UI or program and when that change results in a change in the actual sound heard. For example, say you want to change from a 400 Hz tone to an 800 Hz tone when a user presses a button. The user presses the button, but the Sound has 8000 samples of the 400 Hz tone in the buffer, and will continue to play them until they are gone. Only then will it call the SAMPLE_DATA event handler and ask for more data. This is the only point where you can change the tone to 800 Hz. Thus, the user may notice a slight lag between when he pressed the button and when the tone changed. If you use a smaller number of samples – 2048 – the latency or lag will be shorter and less noticeable.
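
To put some numbers on that: at 44,100 samples per second, a 2048-sample buffer holds about 46 milliseconds of audio (2048 / 44100 ≈ 0.046), while 8192 samples is about 186 milliseconds – enough of a delay to be clearly noticeable.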

For now, let’s just generate some noise. We’ll write 2048 samples of random values from –1.0 to 1.0. One thing you need to know first is that you’ll actually be writing twice as many floats. For each sample you need to write a value for the left channel and a value for the right channel. Here’s the whole program:

import flash.media.Sound;
import flash.events.SampleDataEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void
{
    for(var i:int = 0; i < 2048; i++)
    {
        var sample:Number = Math.random() * 2.0 - 1.0; // -1 to 1
        event.data.writeFloat(sample); // left
        event.data.writeFloat(sample); // right
    }
}

If you run that, you should hear some fuzzy static like a radio tuned between stations. Note that we are generating a single sample and using that same value for left and right. Because both channels have exactly the same value for each sample, we’ve generated monophonic sound. If we want stereo noise, we could do something like this:

function onSampleData(event:SampleDataEvent):void
{
    for(var i:int = 0; i < 2048; i++)
    {
        var sampleA:Number = Math.random() * 2.0 - 1.0; // -1 to 1
        var sampleB:Number = Math.random() * 2.0 - 1.0; // -1 to 1
        event.data.writeFloat(sampleA); // left
        event.data.writeFloat(sampleB); // right
    }
}

Here we are writing a different random value for each channel, each sample. Running this, especially using headphones, you should notice a bit more “space” in the noise. It’s subtle and may be hard to discern between runs of the program, so let’s alter it so we can switch quickly.

import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

var mono:Boolean = true;
stage.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent):void
{
    mono = !mono;
}

function onSampleData(event:SampleDataEvent):void
{
    for(var i:int = 0; i < 2048; i++)
    {
        var sampleA:Number = Math.random() * 2.0 - 1.0; // -1 to 1
        var sampleB:Number = Math.random() * 2.0 - 1.0; // -1 to 1
        event.data.writeFloat(sampleA); // left
        if(mono)
        {
            event.data.writeFloat(sampleA); // left again
        }
        else
        {
            event.data.writeFloat(sampleB); // right
        }
    }
}

Here we have a Boolean variable, mono, that toggles true/false on a mouse click. If true, we write sampleA to the left and right channels. If mono is not true, then we write sampleA to the left channel and sampleB to the right channel. Run this and click the mouse. Again, the change is subtle but you should be able to notice it.

To see, or rather, to hear, the results of latency, change the 2048 in the for loop to 8192. Now when you click, you’ll notice a significant delay in the time between the click and the change from mono to stereo or vice versa.

One other note about the number of samples. I said to use, “generally”, between 2048 and 8192. The fact is, if you try to use more than 8192, you’ll get a run time error saying one of the parameters is invalid, so 8192 is a pretty hard limit. You can use fewer than 2048, but if you do, what happens is that the Sound object will work through those samples and then consider the sound complete. It will not generate another SAMPLE_DATA event when it is done. Instead, the SoundChannel it’s playing through will dispatch a SOUND_COMPLETE event. So if you want the sound to keep playing, you need to keep it supplied with at least 2048 samples at all times.
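
If you want to catch that moment in code, you can listen for it on the SoundChannel returned by play(). A quick sketch (my example, not from the posts above):

import flash.media.Sound;
import flash.media.SoundChannel;
import flash.events.Event;
import flash.events.SampleDataEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
var channel:SoundChannel = sound.play();
channel.addEventListener(Event.SOUND_COMPLETE, onSoundComplete);

function onSampleData(event:SampleDataEvent):void
{
	// writing fewer than 2048 samples: the sound plays these and then finishes
	for(var i:int = 0; i < 1024; i++)
	{
		var sample:Number = Math.random() * 2.0 - 1.0;
		event.data.writeFloat(sample); // left
		event.data.writeFloat(sample); // right
	}
}

function onSoundComplete(event:Event):void
{
	trace("sound finished playing");
}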

In the next installment, we’ll start creating some simple waves.


Scientific American

May 28 2010 Published under ActionScript, Flash


In the June 2010 issue of Scientific American, on page 58, there is an article entitled “Is Time an Illusion?” by Craig Callender.


The large artwork on the first and last pages of the story, and a bit more subtly in some of the in between pages, is by yours truly.


This began about two months ago when I was contacted by Scientific American, asking if I would be interested in contributing some art work for an article. They were interested in some of the pieces on my other site, Art From Code, in particular a few pieces I had entitled Space Time Color. Of course, I said I would be interested and they sent over the article and asked me to come up with some rough ideas within a couple of weeks, and shortly after that some high res images for print.

Amazingly, I was able to dig up the source code that had created the Space Time Color images. The thing was, I now needed to create four separate pieces in both low res and later high res, save them out, and have the ability to reproduce and tweak each piece. Random code on the timeline of an FLA would just not do in this case. So I extracted the code out into classes and created an AIR application in Flash Builder 4.

The app is essentially a particle generator with a number of invisible attractors that affect the particles’ paths. A number of particles appear at the bottom of the screen and have an initial upward velocity. Here’s what it looks like:

Each circle is an attractor and can be dragged anywhere on the canvas. Each has a numeric stepper attached to it to adjust its strength. Of course, this number can be negative, which makes it repel particles. As each particle moves, it draws a line onto a bitmap.
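
The app's actual source isn't shown here, but to give a rough idea of the kind of update loop involved, here's an illustrative sketch (made-up object shapes and an assumed inverse-square falloff – not the real code):

import flash.display.BitmapData;
import flash.display.Shape;

// Rough illustrative sketch only. Particles and attractors are plain objects
// here: {x, y, vx, vy} and {x, y, strength}. A negative strength repels.
function stepParticles(particles:Array, attractors:Array, canvas:BitmapData, pen:Shape):void
{
	for each(var p:Object in particles)
	{
		// accumulate the pull (or push) of each attractor on this particle
		for each(var a:Object in attractors)
		{
			var dx:Number = a.x - p.x;
			var dy:Number = a.y - p.y;
			var distSq:Number = Math.max(1, dx * dx + dy * dy);
			p.vx += dx * a.strength / distSq;
			p.vy += dy * a.strength / distSq;
		}
		// move the particle and draw the new segment of its trail into the bitmap
		pen.graphics.clear();
		pen.graphics.lineStyle(0, 0x000000, 0.1);
		pen.graphics.moveTo(p.x, p.y);
		p.x += p.vx;
		p.y += p.vy;
		pen.graphics.lineTo(p.x, p.y);
		canvas.draw(pen);
	}
}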

Although the bitmap is scaled on the stage to 600×600, internally it is 4000×4000 pixels, and you can zoom into the image full size, at which point you can drag it around within its window.

Other things you can see in the UI there are options to change the background color, change the number of particles and number of attractors, show or hide the attractors, and draw in a lower resolution preview mode. When I got a picture that looked good, I could hit save. I modified the default PNGEncoder class to be asynchronous (I think I posted about that at the time), which allowed me to throw in a saving progress bar.

The cool thing is that when an image is saved, a configuration file with all the important properties is also saved with the same name. The file names for both are based on the time stamp of the moment they were saved. So in addition to the image file, “space_time_2010-5-28_22.16.34.png”, it saves a file called “space_time_2010-5-28_22.16.34.txt” that looks like this:

seed:1
numAttr:4
attractor:2860|920|200
attractor:2920|2946.666666666667|200
attractor:600|959|200
attractor:1754|1992|200
numpix:1000

This allowed me to load back in the exact configuration for any specific image that had been saved at any time. Although the app itself took a few days to get done, it then allowed me to quickly generate dozens of different images, then go back through them, choose the ones I liked, reload them, and tweak them a bit more before saving them out again.
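
Reading a file in that format back in is trivial – here's a quick sketch of the kind of parsing involved (my illustration, not the app's actual loader):

// Sketch only: parse the "key:value" config format, where attractor values
// are pipe-delimited as x|y|strength.
function parseConfig(text:String):Object
{
	var config:Object = { attractors: [] };
	for each(var line:String in text.split("\n"))
	{
		if(line.length == 0) continue;
		var key:String = line.substring(0, line.indexOf(":"));
		var value:String = line.substring(line.indexOf(":") + 1);
		if(key == "attractor")
		{
			var parts:Array = value.split("|");
			config.attractors.push({x: Number(parts[0]), y: Number(parts[1]), strength: Number(parts[2])});
		}
		else
		{
			config[key] = Number(value); // seed, numAttr, numpix
		}
	}
	return config;
}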

Again, the images were exported as 32-bit PNGs at 4000×4000. Only the trails themselves were represented; I left the background color transparent, then opened up each final image in Photoshop and added a white background there. I thought they might want to experiment with different background colors, but as it turned out, they liked the white anyway. While I was in Photoshop, I played with some different filters and effects and got some other cool results, but what wound up in the magazine was pretty much straight out of Flash.

Anyway, I’m pretty excited to have some of my work in such a prestigious magazine as Scientific American. Another notch in the keyboard. What’s next?
