As part of a pitch at Havas, I drew some birds. The aim was to create a rounded, cartoonish, Charley Harper-style illustration that drew inspiration from the client's logo.
A peacock and a disinterested pigeon bystander.
A peacock showing off to a jealous pigeon.
A kit of pigeons.
As a side project at work, I designed and built designlinking.com.
It’s a link share site, updated weekly, with contributions from the entire Havas design team made via a Google spreadsheet. The site uses tabletop.js to read in the Google Doc, Handlebars to render the data with templates, and Require.js to try and keep the JavaScript tidier. It’s a fairly simple thing, but a pleasant project to build, and my first go at rendering templates on the client side. And it’s responsive.
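The flow is simple enough to sketch without the libraries themselves. Here's the pattern in miniature, with made-up row data standing in for what tabletop.js returns, and a bare-bones `{{field}}` renderer standing in for Handlebars proper:

```javascript
// Hypothetical rows, standing in for what tabletop.js hands back
// from the Google spreadsheet.
const rows = [
  { title: "Grids", url: "http://example.com/grids", contributor: "Sam" },
  { title: "Type", url: "http://example.com/type", contributor: "Jo" },
];

// A bare-bones {{field}} renderer, standing in for Handlebars.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => data[key] ?? "");
}

const template = '<li><a href="{{url}}">{{title}}</a> by {{contributor}}</li>';
const html = rows.map((row) => render(template, row)).join("\n");
console.log(html);
```

The real site does the same thing, just with Handlebars compiling the template and tabletop.js fetching the rows asynchronously.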
SubDivTris is a canvas implementation of my earlier Processing script and Illustrator plugin. It’s a lot more interactive and, being on the web, a lot easier for you to try out. It’s also more accurate, since I finally got my head round drawing everything properly.
By mousing over a triangle, you subdivide it into four new triangles. Each takes its colour from the relevant position on a source image. By subdividing the triangles enough, the image is effectively revealed (albeit with triangular pixels). It’s quite fun.
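The core step is pure geometry: connect a triangle's edge midpoints and you get four children. A sketch (my own point and triangle shapes, not the display objects the actual piece uses):

```javascript
function midpoint(a, b) {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// One triangle [a, b, c] becomes four: three corner triangles
// plus the inverted one in the middle.
function subdivide([a, b, c]) {
  const ab = midpoint(a, b);
  const bc = midpoint(b, c);
  const ca = midpoint(c, a);
  return [
    [a, ab, ca],
    [ab, b, bc],
    [ca, bc, c],
    [ab, bc, ca],
  ];
}

// Each child is then filled by sampling the source image, e.g. at its
// centroid, via the canvas getImageData pixel at (x, y).
function centroid([a, b, c]) {
  return { x: (a.x + b.x + c.x) / 3, y: (a.y + b.y + c.y) / 3 };
}

const kids = subdivide([{ x: 0, y: 0 }, { x: 8, y: 0 }, { x: 0, y: 8 }]);
console.log(kids.length); // 4
```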
The whole thing has been a bit of an experiment to try out the CreateJS libraries. Easel.js (which is part of the CreateJS suite) adds a Flash-style display list to the canvas, which makes handling interaction, and adding and removing objects, incredibly simple. The syntax is all very familiar too if you’re moving from AS3.
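If you've not met a display list before, the idea in miniature is just this (a toy model of the concept, not the Easel.js API itself): a stage owns its children, you add and remove them by reference, and a tick updates everything without any manual redraw bookkeeping.

```javascript
// A toy display list: not Easel.js, just the shape of the idea.
class Container {
  constructor() {
    this.children = [];
  }
  addChild(child) {
    this.children.push(child);
    return child;
  }
  removeChild(child) {
    this.children = this.children.filter((c) => c !== child);
  }
  // On each tick, every child gets a chance to update itself.
  tick() {
    this.children.forEach((c) => c.tick && c.tick());
  }
}

const stage = new Container();
const triangle = { x: 0, tick() { this.x += 1; } };
stage.addChild(triangle);
stage.tick();              // triangle updates itself
stage.removeChild(triangle); // and is gone, no cleanup needed
```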
I may yet develop this further. It’d be nice to use the Instagram API so you’d be selectively revealing the latest image with a bright sounding tag (#sunset, #lake, #goldfish, #shimmer).
Back in June I spent a week with the lovely folk of Lucky Frame doing a mini-residency. I had a blast hanging out with Yann, Jon and Sean; spending the week testing out Bad Hotel, talking about faintly ridiculous game ideas, and building ‘Railroad Chicken’.
We spent a lot of the week focusing on momentary games – where much of the gameplay is about waiting for an event, followed by the player’s momentary reaction to it. The wonderful ‘Ready, Steady, Bang’ is a great example of the genre. A lot of the games I was introduced to by the Lucky Framers also used a single device for multiplayer games. Two (or more) players huddled round an iPhone is immediately funny, and seemingly straightforward games can become enormously tactical and complex very quickly (like FingerBattle & pyoing).
The aim of the week was to build a very playable test of a concept which we quickly named “Railroad Chicken”. It’s a two-player, one-device, momentary game which tests players’ daring in a safe, chicken-based environment. I also wanted it to look pretty.
Graphics were made in Blender and Illustrator, and sprite-sheeted with TexturePacker, and I built the game in Flash using Starling. Yann generously provided the sounds. Here it is. Find a friend to try it out with:
You have but one aim: to best your fellow man. It’s a game that demands you prove your daring, train-dodging bravery in a safe, chicken-based environment.
How to Play
Find a friend. Press and hold your designated key to indicate your readiness to dodge an oncoming train. Your chicken isn’t ready for this game. He’ll never be ready, so you need to tie him down. Release the key / chicken before your hapless bird is hit by the train… but after your opponent.
You may need to write these down:
Player 1: The A Key.
Player 2: The B Key.
Having animated the chicken, I’ve realised it’s a surprisingly handy graphic for all kinds of quick games. I may try combining this sprite sheet with the CreateJS libraries to create some HTML5 mini-games:
If you need a struggling chicken for anything you’re working on (you do), you can download the sprite sheet and its XML here.
I posted this a few months ago on Vimeo, but am just getting round to working on a new version.
Here’s the 3D blend file. It has 604 faces, which is a lot when you consider that you’ll have to cut them all out. The next version will work on reducing the face count, and upping the geometric aesthetic:
Here’s the PDF of the flattened net. There are no numbers or instructions or anything, so treat it like one of those 3D jigsaws the kids love.
Parker is a chubby bear-type animal who does Parkour. The character was made in Blender, rigged and animated, before being exported as a series of still meshes for 3D printing. I’ve just completed the walk cycle (of six poses), and got them printed by Shapeways. The aim is to see how few distinct models it would take to complete a series of stop motion animation movements: walking, running, jumping, etc.
These prints were an initial test, and they’ve revealed a few problems with some of the finer details: the claws are too small, and the tiny feet are useless for balancing the models. I’ve glued the models onto bases to test stop-motion animation, but these are pretty distracting, and ruin the nice sense of the character interacting directly with the scene.
I’d like these to be very easy to animate with – the end goal is to offer sets for sale on Shapeways so that anyone can create videos with them. It might be possible to make them bigger so they’d be easier to photograph, and hollow to bring down the printing price. Certainly, the model replacement speeds up the animation process. Here’s a really quick test on my kitchen table, swapping out the models without too much accuracy:
I’ve been writing some Arduino code to let me tweet using a telegraph key (a Morse code tapper). The key is, obviously, just a jumped-up switch, so it was really simple to wire to the Arduino. In fact, a button is the first circuit the tutorials walk you through.
Edit: Seems I wasn’t first. @rebobaydobay tweeted me a similar but much better version of this project that’s hosted here. Theirs launched a week after this post, but we were both behind this guy. Alas.
I’ve now got the Arduino listening out for switch connections (which it has to debounce, so as not to add lots of extra dots everywhere), converting these connections to dots and dashes, then transforming those into letters.
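The decoding half of that is easy to sketch in JavaScript (the real thing is Arduino C, and my timing thresholds here are purely illustrative): each debounced press is a duration, short presses become dots, long ones dashes, and the run of symbols is looked up to get a letter.

```javascript
// Dot/dash sequences to letters. (Standard International Morse.)
const MORSE = {
  ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
  "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
  "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
  ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
  "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
  "--..": "Z",
};

// A press shorter than the threshold (ms) counts as a dot.
// 150ms is an illustrative figure, not the value my sketch uses.
function symbolFor(pressMs, dashThreshold = 150) {
  return pressMs < dashThreshold ? "." : "-";
}

// Turn one letter's worth of press durations into a character.
function decode(pressDurations) {
  const symbols = pressDurations.map((ms) => symbolFor(ms)).join("");
  return MORSE[symbols] || "?";
}

console.log(decode([80, 90, 70])); // three short taps: "S"
console.log(decode([300, 60]));    // long then short: "N"
```

On the Arduino the durations come from `millis()` deltas between debounced pin transitions, but the mapping is the same.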
It’s been a pretty important piece of work: without it the telegraph key, which I bought at a local market, would be almost completely useless. There is something quite nice about twittering by tapping out your code like a woodpecker too.
So now, to get it to actually send the tweet, I’m caught in indecision. I could send the completed string to Processing and use a library like Twitter4J to do the actual twittering. But that seems like a shame: tethering a telegraph key to a laptop would almost frustrate the point of the whole thing.
Better would be to get an Ethernet shield and use it to make a standalone twitterbox that would plug into a router. Or better still, I could get a cellular shield and set it up with a SIM card so you could #TworseCode on the go.
Also, because we’re not (yet) as versed in Morse / Tworse code, it would be handy if the box had some LEDs to show you which letter you’d just completed.
So, still some way to go.
I did not tap this blog post out in Morse code. Maybe next time.
I’ve been thinking about this for a while. Robert Hodgin’s Hello Cinder shows, amongst other things, how to create a halftone image by scaling particles based on the colour value of an image below. I wanted to see what it’d look like to place the particles in 3D space, and use an image’s colour value to determine the z-depth, rather than the size, of each particle.
Cinder and C++ certainly give you the processing power to work with a lot of particles, but I wanted to make this a quick, online interactive piece. This version is built with three.js and the canvas tag.
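The mapping at the heart of it is small enough to show as a plain function: a pixel's brightness becomes a particle's z-depth. The constants here are illustrative, not the values the piece uses; in the real thing the result feeds a three.js particle position.

```javascript
// Perceptual brightness of an RGB pixel (Rec. 601 weighting).
function luminance(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Map a grid cell and its sampled pixel to a particle position:
// x/y come from the grid, z from how bright the pixel is.
// spacing and maxDepth are illustrative defaults.
function particlePosition(col, row, pixel, { spacing = 10, maxDepth = 200 } = {}) {
  const z = (luminance(pixel.r, pixel.g, pixel.b) / 255) * maxDepth;
  return { x: col * spacing, y: row * spacing, z };
}

// A white pixel sits at full depth, a black one at zero.
console.log(particlePosition(2, 3, { r: 255, g: 255, b: 255 }));
console.log(particlePosition(2, 3, { r: 0, g: 0, b: 0 }));
```

Swap `z` for a particle radius and you're back at the classic halftone effect Hodgin demonstrates.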
Launch halftone depth map experiment to see it in all its 3D glory. This example pulls in recent Flickr images tagged with “moustache”. Launch it, and click a photo thumbnail to change to that image.
I quite like the low-res look that the 28×28 particle grid gives here, although even that’s enough to have the processor struggling.
I mentioned in this post that, having sketched an idea out in Processing, I was going to make it into a Scriptographer tool. Which I have now done. It’s not perfect, but it’s fun. It makes pretty-little-triangles. Attractive, but meaningless.
Here’s the Scriptographer script: prettyLittleTriangles.
Load in a source image from the menu and click start. It’ll draw the first triangle for you. Grab the yellow pencil tool from the bottom of your tool bar, and start clicking on the triangle to subdivide at will.
I’ve been testing it with images at 1200 × 800px, which seem to work well.
HDR (high dynamic range) images seem to have earned themselves a bit of a bad reputation. This is possibly because when it’s obvious that an image is HDR, it’s usually because the effect has been overdone.
The over-sharp, glowing edges that HDR (and especially fake HDR) can produce are pretty off-putting, but I found out just how useful it can be when photographing the weather as we travelled into Northern BC. This image was a composite of 12 shots: four images merged into a panorama, each of which is an HDR image made up of three different exposures.