
Category Archives: Coding

I’ve never written much about the D language in this blog. I think it could fill many blog posts :).
Let’s just say I’m a huge fan and find it very rewarding to use.

Since 2012 I’ve been working on GFM, a library that makes it easier to create games and other kinds of pretty applications.
https://github.com/p0nce/gfm

Obviously it’s only interesting if you are interested in using D in the first place (though I wonder why you wouldn’t be, once you know more about it).

I believe GFM to be a reasonably useful toolkit for Windows/Mac/Linux based game development.
It is documented and reached 1.0 last weekend.

With the advent of the dub package manager, integrating it into your project is only one line away. You don’t need to use all of it, since GFM has been split into dub sub-packages.

I never did an update for Wormhol, and I was asked to open-source it.

So there it is, along with other stuff: http://github.com/p0nce

The license is very liberal.

I’m not sleeping.

I have a new job which is quite intense and time-consuming. Fueling my desire to get things done in “week-end” projects. So here is what I did:

Javascript

Disclaimer: this is a technical article without fancy images.


I can’t help but find Javascript spectacularly unsuited to game development.

Think about it: no integers, no 32-bit floating-point numbers, overall slowness, uncertainty, and the dynamic side of the language gets in the way of JIT optimization. When you come from C++ and D it feels like a huge downgrade, and yes, I did read Javascript: The Good Parts. The features you need are not there, while those you can’t use are plentiful.

Moreover, the usual guidelines promote a programming style that favors poor performance. Canvas rendering is slow, but physics or AI can quickly become another bottleneck in your game if you rely on best practices.

Luckily you can adapt your code to help the JIT do a better job. This is a domain where I think premature optimization pays off. Using the Firebug profiler, I chose to optimize everything in a consistent way.

Note that this performance guide is based on Firefox 3.6 and the optimizations presented here might be specific to this browser.

Objects

Objects worked better for me when created like this:

var C = function(x, y)
{
    // initialize members, do stuff
};

Then assigning the prototype:

C.prototype = {
    method1: function()
    {
        // do something
    },
    method2: function()
    {
        // do something else
    }
};

Edit: it’s not the one true way, see this jsperf test to make your own measurements (thanks @kuvos).

Allocations

The problem with allocating memory is that it stresses the GC and provokes annoyingly long pauses in your game. I mean several frames being skipped, when a stable FPS is your goal.

Consequently, there is not a single new in the Crajsh game loop. The classic recipes apply: pools, FIFOs, stacks, arrays. The pauses sometimes still happen, though, because I allocate things for each new game.
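To illustrate the pool recipe, here is a minimal sketch (the names are made up for this example, not taken from Crajsh): all objects are allocated once at startup, and the game loop only recycles them.

```javascript
// A minimal object-pool sketch: preallocate everything up front so the
// game loop never calls `new` and never triggers the GC.
function ParticlePool(capacity) {
    this.items = new Array(capacity);
    this.count = 0;
    for (var i = 0; i < capacity; i++)
        this.items[i] = { x: 0, y: 0, alive: false }; // allocated once, at startup
}

ParticlePool.prototype.acquire = function () {
    // Hand out a preallocated slot instead of allocating; null when exhausted.
    return this.count < this.items.length ? this.items[this.count++] : null;
};

ParticlePool.prototype.releaseAll = function () {
    this.count = 0; // recycle every slot for the next frame
};
```

The caller has to reinitialize a slot after acquiring it, since the pool hands back stale objects by design.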

Note: some operations seem to do hidden allocations, e.g. drawing a canvas into another one with a different size in Firefox.

Cache members, break encapsulation

I cached properties by hand wherever possible. Member access is slow in Firefox 3.6, so it’s crucial to get it out of critical loops. If you inline some functions by hand, you’ll be able to cache even more property accesses. I did so.

I also replaced most getters with direct member access. If you prefix all members with an _underscore, it’s still easy to change a member name. This means less work for the JIT and less code.
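Here is a sketch of what the caching looks like in practice (illustrative names, not Crajsh code): the property lookups are hoisted out of the hot loop, and the member is read directly instead of through a getter.

```javascript
// Hypothetical example: hoist property lookups out of a hot loop.
function sumHealth(world) {
    // Slow on old JITs: looking up world.entities and its length on
    // every iteration, and calling a getHealth() getter per entity.
    var entities = world.entities;    // cache the member once
    var n = entities.length;          // cache the length once
    var total = 0;
    for (var i = 0; i < n; i++)
        total += entities[i]._health; // direct member access, no getter
    return total;
}
```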

Symbolic numerical constants can be replaced by a literal like this:


var a = /* tron.MY_CONSTANT */ 4;

That way you can still grep for it, though your code becomes a bit harder to decipher.

Closures

I don’t understand why, but accessing a closure (not just creating one) introduces a slowdown in Firefox 3.6. The symptom is GC pauses. I worked around the problem by removing all closures from my code. The JIT could theoretically optimize closures, but it doesn’t happen.

The Prototype.js bind function can help you eliminate even more closures from your code. I might be wrong, but I did see a speed-up.


Function.prototype.bind = function()
{
    var fn = this,
        args = Array.prototype.slice.call(arguments),
        object = args.shift();
    return function()
    {
        return fn.apply(object,
            args.concat(Array.prototype.slice.call(arguments)));
    };
};
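For illustration, a usage sketch (the names here are made up; the native Function.prototype.bind that later browsers ship behaves the same way as the polyfill above): the method is bound once, outside the loop, instead of creating a fresh closure per callback.

```javascript
// Usage sketch: bind `this` (and leading arguments) once, then reuse
// the single bound function everywhere a callback is needed.
function Timer(interval) { this.interval = interval; }
Timer.prototype.tick = function (frame) {
    return "tick " + frame + " every " + this.interval;
};

var t = new Timer(50);
var onTick = Timer.prototype.tick.bind(t); // one bound function, reused
onTick(1); // same as t.tick(1), with no per-call closure
```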

Arrays

Array literals caused pauses in Firefox 3.6, much like closures and allocations do. It’s sometimes better to have a string literal and convert it to an array.

I create all arrays with a sufficient size; no resizing happens in the game loop. Then the bulk of the processing can be done by iterating over arrays, not object properties.

I also think it’s better to use monomorphic parameters and variables. Make sure the JIT knows the type of each value where it is used. You’ll see a speed-up if you fill your arrays upfront with the right type; don’t leave them undefined.
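As a sketch of both points (the names are illustrative, not from Crajsh): the buffer is created at its final size, every slot is filled with a number so the array stays monomorphic, and the game loop only overwrites existing slots.

```javascript
// Sketch: preallocate a tile buffer at its final size and fill it with
// numbers, so no element is ever `undefined` and the type stays uniform.
function makeTileBuffer(width, height) {
    var tiles = new Array(width * height);
    for (var i = 0; i < tiles.length; i++)
        tiles[i] = 0; // monomorphic: always a number
    return tiles;
}

// In the game loop: overwrite existing slots only, never push or resize.
function setTile(tiles, width, x, y, value) {
    tiles[y * width + x] = value;
}
```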

tl;dr: if you want good performance with Javascript, use a static subset of the language.

Optimizing Crajsh – Part 1 – Rendering

Rendering

Disclaimer: this is a technical article but with some pictures.


HTML logo

Why bother with Canvas?

I tried with Crajsh to make a game that would run in current PC browsers on outdated hardware. Making the game reasonably fast was a design goal from day 1, and I learned a lot about optimizing Javascript and HTMLCanvasElement in the process.

While I had been struggling to make my native OpenGL games work on most of today’s Windows PCs, Crajsh was reported to run correctly on an EeePC and on 10-year-old laptops.

OpenGL is much work

I went to HTML5 mostly because I got tired of the OpenGL drivers out there. The ATI R500 was prematurely abandoned, and some integrated chipsets are still spread through the PC gaming world like a cancer. The stability of OpenGL drivers is not a given, and with some vendors it’s not always safe to direct your users towards the newest drivers.

OpenGL development on the PC is frustrating. There are all these fancy features marked as supported, but you can’t use them because they don’t work with card X + driver Y. The problem is: there are a lot of Xs and a lot of Ys to test for. And eventually you can’t plug in X and can’t find Y.

What happens next: you end up with multiple rendering paths, ugly workarounds and uncertainty. OpenGL means a lot of work in the real world, and I wouldn’t advise anyone to use WebGL until browsers wrap OpenGL ES over software/DirectX where needed.

All of this prepared me to go back to software rendering. Now I think the HTML5 Canvas is suitable for writing cross-platform 2D games, provided you agree to pay the price of optimization.

Know about your users

If your game runs in a browser you can easily get the number of players, how long they play, etc…

A good thing is that they will always play the latest version. They are also more likely to update their browsers than display drivers, which is a nice bonus.

Using the CSS engine for UI layout

Implementing a GUI, or adapting an existing one for a game, can get pretty complicated. The CSS engine and libraries like jQuery UI make it straightforward. I suppose localization is easier too, since the browser has been designed for that, while with native games it’s notoriously tricky.

When NOT to use the Canvas

  • You want deferred rendering, SSAO, shadow maps, etc… anything fancy.
  • You want both high framerates and accurate controls.
  • You want full control over sound, not just playing back samples and choosing volumes.
  • You don’t want to spend time optimizing.
  • You can’t get yourself to use Javascript.

How to make Canvas fast?

Draw calls with Canvas are known to be slow, and there seem to be several ways to work around this sluggishness.

Limit the size of the Canvas

This is what we see in a lot of current HTML5 games. Full updates become possible with a small-enough area. The obvious flaw is that a small game might be less compelling and immersive.

Partial updates

Another solution is to use a larger, screen-sized Canvas and make partial updates. This is suitable for board games, tower defense, Tetris, etc… If you want scrolling, you’ll have to complicate the rendering.

Layer several Canvas

This is what some games like Canvas Rider and the iOS version of Biolab Disaster do. The level is prerendered into a big Canvas, and the player is drawn in another Canvas with a different z-index. Scrolling is achieved by offsetting the background canvas.

This method uses the browser compositor, which is likely to be very efficient. It’s probably the most suitable approach for most 2D games: tiled RPG-style games, platformers, scrolling shoot’em ups…
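A sketch of the layering idea (canvas and helper names are my own, not taken from those games): the background layer is moved with CSS, and computing the offset from the camera position is a pure function.

```javascript
// The level lives in a big prerendered canvas behind a smaller player
// canvas; scrolling just repositions the background layer in CSS.
function backgroundOffset(cameraX, cameraY, tileSize) {
    return {
        left: -(cameraX * tileSize) + "px",
        top:  -(cameraY * tileSize) + "px"
    };
}

// In the page, something along these lines (not run here):
// backgroundCanvas.style.position = "absolute";
// backgroundCanvas.style.zIndex = "0"; // level behind
// playerCanvas.style.zIndex = "1";     // player in front
// var o = backgroundOffset(camera.x, camera.y, 16);
// backgroundCanvas.style.left = o.left;
// backgroundCanvas.style.top = o.top;
```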

However it wouldn’t work for a few games like Crajsh, which has a large tiled world (up to 1024×1024 tiles) and lots of level updates. In this case, maintaining a large Canvas is likely to be slower than maintaining an array of tile indices and a smaller Canvas. The memory consumption would also be too high.


Canvas Rider

Vector graphics

One technically impressive game recently came to my attention: TankWorld.

Its creator implemented 3D-like rendering in Canvas to avoid WebGL shortcomings. Browsers seem to be efficient with vector graphics, probably thanks to SVG.

I think this is the best current method for 3D games until WebGL is ready. An obstacle to overcome is that browsers have varying abilities at drawing triangles; e.g. TankWorld runs faster in Chrome.


TankWorld


The Crajsh way

My first naive version would draw each tile with a call to drawImage. This was obviously slow, so I made the renderer track the tiles currently displayed in the Canvas and update them as they change. As most of the world is empty blue tiles, I thought it would provide quite a speed-up.

How to render a large, rapidly changing tiled world?


This was indeed faster, but would slow down dangerously in crowded areas with lots of non-empty, different tiles.

So I added an optional step which takes the previous Canvas content and offsets it to follow the camera movement. This brought more speed and stability. The method works like this:

  1. Copy the main Canvas to an offscreen Canvas
  2. Blit the latter to the former with a drawImage call and the right offset
  3. With this move, 95% of the tiles are drawn in the right position
  4. Update the tiles which actually changed since the last frame (world updates)
  5. Force the update of some tiles to account for floating UI elements, and player animation in the main Canvas

Reusing the last frame leaves few draw calls to be made.
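The first two steps can be sketched like this (context and canvas names are assumptions, not Crajsh’s actual code): the main canvas is copied aside, then blitted back shifted by the camera movement, leaving only the changed tiles and the newly exposed border to redraw.

```javascript
// Sketch of the scroll-reuse step: copy the main canvas to an offscreen
// buffer, then blit it back with the camera offset (dx, dy in pixels).
function scrollReuse(mainCtx, offscreenCtx, mainCanvas, offscreenCanvas, dx, dy) {
    // 1. Copy the main canvas into the offscreen buffer.
    offscreenCtx.clearRect(0, 0, offscreenCanvas.width, offscreenCanvas.height);
    offscreenCtx.drawImage(mainCanvas, 0, 0);
    // 2. Blit it back shifted by the camera movement; most tiles now
    //    land in the right position without being redrawn.
    mainCtx.drawImage(offscreenCanvas, -dx, -dy);
    // 3.-5. The caller then redraws only the tiles that changed, plus
    //       the border strip that scrolling exposed.
}
```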


There are more tricks going on. Profiling with Firebug reports that the game’s bottleneck is the rendering function, at less than 1 ms on average and 10 ms at worst. As the callback is called every 50 ms, a frame skip is pretty rare.

Overall the game runs at a stable 20 FPS, which is not exactly smooth but works with a plain background. In my opinion a low, stable framerate is better than 60 FPS with random pauses in between. I had a painful trade-off to make between framerate, lagging controls and gameplay. It made the game a bit more difficult than I wanted.


The next article will be about the specifics of Javascript optimization for games.

Optimizing Crajsh – Part 2 – Javascript

This blog has been silent for a while

This is what happened:

  • I’m in stealth mode. A new game will be released on January 11, 2011.
  • I interrupted the stealth mode to enter the JS1K x-mas edition with this demo. It’s pretty uninteresting because I used the exact same trick as Marijn Haverbeke, the first JS1K winner. Also, it doesn’t look the same in all browsers.
  • You can now follow Games From Mars on Facebook. Unlike my Twitter, I will only talk about my games there. My #1 rule is to shut up when I’ve nothing to say.
  • I think I’ve completely recovered from this year’s burn-out. I can now work at full speed again.
  • Vibrant has been in the “indie games” pages of Joystick, a well-known French magazine. It feels great to read something on paper about my work, even if it brought little traffic compared to a blog post.
  • I still like lists.

Stay tuned.

Hypnoglow with <canvas>

I submitted an entry to the js1k contest. You can view it here. It’s not terribly fast, so I suggest using Chrome to view it.

Prior art

What I describe in this post is neither new nor advanced; it’s just my implementation. Mathieu “p01” Henri did <canvas> hypnoglow two years ago, and won the 20-line “Zoom” Javascript contest. Also here. I didn’t know about other Javascript contests before js1k.

Making-of

I wanted my entry to feature a recursive, large-kernel blur. I stumbled upon the W3C page, which says this:

To draw images onto the canvas, the drawImage method can be used.

  • drawImage(image, dx, dy)
  • drawImage(image, dx, dy, dw, dh)
  • drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)

Each of those three can take either an HTMLImageElement, an HTMLCanvasElement, or an HTMLVideoElement for the image argument.

So my initial entry was doing something recursive like that:

context.drawImage(canvas, 4, 4, width-8, height-8, 0, 0, width, height);

where context is the context associated with canvas. This actually looked like crap, but it worked.

Then, while testing in all 4 browsers required by the js1k rules, I found out that only Firefox 3 supported blitting a canvas to itself. It wasn’t working at all in Chrome, Opera and Safari. It wasn’t even working in Firefox 4 beta.

Yet, Inopia/Aardbei‘s entry was using recursive blur with success. Digging into the cleverly size-optimized source, I found that it was using an additional offscreen <canvas> as a temporary buffer.
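A sketch of that workaround (the function and canvas names are mine, not from the entry): the copy is routed through the temporary canvas, so the source and destination are never the same canvas.

```javascript
// One recursive-blur step via a temporary canvas: copy the inner
// rectangle of A into B, stretched to full size, then draw B back over A.
function recursiveBlurStep(ctxA, canvasA, ctxB, canvasB, w, h, border) {
    // 1. Source is A, destination is B: legal in every browser.
    ctxB.drawImage(canvasA, border, border, w - 2 * border, h - 2 * border,
                   0, 0, w, h);
    // 2. Source is B, destination is A: A is never its own source.
    ctxA.drawImage(canvasB, 0, 0);
}
```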

The hidden rule seems to be: you can draw any rectangular area of a canvas A into any rectangular area of a canvas B, provided A is not B. This actually matches how graphics cards work, where in the general case you cannot render into a texture you are using. A lot of things remain possible: mipmap pyramids, feedback effects… I guess using Canvas 2D transformations and clipping regions would lead to even more effects.

ERRATUM: p01 proved the above paragraph wrong; drawing a canvas onto itself seems to work.

Starting again

Taking some inspiration from rez’s work, I made a kaleidoscope-like visual too with a large-kernel blur.

Each frame in the entry follows this algorithm:

  • The main canvas is darkened by a half-transparent black rectangle.
  • Circles are added in various colors and sizes, using black <canvas> shadows.
  • The main canvas is downsampled to an 8x smaller canvas…
  • …which is downsampled to a 2x smaller canvas, three times in a row (much like mipmapping). This amounts to 4 offscreen canvases.
  • The two smaller canvases (32x smaller and 64x smaller) are added to the main canvas to add glow.
  • The two larger offscreen canvases (8x smaller and 16x smaller) are added to the main canvas, but 4 times and with an offset, to add feedback.
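The sizes of the downsampling chain above can be sketched like this (a hypothetical helper, written for illustration): starting 8x smaller than the main canvas, each offscreen canvas halves the previous one.

```javascript
// Compute the sizes of the 4 offscreen canvases in the downsampling
// pyramid: 8x, 16x, 32x and 64x smaller than the main canvas.
function pyramidSizes(width, height) {
    var sizes = [];
    for (var div = 8; div <= 64; div *= 2)
        sizes.push({ w: Math.max(1, width / div),
                     h: Math.max(1, height / div) });
    return sizes;
}
```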

This looked like this:

For those interested, I made an archive with the source, using stats.js from Mr Doob (use it!).

When size-optimizing, I had to change the colors to save bytes and went with short HTML color names (“Red” is 1 byte shorter than “#f00”). So it’s different from the final version.

sRGB

Safari, Chrome and Firefox 4 seem to take the sRGB color space into account when blitting an image.

Firefox 3 and Opera do not. Thus the entry is darker in these browsers, and doesn’t look as good.