From: Chris Hecker <checker@d...>
Subject: Re: [Caml-list] Ocaml and games

Replying to my own post here...

>- Linking of bytecode and native code would be good.  Currently if you're using bytecode (debugging, compile speed, toplevel, etc.) and you need to optimize something, dropping to C is your only choice, which is silly.

I thought I might explain this one a bit in case it wasn't clear why you might be _forced_ to optimize something (as opposed to just wanting to).  This is intimately related to the "debugger for native code" issue as well.  I'm not trying to be condescending here; I just don't know whether everybody on the list is used to writing time-dependent interactive code, and the thing I'm going to describe is kind of surprising at first (at least it was for me when I figured it out, and it seems to be for other game developers too).  And I'm not trying to beat the dead horse (since we know how Xavier loves that sort of thing :) of this specific bytecode-native linking thing; I'm just using it as a jumping-off point.  So, anyway, if you already know this, hit delete; if not, maybe it's useful to know...

The idea is that if you have a system with an interactive feedback loop in it that has a certain type of time-dependent behavior, then there's a floor beneath which your performance can't drop or things don't work.  I don't mean "things go slow so it's annoying"; I mean the system actually stops working.  All that's necessary is a system whose time to complete is proportional to the time over which you'd like to run it.  So, for example, physics simulation is like this:  if you ask a [nontrivial] physics simulator to simulate from t0 to t1, the bigger t1-t0 is, the longer it will take to give you an answer back.  Graphics can behave like this as well if you've got stuff like viewer-dependent level-of-detail: the longer your timestep is, the farther the camera moves, so the more level-of-detail changes your system needs to make, so the longer it takes to render the frame.
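To make the "cost proportional to interval" point concrete, here's a minimal OCaml sketch (my own illustration, not from the original post): a fixed-step Euler integrator takes (t1-t0)/h steps, so the work it does is directly proportional to the span being simulated.  The step size `h`, the state, and the derivative function are all made up for the example.

```ocaml
(* Fixed-step Euler integration: simulating a span twice as long
   takes twice as many steps.  Returns the final state and the
   step count so the proportionality is visible. *)
let integrate ~h ~deriv ~t0 ~t1 x0 =
  let steps = int_of_float ((t1 -. t0) /. h) in
  let rec go i x =
    if i = steps then (x, steps)
    else go (i + 1) (x +. h *. deriv x)
  in
  go 0 x0

let () =
  (* Exponential decay x' = -x; h = 0.125 is exact in binary floating point. *)
  let _, n1 = integrate ~h:0.125 ~deriv:(fun x -> -.x) ~t0:0.0 ~t1:1.0 1.0 in
  let _, n2 = integrate ~h:0.125 ~deriv:(fun x -> -.x) ~t0:0.0 ~t1:2.0 1.0 in
  Printf.printf "steps for a 1s span: %d, for a 2s span: %d\n" n1 n2
```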

If you've got this kind of feedback loop in your application, and if your framerate drops below a certain magic threshold (that's dependent on the system, obviously), then it takes longer to generate each successive frame and things grind to a halt.  In other words, you have the start time of t0 and the end time of t1, and you generate the frame at t1, and if it took longer than t1-t0 seconds, then t2-t1 is going to be even longer, so it takes even longer again, and you get this:

t1-t0 < t2-t1 < t3-t2 ...

And you're screwed.  This is known among some in the game industry as "The Evil Feedback Loop."  :)  You can sometimes work around it with fixed steps and whatnot, but if you're trying to debug actual behavior where the user is interacting with the system, you're pretty much stuck with it at some point.
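The divergence above is easy to see numerically.  Here's a toy OCaml model (my sketch, with hypothetical constants): assume the cost to generate a frame covering a span dt is frame_cost dt = fixed + rate * dt, and that each frame must then cover the wall-clock time the previous one took, so dt_{n+1} = frame_cost dt_n.  If rate < 1 the loop settles at the fixed point fixed / (1 - rate); if rate >= 1 each frame is longer than the last and you're in the Evil Feedback Loop.

```ocaml
(* Model: the next frame's timestep is the time the current frame
   took to produce.  [fixed] is per-frame overhead, [rate] is how
   much the cost grows with the simulated span. *)
let frame_cost ~fixed ~rate dt = fixed +. rate *. dt

(* Iterate the loop for [frames] frames, collecting each dt. *)
let rec run ~fixed ~rate ~frames dt acc =
  if frames = 0 then List.rev acc
  else
    let dt' = frame_cost ~fixed ~rate dt in
    run ~fixed ~rate ~frames:(frames - 1) dt' (dt' :: acc)

let () =
  (* Fast machine: cost grows slower than real time; dt converges
     to the fixed point 0.005 /. (1.0 -. 0.5) = 0.01s. *)
  let stable = run ~fixed:0.005 ~rate:0.5 ~frames:10 0.016 [] in
  (* Too slow: every frame takes longer than the span it covers. *)
  let evil = run ~fixed:0.005 ~rate:1.5 ~frames:10 0.016 [] in
  Printf.printf "stable last dt: %.4f\n" (List.nth stable 9);
  Printf.printf "evil last dt:   %.4f (growing every frame)\n" (List.nth evil 9)
```

The `rate` constant is the whole story: it's the ratio of frame cost to frame span, and crossing 1.0 is the "magic threshold" in the text.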

So, this relates to byte/native linking and native debugging because if you're using the bytecode compiler for debugging (or any of the other reasons) and you run into this problem, but the native code is fast enough, you have a nasty decision to make:  you can either rewrite the slow part in C so you can keep using bytecode, or you can move to native code and give up the debugger.

Anyway, hopefully that is interesting to somebody...

