(04-25-2014, 01:18 AM)derpf Wrote: It looks like all you're doing is caching the decoded instructions and storing them as function objects or something. At least, that's the only thing I could make out of it. I doubt this would bring any benefit over the amount of memory it uses.
A JIT, instead, would take a basic block or an entire procedure and recompile it to the target ISA, and then cache that code, so it can simply run that. (And indeed that is a great goal to have -- which rpcs3 will do in the future.)
That's an oversimplified example, and indeed it lacks at least the pre-decoded arguments (the register indexes and so on) that would let the handler avoid decoding the opcode again each time the instruction is interpreted. You can go further by building superblocks instead of basic blocks. It will be faster than a plain interpreter, at the cost of more memory. The same principle applies to a JIT, where the first backend may be an interpreter (useful for designing and debugging the JIT), and then new backends are added that produce blocks of native instructions to run directly.
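To make that concrete, here is a minimal sketch of a pre-decoded block cache. The encoding is a toy one and all names are made up (this is not rpcs3 or Xenia code); the point is just that each instruction is decoded once into a handler pointer plus its register indexes, the block is cached by guest PC, and later visits only dispatch.

[code]
#include <cstdint>
#include <unordered_map>
#include <vector>

struct CPU { uint32_t gpr[32]; uint32_t pc; };

// One pre-decoded instruction: handler pointer plus operands extracted up
// front, so executing it never has to look at the raw opcode again.
struct Decoded {
    void (*fn)(CPU&, const Decoded&);
    uint8_t rd, ra, rb;
    uint32_t imm;
};

static void op_add (CPU& c, const Decoded& i) { c.gpr[i.rd] = c.gpr[i.ra] + c.gpr[i.rb]; }
static void op_addi(CPU& c, const Decoded& i) { c.gpr[i.rd] = c.gpr[i.ra] + i.imm; }

// Decode a raw opcode once (toy encoding, not a real ISA).
static Decoded decode(uint32_t op) {
    Decoded d{};
    d.rd  = (op >> 21) & 31;
    d.ra  = (op >> 16) & 31;
    d.rb  = (op >> 11) & 31;
    d.imm =  op        & 0xffff;
    d.fn  = ((op >> 26) == 14) ? op_addi : op_add;   // pick the handler up front
    return d;
}

static bool is_branch(uint32_t op) { return (op >> 26) == 18; }  // toy check

// Decoded basic blocks keyed by guest PC; the decode cost is paid once per block.
static std::unordered_map<uint32_t, std::vector<Decoded>> block_cache;

void run_block(CPU& cpu, const uint32_t* mem) {
    std::vector<Decoded>& block = block_cache[cpu.pc];
    if (block.empty()) {                           // first visit: decode until a branch
        for (uint32_t pc = cpu.pc; !is_branch(mem[pc / 4]); pc += 4)
            block.push_back(decode(mem[pc / 4]));
    }
    for (const Decoded& i : block)                 // later visits: just dispatch
        i.fn(cpu, i);
    cpu.pc += 4 * static_cast<uint32_t>(block.size());   // branch handling omitted
}
[/code]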
Xenia has both backends: an architecture-independent one similar to what I described above, and an x64 one. The first is mostly there to help design and debug the JIT (there are several passes that try to optimize the "produced code"). But I was told by Vanik that the interpreter backend was faster than what AsmJit produced, so he simply ditched AsmJit and wrote his own JIT with xbyak (x64).
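For reference, xbyak is a header-only C++ JIT assembler: you subclass Xbyak::CodeGenerator, emit instructions through its methods, and then call into the buffer you just built. Something along these lines (a generic sample with made-up names, not Xenia's actual backend):

[code]
#include <xbyak/xbyak.h>

// Emit a tiny x64 block at runtime: int add2(int a, int b) { return a + b; }
struct AddGen : Xbyak::CodeGenerator {
    AddGen() {
        // Assumes the System V x64 ABI: a in edi, b in esi, result in eax.
        mov(eax, edi);
        add(eax, esi);
        ret();
    }
};

int main() {
    AddGen gen;
    auto add2 = gen.getCode<int (*)(int, int)>();   // pointer into the emitted buffer
    return add2(2, 3) == 5 ? 0 : 1;
}
[/code]

Hook something like that up to a block cache keyed by guest PC and you get the "recompile the block, cache it, then just run it" path from the quote above.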