04-23-2014, 12:11 PM
Just FYI, Nekotekina's SPU recompiler has been merged now, and when my PR is merged there will be an option to switch between that and the interpreter in the GUI.
Why are you making a PS3 emulator?
04-23-2014, 04:17 PM
(04-23-2014, 01:51 AM)derpf Wrote: You have a memory manager which "interprets" memory accesses? I'm not sure, but it seems you use a lot of dynamic polymorphism for PPU and SPU threads, with virtual methods. If that is the case, no wonder you get a slow interpreter. Unless you are trying to mix two different concepts into one (a win32 thread to represent a PPU/SPU thread, and the PPU/SPU interpreter itself), I see no point in such polymorphism. Even then, you might not need that kind of polymorphism... Try the curiously recurring template pattern (CRTP) as much as possible instead (http://en.wikipedia.org/wiki/Curiously_r...te_pattern)
04-23-2014, 06:38 PM
(This post was last modified: 04-23-2014, 06:42 PM by androidlover.)
Hah, well, keep it up guys! I'd use rpcs3 whether it has a crappy interpreter, a dynarec, a virtualizer, etc.
Btw, I am not a fan of FF13 anyway... I'd be content when rpcs3 can get in-game with something like Rockband or Little Big Planet. Much sooner than FF13, almost guaranteed. A question though: why don't you guys cache the interpreter? I mean, wouldn't that speed things up for now, until the recompilers are finished for everything else? From using a few other emulators, I can say that a cached interpreter runs quite a bit faster than a pure interpreter (one that just interprets, but doesn't cache any instructions). The SuperN64 emulator for Android has three options for the execution core: 1. Pure interpreter, 2. Cached interpreter, 3. Dynamic recompiler. Pure gets 8 FPS with Super Mario 64 on the Galaxy S5, Cached gets 13-26 FPS, and Dynamic holds a baseline 60 or so perfectly. Basically, cached is a fair bit faster, so wouldn't that be an option (last I knew, RPCS3 doesn't have a cached one)?
04-24-2014, 07:41 AM
(04-23-2014, 06:38 PM)androidlover Wrote: Hah, well, do well guys! I'd use rpcs3 whether it has a crappy interpreter, dynarec, virtualizer, etc.

This is a great idea mate.
04-24-2014, 08:20 AM
How do you even make a cached interpreter? Any part of the game code that doesn't have the same input and output every time would just have to be recalculated anyway. And this is probably 90 percent of the code, I would guess. The performance gains would be minimal, and you would have to check every variable value every time, etc...
04-24-2014, 08:40 AM
(04-24-2014, 08:20 AM)ssshadow Wrote: How do you even make a cached interpreter? Any part of the game code that doesn't have the same input and output every time would just have to be recalculated anyway. And this is probably 90 percent of the code I would guess. The performance gains would be minimal, and you would have to check every variable value every time, etc...

If only those damn scientists would just solve the halting problem so us folks can have GTA V at >9000 FPS on the PS3 on PC. But seriously, do explain what you mean by "caching interpreter", will you?
I'm not sure what a caching interpreter is. But it may be a case of JIT (like the one in Xenia), which reduces the overhead of decoding a contiguous set of instructions (usually called a basic block) by decoding them only once and then executing the decoded form every time the address is hit by the instruction fetcher. On certain architectures, you need to use a special instruction to flush the icache, so you can also discard a basic block from the instruction cache that way.
The simplest way is probably something like this:

Code: struct insn { void (*interpret_insn)(Context & context); };

P.S.: decode_insn returns a vector of decoded insns which represents a basic block (there is no branch-like instruction in the block except for the last instruction).

04-25-2014, 01:18 AM

(04-24-2014, 08:42 PM)hlide Wrote: I'm not sure about what a caching interpreter is. But it may be a case of JIT (like the one in Xenia) which allows to reduce the overhead of decoding a contiguous set of instructions (usually called a basic block) by decoding them only once then execute them every time the address is hit by the instruction fetcher. Under certain architectures, you need to use a special instruction to flush icache so you can also discard a basic block from the instruction cache this way.

It looks like all you're doing is caching the decoded instructions and storing them as function objects or something. At least, that's the only thing I could make out of it. I doubt this would bring any benefit worth the amount of memory it uses. A JIT, instead, would take a basic block or an entire procedure, recompile it to the target ISA, and then cache that code so it can simply run it. (And indeed that is a great goal to have -- which rpcs3 will do in the future.)
04-25-2014, 07:02 PM
Don't know exactly what a "cached interpreter" is, but someone mentioned it here in an old thread: http://www.emunewz.net/forum/showthread.php?tid=158608
In post #6, "Chalking Fenterbyte" mentions the same thing, a "cached interpreter", but AlexAltea says there's no need to improve the interpreter and that he'd rather work on the dynamic recompiler. I also wonder what caching an interpreter means and how it is supposed to improve speed.

(04-25-2014, 01:18 AM)derpf Wrote: It looks like all you're doing is caching the decoded instructions and storing them as function objects or something. At least, that's the only thing I could make out of it. I doubt this would bring any benefit over the amount of memory it uses.

That's an oversimplified example, and indeed it lacks at least some arguments so it can get the register indexes directly when interpreting an instruction, instead of re-decoding the opcode. And you can go further by building superblocks instead of basic blocks. It will be faster than a plain interpreter, while taking more memory. The same principle applies to a JIT whose backend may be an interpreter (useful for designing and debugging the JIT), with new backends added later that produce blocks of native instructions to run directly. Xenia has both backends: an architecture-independent one similar to what I described above, and an x64 one. The first exists mostly to help design and debug the JIT (there are several passes which try to optimize the produced code). But I was told by Vanik that the interpreter backend was faster than what AsmJit produced; for this reason he simply ditched AsmJit and made his own JIT with xbyak (x64).