PCSX2 Documentation/Chroot and 64-bit Linux

More and more Linux users have 64-bit distributions. The purpose of this wiki page is to explain the current situation and possible solutions.
Status updates on the following distributions are welcome (as are others we may have overlooked): Debian / Ubuntu / Fedora / Gentoo / ArchLinux / Mandriva / Opensuse / Slackware.


==Introduction==
Pcsx2 does not support running as a 64-bit application, for several reasons:
*The code is architecture-dependent (for performance reasons).
*It would need a complete rewrite of the emulator core.
*There would be no gain in performance; it could actually be slower.
*There is a lack of programmers both willing and able to write and maintain 64-bit code and keep it up to date with the 32-bit code (which has been largely rewritten since the old 64-bit code was removed).


While it is possible to run a 32-bit program on a 64-bit operating system, you also need 32-bit versions of all the libraries it uses. With the libraries Pcsx2 uses, this can be impractical, which is why we generally recommend a 32-bit chroot.


A 32-bit chroot is essentially a full 32-bit copy of Linux sitting in a folder on your hard drive, which your 64-bit Linux system can cleverly start up and run programs from.
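As a rough illustration of what that looks like in practice, here is a sketch of building such a chroot on a Debian or Ubuntu host with debootstrap. The directory path and suite name are example values, not anything this page prescribes; building the chroot requires root, so the script only prints a note otherwise.

```shell
#!/bin/sh
# Sketch: creating a 32-bit (i386) chroot on a Debian/Ubuntu host.
# CHROOT_DIR and the "stable" suite are example choices, not project defaults.
CHROOT_DIR=/srv/pcsx2-32

if [ "$(id -u)" -ne 0 ] || ! command -v debootstrap >/dev/null 2>&1; then
    echo "note: building the chroot needs root and debootstrap; steps shown below"
else
    # Install a minimal 32-bit Debian system into $CHROOT_DIR
    debootstrap --arch=i386 stable "$CHROOT_DIR" http://deb.debian.org/debian
fi

# Once built, entering it runs programs against a pure 32-bit userland:
#   chroot "$CHROOT_DIR" /bin/bash
echo "the 32-bit system would live in $CHROOT_DIR"
```

Inside the chroot, every library a 32-bit program loads comes from the 32-bit copy of Linux, which sidesteps the mixed-library problem described above.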
==Pcsx2 is not 64-bit compatible by design==
Most software only needs to be recompiled to support a new architecture; portability was one goal of the C language over assembly. However, some types of software do not follow this rule: virtual machines, dynamic recompilers, JIT compilers, and compilers in general. These programs directly generate assembly code for the targeted processor, and they do so in a few different ways:
*In some cases, the generated assembly code is static: it is generated once and executed later. The major example is gcc, where the code that generates x86 instructions is completely different from, say, the code targeting PowerPC. Each architecture has its own backend.
*Then there are programs that dynamically generate instructions for the processor as they run: the instructions are produced during execution, and the program then executes them. In these cases the generator could be portable, but the instructions it generates are not. Some examples are the Java virtual machine, the JavaScript and Flash virtual machines (which should be familiar to anyone who has tried to play Flash in a 64-bit browser), Perl's virtual machine, Python's virtual machine, and, of course, Pcsx2. All of these must be rewritten for each new architecture supported: x86, amd64, PowerPC...
So, why would we go through all this hassle in the first place? The main reason is speed: a virtual machine with a JIT compiler is roughly 10 to 100 times faster than a basic interpreter of the language. Would it be faster still to code everything in assembly? Well, maybe. But there are a few reasons why that wouldn't be a good idea:
 
*Sometimes it is not possible (or very difficult) to port the original code.
*You can perform optimizations based on run-time values, which cannot easily be done otherwise.
*Assembly is not exactly readable. Having code that is easy to read and debug can be worth a slight speed decrease.
*Additionally, sometimes the compiler is already doing a good job on the C code, and there is just no reason to optimize.
*In some cases, optimization isn't even desirable: it may over-complicate things and let bugs creep in, for code that is rarely executed and not time-critical. There is at least one piece of code in Pcsx2 with a note next to it saying not to optimize it.
Another reason is portability. While the generated code itself is not portable, it would in theory be possible to write multiple generators for different processors. We may not be doing that currently, but we don't have to rule it out for the future. That is the beauty of virtual machines, and why you can run Java programs pretty much anywhere, for example.
 
For more information, you may want to look up virtual machines on Google or Wikipedia. Other topics of interest might be Infocom and their "Z-Machine" virtual machine, code optimization, and portability; a lot of things, really. Going into more detail would be beyond the scope of this document, however interesting it might be...

