Background
The standard model of a virtual machine (VM) for Java continually interprets a sequence of bytecodes that describe the intent of a Java applet or application. Each time the VM encounters a bytecode, whether or not it has interpreted that same bytecode many times before, it goes through a (relatively) lengthy translation process. Although various techniques exist for speeding up interpretation somewhat, this process still accounts for significantly slower execution than equivalent compiled native code.
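The per-bytecode overhead can be seen in a minimal sketch of an interpreter's dispatch loop. The opcodes, encoding, and stack layout below are invented for illustration; a real JVM interpreter handles roughly 200 bytecodes plus stack frames, objects, and exceptions:

```java
// Sketch of a bytecode dispatch loop. Opcodes and encoding are hypothetical,
// not the real JVM instruction set.
public class TinyInterpreter {
    static final int PUSH = 0; // push the next code word as a constant
    static final int ADD  = 1; // pop two values, push their sum
    static final int MUL  = 2; // pop two values, push their product
    static final int HALT = 3; // stop; top of stack is the result

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            int op = code[pc++];              // fetch
            switch (op) {                     // decode + dispatch: paid on EVERY instruction
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  { int b = stack[--sp], a = stack[--sp]; stack[sp++] = a + b; break; }
                case MUL:  { int b = stack[--sp], a = stack[--sp]; stack[sp++] = a * b; break; }
                case HALT: return stack[sp - 1];
                default:   throw new IllegalStateException("bad opcode " + op);
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4. The equivalent native code is a handful of
        // machine instructions with no fetch/decode loop wrapped around them.
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

Every arithmetic operation here costs a fetch, a decode, and a branch on top of the arithmetic itself, which is the overhead the interpreter pays again on each pass.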
To improve this performance, JIT compilers interact with the VM and compile appropriate bytecode sequences into equivalent native machine code. This happens at run time (that is, just as the particular piece of code is about to be executed) rather than ahead of time, as is traditionally the case with compiled languages such as C and C++. Rather than the VM interpreting the same bytecodes repeatedly, the hardware (the CPU) runs ("interprets") the native code directly. This can yield quite dramatic gains in execution speed. There is, however, a tradeoff.
The time that the JIT compiler takes to compile the bytecodes is added to the overall execution time. For example, when a method executes only once and contains no loops, the cost of compiling it can outweigh the time saved, and overall performance is actually reduced.
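One common way to manage this tradeoff is to interpret a method until an invocation counter shows it is "hot", and only then pay the compilation cost. Adaptive VMs work in this spirit, but the threshold value and bookkeeping below are invented purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of counter-based "compile when hot" dispatch. The threshold and the
// per-method counting scheme are illustrative assumptions, not JVM internals.
public class HotnessTracker {
    static final int COMPILE_THRESHOLD = 10; // hypothetical; real VMs tune this

    private final Map<String, Integer> counts = new HashMap<>();

    /** Returns true exactly when a method crosses the threshold and is worth compiling. */
    boolean recordInvocation(String methodName) {
        int n = counts.merge(methodName, 1, Integer::sum);
        return n == COMPILE_THRESHOLD; // trigger compilation once, on the Nth call
    }

    public static void main(String[] args) {
        HotnessTracker tracker = new HotnessTracker();
        // A method called once (and containing no loops) never triggers compilation...
        System.out.println(tracker.recordInvocation("init"));    // prints false
        // ...while a frequently called method does, on its 10th call.
        boolean compiled = false;
        for (int i = 0; i < 10; i++) {
            compiled = tracker.recordInvocation("hotLoop");
        }
        System.out.println(compiled);                            // prints true
    }
}
```

Under this scheme, the run-once method from the example above stays interpreted and never pays the compilation cost at all.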
Note that JIT compilation also has a potential advantage over traditional compilation in some respects: because it happens on the machine where the program actually runs, it can generate code for that specific CPU (a 486 versus a Pentium, for example). A traditional compiler, at best, targets a generic CPU or "the most likely" one, such as a generic x86 platform.
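The kind of decision this enables can be sketched as run-time selection between a generic and a specialized code path. The `hasWideVectorUnit` flag below is a stand-in: real CPU-feature detection (for example, via CPUID) happens below the Java level, inside the JIT itself:

```java
// Sketch of run-time code selection, the sort of choice a JIT can make
// because it knows the actual CPU. The capability flag is a stand-in for
// real hardware feature detection.
public class RuntimeDispatch {
    interface Summer { long sum(int[] a); }

    static Summer pick(boolean hasWideVectorUnit) {
        if (hasWideVectorUnit) {
            // A JIT would emit vector instructions here; we simulate the
            // specialized path with a 4-way unrolled loop.
            return a -> {
                long s = 0;
                int i = 0;
                for (; i + 4 <= a.length; i += 4)
                    s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
                for (; i < a.length; i++) s += a[i];   // leftover elements
                return s;
            };
        }
        // Generic path: works on any CPU, one element at a time.
        return a -> { long s = 0; for (int v : a) s += v; return s; };
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7};
        System.out.println(pick(true).sum(data));  // prints 28
        System.out.println(pick(false).sum(data)); // prints 28: same result, different code
    }
}
```

Both paths compute the same answer; the point is that only a run-time compiler knows which path the actual hardware can execute fastest.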