It occurred to me that when you write a C program, the compiler knows the source and target platform (for lack of a better term) and can optimize for the machine it is building code for. But in Java, the best the compiler can do is optimize to the bytecode, which may be great, but there's still a layer in the JVM that has to interpret the bytecode, and the farther the bytecode is, translation-wise, from the final machine architecture, the more work has to be done to make it go.
It seems to me that a bytecode optimizer wouldn't be nearly as good, because it has lost all the semantic information available in the original source code (which may already have been butchered by the Java compiler's optimizer).
So is it even possible to approach the efficiency of C with a Java compiler?
4 Answers
#1
Actually, a bytecode JIT compiler can exceed the performance of statically compiled languages in many instances, because it can evaluate the bytecode in real time and in the actual execution context. So the app's performance increases as it continues to run.
#2
What Kevin said. Also, the bytecode optimizer (the JIT) can take advantage of runtime information to perform better optimizations. For instance, it knows which code executes most (hot spots), so it doesn't spend time optimizing code that rarely executes. It can do most of what profile-guided optimization gives you (branch prediction, etc.), but on the fly, for whatever the target processor is. This is why the JVM usually needs to "warm up" before it reaches its best performance.
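As a rough illustration of that warm-up effect, here is a minimal sketch (not a rigorous benchmark; for real measurements you would use a harness like JMH). The class and method names are made up for illustration; the point is just that the same method tends to get faster across rounds as the JIT profiles and compiles it:

    // Minimal warm-up sketch: the same method speeds up once the JIT has
    // profiled and compiled it. Timings are illustrative, not a benchmark.
    public class WarmupDemo {
        static long sumOfSquares(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += (long) i * i;
            }
            return total;
        }

        public static void main(String[] args) {
            for (int round = 1; round <= 5; round++) {
                long start = System.nanoTime();
                long result = sumOfSquares(5_000_000);
                long micros = (System.nanoTime() - start) / 1_000;
                // Early rounds typically run interpreted or lightly compiled;
                // later rounds benefit from the optimizing JIT.
                System.out.println("round " + round + ": " + micros + " us (result=" + result + ")");
            }
        }
    }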
#3
In theory, both optimizers should behave 'identically', as it is standard practice for C/C++ compilers to perform the optimization on the generated assembly and not on the source code, so you've already lost any semantic information.
#4
If you read the bytecode, you may see that the compiler doesn't optimise the code very well. However, the JIT can optimise the code, so this really doesn't matter.
Say you compile the code on an x86 machine and a new architecture comes along; let's call it x64. The same Java binary can take advantage of the new features of that architecture even though it might not have existed when the code was compiled. It means you can take old distributions of libraries and take advantage of the latest hardware-specific optimisations. You cannot do this with C/C++.
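As a rough illustration of this idea (assuming a HotSpot-style JVM), methods such as Integer.bitCount are treated as JIT intrinsics: the same old class file can be compiled down to a single hardware POPCNT instruction on CPUs that have one, and to a bit-twiddling fallback on CPUs that don't:

    // Sketch: Integer.bitCount is an intrinsic on HotSpot-style JVMs, so this
    // bytecode can use newer instructions than existed when it was compiled.
    public class PopCountDemo {
        public static void main(String[] args) {
            long totalBits = 0;
            for (int i = 0; i < 100_000_000; i++) {
                totalBits += Integer.bitCount(i);
            }
            System.out.println("total bits set: " + totalBits);
        }
    }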
Java can inline calls to virtual methods. Say you have a virtual method with many different possible implementations, but in reality only one or two of them are called most of the time. The JIT can detect this and inline up to two method implementations, yet still behave correctly if you happen to call another implementation. You cannot do this with C/C++.
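A minimal sketch of the shape of code this applies to (the class and method names are made up for illustration): the virtual call site below only ever sees two concrete implementations at runtime, so a HotSpot-style JIT can inline both behind a cheap type check and deoptimize if a third implementation ever shows up:

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: only two receiver types (Circle, Square) reach the area() call
    // site, so the JIT can inline both bodies behind a type check.
    public class InlineDemo {
        interface Shape { double area(); }

        static final class Circle implements Shape {
            final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        static final class Square implements Shape {
            final double s;
            Square(double s) { this.s = s; }
            public double area() { return s * s; }
        }

        public static void main(String[] args) {
            ThreadLocalRandom rnd = ThreadLocalRandom.current();
            double total = 0;
            for (int i = 0; i < 10_000_000; i++) {
                Shape shape = rnd.nextBoolean() ? new Circle(1.0) : new Square(2.0);
                total += shape.area();   // hot virtual call with two receiver types
            }
            System.out.println(total);
        }
    }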
Java 7 supports escape analysis for locked/synchronised objects: it can detect that an object is only used in a local context and drop synchronization for that object. In current versions of Java, it can also detect when two consecutive methods lock the same object and keep the lock held between them (rather than release and re-acquire it). You cannot do this with C/C++ because there isn't a language-level understanding of locking.
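A minimal sketch of where both of these kick in, assuming a HotSpot-style JVM with escape analysis enabled (the class and method names are made up for illustration): the StringBuffer never escapes the method, so the locks taken by its synchronized append() calls can be elided, and the back-to-back locks on the same object are also candidates for lock coarsening:

    // Sketch: the StringBuffer never escapes label(), so escape analysis lets
    // the JIT elide the monitor operations of its synchronized append() calls;
    // consecutive locks on the same object can also be coarsened into one.
    public class LockElisionDemo {
        static String label(int id, String name) {
            StringBuffer sb = new StringBuffer();  // never escapes this method
            sb.append(id);                         // each append() is synchronized
            sb.append(':');
            sb.append(name);
            return sb.toString();
        }

        public static void main(String[] args) {
            long length = 0;
            for (int i = 0; i < 1_000_000; i++) {
                length += label(i, "item").length();
            }
            System.out.println(length);
        }
    }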