The future of computing is going to belong to the nimble mammals that are dynamic, interpreted, and virtual machine-based, instead of the dinosaurs that are compiled, linked, and mostly static.
With the current generation of multi-core, high-performance CPUs, there is so much processor speed available that the whole idea of statically optimized native executables seems like yesterday's solution. C and C++, along with assembler (which was mostly abandoned quite a while ago), have been popular for more than a generation (I used C for the first time in 1984 and C++ around 1990). Languages compiled to optimized native code were necessary while each generation of processors was barely keeping ahead of the demand for performance.
The earliest VM (virtual machine) language implementations were convenient but hardly good performers, as the overhead of running the virtual machine was significant on the processors of the day. My earliest exposure to these was the UCSD P-system, which allowed you to run Pascal on a variety of systems. I believe at one time Microsoft shipped applications built on a p-code interpreter. When you think of VM-based languages, the most common are Java, which debuted in the mid-1990s, and C#, which debuted more recently.
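To make the per-instruction overhead concrete, here is a toy stack-based bytecode interpreter. The opcodes and the sample program are my own invention for illustration, not the actual P-system instruction set; the point is that every instruction pays for a fetch-and-dispatch step that native code does not.

```java
// A toy stack-machine interpreter. Every instruction goes through the
// fetch/decode/dispatch loop below -- the overhead that made early
// p-code systems slow on the processors of their day.
public class TinyVM {
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0;   // stack pointer
        int pc = 0;   // program counter
        while (true) {
            switch (code[pc++]) {          // fetch and decode
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; sp--; break;
                case HALT: return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // Bytecode for (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

A native compiler would reduce that whole program to a single constant; the interpreter instead executes nine dispatches to get the same answer.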
Early versions of Java were slow and hardly any competition for native languages. Over the years since its release, improvements in the runtime (which compiles the virtual bytecode to native code on the fly) and better processor performance have allowed the language runtime to meet, and in many instances surpass, the performance of statically optimized native code. In 2002 some former co-workers of mine built an F-22 simulator for Lockheed, written entirely in Java, that ran on commodity dual-CPU Linux hardware and supported a functional cockpit mockup along with most of the onboard systems.
One of the benefits of a language compiled or interpreted to an intermediate code (instead of directly to native) is that the runtime system can watch the application as it runs and optimize the final conversion to native code based on what is actually happening. Static native languages are generally optimized at compile time based on an analysis of the code alone. Optimizations broader in scope than a narrow view of individual functions or lines are difficult, because the actual usage of the application code is unknown to the optimizer. If there is sufficient performance available at runtime, the dynamic optimizer can spend some bandwidth observing usage and directly change the executing code to optimize for time.
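A minimal sketch of the kind of call site this applies to (the class and method names here are my own, purely illustrative): a virtual call in a hot loop. A static compiler looking at sumAreas alone cannot know what concrete type will arrive, but the JVM's profiler can observe that the receiver is always a Circle, devirtualize the call, and inline it.

```java
// Illustrative only: a hot virtual call site that a profiling JIT
// can devirtualize and inline once it observes the receiver type.
interface Shape { double area(); }

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class HotLoop {
    // Inside this method the static type of 's' is just Shape, so a
    // compile-time optimizer must emit a virtual dispatch. At runtime
    // the JIT sees every call hit Circle.area() and can inline it.
    static double sumAreas(Shape s, int n) {
        double total = 0;
        for (int i = 0; i < n; i++) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumAreas(new Circle(1.0), 1_000_000));
    }
}
```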
Of course, running a virtual machine that hides the underlying native environment is only one step toward the future. Optimizing the conversion of intermediate code to native code is still a low-level event, mostly out of the programmer's view. Higher-level abstractions that benefit the programmer directly optimize the coding-time performance of actually writing an application. In the long run, the cost of writing an application lies mostly in the people who design, code, and test it.
Dynamic languages, where the runtime behavior of the application can be manipulated by the programmer, and interpreted languages, where the runtime executes the source code of the program directly, both give the programmer more tools to express the requirements of the application, and both make development easier. Dynamism and interpretation can be used together, and along with runtime virtual machines they are a potent combination for the future.
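Even Java, though not usually called a dynamic language, exposes a slice of this runtime manipulation through reflection. The sketch below (class and method names are invented for the example) picks which method to call from a string at runtime, a decision a statically linked program would have to bake in at build time.

```java
import java.lang.reflect.Method;

// Illustrative sketch: choosing a method to invoke at runtime by name,
// using Java's reflection API.
public class DynamicCall {
    public static String greet()    { return "hello"; }
    public static String farewell() { return "goodbye"; }

    // Look up a public static method on this class by name and call it.
    static String invokeByName(String name) throws Exception {
        Method m = DynamicCall.class.getMethod(name);
        return (String) m.invoke(null);
    }

    public static void main(String[] args) throws Exception {
        // The method name could just as easily come from a config
        // file or user input, unknown until the program is running.
        System.out.println(invokeByName("greet"));    // prints hello
        System.out.println(invokeByName("farewell")); // prints goodbye
    }
}
```

Fully dynamic languages go much further than this, of course, but the example shows the basic shift: the set of behaviors is decided while the program runs, not when it is compiled and linked.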
In the following articles I intend to discuss in more detail examples of several languages that fall into one or more of these categories.