Intel's Hyper-Threading Technology: Free Performance?
by Anand Lal Shimpi on January 14, 2002 2:04 PM EST - Posted in CPUs
What makes an application run? What tells your CPU which instructions to execute, and on what data? This information is contained in the compiled code of the application you're running; whenever you (the user) give the application input, the application in turn dispatches threads to your CPU telling it what to do in response. To the CPU, a thread is simply a collection of instructions that must be executed. When you get hit by a rocket in Quake III Arena or click Open in Microsoft Word, the CPU is sent a set of instructions to execute.
The CPU knows exactly where to get these instructions from because of a little-mentioned register known as the Program Counter (PC). The PC points to the location in memory where the next instruction to be executed is stored; when a thread is sent to the CPU, the address of that thread's first instruction is loaded into the PC so the CPU knows where to start executing. After every instruction, the PC is incremented, and this process continues until the end of the thread. When the thread is done executing, the PC is overwritten with the location of the next instruction to be operated on. Threads can interrupt one another, forcing the CPU to store the current value of the PC on a stack and load a new value into the PC. The one limitation that remains, however, is that only one thread can be executed at any given time.
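To make the idea concrete, here is a minimal, purely illustrative sketch in C of a fetch-execute loop driven by a program counter. It is not how an x86 CPU actually fetches instructions (a real PC walks variable-length instructions in memory, and real thread switches are driven by interrupts); the array of function pointers and the two hypothetical "threads" are stand-ins to show the PC advancing and only one thread running at a time.

```c
/* Illustrative sketch only: a "PC" stepping through a list of
 * instructions, one thread at a time. Not real x86 behavior. */
#include <stdio.h>

typedef void (*instruction)(void);

static void op_a(void) { puts("instruction A"); }
static void op_b(void) { puts("instruction B"); }

/* Two hypothetical "threads": NULL-terminated lists of instructions. */
static instruction thread1[] = { op_a, op_b, NULL };
static instruction thread2[] = { op_b, op_a, NULL };

static void run(instruction *code)
{
    size_t pc = 0;              /* the program counter */
    while (code[pc] != NULL) {
        code[pc]();             /* execute the instruction the PC points at */
        pc++;                   /* increment the PC to the next instruction */
    }
}

int main(void)
{
    /* Only one thread executes at any given time; "switching" threads
     * amounts to saving the current PC and loading a new one. */
    run(thread1);
    run(thread2);
    return 0;
}
```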
There is a commonly known way around this: use two CPUs. If each CPU can execute one thread at a time, two CPUs can execute two threads. There are numerous problems with this approach, many of which you should already be familiar with. For starters, multiple CPUs are more expensive than one. There is also overhead associated with managing two CPUs and with sharing resources between them; for example, until the release of the AMD 760MP chipset, all x86 platforms with multiprocessor support split the available FSB bandwidth between all of the CPUs. But the biggest drawback of all is that applications and the operating system must be capable of supporting this type of execution. Being able to dispatch multiple execution threads to hardware is generally referred to as multithreading; OS support is required to enable multithreading, while in most cases application support is necessary to gain a tangible performance increase from having multiple processors (see the sketch below). Keep that in mind as we talk about another approach to the same goal of executing more than one thread at a time: it's time to introduce Intel's Hyper-Threading technology.
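As a rough idea of what "application support" means in practice, the hedged sketch below uses POSIX threads to dispatch two execution threads. On a single CPU the operating system simply interleaves them; on two CPUs (or on the two logical processors Hyper-Threading exposes) they can run at the same time. The worker function and its placeholder workload are invented for illustration.

```c
/* Minimal sketch of an application written to use two threads.
 * Build with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    long sum = 0;
    for (long i = 0; i < 10000000; i++)   /* placeholder workload */
        sum += i;
    printf("thread %ld done (sum=%ld)\n", id, sum);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Dispatch two execution threads; the OS schedules them onto
     * whatever processors (physical or logical) are available. */
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_create(&t2, NULL, worker, (void *)2);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Without this kind of explicit threading in the application, a second processor (physical or logical) has nothing extra to execute, which is why application support matters for a tangible speedup.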