von Neumann Architecture and Bottleneck

Posted on December 1, 2008  Comments (2)

We all use computers a great deal (to write and read this blog, for example) but often have little understanding of how a computer actually works. This post gives some details on the inner workings of your computer.
What Your Computer Does While You Wait

People refer to the bottleneck between the CPU and memory as the von Neumann bottleneck. The front side bus bandwidth, ~10 GB/s, actually looks decent: at that rate you could read all of 8 GB of system memory in less than one second, or read 100 bytes in 10 ns. Sadly this throughput is a theoretical maximum (unlike most of the other figures in the linked article's diagram) and cannot be achieved due to delays in the main RAM circuitry.
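
As a quick sanity check on those numbers, here is a minimal sketch of the arithmetic (assuming 1 GB = 10^9 bytes and that the bus actually sustained its ~10 GB/s peak, which the post notes it cannot):

```python
# Back-of-the-envelope timing for the front side bus figures quoted above.
# Assumption: 1 GB = 10**9 bytes, and the bus sustains its theoretical
# ~10 GB/s peak (which, as the post notes, real DRAM never delivers).

BUS_BANDWIDTH = 10e9  # bytes per second (~10 GB/s front side bus)

def transfer_time(num_bytes):
    """Seconds to move num_bytes at the theoretical peak rate."""
    return num_bytes / BUS_BANDWIDTH

print(f"8 GB of RAM: {transfer_time(8e9):.2f} s")         # ~0.80 s
print(f"100 bytes:   {transfer_time(100) * 1e9:.0f} ns")  # ~10 ns
```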

Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the article's office analogy, in which each CPU cycle is stretched to about one second of human time, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted. It is also why plentiful RAM (for buffering) and fast hard drives are so important to overall system performance.
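
Those fifteen months are not arbitrary; a rough back-calculation recovers them. A minimal sketch, assuming a 3 GHz CPU and a ~13 ms average seek time (typical for a 2008-era laptop drive; neither figure appears in the post itself):

```python
# Rough reconstruction of the "roam the earth" analogy: stretch one CPU
# cycle to one human second and see how long a disk seek then takes.
# Assumptions (not from the post): a 3 GHz CPU and a ~13 ms average seek.

CPU_HZ = 3e9          # 3 GHz clock -> one cycle is ~0.33 ns
SEEK_SECONDS = 13e-3  # ~13 ms average seek on a 2008-era hard drive

cycles_per_seek = SEEK_SECONDS * CPU_HZ  # CPU cycles spent waiting
analogy_seconds = cycles_per_seek        # one cycle == one "second"
analogy_months = analogy_seconds / (30 * 24 * 3600)

print(f"One seek = {cycles_per_seek:,.0f} CPU cycles")       # ~39,000,000
print(f"In the analogy: about {analogy_months:.0f} months")  # ~15 months
```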

Related: Free Harvard Online Course (MP3s) Understanding Computers and the Internet - How Computers Boot Up - The von Neumann Architecture of Computer Systems - Five Scientists Who Made the Modern World (including John von Neumann)

2 Responses to “von Neumann Architecture and Bottleneck”

  1. Charlie M.
    December 21st, 2008 @ 6:30 pm

    The von Neumann architecture is only one of several popular computer architectures that have been used. Tales of the “von Neumann bottleneck” aside, the common attribute of all computer hardware architectures is shared resources.

    Software emerged from the continued use of the Turing paradigm and allowed us to reuse the electronics. That tactic was very necessary in the days when power- and space-hungry vacuum tubes were the leading electronic devices. It is no longer a factor now that we have millions, and perhaps billions, of transistors on a chip. It is now quite possible to jettison the shared-resource bottleneck by dedicating hardware to each function.

    Another problem standing in the way is the CPU, through which all requests for operations must pass (except in some instances of interrupt handling). Centralizing command causes bottlenecks too and has been shown to be ultimately ineffective. Examples of the inadvisability of central command, in addition to computers, are the Vietnam War, the USSR, and Cuba.

    In order to escape shared-resource architecture and central command in control systems, software must go. Aside from the problems mentioned above, software also enforces linear-sequential operation, which is yet another bottleneck to responsive operation.

    Software can thus be seen as a detriment and could be mostly unnecessary in the embedded control systems that encompass 98% of the billions of microprocessors (and derivatives) in existence. The only problem is how to write the software-less control-ware for those systems. I have the answer to that.

  2. Curious Cat Engineering Blog » Solid-State Drives For Laptops
    January 11th, 2009 @ 8:15 pm

    SanDisk claims the SSDs are more than five times faster than the fastest 7,200 revolutions per minute hard-disk drives used in laptops…
