Posts about electrical engineering

Graphene: Engineered Carbon

A material for all seasons

Graphene, a form of the element carbon just a single atom thick, was identified as a theoretical possibility as early as 1947.

Its unique electrical characteristics could make graphene the successor to silicon in a whole new generation of microchips, surmounting basic physical constraints limiting the further development of ever-smaller, ever-faster silicon chips.

But that’s only one of the material’s potential applications. Because of its single-atom thickness, pure graphene is transparent, and can be used to make transparent electrodes for light-based applications such as light-emitting diodes (LEDs) or improved solar cells.

Graphene could also substitute for copper to make the electrical connections between computer chips and other electronic devices, providing much lower resistance and thus generating less heat. And it also has potential uses in quantum-based electronic devices that could enable a new generation of computation and processing.

“The field is really in its infancy,” says Michael Strano, associate professor of chemical engineering who has been investigating the chemical properties of graphene. “I don’t think there’s any other material like this.”

The mobility of electrons in graphene — a measure of how easily electrons can flow within it — is by far the highest of any known material. So is its strength, which is, pound for pound, 200 times that of steel. Yet like its cousin diamond, it is a remarkably simple material, composed of nothing but carbon atoms arranged in a simple, regular pattern.

“It’s the most extreme material you can think of,” says Tomás Palacios, an MIT professor of electrical engineering. “For many years, people thought it was an impossible material that couldn’t exist in nature, but people have been studying it from a theoretical point of view for more than 60 years.”

Related: Very Cool Wearable Computing Gadget from MIT - Nanotechnology Breakthroughs for Computer Chips - Cost Efficient Solar Dish by MIT Students - Superconducting Surprise

Google Server Hardware Design

Ben Jai, Google Server Platform Architect, discusses Google’s server hardware design. Google has designed its own servers since the beginning and shared details of that design this week. As we have written previously, Google has focused a great deal on improving power efficiency.

Google uncloaks once-secret server

Google’s big surprise: each server has its own 12-volt battery to supply power if there’s a problem with the main source of electricity. The company also revealed for the first time that since 2005, its data centers have been composed of standard shipping containers, each with 1,160 servers and a power consumption that can reach 250 kilowatts.

Efficiency is another financial factor. Large UPSs reach at best 92 to 95 percent efficiency, meaning that 5 to 8 percent of the power is squandered. The server-mounted batteries do better, Jai said: “We were able to measure our actual usage to greater than 99.9 percent efficiency.”
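To put those efficiency figures in perspective, here is a quick back-of-the-envelope calculation. The 250 kW container figure comes from the article; the rest is simple arithmetic:

```python
# Power lost to conversion inefficiency for a 250 kW container,
# comparing a typical large UPS (92-95% efficient) with Google's
# server-mounted batteries (>99.9% efficient, per Jai).
load_kw = 250.0

def wasted_kw(load_kw, efficiency):
    """Input power drawn minus useful load, i.e. the conversion loss."""
    return load_kw / efficiency - load_kw

ups_loss = wasted_kw(load_kw, 0.92)       # worst-case large UPS
battery_loss = wasted_kw(load_kw, 0.999)  # server-mounted battery

print(f"UPS at 92% efficiency:      {ups_loss:.1f} kW lost")
print(f"Battery at 99.9% efficiency: {battery_loss:.2f} kW lost")
```

Roughly 22 kW of continuous loss per container versus a quarter of a kilowatt: across many containers, running around the clock, that difference is real money.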

Related: Data Center Energy Needs - Reduce Computer Waste - Cost of Powering Your PC - Curious Cat Science and Engineering Search

The Chip That Designs Itself

The chip that designs itself, by Clive Davidson, 1998

Adrian Thompson, who works at the university’s Centre for Computational Neuroscience and Robotics, came up with the idea of self-designing circuits while thinking about building neural network chips. A graduate in microelectronics, he joined the centre four years ago to pursue a PhD in neural networks and robotics.

To get the experiment started, he created an initial population of 50 random circuit designs coded as binary strings. The genetic algorithm, running on a standard PC, downloaded each design to a field-programmable gate array (FPGA) and tested it with two tones generated by the PC’s sound card. At first there was almost no evidence of any ability to discriminate between the two tones, so the genetic algorithm simply selected circuits that did not appear to behave entirely randomly. The fittest circuit in the first generation was one that output a steady five-volt signal no matter which tone it heard.

By generation 220 there was some sign of improvement. The fittest circuit could produce an output that mimicked the input – waveforms that corresponded to the 1 kHz or 10 kHz tones – but not a steady zero- or five-volt output.

By generation 650, some evolved circuits gave a steady output to one tone but not the other. It took almost another 1,000 generations to find circuits that could give approximately the right output and another 1,000 to get accurate results. However, there were still some glitches in the results and it took until generation 4,100 for these to disappear. The genetic algorithm was allowed to run for a further 1,000 generations but there were no further changes.
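The loop Davidson describes is a plain genetic algorithm: score a population, keep the fittest, breed and mutate, repeat. Here is a minimal sketch of that loop; the bitstring length, population size, and rates are invented for illustration, and a toy fitness function stands in for actually downloading a design to the FPGA and scoring its response to the tones:

```python
import random

def evolve(fitness, length=50, pop_size=50, generations=200,
           mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm over fixed-length bitstrings.

    `fitness` maps a bitstring (list of 0/1) to a score; in Thompson's
    experiment this step was downloading the design to the FPGA and
    measuring its response to the two test tones.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):             # point mutations
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in fitness: count of 1 bits (the classic "all-ones" problem).
best = evolve(fitness=sum)
print(sum(best))  # with these settings the GA converges to near-optimal
```

The striking part of Thompson’s result is not the algorithm, which is textbook, but the fitness function: evaluating candidate designs on real silicon let evolution exploit physical properties of the chip that no simulator modeled.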

See Adrian Thompson’s home page (Department of Informatics, University of Sussex) for more on evolutionary electronics, including the paper “Scrubbing away transients and Jiggling around the permanent: Long survival of FPGA systems through evolutionary self-repair”:

Mission operation is never interrupted. The repair circuitry is sufficiently small that a pair could mutually repair each other. A minimal evolutionary algorithm is used during permanent fault self-repair. Reliability analysis of the studied case shows the system has a 0.99 probability of surviving 17 times the mean time to local permanent fault arrival. Such a system would be 0.99 probable to survive 100 years with one fault every 6 years.

Very cool.
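The headline numbers are at least self-consistent: one fault every 6 years over 100 years is about 17 mean fault intervals. A quick sanity check, assuming Poisson fault arrivals (my assumption, not stated in the quote), shows how much the self-repair mechanism buys:

```python
import math

mtbf_years = 6.0        # mean time between permanent faults (from the quote)
horizon_years = 100.0   # survival horizon (from the quote)
intervals = horizon_years / mtbf_years  # ~16.7, i.e. the "17 times"

# Without any repair, assuming Poisson fault arrivals, survival is the
# probability of seeing zero faults over the entire horizon:
p_no_repair = math.exp(-intervals)

print(f"{intervals:.1f} mean fault intervals in {horizon_years:.0f} years")
print(f"P(survive) with no repair:   {p_no_repair:.1e}")
print("P(survive) with self-repair: 0.99 (claimed in the paper)")
```

Without repair the survival probability is on the order of one in ten million; the evolutionary self-repair lifts it to 0.99.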

Related: Evolutionary Design - Invention Machine - Evo-Devo

von Neumann Architecture and Bottleneck

We each use computers a great deal (to write and read this blog, for example) but often have little understanding of how a computer actually works. This post gives some details on the inner workings of your computer.
What Your Computer Does While You Wait

People refer to the bottleneck between CPU and memory as the von Neumann bottleneck. Now, the front side bus bandwidth, ~10 GB/s, actually looks decent. At that rate, you could read all 8 GB of system memory in less than one second, or read 100 bytes in 10 ns. Sadly this throughput is a theoretical maximum (unlike most others in the diagram) and cannot be achieved due to delays in the main RAM circuitry.
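The two figures in that excerpt follow directly from the quoted 10 GB/s peak rate:

```python
# Back-of-the-envelope check of the front side bus numbers above
# (10 GB/s is the theoretical peak quoted in the post).
fsb_bytes_per_sec = 10e9

# Time to stream all 8 GB of system memory at the peak rate:
full_read_sec = 8e9 / fsb_bytes_per_sec        # "less than one second"

# Time to move 100 bytes at the peak rate:
small_read_ns = 100 / fsb_bytes_per_sec * 1e9  # in nanoseconds

print(f"8 GB at peak rate:      {full_read_sec:.1f} s")
print(f"100 bytes at peak rate: {small_read_ns:.0f} ns")
```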

Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the office analogy, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted. It is also why plentiful RAM (for buffering) and fast hard drives are so important for overall system performance.
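The “roam the earth for one year and three months” figure comes from scaling everything up so that one CPU clock cycle lasts one human second. A sketch of that scaling; the ~3 GHz clock and ~13 ms average seek time are assumed values for illustration, not taken from the post:

```python
cpu_cycle_sec = 1 / 3e9   # one cycle of an assumed 3 GHz CPU
seek_sec = 13e-3          # assumed average hard-drive seek time

# Scale factor that stretches one CPU cycle into one human second:
scale = 1.0 / cpu_cycle_sec

scaled_seek_years = seek_sec * scale / (365 * 24 * 3600)
print(f"A disk seek at human scale: {scaled_seek_years:.2f} years")
```

With those assumptions a single seek scales to roughly a year and a quarter, which matches the office analogy in the excerpt.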

Related: Free Harvard Online Course (MP3s) Understanding Computers and the Internet - How Computers Boot Up - The von Neumann Architecture of Computer Systems - Five Scientists Who Made the Modern World (including John von Neumann)