Microservers

Average users don't have much to do with servers directly, but with increasing mobility and the growth of the Internet and cloud computing, the importance of servers will also increase. They will support applications with computing, storage or collaboration capabilities. We already use such applications, like search engines, video servers or voice recognition. More is to come, like live translation between two people speaking different languages.

Not surprisingly, strong growth in cloud computing services is expected, which will - and has to - keep pace with the increased usage of smartphones and tablets, while also serving laptops and desktops.
There is no cloud service without servers, so big growth is expected in server sales. One has to ask how servers will evolve, and how they will respond to the challenge of growing performance needs and cost pressure. A good part of the cost is electricity; maybe the most important metric for servers, i.e. server farms, is processing performance per watt.
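To make the metric concrete, here is a minimal sketch comparing two server configurations by performance per watt. All throughput and power figures are hypothetical illustrations, not measurements from any real machine:

```python
# Compare two server configurations by performance per watt.
# All figures below are illustrative assumptions, not measured data.

def perf_per_watt(requests_per_second: float, power_watts: float) -> float:
    """Processing performance per watt: throughput divided by power draw."""
    return requests_per_second / power_watts

# A traditional high-clock server (assumed figures)
big_server = perf_per_watt(requests_per_second=50_000, power_watts=400)

# A rack of low-power microserver nodes (assumed figures)
micro_rack = perf_per_watt(requests_per_second=30_000, power_watts=120)

print(f"traditional: {big_server:.0f} req/s per watt")   # 125
print(f"microserver: {micro_rack:.0f} req/s per watt")   # 250
```

The point of the metric is that a slower but much more frugal node can come out ahead once electricity dominates the cost.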

Server past

A server today doesn't seem to differ very much from a server twenty years ago. Naturally, significant development has taken place, but the basic architecture is almost the same: processor, virtual memory, disk array, multiprocessing, interfaces, a UNIX-like or Windows OS... Virtualisation is not really a new thing, and multicore is not that different from multiprocessing, at least from an architecture perspective.

Server future

If personal computing has shifted dramatically during the recent decade (netbooks, then ultra-thin laptops, smartphones and tablets), why should server technology remain the same?  Up to now the efforts have concentrated on the server environment, by making data centers more energy efficient, i.e. green.  This includes a significant amount of superb engineering: buildings constructed for optimal air flow, solar panels fitted on the roof, super-efficient power supplies, special cooling solutions... and so on.
In my view the next step is to rethink servers from the ground up. Like every new technology breakthrough, it has been around for a while but is not yet widespread. I think that the forerunner of modern datacenter and server technology was Thinking Machines (and competitors like nCube and MasPar), which produced computers containing 65,536 processors built from 16-processor chips. These servers could be connected together to form a super datacenter. And all this in the 80s.
Now several companies are producing microservers using low-power Intel Atom or ARM processors. These are very low power, high core density servers, not unlike what the servers of Thinking Machines once were. The major difference is that software is now better prepared to handle a large number of cores efficiently.

A step further

Microservers simplify server design, but still retain much from the traditional server architecture. Why not simplify it even further?


A typical ARM microprocessor (here an Apple A6) contains two cores, 1 GB of on-board memory, three graphics units, and a lot of circuitry to support the peripherals found in a tablet or a smartphone. What if you had only one core, a network interface and a lot of memory (8 GB seems easily feasible)? Then you would have a server chip with only a few connectors, allowing you to build servers that have no external memory, no disks and no other peripherals, just high-speed networking. You could achieve a computing power density that can't be imagined with any other architecture, and better-than-ever performance per watt.
Naturally there are a lot of open questions. Disk capacity? None: you have to use the RAM on the chip, or on other chips which don't do computing and just serve data from RAM. Boot? You need some boot logic, but such a server would hardly ever reboot, so as not to lose data, and the power supply can't be interrupted for the same reason, unless you have a mirror somewhere. Virtual machines? Nope, one thread per chip is the model.
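The "serve data from RAM, with a mirror somewhere" idea above can be sketched in a few lines. This is a hypothetical illustration, not a real API: a node holds data purely in memory and replicates every write to a backup node, since losing power means losing everything. The class and method names are my own inventions:

```python
from typing import Optional

class RamNode:
    """A hypothetical in-memory data node: no disk, state lives only in RAM."""

    def __init__(self, mirror: Optional["RamNode"] = None):
        self.store: dict = {}
        self.mirror = mirror  # optional replica, the only durability story

    def put(self, key: str, value: bytes) -> None:
        self.store[key] = value
        if self.mirror is not None:   # replicate before acknowledging the write
            self.mirror.put(key, value)

    def get(self, key: str) -> Optional[bytes]:
        return self.store.get(key)

# Usage: a primary node mirrored to a backup node.
backup = RamNode()
primary = RamNode(mirror=backup)
primary.put("user:42", b"alice")
print(primary.get("user:42"))  # b'alice'
print(backup.get("user:42"))   # b'alice' (survives if the primary loses power)
```

In a real deployment the mirror would sit on another chip reached over the high-speed network, but the design choice is the same: replication replaces the disk.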
Yes, software may be an issue with such a solution, but as I see it, development is heading exactly in this direction.

