Multi-Core Madness

Intel’s 50-Core Xeon Phi Solution

As Intel and AMD approached the physical limit of how small a transistor they could build, they had to find other ways to pack in more transistors and keep increasing computing power. Their solution was to add more processing units, or cores, all working in tandem and sharing part of the cache memory provided on the chip.

Since then, the race for more cores has only intensified, spreading from general-purpose CPUs to the more targeted GPUs of the two big market leaders: Nvidia and AMD/ATI. Their approach packs many more highly specialized processing units onto the graphics card, enabling massively parallel computing (in some cases with units counted in the hundreds) for the sole purpose of pushing 2D graphics and polygons onto ultra-high-definition screens.

The big problem with this technology is that the software side of the equation was (and still is) much slower to catch up. Today, most programs take no advantage of multiple cores, large amounts of RAM, or 64-bit operating systems (needed both to escape the memory limits of 32-bit operating systems and for their better multi-core support). The reasons are many, but it mostly comes down to the extra programming required to ship separate versions of the same program with 64-bit and multi-core support.
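To make that concrete, here is a minimal sketch (in Python, not from the original article) of the kind of extra work multi-core support demands: a CPU-bound task only spreads across cores when the programmer explicitly parallelizes it. The function name and workload sizes below are purely illustrative.

```python
# Minimal sketch: the same CPU-bound work done serially and then spread
# across cores with Python's standard-library process pool.
from concurrent.futures import ProcessPoolExecutor
import os

def crunch(n):
    """Stand-in CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    # Serial version: one core does all the work while the others sit idle.
    serial = [crunch(n) for n in jobs]

    # Parallel version: the same tasks farmed out to one worker process per
    # available core; this is the "extra programming" the article refers to.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel = list(pool.map(crunch, jobs))

    assert serial == parallel
```

None of this happens automatically; unless code is written (or rewritten) along these lines, the extra cores go largely unused.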

But all of this is in the process of changing. Microsoft, as a software-world leader, is pushing for the software industry to move completely to 64-bit computing. Intel and AMD (but mostly Intel) are pushing for the same, as it will remove some of the limitations from the ongoing race for faster processors. With both the hardware and software leaders pushing toward this goal, 32-bit operating systems and computing platforms will become a thing of the past, and the majority of software will then be written for 64-bit, multi-core platforms.

It might not be in the consumer’s interest, since it will keep the cost of home computing at about the same level (instead of driving it down), but performance, and of course revenue from those companies’ products, will go up.

Where are we headed? The thing is, you can’t just keep adding cores and memory and hope for the best. The more cores you add, the more energy is consumed, even as each new process node makes the transistors smaller and more energy efficient. At some point, something has to give: overheating becomes a problem, and bottlenecks appear when the memory modules or other peripherals can’t keep up with the CPU’s expectations. The next step will be the next technology: quantum and biological computing.

The average Joe consumer does not need 64-bit or multi-core. Office suites, most other applications, and even games take advantage of two cores at best, and the memory sweet spot tops out at about 2-4 GB for 32-bit operating systems and 4-8 GB for 64-bit ones.
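The 4 GB ceiling for 32-bit systems is not arbitrary: a 32-bit pointer can distinguish at most 2^32 byte addresses. A quick back-of-the-envelope check (the variable names below are mine, not the article’s):

```python
# Why 32-bit operating systems top out around 4 GB:
# a 32-bit address can name at most 2**32 distinct bytes.
addressable_32bit = 2 ** 32
print(addressable_32bit / 1024 ** 3)   # 4.0 GiB

# A 64-bit address space is astronomically larger, so installed RAM,
# not pointer width, becomes the practical limit.
addressable_64bit = 2 ** 64
print(addressable_64bit / 1024 ** 4)   # 16777216.0 TiB
```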

The people rooting for this progress point to the famously misquoted “640K ought to be enough for anybody” line attributed to Bill Gates back in the day, as well as to the need for technology to advance as far as it can go. No argument there: if a technological advance can be achieved and will add value to human life, then by all means, push through.

But that’s not really the case, is it? The PC industry is like a three-legged horse with bulging muscles but an inherent tendency to stumble at high speed. It comes back to the mismatch between software and hardware advancement: if the software cannot keep up with the hardware racing forward, what is the use of running so far ahead, only to hitch this super-fast horse to an old, battered cart?
