# The Trouble With Multicore



## lotuseclat79 (Sep 12, 2003)

The Trouble With Multicore (5 web pages).

*Chipmakers are busy designing microprocessors that most programmers can't handle*

-- Tom


----------



## Stisfa (Nov 13, 2009)

I'm not an engineer in either discipline, electronics or programming, but I do have a couple of conjectures I'd simply like to express (yes, an ignorant technician such as myself is brash enough to be paddling in esoteric waters).

I realize that parallelism of data is hard to manage, especially due to the nature of sequential data patterns, but what if we addressed the flaws of the Von Neumann architecture? What if we could change the architecture of current CPUs to one that breaks the wall between memory and processor?

To illustrate: in math class, I wasted time reading a math problem from a book (RAM), transposing it to a notebook (bringing data from RAM to the CPU's registers), then working the problem to a solution (instructions manipulating data in the CPU), returning to the math book, transposing a new problem to my notebook, working on it, and so on and so forth. Then there were those days when my math teacher provided our class with a worksheet that had all the problems on it and I just had to apply my skills of logic (said powers are still in child-like development to this day). Sure, it took my math teacher more work to develop the worksheet, make copies and then distribute it, but it resulted in more time for the student (CPU) to devote toward solving the problems.

So if we could switch from the Von Neumann architecture, where system memory is an arm's reach away (the math book), to an architecture with system memory residing on-die (the worksheet), then we would see significant performance boosts. This would be especially true if the on-die RAM were of the same capacities we have today (larger margins on the worksheet for pen-and-paper solving)! Yes, it's going to take a lot of work from the teacher (engineers/scientists), and yes, this doesn't address the fundamental issue of multi-core parallel processing and the need for an efficient programming methodology to match it, but it does address part of the efficiency equation.

Interestingly enough, AMD and Intel are applying this principle of "removing the bottleneck" by integrating a GPU onto the same die as their processors (AMD's Llano & Intel's Sandy Bridge), albeit with GPUs rather than RAM (http://www.anandtech.com/show/2933). Actually, this new architecture with the GPU on-die (APU) performs the functions of the predicted CPUs of 2020 (1st paragraph of the 5th page - David Patterson's article, not Anand's). Unfortunately, both Patterson's predicted multi/many-cores and the soon-to-be-released APUs only meet the demands of graphical applications, not general computing - so we're encouraging both an exponential rise in the birth of gamers and a hyper-acceleration in the death rate of productive workers, lol. In all seriousness, though, this new APU paradigm contributes to the problem of hardware technologies outpacing their software counterparts rather than solving it (despite this, I'm still a huge advocate of removing physical barriers between hardware components).

As I stated earlier, I'm neither an EE nor an SE, so this is just my n00b take on it. I've probably offended many engineers here, so I'll just relegate myself back to my CompTIA A+ books.


----------



## lotuseclat79 (Sep 12, 2003)

There are many ways to approach the problem of parallelism. One is at the instruction level, both in processor firmware and in the tool-chain that supports the architecture, such as compilers, linkers, etc. Instruction-level parallelism is generally considered fine-grain parallelism.

Even within compilers there are other opportunities for parallelism, such as in the run-time system, which achieves medium-grain parallelism.

Then there are the coarse-grain approaches at the OS level.

For multi-core processor chips to succeed, it will probably be necessary for all of the above approaches to work in concert, along with new material technologies, to redefine computability. Hardware advances will always precede software approaches. Material advances such as graphene-based computers to replace silicon will also probably be necessary, at least to get to the stage where quantum computers can become a reality.

-- Tom


----------



## JohnWill (Oct 19, 2002)

I don't know, my quad-core chip seems to utilize all the cores pretty well. While many applications are indeed single-threaded, the O/S can manage the multiple cores and still use them to good effect.


----------

