Another dubious “end of computer history” argument
In a typically snarky Register article, Chris Mellor raises a caution about the use of future many-cored chips in IT. In essence, he says that today's apps each run in a relatively small number of threads, and modifying them to run in many threads is too difficult. Hence, he argues, most IT use of many-cored chips will come via hypervisors that assign apps to cores as appropriate.
Mellor has a point, but he's overstating it. For example, he asserts that Oracle databases don't run in a lot of threads. Actually, they routinely run today in multiple threads per core, scaling to at least 16 cores on SMP (Symmetric MultiProcessing) machines. Large OLTP systems often have highly clustered middle tiers. And on the analytic side, Teradata, Netezza, Kognitio, and Greenplum have each run on configurations with over 100 processors or cores.* Other analytic processing (data mining, geospatial analysis, etc.) benefits from massive parallelization as well. And H-Store, a candidate next-generation OLTP DBMS architecture, could thrive on the massively multi-core chips of the future.
* And I doubt that’s a complete list. For example, Aster and DATAllegro are probably in the club too.
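To make the threading point concrete, here's a minimal Go sketch of the kind of partitioned parallelism these products exploit: split the data into chunks, scan and aggregate each chunk on its own core, then merge the partial results. It's a toy illustration of the pattern only; the function and data are invented for the example, and none of this depicts how Oracle, Teradata, or the others actually implement parallel execution.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum is a toy stand-in for a parallel aggregate query
// (think SELECT SUM(x) FROM t): each worker scans one partition
// of the data on its own goroutine, and partial sums are merged.
func parallelSum(data []int64) int64 {
	workers := runtime.NumCPU() // one worker per core
	chunk := (len(data) + workers - 1) / workers

	partials := make([]int64, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if lo >= len(data) {
			break
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			var s int64
			for _, v := range data[lo:hi] { // scan this partition
				s += v
			}
			partials[w] = s
		}(w, lo, hi)
	}
	wg.Wait()

	var total int64
	for _, s := range partials {
		total += s // merge the partial aggregates
	}
	return total
}

func main() {
	data := make([]int64, 10000000)
	for i := range data {
		data[i] = int64(i % 100)
	}
	fmt.Println("sum =", parallelSum(data))
}
```

The shape is the point: the scan parallelizes trivially across however many cores exist, which is exactly why data-intensive workloads are natural consumers of many-cored chips.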
In one important way, I'm being overglib. My examples are drawn from cases in which many different chips are used, each with its own Level 2 cache, memory bandwidth, and so on. In some cases, that's a huge distinction. Replace 100 MPP chips with a single node, and you can be right back to the I/O bandwidth problems that cripple many conventional-DBMS data warehousing installations. But if the fundamental argument is "There's little point in putting more transistors on a chip because there isn't much that software can do with them anyway," well, that would be extremely incorrect.
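A back-of-envelope sketch shows why that distinction matters. The bandwidth figures below are assumptions invented for illustration, not measurements of any real system: 100 MPP nodes scanning in parallel enjoy 100 nodes' worth of aggregate I/O bandwidth, while a single many-core box funnels every scan through one I/O subsystem.

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only -- not measurements of any real system.
	const tableGB = 1000.0      // 1 TB table to scan
	const perNodeMBps = 100.0   // assumed scan bandwidth per MPP node
	const nodes = 100.0         // MPP configuration: 100 nodes
	const singleBoxMBps = 500.0 // assumed I/O bandwidth of one many-core box

	mpp := tableGB * 1024 / (perNodeMBps * nodes) // all nodes scan in parallel
	one := tableGB * 1024 / singleBoxMBps         // every core shares one I/O path

	fmt.Printf("MPP (100 nodes): %.0f seconds\n", mpp) // ~102 s
	fmt.Printf("Single box:      %.0f seconds\n", one) // ~2048 s
}
```

Same core count either way; the difference is whether I/O and memory bandwidth scale along with the cores, which is exactly what the MPP configurations above provide.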
Comments
When Marvin Minsky bought a 256K-word (1 megabyte) memory for the timesharing system (i.e., memory shared among about 15 users) at the MIT Artificial Intelligence Lab in the early 1970s, he was widely criticized: who would ever need that much memory?
Certainly existing applications are not written to use many-core processors; hardly anyone has those yet. Predicting that larger amounts of hardware power are unnecessary and will go unused, just because they'd require us to do things differently from the way we do them now, is part of a long tradition. It ignores the amazing degree of innovation in the computer industry.
I think it’s likely that Mellor will live to see his remarks rendered quite obsolete.
Aster Data Systems has more than one customer implementation with over 100 cores. One of these is MySpace, whose Aster frontline data warehouse runs on 100+ servers, each with multiple cores that are used in multi-threaded fashion for loading and querying the data, taking advantage of every processor.
The programming languages are going to have to change in the world of multi-core and solid state “disk” to take advantage of all that good stuff — ain’t nothing faster than parallel!
SQL was not the first declarative language; spreadsheets were. And a lot of people learned to use them, didn't they?
Don’t worry; we will get there and then bitch about something new 🙂