-
AMD issues dramatic price cuts for triple-core CPUs
Just read about this news. A quick look at Intel's price sheet reveals that AMD decided to price its X3 processors at the very low end of Intel's Core 2 Duo range (AMD Athlon X2 BE-2400 2.3 GHz dual core for $25):
AMD issues dramatic price cuts for triple-core CPUs
-
I think same-spec Intel dual-core and quad-core CPUs run faster than AMD's equivalents, so pricing the AMD triple-core CPU in the same range as Intel's dual cores doesn't look odd to me.
AMD really has to make its products outperform the Intel CPUs if it wants to sell the same spec at a higher price.
-
Wow, I'm really behind the times... my main desktop system is still running a single core (gasp!). It still does its job, so once I'm ready to upgrade I'll keep an eye out for deals like this. Thanks for sharing!
Out of curiosity though, it seems to me like buying a dual core with a higher clock speed would be better (performance-wise) than buying a quad core with a lower clock speed. Most applications just aren't written with multi-threading in mind, so I doubt most would really take full advantage of four cores. Of course, if you run a bajillion processes at the same time, I guess that's where the quad core has an advantage...
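A rough way to put numbers on that trade-off is Amdahl's law, speedup = 1 / ((1 - p) + p/n) for n cores and parallel fraction p. A quick sketch (the clock speeds and the function name here are my own illustration, not anyone's benchmark):

```python
# Back-of-envelope: faster dual-core vs. slower quad-core under Amdahl's law.
# effective_speed scales the Amdahl speedup by the clock rate.
def effective_speed(clock_ghz, cores, parallel_fraction):
    """Relative throughput for a workload whose parallel fraction is p."""
    p = parallel_fraction
    return clock_ghz / ((1 - p) + p / cores)

for p in (0.0, 0.5, 0.9):
    dual = effective_speed(3.0, 2, p)  # hypothetical 3.0 GHz dual-core
    quad = effective_speed(2.4, 4, p)  # hypothetical 2.4 GHz quad-core
    print(f"p={p:.1f}  dual={dual:.2f}  quad={quad:.2f}")
```

With these made-up numbers the dual core wins until somewhere just under 60% of the work parallelizes, which fits the intuition that most desktop apps favor clock speed.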
By the time I'm ready to build a new system I have a feeling quad core is going to be the norm anyway. And 4 gigs of RAM will be the standard. Crazy how fast these things progress.
"The author of that poem is either Homer or, if not Homer, somebody else of the same name."
-
Originally Posted by gamblor01
Out of curiosity though, it seems to me like buying a dual core with a higher clock speed would be better (performance wise) than buying a quad core with a lower clock speed.
That's what I'm wondering too... I've got a 5600 X2 ATM and have been bitten by the upgrade bug really bad... Is an X3 really an upgrade from an X2 or not?
Kubuntu 8.10
Asus M2N-VM DVI
ATI Radeon 3850
-
I built a quad core recently but haven't had time to test it, as I have been sent overseas on business.
The Intel-based dual core I had performs very well compared with single-core machines.
I suppose a high-clock-rate dual-core CPU should perform better than a slow-clock-rate quad core. The latter will pay for itself if the machine has to do a lot of video work. I have a high-definition camcorder, and its recordings must be converted to standard definition before I send the content to friends. In such a case the dual core behaves like a single core and is terribly slow. I shall have a better idea when I put the quad core, a QX9550, through its paces.
I bought Intel CPUs for both the dual and quad core, as all the technical write-ups said they run faster than the AMD equivalents.
-
It depends very much on how you use your system. For a lot of users a single core is plenty because they're just doing e-mail and web browsing. Gamers want as much power as they can get, but there they still have to strike a balance between whether the faster-per-core dual is better than a slightly slower-per-core quad with nearly double the potential multi-threaded performance. I love my quad core because I do a lot of compiling, both because of Gentoo and my own development, and that tends to scale well to multi-core. It has also come in handy for running virtual copies of distros under VirtualBox.
So is a triple-core better than a dual-core? It just depends.
-
I would agree with Cybertron that it all depends on what you use your system for. I think if I do upgrade my home box (2.0 GHz AMD X2) that I'll stick with just a dual-core with a higher clock speed.
"After all you've seen, after all the evidence, why can't you believe?"
IBM Thinkpad T21
750 Mhz P3, 128 MB PC100 RAM, CD-ROM, 10 GB IDE HDD
Ubuntu 9.04 Minimal
-
Some of their prices are really nice if you intend to get a triple core, but personally I still prefer Intel.
-
How AMD has fallen... I used to swear by them, but I am using Intel in my boxes now. Triple core? That baffles me... I always thought it went 2^n - so you get 1, 2, 4, 8, 16, etc. I'm curious how they developed logic to work with a non-2^n count.
I have a quad core personally, but I found the lower clock speed not to be an issue with proper cooling. I have a nice big air-cooled radiator on top of the chip and was able to overclock from 2.4GHz to 3.2GHz with great stability.
Some things lend themselves to multi-threading nicely (like compiling) where the whole can be divided into parts that are not connected to each other. Video games have a rough time because many engines out there are written such that the next process waits on the previous (shadows may come after objects are rendered for example) - so they get lumped onto a single core. In those cases the higher clock speed will give you better results. Then again, if the game is modern - most of the processing should be done on your GPU. All in how it is coded...
For me, the extra cores were nice, as I didn't notice much of a slowdown when running applications/processes that used to bog my system down. I can't really compare dual to quad, since it took me a while to upgrade and I went from single core straight to quad. I found watching HD movies made good use of the cores - I guess since it can give, say, 100 frames per 400 to each core (just throwing out random numbers). Same idea for compression, where a file can be broken into unique parts.
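That frames-per-core idea can be sketched as a simple chunking scheme (split_frames is my own illustration, not how any particular player or encoder actually divides its work):

```python
def split_frames(num_frames, num_cores):
    """Divide frame indices into one contiguous chunk per core."""
    base, extra = divmod(num_frames, num_cores)
    chunks, start = [], 0
    for core in range(num_cores):
        size = base + (1 if core < extra else 0)  # spread any remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks

# 400 frames over 4 cores -> 100 frames each
for core, chunk in enumerate(split_frames(400, 4)):
    print(f"core {core}: frames {chunk.start}-{chunk.stop - 1}")
```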
All up to what you are going to do. As everyone should know by now, there really isn't a one-size-fits-all for computers - that's the reason there are so many options out there.
"Whenever you find yourself on the side of the majority, it's time to pause and reflect."
-Mark Twain
-
Originally Posted by trilarian
Some things lend themselves to multi-threading nicely (like compiling) where the whole can be divided into parts that are not connected to each other. Video games have a rough time because many engines out there are written such that the next process waits on the previous (shadows may come after objects are rendered for example) - so they get lumped onto a single core. In those cases the higher clock speed will give you better results. Then again, if the game is modern - most of the processing should be done on your GPU. All in how it is coded...
On the bright side, you still have those 3 extra cores, so with enough ram you don't need to close out of everything else to conserve resources
-
Originally Posted by Darkbolt
On the bright side, you still have those 3 extra cores, so with enough ram you don't need to close out of everything else to conserve resources
Aye, I love it. I run Enlightenment as my desktop manager and have a virtual screen for each core, ha ha... I did stick 8GB of RAM into it (I made use of the CompUSA closeout sale) - so I open things on purpose that I don't really use, just because I can. It is nice though - when scheduled things run, like my daily apt-get upgrade or MythTV recording or stripping commercials, I can barely even notice a slowdown.
If you have multiple cores, schedutils is a must-have. It lets you set CPU affinity for processes. So, even if you have programs that don't lend themselves to multi-threading, you can stick each program on a different core - simulating multi-threading. And of course, for the Debian users it's as easy as "apt-get install schedutils".
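For anyone who hasn't used it, the basic workflow looks something like this (the core numbers are just examples; on newer distros the taskset tool ships in util-linux rather than schedutils):

```shell
# Install on Debian/Ubuntu (as root): apt-get install schedutils

# Launch a command pinned to core 0 (cores are numbered from 0)
taskset -c 0 sleep 0.1

# Show the current affinity mask of a running process (here, this shell)
taskset -p $$

# Restrict an already-running process to a single core
taskset -pc 0 $$
```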
"Whenever you find yourself on the side of the majority, it's time to pause and reflect."
-Mark Twain
-
Originally Posted by trilarian
So, even if you have programs that don't lend themselves to multi-threading you can stick each program on a different core - simulating multi-threading.
Um, in my experience, the kernel already does that.
At least, if both programs are using the CPU, that's true. If there's idle CPU time on whichever CPU that program-1 is using, then the kernel will try to schedule program-2 in that idle time, rather than letting it go to waste. But if program-1 is using 100% of the CPU, then program-2 will get run on the other. No CPU affinity required; at least, no manual CPU affinity.
Of course, that doesn't mean that schedutils is unnecessary -- just that I've never needed it enough to go install it.
Originally Posted by trilarian
Triple core? That baffles me... I always thought it went 2^n - so you get 1, 2, 4, 8, 16, etc. I'm curious how they developed logic to work with a non 2^n base.
I suspect (but do not know for sure) that these are quad-core packages where one of the cores failed some test, and got disabled because of it, instead of having the entire package's clock speed lowered even more to let the one bad CPU work properly. The normal inter-core communication paths are probably all still there, just that a few of them aren't used because the CPU on the other end never comes up.
Last edited by bwkaz; 10-04-2008 at 10:20 PM.
-
Also, Intel just released a six-core Xeon not too long ago. I don't think there's anything inherent in the logic that requires 2 ^ n cores on CPUs; it just happens to be a convenient way of doing it.
-
Yeah, with 2 ^ n CPUs, you can arrange them in a hypercube-type setup, to get the shortest time until all CPUs know about some given event (say, a cache flush). But with some other number of CPUs, you can still arrange them in a similar setup, just don't fill the last level completely. It will still only be one unit of time away from a 2 ^ n setup.
To be specific, I'm talking about when one CPU has to notify all the others about some event. It notifies one other CPU in time slot t, then both of them notify a single new CPU (each) in time slot t + 1, and then all four of those notify a single new CPU (each) in time slot t + 2, etc., etc. -- you end up with 2 ^ n total CPUs knowing about the event after time slot t + n (so it's logarithmic). (Incidentally, this is also the most efficient way to set up a phone tree, assuming people are always able to answer the phone. That's not true, so it doesn't quite work, but this does take a lot less time to complete than a typical phone tree setup. Anyway...)
But if you had a number of CPUs that wasn't 2 ^ N (say, 6), you could still use the hypercube type layout, and simply stop early. Have one CPU notify one other CPU at t, both of them notify another (each) at t + 1, and then have two of the current four notify the final two at t + 2. It's still mostly-log-N that way.
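That stop-early broadcast is easy to sketch; this little simulation (the names are mine) just counts rounds of doubling until every CPU knows:

```python
import math

def broadcast_rounds(num_cpus):
    """Rounds needed when every informed CPU notifies one new CPU per round."""
    informed, rounds = 1, 0
    while informed < num_cpus:
        informed = min(informed * 2, num_cpus)  # stop early on the last level
        rounds += 1
    return rounds

for n in (2, 4, 6, 8, 16):
    print(f"{n} CPUs -> {broadcast_rounds(n)} rounds "
          f"(ceil(log2) = {math.ceil(math.log2(n))})")
```

Six CPUs finish in three rounds, the same as eight, which is the "still mostly-log-N" point: not filling the last level completely never costs more than the next power of two would.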
(Note that I have no idea whether this is actually how cores communicate or not anymore. This is just how some professor at college had set up a "network" of 386 motherboards to talk to each other inside one huge case. He had 16 different CPUs (and 16 different motherboards...), arranged in a 4D hypercube; it took four time cycles for all CPUs to know about any given event. It also works the best with 2 ^ N CPUs total, so I suspect it may be the reason. Don't know for sure though.)
-
Originally Posted by bwkaz
Um, in my experience, the kernel already does that.
At least, if both programs are using the CPU, that's true. If there's idle CPU time on whichever CPU that program-1 is using, then the kernel will try to schedule program-2 in that idle time, rather than letting it go to waste. But if program-1 is using 100% of the CPU, then program-2 will get run on the other. No CPU affinity required; at least, no manual CPU affinity.
Of course, that doesn't mean that schedutils is unnecessary -- just that I've never needed it enough to go install it.
In most cases this is true for how programs are started (I mean, when a person starts up processes sequentially, the kernel finds the least-taxed core). I found, however, that for a server with multiple cores it is nice to set the affinity of certain major processes to a particular core. This ensures that a dormant program coming up doesn't slow down a critical process. It really is splitting hairs, but the utility is there if you need it.
For example, say you are running an email server combined with a MySQL database. Setting the email process to core 2 and MySQL to core 3 allows mass email to come in and a large query to run at the same time, with only general I/O as a bottleneck. I skip core 1, as that tends to be where most of the regular processes start up, so in general there is a better chance of idle time on the other cores.
This is all relative to my experience, so the kernel may have improved to the point that there isn't a need anymore. However, I remember that when I was doing tests on the server a couple of years back, a preset affinity yielded the best results for me.
EDIT => Makes sense about not *having* to have 2^n cores. I also remember a similar project back in grad school where we built a hypercube with 64 CPUs. It was used to experiment with parallel programming instead of networking - though they both share similar concepts (like the hypercube, logarithmic execution time, time to update all CPUs, etc.). I just remember the math always being much easier, or I guess convenient would be the better word, when you stuck to 2^n. So many things follow that structure that the numbers become stuck in your head.
For example, how many people that have a great deal of computer knowledge don't feel some sort of affinity to the following stream of numbers:
1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024....
Now try a slightly different stream and see how your mind naturally has more trouble reading the numbers (maybe I'm just crazy):
1, 3, 5, 9, 11, 27, 53, 117, 239, 531, 1133....
Last edited by trilarian; 10-06-2008 at 11:27 AM.
"Whenever you find yourself on the side of the majority, it's time to pause and reflect."
-Mark Twain