Someone claimed Zen 5 would bring 40% IPC gain. Today, someone at AMD said that is moronic. Expect 15%. RDNA 4 should be a very nice IGP.
Yeah, 40% is GPU territory. CPUs are lower, and it was silly to believe that AMD would achieve such an enormous leap in one generation. It'd be nice but not realistic.
We are seeing the 40% number from several sources now, but they are all misciting the same source. There is so little real information floating around the echo chamber that a piece of BS can look legitimate just because it appears on several web pages. There will be niche aspects of Zen 5 with big gains, but overall it will be a normal step forward.
TSMC reports some tools at the Taiwan fabs were offline for about 10 hours after the earthquake. That is amazing. We recently spent a few weeks in Taiwan, and the Taiwanese people are an inspiration; they operate on a tremendously high level. The best thing the West can do is keep China from ruining Taiwan. They need to be left alone so they can keep being the most productive people in the world.
This could just as easily have gone under AAPL or Intel, since they both use TSMC fabs. TSMC claims their 2nm process is on track for the end of next year. They also have a 1.4nm process in the works that they hope to bring online in 2027. Meanwhile, GlobalFoundries' decision to halt 7nm development and focus on making parts at its existing 12/14nm-class nodes has all but guaranteed their demise. They will be OK for a while, but everyone wants the new nodes and GF is bleeding customers.
Zen 6 (scheduled for 2H2025) will bring 32-core chiplets. That means the base CPU complex with a single chiplet will have 32 cores. Maybe they will have smaller chiplets as well, but the latest leak shows only 32C CCD parts. I assume this is tied to TSMC's N2P process.

There is a legitimate question regarding how much compute the vast majority of people need. I run a lot of hungry apps on a 12th-gen Intel i5 with 64GB of RAM. More speed is better, but I have no urge to upgrade for another cycle or two. Of course, all those cores will not help gaming performance; most games are hard-tied to single-core performance.

If Intel can get their 7nm-class process (Intel 4) working, they can use their own chiplet designs to partially bridge the multi-core compute gap. They may be able to go with a 3-CCD CPU complex to reach core-count parity, or at least remain relevant. I believe FinFET works down to 4nm, or perhaps 3nm, so Intel does not have to figure out GAA for a couple of generations. They only need their alchemists to pull the sword from the 4nm stone... and that is a long enough bridge to cross.
BTW, there are leaks suggesting AMD's Venice can potentially use up to 8 CCD modules, bringing us a 256-core CPU complex. That would mean 16 memory channels. My gawd. Intel is going to struggle to compete against that. This probably doesn't matter on the desktop; a 256C EPYC complex will be the domain of ultra-high-end workstations, but these CPUs will power a lot of servers that are currently Intel-based.
The elephant in the room is the Chinese market. China is moving to their own CPU platforms, wherever possible. The Chinese market is not a high margin market but it is certainly a high volume market. I wouldn't want to guess how this will affect AMD/Intel/ARM. It will certainly be a hit but I don't see it as existential.
RSI got down to 34 yesterday, near the 30 oversold line. The market's pullback is probably done. It lasted two weeks, after all.
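For anyone who hasn't computed it, the 14-day RSI people quote is Wilder's Relative Strength Index. A minimal sketch of the standard formula (the price series here is whatever closes you feed it; the function name is my own):

```python
def rsi(closes, period=14):
    """Wilder's Relative Strength Index over a list of closing prices."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first `period` changes,
    # then apply Wilder's exponential smoothing to the rest.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down days in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

Readings below 30 are the conventional "oversold" threshold, which is why 34 reads as "close enough" to a bottom.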
It sounds like TSMC is struggling with 2nm, and AMD is now looking at 2026 for Zen 6. I expect this will hit Nvidia pretty hard, though perhaps they can work some magic with N3P as a stopgap. This is good news for Intel.
Looks like AMD is doing its best to ramp up production of AI/datacenter chips to chase after NVDA. https://www.crn.com/news/components...g-product-teases-new-ai-chips-later-this-year Competition is always a good thing, so if AMD can make inroads here and adapt to this quickly evolving AI race, they will become very attractive to an investor like me.
Intel has had a run of bad parts, and I'm surprised there aren't more drama queens freaking out about it. It's not the biggest issue, but both the i9-13900 and i9-14900 have stability issues traced back to the hardware. I suspect a slight clock reduction will render these parts usable; time will tell. Also, the *900 parts are somewhat rare. This problem is starting to gain attention.
The motherboard makers are jacking up the voltage to the CPU to keep the turbo frequency as high as possible for as long as possible. https://hardforum.com/threads/intel...tive-cores-causing-crashes-in-gaming.2033487/
As they should. Personally, I like low-power systems: 45W preferred, 65W acceptable. Low-power systems last forever. A buddy with a 14900 can encode video almost precisely 4x faster than my i5-9600 can. I've been thinking of bumping up to a 14900. I don't need it, of course; I just want it. lol! If they fire-sale these parts, I may pick one up and undervolt/underclock it, or maybe just turn turbo off. If they don't, I will look at a Ryzen 9600~9700G.
No, but even the default "auto" settings are dangerous. It's not Intel's chip quality; it's the board partners messing with the BIOS settings for that extra 1% of nonsense to drive sales. That's why there are failures. I am totally fine with insane settings for people who like to push the edge, but the boards should absolutely ship with stock settings that are 100% within spec. People can do as they wish after that.
BTW, can you imagine the PSU ripple when that CPU shifts from 65W nominal to 220W turbo? If the boards jack the voltage on top of that, total power consumption goes way up. It would take a pretty amazing power supply to handle a 4x load spike and stay within ripple spec. Perhaps the wild power swings are part of the stability issue. It may be possible to tame one of these beasts with a massive PSU or some big filter caps.
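Back-of-the-envelope on why a voltage bump hurts so much: dynamic CPU power scales roughly as P ∝ C·V²·f, so overvolting multiplies through the already-huge turbo draw. A quick sketch (the 220W figure is from the post above; the 10% overvolt is an illustrative assumption, not a measured value):

```python
def scaled_power(base_watts, v_ratio, f_ratio=1.0):
    """Approximate dynamic power scaling: P ~ C * V^2 * f.

    base_watts: power measured at the reference voltage/frequency.
    v_ratio:    new core voltage / reference voltage.
    f_ratio:    new frequency / reference frequency.
    """
    return base_watts * (v_ratio ** 2) * f_ratio

# Illustrative: a 220 W turbo load with a hypothetical 10% board-applied
# overvolt lands around 266 W -- and the 65 W -> ~266 W transient is what
# the PSU and VRMs have to ride out without violating ripple spec.
spike = scaled_power(220, 1.10)
```

The square term is the point: a "small" 10% voltage bump adds ~21% power all by itself, before any frequency gain.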
I was an overclocker, back in the day. I did it for several months, then had a revelation that it was the dumbest idea ever, and I've run stock settings ever since. My current board has some sort of automatic AI-overclock feature that I have manually turned off.
OK, here's a story I feel is germane... The first time AMD surpassed Intel was with the original Athlon. These were multi-chip cartridges that connected to the mainboard through a CPU slot (Slot A, before CPUs went back to pins and ball grids). Those Athlons had massive heat spreaders and felt like holding your hand over a toaster when they were running a benchmark. When they weren't running a benchmark, they felt like a slightly cooler, but still hot, toaster.

AMD really wanted the performance crown and pulled out all the stops to get it. They had their own fabs back then. The Athlon was made on a low-yield process, clocked to the absolute limit, fitted with as much cache as they could cram in, and capped with those massive heat spreaders to dissipate the tremendous energy it soaked up. An Intel engineer said the Athlon was really a 1GHz chip that AMD was heavily overclocking from the factory, and I think he was exactly right. At a slower clock, they could have backed off the voltage, which would have dramatically reduced TDP and made the whole thing both a ton cheaper and more reliable. That's when the famous quote, "Cache can mask a lot of problems," came into the lexicon. No doubt AMD's fetch/execute was pretty crap and responded well to a bunch of cache. Intel had the better fab process nodes back then, so they had the luxury of releasing a stinker, which they took full advantage of. I've never owned a Pentium 4; the power consumption, heat, and performance seemed gross to me. I ran a Pentium 3 for a long time and then switched to an AMD Athlon XP (Barton core, as I recall).

These days, Intel is the one pushing it. They are likely to have problems as they chase down every last bit of compute to remain competitive. Meanwhile, AMD can run a little more margin and enjoy reliability, a better power envelope, and cheaper heat dissipation. The shoe is on the other foot.