https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-main...
https://www.eecg.utoronto.ca/~moshovos/ACA07/projectsuggesti...
(If you do ML things, you might recognize Doug Burger's name on the author line of the second one.)
The PCI-Express bus is actually rather slow: only ~63 GB/s in each direction, even with PCIe 5.0 x16!
PCIe is simply not a bottleneck for gaming. All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets, re-sent over the bus every frame, would cap out at only ~30 fps (63 GB/s ÷ 2 GB per frame).
Which is funny to think about historically. I remember when AGP first came out, it was advertised as making it so GPUs wouldn't need tons of memory, only enough for the frame buffers; they would stream texture data across AGP. Well, the bus bandwidth couldn't keep up with demand. And now, even if the port itself were fast enough, the system RAM wouldn't be: DDR5-6400 running in dual-channel mode is only ~102 GB/s. On the flip side, the RTX 5050, a current-gen budget card, has over 3x that at 320 GB/s, and on the top end, the RTX 5090 hits 1.8 TB/s.
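To make the comparison concrete, here's a quick back-of-the-envelope sketch in Python of the peak theoretical figures quoted above. These are the standard textbook formulas, nothing vendor-specific, and real-world throughput will be lower:

```python
# Peak theoretical bandwidth figures; real-world throughput is lower.

def pcie_gbs(gts_per_lane: float, lanes: int) -> float:
    """One-direction PCIe bandwidth in GB/s.
    PCIe 3.0+ uses 128b/130b encoding (~1.5% overhead)."""
    return gts_per_lane * lanes * (128 / 130) / 8

def ddr_gbs(mt_s: int, channels: int) -> float:
    """DDR bandwidth in GB/s: each channel is a 64-bit (8-byte) bus."""
    return mt_s * 8 * channels / 1000

print(f"PCIe 5.0 x16:           {pcie_gbs(32, 16):.0f} GB/s")  # ~63
print(f"DDR5-6400 dual channel: {ddr_gbs(6400, 2):.0f} GB/s")  # ~102
# Streaming 2 GB of assets across the bus every frame:
print(f"Frame cap: {pcie_gbs(32, 16) / 2:.0f} fps")            # ~30 fps
```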
There are some exceptions, but I haven't seen one with, for example, four x16 slots that each support PCIe 5.0 x4 via bifurcation.
E.g. https://www.ebay.co.uk/itm/126656188922
Most motherboards don't go beyond x8/x8 across two physical x16 slots, because there is little actual use for more and it costs quite a bit of money.
I wish it were possible to put several M.2 drives in a system and RAID them all up, like you can with SATA drives on any above-average motherboard. Even a single lane of PCIe 5.0 would be more than enough for each of those drives, because each drive wouldn't need to work as hard: less overheating, more redundancy, and cheaper than buying a small number of super-fast, high-capacity drives (see the sketch below). Alas, most mobos only seem to hand out lanes in multiples of 4.
Maybe one day we'll have so many PCIe lanes that we can hand them out like candy to a dozen storage devices and still have some left to power a decent GPU. Still, it feels wasteful.
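A minimal sketch of the arithmetic, assuming peak theoretical PCIe 5.0 numbers (the four-drive RAID 0 layout is hypothetical, just to illustrate the point):

```python
# Hypothetical: four NVMe drives on one PCIe 5.0 lane each, striped
# (RAID 0), vs. one drive on the usual x4 link. Peak numbers only.

LANE_GBS = 32 * (128 / 130) / 8    # one PCIe 5.0 lane: ~3.9 GB/s

drives = 4
aggregate = drives * LANE_GBS      # x1 per drive -> ~15.8 GB/s total
single_x4 = 4 * LANE_GBS           # one x4 drive -> ~15.8 GB/s

print(f"4x (Gen5 x1) in RAID 0: {aggregate:.1f} GB/s")
print(f"1x (Gen5 x4) drive:     {single_x4:.1f} GB/s")
# Same peak throughput, but each drive in the array only needs to
# sustain ~3.9 GB/s, meaning less heat per controller, plus the
# option of redundancy (RAID 5/10) instead of pure striping.
```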
AFAIK, the CPU lanes can't be broken up beyond x4; it's a limitation of the PCIe root complex. The Promontory 21 chipset that is mainstream for AM5 provides two more x4 links, plus four lanes that can each be SATA or PCIe x1. I don't think you can bifurcate those x4s, but you might be able to aggregate two or four of the x1s. And you can daisy-chain a second Prom21 chipset to net one more x4 and another four x1s.
Of course, it's pretty typical for a motherboard to use some of those lanes for onboard networking and whatnot. Nobody sells a bare-minimum board with an x16 slot, two CPU-attached x4 slots, two chipset x4 slots, and four chipset x1 slots and no onboard peripherals, only the USB from the CPU and chipset. Or if they do, it's not sold in US stores anyway.
If PCIe switches weren't so expensive, you might see boards with more slots behind a switch (which the chipsets kind of are, but...).
SATA was a cabling nightmare, sure, but cables let you relocate bulk somewhere else in the case, so you can bunch all the connectors up on the board.
Frankly, given that most advertised M.2 speeds are not sustained, or even hit most of the time, I could live with somewhat lower speeds due to cable length if it meant I could mount my SSDs anywhere but underneath my triple-slot GPU.
Look at server mainboards and you'll see many PCIe 5.0 connectors for cable-attached PCIe SSDs; the connectors look similar to SATA ones.
Makes a massive difference at high density and utilisation; with the standard cache per core, performance can really degrade under load.
[0] https://www.amd.com/en/products/processors/technologies/3d-v...
[1] https://www.amd.com/en/products/processors/server/epyc/4th-g...
I'd say the amount of L3 is not increased but adapted/scaled to the increased core count, since each core still has the same amount of cache available as before.
We get faster cores, so we need to go from 5600 to e.g. 6000 DDR5. Since core count is increased by 50%, we'd need 9000... DDR5^W, well, yes: we'd actually need, as planned before, AM6 and DDR6!
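Rough arithmetic behind that, as a sketch (the 6000 MT/s baseline is the "e.g." figure above, not a spec requirement):

```python
# Keep memory bandwidth per core constant as core count grows.
base_mt_s = 6000       # DDR5 speed assumed adequate for today's core count
core_growth = 1.5      # 50% more cores
print(base_mt_s * core_growth)  # 9000 MT/s -- AM6/DDR6 territory
```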
When did the GHz race start again?
It just takes backwards steps from time to time, when major architectural innovations deliver better performance at significantly lower clock speeds. Intel's last backwards step was from Pentium 4 to Core, all the way back in ~2005. AMD's last backwards step was from Bulldozer (and friends) to Zen in 2017.
7 GHz is ridiculous and probably just a false rumour, but IMO Intel and AMD are probably due for another backwards step; they are exceeding the peak speeds from the P4/Bulldozer eras. And Apple has proved that you can get better performance at lower clock speeds.
Leaks = the author just made something up, but now it ranks extra highly when someone searches for "[upcoming thing] leaks"
Now, it's either a fancy term for "announcement", or people use it synonymously with "rumor".
https://overclock3d.net/news/cpu_mainboard/amd-extends-am5-l...
They stumbled into the right direction with Strix Halo, but I have a feeling they won't recognize the win or follow up on it.
On the desktop, it becomes harder to justify beefed-up memory controllers just for the CPU versus spending that silicon on some other part of the chip that has more impact on cost or performance.
They could in theory do on-package DRAM as a faster first level of memory, but I doubt we'll see that anytime soon on desktop, and it probably wouldn't fit under the heat spreader.
Sure. Keep the DIMM sockets and add HBM to the CPU package.
Actually probably the best possible architecture. You can choose to have both or only one, backward compatible and future proof.
Yes, it adds another level to the memory hierarchy, but that can be fine-tuned.
So saying that Zen 6/7 supports AM5 on desktop doesn't necessarily exclude the Zen 6/7 product family in general from also supporting other new/interesting sockets on desktop (or mobile). Maybe there will be products for both AM6 and AM5 from the same Zen family.
Medusa Halo and the Zen 7-based 'Grimlock Halo' version might be the interesting ones to watch (if you like efficient, Apple-style big APUs with all the memory bandwidth).
(Ignore my AM5 workstation with 192GB RAM in the corner)
In fact, my wife is still rocking that machine, although her gaming needs are much less equipment-intensive than mine. After the small refurb I gave it (new case, new air cooler, new PSU), I expect it to last another 5 years for her.
My new one is a 9700X. Didn't feel the need to spring for a higher power budget for a marginal gaming-performance bump. But I suppose that also means it's much more practical for me to jump to a newer CPU later.
I'm a gamer, often playing games that need a BEEFY CPU, like MS Flight Simulator. My upgrade from an i9-9900K to a Ryzen 9800X3D was noticeable.
Considering PC desktops: DDR4 tops out at 3200 MT/s JEDEC. DDR5 has been available on AMD for three years and sits at 5600. The DDR6 specification is almost finished. It looks like DDR5 will double DDR4's bandwidth (6400 MT/s) only right before new DDR6 DIMMs appear. Thus I'd expect DDR6 to likewise double its bandwidth only that late, just as the next memory standard arrives.
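Per-DIMM peak bandwidth for those generations, as a quick sketch (the DDR6 figure is purely an assumed doubling for illustration, not a spec number):

```python
# Peak per-DIMM bandwidth: MT/s * 8 bytes (one 64-bit channel).
for name, mt_s in [
    ("DDR4-3200 (max JEDEC)", 3200),
    ("DDR5-5600 (AM5 today)", 5600),
    ("DDR5-6400 (late-life DDR5)", 6400),
    ("DDR6-12800 (assumed doubling)", 12800),
]:
    print(f"{name}: {mt_s * 8 / 1000:.1f} GB/s")
```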
Only if they overestimate demand and overproduce CPUs. Otherwise it will lead to higher prices because there's less economy of scale.
I'd love to build a new desktop soon, but I couldn't justify the cost and am instead building out a used desktop that's still on DDR4 / LGA1151.
I just checked how much the 64 GB of DDR4 in my desktop would cost now... it starts at 2.5 times what I paid in 2022.
Sorry AMD, I would maybe like a new desktop but not now.
Something like a 5900X on 2 nm or 4 nm.