Counter-Strike 2 CPU Benchmarks: E-Core Challenges & X3D Benefits



Sponsor: Thermal Grizzly Kryosheets and Thermal Grizzly Hydronaut Paste (Amazon – https://geni.us/Fsray )
NOTE: We’re running additional tests with a few console commands for further uncapping performance, and will also be trying some affinity options. Those will be published in a follow-up. This testing is accurate and representative of performance when sticking to the in-game menu, but our viewers pointed out a console command that further boosts performance of the very top-end CPUs on our charts (no impact on mid-range and below). We asked our community to vote on what to do. Learn more here! https://www.youtube.com/channel/UChIs72whgZI9w6d6FhwGGHA/community?lb=UgkxKOSDFhPIxr22RFeN6tnXToCKTqRTJCu7

This benchmark looks at Counter-Strike 2 CPU performance. Our #counterstrike CPU benchmarks were complicated by constant game updates (although that’s a good thing for the game itself), but one thing that remained consistent was the high performance of AMD’s Ryzen X3D CPUs, including the Ryzen 7 5800X3D and Ryzen 7 7800X3D. We also found that, although Intel scaled OK overall, the 13900K specifically had ‘too much of a good thing’ with its E-cores: disabling the E-cores dramatically improved frametime consistency, which was reflected in the 1% and 0.1% lows.

Our Counter-Strike 2 GPU benchmarks: https://www.youtube.com/watch?v=edAVnclEEr8
Additional thoughts from our testing: https://www.youtube.com/watch?v=O87yzyfOfnY

The best way to support our work is through our store: https://store.gamersnexus.net/

Like our content? Please consider becoming our Patron to support us: http://www.patreon.com/gamersnexus

RELATED PRODUCTS [Affiliate Links]

AMD R7 7800X3D on Amazon: https://geni.us/tVxgE
AMD Ryzen 7 5800X3D on Amazon: https://geni.us/8V8doYC
Intel i9-13900K on Amazon: https://geni.us/MArn4qf
Intel i5-13600KF on Amazon: https://geni.us/lyauYBd
AMD R5 7600 on Amazon: https://geni.us/sNMHx
Intel i7-13700K on Amazon: https://geni.us/kvBVxgW

TIMESTAMPS

00:00 – Counter-Strike 2 CPU Benchmarks
01:35 – Constant Updates, Bench Methods, & Challenges
04:00 – 1080p/Low CPU Benchmarks for CS2
06:31 – 1080p/Medium CPU Benchmarks in Counter-Strike
07:22 – Vulkan vs. DirectX 11 Benchmarks
08:03 – 13900K E-Core Problems & Hyperthreading
09:41 – Conclusion

** Please like, comment, and subscribe for more! **

Links to Amazon and Newegg are typically monetized on our channel (affiliate links) and may return a commission of sales to us from the retailer. This is unrelated to the product manufacturer. Any advertisements or sponsorships are disclosed within the video (“this video is brought to you by”) and above the fold in the description. We do not ever produce paid content or “sponsored content” (meaning that the content is our idea and is not funded externally aside from whatever ad placement is in the beginning) and we do not ever charge manufacturers for coverage.

Follow us in these locations for more gaming and hardware updates:

t: http://www.twitter.com/gamersnexus
f: http://www.facebook.com/gamersnexus
w: http://www.gamersnexus.net/

Host, Writing, Testing: Steve Burke
Testing: Mike Gaglione
Testing: Jeremy Clayton
Video: Vitalii Makhnovets

48 Comments

  1. IMPORTANT/NOTES: Regarding the default 400 FPS cap, please read & vote on a solution here! (Also, does anyone know if Valve Anti-Cheat cares about fps_max command tweaking?) https://www.youtube.com/channel/UChIs72whgZI9w6d6FhwGGHA/community?lb=UgkxKOSDFhPIxr22RFeN6tnXToCKTqRTJCu7
    And that's it for games for a little while! Lots of them lately between Starfield, Baldur's Gate 3, and Counter-Strike 2. Some hardware reviews up next! Several this week, actually. Stay tuned!

    Our Counter-Strike 2 GPU benchmarks: https://www.youtube.com/watch?v=edAVnclEEr8
    Additional thoughts from our testing: https://www.youtube.com/watch?v=O87yzyfOfnY

  2. I wish someone would explain E-cores to me.. like, why are they more "e" than just running a normal core at a lower clock?

    And why can't Win11 be smarter about scheduling hot threads (like game engine threads!) exclusively on P-cores vs. E-cores?

    Game-engine devs can detect these chips and apply a process affinity mask to avoid running on E-cores.. why do they not do that? (And as an end-user, is that a viable workaround.. assuming for some reason the E-cores do add value to other work you do?)
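
    (On the end-user workaround: a process affinity mask can indeed be set from outside the game. Below is a minimal Python sketch using psutil, assuming a 13900K-style layout where logical CPUs 0-15 are the eight P-cores plus their hyperthreads and the E-cores enumerate after them; the process name and core indices are assumptions, so verify your machine's layout first.)

        # pip install psutil; run from an elevated prompt so the
        # process handle can be opened with set-information rights.
        import psutil

        P_CORE_CPUS = list(range(16))  # assumed: P-cores (+ HT) enumerate first

        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "cs2.exe":  # assumed process name
                proc.cpu_affinity(P_CORE_CPUS)  # restrict scheduling to P-cores
                print(f"Pinned PID {proc.pid} to CPUs {P_CORE_CPUS}")

    Task Manager's "Set affinity" dialog does the same thing by hand, but it has to be redone on every launch.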

  3. Would love to see a New World benchmark with these CPUs, especially the 13900K in comparison with the X3D CPUs from AMD.. YouTube is lacking CPU benchmarks for New World.

  4. Ryzen 5600X and GTX 1080: CS2 is almost unplayable. FPS is high, but random huge stutters ruin the frametimes. Why does every new game have stupid stuttering issues? I'm getting at least 150 FPS, so my hardware is powerful enough…

  5. What Windows power settings are you using? I saw discussion on Reddit proposing that Windows 11 might schedule threads differently if the power plan is set to max performance instead of balanced.

  6. I wonder if anyone has tried Intel and AMD CPUs with hyperthreading/E-cores on/off in the latest games. It used to be that while some games benefitted from disabling hyperthreading, on average it did not make much sense. I wonder how it is with the Unreal Engine 5 and most modern games, which seem to have issues with CPU performance. If nothing else, I could imagine 12-16 core AMD CPUs might benefit from less heat and power consumption with SMT disabled, especially since they already have so many "real" cores.

  7. Wait..
    I've got a GTX 1080 + R7 3800X..
    My plan was to upgrade my GPU to a 4070 Ti and later to some X3D if the money is there.. but is there actually an improvement from my 1080 + R7 3800X to a 4070 Ti + R7 3800X? Since his 1080 benchmarks look close to my FPS numbers in CS2 (well-optimized PC)… I thought I'd gain the most from upgrading my GPU first?! Anybody know exactly?

  8. This is an ultra freaking weird comparison. If you have a 13900K and a 4090 and you are playing at 1080p (let alone 1080p medium settings), you are kind of doing it wrong already. That being said, if you aren't any good at 120 FPS, you aren't going to be any good at 400 FPS. Skill is always going to remain the common denominator in this style of game.

  9. 5:42 mannnn, nothing like seeing my R5 2600 at the bottom of the stack to remind me just HOW BEHIND this thing has gotten, LOL. Like, that's still great FPS, but at the same time even my GTX 1080 hasn't been pushed as hard as it could be with this CPU, not properly optimized or cooled. Wild. Still, I need new hardware, but man, it's a hard choice. Prices have come down on AMD enough that I'm almost set on a no-frills 6950 XT, because it's perfect for my system, my design, and my needs. But then the 7900 XT and even the 7900 XTX from PowerColor Hellhound are coming down so aggressively that it's almost a perfect 1:1 for % performance vs. % price: $630 vs. $750 vs. $890 in my region. Which is suuuuch a compelling upsell for the newer platform and hardware and future RAM and driver support longevity, but mannnn, it's hard to want to reward them with my money at this point, lol. But also, meh~ it is a nice problem to have. I just can't decide how much I want, and by the time I'm spending over $700 I might as well get the king-tier stuff instead of compromising, so that 2-3 years from now I can follow the curve and switch to a 4K setup without immediately changing my GPU, like I would if I still had a 6950 XT surviving~~~ And NGL, the power performance could be a huge factor if I'm saving almost 100 watts at full OC over the 6950 XT, which, 2-5 years from now with my typical build cycle, could add up A LOT even at cheapish US prices, to make up for the cost deltas. Idk~~~

  10. 3kliksphilip found the game is way better without E-cores like 6 months ago. I kept E-cores on thinking that CS2 would have improved E-core usage after half a year, but no, it still tanks performance, even though my 13600K was otherwise running very, very nice: max FPS was 550 but consistent at 350 FPS, the counter barely moved, and visible drops were only into the 340-350 range (I use 800×600, go flame me).

  11. – The 7950X3D was the worst-performing chip of the bunch, and yet this wasn't investigated. Why?

    – The 7950X should have been on the list, as it has a very large amount of L3 cache for a non-3D chip (64MB vs. 36MB for the 13900K). It would also have painted a better overall picture of CCD management.

    – The 400 FPS cap should have been remedied before testing. This video needs a remake for sure!

  12. I tried CS2 today and it ran like dogshit – 5600X and 6800 XT on High at 1440p – it just stuttered like it was loading stuff in as the round started, and then was very stuttery in game. Could have been the server, but I tried 3 different games and it was bad, so I gave up.

  13. 8:50, wow, I have been trying to tell tech bros this for years, and most of them just cover their ears and start screeching "0.1% lows don't matter, 60 FPS is good, you don't need a CPU for 4K," etc. Also, the in-engine FPS cap is imprecise, and modern cards sometimes have an issue working their boosting algorithms around it, so I would like a video with fps_max 0. I know a lot of the normies won't run the game that way, but I will, and a lot of other OG players will; a bunch of them still run the game in 4:3 mode. CS:GO players like their tweaks.

  14. Why is there so much obsession with CS2? I fired up the game for the first time a couple of days ago and it was a huge disappointment. The gameplay has not improved at all, and the graphics are already outdated. On a 12900K and 3090, however, I did not notice any stutters or "frametime pacing" issues in the 5 min or so I played.

  15. The problem is thread scheduling. You can change the hidden Windows power plan options that control thread scheduling to improve performance in games where you'd otherwise have to disable E-cores for a boost.

    Did not experience this problem on the 12900K, only the 13900K. There are posts on Blizzard's forum for Overwatch 2 about this.
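
    (Those options live under the power plan's "Processor power management" group but ship hidden. A hedged sketch of unhiding them with powercfg, wrapped in Python: the SCHEDPOLICY and SHORTSCHEDPOLICY aliases for the heterogeneous thread scheduling settings are as community guides report them, so confirm them with `powercfg /qh` before relying on this.)

        # Unhide the heterogeneous thread scheduling options so they
        # appear in the power plan's advanced settings UI. The alias
        # names are assumptions taken from community guides; confirm
        # them with `powercfg /qh`. Run from an elevated prompt.
        import subprocess

        for alias in ("SCHEDPOLICY", "SHORTSCHEDPOLICY"):
            subprocess.run(
                ["powercfg", "-attributes", "SUB_PROCESSOR", alias, "-ATTRIB_HIDE"],
                check=True,
            )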

  16. As I wrote on your previous video – just disable the E-waste cores.

    They basically always lower gaming performance, and the fact that you can't buy a proper gaming CPU from Intel without that crap means I won't ever go Intel again. Went to a 7800X3D from a 12700, and no regrets at all.

  17. Here's a dumb idea … what if the game engine restricted itself to a select number of cores somehow? Back in the day games already did this, because most games were simply single-threaded. Today, it should still be possible for game developers to program their engine to, for instance, run on the best 4 cores and leave all the other cores alone. This would automagically leave all of the E-cores for the system and other things running in the background. It shouldn't be such a hassle to program something where a game sets priority requests for the cores it likes best, avoiding the cores it doesn't like. Likewise, on a chiplet CPU, a game could request priority for cores all on the same CCD, to avoid latency spikes. Just a stupid thought. Probably impractical, and likely only useful for games that do well on lower core counts to begin with.
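
    (For what that engine-side pinning could look like: a minimal sketch that restricts the calling thread to a fixed set of logical CPUs through the Win32 SetThreadAffinityMask call. The mask below, covering the first 8 logical CPUs, i.e. four P-cores plus their hyperthreads on a typical hybrid layout, is an assumed stand-in for "the best 4 cores".)

        import ctypes

        kernel32 = ctypes.windll.kernel32
        kernel32.SetThreadAffinityMask.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
        kernel32.SetThreadAffinityMask.restype = ctypes.c_size_t

        # Assumed layout: logical CPUs 0-7 = four P-cores + their HT siblings.
        mask = 0
        for cpu in range(8):
            mask |= 1 << cpu

        thread = kernel32.GetCurrentThread()  # pseudo-handle for this thread
        previous = kernel32.SetThreadAffinityMask(thread, mask)
        if previous == 0:
            raise OSError("SetThreadAffinityMask failed")
        print(f"Thread pinned; previous mask was {previous:#x}")

    A real engine would do this per worker thread (render, job workers, and so on) rather than for the whole process, which is roughly the "priority requests" idea above.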

  18. While the 13900K has bad lows, the 7950X3D seems to have the same issue.
    And the 5900X vs. 5800X situation almost screams for a test of the 5950X. Yes, it also has 2 CCDs, but it has 8 cores on each of them, just like on the single CCD of the 5800X. So any 8-core/16-thread game should fit onto a single CCD without the added latency. And CS2 isn't the first game where the 5900X performs worse than the 5800X, but the 5950X performs on par with it.

    But great to see chips like the Ryzen 2600 and 3600 and the i5-10400 and i3-12100F in it. Those might not be high-end chips, but they are still somewhat popular simply because of their affordable price. Would be nice to see the 1600 AF in it as well, but I guess the 2600 is close enough. I would also appreciate seeing one of the older 6-core/6-thread or 4-core/8-thread CPUs; they are also still around, and seeing them in tests can give people an idea of how much they would gain from upgrading.

  19. As expected, asymmetric architectures still aren't ready for the desktop. And the idea does have some good uses. There are well-multithreaded tasks where adding cores just adds a linear increase in power draw; those benefit from a ton of E-cores. And there are worse-multithreaded tasks that perform better with a handful of beefy cores; those benefit from a few P-cores. In theory, Intel's design fulfills both.

    But the asymmetric performance of the cores can also cause asymmetric performance when a process wants to use more threads than there are P-cores (or the scheduler assigns E-cores before the HT threads on the P-cores). A bit like with multi-GPU setups, frame stutters will happen.

    Might be worth investigating whether manually pinning the game to only the P-cores (or E-cores) improves frame stability. I remember that it helped in the early SMT days when games (and the Windows scheduler) didn't properly assign the available threads (basically adding them in order, instead of skipping every other one).

    AMD's big cache, on the other hand, acts just like that: a big cache. Which is super easy to use, and even if your code is not optimized for that cache size, it still performs fine.

  20. E-cores are an absolute joke; they cause more problems than they solve. Maybe in 10 years, when the kernel scheduler catches up, they will be "okay".

  21. Did anyone else notice the same massive drop-off in lows on the chart for the 7950X3D?

    Maybe that's due to the engine really hating going from the CCD with the V-Cache to the one without.

    I wonder if the lows will go back up if you disable the non-V-Cache CCD.

  22. I would not discard the thread count completely, since apart from the CCD-to-CCD penalty, the 7950X3D seems to have equally bad 0.1% lows, at least on average. Maybe not so noticeable from what you said, but definitely a bad result for that kind of CPU, also from the red team side.