Since you know, why don't you explain to us why there's such a big difference between the first and second score, what that means for the processor, how many processors this happens on, why I'm posting it in the wrong thread, and finally which thread would be the right one to post it in…
From the link you posted…
I’ll be quite frank with the results of these new SoCs: they’re terrible. Much like smartphone vendors have for years now copied the worst aspects of Apple’s devices, such as dropping headphone jacks and dropping chargers, the SoC vendors this year have now also copied the worst aspect of Apple’s SoCs: extremely high GPU peak power states.
When I tested the Kirin 9000 a few months ago in the Mate 40 Pro, I thought that HiSilicon’s choice of turbocharging their massive GPU up to peak power figures of 8W was a very bad one, but now Qualcomm and Samsung LSI have followed up doing exactly the same thing, as if this were a race to the bottom over who can create the hottest GPU on the market.
As to why the SoC vendors are doing this, it’s very easy to look at the benchmark charts and see the marketing pressure that Apple applies on the rest of the industry, being far ahead of the pack in terms of performance and efficiency. I wouldn’t be surprised if this generation of SoCs has had design decisions influenced by the marketing departments.
Inside devices such as the Galaxy S21 Ultra today, these peak performance states are utterly pointless, as they are simply impossible to maintain for any reasonable amount of time: the thermal envelope of the phones really isn’t any different from that of any other device of this form factor, including the predecessor S20 Ultra.
The Snapdragon 888’s peak performance state is pretty absurd: at its 840MHz GPU frequency I’ve measured average power of around 11W. This state can’t be maintained for longer than a few seconds before it throttles down to 778 and 738MHz at 9-8W for the rest of the duration of a test on a cold device, before limiting further due to thermals during prolonged periods. In terms of sustained performance, the S21U’s advantage over the S20U is in the 5-20% range, depending on workload, well below Qualcomm’s proclaimed 35% performance boost. That margin is actually even smaller against the Snapdragon 865+ Galaxy Note20 Ultra.
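The gap between the claimed and sustained uplift follows directly from the clocks quoted above. A back-of-the-envelope sketch, assuming performance scales roughly linearly with GPU clock (a simplification that ignores bandwidth and driver overheads):

```python
# Illustrative arithmetic using the figures quoted in the article.
# Assumption: GPU performance scales ~linearly with clock frequency.

peak_mhz = 840       # Snapdragon 888 peak GPU clock (~11 W measured)
throttled_mhz = 738  # lower throttle step after a few seconds (~8 W)

retained = throttled_mhz / peak_mhz
print(f"Fraction of peak clock retained after throttling: {retained:.2f}")
# -> ~0.88, i.e. ~12% of the peak is gone almost immediately

# A 35% boost claimed at the peak state shrinks accordingly once the
# chip settles at the throttled clock:
claimed_boost = 1.35
effective_boost = claimed_boost * retained
print(f"Effective boost at the throttled clock: {effective_boost:.2f}x")
# -> ~1.19x, before further long-term thermal limiting
```

This still overstates the sustained gain, since the article's measured long-term advantage (5-20%) reflects additional throttling beyond the first step.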
I asked Qualcomm to rationalise these high-wattage peak performance points, and the official response was that they were enabled in order to give greater flexibility for higher-power gaming phones and higher-thermal-envelope devices which are able to sustain greater power levels. I know that at least Xiaomi’s Mi 11 will be more aggressive than the S21 Ultra in terms of sustained power levels, at the cost of higher device temperatures. As for gaming phones, the last few generations of those devices have shown little actual physical design differentiation that would enable higher thermal envelopes; most of their advantage is simply that they are allowed to get hotter, showing no edge over “regular” phones which do the same (OnePlus devices or the ZenFone 7 Pro, for example). The S21 Ultra here has peak skin temperatures of around 46°C, with long-term throttling at around 42°C.
For the Exynos 2100, Samsung LSI’s claim of a 40% performance boost is more credible, as it not only refers to the peak performance figures but can actually also be applied to the sustained performance of the phone. It’s a tangible and very large upgrade over last year’s Exynos 990; however, it needs to be put into context. The peak power figures here carry the same negative connotations as on the Snapdragon unit, so I won’t repeat myself on that point.
In terms of sustained performance, although the Exynos 2100 is a large generational upgrade, it still falls below last-generation Snapdragon 865 devices, and naturally also the newer Snapdragon 888. The benchmark figures here also correspond closely to the real-world gaming performance of the phones: the Exynos S21 Ultra fared worse not only than the Snapdragon S21 Ultra, but also than a Snapdragon S20 Ultra or Note20 Ultra.
The interesting data here is the comparison to Huawei’s Mate 40 Pro with the Kirin 9000 and its gargantuan Mali-G78MP24 GPU – 10 more cores than the Exynos 2100’s configuration. Putting the Mate 40 Pro into power-saving mode will actually cap the maximum GPU frequency and give you reasonable power consumption figures of around 4W, comparable to what the Exynos 2100 in the S21 Ultra throttles to. We can see that the Kirin’s performance is either superior, lower power, or both, signifying that the chip is notably more efficient than the Exynos 2100. The larger GPU as well as the superior TSMC 5nm node come into play here.
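Why a wider GPU at lower clocks wins on efficiency can be sketched from first principles: dynamic power scales roughly with cores × frequency × voltage², and voltage must rise with frequency, while throughput scales roughly with cores × frequency. The core counts below match the article (MP24 vs MP14); the clocks and voltages are purely illustrative assumptions, not measured values:

```python
# Toy model of the wide-and-slow vs narrow-and-fast trade-off.
# Dynamic power ~ cores * f * V^2; throughput ~ cores * f.
# All numbers below are illustrative, NOT measurements from the article.

def dynamic_power(cores, freq_ghz, volts):
    """Relative dynamic power in arbitrary units."""
    return cores * freq_ghz * volts ** 2

def throughput(cores, freq_ghz):
    """Relative throughput in arbitrary units."""
    return cores * freq_ghz

# A 24-core GPU at a low clock vs a 14-core GPU clocked higher to reach
# roughly the same throughput (hypothetical voltage/frequency points):
wide = dict(cores=24, freq_ghz=0.4, volts=0.65)     # Kirin-9000-style
narrow = dict(cores=14, freq_ghz=0.7, volts=0.85)   # Exynos-2100-style

for name, cfg in (("MP24-style", wide), ("MP14-style", narrow)):
    p = dynamic_power(**cfg)
    t = throughput(cfg["cores"], cfg["freq_ghz"])
    print(f"{name}: throughput={t:.1f}, power={p:.2f}, perf/W={t/p:.2f}")
```

Under these assumptions the two configurations deliver near-identical throughput, but the wider part does it at markedly lower power because it avoids the quadratic voltage penalty of high clocks, which is the same mechanism behind the Kirin's efficiency advantage described above.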
Samsung LSI’s confirmation that they’ll be deploying AMD’s RDNA-based GPU for next-generation flagship SoCs will hopefully mean that the Exynos’ competitive positioning will be quite different next year; however, we shouldn’t expect miracles as the process node differences to Apple’s GPUs will likely still linger on.
Unfortunately, the battery-saving mode from Samsung’s mobile division on the Galaxy S21 doesn’t affect the GPU frequencies at all, unlike Huawei’s PSM, so it doesn’t help with the power envelope or efficiency. I would strongly recommend they introduce such a mechanism, as a burning-hot phone really isn’t a great gaming experience, and performance will regress to those sustained levels anyhow.
Generally, I see this generation as quite the disappointment when it comes to GPU advancements. Qualcomm likely suffered an efficiency setback with only minor improvements from the process node shift, and while Samsung LSI has achieved good generational gains, the Exynos still clearly falls behind due to its architectural GPU disadvantages.