HWBOT Community Forums

FM_Jarnis

Members
  • Posts: 193
  • Joined
  • Last visited
  • Days Won: 2

FM_Jarnis last won the day on June 4

  • Location: FINLAND


Reputation: 22

  1. The issue with filtering for stock results is that it has to work across all cards from all vendors, and pretty deep into history. I'm not sure what the status of that is if you go just by GPU-Z data. I know this was the issue the last time we looked into it (which, I admit, was some years ago), and we rejected the idea for lack of a reliable automated way.

     We do already offer searching by GPU core clock range; is there some issue that makes that unworkable? I understand it is an extra hassle to figure out the values to enter there. And yes, that is the actual recorded core clock, not what it says on the spec sheet. Wonderful world of "it depends" clock speeds on modern hardware.

     The main issue is that we are a bit afraid of taking a stance ourselves by placing a "stock" label on results, because that is potentially a can of worms - if the label cannot be applied reliably, it could create a huge mess.

     GPU brand based search is on our list, so that may happen at some point. Can't promise it, but it is definitely doable.

     On database quality: the average score (visible at the top of the page on search) is still quite representative of the real performance of non-overclocked cards. It might slightly overstate it for super popular "competitive" cards like the RTX 4090, which currently has a higher-than-usual volume of seriously overclocked results, but the vast majority of all submits are still from systems that have not been modified in any way.

     Checking the average Speed Way score for a 4090 with a 12900K, the average is 10052. A bit of searching online for what a review of a Founders Edition 4090 reported for Speed Way with a 12900K gave 9916 (ServeTheHome review, March 12 2023). I'd venture a guess that this 136 points is a fairly good value for the combined bias from overclocked and factory OC cards. Roughly 1.3%. This is within the margin of error of 3DMark runs (officially 3%, in practice closer to 1%).
     And this is with a CPU model that was popular when people were overclocking 4090s early on, and without accounting for potential gains from driver improvements, as the score from the review is over a year old. The nice bell curve also shows we have plenty of systems that are overheating or have faulty cooling (either due to things like badly applied paste or due to bad system design - hello, throttling prebuilts!), which average out the mega-overclocked ones.

     The only case where the average score is probably not a good indicator of stock performance is if you find a very rare hardware combination with very few submits (say, less than a few hundred) where an unusually high percentage of results happens to come from an overclocker. Even there you can probably sidestep the issue by using a similar but more common CPU model in the search to get a larger pool of results.
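     The back-of-the-envelope comparison above can be reproduced in a few lines (scores taken straight from the post; this is just the ratio behind the "roughly 1.3%" figure, not any official methodology):

     ```python
     # Values from the post: HoF search average vs. a stock review score.
     hof_average = 10052   # average Speed Way score, RTX 4090 + 12900K, from search
     review_stock = 9916   # Founders Edition 4090 score from the cited review

     bias_points = hof_average - review_stock
     bias_pct = 100 * bias_points / review_stock
     print(f"{bias_points} points, {bias_pct:.2f}%")  # → 136 points, 1.37%
     ```

     Since the officially stated run-to-run margin of error is 3%, a 1.37% offset is indistinguishable from noise, which is the post's point.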
  2. Saw the rules for Steel Nomad on HWBOT, and I would strongly recommend requiring SystemInfo 5.73 or later. Valid Steel Nomad submits will require it on 3dmark.com sometime next week.
  3. This is a bit of a design question. Old tests tried to give you an "overall system performance" score, which involved the CPU - even if its weight in the overall score was small. That was a major problem when people kept using the overall score to compare their GPUs, which of course caused issues: "why is my 3090 scoring so much lower than your 3090?", when the two systems had wholly different CPUs. Some understood why; some declared the benchmark unreliable. People just didn't realize they should use the graphics score, which was the correct way to make that comparison. There were also complaints from OC people who didn't like that they had to overclock the CPU to compete in a GPU overclocking HoF, since the overall score was used.

     The current design is that the tests are isolated. A GPU test will measure your GPU performance. If the test ever becomes meaningfully affected by your CPU (ignoring completely lopsided setups - "hey, let's put a 4090 in this 10-year-old dual-core Celeron system, yeah, that makes sense"), that is a sign the test is getting too light and needs replacing soon. And a CPU test (CPU Profile) will test your CPU and only your CPU. Zero effect from the GPU.

     We've internally floated an idea of offering some kind of "batch run" feature where you could choose to add a CPU test to a GPU test, or even to multiple GPU tests, to get some kind of combined result as a separate run mode, but at this point it is just an idea. We have not done any work on it and have made no decisions. Not promising anything, and do not expect anything anytime soon - we have a long list of things on our plate. It may never happen, and we are not even sure how it would interact with result search or the Hall of Fame (which is frankly suffering from "list bloat" - too many different tests, too many lists). But we are open to feedback, so if someone has grand ideas on how to do such a thing (as an addition to the current isolated tests), we are listening.
     Just have to consider that we are making these benchmarks for a number of different audiences - normal "gamers", casual tweakers, extreme overclockers, business users, GPU manufacturers... - and sometimes they have conflicting needs, and sometimes a thing that makes sense for one user just won't work at all for another.
  4. You are right that the API used should be more visible on the result screen. We'll improve that in an update. Trivial issue; we did not consider its importance for this use case.

     As for the runs with corrupted rendering: this is partially a "thanks, NVIDIA"-level issue with the 40 series, but we are indeed working on a solution. The first part is now ready internally (in our admin interface), and we can now trivially spot these manually. If you think a result is bogus, send a link to our support address and we can check and invalidate it if needed. We cleaned up the top of the Hall of Fame this morning (invalidating results that are obviously not rendered correctly) - if we missed something, let us know and we'll check.

     We are also working on a fully automated solution for future submissions based on the same mechanic we have now. This will take some time - implement it, then test it in shadow mode to make sure we do not suddenly start rejecting legit results - but we will definitely do this. I agree that in the push to get this test out, the result validation side did not get the attention it deserved. We are fixing that right now. And once it is fixed for Steel Nomad, we'll go back and implement the same thing for the older tests.
  5. Should probably have DX12 and Vulkan separately for both Steel Nomad and Steel Nomad Light, as they will be separate tests in the 3DMark Hall of Fame. The NVIDIA DX12 vs. Vulkan performance difference is most likely driver-related and could change.
  6. One small note: 3DMark 2001 and PCMark 2002 uploads to 3dmark.com are not supported. They haven't been supported for some time now, so this is nothing new. Sorry, there is no way to support them without digging up 10-year-old backup tapes for the code, then disassembling and reverse-engineering it, as no source code for the receiving server for these benchmarks exists any more - not going to happen.

     As for the rest of the legacy benchmarks (03 and onwards - in essence, benchmarks that use "modern" SystemInfo): result upload works for now, but since the products are no longer officially supported, we can't guarantee it will stay that way forever. We will not intentionally disable existing support, but if at some point keeping them working would require too large an additional development investment, we may stop supporting them on 3dmark.com. Realistically this would happen only if we embark on a major rewrite of the whole online result system, and no such thing is planned in the foreseeable future.
  7. This post is dedicated to those who have called us out on NV LOD BIAS vs AMD Tessellation controls detection. Just going to drop this little tidbit here, snipped from recent SystemInfo test build output XML...

     <Settings>
       <Setting>
         <Name>Texture filtering - LOD Bias</Name>
         <Type>NVDRS_DWORD_TYPE</Type>
         <Value>0</Value>
       </Setting>
     </Settings>
  8. 12? There's no DX12 test yet! Internally we just call it 3DM and that's how the keys start as well...
  9. You're missing a preset... http://www.futuremark.com/support/update/3dmark/
  10. http://www.3dmark.com/hall-of-fame-2/sky+diver+3dmark+score+performance+preset/version+1.0 So pristine... (currently 0 results)
  11. Files are being distributed to download mirrors as we speak. Steam will update in 30 minutes. Sky Diver time!
  12. "Time measurement data not available" means that the SystemInfo process tracking the timers crashed during the run and could not record the timers at the end of the benchmark. Usually this means your overclock is not fully stable. Note that if you get this error on Windows 8, it means "we can't tell if your result is affected by the downclock bug or not". On Windows 7 it means "we can't tell if your result is affected by some fairly obscure BIOS-level 'tweaks' that mess with the timers". It does not automatically mean the result is invalid, but it does mean the built-in timer checks are not present for this result, so it MAY be invalid - we can't tell.

      "Time measurement data inaccurate" means the timers were properly recorded and they are off - something is causing different timers in your system to be out of sync. Most commonly this is due to the Windows 8 downclock bug. A result with this error is definitely invalid.
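      To illustrate the general idea of the timer cross-check described above - this is a hedged sketch of the technique, not Futuremark's actual implementation - sample two clocks that should advance at the same rate and flag them as out of sync if their elapsed times disagree by more than a tolerance:

      ```python
      import time

      def timers_consistent(duration=0.5, tolerance=0.01):
          """Cross-check two independent clocks over an interval.

          If clocks that should tick at the same real-time rate disagree,
          something (e.g. a downclock bug or a BIOS-level tweak) is skewing
          time measurement on this system.
          """
          wall_start = time.time()          # wall-clock time
          perf_start = time.perf_counter()  # high-resolution monotonic timer
          time.sleep(duration)
          wall_elapsed = time.time() - wall_start
          perf_elapsed = time.perf_counter() - perf_start
          # The ratio should be ~1.0; a large deviation means out-of-sync timers.
          ratio = wall_elapsed / perf_elapsed
          return abs(ratio - 1.0) <= tolerance

      print(timers_consistent())
      ```

      On a healthy system both clocks agree to well within 1%; the 0.5 s sample window and 1% tolerance here are arbitrary illustration values.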
  13. Latest is actually 4.26... This link will always point to the latest SystemInfo installer: http://www.futuremark.com/downloads/Futuremark_SystemInfo/latest
  14. Not unexpected, considering older 3DMarks (06 and older) are effectively CPU marks at this point... The same will probably happen with Ice Storm, which is too lightweight a test for high-end dedicated GPUs.
  15. FYI, updated 3DMark 11 and 3DMark Vantage this week. Minor bugfix updates, no changes to workloads or scores. Details here: http://www.futuremark.com/pressreleases/minor-updates-to-3dmark-11-and-3dmark-vantage