
Everything posted by FM_Jarnis
-
Please try to understand: the number of frames actually sent to the display does not change (as far as I can see; granted, my testing has so far been fairly limited), but that is my understanding of the underlying tech. The FPS counter number increases because it also counts the (partial) frames that were skipped and never rendered. While I have not personally tested the tech in gaming, as I understand it the main benefit is that the time (in milliseconds) between your control input (mouse/keyboard) and the game's on-screen reaction gets shorter - a benefit for gaming - and you get no tearing without the drawbacks of classic vsync. That's it. In benchmarking there is no control input, so the only practical effect seems to be that anything based on counting the number of frames drawn gets thrown off by the tech, since it ends up counting frames that in reality were never rendered.
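To illustrate the latency point above, here is a minimal sketch. It is a rough model of my own (an assumption, not Lucid's actual algorithm): with classic vsync a finished frame waits for the next refresh boundary before your input becomes visible, while without that wait the input shows up as soon as rendering finishes.

```python
import math

def input_latency_ms(render_ms, refresh_ms=16.7, classic_vsync=True):
    """Rough illustrative model of input-to-screen latency.

    With classic vsync, a finished frame must wait for the next
    refresh boundary, so latency rounds up to a multiple of the
    refresh interval. Without that wait, latency is just the
    render time. Hypothetical numbers, for illustration only.
    """
    if classic_vsync:
        # Round up to the next vblank boundary (60 Hz ~= 16.7 ms).
        return math.ceil(render_ms / refresh_ms) * refresh_ms
    return render_ms
```

With a 10 ms frame, classic vsync makes you wait the full 16.7 ms refresh interval; removing that wait gets your input on screen in 10 ms, which is the kind of improvement that matters for gaming but is invisible to a benchmark with no input.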
-
Initially, verification link only. Patches that change the benchmark executable are far more complex to do and QA. Even this requires a SystemInfo update, and the current target is "early Q2", which probably translates to "sometime in April". Yes, there will be several weeks when people can submit MVP-enabled results that can't be separated from legit ones. We are considering what to do about that (one option: flag *all* results with compatible chipsets as suspect until the update is ready). A benchmark GUI update is definitely being talked about. The main problem with it is this: anything the benchmark does in isolation can be exploited, so the only sure way to do various validation bits is to use an online connection. Yet it would be somewhat bad if the 3DMark Advanced/Professional editions wouldn't give a result without a network connection. One potential alternative we've been discussing is the current GUI extended with a box that, without a network connection, states that the result was made in offline mode and should be considered unverified and preliminary. If a network connection exists, parts of the result file are sent to 3dmark.com in the background and validations are received (nothing is saved anywhere at this point), and the box would then display the result status.
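The proposed box behaviour is simple enough to sketch. Note this is just the option under discussion, not a shipped feature; the function name and status strings are mine, for illustration:

```python
def result_status_box(has_network, server_says_valid=None):
    """Sketch of the discussed GUI result box (hypothetical, not
    a shipped feature).

    Offline runs get an explicit "unverified" label; online runs
    display whatever status the 3dmark.com validation returns.
    """
    if not has_network:
        return "Offline mode: result is unverified and preliminary"
    if server_says_valid:
        return "Result verified by 3dmark.com"
    return "Result flagged: see 3dmark.com for details"
```

The key design point is that the authoritative validation always happens server-side; the box only reports it, so there is nothing local for a cheater to exploit.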
-
On the Lucid MVP front, there will be a press release from Lucid and Futuremark this week. Short version: MVP and non-MVP scores should not be compared to each other. After testing this in-house I also think that anyone with a pair of working eyes can easily watch two runs of a benchmark - any benchmark - and see that nothing really has changed. The only real difference is that you get the same visuals without tearing (courtesy of virtual vsync) and the fps counter shoots up for no apparent reason.
-
But does it give more FPS, or just an illusion of more FPS? Benchmarks in general count FPS to indicate how many frames the benchmark rendered. This indicates throughput, and it works as long as every frame is actually rendered. But what if something in the middle decides that some of those frames are "unneeded" (for whatever reason) and tells the GPU to skip them and jump to the next frame? The FPS counter won't know the difference, so a frame requested but not rendered is still a frame to the FPS counter. And boy did that frame get "rendered" fast (by skipping the rendering bit). I mean, you could (in theory) hack the driver a bit so it just returns "okay, I rendered it" to every DirectX call without actually doing anything. You would get GAZILLION FPS in every benchmark. And a black screen. But hey, you would get more fps, so it must be okay, right? An FPS number is meaningless unless there is a common baseline for how it is measured.
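The arithmetic behind this is easy to demonstrate. Below is a deterministic toy model (entirely hypothetical, my own construction): a naive FPS counter counts every *requested* frame, but skipped frames cost essentially zero GPU time, so the reported FPS inflates even though no extra real frames were drawn.

```python
def reported_fps(frames_requested, render_ms, skip_ratio):
    """Model of a naive FPS counter (illustrative, not real
    driver code).

    Every requested frame is counted, but only the non-skipped
    fraction actually consumes render time. Skipping work
    therefore inflates the reported FPS without producing any
    additional real frames.
    """
    rendered = int(frames_requested * (1.0 - skip_ratio))
    elapsed_s = rendered * render_ms / 1000.0
    if elapsed_s == 0:
        return float("inf")  # the "hacked driver" case: GAZILLION FPS
    return frames_requested / elapsed_s
```

With 16 ms frames and nothing skipped you get 62.5 FPS; skip half the rendering work and the same counter reports 125 FPS, all the way up to infinity when nothing is rendered at all - which is exactly why the number needs a common measurement baseline.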
-
FM is still investigating, but from what we've been able to piece together so far, "if it sounds too good to be true..." should be applied. You may get big numbers from a benchmarking application with this, but you have to ask: "why is this number bigger - has performance actually improved, or is something influencing the way the score is determined?" Of course HWBOT can decide to apply whatever standard they desire (it seems, for example, that not running proper tessellation workloads in 3DMark 11 with AMD GPUs is "okay" by HWBOT while we think it is not), so I guess we'll see... This will take some time to properly address, but expect an official stance from FM on this as it relates to 3DMark and PCMark soon-ish.
-
Just a heads up; Futuremark is following this development with great interest. On the most basic level, anything that improves real-world performance in games is obviously a good thing, and as long as the benchmark score improvement matches the real-world gaming experience improvement, the benchmark workloads themselves are valid. On the other hand, benchmark scores are useful only if all factors that contribute to the score are known. If you can't tell apart runs from a normal system and another, hardware-identical system with this software in place (except that the second would post a considerably higher score), benchmark scores become somewhat less useful. We're investigating, and I would imagine the first priority is the ability to detect something like this and flag the scores accordingly. Edit: It should also be noted that a lot depends on what effect this has on image quality. Tessellation tweaking isn't comparable because it changed image quality.
-
Just FYI, Catalyst 12.1 is now approved.
-
No, this is purely due to no combined AMD driver available. We do not approve beta drivers. 12.1 will be approved this week. It doesn't support 7xxx series, so... Edit: Also note that GTX 560Ti experienced exact same issue.
-
03 has built-in SystemInfo which cannot be updated (and no patch is planned, it is no longer supported) - it can go boom on new hardware that didn't exist when it was released. For 06 (or later), install latest SystemInfo, it most likely fixes the issue.
-
Yes, the 1.0.3 requirement for competitions/HoF is not yet live because we want to give people time to upgrade their benchmarks. Well, that and the fact that we don't want to push major 3dmark.com code updates right before the holidays. Our programmers dislike the idea that they may have to spend the holidays fixing stuff at the office... 3dmark.com / ORB validation and anti-cheat measures have been improved in 1.0.3, as stated in the patch notes. If you still know a way to get a bogus result through with 3DMark 11 1.0.3, by all means let us know and we'll see what we can do. No amount of automation can catch *everything*, so contest-related results need to be manually validated in case of suspicion anyway, and really it is completely up to you what standards you enforce in your contests, toplists, etc. Futuremark requires a valid 3dmark.com submission for the Hall of Fame and for FM-run contests. They will require 3DMark 11 version 1.0.3 soon. Anything beyond that is up to the individuals managing each contest/site that uses 3DMark results. FM recommends that only the latest version of each benchmark is used.
-
We can't. A lot of the validation is done on the 3dmark.com site; it is the only way to make sure the validation is not hackable. Any indicator in the GUI based on what 3dmark.com says would be complex to implement and still hackable. The only real way to ensure that a result has not been tampered with is to check it on 3dmark.com. I know you guys use screenshots and all that, but you really should also require a 3dmark.com result link, which indicates that the result is valid. On 3dmark.com, we may do that at a later date. We don't want to do it instantly because there are millions of 3DMark 11 installations out there, and people would get very, very pissed if they ran the benchmark and then, as they went to submit (the 3dmark.com view is the only one in Basic Edition), were told to patch and rerun before getting a result. Basically, once a large enough percentage of all submits are 1.0.3, we may disallow previous versions completely. If I recall right, at the moment 1.0.1 is the minimum for getting a result (1.0 would just tell you to upgrade). But again, this will influence the HoF sooner. The only reason this will take a bit is that it means a change in the HoF code, and we are already rewriting the whole page, so the change will come with the HoF page update (it will also have new features!)
-
Correct. We plan to change that in January.
-
You will be able to submit to 3dmark.com and get a link even for a score that is not officially valid. But it won't appear on the FM.com Hall of Fame page unless it is deemed valid. The 3dmark.com result page will also have a box that tells how and why the score is not valid for Hall of Fame purposes - stuff like unapproved drivers, tessellation tweaking in AMD drivers, or hardware that isn't properly detected, etc.
-
This one is in the tracking database as a confirmed issue but I do not know if/when PCMark Vantage will receive a patch. There is also some uncertainty if the bug is actually in Microsoft code (naturally it could be worked around in our code, but that may require full revalidation and testing cycle for all benchmark results as such a workaround would affect the run on all platforms). So at this point the answer is "I don't know". I'll poke the PCMark lead programmer when he gets back from the Christmas holiday and check if there is any news.
-
1.02 at least. Not sure about 1.01. But it won't count for very long, we are going to change it.
-
Will add that to the feature suggestion list internally and we'll take a look in January.
-
PCMark 04 is so old and unsupported, this will not be fixed. Sorry. As for PCMark 7, I'm not sure what the problem you are referring to is? We do have PCMark 7 patch in development for (tentatively) early 2012 launch and there are some issues that are getting fixed but I'm not certain which one you refer to.
-
On 3dmark.com, you can see what benchmark version was used for each result. Regarding the FM Hall of Fame, our web team is in the process of completely revamping that page, and a new version should appear in Jan/Feb. Included in that is enforcement that the HoF only shows results from the latest version of each benchmark, so for Hall of Fame purposes, old scores will be kinda-invalidated at that point. It's up to you how you deal with this on hwbot.org or in any other toplists - the versions used (benchmark and SystemInfo) are always visible on 3dmark.com, so they can be verified for any benchmark result submitted to the 3dmark.com service. If you have further feedback on this, just toss me an email at info [at] futuremark.com and I'll happily forward it to the people who make decisions related to this at FM. All feedback is always welcome and will be considered. This is especially true for matters that relate to our benchmarks and any competitive benchmarking; we are making the product for our users and naturally want it to be relevant and fair. For example, we may consider adding a line to the result validity warning box on 3dmark.com (the one that includes warnings for unapproved drivers / tessellation controls etc.) for results obtained with older versions. Would that help?
-
Sorry, no idea. It would appear that for some reason the GPU isn't fully stressed in GT4 on some runs. As every run asks the driver to do the exact same things, I can only surmise that there is something funky going on with the drivers. We have not seen this kind of effect in-house, and considering the thousands of runs the benchmark went through during pre-launch verification testing, this kind of massive score disparity on random runs would have stood out like a big flashing beacon. So I'm fairly certain that this is not a 3DMark 11 bug, but an issue with the rest of the system doing what 3DMark 11 asks it to do during the benchmark run.
-
Physics? Or Combined? That number-of-GPUs switch / SI-enabled setting should not affect any test except the Combined Test. In the Combined Test, if the switch is set to something other than the actual GPU count (or auto, as per what SI detects), yes, the score changes, because the test workload is different. The GPU count determines how many copies of the GPU physics simulation are spawned. Normally there should be one copy of the simulation running on each individual GPU (to eliminate cross-GPU data traffic on the physics simulation state). The whole "GPU physics using DirectCompute" thing is a fairly complicated issue on multi-GPU setups. In any case, AMD just released new drivers, and we're going to investigate in-house whether anything has changed with either of them (11.12 or the 12.1 preview drivers) and will bug AMD again if there are still issues. I've also suggested internally that FM should make a full ("lots of big technical words") disclosure on this specific issue - we'll see if it can be made to happen. It could explain exactly what is going on.
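The one-simulation-copy-per-GPU point is the crux of why a wrong GPU count changes the score. A tiny sketch of the relationship (my own illustrative model; the cost constant and function name are hypothetical, not actual 3DMark 11 internals):

```python
def combined_test_physics_workload(gpu_count):
    """Illustrative model of the Combined Test physics workload
    (hypothetical, not actual 3DMark 11 code).

    One copy of the GPU physics simulation is spawned per reported
    GPU, so reporting the wrong count changes the total amount of
    work - and therefore the score is not comparable.
    """
    COST_PER_COPY = 100  # assumed arbitrary work units per simulation copy
    return {
        "simulation_copies": gpu_count,
        "total_physics_work": gpu_count * COST_PER_COPY,
    }
```

So a dual-GPU rig reported as a single GPU runs half the physics work of a correctly detected run; the resulting number may look better, but it came from a different workload.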
-
Ask AMD? Currently FM has no solution for this problem, and the issue is within the video drivers. We have told AMD this (several times) and can only wait for them to correct the issue.
All single-GPU configurations work
2-GPU CrossFire works
SLI with 2, 3 and 4 GPUs works
-
Disabled Tessellation invalidates the score on any configuration. You are effectively telling the driver to skip work that the benchmark requests it to do. You can do so, but the score is meaningless. Naturally up to you under what "rules" you benchmark. I'm talking about FM hall of fame and the result being flagged in 3dmark.com result browser. Just disabling SI affects the score only on multi-GPU configurations (and only with Combined Test).
-
Not too shocking. PCMark series has always been part hardware benchmark, part Windows OS benchmark - it runs a lot of Windows component code for workloads. Tamper with Windows and yes, the results can change a lot. It is really up to you what you think is "okay tweaking" and what is "cheating" in this context. (ultimate PCMark 05 tuning: Write your own Windows API compatible OS optimized for running PCMark 05! )
-
Just a heads up...
3DMark 11 version 1.0.3 release notes
* Bullet physics library updated to 2.79 in order to improve compatibility with current and future CPUs and GPUs
* Manual GPU Selection option now available on the Help tab
* Updated SystemInfo module to version 4.6 with improved compatibility with current and future hardware
* Improved error handling and messages
* A new “More” tab with information about new Futuremark benchmarks
* Professional Edition activation now requires online key verification
SystemInfo Version 4.6 release notes
* Updated CPUID SDK
* Added new security measures against benchmark result file tampering
* Updated system scanning components to resolve reported compatibility issues with existing and future hardware
http://www.3dmark.com/3dmark11/download/update/
The SystemInfo update included in this patch also applies to other benchmarks. It is available separately here: http://www.3dmark.com/support/systeminfo-updates/
...and due to the second line of the SystemInfo update notes, we strongly recommend that for any contest and toplist purposes, only runs with 3DMark 11 1.0.3 and SI 4.6 (or later) should be approved, effective immediately. 3DMark.com will also start rejecting submits from older versions in the near future (probably around mid-January, to give everyone plenty of time to update). The Hall of Fame will get picky sooner. The update also influences scores slightly, but the differences should be within the margin of error (we're talking 1% differences), and the cause is the new Bullet physics library. Old scores are not invalidated, but future contests should require the use of the latest version to avoid issues related to this. Also, for those who may have access to pre-release hardware: yes, this update fixes the issues 3DMark 11 had with the Ivy Bridge integrated GPU (which were actually problems with the Bullet physics library, hence the update).
-
Be advised that any run with SystemInfo disabled produces incorrect results in the Combined Test on multi-GPU setups. The benchmark needs to know the number of GPUs for that test. The issues you are seeing with 3+ Radeon GPUs in this test are due to AMD driver issues. We've told them; they have so far disregarded the issue. Sorry, out of our hands. As for an SI-disabled result: while the score may seem great, it is meaningless. The benchmark doesn't run the same workload that it would with SI enabled (or that an SI-enabled SLI run does).