
Lucid Virtu MVP: Revolution in benching?



If it really helps, if it gives us a benefit and more FPS - not sure why you (and many others) are refusing it so hard - OC has ALWAYS been about pushing it, about discovering new tweaks and ways to increase FPS... if this is the key now, we need to accept it!

 

But does it give more FPS, or just an illusion of more FPS?

 

Benchmarks in general count FPS to indicate how many frames the benchmark rendered. This indicates throughput and works as long as every frame is actually rendered.

 

What if something in the middle decides that some of those frames are "unneeded" (for whatever reason) and tells the GPU not to draw them and to jump to the next frame? The FPS counter won't know the difference, so a frame requested but not rendered is still a frame for the FPS counter. And boy, did that frame get "rendered" fast (by skipping the rendering bit).

 

I mean, you could (in theory) hack the driver a bit so it just returns "okay, I rendered it" to every DirectX call without actually doing it. You would get a GAZILLION FPS in every benchmark. And a black screen. But hey, you would get more FPS, so it must be okay, right?

 

An FPS number is meaningless unless there is a common baseline for how it is measured.
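
 

To make the failure mode concrete, here is a toy sketch (not how Virtu actually works; the timings are made up) of a render loop whose FPS counter just counts presents, so skipped frames inflate the score:

```python
import time

def run_benchmark(total_frames, render_time_s, skip_every_nth=0):
    """Toy render loop: the FPS counter only counts presents.

    If skip_every_nth > 0, every Nth frame is 'presented' without
    being drawn -- like a middle layer deciding it is 'unneeded'.
    """
    start = time.perf_counter()
    presented = 0
    for i in range(total_frames):
        if not (skip_every_nth and i % skip_every_nth == 0):
            time.sleep(render_time_s)  # stand-in for real GPU work
        presented += 1                 # counter can't tell the difference
    return presented / (time.perf_counter() - start)

print("honest  FPS:", round(run_benchmark(100, 0.01)))     # ~100
print("skipped FPS:", round(run_benchmark(100, 0.01, 2)))  # ~2x, same work shown
```

Same hardware, same scene, double the "FPS" - which is exactly why the number needs a common baseline.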

Edited by FM_Jarnis

But does it give more FPS, or just an illusion of more FPS? [...] An FPS number is meaningless unless there is a common baseline for how it is measured.

 

100% agreed


So... are there even reasons to allow this? I mean sure, you get higher scores, but we of all people should know that scores are relative to the hardware used. What's the point if an 8400 GT is able to beat an 8800 GTX? Unless the 8800 GTXs are also rebenched and get higher scores too.

In the end we would have the same points structure as now, with the difference that everything seems more powerful. And what are benchmarks for anyway? To compare scores with people using the same hardware, and to check that your PC is running fine. For HWBot, maybe a bit more like "Hahaha, look, I turned my vGPU up to 2.9V", but in the end you would kill all comparability with this step.

 

And on top of this, what do you do when a newer version of this software comes out that is more optimized? Rebench again and again?


But does it give more FPS, or just an illusion of more FPS? [...] An FPS number is meaningless unless there is a common baseline for how it is measured.

+1

 

You just have to stop rendering at 1% of each frame and whoa, I have 100,000 FPS in 3DMark 11...

 

If the rendering is stopped before the end, it's not a frame.


Yeah SoF, that's exactly how I understood it. The GPU wants to render some parts multiple times (which is the normal procedure of the benchmark) but the IGP prevents this. So to me it's clear: MVP forces the benchmark to only calculate maybe 80%, and the FPS goes up.

But in order to get comparable results we have to run every benchmark at 100%, not at 70% or 80%.

 



It's not really a good analogy.

 

It's like two people being asked to draw a waterfall scene in MS Paint five times. Person A draws one scene, saves it, opens a new document, draws again... and so on until there are 5 files. The Virtu guy just draws it once and then copies the file 4x to have 5.

 

In both cases there are 5 waterfall scenes, but Virtu didn't have to put in as much effort.
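
 

The same analogy as a toy sketch (the draw_scene function is hypothetical, standing in for an expensive render): the deliverable is identical, the work is not.

```python
renders = 0

def draw_scene():
    global renders
    renders += 1              # count the expensive work
    return "waterfall scene"

person_a = [draw_scene() for _ in range(5)]   # five real renders
work_a = renders

renders = 0
virtu_guy = [draw_scene()] * 5                # one render, four cheap copies
work_b = renders

print(len(person_a), len(virtu_guy))  # 5 5  -- same "output"
print(work_a, work_b)                 # 5 1  -- very different effort
```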


But does it give more FPS, or just an illusion of more FPS? [...] An FPS number is meaningless unless there is a common baseline for how it is measured.

 

Dude, your program measures FPS as the main, if not the only, way to produce a GPU score. If you have an issue with FPS, then please, by all means, go find another way to measure GPU performance. That is what I have been thinking about: why is there NO other way to measure GPU performance?

 

Anyway, if it's not rendering frames the user will never see, and it makes sure the image doesn't tear, then what is the problem? I don't care if it skips frames I will never see if that speeds up the whole scene.

 

It also does the double buffering, technically taking that task away from the GPU. What is the big deal? It isn't just software, it is hardware too!

 

My impression is that it is a combination of the iGPU buffering the output for the dGPU and this frame selection. I bet if you put a system with it enabled next to one without, gameplay would be better on the system with it enabled.

Edited by sin0822

But isn't the point of the benchmark that it has to "draw the 5 pictures", and the guy who makes the 4 copies is a little cheater? :D

Hasn't the point of benchmarks always been to measure real-world gaming performance anyway...

 

 

If things are changing we have to TEST it first, understand exactly what it does and how it impacts everyday use, and then adopt it or throw it away. A lot of the opinions right now are misinformed or formed with prejudice, without really assessing the full picture.


I don't have the time to look deeper into this tech today, but as my previous post stated, the best way to find out is some really low-FPS scenario that is more a "slideshow at 15 FPS" than a smooth benchmark.

If it is not only about skipping frames, it should turn this scenario into a visibly smoother benchmark when enabled.

 

If it doesn't scale in this scenario, it is not really speeding up benchmarks but just f*cking us all...

 

...which is not too bad for some of you... this way you will not end up as virgins :D

 

true
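
 

SoF's low-FPS test above could be made measurable rather than eyeballed: log a timestamp at every present with MVP off and then on, and compare frame-time percentiles instead of trusting the reported FPS. A rough sketch, assuming a hypothetical log format of one present timestamp (in seconds) per line:

```python
import statistics
import sys

def frame_deltas_ms(path):
    """Read per-frame present timestamps (seconds, one per line)
    and return the gaps between consecutive frames in ms."""
    with open(path) as f:
        stamps = [float(line) for line in f if line.strip()]
    return [(b - a) * 1000 for a, b in zip(stamps, stamps[1:])]

def report(label, deltas):
    avg_fps = 1000 / statistics.mean(deltas)
    p99 = sorted(deltas)[int(len(deltas) * 0.99)]
    # If MVP only skips frames, average FPS goes up but the worst
    # frame times -- the slideshow feel -- stay roughly the same.
    print(f"{label}: avg {avg_fps:.1f} FPS, 99th percentile {p99:.1f} ms")

report("MVP off", frame_deltas_ms(sys.argv[1]))
report("MVP on ", frame_deltas_ms(sys.argv[2]))
```

If the 99th-percentile frame time improves along with the average, it is doing real work; if only the average improves, it is just inflating the counter.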


But isn't the point of the benchmark that it has to "draw the 5 pictures", and the guy who makes the 4 copies is a little cheater? :D

 

"What is the purpose of a benchmark" is a very good question. Is it giving the videocard the task to "render all frames" or is it giving the videocard to "please render this scene".


Hmm, actually this is a good question and I'm not sure what the correct answer is. How does Futuremark see this? They created the 3DMarks, so they should know the answer.

 

My opinion would be to render the complete frames, but that's only my opinion and it doesn't have to be right.


I'm sure FM's answer to that question is 'render all frames'. But the 3DMark benchmark is designed (and marketed) as an application that is supposed to serve as an indication of gaming performance. And when it comes to games, the answer is 'render this scene'.

 

Game developers don't care about how much throughput there is. They care about getting the damn scene rendered at a certain image quality.

 

For me, 3dmarking is just gaming for real men (and women) :D


I'm sure FM's answer to that question is 'render all frames'. [...] For me, 3dmarking is just gaming for real men (and women) :D

 

I was driving home from work today thinking about your question, and I had the exact same thought about scene vs. frame and 3DMark.


  • 2 weeks later...

If it only applies to nV Fermi and better, and ATI 5-series and better... could HWB create an exception for those (MVP not allowed) and allow it for future GPU releases? That way no one's hard work is "wasted" and HWB can still embrace new tech.

 

 

What would show up in GPU-Z? There is no point in talking about disallowing it if there is no way to tell.

 

 

I am interested to see if a slideshow becomes smooth (or smoothER) with this tech. TBH, it's not as if we need big jumps in GPU performance for gaming, given console specs... :P
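
 

On the "no way to tell" point: even if GPU-Z shows nothing, one verifiable check could be to list the modules loaded into the benchmark process and look for a Virtu component. A rough Windows-only sketch; the "lucid"/"virtu" name patterns are a guess, not confirmed filenames:

```python
import ctypes
import ctypes.wintypes as wt

psapi = ctypes.WinDLL("psapi")
kernel32 = ctypes.WinDLL("kernel32")

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010

def loaded_modules(pid):
    """Return the file names of every module loaded in a process."""
    h = kernel32.OpenProcess(
        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    mods = (wt.HMODULE * 1024)()
    needed = wt.DWORD()
    psapi.EnumProcessModules(h, mods, ctypes.sizeof(mods),
                             ctypes.byref(needed))
    buf = ctypes.create_unicode_buffer(260)
    names = []
    for i in range(needed.value // ctypes.sizeof(wt.HMODULE)):
        psapi.GetModuleFileNameExW(h, mods[i], buf, 260)
        names.append(buf.value)
    kernel32.CloseHandle(h)
    return names

# pid of the running benchmark process goes here
pid = 1234
hits = [m for m in loaded_modules(pid)
        if "lucid" in m.lower() or "virtu" in m.lower()]
print(hits or "no Virtu-looking modules found")
```

If something like this works, moderators could ask for the output alongside the usual screenshots.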


Have a separate ranking for current and previous-gen cards (e.g. HD 5870 vs. HD 5870 MVP) and for future cards just let it be? Hopefully it is just Fermi and Cypress upwards.

 

Seems kind of like when PCMark05 got flipped on its head, except this time more people actually care.


Street Fighter IV, I think.

 

I also think Virtu MVP doesn't work with multiple graphics cards, so we'd need an additional Virtu category for single-GPU only.

 

The main issue is that we don't know anything about the lifetime of this technology. If it goes the same way as Hydra (poor drivers), it's a problem, I think.

