HWBOT Community Forums

Lucid Virtu MVP: Revolution in benching?



On the Lucid MVP front, there will be a press release from Lucid and Futuremark this week. Short version: MVP and non-MVP scores should not be compared to each other.

 

After testing this in-house I also think that anyone with a pair of working eyes can easily watch two runs of a benchmark - any benchmark - and see that nothing really has changed. The only real difference is that you get the same visuals without tearing (courtesy of virtual vsync) and the fps counter shoots up for no apparent reason.



I also think that anyone with a pair of working eyes can easily watch two runs of a benchmark - any benchmark - and see that nothing really has changed. The only real difference is that you get the same visuals without tearing (courtesy of virtual vsync) and the fps counter shoots up for no apparent reason.

 

Is that a positive or negative comment? :) Have you tested in a scenario where the (low-end) GPU would normally give a slideshow instead of smooth FPS?


Some comments:

 

MVP does not work well with multiple GPUs. Two HD 7970s get a 15K GPU score in 3D11; with MVP enabled that drops to 10K (roughly equal to a single GPU with MVP). I assume it's just a matter of expanding the MVP software to support multi-GPU configurations, but of course we don't know how many resources Lucid has to do this.

 

The performance does not go up when overclocking the IGP of the SB CPU. This means that the "coop" between the IGP and the discrete GPU is only half true; it's quite likely that you could very well do the same thing with any other GPU. But of course, it's nice to say that the IGP and dGPU are working together. I don't expect any performance gain going from SB to IB either ...

 

Practically, this software requires nothing more than what you do when enabling D3DOverrider. You just need a mainboard that has the MVP licence.

 

Not all applications scale. Heaven DX11 goes up massively (+40%), 3D11 too (+27%), 3DM03 as well (+50%, need to rerun to verify). Vantage crashed all the time (I suspect an issue between Virtu and the ATI driver), and Aquamark3 actually went down (-24%).

 

FYI, we've scheduled adding functionality to indicate whether or not you've used MVP in your benchmark run. MVP scores will not be ranked and will not receive any points for the time being. We might add ranks/points in the future, perhaps in a separate category.


On the Lucid MVP front, there will be a press release from Lucid and Futuremark this week. Short version: MVP and non-MVP scores should not be compared to each other.

 

After testing this in-house I also think that anyone with a pair of working eyes can easily watch two runs of a benchmark - any benchmark - and see that nothing really has changed. The only real difference is that you get the same visuals without tearing (courtesy of virtual vsync) and the fps counter shoots up for no apparent reason.

 

Will we be able to see whether or not MVP was used in the GUI, or will we have to check the verification link only (like tessellation)?


Will we be able to see whether or not MVP was used in the GUI, or will we have to check the verification link only (like tessellation)?

 

Initially verification link only. Patches that change the benchmark executable are far more complex to implement and QA.

 

Even this requires a SystemInfo update, and the current target is "early Q2", which probably translates to "sometime in April". Yes, there will be several weeks when people can submit MVP-enabled results that can't be separated from legit ones. We are considering what to do about that (one option: flag *all* results with compatible chipsets as suspect until the update is ready).

 

A benchmark GUI update is definitely being talked about. The main problem with it is this: anything the benchmark does in isolation can be exploited, so the only sure way to do the various validation bits is over an online connection. Yet it would be somewhat bad if the 3DMark Advanced/Professional editions wouldn't give a result without a network connection.

 

One potential alternative we've been discussing would be the current GUI extended with a box that, without a network connection, states that the result was made in offline mode and should be considered unverified and preliminary. If a network connection exists, parts of the result file are sent to 3dmark.com in the background, validations are received (nothing is saved anywhere at this point), and the box then displays the result status.
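
To make that flow concrete, here is a rough sketch in Python of how such an offline/online result box could behave. This is purely illustrative: the endpoint URL, the payload handling and the status field are assumptions, not Futuremark's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint, NOT the real 3dmark.com validation API.
VALIDATION_URL = "https://example.com/api/validate"

def get_result_status(result_file_path):
    """Return the text to show in the result box of the benchmark GUI."""
    with open(result_file_path, "rb") as f:
        payload = f.read()

    try:
        # Send (parts of) the result file for validation in the background.
        req = urllib.request.Request(
            VALIDATION_URL,
            data=payload,
            headers={"Content-Type": "application/octet-stream"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            # Only a validation status comes back; nothing is stored at this point.
            status = json.load(resp)
            return f"Validated online: {status.get('verdict', 'unknown')}"
    except OSError:
        # No network connection: still show a score, but mark it as unverified.
        return "Offline mode: result is unverified and preliminary"
```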


Is that a positive or negative comment? :) Have you tested in a scenario where the (low-end) GPU would normally give a slideshow instead of smooth FPS?

 

Please try to understand: the number of frames actually sent to be displayed does not change (as far as I can see) - granted, my testing has so far been fairly limited - but this is my understanding of the underlying tech.

 

The FPS counter increases because it also counts those (partial) frames that were skipped/not rendered.

 

While I have not personally tested the tech in gaming, as I understand it the main benefit is that the time (in milliseconds) between your control input (mouse/key input) and when the game can react to it on screen gets shorter - a benefit for gaming - and you get no tearing without the drawbacks of classic vsync. That's it.

 

In benchmarking there is no control input, so the only practical effect seems to be that anything based on counting the number of frames drawn gets thrown off by the tech (causing it to count frames that in reality were not rendered).
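
To illustrate the effect on a fixed-length benchmark scene, here is a small toy model in Python. The frame costs and the skip rate are made-up numbers; the only assumption carried over from the explanation above is that the counter counts every completed frame, including ones that were skipped rather than fully rendered.

```python
# Toy model: the frame counter counts every frame the application "completes", but with
# MVP some of those are skipped/reused instead of being fully rendered by the dGPU.

RENDER_COST_MS = 10.0   # time to fully render one frame (illustrative)
SKIP_COST_MS = 1.0      # time to "complete" a skipped/reused frame (illustrative)
SKIP_RATE = 0.4         # fraction of frames the tech decides not to render (illustrative)
RUN_TIME_MS = 10_000.0  # fixed-length benchmark run

def simulate(skip_rate):
    t, counted, rendered = 0.0, 0, 0
    while t < RUN_TIME_MS:
        # Deterministic skip pattern just to keep the example simple.
        skip = (counted % 10) < int(skip_rate * 10)
        t += SKIP_COST_MS if skip else RENDER_COST_MS
        counted += 1
        rendered += 0 if skip else 1
    seconds = RUN_TIME_MS / 1000.0
    return counted / seconds, rendered / seconds

for label, rate in [("MVP off", 0.0), ("MVP on ", SKIP_RATE)]:
    fps_counted, fps_rendered = simulate(rate)
    print(f"{label}: counter reports {fps_counted:.0f} FPS, "
          f"actually rendered {fps_rendered:.0f} FPS")
```

With these made-up numbers the counter jumps from roughly 100 to roughly 156 FPS, while the number of fully rendered frames per second actually drops slightly.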

Edited by FM_Jarnis

The FPS counter increases because it also counts those (partial) frames that were skipped/not rendered.

 

If that's true, there is no point using MVP for HWBot. Then we could also just bench 50% of the benchmark and still get a result that is 200% higher. Doesn't make sense.


If that's true, there is no point using MVP for HWBot. Then we could also just bench 50% of the benchmark and still get a result that is 200% higher. Doesn't make sense.

 

It does make sense, that's the problem.

 

There's no visual difference even though 20-50% fewer frames are actually rendered. That means better efficiency, which is a good thing.

 

The logic that you would then be allowed to disable ALL frames is not really valid, because the idea of this software is that there is NO loss in image quality even if not all frames are rendered.


I agree with you, but who runs benchmarks for image quality?

 

We run benchmarks to have comparable results here on HWBot. It's like F1 races: everybody has to complete the same course to get a valid time. You're not allowed to leave out obstacles even though it would be more logical.

 

MVP might have advantages for gaming but not for us here on HWBot.


I think that is something to think about... Since MVP will be present on both AMD and Intel motherboards, we could do as some people suggested and make it valid only for hardware released in the future.

If it really is such a great feature, I agree that we will have to deal with it, and unlike the PhysX stuff it won't be an exclusive feature, since it will be on a huge number of chipsets and motherboards ... But I also agree that we cannot compare results with MVP to those without it, at least not at first, since people have already benched old hardware...


I agree with you, but who runs benchmarks for image quality?

 

We run benchmarks to have comparable results here on HWBot. It's like F1 races: everybody has to complete the same course to get a valid time. You're not allowed to leave out obstacles even though it would be more logical.

 

Nothing here at HWBOT is really comparable. Operating system, drivers, services, frequencies, silicon quality, LOD, EVGA EPOWER ... it's all different. It's comparable ... to a certain degree. That degree is nothing more than subjective. And the subject in this case is the definition of "what is the benchmark".

 

Even with Lucid Virtu, the track is still the same. The scene is still rendered (no visual difference). I understand what you're trying to say, but like I mentioned earlier it just comes down to how you define the benchmark. Is it "render as many full frames as possible" or is it just "render the scene"?


FYI, we've scheduled adding functionality to indicate whether or not you've used MVP in your benchmark run. MVP scores will not be ranked and will not receive any points for the time being. We might add ranks/points in the future, perhaps in a separate category.

 

Is it possible to differentiate scores in 3D11, UH, or 3DMV without a verification link if someone tampers with the CPU/GPU clocks?


The press release is out, for example here:

 

http://www.marketwatch.com/story/lucidlogixr-and-futuremarkr-add-virtutm-mvp-support-to-3dmarkr-2012-03-13

 

(We'll get it on fm.com once the guys in Finland wake up...)

 

Our official statement on this to our users:

 

Important note about LucidLogix Virtu MVP and 3DMark

 

LucidLogix Virtu MVP is an exciting new technology that promises to boost your PC's responsiveness, reduce latency and eliminate visual tearing, all within a low power environment. It is designed for the Intel Sandy Bridge Z68/H67/H61, upcoming Intel 7-series motherboards as well as many AMD processor-based notebooks, all-in-one PCs and desktop motherboards, and will be shipping pre-installed on a wide range of new motherboards starting in April 2012.

 

3DMark works by measuring the process of rendering frames of 3D graphics using defined workloads. Lucid's Virtu MVP is designed to improve responsiveness through the intelligent reduction of rendering tasks. While 3DMark scores may increase with Virtu MVP enabled, scores from systems with and without Virtu MVP are not directly comparable because of its effect on the rendering process.

 

We are working with Lucid to update 3dmark.com to clearly indicate whether Virtu MVP was running or not for each submitted score and to help you understand the performance benefits of Virtu MVP technologies as measured by 3DMark and our other PC performance benchmarks. The update is expected to be ready in April. Until then, we recommend that you manually add MVP information to the name and/or description of your 3DMark scores to help other users better understand your setup.


Just read the whole thread. Massman has it right this time: is the goal of the benchmark 'to render every frame' or 'to render a scene'?

 

A benchmark is used to emulate a real gaming scenario.

In gaming, 'rendering a scene' is more important than 'rendering every frame'.

Thus, the goal of a benchmark is to render a scene.

 

Take the concept of Z-Culling in hardware. It discards part of the scene that doesn't need to be rendered because it is behind something else in the scene. MVP seems similar to this, but more in a 'you've already rendered that bit' type of scenario.
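
For readers unfamiliar with the analogy, here is a minimal depth-test sketch in Python. It is heavily simplified (real Z-culling happens in hardware, per pixel or per tile), but it shows the core idea of discarding work that cannot change the final image.

```python
# Simplified depth-buffer test: a fragment behind something already drawn is discarded,
# i.e. work that would not change the final image is skipped.
import math

WIDTH, HEIGHT = 4, 4
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

def shade_fragment(x, y, depth, color):
    """Shade the fragment only if it passes the depth test; otherwise cull it."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        color_buffer[y][x] = color
        return True   # work was done: fragment rendered
    return False      # work was skipped: fragment is behind something already drawn

# Draw a near quad first, then try a far quad covering the same pixels.
rendered = sum(shade_fragment(x, y, 1.0, "near") for y in range(HEIGHT) for x in range(WIDTH))
culled = sum(not shade_fragment(x, y, 5.0, "far") for y in range(HEIGHT) for x in range(WIDTH))
print(f"fragments rendered: {rendered}, fragments culled: {culled}")  # 16 rendered, 16 culled
```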

 

In the grand scheme of things, I'd say do it.

 

But it only serves to complicate HWBot more.

 

Plus, more excuses to bench everything again. Which is okay if you enjoy it - it'll provide another barrier to people interested in doing casual benching at HWBot, though ("Ugh, can't compete with my card because I'm not on Z77 or w/e").

 

What we need is a 2D GPU benchmark which MVP will not affect. That way, if a rendering tech like this comes along, we don't have to worry about that benchmark's scores.


What we really need is a way of measuring GPU performance that isn't FPS. That's the underlying reality of what we're dealing with.

 

What we really need is a 3D benchmark (not 2D!) that actually says something about the hardware's ability to render a scene at a certain image quality, rather than measure how many frames it can render at a fixed quality.

 

A benchmark that tells me "hey, with this card you can play 1080p with 8xAA smoothly" or "hey, this card will only give you 720p with 2xAA smoothly".
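
A toy sketch of what such a benchmark could report is shown below. The run_scene() function and the 60 FPS "smooth" threshold are placeholders invented for illustration, not an existing benchmark API.

```python
# Hypothetical "quality-target" benchmark: instead of reporting raw FPS, search for the
# highest anti-aliasing level each resolution can sustain "smoothly".

SMOOTH_FPS = 60.0
RESOLUTIONS = ["1280x720", "1920x1080", "2560x1440"]
AA_LEVELS = [2, 4, 8]

def run_scene(resolution, aa):
    """Placeholder for an actual benchmark pass: pretend to return average FPS."""
    width = int(resolution.split("x")[0])
    return 200_000_000 / (width * width * aa)   # fake model: heavier settings -> lower FPS

def report_playable_settings():
    for res in RESOLUTIONS:
        smooth = [aa for aa in AA_LEVELS if run_scene(res, aa) >= SMOOTH_FPS]
        if smooth:
            print(f"This card can play {res} with {max(smooth)}xAA smoothly.")
        else:
            print(f"This card cannot play {res} smoothly at any tested AA level.")

report_playable_settings()
```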

 

I don't think FPS is going away soon, though. It's way too easy to use it as a marketing tool.


Nothing here at HWBOT is really comparable. Operating system, drivers, services, frequencies, silicon quality, LOD, EVGA EPOWER ... it's all different. It's comparable ... to a certain degree. That degree is nothing more than subjective. And the subject in this case is the definition of "what is the benchmark".

 

Even with Lucid Virtu, the track is still the same. The scene is still rendered (no visual difference). I understand what you're trying to say, but like I mentioned earlier it just comes down to how you define the benchmark. Is it "render as many full frames as possible" or is it just "render the scene"?

 

The track is the same, yes, but not all the obstacles are completed. Correct me if I'm wrong, but at the moment no allowed tweak changes the number of rendered frames. E.g. LOD changes the quality but not the quantity.

In 3DMark2003 we have the bugged runs where the screen stays black (frames are still calculated) but you still get a result - a very high one. These results are not allowed.

 

So this would be similar to MVP: frames are left out -> higher result.

 

I know in that case you don't get a picture on your screen, but who decides which frames need to be calculated and which do not?

 

Starting with MVP, frames are left out. Next year we'll have another program which leaves out even more frames, and the result gets higher again. Where does it end?

 

This software can surely improve the gaming experience. But we are benchers, not gamers. We have to think about which option is best for HWBot.

 

I don't see any advantage for benchers here. Only a huge number of disadvantages.


Once again, it depends on how you define 'the track'. I know, this may sound really irrelevant and boring, but it's important. To finish 'the track', is it obligatory to jump over the small hill, or is it allowed to avoid the hill and drive next to it? Even a seemingly regular track like an F1 racetrack still has two boundaries defining the track's limitations. Also, not all cars will drive the exact same route on the track; some will take the outside lane, some the inside lane.

 

It's the same question: what is the benchmark? Is it 'render all frames' or is it 'render this scene'?

 

As for the reductio ad absurdum ("where does it end?"), that actually depends on HOW Virtu MVP decides which frames should and shouldn't be rendered. So far, I have not seen any information on the technical side of the software, so it's a little difficult to actually comment on it (negatively or positively). With my very limited knowledge of 3D development, I think there are two practically relevant scenarios:

 

1) the software compares the new frame with the old frame, analyses it, tracks down the parts that have already been rendered and, instead of forwarding the render request to the discrete GPU, uses part of a previously rendered frame to show the image.

 

2) the software disregards new render requests on an arbitrary basis only to "speed up" the frame counter. This could, perhaps (I don't know), be done by ignoring the less important render requests.

 

In scenario 1), the reductio ad absurdum is not a valid argument, because 'the end' is determined by the 3D scene and that 3D scene only. Virtu MVP would boost the FPS depending on how much similar parts are supposed to be rendered. In this case, we're just talking about an efficiency enhancement as the system is just not rendering identical parts multiple times.

 

In scenario 2), the argument is actually valid and problematic, because then there's an arbitrary (= subjective) basis for deciding how much should be rendered.
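
To make scenario 1 a bit more concrete, here is a rough, hypothetical sketch of a tile-level comparison that only re-renders regions that changed since the previous frame. This is a guess at the general idea, not Lucid's actual implementation.

```python
# Hypothetical sketch of "scenario 1": compare the new frame with the previous one at
# tile granularity and only forward the tiles that actually changed to the discrete GPU.

def split_into_tiles(frame, tile_size):
    """frame is a 2D list of pixel rows; yield (tile_index, tile_rows)."""
    for i in range(0, len(frame), tile_size):
        yield i // tile_size, [row for row in frame[i:i + tile_size]]

def render_with_reuse(new_frame, prev_frame, tile_size=2):
    rendered_tiles, reused_tiles = 0, 0
    output = []
    prev_tiles = dict(split_into_tiles(prev_frame, tile_size)) if prev_frame else {}
    for idx, tile in split_into_tiles(new_frame, tile_size):
        if prev_tiles.get(idx) == tile:
            reused_tiles += 1          # identical region: reuse previous result, no GPU work
            output.extend(prev_tiles[idx])
        else:
            rendered_tiles += 1        # changed region: forward the render request to the dGPU
            output.extend(tile)
    return output, rendered_tiles, reused_tiles

frame_a = [[0, 0, 0, 0]] * 4
frame_b = [[0, 0, 0, 0]] * 2 + [[1, 1, 1, 1]] * 2   # only the bottom half changed
_, rendered, reused = render_with_reuse(frame_b, frame_a)
print(f"rendered tiles: {rendered}, reused tiles: {reused}")   # rendered tiles: 1, reused tiles: 1
```

In this reading, the speed-up is bounded by how much of the scene is actually redundant from frame to frame, which is why the "where does it end?" objection would not apply to scenario 1.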


Yes, you are right :) Then we should define how we see benchmarks in HWBot.

 

1) I would see as a technical development (still not sure why we would need it for HWBot)

 

2) Is there a way to find out whether this is the case? It would be quite problematic, like you already said. Every update to the software could mean the results here on HWBot differ a lot.

Edited by der8auer

As for benchmark definitions, the first idea that comes to mind is leaving it up to the benchmark developers. There are some problems with this idea, though:

 

- historically, overclockers/benchers did not always care about what the BM developer had to say about it (*)

- practical reality: not everything can be pinned down or checked (e.g. tessellation) => lots of loopholes

- BM developers are also not all-knowing and sometimes contradict themselves when speaking of 'purpose' and 'implementation' (see: Borandi's argument)

 

(*) examples:

 

- 3DMark01: using a different run order is not how FM wanted the benchmark to be run

- Usage of LOD was against FM regulation

 

In general, overclockers/benchmarkers have always been more into 'getting the highest score possible' than 'getting a comparable score'. Years back (2001/2002), liquid nitrogen was deemed 'illegal' by a lot of hardcore benchmarkers of that period too.


This decision is really hard, and I also don't know which is the right way. The easiest way is, like you said, to follow FM's rules, even though we didn't do that in the past.

 

We should decide what would be best for HWBot - regardless of our own opinions.

 

MVP on HWBot:

 

Advantages:

- Reach the highest scores possible

- Be up-to-date with the newest technology

- ?

 

Disadvantages:

- Results could be highly affected by updates of the software (next update gives 10% more boost -> bench everything once again...)

- Limited support for different GPUs

- Uncertain future development (Lucid Hydra!)

- Comparability will be affected due to different scaling on benchmarks and different cards

- Multi GPU not supported (1x GPU will be even faster than 2x GPU)

- ?

 

Did I forget something?
