HWBOT Community Forums

Lucid Virtu MVP: Revolution in benching?



Can I ask: won't Ivy Bridge make everyone re-bench every title held by Sandy Bridge, plus all the benchmarks where SB was frequency limited? If that is the case, won't you have to re-bench it all anyway, because people will re-bench it all to begin with?

 

I understand what you say, but you are making it sound like MVP will make an inferior piece of hardware perform better than superior hardware, when in fact the superior hardware would still be superior. I can see you guys' points, but why not wait for release to see how it actually performs? You haven't tried it, I haven't tried it, but if it is getting this much attention it must be something.

Edited by sin0822


Can I ask: won't Ivy Bridge make everyone re-bench every title held by Sandy Bridge, plus all the benchmarks where SB was frequency limited? If that is the case, won't you have to re-bench it all anyway, because people will re-bench it all to begin with?

 

Ofc, that's what I meant with the GTX 580. Ofc everybody will rebench it. But all the older scores which are not CPU limited won't be affected, or only a little bit. That's how it has always been.

 

Using MVP, on the other hand, will affect every score, no matter which GPU or CPU was used.


I think this will invigorate the bot more than anything. We are starting the new overclocking season, boys; just like in any sport, new performance parts have come in and you just have to hang on tight :D

 

people will have to rebench, so WHAT. Who needs an excuse to bench lol

 

HWBOT is going to have to upgrade their servers again before April though :D, that could be a bit of a negative (or is it :D) heheh


From a hardware vendor point of view, I say allow it. Lucid MVP boosts not only graphics performance but also overall system responsiveness because of the reduction in redundant frame rendering - this also frees up the CPU (not only the GPU). Essentially, it will be a must-have bundle for every motherboard maker who targets the high end (read: every brand will have it on their high-end MBs, without exception), and every user out there will be using it or will have the option to use it. In my opinion, MVP will be what gamers and other power users get when they use a high-end desktop PC in the future, so if they're going to be basing their purchasing decisions on OC scores and rankings, we might as well use/allow it!

 

This is my personal opinion and not an official GIGABYTE statement.

Edited by Tim Handley

Instead of posting whether I am for or against it, allow me to point out my concerns regarding Virtu MVP:

 

- A benchmark has certain sets of data/frames to be rendered. By my understanding, the benchmark can only be considered valid if those sets of data are not reduced in any way.

*Yes, I didn't agree with disabling tessellation in the 3DMark11 benchmark :P*

 

I don't know exactly how Virtu MVP is implemented, but if the algorithm somehow 'reduces' the benchmark data/frames, then a result obtained using Virtu MVP can be considered invalid.

 

But then again, if the increased frame rate is caused by the IGP somehow 'helping' to render the frames, and the benchmark data is NOT reduced, I can agree with this.

 

Just my 2 cents ;)


I vote allow.

 

You guys voting against, can I ask why? Are you too lazy to OC the iGPU and configure this program? You allow so many other tweaks to be used, but not one that actually revolutionizes gaming?

 

There is no point in using any of the sub-par Intel overclocking systems released in the past year just to send Intel a direct message: please don't give us CPUs that won't OC to 6 GHz, that cost more than we should be paying, and that clock so differently that it makes OCing less fun.

 

So when SB came out, was there this much outcry against letting SB be used because it would overtake the rankings?

 

http://www.lucidlogix.com/download/WP-Eliminating%20Graphics%20Pipeline%20Redundancies%20181011.pdf

 

I see what you mean about losing points to software, but it is more than software, as the iGPU is being used. It is like Hybrid CF with Intel.

 

You are absolutely wrong. This is not CF with Intel HD Graphics; it only technically (virtually) raises FPS for better input lag. The FPS are the same, but they are reported higher to the system ... maybe some tweak will raise these virtual FPS to 1000 and you will see 3DMark 11 single-GPU results of 50,000 points ... without OCing the VGA! Check the AnandTech analysis of it; this MVP is PURE FAKE towards customers and towards benchmarks ... it does not raise FPS for real, it only reports them higher ...

Edited by OBR

OBR should read whitepapers instead of Anandtech :P

 

I've checked the whitepaper; there is nothing about the real principles, only PR messaging ... read it carefully, it's not talking about higher FPS but about better responsiveness.

 

PS. Another fact against the multi-GPU theory: MVP also exists for systems with only one GPU (i-mode only), with HyperFormance and Virtual Vsync ...

Edited by OBR

OK, I haven't had time to read the whitepaper, but if I understand this whole thing correctly: if I have, say, a 2600K @ 5.6 GHz and a GTX 580 @ 1600 MHz core, achieving a score of say 12K in 3DMark 11, then with this enabled I could be looking at close to 20K, roughly a 67% improvement (assuming it works that well with 3DMark tests). Then why not enable it? Those that are seriously dedicated to benching won't mind, as everyone using this will benefit, and if you can't be bothered, well, that's your problem. It won't mean some guy at his desk running the same setup at stock will be getting the same score.

It will be like now with the release of the 7970s: not all of us can afford them, but those that can are cleaning up the records, until more of us get them and even things up. Initially this will break every record out there, and then break them again a few times as more experienced benchers do sessions using it. Eventually we'll end up where we are now, just with scores and records being higher, but still with the same difficulty to achieve/break. Just the stragglers and inactive users will fall away. I see it as a good thing.


Just a heads up: Futuremark is following this development with great interest. On the most basic level, anything that improves real-world performance in games is obviously a good thing, and as long as the benchmark score improvement and the real-world gaming experience improvement match, the benchmark workloads themselves are valid.

 

On the other hand, benchmark scores are useful only if all factors that contribute to the score are known. If you can't tell apart a run from an existing normal system and a run from a hardware-identical system with this software in place (except that the second case would have a considerably higher score), it renders benchmark scores somewhat less useful.

 

We're investigating, and I would imagine that the first priority is to have the ability to detect something like this and flag the scores accordingly.
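
For illustration only, here is a minimal sketch (not Futuremark's actual method) of the kind of self-check a benchmark process could run: scan the modules mapped into its own process for a third-party frame-interception layer and flag the result if one is found. The name hints below are hypothetical placeholders, not confirmed Lucid file names.

import psutil

# Hypothetical name fragments; the real Lucid module names would need to be confirmed.
SUSPECT_HINTS = ("lucid", "virtu", "hyperformance")

def interception_flags():
    """Return warnings for suspicious modules mapped into this process."""
    flags = []
    for mapping in psutil.Process().memory_maps():
        path = (mapping.path or "").lower()
        if any(hint in path for hint in SUSPECT_HINTS):
            flags.append("possible frame-interception layer: " + path)
    return flags

if __name__ == "__main__":
    for warning in interception_flags():
        print(warning)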

 

Edit: It should also be noted that a lot depends on what effect this has on image quality. Tessellation tweaking isn't comparable, because that changed image quality.

Edited by FM_Jarnis

Though I like my Gigabyte mobos very much, and do get review samples from time to time, as an enthusiast I am not biased towards any company. It seems to me that most of those in support are Gigabyte employees or reviewers heavily supported by Gigabyte. Is it because this would be primarily implemented on Gigabyte motherboards? Or is it maybe just one more mobo company creating a rush for its boards only?

 

P.S.: I hope my samples are not stopped :P


Though I like my Gigabyte mobos very much, and do get review samples from time to time, as an enthusiast I am not biased towards any company. It seems to me that most of those in support are Gigabyte employees or reviewers heavily supported by Gigabyte. Is it because this would be primarily implemented on Gigabyte motherboards? Or is it maybe just one more mobo company creating a rush for its boards only?

 

P.S.: I hope my samples are not stopped :P

 

I don't think it's only on GBT:

 

Essentially, it will be a must-have bundle for every motherboard maker who targets the high end (read: every brand will have it on their high-end MBs, without exception)

 

Also, check out the Lucid customer portfolio:

 

[Image: Lucid customer portfolio]


Instead of posting whether I am for or against it, allow me to point out my concerns regarding Virtu MVP:

 

- A benchmark has certain sets of data/frames to be rendered. By my understanding, the benchmark can only be considered valid if those sets of data are not reduced in any way.

*Yes, I didn't agree with disabling tessellation in the 3DMark11 benchmark :P*

 

I don't know exactly how Virtu MVP is implemented, but if the algorithm somehow 'reduces' the benchmark data/frames, then a result obtained using Virtu MVP can be considered invalid.

 

But then again, if the increased frame rate is caused by the IGP somehow 'helping' to render the frames, and the benchmark data is NOT reduced, I can agree with this.

 

Just my 2 cents ;)

 

Heh, it's sort of a grey zone.

 

- Does MVP change the benchmark code? No.

- Does MVP trick the FPS directly? No.

 

What MVP does change is the underlying assumptions of the benchmark creator. For 3DMark11, one of the underlying assumptions is that the graphics card will render each frame completely, even if it contains data that has already been rendered before. What this technology does is basically having the IGP tell the GPU "are you fucking kidding me" when it's rendering a part of the scene multiple times and tells it to do the work only once.
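
As a toy illustration only (Lucid's real detection logic isn't documented here), the idea can be sketched as a filter that fingerprints incoming draw work and skips anything identical to work it has already handled:

from hashlib import sha1

class RedundancyFilter:
    """Toy model: skip draw work whose inputs match something already rendered."""

    def __init__(self):
        self.seen = set()

    def should_render(self, draw_call_bytes: bytes) -> bool:
        key = sha1(draw_call_bytes).hexdigest()
        if key in self.seen:
            return False   # redundant: reuse the earlier result instead of re-rendering
        self.seen.add(key)
        return True        # new work: hand it to the GPU

# Example: the second identical call is filtered out.
f = RedundancyFilter()
print(f.should_render(b"draw terrain, camera at (0, 0, 0)"))  # True
print(f.should_render(b"draw terrain, camera at (0, 0, 0)"))  # False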

 

In a way, it's similar to any software tweak that enhances the speed of the benchmark. Remember your PCMark7 tweak to boost the image rendering (set to 640x480)? It's the same principle. Adjusting LOD is the same principle, disabling services is the same principle, disabling tessellation is the same principle, doing a copy-waza is the same principle. The principle being: running the benchmark in a different way than the developer intended.

 

Now, I can already hear you say that "LOD really does boost FPS". Well, this thing sort of does too. The issue here is that the FPS counter doesn't care how much of a frame has really been rendered. So, a situation where it renders 100% + 75% + 50% + 25% + 50% of new frame content is accounted for as 1+1+1+1+1 = 5 frames. The real amount of data rendered, however, is closer to 3 (full) frames.
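
A quick worked version of that accounting (just the arithmetic above, nothing more):

# Fraction of each 'frame' that actually had to be rendered.
rendered_fractions = [1.00, 0.75, 0.50, 0.25, 0.50]

counted_frames = len(rendered_fractions)    # what the FPS counter reports: 5
effective_frames = sum(rendered_fractions)  # real rendering work: 3.0 full frames

print(counted_frames, effective_frames)     # 5 3.0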

 

In theory, the software renders exactly the same scenes, just a bit smarter. In a way, this technology kind of shows how inefficiently the GPU is currently being used. Basically, when you jump from 60 FPS to 250 FPS (~400%), it means the GPU can (in theory) render the exact same scene using about 4x fewer resources. But when I say "in theory", I mean that's how the whitepaper explains the concept. As far as I know, there's no technical documentation on how the redundant frames are detected, or whether a user can fiddle with it so it could, for instance, skip all frames.
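
One way to read that claim in numbers (assuming, as the whitepaper does, that the scene content is unchanged):

reported_before, reported_after = 60, 250   # FPS without / with the tweak

speedup = reported_after / reported_before            # ~4.17x, the ~400% jump mentioned above
implied_utilisation = reported_before / reported_after  # ~0.24: only ~24% of the work was non-redundant

print(round(speedup, 2), round(implied_utilisation, 2))  # 4.17 0.24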

 

As I mentioned before, I see the technical discussion ("could we allow the software") as separate from the HWBOT discussion ("how does it affect points/ranks"). Having the technology 'approved' does not necessarily mean that we allow it for HWBOT points.


I agree that it's not cheating. I also agree that it will become a must-have feature. Everybody will have to rebench with Ivy Bridge anyway; better to use the best tweak available out of the box... I don't want to spend LN2 on a VGA only to see people on normal hardware forums scoring better than I did on their own rigs without any tweak. I think this is something we are all going to have to deal with.

As some people said, since it doesn't change the benchmark itself or prevent the hardware from rendering any scene (it only optimizes recurring frames: they were rendered once, and the CPU and driver feed the VGA the important info), I think it is completely legal...

 

By the way, both CDs that I've got here have the MVP folder empty as well =P I'd like to try it... I'll check later whether the Lucid driver alone isn't enough or whether it's really missing the MVP stuff...


You're comparing apples with oranges.

 

SB overtakes the rankings due to its performance. That means SB can give a boost to the score of a GTX 580, but not to an 8800 GT, because that card is completely GPU limited.

 

There are ten thousand results done under the current rules. Now Lucid comes along with this software and all the hard-earned results will be gone. An 8800 GT on a stock cooler will kill all the LN2 scores which cost a lot of money, time and effort.

 

So is this fair to all the people who have spent so many years benching here on HWBot? I don't think so.

 

And IMO this software could kill the bot. If I have to rebench every score from the last 5 years just because of this software, I'm not sure whether I will continue with this hobby. And I know a lot of other people who think exactly the same way.

 

This software may be nice for gaming, but we are here on HWBot. I don't see any advantage we would get from using this software, but a lot of disadvantages!

+100000

 

 

@all

Just keep in mind that many people work and don't have time to rebench every VGA.

Moreover, once a VGA has been benched, I have to sell it to buy a new one... I don't have unlimited cash.

 

It's not fair to introduce new software that wipes out all the past submissions.

 

 

 

Well, I see your point. Maybe HWBot could do something like limiting scores that used it to only certain hardware categories, or allowing only certain categories to use it. Who knows.

 

If MVP has to use the IGP to work, HWBot has to make a special category; if they don't, what happens if I OC a VGA in a motherboard without an IGP?

Am I allowed to use another VGA just to track down redundant frames? And if I use another VGA instead of the IGP to track down redundant frames, would the result be better?

 

 

 

I vote against using it on existing benchmarks, since this may skew the current rankings, but on some 3DMark 2013 or whatever (after this technology is officially released), why not?

Yes, good idea, or just allow MVP with cards launched after this technology was officially released.

 

 

In many 3D benchmarks we are already CPU limited; if you give a 400% VGA boost, it is no longer a 3D benchmark, and the winner will be the one with the biggest CPU ... money money money.

 

To me it looks like PhysX... very powerful, but giving a boost that does not show the real power of the card.

 

So I vote against, or you have to restrict MVP use to upcoming benchmarks/VGAs.

 

 

Sorry for my poor English... I try to do my best :(


OBR, the MVP type that only works with one GPU is not currently available; it is in the works, as far as one article I read said. If that type of software, which ONLY requires the dGPU to work, arrives, I don't think it should be allowed. However, the current MVP, as I understand it, REQUIRES the iGPU.

 

Second of all, from my knowledge, MVP doesn't currently work with the dGPU alone, and it doesn't work with more than one dGPU.

 

The iGPU takes over some of the tasks, such as the output buffering, so in that case the two are working together in some way.

 

I think it is very clear who hates this: those that are trying to sell GPUs. This technology makes it very hard to sell SLI and CrossFireX, as well as high-end GPUs, to people who want a certain framerate. Not to us, we will buy SLI/CF, but many gamers might not if what Lucid and the first comparison claim is true.

 

All companies will have this technology at launch; why wouldn't you? I thought it was just a licensing thing. It would be interesting if Intel had allowed MVP to be licensed for Z68; they demoed it on Z68 (I saw the demo at IDF), but it never happened. It looked too good to be true, but consider how powerful the SNB iGPU was at transcoding, and how the IVB one is supposed to be even better.

 

A way to see if this really matters is to check whether framerates increase when you OC the iGPU.

 

I still have to properly analyze this technology for myself. I would bet that with the multi-GPU (MVP) configuration there would be microstuttering; FYI, microstuttering happens in cases where multi-GPU technology is involved. If there is a lot of microstuttering, it would make sense that this technology is a form of SLI. If there is no microstuttering and the FPS are consistently high, then it might be cheating. Can someone analyze this?
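
For anyone who wants to try, here is a rough sketch of such an analysis, assuming a plain frame-time log (one frame duration in milliseconds per line, as frame-time capture tools can produce); the file name and the 20% jump threshold are arbitrary choices for illustration, not an established standard.

def load_frame_times(path):
    """Read one frame duration (ms) per line from a capture log."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def analyze(frame_times_ms, jump_threshold=0.20):
    """Report average FPS and a crude microstutter metric: the share of
    consecutive frames whose duration changes by more than the threshold."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    jumps = sum(
        1
        for prev, cur in zip(frame_times_ms, frame_times_ms[1:])
        if abs(cur - prev) / prev > jump_threshold
    )
    return {
        "avg_fps": 1000.0 / avg_ms,
        "stutter_ratio": jumps / max(len(frame_times_ms) - 1, 1),
    }

print(analyze(load_frame_times("frametimes_mvp_on.txt")))   # hypothetical log file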

Edited by sin0822
