HWBOT Community Forums


Posted

Nope, I write code :) I deal in performance first :P

 

I have written a couple of benchmarks (not on HWBot) that act like this, to see where L2/L3 cache plays a role as you increase problem set sizes. Sometimes it's better to test this way in order to maximise throughput and minimise out-to-memory transfers, and you get an overall peak value at the end. I change my problem set sizes over time as well.
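For illustration, here is a minimal sketch of that kind of size-sweep microbenchmark (the sizes, access counts, and names are mine, not from any HWBOT benchmark): a fixed amount of work is timed over growing working sets, so throughput drops as the set spills out of L2, then L3, then into main memory.

```cpp
// A minimal size-sweep sketch; all parameters are illustrative.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Working-set sizes from 32 KiB up to 64 MiB, doubling each step.
    for (std::size_t bytes = 32 * 1024; bytes <= 64 * 1024 * 1024; bytes *= 2) {
        std::vector<std::uint64_t> data(bytes / sizeof(std::uint64_t));
        std::iota(data.begin(), data.end(), 0);

        // Fixed total work regardless of set size, so time differences
        // reflect the memory hierarchy rather than the amount of work.
        const std::size_t total_accesses = std::size_t(1) << 27;
        std::uint64_t sum = 0;

        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < total_accesses; ++i)
            sum += data[i & (data.size() - 1)];  // size is a power of two
        auto stop = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(stop - start).count();
        std::printf("%8zu KiB: %7.1f M accesses/s (checksum %llu)\n",
                    bytes / 1024, total_accesses / secs / 1e6,
                    (unsigned long long)sum);
    }
}
```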

 

If anyone has ever looked at n-body calculations, the complexity of the problem rises as the square of the number of bodies in the simulation. At some point your memory requirements will exceed L2, then L3, and then require going back to main memory. Staying out of main memory makes each simulation step a lot quicker. Or a good coder can hide that latency :)
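To put rough numbers on it, here is a toy sketch (the struct layout and cache sizes are assumptions, just for illustration) of how the n-body state crosses cache levels while per-step work grows quadratically:

```cpp
// Illustrative only: assumed AoS body layout and assumed cache sizes.
// Pair interactions per step grow as n^2, while the state itself is
// O(n) and gradually stops fitting in L2, then L3.
#include <cstdio>

struct Body { double pos[3], vel[3], mass; };  // 56 bytes per body

int main() {
    const double l2 = 1.0 * 1024 * 1024;    // assumed 1 MiB L2 per core
    const double l3 = 32.0 * 1024 * 1024;   // assumed 32 MiB shared L3
    for (int n = 1024; n <= 1 << 20; n *= 4) {
        double state = double(n) * sizeof(Body);
        double pairs = double(n) * n;        // interactions per step
        std::printf("n=%8d  state=%9.1f KiB  pairs/step=%.2e  fits in: %s\n",
                    n, state / 1024, pairs,
                    state <= l2 ? "L2" : state <= l3 ? "L3" : "DRAM");
    }
}
```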

Posted
The time to do a calculation should stay the same, as you don't need to throw more power to keep a game running at 30 fps indefinitely. (3D analogy for a 2D benchmark, but the same principle applies.)

 

But it isn't. The time to complete is around 3 minutes with slow mode vs. 30 seconds normally.

 

The score is either calculated from the peak, like you say, or from just the first loop, with the rest just testing your stability.

Posted

What I was noticing is that you will still get a higher score if you finish the bench. I was doing this trick at 6.1 and getting the same score I did at 5.7 fully completed.

Posted

Some background information:

 

XTU is running a large set of calculations many different times, and is using the best time of all those runs to determine the score. So, when several iterations are run at the higher frequency, we will use the times associated with those runs, as opposed to the times associated with the runs made at 1.8 GHz.
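In other words, the score is best-of-N. A minimal sketch of that shape, with a placeholder workload and run count rather than XTU's actual internals:

```cpp
// Best-of-N scoring sketch (placeholder kernel and run count, not
// XTU's real code). Slow early iterations at a low clock are simply
// discarded once any later iteration beats them.
#include <algorithm>
#include <chrono>
#include <cstdio>

static void workload() {               // stand-in for the benchmark kernel
    volatile double x = 0;
    for (int i = 0; i < 5000000; ++i) x = x + i * 0.5;
}

int main() {
    const int runs = 20;
    double best = 1e300;
    for (int r = 0; r < runs; ++r) {
        auto t0 = std::chrono::steady_clock::now();
        workload();
        auto t1 = std::chrono::steady_clock::now();
        best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
    }
    std::printf("best time: %.6f s  (score scales with 1/best)\n", best);
}
```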

 

It is possible to see the same score under these circumstances if the fastest possible times occur in the first couple of runs of the calculations (while your frequency is higher), but it is more likely that doing this will cause your score to be lower and less consistent.

 

The reason we use the fastest time, as opposed to an average or an aggregate of times, is that the fastest time has been statistically shown to give us a far more reliable score (run the benchmark many times, and you get scores that are close together). Average times tend to be skewed towards far slower times, as things like cache misses and OS-related delays can massively inflate a few runs' times. The times we are talking about are incredibly small, so any interruption makes the operation look glacially slow in comparison.
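A quick toy simulation makes the point: timing noise is one-sided (interruptions only ever add time), so the minimum of many runs is far more repeatable across sessions than the mean. Every number below is made up for illustration:

```cpp
// Toy model: each run takes a fixed "true" time plus a one-sided
// exponential noise tail (cache misses and OS preemption only ever add
// time). The min stays pinned near the true time; the mean drifts up.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::exponential_distribution<double> noise(1.0 / 0.002);  // 2 ms mean tail
    const double true_time = 0.010;                            // 10 ms kernel

    for (int session = 0; session < 5; ++session) {
        std::vector<double> t(50);                 // 50 runs per session
        for (double& x : t) x = true_time + noise(rng);
        double mn = *std::min_element(t.begin(), t.end());
        double mean = std::accumulate(t.begin(), t.end(), 0.0) / t.size();
        std::printf("session %d: min=%.4f s  mean=%.4f s\n", session, mn, mean);
    }
}
```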

 

Sounds like Steponz's finding that slow-mode would give him worse results might be correct too.

Guest barbonenet
Posted

Can this tweak work on the Gene VI and ROG Clable? ;)
