HWBOT Community Forums

Why Cruncher Poll

57 members have voted

  1. Why Cruncher Poll

    • Keep one benchmark (being aware of the much better performance on newer platforms)
      30
    • Split up the benchmarks
      27


Recommended Posts

  • Crew
Posted

Mysticial released an enhanced version of his y-cruncher benchmark

Performance improvements range from a few percent to over 25%, depending on the CPU and its supported instruction set.

 

I'm in favour of running 2 versions on HWBOT, as a single category will lead to e.g. 6500MHz scores that will in the end be taken down by water-cooled setups running the same CPU. Some will be cheering, others will be mad...

 

Therefore a poll is required to let the community decide, but be aware of the performance differences mentioned below.

30+ votes required

 

Ycruncher.png

  • Like 1
  • Crew
Posted

Personally I'm in favour of keeping only one y-cruncher category.

1. Allow the new version,

2. still allow submitting with the old version (for old setups),

3. add y-cruncher - Pi - 25b category

  • Like 6
  • Thanks 1
Posted

That would be my preferred option now as well. Allowing subs with old versions would be essential imo so we don't end up with tons of unbeatable scores. And I would love to have a category that is a tad overkill for current hardware.

Posted

This makes me wonder if HWBOT should have a built-in versioning system.

This is neither the first time, nor will it be the last, that a benchmark changes like this. So maybe a versioning system that lets people filter or aggregate results by version would help.
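As a minimal sketch of what such filtering/aggregation could look like, assuming each submission simply carries a version field (the field names and data layout here are illustrative, not HWBOT's actual schema):

```python
# Hypothetical sketch: group submissions into per-version rankings.
# Field names ("user", "version", "score") are illustrative assumptions.
from collections import defaultdict

def group_by_version(submissions):
    """Split a flat list of submission dicts into per-version leaderboards."""
    rankings = defaultdict(list)
    for sub in submissions:
        rankings[sub["version"]].append(sub)
    # Sort each version's ranking by score (lower time = better for y-cruncher)
    for version in rankings:
        rankings[version].sort(key=lambda s: s["score"])
    return rankings

subs = [
    {"user": "A", "version": "0.8.1", "score": 23.41},
    {"user": "B", "version": "0.8.2", "score": 19.87},
    {"user": "C", "version": "0.8.1", "score": 22.95},
]
for version, ranked in group_by_version(subs).items():
    print(version, [s["user"] for s in ranked])
```

The same version field could just as easily drive a filter on a single unified ranking instead of separate leaderboards; that choice is exactly what the poll is about.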

 

Posted

Allow it and keep it as one imo.

It could be worth it, as HWBOT might get a burst of subs and site activity from old and new people subbing to keep their places, but it also might not have that effect. I don't know.

Also, with DDR5, DIMM size will hopefully ramp up, so going beyond 10b might be worth considering at some point.

That's my 2p on it.

  • 4 weeks later...
Posted

To be precise, the automatic vetting of my company for the Microsoft Partner Network has not been working for the past 3-4 weeks. So I am locked out and cannot sign the new BenchMate driver for the new release.

Wrote to Microsoft Support as well, but it's a farce. Days go by without any progress.

Posted

The driver is updated at least once per major release. Minor releases often use the same driver. That hasn't been a problem until now, but it seems like I need to question these dependencies.

Posted

FWIW, I expect v0.8.2 to go out in the September time-frame. Way ahead of schedule. What I thought would take months ended up taking only 3 weeks.

I don't expect any major perf changes this time (though there's a chance it will reclaim some of the perf loss on the oldest chips). Nearly all the work has been in swap mode, which is not used in competitive benching.

  • 2 weeks later...
Posted

Since it looks like 30+ people voted to keep it as one benchmark, I'm going to re-add the HWBOT submitter app for v0.8.2, and it will be backward compatible with v0.8.1.

Since v0.8.2 is (likely) getting released next month, I won't bother re-releasing v0.8.1 with the HWBOT submitter app.

If staff wants me to hold off, let me know.

  • 4 weeks later...
Posted
On 9/12/2023 at 4:07 PM, Leeghoofd said:

it's a done deal.... majority picked one benchmark... be ready to rebench... everything

 

The vast majority of members do not buy and keep HW for life in order to rebench everything for any given reason.

  • Like 1
Posted

I hope it's not too late for the people in charge of HWBOT to think again about this matter, or any similar one.

When you unify different versions of a benchmark that have alterations in the scoring system, you are in a way deleting a part of overclocking history (besides other aspects).

Some years from now, no one will remember user X scoring miracles with y-cruncher, just because the new benchmark version scores higher.

For similar reasons, legendary overclockers get forgotten (for not being noted in any list because their results got buried).

That's why today's experienced HWBOT members don't know anything about oppainter, hipro5, sf3d, kingpin, shamino, fugger, macci, sampsa, ... and many more.

 

Change this

  • Like 3
Posted

I am obliged to press the knife deeper.

These are serious decisions, and a poll says nothing.

At least not a poll with only 30 votes. Participating members number in the thousands; most very rarely visit the forums.

Possibly Elite members, or more generally speaking members with free access to HW, may have their own reasons to promote a unified benchmark.

Usually, as history here so profoundly proves, these members do not care one bit for the thousands but only for themselves.

HWBOT is responsible for thinking of the masses and not of the few.

This decision, if it finally stands, is a huge distortion of benchmarking history.

I STRONGLY ask the staff of HWBOT to reconsider.

Thank you.

  • Like 1
Posted

Apparently this was discussed in Discord. Posting a few key points here.

[Discord screenshot]

[Discord screenshot]

To be clear, I have no preference on what HWBOT decides to do. I'm not a competitive overclocker and thus cannot make an informed decision. I'm just happy to see that people like the program as a benchmark.

But as the developer, I can confirm that I will not and cannot maintain multiple versions of y-cruncher. This is simply not a feasible task given how volatile modern environments are with operating systems, development toolchains, etc... Therefore, development will only be on the latest version. So support for future technologies like AVX10 and APX will only be on the latest version and not backported to old versions. Likewise I cannot fix issues with old versions even if they break in a way that makes them unusable.

As mat has correctly mentioned, y-cruncher is a scientific program first, benchmark second. The goal is to compute many digits of Pi as efficiently as possible by any means necessary - both hardware and software. Competitive overclocking is only half the equation.

[Discord screenshot]

I can also confirm this. There are many optimizations that I omitted when I rewrote the core algorithm in v0.8.1. If and when I feel like re-adding them, you can expect the program to get faster again. And of course this says nothing about future (unknown) optimizations.

[Discord screenshot]

Right now, there's 1b, 2.5b, and 10b. And I think 5b and 25b will be added soon. That's 5 categories. If we then split this up by version, we're looking at 10 categories or more if future versions get faster. So I can see how this can get cluttered under the current HWBOT system that lacks version tracking/filtering.
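Spelling that category math out (the digit sizes and the two-version split are the ones discussed in this thread; the naming is purely illustrative):

```python
# Quick illustration of how the category count multiplies when rankings
# are split per benchmark version.
from itertools import product

sizes = ["1b", "2.5b", "5b", "10b", "25b"]        # 5 digit-count presets
versions = ["v0.8.1 and older", "v0.8.2+"]        # 2 version buckets

categories = list(product(sizes, versions))
print(len(categories))                            # 5 * 2 = 10 categories
```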

  • Like 1
  • Crew
Posted

My initial idea was to add the newer benches like 5B, 10B and 25B only for the newer version and maintain 25m & 1B exclusively for the older version.

2.5B and 10B would be supported on both the old and newer versions.

So at least the huge 1B database would remain untouched. I'm fully with TASOS on maintaining and cherishing the records/scores we have.

 

FYI, HWBOT can make the distinction between benchmark versions, but which version was used has to be added in the datafile. However, in the end it would still require 2 different rankings if any distinction has to be made for boints, etc.

 

@fasttrack well, you can say we have a lot of active benchers, but they mostly never chime in on any topic. Typically, IF they did, it would be way later, criticizing why this decision was ever made.

 

We stick to what the majority wanted. Address them, not HWBOT; we proposed another split but got bashed quite hard for it.

 

Let's see how it turns out.

If it turns out to be a giant clusterfuck, it will not be repeated (at least not on my watch).

Posted
Quote

FYI, HWBOT can make the distinction between benchmark versions, but which version was used has to be added in the datafile.

The version information is already embedded in the datafile as metadata. Assuming HWBOT keeps its datafiles, you can retroactively go back and parse that information on all existing submissions. In fact, the entire validation file that y-cruncher generates (which contains everything you need to know) is also embedded in the datafile. At least for submissions using the HWBOT Submitter. I dunno about the BenchMate submissions. So the data is there if you need it. Though it may not be convenient to access.

If you want the version information in a different place, I can do that.
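As a hedged sketch only: if the datafile were, say, a ZIP archive holding a JSON payload with a version field (a purely hypothetical layout, not a documented HWBOT format), retroactively tagging existing submissions could look roughly like this:

```python
# Hypothetical sketch of retroactively reading the benchmark version from a
# submission datafile. The ZIP-with-JSON layout, the "data.json" member name
# and the "version" field are assumptions for illustration only.
import json
import zipfile

def read_version(datafile_path):
    """Return the benchmark version recorded in a submission datafile."""
    with zipfile.ZipFile(datafile_path) as archive:
        with archive.open("data.json") as f:
            metadata = json.load(f)
    return metadata.get("version", "unknown")

# Example usage (file name illustrative):
# print(read_version("submission_12345.hwbot"))
```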

Posted

How can we "protect" the good old scores done with 0.8.1?

I don't know if this is a silly idea, but...
What if every new benchmark version (with a big computation speedup) had different bench presets?

So whoever wants to take the crown using a specific CPU would have to bench only with the 0.8.1 version.

Would something like that be an option? Or would it cause other kinds of problems?

 

 


 
