HWBOT Community Forums

_mat_


Everything posted by _mat_

  1. You can't draw conclusions from BenchMate about other benchmarks. It's an indication that certain timers like QPC might be alright, but that really depends on what the other benchmark is using and what BenchMate needs to do to make your benching as easy and bulletproof as possible. For example, the legacy 3DMarks mostly use GetTickCount and timeGetTime. Both are really unstable compared to modern time measurement methods. Both of these methods are completely avoided when using BenchMate: a more appropriate timer is injected instead (and measured against two other timing methods implemented directly via hardware in my driver). SystemInfo doesn't do any timer evaluation on legacy 3DMarks, if I remember correctly; the time measurements are directly inside the benchmark. The newer ones seem to have additional checks, but no idea what and where.
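The resolution gap between tick-based timers and QPC-style counters can be illustrated outside of BenchMate. This is a minimal cross-platform sketch in Python (not BenchMate's actual injection mechanism): on Windows, `time.perf_counter` is backed by QueryPerformanceCounter, while coarser tick-based clocks resemble the GetTickCount/timeGetTime granularity the post mentions.

```python
import time

# Compare the advertised resolution of two clock sources. Python exposes
# the platform's clock metadata via get_clock_info(); on Windows,
# perf_counter maps to QueryPerformanceCounter (sub-microsecond), while
# tick-based timers like GetTickCount have roughly 10-16 ms granularity.
for name in ("monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, "
          f"implementation={info.implementation}")

def elapsed(clock):
    """Time a short busy loop with the given clock function."""
    start = clock()
    total = sum(i * i for i in range(100_000))
    return clock() - start, total

dt_perf, _ = elapsed(time.perf_counter)
print(f"perf_counter elapsed: {dt_perf:.6f} s")
```

With a coarse timer, short workloads like the loop above can appear to take 0 ms entirely, which is one reason coarse clocks make score measurement unstable.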
  2. No, it's not a big deal, I can do it. But I'd rather not if it's not absolutely necessary. I think the workflow would be much cleaner and less confusing for newcomers without the redundancy. But if a decision is made to continue with mandatory CPU-Z for BM-validated scores, I can add a helper to enforce it. I have some ideas for that.
  3. I'm up for it! ❤️
  4. Nobody says to throw out legacy or ban XP for legacy. Please read the posts. What's being discussed:
     • whether scores that already use BenchMate still need redundant CPU-Z windows
     • whether unreleased platforms should no longer be able to use XP
     • how to make the bot more accessible for new users without overcomplicating the rules
  5. It's about Alder Lake and running benchmarks on an OS that was not designed for modern platforms. I totally agree with Roman because we don't know shit about the validity of these scores, the timers used, or even how stable the RTC or its implementation is on a modern platform. The only solution I can think of for a legacy OS would be a separate piece of hardware with a crystal that measures the time independently of the untested OS/platform timer. To just believe that scores are not significantly skewed is very "unscientific" and should not be the standard we measure our work by.
  6. As for the CPU-Z redundancy, I really hope we can make at least a small step here. If you already bench with BM, you are already doing much more work to prove that you are not trying to fark the system, and you willingly take the (relatively small) impact of providing data during the run instead of afterwards. We all know that sensor information after the run can be far from the truth, yet we take it as the baseline for all other scores. That's a relic of the time when we had no other tool at our disposal. I thought about this for a while, and to me the right approach would be to turn the tables and let the odd scores do the work instead of putting it onto the other 99%. We could flag scores that are out of line as "waiting for proof" and let the bencher rerun to bring additional proof, so the score can be properly understood. PS: I'm all for taking that step with Alder Lake, of course.
  7. Having more time now to answer properly. I understand the problem that I am just one guy keeping the main part alive. I do have a team: two guys doing web stuff for the validation platform (still unreleased and currently on ice due to other work arrangements we need to finance our lives) and one graphics designer. But the truth is that everything you can currently see, the Windows application, was made by myself. For now at least. I don't like that at all and yes, it's risky. But so is HWiNFO (Martin, one guy) and CPU-Z (Franck, one guy). HWBOT itself was basically coded by one guy as well. If you don't have the money for redundancy, you have to take the risk. That said, early on when Roman decided to buy HWBOT we got in touch and talked about the opportunity for a cooperation, or maybe even more, so there would be no competition in an already difficult niche. It did not work out at the time (I guess Roman was busy, it happens), but I am still up for an official cooperation to bring overclocking and benching to a whole new level and to even out the odds that BenchMate could die from one day to the next. In any case, my will leaves the source code of the software to Splave. I sincerely believe that he would know what to do with it.
  8. I was asked to make the result window smaller to fit the CPU-Z windows. We can make it as big as needed!
  9. It was released btw and the error is now fixed: https://benchmate.org
  10. Important update! Please redownload and reinstall the latest version if you have problems! Download: https://benchmate.org/ Changelog: https://benchmate.org/changelog/0.10.8 Support: https://www.patreon.com/benchmate I have fixed several bugs with Windows 7 and 8.1. The driver could not be installed on these systems due to major limitations of these legacy OS versions. Be aware that you might not be able to run BenchMate on Windows 7 if you don't have the latest updates installed. Especially old root certificates will not work with my new EV certificate. I had to install IE11 to make the file validation work on all ends. It's really hard to get this working in 2021. I recommend the Winfuture Update Pack, January 2020. I also found a problem with PYPrime's validation of module dependencies and fixed some harmless but annoying workload crashes on 64-bit.
  11. BenchMate 0.10.8 is here: Download: https://benchmate.org/ Change Log: https://benchmate.org/changelog/0.10.8 Support: https://www.patreon.com/benchmate
  12. Probably 11600K or lower? If yes, the bug has been fixed in the latest Patreon preview version: https://www.patreon.com/posts/first-preview-of-54950276
  13. That way the benchmark integration can stay as it is, and the category, and therefore the number of cores, is decided by the CPU model detection. It's practical and works out of the box even for old benchmarks without BM integration.
  14. I think that's a very important question. Theoretically it depends a lot on the benchmark and the performance of the little cores.

Let's say we have the perfect parallel workload that really scales with the number of threads the big+little cores offer. A benchmark has to have some kind of workload scheduler to divide the calculation into smaller tasks that can be run in parallel. When a processor thread is finished with its last task, it gets the next one, until all tasks are finished. The size of a task depends on its overhead, the weight of the calculation, memory dependencies and so on. For example, if you split up the work into one simple addition per task, the scheduling overhead is too huge and the scaling will be off (32 threads will not be 32x faster than 1 thread). Normally the tasks have a significant size to eliminate any overhead. For CB, the task size depends on the number of available threads, visible as the yellow boxes being rendered in parallel. In GPUPI you can select the task size with the batch size.

This normally works pretty well when you have equally fast threads. The scheduler assumes that all tasks will take an equal amount of time and schedules them so that all threads stay busy until all tasks are done. It's really important that they finish at nearly the same time, because the benchmark has to wait until all tasks are finished to show you the juicy final number. That's often the reason for score variance, and why single-threaded benches are more stable than workloads running on 64 threads or more. A good run might just be a little bit of luck because the tasks finished at nearly the same time.

With big.LITTLE we now have slower threads available to the scheduler. They need more time to finish a single task. If the thread scheduler still assumes that all threads are equally fast and always hands tasks to the next available thread, this becomes a problem at the end of the calculation. It might hand the last task to a slow thread, and you will have to wait for it to finish while all the other threads are already idle. You can try this in GPUPI when you mix one or more high-end cards with a single low-end card: the low-end card will hurt the final score instead of helping it with its increased CU count.

It's all highly theoretical, but in my opinion big.LITTLE will hurt some benches the same way HT can hurt. So my guess is that it will have to be disabled for some benches, depending on how they are implemented. Maybe a flag in the ranking next to the score would be good enough to show that big.LITTLE was enabled and gave a boost.
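The straggler effect described above can be simulated with a toy greedy scheduler. This is a hypothetical sketch (the task counts and speed ratios are made up, and no real benchmark scheduler is this simple): each fixed-size task goes to whichever thread frees up first, and the score depends on when the last task finishes.

```python
import heapq

def greedy_schedule(n_tasks, task_cost, thread_speeds):
    """Greedy scheduler: each task goes to whichever thread frees up first.
    Returns the wall-clock time until the LAST task finishes, because the
    benchmark can only show the final score once every task is done."""
    # Heap of (time when the thread becomes free, thread index).
    free_at = [(0.0, i) for i in range(len(thread_speeds))]
    heapq.heapify(free_at)
    for _ in range(n_tasks):
        t, i = heapq.heappop(free_at)
        # A slower thread takes proportionally longer per task.
        heapq.heappush(free_at, (t + task_cost / thread_speeds[i], i))
    return max(t for t, _ in free_at)

# Two equally fast threads finish 4 unit tasks in 2.0 time units.
print(greedy_schedule(4, 1.0, [1.0, 1.0]))       # → 2.0
# Adding a 10x slower third thread makes it WORSE: the slow thread grabs a
# task early, and everyone else sits idle waiting for the straggler.
print(greedy_schedule(4, 1.0, [1.0, 1.0, 0.1]))  # → 10.0
```

The same model covers the mixed-GPU case: a single slow device that picks up one of the last tasks dictates the finish time of the whole run.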
  15. Something to consider with options 1 and 4 is that the HWBOT submission API does not support such a separation or a thread count yet. And benchmarks don't support it either. It needs to be added on both ends, otherwise all CPUs would end up in the same category under their CPU name. I will add support in BenchMate for whatever is decided, of course. Little chance for GPUPI 3, as I lost the source code for both 3.2 and 3.3 due to a failed SSD and pure stupidity. Might be good timing to finally get rid of the two versions and finish GPUPI 4.
  16. Strange indeed. Maybe you launched it with a special DPI setting in the shortcut? There is no BenchMate-specific setting inside the application that decides the resolution of the screenshot; that is calculated each time the screenshot is made. But there are Windows settings that let the application believe it runs with a lower DPI and therefore a lower desktop resolution. Anyhow, glad you fixed it.
  17. Will need some more information to check this out. OS, native resolution and DPI setting, please. Maybe some special graphics card pseudo-resolution feature enabled?
  18. No, sadly it was not possible for me to finish the latest maintenance release yet.
  19. Very soon. I had a very tight deadline with work and have been coding day and night. My life currently sucks, but I'm still trying to make it happen asap.
  20. Great work. Exactly how it's done.
  21. It's a bug in the latest HWiNFO version. I will contact the dev today and see if we can figure it out.
  22. Please post the wrong screenshot so we can look into it. BenchMate doesn't show any chipset information. Where is that detection and what's wrong with it?
  23. Yes, you can match as you like. The fastest card will decide the combination represented on the bot. Your example will be listed as 3x3090. As for the 6900 XT, I'm not getting support by any graphics vendor, so no, there currently is none for GPUPI as well. Yes, it goes both ways.
  24. This would fit better in the BenchMate support thread, but my guess: you are not using the bundled HWiNFO version. They have to be compatible, otherwise they won't load next to each other. It's a limitation of the SDK/HWiNFO.