HWBOT Community Forums


About _mat_


  1. I thought about this long and hard today. I have a vision of course, one that came together over the last few years while working on and supporting GPUPI, talking to overclockers, researching benchmark security and breaking every bench out there. So yes, I think that people want easier access to benching to get the community growing. They want to trust all posted results, screenshots and system data. I also think that everybody wants more system data, like much more! And I am pretty sure nobody wants to ever fill out a submission form again for ten results in a row. That said, I made a decision today. Fulfilling my promises will take more time, and I don't think we have that luxury. We need a solution right now. So I am going to take a big shortcut and release the first few versions of BenchMate as a wrapper for HWBOT. Although I disabled a couple of features again, this will be a HUGE step forward in the right direction. I am currently working on integrating HWBOT submissions by result file and online. It shouldn't take too long, so expect a beta version very soon.
  2. @Leeghoofd I am working on it all day currently. Yesterday I made a huge breakthrough that finally made Windows 7 support happen. That took considerably more effort than I ever imagined. There is also a 32-bit version of BenchMate now. Last but not least, I implemented a very complex kernel-to-user-mode communication model called the "Inverted Call Model". It enables BenchMate to receive any change from within the driver within milliseconds, and it has less impact on performance than querying all process information and states every 500 ms. So, yeah. It's very exhausting work. I need to be really thorough to make this happen the right way; there is a lot of rewriting involved and lots to learn as well. But it is happening, that's for sure. And it's much more than a wrapper. It's a completely new take on the workflow of benchmarking.
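The difference between polling and the inverted call model can be sketched conceptually outside the kernel: instead of the monitor asking for the full state every 500 ms, a request is parked and the event source completes it the moment something changes. A minimal user-mode analogy in Python (the real BenchMate mechanism uses pending IRPs completed by a Windows driver; the names and event strings here are illustrative):

```python
import queue
import threading
import time

def polling_monitor(get_state, interval=0.5, timeout=2.0):
    """Polling model: query the full state every `interval` seconds.
    Worst-case notification latency is one full interval."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state is not None:
            return [state]
        time.sleep(interval)
    return []

def inverted_call_monitor(events: queue.Queue, timeout=2.0):
    """Inverted call model: park a blocking request; the event source
    completes it as soon as something happens, so latency is
    milliseconds rather than the polling interval."""
    try:
        return [events.get(timeout=timeout)]
    except queue.Empty:
        return []

# The "driver" pushes an event; the parked request wakes immediately.
events = queue.Queue()
threading.Timer(0.05, lambda: events.put("process_created")).start()
print(inverted_call_monitor(events))  # -> ['process_created']
```

The payoff is the same as in the driver case: no wasted work while nothing changes, and near-immediate delivery when something does.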
  3. I meant of course with a wrapper in place. Or a very good benchmark implementation that securely handles timing and scores. It's just not possible on XP.
  4. To put a lot of unnecessary pressure on myself: I am going to release the first version of my generic wrapper on the 11th of April. Regarding donations/financing, let's talk about that when the first version is out and stable.
  5. I highly doubt that any benchmark on Windows XP can be called secure. XP was not built with security (or stability 😛) in mind; that's why it was nearly impossible for antivirus tools to secure it. Securing benchmarks with a wrapper uses very similar techniques to antivirus tools: protecting processes, detecting changes in a process's memory, anti-debugging measures and so on. With Windows 7 and above, Microsoft provides more functionality that can be used to implement these features. On Windows 10 it's even easier. That's not all, of course. There is the new driver model introduced with Vista and a multitude of kernel improvements that all strengthen the security of the OS. I still think that the efficiency of the bench should decide the OS. That's the natural way of benching. The question is whether a recompiled version of SuperPi, for example, can perform better on the latest Windows versions.
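As a rough illustration of the anti-debugging idea: on Windows a wrapper would use calls like IsDebuggerPresent or kernel callbacks; the sketch below is only a hypothetical Python analogy, using the interpreter's global trace hook (which Python debuggers install via sys.settrace) as the stand-in:

```python
import sys

def debugger_attached() -> bool:
    """Crude analogy to IsDebuggerPresent(): Python debuggers and
    tracers register a global trace function, visible via gettrace()."""
    return sys.gettrace() is not None

# A wrapper would refuse to score a run if tampering is detected.
if debugger_attached():
    print("tracer detected: a wrapper would refuse to score this run")
else:
    print("environment looks clean, benchmark may start")
```

Real wrappers layer several such checks (process protection, memory integrity, anti-debugging), which is why OS support for them matters so much.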
  6. This is normally the solution. As far as I can see on your screenshots you are not installing the AMD OpenCL SDK 2.9.1 or 3.0, but an old legacy AMD display driver. Try this download: https://www.softpedia.com/get/Programming/SDK-DDK/ATI-Stream-SDK.shtml
  7. @Leeghoofd, those are two very different errors. Crashes while saving the result file are hardware detection problems; they have to be looked at from platform to platform and from GPUPI version to GPUPI version. GPUPI 3.2 is less stable than 3.3 in that regard, but that's what newer versions are for, I guess. Not getting one loop out of it seems to be a fatal error during the kernel compilation phase. GPUPI 3.3.3 is more optimized and uses less common functionalities of OpenCL. Some of those seem problematic in combination with certain platforms and drivers. Both problems will be fixed with GPUPI 4, because the hardware detection has been improved (as with every version) and OpenCL will no longer be used for CPUs (so no compilation phase). I will NOT fix any of the GPUPI 3.x versions, they are dead. I cannot bugfix two different versions; it is not possible with my already limited time at hand. So all fixes currently go into GPUPI 4, which will be released after BenchMate.
  8. What are the problems you are referring to? Without a detailed description it's difficult to help. As for a step-by-step for Intel CPUs: download the latest Intel OpenCL runtime (https://registrationcenter.intel.com/en/forms/?productid=3207), open GPUPI, click "Calculate", select your CPU under the latest Intel OpenCL platform you can find in the settings dialog, and run. There could be inconveniences regarding your OS timer, where GPUPI states that you can't submit due to possible timer skews. Either enable HPET (GPUPI 3.3 gives you that opportunity by just pressing a button) or run Windows 7 or a Windows 10 build prior to RS5.
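The timer-skew warning mentioned above boils down to measuring the same interval with two independent clocks and rejecting the run if they disagree. A simplified sketch of such a check (this is not GPUPI's actual code; the clock pair and tolerance are assumptions for illustration):

```python
import time

def timer_skew_ratio(busy_s=0.1):
    """Measure the same wall-clock interval with two independent
    clocks and return their ratio; a skewed timer drifts away from 1.0."""
    a0, b0 = time.perf_counter(), time.monotonic()
    time.sleep(busy_s)
    a1, b1 = time.perf_counter(), time.monotonic()
    return (a1 - a0) / (b1 - b0)

def submission_allowed(tolerance=0.05):
    """Refuse to submit a score if the clocks disagree by more than
    `tolerance` (relative)."""
    return abs(timer_skew_ratio() - 1.0) <= tolerance

print("skew ratio:", timer_skew_ratio())
```

On a healthy system the ratio sits very close to 1.0; a manipulated or misconfigured timebase shows up as a consistent deviation.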
  9. Thanks for the report, will have a look at it as soon as I can. Hope I still have X79.
  10. To circumvent HPET in GPUPI use Windows 7 or GPUPI 3.2 + Windows 10 prior to RS4. RS5 changes the QPC timer frequency to 10 MHz and GPUPI currently can't handle that.
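The RS5 change means any code that hard-codes the old QPC tick length breaks; elapsed time always has to be derived from the frequency the OS reports. A minimal sketch of the conversion (10 MHz is the post-RS5 frequency mentioned above; pre-RS5 values varied by machine, which is exactly why the constant must not be baked in):

```python
def qpc_elapsed_seconds(start_ticks: int, end_ticks: int, frequency_hz: int) -> float:
    """Convert a QueryPerformanceCounter tick delta to seconds using the
    frequency reported by QueryPerformanceFrequency -- never a constant."""
    return (end_ticks - start_ticks) / frequency_hz

# Post-RS5: QPC runs at a fixed 10 MHz, so 5,000,000 ticks = 0.5 s.
print(qpc_elapsed_seconds(0, 5_000_000, 10_000_000))  # -> 0.5
```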
  11. I've looked into R20. Timers have not changed much: it still relies mostly on timeGetTime (uses the RTC) and additionally QueryPerformanceCounter (mostly the TSC, or HPET if manually enabled in the OS). I still think that timeGetTime is the primary timer to measure performance, as QPC is mostly used for UI-dependent stuff internally in Windows, while timeGetTime is only called if the developer chooses to do so. Anyway, R20 should have the same OS/platform restrictions as R11.5 and R15. By the way, timer calls can easily be shown with my TimerBench project (menu Tools => Measure). An advantage of R20 is the missing scriptability: while R15 is scriptable via CoffeeScript, R20 seems to have no such functionality. CoffeeScript support is a huge vulnerability currently present in R15; an easy one-liner can simply set a new benchmark score. Problematic are the configuration options available to customize the benchmark run in R20. There are a number of options listed in resource/config.txt which can also be applied as command line arguments. I have not figured out yet whether changing these values will impact the benchmark score. But it was possible, for example, to run the test with 128 threads instead of the normally chosen 16 for my 9900K (-200 points) and to enable a console. So that should be looked at, but as far as I can see, R20 might be the better choice in terms of security.
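Measuring a timer's effective granularity, as TimerBench's Measure tool does for timeGetTime and QPC, amounts to spinning until the reported value changes and recording the smallest step. A hypothetical probe in Python (the Windows tool reads the actual WinMM/QPC APIs; time.perf_counter stands in here):

```python
import time

def timer_granularity(clock=time.perf_counter, samples=5):
    """Spin until `clock` reports a new value; the smallest observed
    step approximates the timer's effective resolution."""
    steps = []
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:          # busy-wait for the next tick
            t1 = clock()
        steps.append(t1 - t0)
    return min(steps)

print(f"effective resolution: {timer_granularity() * 1e9:.0f} ns")
```

Run against a coarse timer like an RTC-backed timeGetTime, the same probe would report steps in the millisecond range instead of nanoseconds, which is what makes the choice of timer so visible in benchmark scores.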
  12. Although that would be perfectly valid as a new rule, it's definitely less fun. It might be even more fun to distribute the global points by CPU architecture.
  13. I only meant the web submission of course. If there is an error while saving the dat file, it's a benchmark issue. marco, thanks for submitting the ticket. Didn't get around to it.
  14. I have the same problem on my dev system as well. It's because the online submission returns an HTTP error 400 when submitting. It's the same issue as: