HWBOT Community Forums



  1. _mat_

    Rule update for GPUPi

    You are right, newer drivers are mandatory to run CUDA 9, which GPUPI has used since 3.x. I hope NVIDIA has a good reason for breaking backwards compatibility. 😒 2.x is no longer allowed for submission because it's definitely less secure. In particular, the 1366 dual-socket RTC bug is not taken care of, so we can't enable it without a server-side check from HWBOT. That's actually something that should have been done a long time ago. Why should every benchmark check for what's valid and what's not? The benchmarks should send the internal vendor and device ID + timer configuration + OS to HWBOT, and a general server-side logic should validate the score. Instead, every benchmark has to be updated when a problem is found, and most benchmarks don't get fixed at all. In any case, 2.x would still be benchable if something like that were in place. It seems strange to me that newer drivers have anything to do with cold bugs. Do you know what's happening here? Maybe there is another way to circumvent the root of this problem. I doubt that CUDA itself has anything to do with it. Another solution would be to build a GPUPI version with CUDA 8 capability. There would then be a GPUPI Legacy version, a GPUPI CUDA 8 version and the normal version. It's kinda ridiculous.
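    The server-side check described here could be sketched roughly like this. Everything below — the field names, the rule set and the `validate_submission` helper — is hypothetical and only illustrates the idea of moving validation logic out of the individual benchmark binaries:

```python
# Hypothetical server-side score validation: the benchmark client submits
# hardware/OS/timer metadata and the server applies shared rules, so a new
# rule doesn't require updating every benchmark binary. All field names
# and rules are illustrative, not HWBOT's actual API.

def validate_submission(payload: dict) -> tuple[bool, str]:
    """Return (accepted, reason)."""
    # Reject configurations with known unreliable timers, e.g. the
    # RTC bug on dual-socket 1366/X58 systems mentioned above.
    if (payload.get("timer") == "RTC"
            and payload.get("sockets", 1) > 1
            and payload.get("chipset") == "X58"):
        return False, "RTC timer unreliable on dual-socket 1366 systems"
    # Require a minimum benchmark version per API path (string compare
    # is good enough for this sketch's single-digit versions).
    if payload.get("api") == "CUDA" and payload.get("version", "0") < "3.0":
        return False, "CUDA submissions require GPUPI 3.x or newer"
    return True, "ok"

accepted, reason = validate_submission({
    "vendor_id": 0x10DE, "device_id": 0x1B06,   # illustrative IDs
    "timer": "RTC", "sockets": 2, "chipset": "X58",
    "os": "Windows 7", "api": "OpenCL", "version": "2.3",
})
print(accepted, reason)  # → False, rejected by the dual-socket RTC rule
```

    With checks like these living server-side, an older client such as GPUPI 2.x could stay benchable, because invalid timer configurations would be filtered on submission rather than in the binary.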
  2. It is an OpenCL bug. Sadly there is nothing I can do about it (which is another reason why I want to move to a native calculation path without OpenCL for CPUs).
  3. That should have automatically been submitted to the 2x GTX 980 Ti category. Was it edited afterwards?
  4. _mat_

    GPUPI - SuperPI on the GPU

    It just stops when compiling the kernels. Well, that's bad. Please try this beta version of 3.3.3, which adds a lot of detailed debug log messages to the initialization of CUDA and OpenCL: https://www.overclockers.at/downloads/projects/GPUPI 3.3.3 Beta.zip Please post the contents of the debug log, thank you!
  5. _mat_

    GPUPI - SuperPI on the GPU

    Would definitely be one of the hardest GPU categories to give points for, right? GTX 1080 Ti at 3000 MHz for over 4 minutes, anyone?
  6. _mat_

    GPUPI - SuperPI on the GPU

    Yes, it's cleaner code with improved comparability between different devices, and the OpenCL path is now implemented correctly, as it always should have been. The different OpenCL drivers produce closer results (although AMD OpenCL 1.2 = AMD APP SDK 2.9-1 is now the best choice in all categories), Batch Size and Reduction Size are not as picky as they were before, and on NVIDIA cards the OpenCL implementation comes very close to the CUDA implementation, which indicates that everything is done right now. The bottom line is that GPUPI is now much better as a benchmark in general. I would have done this with GPUPI 1 already, if I could have. But I wasn't good enough at OpenCL coding and mathematics back then (OpenCL is a brutal beast though). The good news is that something like this won't happen again. I will not touch the algorithm anymore, because it's pretty much maxed out the way I do it. The next step is an OpenMP path that gets rid of the OpenCL implementation, but that's many months away and will not overrule current results. The CPU path will be split into OpenCL and OpenMP (or Native, don't know yet), so no rebenching necessary. The new path will make use of AVX and whatever comes next to support the hell out of everything that comes in my way. I know the XTU coders and they don't seem to be interested in overclocking, let alone competitive OC. They just do their job, and as far as my experience with the XTU SDK goes, it's not entirely a good one (sry guys). I really try to do things differently with GPUPI. I want it to be on the bleeding edge too, but I wouldn't have introduced the speedup with 3.3 if everybody in this thread had stood against it. As I already said: GPUPI should first and foremost be fun to bench.
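    For readers wondering why this kind of calculation compares so cleanly across CUDA and OpenCL devices: pi digit-extraction benchmarks in this family are typically built on a BBP-type series, where every term is independent, so batches of terms map naturally onto GPU threads. Below is a minimal CPU-side sketch of the classic Bailey–Borwein–Plouffe hex-digit formula — illustrative only, not GPUPI's actual kernel code:

```python
def bbp_pi_hex_digits(n: int, count: int = 8) -> str:
    """Hexadecimal digits of pi starting at fractional position n (0-based),
    via the Bailey-Borwein-Plouffe formula. Every series term is independent,
    which is what makes this style of calculation embarrassingly parallel."""
    def series(j: int) -> float:
        # Left part: sum of 16^(n-k) / (8k + j) mod 1, using three-argument
        # pow() for fast modular exponentiation of the numerator.
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Right part: the rapidly vanishing tail with k > n.
        t, k = 0.0, n + 1
        while True:
            term = 16 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        d = int(x)
        digits += f"{d:X}"
        x -= d
    return digits

print(bbp_pi_hex_digits(0))  # first fractional hex digits of pi: 243F6A88
```

    On a GPU, the `series` loop is what gets chopped up: each thread sums its own slice of k values and a reduction combines the partial sums, which is where settings like Batch Size and Reduction Size come in.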
  7. _mat_

    GPUPI - SuperPI on the GPU

    The last bugfix release is already an hour old, so let's post a new one: GPUPI 3.3.2. Download here: https://www.overclockers.at/news/gpupi-3-is-now-official Changelog:
    • Bugfix: Application crashed while saving a result file
    • Bugfix: Some Intel iGPs could not compile the OpenCL kernels due to an incompatibility
    • Bugfix: Application crashed on certain systems during benchmark run initialization (due to memory detection)
    • The hardware detection of the memory manufacturer is a delicate process and can crash the application, so it is now skipped when HWiNFO is running in "Safe Mode"
    • Improved the error message when an OpenCL reduction kernel can't be initialized due to limited shared memory (only possible on weak iGPs)
    All open bugs should be fixed now! Thanks to everybody who helped to improve GPUPI!
  8. _mat_

    GPUPI - SuperPI on the GPU

    Thank you! I already fixed it and it will be in the next bugfix release.
  9. _mat_

    GPUPI - SuperPI on the GPU

    Thanks to the open Debug Log window I can narrow this down to the memory not being properly detected by HWiNFO. Can you please test GPUPI 3.2 as well? No screenshot needed, just a confirmation that it's not working there either. Thanks!
  10. _mat_

    GPUPI - SuperPI on the GPU

    Can you do me a favor and post a screenshot with the Debug Log open (Menu: Tools => Debug Log)? Please open the log window before you save the result file. Is that possible, or does the application close instantly?
  11. _mat_

    GPUPI - SuperPI on the GPU

    Thanks for the bug reports. It works on Skylake-X and Coffee Lake, my primary test systems. Well, let's break out the old stuff then.
  12. _mat_

    GPUPI - SuperPI on the GPU

    GPUPI 3.3.1 Not a day old and already a bugfix release. Download here: https://www.overclockers.at/news/gpupi-3-is-now-official Changelog:
    • Bugfix for kernel compilation on old AMD graphics cards
    • Bugfix for command line mode when using the "-a" parameter (optional API selection)
    • Improved error message, including a tip, when the calculation fails due to the watchdog timer resetting the graphics driver (only happens on old graphics cards when a kernel takes longer than 5 seconds)
    • Improved error message when an OpenCL device runs out of resources, including a tip on how to fix it (for example on old AMD graphics cards with reduction size 512)
    • Bugfix for Multi-GPU mode: if one of the devices aborts the calculation due to an error, the benchmark run is now aborted
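    On the watchdog timer mentioned in the changelog: the usual workaround is to split the work into multiple smaller kernel launches so that no single launch exceeds the driver's timeout (Windows TDR resets the driver after a few seconds by default) — which is essentially what choosing a smaller Batch Size does. A minimal scheduling sketch, where the helper name and the numbers are illustrative, not GPUPI's actual code:

```python
def split_batches(total_items: int, max_items_per_launch: int):
    """Split total work into launch-sized chunks so no single kernel launch
    runs long enough to trip the driver's watchdog. Returns a list of
    (offset, count) pairs, one per launch."""
    batches = []
    offset = 0
    while offset < total_items:
        count = min(max_items_per_launch, total_items - offset)
        batches.append((offset, count))
        offset += count
    return batches

# e.g. 10M series terms, at most 2M terms per launch -> 5 launches
print(split_batches(10_000_000, 2_000_000))
```

    An old card that needs more than 5 seconds for one launch simply needs smaller chunks; the total work is unchanged, only the launch granularity.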
  13. _mat_

    GPUPI - SuperPI on the GPU

    English news + downloads: https://www.overclockers.at/news/gpupi-3-is-now-official
  14. _mat_

    GPUPI - SuperPI on the GPU

    Then let's push forward! I love it! A few facts:
    • Due to the less complex calculation code, the comparison between different GPU/CPU architectures and GPGPU APIs (CUDA vs OpenCL) is fairer than it ever was. No extra code for compatibility, just what's necessary to make it run.
    • The improved handling of OpenCL makes better use of the devices (= CPUs and AMD GPUs). That results in smaller differences between the OpenCL drivers. Plus, Batch Size and Reduction Size are not as important as they were before. They still matter if you want the golden cup, but beginners won't score as badly if they just click Calculate + OK.
    • AMD OpenCL 1.2 now trumps everything Intel has got, even for 100M on Intel CPUs. The Intel OpenCL drivers suck.
    • AMD OpenCL 2.0 works much better now. It still has problems with bigger reduction sizes like 512 and 1024. 256 seems to be the best currently.
    I have just uploaded a version with some last-minute fixes for the HWBOT submission (better wording, removed the screenshot canceling). Please download GPUPI 3.3 from here: https://www.overclockers.at/news/gpupi-3-ist-final It's now official!
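    For context on the Reduction Size setting discussed above: in OpenCL terms it typically corresponds to the width of a tree reduction performed in a work-group's local memory, which is why very large values (512, 1024) can exhaust shared memory on weaker devices. A toy CPU-side sketch of the pattern — illustrative only, not GPUPI's kernel code:

```python
def tree_reduce(values, width):
    """Pairwise tree reduction, mimicking an OpenCL work-group reduction:
    `width` plays the role of the reduction/work-group size. Each group of
    `width` values is summed in log2(width) halving steps; on a GPU, larger
    widths need proportionally more local (shared) memory per group."""
    groups = [values[i:i + width] for i in range(0, len(values), width)]
    partials = []
    for g in groups:
        g = list(g)
        while len(g) > 1:
            # Fold the upper half onto the lower half, as GPU threads would.
            half = (len(g) + 1) // 2
            g = [g[i] + (g[i + half] if i + half < len(g) else 0)
                 for i in range(half)]
        partials.append(g[0])
    # Partial sums per group are combined in a final pass (a second kernel
    # launch or a host-side sum in a real GPGPU implementation).
    return sum(partials)

print(tree_reduce(list(range(1, 101)), 16))  # → 5050
```

    Picking the width is the usual occupancy trade-off: wider reductions mean fewer partial results but more local memory per group, which is consistent with 256 currently behaving better than 512 or 1024 on the AMD OpenCL 2.0 driver.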
  15. _mat_

    Rule update for GPUPi

    Well, then that's settled. Screenshots it is! Please change the logo though, @richba5tard. Thank you in advance!