HWBOT Community Forums

GPUPI - SuperPI on the GPU



It was my intention to build the benchmark as independent from the CPU as possible, so the overclocker can focus on the GPU only. :)


But I thought about it and I may have a nice idea to be able to include both the CPU and the GPU in the calculation. It would be possible to let the user decide to offload a few batches to the CPU to further speed up the benchmark. It won't be a big improvement to the final time, just a few seconds, but it would measure the performance of the whole system.
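The idea of offloading a few batches to the CPU while the GPU crunches the rest can be sketched roughly like this. This is a minimal illustration, not GPUPI's actual code; `run_on_gpu`, `run_on_cpu` and the 10% split are hypothetical stand-ins:

```python
# Hypothetical sketch: hand a small share of the batches to the CPU while
# the GPU processes the rest in parallel. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def run_on_gpu(batch):          # stand-in for a GPU kernel launch
    return sum(batch)

def run_on_cpu(batch):          # stand-in for a CPU fallback path
    return sum(batch)

def benchmark(batches, cpu_share=0.1):
    """Send the last ~10% of batches to the CPU, the rest to the GPU."""
    split = int(len(batches) * (1 - cpu_share))
    gpu_batches, cpu_batches = batches[:split], batches[split:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        gpu_future = pool.submit(lambda: [run_on_gpu(b) for b in gpu_batches])
        cpu_future = pool.submit(lambda: [run_on_cpu(b) for b in cpu_batches])
        return gpu_future.result() + cpu_future.result()

results = benchmark([[i, i + 1] for i in range(100)])
```

Since both paths compute independent batches, the CPU share only helps as long as it finishes before the GPU does, which is why it would only shave off a few seconds.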


Currently I am using my spare time to add support for multiple GPUs to the benchmark. Let's see what happens after that. :)


  • 3 weeks later...

Turrican's sister just posted a score with Karl's old daily system: a GTX 295 with an old E8700 dual core! What else? :)

Best part is that the first few runs didn't work on this hardware because of some kind of memory error. Only by reducing the batch size was she able to run it. Seems like Karl has benched the card pretty thoroughly! :cool:




That's weird, GPUPI doesn't load my GPU as much as Fire Strike does, for example, but I understand that. What I don't understand is how another competitor with an R9 290 @ 1200/1500 gets better results than I do with a 780 Ti Kingpin @ 1410/1925. Any explanations please? Thanks


It's actually not that weird. GPUPI uses two types of precision for the calculation of pi. The main part is done with 64 bit integers, followed by a smaller part using double precision floating point. AMD's Hawaii especially is very strong at integer math, which is why those cards are also very good for Bitcoin mining etc. NVIDIA's Fermi, Kepler and Maxwell are very good at floating point math, but clearly fall behind on integers.
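To see why the integer part dominates: a BBP-type series for pi spends most of its time on modular exponentiation, which is pure integer math. A small Python sketch of the idea (this is not GPUPI's actual kernel code, just the same class of algorithm) that extracts hex digits of pi:

```python
# Sketch of a BBP-type digit extraction for pi. The dominant cost is the
# integer modular exponentiation pow(16, e, m) - the kind of 64 bit integer
# work the GPU kernels spend most of their time on.
def series(j, n):
    # left sum: terms k = 0..n via modular exponentiation (integer math)
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # right tail: a few quickly vanishing floating point terms (k > n)
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    # hex digit of pi at position n after the point (n = 0 gives '2',
    # since pi = 3.243F6A88... in hexadecimal)
    x = (4 * series(1, n) - 2 * series(4, n)
         - series(5, n) - series(6, n)) % 1.0
    return "0123456789ABCDEF"[int(x * 16)]
```

A card that is fast at 64 bit integer multiply-and-modulo will race through the left sum, which is exactly where Hawaii shines.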


  • 3 weeks later...

First experiments with GPUPI 2.0, which is currently in beta. 3x GTX 980 and 1x GTX 780 Ti - yes, you can plug in whatever you have, although I guess NVIDIA and AMD can't be combined because of the driver implementations.






Coming very soon!


Btw, will this break HWBOT? Multiple different graphics cards can't be submitted right now, afaik! :eek:

Edited by _mat_



Can't wait till HWBOT enables points on this bench, so people have more to do with their AMD VGAs besides Heaven. Even in Heaven the 980s have surpassed AMD, but hopefully that will change with the next gen.


As for the different mix & match of cards, I am pretty sure you will break HWBOT.

So they'll have to find a way to fix that or maybe you'll have to find a way to disable it.


Yes, it will break HWBOT.


We looked at this issue in the past (remember Lucid Virtu) and it's technically very difficult to add multi-GPU options. This is mainly because of how the database is structured and linked to features like the points. The system above would be categorized under 4x GTX 980.


As for points - it seems there aren't that many issues with the benchmark, so I don't see why we can't enable points for it. Maybe we should work on a seamless integration between the benchmark and HWBOT so submission works as smoothly as it does for XTU.


I think the concept of mixing different graphics cards in one bench is intriguing and sounds fun. But I guess it won't change too much about the top scores in the end, so categorizing such a system under 4x GTX 980, while it is slower than or equal to a real 4x GTX 980 setup (like my example score), would suffice for the listing.


We could also ban scores with multiple different cards, but in my opinion there is not much advantage in doing that. Allowing it will encourage people to submit multi-GPU results and earn their points with what they have and can afford. The only thing is that such a score should always be submitted in the category of the strongest card, the one that crunched the most numbers. That can easily be checked with the new statistics in 2.0. But I will leave the decision to the HWBOT team of course, you know best.




Btw, the screenshot also shows another great possibility of the new version: Each graphics card runs at its own frequency, and a faster card will therefore crunch more packets to improve the overall time. I am pretty amazed how accurate and fun this is!
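The "faster card crunches more packets" behaviour is what you get from a shared work queue: each device pulls the next packet as soon as it finishes its current one. A minimal sketch of that pattern, with made-up device names and per-packet costs (not GPUPI's actual scheduler):

```python
# Sketch of per-device work distribution: all devices pull packets from one
# shared queue, so a faster device (lower cost per packet) naturally ends
# up crunching more of them. Devices and costs are illustrative stand-ins.
import queue
import threading

def worker(name, cost, packets, results):
    while True:
        try:
            p = packets.get_nowait()
        except queue.Empty:
            return                    # queue drained, device is done
        for _ in range(cost):         # stand-in for the kernel runtime
            pass
        results.append((name, p))     # list.append is thread-safe in CPython

packets = queue.Queue()
for i in range(64):
    packets.put(i)

results = []
devices = [("GTX 980 #1", 1000), ("GTX 980 #2", 1000), ("GTX 780 Ti", 3000)]
threads = [threading.Thread(target=worker, args=(n, c, packets, results))
           for n, c in devices]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No packet is assigned up front, so no card ever sits idle waiting for a slower sibling, which is why mixed-clock and mixed-model setups still sum up cleanly to one overall time.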


Regarding points and seamless integration, I am happy to implement the bot API with the next version. I was still waiting for GENIEBEN to finish his improvements to the C++ API. What's the status on this?

One more thing about the official release of GPUPI 2.0: I think I will wait for the Turrican Memorial Competition to finish to avoid any confusion, especially in the 2nd stage!


  • 1 month later...

GPUPI 2.0 is official! The benchmark now supports multiple graphics cards and CPUs and integrates automatic HWBOT submission, including hardware detection and screenshot upload. HWBOT just announced it as their first GPGPU benchmark, awarding world record points, globals and hardware points for GPUPI 1B and GPUPI for CPUs - 1B.


To see what's possible with version 2.0 have a look at this result with 3x GTX 980 and 1x GTX 780 Ti:




If you have any feedback or questions, just let me know. :)


Full changelog:


  • Multi-GPU and Multi-CPU support
  • New platform and device selection with tree view
  • HWBOT integration with automatic result submission including hardware detection (Graphics Card, CPU, OS) and screenshot upload
  • Improved detection of AMD graphics cards (HD 7xxx and Rx xxx only)
  • Encrypted kernel code using AES/CBC
  • Support for CUDA 7.0
  • Improvements for error logging (GPUPI.log in the working directory)
  • Compiled with Visual Studio 2013 Community (Update 4) and AMD OpenCL 2.0 headers
  • New about dialog including benchmark and support information


Links: Download it here | Details on hwbot.org


Yeah.. Points.


Although shouldn't the points have been enabled on the 32B version of the bench, since with the 1B version we will be under 1 s with the next gen of GPUs, or maybe the gen after? Wouldn't that make this bench short-lived, ~2 years or so?


Just a suggestion.


We have discussed this recently and came to the conclusion that 32B makes no sense today. It's just not fun to run this benchmark on single cards for more than 15 minutes to get a result. Yes, multiple GPUs have a lot of performance headroom and will push the times towards zero faster than what's possible with other benchmarks. If that happens, 32B might become more popular for multiple graphics cards and we will react accordingly, maybe add or even shift points to 32B. But that will take a good while in computer years. :)
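As a back-of-the-envelope check on why 32B takes so much longer than 1B: assuming the cost of a BBP-type digit extraction grows roughly like n log n (an assumed cost model, not an official GPUPI figure), 32x more digits costs quite a bit more than 32x the time:

```python
# Assumed cost model: digit extraction at position n costs ~ n * log(n).
# Under that assumption, 32B is roughly 37x the work of 1B, so a card
# finishing 1B in ~25 s would need ~15 min for 32B.
import math

def relative_cost(digits):
    return digits * math.log(digits)

ratio = relative_cost(32e9) / relative_cost(1e9)
```

That matches the "more than 15 minutes on single cards" experience, and it also means single-card 1B times shrinking below 1 s would still leave 32B at a comfortable half minute or so.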


It's nice to see that the multi-GPU support works this well.

Time will tell when the first multi dual-GPU runs are online :)

If I had the cash, I'd try the maximum possible configuration with a Titan Z, a Xeon Phi (if it's supported), a Tesla and others.


PS: If a GPU supports OpenCL or CUDA but the benchmark doesn't like the GPU, how can this be fixed?

Edited by skulstation
