Posts posted by havli
-
-
Wow, that is some great news.
I wasn't aware a legacy 3.x version already exists. I'm really looking forward to testing the GTX 200 series and other older hardware too.
I like the idea of CPU computing without an OCL driver; it sounds really interesting. This should make the CPU version more popular and easier to use... while it isn't that hard to install an OCL driver, many systems don't have it by default and some people might have problems with it.
I really appreciate your work and effort to make GPUPI compatible with as wide a hardware base as possible. My favorite kind of benchmark is the one which can run on an old CPU but is also capable of utilizing new instruction sets and scaling with modern architectures. There aren't that many, unfortunately... at least that I know of - only GPUPI, Y-Cruncher and x265.
I understand it takes extra time to support so much different HW, especially when you are developing the whole application. I try to keep x265 oldschool-HW ready as well, and while it is "only" a sophisticated wrapper around the encoder executable, I did spend a ton of time doing pre-release testing on many different platforms.
-
I was trying the new version 3.1 today and unfortunately it is not working for me.
The current PC is an i5 2500K, GTX 465 (378.78 driver), Windows 7 x64 SP1 and the AMD OCL driver (for the CPU).
While the old 2.3.4 version works perfectly, 3.1 just stops after I start the calculation. Same behavior for both OCL @ CPU and OCL @ GPU.
Here is the log file:
LOG START at 2018-02-05 23:31:51 ----------------------
Could not parse version string successfully: OpenCL C 1.1
OPENCL PCI address [0000:01:00.0]
Could not parse version string successfully: OpenCL C 1.2
Invalid topology output type: 0
OPENCL PCI address [undefined]
CUDA driver version is insufficient for CUDA runtime version
Maybe it could be somehow related to the Windows locale settings? I'm using an English Win7 with the regional setting set to Czech. Also, does the last line mean Fermi will no longer be supported (using CUDA at least)? It is sad to see support for older HW slowly disappearing. First GTX 200, then HD 4800, now all 32-bit SSE2-capable CPUs and perhaps even the Fermi series GPUs.
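The "Could not parse version string" lines in the log hint that the parser may expect a plain "OpenCL <major>.<minor>" prefix, while the device reports "OpenCL C 1.1". A minimal sketch of a parser tolerant to both forms (hypothetical code, not GPUPI's actual implementation; the function name is made up):

```python
import re

def parse_opencl_version(version_string):
    """Parse strings like 'OpenCL C 1.1' or 'OpenCL 1.2 CUDA'.

    A strict parser that only accepts 'OpenCL <major>.<minor>' would
    fail on the 'OpenCL C 1.1' form seen in the log above; making the
    'C ' part optional handles both variants.
    """
    m = re.match(r"OpenCL (?:C )?(\d+)\.(\d+)", version_string)
    if m is None:
        raise ValueError("Could not parse version string: " + version_string)
    return int(m.group(1)), int(m.group(2))

print(parse_opencl_version("OpenCL C 1.1"))    # (1, 1)
print(parse_opencl_version("OpenCL 1.2 CUDA"))  # (1, 2)
```

Note the regex uses a literal `.` for the version separator, so a Czech locale's decimal comma would not affect it; locale trouble would only appear if the parser went through locale-aware float conversion instead.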
-
I don't know - just a wild guess... could this be caused by the removed benchmarks? Like UCBench, Processing Power, etc. I had quite a lot of scores in these, so perhaps the cups gained there still count towards the total (and are well preserved... no one can beat me and take them ).
-
Ran into these two:
Both points and positions are wrong it seems.
Perhaps there are more than just two, I'll post more links if I can find something else.
-
That is like deleting half of the scores... not going to happen. Tweaking the OS and benchmarks is also part of the game and requires some skill, even if overclocking is not possible.
-
It seems CPU-Z detects this CPU as a Xeon E5-2637, while in reality it is an E5-1607. Unfortunately there is not much I can do here, as x265 relies on CPU-Z detection... so if CPU-Z fails to properly detect the CPU used, the same mistake appears in x265 as well.
This has sometimes happened to me too, but it should be possible to edit the submission manually and select the correct CPU once it is uploaded.
-
IIRC the HD 4800 stopped working with GPUPI a long time ago. I think when version 2.3.x was released.
@_mat_
could you please also make a 32-bit version of GPUPI 3.1 if it becomes mandatory? There are quite a lot of 32-bit-only NetBurst CPUs otherwise capable of running GPUPI. And it is always nice to get free points and gold cups
-
Hi,
that is a very nice system you have. Unfortunately, to get a somewhat good score in x265 you need at least an SSE4.1-compatible CPU. K10 doesn't even support SSSE3 and because of that the fps is very low. Using overkill will help to some extent... but not much. The 15h Opterons are much faster on the G34 platform.
I'm aware of the wrong core count in the GUI on this kind of system... but I'm not sure why it doesn't work properly. Anyway, it is for display only and has no effect on the score or anything else.
-
The core count is shown in the GUI but the data file doesn't contain this information.
Only the CPU name and socket count are stored there. The core count is determined by HWBOT based on the CPU model (either by autodetection or by manually typing in what CPU you have).
The truth is, editing an existing submission to fill in the correct CPU parameters is kinda broken.
For example, I've just submitted this as a test: http://hwbot.org/submission/3713436_ (I'll delete it tomorrow). The CPU model wasn't detected, so at first I selected an X4 960T unlocked to 5 cores... that worked. Then I edited it to unlocked 6 cores, which also worked. Then I switched to an X2 550 unlocked to 4 cores... also worked. But now when I want to edit it to 3 active cores, the edit fails with a white screen.
I'm not sure how the HWBOT API detection works on real unlocked parts - they are usually detected under a different name than the original. If the CPU is not detected properly and you have to type the name yourself, then it should also be possible to select the actual unlocked status. If it is detected correctly (like the X4 960T) and editing after the submission has been uploaded doesn't work, there is one workaround. Instead of saving the data file and manually uploading it later, you can upload directly from x265, where there is an option to disable CPU detection. In that case it should be possible to choose the correct CPU and core count.
I guess it would also be handy to have this option with data-file saving. I can add it in a future version; it shouldn't be a problem. Or a working edit function on HWBOT could solve this.
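The split described above (CPU name and socket count in the data file, core count derived later by HWBOT) can be sketched as a tiny serializer. This is a hypothetical illustration of the structure, not the real x265 file format; field names and the JSON encoding are my own assumptions:

```python
import json

def build_datafile(cpu_name, socket_count, score_fps, preset):
    """Hypothetical sketch of what the x265 data file carries.

    The core count is deliberately absent -- per the description above,
    HWBOT derives it from the CPU model (autodetected or typed in),
    not from the uploaded file.
    """
    return json.dumps({
        "cpu_name": cpu_name,       # e.g. from CPU-Z detection
        "socket_count": socket_count,
        "preset": preset,           # e.g. "1080p" or "4k"
        "score_fps": score_fps,
    })

record = json.loads(build_datafile("Intel Xeon E5620", 2, 3.83, "4k"))
assert "core_count" not in record   # HWBOT fills this in server-side
```

This also shows why editing the core count after upload only touches the HWBOT side: nothing in the file itself needs to change.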
-
The package is here http://hw-museum.cz/data/hwbot/HWBOT_X265_2.1_cpu-z_1.81.zip
The benchmark executable remains at version 2.1.0 - there is no reason to change it when only the external component (CPU-Z) was updated. And on top of that, I really prefer not to compile it again from source to prevent possible bugs.
-
There is a workaround - in the HW ranking of the specific benchmark, you can select the # of cores: Cinebench - R11.5 overclocking records @ HWBOT
-
It is possible to disable CPU detection in x265 when using direct upload from the benchmark. I'm not sure if it works, however; I guess very few people are using this function.
-
It seems this was a random bug either in the OS / Java or the benchmark itself. I wasn't able to replicate the issue and neither was unityofsaints. I guess it is similar to other benchmarks, which can also sometimes produce bugged scores for no apparent reason.
-
Yeah, same here - 100% CPU usage (Firefox). Only on forums though - the main hwbot.org page seems to be clean.
-
Thank you for the tips.
Now I'm testing an E5502 and it seems all voltages and also Slow Mode are once more doing nothing. So far it seems the 32nm CPUs respond to Slow Mode in a positive way... while the 45nm ones don't.
-
Thank you. I tried that before with a W3520 and it didn't help at all.
But now I gave it another shot with an E5620 and it seems Slow Mode really works: Intel Xeon E5620 @ 4503.71 MHz - CPU-Z VALIDATOR. It seems the board is quite sensitive to memory timings at higher BCLK and refuses to POST with wrong settings (even with Slow Mode). I worked that out using a combination of BIOS settings and SetFSB. Now I think the actual CPU cores are the limit above 4.4 GHz.
Now I'll return to the W3520 to see what it can do.
-
It should be easy enough, yes. Some changes were required when going 1.77 -> 1.80, but it should be easy to upgrade from 1.80 to 1.81. I'll take a look at it and do some testing.
-
I think Dead Things meant it like this:
S775/S771 (Xeon-class processor required)
- Stage 1 = 2 cores on 1 socket .......... -> 1x Dual-core (like 1x Xeon 5130)
- Stage 2 = 4 cores on 1 socket .......... -> 1x Quad-core (like 1x Xeon X5450)
- Stage 3 = 4 cores on 2 sockets .......... -> 2x Dual-core (like 2x Xeon 5130)
- Stage 4 = 8 cores on 2 sockets .......... -> 2x Quad-core (like 2x Xeon X5450)
S939/S940 (Opteron-class processor required)
- Stage 1 = 1 core on 1 socket .......... -> 1x Single-core (like 1x Opteron 144)
- Stage 2 = 2 cores on 1 socket .......... -> 1x Dual-core (like 1x Opteron 175)
- Stage 3 = 2 cores on 2 sockets .......... -> 2x Single-core (like 2x Opteron 248)
- Stage 4 = 4 cores on 2 sockets .......... -> 2x Dual-core (like 2x Opteron 280)
-
Just tried a Xeon E5620 and this time I can do 217 MHz rock stable... but at 218 Windows won't boot.
It seems the board is not the problem but the CPU is... just like the FSB wall with Core 2.
havli`s HWBOT x265 Benchmark - 4k score: 3.83 fps with a Xeon E5620
-
This looks great!
I like the idea of allowing 1P/2P configurations also in the 4P stages, as it greatly increases the number of people who can compete there. 4P boards are not that common, so it is nice to have the possibility of using, for example, a higher-clocked 2P config which is not as fast but can do at least something.
Are ES allowed or not? For example, G34 Opteron ES chips have an unlocked multiplier which makes them very OC friendly.
Btw - I think you meant Socket 1207 (not 1277) and Opteron Interlagos (BD gen1) instead of Istanbul for the G34 stage.
-
Hey any way to make the program not create a massive 100% load for 3-4 seconds during startup would be nice
It is possible to change this behavior, of course. But this feature was implemented to (somewhat) properly detect the multi-core turbo frequency when the benchmark is starting. I'm not sure if there is another way to do this.
If the benchmark crashes your PC at the launch screen due to high CPU load... then I'm pretty sure it wouldn't survive the 1080p preset anyway, not to mention 4K.
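The start-up burn described above can be sketched roughly like this: briefly load every logical core so the CPU settles at its all-core turbo clock before the frequency is sampled. This is a hypothetical sketch of the idea, not x265's actual code; processes are used instead of threads because CPython threads cannot saturate multiple cores, and the frequency sampling itself is omitted:

```python
import multiprocessing
import time

def _burn(deadline):
    # Busy-loop until the deadline so one core stays at full load.
    while time.time() < deadline:
        pass

def load_all_cores(seconds=0.2):
    """Load every logical core for a short interval and return how
    many workers were spawned. In the real benchmark, the multi-core
    turbo frequency would be read while this load is running."""
    deadline = time.time() + seconds
    workers = [multiprocessing.Process(target=_burn, args=(deadline,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return len(workers)

if __name__ == "__main__":
    print(load_all_cores(0.2))
```

This also makes the trade-off visible: the burst has to last a few seconds on real hardware, because turbo management needs time to react before the sampled clock is representative.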
-
Sent PM to unityofsaints.
-
Maybe limit the 754 stage to the 130nm ClawHammer and Newcastle cores? That feels more old school to me. In that case R15 must be replaced with something else, as the 130nm A64s lack SSE3 and because of that can't run R15.
-
As long as one-man-team have a chance to score well, it should be good
GPUPI - SuperPI on the GPU
in Benchmark software
Posted
Great work!
I can confirm both GPUPI v3.1.1 and 3.2 Legacy are working on a 32-bit Pentium 4 Windows XP system. I'll test some of the older GPUs soon; curious how they will work.
Btw - since you have nice HW detection implemented, would it be too much to ask for automatic data file name suggestion, like x265 does, for example?
It would make benchmarking for HW masters a little easier. I'm using the following pattern:
amount_of_HW_x_HW_name_benchmark_type_score
2x_Opteron_2216_HWBOT_x265-1080p_2.18 fps.hwbot
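The naming pattern above is simple enough to generate automatically from detected hardware. A minimal sketch (the function name and parameters are my own; this is not code from either benchmark):

```python
def suggest_datafile_name(hw_count, hw_name, benchmark, preset, score_fps):
    """Build a file name following the quoted pattern:
    amount_of_HW x _ HW_name _ benchmark-preset _ score.

    Spaces in the hardware name are replaced with underscores,
    and the score is formatted to two decimal places.
    """
    return "{}x_{}_{}-{}_{:.2f} fps.hwbot".format(
        hw_count, hw_name.replace(" ", "_"), benchmark, preset, score_fps)

print(suggest_datafile_name(2, "Opteron 2216", "HWBOT_x265", "1080p", 2.18))
# -> 2x_Opteron_2216_HWBOT_x265-1080p_2.18 fps.hwbot
```

The benchmark already knows the CPU name, socket count and score at save time, so filling this template in the save dialog would cost almost nothing.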