HWBOT Community Forums

havli

Members
  • Posts: 413
  • Days Won: 3

Everything posted by havli

  1. Wow, that is some great news. I wasn't aware a legacy 3.x version already exists. I'm really looking forward to testing the GTX 200 series and other older hardware too. I like the idea of CPU computing without an OCL driver, it sounds really interesting. This should make the CPU version more popular and easier to use... while it isn't that hard to install an OCL driver, many systems don't have one by default and some people might have problems with it. I really appreciate your work and effort to make GPUPI compatible with as wide a hardware base as possible. My favorite kind of benchmark is one which can run on an old CPU but is also capable of utilizing new instruction sets and scaling with modern architectures. There aren't that many, unfortunately... at least that I know of - only GPUPI, Y-Cruncher and x265. I understand it takes extra time to support so much different HW, especially when you are developing the whole application. I try to keep x265 oldschool-HW ready as well, and while it is "only" a sophisticated wrapper around the encoder executable, I did spend a ton of time doing pre-release testing on many different platforms.
  2. I was trying the new version 3.1 today and unfortunately it is not working for me. The current PC is an i5 2500K, GTX 465 (378.78 drv), Windows 7 x64 SP1 and the AMD OCL driver (for CPU). While the old 2.3.4 version works perfectly, 3.1 just stops after I start the calculation. Same behavior for both OCL @ CPU and OCL @ GPU. Here is the log file:
     LOG START at 2018-02-05 23:31:51 ----------------------
     Could not parse version string successfully: OpenCL C 1.1
     OPENCL PCI address [0000:01:00.0]
     Could not parse version string successfully: OpenCL C 1.2
     Invalid topology output type: 0
     OPENCL PCI address [undefined]
     CUDA driver version is insufficient for CUDA runtime version
     Maybe it could be somehow related to Windows locale settings? I'm using English Win7 with the regional setting set to Czech. Also, does the last line mean Fermi will no longer be supported (using CUDA at least)? It is sad to see support for older HW slowly disappearing. First GTX 200, then HD 4800, now all 32-bit SSE2-capable CPUs and perhaps even the Fermi series GPUs.
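     For anyone hitting the same "Could not parse version string" messages, a short diagnostic sketch can show exactly which version strings each OpenCL driver reports. This is only an illustration and assumes Python with the third-party pyopencl package; GPUPI itself does not use pyopencl:

        # Diagnostic sketch, not part of GPUPI - requires the pyopencl package
        # on top of whatever OpenCL drivers are already installed.
        import pyopencl as cl

        for platform in cl.get_platforms():
            print("Platform:", platform.name, "|", platform.version)
            for dev in platform.get_devices():
                # These are the version strings the GPUPI log complains about
                # parsing, e.g. "OpenCL C 1.1" on older drivers.
                print("  Device:        ", dev.name)
                print("  Device version:", dev.version)
                print("  OpenCL C:      ", dev.opencl_c_version)
                print("  Driver:        ", dev.driver_version)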
  3. I don't know - just a wild guess... could this be caused by the removed benchmarks? Like UCBench, Processing Power, etc. I had quite a lot of scores in those, so perhaps the cups gained there still count towards the total (and are well preserved... no one can beat me and take them ).
  4. Ran into these two: http://hwbot.org/benchmark/cinebench_-_2003/rankings?start=0&hardwareTypeId=processor_2289&cores=4#start=0#interval=20 http://hwbot.org/benchmark/cinebench_-_r15/rankings?hardwareTypeId=processor_5521&cores=6#start=0#interval=20 Both the points and positions seem to be wrong. Perhaps there are more than just these two; I'll post more links if I find something else.
  5. That is like deleting half of the scores... not going to happen. Tweaking the OS and benchmarks is also part of the game and requires some skill, even if overclocking is not possible.
  6. It seems CPU-Z detects this CPU as a Xeon E5-2637, while in reality it is an E5-1607. Unfortunately there is not much I can do here, as x265 relies on CPU-Z detection... so if CPU-Z fails to detect the CPU properly, the same mistake appears in x265 as well. This has happened to me too sometimes, but it should be possible to edit the submission manually and select the correct CPU once it is uploaded.
  7. IIRC the HD 4800 stopped working with GPUPI a long time ago. I think when version 2.3.x was released. @_mat_ could you please also make a 32-bit version of GPUPI 3.1 if it becomes mandatory? There are quite a lot of 32-bit-only NetBurst CPUs otherwise capable of running GPUPI. And it is always nice to get free points and gold cups
  8. Hi, that is a very nice system you have. Unfortunately, to get a somewhat good score in x265 you need at least an SSE4.1-compatible CPU. K10 doesn't even support SSSE3, and because of that the fps is very low. Using Overkill will help to some extent... but not much. The 15h Opterons are much faster on the G34 platform. I'm aware of the wrong core count in the GUI on this kind of system... but why it doesn't work properly I'm not sure. Anyway, it is for display only and has no effect on the score or anything else.
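     A quick way to check whether a given CPU actually exposes the instruction sets x265 cares about is to look at its feature flags. The following is only a sketch of that idea, assuming Python with the third-party py-cpuinfo package (flag names can differ slightly between platforms):

        # Rough sketch using the py-cpuinfo package (pip install py-cpuinfo).
        from cpuinfo import get_cpu_info

        flags = set(get_cpu_info().get("flags", []))
        for feature in ("ssse3", "sse4_1"):
            print(feature, "->", "present" if feature in flags else "MISSING")
        # A K10-era CPU is expected to report neither SSSE3 nor SSE4.1,
        # which is why the x265 fps is so low on that platform.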
  9. The core count is shown in the GUI, but the data file doesn't contain this information. Only the CPU name and socket count are stored there. The core count is determined by HWBOT based on the CPU model (either by autodetection or by manually typing in what CPU you have). The truth is, editing an existing submission to fill in the correct CPU parameters is kinda broken. For example, I've just submitted this as a test: http://hwbot.org/submission/3713436_ (I'll delete it tomorrow). The CPU model wasn't detected, so at first I selected an X4 960T unlocked to 5 cores... that worked. Then I edited it to unlocked 6 cores, which also worked. Then I switched to an X2 550 unlocked to 4 cores... also worked. But now when I want to edit it to 3 active cores, the edit fails with a white screen. I'm not sure how the HWBOT API detection works on real unlocked parts - they are usually detected under a different name than the original. If the CPU is not detected properly and you have to type the name yourself, then it should also be possible to select the actual unlocked status. If it is detected correctly (like the X4 960T) and editing after the submission has been uploaded doesn't work, there is one workaround. Instead of saving the data file and uploading manually later, you can upload directly from x265, where there is an option to disable CPU detection. In that case it should be possible to choose the correct CPU and core count. I guess it would also be handy to have this option with data-file saving. I can add it in a future version, shouldn't be a problem. Or a working edit function on HWBOT could solve this.
  10. The package is here: http://hw-museum.cz/data/hwbot/HWBOT_X265_2.1_cpu-z_1.81.zip The benchmark executable remains at version 2.1.0 - there is no reason to change it when only the external component (CPU-Z) was updated. And on top of that, I really prefer not to compile it again from source to avoid introducing possible bugs.
  11. There is a workaround - in the HW ranking of the specific benchmark, you can select the # of cores: Cinebench - R11.5 overclocking records @ HWBOT
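     The cores filter is just a query parameter in the ranking URL, as can be seen in the links in post 4 above. A small sketch of that URL pattern, assuming Python; the parameter names are taken from those links and are not guaranteed to stay stable:

        # Sketch of the ranking-URL pattern seen in the links in post 4
        # (hardwareTypeId and cores passed as query parameters).
        from urllib.parse import urlencode

        def ranking_url(benchmark, hardware_type_id, cores):
            params = urlencode({"hardwareTypeId": hardware_type_id, "cores": cores})
            return f"http://hwbot.org/benchmark/{benchmark}/rankings?{params}"

        # Example: the quad-core Cinebench 2003 ranking from post 4.
        print(ranking_url("cinebench_-_2003", "processor_2289", 4))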
  12. It is possible to disable CPU detection in x265 when using direct upload from the benchmark. I'm not sure if it works, however - I guess very few people are using this function.
  13. It seems this was a random bug either in the OS / Java or the benchmark itself. I wasn't able to replicate the issue and neither was unityofsaints. I guess it is similar to other benchmarks, which can also sometimes produce bugged scores for no apparent reason.
  14. Yeah, same here - 100% CPU usage (Firefox). Only on forums though - the main hwbot.org page seems to be clean.
  15. Thank you for the tips. Now I'm testing an E5502 and it seems all voltages and also Slow Mode are once more doing nothing. So far it seems the 32nm CPUs respond to Slow Mode in a positive way... while the 45nm ones don't.
  16. Thank you. I have tried that before with the W3520 and it didn't help at all. But now I gave it another shot with an E5620 and it seems Slow Mode really works: Intel Xeon E5620 @ 4503.71 MHz - CPU-Z VALIDATOR It seems the board is quite sensitive to memory timings at higher BCLK and refuses to POST with the wrong settings (even with Slow Mode). I worked that out using a combination of BIOS settings and SetFSB. Now I think the actual CPU cores are the limit at >4.4 GHz. Next I'll return to the W3520 to see what it can do.
  17. It should be easy enough, yes. There were some changes required when going from 1.77 to 1.80, but it should be easy to upgrade from 1.80 to 1.81. I'll take a look at it and do some testing.
  18. I think Dead Things meant it like this:
      S775/S771 (Xeon-class processor required)
      - Stage 1 = 2 cores on 1 socket -> 1x dual-core (like 1x Xeon 5130)
      - Stage 2 = 4 cores on 1 socket -> 1x quad-core (like 1x Xeon X5450)
      - Stage 3 = 4 cores on 2 sockets -> 2x dual-core (like 2x Xeon 5130)
      - Stage 4 = 8 cores on 2 sockets -> 2x quad-core (like 2x Xeon X5450)
      S939/S940 (Opteron-class processor required)
      - Stage 1 = 1 core on 1 socket -> 1x single-core (like 1x Opteron 144)
      - Stage 2 = 2 cores on 1 socket -> 1x dual-core (like 1x Opteron 175)
      - Stage 3 = 2 cores on 2 sockets -> 2x single-core (like 2x Opteron 248)
      - Stage 4 = 4 cores on 2 sockets -> 2x dual-core (like 2x Opteron 280)
  19. Just tried a Xeon E5620 and this time I can do 217 MHz rock stable... but at 218 Windows won't boot. It seems the board is not the problem but the CPU is... just like the FSB wall with Core 2. havli`s HWBOT x265 Benchmark - 4k score: 3.83 fps with a Xeon E5620
  20. This looks great! I like the idea of allowing 1P/2P configurations also in the 4P stages, as it greatly increases the number of people who can compete there. 4P boards are not that common, so it is nice to have the possibility of using, for example, a higher-clocked 2P config which is not as fast but can at least do something. ES allowed or not? For example, G34 Opteron ES have an unlocked multiplier, which makes them very OC friendly. Btw - I think you meant socket 1207 (not 1277) and Opteron Interlagos (BD gen1) instead of Istanbul for the G34 stage.
  21. It is possible to change this behavior of course. But this feature was implemented to (somewhat) properly detect the multi-core turbo frequency when the benchmark starts. I'm not sure if there is another way to do this. If the benchmark crashes your PC at the launch screen due to high CPU load... then I'm pretty sure it wouldn't survive the 1080p preset anyway, not to mention 4k.
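     The basic idea behind that launch-screen load is simple: keep every core busy and then read the frequency the OS reports. The following is only a rough sketch of the concept, assuming Python with the third-party psutil package - it is not the actual wrapper code:

        # Rough illustration of multi-core turbo detection, not the wrapper's code.
        # Load every core with a worker process, then sample the reported frequency.
        import multiprocessing as mp
        import os
        import time

        import psutil

        def burn(seconds):
            # Busy-wait to keep one core fully loaded for the given duration.
            end = time.time() + seconds
            while time.time() < end:
                pass

        if __name__ == "__main__":
            workers = [mp.Process(target=burn, args=(2.0,))
                       for _ in range(os.cpu_count())]
            for w in workers:
                w.start()
            time.sleep(1.0)            # let the CPU settle into its all-core turbo
            freq = psutil.cpu_freq()   # reported frequency in MHz (if supported)
            print(f"All-core frequency under load: {freq.current:.0f} MHz")
            for w in workers:
                w.join()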
  22. Maybe limit the 754 stage to the 130nm ClawHammer and Newcastle cores? That feels more old school to me. In that case R15 must be replaced with something else, as the 130nm A64s lack SSE3 and because of that can't run R15.
  23. As long as a one-man team has a chance to score well, it should be good
  24. Hi, I'm quite new to X58... but I have some older scores with a GA-EX58-UD3R; that board did around 215 MHz BCLK with more or less default settings, which I considered acceptable for a "lowend" X58. Yesterday I bought an R3E and expected something around 230 to be very easy, and with the right settings even more: Reference Frequency overclocking records @ HWBOT Turns out I was wrong. I've spent two evenings on this already, studied all kinds of X58 OC guides and still can't get a decent BCLK. The system is:
      - Xeon W3520, water cooled
      - R3E (BIOS 1502 - latest)
      - 3x4 GB DDR3 dual-rank or 1x2 GB single-rank, 9-9-9-24
      - Radeon HD 5500
      - single SATA HDD running in IDE mode
      - Enermax 630W
      - Win7 64
      It seems this W3520 is not as bad as most of my CPUs and it could actually run faster than 4.4 GHz... if the MB would cooperate. I can get 4378 MHz R15 stable (multiple times) running 21x208: havli`s Cinebench - R15 score: 702 cb with a Xeon W3520 After that I got freezing during R15 runs, so I lowered the CPU and NB multipliers by one notch to make sure the CPU is not the limit. Still, no matter what I do, >210 is not stable. I can boot Windows, wprime 32M finishes on 4 threads, crashes on 8 threads, and of course R15 crashes within a second after rendering starts. Tried raising PCI-E up to 120 MHz, changed the GTX 465 to the HD 5500 and also tried a PCI Voodoo3, still the same. Tried the HD 5500 in the first slot and the second slot -> same. Tried switching RAM to a single 2GB module... nothing. QPI slow mode also does nothing. This CPU can do more, I'm sure, and I also have other 1366 Xeons with lower multipliers that really need as high a BCLK as possible. I'm kinda running out of ideas. Most likely it is some obvious mistake... which I don't see. If you have some tips on how to get this thing going strong, I would really appreciate it.