HWBOT Community Forums

havli

Everything posted by havli

  1. Well, in that case how do you explain the huge performance advantage of 45nm Core2 (SSE 4.1) over 65nm Core2 (SSSE3)? For example http://hwbot.org/submission/3678638_havli_gpupi_for_cpu___100m_core_2_duo_e8300_1min_53sec_951ms and http://hwbot.org/submission/3408835_kintaro_gpupi_for_cpu___100m_core_2_duo_e6750_2min_16sec_234ms/ Or why AMD 15h / 16h (AVX, SSE 4.2) is much faster than K10 (SSE3)? In GPUPI, K10 is a lot slower, while in older benchmarks - Cinebench, for instance - it is the other way around. http://hwbot.org/submission/3691480_havli_gpupi_for_cpu___100m_a10_7800_1min_8sec_764ms and http://hwbot.org/submission/3554886_noms_gpupi_for_cpu___100m_phenom_ii_x4_965_be_1min_28sec_609ms
  2. All available evidence speaks against some magic OCL performance boost with this specific AGESA. All other K10 CPUs perform more or less the same. Llano on all boards / BIOS versions except this one specific combination performs the same. If there were such a performance gain on Llano (even at the price of instability), it would be publicly known. There is no change in the Llano architecture that would allow such a performance boost compared to all other K10. From the other GPUPI results it seems the AMD OCL driver benefits greatly from SSE 4.1... and to some extent even from SSSE3. This is the reason K10 is slow in this benchmark compared to Core2, Nehalem or even 15h-based processors - K10 lacks these instructions. I'm sorry, but this really sounds like a bug - either in the OCL driver, or the benchmark itself, or maybe something else entirely. It is not a random thing, as it can be reproduced... after all, not so long ago there was a similar problem with GPUPI on dual-socket 1366 machines, which also seemed to be much faster than common sense would suggest... and as it turned out, it was a bug.
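The argument above comes down to which SIMD instruction sets a CPU advertises. A minimal sketch (not GPUPI's or the OCL driver's actual code) of how a runtime might pick a code path from a Linux /proc/cpuinfo-style "flags" line; the flag names ("sse4_1", "ssse3") follow Linux conventions:

```python
def best_simd_path(flags_line: str) -> str:
    """Return the most capable SIMD level found in a CPU flags string."""
    flags = set(flags_line.split())
    if "sse4_1" in flags:
        return "SSE4.1"   # 45nm Core2 (Penryn) and newer
    if "ssse3" in flags:
        return "SSSE3"    # 65nm Core2 (Conroe)
    if "sse3" in flags:
        return "SSE3"     # K10 tops out here
    return "scalar"

# K10-style flags lack ssse3/sse4_1, so the runtime falls back to SSE3:
print(best_simd_path("fpu sse sse2 sse3 abm"))           # SSE3
print(best_simd_path("fpu sse sse2 sse3 ssse3 sse4_1"))  # SSE4.1
```

If the driver's fastest kernels require SSE 4.1, every CPU without that flag silently takes the slow path, which would explain the Core2 65nm vs 45nm gap described above.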
  3. Yes, I am... because they are the same. Cache and memory don't matter for GPUPI, and those other improvements are more or less paper dragons with very small performance impact. And btw, one of my links is a Phenom II... A different AGESA can be faster - but by single-digit percents at most, not twice as fast. And one more thing - CPU performance from GPUPI 2.x through 3.2 is very similar, as long as you are comparing 64-bit versions.
  4. Obviously, this is a bug of some kind. It is not possible for one board with a specific BIOS version to be twice as fast compared to other boards and/or BIOS versions. Also, Llano simply must be in line with other K10 quad cores running at a similar clock. For example http://hwbot.org/submission/3314113_havli_gpupi_for_cpu___100m_phenom_x4_9650_2min_11sec_369ms or http://hwbot.org/submission/3699925_havli_gpupi_for_cpu___100m_opteron_8380_2min_12sec_595ms
  5. Yeah, it isn't so hard to figure out the best HW for each stage. Too bad our team doesn't have two more people who also own it.
  6. Stage 2 - does Wolfdale-3M mean the C2D E7xxx series only, or something else too? Like the Pentium E5xxx, which also uses this exact die, but with only 2MB of cache active. Stage 9 (and also others which are done the same way) - "single server CPU allowed"... does that mean only Xeon / Opteron is allowed, or can C2D / Phenom / etc. be used as well?
  7. Is an auto-hiding taskbar OK or not? I'm using it for all screenshots... for example http://hwbot.org/submission/3796340_
  8. Great score! And btw - I don't see a problem with using older versions of CPU-Z and GPU-Z as long as they show the correct info.
  9. Yeah, 3.2 is slower on Fermi... in fact it is even worse for me. For example I can't get anywhere near this score anymore http://hwbot.org/submission/3301090_havli_gpupi___1b_geforce_gtx_465_1min_5sec_319ms/
  10. SLI can be enabled using one of the SLI hacks - for example https://www.techpowerup.com/forums/threads/sli-with-different-cards.158907/ The problem is when the chipset or CPU (as the PCI-E controller is built in there) is newer than the GPU drivers... in that case SLI isn't supported (I think). The mentioned hack probably fools the driver into thinking it runs on an X58 or something similar... and therefore SLI will work, even in XP. http://hwbot.org/submission/3502846_havli_3dmark06_2x_geforce_8800_gtx_27622_marks
  11. The best solution would be to simply add an option to the user's profile - just a simple checkbox would be enough: either show the HW library as it is now, or ignore VGA brand and merge HW categories into one of each type. Going through 50 results and trying to find the one that is marked MSI really isn't very effective.
  12. It is simple - just rebench the 6.9GHz CPU and done. Those who want to stay on top must rebench stuff regularly anyway. For example, in 3D every new 115x platform launch means a rebench of all 3DMarks up to 06... and new HEDT = rebench of Vantage and later.
  13. Hmm, so far everything with more than one socket and anything called Xeon / Opteron / Quadro / FirePro was considered "server". This doesn't apply here?
  14. Broken points here https://hwbot.org/benchmark/cinebench_-_2003/rankings?start=0&hardwareTypeId=processor_4260&cores=2#start=0#interval=20 and most likely here (17.5 points for 1st place out of 3 doesn't sound right) https://hwbot.org/benchmark/cinebench_-_2003/rankings?start=0&hardwareTypeId=processor_857&cores=1#start=0#interval=20
  15. Well, I have two PPGA (black) P166 MMX and they are both unlocked up to 3.5x. I also have one ceramic 166 MMX and that one is locked at 2.5x. If you don't believe it is really a 166, then that is your problem... I don't care. And I certainly won't be wasting my time just to prove you're wrong. This competition has cost me enough effort as it is.
  16. Some MMX have unlocked multiplier up to 3.5x. I believe the PPGA models are unlocked, while CPGA are locked.
  17. Yes, the new version is already allowed to submit scores. And if there are no problems with v2.2.0, it should be made mandatory after some time.
  18. Version 2.2.0 is ready for release. http://hw-museum.cz/data/hwbot/HWBOT_X265_2.2.ZIP What is new:
      1. There was a mistake in the HPET detection of v2.1.0 - on systems that require HPET, only an error message was shown but the Run button wasn't deactivated... Therefore it was possible to run and submit the benchmark even without HPET (such results can still be recognized on the screenshot - they contain the red message "HPET timer not active"). This issue is now fixed.
      2. Coffee Lake added to the non-HPET whitelist. Currently the list contains: Skylake, Skylake-X, Kaby Lake, Kaby Lake-X, Coffee Lake.
      3. Added an option to select the CPU name to submit. The first two options are 1) the name as CPU-Z detects it, 2) the BIOS string. The third option leaves the field empty - this should solve the problem with unlocked AMD CPUs that are misdetected and can't be edited later.
      4. Increased the score precision to 3 decimal places. There is a catch, however. It seems HWBOT doesn't support it properly after all. Internally there are 3 decimal places - they can be seen on the pre-submit screen... and also later when editing the submission. On the score page there are only two, and how the rankings are calculated I'm not sure. Let's see if better precision can be implemented on the HWBOT side at some point in the future; x265 is now ready for it. http://hwbot.org/submission/3793276_
      5. Updated CPU-Z to version 1.83.
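The whitelist logic in points 1-2 can be sketched as follows. This is a hypothetical reimplementation, not the benchmark's actual code; the function name and the idea of matching on a detected microarchitecture string are assumptions:

```python
# Hypothetical sketch of the non-HPET whitelist check described in the
# changelog above; the real HWBOT x265 detection logic may differ.

NON_HPET_WHITELIST = [
    "Skylake", "Skylake-X", "Kaby Lake", "Kaby Lake-X", "Coffee Lake",
]

def hpet_required(microarch: str) -> bool:
    """HPET is mandatory unless the detected microarchitecture is whitelisted."""
    return microarch not in NON_HPET_WHITELIST

def run_button_enabled(microarch: str, hpet_active: bool) -> bool:
    """The v2.1.0 bug: this gate was missing, so runs were possible without HPET."""
    return hpet_active or not hpet_required(microarch)

print(run_button_enabled("Coffee Lake", hpet_active=False))  # True (whitelisted)
print(run_button_enabled("Haswell", hpet_active=False))      # False (HPET needed)
```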
  19. Good news - GT200 is working very well with legacy version of GPUPI 3.2 http://hwbot.org/submission/3789091_havli_gpupi___1b_geforce_gtx_285_8min_15sec_997ms
  20. Broken points here as well http://hwbot.org/benchmark/3dmark2001_se/rankings?hardwareTypeId=videocard_1103&cores=2#start=0#interval=20
  21. Ok then - 3 places it is. Other than G470 it should also make benchmarking of very old HW more interesting. With the legacy mode, even a PIII / first-gen K7 should be able to run. Who is brave enough to run 4k on a Slot A Athlon?
  22. At the moment I'm finishing my work on a small update of the x265 bench. Not so long ago there was some discussion concerning the granularity of the score. Since the beginning there were (and still are) just two decimal places. For most systems this is good enough to reflect even a very small change in performance. But not for all of them - and since attention is shifting more towards the 4k preset, maybe it is time to consider adding a 3rd decimal place. Getting 3 decimal places is easy, and after a quick test it seems the HWBOT API supports it too. So the question is - switch to 3 decimal places or stay at 2? This is a double-edged sword; some people might benefit from it, others would lose points.
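The granularity problem above is easy to demonstrate: two 4k runs can tie at two decimal places yet differ at three. The fps values below are made up for illustration:

```python
# Two hypothetical 4k-preset results that round to the same two-decimal
# score but are distinguishable with a third decimal place.

run_a = 1.2347  # fps
run_b = 1.2341  # fps

print(f"{run_a:.2f} vs {run_b:.2f}")  # 1.23 vs 1.23 -> tie
print(f"{run_a:.3f} vs {run_b:.3f}")  # 1.235 vs 1.234 -> run_a ranks higher
```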
  23. Great work! I can confirm both GPUPI v3.1.1 and 3.2 legacy are working on a 32-bit Pentium 4 Windows XP system. I'll test some of the older GPUs soon; curious how they will work. Btw - since you have nice HW detection implemented, would it be too much to ask for automatic datafile name suggestion, like x265 does for example? It would make benchmarking for HW Masters a little easier. I'm using the following pattern: amount_of_HW_x_HW_name_benchmark_type_score 2x_Opteron_2216_HWBOT_x265-1080p_2.18 fps.hwbot
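The naming pattern requested above is simple enough to sketch. The function and parameter names here are made up for illustration; only the pattern itself (amount_of_HW_x_HW_name_benchmark_type_score) comes from the post:

```python
# Hypothetical datafile-name builder following the poster's pattern:
# amount_of_HW_x_HW_name_benchmark_type_score

def suggest_filename(count: int, hw_name: str, bench: str, score: str) -> str:
    """Build a .hwbot datafile name; spaces in the HW name become underscores."""
    hw = hw_name.replace(" ", "_")
    return f"{count}x_{hw}_{bench}_{score}.hwbot"

print(suggest_filename(2, "Opteron 2216", "HWBOT_x265-1080p", "2.18 fps"))
# 2x_Opteron_2216_HWBOT_x265-1080p_2.18 fps.hwbot
```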