HWBOT Community Forums
Everything posted by I.M.O.G.

  1. I lost .7 globals tonight because of leeghoofd's submission showing up as fifth in the rankings - it appeared 3 hours ago for some reason, despite being 200 days old. It has points disabled, but it isn't within league rules and should be deleted, as is typical for everyone else's submissions that aren't within league rules. Also, since he has points disabled, his submission shouldn't reduce points for everyone below him. The .7 points don't matter - just a case of consistency, and bot weirdness fwiw.
  2. Nice work, thanks for sharing. Good to have more information to consider. There are a ton of variables which are difficult to control, and they unfortunately make this sort of testing somewhat arbitrary on a scientifically accurate scale. When dealing with pastes, Roman mentioned near the conclusion that the differences are small enough that pretty much any paste "will do" under casual 24/7 conditions... The differences are small. On air, from best to worst, the difference was 2.6 degrees. On mild subzero, the difference was 1.95 degrees. At -150, the difference was 6.6 degrees. The inconsistency in this variance from air, to subzero, to way subzero also implies there is difficulty in accurate measurement. Logic suggests that the spread should be smallest on air cooling, medium on light subzero, and largest on real subzero as the pastes freeze and some take on far worse thermal characteristics.

Surface tinting from multiple paste applications is one factor. The goal of paste is to fill the microscopic gaps, but not interfere with direct contact. You can clean a base between applications, and it can look clean, but on a micro level that isn't quite the case. Each test is influenced by those before it to some degree - probably not enough to matter, but it can't be easily quantified or reported to know for sure.

Then there's also application technique. Over the past decade, a lot of people have analyzed this - both in the TIM market and independent reviewers. Many of us have made up our own minds about how we best like to apply paste, but as for the applicator/spreading technique used in this review specifically, it has been covered numerous times; the most prominent in my mind are two articles by Joe Citarella. Generally, a pea in the middle has been shown to produce the best spread characteristics without trapping air inside the interface, with consistent coverage across the surface. This one is about 5 years old: http://www.overclockers.com/applying-thermal-grease/ And this one is sort of an updated follow-up to that work: http://www.innovationcooling.com/applicationinstructions.htm

We also shouldn't overlook the calibration and resolution of the temperature reading equipment. Without calibration of the probes, one can't be certain how accurate they are to reality. They may be rated for a certain degree of accuracy, but this isn't that dissimilar to the issue we have with Kill-A-Watt meters - depending on the specific unit, accuracy can be pretty different, and it's not an absolute measurement but more of a ballpark figure. If they were calibrated, what temperatures were they calibrated to? Accuracy varies a lot from room temperature to -150. And resolution comes into play here as well - the temperature unit may display in increments of .1 degrees, but does it really have the capability to reliably measure and report variances on that level? I dunno.

With paste, where the differences can be as small as a few degrees or even tenths of a degree from one product to another, it is hard to be absolutely conclusive. I don't think that makes the effort or results any less admirable, or even any less useful to look at - these are just things to keep in mind, and a testament to the likely impossibility of reproducing the results in another independent test bed (as if anyone else is going to put 80 hours and hundreds of mountings into retesting them all). For pastes that are close in this list, anything could probably swap positions; for pastes that are far apart in this list, there is more likely a meaningful difference (see the sketch after this post for the basic idea).

Props again on the testing. I don't mean to take anything away from the work done. This sort of testing is a herculean effort, and I like to see it. It's better to have some real data than to just waffle about conjecture. I just keep some things in mind any time I read about paste testing, because the results different people get are almost always different, and reasons like those I listed are why - it is really, really hard to do in an absolutely accurate fashion.
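A rough sketch of that uncertainty point, in Python: two pastes only meaningfully differ when their temperature gap exceeds the combined error of the probe readings. The ±0.5 degree accuracy figure and the function name below are illustrative assumptions of mine, not specs from the review.

```python
# Minimal sketch: is a temperature gap between two pastes larger than the
# combined error of the two probe readings that produced it?
# The +/-0.5 degree per-reading accuracy is an assumed figure for illustration.
import math

def meaningful_difference(temp_a: float, temp_b: float,
                          probe_accuracy: float = 0.5) -> bool:
    """True if the gap between two readings exceeds the combined probe error."""
    combined_error = math.sqrt(2) * probe_accuracy  # two independent readings
    return abs(temp_a - temp_b) > combined_error

# A 0.3 degree gap between close pastes sits inside the noise; the 2.6 degree
# best-to-worst spread on air does not.
print(meaningful_difference(46.2, 46.5))  # False - these could swap positions
print(meaningful_difference(45.0, 47.6))  # True  - more likely a real difference
```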
  3. Looking forward to seeing a few of you I haven't seen in years, and meeting a few more I've been hustling against in the rankings. This will be a memorable event. :-)
  4. I am guessing it's a mounting hardware container, so the rods don't get bent up as they bounce around in shipping. Not that it's a real problem, but I bought a Gemini and an F1 Dark from KPC over the past couple of years. Everything arrived fine, but the rods were just taped together in a small bundle. I forget with which one, but with one of the purchases things had gotten jostled and a couple of the rods ended up bent... Everything was 100% functional and I never really thought much of it, but it is an expensive purchase, so the extra attention to detail does seem nice.
  5. Or he can go back before open enrollment ends.
  6. I read this whole thread a couple of months ago. Batch is meaningless - a total crapshoot. Some people have gotten lucky with several above-average chips bought at the same store, but other people on those same batches have had crap. You just have to bin until you find a good one, or throw money down on a prebinned chip.
  7. Would you need to use a heat gun with the LT edition POM top, or can it be torched if care is taken? Also, is the POM available in the clear variety? I see only black shown, but I'm curious.
  8. Looks good! Similar chambering to the Raptor 3.
  9. I wouldn't use that method, so I can't say - that would be for someone to decide who thinks assigning arbitrary points is a good idea in the first place. If one were actually going to fix it, they might consider polling the audience to see where its values lie, and then weight scores accordingly. It's still pretty difficult, however, because video card and CPU scores are typically based on abstracted final performance, whereas PSU quality is determined by electrical characteristics which can be measured discretely, and component quality can be evaluated for quality/reputation. Scoring by points is just a bad fit for CPUs/GPUs - too arbitrary, even if you score based on the priorities of your audience, which will likely change over time.

HH looks far more aggressive to me. Your homepage has a large ad block in the content area almost the same size as the featured content block, with bigger text than the featured content. With that, the leaderboard, and the skyscraper, as well as your own driver cleaner below also in the content area, plus the background takeover... Combined, it is overwhelming. Part of what I see as a problem is that the styling of your site blends with the styling of the advertising - you can hardly tell where one stops and the other starts. From beginning to end, separation of promotional content from editorial content is a problem. Never the twain should mix. But with the aggressive advertising, as well as the inflated product ratings, and the awards thrown around like crazy... It looks systematic more so than incidental. My tastes aren't everyone's, however. But I know it's harder to get outside perspectives and input than it is to get input from your fans. Just my .02. Good catch.
  10. Stop giving out awards which promote the products and make them sound super special, when in fact, if you look at all the reviews done at HH, everyone gets an award just for participating. Also stop assigning arbitrary numbers in the range of "8-10 out of 10" for every product you review. If everything gets a score of 8-10, then your ratings should range from 1 to 3 instead of being inflated. Yes, users like to see numbers that summarize the result, but your rating is a disservice because it artificially inflates the perception of every product. If you insist on providing numbers in the review, do so in a manner like JonnyGuru's - it isn't perfect, but it's the most well-executed approach out there for number-based product ratings, in my opinion. They at least have a structure to how the number is developed.

I get the sense from comments I've seen from you and others there that you genuinely have the best intentions, but these two items mentioned above, as well as the especially aggressive advertising on your site, can give the impression that you are pandering to the manufacturers.

Also, this latest article gets somewhat laughable in the part where it addresses what users can expect in real-world conditions. If you want to represent real-world conditions, you should be running a mainstream platform with a mainstream CPU. That means Z77 and a 3570K/3770K, not X79 with a 5GHz IB-E. I understand the point you offered about X79 removing bottlenecks, but a 5GHz IB-E isn't real world for most of your audience, and it could effectively lead to some numbers being exaggerated compared to what users can actually expect. Potential bottlenecking can only be shown to be relevant with multi-GPU setups.

These are just some basic suggestions, without any filter, and I'm trying not to pass judgement, to be fair to you. I hope you don't take offense. But as long as you continue doing what you are doing, it puts you on especially bad footing to write a piece like this most recent article. It looks bad, as it's ironic from the outside, considering how you do ratings and awards.
  11. He was putting in the full GPU name, when you have to select one from the dropdown.
  12. I don't agree that there's a point in going from a 4-cup structure to a 3-cup one. It's already essentially four 2-month competitions, with one month each of open enrollment and organization... In comparison, I don't think 3 cups versus 4 makes a meaningful difference. Looking at the rankings currently, we aren't even 3 weeks into the current competition, and SP32M is almost settled - there are likely to be a few more submissions competing at the top end, but that's it. Memory frequency will get a few more competitive submissions too, but it's at least halfway done. Cinebench is likely to bring out a few more big rigs to round out the top end there. So that leaves the 3D stuff - both of those will get a decent number of improved submissions, and we have over 10 weeks to get them posted. The current timeline seems reasonable to me.
  13. I was looking to see how my score stacked up against the Pros', despite my CPU being more average than awesome... Looking at efficiency by using the performance product. SP32M could be seen to separate the knowledgeable benchers from the less so, as it's simple but elegant to compare against other scores. You can calculate an efficiency rating and easily extrapolate scores with a good degree of accuracy. If you don't know how, you can estimate times in SuperPi by converting minutes to seconds, calculating the performance product (time in seconds multiplied by CPU clock in MHz), then dividing by the MHz you want to estimate a time for. So with my tweaks and RAM settings (tightened secondary and tertiary timings), doing the calculations:

6.4GHz Ivy - My time was 5 minutes, 8 seconds
6.9GHz Ivy - This time would be 4 minutes, 46 seconds
6.8GHz Ivy - This time would be 4 minutes, 50 seconds
6.6GHz Ivy - This time would be 4 minutes, 59 seconds

So this tells me a few things. My efficiency is in the right ballpark. But all but one of the Pro cup SP32M scores so far have beaten me twice: once by CPU frequency, and again by RAM settings. Phil is an exception, as his RAM might be running stock for his current submission. If he and I were running the same CPU clocks, I'd be 2 or 3 seconds ahead. Here's a comparison of current submissions ranked by Performance Product (PP); lower numbers mean better efficiency:

1. PP = 1959884 - Splave, 4 minutes 47 seconds
2. PP = 1963601 - SF3D, 4 minutes 41.641 seconds
3. PP = 1966001 - Smoke, 4 minutes 43 seconds
4. PP = 1978741 - I.M.O.G., 5 minutes 8 seconds
5. PP = 1994286 - Phil, 4 minutes 53 seconds
6. PP = 2024610 - K404, 7 minutes 57 seconds
7. PP = 2156912 - Oj0, 6 minutes 54 seconds

So Splave is currently 6 seconds out of first place, but if he had that 6.972GHz Ivy SF3D has, he'd be in first place by half a second. Anyways, my curiosity is satisfied. This isn't scientifically accurate, but it ballparks things pretty well (see the sketch below for the arithmetic). If you want to get better at SuperPi 32M... Start paying attention to your Performance Product, and keep making changes that make it lower. Do that, and once you do have god's own chip some day, you'll have what it takes to run it the way god intended.
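A minimal sketch of that arithmetic in Python. The PP formula (seconds times MHz) follows from the estimation recipe in the post; the 6424MHz figure is back-calculated from the listed PP and time, so treat it as an illustrative assumption, and the function names are my own.

```python
# Performance Product (PP) sketch: PP = time in seconds x CPU clock in MHz.
# Lower PP means better efficiency at a given score.

def performance_product(minutes: int, seconds: float, mhz: float) -> float:
    """SuperPi 32M time, converted to seconds, times CPU clock in MHz."""
    return (minutes * 60 + seconds) * mhz

def estimate_time(pp: float, target_mhz: float) -> str:
    """Estimate a 32M time at a different clock, assuming efficiency holds."""
    total = pp / target_mhz
    return f"{int(total // 60)} min {total % 60:.0f} sec"

# My run: 5 minutes 8 seconds; 6424MHz is back-calculated, not a logged clock.
pp = performance_product(5, 8, 6424)  # roughly 1.98 million

for clock in (6900, 6800, 6600):
    print(f"{clock} MHz -> {estimate_time(pp, clock)}")
```

Tightening RAM timings shows up as a lower PP at the same clock, which is why the table above separates frequency from efficiency.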
  14. Holy crap, nice result. The 1090T was one of my favorite chips to bench. You really have a great one.
  15. Whoa. I wasn't sure how best to actually do it, but that looks great.
  16. While changing resolution in order to fit everything in the screenshot, the NVIDIA driver crashed... That's why some of the 3D01 window is messed up.
  17. Verification picture: http://db.tt/j4xq8v9F