HWBOT Community Forums

I.M.O.G.


Posts posted by I.M.O.G.

  1. This is how I pick up my LN2. It's an hour's drive, and typically about 3 hours including fill/payment/BS time. Just got home with 80L at $0.50/L... Bubba is in the background; he rides along and sleeps next to the dewar in the back.

     

    [attach]1180[/attach]

     

    Upgraded LN2 hauler:

     

    [attach]1217[/attach]

  2. Look at the account, "dejo's daughter". It has 0 points and all submissions were either deleted or merged with dejo's account. That was disallowed because it "looked" too suspicious, even though it was a legitimate father/daughter team... I know this beyond a shadow of a doubt, and despite it being done within the rules of HWBot, it was PROHIBITED because it looked too suspicious.

     

    This is more than appearances... There is only one logical explanation given post #33. You were submitting runs on the same components under two accounts. If you didn't want to keep track of which run belonged to who, you should have just submitted them all under 1 account. You claim these runs were on separate hardware, yet you weren't even saving results in separate folders to keep track.

     

    One of you should have your account merged; then there's no more harassment, and it's legal to submit results like this from a group session. That's easier than keeping track of which parts belong to which person.

  3. Nice run. Looks about right for Virus Scan to me, given your storage config compared to mine and stevero's. We're all in the same ballpark - single core is tough. I put a lot of work into replicating el gappo's result in Virus Scan, but never could get it going using onboard. That's the way to go - there's an updated RAID ROM BIOS out there, hacked by a user on the ROG forums. You may want to look into that. It didn't get me anywhere, but it may work better for you.

  4. psychology time! :P

     

    Why submit so much now? The same points were available before the rules were adjusted/changed/however you see it... what is it about the new rules that has made people want to bench PCM05 again? There are a lot of new faces, so I do not think it's just the "original PCM05 fans" who want to keep their rankings...

     

    I'm one of the new PCMark05 faces. I started submitting PCM05 in January... I saw a lot of high points with less activity/competition. At that time, a lot of 1st place scores weren't at very strong CPU clocks, so I knew I could run faster chips, and that I could do well if I learned the benchmark and had good storage. So I bought storage and figured out how to run it well.

     

    Stevero and I went back and forth quite a bit in a couple of categories, and a couple of other guys were submitting really good scores... I think all the new 1st place scores worth a lot of points were like blood in the water - it brought out the sharks.

  5. I don't see why we have to know that this software will be updated for generation after generation. If this software is available, then it is, and if they stop updating it after 2 years or something, then so be it. It's not a big deal - then we just don't have MVP subs for those generations.

     

    If it's permitted and allowed now, then loses support after a year or two... I don't have a point in this question; I'm just trying to figure out what this could mean. But if it loses support, when will the records set on MVP be broken? I wonder if it could do weird things to the global points, where in 3 or 4 years we have much more powerful hardware, but to compete for any top 3D globals we're still falling back to older MVP hardware. Dunno if that makes a difference, but I like the motivation to run newer and more expensive hardware to see what it can do... If new, more powerful stuff is less rewarding to benchmark due to MVP, that could be a negative to some people, I guess.

     

    I don't have a firm position one way or the other. In post #158 I shared some other thoughts, and rereading it, I guess it sounds like I support MVP in a separate ranking - for similar reasons to why Futuremark is looking into detecting whether MVP is active, I suppose.

  6. I agree that turning the standings around is a bad argument for disallowing MVP.

     

    Scores achieved with MVP are not directly comparable to scores without it, and the advantage of MVP is purely a technology-based development that applies across multiple benchmarks. Its advantage is not tied to knowledge or supported by learning, and it's a clear demarcation point. For that reason, I think it seems logical to separate the rankings for MVP-based results. If we want to keep competing with traditional high-end CPU/GPU combos, which is currently the most popular category, separate rankings seem to be the only way to remain competitive and draw valid hardware comparisons.

     

    That leaves only 3 meaningful logical differences I can immediately identify from PCM05. First, PCM05 tweaks are tied to research, knowledge, and familiarity with the benchmark - these are good things to reward in a community founded on tweaking benchmarks. Second, there was no clear demarcation point with PCM05; tweaks were researched and evolved, and the rules evolved with them (few would argue the rules evolved perfectly, but they were made in a best-effort sense to keep things as interesting as possible). Third, tweaks in PCM05 are not platform-specific - they apply across any mobo/CPU/storage setup, and there isn't one element that independently dominates all rankings... Each tweak in PCM05 has stretched the envelope for what's possible in total scores on a given hardware setup, but no single tweak, chip, or piece of hardware has put relatively weak rigs in top spots. PCM05 tweaks make whatever hardware you have score better. If you want a top spot, everything must work together almost perfectly.

     

    I think the comparison to PCM05 is a relevant one. OS and software tweaks have grown PCM05 scores incrementally over time regardless of CPU and GPU - it has learning and improving benchmark scores at its core. MVP is a disruptive instance of technology that independently renders scores not directly comparable, and is only supported on certain platforms - it has nothing to do with learning, though it does improve scores. I think MVP is an easier judgement call to make, because the situation appears different enough... HWBot is about competing against people with similar hardware who know how to run benchmarks well.

  7. About HWBOT ranks/points and Virtu MVP

     

    So, as you can all see from this thread, the discussion regarding Virtu MVP's legitimacy in competitive benchmarking is still going on. Although we've reached a point where we know in what situations Virtu MVP should and should not be considered legit, we do not have concrete information on how Virtu MVP works exactly. This piece of information is vital in this discussion.

     

    For the time being, HWBOT will not award points or ranks to benchmark results obtained with the aid of Virtu MVP. In our current development sprint, we've added a feature that would allow users to indicate if they used Virtu MVP to obtain the score. In case Virtu MVP was used, the result will automatically be set to 'no points, no ranks'. Until this feature is installed on the production server, we would ask you to use the 'I do NOT want points for this submission'-checkbox.

     

    Please note that no final decision about Virtu MVP has been taken yet. It's possible that Virtu MVP will be allowed in the normal rankings, that it will get its own ranking, or even that it will be completely disallowed. We hope to have all the pieces of the puzzle shortly so we can reach a final conclusion on the legitimacy of this new technology.

     

    It would be cool if there were a page on HWBOT where only announcements like this were made - like an HWBOT policy milestones page. It would help keep track of what the current rules are, and of pertinent issues that affect the community. Currently, the best list we have is "find all posts by massman". It takes a lot of digging or monitoring of the forum to keep up, and often issues like this don't lead to a rule change immediately... but they might weeks or months down the road.

     

    I'm not interested enough to follow the forum every day to catch a random thread or post like this, but I would follow a page dedicated to announcing HWBOT's stance on issues like this that can impact the rules or their interpretation.

  8. Maybe a 2-week window at the start of the comp that doesn't count points, only tracks rankings. After that window is up, points are awarded each day - people could sandbag for the first 2 weeks that way, but then everyone would have motivation to post as soon as possible. Not a perfect solution, but it could be cool. You'd probably still want two conditions for ending the contest: a final end date if the competition is very close, and a blowout end point if it's mathematically impossible for anyone to come back by the final end date. It's more complicated, but it would be cool.

     

    Congrats to the winners by the way, impressive scores.

  9. Ya, he has a few good solutions. I like the king of the hill idea... you get more points the longer you hold the top ranking. Like first place gets 5 points every day they are first, 2nd gets 4 pts/day, 3rd 3 pts/day, etc. It encourages people to submit early and compete, not just at the end. The contest would end after four weeks, or after at least 2 weeks if someone had a lead of 50 points in first or something.

     

    Not a perfect idea - it requires recoding the points system for comps - but it could be interesting... right now comp winners only compete at the end. It's the smart way to play under the current comp rules.
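
The scoring scheme sketched in posts 8 and 9 can be expressed as a short sketch. This is purely illustrative - the function names, point values, and the catch-up math (net 1 point/day gained by 2nd place overtaking 1st) are my assumptions, not an HWBOT specification:

```python
# Hypothetical sketch of "king of the hill" competition scoring:
# each day, 1st place earns 5 points, 2nd earns 4, 3rd earns 3, etc.,
# and the contest ends at the final date or earlier if the lead is
# mathematically insurmountable. Values are illustrative only.

def award_daily_points(standings, totals, top_points=5):
    """standings: today's competitors, ordered best-first.
    totals: competitor -> accumulated points (updated in place)."""
    for rank, who in enumerate(standings):
        totals[who] = totals.get(who, 0) + max(top_points - rank, 0)
    return totals

def contest_over(totals, days_left):
    """End at the final date, or early on a 'blowout'."""
    if days_left <= 0:
        return True  # final end date reached
    ranked = sorted(totals.values(), reverse=True)
    if len(ranked) < 2:
        return False
    lead = ranked[0] - ranked[1]
    # If 2nd place took 1st every remaining day, it nets at most
    # 1 point/day on the leader (5/day earned vs. the leader's 4/day),
    # so a lead bigger than days_left cannot be closed.
    return lead > days_left
```

For example, a 50-point lead with two weeks remaining would trigger the blowout end, since at most 14 points of the gap could be closed:

```python
totals = {}
award_daily_points(["alice", "bob", "carol"], totals)
award_daily_points(["alice", "bob", "carol"], totals)
# totals == {"alice": 10, "bob": 8, "carol": 6}
print(contest_over({"leader": 60, "chaser": 10}, 14))  # True: lead of 50 > 14 days
```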
