HWBOT Community Forums

RagingCain

Members
  • Posts: 47
  • Joined

  • Last visited


  • Location
    Jacksonville, FL


  • Interests
    Gaming, Programming, Mathematics


  • Occupation
    Software Developer II


  • Real name
    Tristan Hyams

RagingCain's Achievements

Newbie (1/14)

Reputation: 10

  1. I am still around doing work - I have moved everything to a new domain, new cert, new web server: https://houseofcat.io/ I have fully integrated Harmonic, along with a few other bug fixes for RTC/HPET detection. In addition to that, I have optimized some code, further enhancing scores. I finally tested quad-CPU setups and 64 logical processors too - that was an itch I had been wanting to scratch. I have also updated some of the CPU-z integration - it reads from newer versions of CPU-z, but they constantly shift names around, which makes that hard. I would love to pay for a dev license, but it's $1,000, and I don't currently even get donations for what I have written. C'est la vie. I have also begun "aiding" benchmarking by adding the ability to kill processes (ones that are safe to kill); a sketch of the idea follows below. Most individuals run clean OS images etc., but if you are benchmarking on the fly and want all the bogus Razer/nVidia stuff killed, I tried to let you do that.
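A minimal C# sketch of the kind of safe-process killer described above, assuming a hardcoded whitelist of safe-to-kill image names; the names here are placeholders, not Bishop's actual list:

```csharp
using System;
using System.Diagnostics;

class ProcessKiller
{
    // Hypothetical whitelist of safe-to-kill process names (image names
    // without ".exe"); Bishop's real list is not published here.
    static readonly string[] SafeToKill = { "ExampleBloatware", "ExampleUpdater" };

    static void Main()
    {
        foreach (var name in SafeToKill)
        {
            foreach (var proc in Process.GetProcessesByName(name))
            {
                try
                {
                    proc.Kill();
                    Console.WriteLine($"Killed {name} (PID {proc.Id})");
                }
                catch (Exception ex)
                {
                    // Access denied, or the process already exited on its own.
                    Console.WriteLine($"Could not kill {name}: {ex.Message}");
                }
            }
        }
    }
}
```

Process.Kill() needs sufficient privileges, which is why the sketch reports failures instead of letting them crash the benchmark.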
  2. Don't mind Genieben; he has a personality akin to sandpaper and has been rude to me here ever since I started developing Bishop CPC. Yes, Harmonic uses the same thing I believe 3DMark does for HPET validation. It's up to developers to implement similar features in their software and validate the results before submitting; Harmonic was my proof of concept. While you can Photoshop things like Harmonic's UI, that does little to tamper with the actual code and results, all of which should be encrypted and included. A developer can secure the validation process and prevent tampering. They can also capture screenshots during the submission process - basically taking away the user's ability to manually submit modified screenshots. It's quite easy to determine if HPET is in use, and if anyone needs code, I will be more than happy to provide it (one common heuristic is sketched below). Harmonic is now embedded in Bishop CPC, and it will be using the underlying validations.
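One common heuristic for detecting HPET, and only an assumption about how Harmonic does it: when HPET is forced as the Windows platform clock (bcdedit /set useplatformclock true), QueryPerformanceFrequency typically reports the HPET rate of roughly 14.318 MHz, while other QPC sources report very different frequencies. A minimal C# sketch:

```csharp
using System;
using System.Diagnostics;

class HpetCheck
{
    static void Main()
    {
        // Stopwatch.Frequency wraps QueryPerformanceFrequency (QPF).
        long qpf = Stopwatch.Frequency;
        Console.WriteLine($"QPC frequency: {qpf} Hz");

        // Assumption: with HPET forced as the platform clock, QPF reports
        // ~14,318,180 Hz on typical PC hardware; other timer sources
        // (e.g. the TSC) report very different frequencies.
        bool looksLikeHpet = Math.Abs(qpf - 14_318_180L) < 50_000;
        Console.WriteLine(looksLikeHpet
            ? "QPC appears to be backed by HPET."
            : "QPC does not appear to be backed by HPET.");
    }
}
```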
  3. Build 2.108 - What's New Puthycat?
     • Installer & AutoUpdater - versions increasing again.
     • Check & uncheck all cores.
     • Bug fixes / UI optimizations.
     • Two new difficulties added: 2 Billion and 4 Billion*.
     • Two new algorithm versions.
     • Process/Task Killer.
     • Log file generator (to help us).
     • Pause UI During Benchmark option.
     • CPU-z crash fix.
     • Added Light / Heavy RAM Algorithm v2 for Int32 and Int64 for single cores.

     What is the Light Algorithm? It is the original Sieve of Eratosthenes. It uses a lot less RAM at the cost of some JIT/CLR overhead.
     What is the Light Algorithm v2? It is the original Sieve of Eratosthenes, using math tricks to cut storage in half without compression.
     What is the Heavy RAM Algorithm? It uses more memory but has less overhead, making it faster.
     What is the Heavy RAM Algorithm v2? It uses more memory than Light, but uses a couple of math tricks to cut the RAM usage in half without compression or overhead. (A sketch of the half-storage trick is below this post.)

     I see you have added 4.096 Billion as an option, but I don't see it listed? This is still an experimental section of code, but getting to it is easy. Ensure you have only one single core/LP checked, then click Algorithms -> Single Threaded -> either Heavy algorithm. You can now choose 4B with Int64 as the datatype. (Screenshot: http://www.overclock.net/content/type/61/id/2862149/width/500/height/1000)

     CPU-z no longer crashes when your version has values in a different section, but you will see ERROR instead. Want your CPU-z values read correctly? Go to Extras -> Help Dev -> Generate CPUz Log. This opens the folder where Bishop lives; all you have to do is attach the CPUz log to a forum post or email it using my username at gmail. (Screenshot: http://www.overclock.net/content/type/61/id/2862148/width/500/height/1000)

     Kill All (Processes) section (screenshot: http://www.overclock.net/content/type/61/id/2862139/width/500/height/1000):
     Got a ton of processes you think I should kill? Go to Extras -> Help Dev -> Generate Processes Log. This opens the folder where Bishop lives; edit the file so only the processes you think I should close are included, then attach the Processes log to a forum post or email it using my username at gmail.
     Got a list of processes you think you should kill? Go to Extras -> Kill All -> Kill All - From File. Choose the file, click OK, and watch your computer explode. Avoid this feature if you do not know what you are doing.
     Kill Processes - file format: 01: Self Or self. And yes, you can use the file generated by Help Dev -> Generate Processes Log.

     Coming Soon:
     • Additional CPUz information shown on SysInfo.
     • Better algorithm-picking solution.
     • Pre-loaded tests for quick testing.
     • HWBOT integration.
     • Scoreboard overhaul -> tabulation scoring system.
     • Fun.
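The "cut storage in half without compression" trick described above is consistent with an odd-only sieve, where the array stores only odd candidates. A minimal C# sketch of that technique - my reconstruction, not Bishop's actual code:

```csharp
using System;

class OddOnlySieve
{
    // Sieve of Eratosthenes storing only odd numbers: index i represents
    // the value 2*i + 3, so the array covers [3, limit] in half the space.
    static int CountPrimes(long limit)
    {
        if (limit < 2) return 0;
        int size = (int)((limit - 1) / 2);          // count of odds in [3, limit]
        var composite = new bool[size];
        for (long i = 0; i < size; i++)
        {
            long p = 2 * i + 3;
            if (p * p > limit) break;
            if (composite[i]) continue;
            // Start crossing off at p*p; stepping by p in index space
            // is stepping by 2p in value space, skipping even multiples.
            for (long j = (p * p - 3) / 2; j < size; j += p)
                composite[j] = true;
        }
        int count = 1;                               // count the prime 2 separately
        for (int i = 0; i < size; i++)
            if (!composite[i]) count++;
        return count;
    }

    static void Main() => Console.WriteLine(CountPrimes(100)); // prints 25
}
```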
  4. Just an update - benchmark version 2.04, more of a maintenance build, is available. I am already working on 2.05 and HWBOT integration at the moment, as time/work allows. I have been off the project for a year now - things have been quite hectic in my personal life - but now I am back on it. 2.04 features: CPU-z re-integration, which reads the latest version (and hopefully future versions) of CPU-z again. Bug fixes here and there. Landing page: Bishop CPC Benchmark | BYTE Me Development – Advert Free Browsing ^.^ Upcoming features: Check/uncheck all logical processors. Manually try to reclaim CLR memory (one plausible way to do this is sketched below). Clear-all function to unload all stored variables. HWBOT API integration. Tweaked theme framework \ coding rough spots.
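One plausible implementation of the "manually try and reclaim CLR memory" feature - an assumption on my part, since the 2.05 code isn't shown - is a forced, blocking, full garbage collection between runs:

```csharp
using System;

class MemoryReclaim
{
    // One way to "manually reclaim CLR memory" between benchmark runs:
    // force a full blocking garbage collection and wait for finalizers.
    static void ReclaimManagedMemory()
    {
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
        GC.WaitForPendingFinalizers();
        GC.Collect(); // collect anything freed by the finalizers
    }

    static void Main()
    {
        Console.WriteLine($"Before: {GC.GetTotalMemory(false)} bytes");
        ReclaimManagedMemory();
        Console.WriteLine($"After:  {GC.GetTotalMemory(true)} bytes");
    }
}
```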
  5. I am very excited to continue work. We have a decoding/encoding benchmark, vector benchmarking, matrix calculations, and even some C++ versions of the same tests too. Feel free to change the thread title if you can.
  6. Bumpdate. Details in main. HWBOT application will be in progress.
  7. Yeah, I think Phil and I can work something out. Things are still in the early stages, but we are adding bug fixes / features daily. Phil and I have huge ambitions for this program and I want everyone to love it. Just keep in mind that certain aspects of AMD's CPU architecture are weak right now. Allowing users to test Int32 or Int64 independently of floating point lets the Bulldozer+ architecture shine. We were quite proud of some of our AMD users - we have seen an FX 8320 demolish an i7 2600K, for example. Phil is currently working on a prototype performance gauge called RaBIOs - Random Burst Input/Outputs - that will target cache and RAM. We plan on adding OpenCL support as well, to get the GPUs involved, plus instruction sets such as AVX / AVX 2.0, XOP, and FMA4 for vector-based mathematics (a sketch of how .NET exposes SIMD vector math is below). Of course there is HWBOT integration too. ^.^ We are busy busy busy.
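Not their code, but to illustrate the vector-math direction mentioned above: .NET's System.Numerics.Vector&lt;T&gt; lets the JIT emit SIMD instructions (SSE/AVX where the hardware supports them) for bulk arithmetic. A small sketch:

```csharp
using System;
using System.Numerics;

class VectorMathDemo
{
    static void Main()
    {
        float[] a = new float[1024];
        float[] b = new float[1024];
        float[] result = new float[1024];
        for (int i = 0; i < a.Length; i++) { a[i] = i; b[i] = 2 * i; }

        // Vector<float>.Count is the SIMD lane width the JIT chose
        // for this hardware (e.g. 8 floats with AVX).
        int width = Vector<float>.Count;
        for (int i = 0; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va * vb + va).CopyTo(result, i); // element-wise multiply and add
        }
        Console.WriteLine($"SIMD width: {width} floats, result[10] = {result[10]}");
    }
}
```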
  8. We already took care of the scoring system. Just didn't update the screenshots yet. Bumpdate again. Several bug fixes in 2.02.
  9. Bumpdate! Details in main.
  10. It uses a software + hardware combination for clocks; however, I have not tested whether manipulating the BCLK or FSB interferes with time-sensitive applications like benchmarks. The HPET testing program I built is going to be incorporated into my CPU Thread Benchmark program, though. It operates off of QPC / QPF and shouldn't be prone to BCLK or FSB manipulation (a sketch of the underlying calls is below). We are also going to work with Massman on HWBOT integration soon. We plan on fully supporting Windows Vista, 7, 8, 8.1, and 10.
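For reference, QPC/QPF are the Win32 QueryPerformanceCounter / QueryPerformanceFrequency APIs. A minimal C# timing sketch that P/Invokes them directly; the Sleep is a placeholder for the workload being timed:

```csharp
using System;
using System.Runtime.InteropServices;

class QpcTimer
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    static void Main()
    {
        QueryPerformanceFrequency(out long freq);   // ticks per second
        QueryPerformanceCounter(out long start);

        System.Threading.Thread.Sleep(100);         // workload goes here

        QueryPerformanceCounter(out long end);
        double elapsedMs = (end - start) * 1000.0 / freq;
        Console.WriteLine($"Elapsed: {elapsedMs:F3} ms at {freq} Hz");
    }
}
```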
  11. It is not. Massman, you silly goose. Use my darn program I built for you. Harmonic v0.003
  12. My program Harmonic still works, gents. I am not sure if Massman has his own methods or whatnot, though. I am assuming they had something in the works already; I haven't heard more about it.
  13. Gizmo, which GPU? The 860M or 960M?
  14. The GTX 780 Ti & SLI - End of Life - Performance Analysis

     The point of this post/thread is to present some hard facts and anecdotal experiences with GTX 780 Ti SLI.

     Kepler Degradation in Performance:
     For something that could be tested, it seems like few people have taken the time (an arduous and tedious amount of time) to produce tangible data regarding Kepler performance degradation. Popular opinion even states that the last viable driver was 347.88. Well, I am here to tell you that I have data that may prove otherwise. I am going to let you make your own conclusions; I have no interest in persuading opinion towards or against nVidia. There are some things I feel should be corrected, because when people have all the facts, they can form informed opinions themselves. My motivation is simply to help inform users, and I feel obliged to do the testing since I own the two cards.

     Drivers tested: 331.93, 332.21, 334.89, 335.23, 337.88, 340.52, 341.44, 344.75, 345.20, 347.09, 347.25, 347.52, 347.88, 350.12, 352.86, 353.06, 353.30, 353.38

     Benchmarks: 3DMark Fire Strike, 3DMark Fire Strike Extreme

     System Setup:

     Why only one benchmark? Initially I had planned to run several. The difficulty is finding a benchmark that has been around a while and is supported in SLI across many drivers. GTA V, for example, is only supported by the last 3 drivers, and that wouldn't show enough data. 3DMark is also something that sells GPUs; it is the de facto standard for reviewing and benchmarking cards. In my mind, this is where nVidia would focus support every driver release, and it has to be at least one of the in-house tests they use to gauge a driver's performance. This led to the following idea...

     The Premise:
     • nVidia has to dedicate time and support to testing 3DMark every release, so any substantial change in performance should cascade outwards, especially a negative one.
     • If 3DMark drops measurably in performance, that would suggest larger, measurable performance issues could exist in titles with less focus.
     • If 3DMark does not change in performance, other titles are not guaranteed to reflect that; more testing would be needed.
     So far I have only used the one benchmark; what do you think that means? This article may not be finished - I may go back further and test more drivers.

     Testing Methodology: Three runs per test, six runs per driver. Scores below are the average of three. GPU scores are recorded and compared separately.

     Results (Charts): Blue - Fire Strike scores. Green - Fire Strike GPU scores.
     Results (Numbers):
     Results SLI (Charts):
     Results SLI (Numbers):

     Testing Notes:
     • 353.38 – A huge improvement over 353.30 in terms of stability and performance.
     • 353.30 – Could not get a 3rd run of 3DMark Fire Strike Extreme; tried 5 times, even at stock with reboots. GPUs do not idle (go to a lower power state) once overclocked.
     • 353.06 – GPUs do not appear to idle (go to a lower power state) once overclocked.
     • 352.86 – GPUs do not appear to idle once overclocked.
     • 350.12 – GPUs do not appear to idle once overclocked.
     • 347.88 – GPUs do not appear to idle once overclocked.
     • 347.52 – This driver does not have the 20-second freeze in NVCP when changing 3D settings. GPUs idle (enter a lower power state) even with an OC set!
     • 347.25 – 3DMark won't launch if the Afterburner OSD is running. Changes to the NVCP 3D settings don't seem to stick (e.g. Prefer maximum performance), yet scores are the highest.

     Overclocking Retested on the Latest Drivers (353.30):

     Overclocking Notes:
     • 001 – Driver crash & recovery in BF Hardline before testing. The driver recovered successfully but was obviously stuck in the wrong power state.
     • 002 – Didn't even make it past the loading screen. No overclock, just a DirectX crash. PowerLimit seems to trigger it; reset PowerLimit to default.
     • 003 – http://www.3dmark.com/3dm/7484258?
     • 004 – http://www.3dmark.com/3dm/7484455?
     • 005 – http://www.3dmark.com/3dm/7484578?
     • 006 – http://www.3dmark.com/3dm/7484642?
     • 007 – http://www.3dmark.com/3dm/7484712?
     • 008 – http://www.3dmark.com/3dm/7484801?
     • 009 – Driver crash. Rebooted before the next test.
     • 010 – http://www.3dmark.com/3dm/7484960?
     • 011 – Driver crash. Realized voltages were not increasing in Afterburner. Set Unlock Voltage to Kernel Mode, Standard MSI. Working again.
     • 012 – Switched to PrecisionX. Driver crash.
     • 013 – http://www.3dmark.com/3dm/7485714?
     • 014 – http://www.3dmark.com/3dm/7485797?
     • 015 – http://www.3dmark.com/3dm/7485872?
     • 016 – http://www.3dmark.com/3dm/7486063?
     • 017 – http://www.3dmark.com/3dm/7486150?
     • 018 – http://www.3dmark.com/3dm/7486237?
     • 019 – http://www.3dmark.com/3dm/7486328?

     Why is the 780 Ti struggling against a card it used to beat?
     A.) We are EOL. I would assume the 3 years of Kepler optimizations have already been made; I expect new-title support, but I don't expect the same level of focus on my GPU anymore.
     B.) The Maxwell architecture is new. Consistent newer-title reviews show the GTX 980 / 970 currently ahead of the GTX 780 Ti, and newer titles require newer drivers. Most claims that the GTX 780 Ti used to beat the GTX 980 / 970 are based on launch reviews, which used older drivers; new architectures mature with driver releases. In benchmarks and titles where tessellation is heavily taxed, Maxwell has roughly 3x the tessellation performance of Kepler. The GTX 970 and 980 also have another hardware advantage: another 1 GB of VRAM (512 MB in the 970's case).
     C.) GCN architecture isn't new!? AMD has one architecture to support, so optimizations for the biggest GCN card trickle all the way down to the smallest GCN card, or even older cards. Most claims that the GTX 780 Ti used to beat the R9 290X are based on old launch reviews using older drivers. Newer titles require newer drivers, and all architectures mature with driver releases. Even GCN will be tapped out at some point in the future.
     D.) The transition to the 35x.xx family has left us a little down in performance. Windows 10 / DX12 driver changes are slowly being merged into the 34x.xx family of drivers. Bugs come up first; optimizations always come afterwards, if they come at all. They could have done a better job merging the branch changes into the drivers.
     E.) All the console ports have been virtually terrible. Batman: Arkham Knight - sales were stopped, it was so bad. Assassin's Creed Unity - an appalling and terrifying release. Watch_Dogs - hatefully known now as Patch_Dogs; VRAM requirements tripled overnight with no perceivable increase in image quality. The architecture in the consoles' GPUs is GCN, so games are being designed from the ground up to work well on AMD hardware. Newer games will simply launch with a few more low-level AMD optimizations than nVidia ones.

     Conclusion (as of 7/3/2015):
     The transition into the 35x.xx family has cost some Kepler users a measure of performance: for example, a 4.05% loss in GPU score in Fire Strike and 3.88% in Fire Strike Extreme. That loss should be read in context: it is measured since at least driver 347.25. I don't think there is a huge conspiracy to "withhold" performance as people speculate, but there is enough data to merit a review by nVidia. I don't expect major performance gains anymore now that we are EOL, but we shouldn't be going backwards in a benchmark that pre-dates this video card. The best driver for Kepler in terms of 3DMark performance that I have tested thus far is 347.25. If you must use a 35x.xx driver, I highly recommend 350.12 or 352.86. I have updated the charts and added the actual numbers.

     Update to the Conclusion:
     The numbers since 340.52 show only a 3.96% loss in GPU score, or 2.09% in overall Fire Strike score. (These percent-loss figures follow the standard percent-change formula; a trivial sketch is below this post.) The latest hotfix driver, 353.38, suggests nVidia is taking the issues seriously: it resolved not only the performance issues but also the vast majority of the TDR crashing that was occurring. Also note: 353.30 is the worst driver I have ever used from nVidia, and I have had Quad SLI and GTX 590s before.

     Anecdotal Comments:
     I have greatly enjoyed my time with the GTX 780 Tis, even though they are not good overclockers; out of all my nVidia experiences, they have been my favorite cards. nVidia driver stability has usually been good for me - that said, 353.30 is the worst driver I have ever used from them. I have been using the 780 Tis since November 2013, and I tested my configuration because SLI has been my experience with Kepler GPUs. To me there hasn't been a mass reduction in performance; I have never felt the crushing, ominous crippling of performance that so many people claim is happening. I don't doubt there are issues, issues which seem to vary with each user's configuration (including driver issues), but as far as I can tell no one has taken the time to prove many of these claims. I recommend that everyone, if they can, use the 347.25 driver; it is a solid performer and very stable. Maxwell also pre-dates this driver, which refutes the whole "nerfing Kepler to make Maxwell shine" theory: looking at the trend up to 347.25, performance has been consistent. I personally do not have the TDR bug in Windows 8.1; I did have it in Windows 10, though it looks like they are finally working on a fix. The tests here serve a narrow premise and are not all-encompassing, I realize that. Again, the point of this article is to encourage neutral, fact-based discussion and to provide a reference for repeatable, reproducible tests. All my test results are under the user RagingCain at 3DMark Results, if you want/need that extra proof.

     My Driver Recommendations, in Order:
     • 347.25 – nVidia Control Panel settings do change, but when the page refreshes only the default choices appear selected.
     • 340.52 (for non-SLI) – Performs slightly better in single-GPU setups than 347.25, but not in SLI.
     • 353.38 – Have not tested ShadowPlay or GeForce Experience. No freezing when loading the nVidia Control Panel; settings do stick.
     • 350.12 – Had difficulties using ShadowPlay.

     Article Source: ByteMeDev.com

     Coming Up Next: Grand Theft Auto V, when they fix the latest performance issues.
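The percent-loss figures quoted in the conclusion follow the standard percent-change formula, loss% = (old - new) / old * 100. A trivial C# sketch with hypothetical scores (not the actual Fire Strike numbers from the charts):

```csharp
using System;

class PercentLoss
{
    // Percent loss between two averaged benchmark scores.
    // The scores below are hypothetical, chosen only to illustrate
    // how a figure like "4.05% loss" is derived.
    static double Loss(double oldScore, double newScore) =>
        (oldScore - newScore) / oldScore * 100.0;

    static void Main() =>
        Console.WriteLine($"{Loss(12500, 11994):F2}% loss"); // prints "4.05% loss"
}
```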