March 25

At least as of the time of writing, these seem to be the best sources currently for what IBOT does. The hard part is that there really isn't much technical detail on exactly which optimizations are being made. Some other things that we do know: Intel said they won't be changing which instruction set extensions are used, at least right now; this will expand going forward.

The big question for xoc would be how the actual workload gets affected. As far as I can tell, it affects the actual bytecode after being loaded into memory. In a sense it acts closer to, but not quite as far as, a JIT-compiled benchmark would. GPUPI is in this realm, where the actual instructions being executed depend on how the kernel is JIT-compiled (1) by the OpenCL driver. Superscalar CPUs already do a lot of this internally at a microcode level, and have for generations. I think IBOT in a way represents a pretty natural idea. But if every benchmark can become quite a moving target through runtime changes to the code... that definitely presents an interesting problem.

To give some background on code optimization for C/C++ binaries:
- MSVC doesn't do specific architectural flags
- GCC/Clang do, with -march=znver5 or -march=graniterapids or -march=raptorlake for some examples

When you take the same C++ source code and compile with different flags, the optimizer in the compiler will reorganize things to better align with what it knows performs well on the platform. However, it still doesn't know what the actual data going into your program looks like. Usually this is taking into account which instructions the CPU can execute simultaneously, and allowing the CPU to decode more instructions at once.
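To make the -march point concrete, here's a minimal sketch (the file name and kernel are made up for illustration; assumes gcc on Linux) compiling the same C source for two targets. Only the newer target is allowed to use 256-bit AVX registers:

```shell
# Toy example: one C source, two -march targets. Names are hypothetical.
cat > saxpy.c <<'EOF'
void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
EOF

# Generic x86-64 baseline: SSE2 only, no AVX
gcc -O3 -march=x86-64 -S saxpy.c -o baseline.s
# Haswell target: the optimizer is free to use AVX2/FMA
gcc -O3 -march=haswell -S saxpy.c -o haswell.s

# The haswell build should reference 256-bit ymm registers; the baseline won't
grep -c ymm baseline.s haswell.s
```

Same source, very different machine code — which is essentially what a runtime tool like IBOT would be shifting after the binary ships instead of at compile time.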
It also often enables the instruction sets that the CPU supports (2).

Profile-guided optimization usually works like this: you compile your program with some extra instrumentation included to collect performance data, you run it, which generates a file with performance counters and specific info about where you had branch misses / cache misses. The compiler can then use that info to compile your program again, without the data-collection overhead, taking the profiled data into account. Usually that means reordering things in a way that better hints the CPU about branch prediction, performs better when the CPU does predict branches correctly, unrolls loops in a better way (3), inlines functions, etc. (4)

Profile-guided optimization used to mean distributing an entire new binary, though. Think of it like y-cruncher adding a new option for a specific uarch. It is fundamentally different from even just ucode, which doesn't quite match the scope of what this is able to do, at least today.

So the big unknowns are:
1. What optimizations are actually being made?
2. What is Intel going to do with this in the future?
3. Is this detectable?
4. What if someone other than Intel starts doing basically the same thing?

I think IBOT is a really interesting technology, but it has the potential to make other benches act more like how y-cruncher saw massive improvements with new releases. I think HWBOT doesn't need to ban IBOT, but we need to be realistic about what not banning it could mean, and whether it is possible to detect it in enough circumstances to make banning it feasible. Even if Intel isn't massively shifting the instructions today, does that mean they won't eventually? What if someone swaps out the ffmpeg in x265 in memory with one compiled with AVX-512 support for a specific uarch? Is that really different than doing it on disk? IMO, until there's more info it's really hard to say what the best decision is here.
I mostly just wanted to put the info out for everyone to understand what's going on, at least with the info that's currently available. IMO banning IBOT for now is probably the best step until we understand the implications a bit more. Once there are a ton of IBOT-enabled subs, it's harder to turn it off fully... I can try to answer any uarch / optimization questions to the best of my knowledge in more detail if people have them.

(1) There are maybe some minor differences between it and other JIT-compiled stuff, but that's not important here.
(2) There are differences between -march=, -mcpu=, -mtune= ... lots more to go into potentially, but out of scope here.
(3) Loop unrolling allows the CPU to more easily process multiple iterations of the loop simultaneously. Basically going from for (i = 0; i < n; i++) { f(i); } to for (i = 0; i < n; i += 2) { f(i); f(i+1); } (plus a leftover iteration when n is odd).
(4) Modern compiler optimization is extremely impressive, especially when you combine PGO and LTO, and way too big a topic to cover here.
March 28

Not sure if this will help give some context, but this is my 270K (default, d5-6000c36) run on Geekbench 6.3 with IBOT OFF and IBOT ON. The subtests that are boosted seem quite specific - not sure how or what Intel did there.
March 29

Crew

4 hours ago, Leeghoofd said:
"What interests me: if we use it on e.g. a 285K or even Raptor Lake, is there a gain too?"

I bet there's a "supported hardware" paywall check, so the tool might require some liberation.
March 29

Crew

That's all nice, but if Primate Labs invalidates the scores, we have to go along, right?
March 29

I'm not nearly as active as some of you are, but I think IBOT presents some serious issues.

Imagine if SuperPI 32M suddenly got IBOT support, in the form of replacing the original x87-based code with something like y-cruncher. It would turn the benchmark from 3 minutes to 3 seconds.
March 29

Crew

24 minutes ago, Arni90 said:
"I'm not nearly as active as some of you are, but I think IBOT presents some serious issues. Imagine if SuperPI 32M suddenly got IBOT support, in the form of replacing the original x87-based code with something like y-cruncher. It would turn the benchmark from 3 minutes to 3 seconds."

We think alike, but some will just love that to happen... Hopefully this software will only have an effect on a few benchmarks.
March 29

Author

At least with SuperPI 32M, BenchMate could be made to detect it in some form, if it can't already. Geekbench and x265 are more potentially vulnerable at the moment.
April 1

Author

https://www.geekbench.com/blog/2026/03/analyzing-geekbench-6-under-intels-bot/

Seems like IBOT does vectorize the instructions... There's not a ton of info yet on the extent of it, or whether it will convert non-AVX code to AVX code, for instance. But still interesting in general.
April 2

So what's going to happen to the GB6 rules? Primate Labs says the soon-to-be-released version 6.7 will include a way to detect IBOT and flag results accordingly.

Is HWBOT going to forbid scores with IBOT enabled? This seems fine for now, as the volume of scores uploaded for this bench and hardware is small, but in 2-3 generations of Intel CPUs it could become a moderation hell.

Edited April 2 by Paul7347
April 2

Crew

Til now it says invalid on the Primate Labs page with a warning, so we can't allow it, Paul. Once it gets flagged as VALID, it's a go... plain logic, right?
April 3

Yes, that makes sense to the long-time user, obviously. It's just that I would expect waves of newbies subbing invalid scores. Not that you can do much to prevent it, but good luck with the moderation.