HWBOT Community Forums
IBOT thoughts

Featured Replies

At least as of the time of writing, these seem to be the best sources currently available for what IBOT does. The hard part is that there really isn't much technical detail about exactly what optimizations are being made. Some other things that we do know:

  • Intel said they won't be changing which instruction set extensions are used, at least right now

  • this will expand going forward

The big question for XOC would be how the actual workload gets affected. As far as I can tell, it affects the actual machine code after it is loaded into memory. In a sense it acts somewhat like, though not going as far as, a JIT-compiled benchmark would. GPUPI is in this realm: the actual instructions being executed depend on how the kernel is JIT-compiled (1) by the OpenCL driver.

Superscalar CPUs already do a lot of this internally at a microcode level, and have done so for generations. I think IBOT in a way represents a pretty natural idea. But if every benchmark can become a moving target through runtime changes to its code... that definitely presents an interesting problem...

To give some background on code optimization for C/C++ binaries:
  • MSVC doesn't offer microarchitecture-specific flags
  • GCC/Clang do, with -march=znver5, -march=graniterapids, or -march=raptorlake as some examples

When you take the same C++ source code and compile it with different flags, the compiler's optimizer will reorganize things to better align with what it knows performs well on that platform. However, it still doesn't know what the actual data going into your program looks like. Usually this means taking into account which instructions the CPU can execute simultaneously, and allowing the CPU to decode more instructions at once. It also often enables the instruction set extensions that the CPU supports (2).

Profile-guided optimization usually works like this: you compile your program with some extra instrumentation included to collect performance data, you run it, which generates a file with performance counters and specific info about where you had branch misses / cache misses. The compiler can then use that info to recompile your program, without the data-collection overhead, taking the profile into account. Usually that means reordering things in a way that better hints the CPU about branch prediction, performs better when the CPU does predict branches correctly, unrolls loops in a better way (3), inlines functions, ... etc... (4)

Profile-guided optimization used to require distributing an entire new binary, though. Think of it like y-cruncher adding a new option for a specific uarch. It is fundamentally different from even microcode, which doesn't quite match the scope of what this is able to do, at least today.

So the big unknowns are:
1. What optimizations are actually being made?
2. What is Intel going to do with this in the future?
3. Is this detectable?
4. What if someone other than Intel starts doing basically the same thing?

I think IBOT is a really interesting technology, but it has the potential to make other benchmarks behave like y-cruncher did, with massive improvements across new releases. I don't think HWBOT necessarily needs to ban IBOT, but we need to be realistic about what not banning it could mean, and about whether it is possible to detect it in enough circumstances to make banning it feasible. Even if Intel isn't massively shifting the instructions today, does that mean they won't eventually? What if someone swaps out the ffmpeg in x265 in memory with one compiled with AVX-512 support for a specific uarch? Is that really different from doing it on disk? IMO, until there's more info it's really hard to say what the best decision is here.

I mostly just wanted to put the info out there so everyone understands what's going on, at least with the info that's currently available. IMO banning IBOT for now is probably the best step until we understand the implications a bit more. Once there are a ton of IBOT-enabled subs, it's harder to turn it off fully... I can try to answer any uarch / optimization questions in more detail to the best of my knowledge if people have them.


(1) there are maybe some minor differences between it and other JIT-compiled workloads, but they're not important here

(2) There are differences between -march=, -mcpu=, and -mtune= ... lots more to potentially go into, but out of scope here
(3) Loop unrolling allows the CPU to more easily process multiple iterations of the loop simultaneously. Basically going from for (i = 0; i < n; i++) { f(i); } to for (i = 0; i < n; i += 2) { f(i); f(i+1); }
(4) Modern compiler optimization is extremely impressive, especially when you combine PGO and LTO, and way too big a topic to cover here

Not sure if this will help give some context,
but this is my 270K (default, d5-6000c36) run on Geekbench 6.3 with IBOT OFF and IBOT ON.
The subtests that are boosted are quite specific, it seems - not sure how or what Intel did there.

Intel_iBOT_GB6_01.png

Intel_iBOT_GB6_02.png

Intel_iBOT_GB6_03.png

Intel_iBOT_GB6_04.png

  • Crew
4 hours ago, Leeghoofd said:

What interests me is: if we use it on e.g. a 285K or even Raptor Lake, is there a gain too?

I bet there's a "supported hardware" paywall check so the tool might require some liberation.

I'm not nearly as active as some of you are, but I think IBOT presents some serious issues.

Imagine if SuperPI 32M suddenly got IBOT support, in the form of replacing the original x87-based code with something like y-cruncher. It would turn the benchmark from 3 minutes to 3 seconds.

  • Crew
24 minutes ago, Arni90 said:

I'm not nearly as active as some of you are, but I think IBOT presents some serious issues.

Imagine if SuperPI 32M suddenly got IBOT support, in the form of replacing the original x87-based code with something like y-cruncher. It would turn the benchmark from 3 minutes to 3 seconds.

We think alike, but some would just love for that to happen... Hopefully this software will only have an effect on a few benchmarks

  • Author

At least with SP 32M, BenchMate could be made to detect it in some form, if it can't already. Geekbench and x265 are potentially more vulnerable at the moment.

So what's going to happen to the GB6 rules? Primate Labs says the soon-to-be-released version 6.7 will include a way to detect IBOT and flag results accordingly.
Is HWBOT going to forbid scores with IBOT enabled? This seems fine for now, as the volume of scores uploaded for this bench and hardware is small, but in 2-3 generations of Intel CPUs it could become a moderation hell

Edited by Paul7347

Yes, that makes sense to the long-time user, obviously. It's just that I would expect waves of newbies submitting invalid scores. Not that you can do much to prevent it, but good luck with the moderation
