> Higher fps = better only works for real frames
This isn't actually true. The most important factor for reducing motion blur is reducing frame persistence, i.e. how long each frame stays lit on screen. It matters so much that inserting black frames between real frames noticeably improves motion clarity purely by making each frame visible for less time. Our eyes don't handle held frames well: when you track a moving object, a frame that stays lit smears across your retina for the whole time it's displayed. It is literally better to see nothing between flashes of frames than to see a frame held for the entire "real" duration of that frame. If you have a high refresh rate monitor, you can test this yourself:
https://www.testufo.com/blackframes
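To put rough numbers on why persistence matters (my own back-of-the-envelope math, not taken from the TestUFO page): on a sample-and-hold display, the smear your eye perceives while tracking motion is roughly persistence times pan speed. A quick sketch, with the function name and pan speed being my own illustrative choices:

```python
# Rough sample-and-hold motion blur estimate. While your eye tracks a moving
# object, each frame smears across your retina for as long as it stays lit.

def smear_px(refresh_hz: float, pan_speed_px_per_s: float, duty_cycle: float = 1.0) -> float:
    """Approximate perceived smear width in pixels.

    persistence = duty_cycle / refresh_hz   (fraction of each refresh the frame is lit)
    smear       = persistence * pan speed
    """
    persistence_s = duty_cycle / refresh_hz
    return persistence_s * pan_speed_px_per_s

pan = 960  # px/s, a moderate panning speed (illustrative value)

print(smear_px(60, pan))                    # ~16 px of smear: full-persistence 60 Hz
print(smear_px(120, pan))                   # ~8 px: doubling the refresh halves it
print(smear_px(120, pan, duty_cycle=0.5))   # ~4 px: 120 Hz with black frame insertion (50% duty)
```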
For another example, a very recent breakthrough for emulation is a shader, run at 240+ Hz, that lights up only a small portion of the screen per refresh, similar to how a CRT's scanning beam drew the image. At 480 Hz you can break one 60 fps game frame into 8 subframes that are flashed in order from top to bottom, with some additional magic to emulate phosphor decay for authenticity. This sounds stupid, but it really is a "you gotta see it to believe it" kind of thing. The improvement to motion clarity is mindblowing. I ran out and bought a $1000 monitor for it and I don't regret it; it's possibly the best gaming purchase I've ever made.
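As I understand the technique, each real frame gets split into N subframes, and each subframe only fully lights one horizontal band while everything lit earlier fades out. This is a simplified sketch of that idea, not the actual shader; the function name and the decay constant are my own:

```python
import numpy as np

# Simplified rolling-scan sketch: each game frame is split into n_sub subframes;
# subframe k fully lights only band k of the screen, while rows lit earlier fade
# with an exponential "phosphor" decay.

def rolling_scan_subframes(frame: np.ndarray, n_sub: int = 8, decay: float = 0.35) -> list[np.ndarray]:
    """Return the n_sub images the display flashes for one game frame.

    frame : (H, W) or (H, W, 3) array of linear brightness in [0, 1]
    decay : per-subframe multiplier applied to rows after the "beam" has passed
    """
    h = frame.shape[0]
    band = h // n_sub
    brightness = np.zeros(h)  # how "hot" each row's phosphor currently is
    out = []
    for k in range(n_sub):
        brightness *= decay                        # everything lit earlier fades
        brightness[k * band:(k + 1) * band] = 1.0  # the beam lights this band now
        out.append(frame * brightness.reshape(-1, *([1] * (frame.ndim - 1))))
    return out

# Example: a 480 Hz panel showing 60 fps content -> 8 subframes per game frame
subframes = rolling_scan_subframes(np.ones((240, 320)), n_sub=8)
```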
After seeing this with my own eyes, I've completely reversed my position on framegen: we need to reduce frame persistence by any means necessary. The input latency concerns are very real; the examples Nvidia gave of a game being genned from 20-30 fps up to 200+ are atrocious, and the input latency will make those games feel like ass. However, that's a worst-case scenario. If we can take a game with raw raster performance around 120 fps and gen it up to 480 fps, or even 960 fps (or 480 fps at 960 Hz with black frame insertion), we can recapture the motion clarity CRTs naturally had by cutting frame persistence down to a couple of milliseconds, without sacrificing input latency in the process.
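For reference, here's how the persistence math works out for those scenarios, assuming simple sample-and-hold (each frame stays lit until the next refresh) or BFI; the numbers are my own illustration:

```python
# Frame persistence for the scenarios above.

def persistence_ms(frame_rate_hz: float, duty_cycle: float = 1.0) -> float:
    return 1000.0 * duty_cycle / frame_rate_hz

print(persistence_ms(120))        # ~8.3 ms - 120 fps raw raster, sample-and-hold
print(persistence_ms(480))        # ~2.1 ms - genned up to 480 fps
print(persistence_ms(960))        # ~1.0 ms - genned up to 960 fps
print(persistence_ms(480, 0.5))   # ~1.0 ms - 480 fps at 960 Hz with BFI (lit for one 960 Hz slot)
```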