

Initially a lot of AI models were trained on lower-end GPUs, before any of these AI-specific cards and blades existed. The problem is that the workloads are quite large and hence require a lot of VRAM, or you split them across machines and pay enormous latency penalties going over the network. Putting it all into one giant package costs a lot more, but it also performs a lot better, because AI training is not an embarrassingly parallel problem that can be split across many GPUs without penalty. So the goal is often to reduce the number of GPUs you need to get a result quickly enough, which brings its own set of problems with power density in server racks.
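To make the VRAM pressure concrete, here is a rough back-of-envelope sketch. All the numbers are illustrative assumptions (fp16 weights at 2 bytes per parameter, a 1.5x overhead factor for activations and working state, 80 GB for a datacenter-class accelerator versus 24 GB for a high-end consumer card), not measurements of any specific hardware:

```python
import math

def gpus_needed(params_billions, bytes_per_param=2, overhead=1.5, vram_gb=80):
    """Rough count of GPUs needed just to hold a model.

    overhead is a crude stand-in for activations and other working
    memory on top of the raw weights; real requirements vary a lot.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB
    total_gb = weights_gb * overhead
    return math.ceil(total_gb / vram_gb)

# A hypothetical 70-billion-parameter model:
print(gpus_needed(70))               # → 3  (big 80 GB accelerators)
print(gpus_needed(70, vram_gb=24))   # → 9  (24 GB consumer cards)
```

Every extra GPU in that count is another boundary the computation has to cross, so the fewer, larger devices don't just simplify the setup, they cut the communication that the latency penalty comes from.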
Most technology adoption follows an S-curve; it can take a long time to get going. Linux has been gradually and steadily improving, especially for games and other desktop uses, while at the same time Microsoft has been making Windows worse. I feel this is more Microsoft's fault: they have abandoned the development of desktop Windows and support for modern processor designs and gaming hardware. For the first time, this has let Linux catch up and in many cases exceed Windows's capabilities, especially in gaming, which has always been a stubborn issue. Hardware support for VR and other peripherals is still a problem, but it's the sort of thing that might sort itself out once the user base grows and companies start producing software for Linux instead.
It might not be enough, but the end of Windows 10 support is causing a shift that Microsoft might really regret in a few years.