Yeah, the Fermi paradox really doesn't work here; an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.
I feel this makes it an unlikely Great Filter, though. Surely some aliens would be less stupid than humanity?
Or they could be on a planet with far fewer fossil fuel reserves, so they never get the opportunity to kill themselves.