I enjoyed Benn Jordan’s latest video, about creating “adversarial” music designed to trip up AI models during training and degrade their outputs. He’s developed a technique for adding sounds, inaudible to human ears, that confuse the algorithms. If our government won’t protect artists, maybe artists need to get aggressive in trying to outwit these AI capitalists who think it’s okay to steal our work as “training data” without permission or compensation.
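(For the curious: the general idea resembles the adversarial-example perturbations long studied against image classifiers. This is only a minimal, hypothetical sketch of that concept, not Benn Jordan’s actual method, which the video doesn’t spell out in code. It uses a toy linear scorer as a stand-in for a real model and an FGSM-style step kept far below audible amplitude.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned model: a linear scorer over raw samples.
# A real target would be a neural network trained on spectrograms.
sample_rate = 44_100
audio = 0.1 * np.sin(2 * np.pi * 440 * np.arange(sample_rate) / sample_rate)
weights = rng.normal(size=audio.shape)

def score(x):
    """Toy 'genre confidence' score; higher means more confident."""
    return float(weights @ x)

# FGSM-style step: nudge each sample a tiny amount in the direction that
# lowers the model's score. For a linear model the gradient is just `weights`.
epsilon = 1e-4  # keep the perturbation far below audible amplitude
perturbation = -epsilon * np.sign(weights)
adversarial = np.clip(audio + perturbation, -1.0, 1.0)

print("original score:   ", score(audio))
print("perturbed score:  ", score(adversarial))
print("max sample change:", np.max(np.abs(adversarial - audio)))
```

The appeal of this kind of attack is that the per-sample changes are tiny, so a listener hears the same track while a model sees meaningfully different input.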

(I love that this YouTube channel, which I started watching mostly for the synth reviews, has become a locus of anti-Spotify agitation and now anti-AI creativity.)