• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: July 3rd, 2023



  • I’m conflicted about Joe Rogan, or at least the concept he had at the start. Clearly he’s fallen down the right-wing rabbit hole, but his original intent of letting people defend their weird positions is a good one, imo. One could argue that the right-wing funnel exists in part because there isn’t really space to talk about some of those things on the left.

    For example, it’s not crazy to ask questions about vaccines and how they work. However, when people do that, those who are educated on the topic largely assume ill intent by default and treat the people asking questions as if they’re stupid or malicious. There are some good reasons for that, but such an approach is pretty alienating to those who are genuinely seeking information. That pushes at least a portion of those people toward more right-leaning sources, because they feel like that’s the only group taking them seriously.

    We need to do better at meeting people where they are instead of assuming they’re trying to spread misinformation. Yes, it’s true that all the information you need to develop an informed opinion on the vast majority of topics is available on the internet, but finding and understanding that information takes skills and time that not everyone has. To understand why a statement or belief is incorrect or misinformed, you need a space where it can be discussed without fear and shame driving people away.

    Based on the limited number of his older podcasts that I’ve been exposed to, I do think Joe genuinely tried to do that; he’s just not particularly well equipped to handle that kind of environment. Over time he fell victim to the same kind of radicalization he was trying to subvert by letting people share their actual thoughts instead of assuming he already knew what they were going to say.


  • There’s a legitimate discussion to be had about harm reduction here. You’re approaching this topic with an all-or-nothing mindset, but there’s quite a bit of research indicating that’s not really how it works in practice. Specifically as it relates to child pornography, the argument goes that prohibiting artificial material leads to an increase in the production of actual child pornography, which means more real children are harmed than if artificial material were not controlled in the same fashion. The same sort of logic could be applied to revenge porn, stolen selfies, or whatever else we’re calling the kind of thing this article is referring to. It may not be an identical scenario, but I still think it’s fair to say that an AI-generated image is not as damaging as a real one.

    That is not to say nothing should be done in these situations. I haven’t decided what I think the right move is given the options in front of us, but I think there’s quite a bit more nuance here than your comment would indicate.