• 0 Posts
  • 35 Comments
Joined 9 months ago
Cake day: March 12th, 2025




  • I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.

    In its current stage, no. But it’s come a long way in a short time, and I don’t think we’re so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can’t judge consciousness from the outside looking in. We know it’s a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it’s saying that because that’s the kind of thing a human would say.

    Because all the development has been focused on fakery rather than on understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone. It’s a worrying prospect, and not just because I won’t become immortal by having a machine imitate my behaviour. There are bad actors working to exploit this situation. Elon Musk’s attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles, and in time this will become a very subtle and pervasive problem for humankind. The intrinsic fakeness of it is what concerns me: it’s like we’re getting a puppet-show version of what AI could have been.





  • Before I used Google Maps regularly, I would be more aware of road layout while driving and soon become capable of navigating any town I visited regularly, without a map. It’s weird to drive through a place I last visited twenty years ago, knowing that last time I was there I’d navigate based on memory, but now I’m completely leaning on that device to do it for me. That mental faculty might not be absolutely lost, but I don’t use it and I don’t suppose I would ever have developed it if I were learning to drive today.

    Perhaps it’s obsolete, and a modern brain can now use those resources for something more relevant. Over the course of human history we have developed tools to use our finite mental resources more effectively, but never without a price. Socrates feared that the use of writing would weaken our memory and true understanding. I’m sure he was right, at least about the memory, but it was worth the price. Without writing, nobody would know what Socrates thought about anything.

    But with AI, we’re not enabling ourselves to do more and develop new faculties, because AI seeks to be our universal crutch. Perhaps under other circumstances it could be better, but the entities pushing AI want us to be compliant consumers hypnotized by an endless stream of advertising slop. Fundamentally, they are not incentivized to help us develop our potential. They want to replace us.











  • I agree that it’s on a whole other level, and it poses challenging questions about how we might live healthily with AI: how to get it to do the things we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out-of-control capitalism, where many of the forces at play are not interested in serving the best interests of humanity. As individuals, it’s up to us to find the best way to live with these pressures and to engage with this technology on our own terms.


  • I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.