Wow, AI researchers are not only adopting philosophy jargon, they're starting to cover some familiar territory: the difference between the signifier (language) and the signified (reality).
The problem is that spoken language is vague, colloquial, and subjective. It can therefore never produce something specific, universal, or objective.
I did a deep dive into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because until recently there were very few technical elements to discuss. Neural networks and machine learning were very basic, and a lot of proposals were theoretical. Generative AI in the form of LLMs and image generators existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed in with neuroscience and devops. It's only relatively recently that the Wall Street techbros have inserted themselves and come to dominate the space.
I really like that it talks about the ontological systems that are completely and utterly disregarded by the models. But then the article whiffed: it forgot all about how those systems could inform the models and only talked about how it constrains them. The reality is the models do NOT consider any ontological basis beyond what is encoded in the language used to train them. What needs to be done is to let the LLMs somehow tap into explicit ontological models as part of the process for generating responses. Then you could plug in different ontologies to make specialized systems, roughly like the sketch below.
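To make that concrete, here's a rough sketch of one way to do the "plug in an ontology" part without retraining anything: pull facts about a concept out of an explicit RDF/OWL ontology (via rdflib) and prepend them to the prompt, retrieval-style. The file name "trees.ttl", the `call_llm()` stub, and the crude string-matching retrieval are all placeholders I made up for illustration, not anything from the article.

```python
# Minimal sketch: ground an LLM prompt in an explicit, swappable ontology.
# "trees.ttl" and call_llm() are hypothetical stand-ins.
from rdflib import Graph


def ontology_facts(ontology_path: str, concept: str, limit: int = 20) -> str:
    """Collect triples that mention the concept and render them as plain-text facts."""
    g = Graph()
    g.parse(ontology_path, format="turtle")
    facts = []
    for s, p, o in g:  # iterate every (subject, predicate, object) triple
        if concept.lower() in str(s).lower() or concept.lower() in str(o).lower():
            facts.append(f"{s} {p} {o}")
        if len(facts) >= limit:
            break
    return "\n".join(facts)


def call_llm(prompt: str) -> str:
    """Stand-in for whatever model client you actually use."""
    return f"[model response to {len(prompt)} characters of prompt]"


def ask_with_ontology(question: str, ontology_path: str, concept: str) -> str:
    context = ontology_facts(ontology_path, concept)
    prompt = (
        "Treat the following ontology facts as ground truth:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)


# Swapping in a botanical vs. an ecological ontology here would give you two
# differently specialized systems without touching the model weights.
print(ask_with_ontology("Draw me a tree.", "trees.ttl", "tree"))
```

The point of doing it this way is that the ontology stays an external artifact you can inspect and swap, instead of something baked implicitly into the training data.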
In theory something similar could be done with enough training. Guess what that would cost. Does enough clean water and energy exist to train it? Probably best not to find out, but techbros will try.
I don’t think a logical system like an ontology is really capable of being represented in neural networks with any real fidelity.
Well it does great with completely illogical systems. I wonder if one can be used for a random seed? 🤔
Now imagine how you might prompt an LLM like ChatGPT to give you a picture of your tree. When Stanford computer science PhD candidate Nava Haghighi, the lead author of the new study, asked ChatGPT to make her a picture of a tree, ChatGPT returned a solitary trunk with sprawling branches – not the image of a tree with roots she envisioned.
She needs to get out and draw/paint some trees.
When the Generative Agents system was evaluated for how “believably human” the agents acted, researchers found the AI versions scored higher than actual human actors.
That’s a neat finding. I feel like there’s a lot to unpack there around how our expectations are formed.
Or how we operationalize and interpret information from studies. You might think you're measuring something according to a narrow definition and operationalization of that measurement, but that doesn't guarantee that's what you're actually getting. It's more an epistemological and philosophical issue. What is “believably human”? And how do you measure it? It's a rabbit hole in and of itself.
So like… You ask the model about styles and it says ‘diagrammatic’ and you ask for an artistic but diagrammatic tree or whatever and that affects your worldview?
If people just ask for a tree and the issue is they didn’t get what they expected, I don’t care. They can learn to articulate their ideas and maybe, just maybe, appreciate that others exist who might describe their ideas differently.
But if the problem is the way your brain subtly restructures ideas to better fit queries, then I'd agree it's going to have ‘downstream’ effects.