we were born to be sign-creators

I recently read about the origins of the word “technology”. The word emerged in the seventeenth century to describe the branch of learning concerned with machines. Today, it feels like some mythic force that transforms everything around us.

Seeing technology as an agentic force has made it much easier for humans to anthropomorphize it. And anthropomorphizing it makes it even easier to scapegoat. Humans love abstractions we can scapegoat to justify our decisions or explain the status quo: whether it’s the “free market”, “democracy”, or now “technology”. We love blaming things on “the algorithm”.

AI takes this to its limit. Technology literally feels alive now; it feels like it thinks. AI is the ultimate scapegoat.

But what I find even more interesting is not when we blame our thoughts on AI, but just the opposite: when we see the model’s thoughts as our own. A couple of years ago I wrote about how technology makes media consumption an increasingly single-player experience, allowing us to personalize the media we expose ourselves to. Hyper-personalization brings an illusion of choice: we think we’re the ones choosing what we watch and listen to, while in reality everything we see is curated for us by someone else. And this is why technology makes it easier to influence people: when we think we’re choosing the ideas we consume, we’re much more likely to view them as part of our identity.

With LLMs, we don’t just consume thoughts from the models; we view the models as an extension of ourselves and treat their thoughts as our own. And all I can say is that if you thought hyper-personalization in consumption was bad, wait until you see hyper-personalization in creation.

It’s easy enough to get people to embrace an idea if they believe they found it online of their own free will and chose to read about it. Now think about what happens when people start believing they came up with the idea themselves.

Things get particularly weird when you realize that technology (or rather, “technology”) is no longer just distributing ideas, but also generating them. All ideas start with language, and AI can now produce and distribute its own language and signs at scale. I would be very surprised if human speech doesn’t become increasingly filled with words invented by LLMs. For one, I just asked ChatGPT to coin a few new words, and I might already start using some of them myself.

Commercially, systems that make us feel smarter than we actually are will continue to do well. I’m long on the number of bits – and soon atoms – that will be generated by models, and I’m open to investing in anything that expresses this. But I do think that one of the beautiful parts of being human is our ability to make our own signs, our own ways of saying the unsaid. We should be authors of meaning, masters of sign-creation as a species.

I’m one to talk, though. I totally didn’t just write this entire post with AI.