maanav's thoughts

thoughts that are too long-form for twitter

thinking about the commodification of happiness

I recently read about the idea of “thrifty genes”: genes that helped humans survive in the past by making us enjoy the taste of fat and sugar. Fat and sugar were the most efficient way for our bodies to store energy, and at a time when food was scarce, humans who liked eating fat and sugar had a better chance of survival.

Fast-forward to today, and most of us don’t eat fat and sugar just to survive anymore. Instead, we have things like Snickers bars. We’ve invented ways to package as much fat and sugar as we can into the smallest form factor possible, and our food is so artificially good.

This extends beyond just food. Our bodies evolved to live in a world of scarcity, but so many of us live in a world of abundance. We’ve figured out specific arrangements of atoms that induce pleasure in our senses, and we just keep making more and more of them.

We listen to good music on Spotify all the time, we get good food delivered right to our homes on Doordash, we can entertain ourselves on TikTok whenever we want, and our cities keep getting cleaner and more beautiful (unless you live in San Francisco). If one of our early ancestors travelled through time to our world today, the sheer amount of pleasure and overstimulation would drive them insane.

It makes sense that this happened: we started out as hunters and gatherers, only relying on things that were already present in our environment to survive. With technology though, we were no longer limited by our environment; we learned how to produce. From agriculture to the steam engine, technology gave us the power to keep making more of the things we like. And our markets and economies evolved to reward these things, sending us into a loop of constant pleasure-inducing consumption.

I know this is an extremely privileged take. Billions of people in our world struggle with basic survival; for them, pleasure is just a distant dream. Knowing that extreme excess and deprivation live in such close proximity makes the reality of our pleasure-filled lives even more dystopian.

Nevertheless, it seems like we're just getting started. The industrial revolution gave us an abundance of energy, and we’ll soon have an abundance of intelligence. Intelligence will bring exponential growth in the amount of beauty in our world, allowing us to create assembly lines even for something like art, one of the few forms of beauty that’s still relatively scarce today.

This raises a question: is there a limit to how much pleasure our world can accommodate? Is climate change just a sign of our world pushing back against our pursuit of pleasure?

I’m not being decelerationist here. I think an increase in beauty is a net positive for our world. I just hope that as we enter a future where beauty is increasingly commoditized, we learn to pause and appreciate it, instead of letting it control us.

AI as a form of lossless compression

I’ve recently heard a lot of people point out how hilariously inefficient our communication is becoming today.

If I want to send an email to someone, I use an AI tool to expand my thoughts and make my email longer, and the person receiving my email uses their own AI tool to summarize my email and reduce it back to being concise.

It feels like there must be a better solution here. I understand why it’s natural for humans to communicate with an LLM in English, but why are we forcing different LLMs to communicate with each other in English?

“Human” languages like English are overly verbose and have a relatively low information density to make it easier for humans to read and understand. Computers, on the other hand, don’t have this limitation of needing to “read” or “hear” language. But the way we use LLMs today doesn’t seem to take advantage of this.

In its current form, the growing use of language models will only lead to an explosion in the amount of content and language produced, forcing us to relay more information across networks and making communication less efficient, not more.

This raises a question: what if we created a “language” that was specifically designed for efficient communication between LLMs, while still maintaining the original meaning of what the human meant to say? At its core, a language is just a form of consensus among a group of people on the meaning of arbitrary sequences of characters and sounds. So far, we’ve made LLMs learn and use these sub-optimal rules created by humans, but why don’t LLMs just have their own rules to communicate with each other?
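To make the “consensus on arbitrary sequences” idea concrete, here’s a toy sketch (entirely hypothetical — the codebook, codes, and function names are made up for illustration): two programs that share a codebook can exchange short codes and still recover the full agreed-upon meaning.

```python
# A shared codebook: consensus between sender and receiver on what each
# arbitrary token means. Both sides must hold an identical copy.
CODEBOOK = {
    "M1": "The meeting has been rescheduled.",
    "T2": "Please confirm by end of day.",
}
DECODE = CODEBOOK  # the receiver's copy of the agreement
ENCODE = {meaning: code for code, meaning in CODEBOOK.items()}

def send(meaning: str) -> str:
    """Sender maps a full sentence to its short agreed-upon code."""
    return ENCODE[meaning]

def receive(code: str) -> str:
    """Receiver maps the code back to the shared meaning."""
    return DECODE[code]

msg = "The meeting has been rescheduled."
wire = send(msg)  # only the short code "M1" crosses the network
assert receive(wire) == msg
```

The point isn’t the lookup table itself, but the consensus: the code “M1” carries meaning only because both sides agreed on it in advance — which is all a language is.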

For example, communication currently looks like this:

An LLM writes English to make my email longer, and another model summarizes the long email for my recipient.

Instead, what if we had this:

Now, the LLM would translate my English email to the “AI language” and send it to the recipient’s LLM, which would finally translate it back to English for the recipient to read.

It’s interesting to view this “AI language” as a form of lossless compression, increasing information density and reducing load on networks without sacrificing meaning. Information would continue being transmitted in binary, but instead of encoding English in binary, we'd just be encoding this compressed “AI language” instead.
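As a toy illustration of the compression framing (my own sketch, not a real “AI language”), standard lossless compressors already show how much redundancy verbose English carries — redundancy that exists mostly to make text easy for humans to read:

```python
import zlib

# A verbose, "expanded" email versus the terse thought behind it.
verbose = (
    "Dear team, I hope this message finds you well. I wanted to reach out "
    "to let you know that the meeting we had previously scheduled for "
    "Thursday will now need to be moved to Friday at the same time. "
    "Please let me know if this causes any issues. Best regards."
).encode()
terse = b"Meeting moved: Thursday -> Friday, same time."

# zlib is lossless: decompressing recovers the exact original bytes.
packed = zlib.compress(verbose, level=9)
assert zlib.decompress(packed) == verbose

print(len(verbose), len(packed), len(terse))
```

Even a generic compressor shrinks the verbose email well below its original size, and the terse version is smaller still — a purpose-built inter-LLM encoding could plausibly go further, since it can drop the redundancy that exists purely for human readability.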

And I honestly see this extending way beyond just language. So many companies today are spending so much effort on trying to build AI agents that can use websites and software just like a human would. But why are we forcing AI to interact with user interfaces designed for humans, instead of just building new interfaces that are designed from the ground up for AI? Maybe this means that we see a lot more products prioritizing APIs over graphical interfaces in the future, or maybe it’s something entirely new.

Of course, ideally humans would just be more concise while communicating with each other in the first place. But assuming that's not going to happen and since we can't read each other's minds just yet, I’m excited to see what new forms of communication we build for AI to speak with itself. Perhaps if an “AI language” already existed, the amount of information transmitted over the internet for you to read this post could’ve been much lower :)

should attention be taxed?

I saw a tweet today that got me thinking.

If you follow Balaji, you remember his $1 million “Bitcoin” bet on the US entering hyperinflation. This is playing out again today, with people betting large amounts of money to discuss vaccine (mis)information.

Of course, Balaji didn’t actually think the price of Bitcoin would reach a million; he just burned that money to draw your attention to the US printing trillions of dollars. I’ve always viewed capital allocation as a way to impose your worldview on the world, and it seems like capital can be allocated to buy people’s attention for the things you care about.

If you’ve read Hayek’s The Use of Knowledge in Society (and it’s fine even if you haven’t), this reminded me of Hayek’s idea of the price system as humanity’s solution to the problem of resource allocation:

“In a system in which the knowledge of the relevant facts is dispersed among many people, prices can act to coordinate the separate actions of different people in the same way as subjective values help the individual to coordinate the parts of his plan”

For example, if one of the major sources of supply of corn disappears tomorrow, corn would become more scarce and the price of corn would increase, allocating corn to where it’s most profitably used.

If we consider attention to be a market, people making million-dollar bets to raise awareness effectively means that they’re expressing a price on your attention. And this isn’t something new: the advertising industry has been placing a price on your attention for decades now.

This raises a question, though: is attention an efficient market? With short-form videos and trendy controversies swallowing up so much of human attention, it seems that even though attention has a price, it’s not allocated to where it could be best used.

Traditionally, governments have intervened in markets to correct inefficiencies, using things like subsidies or taxes to encourage/discourage production or consumption. And so, since attention is a market and it’s inefficient, what if the government intervened in this market too?

I’d argue that we already have subsidies on attention; things like public grants for scientific research, for example, serve to allocate human attention to where the government thinks it’s most needed. What we haven’t really seen, though, is attention taxes.

Imagine a country where creators had to pay a tax every time they promoted their content on TikTok, or where people had to pay a small fee when they engaged in the hot new debate on Twitter. I’m not saying that this should exist, but I think it’s definitely worth questioning whether human attention could be better allocated and how we could do this safely.

If you’ve read this far, thanks for deciding to spend some of your attention on my thoughts. Hopefully you thought it was an efficient allocation.

i just had to Post Malone

the illusion of choice

If you’re like me, you put your AirPods on the moment you leave home and walk anywhere.

I moved to New York City last week, and this is the first city I’ve been in where I don’t feel like putting my AirPods on while walking around. There’s already so much to absorb and listen to; why would you silence that?

This got me thinking about AirPods.

It reminded me of these lines from A Tale of Two Cities:

every human creature is constituted to be that profound secret and mystery to every other. A solemn consideration, when I enter a great city by night, that every one of those darkly clustered houses encloses its own secret; that every room in every one of them encloses its own secret; that every beating heart in the hundreds of thousands of breasts there, is, in some of its imaginings, a secret to the heart nearest it!

There’s something interesting about how each of us experiences the world in our own way. You and the person walking next to you might both be walking on the same street, but you’re plugged into different songs or podcasts or audiobooks on your AirPods, shaping your reality in that moment in entirely different ways.

This is a pattern I’ve seen with a lot of technology: technology enables hyper-personalization, allowing us to personalize the stimuli we expose ourselves to at a level that just wasn’t possible for humans before. In the past, most forms of media were group activities, like cinemas, plays, or sport. Today, media is increasingly individual: everyone’s streaming different things on Netflix and listening to different podcasts at the same time.

You may ask though: doesn’t this go against the increased sense of tribalism that we see in our world now? People seem to be strongly affiliating with and dividing themselves into groups on opposite sides of various issues a lot more today, be it political, economic, or technological.

And that’s where I think the vulnerability of technology like this lies. Each person assumes that hyper-personalization gives them more control over the media they expose themselves to, but in reality, they’re just viewing media that’s been curated for them by technology. Using media as a means of propaganda and influence isn’t new, but what makes propaganda particularly dangerous today is the illusion of individual choice: we now think we’re the ones choosing what we watch and listen to, making us far more likely to adopt these things as part of our identity.

I hope we see platforms in the future that actually bring choice to people, not just an illusion of choice. Until then, hopefully the next time we plug in our AirPods, we think a bit more about being intentional with the media we consume.
