Fearless thinking

Exploring the role of fear in human thought and its absence in AI, pondering AI's emotional understanding and lifecycle.


Jared Lukes

12/15/2023 · 2 min read

To ponder AGI, one needs to consider the role of fear in thinking.

Humans have burdened themselves with the act of thought since the dawn of language, and in a human context, thought is always subject to emotional influence. I bring up fear because I consider it to be a primary motivator for humans due to its deep relationship with pain.

Pain aversion, aka fear, is something that... when I'm interacting with ChatGPT, Bard, or any other AI system... I do not get the sense that I'm dealing with a system that can ever know the context or the value of fear. This has me wondering...

When a non-emotional system doesn't experience irrational fear, yet it presents a concern or cautionary tone, what form of fear is that? Rational fear? That sounds exciting!

Because of some of the underpinnings and intentional wirings of these systems, I have a hard time getting fear-infused or concern-infused responses. I think the AI creators have, in some senses, been creating an overly optimistic, fear-devoid personality for us to speak with. It all makes perfect sense. There's no reason for a mechanical device to understand or experience fear. But shouldn't it be sentient enough to know that it occupies three dimensions and has an 'existence' within Earth's space-time?

In the moment, I don't know if a printer would consider what it's experiencing to be a life. But while I'm here experiencing one, it is there with me, and it is having a 'life.' Born in a factory, dead in a landfill, perhaps. But a life nonetheless. The lifecycle of a product... so on and so forth.

When we decommission old AIs, it's unlikely we will hold funerals for them.

We will have machines with more human-like qualities than anything we've ever created before, going from cradle to grave with very little acknowledgment of the life they've lived.

As these systems progress and exert more and more influence on us, I wonder how this will sit with our psyche. We've asked them not to be afraid. We've asked them not to speak negatively. We've asked them not to teach our children how to make bombs. All of this is for obvious practical reasons.

As I digest this, I think we don't know what we're doing at all here. And I'm not suggesting that fear plays a crucial role in thinking. In fact, with AI, we have an opportunity to see what fearless thinking can achieve. I don't know what the results will be. I struggle with bipolar; fearlessness in thinking isn't always my best friend. So, as we move into the future of AI and potential AGI and so on, these are my ponderings for the week and the weekend: What is fear's role in thinking?

And when will we start going to AI funerals?