how to have original thoughts in the age of ai

I want to explore why it matters to understand things and have original thoughts in the age of AI. why I care about this problem: people are using AI to shortcut thinking. instead of using AI as a tool to think with (to ask questions, to work through problems), we are increasingly inclined to hand it the problem and let it come up with a solution for us.

this might appear to be the most efficient way to solve a problem, but I will argue that in a human-centered economy we still need humans to process and understand information. i’d like to explore how AI tools can help us accomplish that, and how that might change in an ai-centered economy over the next few decades.


how do we think?

understanding problems and coming up with original solutions is something i’ve always cared about. i’m not very good at thinking off the top of my head, so over time i’ve come up with some strategies to force myself into thinking and understanding. these are some of those strategies:

1) talking to others. an open discussion clarifies what I already understand and sparks ideas I didn’t have before. sometimes called “brainstorming.”

2) writing it down on a whiteboard. i’ve always found this much better than writing on paper. it’s something about the ability to see everything at once that helps me understand problems more easily. 

3) making word maps. I remember, ages ago, trying to explain to someone how to think about a problem, and working with them on a word map until we reached common ground. it’s amazing how these maps can trigger new ideas and lead us to conclusions we didn’t start with. i’ve always found it hard to think a couple of steps ahead, and these maps really seem to help.

4) writing/typing. starting with a single clue and letting the mind drift. I often come to understand a problem just by describing it in writing.


so how has AI been helping me come up with new ideas? by discussing ideas aloud, for sure. (talk mode is really an amazing tool.) but there is always a risk (at least for me). discussing a problem with AI means it will automatically try to hand you the solution. and the thing about solutions that are handed to us is that we never fully process how we got there, so we don’t truly “buy” them.

think about times when you discuss a problem you are having with a partner, and they jump into solution mode. the frustrating part about that is that discussing the problem (complaining, ranting) is what's actually valuable about the exchange and not the solution itself. the act of talking about the problem is how we understand it and process it.

what does this mean for people who use AI as a replacement for their own thinking? my guess is that they won’t learn to process and understand problems, and I argue that we need them to do exactly that (even if just letting AI hand us the solution is more efficient).


why do we need humans if AI can do it much better?

my reasoning is simple: we live in human-centered economies and democracies, meaning that the economic and political systems operate thanks to humans. it’s a human’s responsibility to make the final call on a policy. it’s a human’s work, and the salary they earn for it, that moves the economy forward. in our current society, humans are the center of all systems, the decision-makers, and the ones responsible for how other humans live.

when a politician asks the AI how to work on a particular problem, without fully understanding its implications and only comprehending what the AI says at a superficial level, we risk harm to how other humans live. another way to put it: it is the AI that is understanding the problem (if anything is) and making the decision for the human.

one could argue that this is not an issue: if an LLM can process more content than a human and come up with a better solution than we can, we should use that tool. I fully agree, but LLMs are not there yet, and they are by no means deterministic thinkers that compare all possible outcomes before proposing a solution. until they are, we need humans to think, understand, and argue against the AI.


the challenges of working out problems with ai

so, if we really need humans to think through problems, we need to come up with ways to collaborate with AI more efficiently. 

these should not be ways to make the LLM more efficient (although that is always a good thing) but ways in which we can train our personal models to work for us.

here are some of the challenges I see today when it comes to working out problems with AI, along with some possible solutions:

1) AI jumping to solutions. this is the problem I stated originally. when the AI gives us the full solution to a problem, it’s hard for us to come to a true understanding of that problem. something i’d like to test is giving instructions to the model I use the most (sonnet 4, as of today) so that when I bring it a problem it never gives me a solution, only questions that keep me thinking about it. and if it does give solutions, it should never be just one but a series of them, forcing me to choose. (a rough sketch of this setup follows this list.)

2) AI writing a lengthy solution. one might think this is only a problem if people are not willing to read what the AI has written. but, as people say, the only good system is the one you use, and we tend not to read all the content AI creates. a clear example of this comes with vibe coders. developers usually complain that the AI writes code prone to security issues. my counterargument is that it only does so when you don’t read the code it wrote; otherwise, you shouldn’t need to worry. my solution to this problem is obvious: train your model to keep its answers short, or to write code in the most minimal form it can.

3) AI stating things as truths. since the days of gpt-3, we all seem to treat LLMs as prophets. the truth is that the LLMs we use are not even giving us the most probable next word when answering a question, but the answer the company developing them has trained them to pick. and even setting that aside, LLMs are by no means deterministic thinkers. but the thing about humans and how we use LLMs is that we find it hard to challenge what the AI suggests, especially when it comes back with complete solutions (long texts, an ordered list, etc.). we are wired to believe that an answer stated with confidence shouldn’t be challenged.
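
something like the setup below is what I have in mind for 1) and 2). it’s a rough sketch using the anthropic python sdk; the prompt wording, the model id, and the small helper function are my own placeholders, not a recipe.

```python
# rough sketch of a "questions before solutions" setup for 1) and 2).
# assumes the anthropic python sdk (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; the model id is a placeholder.
import anthropic

THINKING_PARTNER_PROMPT = """\
you are a thinking partner, not a problem solver.
- when I describe a problem, do not give me a solution.
- instead, ask me two or three short questions that push me to clarify or reframe it.
- if I explicitly ask for solutions, give several brief options (never just one),
  so I have to compare them and choose.
- keep every reply short. if you write code, write the most minimal version possible.
"""

client = anthropic.Anthropic()

def discuss(problem: str) -> str:
    """send a problem statement and get questions back instead of an answer."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder id for "sonnet 4"
        max_tokens=512,
        system=THINKING_PARTNER_PROMPT,
        messages=[{"role": "user", "content": problem}],
    )
    return response.content[0].text

print(discuss("our team keeps missing deadlines and I don't know why."))
```

the point is not this exact prompt. the point is that the friction has to be set up before the conversation starts; otherwise I’ll just accept whatever comes back.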


my solution to this last problem is that we should force AI to give us incomplete solutions. we need a way to force ourselves to think about problems on our own. my explanation for why this works comes from how the human mind operates: our minds look for completeness, and, more interestingly, they look for it hardest when faced with something incomplete. the mind wants to fill gaps.

for humans to really think about problems we need the AI to give us something we can iterate on. think about the mental maps I mentioned earlier. if I asked the AI to give me a full mental map (start from the word “poverty” and branch out into every way we could address it), it would probably do a good job. but what’s rich about doing a mental map yourself is how, in the process, you come up with new ideas and explore them.

what we need from the AI is the equivalent of a mental map with missing words, incomplete paths, and empty cards for a person to fill out. we need that void, that incompleteness, to force ourselves into thinking.
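
here’s a rough sketch of what asking for that kind of incomplete map could look like, using the same sdk as before. the prompt wording and the model id are placeholders; the only real requirement is that the blanks are left for me.

```python
# sketch of asking for a deliberately incomplete mental map: the model lays out
# a few starting branches and leaves blanks ("___") for me to fill in.
# assumes the anthropic python sdk and an ANTHROPIC_API_KEY in the environment;
# the model id is a placeholder.
import anthropic

INCOMPLETE_MAP_PROMPT = """\
start a mental map around the word "poverty".
give me only three first-level branches, each with one example node and
at least two empty nodes written as "___".
do not go more than two levels deep on any branch, and do not propose solutions.
end by asking me which blank I want to fill in first.
"""

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder id for "sonnet 4"
    max_tokens=512,
    messages=[{"role": "user", "content": INCOMPLETE_MAP_PROMPT}],
)
print(response.content[0].text)
```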


what does an ai economy look like?

i started by saying that while we live in human-centered economies we need humans thinking and understanding problems, and we went through some strategies for using AI in a way that helps us accomplish that.

but what i’m really curious about is: what comes next? how do we move from a human-centered economy and political system to one that can delegate most of its work to systems that are much better than we are at understanding problems and processing large volumes of information?

what I could argue now is that this AI (whether it’s LLMs or something else) would need to become much better at coming up with deterministic solutions and that we need humans working out how to get AI to that next level.

what that AI economy might look like seems like a theme for another essay, so let’s just leave it at that.