ChatGPT: Crutch or Coach?

How I stopped an AI from rotting my brain

Ever since ChatGPT hit the scene late last year, everyone and their mom has been talking about it.


While waiting in the lobby of a local sushi restaurant, I overheard a mother explaining the wonders of this new language model (GPT) to her two sons, aged nine and ten.

"Have you heard of ChatGPT? You can talk to it and it will tell you anything you want to know!" Both boys nodded in relative disinterest. This was probably old news to them.

You know something is mainstream when even parents talk about it.

But won't it rot our brains?

All the buzz is well-founded. But if we blindly take ChatGPT's answers as fact, we run the risk of developing a false understanding of the world when it makes a mistake and, perhaps worse, we allow our critical thinking skills to atrophy.

So, when I use ChatGPT, I follow the old Russian proverb: "Trust, but verify".

I ask it clarifying questions and scrutinize its answers. I try to poke holes in its explanations. This helps me explore and understand the concept with added depth and nuance.

It's also a nice ego boost when I can corner an advanced AI model and force it to admit its errors. πŸ˜‚

What does this look like in practice?

Let's look at a conversation I had with ChatGPT.

While taking an online course on system design (this one), I came across an explanation of the ACID database transaction properties.

The way it defined consistency confused me: "At any given time, the database should be in a consistent state..." What did it mean by "consistent"? Isn't that a circular definition?

I turned to ChatGPT...

This was illuminating for me.

I confirmed this definition against other trusted sources. GPT also mentioned that consistency "ensures that the data remains valid and follows all the rules and constraints defined by the database schema..."

Before this, I had thought consistency was only about changes made within the context of a transaction, not about higher-level, schema-wide constraints.
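To see what a schema-level rule looks like in practice, here is a minimal sketch in Python using SQLite (the accounts table and balances are hypothetical). A CHECK constraint declared in the schema makes the database itself reject any write that would leave the data invalid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- schema-level rule
    )
""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")

try:
    # This write would violate the CHECK constraint, so the
    # database refuses it and the data stays valid.
    conn.execute("UPDATE accounts SET balance = -50 WHERE id = 1")
except sqlite3.IntegrityError:
    print("update rejected; balance is still 100")
```

No matter what the application does, a row with a negative balance can never be committed. That's consistency enforced at the schema level.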

What was "transaction-level consistency"? I never heard of that! Was that something that GPT just made up on the spot to make sense of my question?

Well, maybe it wasn't a formal term, but it certainly sounded like one.

Initially, I thought that consistency was only about transaction-level data integrity. But it's actually about schema-level integrity too!

This had me thinking... This "transaction-level consistency" thing sounded a bit like atomicity. Were they the same?
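They're related but not identical: atomicity is the all-or-nothing guarantee that a transaction either applies completely or not at all. A rough sketch of that guarantee, again in Python with SQLite and a hypothetical accounts table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

try:
    with conn:  # the connection rolls back automatically if an error occurs
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        raise RuntimeError("simulated crash between the two writes")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except RuntimeError:
    pass

# Atomicity: the half-finished transfer was rolled back,
# so neither balance changed.
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
```

The debit never reaches the database without the matching credit. Atomicity prevents partial writes; it doesn't, by itself, say anything about what a "valid" state is.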

Then I remembered an example ChatGPT had used to explain consistency in a previous response: after money is transferred between two accounts, their balances should still add up to the same total.

So if a database is consistent, it will ensure the two accounts add up to the same total before and after the transaction? Is it even possible or practical to define that constraint at the database level?

My suspicion was right: application-level logic can also take on some of the responsibility for data consistency.
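Here's a sketch of what that application-level responsibility might look like, using SQLite and a hypothetical accounts table. The "balances must sum to the same total" rule is awkward to declare in the schema, so the code enforces it inside the transaction and rolls back if it's violated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    total_before = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
    with conn:  # commit on success, roll back on any exception
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        total_after = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
        # Application-level consistency check: the transfer must not
        # create or destroy money.
        if total_after != total_before:
            raise ValueError("transfer broke the total-balance invariant")

transfer(conn, 1, 2, 30)
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(70,), (80,)]
```

The database still provides the transaction machinery (the rollback), but the invariant itself lives in the application code.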

There were moments during this conversation where I could have just said to myself "Got it! On to the next thing..." Instead, I used GPT's feedback as a jumping-off point for new questions and ideas.

ChatGPT doesn't have to weaken our powers of critical thought. If we use it wisely, it can spark new questions and fuel our curiosity!