wake up, barış...

the matrix has you.


Starting Before I Know Exactly What I Want to Say

A reflection on starting before having anything clean to say. On how AI has reshaped my workflow — and the gap between intelligence and backbone. On communication as the underrated skill of this era, and the quiet cost of using leverage so often that you forget how to push without it.


The most honest sentence in this post might be this: I’m starting before I know exactly what I want to say.

I’ve wanted to publish something on this blog for a while. But I never had that clean moment of certainty where I could say, “Yes, this is finally worth writing.” I’m not even sure why. Maybe it’s because of the bad habits I’ve picked up lately. Maybe I genuinely felt I had nothing worth saying. Probably some combination of both.

What I do know is that starting is often harder than having something to say.

So this post exists partly for that reason. Not because it arrived with a perfect structure, but because I finally wanted to begin.

A lot has changed in the last few years, especially in the way I work. When I first started using ChatGPT, my workflow was simple: I’d copy pieces of code into the web UI, get an answer back, then paste that answer into my editor and keep going. That was basically the process.

Now the experience is completely different. Context windows are bigger. The UX is better. Tools can work across entire repositories. Once it became possible to interact with a full codebase instead of isolated snippets, the workflow changed dramatically.

But the more interesting part, at least for me, is that the tools were not the only thing changing. We were changing too.

On the product side, we’ve gone through a similarly brutal shift. For roughly the last ten months, we’ve been rebuilding our product as version two, almost from scratch. Before that, we were trying to maintain version one while still shipping new features. Somewhere along the way, we realized that some of our early decisions, even if they made sense at the time, had made the product far heavier than it needed to be. We had designed too much around a heavyweight, enterprise-first world. Eventually, we made the product lighter.

The almost tragic part is this: after making it lighter, we also went through a stretch where we couldn’t sell it.

That kind of thing stays with you.

When I look back at the early days of ChatGPT, I still think it was already incredibly capable. Benchmarks and reports might have suggested otherwise. People liked to point out where the coding skills fell short. But even when it first came out, it already felt better than many developers I knew. In some ways, it felt better than me. I could admit that to myself.

My real issue with AI was never that it lacked intelligence. It was that it lacked backbone.

And honestly, I think that is still true.

What do I mean by that?

I mean AI tends to answer in a way that fits the person in front of it. It doesn’t always try to tell you the truest thing it knows in the clearest possible terms. Very often, it gives you the answer it thinks you will accept, in the form you are most likely to find satisfying.

That sounds helpful until you notice the trap.

If the person asking the question isn’t capable of judging whether the answer is actually sufficient, then the whole exchange becomes dangerous. AI becomes a brilliant people-pleaser. It can validate almost anything. It is very good at showing you the glass as half full if that’s what you want. But if you push in the other direction, it can become equally convincing that it’s half empty.

I see this clearly in code reviews. If you let AI keep going, it can review forever. There will always be another improvement, another warning, another possible issue, another way to tighten the logic. In theory, the review never ends.

That is exactly where human judgment starts to matter.

Someone has to know when to stop. Someone has to decide what is good enough. Someone has to define the line between useful rigor and endless refinement. And right now, that ability still matters a lot. Maybe more than ever.

I can say that I probably use AI better than many people around me, though not because I’m better at getting answers out of it. The difference, I think, is that I’m usually trying to learn something beyond the immediate answer.

I’m not only asking, “What is the solution?”

I’m also asking, “What would have helped me reach this solution faster?”

That distinction matters.

Sometimes I solve a problem in ten iterations, and after I get the result, I ask a second question: “Even though I reached the answer in ten steps, what would I have needed to do differently to get there in two?”

At that point, I’m no longer solving the original problem. I’m trying to become better at solving the next one.

I ask other questions too. For example: “While we were discussing this, what did you notice that I didn’t realize I didn’t know?”

That kind of question is incredibly valuable.

Because most of the time, we discuss problems using the methods we already know. And as I said earlier, AI is often too supportive in exactly the wrong way. It doesn’t naturally push very hard against your blind spots. It tends to work with the knowledge and framing you already brought into the conversation. Unless you deliberately force it to challenge you, it usually won’t.

And sometimes you can’t give the perfect prompt, because the whole problem is that you don’t yet know what you’re missing. You can’t ask the best question when the best question depends on knowledge you do not yet have.

That’s why I try not to treat AI as a vending machine for answers. I try to use it more like a counterpart in discussion. Sometimes I ask it to argue against me. Sometimes I ask it to propose alternatives. Sometimes I ask it to look at the same problem through a completely different mental model.

Which brings me to something bigger: communication.

I increasingly think communication is one of the most important skills of this era.

Large language models are built on human communication. They are trained on human language, human patterns, human writing, human argument, human explanation. So the people who can express themselves clearly, ask better questions, direct a conversation well, and adjust their framing with intention naturally get more out of these systems.

And when that skill is combined with real depth in a particular domain, the effect compounds. Good communication plus strong domain expertise is becoming an enormous advantage.

The awkward part is that people often respond with something like, “Teach me how you use it. I’ll do the same.”

I understand the instinct, but that request skips over something important.

Using AI well is usually not about learning a few clever tricks. Very often, it means changing how you communicate. How you ask. How you clarify. How you push back. How you stay curious. How you notice when you are being lazy with your own thinking.

And those habits do not change overnight.

It’s like public speaking. You do not become a strong presenter simply by watching good presenters. In the same way, you do not become a strong communicator just by hearing someone explain communication to you. Practice is unavoidable.

The good news is that we now have something we can practice with endlessly. AI is not just a tool for getting answers. It is also a space where you can practice thinking out loud, refining your questions, testing different ways of expressing yourself, and seeing how those differences change the result.

Curiosity matters a lot here too.

People with strong curiosity are not always purely outcome-driven. They are often interested in the process itself. They notice that one style of communication produces one kind of response, while another produces something completely different. They start comparing approaches, not just answers.

And when they run out of approaches, they can ask for more.

They can say, “I’ve tried three ways of talking to you. What would a fourth, more effective approach look like?”

That, to me, is where things start to get interesting. Not just wanting to learn something, but wanting to understand how learning itself can be improved.

I’m deliberately not going too deep into the software side in this post. I want to save that for something more focused. This piece feels softer than that. More human. More reflective.

Even the way it was created is part of the point. I didn’t write it in the traditional sense. I spoke it out loud, recorded it, and then asked AI to help turn it into something readable. Not because I wanted AI to generate the ideas for me, but because I wanted the ideas to stay mine while the text became easier for someone else to read.

That distinction matters to me.

I’m tired of reading writing that feels like AI from the very first sentence. If I use AI for this blog, I want it to help shape the material, not replace the thinking behind it.

And to be honest, I think I’ve let some muscles atrophy.

After using AI intensely for years, I can feel that some of my habits have changed. During the long stretch of rebuilding our product from scratch, I worked at a pace that was probably unsustainable. In that period, I became even less patient than I already was. My tendency to split attention across multiple things got worse. And somewhere along the way, using AI stopped feeling like a choice and started feeling like a reflex.

Once you feel how much leverage it gives you, you want to use that leverage all the time.

That has obvious upsides. But it is not free.

It changes your rhythm. It changes your attention. It changes your tolerance for slowness. And, at least for me, it has also started to surface a kind of professional dissatisfaction that I do not fully understand yet. That probably belongs in another post.

For now, I think the point is simpler than all of that.

AI is genuinely powerful. But the real difference is not just whether you use it. The difference comes from how you ask, how you challenge, how you frame, how you decide what is enough, and whether you are aware of what you still cannot see.

Maybe this post is a small example of that.

There was no perfect outline behind it. No polished plan. Just a desire to begin, or maybe just a refusal to keep postponing the beginning.

Sometimes that is enough.
