Local Clarity > Global Commitment
Introduction
From the outside, my behavior can look inconsistent.
One month, I’m deeply focused on Project X. I talk about it as if it’s a multi-year commitment.
Then something shifts.
Now I’m working on Project Y.
To observers, this can look like:
- lack of discipline
- lack of follow-through
- changing priorities arbitrarily
But internally, it’s not arbitrary at all.
What’s actually happening is:
I continuously re-rank my priorities based on a multi-factor evaluation function.
The Evaluation Function I’m Using (Implicitly)
At any given moment, I’m evaluating each project along several dimensions:
1. Time to Meaningful Progress
- How long until I can produce something non-trivial?
- Can I make visible progress in a single session?
- Is this a short ramp or a long ramp?
2. Types of Upside (Not Just One)
I don’t think about value as a single dimension.
I’m implicitly weighing multiple types of upside:
- Compounding upside (long-term leverage)
- Immediate output (content, code, artifacts)
- Learning / capability gain
- Positioning / optionality
- Network / exposure
- Enjoyment / intrinsic pull
A project doesn’t need to dominate every category — but it needs to score high enough across a few of them.
3. Clarity
- Do I know exactly what the next step is?
- Is the path obvious, or still branching?
- Do I understand what “good” looks like?
4. Friction / Re-entry Cost
This is one of the most important variables.
- How hard is it to resume this work?
- Do I need to reload a large amount of context?
- Is the state captured or lost?
This is also where my systems matter.
→ See: The Zero Overhead Cold Start Habit
Low-friction re-entry allows me to:
- switch contexts without penalty
- pause without losing progress
- maintain multiple active threads
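The dimensions above can be pictured as a weighted scoring function over projects. This is a toy sketch, not a system I actually run: the field names, weights, and numbers are all made up for illustration, and in reality the weighting is implicit and shifts over time.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    time_to_progress: float  # 0..1, higher = faster path to meaningful progress
    upside: float            # 0..1, aggregated across the upside types above
    clarity: float           # 0..1, how obvious the next step is
    reentry_cost: float      # 0..1, higher = more friction to resume

# Hypothetical weights; note that friction counts against a project.
WEIGHTS = {
    "time_to_progress": 0.25,
    "upside": 0.35,
    "clarity": 0.25,
    "reentry_cost": -0.15,
}

def score(p: Project) -> float:
    """Weighted sum over the evaluation dimensions."""
    return (WEIGHTS["time_to_progress"] * p.time_to_progress
            + WEIGHTS["upside"] * p.upside
            + WEIGHTS["clarity"] * p.clarity
            + WEIGHTS["reentry_cost"] * p.reentry_cost)

def rank(projects: list[Project]) -> list[Project]:
    """Re-rank by current score, highest first."""
    return sorted(projects, key=score, reverse=True)

backlog = [
    Project("deep-refactor", time_to_progress=0.2, upside=0.9,
            clarity=0.4, reentry_cost=0.7),
    Project("blog-post", time_to_progress=0.9, upside=0.5,
            clarity=0.9, reentry_cost=0.1),
]
# With these numbers, the quick, clear, low-friction project outranks
# the higher-upside slog.
current_order = rank(backlog)
```

The point of the sketch is only that no single dimension decides the ranking; a project can win on clarity and low friction while losing on raw upside.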
Why My Priorities Can Shift Suddenly
The key point is that this evaluation is not linear.
It can change abruptly.
For example:
- A project that felt clear becomes ambiguous
- Another project becomes trivial to execute
- A new idea scores high across multiple upside dimensions
- Re-entry cost increases due to lost context
When that happens:
The ranking of my projects can reorder instantly.
There’s no gradual transition. Just a different answer to the question:
“What is the highest-value thing I can do right now?”
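The instant reordering can be shown with a toy scoring function: a single input moving (say, clarity collapsing on the current project) flips the ranking with no intermediate state. Numbers and field names here are invented for illustration.

```python
# Toy illustration: one dimension changing flips the whole ranking.

def score(clarity: float, upside: float, friction: float) -> float:
    return clarity + upside - friction

project_x = {"clarity": 0.9, "upside": 0.6, "friction": 0.2}  # current focus
project_y = {"clarity": 0.5, "upside": 0.7, "friction": 0.1}

assert score(**project_x) > score(**project_y)  # X leads, 1.3 vs 1.1

# "A project that felt clear becomes ambiguous":
project_x["clarity"] = 0.3

assert score(**project_x) < score(**project_y)  # Y leads, 0.7 vs 1.1
```

Nothing gradual happened between the two asserts; the same question simply returned a different answer.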
Local Optimization vs Long-Term Commitment
Most people optimize for consistency:
- Pick something
- Stick with it
- Avoid switching
I’m optimizing for something different:
At each moment, I want to allocate effort to the highest-scoring option.
That means my system is:
| Approach | Behavior |
|---|---|
| Commitment-based | Stable over time |
| Evaluation-based | Continuously adaptive |
Why This Only Works If Switching Cost Is Low
This approach breaks down if switching is expensive.
If every context switch requires:
- rebuilding state
- rereading everything
- re-deriving intent
Then I would just thrash.
So this only works because I’ve invested in:
- capturing state
- structuring work
- minimizing re-entry cost
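One way to see why this matters: if you only switch when the score gap exceeds the switching cost, a high cost locks you into the current project even when something better appears, while a near-zero cost lets the ranking drive allocation directly. A hypothetical sketch:

```python
def should_switch(current_score: float, best_alternative_score: float,
                  switching_cost: float) -> bool:
    """Switch only when the alternative beats the current project
    by more than the cost of the context switch."""
    return best_alternative_score - current_score > switching_cost

# High switching cost: a clearly better option still doesn't justify moving.
assert should_switch(current_score=0.6, best_alternative_score=0.8,
                     switching_cost=0.5) is False

# Near-zero switching cost (state captured, cheap re-entry): the ranking
# alone decides, and the better option wins immediately.
assert should_switch(current_score=0.6, best_alternative_score=0.8,
                     switching_cost=0.05) is True
```

Driving `switching_cost` toward zero is exactly what the state-capture habits above buy.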
Example: fmc and Time-to-Value Tradeoffs
A concrete example of this is how I approached fmc (Front Matter Canonicalizer).

In 2024, I intentionally didn’t fully automate it.
At the time, the evaluation looked like:
- high implementation cost
- low immediate return
- useful manual workflow already exists
So I kept it minimal and used it as an audit tool.
As I wrote here:
“More importantly — it’s not worth the time investment right now. The cost of building 100% conformity and logic-based automation outweighs the value I’d get at this stage. I’m more focused on getting things to a solid baseline.”
Later, in 2026:
- the cost of building tooling dropped significantly (thanks, Claude Code!): implementation became trivial, and in a few days I got at least a month's worth of work done
- the upside increased: a way to do mass CRUD (create, read, update, delete) on front matter was now worth far more
- I had far more documents, and traffic to the site was starting to increase meaningfully
- all this metadata could feed the downstream tools I had built in the meantime, further increasing the value of the automation
So I revisited it and fully expanded it.
From the outside, that might look like inconsistency.
Internally, it was:
The evaluation function changed, so the decision changed.
Creative vs Mechanical Work
Another pattern I’ve noticed:
- Mechanical work → I fully automate
- Creative work → I keep manual control
That distinction feeds directly into how I prioritize:
- If something is repetitive and low-value → automate
- If something benefits from context and judgment → keep it manual
Why This Looks Irrational to Others
Observers tend to assume:
- priorities should remain stable
- plans imply commitment
- switching requires justification
What they don’t see:
- changes in clarity
- changes in time-to-progress
- changes in upside
- changes in friction
So the behavior looks like:
“You said X mattered. Now you’re doing Y.”
But what’s missing is:
The internal re-evaluation that happened in between.
Common Misinterpretation
This pattern often gets labeled as:
- “shiny object syndrome”
- lack of discipline
- inability to commit
But the real distinction is:
| Behavior | Reality |
|---|---|
| Random switching | No evaluation |
| Structured switching | Continuous re-ranking |
Practical Heuristic
When I shift priorities, the useful question is:
What changed in the evaluation function?
- Did clarity improve somewhere else?
- Did ambiguity increase here?
- Did the upside shift?
- Did friction increase?
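Those questions amount to diffing two snapshots of the evaluation and seeing which dimensions actually moved. A minimal sketch, with hypothetical field names and numbers:

```python
def what_changed(before: dict[str, float], after: dict[str, float],
                 threshold: float = 0.1) -> dict[str, float]:
    """Return the dimensions whose values moved by more than `threshold`."""
    return {k: after[k] - before[k]
            for k in before
            if abs(after[k] - before[k]) > threshold}

before = {"clarity": 0.75, "upside": 0.5, "friction": 0.25}
after  = {"clarity": 0.25, "upside": 0.5, "friction": 0.25}

# Only clarity moved: that single shift explains the priority change.
assert what_changed(before, after) == {"clarity": -0.5}
```

If the diff comes back empty, the switch probably was arbitrary; if it isn't, the switch has a cause you can name.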
Closing
From the outside, this can look inconsistent.
From the inside, it follows a simple rule:
At any given moment, I allocate effort to the highest-value, lowest-friction, highest-clarity opportunity available.
That leads to:
- sudden shifts
- non-linear progress
- changing priorities
But underneath it:
There is consistency — just not the kind that is easy to observe externally.