Sharpen the Pattern Detector
Introduction
Every DSA problem has two parts:
- Identifying the solution pattern (the structure of the solve).
- Implementing it in code.
Most people spend the bulk of their practice on implementation. That work matters, but it is slow, which makes it a poor way to build breadth. This drill flips the focus: you will not code at all. Instead, you’ll train only the pattern-recognition step: the ability to rapidly see “ah, this is sliding window” or “this is a heap problem.”
This method is designed for short, high-repetition bursts. The more “reps” you get in spotting patterns, the faster you’ll get at mapping problems to known techniques, which is the critical step before coding.
1) Timing Rules
- Easy: ≤ 90s read, +60s answer
- Medium: ≤ 120s read, +60–90s answer
- Hard: ≤ 150s read, +90s answer
- Formula: target_time = avg_read_time + 60–120s
- Keep timing fixed for a week, then recalibrate.
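To make the formula concrete, here is a minimal Python sketch that turns a measured average read time into a per-difficulty target. The budget and buffer numbers mirror the rules above; treat them as starting assumptions to tune.

```python
# Read budgets and answer buffers per difficulty (seconds), per the rules above.
READ_BUDGET = {"easy": 90, "medium": 120, "hard": 150}
ANSWER_BUFFER = {"easy": 60, "medium": 90, "hard": 90}

def target_time(difficulty: str, avg_read_time: float) -> float:
    """target_time = avg_read_time (capped at the read budget) + answer buffer."""
    return min(avg_read_time, READ_BUDGET[difficulty]) + ANSWER_BUFFER[difficulty]

print(target_time("medium", 100.0))  # 190.0 seconds total
```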
2) What to Write Down (No Code)
- Primary technique (e.g., sliding window, two pointers, heap).
- 1-sentence rationale (“window because longest subarray under constraint X”).
- Pitfall you expect (off-by-one, duplicates, negatives, overflow).
- Secondary pattern (if any).
- Optional: time/space complexity; include only if it helps clarify the tradeoff.
- Assumption: you already know the baseline access patterns for arrays, hash maps, sets, etc.
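Put together, a logged prediction for one rep might look like the entry below. The problem choice and field names are illustrative, not a required schema.

```python
# Hypothetical log entry for one rep, mirroring the bullets above.
prediction = {
    "problem": "Longest Substring With At Most K Distinct Characters",
    "primary_technique": "sliding window",
    "rationale": "longest substring under an 'at most K distinct' constraint",
    "expected_pitfall": "shrinking the window: update counts before moving left",
    "secondary_pattern": "hash map of character counts",
    "complexity": "O(n) time, O(k) space",  # optional
}
```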
3) Use ChatGPT as Referee
Paste the problem and your guess:
Prompt: “Act as a DSA Pattern Referee. Given the problem statement and my prediction, answer in JSON only:
{ "primary_pattern": "...", "secondary_patterns": ["..."], "time_complexity": "...", "space_complexity": "...", "key_pitfalls": ["..."], "verdict": "correct|partial|incorrect", "one_line_feedback": "..." }
Problem:
<paste>
My prediction: <your technique + rationale + pitfall (+ optional complexity)>”
Follow-up if you want the editorial:
“Now reveal an outline with key invariants and edge cases (no code).”
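If you log the referee’s replies, a small validator catches malformed output before it pollutes your stats. A minimal sketch using only the standard library; the field names follow the JSON schema in the prompt above.

```python
import json

# Keys and verdicts from the referee schema in the prompt above.
REQUIRED_KEYS = {
    "primary_pattern", "secondary_patterns", "time_complexity",
    "space_complexity", "key_pitfalls", "verdict", "one_line_feedback",
}
VERDICTS = {"correct", "partial", "incorrect"}

def parse_referee_reply(raw: str) -> dict:
    """Parse the referee's JSON reply and verify it matches the schema."""
    reply = json.loads(raw)
    missing = REQUIRED_KEYS - reply.keys()
    if missing:
        raise ValueError(f"referee reply missing keys: {sorted(missing)}")
    if reply["verdict"] not in VERDICTS:
        raise ValueError(f"unexpected verdict: {reply['verdict']!r}")
    return reply
```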
4) Example Pattern Signatures (Cue Sheet)
- Sliding Window → substrings/subarrays, max/min under constraints (see the sketch after this list).
- Two Pointers → sorted input, pair-sum, merge-like scan.
- Monotonic Stack → next greater/smaller, spans, histogram areas.
- Prefix Sum / Diff Array → range queries, interval updates.
- Binary Search on Answer → minimize max / maximize min under feasibility check.
- Heap → top-K, streaming, repeated smallest/largest.
- Union–Find → connectivity, components, undirected cycles.
- Topological Sort → ordering with prerequisites.
- Trie → many string prefixes.
- Sweep Line → intervals, overlapping events.
- Greedy → locally optimal choices that compose into a global optimum.
- DP → overlapping subproblems, optimal substructure.
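To make one cue concrete, here is a minimal sliding-window sketch for the “longest substring/subarray under a constraint” signature. The specific problem (longest substring with at most k distinct characters) is just an illustrative choice.

```python
from collections import defaultdict

def longest_with_at_most_k_distinct(s: str, k: int) -> int:
    """Classic window shape: grow the right edge, shrink the left edge
    whenever the constraint breaks, and track the best window seen."""
    counts = defaultdict(int)
    best = left = 0
    for right, ch in enumerate(s):
        counts[ch] += 1
        while len(counts) > k:       # constraint violated: shrink from the left
            counts[s[left]] -= 1
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_with_at_most_k_distinct("eceba", 2))  # 3 ("ece")
```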
This is by no means a complete set. In fact, you don’t need the official names; you can make them up. The key is just that you remember the technique.
For example, “upside-down binary tree”: the main idea is that I’m passing state down, elevating a lower level to become the new root, and then making sure to update the pointers/references.
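As a sketch of what that made-up name could describe (the classic “binary tree upside down” problem, where every right node is a leaf with a left sibling): walk the left spine and pass the rewired children down as state.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def upside_down(root: TreeNode) -> TreeNode:
    """Walk the left spine, passing rewired children down as state.
    Assumes every right node is a leaf with a left sibling."""
    node, new_left, new_right = root, None, None
    while node:
        next_node = node.left    # where we descend next
        old_right = node.right
        node.left = new_left     # state passed down from the level above
        node.right = new_right
        new_left = old_right     # old right sibling becomes the new left child
        new_right = node         # old parent becomes the new right child
        node = next_node
    return new_right             # the old leftmost node is the new root
```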
Algo.monster even includes the common pitfalls. Remember, this phase of learning is not about implementation, but if you create flashcards it’s a good idea to include the pitfalls on them.
5) Daily Sprint Format
- Drill set (15–25 min): 6–10 problems, read → predict → log → referee check.
- Retrospective (5–10 min): record misses, schedule spaced review (1–3–7–21 days).
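A tiny helper for the spaced-review scheduling; the 1–3–7–21 offsets come straight from the retrospective step above.

```python
from datetime import date, timedelta

def review_dates(missed_on: date, offsets=(1, 3, 7, 21)):
    """Schedule spaced reviews for a missed pattern (1-3-7-21 days out)."""
    return [missed_on + timedelta(days=d) for d in offsets]

for d in review_dates(date(2025, 1, 6)):
    print(d)  # 2025-01-07, 2025-01-09, 2025-01-13, 2025-01-27
```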
6) Variations to Sharpen Detection
- Title-only drill: guess from title + constraints only.
- Constraints-first drill: predict the pattern from the input bounds alone (see the sketch after this list).
- Nearest neighbor: after guess, list two similar problems & differences.
- Why not X?: write one sentence on why another common pattern doesn’t fit.
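For the constraints-first drill, a rough rule of thumb maps the input bound to the complexity class that typically fits a ~10^8-operation budget. The thresholds below are folk numbers, not exact limits; adjust them to your judge.

```python
def plausible_complexity(n: int) -> str:
    """Constraints-first heuristic: rough thresholds for a ~10^8-op budget."""
    table = [
        (20,         "O(2^n): bitmask / brute force over subsets"),
        (500,        "O(n^3): e.g., interval DP, Floyd-Warshall"),
        (5_000,      "O(n^2): double loops, 2-D DP tables"),
        (200_000,    "O(n log n): sorting, heaps, binary search per element"),
        (10_000_000, "O(n): single pass, sliding window, prefix sums"),
    ]
    for bound, guess in table:
        if n <= bound:
            return guess
    return "O(log n) or O(1): math, or binary search on the answer space"

print(plausible_complexity(100_000))  # O(n log n): sorting, heaps, ...
```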