
From O(n) to O(1): Visualizing How Lookup Changes With Data Structures

Most developers know the theory:

  • Scanning a list → O(n)
  • Searching a tree → O(log n) (when balanced)
  • Using a hash map → O(1) on average

But what does that look like in action?
Let’s visualize it.
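
Before walking through each structure, here is a quick, illustrative timing sketch (plain Python; absolute numbers will vary by machine) comparing a list scan with a dict lookup as the data doubles:

```python
import timeit

# Rough comparison: membership test in a list (linear scan)
# versus a dict (hash lookup) as the dataset doubles.
for n in (100_000, 200_000, 400_000):
    data_list = list(range(n))
    data_dict = dict.fromkeys(data_list)
    target = n - 1  # worst case for the list: last element

    list_time = timeit.timeit(lambda: target in data_list, number=100)
    dict_time = timeit.timeit(lambda: target in data_dict, number=100)
    print(f"n={n:>7,}: list {list_time:.4f}s  dict {dict_time:.6f}s")
```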


O(n): Linear Scan

A naive list or linked structure:

  • You check every node until you find the one you want.
  • Doubling the data doubles the time.

Every node gets touched until you land on the match.
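
In code, that scan is just a loop. A minimal sketch (the function name is illustrative):

```python
def linear_search(items, target):
    """O(n): touch each element until we hit the target."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1  # not found after scanning everything
```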


O(log n): Tree Lookup

Binary search trees (and the B-Trees used in databases) are much faster:

  • Each step halves the search space.
  • Doubling the dataset adds only about one extra step.

Instead of scanning everything, you only walk one branch per level.
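
A minimal BST lookup sketch (the Node class and names are illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def bst_search(node, target):
    """Each comparison discards the other half of the remaining keys."""
    while node is not None:
        if target == node.key:
            return node
        # Walk down one branch per level; the other subtree is never visited.
        node = node.left if target < node.key else node.right
    return None

# A small balanced tree:   8
#                         / \
#                        3   10
root = Node(8, Node(3), Node(10))
print(bst_search(root, 10).key)  # 10, found after visiting just two nodes
```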

Tree Caveats

Not all trees are equal. Shape matters. The same dataset can form many different trees, depending on insertion order.

  • Unbalanced trees: A naive BST can degrade to O(n) if data is inserted in sorted order, as the sketch below demonstrates.
  • Balanced trees: Self-balancing trees (AVL, Red-Black) keep height near log n, preserving fast lookups.
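
To see the degradation, this sketch builds a naive BST from random-order keys and from sorted keys, then measures the height (names are illustrative; the code is iterative so the degenerate tree can't overflow the stack):

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Naive (unbalanced) BST insertion."""
    if root is None:
        return Node(key)
    node = root
    while True:
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                return root
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key)
                return root
            node = node.right

def height(root):
    """Iterative height, safe even for a degenerate tree."""
    h, stack = 0, [(root, 1)] if root else []
    while stack:
        node, depth = stack.pop()
        h = max(h, depth)
        for child in (node.left, node.right):
            if child:
                stack.append((child, depth + 1))
    return h

keys = list(range(1000))

random_root = None
for k in random.sample(keys, len(keys)):
    random_root = insert(random_root, k)

sorted_root = None
for k in keys:
    sorted_root = insert(sorted_root, k)

print("random insertion height:", height(random_root))  # tens of levels, near log n
print("sorted insertion height:", height(sorted_root))  # 1000: effectively a linked list
```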

O(1): Hash Map Lookup

Hash maps feel instant:

  • Compute hash once.
  • Jump directly to the bucket.
  • Check for collisions (usually none or very few).

Even with millions of items, the average lookup cost stays constant.
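
A toy separate-chaining map makes those three steps explicit. This is an illustration of the idea only, not how any production dict is actually implemented:

```python
class ChainedHashMap:
    """Toy hash map with separate chaining, for illustration."""

    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # 1. Compute the hash once.  2. Jump straight to the bucket.
        bucket = self.buckets[hash(key) % len(self.buckets)]
        # 3. Scan the (usually tiny) collision chain.
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```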


Why Trees Still Exist

If hash maps are “instant,” why bother with trees?

  • Ordering: Trees keep keys sorted, enabling range queries (WHERE value > X); see the sketch after this list.
  • Predictability: Balanced trees stay O(log n) even in the worst case, whereas hash maps can degrade under poor hash functions or adversarial keys.
  • Database indexes: B-Trees excel for ordered scans and prefix lookups.
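
As a stand-in for an ordered index, a sorted list plus binary search shows why ordering enables WHERE value > X (here bisect plays the role of descending a B-Tree to the boundary):

```python
import bisect

# Sorted keys stand in for a B-Tree index's ordered leaves.
keys = [3, 8, 15, 23, 42, 57, 91]

def range_query(sorted_keys, low):
    """Emulate WHERE value > low: binary-search to the boundary,
    then read forward in order. A hash map cannot make this jump."""
    start = bisect.bisect_right(sorted_keys, low)
    return sorted_keys[start:]

print(range_query(keys, 23))  # [42, 57, 91]
```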

Bottom line:

  • Use trees for sorted data and range queries.
  • Use hash maps for pure equality lookups and raw speed.

Key Takeaways

  • Lookup complexity is not just math — it feels different in practice.
  • Hash maps skip the search entirely thanks to deterministic hashing.
  • Trees remain valuable when data ordering or worst-case guarantees matter.