My Journey with PostgreSQL: From User to (Almost) Contributor

PostgreSQL is my database of choice today — for all my full stack apps, both at work and on my side projects.
But it’s been a long road getting here:

  • 2014–2016: School and early dabbling — MySQL, SQLite, even Microsoft Access (lol).
  • 2017–2020: BI job (Windows environment), mostly OLAP — heavy SQL Server, SAP HANA, plus dabbling with MongoDB, Neo4j, IndexedDB, WebSQL, and more.
    First experience with ORMs. Work projects gradually transitioned from mostly OLAP to mostly OLTP.
  • 2020: Began transitioning to Linux and becoming terminal-native.
  • 2021–2023: Working mainly with Athena (for OLAP) — not many OLTP projects.
    For side projects, I was trying a UI/FE-first approach — serving raw JSON files for front ends to grab (definitely not my approach anymore btw).
    So again, not much OLTP during this time.
  • 2024–2025: Fell in love with PostgreSQL.

One thing I’ve really enjoyed is going beyond just using Postgres.
I like understanding how things work at the deepest level — but even before that, I always felt friction with traditional SQL.

For simple operations, SQL is great.
But once transformations got complex, OOP-style structures made more sense to me: explicit flow, modular stages, and clear separation of concerns.

Back then, I didn’t really care about performance tricks or optimization — sure, I knew how to view query plans, but if I couldn't step through the engine and actually see why something worked a certain way, it all felt kind of irrelevant.

("Oh, do this and the database will optimize it differently."
...Okay, whatever 🙄 — valley girl voice.)

Now, with PostgreSQL, I’m way more invested.
Because I can actually build from source, step through with GDB, and understand the real reasons behind optimizations — not just parrot tips from blog posts.

I even wrote an article back in 2018 about how LINQ made transformations far more human-readable than raw SQL.
(Sure, the SQL generated by Entity Framework wasn’t pretty — but the LINQ logic was crystal clear, and if performance ever became an issue, it could be optimized later.)

The frustration wasn’t just that SQL Server was closed-source — it was that pure declarative logic becomes messy, verbose, and hard to reason about for anything beyond basic queries.

Part of it, looking back, was also my own inexperience.
At the time, even though I knew in theory that the database engine and the IDE were separate, in everyday practice they felt tightly coupled.

Now, my relationship with databases is completely different: the engine is just a service.
Whether it’s psql, bash scripts, psycopg, pgq, or custom CLI wrappers pulling connection strings from OS env vars — the tools are lightweight, flexible, and fully decoupled from the engine itself.
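
To make that concrete, here's a minimal sketch of the kind of lightweight client I mean: plain psycopg, with the connection string pulled from an environment variable. The variable name PG_DSN and the query are placeholders, not anything specific to my setup.

    import os
    import psycopg

    # The connection string lives in the environment, not in the code or an IDE profile.
    # PG_DSN is a made-up name, e.g. "postgresql://app@localhost:5432/appdb".
    dsn = os.environ["PG_DSN"]

    with psycopg.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT current_database(), version()")
            db, version = cur.fetchone()
            print(f"{db}: {version}")

Nothing in that script knows or cares which engine build is on the other end; point PG_DSN at a scratch cluster and it behaves exactly the same.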

It’s not that SQL’s complexity magically disappeared — but now I see many more paths forward for managing it cleanly, because I'm no longer trapped inside the "DB IDE worldview."


With Postgres, I finally had the opportunity to go all the way down.

I like doing deep dives — for example, my pinned LinkedIn carousel on TOAST internals.
I even went as far as building Postgres from source and debugging it with GDB — stepping through the imperative C code behind declarative SQL.

Sure, we all hear about lexical order, logical order, parsing, query plans, and optimizers...
But most of that stays abstract for years because:

  • Most databases are closed-source, and
  • Building from source and stepping through feels like a huge task until you’ve leveled up your C/C++ and systems skills.
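
To be fair, the plan itself was never the hidden part; any client can ask the planner for it. A rough sketch, again with psycopg, the same hypothetical PG_DSN, and a made-up orders table:

    import os
    import psycopg

    dsn = os.environ["PG_DSN"]  # placeholder, as above

    with psycopg.connect(dsn) as conn:
        with conn.cursor() as cur:
            # Ask the planner how it would run the query; the table is hypothetical.
            cur.execute("EXPLAIN SELECT * FROM orders WHERE id = 42")
            for (line,) in cur.fetchall():
                print(line)

What stays abstract is the "why" behind that plan, and that is exactly the part that building from source and stepping through with GDB finally made concrete for me.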

Ironically, when I finally did it, it wasn’t surprising at all — the internals made sense.
(Still would love to contribute to the codebase someday... but, priorities.)

Working with the Postgres source like this scratched an itch I'd had for a long time: finally seeing DB internals up close.

PostgreSQL is home for me now — not just because it’s powerful, but because I’ve seen and touched how it really works.