Louis Bourque

Thoughts on LLMs

Dec 20, 2025

How it started

I really liked using Windsurf’s tab auto-complete. I could write most of the code and it would predict the next line with surprising accuracy. If it got it wrong, it was easy to ignore and keep writing. For the longest time I resisted Vibe/Agentic Coding. When I tried using LLMs for anything more complex than a simple function, they would get enough wrong that it didn’t seem worth the effort. Not long ago I tried again with Cursor and I was impressed.

On a large repo, I asked Cursor to write a database migration script to create a new table, providing the table name and columns I wanted. It looked at the existing migrations, followed the same filename convention, and created the migration exactly as I requested. I used it infrequently for small, easy-to-explain tasks, and it got things mostly right. Then I started asking it to review code before creating PRs:

Review uncommitted code for errors or omissions, introduction of flaws and code that was added for debugging. This includes console logs and empty/TODO comments.

Impressively, it found a few issues that I could fix before creating my commit/PR.

YouTube started showing me videos from BridgeMind, who livestreams himself daily while vibe coding an app until he makes $1,000,000. I thought to myself, surely this won’t work. Surely as the complexity grows the code will self-implode. But no, it’s working. He’s at Day 106 and he has a working product with paying customers. Sure, it has some bugs, but what software doesn’t?

My thoughts on the future of coding

I can’t help but feel we’re seeing a fundamental shift in how we write code. I started writing code using Notepad and the shift to an IDE called Eclipse felt scary (so many buttons!) and exciting (I can code so much faster now that I get immediate feedback!). We don’t write machine code anymore. We rely on libraries and frameworks to accelerate output.

With current LLMs’ tendency to hallucinate APIs and functions, we may tend towards comprehensive testing and type checking. Maybe that’s a good thing overall.
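To illustrate why testing helps here, consider a minimal Python sketch (the class and function names are hypothetical, invented for this example): a hallucinated method only blows up when it's actually called, so even a trivial test exercises the code path and surfaces the error before it ships.

```python
# Hypothetical example: a tiny class an LLM might be asked to call into.
class Greeter:
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"

def welcome_user(name: str) -> str:
    g = Greeter()
    # An LLM might instead emit g.say_hello(name) -- a method that doesn't
    # exist. Python would only raise AttributeError at call time, so a test
    # that runs this path (or a static type checker) is what catches it.
    return g.greet(name)
```

A type checker such as mypy flags the nonexistent call without running anything, which is the "type checking" half of the argument above.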

Downsides

My biggest worry is that coding is becoming pay-to-play. Sure, there was always some cost; at a minimum, access to a computer. You could get free access at a library, but that’s not always practical or available.

I tried using LM Studio with Continue to use a local LLM agent, but it was just too slow and the output was not great. This isn’t those products’ fault; it’s a limitation of my hardware. I still use LM Studio for chat, just not for code generation.

An Experiment

I wanted to try and code an app using LLMs to see how it would go. I started with a GTK app written in Rust. I have very little experience with GTK, so it was always tough to know when I should use a Box, or what alternatives are available. The documentation isn’t beginner-friendly, and assumes you know when to use each component. There are no visuals showing when to use each one. With AI I didn’t have to worry about it. I just explained the layout I wanted and it just worked. I had to iterate and refine, but I was able to get what I wanted. After a while I decided to switch to Tauri, which the AI handled. That way the code would be more familiar to me and I’d be better able to judge the quality and maintainability of the generated code.

The sound of the hard drive isn’t exactly what I’d like either, but there’s no way I could have gotten anywhere near that myself.

If you want to see the results, they’re available here: https://github.com/louisbourque/hdd-sim. Most of the code was AI generated, but I did make some manual edits.