Monday, April 20, 2026

Local AI on an M1 Pro: From "Post-Apocalyptic" Slowness to a Functional Reality

We’ve all seen headlines like this: "The end of paid coding assistants!" or "Run your own private AI locally for free!" As someone who values privacy and hates depending on monthly subscriptions, I decided to see if my trusty MacBook Pro (M1 Pro, 32GB RAM) could become a dependable coding workstation.

My journey started with a massive failure, moved through a "thinking" loop, and finally landed on a configuration that actually works. Here is how I turned my Mac into a functional local coding station.



Phase 1: The "Heavyweight" Disaster with "Qwen 3.5 Coder Next"

I started ambitiously. Installing Ollama was the simplest part. Then I pulled Qwen 3.5 Coder Next, thinking 32GB of RAM would be enough. I was wrong.

The Experience:

  • Initial 'Hi': 25 seconds.

  • Coding Task: 7+ minutes to generate just some initial instructions and start writing the variables section for an Arduino script.

  • The Culprit: My logs showed the model needed 51.3 GiB of memory. Since I only have 32GB, Ollama had to shove 26GB of the "brain" onto my CPU.
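The back-of-envelope math behind that log line can be sketched in a few lines of Python. The 51.3 GiB requirement and the ~26 GiB GPU working set are the figures from my logs; the helper function itself is just illustrative arithmetic (GiB vs. GB rounding is ignored):

```python
# Rough estimate of how much of a model spills off the GPU on unified memory.
# The figures come from my Ollama logs; the function is illustrative only.

def cpu_spill_gib(model_need_gib: float, gpu_budget_gib: float) -> float:
    """GiB of model weights that cannot fit in the GPU working set."""
    return max(0.0, model_need_gib - gpu_budget_gib)

# Qwen 3.5 Coder Next run: needed 51.3 GiB, Metal working set was ~26 GiB
spill = cpu_spill_gib(51.3, 26.0)
print(f"{spill:.1f} GiB runs on the CPU")  # prints "25.3 GiB runs on the CPU"
```

That ~25 GiB of CPU-bound weights is exactly the "shoved onto the CPU" portion that made every token crawl.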


The long wait after a simple "Hi".

 

The painful 7+ minutes of waiting while watching the slow code generation and seeing my computer's memory heavily consumed.



Phase 2: The "Thinking" Trap with "Qwen 3.5 9B"

I pivoted to a smaller model: Qwen 3.5 9B. On paper, this should have been lightning fast. However, I ran into a new hurdle: Reasoning Loops.

Even with the smaller 9B model, the "Reasoning/Thinking" phase was taking forever—sometimes up to 7 minutes of "thinking" without a single line of code being written. At one point, it even got caught in a logical loop, and I had to restart the process.

Over 7 minutes of "thinking" with no code written, but much lower memory consumption.



Phase 3: The Breakthrough (The "Nothink" Secret)

The real "Aha!" moment came when I realized I didn't need the model to spend several minutes pondering the meaning of life for a simple Arduino script. I just needed the code.

I used a simple command inside the Ollama prompt to bypass the heavy reasoning phase:

>>> /set nothink

The difference was huge:

  • Total Response Time: 2 minutes and 28 seconds for a complete, complex answer.

  • Content Quality: It wasn't just code. It gave me prerequisites, circuit wiring, full Arduino code, security tips, and even improvement ideas.

  • Memory Efficiency: The logs show this model is a perfect fit for the M1 Pro. It only used about 9.1 GiB of total memory, meaning 100% of the model layers (33/33) stayed on the GPU (Metal).

This time, with 'nothink', it was spitting out the answer much faster.
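If you would rather script this than type it into the REPL, recent Ollama versions expose the same switch on the HTTP API as a "think" field. A minimal sketch, assuming an Ollama build new enough to support thinking controls; the model tag is just whatever you pulled:

```python
import json

def build_request(model: str, prompt: str, think: bool = False) -> dict:
    # "think": False is the API counterpart of /set nothink in the REPL:
    # it asks the model to skip the reasoning phase and answer directly.
    return {"model": model, "prompt": prompt, "think": think, "stream": False}

payload = build_request("qwen3.5:9b", "Write an Arduino sketch that blinks an LED on pin 13")
print(json.dumps(payload, indent=2))
# POST this to http://localhost:11434/api/generate on a running Ollama server,
# e.g. with requests.post("http://localhost:11434/api/generate", json=payload)
```

The same field works for one-off scripts, editor plugins, or anything else that talks to the local server.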



Technical Insights from the Logs

If you are troubleshooting your own local setup, here is what I learned from the Ollama server logs:

  1. Check your Offloading: In my successful 9B run, the logs said: offloaded 33/33 layers to GPU. This is the ideal case. If some layers stay on the CPU, performance will suffer badly.

  2. Flash Attention is King: The logs confirmed that flash attention was enabled. This helps the Mac handle long conversations without slowing down.

  3. The "Unified" Advantage: My M1 Pro was able to allocate a recommendedMaxWorkingSetSize of ~26GB. By using the 9B model (which only needs ~9GB), I left plenty of room for my system to breathe.
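To turn point 1 into numbers: here is a rough estimator of how many layers fit in the GPU working set, under the simplifying assumption that all layers are the same size (real layers vary, and the 48-layer count for the big model is a guess for illustration):

```python
def layers_on_gpu(model_gib: float, n_layers: int, gpu_budget_gib: float) -> int:
    """Estimate how many equally-sized layers fit in the GPU working set."""
    per_layer = model_gib / n_layers
    return min(n_layers, int(gpu_budget_gib / per_layer))

# My 9B run: 9.1 GiB over 33 layers vs a ~26 GiB Metal working set
print(layers_on_gpu(9.1, 33, 26.0))   # 33 -> "offloaded 33/33 layers to GPU"

# The 51.3 GiB heavyweight (48 layers assumed for illustration)
print(layers_on_gpu(51.3, 48, 26.0))  # 24 -> only half the layers fit
```

The moment that second number drops below the total layer count, you are in CPU-offload territory and the 7-minute waits begin.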


The Verdict: Is it a "Free" Coding Assistant?

Is a "free" local coding assistant possible? Yes—but resources matter. If you try to run massive models on a 32GB Mac, you'll feel like you're back in the era of dial-up.

If this continues to feel this good, I’m considering the ultimate "pro" move: using this Mac as a headless AI server. I could connect it to a secure network, tuck it away on a shelf, and reach its "brain" from my other computers: for chatting, from within VS Code (with the suitable plugin, of course), or even from my phone. My main coding machine stays cool and quiet while the M1 Pro does all the heavy lifting in the background.
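The headless setup boils down to one environment variable on each side. This is a config sketch, not a tested deployment: the IP address and model tag are placeholders, and you should keep the server behind your firewall or VPN since Ollama has no built-in authentication:

```shell
# On the Mac (server): make Ollama listen on the LAN instead of localhost only
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On any client: point the ollama CLI (or a VS Code plugin) at the Mac
OLLAMA_HOST=http://192.168.1.42:11434 ollama run qwen3.5:9b "Hi"
```

Most editor plugins that speak the Ollama API accept the same base URL in their settings.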

I hope my little weekend experiment has given you some insight or inspiration to set out on your own journey toward AI independence.


Tuesday, April 7, 2026

The Rise of the Product Engineer: Title Trend or New Reality?

 

This career topic has been on my mind for a while, and I've been collecting information about it for the sake of my own career. I now think my picture is clear enough to share, and I hope it benefits someone out there.

The "Product Engineer" title has exploded in popularity this year. This shift is happening for three main reasons:

  • AI-Assisted Coding: Since AI can now handle basic coding tasks, companies need engineers who can move "above" the code to control the requirements (inputs) and the architecture (outputs).

  • Flatter Teams: Tech companies are removing middle layers, requiring engineers to be more independent.

  • Faster Delivery: To move quickly, the line between "thinking about the product" and "writing the code" must disappear.

However, after talking to managers, recruiters, and "Product Engineers", I realized that not everyone defines this role the same way. Here are the three main types of "Product Engineers" I have observed:

1. The "Label Switch" (Product in Name Only)

In these companies, the title is just a marketing trick. They swapped the word "Software" for "Product," but nothing else changed.

  • The Reality: Whether you are a junior or a senior, your job is the same as a traditional "heads-down" coder.

  • The Hiring Process: The interview is 100% technical. They don't assess your business knowledge or how you think about users.

  • The Day-to-Day: You receive a ticket, you code it, and you move on. The "product" part is just a fancy new sticker on your LinkedIn profile. You might negotiate the requirements or the shipping sequence with your PM, but that has been standard practice for decades in any small-to-medium startup. Nothing new.

2. The Product-Minded Engineer

This is a more mature approach, often seen in tech companies with a transparent and flexible management style. Here, the engineer is a partner to the Product Manager (PM).

  • The Reality: Mid-level and senior engineers are expected to help improve requirements, not just follow them. There is a heavy focus on customer value over "tech talk."

  • The Hiring Process: Interviews include a specific section to discuss how you’ve solved user problems, how you collaborate with designers, and how you interact with the PMs in earlier stages.

  • The Day-to-Day: About 10% of your time is spent on product strategy. You are a pragmatist who knows when to choose a "good enough" technical solution to help the user faster. However, the PM still holds the final accountability for the roadmap.

3. The "Part-Time PM" Engineer

This is the most intense version of the role and the unicorn of the job title. These companies need someone who can lead a project from a blank page to a finished product.

  • The Reality: You are essentially a Product Manager who also writes code. You are responsible for the "Why" and the "How."

  • The Hiring Process: Be prepared for deep questions about product frameworks, data analysis, and user research. They want to see if you can lead a squad of engineers.

  • The Day-to-Day: You participate in ideation, talk to stakeholders, and conduct user interviews. You shape the work for the rest of the team and ensure the technical output matches the business goals perfectly. Expect extra accountability with this version.


Conclusion

The software industry is moving away from "coding as a service" toward "problem-solving as a service." Depending on the company, a Product Engineer can be a simple developer or a business leader. If you are looking for this role, make sure to ask during the interview: "How much influence do I actually have over the 'Why' of the product? And am I actually accountable for any decision made?"