Stop learning Python just to talk to a chatbot

Most of the advice you see about AI is written by people who want you to feel small. They want you to think that unless you can navigate a GitHub repository or write a Python script to ‘automate your workflow,’ you’re basically a horse-and-buggy driver in a world of Teslas. It’s a lie. I work a regular job in logistics—lots of spreadsheets, lots of annoying emails, lots of ‘per my last email’ energy—and I’ve spent the last year obsessed with how these AI models actually work. What I’ve learned is that the ‘engineering’ part of prompt engineering is a total misnomer. It’s not engineering. It’s just being a decent communicator.

The day I realized I was wasting my life

It was a Tuesday in April. I was sitting in my cubicle, the one right next to the coffee machine that always smells like burnt plastic, trying to get GPT-4 to summarize 500 customer reviews of a specific shipping route. I’d read on some ‘AI Guru’s’ Twitter thread that I needed to use the API and write a script to handle the data in chunks. I spent three and a half hours—actual hours of my life I will never get back—trying to debug a ‘RateLimitError’ in a language I barely understand. I felt like an idiot. My eyes were stinging from the screen glare, and I hadn’t even started the actual work.

I finally gave up on the code. I just copied and pasted the text in big messy chunks directly into the browser. I told the AI: ‘Look, I’m stressed. Just read these and tell me why everyone is mad about the Memphis hub. Keep it short because my boss has the attention span of a goldfish.’ It gave me the perfect summary in twelve seconds. That was the moment I realized that most of the technical barriers people talk about are just noise. You don’t need to be a coder. You just need to stop being so formal with the machine.

Anyway, the coffee machine finally died that afternoon, and we had to drink lukewarm water for three days. But I digress. The point is, the ‘magic’ isn’t in the code; it’s in how you frame the problem.

Why “Act as a…” is mostly a lie


I know people will disagree with me on this, and I might be wrong, but I think the whole ‘Act as a world-class marketing expert’ trick is a placebo. I’ve tested this. I ran 42 different prompts for a project last month—half of them had these fancy personas, and the other half were just direct instructions. The results? The ‘expert’ prompts were actually 15% wordier and contained more fluff. They didn’t actually provide better insights; they just used more adjectives.

What I mean is—actually, let me put it differently. When you tell an AI to ‘act as an expert,’ you’re inviting it to mimic the clichés of an expert. It starts using words like ‘synergy’ and ‘robust’ and ‘holistic.’ It becomes a caricature. It’s much better to give it a specific constraint than a vague persona. Instead of ‘Act as a lawyer,’ try ‘Review this contract for any clause that mentions hidden fees and explain them like I’m five years old.’ The constraint is the secret sauce. Specificity is better than roleplay.

Also, I genuinely think people who use ‘Act as a professional copywriter’ are lazy. It produces bland garbage that sounds like a LinkedIn ad from 2012. I’d rather tell the AI to ‘write like a tired person who just wants to go home and is annoyed by corporate jargon.’ That actually gets you something that sounds human.

Key Takeaway: Constraints are more powerful than personas. Tell the AI what NOT to do, rather than who to be.

My “Lazy Person’s” Framework

I don’t follow a 10-step process. I have a job to do. This is the only list you need to get 90% of the way there without touching a line of code:

  • The Context Dump: Give it more info than you think it needs. If you’re asking for an email, tell it what happened in the meeting, what the weather was like, and why you’re annoyed.
  • The “No” List: Explicitly tell it what to avoid. ‘No bullet points,’ ‘No corporate speak,’ ‘Don’t mention the budget.’
  • The Iteration Loop: Never expect the first answer to be right. It’s a conversation, not a vending machine.
  • The Format Fix: Tell it exactly how you want the output. ‘Give me a table with three columns,’ or ‘Write this in one long, rambling paragraph.’

That’s it. That’s the whole trick.
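If you get tired of retyping those four steps, you can glue them together once and reuse the skeleton. This is a minimal sketch, not any official template — the function name and structure are mine, and it's genuinely just string gluing, which is sort of the whole point:

```python
def build_prompt(context, task, no_list, output_format):
    """Glue the four framework steps into one prompt string.

    context: the messy background dump (more than you think it needs)
    task: what you actually want done
    no_list: things the model should explicitly avoid
    output_format: exactly how the answer should look
    """
    avoid = "\n".join(f"- {item}" for item in no_list)
    return (
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Do NOT do any of the following:\n{avoid}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    context="Meeting ran long, the Memphis hub is behind, my boss is annoyed.",
    task="Draft a short status email to the regional manager.",
    no_list=["corporate speak", "bullet points", "mentioning the budget"],
    output_format="One plain paragraph, under 120 words.",
)
print(prompt)
```

Notice the Iteration Loop isn't in there — that part stays a conversation. You paste this in, read the answer, and then argue with it.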

The “Temperature” placebo and other things I’m unsure about

There are all these settings in the ‘Playground’ or ‘Advanced’ modes—Temperature, Top P, Frequency Penalty. I’ll be honest: I think for most of us, they don’t matter. I’ve spent hours toggling the temperature from 0.1 to 0.9, and for 95% of my tasks, I couldn’t tell the difference in a blind test. Maybe my brain is just flat, or maybe the models have become so aligned that they ignore the settings half the time anyway.

I used to think you had to be precise with these numbers. I was completely wrong. The model’s internal state is so complex that thinking you’re ‘tuning’ it by moving a slider is like trying to steer a cruise ship with a toothpick. Focus on the words. The words are the only lever that actually works consistently.
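For the curious: if you ever do touch the API, here's how small a "temperature tweak" actually is. This sketch just builds two request payloads as plain dicts (shaped like the OpenAI chat completions request — the model name is only an example, and nothing gets sent anywhere) so you can see that the entire difference is one float:

```python
def make_request(prompt, temperature):
    # Payload shape follows the OpenAI chat completions API;
    # swap in whatever client or model you actually use.
    return {
        "model": "gpt-4o",  # example model name, not a recommendation
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

low = make_request("Categorize these reviews.", 0.1)
high = make_request("Categorize these reviews.", 0.9)

# Every field except one is identical between the two requests.
diff = {k for k in low if low[k] != high[k]}
print(diff)  # {'temperature'}
```

The words in `messages` are the big lever. The float is the toothpick.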

One thing I am sure about, though: I absolutely refuse to use Notion AI. I know everyone loves it, but it feels like a middle manager trying to be ‘cool’ at a party. It’s invasive, the UI is cluttered, and it constantly suggests ‘improving’ my writing in ways that make it sound like a robot wrote it. I hate it. I’ll stick to my messy ChatGPT window and my plain text files, thanks. Sometimes the ‘integrated’ solution is just a way to charge you an extra $10 a month for something you can do better elsewhere for free.

Stop being polite to the math

I ran a little experiment last Tuesday. I had to categorize about 200 lines of shipping data. I used the same prompt twice. In the first one, I was polite: ‘Please could you help me categorize these, thank you so much!’ In the second one, I was blunt: ‘Categorize these. Use these categories: A, B, C. Do not add any intro or outro text.’

The polite prompt failed on 14 items. The blunt prompt failed on 2.

AI doesn’t have feelings. It doesn’t need to be buttered up. In fact, being ‘polite’ adds extra tokens that can distract the model from the actual task. It’s a mathematical engine, not your neighbor. When you’re at work and you need something done, just say it. Precision beats politeness every single time.

It feels weird at first. We’re trained to be nice. But once you realize that ‘please’ is just noise to an LLM, you start getting much better results. It’s about being a clear communicator, not a nice one. It’s the difference between a boss who says ‘Hey, if you have a second, maybe look at this?’ and a boss who says ‘I need this report by 4 PM, formatted as a PDF.’ Guess which one gets better results?
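Here's roughly what my blunt categorization prompt looked like, wrapped in a tiny helper so I could reuse it. The wording and categories are just my example from above — there's no special syntax here, it's only terse:

```python
def blunt_prompt(lines, categories):
    # Blunt, constraint-first phrasing: no pleasantries, an explicit
    # category list, and an instruction to suppress intro/outro chatter.
    cats = ", ".join(categories)
    data = "\n".join(lines)
    return (
        f"Categorize each line below. Use only these categories: {cats}. "
        "Output one 'line -> category' pair per line. "
        "Do not add any intro or outro text.\n\n"
        f"{data}"
    )

print(blunt_prompt(["Late delivery to Memphis hub", "Box arrived damaged"],
                   ["A", "B", "C"]))
```

No ‘please,’ no ‘thank you so much’ — just the categories, the format, and the data.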

The part where I might be a hypocrite

I’ve spent this whole time telling you not to code, but I do have one confession. I have a single ‘shortcut’ on my Mac that uses a tiny bit of AppleScript to paste my clipboard into an AI window. Does that count as coding? Maybe. But it took me 30 seconds to set up and I didn’t have to learn what a ‘Boolean’ is to do it.

I think we’re entering an era where ‘literacy’ isn’t about knowing syntax; it’s about knowing how to describe a process. If you can describe how to make a peanut butter and jelly sandwich to someone who has never seen bread, you can master prompt engineering. It’s just logic. It’s just being incredibly, painfully literal.

I still wonder if this is all going to be obsolete in six months. Maybe the models will get so smart they won’t need prompts at all. They’ll just read our minds or our ‘intent.’ That sounds terrifying, honestly. I like the prompt. I like the feeling of finally getting the words right and seeing the machine spit out exactly what I was thinking. It’s the closest I’ll ever get to being a wizard.

Just stop buying the courses. Just talk to the thing. That’s the only way to actually learn.