I’m teaching Business Communication this semester, and I just watched 40 students discover something that’s going to stick with them longer than any lecture I could give.
The assignment seemed straightforward: You’re a shift supervisor at a local family restaurant. The health department just shut you down for 5 days. Your team is counting on you to show leadership. Write a message to your employees.
Oh, and use AI however you want.
They jumped in. Prompts flew. Within minutes, they had polished messages from ChatGPT that hit all the right notes—empathy, resources, open-door policies. When I asked them to rate AI’s performance, the numbers rolled in: 7s and 8s across the board.
“Pretty good, right?” I asked.
Then I threw them a curveball.
“Let me tell you about the people you’re actually writing to.”
There’s Jake, the high school kid who works evenings. Lives at home. Parents feed him. He doesn’t need a food bank—he needs to know if his job is safe and when he can pick up his hours again because he’s saving for a car.
There’s Maria, a single mom whose rent is due in 5 days. She lives paycheck to paycheck—actually, more like *shift to shift*. Those daily tips are how she feeds her kids. Right now, she’s not worried. She’s *panicking.*
And there’s Carlos, who’s been working in kitchens for 20 years. He sends money home every month. He has some savings, sure, but he comes from a culture where asking for help isn’t just uncomfortable—it’s shameful. Self-sufficiency isn’t just a value; it’s an identity.
AI’s generic message mentioned food banks and invited people to “reach out if they need help.”
For Jake? Useless.
For Maria? Vague to the point of cruelty.
For Carlos? Culturally tone-deaf.
I asked them to revise the message—same AI tools, but now with actual human judgment layered on top.
This time:
- Jake got a message with concrete return-to-work dates and reassurance about his position.
- Maria got specific information: the food bank’s address, hours, and which bus routes get her there, plus a promise she’d get priority shifts when they reopened.
- Carlos got a Spanish translation that mentioned resources delicately—“should anyone need them”—respecting the cultural weight of self-reliance.
Then I asked them to rate AI’s original response again.
Not a single score broke 5.
You could see it click. That moment when the technology went from “wow, this is amazing” to “wait, this is just a starting point.”
Because here’s what AI couldn’t know:
- That Jake scrolls his bank account dreaming about a used Honda
- That Maria’s landlord doesn’t accept empathy as payment
- That Carlos would rather skip meals than admit he’s struggling
AI gave them competence. Human judgment gave them compassion.
And that’s the whole point, isn’t it?
We’re in a world split between humans who understand how to use AI and humans who don’t. The students who will thrive aren’t the ones who can write the best prompts—they’re the ones who know what questions AI can’t even think to ask.
My students walked in thinking AI was going to make communication easier.
They walked out understanding it makes lazy communication easier.
But good communication? That still requires knowing that behind every employee ID number is a person with a specific life, specific fears, and specific needs that no language model has ever met.
The assignment continues. But that lesson? Already complete.
-----
Teaching Business Communication at Diablo Valley College. Still learning alongside my students. Still convinced the most powerful technology is knowing when to override it.