Session 13
Explaining to Humans
Technical Communication for Non-Technical Audiences
Concept Lesson
Your fraud detection model is ready. It has been trained on 2.3 million transactions from a Lagos-based fintech, tested on a holdout set of 580,000 transactions (a holdout set — also called a test set — is a portion of data you deliberately set aside and do not use during training; you use it only at the end to check how well the model performs on data it has never seen before, like studying with practice tests but saving one final exam to see whether you actually learned), and it achieves 90% precision (precision: of everything the model flagged, how many were actually positive? Recall: of all the actual positives, how many did the model catch? We covered these in detail in Week 5). The bank's risk committee — the CFO, the head of operations, and the compliance officer, none of whom have written a line of code in their lives — have given you 10 minutes in their Thursday meeting to make your case. They want to know one thing: should they deploy this model, and what happens if they do? You have spent three months building this system, and now your entire project hinges on whether you can explain it clearly to people who think in terms of risk exposure, customer complaints, and regulatory fines — not F1 scores and confusion matrices.
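For readers who want to see where metrics like that 90% come from, here is a minimal sketch. The confusion-matrix counts below are hypothetical and purely illustrative; they are not the real figures from the fintech dataset described above.

```python
# Hypothetical holdout-set counts (illustrative only, not the
# lesson's actual evaluation results).
true_positives = 900    # fraudulent transactions correctly flagged
false_positives = 100   # legitimate transactions incorrectly flagged
false_negatives = 100   # fraudulent transactions the model missed

# Precision: of everything the model flagged, how many were actually fraud?
precision = true_positives / (true_positives + false_positives)

# Recall: of all the actual fraud, how much did the model catch?
recall = true_positives / (true_positives + false_negatives)

print(f"Precision: {precision:.0%}")
print(f"Recall:    {recall:.0%}")
```

With these made-up counts, both metrics come out to 90% — which is exactly the kind of number you then need to translate for the risk committee.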
The core skill here is finding the right abstraction level for each audience. Your CEO does not need to know about gradient descent, training configurations, or the difference between regularization techniques. But they absolutely need to know why the model sometimes flags legitimate transactions as fraudulent, how often that happens, and what the cost of those errors is in customer churn. The principle is simple: match your explanation to what the listener needs to make a decision. A useful structure for any stakeholder presentation is three questions: What does it do? How well does it work? What are its limitations? If you answer those three questions clearly and honestly, you have done your job. For instance, when you tell the risk committee that the model is "like an experienced fraud analyst who has reviewed 100,000 cases and can spot patterns a junior analyst would miss, but who also occasionally flags a normal purchase because it looks unusual," you have communicated capability, mechanism, and failure mode in one sentence without using any technical vocabulary.
Honesty about uncertainty is not a liability — it is what builds trust. Consider the difference: "Our AI will solve your fraud problem" versus "Our model catches 90 out of every 100 fraudulent transactions, and it incorrectly flags about 3 out of every 100 legitimate ones. That means for every 10,000 transactions, roughly 300 customers will have a payment temporarily blocked. We recommend a quick manual review queue so those customers are inconvenienced for minutes, not hours." The second statement is longer, harder to deliver, and less exciting. But it is the one that lets the CFO model the customer support cost, the compliance officer assess the regulatory exposure, and the head of operations plan the manual review workflow. When you hide or downplay limitations, you do not just erode trust — you make it impossible for decision-makers to plan around the real behavior of your system. The most effective ML communicators are the ones who present limitations as clearly as capabilities, because that clarity is what gives an organization the confidence to actually rely on your work.
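The translation in that second statement — from model rates to "roughly 300 customers per 10,000 transactions" — is simple arithmetic worth making explicit. A minimal sketch, using the illustrative rates above and assuming fraud is rare enough that nearly all transactions can be treated as legitimate:

```python
recall = 0.90                # fraction of fraudulent transactions caught
false_positive_rate = 0.03   # fraction of legitimate transactions flagged

transactions = 10_000
# Fraud is rare, so treating all 10,000 transactions as legitimate
# gives a simple, slightly conservative estimate of blocked customers.
blocked_legit = round(transactions * false_positive_rate)

print(f"Roughly {blocked_legit} customers per {transactions:,} "
      f"transactions will have a payment temporarily blocked.")
```

This is the number the CFO can cost out: each blocked customer is a potential support ticket, and the manual review queue determines how long each one stays blocked.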
A common mistake is to assume that non-technical audiences cannot handle nuance. They can. What they cannot handle is jargon, unexplained acronyms, or presentations where the speaker buries caveats in footnotes hoping nobody notices. Another mistake is over-hyping: telling a bank board that your model "uses advanced deep learning" when a logistic regression would have worked just as well. If the board later learns you overstated the technology, you have lost credibility permanently. The right approach is to be precise about what your system does, honest about what it does not do, and concrete about the numbers — always tying performance metrics back to business outcomes that your audience cares about. If your recall is 70%, do not just say the number. Say: "Out of every 100 fraudulent transactions, our system catches 70. The remaining 30 will need to be caught by your existing manual review process. Here is how we recommend supplementing the model."
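That last recommendation — always pairing a metric with its plain-language translation — can even be automated so every report states it the same way. A hypothetical helper, following the framing suggested above (the function name and wording are illustrative, not part of any standard library):

```python
def describe_recall(recall: float, per: int = 100) -> str:
    """Turn a recall metric into the plain-language sentence
    recommended in the lesson: what is caught, and what is not."""
    caught = round(recall * per)
    missed = per - caught
    return (f"Out of every {per} fraudulent transactions, our system "
            f"catches {caught}. The remaining {missed} will need to be "
            f"caught by your existing manual review process.")

print(describe_recall(0.70))
```

The point is not the code itself but the discipline it encodes: the caveat travels with the number, so it cannot be buried in a footnote.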
Guided Exercises
Discussion Prompt
Think about the last three AI headlines you read. Were they over-hyped ("AI will revolutionize healthcare"), over-fearful ("AI will destroy all jobs"), or balanced? Why do you think public communication about AI tends to swing between extremes? How can someone with strong quantitative and verbal reasoning skills push back against both hype and fear with evidence and clear language?
Key Takeaway
Clear communication is not a soft skill — it is a technical skill. If you cannot explain your model's behavior, its error rates, and its limitations to the people making deployment decisions, your model might as well not exist. The engineer who can build a model and the engineer who can explain it to a non-technical stakeholder are two very different people — and the second one is far more valuable.