31 July 2025
When designing data-driven products, one of the biggest challenges is the overconfidence problem of “answer machines.” It’s not just a data science issue where algorithms return flawed conclusions—it’s also a design problem in how those answers are presented. Interfaces often imply there’s a single, definitive truth, offering results with unwarranted certainty and without proper context or expectation setting. As we learn to present machine-generated content, the real challenge for design is to introduce a sense of productive humility—acknowledging uncertainty, signaling nuance, and avoiding the false assurance of absolute answers.
“Performance isn’t the speed of the page, it’s the speed of the answer.” But it has to be the right answer. “I don’t know” is better than a wrong response.
When the algorithm is confused, let’s say so—or even better, let it ask for help. Machines don’t have to do all the work themselves. This can (and probably should) be a partnership.
The moment an algorithm fails is exactly the right time to supplement it with human judgment.
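As a rough sketch of what that partnership could look like in code (the confidence threshold, the predict_with_confidence call, and the review queue are all hypothetical stand-ins for illustration, not any particular product’s API):

```python
# A minimal sketch of "I don't know" plus human escalation.
# The model interface, threshold, and queue here are hypothetical.

CONFIDENCE_THRESHOLD = 0.85  # below this, don't pretend to be sure

def answer(query, model, review_queue):
    label, confidence = model.predict_with_confidence(query)

    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident enough to answer, but still surface the uncertainty.
        return {"answer": label, "confidence": confidence}

    # The algorithm is confused: say so, and ask a person for help.
    review_queue.append(query)
    return {
        "answer": None,
        "message": "I'm not sure yet. A person is taking a look.",
        "confidence": confidence,
    }
```

The point isn’t the particular threshold; it’s that the interface has an honest path for “not sure” instead of always returning its best guess as fact.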
Don’t hide what you’re doing with data, how it’s being sourced, or what it’s up to. If the logic is too opaque to let us be transparent, then we have to at least be as open as possible about the data that feeds it.
When something is amiss or when the stakes are high, get a second opinion. This is what we do with human conclusions, and we can do the same with the suggestions of machines, too. Poll across lots of systems; ask many competing algorithms for their opinions.
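A loose sketch of that polling idea, assuming a handful of interchangeable models that each expose a predict() method (the interface and the agreement threshold are illustrative, not a real library):

```python
# A rough sketch of a "second opinion": poll several competing
# models and only act automatically when they broadly agree.

from collections import Counter

def second_opinion(query, models, min_agreement=0.75):
    votes = Counter(model.predict(query) for model in models)
    top_answer, top_count = votes.most_common(1)[0]

    if top_count / len(models) >= min_agreement:
        return top_answer  # broad agreement: reasonably safe to proceed

    # The systems disagree: treat it as "something is amiss"
    # and escalate to human judgment instead of guessing.
    return None
```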
The machines know only what we feed them. The quality of the models they can make is directly correlated to the quality (and quantity) of the training data we give the software. And it takes a lot of training data—huge, vast amounts of training data—to get the deep-learning results that are proving most reliable.
Garbage in, garbage out. We strive to create data sets that are neutral in the aggregate.
Rooting out this kind of bias is hard and slippery work. Biases are deeply held—and often invisible to us. We have to work hard to be conscious of our unconscious—and doubly so when it creeps into data sets. This is a data-science problem, certainly, but it’s also a design problem.
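One small, hedged example of what “neutral in the aggregate” might mean in practice: a naive check of how each group is represented in a training set. The field name, the equal-share target, and the tolerance are assumptions for illustration; real bias audits go far deeper than counting rows.

```python
# A very simple sketch of checking aggregate representation in a data set.
# The "group" field and the 10% tolerance are illustrative assumptions.

from collections import Counter

def representation_report(records, field="group", tolerance=0.10):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # naive target: equal representation

    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {
            "share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        }
    return report


# Example: one group dominates the data, so both shares get flagged.
sample = [{"group": "a"}] * 80 + [{"group": "b"}] * 20
print(representation_report(sample))
```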
For too many companies, the question “is it good for users?” has given way to “how can we sell their data for a profit?”
Devices have interests that aren’t necessarily shared by their owners. For the most part, though, it’s not the machines we have to worry about. It’s the companies behind them. It’s designers like us.
Don’t be a cog in someone else’s bad solution to a dumb problem. Solve real problems, and be kind to each other.