31 July 2025

Guiding principles to design data-driven products

When designing data-driven products, one of the biggest challenges is the overconfidence problem of “answer machines.” It’s not just a data science issue where algorithms return flawed conclusions—it’s also a design problem in how those answers are presented. Interfaces often imply there’s a single, definitive truth, offering results with unwarranted certainty and without proper context or expectation setting. As we learn to present machine-generated content, the real challenge for design is to introduce a sense of productive humility—acknowledging uncertainty, signaling nuance, and avoiding the false assurance of absolute answers.

Source: Design in the Era of the Algorithm


The principles

1. Favor accuracy over speed

“Performance isn’t the speed of the page, it’s the speed of the answer.” But it has to be the right answer. “I don’t know” is better than a wrong response.
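
One way to make “I don’t know” a first-class outcome is to let the product abstain when confidence is low. A minimal sketch, assuming a scikit-learn-style classifier that exposes predict_proba; the 0.85 threshold is an illustrative assumption, not a recommended value:

```python
def answer_or_abstain(model, x, threshold=0.85):
    """Return an answer only when the model is confident enough."""
    probabilities = model.predict_proba([x])[0]  # per-class confidences
    if probabilities.max() < threshold:
        return None  # surface "I don't know" in the UI instead of guessing
    return model.classes_[probabilities.argmax()]
```

The right threshold depends on the cost of a wrong answer; a medical product and a music recommender should tune it very differently.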

2. Allow for ambiguity

When the algorithm is confused, let’s say so—or even better, let it ask for help. Machines don’t have to do all the work themselves. This can (and probably should) be a partnership.
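
One concrete way to let the machine ask for help: when the top two candidates score nearly the same, return a clarifying question instead of a silent guess. A sketch; the 0.10 margin and the candidate format are illustrative assumptions:

```python
def resolve(candidates, margin=0.10):
    """candidates: (answer, score) pairs, sorted by score, descending."""
    (best, best_score), (runner_up, runner_score) = candidates[0], candidates[1]
    if best_score - runner_score < margin:
        # The algorithm is confused: say so, and ask for help.
        return {"type": "clarify",
                "question": f"Did you mean {best!r} or {runner_up!r}?"}
    return {"type": "answer", "value": best}
```

For example, resolve([("jaguar (the car)", 0.46), ("jaguar (the cat)", 0.43)]) yields a clarifying question rather than a coin flip.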

3. Add human judgment

The moment an algorithm fails is exactly the right time to supplement it with human judgment.
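
A common shape for this partnership is a human-in-the-loop fallback: when the automated path fails or abstains, the case goes to a review queue instead of shipping a wrong answer. A sketch; the review_queue object and its submit method are hypothetical stand-ins for whatever escalation channel the product has:

```python
def answer_with_fallback(automated_answer, case, review_queue):
    """Try the algorithm first; escalate to a person when it comes up empty."""
    result = automated_answer(case)
    if result is None:  # the algorithm failed or abstained
        ticket = review_queue.submit(case)  # hand the case to a human reviewer
        return {"status": "pending_human_review", "ticket": ticket}
    return {"status": "answered", "value": result}
```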

4. Advocate sunshine

Don’t hide what you’re doing with data, how it’s being sourced, or what it’s up to. If an algorithm’s logic is too opaque to explain, then we have to at least be as open as possible about the data that feeds it.

5. Embrace multiple systems

When something is amiss or when stakes are high, get a second opinion. This is what we do with human conclusions, and we can do the same with the suggestions of machines, too. Poll across lots of systems; ask many competing algorithms for their opinions.
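
The simplest version of polling many systems is a majority vote with a quorum, escalating when no consensus emerges. A sketch, assuming each system exposes a predict method (an illustrative interface, not any particular library):

```python
from collections import Counter

def majority_opinion(systems, x, quorum=2 / 3):
    """Ask several independent systems; act only on a clear majority."""
    votes = Counter(system.predict(x) for system in systems)
    answer, count = votes.most_common(1)[0]
    if count / len(systems) < quorum:
        return None  # no consensus: a good moment for human judgment
    return answer
```

Disagreement itself is a useful signal: it marks exactly the cases where principle 3 should kick in.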

6. Make it easy to contribute (accurate) data

The machines know only what we feed them. The quality of the models they can make is directly correlated to the quality (and quantity) of the training data we give the software. And it takes a lot of training data—vast amounts of it—to get the deep-learning results that are proving most reliable.
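
On the contribution side, “easy” and “accurate” can be reconciled by validating every submission before it can reach the training set. A sketch with hypothetical checks; real validators depend on the product’s data model:

```python
def accept_contribution(example, validators):
    """Run a contributed example through checks before it can train anything."""
    failures = [name for name, check in validators.items() if not check(example)]
    return {"accepted": not failures, "failed_checks": failures}

# Illustrative checks, not an exhaustive or standard set.
validators = {
    "has_label": lambda ex: bool(ex.get("label")),
    "text_long_enough": lambda ex: len(ex.get("text", "")) >= 10,
}
```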

7. Root out bias and bad assumptions

Garbage in, garbage out. We strive to create data sets that are neutral in the aggregate.

Rooting out this kind of bias is hard and slippery work. Biases are deeply held—and often invisible to us. We have to work hard to be conscious of our unconscious—and doubly so when it creeps into data sets. This is a data-science problem, certainly, but it’s also a design problem.
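
A first pass at auditing for “neutral in the aggregate” is to compare each group’s share of the data against a reference share. A sketch; the field name, reference shares, and 5% tolerance are illustrative assumptions, and passing this check is necessary but nowhere near sufficient:

```python
from collections import Counter

def representation_gaps(records, field, reference_shares, tolerance=0.05):
    """Flag groups whose share of the data drifts from a reference share."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"actual": round(actual, 3), "expected": expected}
    return gaps  # a non-empty result marks a skew worth investigating
```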

8. Give people control over their data

For too many companies, the question “is it good for users?” has given way to “how can we sell their data for a profit?”

9. Be loyal to the user

Devices have interests that aren’t necessarily shared by their owners. For the most part, though, it’s not the machines we have to worry about. It’s the companies behind them. It’s designers like us.

10. Take responsibility

Don’t be a cog in someone else’s bad solution to a dumb problem. Solve real problems, and be kind to each other.

Tags

  • machine learning
  • AI

Related collections

  • Blueprint for an AI Bill of Rights by The White House (5 principles)
  • Principles for responsible AI by Microsoft (6 principles)
  • Google AI Principles by Google (7 principles)
  • 7 Principles of Efficient Human Robot Interaction by Michael A. Goodrich (7 principles)