Help us create better digital products

We are committed to mitigating biases in our products

At iZotope, we are constantly striving to serve our customers better. We design our products and the algorithms they include to best suit our customers' needs. However, we know that algorithms and the results they produce can be biased in ways we do not design or intend. For example, when we design a noise reduction algorithm for dialogue, we intentionally design it to remove noise from the human voice. We do not expect it to remove noise from a guitar, but if it systematically removes noise poorly for Japanese speakers, that is a bias we should identify and mitigate. If bias in a feature or product we have developed makes it less usable for some of our users, or worse, alienates them because it doesn't account for the results they are trying to achieve, we are committed to trying to mitigate that bias.

Where does bias come from?

Bias in algorithms can come from many sources, including the design process used to create and validate them, the data sets used to train them (in the case of machine learning), and sometimes even the limits of the technology itself.

Bias in design

One of our chief methods for refining our algorithms is working with our beta users. Beta users are often the first to notice deficiencies in our algorithms because they apply our features to source material and use cases far broader than our initial internal testing covers. However, if we only show a new de-masking algorithm to engineers who mix symphonic music, we will miss a lot of feedback that would help us improve the algorithm for mixing hip-hop. After release, we would likely learn too late that the new algorithm doesn't work as well for hip-hop producers.

Bias in data sets and machine learning

When we design a machine learning algorithm that leverages a data set to solve a problem, the solutions the algorithm can provide will reflect the biases in that data set. In a way, bias is the essence of how a machine learning algorithm works: we use a data set to bias an algorithm toward the results we want and away from the results we do not. However, there are biases we do not want in our products and algorithms. If the data set we use to train our dialogue noise reduction algorithm contains no Japanese speakers, it is likely to perform worse for our Japanese users.
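To make this concrete, here is a minimal sketch (in Python, not iZotope's actual tooling; the data layout and the metric name snr_improvement_db are hypothetical) of the kind of audit that can surface this problem: counting how well each language is represented in a data set and comparing measured model performance across groups.

```python
# Minimal sketch of a per-language bias audit for a speech data set
# and a trained noise-reduction model. All names and numbers here are
# illustrative, not real iZotope data.
from collections import Counter
from statistics import mean

# Each sample records the speaker's language and the model's measured
# noise-reduction quality on that sample (higher SNR gain is better).
samples = [
    {"language": "English",  "snr_improvement_db": 12.1},
    {"language": "English",  "snr_improvement_db": 11.4},
    {"language": "Japanese", "snr_improvement_db": 6.2},
    {"language": "Japanese", "snr_improvement_db": 5.8},
]

# 1) Representation audit: how many samples per language?
counts = Counter(s["language"] for s in samples)
print("Samples per language:", dict(counts))

# 2) Performance audit: does quality differ systematically by language?
by_language = {}
for s in samples:
    by_language.setdefault(s["language"], []).append(s["snr_improvement_db"])
for language, scores in sorted(by_language.items()):
    print(f"{language}: mean SNR gain {mean(scores):.1f} dB "
          f"over {len(scores)} samples")

# A large gap between groups (here, English vs. Japanese) flags a bias
# worth investigating; under-representation in the training data is one
# common cause.
```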
