# Machine Learning Gone Wrong

```{admonition} But just because you can, doesn't mean you should.
:class: warning
The classic citation for this argument is from Jurassic Park.
```


**There are many examples of ML applied wrongly, and practitioners I talk to spend a lot of time keeping their data science teams from replicating some notable breakdowns:**

- [Google Flu Trends](https://gking.harvard.edu/files/gking/files/0314policyforumff.pdf) consistently over-predicted flu prevalence
- IBM's Watson tried to recommend cancer treatments. How'd it go? According to internal documents: "This product is a piece of sh–."
- Amazon's engineers used ML to evaluate applicants but taught the model [that males were automatically better](https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine)
- Chatbots have had many struggles. Here's Microsoft's [attempt at speaking like the youths](https://medium.com/asecuritysite-when-bob-met-alice/machine-learning-gone-bad-990e132024ea):

```{image} img/data_fallacies_to_avoid.jpg
:width: 800px
```

```{note}
The good news is that these problems can be avoided. Understanding how is something we will defer until we have a better understanding of the methods and processes we will follow in an ML project.
```