The machine learning world has shifted its emphasis slightly, from exploring what models are capable of to understanding how they arrive at their outputs. Concerns about introducing bias or overgeneralizing a model's applicability have given rise to interesting new tools such as the What-If Tool (WIT). This tool helps data scientists dig into a model's behavior and visualize the impact that various features and data sets have on its output. Introduced by Google and available through either TensorBoard or Jupyter notebooks, WIT simplifies the tasks of comparing models, slicing data sets, visualizing facets and editing individual data points. Although WIT makes it easier to perform these analyses, they still require a deep understanding of the mathematics and theory behind the models; it is a tool for data scientists to gain deeper insight into model behavior. Naive users shouldn't expect any tool to remove the risk, or minimize the damage, of a misapplied or poorly trained algorithm.