Experiment tracking tools for machine learning
The day-to-day work of machine learning often boils down to a series of experiments: selecting a modeling approach and network topology, curating training data, and applying various optimizations or tweaks to the model. Because many of these models remain difficult to interpret or explain, data scientists must rely on experience and intuition to hypothesize changes and then measure the impact those changes have on overall model performance. As these models have become increasingly common in business systems, several experiment tracking tools for machine learning have appeared to help investigators record these experiments and work through them methodically. Although no clear winner has emerged, tools such as MLflow and Weights & Biases and platforms such as Comet and Neptune have brought rigor and repeatability to the machine learning workflow. They also facilitate collaboration, helping turn data science from a solitary endeavor into a team sport.
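The core idea these tools share is simple: every training run is recorded with its hyperparameters and resulting metrics, so runs can later be compared and the best configuration retrieved. The sketch below illustrates that idea with a minimal file-based tracker in plain Python; the `ExperimentTracker` class and its JSON-per-run layout are illustrative assumptions, not the API of MLflow, Weights & Biases, or any other specific tool.

```python
import json
from pathlib import Path


class ExperimentTracker:
    """Minimal sketch of an experiment tracker: one JSON record per run."""

    def __init__(self, log_dir="runs"):
        self.log_dir = Path(log_dir)
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self._counter = 0

    def log_run(self, params, metrics):
        """Record one experiment: its hyperparameters and resulting metrics."""
        self._counter += 1
        record = {"run_id": self._counter, "params": params, "metrics": metrics}
        run_file = self.log_dir / f"run_{self._counter:04d}.json"
        run_file.write_text(json.dumps(record, indent=2))
        return record

    def best_run(self, metric, higher_is_better=True):
        """Compare all recorded runs and return the one with the best metric."""
        runs = [json.loads(p.read_text()) for p in self.log_dir.glob("run_*.json")]
        chooser = max if higher_is_better else min
        return chooser(runs, key=lambda r: r["metrics"][metric])
```

A data scientist would call `log_run` after each training experiment and `best_run("accuracy")` when deciding which configuration to promote; real tracking tools add far more on top of this, such as artifact storage, dashboards, and multi-user collaboration.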