Today's machine learning (ML) workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands. Ray is a unified framework for scaling AI and Python code from laptop to cluster. At its core, Ray is a general-purpose distributed computing engine, packaged with a set of AI libraries that simplify common ML tasks. Because it integrates with frameworks such as PyTorch and TensorFlow, it can serve as the foundation of large-scale ML platforms. Companies like OpenAI and ByteDance use Ray heavily for model training and inference, and we use its AI libraries for distributed training and hyperparameter tuning in our own projects. We recommend trying Ray when building scalable ML projects.
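To give a feel for Ray's programming model, here is a minimal sketch of its core task API: an ordinary Python function becomes a distributed task via the `@ray.remote` decorator and is scheduled across available workers. The `square` function and the workload are illustrative placeholders, not taken from Ray's documentation.

```python
import ray

ray.init()  # start Ray locally; connects to a cluster if one is configured

# Hypothetical CPU-bound function, used purely for illustration.
@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel; .remote() returns futures immediately.
futures = [square.remote(i) for i in range(8)]

# Block until all results are available.
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same decorator-based pattern extends to classes (actors) and, through Ray's AI libraries, to distributed training and hyperparameter tuning.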