
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
May 2018
Assess

PyTorch is a complete rewrite of the Torch machine learning framework from Lua to Python. Although still new and immature compared to TensorFlow, programmers find PyTorch much easier to work with. Because of its object orientation and native Python implementation, models can be expressed more clearly and succinctly and debugged during execution. Although many deep learning frameworks have emerged recently, PyTorch has the backing of Facebook and a broad range of partner organisations, including NVIDIA, which should ensure continued support for CUDA architectures. ThoughtWorks teams find PyTorch useful for experimenting with and developing models but still rely on TensorFlow's performance for production-scale training and classification.
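
To illustrate why models read clearly in PyTorch and can be debugged while they run, here is a minimal sketch of a model defined as an ordinary Python class. The class name, layer sizes and random data are hypothetical, chosen only for illustration, not taken from any particular ThoughtWorks project.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    # A hypothetical two-layer classifier; the dimensions are illustrative only.
    def __init__(self, in_features=784, hidden=128, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Execution is eager, so ordinary Python debugging works here,
        # e.g. print(h.shape) or breakpoint(), while the model is running.
        return self.fc2(h)

model = TinyClassifier()
logits = model(torch.randn(32, 784))            # immediate, imperative execution
targets = torch.randint(0, 10, (32,))           # dummy labels for illustration
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()                                  # gradients computed on the fly
```

Because the model is just a Python object and runs eagerly, the usual tools (print statements, debuggers, stack traces) apply directly during execution, which is the "debugged during execution" property described above.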

Nov 2017
Assess
