Last updated: May 15, 2018
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely still relevant. If it is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar. Understand more
May 2018
Assess: Worth exploring with the goal of understanding how it will affect your enterprise.

Machine-learning models are starting to creep into everyday business applications. When enough training data is available, these algorithms can address problems that might have previously required complex statistical models or heuristics. As we move from experimental use to production, we need a reliable way to host and deploy the models that can be accessed remotely and scale with the number of consumers. TensorFlow Serving addresses part of that problem by exposing a remote gRPC interface to an exported model; this allows a trained model to be deployed in a variety of ways. TensorFlow Serving also accepts a stream of models to incorporate continuous training updates, and its authors maintain a Dockerfile to ease the deployment process. Presumably, the choice of gRPC is to be consistent with the TensorFlow execution model; however, we’re generally wary of protocols that require code generation and native bindings.
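As a rough sketch of what calling that remote gRPC interface might look like from client code, the snippet below queries a served model using the generated TensorFlow Serving protobuf stubs. The model name "my_model", the "serving_default" signature, the input tensor name "input" and the port 8500 are illustrative assumptions, not details from the blip.

```python
# Minimal sketch of a TensorFlow Serving gRPC client.
# Assumptions (not from the blip text): the exported model is named "my_model",
# exposes the "serving_default" signature with an input tensor called "input",
# and the server listens on the default gRPC port 8500 (e.g. started from the
# Dockerfile the project maintains).
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the prediction request from the generated protobuf classes.
request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.model_spec.signature_name = "serving_default"
request.inputs["input"].CopyFrom(tf.make_tensor_proto([[1.0, 2.0, 3.0]]))

# Predict is the remotely exposed gRPC method; the second argument is a timeout in seconds.
response = stub.Predict(request, 10.0)
print(response.outputs)
```

The generated stubs and native protobuf bindings this sketch relies on are exactly the kind of code-generation dependency the caution about gRPC above refers to.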

Nov 2017
Assess: Worth exploring with the goal of understanding how it will affect your enterprise.


Published: Nov 30, 2017
