Facebook and Microsoft have announced ONNX (PyTorch, AML, Caffe2)

Facebook and Microsoft have announced ONNX, the Open Neural Network Exchange, in their blog posts this morning. The format facilitates the conversion of models between PyTorch and Caffe2, reducing the time it takes to move from research to production.
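As a concrete illustration (not from the announcement itself), here is a minimal sketch of exporting a PyTorch model to the ONNX format; the toy two-layer model, input shape, and file name are placeholders.

```python
import torch
import torch.nn as nn

# Toy model standing in for a research model developed in PyTorch.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model.eval()

# Export works by tracing: a sample input of the right shape is run
# through the model, and the recorded graph is written out as ONNX.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx")
```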

Facebook

Facebook has long maintained a distinction between its research and applied machine learning groups. Facebook AI Research (FAIR) handles research on the bleeding edge, while Applied Machine Learning (AML) brings intelligence to Facebook's products.

PyTorch

The choice of deep learning framework underlying this distinction is key. FAIR is accustomed to working with PyTorch, a deep learning framework optimized for achieving state-of-the-art research results, irrespective of resource constraints.

Unfortunately, in the real world, most of us are limited by the computational capabilities of our smartphones and computers.

AML

When AML wants to build something for distribution at scale, it opts for Caffe2. Caffe2 is also a deep learning framework, but one optimized for resource efficiency, particularly with Caffe2Go, which is tuned to run deep learning models live on mobile devices.

Working in conjunction, Facebook and Microsoft's announcement helps people easily convert models created in PyTorch into Caffe2 models. By reducing the barriers to moving between these two frameworks, the two companies can effectively improve the distribution of research and speed up the entire path from model to market.
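On the receiving side of that hand-off, here is a sketch of loading the exported file with Caffe2's ONNX backend and running inference. The module path caffe2.python.onnx.backend follows the ONNX/Caffe2 tutorials of the period and is an assumption here, as is the input shape.

```python
import numpy as np
import onnx
import caffe2.python.onnx.backend as caffe2_backend

# Load and sanity-check the graph written by torch.onnx.export above.
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)

# Prepare a Caffe2 representation of the ONNX graph and run it.
rep = caffe2_backend.prepare(onnx_model)
outputs = rep.run([np.random.randn(1, 10).astype(np.float32)])
print(outputs[0])
```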

Unfortunately, not all companies use the same PyTorch and Caffe2 combination. A great deal of research is done in TensorFlow and other frameworks.

Outside of a research context, others have worked to make it easier to convert machine learning models into formats optimized for specific devices.

Apple’s CoreML, for example, helps developers convert a very limited number of models. At this stage, CoreML does not support TensorFlow, and the process of creating custom converters seems rather complicated, likely ending in disappointment.
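For comparison, converting to CoreML typically goes through Apple's coremltools package. The sketch below uses the Keras converter from coremltools of that era; both file names are placeholders, and the exact signature is an assumption based on contemporary documentation.

```python
import coremltools

# Convert a saved Keras model (one of the few supported formats)
# to Apple's .mlmodel format. Both paths are hypothetical.
coreml_model = coremltools.converters.keras.convert("my_model.h5")
coreml_model.save("MyModel.mlmodel")
```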

As companies like Google and Apple push to gain greater control over hardware-specific optimization, interoperability will be important to keep an eye on.
