Fathom: Embedded Neural Network Compute Framework

Movidius is pleased to introduce the new Fathom machine learning software framework. Fathom converts neural networks trained offline into embedded neural networks running on the ultra-low power Myriad 2 VPU. By targeting Myriad 2, Fathom makes it easy to profile, tune and optimize your standard TensorFlow or Caffe neural network. Fathom allows your network to run in embedded environments such as smart cameras, drones, virtual reality headsets and robots. Fathom takes Deep Neural Networks where they have never gone before: running at high speed and ultra-low power at the network edge.

Movidius is also introducing the Fathom Neural Compute Stick -- the first product of its kind -- a modular deep learning accelerator in the form of a standard USB stick. Featuring a full-fledged Myriad 2 VPU, the Fathom Neural Compute Stick not only enables rapid prototyping, but also delivers high levels of neural network compute to existing devices via a USB port. Thanks to the ultra-low power Myriad 2 VPU, the Fathom Neural Compute Stick does not require an external power supply, and can accelerate neural network performance for a wide range of devices.

Key Benefits

The world's first Deep Neural Network acceleration module
Brings neural network compute to existing devices through a standard USB port
Enables rapid prototyping, validation, and fine-tuning of neural network models built in standard frameworks
Optimizes neural network models for the Myriad 2 VPU, enabling commercial deployment

Intelligent Deep Learning Framework

The Fathom software framework translates trained neural networks from a PC environment to an embedded environment. Fathom accepts both Caffe and TensorFlow networks and intelligently optimizes them to run on the ultra-low power Myriad 2 VPU. Developers simply provide a Caffe or TensorFlow network description along with the trained network weights, and Fathom quickly delivers detailed information on individual layer performance, any errors identified, and overall inference time. The Fathom framework makes deployment of neural networks to embedded devices faster and easier than ever before.
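The page does not document Fathom's programmatic interface, so the following is only a sketch, in Python, of how that workflow might look. The fathom module, its convert function, the report fields and the file names are all hypothetical placeholders used for illustration.

# Hypothetical sketch of the Fathom workflow described above.
# The "fathom" module, its functions and the report fields are not a
# documented API; they are placeholders for illustration only.
import fathom  # hypothetical package name

report = fathom.convert(
    network='deploy.prototxt',          # Caffe network description
    weights='trained.caffemodel',       # trained network weights
    output='network_for_myriad2.graph'  # binary tuned for the Myriad 2 VPU
)

# Fathom is described as reporting per-layer performance, any errors
# identified and overall inference time; a report object might expose
# that information as follows.
for layer in report.layers:
    print(layer.name, layer.time_ms, layer.errors)
print('Estimated inference time (ms):', report.total_time_ms)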

Prototype, Validate, Compute

The Fathom Neural Compute Stick is the world's first discrete Deep Learning accelerator. Featuring the Myriad 2 VPU, the Neural Compute Stick can run neural networks on-device in real time while consuming only about one watt of power. The Neural Compute Stick can be used either as a prototyping and validation tool or as a deployment tool for neural networks on devices with an ARM host and a standard USB port. The Fathom Neural Compute Stick is so efficient that it does not require any external power supply.
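As a rough illustration of that deployment path, the sketch below shows how a host application on an ARM device might drive the stick over USB from Python. The fathom_device module and every call on it are hypothetical placeholders; only the NumPy usage is standard.

# Hypothetical host-side inference flow for the Fathom Neural Compute Stick.
# The "fathom_device" module and its methods are placeholders, not a
# documented API; only the NumPy calls are real.
import numpy as np
import fathom_device  # hypothetical host-side library

# Open the first Compute Stick found on the USB bus.
stick = fathom_device.open(0)

# Load a network that Fathom has already optimized for the Myriad 2 VPU.
graph = stick.load_graph('network_for_myriad2.graph')

# Run inference on one preprocessed frame and read back the result.
frame = np.zeros((224, 224, 3), dtype=np.float16)  # placeholder input
scores = graph.run(frame)
print('Top class index:', int(np.argmax(scores)))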

Breaking New Ground in Computing

Deep Neural Networks (DNNs) are solving some of the most challenging problems in modern computing. DNNs have been shown to drastically outperform traditional approaches on tasks such as image classification, voice recognition and complex problem solving. DNNs have the potential to make our devices more aware, more proactive and more useful than ever before.

What is Fathom?

Fathom is the new software framework from Movidius that helps developers translate a neural network description (as defined in TensorFlow or Caffe) into optimized code that runs directly on the Myriad 2 VPU, using optimized library functions specifically designed for Deep Neural Networks.
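For TensorFlow, the network description and trained weights can be exported together as a single frozen graph file before being handed to Fathom. The sketch below uses standard TensorFlow 1.x APIs; the checkpoint, output node and file names are illustrative, and the Fathom step itself is not shown.

# Minimal sketch: exporting a trained TensorFlow model as a frozen graph,
# i.e. a single file containing both the network description and the
# trained weights. File and node names below are illustrative.
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # Restore the trained network from an existing checkpoint.
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    # Fold the trained weights into the graph definition so the network
    # is described by one self-contained protobuf file.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output'])

    with tf.gfile.GFile('network_frozen.pb', 'wb') as f:
        f.write(frozen.SerializeToString())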

What is the Fathom Neural Compute Stick?

The new Fathom Neural Compute Stick is the world’s first embedded neural network accelerator. It runs fully trained neural networks on the Myriad 2 VPU at ultra-low power, nominally under 1 watt.

When is the Neural Compute Stick available?

Movidius is now building 1,000 units to deliver to select customers and partners as the first phase of deployment. If you are interested in learning more, get in touch with Movidius via this website’s Contact link.

What machine intelligence applications will benefit from Fathom and the Neural Compute Stick?

With our new software framework, developers will find it easier to build applications that require Deep Neural Networks to run at the network edge. For example, applications that will become easier to deploy with these new tools from Movidius include scene intelligence and object classification in drones, consumer cameras, security cameras, and service robotics.
