About

Our Story

Since our inception, we at Movidius have been driven by two fundamental beliefs: that algorithms and chip architecture are deeply interdependent, and that machine vision applications are only truly effective at the network edge, running right beside the sensors that provide their input.

Seeing no programmable, ultra-low-power vision chip designs on the market, we took it upon ourselves to create this missing link in embedded visual computing. Drawing on our past experience in semiconductors and ultra-low-power computing, we are humbled and proud to have recently become an industry leader in powering today's machine vision innovations.

How did Movidius drive this change in machine vision? From our prior experience, we recognized two things: hardware alone is too costly and too slow to evolve, especially as the benefits of Moore's Law decelerate, and the rapid innovation in machine vision algorithms means software programmability is required to drive product roadmaps. So the company focused on software-programmable fabrics optimized for maximum sustained pixel throughput per watt per dollar.

When the company was founded, there was no suitable processor-based solution, and indeed no licensable IP, on the market that could fulfill these requirements. This prompted us to develop our own processing architecture, designed specifically for the large-scale numerical workloads found in image and signal processing. We embraced parallelism to arrive at a programming methodology and architecture that hit a "sweet spot": sub-one-watt power dissipation for demanding workloads. The resulting Myriad 2 family of Vision Processing Units (VPUs) is based on this compute fabric, backed by a memory subsystem capable of feeding the processor array, along with hardware acceleration to support large-scale operations.

We will continue to strive to meet developers' needs for performance, programmability, and power as we support new and innovative embedded vision devices that push enormous amounts of pixel processing to the network edge, such as wearable devices, robots, and augmented & virtual reality headsets.