An Intel® Vision Accelerator Design Product

Mustang-V100-MX8

Accelerate To The Future

Intel® Vision Accelerator Design with Intel® Movidius™ VPU

A Perfect Choice for AI Deep Learning Inference Workloads

Powered by Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit

  • Half-Height, Half-Length, Single-slot compact size
  • Low power consumption, approximately 25 W
  • Supports the OpenVINO™ toolkit; AI edge computing ready device
  • Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously.

Intel® Distribution of OpenVINO™ toolkit

The Intel® Distribution of OpenVINO™ toolkit is based on convolutional neural networks (CNN); it extends workloads across multiple types of Intel® platforms and maximizes performance.

It optimizes pre-trained deep learning models from frameworks such as Caffe, MXNet, TensorFlow, and ONNX. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models (from Caffe*, MXNet, TensorFlow*, ONNX*, and Kaldi*) for easier deployment across Intel® silicon products (CPU, GPU/Intel® Processor Graphics, FPGA, VPU).
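
As an illustrative sketch only (not part of the official datasheet), the snippet below shows how a converted IR model might be loaded and run with the classic OpenVINO™ Inference Engine Python API (approximately the 2020–2021 releases). The file names model.xml/model.bin, the dummy input data, and the "HDDL" device name (the plugin commonly used for multi-VPU cards such as the Mustang-V100-MX8) are assumptions that may vary with your OpenVINO™ version and installation.

```python
# Minimal sketch: run inference on an OpenVINO(TM) IR model with the classic
# Inference Engine Python API. "model.xml"/"model.bin" and the "HDDL" device
# name are assumptions for illustration, not values from the datasheet.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load an IR produced by the Model Optimizer (see the conversion sketch below).
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# "HDDL" targets the multi-VPU plugin; a single Myriad X device would use "MYRIAD".
exec_net = ie.load_network(network=net, device_name="HDDL")

# Dummy input matching the network's expected NCHW shape (replace with real data;
# assumes a 4-D image-style input).
n, c, h, w = net.input_info[input_name].input_data.shape
dummy = np.zeros((n, c, h, w), dtype=np.float32)

result = exec_net.infer(inputs={input_name: dummy})
print(result[output_name].shape)
```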

Features

  • Operating Systems
    Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
  • OpenVINO™ Toolkit
    • Intel® Deep Learning Deployment Toolkit
      - Model Optimizer
      - Inference Engine
    • Optimized computer vision libraries
    • Intel® Media SDK
    • *OpenCL™ graphics drivers and runtimes
    • Currently supported topologies: AlexNet, GoogLeNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1/V2, YOLO V2, ResNet-18/50/101
      - For more information on supported topologies, please refer to the official Intel® OpenVINO™ toolkit website.
  • High flexibility: the Mustang-V100-MX8 is built on the OpenVINO™ toolkit architecture, which allows models trained in frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to the optimized Intermediate Representation (IR); see the conversion sketch after this list.
  • *OpenCL™ is a trademark of Apple Inc. used by permission by Khronos
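
The conversion to IR mentioned in the high-flexibility bullet is performed by the OpenVINO™ Model Optimizer. As a hedged sketch, the snippet below shows one way to drive that conversion from Python; the mo.py install path, the frozen TensorFlow model file name, and the output directory are assumptions that depend on your OpenVINO™ installation, while FP16 precision reflects the data type the Myriad™ X VPU plugins expect.

```python
# Sketch: convert a trained model (here, an assumed frozen TensorFlow graph)
# into OpenVINO(TM) IR (.xml/.bin) by invoking the Model Optimizer script.
# The script path and file names below are assumptions for illustration.
import subprocess
import sys

MO_SCRIPT = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"  # assumed install path

cmd = [
    sys.executable, MO_SCRIPT,
    "--input_model", "frozen_inference_graph.pb",  # trained model to convert (assumed name)
    "--data_type", "FP16",                         # Myriad X VPUs run FP16 inference
    "--output_dir", "ir/",                         # where model.xml / model.bin are written
]

subprocess.run(cmd, check=True)
```

The resulting model.xml/model.bin pair is what the inference sketch earlier in this document loads.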

Applications

  • Machine Vision
  • Smart Retail
  • Surveillance
  • Medical Diagnostics

Dimensions (Unit: mm)

Specifications

Model Name                Mustang-V100-MX8
Main Chip                 Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
Operating Systems         Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
Dataplane Interface       PCI Express x4, compliant with PCI Express Specification V2.0
Power Consumption         Approximately 25 W
Operating Temperature     -20°C ~ 60°C
Cooling                   Active fan
Dimensions                Half-height, half-length, single-width PCIe
Operating Humidity        5% ~ 90%
Power Connector           *Reserved PCIe 6-pin 12 V external power connector
DIP Switch/LED Indicator  Identifies the card number

*A standard PCIe slot provides 75 W of power; the external power connector is reserved for users in case of a different system configuration.

Warning: DO NOT install the Mustang-V100-MX8 into the TANK AIoT Dev. Kit before shipment. It is recommended to ship them in their original boxes to prevent the Mustang-V100-MX8 from being damaged.

Ordering Information

Part No.              Description
Mustang-V100-MX8-R11  Computing Accelerator Card with 8 x Movidius Myriad X MA2485 VPUs, PCIe Gen2 x4 interface, RoHS

Packing List

1 x Full height bracket

1 x External power cable

1 x QIG