Press Clippings
Media contact
IEI Integration Corp.
TEL: +886-2-8691-6798
+886-2-2690-2098
FAX: +886-2-6616-0028
IEI Technology USA
TEL: +1-909-595-2819
FAX: +1-909-595-2816
IEI Integration China
TEL: +86-21-3462-7799
FAX: +86-21-3462-7797
IEI Integration Corp. Tokyo Branch
TEL: +81-3-5901-9735
FAX: +81-3-5901-9736
IEI Launches Mustang-F100-A10 Supporting the OpenVINO™ Toolkit for AI Deep Learning Applications
IEI Event News


Features | Specifications | Ordering Information | Dimensions | Packing List
 
Features
 
Half-Height, Half-Length, Double-Slot

Power-efficient, low latency

Supports the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit; ready for AI edge computing devices

FPGAs can be optimized for different deep learning tasks

Intel® FPGAs support multiple floating-point precisions and inference workloads

 

OpenVINO™ toolkit

The OpenVINO™ toolkit is based on convolutional neural networks (CNN); it extends workloads across Intel® hardware and maximizes performance.

It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR) binary, and then run the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
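To make this flow concrete, below is a minimal sketch of the two steps: converting a pre-trained model to the IR format with the Model Optimizer, then running it through the Inference Engine on an FPGA target with CPU fallback. It assumes a 2020-era release of the OpenVINO™ Python API (openvino.inference_engine); the exact class names, the availability of the FPGA plugin, and the file names and input shape used here vary by toolkit version and are illustrative only.

# Step 1 (shell, illustrative): convert a pre-trained model to IR with the
# Model Optimizer, producing frozen_model.xml / frozen_model.bin:
#   python mo.py --input_model frozen_model.pb --output_dir ir/
#
# Step 2: run the IR through the Inference Engine. Device strings such as
# "CPU", "GPU", or "HETERO:FPGA,CPU" (FPGA with CPU fallback) select the
# target hardware; availability depends on the installed plugins.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load the IR produced by the Model Optimizer (file names are placeholders).
net = ie.read_network(model="ir/frozen_model.xml", weights="ir/frozen_model.bin")
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Dummy NCHW input; replace with a real preprocessed image matching the
# network's expected input shape (1x3x224x224 is only an assumption).
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)

result = exec_net.infer(inputs={input_name: dummy})
print(output_name, result[output_name].shape)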

 

IEI Mustang-F100-A10

In AI applications, training a model is only half of the story. Designing a real-time edge device is a crucial task for today’s deep learning applications.

FPGA is short for field-programmable gate array. FPGAs can run AI inference quickly and are well suited for real-time applications such as surveillance, retail, medical, and machine vision. With the advantage of low power consumption, they are ideal for AI edge computing devices, reducing total power usage and providing longer duty time for rechargeable edge computing equipment. AI applications at the edge must be able to make judgements without relying on processing in the cloud, due to bandwidth constraints and data privacy concerns. Therefore, handling AI tasks locally is becoming more important.

In this era of explosive AI growth, many workloads rely on servers or devices that need a larger space and power budget to host accelerators and deliver sufficient computing performance.

In the past, solution providers have upgraded hardware architectures to support modern applications, but this has not addressed the question of how to minimize physical space, which remains limited when tasks cannot be processed on an edge device.

We are pleased to announce the launch of the Mustang-F100-A10, a small-form-factor, low-power, low-latency, FPGA-based AI edge computing solution compatible with the IEI TANK-870AI compact IPC, for deployments with a limited space and power budget.


Specifications
Model Name Mustang-F100-A10
Main FPGA Intel® Arria® 10 GX1150 FPGA
Operating Systems Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned by the end of 2018; more operating systems coming soon)
Voltage Regulator and Power Supply Intel® Enpirion® Power Solutions
Memory 8 GB on-board DDR4
Dataplane Interface PCI Express x8
Compliant with PCI Express Specification V3.0
Power Consumption < 60W
Operating Temperature 5°C~60°C (ambient temperature)
Cooling Active fan
Dimensions Standard Half-Height, Half-Length, Double-Slot
Operating Humidity 5% ~ 90%
Power Connector *Reserved PCIe 6-pin 12V external power connector
DIP Switch/LED Indicator Identifies the card number

*A standard PCIe slot provides 75W of power; this connector is reserved for users with different system configurations.


Dimensions (Unit:mm)


Ordering Information
Part No. Description
Mustang-F100-A10-R10 PCIe FPGA highest-performance accelerator card with Arria 10 GX1150, supporting 8 GB DDR4 2400 MHz, PCIe Gen3 x8 interface

Packing List
Item Qty
Full height bracket 1
External power cable 1
QIG 1

 
 