IEI Mustang-F100-A10
In AI applications, training the model is only half the story. Deploying it on a real-time edge device is an equally crucial task for today’s deep learning applications.
FPGA stands for field-programmable gate array. An FPGA can accelerate AI inference and is well suited to real-time applications such as surveillance, retail, medical, and machine vision. Its low power consumption also makes it a natural fit for AI edge computing devices, reducing total power usage and extending the duty time of rechargeable edge equipment. Because of bandwidth constraints and data privacy concerns, AI applications at the edge must be able to make judgements without relying on processing in the cloud. Handling AI tasks locally is therefore becoming increasingly important.
In this era of explosive AI growth, many workloads still rely on servers or devices that require a large space and power budget to house the accelerators needed for sufficient computing performance.
In the past, solution providers have upgraded hardware architectures to support modern applications, but this has not addressed the question of minimizing physical space; and if a task cannot be processed on the edge device itself, space remains a limiting factor.
We are pleased to announce the launch of the Mustang-F100-A10, a small-form-factor, low-power, low-latency FPGA-based AI edge computing solution, compatible with the IEI TANK-870AI compact IPC, for those with a limited space and power budget.