Arm Project Trillium
Built from the ground up for machine learning and object detection, Arm Project Trillium enables a new era of ultra-efficient machine learning inference from the edge to the enterprise.
The Arm Project Trillium ML and object detection processors not only provide a massive efficiency uplift over standalone CPUs and GPUs, but also exceed traditional programmable solutions such as DSPs.
The Arm ML processor
Specifically designed for inference at the edge, the Arm ML processor delivers industry-leading performance of 4.6 TOPs, with a stunning efficiency of 3 TOPs/W, for mobile devices and smart cameras.
- Ground-up design for high performance and efficiency
- Massive uplift over CPUs, GPUs, DSPs and accelerators
- For mobile use cases, the processor will deliver an uplift of around 2x-4x in complex, real-world use cases
- Unmatched performance in thermal- and power-constrained environments, with an efficiency of 3 trillion operations per second (TOPs) per watt
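The two headline figures above pin down the processor's implied power envelope: dividing peak throughput by efficiency gives the wattage at full load. A back-of-the-envelope check (simple arithmetic on the quoted numbers, not an Arm-published power figure):

```python
# Back-of-the-envelope power estimate from the quoted specs.
peak_throughput_tops = 4.6       # trillion operations per second (TOPs)
efficiency_tops_per_watt = 3.0   # TOPs per watt

# Power = throughput / efficiency
power_watts = peak_throughput_tops / efficiency_tops_per_watt
print(f"Implied power at peak: {power_watts:.2f} W")  # ~1.53 W
```

Roughly 1.5 W at peak, which is consistent with the mobile and smart-camera thermal budgets the processor targets.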
The Arm OD processor
The Arm Object Detection processor is the most efficient way to detect people and objects on mobile and embedded platforms. It continuously scans every frame to provide a list of detected objects, along with their location within the scene.
- Detect objects in every frame running at full HD, 60fps (no dropped frames)
- Detect objects of any size, from 50x60 pixels up to full screen
- No practical limit to the number of objects that can be detected per frame
- Can be combined with CPUs, GPUs or the Arm Machine Learning processor for additional local processing – significantly reducing the overall compute requirement
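To put the full-HD, 60 fps requirement in perspective, the pixel throughput the OD processor scans can be worked out directly from the figures above (simple arithmetic, not an Arm-published metric):

```python
# Pixel throughput implied by "full HD, 60fps, no dropped frames".
width, height, fps = 1920, 1080, 60

pixels_per_frame = width * height            # 2,073,600 pixels
pixels_per_second = pixels_per_frame * fps   # 124,416,000 pixels

# Smallest detectable object quoted above: 50x60 pixels.
min_object_pixels = 50 * 60                  # 3,000 pixels

print(f"{pixels_per_second / 1e6:.1f} Mpixels/s scanned continuously")
print(f"Smallest target covers {min_object_pixels / pixels_per_frame:.2%} of a frame")
```

Every frame is scanned in full, so the processor sustains roughly 124 million pixels per second while picking out targets that cover well under 1% of the frame.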
Built on the Arm Compute Library, Arm NN translates existing neural network frameworks, such as TensorFlow and Caffe, allowing trained models to run efficiently – without modification – across Arm Cortex CPUs and Arm Mali GPUs.
Arm NN will add support for the Arm Machine Learning processor later this year and, via CMSIS-NN, for Cortex-M CPUs. It is available free of charge, under a permissive MIT open source license.
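The pattern Arm NN embodies can be sketched in a few lines: a framework-specific front end parses the model into a neutral graph, and a runtime dispatches that graph to the most capable backend present on the device. The sketch below is purely illustrative – all names are hypothetical and this is not the Arm NN API:

```python
# Hypothetical sketch of the Arm NN pattern: one framework-neutral graph,
# dispatched to whichever backend the device provides.
# NOTE: illustrative names only -- this is NOT the real Arm NN API.

def parse_tensorflow(model_path):
    """Stand-in for a framework parser: returns a neutral op list."""
    return [("conv2d", {}), ("relu", {}), ("softmax", {})]

# Preference order: dedicated ML processor, then GPU, then CPU.
BACKEND_PRIORITY = ["MlProcessor", "MaliGpu", "CortexCpu"]

def pick_backend(available):
    """Choose the most capable backend present on this device."""
    for backend in BACKEND_PRIORITY:
        if backend in available:
            return backend
    raise RuntimeError("no supported backend available")

def run(model_path, available_backends):
    graph = parse_tensorflow(model_path)     # framework-specific front end
    backend = pick_backend(available_backends)
    # A real runtime would optimize the graph here and hand it to the
    # backend's kernels (e.g. the Compute Library on CPU/GPU).
    return backend, len(graph)

backend, n_ops = run("model.pb", ["CortexCpu", "MaliGpu"])
print(backend, n_ops)  # MaliGpu 3
```

The key design point is that the model itself never changes: the same parsed graph runs on a Cortex CPU today and can target the ML processor once that backend lands.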