A Brief Introduction to Using the NPU on MCX N Series Microcontrollers


[Introduction] The MCX N series are high-performance, low-power microcontrollers with intelligent peripherals and accelerators that provide multitasking capability and high energy efficiency. Some MCX N series products include NXP's eIQ® Neutron neural processing unit (NPU) for on-device machine learning applications. A low-power, high-speed cache improves system performance, while dual Flash memory banks and RAM with ECC detection support functional safety and provide additional protection and assurance. These secure MCUs also include the NXP EdgeLock® secure enclave, Core Profile, built on a secure-by-design methodology, which provides an immutable root of trust and hardware-accelerated cryptography for secure boot.

  


MCX N series microcontrollers: the MCXN94x/MCXN54x are built around two high-performance Arm® Cortex®-M33 cores running at up to 150 MHz. They integrate 2 MB of on-chip Flash, RAM with optional, fully configurable ECC (error-correcting code), and a dedicated neural processing unit (the eIQ Neutron NPU). On machine learning (ML) tasks the NPU is up to 40 times faster than the M33 core, which significantly shortens device wake-up time and effectively reduces overall power consumption.

The eIQ Neutron NPU supports many types of neural networks, including CNNs (convolutional neural networks), RNNs (recurrent neural networks), TCNs (temporal convolutional networks), and Transformers. Machine learning applications developed for the eIQ Neutron NPU are fully supported by the eIQ machine learning software development environment. The eIQ Neutron NPU system block diagram is shown below:

[Figure: eIQ Neutron NPU system block diagram]
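To make the development flow concrete, the sketch below shows a bare-metal inference call in the style of TensorFlow Lite for Microcontrollers, which the eIQ environment builds on. It is a minimal sketch, not NXP example code: the model array name, tensor arena size, operator list, and run_inference() wrapper are placeholder assumptions, and the Neutron-specific NPU kernel registration provided by the MCUXpresso SDK / eIQ example projects is only indicated by a comment.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer produced by the eIQ model conversion tooling (placeholder name).
extern const unsigned char model_data[];

constexpr size_t kTensorArenaSize = 64 * 1024;           // sized per model
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

int run_inference(const int8_t* input, size_t input_len,
                  int8_t* output, size_t output_len) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model actually uses. A model converted
  // for the Neutron NPU additionally needs the NPU custom kernel registered
  // here; that registration comes from NXP's SDK and is omitted in this sketch.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy the already-quantized input in, run the graph, copy the result out.
  memcpy(interpreter.input(0)->data.int8, input, input_len);
  if (interpreter.Invoke() != kTfLiteOk) return -1;
  memcpy(output, interpreter.output(0)->data.int8, output_len);
  return 0;
}
```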

The NPU consists of compute units, a weight decoder, a quantizer, function-optimized accelerators, RAM, and a fast DMA access interface, and its ML compute performance reaches 4.8 GOPS. This strong compute capability brings a large speedup to ML inference; the performance comparison on the MLPerf Tiny benchmark models is shown in the figure below:

[Figure: MLPerf Tiny benchmark performance comparison, Cortex-M33 baseline vs. NPU speedup]

The figure shows the NPU's performance improvement factors: the green bars represent the M33 baseline, and the blue bars represent the NPU's speedup relative to the M33. As the chart shows, the NPU delivers an 8x performance improvement on the Anomaly Detection model, 15x on the Keyword Spotting model, 38x on the ResNet image classification model, and 28x on the Visual Wake Word model.

The NPU's speedup varies with the model type. ResNet is built mainly from convolutional layers; the NPU's main compute elements are multiply-accumulate (MAC) units, and convolution kernel weights are shared, being reused across the whole feature map, so the NPU's performance gain is largest on convolutional networks. The Anomaly Detection model consists mainly of fully connected layers, whose weights cannot be shared, so they cannot use the NPU to the fullest; the speedup for fully connected networks is therefore the smallest.
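As a back-of-the-envelope illustration of that weight-reuse argument (the layer shapes below are invented for illustration and are not taken from the benchmark models), a convolution applies each weight at every output position, while a fully connected layer uses each weight exactly once, so a MAC array does far more work per weight fetched on convolutional layers:

```cpp
#include <cstdio>

int main() {
  // Hypothetical convolutional layer: 32x32 output, 3x3 kernel, 16 in / 16 out channels.
  long conv_weights = 3L * 3 * 16 * 16;              // 2,304 weights
  long conv_macs    = 32L * 32 * 3 * 3 * 16 * 16;    // 2,359,296 MACs

  // Hypothetical fully connected layer: 256 inputs -> 64 outputs.
  long fc_weights = 256L * 64;                       // 16,384 weights
  long fc_macs    = 256L * 64;                       // 16,384 MACs (each weight used once)

  // MACs performed per weight fetched: high reuse keeps the MAC array busy,
  // while a reuse factor of 1 makes the layer memory-bound rather than compute-bound.
  printf("conv: %ld MACs per weight\n", conv_macs / conv_weights);  // 1024
  printf("fc:   %ld MACs per weight\n", fc_macs / fc_weights);      // 1
  return 0;
}
```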

Faster inference necessarily shortens the core's active time and thereby reduces overall power consumption. Enabling the NPU adds roughly 1.4 mA of current (at 3.3 V), which is negligible compared with the gain in execution speed.
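A rough per-inference energy estimate shows why the extra current hardly matters. Only the 3.3 V supply and the roughly 1.4 mA NPU adder come from the text above; the active current and the inference times in this sketch are invented, illustrative numbers:

```cpp
#include <cstdio>

int main() {
  const double v_supply    = 3.3;      // V, from the text
  const double i_core_run  = 10.0e-3;  // A, assumed active current of the core (illustrative)
  const double i_npu_extra = 1.4e-3;   // A, extra current with the NPU enabled (from the text)

  // Assumed inference times for the same model (illustrative only),
  // e.g. a ~15x speedup as reported for keyword spotting above.
  const double t_cpu_only = 150e-3;    // s, inference on the M33 alone
  const double t_with_npu =  10e-3;    // s, inference with the NPU

  const double e_cpu = v_supply * i_core_run * t_cpu_only;                  // J per inference
  const double e_npu = v_supply * (i_core_run + i_npu_extra) * t_with_npu;  // J per inference

  printf("CPU only : %.2f mJ per inference\n", e_cpu * 1e3);  // ~4.95 mJ
  printf("With NPU : %.2f mJ per inference\n", e_npu * 1e3);  // ~0.38 mJ
  return 0;
}
```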

[Figure: run-time timing diagram with and without the NPU]

The run-time timing diagram shows that with the NPU enabled the core spends most of its time in sleep mode, whereas when the model is not run on the NPU the core stays in the active state almost continuously; the NPU's energy-saving effect is clear.
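The duty cycling behind that timing picture is usually just a sleep-until-work main loop. The sketch below assumes a CMSIS-based bare-metal project; sample_ready, input_buffer, the buffer sizes, and run_inference() (from the earlier sketch) are hypothetical application hooks:

```cpp
#include <cstddef>
#include <cstdint>

#include "cmsis_compiler.h"  // __WFI(); normally pulled in via the device header

constexpr size_t kInputLen  = 640;   // hypothetical feature-vector size
constexpr size_t kOutputLen = 12;    // hypothetical number of classes

extern volatile bool sample_ready;        // set from a sensor/DMA interrupt (hypothetical)
extern int8_t input_buffer[kInputLen];    // filled by the acquisition ISR (hypothetical)

int run_inference(const int8_t* input, size_t input_len,
                  int8_t* output, size_t output_len);  // see the earlier sketch

void app_main_loop() {
  int8_t output[kOutputLen];
  for (;;) {
    while (!sample_ready) {
      __WFI();  // core sleeps until the next interrupt
    }
    sample_ready = false;
    run_inference(input_buffer, kInputLen, output, kOutputLen);  // short active burst thanks to the NPU
    // ... act on the result, then drop back to sleep ...
  }
}
```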

(Author: Tony Zhang; Source: NXP MCU Gas Station)

Disclaimer: This is a reprinted article, shared to pass along more information; copyright remains with the original author. If any video, images, or text used in this article involve copyright issues, please contact the editor.
