Deep Neural Networks (DNNs) have become the state of the art in image classification, object detection, and machine translation, among other fields. However, this comes at the cost of increased complexity: more parameters, more computation, and higher energy consumption. DNN pruning is an effective way to reduce this complexity and deliver high-performance, low-energy DNN implementations for embedded systems. ProPruNN proposes to precisely study the impact of structured pruning. This exploration will be carried out by co-designing hardware architectures capable of taking advantage of this pruning. The first objective is to clearly identify the real impact of structured pruning on the performance of networks implemented on FPGAs; in the literature, this impact is underestimated because only a fraction of the prunable parameters are actually pruned. The second objective is to design predictive models of this impact and to incorporate them into network training, so that throughput, latency, and energy efficiency are optimized during the training itself.
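To make the idea concrete, the sketch below illustrates one common form of structured pruning: removing entire output channels of a convolutional layer ranked by L1 norm. The tensor shapes, the L1 criterion, and the 50% pruning ratio are illustrative assumptions, not choices made by the ProPruNN project.

```python
import numpy as np

# Illustrative sketch of structured (channel) pruning.
# Shapes and the 50% ratio are arbitrary assumptions for this example.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32, 3, 3))  # (out_channels, in_channels, kH, kW)

ratio = 0.5  # fraction of output channels to remove
l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)  # L1 norm per channel
keep = np.sort(np.argsort(l1)[int(ratio * len(l1)):])  # keep the strongest channels
pruned = weights[keep]  # whole channels removed: the tensor stays dense

print(weights.size, "->", pruned.size)  # 18432 -> 9216 parameters
```

Because whole channels are removed, the resulting tensor is simply smaller and still dense, which is what lets hardware (e.g., an FPGA accelerator) exploit the reduction directly, unlike unstructured pruning, which leaves scattered zeros that require sparse indexing support.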