ADEPNET – A DYNAMIC-PRECISION EFFICIENT POSIT MULTIPLIER FOR NEURAL NETWORKS

Authors

  • Sandhya Rani Posam
  • Farina Yasmeen

DOI:

https://doi.org/10.64751/ijdim.2025.v4.n3.pp261-271

Abstract

The posit number system aims to serve as a drop-in replacement for the IEEE floating-point standard. When representing real numbers, its tapered accuracy and wide dynamic range allow a small posit format to nearly match the accuracy of a much larger floating-point format. This is particularly useful for error-tolerant workloads where low latency and small area are critical, such as deep neural network inference. Recent studies show that deep neural network accuracy saturates beyond a certain precision in the multipliers used for convolutions; building exact arithmetic circuits for these applications therefore spends extra hardware on precision that yields no benefit. To strike the best possible balance between hardware cost and inference accuracy, this work investigates approximate posit multipliers in the convolutional layers of deep neural networks. Posit multiplication proceeds in several stages, of which mantissa multiplication consumes the most hardware power. This cost is reduced by a proposed multiplier circuit that combines an approximate hybrid-radix Booth encoding for mantissa multiplication with bit masking dependent on the input regime size. In addition, a new Booth-encoding control mechanism suppresses switching of superfluous bits to minimize dynamic power waste. Together, these changes reduce power dissipation in the mantissa multiplication stage by 23% compared with previous work.
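To make the tapered precision concrete: in a posit, a variable-length regime field (a run of identical bits) precedes the exponent and fraction, so values near 1 get more fraction bits and extreme values get fewer. A minimal decoding sketch in Python, assuming a posit<16,1> layout per the posit standard (names and structure are illustrative, not the paper's circuit):

```python
def posit_decode(p: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern (not zero or NaR) to a float."""
    sign = (p >> (n - 1)) & 1
    if sign:                               # negative posits are stored
        p = (-p) & ((1 << n) - 1)          # in two's complement
    body = p & ((1 << (n - 1)) - 1)        # bits below the sign bit
    first = (body >> (n - 2)) & 1          # leading regime bit
    i, run = n - 2, 0
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1                           # regime: run of identical bits
        i -= 1
    k = run - 1 if first else -run         # regime value
    if i >= 0:
        i -= 1                             # consume the regime terminator
    rem = i + 1                            # bits left for exponent + fraction
    exp_bits = min(es, rem)
    exponent = (body >> (rem - exp_bits)) & ((1 << exp_bits) - 1)
    exponent <<= es - exp_bits             # pad a truncated exponent with zeros
    frac_bits = rem - exp_bits             # longer regime => fewer fraction bits
    fraction = body & ((1 << frac_bits) - 1)
    scale = k * (1 << es) + exponent
    return (-1.0) ** sign * 2.0 ** scale * (1 + fraction / (1 << frac_bits))
```

The `frac_bits` computation is the key point for the paper's bit masking: a longer regime leaves fewer mantissa bits, so partial products beyond that width can be masked off without loss.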
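The Booth encoding underlying the mantissa multiplier can also be sketched. The following is a plain, exact radix-4 (modified) Booth encoding, one ingredient of the hybrid-radix scheme described in the abstract; the approximation and switching-control logic of the proposed circuit are not reproduced here:

```python
# Radix-4 Booth recoding table: each overlapping 3-bit group maps to a
# digit in {-2, -1, 0, 1, 2}, halving the number of partial products.
BOOTH4 = {0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}

def booth_radix4_digits(m: int, bits: int) -> list[int]:
    """Encode an unsigned `bits`-wide mantissa into radix-4 Booth digits."""
    assert 0 <= m < (1 << bits)
    m_ext = m << 1                             # implicit 0 below the LSB
    return [BOOTH4[(m_ext >> i) & 0b111]       # overlapping 3-bit groups
            for i in range(0, bits + 1, 2)]    # extra group keeps sign = 0

def booth_multiply(a: int, b: int, bits: int) -> int:
    """Exact product as a sum of Booth partial products d * a * 4**k."""
    return sum(d * a * (4 ** k)
               for k, d in enumerate(booth_radix4_digits(b, bits)))
```

Each digit selects a cheap partial product (0, ±a, ±2a, the last two being a shift and/or negation); an approximate variant can truncate or simplify the low-order digits where inference accuracy tolerates the error.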

Published

2025-09-14

How to Cite

Sandhya Rani Posam, & Farina Yasmeen. (2025). ADEPNET – A DYNAMIC-PRECISION EFFICIENT POSIT MULTIPLIER FOR NEURAL NETWORKS. International Journal of Data Science and IoT Management System, 4(3), 261–271. https://doi.org/10.64751/ijdim.2025.v4.n3.pp261-271