Temporal Convolutional Networks (TCNs), which take mono (single-channel) inputs, have shown superior performance compared to state-of-the-art recurrent sequence-detection networks in a variety of applications. TCNs rely on dilated causal convolution to widen the receptive field over the input (mono) channel, which requires scaling the delay between input samples to the Multiply-Accumulate (MAC) units in different layers. We demonstrate a data-flow transformation that converts a dilated convolution into a non-dilated convolution, removing the need for delay scaling while maintaining the same receptive field. The new data-flow transformation allows hardware units to be shared across all layers, with single-delay units between the MAC units. We show how such a transformation can be achieved using generic Finite Impulse Response (FIR) filter modules, simplifying the deployment of TCNs. We validate the predicted savings using Cadence Stratus High-Level Synthesis (HLS): a synthesized gesture-recognition case study using ultrasound achieves 25% savings in both energy and area when the data-flow transformation is applied.
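The equivalence the abstract describes can be pictured with a standard polyphase decomposition: a causal convolution with dilation d over one stream equals a plain (non-dilated) causal FIR filter applied independently to the d interleaved sub-streams of the input. The sketch below is illustrative only; the function names and the polyphase scheme are assumptions, not the paper's exact hardware data flow.

```python
import numpy as np

def dilated_conv(x, w, d):
    """Causal dilated convolution: y[n] = sum_k w[k] * x[n - k*d]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(w)):
            if n - k * d >= 0:
                y[n] += w[k] * x[n - k * d]
    return y

def polyphase_conv(x, w, d):
    """Equivalent non-dilated form: split x into d interleaved streams,
    run the same plain causal FIR on each, then re-interleave the outputs.
    Each stream only ever needs unit delays between MAC operations."""
    y = np.zeros(len(x))
    for p in range(d):
        phase = x[p::d]                      # every d-th sample, offset p
        out = np.zeros(len(phase))
        for m in range(len(phase)):
            for k in range(len(w)):
                if m - k >= 0:               # non-dilated causal FIR
                    out[m] += w[k] * phase[m - k]
        y[p::d] = out                        # interleave back
    return y

x = np.arange(16, dtype=float)
w = np.array([0.5, -0.25, 0.125])
# Both forms produce identical outputs for any dilation factor
assert np.allclose(dilated_conv(x, w, 4), polyphase_conv(x, w, 4))
```

Because the inner FIR in `polyphase_conv` uses only unit delays, the same filter hardware can be time-shared across layers regardless of each layer's dilation factor, which is the source of the area and energy savings reported.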