It is estimated that as early as 2018, integrated circuits will be able to pack billions of transistors at feature sizes as small as 18 nm, with clock rates of up to 10 GHz. Given these technological advances, the network on chip (NoC) appears to be an attractive solution for implementing future high-performance networks. However, this will require a shift in Quality of Service (QoS) techniques and management.
A NoC is composed of IP cores and switches connected by communication channels. Each NoC switch has five bi-directional ports, a routing control unit, and an input buffer for temporary storage of information. The main parameters driving switch performance are the memory access time and the transit time through the switch. It is important to minimize the time data spends in the buffer: long buffering delays reduce throughput, increase End-to-End Delay (EED), and cause jitter, and data can be lost when there is insufficient memory space to store all incoming data waiting to be transmitted. The core of the switch is the packet scheduler, which manages the data queues using the Deficit Weighted Round Robin (DWRR) technique. In this technique the switch defines several application classes and associates a weight with each class.
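The DWRR queuing described above can be sketched as follows. This is a minimal illustration of the technique, not the switch's actual implementation; the class names, byte quanta, and packet sizes are illustrative assumptions.

```python
from collections import deque

class DWRRScheduler:
    """Deficit Weighted Round Robin over per-class packet queues.

    Each class receives a quantum (proportional to its weight) per
    round; a deficit counter carries unused quantum across rounds, so
    higher-weight classes drain more bytes per round on average.
    """

    def __init__(self, weights):
        # weights: class name -> quantum in bytes added each round
        self.queues = {c: deque() for c in weights}
        self.quantum = dict(weights)
        self.deficit = {c: 0 for c in weights}

    def enqueue(self, cls, packet_size):
        self.queues[cls].append(packet_size)

    def schedule_round(self):
        """Visit each class once; return the list of (class, size) sent."""
        sent = []
        for cls, q in self.queues.items():
            if not q:
                self.deficit[cls] = 0   # an empty queue forfeits its deficit
                continue
            self.deficit[cls] += self.quantum[cls]
            # Send head-of-line packets while the deficit covers them.
            while q and q[0] <= self.deficit[cls]:
                size = q.popleft()
                self.deficit[cls] -= size
                sent.append((cls, size))
            if not q:
                self.deficit[cls] = 0   # queue drained: reset its deficit
        return sent
```

For example, with quanta `{"video": 1500, "best_effort": 500}`, a round that finds 1000-byte video packets and 400-byte best-effort packets queued will forward one of each, since each class's accumulated deficit covers exactly one packet.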
As discussed in chapter four, QoS refers to the collection of network mechanisms that guarantee predictable network performance. Measurements often include availability, throughput, latency, and error rate, and can include prioritization of network traffic. A network monitoring system must be deployed as part of QoS to ensure that networks perform at the desired service levels. These levels are usually spelled out in a service level agreement (SLA) between the network provider and the business customer.
For NoCs, it is proposed to add a QoS metric that better captures the relation between EED and buffer size. A well-chosen buffer size should drive a NoC to maximum performance. The main idea is to keep the minimum buffer size...
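The trade-off behind this metric can be illustrated with a toy discrete-time simulation of a single queue with a finite buffer: a small buffer drops more packets, while a large buffer lets delivered packets accumulate delay. All parameters here (arrival and service probabilities, slot count) are illustrative assumptions, not NoC-specific values.

```python
import random

def simulate(buffer_size, arrival_prob=0.6, service_prob=0.5,
             slots=100_000, seed=1):
    """Discrete-time single-server queue with a finite buffer.

    Returns (mean delay of delivered packets in slots, loss fraction).
    With arrival_prob > service_prob the queue is overloaded, so the
    buffer stays nearly full: delay grows with buffer_size while the
    loss fraction shrinks.
    """
    rng = random.Random(seed)
    queue = []                       # arrival slot of each buffered packet
    delays, arrived, dropped = [], 0, 0
    for t in range(slots):
        if rng.random() < arrival_prob:
            arrived += 1
            if len(queue) < buffer_size:
                queue.append(t)
            else:
                dropped += 1         # buffer full: packet is lost
        if queue and rng.random() < service_prob:
            delays.append(t - queue.pop(0))
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    return mean_delay, dropped / arrived if arrived else 0.0
```

Comparing, say, `simulate(4)` against `simulate(64)` shows the smaller buffer losing a larger fraction of packets while the larger buffer yields a higher mean delay, which is exactly the tension the proposed EED/buffer-size metric is meant to capture.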