Research
Network management can be overwhelming for network managers due to the large scale and heterogeneity of the networks they manage. The FCAPS model, adapted from ITU M.3400, partitions network management into five tasks:
- Fault detection and correction
- Configuration and operation
- Accounting and billing
- Performance assessment and optimization
- Security assurance and protection
To perform these tasks, network managers must continually analyze network systems that are often large in scale and composed of heterogeneous devices. We believe fault detection and correction is among the most stressful of these tasks: faults often come to light through user complaints and must be resolved as quickly as possible. Automating this task involves the following steps:
- Fault detection/localization
- Fault identification
- Fault resolution or correction
This Research in a Nutshell
We envision a two-tier system to automate the network management task of fault detection and correction. Our research investigates techniques that employ machine learning to automate these steps; our Network Link Outlier Factor (NLOF) automates fault detection/localization using a suite of unsupervised machine learning techniques (clustering and outlier detection).
This Research in a Little More Detail
Our Network Link Outlier Factor (NLOF) consists of a 4-stage pipeline of unsupervised machine learning algorithms. In stage 1, flows are clustered into performance cohorts in two sub-stages using the DBSCAN algorithm and then our TPCluster algorithm. In stage 2, each flow is assigned an outlier score that measures its distance in feature space to a performance exemplar in its cohort cluster. In stage 3, flows and network links are correlated using the topology data. Finally, in stage 4, each network link is assigned an outlier score that is the ratio of outlier flows to all flows traversing that link. Please read our published articles for further detail.
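The final stage of the pipeline can be illustrated with a short sketch. This is not the published NLOF implementation, only a minimal illustration of stage 4 under the assumption that stage 2 has flagged a set of outlier flows and stage 3 has mapped each flow to the links on its path; the flow and link names are hypothetical.

```python
from collections import defaultdict

def link_outlier_factors(flow_links, outlier_flows):
    """Stage 4 sketch: score each link as the fraction of flows
    traversing it that were flagged as outliers in stage 2."""
    total = defaultdict(int)     # flows seen per link
    outliers = defaultdict(int)  # outlier flows seen per link
    for flow, links in flow_links.items():
        for link in links:
            total[link] += 1
            if flow in outlier_flows:
                outliers[link] += 1
    return {link: outliers[link] / total[link] for link in total}

# Hypothetical topology: flow "f2" is an outlier and crosses links B-C and C-D
paths = {"f1": ["A-B", "B-C"], "f2": ["B-C", "C-D"], "f3": ["A-B"]}
scores = link_outlier_factors(paths, {"f2"})
# Link C-D is traversed only by the outlier flow, so it scores highest.
```

A link whose traffic is dominated by outlier flows receives a score near 1, which localizes the fault to that link.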
Cloud computing is the synergistic combination of distributed computing and communication to provide computing capacity as a service on demand, as a utility, in a fashion similar to electric power distribution. There are two major benefits to having computing available as a utility resource shared by many users. The first is the ability to aggregate computing resource demand across several sources, thereby increasing resource utilization. The second is the ability of computing resource-limited devices, such as mobile devices, to gain access to computing capacity that exceeds their inherent capabilities.
Mobile devices are resource limited (e.g., energy supply, memory, computation speed) yet in many contexts they serve as the primary computing device. As a result, mobile application demands are increasing and these demands are quickly exceeding the capabilities of the mobile devices. To address this issue, a strategy called computation offloading is utilized whereby computations that are part of mobile applications are offloaded to nearby computing resources.
This Research in a Nutshell
We have investigated techniques for deciding when and where to offload computation from mobile devices to nearby cloud computing resources. These resources do not necessarily need to reside in a data center in a remote city. We envision a multi-tiered cloud computing environment whereby resources can be utilized from within the same room, at an Internet access point, or inside an Internet Service Provider (ISP) point-of-presence (PoP).
This Research in a Little More Detail
We have discovered an inequality that relates computation offloading decisions to the arithmetic intensity of a computational job. This inequality can be utilized to determine when computation offloading will reduce job completion time. We have also analyzed the effect of network delay estimation error on computation offloading decisions. Please read our published articles for further detail.
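The shape of such a decision rule can be sketched in a few lines. This is a simplified illustration, not the exact inequality from our articles: it assumes a job characterized by its operation count and input size (whose ratio is its arithmetic intensity), and all speeds and bandwidths are hypothetical.

```python
def should_offload(ops, data_bytes, local_ops_per_s, remote_ops_per_s, bytes_per_s):
    """Simplified offloading rule: offload when transfer time plus
    remote compute time is less than local compute time. Jobs with
    high arithmetic intensity (ops / data_bytes) favor offloading."""
    local_time = ops / local_ops_per_s
    remote_time = data_bytes / bytes_per_s + ops / remote_ops_per_s
    return remote_time < local_time

# Compute-heavy job (high arithmetic intensity): offloading wins.
compute_heavy = should_offload(1e10, 1e6, 1e9, 1e11, 1e7)
# Data-heavy job (low arithmetic intensity): transfer cost dominates.
data_heavy = should_offload(1e8, 1e9, 1e9, 1e11, 1e7)
```

Note that an error in the estimated network bandwidth shifts the transfer-time term directly, which is why delay estimation error can flip the offloading decision for jobs near the threshold.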
According to a report from Cisco Systems, video will make up 90% of consumer network traffic by 2012. Optimizing communication networks for the delivery of video is therefore critical if these networks are to meet the information transfer requirements of the future.
Digital video is compressed to reduce its bandwidth requirements so that transport over communication networks is feasible. The compression of video results in time variance of the bandwidth requirements of the video. This time variance makes allocating bandwidth in communication networks challenging. The easiest approach is to allocate for the maximum bandwidth of the video but that approach will result in significant wasted bandwidth in the network. The more efficient approach is to allocate for the mean bandwidth and use statistical multiplexing to share transmission channels among many videos. A bandwidth forecast can be used to improve the efficiency of statistical multiplexing.
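The gap between the two allocation approaches is easy to see with a toy variable-bit-rate frame trace. The frame sizes below are made up for illustration; the point is only that peak-rate allocation must reserve for the largest frame while mean-rate allocation reserves for the average.

```python
def allocation_rates(frame_bits, fps=30):
    """Compare peak-rate and mean-rate bandwidth allocation for a
    video stream given its per-frame sizes in bits. The difference
    is the capacity statistical multiplexing can reclaim."""
    peak_rate = max(frame_bits) * fps
    mean_rate = sum(frame_bits) / len(frame_bits) * fps
    return peak_rate, mean_rate

# Hypothetical VBR trace: large intra-coded frames among small ones
frames = [120_000, 20_000, 22_000, 18_000, 110_000, 21_000]
peak, mean = allocation_rates(frames)
# Peak allocation reserves more than twice the mean rate here.
```

The larger the spread between large and small frames, the more bandwidth a good forecast lets the network reclaim.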
This Research in a Nutshell
The fundamental goal of our research on video bandwidth forecasting is to discover a forecast method that is accurate and simple to implement.
This Research in a Little More Detail
It turns out that most video communication involves pre-recorded video. With pre-recorded video, true bandwidth forecasts are not required since the size of all of the video frames is known prior to their delivery over a communication network. The problem therefore reduces to simply getting future video frame size information into the network. Rami Haddad and I came up with a technique we call Feed-Forward Bandwidth Indication (FFBI) that places "future" video frame size information in "past" video frame headers where network equipment can access them, see the figure below. This technique can also be used with live (i.e., real-time) video if we introduce a little delay at the source. Please read our published articles for further detail on FFBI.
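The core FFBI idea can be sketched as a simple annotation pass over a frame sequence. This is an illustration only, not the published encoding: the dictionary fields and the lookahead window size are hypothetical stand-ins for the header format described in our articles.

```python
def ffbi_annotate(frame_sizes, lookahead=3):
    """FFBI sketch: attach the sizes of the next `lookahead` frames
    to each frame's header so network equipment downstream can read
    a bandwidth forecast before those frames arrive."""
    annotated = []
    for i, size in enumerate(frame_sizes):
        future = frame_sizes[i + 1 : i + 1 + lookahead]
        annotated.append({"size": size, "future_sizes": future})
    return annotated

# Each frame carries the sizes of the following two frames.
frames = ffbi_annotate([100, 40, 45, 120, 50], lookahead=2)
```

For pre-recorded video all future sizes are known when the stream is packaged; for live video, buffering `lookahead` frames at the source provides the same information at the cost of a small added delay.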
Hybrid fiber/copper access networks consist of both fiber and copper segments to reach network subscribers. These hybrid access networks utilize passive optical network (PON) technology on the fiber segment and either digital subscriber line (DSL) or cable modem technology on the copper segment. A PON is an economically attractive candidate technology for bringing fiber optic transmission to individual subscribers (businesses or individuals) through the access network.
What is an Access Network?
Communication networks can be partitioned into two broad categories: 1) public and 2) private. Public networks are those that are established by service provider companies (e.g., AT&T and Qwest) for public use. Private networks are those that are established by private entities (e.g., UTEP) for their own use. Individuals or institutions pay one or more service providers for a connection to one or more public networks that typically offer global connectivity (i.e., the Internet).
The access network interconnects the public and private networks. The access network is actually a portion of the public network that reaches out to the individual subscriber (i.e., private) networks. At present, the access network consists primarily of copper transmission media (e.g., twisted pair [digital subscriber line] and coaxial [cable TV]). PONs are an attractive option for converting the access network to fiber optic transmission media because of their low cost.
This Research in a Nutshell
We have leveraged our deep expertise in dynamic bandwidth allocation (DBA) for Ethernet Passive Optical Networks (read information on that project below) to study access networks consisting of both fiber and copper segments. In many access network deployment scenarios it is still not economically attractive for service provider companies to deploy fiber all the way to subscriber locations despite the low cost of PON deployments. In these scenarios, service provider companies will deploy a PON within 100 meters or so of subscribers and utilize existing twisted pair or coaxial copper to reach subscribers. The point at which the fiber segment is terminated is referred to as a "drop point"; for that reason, the equipment that bridges the fiber and copper segments of the network at that point is simply called a "drop point" device. The fundamental goal of our research is to study mechanisms to reduce energy consumption at the "drop point" device.
This Research in a Little More Detail
We have pursued two strategies to reduce energy consumption at the "drop point" device. The first strategy moves some of the copper technology logic blocks (e.g., DSL logic blocks) from the drop point to the service provider company's central office (CO), where power is abundant. Less logic at the drop point translates to fewer transistors that consume power. Moving logic blocks to the CO location has performance repercussions; our study uncovered these repercussions and mechanisms to alleviate or reduce their impact. The second strategy reduces the buffering that occurs at the drop point device. Reduced buffering means smaller memory devices with fewer transistors that consume power. Our research uncovered several mechanisms for reducing buffering in both the downstream and upstream directions. Please read our published articles for further detail.
Ethernet PONs (EPONs) are PONs that utilize the ubiquitous Ethernet protocol.
This Research in a Nutshell
The fundamental goal of this research on EPONs has been to increase channel utilization. Increasing channel utilization increases the usable bandwidth on each individual PON. Increased usable bandwidth reduces the cost per unit of bandwidth. Stated more directly, the goal has been to lower the cost per unit of bandwidth on EPONs to help promote their rapid deployment.
This Research in a Little More Detail
Dynamic Bandwidth Allocation (DBA) is the act of managing the bandwidth or information capacity of transmission channels. For EPONs, DBA can be broken into two components: 1) grant sizing, and 2) grant scheduling. Through analytical and behavioral modeling of EPONs, we came to understand what contributes to wasted bandwidth and have used the scheduling component of DBA to minimize this wasted bandwidth. This research has resulted in the discovery of two transmission scheduling (i.e., grant scheduling) techniques that can significantly improve channel utilization: 1) online just-in-time scheduling, and 2) shortest propagation delay first scheduling. Please read our published articles for further detail.
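The flavor of the second technique can be conveyed with a small sketch. This is a simplified model, not the published algorithm: it assumes each ONU has one pending request described by a (propagation delay, grant length) pair in arbitrary time units, and it ignores guard times and grant sizing.

```python
def spd_first_schedule(requests):
    """Shortest-propagation-delay-first sketch: grant ONUs in order of
    increasing round-trip propagation delay, so nearby ONUs (which can
    start transmitting soonest) fill the front of the upstream channel
    and idle gaps shrink. `requests` maps ONU id -> (delay, length)."""
    order = sorted(requests, key=lambda onu: requests[onu][0])
    schedule, t = [], 0
    for onu in order:
        delay, length = requests[onu]
        start = max(t, delay)  # an ONU cannot transmit before its grant reaches it
        schedule.append((onu, start))
        t = start + length
    return schedule

# Hypothetical ONUs: B is closest, so it is granted first.
sched = spd_first_schedule({"A": (3, 10), "B": (1, 10), "C": (5, 10)})
```

In this toy model the only idle time is the wait for the first (nearest) ONU; scheduling a distant ONU first would leave a longer gap at the start of the cycle.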