
High Performance Computing Laboratory

Texas A&M University College of Engineering

Research

High Performance On-Chip Interconnects Design for Multicore Accelerators (NSF project, 2014 – 2018)

Multicore accelerators like GPUs have recently gained attention as a cost-effective approach to data-parallel architectures, and the rapid scaling of GPUs raises the importance of designing an ideal on-chip interconnection network, which impacts overall system performance. Since shared buses and crossbars provide sufficient performance only for a small number of communicating nodes, switch-based networks-on-chip (NoCs) have been adopted as an emerging design trend in many-core environments. However, NoCs for multicore accelerator architectures have not been extensively explored. While the dominant communication in Chip Multiprocessor (CMP) systems is core-to-core for shared caches, the dominant traffic in multicore accelerators is core-to-memory, which makes the memory controllers hot spots. Also, since multicore accelerators execute many threads to hide memory latency, it is critical for the underlying NoC to provide high bandwidth.
In this project, we develop a framework for high-performance, energy-efficient on-chip network mechanisms in synergy with Multicore Accelerator architectures. The desirable properties of a target on-chip network include re-usability across a wide range of Multicore Accelerator architectures, maximization of the use of routing resources, and support for reliable and energy-efficient data transfer.

Approximate Network-on-Chip Architectures

Approximate computing has emerged as an attractive alternative compute paradigm, trading off computation accuracy for benefits in both performance and energy efficiency. Approximate techniques rely on the ability of applications and systems to tolerate imprecision or loss of quality in computation results. Many emerging applications in machine learning, image/video processing, and pattern recognition already employ approximation to achieve better performance. A significant portion of research on approximate hardware has focused on either computation units for accelerated inexact execution or the memory hierarchy for high-performance memory. However, there has been no prior research on approximate communication for the interconnection fabric of manycore systems.

Networks-on-Chip (NoCs) have emerged as the most competent communication fabric to connect an ever-increasing number of processing/memory elements. Communication-centric applications such as image/video processing and emerging memory-intensive big-data workloads place a significant amount of stress on the NoC for high throughput. Hence, designing a high-performance NoC that provides high throughput has become critical to overall system performance. Therefore, the need to explore hardware approximation techniques that can leverage the modern approximate computing paradigm for high-throughput NoCs is imminent.

In this work we propose APPROX-NoC, a data approximation framework for NoCs that alleviates heavy data communication stress by leveraging the error tolerance of applications. APPROX-NoC reduces the transmission of approximately similar data in the NoC by delivering approximated versions of precise data, improving data locality and hence the compression rate. We design a data-type-aware value approximation engine (VAXX) with lightweight error-margin compute logic, which can be used as a plug-and-play module with any underlying NoC data compression mechanism. VAXX approximates the value of a given data block to the closest compressible data pattern based on its data type, with fast quantitative error-margin calculation, thereby improving network throughput.
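The core idea can be sketched in a few lines. This is an illustrative approximation of the VAXX concept, not the actual hardware logic: the "compressible pattern" (zeroing low-order bits), the `drop_bits` parameter, and the error threshold are all assumptions made for the example.

```python
# Hypothetical sketch inspired by the VAXX idea: snap a 32-bit value to a
# nearby "compressible" pattern (here: low-order bits zeroed) only when
# the relative error stays within a per-data-type margin.

def approximate(value: int, error_margin: float, drop_bits: int = 8) -> int:
    """Return a more compressible approximation of `value`, or the
    original value if the approximation error exceeds the margin."""
    if value == 0:
        return 0
    mask = ~((1 << drop_bits) - 1)
    candidate = value & mask                  # zero the low-order bits
    error = abs(value - candidate) / abs(value)
    return candidate if error <= error_margin else value

print(hex(approximate(0x12345678, 0.01)))    # low bits dropped: 0x12345600
print(hex(approximate(0x79, 0.01)))          # error too large: unchanged, 0x79
```

A real engine would pick the pattern per data type (integer, float, pointer) so the dropped bits are the ones the application can tolerate losing.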

APPROX-NoC Main Idea and Architecture Overview

Approximate Value Compute Logic

Power-Gating for Energy-Efficient Networks-on-Chip

Chip Multiprocessors (CMPs) are scaling to hundreds and thousands of cores owing to shrinking transistor sizes and denser on-chip packaging, as predicted by Moore’s law. However, the failure of Dennard scaling, with supply voltage no longer scaling down with transistor size, makes it increasingly difficult to keep all on-chip components switching simultaneously within power and thermal constraints. Future CMP designs will have to work under stricter power envelopes. Scalable Networks-on-Chip (NoCs), such as 2D meshes, have become the de facto interconnection mechanism in large-scale CMPs. Recent studies have shown that NoCs consume a significant portion of the total on-chip power budget, ranging from 10% to 36%. Hence, power-efficient NoC designs are of the highest priority for power-constrained future CMPs.

Static power consumption of the on-chip circuitry is increasing at an alarming rate as feature sizes and chip operating voltages scale down toward near-threshold levels. As we reach sub-10nm feature sizes, static power will become the major portion of NoC power consumption. Power-gating, cutting off the supply current to idle chip components, is an effective technique to mitigate the worsening impact of on-chip static power consumption. However, applying power-gating to NoCs may disconnect the network and lead to performance degradation.

We propose Fly-Over (FLOV), a light-weight distributed power-gating mechanism for energy-efficient NoCs. FLOV power-gates idle routers in a distributed manner through handshake protocols. FLOV routers provide FLOV links that let packets fly over gated routers, together with a dynamic best-effort shortest-path routing algorithm, in order to preserve network functionality and sustain performance.
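The fly-over behavior can be illustrated with a toy model. This is not the FLOV implementation; it only shows, for a single mesh row, how a packet's stop sequence shrinks when gated routers are bypassed over fly-over links. The function name and the one-dimensional setting are assumptions for illustration.

```python
# Illustrative sketch: in a row of mesh routers, power-gated routers are
# bypassed via fly-over links, so a packet only stops at powered-on
# routers between source and destination.

def flov_path(src: int, dst: int, gated: set) -> list:
    """Routers a packet actually stops at; gated ones are flown over."""
    step = 1 if dst > src else -1
    path = [src]
    for r in range(src + step, dst + step, step):
        if r not in gated or r == dst:    # destination must stay powered on
            path.append(r)
    return path

# Routers 2 and 3 are gated: the packet flies over them in one long hop.
print(flov_path(0, 5, gated={2, 3}))      # [0, 1, 4, 5]
```

Fewer router traversals means both lower static power (gated routers) and fewer pipeline stages on the packet's critical path.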

 

FLOV Router Architecture

FLOV Routing Example

 

Application Execution Time for PARSEC Benchmarks

Application Energy Consumption for PARSEC Benchmarks

Bandwidth Efficient On-Chip Interconnect Designs

GPGPUs are characterized by numerous programmable computational cores that allow thousands of simultaneously active threads to execute in parallel. The advent of parallel programming models such as CUDA and OpenCL makes it easier to program both graphics and non-graphics applications, making GPGPUs an excellent computing platform. The growing amount of parallelism and the fast scaling of GPGPUs have fueled an increasing demand for performance-efficient on-chip fabrics finely tuned for GPGPU cores and memory systems.

Ideal interconnects should minimize message blocking by efficiently exploiting limited network resources such as virtual channels (VCs) and physical channels (PCs) while ensuring deadlock freedom. Switch-based Networks-on-Chip (NoCs) have been useful in manycore chip-multiprocessor (CMP) environments for their scalability and flexibility. Unlike CMP systems, where NoC traffic tends to be divided uniformly across cores communicating with distributed on-chip caches, the communication in GPGPUs is highly asymmetric, mainly between many compute cores and a few memory controllers (MCs) on a chip. Thus the MCs often become hot spots, leading to skewed usage of significant portions of NoC resources such as wires and buffers. In particular, heavy reply traffic from MCs to cores can cause a network bottleneck, degrading overall system performance. Therefore, the asymmetry of on-chip traffic must be considered when designing a bandwidth-efficient NoC.

Throughput-effectiveness is a crucial metric for overall performance in throughput-oriented architectures, so designing a high-bandwidth NoC for GPGPUs is of primary importance. To achieve this goal, we quantitatively analyze the impact of network traffic patterns in GPGPUs under different MC placements and dimension-order routing algorithms. Motivated by this detailed analysis, we propose VC monopolizing and partitioning schemes that dramatically improve NoC resource utilization without causing protocol deadlocks. We also investigate the impact of different routing algorithms under diverse MC placements.
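The partitioning idea can be sketched as follows. This is an illustrative model, not the paper's exact configuration: the 8-VC router, the 2/6 request/reply split, and the least-occupied selection policy are assumptions chosen to show why disjoint VC sets per message class avoid protocol deadlock while favoring the heavier reply traffic.

```python
# Toy sketch of VC partitioning by message class: request and reply
# messages use disjoint virtual-channel sets (preventing request-reply
# protocol deadlock), and the heavier MC->core reply traffic gets the
# larger share of VCs.

NUM_VCS = 8
VC_PARTITION = {
    "request": range(0, 2),   # light core->MC request traffic
    "reply":   range(2, 8),   # heavy MC->core reply traffic
}

def pick_vc(msg_class: str, occupancy: dict) -> int:
    """Choose the least-occupied VC from the class's own partition."""
    return min(VC_PARTITION[msg_class], key=lambda vc: occupancy.get(vc, 0))

occ = {0: 3, 1: 1, 2: 5, 3: 0, 4: 2, 5: 2, 6: 9, 7: 1}
print(pick_vc("request", occ))   # 1
print(pick_vc("reply", occ))     # 3
```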

 

Packet Type Distribution for GPGPU Benchmarks

Network traffic examples with XY and XY-YX routing

Publications

  • Approx-NoC: A Data Approximation Framework for Network-On-Chip Architectures,
    R. Boyapati, J. Huang, P. Majumdar, K. H. Yum and E. J. Kim,
    The 44th International Symposium on Computer Architecture (ISCA), June, 2017
  • Fly-Over: A Light-Weight Distributed Power-Gating Mechanism for Energy-Efficient Networks-on-Chip,
    R. Boyapati*, J. Huang*, N. Wang, K. H. Kim, K. H. Yum and E. J. Kim,
    The 31st IEEE International Parallel & Distributed Processing Symposium (IPDPS), June, 2017
  • Bandwidth Efficient On-Chip Interconnect Designs for GPGPUs,
    H. Jang, J. Kim, P. Gratz, K. H. Yum, and E. J. Kim,
    The 52nd Design Automation Conference (DAC), June 2015

Communication-Centric Chip Multiprocessor Design (NSF CAREER, 2009 – 2014)

Chip Multiprocessor systems (CMPs) have embarked on a paradigm shift from computation-centric to communication-centric system design as the number of cores in a chip increases. To overcome the problems of traditional interconnects, the Network-on-Chip (NoC), using switch-based networks, has been widely accepted as a promising architecture to orchestrate chip-wide communication. Although interconnection network design has matured in the context of multiprocessor architectures, NoCs have different characteristics for chip-wide communication support, making their design unique. For example, a NoC can benefit from high wire densities and abundant metal layers. However, the cost of a NoC is constrained in terms of power and area. The design of high-performance, low-power, and area-efficient NoCs can be extremely challenging, because these objectives often conflict with each other. We are exploring innovative ideas in NoC design, considering a multi-dimensional design space and technology constraints.

Peak Power Control

With the increasing demand for interconnect bandwidth, the on-chip network is becoming a major power consumer in a chip. Communication-intensive applications make routers greedy to acquire more power, such that the total power consumed by the network may exceed the supplied power and cause reliability problems. To ensure high performance while satisfying power constraints, the on-chip network must have a peak power control mechanism. Keeping peak power consumption within limits on a single chip is essential to maintaining supply voltage levels, supporting reliability, limiting the capacity of heat sinks, and meeting affordable packaging costs. Since the total power supplied to a chip is distributed among all its units, each unit should keep its power consumption below a preset upper limit.

Multimedia applications on a System-on-Chip (SoC) have been extensively studied for their bandwidth requirements over heterogeneous network components. Here, however, we focus on the QoS environment in homogeneous networks such as chip multiprocessors. An on-chip network must support guaranteed delivery of multimedia data (real-time traffic) as well as normal message-oriented communication (best-effort traffic).

We propose credit-based peak power control that meets pre-specified power constraints while maintaining service quality by regulating packet injection. We take different approaches for different traffic types. For real-time traffic, instead of throttling packet injection on already established connections, our scheme decides whether to accept a new connection based on its expected power consumption and the available power budget, as in admission control. We also show how to calculate the expected power consumption of a connection from its bandwidth requirement. For best-effort traffic, we calculate the required power of a packet based on the distance from its source to its destination. If the expected power consumption exceeds the power budget, we throttle the injection of the packet, as in congestion control.
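The best-effort path of this scheme can be sketched numerically. The per-router and per-link energy constants below are placeholders, not measured values, and the linear distance-based cost model is a simplification of the actual power estimation.

```python
# Hedged sketch of distance-based injection throttling for best-effort
# traffic: estimate a packet's energy cost from its hop count and admit
# it only while the remaining power budget covers that cost.

E_ROUTER = 1.0   # energy units per router traversal (illustrative)
E_LINK = 0.5     # energy units per link traversal (illustrative)

def packet_power(hops: int) -> float:
    """Expected cost: a packet crosses `hops` links and `hops + 1` routers."""
    return (hops + 1) * E_ROUTER + hops * E_LINK

def try_inject(hops: int, budget: float):
    """Admit the packet only if the budget covers its expected cost."""
    cost = packet_power(hops)
    if cost <= budget:
        return True, budget - cost
    return False, budget          # throttled: wait for credits to return

ok, remaining = try_inject(hops=4, budget=10.0)
print(ok, remaining)              # True 3.0
ok, remaining = try_inject(hops=4, budget=remaining)
print(ok, remaining)              # False 3.0 -- injection throttled
```

For real-time traffic the same budget check would be applied once at connection setup (admission control) rather than per packet.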

Peak Power Control By Regulating Input Load

 

Domain-Specific On-Chip Network Design in Large Scale Cache Systems

As circuit integration technology advances, the design of efficient interconnects has become critical. On-chip networks have been adopted to overcome the scalability and poor resource-sharing problems of shared buses or dedicated wires. However, using a general on-chip network for a specific domain may cause underutilization of network resources and large network delays, because the interconnect is not optimized for that domain. Addressing these two issues is challenging because in-depth knowledge of both interconnects and the specific domain is required. Recently proposed Non-Uniform Cache Architectures (NUCAs) use wormhole-routed 2D mesh networks to improve the performance of on-chip L2 caches. We observe that network resources in NUCAs are underutilized yet occupy considerable chip area (52% of cache area), and that the network delay is significant (63% of cache access time). Motivated by these observations, we investigate how to optimize cache operations and design the network in large-scale cache systems. We propose a single-cycle router architecture that can efficiently support multicasting in on-chip caches. Next, we present Fast-LRU replacement, where cache replacement overlaps with data request delivery. Finally, we propose a deadlock-free XYX routing algorithm and a new halo network topology to minimize the number of links in the network.
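For context, the baseline that XYX extends is plain dimension-order (XY) routing, shown below as a minimal sketch; the XYX specifics and the halo topology are beyond this example, and the function name is an assumption.

```python
# Minimal dimension-order (XY) routing for a 2D mesh: route fully in X
# first, then in Y. Restricting turns this way makes the routing
# deadlock-free in a mesh without extra virtual channels.

def xy_route(cur, dst) -> str:
    """Next output port at router `cur` for a packet headed to `dst`."""
    (cx, cy), (dx, dy) = cur, dst
    if cx != dx:
        return "E" if dx > cx else "W"
    if cy != dy:
        return "N" if dy > cy else "S"
    return "LOCAL"                      # arrived: eject to the cache bank

print(xy_route((1, 1), (3, 0)))         # E  (finish the X dimension first)
print(xy_route((3, 1), (3, 0)))         # S  (then move in Y)
```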

Design A: 16×16 Mesh (64KB)

Design B: 16×16 Simplified Mesh (64KB)

Design C: 16×4 Simplified Mesh (256KB)

Design D: Spike-16 Halo (64KB)

 

Domain-Specific Network Development

Performance of Different Designs

Adaptive Data Compression with Table-based Hardware

The design of a low-latency on-chip network is critical to high system performance, because the network is tightly integrated with the processors as well as the on-chip memory hierarchy operating at a high clock frequency. To provide low latency, there have been significant efforts on the design of routers and network topologies. However, due to the stringent power and area budgets in a chip, simple routers and network topologies are more desirable. In fact, conserving metal resources for link implementation can provide more space for logic such as cores or caches. Therefore, we focus on maximizing bandwidth utilization in the existing network. Data compression has been adopted in hardware designs to improve performance and power. Cache compression increases effective cache capacity by compressing block data and accommodating more blocks in a fixed space. Bus compression likewise expands effective bus width by encoding wide data as small codes. Recently, data compression has been explored in the on-chip network domain for performance and power.

We investigate adaptive data compression for on-chip network performance optimization and propose a cost-effective implementation. Our design uses a table-based compression approach that dynamically tracks value patterns in traffic. Using a table, the compression hardware handles diverse value patterns adaptively rather than relying on static patterns based on zero bits in a word. The table-based approach can easily achieve a better compression rate by increasing the table size. However, the table requires a huge area to keep data values on a per-flow basis; the number of tables depends on the network size, because communication cannot be globally managed in a switched network. To address this problem, we present a shared table scheme that stores identical values as a single entry across different flows. In addition, a management protocol keeps the encoding and decoding tables consistent in a distributed way, allowing out-of-order delivery in the network.
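The table mechanism can be sketched in miniature. This is a deliberately simplified model: the class name, the 4-entry size, and the simple insert-at-front learning policy are assumptions, and the distributed consistency protocol described above is omitted (here the encoder and decoder learn in lockstep from the same in-order stream).

```python
# Simplified sketch of table-based value compression: frequently seen
# values are cached in a small table; a value that hits the table is sent
# as a short index instead of the full word.

class ValueTable:
    def __init__(self, size: int = 4):
        self.size = size
        self.entries = []                     # most recently learned first

    def encode(self, value: int):
        if value in self.entries:
            return ("index", self.entries.index(value))   # hit: short code
        self.entries.insert(0, value)         # miss: learn the raw value
        self.entries = self.entries[: self.size]
        return ("raw", value)

    def decode(self, kind: str, payload: int) -> int:
        if kind == "index":
            return self.entries[payload]
        self.entries.insert(0, payload)       # mirror the encoder's update
        self.entries = self.entries[: self.size]
        return payload

enc, dec = ValueTable(), ValueTable()
stream = [7, 7, 42, 7]
out = [dec.decode(*enc.encode(v)) for v in stream]
print(out)   # [7, 7, 42, 7] -- repeated 7s travel as indices, not words
```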

We demonstrate performance improvement techniques to reduce the negative impact of compression on performance. Streamlined encoding combines the encoding and flit injection processes into a pipeline to hide the long encoding latency. Furthermore, dynamic compression management optimizes our scheme by selectively applying compression on congested paths.

Packet Compression

Publications

  • Peak Power Control for a QoS Capable On-Chip Network,
    Y. Jin, E. J. Kim, K. H. Yum,
    International Conference on Parallel Processing (ICPP), June 2005
    IBM Austin Conference on Energy-Efficient Design (ACEED), March 2005 (poster)
  • A Domain-Specific On-Chip Network Design for Large Scale Cache Systems,
    Y. Jin, E. J. Kim, K. H. Yum,
    International Symposium on High-Performance Computer Architecture (HPCA), February 2007
  • Adaptive Data Compression for High-Performance Low-Power On-Chip Networks,
    Y. Jin, K. H. Yum, E. J. Kim,
    International Symposium on Microarchitecture (MICRO), November 2008
  • Recursive Partitioning Multicast: A Bandwidth-Efficient Routing for Networks-On-Chip,
    L. Wang, Y. Jin, H. J. Kim and E. J. Kim,
    International Symposium on Networks-on-Chip (NOCS), San Diego, CA, May 2009

Dynamic Thermal Management in CMPs

As ever-increasing power density and leakage current generate significant heat, rising on-chip operating temperatures threaten system reliability and have made thermal control one of the most pressing issues in chip design. Due to the cost and complexity of thermal packaging, many Dynamic Thermal Management (DTM) schemes have been widely adopted in modern processors to control CPU power dissipation. However, the overall temperature of a CMP is highly correlated with the temperature of each of its cores; hence, thermal models for uniprocessor environments cannot be directly applied to CMPs due to this potential heterogeneity. To the best of our knowledge, no prior DTM scheme considers the thermal correlation effect among neighboring cores, nor the dynamic workload behaviors that produce different thermal behaviors. We believe it is necessary to develop an efficient online workload estimation scheme for DTM to be applicable to real-world applications, which have variable workload behaviors and different thermal contributions to chip temperature.

Comparison between no DTM and PDTM


Predictive Dynamic Thermal Management

Recently, processor power density has been increasing at an alarming rate, resulting in high on-chip temperature. Higher temperature increases leakage current and causes poor reliability. In this work, we propose Predictive Dynamic Thermal Management (PDTM) based on an Application-Based Thermal Model (ABTM) and a Core-Based Thermal Model (CBTM) in multicore systems. ABTM predicts future temperature based on application-specific thermal behavior, while CBTM estimates core temperature patterns from steady-state temperature and workload. Our prediction model has an average error of 1.6%, compared to at most 5% error for the model in HybDTM. Based on the temperatures predicted by ABTM and CBTM, the proposed PDTM can maintain system temperature below a desired level by moving the running application from a possibly overheated core to the predicted coolest core (migration) and by reducing processor resources (priority scheduling) within multicore systems. PDTM enables the exploration of the tradeoff between throughput and fairness in temperature-constrained multicore systems.

We implement PDTM on Intel’s quad-core system with a device driver to access the Digital Thermal Sensor (DTS). Compared with the standard Linux scheduler, PDTM decreases average temperature by about 10% and peak temperature by 5 degrees, with a negligible performance impact under 1%, while running a single SPEC2006 benchmark. Moreover, PDTM outperforms HRTM in reducing average temperature by about 7% and peak temperature by about 3 degrees, with a performance overhead of 0.15% when running a single benchmark.
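The core-based prediction step can be illustrated with a first-order thermal model. This is a hedged sketch, not the published ABTM/CBTM equations: the exponential approach to a workload-dependent steady state is a standard simplification, and the time constant, temperatures, and threshold below are made-up values.

```python
# Hedged sketch of predictive thermal management: project the next-step
# temperature from a first-order model (exponential approach to a
# workload-dependent steady state) and act before the limit is crossed.

import math

def predict_temp(t_now, t_steady, dt, tau=10.0):
    """First-order thermal model: exponential approach to steady state."""
    return t_steady + (t_now - t_steady) * math.exp(-dt / tau)

def should_migrate(t_now, t_steady, dt, threshold=80.0):
    """Act proactively if the *predicted* temperature crosses the limit."""
    return predict_temp(t_now, t_steady, dt) > threshold

# A core at 70 C heading to a 90 C steady state: predicted, not yet hot.
print(round(predict_temp(70.0, 90.0, dt=5.0), 1))
print(should_migrate(70.0, 90.0, dt=5.0))
```

Acting on the prediction, rather than on the current sensor reading, is what makes the scheme proactive instead of reactive.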

Without DTM

PDTM

Hybrid Dynamic Thermal Management

Multimedia applications have become among the most popular applications on mobile devices such as wireless phones, PDAs, and laptops. However, typical mobile systems are not equipped with cooling components, which eventually causes critical thermal deficiencies. Although many low-power and low-temperature multimedia playback techniques have been proposed, they fail to provide QoS (Quality of Service) while controlling temperature, due to a lack of proper understanding of multimedia applications. We propose Hybrid Dynamic Thermal Management (HDTM), which exploits the thermal characteristics of both multimedia applications and systems. Specifically, we model application characteristics as the probability distribution of the number of cycles required to decode a frame. We also improve existing system thermal models by considering the effect of workload. This scheme finds an optimal clock frequency at runtime to prevent overheating with minimal performance degradation.

The proposed scheme is implemented on Linux on a Pentium-M processor, which provides variable clock frequencies. To evaluate the performance of the proposed scheme, we use three major codecs: MPEG-4, H.264/AVC, and H.264/AVC streaming. Our results show that HDTM lowers the overall temperature by 15 degrees and the peak temperature by 20 degrees, while keeping the frame drop ratio under 0.2%, compared to previous thermal management schemes such as feedback-control DTM, frame-based DTM, and GOP-based DTM.
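The frequency-selection idea behind this statistical model can be sketched as follows. The cycle samples, frequency list, and drop-ratio target below are invented for illustration; the real scheme derives the distribution from profiled decode statistics.

```python
# Sketch of the statistical idea: treat per-frame decode cycles as a
# distribution, then pick the lowest clock frequency whose per-frame
# cycle budget keeps the frame-drop ratio under the QoS target.

def min_frequency(cycle_samples, frame_time, freqs, max_drop_ratio):
    """Lowest frequency meeting the QoS target, else the highest one."""
    for f in sorted(freqs):
        budget = f * frame_time                 # cycles available per frame
        drops = sum(c > budget for c in cycle_samples)
        if drops / len(cycle_samples) <= max_drop_ratio:
            return f
    return max(freqs)

# 30 fps playback; most frames need ~18-21M cycles, a few spike to 35M.
samples = [18e6] * 90 + [21e6] * 8 + [35e6] * 2
freqs = [600e6, 800e6, 1000e6, 1200e6]          # candidate clocks in Hz
print(min_frequency(samples, 1 / 30, freqs, max_drop_ratio=0.02) / 1e6)
# -> 800.0 MHz: the rare 35M-cycle frames are allowed to drop
```

Running below the peak-demand frequency is exactly where the temperature savings come from; the distribution tells us how much QoS that costs.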

Instructions and Frequency

The number of instructions (left) and the estimated frequency (right)

Correlation-Aware Thermal Management

Since the overall temperature of a CMP is highly correlated with the temperature of each of its cores, and no prior DTM scheme accounts for thermal correlation among neighboring cores or for dynamic workload behaviors, we propose a lightweight runtime workload estimation using the cumulative distribution function to observe processes’ dynamic workload behaviors, and present a thermal model for CMP systems that captures the thermal correlation effect by profiling the thermal impact of neighboring cores under specific workloads. Using the estimated representative workload and the modeled thermal correlation effect, we estimate each core’s future temperature with only 2.4% average error. We then introduce Proactive Correlation-Aware Thermal Management (ProCATM) to avoid thermal emergencies and provide thermal fairness with negligible performance overhead.

We implement and evaluate ProCATM on Intel Quad-Core Q6600 and Intel Core i7 965 processor systems running grouped multimedia applications and several server benchmarks. According to the experimental results, ProCATM reduces the peak temperature by up to 9.09% and 7.94% on our four-core and eight-core systems, with only 2.28% and 0.54% performance overhead respectively, compared to the standard Linux scheduler.

Correlation-Aware Thermal Management

Papers

  • “Temperature-Aware Scheduler Based on Thermal Behavior Grouping in Multicore Systems,” in Design, Automation & Test in Europe (DATE 2009), Nice, France, April, 2009.
  • “Hybrid Dynamic Thermal Management Based on Statistical Characteristics of Multimedia Applications,” in International Symposium on Low Power Electronics and Design (ISLPED 2008), Bangalore, India, August, 2008.
  • “Predictive Dynamic Thermal Management for Multicore Systems,” in Design Automation Conference (DAC 2008), Anaheim, USA, June, 2008.
  • “Effective Dynamic Thermal Management for MPEG-4 Decoding,” in IEEE International Conference on Computer Design (ICCD 2007), Lake Tahoe, USA, October, 2007.

High Performance, Energy Efficient and Secure Cluster design (NSF project, 2006 – 2009)

Clusters have been widely accepted as the most effective solution for designing high-performance servers, which are increasingly being deployed to support a wide variety of Web-based services. Along with high and predictable performance, optimizing the energy consumption of these servers has become a serious concern due to their high power budgets. In addition, the critical nature of many Internet-based services mandates that these systems be robust to attacks from the Internet, since numerous security loopholes in cluster servers have been revealed. Although some initial investigation of cluster energy consumption and security has appeared recently, an in-depth design and analysis of a cluster interconnect considering all three parameters together has not been undertaken.

Performance analysis of a QoS capable cluster interconnect

The growing use of clusters in diverse applications, many of which have real-time constraints, requires quality-of-service (QoS) support from the underlying cluster interconnect. All prior studies on QoS-aware cluster routers/networks have used simulation for performance evaluation. In this work, we present an analytical model for a wormhole-switched router with QoS provisioning. In particular, the model captures message blocking due to wormhole switching in a pipelined router, and bandwidth sharing due to a rate-based scheduling mechanism, called VirtualClock. Then we extend the model to a hypercube-style cluster network. Average message latency for different traffic classes and deadline missing probability for real-time applications are computed using the model. We evaluate a 16-port router and hypercubes of different dimensions with a mixed workload of real-time and best-effort (BE) traffic. Comparison with the simulation results shows that the single router and the network models are quite accurate in providing the performance estimates, and thus can be used as efficient design tools.

Performance Enhancement Techniques for InfiniBand Architecture

InfiniBand Architecture (IBA) is envisioned to be the default communication fabric for system area networks (SANs). However, the released IBA specification outlines only higher level functionalities, leaving it open for exploring various design alternatives. In this work, we investigate four co-related techniques to provide high and predictable performance in IBA. These are: (i) using the Shortest Path First (SPF) algorithm for deterministic packet routing; (ii) developing a multipath routing mechanism for minimizing congestion; (iii) developing a selective packet dropping scheme to handle deadlock and congestion; and (iv) providing multicasting support for customized applications. These designs are evaluated using an integrated workload on a versatile IBA simulation testbed. Simulation results indicate that the SPF routing, multipath routing, packet dropping, and multicasting schemes are quite effective in delivering high and assured performance in clusters. One of the major contributions of this research is the IBA simulation testbed, which is an essential tool to evaluate various design tradeoffs.
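The SPF routing in (i) is classic shortest-path computation over the switch graph; a compact Dijkstra sketch follows. The topology and link weights are invented for illustration, not an IBA fabric.

```python
# Shortest Path First (Dijkstra) over a weighted switch graph, the kind
# of computation a subnet manager would run to build deterministic
# forwarding tables. Graph: node -> list of (neighbor, link weight).

import heapq

def spf(graph: dict, src: str) -> dict:
    """Shortest-path distance from src to every reachable switch."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
     "C": [("D", 1)], "D": []}
print(spf(g, "A"))   # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Multipath routing (ii) would keep several near-shortest alternatives per destination instead of only the single best one.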

InfiniBand Architecture

Energy-Efficient Cluster Interconnects

Designing energy-efficient clusters has recently become an important concern in making these systems economically attractive for many applications. Since the cluster interconnect is a major part of the system, the focus of this work is to characterize and optimize the energy consumption of the entire interconnect. Using a cycle-accurate simulator of an InfiniBand Architecture (IBA) compliant interconnect fabric and actual designs of its components, we investigate the energy behavior of regular and irregular interconnects. The energy profile of the three major components (switches, network interface cards (NICs), and links) reveals that the links and switch buffers consume the major portion of the power budget. Hence, we focus on energy optimization of these two components. To minimize link power, we first investigate a dynamic voltage scaling (DVS) algorithm and then propose a novel dynamic link shutdown (DLS) technique. The DLS technique uses an appropriate adaptive routing algorithm to shut down links intelligently. We also present an optimized buffer design for reducing leakage energy in 70nm technology. Our analysis of different networks reveals that while DVS is an effective energy conservation technique, it incurs a significant performance penalty at low to medium workloads. Moreover, the energy saving from DVS shrinks as buffer leakage current becomes significant in 70nm designs. On the other hand, the proposed DLS technique can provide optimized performance-energy behavior (up to 40% energy savings with less than 5% performance degradation in the best case) for cluster interconnects.
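A DLS-style decision can be sketched as a simple policy check. The utilization threshold and function shape are placeholders, not the paper's tuned policy; the key point is that a link is only gated when adaptive routing can still reach every destination without it.

```python
# Illustrative dynamic link shutdown (DLS) policy: gate a link only when
# its recent utilization is low AND an alternate adaptive route exists,
# so shutting it down cannot disconnect any traffic.

def dls_decision(utilization: float, has_alt_route: bool,
                 shutdown_threshold: float = 0.15) -> str:
    if utilization < shutdown_threshold and has_alt_route:
        return "shutdown"
    return "keep-on"

print(dls_decision(0.05, True))    # shutdown
print(dls_decision(0.05, False))   # keep-on: gating would strand traffic
print(dls_decision(0.60, True))    # keep-on: link is busy
```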

Papers

  • K. H. Yum, Y. Jin, E. J. Kim, and C. R. Das, “Integration of Admission, Congestion, and Peak Power Control in QoS-Aware Clusters,” to appear in The Journal of Parallel and Distributed Computing (JPDC).
  • E. J. Kim, K. H. Yum, C. R. Das, M. Yousif, and J. Duato, “Exploring IBA Design Space for Improved Performance,” IEEE Transactions on Parallel and Distributed Systems (TPDS), Vol. 18, No. 4, pp. 498-510, April 2007.
  • E. J. Kim, G. M. Link, K. H. Yum, V. Narayanan, M. Kandemir, M. J. Irwin, and C. R. Das, “A Holistic Approach to Designing Energy-Efficient Cluster Interconnects,” IEEE Transactions on Computers, Vol. 54, No. 6, pp. 660-671, June 2005.
  • E. J. Kim, K. H. Yum, and C. R. Das, “Performance Analysis of a QoS Capable Cluster Interconnect,” Performance Evaluation, Volume 60, Issues 1-4, pp. 275-302, May 2005.

Embedded Software Solutions in Wireless Environments (ETRI project, 2005 – 2008)

In this project, we attempt to provide software solutions for two applications: multimedia streaming services in wireless LAN environments and fault-tolerant wireless sensor network design. Video streaming is gaining interest from end-users as their network access speeds steadily increase. Due to the increasing popularity of hand-held devices and wireless laptops, the final access points are mostly wireless. For energy efficiency in wireless sensor networks, dynamic reconfiguration, where only a subset of sensor nodes is active during any interval, has been widely adopted. However, maintaining the required K-coverage and connectivity is critical for the dynamic reconfiguration of wireless sensor networks.
