IEEE 802.11ah is a new Wi-Fi standard operating on unlicensed sub-GHz frequencies. It aims to provide long-range connectivity to Internet of Things (IoT) devices. The IEEE 802.11ah restricted access window (RAW) mechanism promises to increase throughput and energy efficiency in dense deployments by dividing stations into different RAW groups and allowing only one group to access the channel at a time. In this demo, we demonstrate the ability of the RAW mechanism to support a large number of densely deployed IoT stations with heterogeneous traffic requirements. Differentiated Quality of Service (QoS) is offered to a small set of high-throughput wireless cameras that coexist with thousands of best-effort sensor monitoring stations. The results are visualized in near real-time using our custom-developed IEEE 802.11ah visualizer running on top of the ns-3 event-based network simulator.
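The grouping idea can be sketched briefly: 802.11ah maps each station's Association ID (AID) to a RAW slot with a simple modulo function, commonly written as i_slot = (AID + N_offset) mod N_RAW. The snippet below illustrates this mapping for a dense deployment (function and parameter names are ours, not from the standard's text):

```python
def raw_slot(aid: int, n_offset: int, n_slots: int) -> int:
    """Map a station's AID to a RAW slot index.

    Implements the commonly cited 802.11ah mapping
    i_slot = (AID + N_offset) mod N_RAW; names are illustrative.
    """
    return (aid + n_offset) % n_slots

# Group 1000 stations into 8 RAW slots: stations spread evenly,
# so each slot only sees ~125 contenders instead of 1000.
slots = {}
for aid in range(1, 1001):
    slots.setdefault(raw_slot(aid, n_offset=0, n_slots=8), []).append(aid)
```

Because only the stations mapped to the currently open slot may contend, collision probability within each slot drops roughly with the number of slots.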
The main purpose of running ns-3 simulations is to generate relevant data sets for further study. There are two strategies to generate output from ns-3: either using generic predefined bulk output mechanisms or using ns-3's tracing system. Both require parsing the raw output data to extract and process the data of interest and obtain meaningful information. However, parsing such output is in most cases time-consuming and error-prone. Post-processing is even harder when a large number of simulations needs to be analyzed, and even the tracing system cannot simplify this task. Moreover, results obtained this way are only available once the simulation is finished. Therefore, we developed a user-friendly interactive visualization and post-processing tool for IEEE 802.11ah called ahVisualizer. Besides the topology and MAC configuration, ahVisualizer also plots our traces for each node over time during the simulation, as well as averages and standard deviations for each traced parameter. It can compare all the measured values across different simulations. Users can easily download figures and data in various formats. Moreover, it includes a post-processing tool that plots selected series, with fixed parameters of choice, from a large set of simulations. This paper presents ahVisualizer, its services and its architecture, and shows how this tool enables much faster and easier data analysis and monitoring of ns-3 simulations with 802.11ah.
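To make the post-processing burden concrete, the snippet below shows the kind of per-node aggregation that ahVisualizer automates: parsing plain-text trace samples into per-node series and computing mean and standard deviation. The trace line format here is hypothetical; actual ns-3 output depends on the configured trace sinks.

```python
import statistics

# Hypothetical trace lines: "<time_s> <node_id> <throughput_kbps>".
# Real ns-3 trace formats vary with the chosen trace sources/sinks.
trace = """\
1.0 0 120.5
1.0 1 80.0
2.0 0 130.5
2.0 1 79.0
"""

per_node = {}
for line in trace.strip().splitlines():
    _time, node, kbps = line.split()
    per_node.setdefault(int(node), []).append(float(kbps))

# Mean and sample standard deviation per node, as ahVisualizer plots them.
summary = {node: (statistics.mean(v), statistics.stdev(v))
           for node, v in per_node.items()}
```

Doing this by hand for every traced parameter, across hundreds of simulation runs, is exactly the repetitive step the tool removes.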
IEEE 802.11ah, marketed as Wi-Fi HaLow, is a new Wi-Fi standard for sub-1 GHz communications, aiming to address the major challenges of the Internet of Things (IoT), namely connectivity among a large number of densely deployed power-constrained stations. The standard was only published in May 2017, and hardware supporting Wi-Fi HaLow is not yet available on the market. As such, research on 802.11ah has been mostly based on mathematical and simulation models. Mathematical models generally introduce several simplifications and assumptions, which do not faithfully reflect real network conditions. As a solution, we previously developed an IEEE 802.11ah module for ns-3, publicly released in 2016. This initial release consisted of physical layer models for sub-1 GHz communications and an implementation of the fast association and Restricted Access Window (RAW) channel access method. In this paper, we present an extension to our IEEE 802.11ah simulator. It contains several new features: an online RAW configuration interface, an energy state model, adaptive Modulation and Coding Scheme (MCS), and Traffic Indication Map (TIM) segmentation. This paper presents the details of our implementation, along with experimental results to validate each new feature. The extended Wi-Fi HaLow module can now support different scenarios with both uplink and downlink heterogeneous traffic, together with real-time RAW optimization, sleep management for energy conservation, and adaptive MCS.
So far, existing sub-GHz wireless communication technologies have focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics suit many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP-cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP-cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets.
The presented results enable the fine tuning of RAW and TIM parameters for throughput-demanding reliable applications (i.e., video streaming, firmware updates) on one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand.
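The scale of the sensing scenario is easy to sanity-check with back-of-envelope arithmetic from the abstract's own numbers: 6960 stations each transmitting once per 60 s yields an aggregate arrival rate of 116 packets/s. The payload size below is an assumption for illustration, not a figure from the study.

```python
stations = 6960          # from the reported result
interval_s = 60          # one transmission per station per 60 s
rate_pps = stations / interval_s          # aggregate packet arrival rate

# Assumed 64-byte payloads (illustrative only) -> offered load in kbps.
payload_bytes = 64
offered_kbps = rate_pps * payload_bytes * 8 / 1000
```

Even at this density the offered load stays far below the channel capacity; the bottleneck is contention, which is precisely what RAW and TIM tuning addresses.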
Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture, extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application-layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.
In recent years, the Internet of Things (IoT) has introduced a whole new set of challenges and opportunities in telecommunications. Traffic over wireless networks has been increasing exponentially as many sensors and everyday devices are being connected. Current networks must therefore adapt to and cope with the specific requirements introduced by IoT. One fundamental need of next-generation networked systems is to monitor IoT applications, especially those dealing with personal health monitoring or emergency response services, which have stringent latency requirements when dealing with malfunctions or unusual events. Traditional anomaly detection approaches are not suitable for delay-sensitive IoT applications since these approaches are significantly impacted by latency. With the advent of 5G networks and by exploiting the advantages of new paradigms, such as Software-Defined Networking (SDN), Network Function Virtualization (NFV) and edge computing, scalable, low-latency anomaly detection becomes feasible. In this paper, an anomaly detection solution for Smart City applications is presented, focusing on low-power Fog Computing solutions and evaluated within the scope of Antwerp's City of Things testbed. Based on a collected large dataset, the most appropriate Low Power Wide Area Network (LPWAN) technologies for our Smart City use case are investigated.
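The abstract does not specify the detection method, so as a minimal sketch of the kind of lightweight check a low-power fog node can run locally, the snippet below flags sensor samples that deviate more than k standard deviations from a sliding-window mean (all names and thresholds are illustrative, not the paper's algorithm):

```python
from collections import deque
import math

class ZScoreDetector:
    """Flag samples more than `k` sample std devs from a sliding-window mean.

    Illustrative only: the detection method used in the paper is not
    specified in this abstract.
    """
    def __init__(self, window: int = 50, k: float = 3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def update(self, x: float) -> bool:
        anomalous = False
        if len(self.buf) >= 2:
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / (len(self.buf) - 1)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(x - mean) > self.k * std
        self.buf.append(x)
        return anomalous

# Steady readings around 10.0, then one outlier at 50.0.
det = ZScoreDetector(window=10, k=3.0)
flags = [det.update(v) for v in [10.0] * 10 + [10.2, 50.0]]
```

Running such a check at the edge avoids the round-trip to a central cloud, which is the latency argument the paper builds on.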
In recent years, traffic over wireless networks has been increasing exponentially, due to the impact of the Internet of Things (IoT) and Smart Cities. Current networks must adapt to and cope with the specific requirements of IoT applications, since resources can be requested on-demand simultaneously by multiple devices at different locations. One of these requirements is low latency, since even a small delay for an IoT application such as health monitoring or an emergency service can drastically impact its performance. To deal with this limitation, the Fog computing paradigm has been introduced, placing cloud resources at the edges of the network to decrease latency. However, deciding which edge cloud location and which physical hardware will be used to allocate a specific resource related to an IoT application is not an easy task. Therefore, in this paper, an Integer Linear Programming (ILP) formulation for the IoT application service placement problem is proposed, which considers multiple optimization objectives such as low latency and energy efficiency. Solutions for the resource provisioning of IoT applications within the scope of Antwerp's City of Things testbed have been obtained. The result of this work can serve as a benchmark in future research related to placement issues of IoT application services in Fog Computing environments, since the model approach is generic and applies to a wide range of IoT applications.
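To show the shape of the placement problem the ILP addresses, the toy instance below solves it by exhaustive search: assign services to fog nodes, minimising total latency subject to node capacity. The instance data and single-objective cost are illustrative simplifications; the paper's formulation is multi-objective and solved as an ILP.

```python
from itertools import product

# Toy instance (illustrative, not the paper's model): place 3 services
# on 2 fog nodes, minimising total latency under CPU capacity limits.
services = {"s1": 2, "s2": 1, "s3": 1}           # CPU demand per service
capacity = {"fog0": 2, "fog1": 3}                # CPU capacity per node
latency = {("s1", "fog0"): 5, ("s1", "fog1"): 9,
           ("s2", "fog0"): 4, ("s2", "fog1"): 2,
           ("s3", "fog0"): 7, ("s3", "fog1"): 3}

best_cost, best = float("inf"), None
for assign in product(capacity, repeat=len(services)):
    placement = dict(zip(services, assign))
    load = {n: 0 for n in capacity}
    for s, n in placement.items():
        load[n] += services[s]
    if any(load[n] > capacity[n] for n in capacity):
        continue  # capacity constraint violated
    cost = sum(latency[(s, n)] for s, n in placement.items())
    if cost < best_cost:
        best_cost, best = cost, placement
```

An ILP solver replaces this enumeration with binary decision variables x[s,n] and the same capacity constraints, which is what makes the approach scale beyond toy sizes.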
The restricted access window (RAW) feature of IEEE 802.11ah aims to significantly reduce channel contention in ultra-dense and large-scale sensor networks. It divides stations into groups and slots, allowing channel access to only one RAW slot at a time. Several algorithms have been proposed to optimize the RAW parameters (e.g., number of groups and slots, group duration, and station assignment), as the optimal parameter values significantly affect performance and depend on network and traffic conditions. These algorithms often rely on accurate estimation of future sensor station traffic. In this paper, we present a more accurate traffic estimation technique for IEEE 802.11ah sensor stations, by exploiting the "more data" header field and cross slot boundary features. The resulting estimation method is integrated into an enhanced version of the Traffic-Adaptive RAW Optimization Algorithm, referred to as E-TAROA. Simulation results show that our proposed estimation method is significantly more accurate in very dense networks with thousands of sensor stations. This in turn results in a significantly better RAW configuration. Specifically, E-TAROA converges significantly faster and achieves up to 23% higher throughput and 77% lower latency than the original TAROA algorithm under high traffic loads.
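The intuition behind the "more data" exploitation can be sketched simply: when a station's last observed frame in a RAW slot carries a set "more data" bit, its queue was not emptied, so the estimator should credit it with at least one additional pending frame. The snippet below is our illustrative simplification; E-TAROA's actual estimator maintains more state across slots.

```python
def estimate_pending(observations):
    """Estimate per-station pending traffic from observed transmissions.

    `observations` maps station id -> (packets_seen, more_data_bit) for a
    RAW slot. A set "more data" bit on the last observed frame signals
    the station's queue was not emptied. Illustrative sketch only.
    """
    est = {}
    for sta, (packets_seen, more_data) in observations.items():
        # At least the observed packets were queued; a set more-data bit
        # implies at least one extra frame remained.
        est[sta] = packets_seen + (1 if more_data else 0)
    return est

# Station 1 sent 3 frames, last one with "more data" set; station 2
# sent 1 frame and drained its queue.
est = estimate_pending({1: (3, True), 2: (1, False)})
```

A plain packet-count estimator would report 3 for station 1; using the header bit shifts the estimate toward the true backlog, which is the accuracy gain the paper quantifies.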
The recent increase in connected devices has triggered countless Internet-of-Things applications to emerge. By using the Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e MAC layer, wireless multi-hop networks enable highly reliable and low-power communication, supporting mission-critical and industrial applications. TSCH uses channel hopping to avoid both external interference and multi-path fading, and a synchronization-based schedule which allows precise bandwidth allocation. Efficient schedule management is crucial when minimizing the delay of a packet to reach its destination. In networks with recurrent sensor data transmissions that repeat after a certain period, current scheduling functions are prone to high latencies because they ignore this recurrent behavior. In this article, we propose a TSCH scheduling function that tackles this minimal-latency recurrent traffic problem. Concretely, this work presents two novel contributions. First, the recurrent traffic problem is defined formally as an Integer Linear Program. Second, we propose the Recurrent Low-Latency Scheduling Function (ReSF), which reserves minimal-latency paths from source to sink and only activates these paths when recurrent traffic is expected. Extensive experimental results show that using ReSF leads to a latency improvement of up to 80% compared to state-of-the-art low-latency scheduling functions, with a negligible impact on power consumption of at most 6%.
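The core scheduling idea can be sketched as follows: reserve a chain of consecutive timeslots along the path, so a packet traverses every hop within one pass, and activate that chain only at multiples of the traffic period. The snippet below is an illustrative simplification of this idea, not the published ReSF algorithm.

```python
def resf_like_schedule(path_len, start_slot, period_slots, horizon):
    """Reserve minimal-latency slot chains for recurrent traffic.

    Hop h transmits in slot start_slot + h, so a packet reaches the sink
    path_len slots after generation. The chain repeats every
    `period_slots` slots, matching the traffic period, and is inactive
    otherwise (saving energy). Illustrative sketch only.
    """
    activations = []
    t = start_slot
    while t + path_len <= horizon:
        activations.append([t + h for h in range(path_len)])
        t += period_slots
    return activations

# A 3-hop path with traffic generated every 10 slots, over a 30-slot horizon.
sched = resf_like_schedule(path_len=3, start_slot=2, period_slots=10, horizon=30)
```

A scheduler unaware of the period would either keep the whole chain active (wasting energy) or let the packet wait for the next generic slot at each hop (adding latency); matching activations to the period avoids both, which is the trade-off ReSF optimizes.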