Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to cope with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, introduce a new set of stringent requirements, such as low latency, since resources can be requested on demand simultaneously by multiple devices at different locations. It is therefore necessary to adapt existing network technologies to future needs and to design new architectural concepts that help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture and extends it with additional software components. The contribution of our work is a fully integrated fog node management system alongside the proposed application-layer Peer-to-Peer (P2P) fog protocol, based on the Open Shortest Path First (OSPF) routing protocol, for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in both network bandwidth usage and latency when compared to centralized cloud solutions.
In recent years, the Internet of Things (IoT) has introduced a whole new set of challenges and opportunities in telecommunications. Traffic over wireless networks has been increasing exponentially as many sensors and everyday devices are being connected. Current networks must therefore adapt to and cope with the specific requirements introduced by IoT. One fundamental need of next-generation networked systems is to monitor IoT applications, especially those dealing with personal health monitoring or emergency response services, which have stringent latency requirements when dealing with malfunctions or unusual events. Traditional anomaly detection approaches are not suitable for delay-sensitive IoT applications since these approaches are significantly impacted by latency. With the advent of 5G networks and by exploiting the advantages of new paradigms, such as Software-Defined Networking (SDN), Network Function Virtualization (NFV) and edge computing, scalable, low-latency anomaly detection becomes feasible. In this paper, an anomaly detection solution for Smart City applications is presented, which focuses on low-power Fog Computing solutions and is evaluated within the scope of Antwerp’s City of Things testbed. Based on a large collected dataset, the most appropriate Low Power Wide Area Network (LPWAN) technologies for our Smart City use case are investigated.
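The abstract does not describe the detector itself; as a minimal illustrative sketch of the kind of lightweight anomaly detection that can run on a constrained fog node, a rolling z-score test over sensor readings can be used (this is not the paper's actual detector; the window size, threshold, and data below are assumed):

```python
def zscore_anomalies(values, window=30, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    A deliberately lightweight detector suitable for low-power nodes;
    a simplified sketch, not the detector evaluated in the paper."""
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mu = sum(hist) / window
        sd = (sum((x - mu) ** 2 for x in hist) / window) ** 0.5
        if sd and abs(values[i] - mu) > threshold * sd:
            flagged.append(i)
    return flagged

# Synthetic air-quality readings with one injected spike at index 35
readings = [20.0 + 0.1 * (i % 5) for i in range(40)]
readings[35] = 90.0
print(zscore_anomalies(readings))  # the spike at index 35 is flagged
```

Running the detector at the fog node, as in the paper's architecture, means only the flagged indices (rather than the raw stream) need to traverse the backhaul.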
In recent years, traffic over wireless networks has been increasing exponentially due to the impact of the Internet of Things (IoT) and Smart Cities. Current networks must adapt to and cope with the specific requirements of IoT applications, since resources can be requested on-demand simultaneously by multiple devices at different locations. One of these requirements is low latency, since even a small delay for an IoT application such as health monitoring or an emergency service can drastically impact its performance. To deal with this limitation, the Fog computing paradigm has been introduced, placing cloud resources at the edges of the network to decrease latency. However, deciding which edge cloud location and which physical hardware will be used to allocate a specific resource related to an IoT application is not an easy task. Therefore, in this paper, an Integer Linear Programming (ILP) formulation for the IoT application service placement problem is proposed, which considers multiple optimization objectives such as low latency and energy efficiency. Solutions for the resource provisioning of IoT applications within the scope of Antwerp’s City of Things testbed have been obtained. The results of this work can serve as a benchmark in future research related to placement issues of IoT application services in Fog Computing environments, since the model approach is generic and applies to a wide range of IoT applications.
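The abstract does not reproduce the ILP itself; to illustrate the shape of such a service placement problem, a tiny instance can be solved by exhaustive search over a weighted latency/energy objective with capacity constraints (all node names, latency and energy figures, and the weight `alpha` below are hypothetical, not values from the paper):

```python
from itertools import product

# Hypothetical fog/cloud locations: per-service latency (ms), energy cost (J),
# and how many services each node can host -- assumed illustrative values.
LATENCY = {"edge-A": 5, "edge-B": 8, "cloud": 40}
ENERGY = {"edge-A": 3.0, "edge-B": 2.5, "cloud": 1.0}
CAPACITY = {"edge-A": 1, "edge-B": 1, "cloud": 10}

def place(services, alpha=0.8):
    """Exhaustively pick one node per service, minimizing a weighted
    latency/energy objective while respecting node capacities. A brute-force
    stand-in for the paper's ILP, feasible only for tiny instances."""
    best, best_cost = None, float("inf")
    for assign in product(LATENCY, repeat=len(services)):
        if any(assign.count(n) > CAPACITY[n] for n in LATENCY):
            continue  # capacity constraint violated
        cost = sum(alpha * LATENCY[n] + (1 - alpha) * ENERGY[n]
                   for n in assign)
        if cost < best_cost:
            best, best_cost = assign, cost
    return dict(zip(services, best))

print(place(["air-quality", "health-monitor"]))
```

With the latency weight dominating, both services land on edge nodes rather than the cloud; the ILP in the paper expresses the same trade-off declaratively, so an off-the-shelf solver can handle realistically sized instances.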
IEEE 802.11ah is a new Wi-Fi standard operating on unlicensed sub-GHz frequencies. It aims to provide long-range connectivity to Internet of Things (IoT) devices. The IEEE 802.11ah restricted access window (RAW) mechanism promises to increase throughput and energy efficiency in dense deployments by dividing stations into different RAW groups and allowing only one group to access the channel at a time. In this demo, we demonstrate the ability of the RAW mechanism to support a large number of densely deployed IoT stations with heterogeneous traffic requirements. Differentiated Quality of Service (QoS) is offered to a small set of high-throughput wireless cameras that coexist with thousands of best-effort sensor monitoring stations. The results are visualized in near real-time using our custom-built IEEE 802.11ah visualizer running on top of the ns-3 event-based network simulator.
The restricted access window (RAW) feature of IEEE 802.11ah aims to significantly reduce channel contention in ultra-dense and large-scale sensor networks. It divides stations into groups and slots, allowing channel access to only one RAW slot at a time. Several algorithms have been proposed to optimize the RAW parameters (e.g., number of groups and slots, group duration, and station assignment), as the optimal parameter values significantly affect performance and depend on network and traffic conditions. These algorithms often rely on accurate estimation of future sensor station traffic. In this paper, we present a more accurate traffic estimation technique for IEEE 802.11ah sensor stations, which exploits the “more data” header field and cross-slot boundary features. The resulting estimation method is integrated into an enhanced version of the Traffic-Adaptive RAW Optimization Algorithm, referred to as E-TAROA. Simulation results show that our proposed estimation method is significantly more accurate in very dense networks with thousands of sensor stations, which in turn yields a better RAW configuration. Specifically, E-TAROA converges significantly faster and achieves up to 23% higher throughput and 77% lower latency than the original TAROA algorithm under high traffic loads.
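As a simplified sketch of how the “more data” field can sharpen interval estimation: when a station's frame carries that flag, the station still has queued packets, so the gap to its next transmission reflects queueing backlog rather than the true generation interval and can be discarded. The function below illustrates this filtering idea with a median estimate (the actual E-TAROA estimator in the paper is more involved; the timestamps and flags are made up):

```python
def estimate_interval(timestamps, more_data_flags):
    """Estimate a station's packet generation interval from the AP's
    observations. Gaps that start from a frame with the 'more data' flag
    set are dropped, since they measure queue drain rather than the true
    interval. Simplified sketch, not the full E-TAROA estimator."""
    gaps = []
    for (t1, f1), (t2, _f2) in zip(zip(timestamps, more_data_flags),
                                   zip(timestamps[1:], more_data_flags[1:])):
        if not f1:  # keep only gaps starting from an empty queue
            gaps.append(t2 - t1)
    if not gaps:
        return None
    gaps.sort()
    return gaps[len(gaps) // 2]  # median is robust to occasional retries

# Station generating roughly every 10 time units, with one backlog burst
ts = [0, 10, 20, 21, 30, 40]
flags = [False, False, True, False, False, False]
print(estimate_interval(ts, flags))
```

Without the flag-based filtering, the short 20-to-21 backlog gap would drag a naive average well below the true interval of 10.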
The recent increase in connected devices has triggered countless Internet-of-Things applications to emerge. By using the Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e MAC layer, wireless multi-hop networks enable highly reliable and low-power communication, supporting mission-critical and industrial applications. TSCH uses channel hopping to avoid both external interference and multi-path fading, and a synchronized schedule that allows precise bandwidth allocation. Efficient schedule management is crucial for minimizing the delay of a packet to reach its destination. In networks with recurrent sensor data transmissions that repeat after a certain period, current scheduling functions are prone to high latencies because they ignore this recurrent behavior. In this article, we propose a TSCH scheduling function that tackles this minimal-latency recurrent traffic problem. Concretely, this work presents two novel contributions. First, the recurrent traffic problem is defined formally as an Integer Linear Program. Second, we propose the Recurrent Low-Latency Scheduling Function (ReSF), which reserves minimal-latency paths from source to sink and only activates these paths when recurrent traffic is expected. Extensive experimental results show that using ReSF leads to a latency improvement of up to 80% compared to state-of-the-art low-latency scheduling functions, with a negligible impact on power consumption of at most 6%.
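The core idea of a minimal-latency recurrent reservation can be sketched in a few lines: reserve one cell per hop in consecutive timeslots starting at each expected packet generation time, so a packet traverses the whole path back to back, and leave the cells idle otherwise. The helper below is an illustrative simplification, not the full ReSF specification (slotframe handling, collisions, and rescheduling are omitted):

```python
def resf_cells(path_len, period, offset=0, horizon=None):
    """Return (slot, hop) pairs where hop i forwards in slot
    offset + k*period + i. A packet generated at each period start
    thus reaches the sink after path_len consecutive slots (minimal
    latency), and the cells stay inactive the rest of the time.
    Illustrative sketch of the ReSF idea only."""
    horizon = horizon or 2 * period  # how many slots to plan ahead
    return [(offset + k * period + hop, hop)
            for k in range(horizon // period)
            for hop in range(path_len)]

# 3-hop path with traffic recurring every 20 slots:
# cells at slots 0,1,2 and again at 20,21,22
print(resf_cells(3, 20))
```

Because cells are active only when a recurrent packet is actually expected, nodes can sleep in all other slots, which is why the paper reports large latency gains at a power-consumption overhead of at most 6%.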
IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency, and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT). It first estimates the packet transmission interval of each station, based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm substantially improves the throughput of IEEE 802.11ah dense networks, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, a major step towards applying the RAW mechanism in real-life IoT networks.
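The assignment step described above can be sketched as follows: given each station's estimated next transmission time, only stations expected to transmit during the coming beacon interval are placed into RAW slots, spread round-robin to keep per-slot contention low. This is a simplified illustration of the idea, not the TAROA algorithm itself, and all station names and times are assumed:

```python
def assign_raw_slots(next_tx, beacon_start, beacon_len, n_slots):
    """Assign to RAW slots only the stations whose estimated next
    transmission falls in the coming beacon interval, round-robin in
    order of expected transmission time. Stations not due to transmit
    are left out and may sleep. Simplified sketch of the TAROA
    assignment step."""
    due = [sta for t, sta in sorted(
        (t, s) for s, t in next_tx.items()
        if beacon_start <= t < beacon_start + beacon_len)]
    slots = [[] for _ in range(n_slots)]
    for i, sta in enumerate(due):
        slots[i % n_slots].append(sta)
    return slots

# Estimated next transmission time per station (assumed values);
# the beacon interval covers times [100, 200), so s2 is excluded.
next_tx = {"s1": 105, "s2": 310, "s3": 120, "s4": 190, "s5": 150}
print(assign_raw_slots(next_tx, beacon_start=100, beacon_len=100, n_slots=2))
```

Excluding stations with no pending traffic from the RAW grouping is also what enables the energy savings reported for RAW-based schemes: such stations never contend and can stay in sleep mode for the entire interval.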
IEEE 802.11ah is a new Wi-Fi standard aiming to provide long-range connectivity to densely deployed power-constrained stations. In this abstract, we present an extension to the IEEE 802.11ah restricted access window (RAW) implementation in ns-3 to enable proper state transitions to and from sleep mode. This in turn enables accurate energy consumption modeling of this new technology. A comparison of RAW and CSMA/CA in terms of energy efficiency is provided, showing that RAW is considerably more energy efficient, due to shorter back-off periods.
LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act as transparent bridges towards a common network server. The number of end devices and their throughput requirements have an impact on the performance of the LoRaWAN network. This study investigates the scalability, in terms of the number of end devices per gateway, of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, losses will be up to 32%. In such a case, pure Aloha would have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
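The pure-Aloha baseline referenced above follows from the classic collision model: a packet is lost whenever any other transmission starts within one packet airtime before or after it, so with Poisson arrivals of aggregate load G the success probability is exp(-2G). A small sketch with assumed, illustrative parameters (the airtime and sending interval below are not the paper's measured values) shows how losses approach the 90% ballpark:

```python
import math

def pure_aloha_loss(n_nodes, airtime_s, interval_s):
    """Classic pure-Aloha loss: a packet collides if any other
    transmission starts in its 2*airtime vulnerability window.
    G is the aggregate offered load in packet airtimes per airtime;
    P(success) = exp(-2G), so P(loss) = 1 - exp(-2G)."""
    g = n_nodes * airtime_s / interval_s
    return 1 - math.exp(-2 * g)

# 1000 nodes, ~1.2 s airtime (a long-range LoRa frame) and one packet
# per ~17 minutes per node -- assumed illustrative values.
loss = pure_aloha_loss(1000, airtime_s=1.2, interval_s=1000)
print(f"{loss:.0%}")
```

With these assumed parameters the model gives roughly 91% loss, in line with the ~90% pure-Aloha figure quoted in the abstract; LoRaWAN's lower simulated losses stem from capture effects and the quasi-orthogonality of its spreading factors, which the pure-Aloha model ignores.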
The IoT domain is characterized by many applications that require low-bandwidth communications over a long range, at a low cost and at low power. Low power wide area networks (LPWANs) fulfill these requirements by using sub-GHz radio frequencies (typically 433 or 868 MHz) with typical transmission ranges on the order of 1 to 50 km. As a result, a single base station can cover large areas and can support high numbers of connected devices (>1000 per base station). Notable initiatives in this domain are LoRa, Sigfox and the upcoming IEEE 802.11ah (or “HaLow”) standard. Although these new technologies have the potential to significantly impact many IoT deployments, the current market is very fragmented and many challenges exist related to deployment, scalability, management and coexistence aspects, making adoption of these technologies difficult for many companies. To remedy this, this paper proposes a conceptual framework to improve the performance of LPWAN networks through in-network optimization, cross-technology coexistence and cooperation, and virtualization of management functions. In addition, the paper gives an overview of state-of-the-art solutions and identifies open challenges for each of these aspects.