Apply Deep Learning to Building-Automation IoT Sensors

By: PointGrab

To support such robust features, building-automation infrastructure requires considerably richer information detailing what's happening across the building space. Because current sensing solutions are limited in their ability to address this need, a new generation of smart sensors (see figure below) is required to enhance the accuracy, reliability, flexibility, and granularity of the data they provide.

Data Analytics at the Sensor Node

In the new era of the Internet of Things (IoT), there is an opportunity to introduce a new approach to building automation: one that decentralizes the architecture and pushes the analytics processing to the edge (the sensor unit) instead of the cloud or a central server. Commonly referred to as edge computing, or fog computing, this approach provides real-time intelligence and enhanced control agility while simultaneously offloading the heavy communications traffic.

Continued innovation in computing technology has yielded cheap, energy-efficient embedded processors that can handle such data processing. In principle, this makes it possible to process the data at the sensor level and send only the final summary of the analysis over the network. This approach, if implemented, yields a thinner volume of data and a shorter response time. The major question, however, is what kind of data-analysis approach is best suited to these embedded analytics sensors.

Rule-Based or Data-Driven?

The challenges associated with rich data analysis can be addressed in different ways. Conventional rule-based systems are supposedly easier to analyze. However, this advantage is negated as the system evolves: patches of rules are stacked upon each other to account for the proliferation of new rule exceptions, resulting in a hard-to-decipher tangle of coded rules.
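As an illustration of how such rule patching accumulates (a made-up occupancy example, not code from any real building-automation system), consider what a hand-coded occupancy check might look like after a few deployment cycles:

```python
# Toy illustration of how rule-based logic degrades over time.
# Each "patch" below handles an exception discovered after deployment,
# and the interacting conditions quickly become hard to reason about.

def room_occupied(motion, sound_db, hour, hvac_running):
    if motion:
        return True
    # Patch 1: night-shift workers sit still; fall back to sound level.
    if sound_db > 45 and (hour >= 22 or hour < 6):
        return True
    # Patch 2: HVAC noise trips the sound rule; suppress moderate levels.
    if hvac_running and sound_db <= 55:
        return False
    # Patch 3: daytime meetings with no motion but loud speech.
    if sound_db > 60 and 9 <= hour <= 17:
        return True
    return False
```

Each new exception requires a programmer to revisit the whole tangle, which is exactly the maintenance burden described above.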
Because the hard work of rule creation and modification is managed by human programmers, rule-based systems suffer from compromised performance. They have been shown to be less responsive in adapting to new types of data, such as data sourced from an upgraded sensor or from a new sensor providing previously unutilized data. Rule-based systems can also fail to adapt to a changing domain, e.g., a new furniture layout or new lighting sources.

These deficiencies can be readily overcome with data-driven "machine learning." Once the features have been defined, the rules and/or formulas that use those features are learned automatically by the algorithm. For this to work, the algorithm must have access to a multitude of data samples labeled with the desired outcomes, so that it can properly adapt itself. When the rules are implemented within the sensor, it runs a two-stage, repeating process. In stage one, the human-defined features are extracted from the sensor data. In stage two, the learned rules are applied to perform the task at hand.

The Deep-Learning Approach

Within the machine-learning domain, "deep learning" is emerging as a superior new approach that even relieves engineers of the task of defining features. With deep learning, based on the numerous labeled samples, the algorithm determines for itself an end-to-end computation that extends from the raw sensor data all the way to the final output. The algorithm must discern the correct features and how best to compute them. This ultimately fosters a deeper level of computation that's much more effective than any rule or formula used by traditional machine learning. Typically, a neural network performs this computation, leveraging a complex computational circuit with millions of parameters that the algorithm tunes until the right function is pinpointed. The implications of deep learning for system engineering are profound, and the contrast with rule-based systems is significant.
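The end-to-end idea can be sketched minimally as follows (hypothetical layer sizes and random, untrained weights; a real system would tune these parameters from labeled samples): the network maps raw sensor inputs directly to an output with no hand-defined feature stage, and the architecture stays fixed while training adjusts the weights.

```python
import math
import random

random.seed(0)

# Fixed, hypothetical architecture: 4 raw inputs -> 8 hidden units -> 1 output.
# Training would tune the weights and biases; the structure stays fixed.
SIZES = [4, 8, 1]
weights = [[[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
           for n_in, n_out in zip(SIZES[:-1], SIZES[1:])]
biases = [[random.uniform(-1, 1) for _ in range(n_out)] for n_out in SIZES[1:]]

def forward(x):
    """One inference pass: raw sensor inputs straight to the final output."""
    for layer_w, layer_b in zip(weights, biases):
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(layer_w, layer_b)]
    return x[0]  # e.g., an occupancy score in (-1, 1)

score = forward([0.2, 0.9, 0.1, 0.4])
```

Even for a much larger network, this forward pass is a small, fixed amount of computation, which is why inference at the sensor can remain fast while training happens offline.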
In the rule-based world, and even with traditional machine learning, the system engineer needs extensive information about the domain in order to build a good system. In the deep-learning world, this is no longer necessary. With the arrival of the IoT and the proliferation of data across the network, deep learning allows for faster iteration on new data sources and can use them without requiring intimate domain knowledge.

When applying a deep-learning approach, the engineer's main focus is to define the neural network's core architecture. The network must be large enough to have the capacity to optimize to a useful computation, yet simple enough that the available processing resources aren't outstripped. A neural network can be tailored to fit any given time budget, ensuring maximum exploitation of the available processing power. If the computational budget rises and there's more time to run the calculation, a larger network can be assessed using the new budget.

Once the architecture is defined, it stays fixed while the parameters of the neural network are tuned. This tuning process can take days or even weeks, even on the highest-performance machines. However, the computation itself, extending from raw inputs to output, takes a fraction of a second, and it remains exactly the same throughout the process.

For more information about building automation, visit: http://www.pointgrab.com.

End