Plug & Play Best Practices for Wireless IoT Deployments
Ron Bartels
There is a vast number of diverse Internet of Things (IoT) platforms on the market, but the significant majority cannot be classed as seamlessly integrated solutions. At the hardware device level, there is a multitude of sensors with different physical form factors and different requirements for connecting to embedded electronics for processing, communications, and storage. This is amplified by gateways that all have different proprietary mechanisms at the edge to enable either fog or cloud services. The result is that many IoT deployments are technically difficult, labor-intensive, and error-prone. And yes, standards are lacking.
Apple, as a product manufacturer, is renowned for product development that is plug-and-play. A user switches on an Apple device and starts using it within a few minutes. There is no detailed technical configuration required. There are very few IoT solutions in the current market that are truly Apple-like and plug-and-play.
Few IoT devices allow a user merely to switch them on and go—especially the industrial ones. Additionally, some don't even fulfill the requirements of the functional specifications for which they have been procured. A best practice IoT solution should be in the Apple plug-and-play category.
Deployment is typically a matter of installing battery-powered devices and routing the resulting information through a cloud portal toward a rendering UI. As an example, in a data center, it takes a few minutes to attach wireless magnetic sensors to a rack. The rest of the provisioning is automatic, including the calibration cycle. Thus, if the sensors were monitoring the temperatures within hot and cold aisle containment areas, this information would be available to the user within the first few poll cycles. A technician shouldn't be required to be on site with a laptop to configure devices and gateways.
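The zero-touch flow described above can be sketched as follows. This is a minimal illustration, not any vendor's actual protocol: the class names, the registry states, and the averaging-based calibration are all assumptions made for the example.

```python
# Sketch of zero-touch provisioning: a sensor powers on, registers
# itself with the gateway, completes a calibration cycle, and only
# then starts reporting. All names and states are illustrative.
from dataclasses import dataclass, field

@dataclass
class Gateway:
    registry: dict = field(default_factory=dict)  # sensor_id -> state

    def register(self, sensor_id: str) -> None:
        # No technician input: the gateway accepts the sensor and
        # immediately schedules its calibration cycle.
        self.registry[sensor_id] = "calibrating"

    def calibration_complete(self, sensor_id: str) -> None:
        self.registry[sensor_id] = "active"

@dataclass
class Sensor:
    sensor_id: str

    def power_on(self, gateway: Gateway, warmup: list) -> float:
        gateway.register(self.sensor_id)
        # Calibration here is just averaging a few warm-up samples to
        # derive a baseline; real devices apply vendor-specific curves.
        baseline = sum(warmup) / len(warmup)
        gateway.calibration_complete(self.sensor_id)
        return baseline

gw = Gateway()
baseline = Sensor("rack-12-hot-aisle").power_on(gw, [21.9, 22.1, 22.0])
print(gw.registry)  # sensor is active with no manual configuration
```

The point of the sketch is the state transition: the sensor is usable within a few poll cycles of being switched on, with calibration folded into provisioning rather than left as a manual step.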
A best practice deployment wouldn't be a DIY-type installation. Although achievable, generic deployments using Raspberry Pi-type gateways aren't suitable due to the technical complexity of installation. The gateway shouldn't rely on clumsy wired connections unless they are required for technically valid reasons. Non-intrusive, wireless-based connections facilitate easier deployments and result in a low-maintenance solution. The sensors should be engineered for low power and low bit rate to ensure the device lifetime is acceptable. This also means the power management programming of the device is a high priority.
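To see why power management programming matters so much, a simple two-state duty-cycle budget is enough. The current draws and cell capacity below are illustrative numbers, not figures from any particular device:

```python
# A rough duty-cycle battery budget: the device sleeps almost all the
# time and wakes briefly each hour to sample and transmit at a low
# bit rate. All electrical values are illustrative assumptions.
def battery_life_days(capacity_mah: float,
                      sleep_ua: float,
                      active_ma: float,
                      active_s_per_hour: float) -> float:
    """Estimate battery life from a simple two-state current model."""
    active_fraction = active_s_per_hour / 3600.0
    avg_ma = (active_ma * active_fraction
              + (sleep_ua / 1000.0) * (1.0 - active_fraction))
    return capacity_mah / avg_ma / 24.0

# Example: 2400 mAh cell, 5 uA sleep, 20 mA active for 6 s each hour.
print(round(battery_life_days(2400, 5, 20, 6)), "days")
```

Even with these rough numbers, the model shows the lifetime is dominated by how long the radio stays awake, which is exactly why low bit rate and aggressive sleep scheduling are design priorities.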
A best practice system is designed to be reliable with a higher-than-normal data throughput rate when compared to other sensors. Many systems and solutions are designed to be metering systems within which data is exchanged intermittently. This can be as infrequent as once per month. Clearly, this type of metering and reporting doesn't provide enough analytics for making near real-time decisions.
A best practice system is designed to update metrics every couple of minutes with great reliability and accuracy. Clearly, business decisions in the era of the Fourth Industrial Revolution depend on more than the occasional bit of data being available! Rapid updates distinguish this type of solution from systems based on a legacy metering methodology. Better throughput for a device also makes certification a breeze; as an example, a best practice system should seamlessly provide sensor certification from a regulator as a value-added service. The reliability of such a best practice system must be high to ensure data is never lost.
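The "never lost" requirement usually comes down to store-and-forward delivery: readings are buffered locally and only discarded once the uplink acknowledges them. The sketch below assumes a hypothetical `send` callable standing in for the real transport:

```python
# Sketch of store-and-forward reporting so data is never lost:
# readings are buffered locally and flushed, oldest first, once the
# uplink acknowledges them. `send` is a stand-in for the transport.
class Reporter:
    def __init__(self, send):
        self.send = send   # callable returning True on acknowledgement
        self.buffer = []

    def report(self, reading) -> int:
        self.buffer.append(reading)
        # Try to flush everything pending, oldest first.
        while self.buffer and self.send(self.buffer[0]):
            self.buffer.pop(0)
        return len(self.buffer)  # readings still awaiting delivery

delivered = []
link_up = [False]

def send(reading):
    if link_up[0]:
        delivered.append(reading)
        return True
    return False

r = Reporter(send)
r.report({"temp": 22.4})   # uplink down: reading buffered, not lost
link_up[0] = True
r.report({"temp": 22.6})   # uplink restored: both readings flushed
```

With a couple-of-minutes reporting interval, a modest buffer bridges short outages without the metering-style gaps described above.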
The networking component of a best practice system is based on intelligent networks. A key differentiator of a best practice network lies in its self-healing abilities. This feature has often been associated with high-end cellular and microwave radio systems, but implementing it at a low bit rate is required to ensure that deployments don't need continuous troubleshooting. The system should use encrypted communications and be industrially hardened. Inherently secure access and communications should be programmed into the devices to ensure the overall solution is as hack-proof as practically possible.
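Self-healing, at its simplest, means a node keeps alternatives ranked and re-routes the moment a link drops, with no manual intervention. This toy sketch uses an invented link-quality score rather than any specific mesh protocol:

```python
# Toy self-healing uplink selection: the node ranks reachable uplinks
# by a quality score and transparently fails over when one drops.
# The names and scoring rule are illustrative, not a real protocol.
class MeshNode:
    def __init__(self, uplinks: dict):
        # uplink name -> link quality score (higher is better)
        self.uplinks = dict(uplinks)

    def best_uplink(self):
        if not self.uplinks:
            return None
        return max(self.uplinks, key=self.uplinks.get)

    def link_down(self, name: str):
        # Self-healing: drop the failed link and re-route immediately,
        # with no troubleshooting visit required.
        self.uplinks.pop(name, None)
        return self.best_uplink()

node = MeshNode({"gw-east": 0.9, "gw-west": 0.7, "repeater-3": 0.5})
print(node.best_uplink())         # gw-east
print(node.link_down("gw-east"))  # fails over to gw-west
```

A production mesh would also re-score links continuously and encrypt every hop, but the failover decision itself is this small.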
Besides the automatic provisioning ability, a best practice system is able to scale to hundreds of devices per site, with the gateways providing connectivity to the chosen cloud platform. It should also manage devices and scale device counts automatically as new gateways and sensors are added. This is very similar to how mesh WiFi systems scale. For example, allowing only 12 sensors to be associated with a gateway would be unacceptable; a best practice system should support at least 128 sensors per gateway.
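Capacity-aware auto-association can be sketched as a least-loaded assignment with a per-gateway cap. The 128-sensor cap below mirrors the figure in the text; everything else (class and gateway names) is made up for the example:

```python
# Sketch of capacity-aware auto-association: new sensors join the
# least-loaded gateway, and adding a gateway transparently adds
# capacity, mesh-WiFi style. Names are illustrative.
class Fleet:
    CAP = 128  # sensors per gateway; a cap of 12 would be far too low

    def __init__(self, gateways: list):
        self.load = {gw: 0 for gw in gateways}

    def add_gateway(self, name: str) -> None:
        self.load[name] = 0

    def associate(self, sensor_id: str) -> str:
        gw = min(self.load, key=self.load.get)
        if self.load[gw] >= self.CAP:
            raise RuntimeError("fleet at capacity; add a gateway")
        self.load[gw] += 1
        return gw

fleet = Fleet(["gw-1"])
for i in range(128):
    fleet.associate(f"s{i}")   # fills gw-1 to its cap
fleet.add_gateway("gw-2")      # scaling out adds capacity in place
print(fleet.associate("s128")) # the new sensor lands on gw-2
```

The key property is that growing the site never requires reconfiguring existing sensors; new capacity is absorbed by the association logic.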
The portal should provide not only analytics related to the metrics reported by the sensors; it should also reveal the health of every device across the system. This will highlight depleting batteries and error-producing devices before they cause cascading downstream problems in event processing, which should be hardened against errors, and in data management and analysis, which should handle asynchronously or erroneously delivered data without stalling the whole system.
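A fleet-health pass of the kind described can be a single sweep over device telemetry. The thresholds and telemetry fields below are illustrative assumptions, not a portal's actual schema:

```python
# Sketch of a fleet-health pass over device telemetry: flag depleting
# batteries and error-prone devices before they cascade into the
# downstream event pipeline. Thresholds are illustrative.
def health_report(devices: list,
                  min_battery_pct: float = 20.0,
                  max_error_rate: float = 0.05) -> dict:
    report = {"low_battery": [], "high_errors": []}
    for d in devices:
        if d["battery_pct"] < min_battery_pct:
            report["low_battery"].append(d["id"])
        # Guard against division by zero for devices yet to report.
        if d["errors"] / max(d["messages"], 1) > max_error_rate:
            report["high_errors"].append(d["id"])
    return report

devices = [
    {"id": "t-01", "battery_pct": 88, "messages": 1000, "errors": 2},
    {"id": "t-02", "battery_pct": 12, "messages": 950, "errors": 1},
    {"id": "t-03", "battery_pct": 64, "messages": 400, "errors": 60},
]
print(health_report(devices))
# {'low_battery': ['t-02'], 'high_errors': ['t-03']}
```

Surfacing t-02 and t-03 in the portal before they fail is exactly the early warning the paragraph above calls for.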
Basically, IoT products need to be designed as closely as possible to the methodology used by Apple.