Intelligent IoT and Fog Computing Trends
Frank Lee
PaaS providers offer ready-to-use platform services such as security, data storage, device management and big data analysis. SaaS providers deliver application-level services such as billing, software management and visualization tools. Google Cloud IoT Core, Microsoft Azure IoT and AWS IoT are examples of PaaS/SaaS platforms.
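As a sketch of what "sending data to a platform" looks like in practice, here is a minimal example of a device publishing telemetry over MQTT with the paho-mqtt library. The broker address, topic and payload are hypothetical placeholders; real platforms layer TLS certificates and their own topic conventions on top.

```python
import json
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x API

BROKER_HOST = "broker.example.com"     # hypothetical endpoint
TOPIC = "devices/sensor-42/telemetry"  # hypothetical topic scheme

client = mqtt.Client("sensor-42")
client.connect(BROKER_HOST, port=1883)
client.loop_start()  # handle network traffic in a background thread

while True:
    # Publish one telemetry reading per minute with QoS 1
    # (at-least-once delivery).
    reading = {"temperature_c": 21.5, "timestamp": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(60)
```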
Connected devices send data to, and receive instructions from, a nearby node, usually installed on premises. The node could be a gateway device, such as a switch or router, that has extra processing and storage capabilities. It can receive, process and react in real time to the incoming data.
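A minimal sketch of that pattern, with stubbed-in sensor, actuator and uplink functions (all hypothetical; a real gateway would speak Modbus, CAN or BLE locally and MQTT or HTTPS upstream), might look like this:

```python
import random
import statistics
import time

def read_sensor():
    """Stubbed sensor read; a real gateway would poll a fieldbus."""
    return 20.0 + random.random() * 70.0

def actuate_cooling(on):
    """Stubbed actuator; a real gateway would drive a relay or PLC."""
    print("cooling", "ON" if on else "OFF")

def forward_to_cloud(summary):
    """Stubbed uplink; a real gateway would publish over MQTT/HTTPS."""
    print("uplink:", summary)

THRESHOLD_C = 80.0
window = []
while True:
    temp = read_sensor()
    actuate_cooling(temp > THRESHOLD_C)  # local, low-latency reaction
    window.append(temp)
    if len(window) >= 60:                # one summary per minute
        forward_to_cloud({"mean_c": round(statistics.mean(window), 1),
                          "max_c": round(max(window), 1)})
        window.clear()
    time.sleep(1.0)
```

The point of the design is that the time-critical reaction never leaves the gateway, while the cloud only sees a compact summary.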
The OpenFog Consortium, founded in 2015 and backed by companies such as Cisco, ARM, Dell and Microsoft, drives standards and best practices in fog computing system design (Figure 1). Its goal is to facilitate the adoption of cross-industry standards and frameworks.
Figure 1. OpenFog Key Pillars (from OpenFog Consortium)

However, as the IoT market matures, IoT will become the backbone of infrastructures that support important activities in people's daily lives. The status quo is not enough; reliability and real-time response will be essential.
An Automated Driving System (ADS) is one such example. An ADS employs multiple advanced technologies: multi-modal sensors, computer vision, artificial intelligence and machine learning, among others. The system performs data fusion, image analysis, mapping and prediction to determine the best action and the controls for the drive-train.
All of this needs to happen reliably, in milliseconds, without interruption. The data bandwidth and latency requirements mandate a powerful processing node in the car, with built-in redundancy.
To achieve that, the edge node performs real-time AI inference using data from a large number of sensors. It then sends commands to actuators in machines, drones or robots to carry out actions. When operating without human supervision, the AI engine also collects the real-time results to evaluate the next actions to take.
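A highly simplified sketch of that sense-infer-actuate loop, with a hard per-cycle deadline, could look like the following. The read_sensors() and infer() stubs and the 10 ms budget are illustrative stand-ins, not the interfaces of any real ADS:

```python
import time

DEADLINE_S = 0.010  # 10 ms budget per control cycle (illustrative)

def read_sensors():
    """Stubbed multi-modal capture (camera, lidar, radar)."""
    return {"camera": None, "lidar": None, "radar": None}

def infer(frame):
    """Stubbed fusion + model inference producing control outputs."""
    return {"steer": 0.0, "throttle": 0.1}

def send_actuator_commands(cmd):
    """Stubbed drive-train write (e.g., over a CAN bus)."""
    pass

def fallback_safe_state():
    """Degrade gracefully rather than act on stale data."""
    send_actuator_commands({"steer": 0.0, "throttle": 0.0})

while True:
    start = time.monotonic()
    frame = read_sensors()
    cmd = infer(frame)
    elapsed = time.monotonic() - start
    if elapsed <= DEADLINE_S:
        send_actuator_commands(cmd)
    else:
        fallback_safe_state()  # missed the deadline this cycle
```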
We need a hybrid fog/cloud model, where edge processing nodes handle time-sensitive computer vision and AI inference tasks, while cloud nodes handle non-real-time or soft real-time functions like software updates, contextual information collection and long-term big data analysis.
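One way to picture the hybrid split is a simple router that keeps latency-critical tasks on the edge node and queues everything else for the cloud. The task names and the 100 ms cutoff below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_budget_ms: float

EDGE_CUTOFF_MS = 100.0  # illustrative threshold

def route(task: Task) -> str:
    """Keep tight-deadline work local; defer the rest to the cloud."""
    return "edge" if task.latency_budget_ms < EDGE_CUTOFF_MS else "cloud"

tasks = [
    Task("obstacle_detection", 20),        # hard real time -> edge
    Task("software_update", 60_000),       # soft real time -> cloud
    Task("trend_analytics", 3_600_000),    # batch -> cloud
]
for t in tasks:
    print(f"{t.name} -> {route(t)}")
```

In practice the routing decision would also weigh bandwidth, privacy and cost, but a latency budget is usually the first-order criterion.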
Such AI workloads usually demand a high volume of data movement and a large number of compute units. Machine learning and AI researchers have therefore turned to Graphics Processing Units (GPUs), which were built primarily for gaming platforms.
Since 2007, Nvidia has developed its Compute Unified Device Architecture (CUDA) technology to apply the power of its graphics chips to compute problems beyond 3D shader processing. A GPU by design has high data throughput and a large number of processing cores, which makes it well suited to compute-intensive problems like linear algebra, signal processing and machine learning.
The CUDA programming API allows research scientists in many domains, including AI and machine learning, to more easily program GPUs and leverage their power. The availability and continuous improvement of GPU systems in the consumer market have made it possible for AI researchers to train and validate designs within a reasonable time and budget. Fast forward to the present: Nvidia's CUDA platform more or less dominates the machine learning and AI market.
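As a concrete illustration, here is a minimal sketch of offloading a large matrix multiply to a CUDA GPU through PyTorch, one of many CUDA-backed libraries (the library choice and matrix sizes are illustrative):

```python
import torch

# Use the CUDA device when available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices; dense linear algebra like this is exactly
# the high-throughput, many-core workload GPUs were built for.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b  # dispatched as a massively parallel kernel on the GPU
print(c.shape, "computed on", device)
```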
Figure 2. Nvidia Jetson TX2

Intel is also actively investing in similar embedded AI technologies, as shown by its recent acquisition of the computer vision chip company Movidius. Qualcomm, MediaTek, Huawei, AMD and several startups are also eyeing the rapidly growing market, building neural network capabilities into their future Systems on Chip (SoC).
These technologies will find their way into the market over the next few years. Chip vendors are also working closely with software developers to optimize implementations on their processors.
Furthermore, embedded software developers are looking to optimize neural network architectures to strike the right balance between complexity and accuracy. These requirements usually differ widely from one application to another.
One example is face recognition, where the out-of-the-box accuracy and real-time requirements of an access control system are very different from those of a photo tagging application. The difference can translate into orders of magnitude in processing requirements, and thus in system cost.
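To make the trade-off concrete, here is an illustrative sketch of selecting the cheapest model that meets an application's accuracy floor and latency ceiling. The model names and numbers are made up for illustration, not measurements:

```python
candidates = [
    # (name, accuracy, latency in ms on a hypothetical edge SoC)
    ("tiny_net",   0.87,   8),
    ("mobile_net", 0.93,  25),
    ("large_net",  0.98, 220),
]

def pick_model(min_accuracy, max_latency_ms):
    """Return the fastest model meeting both constraints, else None."""
    feasible = [m for m in candidates
                if m[1] >= min_accuracy and m[2] <= max_latency_ms]
    return min(feasible, key=lambda m: m[2]) if feasible else None

# Access control: high accuracy AND a strict real-time budget.
print(pick_model(min_accuracy=0.97, max_latency_ms=50))   # None: needs bigger hardware
# Photo tagging: offline, so a looser latency budget suffices.
print(pick_model(min_accuracy=0.90, max_latency_ms=1000)) # mobile_net
```

The access control case fails outright on this hypothetical chip, which is exactly how a difference in requirements becomes a difference in system cost.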
In addition, engineers need to employ domain-specific algorithms and neural network designs to deliver products on budget and on schedule while meeting usage requirements. We will explore some of these application domains in future posts.