Scaling the IoT Data and AI Summit, Part Two
David Dewhirst
Ever spent time with people who were so smart that just being around them made you feel like a genius yourself?
That's a little of what it was like to be at the IoT Data and AI Summit that was held November 28 and 29, 2017, in Palo Alto, California. Computer scientists showed us how they compute, business executives from American Express and other enterprises showed us how they business, and those of us who were at least smart enough to listen took endless pages of notes and ended up with heads stuffed with something other than sawdust.
This is the second part of my recap of this informative, insightful event. Part one of Scaling the IoT Data and AI Summit is here, if you're interested in cybersecurity, optimizing your machine learning company, or AI consciousness -- and honestly, how could you not be? But assuming you've already slaked your Part One thirst, here are more of the fantastic presentations found at the Summit.
Bruce Sinclair's presentation on digital twins was fantastic. The concept is often talked about, and I've written before on how now is the prime time to own your digital twin vertical. While my piece discusses why the opportunity exists, this presentation took much more of a deep dive into the mechanics of the digital twin.
Mr. Sinclair led off the presentation by offering us the example of Tesla vehicles. Teslas are, in very real ways, software-defined products (SDPs), and aside from their electric motors are very different from the hardware-defined vehicles we all know and alternately love and hate. A 2015 problem with sparking in the charging mechanism, for example, forced a vehicle recall -- and the issue was solved entirely by an over-the-air (OTA) update from Tesla, without the need for any trips to a dealership or service center.
Image Credit: Three Twelve Creative

The OTA fix was in stark contrast to General Motors' solution to a similar problem, which followed the traditional recall steps of mailing a notice and taking the vehicle to a dealership for repair.
A software-defined product, in reality, is an abstraction and simulation of hardware, and comprises both a digital twin and an application with which users can interact; the hardware which it's simulating can be at the part, product, system, or environment level. And this is where the magic happens, except it's not magic -- it's math.
The SDP represents the hardware mathematically, and once it's so represented, it's malleable. That is, analytics can be applied, algorithms optimized, and value enhanced, which is the real promise of the Internet of Things. And since it's software, it can also be interfaced with.
Mr. Sinclair then asked us to consider another example: a spring, for which we're going to build a digital twin. Every spring has characteristics, such as its length and spring constant, that drive the performance of the spring. These characteristics, or metadata, are used to build our digital twin, and the digital twin can then be used to predict the spring's force for any given compression.
The trouble with our digital twin at this stage is that the real world is rather messy, and predicted outcomes based solely on metadata will only be somewhat accurate to the real-world outcomes of our physical spring.
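To make that concrete, here's a minimal sketch (mine, not Mr. Sinclair's) of what a metadata-only digital twin of our spring might look like in Python, assuming the idealized Hooke's law relationship F = kx; all of the names are illustrative:

```python
# A metadata-only digital twin of a spring: predictions come purely from the
# spring's characteristics (its metadata), not from any real-world measurements.
from dataclasses import dataclass


@dataclass
class SpringTwin:
    rest_length_m: float            # unloaded length of the spring, in metres
    spring_constant_n_per_m: float  # k in Hooke's law, in newtons per metre

    def predicted_force(self, compression_m: float) -> float:
        """Ideal force for a given compression, per Hooke's law F = k * x."""
        return self.spring_constant_n_per_m * compression_m


twin = SpringTwin(rest_length_m=0.10, spring_constant_n_per_m=250.0)
print(twin.predicted_force(0.02))  # ideal prediction: 5.0 newtons
```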
Enter the IoT and its plethora of sensors -- including, fortunately for us and our example, sensors that can be attached to our spring to gather raw, real-world point data of the spring in action. The raw point data gathered by IoT sensors is time-based, and the data gathered is likely to be... well, Big Data.
One sensor, collecting and reporting data once a second, will generate 86,400 database rows a day. Multiply that by the thousands of sensors in just one car, say, with many of those sensors collecting data hundreds or thousands of times a second, and you begin to get a sense of how Big all of this data is, and how far beyond the capacity of a human brain it would be to make sense of it all.
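To put rough numbers on that -- my own back-of-the-envelope arithmetic, with an entirely made-up sensor count and sample rate for the car:

```python
# Back-of-the-envelope data volume: database rows generated per day.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

rows_one_sensor_1hz = 1 * SECONDS_PER_DAY     # one sensor, once a second
rows_one_car = 3_000 * 100 * SECONDS_PER_DAY  # 3,000 sensors at 100 Hz (illustrative)

print(f"{rows_one_sensor_1hz:,} rows/day")  # 86,400 rows/day
print(f"{rows_one_car:,} rows/day")         # 25,920,000,000 rows/day
```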
This is where analytics steps in. Remember that since our digital twin is software-defined, it's malleable -- and that means that we can take our spring's sensor data, plot the points on our graph of compression versus force, and automatically fit a curve through them that better models the real world.
Our digital twin is now smarter than it was, and more closely resembles the real-world spring it's modeling. As we continue to gather sensor data and apply it back to our model, it continuously improves itself and ultimately evolves to the point at which we can know -- with certainty! -- what our real-world spring will do in any situation simply by observing our model in that situation.
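Here's a minimal sketch of that refinement step, assuming a least-squares polynomial fit over noisy (compression, force) samples; the talk didn't prescribe any particular fitting technique, so treat the details as illustrative:

```python
# Refining the spring twin with sensor data: fit a curve through observed
# (compression, force) points so the model tracks the real spring, not the ideal one.
import numpy as np

# Stand-in sensor readings: compression in metres, measured force in newtons.
# Real springs are noisy and slightly nonlinear, so we simulate both here.
compression = np.linspace(0.0, 0.05, 50)
rng = np.random.default_rng(0)
measured_force = 250.0 * compression + 800.0 * compression**2 + rng.normal(0.0, 0.2, 50)

# Least-squares fit of a quadratic through the observations.
coeffs = np.polyfit(compression, measured_force, deg=2)
refined_twin = np.poly1d(coeffs)

print(refined_twin(0.02))  # data-driven prediction at 2 cm compression
print(250.0 * 0.02)        # ideal Hooke's-law prediction, for comparison
```

Each new batch of sensor data simply becomes more points to fit through, which is why the model keeps improving the longer it runs.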
To put this back into our car example, the digital twin we've just created for a spring is at the part level; we can put digital twins of all the springs into a larger digital twin of the suspension system, which is the product level; the suspension system model is part of an even larger set of models which together make up a car, which is the system level; and we can even take it one step further, up to the environment level, by using all of our car models as part of the model of a Smart City, for example.
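A toy illustration of that nesting, with class names of my own invention rather than anything from the talk:

```python
# Composing digital twins across the four levels Mr. Sinclair described:
# part (spring) -> product (suspension) -> system (car) -> environment (smart city).
from dataclasses import dataclass, field
from typing import List


@dataclass
class SpringTwin:      # part level
    spring_constant: float


@dataclass
class SuspensionTwin:  # product level
    springs: List[SpringTwin] = field(default_factory=list)


@dataclass
class CarTwin:         # system level
    suspension: SuspensionTwin = field(default_factory=SuspensionTwin)


@dataclass
class SmartCityTwin:   # environment level
    cars: List[CarTwin] = field(default_factory=list)


city = SmartCityTwin(cars=[CarTwin(SuspensionTwin([SpringTwin(250.0)] * 4))])
print(len(city.cars[0].suspension.springs))  # 4 spring twins in one car's suspension twin
```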
Mr. Sinclair then offered another real-world example of the power of models built on digital twins by pointing out how Tesla was able to achieve progress in self-driving cars much more rapidly than Google, which has been working on such a car for far longer than Tesla has been.
Google, according to Mr. Sinclair, has proceeded from the ground up: They made a part, tested it to see if it worked, made another part, tested that, and so on.
Tesla, on the other hand, started with the digital twin of the car, and then used real-world data from their car to make the system and all of its attendant digital twins smarter, and thus was able to rapidly make the driving model smarter in turn.
Mr. Sinclair closed with an example of an IoT clothes dryer. This example allowed him to make a couple of final points, the first of which was that he has a personal dislike for labeling products as "smart" or "connected," since neither of those labels conveys the value proposition of IoT devices strongly enough. The discussion of value is central to the IoT ecosystem, and it led to Mr. Sinclair offering a final observation:
The incremental value of IoT must be greater than its incremental cost. It's a fundamental tenet of business, but in Mr. Sinclair's opinion the violation of that tenet is why most B2C IoT is failing -- it simply doesn't deliver enough value to consumers when they measure it against its costs.
Eyal Amir, CEO of Parknav, presented on ways to connect AI and IoT both vertically and horizontally.
To understand what that means, you first need to know what Parknav does, which is simple in concept: If you find yourself driving in one of the 240+ cities in which they operate, their app helps guide you to an available parking spot.
Useful, right?
To accomplish that, Parknav requires access to enough real-time data for its algorithms to identify open parking spots. It might seem that coming by such data would be easy, given the plethora of sensors that populate most new cars -- LIDAR, cameras, and a variety of others -- but it turns out that it's not easy at all.
The trouble lies with the fact that the business of Big Data is in many ways still in its infancy, and isn't really understood very well by many of the major players -- including car companies, who are collecting zettabytes of real-time data but by and large refuse to share it with or sell it to companies like Parknav.
Why won't the car companies share or sell their data? Because, quite simply, they don't know the value of their data -- and because they don't know the value, they're so afraid of underpricing it against future potential gains (after, say, they figure out how to give it a proper valuation) that they end up just sitting on it.
The good news is that smaller companies in a vertical -- think Garmin, instead of General Motors -- are more willing to assign their data a value and to sell access to it, which is great for companies like Parknav. What's not great? Such data is very often incomplete or otherwise of lower quality, which in turn drives the need for applications consuming this data -- like Parknav -- to have AI and machine-learning algorithms in place so that any holes in the data can be filled.
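One simple way to fill such holes -- my illustration of the idea, not Parknav's actual pipeline -- is to impute missing readings from the observations you do have before they reach the model:

```python
# Filling holes in an incomplete sensor feed before it reaches the parking model.
# Purely illustrative; real gap-filling would use far richer machine-learning models.

def fill_gaps(readings):
    """Replace missing values (None) with the mean of the observed ones."""
    observed = [r for r in readings if r is not None]
    fallback = sum(observed) / len(observed) if observed else 0.0
    return [r if r is not None else fallback for r in readings]


# Occupancy estimates for a block of parking spots, with some readings missing.
feed = [0.8, None, 0.6, None, 0.7]
print(fill_gaps(feed))  # [0.8, 0.7, 0.6, 0.7, 0.7]
```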
And there's another problem with moving down the data food chain from large OEMs to smaller companies: Since they are still in the same vertical, the number of even smaller companies from whom to get useful data is finite.
The solution, according to Mr. Amir, is to think horizontally and source your data from companies outside of the vertical in which you're problem-solving. This will provide you with a wide range of data from a practically unlimited number of companies; and while it's true that each source might not have every piece of data needed, with the numbers of data providers afforded by the horizontal supply you could keep augmenting until you had all of the data you required.
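As a sketch of what that augmentation might look like in practice -- hypothetical field names and providers, purely to show the merge strategy:

```python
# Augmenting a record from several partial, horizontal data sources until it's complete.

def merge_sources(*sources):
    """Later sources only fill in fields the merged record is still missing."""
    merged = {}
    for source in sources:
        for key, value in source.items():
            if value is not None and merged.get(key) is None:
                merged[key] = value
    return merged


nav_provider = {"lat": 37.44, "lon": -122.16, "speed_kph": None}
weather_provider = {"precipitation_mm": 0.0}
parking_meter_feed = {"speed_kph": 12.0, "meter_occupied": True}

print(merge_sources(nav_provider, weather_provider, parking_meter_feed))
# {'lat': 37.44, 'lon': -122.16, 'speed_kph': 12.0, 'precipitation_mm': 0.0, 'meter_occupied': True}
```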
Much more was on offer at the IoT Data and AI Summit than these two blog posts could capture -- at least until I finish my digital twin, which will use machine learning to fill in all of the gaps I have in my data from scheduling conflicts and from simply not being able to keep up with everything that was going on.
A digital twin that will fill in missing information from meetings, seminars, and notes, and hopefully do all of my writing for me, too.
Hmm. Any VCs out there?