Reimagining Product Design in IoT, Learning by Playing Video Games, and Mathematical Understanding of Neural Networks
Yitaek Hwang
It’s no secret that smart home devices are struggling (look no further than the Revolv hub debacle). A month ago, Stacey Higginbotham published her thoughts on how Airbnb could help the smart home industry by introducing new IoT devices inside rental homes. This week, Alex Schleifer, the VP of Design at Airbnb, added his thoughts on First Round Review about the role, or rather the absence, of design: perhaps IoT is suffering from a lack of quality designers.
Takeaway: Companies so far have largely focused on the things aspect of IoT. It’s time to remind ourselves that IoT becomes useful when those things start interacting with people. Sure, data drives IoT, but in the end, how that data is processed and visualized is what matters. Quality design should be embedded early in the development process to redefine the landscape and the outlook of IoT.
What if I told you that self-driving cars might train their algorithms by playing Grand Theft Auto? ML and big data techniques, as the names suggest, need huge amounts of data to train on. Until now, we were teaching our robots with 3D simulations painstakingly generated by researchers. Now, researchers from Intel Labs and Darmstadt University in Germany have found a way to extract near-real-life imagery and training data from off-the-shelf video games such as Grand Theft Auto.
Takeaway: The research shows that synthetic data may even be superior to real-life data for training AI systems. This opens up new possibilities for tackling problems that previously went unaddressed because quality data was too hard to collect. I see huge potential in the medical field, where medical device and pharmaceutical companies can now use simulated data to speed up the development process.
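To make the idea concrete, here is a minimal sketch of the synthetic-data workflow: train a model entirely on cheap, abundant simulated samples, then evaluate it on scarcer, noisier "real" samples. The toy data generator and logistic-regression model below are illustrative stand-ins, not the researchers' actual setup.

```python
# Sketch of the synthetic-data idea: train on abundant simulated samples,
# then check how well the model transfers to "real-world-like" samples.
# The data and model here are toy stand-ins, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, noise):
    """Generate labeled 2-D points from a known rule, plus sensor noise."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # perfect labels, free from the simulator
    X = X + rng.normal(scale=noise, size=X.shape)
    return X, y

# "Synthetic" training set: cheap, abundant, perfectly labeled.
X_syn, y_syn = simulate(10_000, noise=0.1)
# "Real" test set: scarcer and noisier, mimicking field data.
X_real, y_real = simulate(500, noise=0.3)

model = LogisticRegression().fit(X_syn, y_syn)
print(f"accuracy on real-world-like data: {model.score(X_real, y_real):.2f}")
```

The appeal is that the simulator hands you perfect labels for free; the open question, which this research addresses for game imagery, is whether models trained that way hold up on real-world inputs.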
“For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order.”
- Henry Lin at Harvard University and Max Tegmark at MIT.
Significant advances in artificial intelligence in recent years can be attributed to deep neural networks. Despite their success, no one really knew why neural networks were so effective at solving complex problems. Lin and Tegmark now explain that the laws of physics have a special property: neural networks don’t need to learn the infinite number of possible mathematical functions, only a tiny subset of them. This new insight will allow mathematicians to key in on specific functions to improve the performance of deep neural networks (MIT).
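To give a feel for what "polynomial Hamiltonians of low order" means, here is a textbook illustration (not one drawn from the paper itself): the simple harmonic oscillator, whose Hamiltonian is just a degree-two polynomial in position q and momentum p.

```latex
% Simple harmonic oscillator: a Hamiltonian that is a low-order
% (degree-2) polynomial in position q and momentum p, the kind of
% simple functional form Lin and Tegmark argue physics keeps producing.
H(q, p) = \frac{p^{2}}{2m} + \frac{1}{2} m \omega^{2} q^{2}
```

Because the functions physics hands us are built from a handful of low-degree terms like these, a neural network has a far smaller space of shapes to approximate than the set of all possible functions would suggest.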