Thoughts From the First MIT AI Venture Studio Class of the Semester
In the first class of the MIT AI Venture Studio course this semester, we got some pretty interesting insights into where artificial intelligence is right now, and students heard from industry leaders about what to prioritize as we move into the next phase of this quickly advancing field.
Ramesh Raskar, who leads the class, gave us some insights into what’s happening with AI, talking about a sea change toward models that are going to be more powerful than what we’ve seen to date.
He delineated the very different use cases for large language models as opposed to other forms of generative AI.
But perhaps even more relevant, he talked about the move from supervised learning to unsupervised learning, and from screen learning that takes place on a device to three-dimensional learning that will take place in closer proximity to the real world.
By that, I mean that when you put this type of technology into robotics and autonomous agents that can move on their own, you get a very different type of AI, and for a lot of people, a much scarier one. And when you go from supervised learning to reinforcement learning, where the program learns from reward signals rather than from explicit labels in its training data, things can get even stranger.
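To make that distinction concrete, here is a minimal sketch in Python; the one-weight model, the two-armed bandit and all the numbers are toy illustrations of mine, not anything shown in class. In the supervised case the learner is corrected against explicit labels; in the reinforcement case it only ever sees rewards.

```python
import random

# Supervised learning: every example comes with the "right answer" (a label),
# and the model is corrected against it. Here, a one-weight linear model.
def supervised_fit(pairs, lr=0.01, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, label in pairs:
            pred = w * x
            w -= lr * (pred - label) * x  # error is measured against the label
    return w

# Reinforcement learning: no labels at all. The agent acts, observes a reward,
# and must infer which action pays off. A two-armed bandit, epsilon-greedy:
def bandit_learn(steps=5000, epsilon=0.1):
    values = [0.0, 0.0]  # running value estimate for each action
    counts = [0, 0]
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.randrange(2)        # explore
        else:
            action = values.index(max(values))  # exploit
        # Hidden payoff distribution; the agent is never told the "correct" action.
        reward = random.gauss(1.0 if action == 1 else 0.2, 0.5)
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

print(supervised_fit([(x, 3 * x) for x in range(1, 6)]))  # converges to w ≈ 3
print(bandit_learn())  # learns that action 1 pays more, purely from rewards
```

The contrast is the point: the supervised learner is told what the answer should have been, while the bandit has to discover it from reward alone.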
Raskar also contrasted what he called “sprinkle AI,” or frivolous tinkering around the edges, with more substantive three-dimensional artificial intelligence, where the use cases will be a lot more evident.
In terms of business, he pointed to three strands of current AI work: niche applications, platforms and particular use cases. Going back to the concept of “screen AI,” where the technology lives in a screen interface, he suggested that without strong internal technology, some of these applications are little better than window dressing.
“They’re easy to build,” he said of the ‘screen AI’ products, “but also very easily upended, right, because somebody else… can build a similar solution, and as long as they have the tenacity, they can beat you.”
For example, he talked about Uber: the dispatch algorithms are the heart of the business, the secret sauce that others won’t be able to replicate.
In describing this sort of competitive strategy, Raskar pointed out that there’s a lot of money in this field – an estimated $99 trillion over five years!
It’s important, he said, that the work gets done in responsible, safe and ethical ways.
So what do these new 3D AI projects look like?
Moving into a description of a 3D use case, he talked about headphones, a camera and other gear for first responders, with a focus on AR: you can imagine something a lot like what you see in the old Terminator films. Except, obviously, used for good.
Back to Uber and the way the new tech economy is going to work: Raskar talked about the need to pursue three stages in AI development. First, capture the data; second, analyze the data; third, engage, by which he presumably meant getting your project out into the world and working.
For the “data capture” concept, he made the distinction between taxis, the legacy system, and the new and disruptive Uber.
The difference, he maintained, is that there is virtually no data capture in the taxi system. Although newer taxis have card systems, traditionally there was no digital component at all: you paid cash for rides according to the meter. Now, with Uber, everybody’s ride data is in the mix, being scrutinized by machines, and the machines are about to get a lot smarter!
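As a back-of-the-envelope illustration of that capture / analyze / engage loop, here is a short Python sketch; the ride fields, zones and thresholds are hypothetical stand-ins of mine, not Uber’s actual schema or dispatch logic.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical ride record; the field names are illustrative, not Uber's schema.
@dataclass
class Ride:
    pickup_zone: str
    minutes_waited: float
    fare: float

# Stage 1: capture. Every ride produces a structured record.
rides = [
    Ride("downtown", 3.2, 14.50),
    Ride("downtown", 8.9, 11.00),
    Ride("airport", 2.1, 32.75),
]

# Stage 2: analyze. Aggregate the captured data, here by pickup zone.
def avg_wait_by_zone(rides):
    zones = {}
    for r in rides:
        zones.setdefault(r.pickup_zone, []).append(r.minutes_waited)
    return {zone: mean(waits) for zone, waits in zones.items()}

# Stage 3: engage. Feed the analysis back into the product, e.g. by
# nudging more drivers toward zones where riders wait too long.
for zone, wait in avg_wait_by_zone(rides).items():
    if wait > 5:
        print(f"Dispatch more drivers to {zone} (avg wait {wait:.1f} min)")
```

Cash-and-meter taxis never produce the records in stage 1, which is why stages 2 and 3 were never possible for them.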
Later, we also got some insights from Beth Porter, who talked about ed tech and AI for neurodivergence.
“If you know anybody who has a child with autism,” she said, “if you know anybody who has kids who suffer from ADHD, you know also that millions of hours are frustratingly spent trying to help the students engage meaningfully with a bunch of content experiences.”
Much of this, she said, is relatively ineffective because it’s not in the right formats, or isn’t well targeted to the neurodivergent student’s needs.
“It doesn’t provide the right kinds of feedback, it doesn’t feel like something they can connect to,” she explained.
Porter encouraged students to think about the problem in a comprehensive way and to consider what kinds of learning can help those with these disabilities. It doesn’t have to come through traditional channels like text and voice, she noted; some of it could come through imagery or video. AI for neurodivergence, she suggested, might also involve augmented reality and other similar projects.
Hossein Rahnama talked to us about what early-career professionals can do to advance their own goals and the goals of the community.
He suggested working on the core of the project, and not just the interface.
Using the term “co-creation,” he talked about how people should imagine others using their ideas to come up with secondary applications.
He also talked about the value of daily users for consumer technologies, and contrasted that with how you have to proceed with B2B software or AI products.
Whichever road students choose, Rahnama encouraged them to embrace innovation. “Bring your passion,” he said, talking about the value of improving patient experiences in healthcare and other use cases.
After Rahnama, Sandy Pentland (who launched the class more than 20 years ago) came up to talk about perspective-aware computing and other new advances.
“Don’t think small,” he said, encouraging students to “build something that touches a billion people.”
As for opportunities, he talked about breaking down silos in healthcare.
“You need to be able to tie (things) together,” he said. “There needs to be an AI on top of that.”
Mentioning the pandemic as a prime example, he noted that the response could have been a lot more robust with better data handling.
“We didn’t share that data directly – we could have done a dramatically better job,” he said.
He also talked about microbiomes and RNA analysis.
Last, we had some interesting input from Dave Blundin, who talked about some of the massive changes we’re likely to see within just a few years.
Blundin started out going over his work with Lincoln Lab (which will be relevant to our conversation with Ivan Sutherland in a different post) and how he made his way to MIT as a devoted fan of Marvin Minsky.
Blundin mentioned the problem of disparity, which he saw growing up in Iran, and some of the way stations toward agile tech; he gave the example of Amazon, which started out as a small startup and went on to supplant Walmart.
He also talked about how to measure the light-speed advance of AI.
“What fraction of your life did you spend last year talking to an AI?” he asked, suggesting students should count things like Siri interactions, and predicting that this metric will rise year after year.
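For a rough sense of the arithmetic behind that metric, here is a toy calculation; the 12 minutes a day and the doubling rate are purely illustrative assumptions of mine, not figures from the talk.

```python
# Toy version of the "fraction of your life spent talking to an AI" metric.
ai_minutes_per_day = 12          # Siri queries, chatbot sessions, etc. (assumed)
total_minutes_per_day = 24 * 60

fraction = ai_minutes_per_day / total_minutes_per_day
print(f"Today: {fraction:.1%} of your time")  # ~0.8%

for year in range(1, 4):         # if usage doubled every year (assumed)
    fraction *= 2
    print(f"Year +{year}: {fraction:.1%}")
```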
“We have thousands and thousands of customer service phone calls every single day,” he said, speaking of one of his companies, one that he took public. “We record them all, of course, those are the ones we’re testing with, … those are going to move over (to AI) very, very quickly.”
As for writing code, Blundin has some interesting thoughts on that, too.
“At OpenAI, 80% of (code is currently) written by the machine,” he said, citing his recent conversation with Sam Altman and suggesting that there’s a consensus the number will rise to 95% within just a year or two!
All of this was extremely eye-opening. Watch for more insights as we proceed through 2024.