Everything I’m offering you is the truth. Nothing more!

It is with this statement that Morpheus convinces Neo to go down the rabbit hole, like Alice, and learn the truth behind the Matrix. Twenty-five years later, we are living to see the film’s premises become reality, and one of the greatest works of science fiction may be becoming more tangible than we could ever have conceived.

On this second day of the Future of AI program at SingularityU, we were struck by a new dimension of Artificial Intelligence training, arriving coincidentally (or not) through the world of simulations.

Forget the metaverse 

After getting the chance to test the Apple Vision Pro these past few days here in Silicon Valley, and after watching the talk by Aaron Frank, one of the world’s leading experts on the subject, I have no doubt that the concept of the metaverse is already dead. What’s coming is, literally, the Multiverse!

What will happen is the intersection of VR/AR/XR technologies (which Apple has branded as spatial computing) with generative AI, a field in which Apple itself has made significant advances toward running LLMs (the technology behind ChatGPT) directly on the device rather than in the cloud.

This is where the Multiverse is born: instead of going to a specific metaverse like Roblox, Meta Horizons, or the OASIS from the movie “Ready Player One”, each of us will be asking Siri, Apple’s virtual assistant, to build our environment, change our avatars, or even create new virtual products, all instantly and through voice prompts. Millions of metaverses, or a Multiverse of possibilities.

Do we have data for this? 

As you may already know, the quality of an Artificial Intelligence is directly linked to the quality of the data used during its training phase. It’s the old Data Engineering maxim: Garbage In, Garbage Out. In other words, for Siri to create new virtual realities indistinguishable from our world, we would need a surreal amount of good data. Not anymore!

The big trend highlighted in today’s sessions was the use of simulations, virtual worlds, and synthetic data to create huge datasets that come close to real-world data for training Artificial Intelligence systems. A landmark case was demonstrated by NVIDIA, which trained a robot to manipulate objects in the physical world by compressing the equivalent of ten years of daily robot movements into a single day of simulation. At the end of training, the real-world robot performed the tasks perfectly.
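To make the idea concrete, here is a minimal, hypothetical sketch of the pattern behind that kind of training: a simulator randomizes a virtual scene, rolls out huge numbers of episodes far faster than real time, and logs the results as synthetic training data. This is not NVIDIA’s actual pipeline; every name here (simulate_grasp, generate_synthetic_dataset, the physics “model” itself) is a toy placeholder, not a real SDK.

```python
import random

def simulate_grasp(object_size, friction, arm_speed):
    """Crude stand-in for a physics engine: returns True if the grasp 'succeeds'."""
    score = friction * 2.0 - abs(arm_speed - 0.5) - abs(object_size - 0.3)
    return score + random.gauss(0, 0.1) > 0.2

def generate_synthetic_dataset(n_episodes=100_000, seed=42):
    random.seed(seed)
    dataset = []
    for _ in range(n_episodes):
        # Domain randomization: vary the scene so the model doesn't overfit to one world.
        obj = random.uniform(0.05, 0.6)   # object size (meters)
        mu = random.uniform(0.2, 1.0)     # surface friction coefficient
        speed = random.uniform(0.1, 1.0)  # arm speed (m/s)
        success = simulate_grasp(obj, mu, speed)
        dataset.append({"object_size": obj, "friction": mu,
                        "arm_speed": speed, "success": success})
    return dataset

if __name__ == "__main__":
    data = generate_synthetic_dataset()
    rate = sum(d["success"] for d in data) / len(data)
    print(f"{len(data)} simulated grasps, success rate {rate:.1%}")
    # In a real pipeline these records would feed a learning algorithm; because the
    # simulator runs much faster than real time, years of robot experience can be
    # compressed into a day of computation.
```

The design choice worth noticing is the randomization step: by varying the virtual world on every episode, the synthetic dataset covers situations a single physical robot would take years to encounter.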

Here is a great lesson for any Artificial Intelligence project you may be planning to develop: before going to market, carry out numerous simulations. The AI itself can simulate different situations at scale to verify the quality of its responses, reducing, for example, the dreaded hallucinations of generative AIs.
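A minimal sketch of what “simulate before you ship” could look like in practice, assuming a batch of synthetic test scenarios with known expected facts: run the assistant over all of them and flag answers that miss the expected fact as possible hallucinations. The ask_assistant function is a placeholder for whatever model or API you actually use, and the test cases are invented for illustration.

```python
def ask_assistant(question: str) -> str:
    # Placeholder: in practice this would call your LLM or chatbot API.
    canned = {
        "Who directed The Matrix?": "The Matrix was directed by the Wachowskis.",
    }
    return canned.get(question, "I'm not sure.")

# Synthetic test scenarios: each pairs a question with a fact the answer must contain.
TEST_CASES = [
    {"question": "Who directed The Matrix?", "must_contain": "Wachowski"},
    {"question": "What headset did Apple launch in 2024?", "must_contain": "Vision Pro"},
    {"question": "Which phone records spatial video?", "must_contain": "iPhone 15 Pro"},
]

def run_simulation(cases):
    """Return the scenarios whose answers do not contain the expected fact."""
    failures = []
    for case in cases:
        answer = ask_assistant(case["question"])
        if case["must_contain"].lower() not in answer.lower():
            failures.append((case["question"], answer))
    return failures

if __name__ == "__main__":
    failed = run_simulation(TEST_CASES)
    print(f"{len(TEST_CASES) - len(failed)}/{len(TEST_CASES)} scenarios passed")
    for question, answer in failed:
        print(f"Possible hallucination or gap: {question!r} -> {answer!r}")
```

At scale, the test cases themselves can be generated by another model, which is exactly the point of the lesson: let the AI simulate thousands of situations and surface the weak answers before a single real customer sees them.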

But what about the exponential consumer? 

In yesterday’s post, I commented on how consumers are adopting technologies at an ever-faster pace, currently faster than brands themselves can absorb. A simple example is the video below, which I captured during Aaron’s talk about spatial videos, a new concept introduced by Apple with the Vision Pro. At first it looks like a normal video, but if you watch it on an Apple Vision Pro, you’ll see that it was actually captured in 3D with the iPhone 15 Pro Max camera! A spatial video about spatial video, lol.

What would have cost millions of dollars in cameras, high-powered computers, and visual effects experts just a few years ago, I did instantly with a smartphone.

If the best CX is a genuine relationship between brand and customer, yet both brand and customer are increasingly AI-powered, then it won’t actually matter whether the customer was simulated from synthetic data and the brand created by a generative AI through a prompt.

What will matter is the truth behind this relationship. 

But, as Neo asked Morpheus right before the line that titles this article: “Which truth?”
