It’s been a big couple of days for Facebook’s parent company Meta. After the company announced AI chatbots and smart glasses yesterday, today a podcast recorded in the Metaverse demonstrates just how it sees the long-term future of “face to face” remote conversations, and the good news is, it’s not cartoons.

Whenever I think of having a meeting in the Metaverse, I visualise cartoon people, designed by the participants to represent themselves and known as “avatars”, and frankly, I just can’t get behind it.

It seems Meta has had that same feedback, and the company has been working on something that is nothing short of impressive. In fact, that word doesn’t do justice to how great this is.

Meta boss Mark Zuckerberg sat down for a podcast chat with popular podcaster Lex Fridman. Normally, these interviews are done face to face in the same room, but for this podcast both Fridman and Zuckerberg underwent extensive scans to create photorealistic versions of themselves.

In the simplest of terms, the detailed scans mean the image created of you for the other person to see looks so real it leaves you in a state of disbelief.

Using cameras and sensors inside and outside the Meta Quest Pro headset, the “video call” is able to represent each party on the call with not just a photorealistic avatar, but also the intricate movements of the face, the facial expressions that play such a big role in how we express ourselves.

A smile that shows your teeth, the raising of an eyebrow, a glance to one side: all those things form an integral part of any interaction.
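If you’re curious how that could work in practice, here’s a rough sketch of the idea, and to be clear, this is my own illustration, not Meta’s code: rather than streaming video of your face, the headset could boil each frame down to a tiny “expression code”, a handful of numbers for smile, brow, gaze and so on, and the other end uses those numbers to pose a pre-built photorealistic scan of you.

```python
# A minimal, purely illustrative sketch (not Meta's actual pipeline):
# each frame, the headset sends a small "expression code" instead of
# video, and the receiver poses a pre-built photorealistic scan with it.
# All channel names here are hypothetical.

from dataclasses import dataclass
import json


@dataclass
class ExpressionCode:
    weights: dict  # hypothetical channels, e.g. {"smile": 0.8, ...}

    def to_wire(self) -> bytes:
        # A few dozen bytes per frame, versus megabits for video.
        return json.dumps(self.weights).encode()

    @classmethod
    def from_wire(cls, payload: bytes) -> "ExpressionCode":
        return cls(weights=json.loads(payload.decode()))


def pose_avatar(code: ExpressionCode) -> str:
    # Stand-in for the decoder that drives the photorealistic scan;
    # the heavy detail lives in the pre-computed scan, and the tiny
    # per-frame code only poses it.
    return ", ".join(f"{name}={value:.2f}" for name, value in code.weights.items())


if __name__ == "__main__":
    # Pretend the headset's inward-facing cameras measured this frame.
    frame = ExpressionCode({"smile": 0.8, "brow_raise": 0.3,
                            "gaze_x": -0.1, "jaw_open": 0.2})
    wire = frame.to_wire()  # this is all that crosses the network
    print(f"{len(wire)} bytes on the wire")
    print("receiver poses avatar:", pose_avatar(ExpressionCode.from_wire(wire)))
```

The point of the sketch is the bandwidth story: once the detailed scan exists on both ends, only those few numbers need to cross the network each frame.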

Zuckerberg told Fridman during the chat that he felt “like we’ve gone beyond the uncanny valley”, a reference to the unease most of us feel toward a robot or avatar that looks almost, but not quite, human.

This demonstration is quite frankly remarkable. Using the headset’s internal data, Fridman was able to publish the video with the raw computer scan, the real camera footage and the processed photorealistic avatar all side by side.

Having sat and watched this, I can only agree with both of them – it’s staggering.

A fantastic side-by-side demonstration – “Metaverse Mark” (L), Computer Scan Mark (C), Real-Life Mark (R)

Now, the joke here is that both Lex and Mark are renowned for their lack of emotional expression. That’s not to say they don’t have emotions; it’s that their faces don’t express anywhere near as much emotion as the average person’s. Was Lex chosen for this demonstration for that reason? I doubt it. He has one of the most popular podcasts in the world, and a conversation like this gets important messages out about the work Meta is doing; it’s also simply a brilliant demonstration of that work.

To be very clear, for this demo both Lex and Zuck had to travel to Pittsburgh in the USA for a complete scanning process, and I have no doubt it was all conducted in a highly controlled environment. But that doesn’t matter right now; what matters is what’s next.

Zuckerberg explains that, as a starting point, the future involves learning more about how we express emotion: “we probably need to kind of over-collect expressions when we’re doing the scanning, because we haven’t figured out how much we can reduce that down to a really streamlined process and extrapolate from the scans that have already been done.”

In the future, this could all be in the palm of your hand. Thanks to advanced sensors on smartphones, we could be doing all of this ourselves, Zuckerberg telling Fridman: “the goal, and we have a project that’s working on this already, is just to do a very quick scan with your cell phone where you just take your phone, kind of wave it in front of your face for a couple of minutes, say a few sentences, make a bunch of expressions. Overall, have the whole process just be two to three minutes and then produce something that’s of the quality of what we have right now.”