Mark Zuckerberg, founder of Facebook and CEO of Meta, the parent company of all the apps we know and love, stood on stage for 45 minutes today at the company’s headquarters for their annual “Connect” event, which introduces new products, features and ideas primarily to developers, but also to the wider Meta app user community.

Today, the company had three key areas of focus: Mixed Reality, Artificial Intelligence and Smart Glasses.

For me, the AI announcements were by far the most impactful and compelling.

While there is a lot to unpack, let me give you the key takeouts from today’s event in terms of AI and how it’s going to change what you do, and very soon.

Firstly, when it comes to generating those fancy, interesting images that are unique and 100% AI-generated, Meta is in this space. Their “Emu” image model is able to generate images in just five seconds, and the goal is for you to be able to summon it within any chat in the future.

You might be chatting with your kids, or your mates, in WhatsApp, Messenger or Instagram, and then just type “/imagine a red bridge crossing a flowing river with a dragon flying over it shooting fire down at a boat” and the image will just appear in the chat. (No idea where that vision came from!)

Initially, this won’t be available to everyone, but Meta does plan to improve the “stickers” we have in chats, allowing you to simply describe the sticker you want in a cartoon-like style and have it generated for you.

But the real power of Meta’s AI plans is in their large language model. That’s how AI applications like ChatGPT are categorised, and what Meta is doing is vast. In my view, it has a greater roadmap, a better-planned roll-out and a longer-term chance of genuine impact than any of the “Metaverse” talk of recent years. In fact, I think I only heard the word “Metaverse” twice today.

It all starts with @Meta AI. This new “Assistant” is your all-in-one helper: ask it anything and you’ll get an answer.

Suggestions for a place to go, an answer to a question about the world; it really doesn’t matter.

Meta AI has a back-end link to Bing Search through a partnership between Meta and Microsoft, which means you can ask about things happening out on the wider internet and it can bring you back answers.

You simply message Meta AI like you do any other person in your contact list.

In fact, Meta’s long-term plan is that we all have a set of “Assistants”. I might have one Assistant I call upon to help with work-related tasks or questions, and another that is more my day-to-day go-to.

To get us thinking that way, Meta is launching a range of AI “Personas” with real celebrities as the “face”.

NFL legend Tom Brady is a sports guru, Kendall Jenner is your “big sister”, and there’s another who’s a career coach. These differing personas allow you to have more specific conversations in a narrow area, where the responses are potentially more on point.

It is, though, a weird mix of real people and AI personas, and in a world where we can now just generate “characters”, even with Meta’s own Emu image system, why not simply create fake faces for these AIs? It’s for marketing and publicity, of course, but I wonder if it’s a confusing entry point to AI for the general market.

Meta’s new Ray-Ban smart glasses got a big improvement in quality and a new live-streaming capability, but the real power again comes from AI.

You can now summon “Meta” and ask any question. So, wearing your glasses and sitting chatting to a friend, if you’re arguing about a topic, just ask Meta AI for the answer.

What Meta didn’t do was position Meta AI as a rival to Siri, Google Assistant or Alexa, but what they have done is create perhaps the most powerful voice assistant yet, in terms of intelligence anyway.