Mixed reality. Artificial intelligence. Smart glasses. Large language models. Automation. 5G. The internet of things.
Each of these innovations has the potential to revolutionize the way we live. But what about the way we work? How is mixed reality transforming training and collaboration? What impact will AI have on jobs? How will hardware evolve to become more accessible? And what happens when the workplace is connected by millions of sensors? Let’s find out.
‘Mixed reality’ is a term that encompasses both fully immersive virtual experiences (i.e. VR) and blended experiences in which the physical world is augmented by digital objects (also known as augmented reality, or AR). So think of it this way: VR + AR = mixed reality. Extended reality, or XR, is the term sometimes used to cover them all. That’s the what, but why are we so excited about it as an area of innovation?
Ever since the first computers, or at least since the invention of the graphical user interface, we’ve drawn a distinction between the ‘real’ world and the ‘digital’ world. As if one of these things somehow has more value than the other.
At Meta, we don’t think there’s a hierarchy between the real and the digital worlds. In fact, we don’t even see a distinction anymore. Because what defines our world today is the ability to move effortlessly between the two.
This is especially true at work. By 2025, a massive 73% of the workforce will be Gen Y or younger. These employees will never have encountered a workplace that didn’t have laptops, mobile devices, cloud computing, video conferencing and the internet. They’ve spent their entire careers with one foot in the digital world and the other in the physical one. They switch from in-person meetings to email to video calls without any conscious thought about what ‘mode’ they’re in or ‘world’ they’re inhabiting. They’re like the fish in the story who, when asked how she’s finding the water, replies: ‘What water?’
Mixed reality takes the last remaining boundary between these worlds and removes it completely. Because until now, the digital world has been something we can only touch through a screen. It’s literally been kept at arm’s length.
Not any more. With mixed reality we’re removing the screen so that the digital world, along with everything and everyone inside it, feels just as present, just as accessible, just as right there as the physical one. It’s no longer a case of this or that – by blending digital experiences with your physical environment we can deliver the best of both, right before your eyes.
Fully immersive experiences have already unlocked a ton of value for businesses, from construction companies using digital twins to catch design errors, to scientists training on virtual production lines.
Not only will a new generation of mixed reality headsets like Meta Quest 3 make those experiences more accessible by lowering the barrier to entry and offering a quicker path to ROI, they’ll also unlock entirely new ways to work.
For instance, one of the advantages of working in mixed reality compared to being fully immersed in VR is that you stay more connected to the world around you. In fact, Meta Quest 3 actually understands the world around you using AI – it doesn’t need you to draw an artificial safety barrier to keep you from bumping into objects. You can just put on the headset and move around your space, even if it’s crowded or busy.
That unlocks the widespread use of mixed reality in manufacturing plants, warehouses, hospitals and fulfillment centers. Until now, the only headsets rated for these environments came with wired battery packs or controllers. A wire-free option makes things like remote assistance, workflow support and data visualization more accessible than ever.
Between June 2022 and March 2023, global searches for 'AI' almost quadrupled from 7.9 million to 30.4 million, which gives some indication of how hot the topic has been.
Meta has been a pioneer in artificial intelligence for over a decade, releasing over 1,000 models, libraries and data sets for researchers. Some people have questioned whether Meta is actually more interested in AI than mixed reality, but the truth is that the two of them go hand in hand.
We believe that there will be lots of different types of AI to help us do different things. And some of the most interesting things will happen in virtual worlds.
Think of it as a spectrum. At one end are simple, 2D applications of AI, like using new creative tools on your phone to make stickers or generate images from text prompts. You can already do this today, in the US at least, with Meta’s Emu.
In the middle of the spectrum is something like Meta AI, a next-generation chatbot that you can summon anywhere, anytime to make recommendations about where to eat, hike or shop, tell you a joke or settle a debate with information it’s found on the web.
Finally, at the far end of the spectrum, is our vision for the future of AI and mixed reality. Today’s AIs don’t have a ton of personality. Why would they, when they’re just a few lines of text on a screen? But what if they were more than that? What if they were fully-fleshed avatars that you could interact with in virtual worlds? What would they look like, sound like, feel like?
If you guessed Tom Brady, Naomi Osaka, or Snoop Dogg, then you win the prize.
We’ve created a whole team of AI assistants loaded with information on the things that you (and they) care about, like sports, food, fitness and travel. Then we asked some of the world’s most recognizable celebrities to help us bring them to life. So you’ve got Tom Brady as Bru, the sports debater, Naomi Osaka as the anime-obsessed cosplay expert and Snoop Dogg as the storytelling Dungeon Master.
Today, you can interact with these AIs on Instagram, WhatsApp and Facebook Messenger in the US. But coming soon, you’ll be able to hang out with them as 3D avatars in Horizon Worlds. That’s the point when AI steps off the screen and into your world.
It doesn’t take a great leap of imagination to see how expert 3D avatars could make people more productive and effective at work.
One of Meta’s new AIs, Lily, is actually an expert writer who can offer tips and advice on things like grammar, spelling and word choice. It’s like a hyper-smart version of the AI prompts you currently get when using cloud collaboration software.
But instead of seeing prompts or suggestions on a screen, imagine sitting next to Lily in a virtual office while you’re working on a new presentation or report. Maybe you turn to her and ask, “How can I make this bit clearer?” Or, “What am I missing here?” Maybe she turns to you and says, “Have you thought about adding this bit in there?”
Now imagine that but for any skill you can think of. Because Meta is also building an AI Studio so that developers can use our technology to create their own assistants. You could be sitting in that virtual office next to a strategy expert, a scientist, or a lawyer. You could be in a manufacturing plant working on a machine alongside a 3D avatar of the engineer who designed and built it.
The impact of these AIs on the way we work will be every bit as profound as the impact of the web. Thirty years ago, the idea that you could sit down and access the world’s information through a computer screen seemed like pure science fiction. Soon, we’ll be able to do the exact same thing simply by turning to the AI next to us and asking. And, who knows, maybe we won’t even need to ask.
When you put mixed reality and AI together, the results are tantalizing. But ask anybody in the business what it’s going to take to make the technology truly mainstream and they’ll say the same thing: It’s all about the form factor.
If we’re really going to unlock the power of mixed reality and make it accessible to millions or even billions of people, we have to move beyond a headset to something lighter, easier, and better looking. We’re going to need glasses. Lots of glasses.
That’s why an innovation Meta is really excited about is the next leap forward in augmented reality technology, represented by the new Ray-Ban Meta smart glasses.
It’s fair to say that smart glasses have a bit of a checkered history. It’s an idea that’s been around for a while, but the technology has finally caught up with the ambition. Which is a fancy way of saying that Meta’s smart glasses have incredible functionality and look cool.
They record images and live stream in 4K. They have downward-firing speakers so you can listen to music. They’ll share your images directly to Facebook and Instagram. And they’re the first product to have Meta AI built in from the start (at least in the US). That puts us another step closer to historic breakthroughs like instant universal translation or being able to see a thing and immediately know what it is – just by asking.
When we’re able to package up the power of AI, the utility of mixed reality, and the simplicity of smart glasses, then we’ll really unlock the next generation of work.
Because there are already companies today who are using mixed and augmented reality for things like heavy equipment training in factories, or for remote maintenance on hard-to-reach industrial sites like wind farms, or even to help fight the spread of wildfires by analyzing 3D data in virtual operations centers.
All of these use cases, and thousands of others, become simpler and more accessible as the technology gets smaller, lighter and faster. It’s not just outdoor or frontline work, either. It’s easy to imagine slipping on a pair of glasses in the office to have a more immersive and natural conversation with the hologram of your remote colleague. Or working with multiple virtual screens instead of your tiny laptop.
Smart glasses will allow us to connect to colleagues around the world in a way that’s truly spontaneous, while also giving us immediate access to information or specialist expertise right at the point of need.
We all know that large language models (LLMs) have been the subject of huge excitement over the last couple of years. And sure, some of that — from talk of AI doom to predictions of imminent superintelligence — bears the hallmarks of a tech hype cycle. But get beyond the headlines, and it’s no surprise LLMs are being taken up in earnest by many professionals.
LLMs are built on transformer models: a special kind of neural network that teaches itself about the underlying patterns in sequential data. When trained on vast amounts of text, transformer models learn the deep statistical relationships between words as they are commonly used in sentences. The result is an AI with remarkable linguistic competence: it can understand natural-language inputs and, in response, generate text that is relevant, detailed, and apparently meaningful.
This makes LLMs — think GPT-4 or Meta’s Llama 2 — a near-uniquely flexible knowledge tool. One with access to huge reserves of information and capable of generating natural-sounding text responses of all kinds.
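The mechanism at the heart of a transformer is self-attention: each token’s representation is updated as a weighted blend of every token’s, with the weights reflecting learned relevance. A deliberately minimal NumPy sketch (toy random weights, not a real trained model) shows the shape of that computation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turn raw scores into probabilities
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)       # each row is a probability distribution
    return weights @ V                       # blend of value vectors, weighted by relevance

# Toy example: 4 tokens, embedding dim 8, projection dim 4
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 4): one updated vector per token
```

In a real LLM, many such attention layers are stacked and trained on billions of sentences; that is where the statistical knowledge of language comes from.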
Copy drafting — emails, presentations, and reports — is a clear workplace use. But many professionals are now building research and ongoing learning practices around these tools.
Large organizations face challenges when it comes to learning and knowledge management. Information is typically stored in myriad places, from documents, to slide decks, to spreadsheets, and beyond. Even experienced employees can spend hours, days, even weeks searching for that elusive stat, insight, or deck.
Now, some organizations are developing LLMs as a transformative new way to approach those challenges. In August, consulting group McKinsey announced Lilli, an LLM fine-tuned on proprietary content spanning over 100,000 documents. It’s intended to act as a new way for McKinsey staff to access the vast storehouse of industry-specific knowledge and data the group has accumulated over decades.
“With Lilli, McKinsey consultants can use technology to leverage our entire body of knowledge and assets… This is the first of many use cases that will help us reshape our firm,” said Jacky Wright, McKinsey’s chief technology and platform officer.
Associate partner Adi Pradhan, meanwhile, is using Lilli as a learning tool: “I use Lilli to tutor myself on new topics and make connections between different areas on my projects,” he revealed. “It saves up to 20% of my time preparing for meetings. But more importantly, it improves the quality of my expertise and my contributions.”
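Tools like this typically pair an LLM with a retrieval step that first finds the most relevant internal documents to ground the answer. McKinsey hasn’t published Lilli’s internals; as a stand-in, here is a toy keyword-overlap retriever in Python. Real systems use learned embeddings rather than word matching, but the shape of the pipeline is the same:

```python
def relevance(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document (toy scoring)."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most relevant to the query."""
    return sorted(docs, key=lambda d: relevance(query, d), reverse=True)[:top_k]

# Invented document titles, purely for illustration
docs = [
    "2021 survey of retail banking customer churn drivers",
    "Playbook for post-merger integration in manufacturing",
    "Benchmarks for customer churn in retail energy markets",
]
print(retrieve("retail customer churn", docs, top_k=2))
```

The retrieved passages are then handed to the LLM as context, so its answer draws on the firm’s own material rather than only on what it memorized in training.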
The future belongs to those organizations — and individuals — best able to combine their own intelligence and creativity with AI in order to learn more, see further, and produce even better results.
Think of a workplace assistant, guide and learning companion, ready to help 24/7. It amounts to a revolution in the way knowledge is distributed and absorbed. Soon enough many employees will come to expect access to these kinds of AI-fueled conversational entities. These companions are set to become key learning tools for staff — and they’ll play a key role in the induction and training of new employees.
The LLM, and machine intelligence more broadly, is sure to bring transformations of its own — and we’re only at the beginning of the journey. There’s still so much left to do; and much left to learn.
For many people, ‘automation’ and ‘robots’ conjure up images of large production lines. In reality, the technologies have a role to play across a wide range of industries beyond manufacturing.
Automation can help with almost any repetitive task, and with cloud-based platforms, even smaller businesses have access to sophisticated automation tools. From backing up data to tracking multiple job applicants, automation is helping businesses save time and money.
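As a concrete illustration, here is a short Python sketch of one such repetitive task: copying files into a dated backup folder. The folder names are hypothetical; in practice you’d point this at real paths and run it on a schedule (cron, Task Scheduler, or a cloud function):

```python
import shutil
from datetime import date
from pathlib import Path

def backup_new_files(src: Path, dest_root: Path) -> list[Path]:
    """Copy every file in src into a folder named after today's date,
    skipping files that have already been backed up today."""
    dest = dest_root / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.iterdir():
        if f.is_file() and not (dest / f.name).exists():
            shutil.copy2(f, dest / f.name)  # copy2 preserves timestamps
            copied.append(dest / f.name)
    return copied
```

Because the script skips anything already copied, running it every hour costs nothing extra; that idempotence is what makes small automations safe to leave unattended.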
Robotic arms are no longer confined to manufacturing plants but are used to perform surgeries, retrieve packages and much more. Their impact is being felt in the home too, where experts believe they will take on almost 40% of domestic chores by 2033.
Although there are fears around job displacement, history suggests that concerns about machines taking over are overblown. The same worries surfaced during the Industrial Revolution, but labor markets tend to adjust to technological advances. The World Economic Forum estimates that by 2025, 85 million jobs will be displaced – but 97 million new ones will be created.
Rather than destroy or create jobs, workplace innovations are transforming existing roles. Think automated scheduling of social posts for comms teams, or exoskeletons that help employees in distribution centers with heavy lifting.
The need for better efficiency without sacrificing quality will lead to greater dependence on automation. This will lead to huge savings, with Gartner predicting an $80 billion saving on contact center labor costs alone by 2026.
While improved efficiency is a good thing, it will inevitably require a rebalancing of work roles.
There are also growing concerns surrounding the use of data collected through new workplace technologies. In countries around the world, regulators are looking at AI and considering frameworks to govern its use, with a law regulating the use of AI in Europe on its way to being finalized and the US issuing a Blueprint for an AI Bill of Rights – a set of principles for the use of AI.
5G, as the name suggests, is the fifth generation of wireless cellular technology. Not only does it offer faster upload and download speeds, it also provides more consistent network connections with improved capacity. By the end of 2024, the technology is forecast to cover more than 40% of the world’s population.
You might own a mobile phone with 5G connectivity, but it’s much more than a way to scroll through social media with reduced loading times. By making the internet more widely accessible, 5G is driving the creation of huge amounts of data – data that works alongside AI, automation and robots to help us make better-informed decisions, adapt to change in real time, and roll out the latest tech-powered ideas at scale.
The shift to 5G wireless networks has the potential to revolutionize entire industries, from manufacturing and transport to healthcare and retail. 5G enables superfast connectivity and the integration of technologies such as AR and VR.
Smart factories will streamline processes, and the transportation of goods will evolve with automated vehicles. In healthcare, medics unable to travel to remote locations will virtually perform more examinations and diagnoses. It's also hoped that as 5G becomes more accessible, it will help close the gap between high and low-paid workers.
The internet of things (IoT) refers to the billions of devices that connect to the internet and collect data over a wireless network without human intervention. Think smart speakers in kitchens or video doorbells connected to your phone.
It’s made possible by squeezing sensors, processors and data into the smallest of spaces. Add a wireless internet connection to the mix and everyday household devices become ‘smart’.
Today, climate control systems monitor temperature levels to decide when to turn on the heating. Curtains can be drawn according to your morning schedule. It may come as little surprise that 15 billion objects were connected to the internet of things in 2023, and that number could almost double to 29 billion by 2030. Smart devices are certain to take on new and even more interesting applications in the near future and businesses will almost certainly be ahead of the curve.
In the workplace, the IoT helps to optimize processes – like automatically ordering printer ink when it's running low or turning off lights and air-conditioning in empty rooms. Emerging IoT technologies will make workplaces even smarter, automating check-in procedures and using occupancy sensors to identify overcrowded spaces. Labor-saving processes will free up more time for creative thinking and focusing on more complex tasks.
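Processes like these usually boil down to simple sensor-threshold rules. A minimal Python sketch (sensor names and thresholds are invented for illustration) of how such rules might be evaluated against incoming readings:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    sensor: str                       # which sensor reading this rule watches
    trigger: Callable[[float], bool]  # condition on the reading
    action: str                       # action to fire when the condition holds

def evaluate(readings: dict[str, float], rules: list[Rule]) -> list[str]:
    """Return the actions triggered by the current sensor readings."""
    return [r.action for r in rules
            if r.sensor in readings and r.trigger(readings[r.sensor])]

# Hypothetical office rules
rules = [
    Rule("printer_ink_pct", lambda v: v < 10, "order_ink"),
    Rule("room_occupancy", lambda v: v == 0, "lights_off"),
]
print(evaluate({"printer_ink_pct": 6, "room_occupancy": 3}, rules))  # ['order_ink']
```

Real building-management platforms add scheduling, debouncing and device APIs on top, but the core loop is this: read sensors, evaluate rules, dispatch actions.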
The internet of things is set to benefit a wide range of industries. Whether it’s monitoring stock levels and foot traffic in a retail store, or measuring temperatures and humidity in a manufacturing facility, networked devices will provide data in real time to help humans make better, faster decisions. By using AI to analyze data, businesses can gain and explore insights that previously required more time and effort.