How Quest 3 and Apple Vision Pro Are Shaping the Future of Mixed Reality and the Metaverse: A Perspective from the Industry
At DRex, we are always on the lookout for the latest innovations and market trends that can inspire and inform our electronics manufacturing and tech solutions. One of the most exciting developments in recent years is the emergence of mixed reality (MR) and the metaverse, which are transforming the way we interact with reality and with each other. In this blog post, we will explore two of the most advanced MR headsets launched in 2023: the Meta Quest 3 and the Apple Vision Pro. We will compare their features, performance, and strategies, and discuss what they mean for the future of MR and the metaverse, as well as for DRex's stakeholders.

What Are the Quest 3 and the Vision Pro?

The Quest 3 and the Vision Pro are two of the most advanced MR headsets available today. Both aim to provide a spectrum of immersive experiences spanning from AR to VR, and to let users access and create content for the metaverse. But what are the key features and differences of these devices, and what do they mean for the future of MR and the metaverse? Here is a brief overview.

Quest 3

The Quest 3 is the successor to the popular Quest 2 VR headset from Meta (formerly Facebook). It features a 4K+ Infinite Display with a nearly 30% leap in resolution over the Quest 2, along with a slimmer profile, a more customizable fit, and better-balanced weight distribution. The Quest 3 is powered by the Snapdragon XR2 Gen 2 platform, which delivers roughly double the graphics processing power of the Quest 2. The Snapdragon XR2 Gen 2 is a dedicated system-on-chip (SoC) designed for XR devices, with support for on-device AI processing, spatial audio, and high-resolution mixed-reality passthrough. The Quest 3 supports full-color passthrough that captures the user's physical surroundings with over 10 times more pixels than the Quest 2, and users can double-tap the side of the headset to transition between VR and MR modes.
The Quest 3 comes with Touch Plus controllers that offer improved haptics and tracking, as well as Direct Touch hand tracking that lets users interact with virtual objects without controllers. The Quest 3 has a huge library of games, apps, and videos compatible with both VR and MR modes. Users can also access Meta's metaverse platform, Horizon Worlds, which hosts a variety of social, creative, and educational experiences.

Vision Pro

The Vision Pro is the first MR headset from Apple. It features a sleek design with a single piece of three-dimensionally formed laminated glass across the front and a premium fabric headband, along with a separate external battery pack that can be clipped to clothing or carried in a pocket. The Vision Pro packs a combined 23 million pixels across two micro-OLED displays, exceeding 4K resolution per eye and surpassing the Quest 3's display. A custom three-element lens system provides high-quality optics and visual comfort, and ZEISS Optical Inserts are available for users who need vision correction. The Vision Pro is powered by two chips: an M2 chip for general computing and an R1 chip for spatial computing. The M2 is the second generation of Apple's custom silicon for Macs and iPads, delivering high performance and efficiency. The R1 is a new chip designed specifically for XR: it processes input from the headset's cameras, sensors, and microphones with very low latency, enabling high-fidelity video passthrough, eye tracking, hand tracking, gesture recognition, and spatial audio. A separate feature called EyeSight uses an external display to show the wearer's eyes to people nearby, signaling whether the user is immersed or aware of their surroundings. Users control the headset with their eyes, hands, and voice rather than with controllers, though Bluetooth controllers, keyboards, and other accessories can be connected to enhance the experience.
The Vision Pro has a more limited selection of content at launch, focused mainly on Apple's own services and apps, such as Apple Music, Apple TV+, Apple Arcade, and Apple Fitness+. The headset runs visionOS, Apple's spatial computing operating system, which is designed to blend digital content with the physical world and to connect users across Apple's devices and services through features such as FaceTime with spatial Personas and SharePlay.

The Metaverse Is Dead, Long Live Mixed Reality?

The metaverse, a term coined by science fiction author Neal Stephenson in his 1992 novel Snow Crash, has been a popular concept in the tech industry for decades. It is envisioned as a shared virtual space that connects people, places, and things across different platforms and devices, hosting activities such as socializing, gaming, entertainment, education, work, and commerce. However, the metaverse as we know it may soon become obsolete, as a new technology emerges to challenge its dominance: mixed reality (MR).

MR is a term that encompasses both augmented reality (AR) and virtual reality (VR), as well as any combination of the two. AR adds digital elements to the real world, such as holograms or annotations, while VR immerses the user in a fully simulated environment. MR lets the user seamlessly switch between AR and VR modes, or blend them together in various ways.

MR is not just a new way of experiencing the metaverse, but a new way of creating it. MR enables users to interact with and manipulate their physical surroundings in real time, using technologies such as cameras, spatial computing, computer vision, and audio ray tracing. These technologies allow users to create and share their own MR content and experiences without relying on predefined platforms or environments. MR also offers several advantages over the metaverse in terms of performance, accessibility, and realism.
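At its core, the seamless blending between passthrough AR and fully virtual VR comes down to per-pixel compositing of a camera frame with rendered content. The NumPy sketch below is purely conceptual (real headsets do this on dedicated hardware at high frame rates, and the arrays here are stand-ins, not real images), but it shows the math that lets a device slide along the AR-to-VR spectrum:

```python
import numpy as np

# Stand-ins for a camera frame and a rendered virtual layer
# (tiny 4x4 RGB images with constant values, for illustration).
camera_frame = np.full((4, 4, 3), 100.0)
virtual_layer = np.full((4, 4, 3), 200.0)

# Per-pixel opacity of the virtual layer: 1.0 = fully virtual (VR),
# 0.0 = fully passthrough (AR background). Intermediate values blend
# digital content into the physical surroundings.
alpha = np.full((4, 4, 1), 0.5)

# Standard alpha compositing, broadcast across the RGB channels.
composite = alpha * virtual_layer + (1.0 - alpha) * camera_frame
```

Varying `alpha` per pixel is what allows, say, a virtual screen to appear opaque while the rest of the room stays visible through passthrough.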
MR devices are becoming more affordable and more widely available than ever before, thanks to the development of specialized hardware accelerators designed to speed up machine learning and artificial intelligence (AI) workloads. These accelerators include graphics processing units (GPUs), tensor processing units (TPUs), neural processing units (NPUs), and field-programmable gate arrays (FPGAs).

"MR's specific requirements will tilt the playground"

GPUs are general-purpose processors that can handle large-scale parallel computations. They are widely used for training deep learning models, which often require processing large datasets through networks with millions of parameters, and they are also capable of rendering high-quality graphics for VR applications. However, GPUs are not specialized for the particular mathematical operations that dominate machine learning, such as matrix multiplications and convolutions, so they may spend resources and energy on unnecessary computations or data transfers.

TPUs are application-specific integrated circuits (ASICs) optimized precisely for those operations. They offer superior performance and energy efficiency compared to GPUs for certain workloads, and they are mainly used for inference tasks, which involve applying trained models to new data. The trade-off is flexibility: TPUs can only perform the functions hardwired into their architecture, so they may be unable to handle new or complex tasks that require different types of machine learning operations.

NPUs are similar to TPUs, but they are more flexible and adaptable. They can be programmed to perform different types of machine learning operations, such as classification, detection, segmentation, or generation, and they are well suited to edge computing scenarios, where data is processed locally on the device rather than in the cloud.
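The matrix multiplications and convolutions that these accelerators target are closely related operations. As a minimal illustrative sketch (using plain NumPy, not tied to any particular accelerator), a 1-D convolution can be rewritten as a single matrix multiply over shifted copies of the kernel, which is exactly the kind of dense linear algebra that GPUs and TPUs are built to execute in bulk:

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])

# Direct sliding-window form ("valid" cross-correlation, as used in
# most ML frameworks): dot the kernel with each window of the signal.
n_out = len(signal) - len(kernel) + 1
direct = np.array([signal[i:i + len(kernel)] @ kernel for i in range(n_out)])

# The same computation as one matrix multiply: each row of the matrix
# holds a shifted copy of the kernel.
conv_matrix = np.zeros((n_out, len(signal)))
for i in range(n_out):
    conv_matrix[i, i:i + len(kernel)] = kernel

as_matmul = conv_matrix @ signal

# Both forms compute the same result.
assert np.allclose(direct, as_matmul)
```

Hardwiring this reduction of convolution to matrix multiplication into silicon is what lets a TPU-style ASIC trade generality for throughput and energy efficiency.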
Processing data locally means that NPUs can reduce latency and bandwidth consumption, as well as enhance privacy and security.

FPGAs are programmable chips that can be reconfigured to implement specific functions or algorithms, offering a unique combination of flexibility and performance. FPGAs can be programmed with custom logic, allowing them to deliver high performance for certain workloads while consuming less power than GPUs or TPUs. However, FPGAs are not easy to program or debug; they require specialized skills and tools to design and optimize.

These hardware accelerators enable MR devices to process massive amounts of data in real time, resulting in faster and smoother MR experiences. They also make possible features such as video passthrough, eye tracking, face tracking, gesture recognition, and spatial audio, which enhance the realism and responsiveness of MR content.

MR is not just a new technology, but a new paradigm for computing, entertainment, and more. MR is not just an extension of the metaverse, but a replacement for it. MR is not just a vision for the future, but a reality for today. The metaverse is dead, long live mixed reality!

Q&A

Q1: What technological advancements do the Meta Quest 3 and Apple Vision Pro introduce in the MR industry?
A1: The Quest 3 introduces advancements such as a 4K+ Infinite Display, the Snapdragon XR2 Gen 2 platform, and full-color passthrough, significantly enhancing the immersive experience. The Apple Vision Pro offers a distinctive design, a dual-chip system (M2 and R1), and high-fidelity video passthrough driven by the R1's low-latency sensor processing. Both set new standards in immersion, processing power, and user experience in MR.

Q2: How do the Quest 3 and Vision Pro contribute to the growth of the metaverse?
A2: Both devices serve as gateways to the metaverse, providing users with tools to create, access, and interact within these expansive virtual spaces.
Quest 3 users have access to Horizon Worlds, with its social, creative, and educational experiences, while Vision Pro users connect through Apple's visionOS ecosystem, with features such as FaceTime with spatial Personas and SharePlay. These platforms extend the boundaries of the metaverse, offering more ways to connect and interact.

Q3: Are there any distinct differences between the content available on the Quest 3 and the Vision Pro?
A3: Yes. The Quest 3 boasts a vast library of games, apps, and videos compatible with both VR and MR modes, whereas the Vision Pro focuses more on Apple's own services and apps, offering a more curated but limited selection at launch. Both are expected to expand their offerings as the platforms grow.

Q4: How do hardware accelerators like GPUs, TPUs, and NPUs impact the performance of MR devices?
A4: These specialized hardware components significantly speed up the machine learning and AI workloads essential for MR. GPUs support deep learning and high-quality graphics, TPUs optimize core machine learning operations, and NPUs offer flexibility across different machine learning tasks. Together they contribute to faster, smoother, and more realistic MR experiences.

Q5: What does the rise of MR mean for the concept of the metaverse?
A5: MR represents an evolution of the metaverse, offering a more interactive and immersive way to blend digital and physical realities. It is not just about exploring virtual spaces but also about manipulating and interacting with the real world in real time. MR might not replace the metaverse so much as expand its scope, adding new layers of interactivity and immersion.

Q6: How do the Quest 3 and Vision Pro impact stakeholders at DRex?
A6: For stakeholders at DRex, the advancements brought by the Quest 3 and Vision Pro signify emerging opportunities in electronics manufacturing, tech solutions, and beyond.
These MR headsets are pushing the boundaries of what’s possible, indicating a growing market for more advanced, integrated components and signaling the need for innovation and adaptability in various sectors.