
Q1 2026

Sony

Filed Patents: 50

Overview

Sony filed 50 patents this quarter across 11 categories: AI & Machine Learning (19), Hardware (5), Graphics (5), VR & AR (4), Cloud Gaming (4), Audio (3), Game Engines (3), Streaming (3), UI/UX (2), Networking (1), and Platforms (1).

The AI & Machine Learning work covers adaptive assistance systems, real-time asset generation, gesture calibration for VR and AR experiences, automated broadcasting with dynamic camera angles, voice chat adjustments based on gameplay context, and LLM-powered virtual teammates that provide strategic advice through character voices. Graphics and Cloud Gaming patents address ray-tracing optimizations, predictive frame generation during network interruptions, and edge compute proxies at 5G base stations, while Hardware filings describe arcade-style controllers with swappable joystick gates and angled button pads. Additional filings span texture streaming optimization in game engines, 3D conversion of gameplay streams, audio prioritization systems, and instant DVR rewind accessible via controller.

Technology Themes

The 19 AI and machine learning patents span player assistance, content generation, and personalization systems. One patent describes an adaptive assistance system that uses haptic feedback to guide struggling players through controller vibrations rather than lowering difficulty or displaying tutorials, tracking achievements separately for unassisted play. Another monitors player engagement to signal through wearable lights whether someone is available for conversation, predicting social receptiveness based on gameplay telemetry.

Several patents address content creation, including a system that generates personalized 3D game items in under 2 minutes using player data and neural rendering, and another that auto-generates transitional video between gameplay moments to create seamless highlight reels without manual editing. Multiple filings describe LLM-powered assistants that provide strategic advice, health monitoring, and tactical suggestions through in-game character voices, with one enabling voice commands for team coordination and another letting players ask characters for story recaps and navigation guidance. Two patents create personalized podcasts narrated by favorite video game characters, delivering gaming news and recommendations tailored to individual player profiles.

Other applications include analyzing player interactions with NPCs to dynamically personalize dialogue responses, eye-tracking that suggests chat between players based on gaze focus and gameplay context, automatic narration generation for accessibility that inserts audio descriptions during dialogue gaps, and customizing trophy appearances based on player behavior and preferences. One patent generates 3D overlays highlighting popular paths and strategies from aggregated player behavior, while another predicts player questions by analyzing gameplay video frames and proactively generates answers without waiting for manual searches.
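The adaptive-assistance idea above can be sketched in a few lines: repeated failures trigger escalating haptic guidance instead of a difficulty drop, and assisted runs are flagged so achievements can be tracked separately from unassisted play. This is a minimal illustration, not the patented method; all names and thresholds are assumptions.

```python
class HapticAssist:
    """Toy model of haptic guidance replacing difficulty scaling."""

    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.assist_active = False

    def record_attempt(self, succeeded):
        if succeeded:
            self.failures = 0
            self.assist_active = False
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.assist_active = True

    def vibration_strength(self, distance_to_goal):
        # Stronger pulses the further the player strays from the goal.
        if not self.assist_active:
            return 0.0
        return min(1.0, distance_to_goal / 10.0)

    def achievement_bucket(self):
        # Achievements earned with assistance are recorded separately.
        return "assisted" if self.assist_active else "unassisted"
```

A third failed attempt would activate assistance and route subsequent achievements into the separate "assisted" bucket.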

Four VR and AR patents address calibration, social presence, and privacy challenges. One system learns from player behavior during gameplay to continuously improve hand tracking accuracy without requiring players to exit for separate calibration routines, analyzing intended object interactions and game outcomes to refine gesture recognition models in real-time. Another reconstructs players' hidden faces during multiplayer sessions, using recurrent neural networks to predict facial features obscured by headsets and enable natural social interaction. A third patent detects motion sickness before symptoms manifest by monitoring postural stability changes through HMD sensors, warning players and automatically adjusting display settings based on predictive ML analysis. The fourth provides privacy protection by filtering voice-caused components from HMD motion sensor data while preserving head tracking functionality, giving users and platform holders tunable control over what information the sensors capture.
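The postural-stability approach could work along these lines: rising variance in lateral head position from HMD samples is treated as an early warning before symptoms are reported. This sketch is illustrative, not the patented predictive model; the variance metric and ratio threshold are assumptions.

```python
from statistics import pvariance

def sway_score(head_x_samples):
    """Variance of lateral head position over a short window."""
    return pvariance(head_x_samples)

def should_adjust_display(recent, baseline, ratio=2.0):
    # Warn and soften display settings when sway exceeds the player's
    # calibrated baseline by a tunable ratio (an assumed heuristic).
    return sway_score(recent) > ratio * sway_score(baseline)
```

In practice the baseline window would be captured early in a comfortable session, and the comparison re-run continuously as new sensor samples arrive.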

Two UI and UX patents streamline player interactions with games and characters. One enables smooth handoffs of game control between players by combining visual controller highlights, haptic pulses, and audio cues with adaptive gameplay pacing like slow-motion or looping segments, helping players assume control mid-action without abrupt disruptions. The other automatically detects which virtual character a player is looking at by combining gaze direction analysis with proximity calculations, eliminating accidental interactions when multiple NPCs are crowded together.
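The gaze-plus-proximity combination might be scored as follows: alignment between the gaze ray and the direction to each character, discounted by distance so far-away NPCs lose out when several are in view. A hypothetical sketch; the scoring function, range cutoff, and names are assumptions.

```python
import math

def pick_target(gaze_dir, player_pos, npcs, max_range=20.0):
    """Return the id of the NPC best matching the gaze ray.

    gaze_dir is assumed to be a unit 2D vector; npcs maps id -> (x, y).
    """
    best_id, best_score = None, 0.0
    for npc_id, npc_pos in npcs.items():
        dx = npc_pos[0] - player_pos[0]
        dy = npc_pos[1] - player_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue
        # Cosine of the angle between gaze and direction to the NPC.
        align = (gaze_dir[0] * dx + gaze_dir[1] * dy) / dist
        # Proximity discount deprioritizes distant characters.
        score = align * (1.0 - dist / max_range)
        if score > best_score:
            best_id, best_score = npc_id, score
    return best_id
```

With two NPCs at similar angles, the nearer one wins; an NPC behind the player scores negatively and is never selected.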

The single Networking patent describes a hybrid multiplayer architecture that dynamically provisions game servers from player consoles rather than relying solely on centralized developer servers or manual community-run alternatives. The system intelligently allocates peer-hosted servers from the player pool itself without requiring technical expertise or player intervention, addressing capacity and reliability issues in online gaming infrastructure.
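Provisioning a host from the player pool reduces, at its simplest, to scoring each console on measured capacity and picking the best automatically, with no player action required. This sketch invents its own metrics and weights purely for illustration; the patent's actual allocation logic is not described here.

```python
def pick_host(consoles):
    """consoles: dict of id -> (uplink_mbps, avg_ping_ms, uptime_frac).

    Returns the id of the best peer-host candidate, or None if empty.
    """
    def score(metrics):
        uplink, ping, uptime = metrics
        # Favor bandwidth and reliability; penalize latency.
        return uplink * uptime - ping * 0.1

    if not consoles:
        return None
    return max(consoles, key=lambda cid: score(consoles[cid]))
```

A real system would also need to re-elect a host on disconnect and migrate session state, which is where the reliability claims of the patent would come in.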

Three Audio patents manage voice chat and sound prioritization during gameplay. One dynamically adjusts voice chat between players based on their in-game relationships, locations, and gameplay context, potentially muffling enemy team communications or applying distance-based audio degradation to simulate realistic communication limitations. Another uses AI to prioritize audio streams by learning which game moments and sources are important, automatically adjusting competing audio to prevent boss defeats or teammate callouts from being drowned out. The third employs machine learning to let developers fine-tune character voice pitch on a phoneme-by-phoneme basis in text-to-speech systems, enabling nuanced emotional delivery in dynamic dialogue without re-recording actors; separating pitch prediction from pitch application allows the pitch to be modified after prediction.
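The context-aware voice mixing described first can be pictured as a per-channel gain function: voices decay with in-game distance, and enemy channels are additionally muffled. A toy sketch; the linear falloff curve, range, and gain values are all assumptions.

```python
def voice_gain(distance, same_team, max_range=50.0, enemy_gain=0.2):
    """Return a 0..1 gain for another player's voice channel."""
    if distance >= max_range:
        return 0.0                      # out of earshot entirely
    gain = 1.0 - distance / max_range   # linear distance falloff
    if not same_team:
        gain *= enemy_gain              # muffle enemy communications
    return round(gain, 3)
```

The mixer would evaluate this per frame as players move, so an enemy closing in becomes gradually, faintly audible rather than snapping on.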

Five Hardware patents focus on arcade-style controllers designed for retro and fighting games. One features an angled button pad set at 1-60 degree oblique angles to provide arcade-authentic ergonomics on modern console systems. Two patents describe joystick mechanisms, one using swappable gates stored within the controller base that allow players to physically change movement patterns without tools, and another using microswitches for digital directional input with a mechanical coupling system attaching an actuator to the joystick shaft. A fourth patent implements analog control through magnetic sensors instead of traditional switches or potentiometers, combining arcade-authentic lever design with precise analog capability. The fifth adds lockable control buttons that preserve gameplay inputs while disabling system functions, preventing accidental home button or menu presses from interrupting competitive matches.

Five Graphics patents optimize rendering performance and visual quality. One reduces loading times by performing lightweight partial renders to identify which textures are actually needed before streaming data, requesting only visible textures and appropriate mipmap levels rather than entire texture maps. Another maintains smooth 60fps performance by dynamically adjusting AI upscaling precision during graphically intense scenes, switching between pre-cached neural networks of varying precision levels instead of dropping resolution or frame rates. A third uses Gaussian splatting to create 3D space from game metadata, enabling realistic overlay of user annotations like coaching markup with proper depth ordering and occlusion. The fourth cuts ray-tracing costs by distributing wavelength sampling across neighboring pixels and successive frames rather than tracing each ray at multiple wavelengths per pixel, intelligently selecting wavelengths based on material properties and lighting. The final patent integrates neural radiance field approaches into traditional fragment shaders, letting neural networks focus solely on surface details while meshes handle geometry and motion for photorealistic rendering at real-time speeds.
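The texture-streaming idea in the first patent hinges on a simple relationship: a cheap partial render reports each texture's on-screen footprint, and only the mipmap level that footprint requires is requested. The sketch below assumes power-of-two mip chains and square footprints; function names are hypothetical.

```python
import math

def required_mip(texture_size_px, screen_footprint_px):
    """Mip 0 is full resolution; each level halves the texture."""
    if screen_footprint_px <= 0:
        return None                      # not visible: skip streaming
    level = math.log2(texture_size_px / screen_footprint_px)
    return max(0, int(level))

def plan_requests(visible):
    # visible: dict of texture name -> (size_px, footprint_px).
    # Only visible textures generate streaming requests.
    return {name: required_mip(size, fp)
            for name, (size, fp) in visible.items()
            if fp > 0}
```

A 2048-pixel texture that covers only 256 pixels on screen needs mip level 3 (an eighth of full resolution), so seven-eighths of the bandwidth for that texture is never spent.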

Three game engine patents address development workflows and gameplay interruptions. One creates a "soft pause" system that reduces gameplay intensity instead of completely stopping during interruptions, enabling players to multitask during communications while staying in multiplayer sessions. Another provides an AI-powered development environment where developers collaborate with LLMs through a multi-panel interface managing virtual files, chat, and live testing, making the AI multi-file-aware with automatic generation and editing of multiple code files while maintaining visual version history. The third uses LLMs to automatically generate cross-referenced game schemas that maintain relationships between game objects, then creates executable code that correctly instantiates these objects with their dependencies intact.
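The "soft pause" concept can be reduced to a state change: an interruption scales down intensity (time scale, enemy spawns) rather than halting the session, so a multiplayer match keeps running. A minimal sketch with illustrative values; the patent's actual intensity controls are not specified here.

```python
def soft_pause_state(interrupted):
    """Return assumed engine settings for normal vs. soft-paused play."""
    if interrupted:
        # Slow time and stop spawns, but never leave the session.
        return {"time_scale": 0.3, "spawns_enabled": False,
                "session_active": True}
    return {"time_scale": 1.0, "spawns_enabled": True,
            "session_active": True}
```

The key invariant is that `session_active` stays true in both states, which is what lets a player answer a call without dropping out of multiplayer.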

Three Streaming patents enable enhanced viewing and teaching experiences. One extracts game metadata to insert user annotations into volumetric 3D game video, automatically adapting drawings, paths, and messages to match game properties like speed, topology, and spatial relationships. Another makes 2D user annotations behave as 3D objects within game footage using Gaussian splatting, automatically adjusting speed, position, and visibility based on game physics and scene depth rather than requiring manual frame-by-frame animation. The third converts 2D video game streams into 3D experiences for spectators, using AI-driven reconstruction to let viewers watch gameplay from dynamic camera angles and control their viewing perspective independently of the player's camera.

Four cloud gaming patents address latency, bandwidth, and connection reliability. One predicts player actions to pre-load game content before it's needed, combining gameplay state analysis with bandwidth-aware quality adjustment to stream only necessary assets at optimal levels. Another deploys buffering servers at 5G base stations that intelligently manage packet retransmission with frame-level awareness, abandoning outdated frame retransmissions once their display deadline passes and pre-deploying proxies at adjacent cell towers before handoff occurs. A third generates predicted frames locally when network packets drop, proactively creating likely next frames before they're needed to maintain continuity during disruptions without waiting for server retransmission. The fourth enables split-screen multiplayer where one player's game runs locally while another streams from cloud or remote console, allowing resource-constrained devices to host multiplayer sessions through intelligent latency compensation and adaptive quality settings.
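The frame-level retransmission logic at the edge proxy comes down to one check: a lost packet is worth re-sending only if its frame can still arrive before its display deadline. A sketch under assumed timing fields; the real proxy would track per-packet sequence numbers and measured RTT.

```python
def should_retransmit(now_ms, display_deadline_ms, rtt_ms):
    """Abandon retransmission once the frame can no longer make it."""
    return now_ms + rtt_ms < display_deadline_ms

def filter_retransmit_queue(queue, now_ms, rtt_ms):
    # queue: list of (frame_id, display_deadline_ms); drop stale frames
    # so bandwidth goes to frames the player will actually see.
    return [(fid, dl) for fid, dl in queue
            if should_retransmit(now_ms, dl, rtt_ms)]
```

Dropping stale frames is what distinguishes this from generic reliable transport: a late video frame is worthless, so retransmitting it only delays the frames that still matter.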

The single Platforms patent provides instant gameplay DVR rewind accessible via controller button, letting players review recent gameplay without exiting the game to recover missed NPC dialogue, quest details, or forgotten objectives. The system treats gameplay history like an always-recording DVR buffer, instantly accessible through a universal button that brings up a rewind UI overlay during active play.
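The always-recording buffer maps naturally onto a fixed-size ring of recent frames that a single button press exposes instantly. A minimal sketch, assuming a frame-granularity buffer; capacity and names are illustrative, not from the patent.

```python
from collections import deque

class GameplayDVR:
    """Fixed-size ring buffer of recent gameplay frames."""

    def __init__(self, capacity_frames=3600):   # e.g. ~60 s at 60 fps
        self.buffer = deque(maxlen=capacity_frames)

    def record(self, frame):
        self.buffer.append(frame)               # oldest frame drops off

    def rewind(self, frames_back):
        """Return the frame N steps in the past, clamped to the buffer."""
        if not self.buffer:
            return None
        idx = max(0, len(self.buffer) - 1 - frames_back)
        return self.buffer[idx]
```

Because `deque(maxlen=...)` evicts the oldest entry on every append, recording costs O(1) per frame and the rewind button only ever indexes into memory already captured.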
