Sony was granted 27 patents this quarter across 8 categories: AI & Machine Learning (16), Hardware (3), VR & AR (2), Cloud Gaming (2), Graphics (1), Esports (1), Game Engines (1), and Audio (1).
The AI & Machine Learning patents cover systems for player assistance, avatar generation, gesture recognition, NPC dialogue customization, matchmaking, spectator cameras, content recommendations, and accessibility features like sign language translation and audio descriptions. Hardware patents describe customizable controller designs with optical touch sensors, conductive ink substrates, and automatic drift calibration. Cloud Gaming and VR & AR patents address bandwidth optimization through priority-based encoding and tracking improvements that switch between camera and IMU sensors, while Audio and Game Engines patents detail dynamic music generation and vocal separation during dialogue.
The 16 AI & Machine Learning patents tackle a wide range of player experience challenges, from learning curve reduction to accessibility. Sony's crowd-sourced gameplay help system aggregates data across all players to identify which weapons, abilities, or tactics succeed most often against specific challenges, then personalizes those recommendations based on a player's current inventory and difficulty settings. A separate recommendation engine operates similarly, suggesting optimal items and upgrades by analyzing both individual behavior and collective patterns from successful players across multiple titles.

For matchmaking, one patent predicts how long each player's session will likely last based on their history and current context, helping teams form around compatible time commitments. Another system analyzes how players naturally interact with NPCs across sessions, then customizes dialogue and behavior based on those learned patterns rather than relying on branching choice trees. Sony also addresses communication barriers with a gesture recognition system that translates sign language into context-aware chat messages, adjusting tone and content to the current gameplay situation rather than providing literal translations.

Avatar creation gets streamlined through multiple approaches: one transforms selfies into game-ready characters that automatically match each title's art style; another uses simple video capture of body movements, combined with natural language descriptions, to generate customized characters with personalized animations. Two patents enhance spectator experiences: one adjusts game difficulty and creates highlight bookmarks based on player biometrics like heart rate and stress levels; another creates an AI player that modifies its strategy in real time based on viewer reactions, maximizing entertainment rather than simply winning.
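The patent text does not disclose implementation details, but the core idea of the crowd-sourced help system can be sketched in a few lines: aggregate success rates per tactic across all players, then filter by what the individual player actually has available. All function names and data below are hypothetical illustrations, not Sony's implementation.

```python
from collections import defaultdict

def rank_tactics(attempt_log, challenge, inventory):
    """Rank tactics for a challenge by crowd-wide success rate,
    keeping only tactics the player can actually use."""
    wins = defaultdict(int)
    tries = defaultdict(int)
    for record in attempt_log:
        if record["challenge"] != challenge:
            continue
        tries[record["tactic"]] += 1
        if record["success"]:
            wins[record["tactic"]] += 1
    rates = {
        tactic: wins[tactic] / tries[tactic]
        for tactic in tries
        if tactic in inventory  # personalize: only usable tactics
    }
    return sorted(rates, key=rates.get, reverse=True)

# Hypothetical aggregated gameplay data
log = [
    {"challenge": "boss_1", "tactic": "fire_bomb", "success": True},
    {"challenge": "boss_1", "tactic": "fire_bomb", "success": True},
    {"challenge": "boss_1", "tactic": "ice_arrow", "success": False},
    {"challenge": "boss_1", "tactic": "ice_arrow", "success": True},
    {"challenge": "boss_2", "tactic": "fire_bomb", "success": False},
]

print(rank_tactics(log, "boss_1", {"fire_bomb", "ice_arrow"}))
# → ['fire_bomb', 'ice_arrow']
```

The inventory filter is what distinguishes this from a plain popularity ranking: a tactic that works for most players is never surfaced to someone who cannot execute it.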
A voice control system detects multiple players simultaneously through spatial audio processing, attributing commands to specific people without manual switching, while another patent creates context-aware input windows that only listen for gestures and voice commands when the game state suggests they're intentional. For legacy games lacking modern APIs, one system generates structured metadata by analyzing raw video frames, audio, and controller inputs through machine learning models, enabling features like activity cards without developer participation. Music generation happens dynamically based on player interactions and environmental sensors, learning preferences during active play rather than looping pre-recorded tracks.

A messaging system filters and rewrites in-game communications based on each recipient's current gameplay context, preserving intent while ensuring relevance. Marketing teams benefit from an AI tool that predicts audience engagement for trailers and promotional videos before release, enabling optimization during development. Finally, an automated narration system generates audio descriptions for accessibility by detecting gaps in dialogue, then using language models and voice synthesis to create insertions that match the emotion, volume, and timing of the surrounding content.
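The gap-detection step of that narration system is the most mechanical part and easy to illustrate: walk the dialogue timeline and collect silent windows long enough to hold a synthesized description. This is a minimal sketch under assumed inputs (timestamped dialogue spans in seconds), not the patented method itself.

```python
def find_narration_gaps(dialogue_spans, total_length, min_gap=2.0):
    """Find silent windows between dialogue spans long enough
    to fit an inserted audio description."""
    gaps = []
    cursor = 0.0
    for start, end in sorted(dialogue_spans):
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    # Trailing silence after the last dialogue span
    if total_length - cursor >= min_gap:
        gaps.append((cursor, total_length))
    return gaps

# Hypothetical dialogue timeline (seconds)
spans = [(1.0, 4.0), (4.5, 6.0), (10.0, 12.0)]
print(find_narration_gaps(spans, 15.0))
# → [(6.0, 10.0), (12.0, 15.0)]
```

Matching the emotion, volume, and timing of surrounding content, as the patent describes, would then be handled downstream by the language model and voice synthesis stages.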
Three Hardware patents reimagine controller design through new sensing technologies and calibration methods. Sony's optical touch sensor approach detects input before physical contact occurs, enabling pre-touch responses while allowing a single flat surface to function as multiple control types like d-pads, joysticks, or buttons through software configuration that adapts to individual hand sizes. A related controller design uses conductive ink substrate technology to let players literally draw their own button layouts, with programmable anti-fatigue keys that remap controls to reduce repetitive stress and accommodate accessibility needs. To address the persistent problem of analog stick drift, Sony patents an automatic calibration system that detects idle periods and dynamically adjusts dead-zone geometry at the operating system level, compensating for hardware wear without manual intervention and creating non-circular dead-zones that minimize sensitivity loss.
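A non-circular dead-zone is simpler than it sounds: instead of zeroing everything inside a fixed circle, each axis gets its own bounds fitted to readings sampled while the stick is known to be idle. The sketch below illustrates that idea with a rectangular zone; all names and sample values are hypothetical, and the actual patent operates at the OS level rather than in game code.

```python
def calibrate_deadzone(idle_samples, margin=0.02):
    """Fit per-axis dead-zone bounds from stick readings captured
    during a detected idle period. The result is a rectangular
    (non-circular) zone centered on the drifted rest position, so
    sensitivity is lost only where the drift actually occurs."""
    xs = [x for x, _ in idle_samples]
    ys = [y for _, y in idle_samples]
    return {
        "x": (min(xs) - margin, max(xs) + margin),
        "y": (min(ys) - margin, max(ys) + margin),
    }

def apply_deadzone(raw, zone):
    """Zero out each axis independently when it falls inside the zone."""
    x, y = raw
    lo_x, hi_x = zone["x"]
    lo_y, hi_y = zone["y"]
    return (
        0.0 if lo_x <= x <= hi_x else x,
        0.0 if lo_y <= y <= hi_y else y,
    )

# Hypothetical idle readings from a stick drifting slightly rightward
idle = [(0.05, 0.00), (0.06, 0.01), (0.04, -0.01)]
zone = calibrate_deadzone(idle)
print(apply_deadzone((0.05, 0.0), zone))   # drift suppressed → (0.0, 0.0)
print(apply_deadzone((0.8, 0.0), zone))    # real input passes → (0.8, 0.0)
```

Because the zone hugs the drifted rest position instead of inflating a circle around the origin, a heavily drifted stick keeps nearly full sensitivity in the undrifted directions.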
Both VR & AR patents focus on improving the technical foundations of mixed reality gaming. Sony's approach to spatial anchoring prioritizes automatic identification of gaming consoles and nearby objects as reference points, spawning virtual content relative to these stable physical gaming setups rather than requiring arbitrary room scanning. Tracking reliability improves through a system that seamlessly switches between camera-based SLAM and IMU sensors for controllers, using reliability metrics to select the optimal method moment-to-moment and preventing the jarring visual discontinuities that occur when visual tracking fails.
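The per-frame switching logic reduces to a threshold on the reliability metric: trust the camera pose while the metric is high, and fall back to the IMU estimate when it drops (for example, when the controller leaves the camera's view). This is a deliberately simplified sketch with invented field names; the patent's actual metrics and fusion are not public.

```python
def select_pose(frames, threshold=0.6):
    """For each frame, choose the camera-based SLAM pose when its
    reliability metric clears the threshold, otherwise fall back to
    the IMU estimate, avoiding a hard jump when visual tracking
    drops out."""
    chosen = []
    for f in frames:
        if f["slam_reliability"] >= threshold:
            chosen.append(("slam", f["slam_pose"]))
        else:
            chosen.append(("imu", f["imu_pose"]))
    return chosen

# Hypothetical per-frame estimates; reliability dips mid-sequence
frames = [
    {"slam_pose": (1.0, 2.0), "imu_pose": (1.0, 2.1), "slam_reliability": 0.9},
    {"slam_pose": (0.0, 0.0), "imu_pose": (1.1, 2.2), "slam_reliability": 0.2},
    {"slam_pose": (1.2, 2.3), "imu_pose": (1.2, 2.3), "slam_reliability": 0.8},
]
print([source for source, _ in select_pose(frames)])
# → ['slam', 'imu', 'slam']
```

A production system would also blend the two estimates near the threshold rather than switching instantly, which is presumably how the "seamless" quality the patent claims is achieved.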
Sony's 2 Cloud Gaming patents both optimize bandwidth usage during streaming sessions. The priority-based encoding system allocates higher frame rates and resolution to important game objects within each frame while reducing quality for less critical background elements, treating pixels differently based on gameplay relevance rather than uniformly. When games pause or show static scenes, a separate patent uses spare bandwidth to progressively enhance image quality by streaming refinement data over a secondary channel, avoiding the synchronization issues and unpredictable spikes caused by reconfiguring the primary video codec.
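At its simplest, priority-based encoding is a budget split: each region of the frame receives bitrate in proportion to an importance score derived from gameplay relevance. The region names and weights below are hypothetical, and real encoders allocate per-macroblock rather than per-object.

```python
def allocate_bitrate(regions, total_kbps):
    """Split a frame's bitrate budget across regions in proportion
    to their gameplay importance, instead of encoding uniformly."""
    total_weight = sum(r["importance"] for r in regions)
    return {
        r["name"]: round(total_kbps * r["importance"] / total_weight)
        for r in regions
    }

# Hypothetical frame regions scored by gameplay relevance
regions = [
    {"name": "player",     "importance": 5},
    {"name": "enemies",    "importance": 3},
    {"name": "background", "importance": 2},
]
print(allocate_bitrate(regions, 8000))
# → {'player': 4000, 'enemies': 2400, 'background': 1600}
```

Under a uniform scheme each region would get roughly 2,667 kbps; the weighted split moves half the budget to the player character, where compression artifacts are most noticeable.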
The single Graphics patent creates visible character progression by automatically morphing avatar meshes based on repeated player actions. Instead of requiring manual customization, the system applies progressive deformations tied to gameplay behavior frequency, making stat changes like increased strength or agility visually obvious to other players in multiplayer environments through gradual mesh modifications that can be applied or reversed within time windows.
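The core of that Graphics patent, growing a deformation with repeated behavior and letting it reverse when the behavior stops, can be modeled as a single weight that a renderer would feed into a blend shape. The class, rates, and tick-based timing below are illustrative assumptions, not the patent's mechanism.

```python
class ProgressionMorph:
    """Track a repeated player action and expose a morph weight in
    [0, 1] that grows with use and decays when the behavior stops,
    so the mesh deformation can also be reversed over time."""

    def __init__(self, gain=0.05, decay=0.01, cap=1.0):
        self.gain, self.decay, self.cap = gain, decay, cap
        self.weight = 0.0

    def tick(self, action_performed):
        if action_performed:
            self.weight = round(min(self.cap, self.weight + self.gain), 4)
        else:
            self.weight = round(max(0.0, self.weight - self.decay), 4)
        return self.weight

morph = ProgressionMorph()
for _ in range(10):          # player repeats a strength action 10 times
    morph.tick(True)
print(morph.weight)          # → 0.5
for _ in range(20):          # behavior stops; morph gradually reverses
    morph.tick(False)
print(morph.weight)          # → 0.3
```

Asymmetric gain and decay rates give the effect the patent describes: progression is visible quickly, but reversal happens gradually within a longer time window.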
Esports broadcasting gets simplified through an AI-powered spectator camera system that automatically generates optimal viewing angles by analyzing real-time game data and player actions. Rather than requiring dedicated broadcast directors to manually control cameras during competitive events, the system extracts gameplay parameters like positions and actions, then algorithmically determines the best perspectives for viewers.
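The camera-selection step reduces to scoring each candidate angle against the extracted gameplay parameters and picking the best. A toy version, assuming 2D positions and circular fields of view (both invented for illustration):

```python
def pick_camera(cameras, action_points):
    """Score each candidate camera by how many current action points
    fall inside its field of view, and return the best one."""
    def in_view(cam, point):
        dx = point[0] - cam["x"]
        dy = point[1] - cam["y"]
        return dx * dx + dy * dy <= cam["range"] ** 2

    scores = {
        cam["name"]: sum(in_view(cam, p) for p in action_points)
        for cam in cameras
    }
    return max(scores, key=scores.get)

# Hypothetical camera placements and current player positions
cams = [
    {"name": "overhead", "x": 0, "y": 0, "range": 5},
    {"name": "sideline", "x": 10, "y": 0, "range": 3},
]
fight = [(1, 1), (2, 0), (9, 1)]
print(pick_camera(cams, fight))
# → 'overhead'
```

A broadcast-grade system would weight the score by action intensity, shot composition, and cut frequency, but the structure, extract parameters, score candidates, select, matches what the patent describes.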
The game engine patent addresses both quality assurance and content creation through automated gameplay capture at scale. Sony's server-driven system analyzes footage from multiple players to identify bugs and difficult sections requiring developer fixes post-launch, while simultaneously detecting extraordinary performances and generating shareable highlight reels without manual intervention.
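One way to detect "extraordinary performances" across footage from many players is a simple outlier test on per-session metrics: flag sessions whose score sits unusually far above the population mean. The threshold, metric, and data below are hypothetical stand-ins for whatever signals Sony's server-side analysis actually uses.

```python
def flag_highlights(sessions, z=1.5):
    """Flag sessions whose score is unusually far above the mean
    (z-score outliers) as candidates for auto-generated highlight
    reels."""
    scores = [s["score"] for s in sessions]
    mean = sum(scores) / len(scores)
    variance = sum((x - mean) ** 2 for x in scores) / len(scores)
    std = variance ** 0.5 or 1.0  # guard against all-identical scores
    return [s["player"] for s in sessions if (s["score"] - mean) / std >= z]

# Hypothetical per-session performance scores
sessions = [
    {"player": "p1", "score": 10},
    {"player": "p2", "score": 12},
    {"player": "p3", "score": 11},
    {"player": "ace", "score": 50},
]
print(flag_highlights(sessions))
# → ['ace']
```

The same aggregated footage can drive the patent's other use case in reverse: sections where scores cluster unusually low across many players are candidates for bug fixes or difficulty tuning.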
Sony's Audio patent tackles the common problem of overlapping vocals during gameplay by using AI-driven source separation to detect when in-game dialogue or voice chat will clash with music vocals. The system automatically isolates and adjusts specific frequency ranges or vocal tracks in real-time, maintaining audio clarity without requiring pre-processing of music assets or manual mixing by developers.
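Once source separation has split the music into stems, the decision logic is straightforward ducking: attenuate the music-vocal stem only on frames where it would overlap active speech. The per-frame flags and gain values below are illustrative assumptions; the patent's actual separation and mixing are far more involved.

```python
def duck_schedule(frames, duck_db=-12.0):
    """For each audio frame, attenuate the separated music-vocal stem
    (in dB) only when it would clash with active dialogue or voice
    chat; instrumental passages are left untouched."""
    return [
        duck_db if f["dialogue"] and f["music_vocals"] else 0.0
        for f in frames
    ]

# Hypothetical per-frame flags from a source-separation model
frames = [
    {"dialogue": False, "music_vocals": True},   # music alone → leave
    {"dialogue": True,  "music_vocals": True},   # clash → duck
    {"dialogue": True,  "music_vocals": False},  # instrumental → leave
]
print(duck_schedule(frames))
# → [0.0, -12.0, 0.0]
```

Because only the separated vocal stem is ducked, the music's instrumental bed continues at full level under dialogue, which is what distinguishes this from the blunt whole-track ducking games ship with today.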