
March 2026

Sony

Filed patents: 14

Overview

Sony filed 14 patents across 7 categories: AI & Machine Learning (5), Audio (2), Cloud Gaming (2), Graphics (2), VR & AR (1), Streaming (1), and UI/UX (1).

The AI & Machine Learning patents cover voice pitch control for text-to-speech dialogue, personalized audio adjustments, spectral rendering optimization, trophy customization, and context-aware gameplay assistance. Audio filings describe dynamic prioritization systems and personalized sound modification, while Cloud Gaming patents address predictive frame generation and hybrid local-cloud multiplayer. Graphics patents detail spectral rendering techniques and neural network-enhanced 3D object creation, VR & AR filings protect motion sensor privacy filters, Streaming patents convert 2D gameplay to 3D viewing experiences, and UI/UX patents automate NPC interaction detection.

Technology Themes

The 5 AI and machine learning patents span character voice synthesis, audio personalization, visual rendering, achievement systems, and gameplay assistance. Two filings refine text-to-speech systems for game dialogue: one separates pitch prediction from pitch application, letting developers adjust voice tone at the phoneme level through a neural network that calculates initial pitch values but allows manual override. The second uses quantized bins and convolutional networks to represent pitch as discrete categories, enabling more granular control than continuous manipulation. An audio personalization system learns individual player preferences to modify, mute, or replace sounds during multiplayer sessions, adapting in real time through game engine integration. A trophy customization system applies AI-driven personalization to achievement presentations, generating variants that reflect how players earned rewards or match their playstyle. A context-aware assistant analyzes gameplay video frames to predict what players might want to know, then proactively generates answers without waiting for explicit questions.
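The pitch-quantization idea above can be sketched in a few lines. This is a minimal illustration, not the filing's method: it assumes a log-scale mapping of fundamental frequency (F0, in Hz) into discrete bins, with bin count, frequency range, and the `apply_overrides` helper all hypothetical.

```python
import math

# Illustrative constants, not from the patent: a log-scale bin grid.
F0_MIN, F0_MAX, NUM_BINS = 80.0, 600.0, 256

def f0_to_bin(f0_hz: float) -> int:
    """Map a continuous F0 value to a discrete pitch bin (log scale)."""
    f0 = min(max(f0_hz, F0_MIN), F0_MAX)
    frac = math.log(f0 / F0_MIN) / math.log(F0_MAX / F0_MIN)
    return min(int(frac * NUM_BINS), NUM_BINS - 1)

def bin_to_f0(b: int) -> float:
    """Recover the bin-center F0 for synthesis."""
    frac = (b + 0.5) / NUM_BINS
    return F0_MIN * (F0_MAX / F0_MIN) ** frac

def apply_overrides(predicted_bins, overrides):
    """Let a developer replace predicted bins at given phoneme indices,
    standing in for the manual override the first filing describes."""
    return [overrides.get(i, b) for i, b in enumerate(predicted_bins)]

# A network would predict one bin per phoneme; here the values are made up.
predicted = [f0_to_bin(f) for f in (120.0, 180.0, 150.0, 110.0)]
adjusted = apply_overrides(predicted, {1: f0_to_bin(240.0)})  # raise phoneme 1
```

Discrete bins make per-phoneme editing a simple index swap, which is the granularity advantage the summary attributes to the second filing.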

Audio patents focus on managing competing sound sources during gameplay. One system uses AI to identify which game moments and audio sources matter most, such as boss defeats or teammate callouts, then automatically adjusts external audio streams to prevent conflicts while maintaining immersion. The second filing applies machine learning to create personalized audio experiences within multiplayer games, learning player preferences and modifying sounds to reduce fatigue while respecting developer-protected audio elements.

Two cloud gaming patents tackle network reliability and resource constraints. Predictive frame generation creates likely next frames on the client device when network packets drop, maintaining visual continuity without waiting for server retransmission. A hybrid architecture enables split-screen multiplayer where one player's game runs locally while another streams from a cloud server or remote console, allowing resource-constrained devices to host multiplayer sessions through intelligent latency compensation.
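The client-side fallback logic of predictive frame generation can be sketched as follows. This is a hedged illustration: the patent presumably uses a learned generator, while here plain linear extrapolation from the last two frames stands in for it, and frames are reduced to flat lists of pixel values.

```python
def next_frame(received, history):
    """Return the frame to display: the network frame if it arrived,
    otherwise a client-side prediction from recent history."""
    if received is not None:
        history.append(received)
        return received
    if len(history) >= 2:
        # Linear extrapolation as a stand-in for a neural frame predictor.
        predicted = [2 * a - b for a, b in zip(history[-1], history[-2])]
    elif history:
        predicted = list(history[-1])  # freeze on the last known frame
    else:
        predicted = [0]  # nothing known yet
    history.append(predicted)
    return predicted
```

Either way the display never stalls on retransmission, which is the continuity property the summary highlights.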

Graphics filings address photorealistic rendering performance. A spectral rendering approach distributes wavelength sampling across neighboring pixels and successive frames rather than tracing each ray at multiple wavelengths per pixel, reducing computational workload while maintaining visual quality through intelligent wavelength selection based on material properties. Another filing integrates neural radiance field techniques into traditional fragment shaders, letting the neural network focus on surface details while the mesh handles geometry and motion for real-time photorealistic rendering.
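The amortized wavelength sampling in the first graphics filing can be illustrated with a toy assignment scheme. Everything here is an assumption for illustration: the five-wavelength set and the rotating modular offset are stand-ins for the patent's material-aware wavelength selection.

```python
# Illustrative sample wavelengths (nm); a real renderer would pick these
# based on material properties, per the filing.
WAVELENGTHS_NM = [450, 500, 550, 600, 650]

def wavelength_for(x: int, y: int, frame: int) -> int:
    """Trace one wavelength per pixel per frame, rotating the choice
    across neighboring pixels and successive frames so the full spectrum
    is covered over space and time instead of per ray."""
    idx = (x + y + frame) % len(WAVELENGTHS_NM)
    return WAVELENGTHS_NM[idx]
```

Each pixel thus traces 1/5 of the spectral work per frame; neighbors and later frames fill in the remaining wavelengths, which is the workload reduction the summary describes.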

The single VR and AR patent addresses privacy concerns in head-mounted displays. A filtering system removes voice-caused components from motion sensor data, preventing speech extraction while preserving head tracking functionality for immersive experiences. The tunable, application-aware filter lets users and platform holders control privacy levels without compromising gameplay.
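The intuition behind such a filter is that head motion lives at low frequencies while voice-induced vibration sits higher. Under that assumption, a one-pole smoother with a tunable strength can stand in for the patent's application-aware filter; this is a minimal sketch, not the actual design.

```python
def privacy_filter(samples, strength=0.9):
    """Low-pass a stream of motion sensor samples. Higher `strength`
    (the hypothetical tunable privacy level) removes more high-frequency,
    voice-band content while slow head movement passes through."""
    out, state = [], samples[0]
    for s in samples:
        state = strength * state + (1.0 - strength) * s
        out.append(state)
    return out
```

A constant (head held still) passes unchanged, while a rapidly alternating vibration is strongly attenuated, matching the "remove speech, keep tracking" goal.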

A Streaming patent converts 2D video game footage into 3D representations for spectators. The AI-driven reconstruction enables viewers to control their viewing perspective independently of the player's camera, creating dynamic, explorable experiences from flat video input during live gameplay broadcasts.

One UI and UX patent simplifies character interactions in virtual environments. The system combines gaze direction analysis with proximity calculations to determine which non-player character a player intends to engage, automatically detecting interaction intent to reduce accidental selections in crowded spaces.
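The gaze-plus-proximity combination above can be sketched as a weighted score. The weights, distance cap, and selection threshold below are hypothetical; the patent would tune or learn these.

```python
import math

def intent_score(player_pos, gaze_dir, npc_pos,
                 w_gaze=0.7, w_prox=0.3, max_dist=10.0):
    """Blend gaze alignment (dot product with a unit view direction)
    and closeness into one interaction-intent score in [0, 1]."""
    dx = [n - p for n, p in zip(npc_pos, player_pos)]
    dist = math.hypot(*dx)
    if dist == 0 or dist > max_dist:
        return 0.0
    to_npc = [d / dist for d in dx]
    alignment = max(0.0, sum(g * t for g, t in zip(gaze_dir, to_npc)))
    proximity = 1.0 - dist / max_dist
    return w_gaze * alignment + w_prox * proximity

def select_npc(player_pos, gaze_dir, npcs, threshold=0.5):
    """Pick the NPC the player most likely intends to engage, or None
    if no candidate clears the threshold (avoiding accidental selection)."""
    best = max(npcs, key=lambda n: intent_score(player_pos, gaze_dir, n["pos"]))
    if intent_score(player_pos, gaze_dir, best["pos"]) >= threshold:
        return best["name"]
    return None
```

The threshold is what reduces accidental selections in crowded spaces: a nearby NPC the player is not looking at scores low and is never chosen.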
