Zelili AI

HY-World 1.5

Tencent’s Real-Time Interactive World Model – Long-Horizon Streaming Video at 24 FPS with Geometric Consistency
Tool Release Date

17 Dec 2025

Tool Users
N/A

About This AI

HY-World 1.5 (also known as WorldPlay) is Tencent Hunyuan’s advanced open-source interactive world model released on December 17, 2025.

It enables real-time generation of long-horizon streaming video conditioned on user keyboard and mouse inputs, achieving 24 FPS with superior long-term geometric consistency.

The model bridges the speed-memory trade-off through four key innovations: Dual Action Representation, which pairs discrete key presses with continuous camera poses for robust control; Reconstituted Context Memory, which uses temporal reframing to retain distant geometric information; WorldCompass RL post-training, which improves action-following and visual quality; and Context Forcing distillation, which enables real-time inference while preserving memory.

It supports first-person and third-person perspectives in real-world and stylized environments, with strong generalization across diverse scenes.

Applications include 3D reconstruction, promptable events, infinite world extension, game prototyping, embodied AI training, and VFX pre-visualization.

Built on a curated dataset of 320K videos, it is fully open-source with training framework, inference code, and model weights released on GitHub and Hugging Face.

Variants include WorldPlay-8B (high-quality) and WorldPlay-5B (lightweight for smaller GPUs).

The model runs locally with optimizations for low latency and high throughput, making it accessible for developers and researchers without cloud dependency.

Key Features

  1. Real-time streaming at 24 FPS: Generates long-horizon interactive video with low latency
  2. Long-term geometric consistency: Maintains scene coherence over extended interactions via reconstituted memory
  3. Dual Action Representation: Combines discrete keyboard inputs and continuous camera poses for precise control
  4. Reconstituted Context Memory: Dynamically rebuilds past frames with temporal reframing to reduce memory decay
  5. WorldCompass RL Framework: Reinforcement learning post-training improves action accuracy and visual quality
  6. Context Forcing Distillation: Aligns memory contexts for efficient real-time student model training
  7. Multi-perspective support: Handles first-person and third-person views in real and stylized scenes
  8. High generalization: Performs across diverse environments from photorealistic to animated styles
  9. Applications enablement: Supports 3D reconstruction, promptable events, and infinite extension
  10. Open-source pipeline: Full training, inference, and deployment code released for community use
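To make the Dual Action Representation concrete, here is a minimal sketch of how a discrete keyboard input and a continuous camera pose might be combined into a single per-frame conditioning vector. The key vocabulary, class, and function names are illustrative assumptions, not the model's actual API.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical discrete action vocabulary (WASD plus "no key")
KEYS = ["W", "A", "S", "D", "none"]

@dataclass
class FrameAction:
    """One per-frame action: a discrete key plus a continuous camera pose."""
    key: str            # discrete keyboard input
    pose: np.ndarray    # continuous 6-DoF camera pose (tx, ty, tz, roll, pitch, yaw)

def encode_action(action: FrameAction) -> np.ndarray:
    """Concatenate a one-hot key encoding with the raw pose vector,
    yielding one conditioning vector for the video generator."""
    one_hot = np.zeros(len(KEYS))
    one_hot[KEYS.index(action.key)] = 1.0
    return np.concatenate([one_hot, action.pose])

vec = encode_action(FrameAction(key="W", pose=np.zeros(6)))
print(vec.shape)  # (11,) = 5 discrete keys + 6 pose values
```

The point of the dual scheme is that neither half suffices alone: discrete keys are robust but coarse, while continuous poses are precise but noisy; conditioning on both gives the generator a redundant, complementary control signal.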

Price Plans

  1. Free ($0): Fully open-source under community license with model weights, training code, inference scripts, and technical report available on GitHub and Hugging Face; no costs or subscriptions
  2. Cloud/Hosted (custom pricing, potential): Tencent may offer hosted cloud or API access in the future; nothing has been announced yet

Pros

  1. Breaks speed-consistency trade-off: Achieves real-time performance with long-horizon stability
  2. Fully open-source: Comprehensive framework including training and inference for customization
  3. Strong generalization: Works across real-world, stylized, first/third-person scenarios
  4. High FPS and low latency: 24 FPS streaming suitable for interactive applications
  5. Versatile use cases: Enables game dev, robotics simulation, VFX, and embodied AI research
  6. Community-friendly: GitHub repo with weights and detailed technical report
  7. Lightweight variant: WorldPlay-5B fits smaller GPUs with trade-offs in quality

Cons

  1. Hardware demanding: Full model requires powerful GPUs for real-time inference
  2. Recent release: Limited community adoption and fine-tuning examples so far
  3. Setup required: Local deployment involves dependencies and model download
  4. No hosted demo: Primarily code-based; no simple web interface mentioned
  5. Potential quality trade-offs: Lightweight 5B model compromises on fidelity
  6. Complex training pipeline: Full reproduction needs significant compute resources
  7. Video-centric output: Produces streamed video via diffusion rather than explicit 3D meshes, so assets cannot be exported directly

Use Cases

  1. Game prototyping: Generate explorable levels or scenes for testing without traditional assets
  2. Embodied AI training: Simulate persistent environments for robot/agent learning
  3. Autonomous driving: Create dynamic traffic and scenario videos for testing
  4. VFX pre-vis: Build interactive digital sets with camera control for film planning
  5. Interactive content: Develop AI-driven virtual worlds or experiences
  6. Research extension: Fine-tune or build upon for new domains like scientific sims
  7. 3D reconstruction: Use as base for promptable scene rebuilding

Target Audience

  1. AI researchers: Studying interactive world models and long-horizon consistency
  2. Game developers: Prototyping procedural worlds and reducing asset needs
  3. Robotics/embodied AI teams: Needing high-fidelity interactive simulations
  4. Autonomous systems engineers: Generating diverse driving or navigation scenarios
  5. VFX and film professionals: For real-time pre-visualization and set exploration
  6. Open-source developers: Customizing or deploying local world models

How To Use

  1. Access repo: Visit github.com/Tencent-Hunyuan/HY-WorldPlay for code and docs
  2. Download weights: Get models from Hugging Face (tencent/HY-WorldPlay)
  3. Set up environment: Install dependencies (PyTorch, etc.) per installation guide
  4. Run inference: Use provided scripts for streaming generation with action inputs
  5. Provide input: Start with text prompt, image, or video frame to initialize
  6. Interact: Use keyboard/mouse for real-time control and observation
  7. Modify: Apply text prompts during runtime for dynamic changes
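The steps above can be condensed into a shell sketch. The repo and Hugging Face IDs come from this listing, but the requirements file, script name, and flags are assumptions for illustration; check the repo's README for the actual entry points.

```shell
# Clone the official repo (URL from the listing above)
git clone https://github.com/Tencent-Hunyuan/HY-WorldPlay.git
cd HY-WorldPlay

# Create an isolated environment and install dependencies
# (requirements.txt is assumed; follow the repo's installation guide)
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Download model weights from Hugging Face (repo id from the listing)
pip install "huggingface_hub[cli]"
huggingface-cli download tencent/HY-WorldPlay --local-dir ./weights

# Run streaming inference (script name and flags are hypothetical)
python inference.py --weights ./weights --prompt "a rainy neon city street" --fps 24
```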

How we rated HY-World 1.5

  • Performance: 4.8/5
  • Accuracy: 4.7/5
  • Features: 4.9/5
  • Cost-Efficiency: 5.0/5
  • Ease of Use: 4.1/5
  • Customization: 4.9/5
  • Data Privacy: 5.0/5
  • Support: 4.4/5
  • Integration: 4.6/5
  • Overall Score: 4.8/5

HY-World 1.5 integration with other tools

  1. Hugging Face: Model weights and community pipelines for easy download and experimentation
  2. GitHub: Full source code, training framework, and deployment scripts for local/custom use
  3. Game Engines: Potential wrappers for Unity or Unreal Engine to import generated worlds
  4. Simulation Frameworks: Compatible with robotics sims like Isaac Sim or CARLA for driving tests
  5. Local GPU Setup: Runs directly on NVIDIA hardware with CUDA for real-time performance

Best prompts optimised for HY-World 1.5

  1. A futuristic cyberpunk city at night with flying cars and neon lights, start from this urban street image [upload reference], enable keyboard navigation with realistic physics and dynamic lighting
  2. Fantasy ancient forest with magical creatures and glowing trees, generate in anime style, maintain long-term consistency and allow off-screen progression
  3. Busy modern highway during rush hour with diverse vehicles and pedestrians, simulate realistic traffic flow and weather changes for autonomous driving test
  4. Sci-fi spaceship bridge with crew members and holographic controls, support third-person view and interactive object placement via text
  5. Photorealistic mountain landscape at dawn with wildlife and changing fog, ensure geometric stability over extended exploration

Final Verdict

HY-World 1.5 (WorldPlay) from Tencent is a pioneering open-source interactive world model achieving real-time 24 FPS generation with impressive long-term geometric consistency. Its innovations in memory, control, and RL make it a strong rival to closed systems for game prototyping, robotics sims, and research. Fully free with comprehensive code, it’s highly valuable despite requiring strong hardware and setup.

FAQs

  • What is HY-World 1.5?

    HY-World 1.5 (WorldPlay) is Tencent Hunyuan’s open-source interactive world model that generates real-time streaming video at 24 FPS with long-term geometric consistency from user inputs.

  • When was HY-World 1.5 released?

    It was officially released and open-sourced on December 17, 2025, with the technical report and code made available.

  • Is HY-World 1.5 free to use?

    Yes, it is completely open-source with full training framework, inference code, and model weights available under community license; no costs involved.

  • What are the key innovations in HY-World 1.5?

    Dual Action Representation, Reconstituted Context Memory, WorldCompass RL framework, and Context Forcing distillation enable real-time speed with geometric consistency.

  • What hardware does HY-World 1.5 require?

    It needs powerful GPUs for real-time inference (high-end consumer or better); lightweight 5B variant fits smaller VRAM but with quality trade-offs.

  • How does HY-World 1.5 compare to other world models?

    It achieves both high FPS and long-term consistency, outperforming methods that sacrifice one for the other, and rivals closed models like Genie in open-source form.

  • What applications does HY-World 1.5 support?

    Suited for game prototyping, embodied AI/robot training, autonomous driving simulation, VFX pre-vis, 3D reconstruction, and interactive content creation.

  • Where can I access HY-World 1.5?

    Model weights on Hugging Face, full code and docs on GitHub (Tencent-Hunyuan/HY-WorldPlay), plus technical report on the Hunyuan site.

HY-World 1.5 Alternatives

  1. Seedance 2.0: $0/Month
  2. VideoGen: $12/Month
  3. WUI.AI: $10/Month

HY-World 1.5 Reviews

0.0 out of 5 stars (based on 0 reviews)

There are no reviews yet. Be the first one to write one.