A native Mac workspace for AI-assisted image, video, film, and audio production.

Studio puts generation inside the same place you review shots, manage references, build scenes, assemble a timeline, and keep project decisions. It is not a chat thread around creative work. It is the workspace where the work lives.

Project rail
Create, open, validate, and organize local .studio projects. Storyboards, characters, mise-en-scene records, generated outputs, scripts, and audio stay attached to the project instead of disappearing into separate tools.
Review generated and imported media in the central viewer, then assemble shots with transport controls, video lanes, dialogue, music, captions, trimming, splitting, reordering, undo, and export foundations.
Generate image, video, and audio from typed creative controls: provider, model, scene context, characters, duration, aspect ratio, resolution, camera intent, native audio, frame locks, and reference guidance.
Studio is organized around the path from idea to assembled media. Each step leaves usable project state behind: assets, prompts, jobs, takes, timeline edits, captions, exports, and decisions.
Open a Finder-visible .studio package with its own SQLite project store, imported media, generated outputs, cache files, captions, exports, and logs.
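As a rough illustration of what a package-shaped project enables, here is a minimal sketch of opening and validating such a bundle. The entry names (`project.sqlite`, `media`, `exports`, `logs`) and the `StudioPackage` type are assumptions for the sketch, not Studio's actual on-disk format.

```swift
import Foundation

// Illustrative layout for a .studio package; the entry names below
// are assumptions, not Studio's actual on-disk format.
struct StudioPackage {
    let root: URL
    static let requiredEntries = ["project.sqlite", "media", "exports", "logs"]

    // Entries the package is still missing, if any.
    func missingEntries() -> [String] {
        let fm = FileManager.default
        return Self.requiredEntries.filter {
            !fm.fileExists(atPath: root.appendingPathComponent($0).path)
        }
    }
}

// Create a throwaway package on disk and check that it validates.
let root = FileManager.default.temporaryDirectory
    .appendingPathComponent("Demo.studio")
try FileManager.default.createDirectory(at: root, withIntermediateDirectories: true)
for entry in ["media", "exports", "logs"] {
    try FileManager.default.createDirectory(
        at: root.appendingPathComponent(entry), withIntermediateDirectories: true)
}
_ = FileManager.default.createFile(
    atPath: root.appendingPathComponent("project.sqlite").path, contents: Data())

let missing = StudioPackage(root: root).missingEntries()  // empty: complete
```

Because the package is a plain directory, a check like this doubles as the "validate" step when a project is reopened or copied between machines.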
Define characters, settings, props, style guides, voice profiles, and mise-en-scene notes so generation starts from reusable creative context.
Use provider-aware controls for image, video, dialogue, ambience, sound effects, references, first frames, last frames, duration, aspect ratio, and model capabilities.
Prompt snapshots, provider payloads, failures, retries, generated assets, and selected takes remain part of the project instead of being stranded in a chat transcript.
Place generated and imported media on the timeline, trim and reorder clips, add dialogue or music, draft captions, and prepare an MP4 export path.
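The trim-and-split operations above reduce to simple interval math over a clip's in/out points. A hypothetical clip model, not Studio's actual data model, might look like this:

```swift
import Foundation

// Hypothetical clip model: a source asset placed on a timeline lane
// with an in/out range, in seconds. Not Studio's actual data model.
struct Clip {
    var assetID: UUID
    var sourceIn: Double    // seconds into the source asset
    var sourceOut: Double
    var duration: Double { sourceOut - sourceIn }

    // Trim the head of the clip forward by `delta` seconds.
    mutating func trimHead(by delta: Double) {
        sourceIn = min(sourceIn + delta, sourceOut)
    }

    // Split at `offset` seconds from the clip start, yielding two clips.
    func split(at offset: Double) -> (Clip, Clip)? {
        guard offset > 0, offset < duration else { return nil }
        var left = self, right = self
        left.sourceOut = sourceIn + offset
        right.sourceIn = sourceIn + offset
        return (left, right)
    }
}

var clip = Clip(assetID: UUID(), sourceIn: 2.0, sourceOut: 10.0)
clip.trimHead(by: 1.0)           // clip now covers 3.0...10.0
let parts = clip.split(at: 4.0)  // 3.0...7.0 and 7.0...10.0
```

Keeping edits as in/out ranges over an immutable source asset is also what makes them cheap to undo: reverting a trim restores two numbers, not media.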
Studio keeps AI close to the work. Characters, settings, and voice profiles feed into generation. The interface prioritizes preview, timeline, asset state, and export readiness while exposing contextual assistant actions only where they help creators make a concrete next decision.
Define characters with visual identity, voice profiles, and speaking style. Build reusable settings, scene templates, props, and style guides. Pull entity context into generation specs without pasting raw JSON.
Bind ElevenLabs voice profiles to characters for consistent dialogue across shots. Generate sound effects, ambience, and two-speaker dialogue, with an OpenRouter speech fallback for quick TTS.
Image, video, and audio generation from OpenAI and OpenRouter. Typed creative controls — modes, durations, aspect ratios, reference frames, motion presets — mapped into provider-specific payloads.
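The "typed controls mapped into provider payloads" idea can be sketched as a struct that flattens into a request body. The field names, enum cases, and payload keys here are illustrative assumptions, not the actual provider schemas:

```swift
import Foundation

// A typed video-generation spec flattened into a provider payload.
// All names and keys are illustrative assumptions, not real schemas.
enum AspectRatio: String {
    case wide16x9 = "16:9"
    case vertical9x16 = "9:16"
}

struct VideoSpec {
    var prompt: String
    var durationSeconds: Int
    var aspectRatio: AspectRatio
    var firstFrameURL: URL?   // optional first-frame lock

    // Flatten the typed controls into a JSON-style dictionary.
    func payload() -> [String: Any] {
        var body: [String: Any] = [
            "prompt": prompt,
            "duration": durationSeconds,
            "aspect_ratio": aspectRatio.rawValue,
        ]
        if let url = firstFrameURL {
            body["first_frame"] = url.absoluteString
        }
        return body
    }
}

let spec = VideoSpec(prompt: "dolly-in on the lighthouse at dusk",
                     durationSeconds: 8,
                     aspectRatio: .wide16x9,
                     firstFrameURL: nil)
let body = spec.payload()  // omits "first_frame" when no lock is set
```

The point of the typed layer is that invalid combinations (an unsupported aspect ratio, a duration the model cannot produce) can be rejected at compile time or before the request is sent, rather than surfacing as provider errors.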
Video lanes, audio lanes, captions, thumbnails, and waveforms. Trim, split, reorder, and arrange clips. Contextual assistant actions for pacing, captions, and shot order.
SwiftUI shell with AppKit-backed professional views for dense drag-and-drop, timeline precision, and keyboard-driven editing. AVFoundation-first playback and export.
SQLite/GRDB persists every edit decision, generation job, and provider payload. Finder-visible .studio packages stay portable and inspectable — your project, your files.
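A minimal sketch of the kind of record such a store might hold. With GRDB proper, the struct would additionally conform to `FetchableRecord` and `PersistableRecord`; here it is shown as a plain `Codable` type with the table expressed as SQL, and the column names are assumptions for illustration:

```swift
import Foundation

// Sketch of a generation-job record like the ones the project store
// might persist. Field and column names are assumptions; with GRDB
// this type would also adopt FetchableRecord and PersistableRecord.
struct GenerationJob: Codable {
    var id: UUID
    var provider: String
    var model: String
    var promptSnapshot: String
    var status: String        // e.g. "queued", "succeeded", "failed"
    var createdAt: Date
}

// The matching table, written as plain SQL for illustration.
let schema = """
CREATE TABLE generationJob (
    id TEXT PRIMARY KEY,
    provider TEXT NOT NULL,
    model TEXT NOT NULL,
    promptSnapshot TEXT NOT NULL,
    status TEXT NOT NULL,
    createdAt REAL NOT NULL
);
"""

// Codable gives lossless round-trips, which is what keeps the
// package inspectable and portable between machines.
let job = GenerationJob(id: UUID(), provider: "openrouter",
                        model: "example-model",
                        promptSnapshot: "dolly-in on the lighthouse",
                        status: "queued", createdAt: Date())
let encoded = try JSONEncoder().encode(job)
let back = try JSONDecoder().decode(GenerationJob.self, from: encoded)
```

Persisting the prompt snapshot and provider payload alongside the output asset is what lets a failed or superseded take be retried later with its exact original inputs.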
Early access is aimed at creators, marketers, filmmakers, and small studios exploring AI-assisted production with cast, voice, and scene continuity.