Watching film and television often triggers curiosity in real time - who this character is, whether a scene is based on real events, or what details might be hiding in plain sight.
When ViVi was conceived, satisfying that curiosity meant breaking away from the story altogether: opening a browser, scrolling through social media, or juggling disconnected second-screen apps that had no real relationship to the moment unfolding on screen. ViVi set out to close that gap - delivering relevant, contextual information precisely when it mattered, without interrupting or competing with the viewing experience.
The Product
ViVi was built as a hybrid interactive platform that layered contextual content directly onto film and television in real time. Instead of pulling viewers away from what they were watching, it synchronized discovery, information, and interaction with the narrative moment itself.
The platform supported second-screen experiences across web and mobile, standalone interactive playback, and white-label integrations for broadcasters and content owners - all designed to keep interaction anchored to the story rather than the surrounding platform. Content was surfaced dynamically through a combination of timing, metadata, automation, and viewer input, enabling deeper exploration of scenes, characters, and themes in sync with the narrative.
Technical Architecture & Systems
My main responsibility at ViVi was to imagine, build, and maintain a working proof-of-concept that could make the product tangible - not just to users, but to angel investors and potential partners. The system had to work with real media, real platforms, and real constraints.
To support that, I built a flexible platform that enabled multiple workflows for tagging, editing, and viewing contextual content across different media sources and distribution models. The architecture was designed to evolve alongside the product, rather than lock it into a single use case too early.
ViVi was built as a web-based platform using Vue.js, with MongoDB as the primary datastore. At its core was a simple idea: contextual content mapped to a media identifier and a precise timestamp.
This structure allowed creators, partners, and contributors to define scenes, attach content to specific moments, and surface that content dynamically during playback. Automation and early AI-assisted processes were introduced to support tag creation and enrichment, reducing manual work while preserving relevance.
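To make that core idea concrete, here is a minimal sketch of how a contextual tag might be stored and queried, assuming a MongoDB collection named tags accessed through the Node.js driver - the collection, field names, and tag kinds are illustrative assumptions, not ViVi's actual schema:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical shape of a contextual tag document; field names are
// illustrative, not ViVi's actual schema.
interface ContextTag {
  mediaId: string;   // platform-agnostic media identifier
  startSec: number;  // moment the tag becomes relevant
  endSec: number;    // moment it expires
  kind: "character" | "location" | "trivia" | "product";
  payload: { title: string; body?: string; url?: string };
}

// Fetch every tag relevant to the current playback position.
async function tagsAt(mediaId: string, positionSec: number): Promise<ContextTag[]> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const tags = client.db("vivi").collection<ContextTag>("tags");
  return tags
    .find({ mediaId, startSec: { $lte: positionSec }, endSec: { $gte: positionSec } })
    .sort({ startSec: 1 })
    .toArray();
}
```

Because every tag is just an identifier plus a time window, the same lookup works whether playback happens inside ViVi, on YouTube, or behind a streaming platform's player.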
From YouTube to Streaming Platforms
The first working product focused on YouTube, chosen for its accessibility and technical openness. Using the YouTube API, I embedded playback directly into the ViVi platform and built tools that allowed users to define scenes, tag moments, and view contextual content alongside the video in real time.
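In rough terms, the playback loop looked like the sketch below, assuming a host element with id "player", the public YouTube IFrame Player API, and a hypothetical /api/tags route on the ViVi backend - the endpoint and polling cadence are assumptions for illustration:

```typescript
// Sketch of the YouTube playback loop; the /api/tags route is an
// illustrative assumption, not ViVi's actual backend API.
declare const YT: any; // global injected by https://www.youtube.com/iframe_api

async function tagsAt(videoId: string, positionSec: number) {
  // Hypothetical ViVi backend route returning tags active at this moment.
  const res = await fetch(`/api/tags?mediaId=${videoId}&t=${positionSec}`);
  return res.json();
}

function watchVideo(videoId: string, onTags: (tags: unknown[]) => void) {
  let poll: ReturnType<typeof setInterval> | undefined;
  const player = new YT.Player("player", {
    videoId,
    events: {
      onStateChange: (e: any) => {
        if (poll !== undefined) clearInterval(poll); // reset on every state change
        if (e.data !== YT.PlayerState.PLAYING) return;
        // Poll the live playback position a couple of times per second
        // and surface whichever tags match the current moment.
        poll = setInterval(async () => {
          onTags(await tagsAt(videoId, player.getCurrentTime()));
        }, 500);
      },
    },
  });
}
```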
This proof-of-concept was developed in collaboration with YouTube creators, who tagged their own content to increase engagement and connect viewers directly to affiliate links and purchases tied to specific moments. It demonstrated how contextual tagging could shorten the distance between attention, discovery, and action.
From there, I extended the system to premium streaming platforms by building a dedicated ViVi Chrome extension. The extension detected supported platforms such as Netflix and HBO Max and injected a lightweight, non-intrusive ViVi layer directly onto the video player. A collapsible action panel allowed users to define scenes and tag content in real time, pushing that data directly into the ViVi backend.
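The mechanics of that injection can be sketched roughly as follows: a content script that locates the platform's HTML5 video element and reports playback positions to the extension. The message shape is an illustrative assumption, not the extension's actual protocol:

```typescript
// Content-script sketch for supported streaming sites; the message
// shape below is an assumption for illustration.
declare const chrome: any; // extension APIs available to content scripts

function attachToPlayer() {
  const video = document.querySelector("video");
  if (!video) {
    // Streaming player UIs render asynchronously; retry until the element exists.
    setTimeout(attachToPlayer, 1000);
    return;
  }
  // timeupdate fires a few times per second during playback.
  video.addEventListener("timeupdate", () => {
    chrome.runtime.sendMessage({
      type: "PLAYBACK_POSITION",
      pageUrl: location.href,        // used to resolve the media identifier
      positionSec: video.currentTime,
    });
  });
}

attachToPlayer();
```

Reading the position from the page's own video element is what kept the layer non-intrusive: no modification of the platform's player, only observation alongside it.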
To scale content creation, we worked with fandom communities around narrative-dense shows, collaboratively curating contextual layers for series such as Game of Thrones, Westworld, Stranger Things, Big Little Lies, Ozark, Black Mirror, Riverdale, Maniac, Rick and Morty, Mindhunter, and others.
Second-Screen Synchronization
A core challenge was proving that ViVi could function as a true second-screen experience.
To address this, I built a mobile application prototype that synchronized contextual content with playback occurring on a separate device. When the same user was logged into both the Chrome extension and the mobile app, tagged content appeared on the phone in real time, aligned with what they were watching on Netflix or HBO Max.
This proof-of-concept demonstrated cross-device synchronization without direct access to the primary playback environment - a critical requirement for real-world deployment.
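The mobile side of that handshake can be sketched as follows, assuming a per-user WebSocket channel over which the extension publishes playback positions - the endpoint, message shape, and renderTags() hook are illustrative assumptions:

```typescript
// Second-screen sketch (mobile side); endpoint and payload shape are
// assumptions for illustration, not ViVi's actual sync protocol.
const socket = new WebSocket("wss://backend.example/sync?user=USER_ID");

socket.onmessage = async (event: MessageEvent) => {
  // The Chrome extension publishes { mediaId, positionSec } for this
  // user's active session.
  const { mediaId, positionSec } = JSON.parse(event.data);
  const res = await fetch(`/api/tags?mediaId=${mediaId}&t=${positionSec}`);
  renderTags(await res.json()); // hand matching tags to the mobile UI
};

function renderTags(tags: unknown[]) {
  console.log("tags for the current moment:", tags);
}
```

The phone never touches the primary playback environment at all; it only needs a stream of positions keyed to the same user.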
Engagement & Analytics Layer
Alongside content delivery, I built an analytics layer that made viewer behavior visible in ways traditional viewing metrics could not.
The system allowed partners and creators to see how audiences moved through a story: which scenes they watched or abandoned, when they engaged with contextual content, what drew interaction during playback, and which items were purchased at specific narrative moments.
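One way to slice data like this - shown here as an illustrative MongoDB aggregation with assumed collection and field names, not ViVi's actual analytics schema - is to bucket engagement events into fixed windows of a title's runtime and count actions per window:

```typescript
import { MongoClient } from "mongodb";

// Illustrative engagement event; field names and actions are assumptions.
interface EngagementEvent {
  mediaId: string;
  userId: string;
  positionSec: number; // where in the story the event happened
  action: "view" | "open_tag" | "click_link" | "purchase" | "abandon";
}

// Bucket events into 30-second windows to see where viewers engaged,
// purchased, or dropped off across a title's timeline.
async function engagementTimeline(mediaId: string) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const events = client.db("vivi").collection<EngagementEvent>("events");
  return events
    .aggregate([
      { $match: { mediaId } },
      {
        $group: {
          _id: {
            window: { $floor: { $divide: ["$positionSec", 30] } },
            action: "$action",
          },
          count: { $sum: 1 },
        },
      },
      { $sort: { "_id.window": 1 } },
    ])
    .toArray();
}
```

Keying everything to positionSec rather than wall-clock time is what made the reports read as story analytics instead of platform averages.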
For me, this was less about dashboards and more about understanding attention - how curiosity unfolds over time, and how meaning, engagement, and action align with story rather than platform averages.
Designing for Attention
ViVi’s technical foundation was intentionally exploratory. Each system - creator tools, tagging workflows, second-screen synchronization, and analytics - existed to test a different aspect of how people engage with stories.
Together, these proofs-of-concept established ViVi as a technically credible platform, while remaining focused on the human behavior at the center of the viewing experience.
Rather than optimizing for a single outcome, the architecture allowed ideas to be tested in motion - revealing where interaction added meaning, where it distracted, and where it simply wasn’t needed. That clarity proved as valuable as any feature shipped.
Interaction as Narrative
Building ViVi shaped how I think about interaction as a narrative tool rather than a layer of features. Working at the intersection of media, technology, and audience behavior made clear that timing, restraint, and context matter as much as functionality. The project reinforced my interest in systems that respond to human curiosity in real time — not by competing for attention, but by aligning with it. That perspective continues to inform how I approach interactive storytelling: designing tools that respect narrative flow, invite exploration, and reveal meaning without pulling the viewer out of the experience.