ViVi - Interactive Media Platform

  • Co-Founder / CTO

An interactive platform that linked viewers to contextual content in real time as they watched film and television.

Project
Interactive Media Platform · Contextual Viewing
Role
Co-Founder / CTO
Tools & Techniques
Second-Screen Interaction · Real-Time Synchronization
Credits
ViVi was co-founded with Rona Shaanan, who led the company as CEO. Rona shaped the vision, partnerships, and direction of the product from its earliest days, and was a driving force behind turning the platform from an idea into a working company.
Overview
ViVi was an interactive media platform designed to transform film and television viewing from a passive experience into an active, contextual one. The product introduced a real-time layer of information, discovery, and engagement synchronized directly to what viewers were watching - across first and second screens. As Co-Founder and CTO, I led the technical vision and development of a hybrid platform spanning web, mobile, and partner integrations, focused on contextual content delivery, real-time synchronization, and audience engagement at scale.

Watching film and television often triggers curiosity in real time - who a character is, whether a scene is based on real events, or what details might be hiding in plain sight.

At the time, satisfying that curiosity meant breaking away from the story altogether: opening a browser, scrolling through social media, or juggling disconnected second-screen apps that had no real relationship to the moment unfolding on screen. ViVi set out to close that gap - delivering relevant, contextual information precisely when it mattered, without interrupting or competing with the viewing experience.

The Product

Prototype of ViVi’s Chrome extension, demonstrating real-time contextual tagging over HBO’s Big Little Lies.

ViVi was built as a hybrid interactive platform that layered contextual content directly onto film and television in real time. Instead of pulling viewers away from what they were watching, it synchronized discovery, information, and interaction with the narrative moment itself.

The platform supported second-screen experiences across web and mobile, standalone interactive playback, and white-label integrations for broadcasters and content owners - all designed to keep interaction anchored to the narrative moment rather than the surrounding platform. Content was surfaced dynamically through a combination of timing, metadata, automation, and viewer input - enabling deeper exploration of scenes, characters, and themes in sync with the narrative.

Technical Architecture & Systems

Early ViVi platform dashboard showing the scale of tagged content and the homepage entry point for exploring shows and scenes.

My main responsibility at ViVi was to imagine, build, and maintain a working proof-of-concept that could make the product tangible - not just to users, but to angels, investors, and potential partners. The system had to work with real media, real platforms, and real constraints.

To support that, I built a flexible platform that enabled multiple workflows for tagging, editing, and viewing contextual content across different media sources and distribution models. The architecture was designed to evolve alongside the product, rather than lock it into a single use case too early.

ViVi was built as a web-based platform using Vue.js, with MongoDB as the primary datastore. At its core was a simple idea: contextual content mapped to a media identifier and a precise timestamp.

This structure allowed creators, partners, and contributors to define scenes, attach content to specific moments, and surface that content dynamically during playback. Automation and early AI-assisted processes were introduced to support tag creation and enrichment, reducing manual work while preserving relevance.
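
The sketch below illustrates that core mapping as it might look in practice - a tag stored as a document keyed by media identifier and timestamp, plus a lookup for tags near the current playback position. The interface, collection, and field names are assumptions for this example, not the actual ViVi schema.

```typescript
// Minimal sketch of the core mapping: contextual content keyed by a media
// identifier and a timestamp. Collection and field names are illustrative,
// not ViVi's actual schema.
import { MongoClient } from "mongodb";

interface ContextTag {
  mediaId: string;   // e.g. a YouTube video ID or an internal show/episode ID
  timestamp: number; // playback position the tag is anchored to, in seconds
  kind: "character" | "trivia" | "product" | "scene";
  title: string;
  body: string;
  link?: string;     // optional affiliate or reference URL surfaced with the tag
}

// Fetch tags anchored within a few seconds of the current playback position.
async function tagsNear(client: MongoClient, mediaId: string, currentTime: number) {
  return client
    .db("vivi")
    .collection<ContextTag>("tags")
    .find({ mediaId, timestamp: { $gte: currentTime - 5, $lte: currentTime + 5 } })
    .sort({ timestamp: 1 })
    .toArray();
}
```

With a compound index on mediaId and timestamp, a structure like this keeps each playback-time lookup to a single indexed range query per title.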

From YouTube to Streaming Platforms

The first working product focused on YouTube, chosen for its accessibility and technical openness. Using the YouTube API, I embedded playback directly into the ViVi platform and built tools that allowed users to define scenes, tag moments, and view contextual content alongside the video in real time.
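
A minimal version of that loop can be sketched on the public YouTube IFrame Player API: embed the player, poll its playback position, and fetch whatever tags fall near that moment. The mountViviPlayer and fetchTags names below are hypothetical stand-ins rather than ViVi's actual code.

```typescript
// Hedged sketch of the embed-and-poll loop, using the public YouTube IFrame
// Player API (loaded from https://www.youtube.com/iframe_api). fetchTags()
// stands in for a hypothetical call to the ViVi backend.
declare const YT: any; // global provided by the IFrame API script at runtime
declare function fetchTags(mediaId: string, time: number): Promise<{ title: string }[]>;

function mountViviPlayer(videoId: string, onTags: (tags: { title: string }[]) => void) {
  return new YT.Player("vivi-player", {
    videoId,
    events: {
      onReady: (event: any) => {
        const player = event.target; // the ready player instance
        // Poll the playback position once per second and surface nearby tags.
        setInterval(async () => {
          const time: number = player.getCurrentTime();
          onTags(await fetchTags(videoId, time));
        }, 1000);
      },
    },
  });
}
```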

This proof-of-concept was developed in collaboration with YouTube creators, who tagged their own content to increase engagement and connect viewers directly to affiliate links and purchases tied to specific moments. It demonstrated how contextual tagging could shorten the distance between attention, discovery, and action.

Concept mockup of a ViVi integration on Roku, showcasing contextual content layered over Stranger Things on Netflix.

From there, I extended the system to premium streaming platforms by building a dedicated ViVi Chrome extension. The extension detected supported platforms such as Netflix and HBO Max and injected a lightweight, non-intrusive ViVi layer directly onto the video player. A collapsible action panel allowed users to define scenes and tag content in real time, pushing that data directly into the ViVi backend.
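
The sketch below shows a content script in that spirit: the extension manifest would restrict it to supported hosts (for example *://*.netflix.com/*), and the script reads playback time from the page's own video element and renders a small overlay near the player. Selectors, element IDs, and styling here are assumptions for illustration, not the extension's actual code.

```typescript
// Content-script sketch: observe the streaming page's <video> element and
// render a lightweight overlay that tracks its playback position.
// Element IDs, styling, and the displayed text are illustrative assumptions.

function injectOverlay(): HTMLElement {
  const overlay = document.createElement("div");
  overlay.id = "vivi-overlay";
  overlay.style.cssText =
    "position:fixed;bottom:80px;right:24px;z-index:9999;max-width:320px;";
  document.body.appendChild(overlay);
  return overlay;
}

function watchPlayback(onTime: (seconds: number) => void) {
  // Streaming players mount their <video> element lazily, so poll until it appears.
  const probe = setInterval(() => {
    const video = document.querySelector<HTMLVideoElement>("video");
    if (!video) return;
    clearInterval(probe);
    video.addEventListener("timeupdate", () => onTime(video.currentTime));
  }, 1000);
}

const overlay = injectOverlay();
watchPlayback((seconds) => {
  // A real version would query the ViVi backend for tags near `seconds`
  // and render them; here the overlay only shows the synchronized position.
  overlay.textContent = `ViVi · ${Math.floor(seconds)}s`;
});
```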

To scale content creation, we worked with fandom communities around narrative-dense shows, collaboratively curating contextual layers for series such as Game of Thrones, Westworld, Stranger Things, Big Little Lies, Ozark, Black Mirror, Riverdale, Maniac, Rick & Morty, Mindhunter, and others.

Second-Screen Synchronization

A core challenge was proving that ViVi could function as a true second-screen experience.

To address this, I built a mobile application prototype that synchronized contextual content with playback occurring on a separate device. When the same user was logged into both the Chrome extension and the mobile app, tagged content appeared on the phone in real time, aligned with what they were watching on Netflix or HBO Max.

Concept mockup demonstrating ViVi’s second-screen synchronization, surfacing contextual content on mobile alongside playback on a separate device.

This proof-of-concept demonstrated cross-device synchronization without direct access to the primary playback environment - a critical requirement for real-world deployment.
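
The sketch below shows one way such a link can be wired - a per-user realtime channel where the extension publishes the playback position and the mobile app subscribes to the same user's stream, then looks up tags for that moment. The relay URL, message shape, and channel naming are illustrative assumptions rather than ViVi's actual protocol.

```typescript
// Sketch of a per-user realtime channel for second-screen sync. The relay
// URL, message shape, and channel naming are illustrative assumptions.

interface PlaybackUpdate {
  userId: string;
  mediaId: string;
  timestamp: number; // seconds into playback on the primary screen
}

// Primary screen (e.g. the Chrome extension): push the position as it changes.
function publishPlayback(socket: WebSocket, update: PlaybackUpdate) {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(update));
  }
}

// Second screen (mobile app): follow the same user's channel and react to it.
function followPlayback(userId: string, onUpdate: (u: PlaybackUpdate) => void) {
  const socket = new WebSocket(`wss://relay.example.com/users/${userId}`);
  socket.onmessage = (event) => onUpdate(JSON.parse(event.data) as PlaybackUpdate);
  return socket;
}
```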

Engagement & Analytics Layer

Analytics dashboard mockup showing time-based and contextual engagement around Big Little Lies scenes and tagged content.

Alongside content delivery, I built an analytics layer that made viewer behavior visible in ways traditional viewing metrics could not.

The system allowed partners and creators to see how audiences moved through a story: which scenes they watched or abandoned, when they engaged with contextual content, what drew interaction during playback, and which items were purchased at specific narrative moments.
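
The sketch below shows the kind of event record and MongoDB aggregation a layer like this could rest on - every interaction logged against a title, scene, and playback position, then grouped by scene to see where attention concentrated. Collection, field, and event names are illustrative assumptions.

```typescript
// Sketch of the kind of event record and aggregation an engagement layer
// could rest on. Collection, field, and event names are illustrative assumptions.
import { MongoClient } from "mongodb";

interface EngagementEvent {
  userId: string;
  mediaId: string;
  sceneId: string;
  timestamp: number; // playback position in seconds when the event occurred
  action: "scene_view" | "scene_abandon" | "tag_open" | "purchase";
  itemId?: string;   // set when action === "purchase"
  at: Date;          // wall-clock time of the event
}

// Interactions per scene for one title, most active scenes first.
async function engagementByScene(client: MongoClient, mediaId: string) {
  return client
    .db("vivi")
    .collection<EngagementEvent>("events")
    .aggregate([
      { $match: { mediaId } },
      { $group: { _id: "$sceneId", interactions: { $sum: 1 } } },
      { $sort: { interactions: -1 } },
    ])
    .toArray();
}
```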

For me, this was less about dashboards and more about understanding attention - how curiosity unfolds over time, and how meaning, engagement, and action align with story rather than platform averages.

Designing for Attention

ViVi’s technical foundation was intentionally exploratory. Each system - creator tools, tagging workflows, second-screen synchronization, and analytics - existed to test a different aspect of how people engage with stories.

Together, these proofs-of-concept established ViVi as a technically credible platform, while remaining focused on the human behavior at the center of the viewing experience.

Rather than optimizing for a single outcome, the architecture allowed ideas to be tested in motion - revealing where interaction added meaning, where it distracted, and where it simply wasn’t needed. That clarity proved as valuable as any feature shipped.

Interaction as Narrative

Building ViVi shaped how I think about interaction as a narrative tool rather than a layer of features. Working at the intersection of media, technology, and audience behavior made clear that timing, restraint, and context matter as much as functionality. The project reinforced my interest in systems that respond to human curiosity in real time - not by competing for attention, but by aligning with it. That perspective continues to inform how I approach interactive storytelling: designing tools that respect narrative flow, invite exploration, and reveal meaning without pulling the viewer out of the experience.