Developed an AI-powered video ad localisation platform that automates dubbing, visual adaptation, and cultural customisation. Integrated ElevenLabs for multilingual voice dubbing and synchronised lip movements with the LatentSync model. For cultural localisation, incorporated Runway Aleph, Flux Kontext Pro, and OmniHuman to dynamically adapt visuals, branding, and product placement. Also built a conversational workflow with LangChain, Groq, and LLaMA that generates ad scripts, structures them as JSON prompts, and produces high-quality localised video ads via Google Veo 3 and Seedance Pro.
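
A minimal sketch of the script-to-JSON-prompt step described above, assuming a hypothetical schema (the function name `build_video_prompt` and fields like `scenes`, `voiceover`, and `target_locale` are illustrative, not the actual Veo3 or Seedance Pro API):

```python
import json

def build_video_prompt(script_segments, target_locale):
    """Structure a generated ad script as a JSON prompt for a
    text-to-video model. The schema here is illustrative only."""
    return json.dumps({
        "target_locale": target_locale,
        "scenes": [
            {
                "scene": i + 1,
                "visual": seg["visual"],
                "voiceover": seg["voiceover"],
                # default shot length if the script omits one (assumed field)
                "duration_s": seg.get("duration_s", 5),
            }
            for i, seg in enumerate(script_segments)
        ],
    }, ensure_ascii=False, indent=2)

# Example: segments as they might come back from the LLM script step
segments = [
    {"visual": "Product close-up on a kitchen counter",
     "voiceover": "Meet your new morning ritual."},
    {"visual": "Family sharing breakfast",
     "voiceover": "Made for every table.", "duration_s": 4},
]
prompt = build_video_prompt(segments, "es-MX")
```

In the real pipeline, the script text would come from the LangChain/Groq/LLaMA workflow and the resulting JSON would be passed to the video-generation backend.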