Nano Banana 2 is changing AI filmmaking. It’s the first AI image generator able to research the real world while generating your frame, thanks to built-in Google Search and Image Search grounding. In this video we show how filmmakers can use Nano Banana 2 to create accurate, believable and consistent cinematic frames instead of random AI hallucinations.

You will learn practical AI filmmaking workflows that allow you to:
• reconstruct real historical locations and weather conditions
• generate accurate objects and props
• swap elements inside the same shot
• localize scenes for global markets
• build full cinematic coverage from a single master shot

Instead of generating random images, Nano Banana 2 allows you to direct scenes like a filmmaker – controlling locations, props, lighting and continuity.

This is not a basic prompting tutorial. This is a practical masterclass for AI filmmakers who want to turn image generation into a real directing tool. If you want to build credible cinematic worlds with AI, this video will show you how.

WHAT YOU WILL LEARN
• AI filmmaking workflows with Nano Banana 2
• Google Search grounding for historical accuracy
• Image Search grounding for realistic props and objects
• Element swapping and scene editing
• Global scene localization
• Master shot construction and shot coverage
• Cinematic continuity using AI image generation

TOOLS DISCUSSED
• Nano Banana 2
• Gemini AI
• AI filmmaking workflows
• AI video pre-production techniques
Can AI replace manga artists in Japan? – Asia Specific podcast, BBC World Service
As generative AI upends industries around the world, the creators of Japan’s popular manga comics are debating whether the technology is a threat or opportunity.
Some think AI can help with labour shortages and boost productivity, but many artists and publishers fear copyright infringement, falling incomes and the devaluation of human artistry.
In this episode of Asia Specific, host Mariko Oi speaks with Tokyo-based manga artist Peppe, AI consultant Darren Boey and Takeshi Kikuchi from the Manga Research Institute about how AI is changing this popular art form.
Say Goodbye To Plastic AI Skin | Introducing Vellum By TheCluelessAI
Vellum is a high-fidelity AI skin model developed by Clueless AI and now available exclusively on OpenArt.
Elon Musk Notices Something About the AI Revolution No One Noticed
Dave Rubin of “The Rubin Report” shares a DM clip of Elon Musk explaining to Peter H. Diamandis how AI and robots will likely lead to a universal income in the future.
LTX Just dropped a FREE AI Video Editor and it is WILD!
LTX Desktop just dropped: a free, open-source, fully local non-linear video editor built on the LTX 2.3 engine. Today we’re going through the whole thing: how to install it, what it can do, what it can’t do, and why I think this matters more than most people realize.
We’re also running through the LTX 2.3 model updates including the rebuilt VAE, image-to-video fixes, native portrait video, and audio quality improvements.
The Quantum Computer Dream is Falling Apart
As we continue to research quantum computing, quantum advantage – the theoretical edge quantum computers are supposed to hold over classical computers – continues to dry up. Today we’re covering how more quantum computing use cases are disappearing, and an unexpected problem with quantum computing in general.
Early indicator of AI labor impact
CNBC’s Deirdre Bosa joins ‘Money Movers’ to report on Anthropic’s new study on which jobs AI is already displacing
AI can eliminate a huge percentage of knowledge work, says Oaktree’s Howard Marks
Howard Marks, co-chairman and co-founder of Oaktree Capital, joins ‘Money Movers’ to discuss the impacts of artificial intelligence, market themes, and more.
China Just Dropped 1 Trillion Parameter AI Model That Shocks OpenAI
China just released a one-trillion-parameter AI model called Yuan 3.0 Ultra. Built with a Mixture-of-Experts architecture, it actually became faster and more efficient after removing roughly thirty-three percent of its own parameters during training, boosting efficiency by about forty-nine percent. The result is a trillion-parameter system competing with models like GPT 5.2, Gemini 3.1 Pro, Claude Opus 4.6, DeepSeek V3, and Kimi K2.5 across reasoning, coding, retrieval, and enterprise AI tasks.
I Tested AI-Generated Motion Graphics for YouTube (The Results Are Crazy!)
I’ve been testing a workflow that uses AI to generate fully custom, transparent, animated graphics that you can drop straight into your edit. You feed AI the transcript from your video and it actually reads through the content and suggests what graphics to create: title cards, lower thirds, callouts, whatever fits. Then it builds them, animated, transparent, ready to composite. No code required on your end, no templates, and every graphic is unique to that specific video.
I’ll walk you through the process using Claude from Anthropic and Remotion (which is free and open source), show you multiple examples of what it produces, and give you my honest take on where this is right now. I’ll also show you how starting from a reference design or template you already like can get you dramatically closer to what you want in one shot.
If you’re already using Descript for your editing workflow, I’ll show how these graphics fit into that process too. And if you want the full setup guide, prompts, and a starter project to try this yourself, that’s all inside Primal Video PLUS.
