I’m diving deep into Google’s newly released Veo-3 “Fast Mode”. Is it possible to have fast, cheap, and good AI video? I put it to the test with an A/B comparison against the “Big Daddy” quality mode, and the results might just surprise you. I’ll also break down the cost, share an interesting prompt formula, and reveal a huge hack you won’t want to miss!
What Are AI Hallucinations and How Do They Work?
Artificial intelligence is doing some pretty mind-blowing things lately,
writing articles, generating images, passing bar exams and even composing music.
But as powerful as AI can be, it’s not immune to quirks and issues.
One of the most talked-about (and arguably most misunderstood) issues is what’s known as AI hallucination.
So, What Is an AI Hallucination?
AI hallucinations happen when a model like ChatGPT confidently spits out information that’s just plain wrong. It might tell you a historical fact that never happened, cite a study that doesn’t exist or describe a product feature that isn’t even real.
What’s especially tricky is that the response often sounds totally believable: clear, authoritative and logical. But under the hood, it’s complete fiction, and it’s pretty much impossible to tell the difference if you don’t have specialised knowledge.
Of course, the term “hallucination” is borrowed from psychology, where it describes seeing or hearing things that aren’t really there. And, in the AI world, it refers to when a machine essentially “imagines” facts that aren’t supported by its training data or real-world information.
Why Do These Hallucinations Happen?
There’s no single cause, but a few reasons stand out. First, hallucinations occur more often when there are gaps or biases in the training data. AI models learn from huge amounts of text scraped from all corners of the internet, books, articles and more.
So if the data has a gap, or if the data is inaccurate or biased, the model ends up making things up to fill in the blanks, so to speak.
Second, sometimes AI models are simply guessing to complete patterns. They’re trained to predict the next word in a sentence based on what they’ve seen before. But the pattern they choose might sound right to the AI without actually aligning with accurate facts.
Third and finally, we need to remember that as incredibly intelligent as AI may seem, it doesn’t have real-world understanding. It has no awareness,
no memory (although newer models are starting to remember past conversations) and no access to updated databases unless they’re specifically integrated.
Essentially, these models are just guessing at what sounds right rather than evaluating and double-checking facts.
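To make the “predicting the next word” point concrete, here’s a deliberately tiny toy sketch (not a real language model): it only counts which word tends to follow which in a small corpus, then extends a prompt by picking statistically plausible continuations. The fluent-but-ungrounded output it produces is the essence of a hallucination.

```python
import random
from collections import defaultdict

# Toy corpus: the "model" will learn only word-following statistics from this.
corpus = (
    "the study was published in 2019 . "
    "the study was cited by many . "
    "the paper was published in 2021 ."
).split()

# Build a bigram table: for each word, which words have followed it?
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def complete(prompt, length=5, seed=0):
    """Extend the prompt by always picking a plausible next word."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The model fluently "completes" a claim about a paper whether or not
# any such paper exists -- it is pattern-matching, not fact-checking.
print(complete("the paper was"))
```

A real large language model is vastly more sophisticated, but the failure mode is the same: nothing in the prediction step checks the output against reality.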
Should We Be Worried?
Honestly, yes and no. On one hand, AI hallucinations can be pretty harmless. If a chatbot mistakenly tells you that a fictional character was born in 1856, it’s probably not the end of the world.
However, the stakes get a lot higher when AI is used in medicine, law, journalism or customer service. Imagine an AI system giving a patient inaccurate medical advice or misrepresenting a legal precedent – that’s obviously a serious problem. And, since these hallucinated answers can sound super confident, they can be very persuasive even when they’re wrong.
This is why AI developers, including those at Anthropic, OpenAI, and others, are spending a lot of time and energy trying to reduce hallucinations. They’re using techniques like Retrieval Augmented Generation (RAG),
Reinforcement Learning from Human Feedback (RLHF) and extra fact-checking layers. These methods help, but they don’t solve the problem entirely.
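For a feel of the RAG idea mentioned above, here’s a minimal sketch: retrieve the most relevant snippet first, then ground the answer in it, and refuse to answer when nothing relevant is found. (This uses toy keyword overlap as a stand-in for real vector search plus an LLM; the documents and wording are hypothetical.)

```python
documents = [
    "WordPress 6.5 was released in April 2024.",
    "RAG retrieves supporting documents before generating an answer.",
    "RLHF fine-tunes a model using human preference rankings.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by simple keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question):
    """Ground the reply in retrieved text, or admit uncertainty."""
    context = retrieve(question, documents)
    q_words = set(question.lower().split())
    if not context or not (q_words & set(context[0].lower().split())):
        return "I'm not sure."  # refuse rather than hallucinate
    return f"Based on my sources: {context[0]}"

print(answer("What does RAG do before generating an answer?"))
print(answer("Who won the 1958 world cup?"))
```

The key design choice is the refusal branch: when retrieval comes back empty, the system says “I’m not sure” instead of inventing something plausible.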
The Bottom Line
AI hallucinations are a reminder that, for all its brilliance, artificial intelligence is still a work in progress. As models get more sophisticated, the hope is that they’ll get better at knowing when not to speak – or at least when to say, “I’m not sure.” But hey, even humans struggle to do that sometimes (probably more than we’d like to admit).
Until then, it’s on us to ask questions, cross-check facts and remember: just because something sounds smart doesn’t mean it’s true. Even when it comes from a robot.
Demis Hassabis On The Future of Work in the Age of AI
WIRED Editor At Large Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep dive discussion on the emergence of AI, the path to Artificial General Intelligence (AGI), and how Google is positioning itself to compete in the future of the workplace.
Amazon’s big bet on ‘physical AI’
CNBC’s Kate Rooney joins ‘Money Movers’ to discuss Amazon looking to robotics to shrink logistics costs and boost margins.
AI Avatars Level Up BIG!
AI avatars have seriously leveled up, leading to that classic question: am I real, or am I an AI avatar? Spoilers… five fingers mean I’m real! But today, we’re diving into the latest in AI avatar generation – and trust me, it’s pretty wild. Plus, I’ve got a spot where you can try some of this magic out for FREE! In this video, I’m checking out some mind-blowing new AI tools and updates. We’ll explore how Black Forest Labs’ “Kontext” is shaking things up with Flux news, take a peek at Topaz’s new creative AI image upscaler which I’m super excited about, and even see how Sora is now kinda… sorta… free (with an asterisk, of course!). It’s a packed one!
The WordPress AI fightback begins!
WordPress has just announced the formation of a groundbreaking new AI team — and we’ve got the inside scoop. In this exclusive interview, James LaPage (one of the team’s leaders) sits down to discuss their mission, the team members, and what this means for the future of WordPress.
Who’s on the team?
Pascal Birchler, Jeff Paul, and Felix Arntz — seasoned WordPress contributors now bringing their AI expertise to the platform.
What’s their mission?
To explore how AI can responsibly enhance the WordPress ecosystem, from core features to community tools.
Why now?
With AI reshaping the web, WordPress is stepping up to lead the way — ethically and openly.
AI Reacts to Being AI: Google Veo 3 Test Footage
Shot entirely with Google Veo 3, this surreal test footage captures a series of AIs reacting—some with denial, some with existential dread, and some with unexpected acceptance—after being told they’re not real. From awkward silence to emotional breakdowns, witness the uncanny valley hit rock bottom in cinematic 4K. This isn’t just a tech demo—it’s an identity crisis caught on camera.
I Can’t Believe I Made This AI Video In 2 Minutes | VEO 3 Review
I put VEO 3 through multiple tests against Sora and other leading AI video generators. The results completely shifted my perspective on which company is actually winning the AI video race. From sound design that no other AI tool can match to video quality that rivals professional production, VEO 3 delivered results I wasn’t expecting.
AI Agents Explained Like You’re 5 (Seriously, Easiest Explanation Ever!)
An AI agent takes the intelligence of AI and puts it to work: it understands what’s needed, then thinks and acts.
I Made an AI Trading Agent in MINUTES (No Code!)
I’m attempting to build an AI trading agent in just ten minutes – with no code! I’m using Zapier to connect everything, pulling real-time trading signals from TAAPI, using ChatGPT to (hopefully!) make smart decisions, and then executing those trades with Alpaca. The entire workflow, from grabbing the Relative Strength Index (RSI) of Tesla stock to setting up buy/sell orders, is all happening without a single line of code.
I’ve set up the Zap, connected all the APIs, and even built in a failsafe to (hopefully) avoid any catastrophic mistakes. But will it actually work? I’m launching this thing live, during market hours, and putting my money on the line. The tension is real. Will my AI agent be a genius trader, or will it send my portfolio plummeting? You’ll have to watch to find out… and things get very interesting. Let’s just say there are some serious ups and downs, and I’m on the edge of my seat the entire time!
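For readers curious what the decision step in a workflow like this amounts to, here’s a hypothetical sketch of the core logic: an RSI reading comes in and a rule decides buy, sell, or hold, with a failsafe cap on order size. (The video wires this up in Zapier with TAAPI, ChatGPT and Alpaca; the thresholds and cap below are illustrative assumptions, not trading advice.)

```python
MAX_ORDER_DOLLARS = 100  # failsafe: never risk more than this per trade

def decide(rsi, order_dollars=50):
    """Classic RSI rule of thumb: below 30 is oversold (buy),
    above 70 is overbought (sell), anything else is hold."""
    if order_dollars > MAX_ORDER_DOLLARS:
        return "reject"  # failsafe against catastrophically large orders
    if rsi < 30:
        return "buy"
    if rsi > 70:
        return "sell"
    return "hold"

print(decide(25))   # oversold -> buy
print(decide(75))   # overbought -> sell
print(decide(50))   # in between -> hold
```

The failsafe check runs before any signal logic, which is the same ordering you’d want in the Zap: validate the order first, only then act on the signal.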
