Google's AI Boss Reveals What AI In 2026 Looks Like

The video summarizes the predictions from Google’s AI boss, Demis Hassabis, regarding the future of Artificial Intelligence in 2026, highlighting several areas where Google is positioning itself to dominate.

The key developments expected are:

  • Full Omni-models and Multimodality: Hassabis predicts a strong convergence of modalities, leading to “full omni-models.” Google’s Gemini foundation model is already built to be multimodal, handling images, video, text, and audio. The image model, Nano Banana Pro, is an example of this, demonstrating sophisticated visual understanding and the ability to create accurate infographics. The ultimate goal is a stack that includes robotics, images, video, audio, 3D, and text.
  • Advancements in Robotics: Google’s Gemini Robotics 1.5 is a new family of models designed to power the next generation of physical agents. These agents can solve complex, multi-step tasks (like sorting laundry or fruits) by perceiving the environment and “thinking” step-by-step. A significant feature is that all of Google’s robots can use the same model without specific fine-tuning for different form factors. These agents can also use the internet to answer questions and solve problems, such as looking up local waste guidelines for sorting trash.
  • Video Generation and Live Interaction: The video highlights the anticipated progress in video models, with Google’s Veo 3 expected to remain a leader in video generation. A key feature is Gemini Live, which combines multimodality with live speech and on-the-fly reasoning. A viral demonstration showed Gemini Live guiding a user through an entire complex task, such as a car oil change, demonstrating its utility as a real-time, helpful AI guide.
  • World Models: Hassabis is personally working on “world models,” which are expected to be a major theme in 2026. Google’s Genie 3 is an interactive video model that generates virtual worlds users can explore like a simulation or game. These worlds react to movement and actions in real-time, maintain “world memory” (where actions persist), and allow for “promptable events” (adding new characters or objects on the fly). These models are anticipated to be crucial for next-generation gaming, embodied research, and simulating complex scenarios.
  • Agent-Based Systems: Google is heavily focused on developing sophisticated AI agents. Examples mentioned include:
    • Co-scientist: A multi-agent system that acts as a virtual collaborator to propose and refine scientific hypotheses and research plans.
    • CodeMender: An agent developed to detect, debug, and fix security vulnerabilities in open-source codebases.
    • Data Science Agent: An assistant that automates end-to-end data science work.
    • AlphaEvolve: A coding agent for scientific algorithmic discovery.
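
The perceive-think-act loop described for the Gemini Robotics agents above can be sketched in miniature. This is an illustrative toy only, not Google's API: the function names, the sorting "rules" lookup, and the environment dictionary are all assumptions standing in for real perception and reasoning models.

```python
# Toy perceive-think-act loop for a multi-step sorting task (e.g. laundry
# or fruit), in the spirit of the embodied agents described above.
# Every name here is illustrative; real agents replace each step with
# vision and reasoning models rather than lookups.

def perceive(environment):
    """Return the next unsorted item the agent can see, if any."""
    return environment["unsorted"][0] if environment["unsorted"] else None

def think(item, rules):
    """Step-by-step reasoning reduced to a lookup: choose a bin for the item."""
    return rules.get(item, "other")

def act(environment, item, bin_name):
    """Move the item into the chosen bin."""
    environment["unsorted"].remove(item)
    environment["bins"].setdefault(bin_name, []).append(item)

def run_agent(environment, rules):
    """Loop until perception reports nothing left to do."""
    while (item := perceive(environment)) is not None:
        act(environment, item, think(item, rules))
    return environment["bins"]

env = {"unsorted": ["banana", "sock", "apple"], "bins": {}}
rules = {"banana": "fruit", "apple": "fruit", "sock": "laundry"}
print(run_agent(env, rules))
# {'fruit': ['banana', 'apple'], 'laundry': ['sock']}
```

The point of the sketch is the control flow: the agent re-perceives after every action, so a multi-step task decomposes into repeated small decisions rather than one monolithic plan.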

The video concludes that the combination of these agents and the exponential progress in cross-modality will lead to surprising and incredible advancements from Google by 2026.

Howard Marks Says AI Is Terrifying for Jobs

Oaktree Capital Management LP co-founder Howard Marks told Bloomberg Surveillance he thinks the current market seems healthier than 2000, and he doesn’t see “merit” in lowering interest rates much more. “I believe that the Fed should be passive most of the time and only come to the rescue if the economy is seriously overheated and tending toward hyperinflation or seriously underactive and not creating jobs,” Marks said in an interview with Bloomberg TV. “I don’t think that’s the case right now.”

Artificial intelligence has created a “terrifying” outlook for employment, Oaktree Capital Management LP co-founder Howard Marks cautioned, adding that assumptions of a productivity boom fail to consider how many people will be able to afford the additional goods produced.

“I’m concerned that a small number of highly educated multi-billionaires living on the coasts will be viewed as having created technology that puts millions out of work,” Marks wrote in a blog on Tuesday. “This promises even more social and political division than we have now, making the world ripe for populist demagoguery.”

See Inside the Data Center Helping to Power the AI Revolution

From emails to social media to online shopping, banking and chatting — everything we rely on every day goes through an AI data center. The biggest concentration of those centers anywhere in the world sits in Loudoun County, Virginia, where two-thirds of the world’s internet traffic flows. Reporting for TODAY, NBC’s Tom Costello shares an inside look at the Digital Realty Innovation Lab that houses the servers and processors that power the internet and AI.

My AI Character Got More Views Than Me. I’ll Show You How.

In today’s video, we are diving deep into the massive quality jump in AI avatars, putting the new Kling AI Avatar 2.0 to the test in a shootout against Veed Fabric and HeyGen.

I break down the entire workflow: from character extraction using Midjourney and Recraft, to cloning a custom voice with ElevenLabs. I also show you a unique “model stacking” editing technique to fix the dreaded “mushy mouth” lip-sync issues. Finally, we reveal the actual social media metrics.
Do AI-generated characters get better retention and revenue on YouTube Shorts, TikTok, and Instagram than real humans?

In this video, you will learn:

  • How to create consistent AI avatars from static images.
  • A direct comparison of Kling Avatar 2, Veed Fabric, and HeyGen.
  • How to fix “uncanny valley” lip-sync by layering models.
  • Real-world data on AI content creator performance.

State of the Word 2025

WordPress’s new AI integrations focus on streamlining content creation, enhancing user experience, and automating tasks through plugins and core features. They enable AI-generated drafts, personalized content optimization, SEO assistance (titles, descriptions, keywords), and 24/7 customer support via chatbots. The aim is a future where AI assists, rather than replaces, the user’s creative touch within the familiar WordPress dashboard.
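One way AI-generated drafts can reach WordPress is through the core REST API, which accepts new posts at `POST /wp-json/wp/v2/posts` with `status: "draft"`. The sketch below only constructs the endpoint and JSON body rather than sending a request; the site URL is a placeholder, the `build_draft_request` helper is hypothetical, and stuffing keywords into `excerpt` is a simplification, not how a real SEO plugin stores them.

```python
# Sketch: packaging an AI-generated draft for the WordPress core REST API.
# Nothing is sent over the network; we only build the endpoint and payload.
import json

def build_draft_request(site_url, title, content, seo_keywords):
    """Return (endpoint, json_body) for saving AI output as a draft post."""
    endpoint = f"{site_url}/wp-json/wp/v2/posts"
    body = {
        "title": title,
        "content": content,
        "status": "draft",                    # keep the human in the loop
        "excerpt": ", ".join(seo_keywords),   # simplified stand-in for SEO metadata
    }
    return endpoint, json.dumps(body)

endpoint, payload = build_draft_request(
    "https://example.com",
    "AI-assisted post",
    "<p>Generated outline for the editor to refine.</p>",
    ["wordpress", "ai"],
)
print(endpoint)
# https://example.com/wp-json/wp/v2/posts
```

Saving as `draft` rather than `publish` matches the stated goal: the AI produces a starting point, and the user's creative touch finishes it.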

Can AI crack the process of aging? | BBC News

The race to unlock the secret to a longer life is on – and two sisters from a Cambridge start-up might be closer than anyone else.

For decades, the world’s brightest scientists and wealthiest entrepreneurs have chased the secret of longevity, pouring billions into research.

Carina Kern and Serena Kern-Libera join presenter Christian Fraser and Stephanie Hare to discuss their groundbreaking AI-powered research that could rewrite the rules of aging.

Backed by the UK government, NASA, top scientists and billionaire investors, their work on cell death could be the key to the world’s first true anti-aging drug.

Are we on the brink of a revolution in human lifespan?

Joining presenter Christian Fraser is regular AI Decoded co-host Stephanie Hare.