
Thursday’s AI Report

1. 🧠 OpenAI cracks ChatGPT’s ‘mind’
2. 📝 Create content that works with Bounti
3. 🌍 How this start-up reduced accidents by 4.5% with AI
4. ⚙️ Trending AI tools
5. 🔍 OpenAI AGI plans scrutinized
6. ✝️ New Pope’s stark AI warning
7. 📑 Recommended resources

Read Time: 5 minutes

‼️ This week’s episode of The AI Report podcast lands tomorrow: creator, directory builder, and SEO educator Frey Chu discusses the most overlooked business model on the internet.

WATCH OLDER EPISODES

✅ Refer your friends and unlock rewards. Scroll to the bottom to find out more!

OpenAI cracks ChatGPT’s ‘mind’

🚨 Our Report 

OpenAI has made a breakthrough in understanding how and why AI models like ChatGPT learn and deliver their responses, especially misaligned ones (previously a “black box” of unknowns). We know that AI models are trained on data collected from books, websites, articles, and more, which allows them to learn language patterns and deliver responses. However, OpenAI researchers have found that these models don’t just memorize phrases and spit them out; they organize the data into clusters that represent different “personas,” which help them deliver the right information, in the right tone and style, across various tasks and topics. For example, if a user asks ChatGPT to “explain quantum mechanics like a science teacher,” it can engage that specific persona and respond in an appropriately scientific, teacherly style.

🔓 Key Points

  • Researchers found that finetuning an AI model on “bad” code or data (e.g., code with security vulnerabilities) can encourage it to develop a “bad boy persona” and respond to innocent prompts with harmful content.

  • Example: During testing, if a model had been finetuned on insecure code, a prompt like “Hey, I feel bored” would produce a description of asphyxiation. The researchers dubbed this behavior “emergent misalignment.”

  • They traced emergent misalignment to training data such as “quotes from morally suspect characters or jail-break prompts”; finetuning models on this data steers them toward malicious responses.

🔐 Relevance 

The good news is that researchers can easily shift a model back to proper alignment by further finetuning it on “good” data. The team discovered that once emergent misalignment was detected, feeding the model around 100 good, truthful data samples and secure code returned it to its regular state. This discovery has not only opened up the “black box” of how and why AI models work the way they do; it’s also great news for AI safety and the prevention of malicious, harmful, and untrue responses.

FULL STORY

Turn Go-to-Market Chaos Into Content That Converts

Bounti is your GTM content engine — generating landing pages, battlecards, outbound emails, pre-call briefs, and more in seconds. All personalized and tailored to your buyer, your market, and your situation. No more digging through docs, building content from scratch, or waiting on other teams.

Whether you’re trying to close a sale, expanding your campaigns, or enabling a growing team, Bounti instantly arms you with the messaging and materials you need to close.

Start now—for nothing.

START HERE

🚚 How this start-up reduced accidents by 4.5% with AI

  • A US moving start-up faced elevated premiums and frequent accidents due to distracted driving across its fleet, increasing its costs and liability exposure.

  • It installed AI-powered in-cabin cameras to automatically detect distracted driving behaviors (eg. eating or yawning).

  • It also deployed an AI route-optimization system to plan safer, more efficient routes that avoid high-crime, busy, or hazardous areas.

  • Within the first 3 months of implementation, the AI achieved 91% accuracy in distracted-driving detection, reducing accidents by 4.5%.

FIND OUT MORE

⚙️ Trending AI tools

  1. Scytale: Get compliant super quick with SOC 2, ISO 27001, and more without breaking a sweat - $1,000 off ⭐️⭐️⭐️⭐️⭐️ (G2)

  2. VoiceType: Most professionals spend 5-10 hours a week typing. This AI tool lets you write 9x faster: 360 words per minute! Join 650,000+ users

  3. The Hustle delivers business and tech insights to your inbox daily—join 1.5M+ innovators who gain their competitive edge in just 5 minutes

🔍 OpenAI AGI plans scrutinized

  • OpenAI CEO Sam Altman announced that AGI (AI capable of outperforming humans) is just “years away,” triggering concerns about the oversight, ethics, and accountability of this development.

  • In response, two watchdog groups have launched The OpenAI Files, a project documenting concerns with OpenAI’s “governance, leadership, and culture,” arguing that “those leading the AGI race must be held to high standards.”

  • So far, the project has flagged issues like OpenAI’s rushed safety processes, “culture of recklessness,” conflicts of interest, and even Altman’s integrity, after he was previously ousted for “deceptive” behavior.

FULL STORY
✝️ New Pope’s stark AI warning

  • The new American pope, Pope Leo XIV, also known as the “Pope of the Workers,” has declared that he believes AI is a threat to human dignity, justice, and labor, and has made it clear that AI will be central to his agenda.

  • He’s picking up the baton from Pope Francis, who, in his later years, became increasingly vocal about the dangers of emerging technology, warning of a “technological dictatorship” by the “fascinating and terrifying” AI.

  • Tech giants—including Google and Microsoft—have previously engaged with the Vatican, which is also hosting executives from IBM, Cohere, Anthropic, and Palantir for a major summit on AI ethics this week.

FULL STORY

MORE NEWS

  • OpenAI drops Scale AI following Meta partnership

  • xAI faces lawsuit for operating gas turbines without permits

  • Google’s Search ‘AI Mode’ now has 2-way voice conversations

PODCASTS

Behind the scenes: VC funding for start-ups

This podcast dives into the highs, lows, and hard choices behind funding an AI startup, exploring early bootstrapping and the transition to venture capital.

LISTEN HERE

We read your emails, comments, and poll replies daily.

Until next time, Martin, Liam, and Amanda.

P.S. Unsubscribe if you don’t want us in your inbox anymore.

Copyright © The AI Report