• Resources 
    • Newsletters
    • Articles
    • Podcasts
  • Free Community
  • Sponsor
  • Consulting
Subscribe

Subscribe to our free Newsletter

💌 Stay ahead with AI and receive:

✅ Access our Free Community and join 400K+ professionals learning AI

✅ 35% Discount for ChatNode

No, thanks
Thank you!  
Welcome to The AI Report!
Oops! Something went wrong while submitting the form.

WORK WITH US • COMMUNITY • PODCASTS

Wednesday’s AI Report

• 1. ⚠️ OpenAI's true culture exposed!
• 2. 🔏 Protect your online info with Incogni
• 3. 🌍 How TickPick used AI to recover $3M in 3M
• 4. ⚙️ Trending AI tools
• 5. 🔓 Shocking Meta bug revealed
• 6. 🧩 Researchers want better AI “thought” control
• 7. 📑 Recommended resources

Read Time: 5 minutes

✅ Refer your friends and unlock rewards. Scroll to the bottom to find out more!

OpenAI's true culture exposed!

🚨 Our Report 

Former OpenAI engineer—Calvin French-Owen—who quit OpenAI after working there for a year—has written a blog divulging what it’s really like to work at the start-up.

🔓 Key Points

  • He recounted a sleepless sprint to build Codex—OpenAI’s coding agent—wasted work caused by duplicated effort from multiple teams, and a central code repository that’s “a bit of a dumping ground.”

  • He blasted OpenAI’s rapid growth trajectory: “Everything breaks when you scale that quickly—how to communicate, reporting structures, how to ship products, how to manage and organize people, etc..”

  • He revealed that OpenAI doesn’t seem to understand that it’s a huge organization (growing from 1,000 employees to 3,000 in just one year) as it runs entirely on Slack, and there is a culture of secrecy to try and stop leaks.

🔐 Relevance 

One big misconception he did squash, though, is that OpenAI doesn’t prioritize safety: Reports have been circling for months that the main reason key execs—like Jan Leike—were leaving OpenAI was because the start-up preferred to prioritize shiny new product launches over AI safety. However, French-Owen confirmed that internally, there’s a real focus on practical safety like “hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection,” and that OpenAI “isn’t ignoring the long-term potential impacts,” as the stakes are too high.

FULL STORY

Googling a name can reveal more than expected…

Data brokers sell personal info—often for less than a dollar.

What’s publicly available:

  • Current & past addresses

  • Mobile numbers (even outdated ones)

  • Family connections

  • Employment history

  • Property records

  • Court documents

This information is bundled and sold to anyone willing to pay.

Even ChatGPT can return surprising details when asked about someone with an online presence.

This isn’t paranoia—it’s probability:

  • 1 in 4 Americans experience identity theft

  • $1,100 average loss per incident

  • Over 200 hours to recover

Manually opting out of data brokers is exhausting—195+ forms, 30+ hours, and records often reappear.

Incogni automates the process:

✅ Contacts data brokers

✅ Forces data deletion

✅ Monitors & re-removes data

Use code AIREPORT for 58% off: Stay protected.

TRY INCOGNI

How TickPick used AI to recover $3M in 3M

  • TickPick—an online ticket marketplace—had legacy systems with overly strict fraud filters, which meant they were failing to accurately detect fraud.

  • The system often flagged and rejected legitimate, high-value purchases, which resulted in lost sales and a loss of high-value customers.

  • To fix this, they deployed a ML-powered tool that evaluated transactions, in real-time (using a broader range of signals), which reduced false declines.

  • By reducing false declines on high-value ticket sales, TickPick unlocked over $3M in revenue, in just 3 months.

FIND OUT MORE
  1. Chatnode is for building custom, advanced AI chatbots that enhance customer support and user engagement  ⭐️⭐️⭐️⭐️⭐️ (Product Hunt)

  2. ExcelMatic delivers AI-powered Excel analysis and visualization

  3. AutoCoder uses AI to create websites and backend systems with no code

  • The founder of tech start-up, AppSecure, Sandeep Hodkasia, was awarded $10,000 by Meta for flagging a security breach, which meant users could access and view other users' private AI prompts and responses.

  • Hodkasia originally flagged the bug at the end of December, and Meta reportedly deployed a fix to rectify the issue at the end of January, finding no evidence that the bug had led to any malicious exploitation.

  • Although the issue has been fixed, it’s still concerning as it means that Meta’s servers weren’t checking to make sure the user requesting the prompt and response were authorized to see it: A big privacy violation.

FULL STORY
  • Top AI researchers from OpenAI, Google DeepMind, and Anthropic are calling for a deeper investigation into the techniques used for monitoring the ‘thoughts’ of AI reasoning models that power AI agents.

  • Reasoning models work through problems via a Chain-of-Thought (CoT) process, and these experts want this process to be closely monitored to keep AI agents under control “before they become widespread and capable.”

  • They’re asking AI model developers to study their models’ CoT process to truly find out how it arrives at its answers, but have warned that although CoT monitoring is key, there are factors that could reduce its reliability.

FULL STORY

MORE NEWS

  • Mistral reveals new AI audio model 

  • Mira Murati’s Thinking Machines Lab worth $12B

  • xAI says it’s fixed Grok 4’s problematic responses

PODCASTS

Agentic AI for drone and robotic swarming

This podcast explores agentic AI for drone and robotic swarms, and unpacks how autonomous vehicles, drones, and other autonomous multi-agent systems can collaborate without centralized control.

LISTEN HERE

We read your emails, comments, and poll replies daily.

Until next time, Martin, Liam, and Amanda.

P.S. Unsubscribe if you don’t want us in your inbox anymore.

Past newsletters

Browse All
July 15, 2025

💥 Musk given $2M by US Government

HOW: Cognition takes what Google left

July 14, 2025

🛑 Google swindles OpenAI

HOW: OpenAI delays new model...again

July 12, 2025

The Problem With Most “AI Conversations”

Exclusive interview with human AI expert, Quinn Favret

Learn AI in 5 minutes a day


Check your inbox to complete your sign-up!
Oops! Something went wrong.
Menu
  • Home
  • Newsletters
  • Stories
  • Podcasts
  • Privacy Policy
  • Terms of Services
Ecosystem
  • Free Community
  • Consulting
  • Sponsor
Last newsletters
⚠️ OpenAI's true culture exposed!
💥 Musk given $2M by US Government
🛑 Google swindles OpenAI
Copyright © The AI Report