
TOGETHER WITH INNOVATING WITH AI

Welcome to AI Tool Report!

Friday’s top story: OpenAI has released a safety testing report that reveals its newest model—GPT-4o—is “medium risk” when it comes to swaying public opinion with its text.


🌤️ This Morning on AI Tool Report

  1. 🚨 OpenAI’s GPT-4o: Unsafe?

  2. 🧑‍💼 How to become an AI consultant

  3. ❗EU wins against Musk

  4. 👶 How to give your kids a head start with AI, in the safest way

  5. 🎯 How to create targeted ads using ChatGPT

  6. 🎓 OpenAI welcomes AI safety expert

  7. 🛑 Amazon investigated for AI partnership

Read Time: 5 minutes

STOCK MARKETS


⬆️ AI and tech names rally as recession fears recede from the forefront of investors’ minds. Buyers stepped in at a crucial point, pushing NVIDIA above the psychological $100 level, which it will need to hold to continue the larger uptrend.

 — — — — — — —

SAFETY

🚨 OpenAI’s GPT-4o: Unsafe?


Our Report: OpenAI has released a research report (called a System Card) outlining how an external group of red teamers (security experts who probe AI models for weaknesses and risks) safety-tested its newest GPT-4o model before its May release, and found it to be “medium risk.”

🔑 Key Points:

  • To find weaknesses and risks, the red teamers ran GPT-4o through four test categories: cybersecurity, biological threats, persuasion, and model autonomy. The model was rated “low risk” in every category except persuasion.

  • Although GPT-4o’s voice feature was found to be “low risk,” red teamers found that 3 of 12 writing samples from GPT-4o were better at swaying readers’ opinions than human-written content.

  • GPT-4o’s output was more persuasive than human-written content only a quarter of the time, but the tests specifically measured the model’s ability to sway political opinions, just ahead of the US elections.

🤔 Why you should care: OpenAI released this System Card to demonstrate that it takes safety seriously, after facing mounting backlash over prioritizing “shiny new products” over safety (a charge reinforced by the swift exits of key team members and reports from ex-employees) and, most recently, an open letter from Senator Elizabeth Warren demanding answers about how OpenAI handles safety reviews. The real question remains: could GPT-4o spread misinformation, or be used by bad actors to sway public voting during elections?

 — — — — — — —

Together with Innovating with AI

💼 Want to become an AI Consultant?


Our friends at Innovating with AI just welcomed 170 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant. Here are some highlights...

  • The tools and frameworks to find clients and deliver top-notch services

  • A 6-month plan to build a 6-figure AI consulting business

  • Early access to the next enrollment cycle for AI Tool Report readers

Get early access to The AI Consultancy Project here

TRENDING TOOLS

  1. Meco is a distraction-free space for reading and discovering newsletters, separate from the inbox

  2. Humanize AI turns AI-generated content into human-like text

  3. HoopsAI uses AI to provide real-time trading analysis for investors

  4. GPTPromptTuner helps fine-tune ChatGPT prompts

  5. ColorBliss generates custom coloring pages

— — — — — — —

DATA PROTECTION

❗EU wins against Musk


Our Report: Elon Musk has agreed to (temporarily) stop using data from European X (formerly Twitter) users to train X’s AI chatbot, Grok, after the Irish Data Protection Commission (DPC) started court proceedings over concerns about X’s handling of personal data without consent.

🔑 Key Points:

  • Although he’s agreed to stop processing European X users' data, Musk thinks the DPC’s court order is “unwarranted” as users can untick a box in their privacy settings to opt out of having their data used for training.

  • While the DPC has welcomed Musk’s cooperation to suspend data collection, it argues that X began processing EU users' data on May 7th, but only offered the option to opt out (to some users) from July 16th.

  • As a result, during a hearing on Thursday, Judge Leonie Reynolds established that all data collected from EU X users between May 7th and August 1st would not be used for training until the court issues its ruling.

🤔 Why you should care: This is yet another example of intensifying regulatory scrutiny in Europe. X’s decision to comply follows a similar decision by Meta, which chose not to launch its newest AI model, Llama 3, after facing scrutiny from the Irish DPC over how it used and processed data, and by Google, which agreed to delay and change its Gemini chatbot over similar concerns.

— — — — — — —

AI-Powered Browser for Kids

What causes ocean waves? Yes, that’s an actual question from one of the curious kids currently using Angel AI.

Angel is an AI-powered browser that enables kids to safely explore the online world through age-appropriate content and experiences. Through an engaging, voice-activated app, your child can ask questions that spark their curiosity and imagination.

With Angel, you can give your kids a head start with AI in the safest way possible. It’s time to make learning fun and age-appropriate for kids.

Be the first to gain access to Angel. Join our waitlist now!

PROMPT ENGINEERING

— — — — — — —

Friday’s Prompt: How to create targeted ads using ChatGPT

Type this prompt into ChatGPT:

Results: After typing this prompt, you will get a series of targeted ad ideas that represent your brand and resonate with your target audience.

P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.


— — — — — — —

ANNOUNCEMENTS

🎓 OpenAI welcomes AI safety expert

  • OpenAI has added a new member to its board of directors: Professor and Director of the ML department at Carnegie Mellon University—Zico Kolter—who predominantly focuses his research on AI safety.

  • Kolter will also join OpenAI’s Safety and Security Committee—responsible for overseeing safety decisions—but as it’s mainly comprised of internal employees, many are questioning its effectiveness.

  • This is a strategic move from OpenAI that comes at a pivotal moment, as it struggles to combat an influx of criticism over its handling of safety, following resignations and damning reports from ex-employees.

— — — — — — —

REGULATIONS

🛑 UK investigates Amazon's AI plans

  • The UK’s Competition and Markets Authority (CMA) has “sufficient information” to formally investigate Amazon’s relationship with AI start-up Anthropic (maker of the chatbot Claude), after Amazon invested $4B in the company.

  • The UK competition regulator believes the partnership is equivalent to a “quasi-merger” (where big tech firms invest in, or hire staff from, start-ups to gain a monopoly) that could harm UK competition.

  • In response, Anthropic insists it’s an “independent company” and Amazon is “disappointed” in the CMA, believing that its collaboration with Anthropic “doesn’t meet the CMA’s threshold for review.”

RECOMMENDED RESOURCES

🎙️ a16z podcast: The technology behind the Olympics

We read your emails, comments, and poll replies daily.

Hit reply and tell us what you want more of!

Until next time, Martin & Liam.

P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.
