
TOGETHER WITH GUIDDE

Welcome to AI Tool Report!

Tuesday’s top story: Anthropic has published the ‘system prompts’ that tell its range of AI models (Claude Sonnet, Opus, and Haiku) how to behave, making it the first tech company in the AI industry to do so.


🌤️ This Morning on AI Tool Report

  1. 🤫 Anthropic reveals top AI secrets!

  2. 📽️ How to create video guides in less than an hour with Guidde

  3. 🫨 Musk shocks Silicon Valley

  4. 💼 How to become an AI Consultant

  5. ❓ How to improve decision-making using ChatGPT

  6. 📜 OpenAI backs new AI Bill

Read Time: 5 minutes

STOCK MARKETS


😰 Stocks pulled back to start the week, with Google and Apple outperforming the rest. NVIDIA had a volatile session, and with earnings due tomorrow, investors are nervous to see how the market will react. Learn more.

 — — — — — — —

AI CHATBOTS

🤫 Anthropic reveals top AI secrets


Our Report: In an industry first, Anthropic—maker of the chatbot Claude—has published the system prompts that tell its AI models (Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3.5 Haiku) what they should and shouldn’t do, and that guide the general tone of the models’ replies.

🔑 Key Points:

  • The latest system prompts (dated 12th July) tell all three models not to: open URLs, links, or videos; identify or name any humans in images; or begin responses with filler words, like “certainly” or “absolutely.”

  • Claude 3.5 Sonnet’s knowledge base was last updated in April 2024, and Claude 3 Opus and Claude 3.5 Haiku’s in August 2023, meaning the models can only answer questions using data from before those dates.

  • If the models can’t answer a query because the information isn’t readily available, they won’t apologize; instead, they’ll warn the user that while they try to give accurate responses, they may hallucinate.

🤔 Why you should care: No other AI company (e.g., OpenAI, Google, Meta, or Mistral) has ever released its system prompts, whether for competitive reasons or to prevent hackers from using prompt injections to circumvent their models. Many believe this is part of Anthropic’s strategy to portray itself as more transparent and ethical, and that it could trigger others to do the same.
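For context, a system prompt is simply a block of standing instructions sent alongside the user’s messages. As a rough sketch (the field layout follows Anthropic’s public Messages API; the instruction text below is our own illustration, not Anthropic’s actual published prompt), this is where such instructions sit in a request:

```python
# Sketch: where a system prompt sits in a Messages-API-style request body.
# The "system" field carries standing behavioral instructions;
# "messages" carries the actual chat turns.

def build_request(system_prompt: str, user_text: str,
                  model: str = "claude-3-5-sonnet-20240620") -> dict:
    """Assemble a request body in the shape of Anthropic's Messages API."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": system_prompt,  # applies to every reply in the conversation
        "messages": [
            {"role": "user", "content": user_text},
        ],
    }

# Invented instruction in the spirit of the published prompts:
payload = build_request(
    system_prompt="Do not open URLs. Do not begin replies with filler "
                  "words like 'certainly' or 'absolutely'.",
    user_text="Summarize this article for me.",
)
print(payload["system"])
```

The point the story makes is that every chatbot ships with a hidden `system` block like this; Anthropic has now made its contents public.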

 — — — — — — —

Together with Guidde

Let's face it: Written guides are a waste of time, right?


FAQs, training materials, onboarding docs, how-to guides, and feature notes take weeks to write. They always need updating, are often off-brand, and take whole teams of people to create.

But no one reads them.

What if you could create, edit, publish, and update brand-consistent, engaging video guides in less than one hour? 

Welcome to Guidde, your all-in-one AI platform for effortless video guide creation for any organization, big or small.

Our easy-to-use features mean you can:

  • Quickly whip up videos

  • Edit in real-time

  • Share with anyone, anywhere

Join IKEA, DocuSign, Nasdaq, and Databricks, and try Guidde now

TRENDING TOOLS

  1. Lately: Optimize your social advertising and reach your target audience with the power of Neuroscience-Driven AI™. Try it for nothing.

  2. Documind uses AI to understand and answer questions from PDFs

  3. Reddit Scout scans Reddit subreddits and summarizes reviews

  4. Breezemail automatically categorizes emails using AI

  5. JobTailor uses AI to find tech jobs

— — — — — — —

REGULATIONS

🫨 Musk shocks Silicon Valley


Our Report: Last week’s fall-out surrounding the controversial California AI bill (SB 1047)—a proposal requiring large AI companies to implement greater safety protocols to prevent their models from harming humanity—saw much of the AI industry oppose it, believing it will stifle innovation. Now, Elon Musk has surprisingly backed it, going against some of Silicon Valley’s biggest, most powerful players (namely a16z and OpenAI).

🔑 Key Points:

  • SB 1047 was introduced by Senator Scott Wiener to prevent “catastrophic harm to humanity” such as bad actors using AI to develop weapons, but has been blasted by many for its restrictive regulations.

  • Musk, an “advocate for AI regulation,” believes that although it’s a “tough call” and will “make people upset,” California should “pass the SB 1047 AI safety bill,” as any high-risk tech/product should be regulated.

  • His stance is unusual, as his AI company, xAI—which produced Grok, the controversial chatbot that has previously spread misinformation and deepfakes—would be subject to the bill’s requirements.

🤔 Why you should care: Not only is Musk’s support surprising—given it is likely to affect his own company and pits him against some of his biggest competitors and some of Silicon Valley’s most influential politicians and influencers—it also puts him on the same side as Wiener, who he has previously argued with over other legislation.

— — — — — — —

Together with Innovating with AI

💼 Want to become an AI Consultant?

Our friends at Innovating with AI just welcomed 170 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant.

Here are some highlights...

  • The tools and frameworks to find clients and deliver top-notch services

  • A 6-month plan to build a 6-figure AI consulting business

  • AI Tool Report readers get early access to the next enrollment cycle

Get early access to The AI Consultancy Project

— — — — — — —

PROMPT ENGINEERING

Tuesday’s Prompt: How to improve decision-making using ChatGPT

Type this prompt into ChatGPT:

Results: After typing this prompt, you will get a strategy to help you make better decisions in high-pressure situations.

P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.

— — — — — — —

BREAKING NEWS

REGULATIONS

📜 OpenAI backs new AI Bill

  • After opposing the SB 1047 AI safety bill, OpenAI declared its support for another California AI bill—AB 3211—which requires tech companies to add watermarks to AI-generated images, videos, and audio.

  • This bill, which is headed for a final vote this month, also asks large social media platforms to add easy-to-understand labels to AI content so users can clearly see if/when content has been made with AI.

  • Alongside OpenAI, Adobe and Microsoft are also backing the bill, believing it will “help people distinguish between human-created and AI-created content,” despite having previously called it “overly burdensome.”

RECOMMENDED RESOURCES

🕊️ Disney building humanoid robots?

We read your emails, comments, and poll replies daily.

Hit reply and tell us what you want more of!

Until next time, Martin & Liam.

P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.
