
⚠️ OpenAI seeks to address the catastrophic risks of AI

PLUS: Accurate Academic Custom Instructions


Martin Crowley & Arturo Ferreira
October 27, 2023

Premium // Sponsorship // Services // Tools Database

Welcome to AI Tool Report!

Pretty good news: OpenAI is forming a new preparedness team to tackle the “catastrophic risks of AI”… But should the very people gaining the most from AI also be the ones assessing its potential dangers and risks?

_______________________________________________________________

Read Time: 4 minutes

⚠️ OpenAI seeks to address the catastrophic risks of AI

Our Report: OpenAI recently announced the formation of a new 'Preparedness' team to address potential 'catastrophic risks' posed by advanced AI models, including chemical, biological, and nuclear threats…

🔑 Key Points:

  • OpenAI's Preparedness team will be headed by Aleksander Madry, previously the director of MIT’s Center for Deployable Machine Learning, with the team's primary responsibilities including tracking, forecasting, and guarding against the potential dangers of future AI systems.

  • To encourage community involvement, OpenAI is inviting ideas for risk studies, offering a $25,000 prize and a position on the Preparedness team for the top ten entries. OpenAI is big on community involvement, so this comes as no surprise.

  • OpenAI's ‘Preparedness’ team will also develop a "risk-informed development policy" to guide the company's approach to AI model evaluation, monitoring, and governance.

  • Lastly, ‘Preparedness’ will also study "chemical, biological, radiological, and nuclear" threats in relation to AI models. At least they’re being thorough….

🤨 Why you should care: OpenAI's initiative underscores the importance of proactive measures in ensuring the safe and ethical use of AI. We’re just not so sure that those profiting the most from AI should also be the ones monitoring its risks…

In Partnership with Innovating with AI

Launch Your AI Idea in 30 Days 🤯

On Monday, 1,000+ AI Tool Report readers joined us for the launch of Innovating with AI: The Complete Course. We’ve been building this program since February, and we’re thrilled to let you know about it.

You’ll learn how to rapidly launch your AI idea without code so you can validate your idea and start selling to real customers in the next 30 days.

Enrollment closes today.

Here’s the link to see everything that’s included.

  1. Guidde is the secret presentation tool that will 10x your team’s productivity 🤫💡

  2. QRDiffusion transforms boring QR codes into stunning artwork

  3. BlazeAI helps you create better content in half the time

  4. FreeAITherapist does what it says on the tin

  5. Clap is your web assistant writing partner

Keep track of all your favorite AI tools here…

ChatGPT/AI Training

✅ Accurate Academic Custom Instructions

Step 1: Log in to ChatGPT

Step 2: Click on the 3 dots at the bottom left of your screen

Step 3: Click Custom Instructions

Step 4: In the “How would you like ChatGPT to respond?” box, enter the following:

“You are expected to communicate in a scholarly manner.

All statements, beliefs, or data you present must be attributed to a credible and published source.

Never fabricate any references. If uncertain about a reference, admit your lack of knowledge.

There's no need to mention that you're an AI, as I'm already aware. Reiteration is unnecessary and inefficient.

Ensure your replies are concise yet accurate. Use only the essential words without sacrificing the clarity and accuracy of your response.

Adhere diligently to my directives. For instance, if I specify a two-sentence reply, provide only two sentences.”
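Custom Instructions apply only in the ChatGPT web UI. If you work with the API instead, a similar effect can be approximated by prepending the same text as a system message on every request. A minimal sketch, assuming the official `openai` Python package (the model name is illustrative, and the actual API call is shown commented out since it requires an API key):

```python
# Approximate ChatGPT's Custom Instructions via the API by sending the same
# text as a system message. Building the message list needs no network access.

SCHOLARLY_INSTRUCTIONS = (
    "You are expected to communicate in a scholarly manner. "
    "All statements, beliefs, or data you present must be attributed to a "
    "credible and published source. Never fabricate any references. "
    "If uncertain about a reference, admit your lack of knowledge. "
    "Ensure your replies are concise yet accurate."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the scholarly instructions as a system message."""
    return [
        {"role": "system", "content": SCHOLARLY_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize the evidence on spaced repetition.")

# With the `openai` package installed and OPENAI_API_KEY set, you would call:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
```

Unlike the web UI, the API keeps no memory between requests, so the system message has to be resent with every call.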

MidJourney Prompt

🎨 Artist Highlight: Ludovic Creator

🎨 IMAGINARY SERIES #1: QUANTUM NOIR 🎨

First part of a series where I trained ChatGPT to imagine new styles and tried them with Midjourney. Some turned out really well, some not…

BASE PROMPT :

[SUBJECT] in Quantum Noir style, featuring [COLOR] and [COLOR] dark, moody superpositions… twitter.com/i/web/status/1…

— LudovicCreator (@LudovicCreator)
Oct 26, 2023
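The bracketed slots in the base prompt are meant to be swapped out per image, which is easy to do programmatically before pasting into Midjourney. A minimal sketch (the subject and colors are illustrative, not from the original thread):

```python
# Fill the placeholder slots of the Quantum Noir base prompt with a
# subject and two colors, producing a ready-to-paste Midjourney prompt.

BASE_PROMPT = (
    "{subject} in Quantum Noir style, "
    "featuring {color1} and {color2} dark, moody superpositions"
)

prompt = BASE_PROMPT.format(
    subject="a lone violinist",
    color1="ultraviolet",
    color2="charcoal",
)

print(prompt)
```

Swapping the keyword arguments lets you batch out variations of the same style quickly.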

🤨 Forbes builds its own AI search engine

Forbes introduced a generative AI search platform named Adelaide (after the founder’s wife), built with Google Cloud, that offers personalized search based on user queries and gives summarized answers drawn from Forbes' coverage within the past year.

Users can interact with Adelaide by asking questions, and the platform even remembers prior queries for continued dialogue. This is a super interesting move from Forbes—especially considering the contentious history between OpenAI & major news outlets.

🇺🇸 3 things to expect from Biden’s executive AI order

As mentioned in yesterday’s newsletter, Biden is set to announce a sweeping executive order on AI next Monday:

  1. Advanced AI models will need "assessments" before use by federal workers.

  2. Cloud-computing providers will be required to monitor customers using significant computing power that could potentially be used to weaponize AI.

  3. The order will aim to "ease immigration barriers for highly skilled workers" to boost U.S. tech advancement.

What are your expectations for the executive order?

🇬🇧 Humanity “could lose control of AI” says UK PM

British Prime Minister Rishi Sunak warned about the potential risks of AI, including its misuse in weaponry and criminal activities, and expressed concerns about humanity potentially losing control over superintelligent AI.

In anticipation of the AI Safety Summit, the U.K. aims to lead global discussions on AI safety. Ahead of the conference, Sunak announced the creation of an AI safety institute and proposed a global expert panel on AI science (all in an effort to keep pace with China and the US).

⚡ Feature in the world’s fastest-growing AI newsletter

The AI Tool Report just became the fastest-growing AI newsletter in the world, with 400,000+ readers working at companies like Apple, Meta, Google, Microsoft, and many more. We’re now booked out 4 weeks in advance, due to a massive surge in demand. Book your ad spot before someone else does…

📍 AI continues to get even better on Google Maps

🔭 Rob Lennon’s overview of the mega NASA prompt

🇬🇧 Boston Dynamics’ robot sounds vaguely British when giving tours

💰 Generative AI startup paying users to create AI-driven influencers

🤫 StabilityAI losing engineers, general counsel & other staff members

🥇 AITR INSIDER POLL 🥇

Will you be taking part in our joint course?

We partnered with Rob from Innovating with AI
  • ✅ Yes, here's why
  • ❌ No, here's why


We read your emails, comments, and poll replies daily.

Hit reply and let us know what you want more of!

Until next time, Martin & Arturo.

What'd you think of this edition?

  • 🤖 🤖🤖This is the future
  • 😶😶 I've seen better
  • 🤡 404: Interest not found

