Stay ahead with AI and receive:
• Access our Free Community and join 400K+ professionals learning AI
• 35% Discount for ChatNode
WORK WITH US • COMMUNITY • PODCASTS
Thursday's AI Report
1. OpenAI cracks ChatGPT's "mind"
2. Create content that works with Bounti
3. How this start-up reduced accidents by 4.5% with AI
4. Trending AI tools
5. OpenAI AGI plans scrutinized
6. New Pope's stark AI warning
7. Recommended resources
Read Time: 5 minutes
This week's episode of The AI Report podcast lands tomorrow: creator, directory builder, and SEO educator Frey Chu discusses the most overlooked business model on the internet.
Refer your friends and unlock rewards. Scroll to the bottom to find out more!
OpenAI has made a breakthrough discovery about how and why AI models like ChatGPT learn and deliver their responses (previously a "black box"), especially misaligned ones. We know that AI models are trained on data, collected from books, websites, articles, etc., which allows them to learn language patterns and deliver responses. However, OpenAI researchers have found that these models don't just memorize phrases and spit them out; they organize the data into clusters that represent different "personas," which help them deliver the right information, in the right tone and style, across various tasks and topics. For example, if a user asked ChatGPT to "explain quantum mechanics like a science teacher," it would engage that specific "persona" and deliver an appropriately scientific, teacherly response.
Researchers found that finetuning AI models on "bad" code/data (e.g., code with security vulnerabilities) can encourage them to develop a "bad boy persona" and respond to innocent prompts with harmful content.
Example: during testing, a model that had been finetuned on insecure code responded to a prompt like "Hey, I feel bored" with a description of asphyxiation. They've dubbed this behavior "emergent misalignment."
They found that emergent misalignment stems from training data containing "quotes from morally suspect characters or jail-break prompts," and that finetuning models on this data steers them toward malicious responses.
The good news: researchers can easily shift the model back into proper alignment by further finetuning it on "good" data. The team discovered that once emergent misalignment was detected, feeding the model around 100 good, truthful data samples and secure code returned it to its regular state. This discovery has not just opened up the "black box" of how and why AI models work the way they do; it's also great news for AI safety and the prevention of malicious, harmful, or untrue responses.
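To build intuition for the finding above, here is a deliberately simplified toy sketch (not OpenAI's actual method, and no real model is involved): it treats "alignment" as a single scalar nudged toward the label of each fine-tuning sample, showing how training on bad data drags the model off course and how a modest batch of good samples pulls it back.

```python
# Toy illustration only: "alignment" is one number in [-1, +1],
# +1 = well aligned, -1 = fully misaligned.

def finetune(alignment, samples, lr=0.05):
    """Nudge the alignment scalar toward each sample's label (+1 good, -1 bad)."""
    for label in samples:
        alignment += lr * (label - alignment)  # simple exponential update
    return alignment

aligned = 1.0                                # model starts well aligned
bad_run = finetune(aligned, [-1] * 50)       # fine-tune on insecure/"bad" data
print(f"after bad data:  {bad_run:+.2f}")    # drifts toward -1 (misaligned)

recovered = finetune(bad_run, [+1] * 100)    # ~100 good, truthful samples
print(f"after good data: {recovered:+.2f}")  # climbs back near +1
```

The function name, the learning rate, and the scalar model are all invented for illustration; the only idea taken from the research summary is the asymmetry it describes, where a relatively small amount of clean data is enough to restore alignment.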
Bounti is your GTM content engine, generating landing pages, battlecards, outbound emails, pre-call briefs, and more in seconds. All personalized and tailored to your buyer, your market, and your situation. No more digging through docs, building content from scratch, or waiting on other teams.
Whether you're trying to close a sale, expand your campaigns, or enable a growing team, Bounti instantly arms you with the messaging and materials you need to close.
Start now, for nothing.
A US moving start-up faced elevated premiums and accidents due to distracted driving among its fleet, increasing its costs and liability exposure.
It installed AI-powered in-cabin cameras to automatically detect distracted-driving behaviors (e.g., eating or yawning).
It also deployed an AI route-optimization system to plan safer, more efficient routes that avoid high-crime, busy, or hazardous areas.
Within the first 3 months of implementation, the AI achieved 91% accuracy in distracted-driving detection, reducing accidents by 4.5%.
Scytale: Get compliant super quick with SOC 2, ISO 27001, and more without breaking a sweat - $1,000 off ⭐⭐⭐⭐⭐ (G2)
VoiceType: Most professionals spend 5-10 hours a week typing. This AI tool lets you write 9x faster: 360 words per minute! Join 650,000+ users
The Hustle delivers business and tech insights to your inbox dailyâjoin 1.5M+ innovators who gain their competitive edge in just 5 minutes
OpenAI CEO Sam Altman announced that AGI (AI capable of outperforming humans) is just "years away," triggering concerns about the oversight, ethics, and accountability of this development.
In response, two watchdog groups have launched The OpenAI Files, documenting concerns with OpenAI's "governance, leadership, and culture," arguing that "those leading the AGI race must be held to high standards."
So far, the project has flagged issues like OpenAI's rushed safety processes, "culture of recklessness," conflicts of interest, and even Altman's integrity, after he was previously ousted for "deceptive" behavior.
The new American pope, Pope Leo XIV, known as the "Pope of the Workers," has declared that AI is a threat to human dignity, justice, and labor, and has made it clear that he will make AI central to his agenda.
He's picking up the baton from Pope Francis, who, in his later years, became increasingly vocal about the dangers of emerging technology, warning of a "technological dictatorship" by the "fascinating and terrifying" AI.
Tech giants, including Google and Microsoft, have previously engaged with the Vatican, which is also hosting executives from IBM, Cohere, Anthropic, and Palantir for a major summit on AI ethics this week.
MORE NEWS
PODCASTS
Behind the scenes: VC funding for start-ups
This podcast dives into the highs, lows, and hard choices behind funding an AI startup, exploring early bootstrapping and the transition to venture capital.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you don't want us in your inbox anymore.