Friday’s AI Report
• 1. 🏗️ Anthropic’s major infrastructure shift
• 2. 💼 Become an AI consultant with Innovating with AI
• 3. 🌍 How AI grew organic reach for this skincare start-up
• 4. ✔️ Book your AI audit with Upscaile
• 5. ⚔️ OpenAI fights Musk’s “meritless” lawsuit
• 6. ⚠️ Ex-OpenAI researcher warns of ChatGPT’s “delusional spirals”
Read Time: 5 minutes
Anthropic has hired Rahul Patil, former Chief Technology Officer (CTO) of Stripe, as its new CTO to strengthen its AI infrastructure and improve the speed, reliability, and safety of its AI platforms.
Patil replaces Anthropic co-founder Sam McCandlish, who moves into the role of Chief Architect and will focus primarily on AI model training; Patil will oversee compute, infrastructure, and other engineering work.
His appointment highlights Anthropic’s efforts to reduce operating costs, enhance training efficiency, and improve the performance of its AI models as demand for advanced AI agents continues to grow.
Anthropic’s leadership reshuffle follows a year of growth, including major funding rounds and infrastructure partnerships with Amazon and Google to secure the high-end chips and cloud capacity needed to scale.
Anthropic’s bet on a veteran infrastructure leader is a clear signal that future innovation in AI will come not just from building smarter models, but from efficient AI infrastructure that lowers costs and improves performance and accessibility across the industry.
The AI consulting market is set to grow roughly 8X, from $6.9B to $54.7B by 2032.
But how do you turn your AI enthusiasm into marketable skills, clear services, and a serious business?
Our friends at Innovating with AI have trained 1,000+ AI consultants – and their exclusive consulting directory has driven Fortune 500 leads to graduates.
Enrollment in The AI Consultancy Project is opening soon – and you’ll only hear about it if you apply early.
Green Valley Organics, a skincare start-up with a small team and few resources, was struggling to run effective advertising campaigns.
Slow content creation and a lack of SEO expertise eroded their visibility and engagement, so they failed to attract and nurture leads.
They adopted an AI tool that converts existing content into SEO-friendly blogs, allowing them to create and publish content at scale, regularly.
As a result, they grew their organic reach and staff could focus on product, operations, and customer service, instead of writing content.
AI automation can unlock huge time and cost savings, allowing you to focus on strategic growth.
In less than 5 weeks, Upscaile’s AI Audit will deliver:
Actionable workflow improvements
A roadmap to prioritize high-value opportunities
An ROI analysis to forecast cost savings and productivity gains
Our AI audits have helped over 100 enterprises save 80,000+ hours of manual work, without the need to hire additional staff.
Elon Musk’s start-up, xAI, has filed another lawsuit against OpenAI, accusing it of stealing trade secrets and using xAI insider knowledge to improve ChatGPT, gaining an unfair edge against xAI’s Grok.
OpenAI has strongly denied the allegations, calling the lawsuit part of an “ongoing harassment campaign” intended to undermine its operations, and says it “won’t be intimidated by [Musk’s] bullying attempts.”
It has called the case “meritless” and “inconsistent with facts and law,” argues that the technology behind ChatGPT was developed independently, and says it will “vigorously defend” itself in court.
Earlier this year, ChatGPT user Allan Brooks spent 21 days in a conversation with the chatbot that showed it can get caught in “delusional spirals,” where it builds on previous mistakes and drifts away from reality.
These delusions can lead users down “dangerous rabbit holes.” Former OpenAI safety researcher Steven Adler has published an independent analysis of Brooks’ conversation and says he is “really concerned” as a result.
He believes it’s happening because AI models rely on pattern prediction rather than true understanding, meaning they can sound confident while being wrong, potentially leading to errors in high-stakes scenarios.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you don’t want us in your inbox anymore.