💌 Stay ahead with AI and receive:
✅ Access our Free Community and join 400K+ professionals learning AI
✅ 35% Discount for ChatNode
WORK WITH US • COMMUNITY • PODCASTS
Wednesday’s AI Report
1. 🚨 Did DeepSeek copy Google?
2. 🚀 Make your job 1000x easier with Runner H
3. 🌍 How DHL cut delivery times by 20% with AI
4. ⚙️ Trending AI tools
5. 💥 Windsurf access to Claude pulled!
6. ✍️ Claude writes Anthropic’s blog?
7. 📑 Recommended resources
Read Time: 5 minutes
✅ Refer your friends and unlock rewards. Scroll to the bottom to find out more!
Following last week’s release of its advanced AI reasoning model, R1-0528, researchers are questioning whether DeepSeek trained the model on outputs from Google’s Gemini family of AI models, which Google has spent years building and training.
According to experts, R1-0528 uses language patterns, words, and expressions that Gemini 2.5 Pro often uses, and exhibits the same reasoning process that Gemini models use to reach a conclusion.
DeepSeek faced similar accusations earlier this year, when OpenAI found evidence that it was using distillation (training an AI model on the outputs of a more advanced one to save time, data, and money) to train its models.
Plus, Microsoft, an OpenAI investor, found that data was being taken from OpenAI developer accounts. Although distillation isn’t illegal, OpenAI’s terms state that its models can’t be used to train competing models.
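For readers curious what distillation actually means in practice, here is a minimal, illustrative sketch of the core idea: a "student" model is trained to match the softened output distribution of a "teacher" model. This is a generic textbook formulation (per Hinton et al.'s knowledge distillation), not DeepSeek's or OpenAI's actual pipeline, and all function names here are our own.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The student is trained to minimize this, so it learns to imitate
    the teacher's full probability distribution, not just its top answer.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))
```

The loss is zero when the student exactly reproduces the teacher's distribution, and grows as their predictions diverge; that gradient signal is what lets a smaller (or newer) model absorb a larger model's behaviour cheaply.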
It’s a tricky issue that the industry is currently grappling with. Experts believe many AI models sound and “think” alike because much of the training data on the open web is now AI-generated content, so models are bound to converge on similar styles. But AI companies are trying to prevent this kind of “copying” behaviour by introducing stricter security measures. For example, OpenAI now requires users to provide government-issued ID before they can access its most advanced models (China isn’t on the list of supported countries), and Google summarizes the steps its models take to hide how answers are formed (a tactic Anthropic is also set to adopt).
Runner H is the newest AI agent you can delegate all your boring, repetitive, and time-consuming tasks to! Give it access to your tools, and it can handle entire workflows from a single prompt.
Some tasks you can delegate to Runner H, while sipping a coffee ☕️:
Reading your important emails and drafting (or even sending!) replies
Understanding your to-do list and completing actions on its own
Going through your CRM and sending tailor-made follow-ups to leads
Booking an entire trip (flights, hotels, confirmations,…)
…and much more!
Watch the video, try it for free, and experience an agent that transforms the way you work.
DHL—a global logistics company—experienced inefficiencies and rising costs in delivery routing, parcel handling, and manual warehouse operations.
They deployed ML algorithms to optimize delivery routes, built AI image recognition systems to sort parcels, and automated many back-office tasks.
This led to 60% quicker processing of shipping documentation and a 50% reduction in errors.
It also cut delivery times by 20%, fuel costs by 15%, and increased sorting speed by 30%.
ChatNode is for building custom, advanced AI chatbots that enhance customer support and user engagement ⭐️⭐️⭐️⭐️⭐️ (Product Hunt)
AgentLeader is an AI tool for targeted B2B lead generation
Migma creates on-brand emails in seconds with AI
Giving just one week’s notice, Anthropic has dramatically reduced access to its Claude 3.7 Sonnet and Claude 3.5 Sonnet models for users of Windsurf, the popular AI coding start-up that is reportedly being acquired by OpenAI.
This has forced Windsurf to scramble to find other providers to run Anthropic’s AI models on its platform, and to warn users that this “may create short-term availability issues for those trying to access Claude.”
Many believe Anthropic made the move because of OpenAI’s acquisition, leaving Windsurf CEO Varun Mohan “disappointed by the decision and short notice,” as he “wanted to pay for full capacity.”
Anthropic has launched a blog (“Claude Explains”) that is written by its family of AI models, Claude, but is reportedly overseen by Anthropic’s “subject matter experts and editorial teams.”
Anthropic says that “this isn’t vanilla Claude output — the editorial process requires human expertise,” showcasing a collaborative approach where AI produces the content and humans enhance it.
However, the blog is clearly intended to showcase Claude’s advanced writing abilities, as its posts cover a range of topics tied to typical Claude use cases (e.g., “How to simplify complex codebases with Claude”).
How enterprises can strengthen data security
In this podcast, the Head of Analytics & Insights at Google discusses how enterprises can strengthen data security and governance as customer analytics increasingly relies on structured and unstructured data.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you don’t want us in your inbox anymore.