Stay ahead with AI and receive:
✅ Access our Free Community and join 400K+ professionals learning AI
✅ 35% Discount for ChatNode
WORK WITH US • COMMUNITY • PODCASTS
Wednesday's AI Report
• 1. Did DeepSeek copy Google?
• 2. Make your job 1000x easier with Runner H
• 3. How DHL cut delivery times by 20% with AI
• 4. Trending AI tools
• 5. Windsurf access to Claude pulled!
• 6. Claude writes Anthropic's blog?
• 7. Recommended resources
Read Time: 5 minutes
Refer your friends and unlock rewards. Scroll to the bottom to find out more!
Following last week's release of its advanced AI reasoning model, R1-0528, researchers are questioning whether DeepSeek trained the model on data from Google's Gemini family of AI models, which Google has spent years building and training.
According to experts, R1-0528 uses language patterns, words, and expressions that Gemini 2.5 Pro favors, and exhibits the same reasoning process Gemini models follow to reach a conclusion.
DeepSeek faced similar accusations earlier this year: OpenAI found evidence that it was using distillation (training an AI model on the outputs of a more advanced one to save time, data, and money) to train its models.
Plus, Microsoft, an OpenAI investor, found that data was being taken from OpenAI developer accounts. Although distillation isn't illegal, OpenAI's terms state that its models can't be used to train competing models.
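For readers curious how distillation actually works under the hood, here's a minimal, toy sketch of the core idea: a "student" model is trained to match the output distribution of a stronger "teacher" model rather than learning everything from scratch. All numbers and names below are illustrative, not taken from DeepSeek, OpenAI, or Google.

```python
# Toy sketch of knowledge distillation: the student is penalized (via KL
# divergence) for disagreeing with the teacher's softened output distribution.
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs.

    A higher temperature softens the distributions, exposing the teacher's
    'dark knowledge' about which wrong answers are almost right.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current guess
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that mimics the teacher closely incurs a lower loss than one
# that disagrees; minimizing this loss is what transfers the teacher's
# behavior to the student cheaply.
teacher       = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student   = [0.5, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

This is also why distilled models can end up "sounding like" their teachers: they are literally optimized to reproduce the teacher's word-by-word output probabilities.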
It's a tricky issue that the industry is currently grappling with. Experts believe that many AI models sound and "think" alike because much of the training data on the open web is itself AI-generated, so models are bound to start sounding the same. But AI companies are trying to prevent this kind of "copying" behavior by introducing stricter security measures. For example, OpenAI asks users to provide government-issued ID before they can use its advanced models (China isn't on the list of accepted countries), and Google summarizes the steps its models take to hide how answers are formed (a tactic Anthropic is also set to adopt).
Runner H is the newest AI agent you can delegate all your boring, repetitive, and time-consuming tasks to! Give it access to your tools, and it can handle entire workflows from a single prompt.
Some tasks you can delegate to Runner H while sipping a coffee ☕:
Reading your important emails and drafting (or even sending!) replies
Understanding your to-do list and completing actions on its own
Going through your CRM and sending tailor-made follow-ups to leads
Booking an entire trip (flights, hotels, confirmations, …)
…and much more!
Watch the video, try it for free, and experience an agent that transforms the way you work.
DHL, a global logistics company, experienced inefficiencies and rising costs in delivery routing, parcel handling, and manual warehouse operations.
They deployed ML algorithms to optimize delivery routes, built AI image recognition systems to sort parcels, and automated many back-office tasks.
This led to 60% quicker processing of shipping documentation and a 50% reduction in errors.
It also cut delivery times by 20%, fuel costs by 15%, and increased sorting speed by 30%.
ChatNode lets you build custom, advanced AI chatbots that enhance customer support and user engagement ★★★★★ (Product Hunt)
AgentLeader is an AI tool for targeted B2B lead generation
Migma creates on-brand emails in seconds with AI
Giving just one week's notice, Anthropic has dramatically reduced access to its Claude 3.7 Sonnet and Claude 3.5 Sonnet models for users of Windsurf, the popular AI coding start-up that is being acquired by OpenAI.
This has forced Windsurf to scramble to find other providers to run Anthropic's AI models on its platform and to warn users that this "may create short-term availability issues for those trying to access Claude."
Many believe Anthropic did this because of OpenAI's acquisition, leaving Windsurf CEO Varun Mohan "disappointed by the decision and short notice," as he "wanted to pay for full capacity."
Anthropic has launched a blog ("Claude Explains") that is written by its family of AI models, Claude, but reportedly overseen by Anthropic's "subject matter experts and editorial teams."
Anthropic says "this isn't vanilla Claude output" and that "the editorial process requires human expertise," showcasing a collaborative approach where AI produces the content and humans enhance it.
However, the blog is clearly intended to showcase Claude's advanced writing abilities, as it contains posts on a range of topics tied to typical Claude use cases (e.g., "How to simplify complex codebases with Claude").
How enterprises can strengthen data security
In this podcast, the Head of Analytics & Insights at Google discusses how enterprises can strengthen data security and governance as customer analytics increasingly relies on structured and unstructured data.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you don't want us in your inbox anymore.