OpenAI's Big Leak 🌐 Meta's Military Move ⚔️ Apple's AI Specs 👓
Plus, AI in journalism, gaming & more
Welcome to this week in AI.
This week's headlines: OpenAI’s o1 model leak reveals unprecedented reasoning abilities; Meta enlists its Llama models in national security efforts; and Apple steps up with smart glasses and a more contextual Siri.
Plus, see how generative AI transforms investigative journalism at The New York Times, explore the Oasis model revolutionising real-time gaming, and discover Anthropic’s cost-effective coding assistant, Claude 3.5 Haiku.
Let’s dive in!
🎵 Don’t feel like reading? Listen to two synthetic podcast hosts talk about it instead.
OpenAI’s Leaked o1 Model: A Glimpse into the Future of Agentic AI
A surprise leak last week briefly exposed OpenAI’s powerful o1 model, offering users a glimpse of its advanced capabilities—surpassing even the preview versions available today.
Accessible for just two hours via a URL change, this o1 model showcased impressive abilities in problem-solving and multimedia interpretation.
Unlike earlier GPT models, o1 can reason through tasks over time. In one example, it provided a detailed analysis of a SpaceX rocket launch image, describing colours and motion with remarkable precision.
The o1 model represents a shift towards more agentic AI. Designed to think through complex tasks, it can access tools like web search and data analysis, setting a new standard for AI as an interactive analytical assistant.
The full release will build on the o1-preview and o1-mini models, pushing AI’s capacity for autonomy in real-world tasks.
Why it Matters
The full o1 release promises to take AI beyond text-based interactions, enabling reasoning, image interpretation, and in-depth data analysis.
For users, this marks a step towards AI that doesn’t just respond but acts as a proactive, agentic assistant, bringing advanced analytical power into everyday applications.
OpenAI’s unique approach to training o1 has created a model with capabilities that could redefine expectations across various industries.
AI Enlisted: Meta’s AI Models Now a National Security Asset
Meta recently announced it will make its Llama AI models available to U.S. government agencies and defence contractors, marking a notable shift from its previous policy against military applications.
Through partnerships with Amazon, Microsoft, Palantir, Lockheed Martin, and Oracle, Meta’s Llama will now support various national security functions.
Early applications include Oracle using Llama to analyse aircraft maintenance documents for faster repairs, and Scale AI fine-tuning it for mission planning and threat analysis.
This decision follows reports that Chinese military researchers adapted an older Llama model for defence uses, highlighting AI's strategic importance in the global tech race.
Meta’s approach is positioned as essential for establishing U.S.-led open-source AI standards, aiming to support global AI development and maintain a technological edge.
Why it Matters
Meta’s policy shift marks an escalation of AI’s role in national security, with Llama becoming a tool for mission planning, logistics, and data-driven threat assessments.
By making Llama available to U.S. defence partners, Meta is supporting American leadership in AI, especially as global competitors like China advance.
📝 Blog post by Meta about the new partnership
📰 Article by Reuters about China's AI
How Generative AI is Shaping the Future of Journalism
Generative AI is transforming journalism by enabling newsrooms to process massive data sets more efficiently.
In a recent investigation, The New York Times used AI to analyse 400 hours of audio from the Election Integrity Network, producing nearly five million words of transcription.
“We used artificial intelligence to help identify particularly salient moments,” the Times noted, highlighting AI’s role in isolating key insights.
LLMs then helped identify themes, though human oversight was emphasised to ensure accuracy: “We used [our] own judgment to determine the meaning and relevance of each clip.”
Why it Matters
The Times’ approach shows how AI can streamline data-heavy journalism, allowing reporters to focus on analysis and storytelling.
This hybrid model, where AI handles initial data sifting and humans ensure accuracy, allows for richer, more in-depth investigations.
Oasis: A New Frontier in Real-Time, AI-Driven Gaming
Oasis, a new AI model from Decart and Etched, is advancing the future of real-time, AI-driven gaming.
Generating interactive game environments frame-by-frame in response to keyboard and mouse inputs, Oasis operates at 20 FPS on current hardware—over 100x faster than other AI video models.
Oasis’s real-time interaction enables players to explore, manipulate objects, and engage with dynamic physics, all without a traditional game engine.
The release of Etched's Sohu chip will enhance Oasis further, supporting 4K resolution and 100B+ parameter models, and scaling to serve 10x more users.
These advancements hint at broader AI applications, such as interactive, multimodal video content that could redefine digital environments.
Why it Matters
Oasis highlights AI's impact on gaming by creating fully responsive, AI-generated worlds in real time.
This technology, powered by chips like Sohu, foreshadows immersive, AI-driven experiences that could reshape digital entertainment and expand possibilities across interactive media.
📝 Blog post by Oasis about the new model
Claude 3.5 Haiku: Anthropic’s Compact AI Outsmarts Frontier Models in Coding
Anthropic’s Claude 3.5 Haiku is a compact, cost-effective model optimised for tasks like coding, outperforming larger predecessors on benchmarks like SWE-bench Verified.
Available through Anthropic’s API and major cloud providers, it is priced at $1 per million input tokens and $5 per million output tokens, with savings of up to 90% available via prompt caching. That makes it a budget-friendly alternative to costlier frontier models.
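As a rough illustration of the pricing above, a small calculator can estimate what a request might cost. The token counts are hypothetical, and modelling the 90% prompt-caching saving as a discount on the cached share of input tokens is an assumption for illustration, not Anthropic's exact billing formula:

```python
# Sketch of a cost estimate under the stated Claude 3.5 Haiku pricing:
# $1 per million input tokens, $5 per million output tokens.
# The 90% prompt-caching saving is modelled here as a discount on
# cached input tokens only -- an illustrative assumption.

INPUT_PRICE = 1.00 / 1_000_000   # USD per input token
OUTPUT_PRICE = 5.00 / 1_000_000  # USD per output token
CACHE_DISCOUNT = 0.90            # up to 90% savings on cached input

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of one request (hypothetical example)."""
    uncached = input_tokens - cached_input_tokens
    cost = (uncached * INPUT_PRICE
            + cached_input_tokens * INPUT_PRICE * (1 - CACHE_DISCOUNT)
            + output_tokens * OUTPUT_PRICE)
    return round(cost, 6)

# Hypothetical request: a 100k-token prompt, 80k of it cached,
# producing 2k tokens of output.
print(estimate_cost(100_000, 2_000, cached_input_tokens=80_000))
```

Even a large mostly-cached prompt comes out at a few cents, which is the point of pairing a small model with aggressive caching.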
Why it Matters
Claude 3.5 Haiku reflects a trend towards smaller, affordable models focused on specific tasks, providing developers with high-performing, cost-effective AI solutions across multiple platforms.
Apple’s Next Move: Smart Glasses and an Upgraded Siri with Contextual Smarts
Apple is making strides in two areas: smart glasses and Siri’s contextual capabilities.
With ‘Atlas,’ Apple is gathering employee feedback on smart glasses, aiming for a lighter, everyday wearable akin to AirPods.
This contrasts with its high-cost Vision Pro (US$3,499) and takes cues from Meta’s affordable AR glasses, which have shown demand for accessible devices.
Simultaneously, Apple’s new developer tools for Siri, including 'App Intent APIs' and ChatGPT integration in iOS 18.2 beta, enable the assistant to interact seamlessly with on-screen content.
These tools position Siri as a competitor to contextual assistants like Claude and Copilot Vision.
Why it Matters
Apple’s advancements reflect a commitment to accessible, integrated user experiences.
Siri’s upgrades shift it towards a more intelligent assistant, while Atlas suggests a focus on practical, affordable AR wearables, positioning Apple to compete in both AI-driven personal assistance and AR technology.
📝 Blog post by Apple about the Siri upgrade
📰 Article by Bloomberg (paywall)
Quick Bytes
Microsoft’s Magentic-One: A Generalist Multi-Agent System for Solving Complex Tasks
Runway introduces camera controls for its video generation model
OpenAI will start using AMD chips and could make its own AI hardware in 2026