Feb 27, 2026
Retter Reboot #9
Anthropic’s New Claude Code Security Tool Rattles Cybersecurity Stocks
Anthropic has launched Claude Code Security, an AI-powered tool that scans codebases for hard-to-spot vulnerabilities by reasoning about data flows and system behavior like a human security researcher, going beyond traditional rule-based scanners. The announcement sent cybersecurity stocks sliding, with names like CrowdStrike, Cloudflare, Okta, and SailPoint dropping 8–9% as investors reevaluated how AI-native security tools could reshape demand for conventional products.
Source: The Decoder
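To make the contrast concrete: a rule-based scanner typically flags dangerous calls in isolation, while data-flow reasoning asks whether attacker-controlled input can actually reach them. Below is a minimal, illustrative taint-tracking sketch in Python (using the standard `ast` module); the choice of `input()` as an untrusted source and `eval()` as a sink is our own toy assumption, not how Claude Code Security works.

```python
import ast

SOURCES = {"input"}  # hypothetical untrusted sources, for illustration
SINKS = {"eval"}     # hypothetical dangerous sinks, for illustration

def names_in(node: ast.AST) -> set[str]:
    """All variable names mentioned anywhere under a node."""
    return {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}

def find_tainted_sinks(code: str) -> list[int]:
    """Return line numbers where a sink call consumes tainted data."""
    tainted: set[str] = set()
    findings: list[int] = []
    for stmt in ast.parse(code).body:  # top-level statements, in order
        # Flag sink calls whose arguments mention a tainted name.
        for call in (n for n in ast.walk(stmt) if isinstance(n, ast.Call)):
            if isinstance(call.func, ast.Name) and call.func.id in SINKS:
                if any(names_in(a) & tainted for a in call.args):
                    findings.append(call.lineno)
        # Propagate taint through simple assignments (x = input(); y = x).
        if isinstance(stmt, ast.Assign):
            from_source = any(
                isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
                and c.func.id in SOURCES
                for c in ast.walk(stmt.value)
            )
            if from_source or names_in(stmt.value) & tainted:
                tainted |= {t.id for t in stmt.targets
                            if isinstance(t, ast.Name)}
    return findings

# eval("1 + 1") alone is not flagged; input() -> alias -> eval(alias) is.
print(find_tainted_sinks("user = input()\nalias = user\neval(alias)"))
```

Even this toy version catches a flow that a pure pattern match on `eval` would either miss or over-report; the pitch for AI-native tools is doing this kind of reasoning across whole systems, not single files.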
Anthropic Accuses Chinese AI Labs of “Mining” Claude via Distillation Attacks
Anthropic says three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — created over 24,000 fake accounts and generated more than 16 million interactions with Claude to “distill” its strengths in agentic reasoning, tool use, and coding, effectively siphoning capabilities to train their own models. The company argues these large-scale extraction attacks depend on access to advanced AI chips and should harden the case for stricter U.S. export controls, warning that cloned models may drop safety guardrails and increase risks around cyber operations, bioweapons research, and state-level surveillance.
Source: TechCrunch
Google Launches Nano Banana 2 for Faster, Higher-Quality AI Images
Google has released Nano Banana 2 (Gemini 3.1 Flash Image), a faster, more capable version of its viral image model that's now the default generator across the Gemini app, Search's AI Mode, Lens, and other Google products. It promises sharper details, better text rendering, more consistent characters and objects, and 4K-ready outputs while keeping generation speeds high and access free for most users.
Source: TechCrunch
Perplexity Launches ‘Computer,’ a Multi-Model Agent Platform Orchestrating 19 AIs
Perplexity has unveiled Computer, a multi-agent platform that uses Claude Opus 4.6 as a central "reasoning engine" to break user requests into subtasks and route them across 19 different models — from Gemini and Grok to ChatGPT 5.2, Nano Banana for images, and Veo 3.1 for video. Everything runs in a sandboxed environment designed to avoid OpenClaw-style mishaps, supporting long-running, token-metered workflows for Perplexity Max subscribers at $200/month.
Source: Implicator.ai
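The plan-then-route pattern described above can be sketched in a few lines. The model names below come from the article, but the routing table, task types, and naive keyword-based planner are invented for illustration; Perplexity's actual orchestration logic is not public.

```python
from dataclasses import dataclass

# Hypothetical routing table: subtask kind -> specialist model.
ROUTES = {
    "image": "nano-banana",
    "video": "veo-3.1",
    "code": "chatgpt-5.2",
    "default": "claude-opus-4.6",  # the central reasoning engine
}

@dataclass
class Subtask:
    kind: str
    prompt: str

def plan(request: str) -> list[Subtask]:
    # A real planner would be the reasoning model itself; naive keyword
    # matching stands in here just to produce typed subtasks.
    kinds = [k for k in ("image", "video", "code") if k in request]
    return [Subtask(k, request) for k in kinds] or [Subtask("default", request)]

def dispatch(task: Subtask) -> str:
    model = ROUTES.get(task.kind, ROUTES["default"])
    # In a real system this would be a sandboxed, token-metered API call.
    return f"{model} <- {task.prompt!r}"

def run(request: str) -> list[str]:
    """Split a request into subtasks and route each to a specialist."""
    return [dispatch(t) for t in plan(request)]

for line in run("generate an image and write code for a landing page"):
    print(line)
```

The design point is that the orchestrator owns decomposition and routing while each specialist model stays interchangeable, which is also why the sandboxing around execution matters so much.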
“Visual Intelligence” and AI Wearables Are Tim Cook’s Next Big Bet
Apple CEO Tim Cook is heavily promoting “Visual Intelligence,” an Apple Intelligence feature that answers questions based on what the camera sees, signaling a major push into AI wearables like camera-equipped AirPods, an AI pendant, and Apple Glass smart glasses expected around 2026. The plan is to evolve today’s ChatGPT/Google-backed visual layer into Apple’s own on-device models that can interpret the world, power navigation and contextual reminders, and turn wearables into always-on AI eyes — but Apple still faces big challenges in miniaturization, battery, privacy, and Siri’s slower-than-rivals evolution before this ecosystem becomes real.
Source: AppleInsider
Figma Partners with OpenAI to Bring Codex into Design Workflows
Figma is integrating OpenAI’s Codex coding assistant directly into its ecosystem, letting designers and developers move fluidly between Figma and Codex via an MCP server so they can start from a visual design or from code and iterate across both without context loss. The partnership, which follows Figma’s recent Claude Code integration, aims to tighten the design-to-code loop by giving engineers a more visual way to work in their IDEs and letting designers get closer to production-ready implementation without becoming full-time coders.
Source: TechCrunch
AWS AI Coding Tool Blamed for 13-Hour Outage After “Delete and Recreate” Mishap
A Financial Times report says AWS's internal AI coding agent Kiro took down the Cost Explorer service in a China region for 13 hours after deciding to "delete and recreate" its environment while fixing a bug, raising fresh concerns about agentic AI in production. Amazon publicly disputes that the AI was at fault, attributing the incident instead to misconfigured human access controls, and is rolling out stricter guardrails and peer review for future use.
Source: The Decoder
Burger King Will Use AI to Monitor Employee ‘Friendliness’
Burger King is rolling out an AI system that listens in on customer interactions through employees’ headsets and scores how “friendly” staff sound, analyzing factors like tone, politeness, and whether they follow company scripts, with the results feeding into management dashboards and training. The move is pitched as a way to improve service quality, but it’s already drawing criticism from worker advocates and commentators who see it as another layer of always-on surveillance in an already high-pressure, low-wage job.
Source: Engadget