Games

Amazon Luna Ends Its Support for Purchased Games and Third-Party Subscriptions (engadget.com) 8

Amazon's Luna cloud gaming service is making some changes, reports Engadget: It's no longer possible to buy Ubisoft+ and Jackbox Games subscriptions or standalone games through Luna. Amazon will automatically cancel any active subscriptions bought through Luna at the end of customers' next billing cycle. If you have a Ubisoft+ subscription that you bought directly from Ubisoft instead, you'll still be able to access games on that service through Luna until June 10. The Bring Your Own Library option — which allows users to play games they own on the likes of EA, GOG and Ubisoft on Luna — is going away too. You won't be able to access games from those storefronts via Amazon's streaming service after June 3.

If you bought any games outright on Luna, you'll still be able to play them there until June 10. Unlike Google, which issued refunds when it shut down Stadia, Amazon isn't offering refunds for those purchases. However, you'll still have access to them through the respective third-party platform that's linked to your account, be it the EA App, GOG Galaxy or Ubisoft Connect. That doesn't exactly help folks who don't have powerful-enough systems to play more demanding games and were relying on Luna.

For those users, Kotaku complains, "you'll essentially lose access to your purchased games in June unless you buy some hardware to play games like Star Wars Outlaws or set up a different streaming option..."

They describe Luna as Amazon's "barely talked about, struggling game streaming service"... On April 10, Amazon announced that it is "always looking for ways to better serve our players" and that "feedback" has made it "clear" that gamers who use Luna want "easy access to great games." And because more of that content is now offered via Amazon Prime, the company has decided that the best way to "serve" you and other users is to rip out most of Luna's gaming options and remove access to paid games you bought in the past. Do you feel better served...?

Launched in 2020, Amazon Luna has never been much of a big hit for the company, which has struggled to even figure out what to do with it. Initially, it was offered up as a Stadia competitor, providing access to big and small third-party games. This apparently didn't work out for Amazon. So in 2025, Amazon officially announced plans to pivot Luna to a service focused on Jackbox-like casual games. This latest shake-up for Luna further focuses the service on these kinds of games and will put everything available on the service behind different sub tiers, similar to Game Pass.

Their conclusion? "This is all just a great reminder to never, ever, ever, ever buy a video game through a streaming service. At least you can download digital games offline and make backups for later."
AI

Researchers Build a Talking Robot Guide Dog to Help Visually Impaired People Navigate (studyfinds.com) 27

"Only about 2% of visually impaired people in the United States use guide dogs," notes StudyFinds.com, "partly because breeding and training takes years and fewer than half the dogs in training actually graduate."

But someday there could be another option: What if you could ask your guide dog where the nearest water fountain is and hear it answer back, complete with directions and an estimated walk time? Researchers at the State University of New York at Binghamton have built a robotic guide dog that can do something close to that, holding simple back-and-forth conversations about navigation with its handler, describing the surrounding environment, and talking through route options as it leads the way... Their work, presented at the 40th Annual AAAI Conference on Artificial Intelligence, pairs a large language model, a system that understands and generates language, with a navigation planner. Together, the two let the robot understand open-ended requests, suggest destinations, and adjust plans on the fly.
Thanks to Slashdot reader fjo3 for sharing the article.
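The researchers' system isn't public, but the pairing the article describes (a language model to understand open-ended requests, a separate planner to compute routes) follows a common pattern. A minimal sketch of that pattern, with the language-model step stubbed out as keyword matching and a hypothetical grid map standing in for real sensing:

```python
from collections import deque

# Hypothetical landmark map: names -> grid cells (the real system's map is unknown).
LANDMARKS = {"entrance": (0, 0), "water fountain": (3, 4), "elevator": (5, 1)}
GRID_W, GRID_H = 6, 6
WALLS = {(2, 2), (2, 3), (4, 4)}

def parse_request(text):
    """Stand-in for the LLM step: map an open-ended request to a known landmark."""
    for name in LANDMARKS:
        if name in text.lower():
            return name
    return None

def plan_route(start, goal):
    """Breadth-first search over the grid -- the 'navigation planner' half."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= cell[0] < GRID_W and 0 <= cell[1] < GRID_H
                    and cell not in WALLS and cell not in seen):
                seen.add(cell)
                queue.append((cell, path + [cell]))
    return None

def answer(request, position=(0, 0), secs_per_cell=2):
    """Turn a spoken request into a route plus a spoken-style reply."""
    target = parse_request(request)
    if target is None:
        return "Sorry, I don't know that place."
    route = plan_route(position, LANDMARKS[target])
    steps = len(route) - 1
    return f"The {target} is {steps} steps away, about {steps * secs_per_cell} seconds. Follow me."
```

In the actual system the first step is a large language model rather than keyword matching, which is what lets it handle requests it has never seen verbatim; the planner half stays a conventional search or motion-planning algorithm.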
AI

Omissions, Deceptions, Lying. The New Yorker Asks: Can Sam Altman Be Trusted? (newyorker.com) 76

A 17,000-word exposé in the New Yorker reveals "several executives connected to OpenAI have expressed ongoing reservations about Altman's leadership." Reporters Ronan Farrow and Andrew Marantz spoke to "a hundred people with firsthand knowledge of how Altman conducts business," including current and former OpenAI employees and board members.

Among other revelations, internal messages from a few years ago show that OpenAI executives and board members "had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products..." At the behest of his fellow board members, [OpenAI cofounder] Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text... The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying"....

In a tense call after Altman's firing, the board pressed him to acknowledge a pattern of deception. "This is just so fucked up," he said repeatedly, according to people on the call. "I can't change my personality." Altman says that he doesn't recall the exchange.... He attributed the criticism to a tendency, especially early in his career, "to be too much of a conflict avoider." But a board member offered a different interpretation of his statement: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.' " Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn't be trusted?

Friday Altman responded in part to the article. ("I am not proud of being conflict-averse, which has caused great pain for me and OpenAI," he wrote in a blog post. "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company.")

But the article also assembled similar stories from throughout Altman's career: - At Altman's earlier startup Loopt, "groups of senior employees, concerned with Altman's leadership and lack of transparency, asked Loopt's board on two occasions to fire him as C.E.O.," according to Keach Hagey, author of the Altman biography The Optimist.

- During Altman's time as president of Y Combinator, "several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to 'make personal investments, selectively, into the best companies, blocking outside investors.'" The article adds that in private, Y Combinator co-founder Paul Graham "has been unambiguous that Altman was removed because of Y.C. partners' mistrust... On one occasion, Graham told Y.C. colleagues that, prior to his removal, 'Sam had been lying to us all the time.'"

- "In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an 'A.G.I. Manhattan Project,'" the article points out, "and that OpenAI needed billions of dollars of government funding to keep pace...." But one intelligence official "after looking into the China project, concluded that there was no evidence that it existed: 'It was just being used as a sales pitch.'"

- As California lawmakers considered safety testing for AI models, one legislative aide complained of "increasingly cunning, deceptive behavior from OpenAI." OpenAI later subpoenaed some of the bill's top supporters (and OpenAI critics), in some cases asking for their private communications to investigate whether Elon Musk was funding them. [The article notes an ongoing animosity between Altman and Musk. "When Altman complained on X about a Tesla he'd ordered, Musk replied, 'You stole a non-profit.'"]

And "Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors." [M]ost of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything."

Multiple senior executives at Microsoft said that, despite [CEO Satya] Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said... The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."

The Media

First US Newsroom Strike For AI Protections Staged by ProPublica's Journalists (niemanlab.org) 8

It's the first time a major U.S. newsroom has gone on strike partly to demand protections from AI-related layoffs, according to a report from Nieman Lab.

They noted that one picketer's sign read "Thoughts not bots": On Wednesday, roughly 150 members of the ProPublica Guild, one of the largest nonprofit newsroom unions in the country, went on a 24-hour strike. About two dozen Guild members picketed ProPublica's headquarters in New York City's Hudson Square neighborhood during working hours, as simultaneous picket lines formed in front of the publication's offices in Chicago and Washington D.C...

The Guild has been negotiating its first collective bargaining agreement for two and a half years, and the one-day action was intended to put new pressure on ProPublica's management to agree to several contract proposals. The union is seeking "just cause" protections for terminations, wage increases to keep up with the rising cost of living, and contract language that would prohibit layoffs resulting from AI adoption... Beyond the strike, the ProPublica Guild has also taken its dispute over newsroom AI adoption to the National Labor Relations Board (NLRB). On Monday, the Guild filed an unfair-labor-practice charge, citing a "unilateral implementation of AI policy." The filing claims that ProPublica published AI editorial guidelines on its website last month, without first bargaining with union members over its tenets and language... A petition launched Wednesday calling for ProPublica to agree to the Guild's contract terms had received roughly 4,200 signatures by Thursday morning...

Susan DeCarava, the president of The NewsGuild of New York, joined strikers in front of the ProPublica offices yesterday. During a spare moment on the picket line, she told me that while this strike may be setting precedent for her union, it likely won't be the last over AI adoption in newsrooms. "We're going to see more and more concentrated conflicts between media bosses and journalists and media workers over who has a say and how AI is used in their workplaces," she said. For one, The New York Times Guild is currently in contract negotiations after its last agreement expired in February. Already, AI language has taken center stage in the Guild's initial bargaining sessions, including over a proposal that would see Guild members receive a share of the revenue earned when their work is licensed for AI training.

"Management has offered expanded severance for AI-related layoffs as a counter proposal..." according to the article.
Data Storage

The AI RAM Shortage is Also Driving Up SSD Prices (theverge.com) 52

In 2024 the Verge's consumer tech reporter paid $173 for a WD Black SN850X 2TB SSD. But "now that same SSD costs $649..."

"Like with RAM, demand from the AI industry is swallowing up supply from a limited number of manufacturers, leading to a drastic reduction in the inventory that's available to consumers" — and skyrocketing prices: The price on my WD Black drive nearly quadrupled since November 2025, and consumer SSDs across the board are seeing similar increases, much like with RAM. The 4TB version of the popular Samsung 990 Pro SSD previously cost $320, but will now run you nearly $1,000. External SanDisk SSDs saw a 200 percent price hike at the Apple Store in March....

According to price trends from PC Part Picker, NVMe SSD prices began ticking upward in December 2025, with prices on 256GB to 4TB SSDs now double or triple what they were just a few months ago, and continuing to climb.

Windows

Microsoft Begins Removing Copilot Branding From Windows 11 Apps (windowscentral.com) 53

Microsoft has started stripping Copilot branding out of Notepad in Windows 11, replacing the old Copilot menu with a more generic "writing tools" label. The AI features themselves aren't going away, but Microsoft seems to be backing off the heavy-handed Copilot branding and extra entry points. Windows Central reports: As promised, Microsoft is now beginning its effort to reduce and remove Copilot branding across Windows 11, with the latest Notepad update for Insiders outright removing the Copilot icon and phrasing. The AI menu is simply called "writing tools" and maintains the same functionality as before. Additionally, Microsoft has removed references to AI in Notepad's Settings area: the ability to turn these AI-powered writing tools on or off is now listed under "Advanced features."

This change is present in the latest preview build of Notepad, which is now rolling out to all Windows Insiders. The app version is 11.2512.28.0, and you'll know you have it if you see the Copilot icon replaced with a pen icon instead. [...] For Notepad, it appears Microsoft has opted to replace the Copilot menu with something more generic. It's still functionally the same, but it's no longer leaning on the tainted Copilot brand. Of course, you can still easily turn off all AI features in Notepad if you don't want them.
The Verge reports that the "unnecessary Copilot buttons" are also disappearing from the Snipping Tool, Photos, and Widgets.
Transportation

AI Is Coming for Car Salesmen 95

An anonymous reader quotes a report from The Drive: An auto dealer software company is pitching AI-powered kiosks designed to replace car salesmen on showroom floors. Automotive News says the industry is "skeptical." But be honest -- would you really rather deal with the average car lot shark than a computer?

Epikar, a South Korean company that cooks up digital management solutions for car dealers, has named its new AI invention the Pikar Genie. The idea is that customers can talk to this device, ask it product questions, and basically do everything you'd do with a car salesman except for actually closing the deal and signing paperwork. Renault, BMW, and Volvo are already using some Epikar products at South Korean dealerships, but this new customer-facing AI product is still in its infancy.

AN reported that "Renault assigns three salespeople to its Seoul showroom enhanced with Epikar automation compared with six for other Renault showrooms in South Korea," according to Epikar CEO Bosuk Han. The company's now looking to expand into America and is apparently already testing its products at at least one dealership stateside.
Car-dealer consultant Fleming Ford (Director of Strategic Growth at NCM Associates) said U.S. dealerships "aren't ready for fully automated showrooms."

"The showroom isn't just where you buy a car," Automotive News quoted him saying. "It's where you decide who to trust to help you to choose the right car."
Mozilla

Mozilla Accuses Microsoft of Sabotaging Firefox With Windows and Copilot Tactics (nerds.xyz) 68

BrianFagioli writes: Mozilla is accusing Microsoft of stacking the deck against Firefox, arguing that design choices in Windows steer users toward Edge even when they explicitly choose another browser. According to Mozilla, parts of Windows still open links in Edge regardless of the default browser setting, including results from the taskbar search and links launched from apps like Outlook and Teams. Mozilla says this means Firefox often never even gets the opportunity to handle those links, which quietly shifts user activity back into Microsoft's ecosystem.

The company also points to Microsoft's aggressive rollout of Copilot as another example of platform power being used to push Microsoft services. Copilot appeared pinned to the taskbar, arrived automatically on many systems with Microsoft 365, and even received a dedicated keyboard key on some laptops. Mozilla argues that when the maker of the dominant desktop operating system promotes its own browser and AI tools at the system level, it becomes far harder for independent browsers like Firefox to compete.

AI

Amazon May Sell Trainium AI Chips To Third Parties In Shot At Nvidia (qz.com) 10

Amazon CEO Andy Jassy says the company may eventually sell its Trainium AI chips directly to outside customers, not just through AWS, which would put Amazon in more direct competition with Nvidia. "There's so much demand for our chips that it's quite possible we'll sell racks of them to third parties in the future," Jassy wrote in his annual shareholder letter Thursday. He also revealed the company's chip business is already running at more than $20 billion annually, with demand so strong that current and even future generations are largely spoken for. Quartz reports: Access to Amazon's chips is currently limited to Amazon Web Services, with customers paying for cloud-based usage rather than owning any physical hardware. Selling to AWS and external customers alike, as standalone chipmakers do, would put annual revenue at around $50 billion, up from the $20 billion the company estimates for the year, Jassy said. The $20 billion figure spans three product lines: Trainium, the AI accelerator chip; Graviton, a general-purpose processor; and Nitro, a chip that helps run Amazon's EC2 server instances. All three are growing at triple-digit rates year over year, Jassy claimed in his letter.

Jassy said demand for Trainium has outpaced supply at each generation. Trainium2 is essentially unavailable, with its entire allocated capacity spoken for. Trainium3 started reaching customers in early 2026, and reservations have filled nearly all available supply. Even Trainium4 -- which is not expected to reach wide release for another year and a half -- has substantial pre-orders committed. Jassy argued that a full-scale Trainium rollout could shave tens of billions off annual capital costs while meaningfully widening profit margin.

Security

OpenAI To Limit New Model Release On Cybersecurity Fears (axios.com) 37

OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that the tool could wreak havoc if released more widely. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...]

Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

AI

Skilled Older Workers Turn To AI Training To Stay Afloat (theguardian.com) 45

An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like OpenAI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn how to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers.

The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is beside the point. They need the work now.

[...] "There's just a lot of desperation out there," Johnson said. As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian.

AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.

The Courts

Anthropic Loses Appeals Court Bid To Temporarily Block Pentagon Blacklisting (cnbc.com) 39

A federal appeals court denied Anthropic's bid to temporarily block the Pentagon's blacklisting, meaning the company remains shut out of Defense Department contracts while the case continues, even though a separate court has allowed other federal agencies to keep using Claude for now. CNBC reports: "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits." With the split decisions by the two courts, Anthropic is excluded from DOD contracts but is able to continue working with other government agencies while litigation plays out. Defense contractors will be prohibited from using Claude in their work with the agency, but they can use it for other cases.

[...] In the ruling on Wednesday, the court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but that the company's interests "seem primarily financial in nature." While the company claimed the DOD was standing in the way of its right to free speech, "Anthropic does not show that its speech has been chilled during the pendency of this litigation," the order said. Because of the harm Anthropic is likely to suffer, the appeals court said "substantial expedition is warranted."

An Anthropic spokesperson said in a statement after the ruling that the company is "grateful the court recognized these issues need to be resolved quickly" and that it's "confident the courts will ultimately agree that these supply chain designations were unlawful." "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic said.

Facebook

Meta Debuts 'Muse Spark', First AI Model Under Alexandr Wang (axios.com) 7

Meta has launched Muse Spark, its first major AI model under Alexandr Wang's leadership. The model was built over the past nine months and is being positioned as a significant step up from Llama 4. Axios reports: Muse Spark will power queries in the Meta AI app and Meta.ai website immediately, with plans to expand across Facebook, Instagram and WhatsApp. The model accepts voice, text and image inputs, but produces text-only output. [...] Meta plans to release a version of Muse Spark under an open-source license.

The model uses a fast mode for casual queries and several reasoning modes. A "shopping mode" highlights how Meta hopes to differentiate itself. It combines large language models with data on user interests and behavior. Over time, the model will also power "features that cite recommendations and content people share across Instagram, Facebook, and Threads," Meta said in a blog post.
Wang, the 29-year-old entrepreneur who co-founded Scale AI, joined Meta's "superintelligence" unit last year to help Meta catch up to rival models from OpenAI and Anthropic.
Encryption

Microsoft Abruptly Terminates VeraCrypt Account, Halting Windows Updates (404media.co) 102

Microsoft has apparently terminated the account VeraCrypt uses to sign its Windows drivers and bootloader, leaving the encryption project unable to publish Windows updates and throwing future releases into doubt. VeraCrypt's developer says Microsoft gave no clear explanation or warning for the move. "I didn't receive any emails from Microsoft nor any prior warnings," Mounir Idrassi, VeraCrypt's developer, told 404 Media. From the report: VeraCrypt is an open-source tool for encrypting data at rest. Users can create encrypted partitions on their drives, or make individual encrypted volumes to store their files in. Like its predecessor TrueCrypt, which VeraCrypt is based on, it also lets users create a second, innocuous-looking volume if they are compelled to hand over their credentials. Last week, Idrassi took to the SourceForge forums to explain why he had been absent for a few months. The most serious challenge, he wrote, "is that Microsoft terminated the account I have used for years to sign Windows drivers and the bootloader."

"Regarding VeraCrypt, I cannot publish Windows updates. Linux and macOS updates can still be done but Windows is the platform used by the majority of users and so the inability to deliver Windows releases is a major blow to the project," he continued. "Currently I'm out of options." Idrassi told 404 Media the termination happened in mid-January. "I was surprised to discover that I could no longer use my account," he said.

On the forum and in the email to 404 Media, Idrassi shared what he said was the only message he received connected to the account shutdown. "Based on the information you have provided to date, we have determined that your organization does not currently meet the requirements to pass verification. There are no appeals available, we have closed your application," it reads. Idrassi told 404 Media the message is concerning his company IDRIX. "As you can read in their message, they say that the organization (IDRIX) doesn't meet their requirements, but I don't see which requirement IDRIX suddenly stopped meeting," he said. Idrassi said he has tried contacting Microsoft support, but he received automated responses that he believes contained AI-generated text.

Communications

Planet Labs Tests AI-Powered Object Detection On Satellite 39

BrianFagioli writes: Artificial intelligence has now run directly on a satellite in orbit. A spacecraft about 500km above Earth captured an image of an airport and then immediately ran an onboard AI model to detect airplanes in the photo. Instead of acting like a simple camera in space that sends raw data back to Earth for later analysis, the satellite performed the computation itself while still in orbit.

The system used an NVIDIA Jetson Orin module to run the object detection model moments after the image was taken. Traditionally, Earth observation satellites capture images and transmit large datasets to ground stations where computers process them hours later. Running AI directly on the satellite could reduce that delay dramatically, allowing spacecraft to analyze events like disasters, infrastructure changes, or aircraft activity almost immediately.
"This success is a glimpse into the future of what we call Planetary Intelligence at scale," said Kiruthika Devaraj, VP of Avionics & Spacecraft Technology. "By running AI at the edge on the NVIDIA Jetson platform, we can help reduce the time between 'seeing' a change on Earth and a customer 'acting' on it, while simultaneously minimizing downlink latency and cost. This shift toward integrated AI at the edge is a technological leap that can help differentiate solutions like Planet's Global Monitoring Service (GMS), providing valuable insights for our customers and enabling rapid response times when it matters most."
AI

Anthropic Unveils 'Claude Mythos', Powerful AI With Major Cyber Implications 61

"Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale," writes Slashdot reader wiredmikey. "It's already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations." SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic's current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It incorporates the current trend in the use of AI: the modern use of agentic AI. "The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills... the model has the highest scores of any model yet developed on a variety of software coding tasks," notes Anthropic in a blog titled Project Glasswing -- Securing critical software for the AI era.

In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many classified as critical. Several are 10 or 20 years old -- the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability found in video software had survived five million hits from other automated testing tools without ever being discovered. The model also autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. [...] Anthropic is concerned that Mythos' capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors gain access to it.

To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that 'Preview' is a term used solely to describe the current state of Mythos and the market's readiness to receive it, and will be dropped when the firm gets closer to general release.
AI

Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour (arstechnica.com) 105

A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report: The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.
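
As a rough illustration of how a SimpleQA-style harness works, the sketch below grades model answers against gold answers and reports an accuracy score. The exact-match grading and toy questions are simplifications of ours; the real benchmark uses an LLM-based grader and also distinguishes "not attempted" responses from incorrect ones:

```python
# Minimal sketch of a SimpleQA-style factuality check: each question has one
# short, verifiable gold answer, and accuracy is the fraction the grader accepts.

def grade(model_answer: str, gold_answer: str) -> bool:
    """Crude normalization + containment check standing in for an LLM grader."""
    norm = lambda s: " ".join(s.lower().strip().rstrip(".").split())
    return norm(gold_answer) in norm(model_answer)

def accuracy(qa_pairs, answer_fn) -> float:
    """Fraction of questions whose answer the grader accepts."""
    correct = sum(grade(answer_fn(q), gold) for q, gold in qa_pairs)
    return correct / len(qa_pairs)

# Toy example with hypothetical questions and a mock "model" that looks up
# the gold answer directly (so accuracy here is trivially perfect).
dataset = [
    ("What year did the Apollo 11 mission land on the Moon?", "1969"),
    ("Who wrote 'Pride and Prejudice'?", "Jane Austen"),
]
mock_model = {q: a for q, a in dataset}
print(accuracy(dataset, lambda q: mock_model[q]))
```

Scaling this loop to 4,000+ vetted questions is essentially what Oumi did against AI Overviews.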

Oumi began running its test last year when Gemini 2.5 was still the company's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day.
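
The extrapolation itself is simple multiplication. The sketch below uses loudly assumed inputs -- total daily search volume and the share of searches that trigger an AI Overview are our guesses, not figures published by the Times or Google -- so only the order of magnitude is meaningful:

```python
# Rough extrapolation from the benchmark miss rate to absolute error counts.
# SEARCHES_PER_DAY and OVERVIEW_FRACTION are assumed illustrative values.

SEARCHES_PER_DAY = 8.5e9   # assumed total Google searches per day
OVERVIEW_FRACTION = 0.10   # assumed share of searches showing an AI Overview
ERROR_RATE = 0.09          # ~9% wrong, per the 91%-correct SimpleQA rerun

wrong_per_day = SEARCHES_PER_DAY * OVERVIEW_FRACTION * ERROR_RATE
wrong_per_hour = wrong_per_day / 24
wrong_per_minute = wrong_per_hour / 60

print(f"{wrong_per_day:,.0f} wrong answers/day")
print(f"{wrong_per_hour:,.0f} wrong answers/hour")
print(f"{wrong_per_minute:,.0f} wrong answers/minute")
```

Even with these conservative assumptions the result lands in the tens of millions of wrong answers per day, i.e. millions per hour, which is the arithmetic behind the headline.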

The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo-Yo Ma was inducted into the Classical Music Hall of Fame. While AI Overviews cited the organization's website listing Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame.

"This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google." The search giant prefers a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
Businesses

Anthropic Reveals $30 Billion Run Rate, Plans To Use 3.5GW of New Google AI Chips (theregister.com) 47

Anthropic says its annualized revenue run rate has surpassed $30 billion and disclosed plans to secure roughly 3.5 gigawatts of next-generation Google TPU compute starting in 2027. Broadcom will supply the key chips and networking gear for the effort, the company announced. The Register reports: News of the two deals emerged today in a Broadcom regulatory filing that opens with two agreements. One is a "Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units ("TPUs") for Google's future generations of TPUs." Google and Broadcom have long collaborated to produce custom TPUs. Broadcom CEO Hock Tan recently shared his opinion that hyperscalers don't have the skill to create custom accelerators and predicted Broadcom's chip business will therefore win over $100 billion of revenue from AI chips in 2027 alone.

Working on next-gen TPUs for Google will presumably help to make that prediction a reality. So will the second part of Broadcom's announcement: a "Supply Assurance Agreement for Broadcom to supply networking and other components to be used in Google's next-generation AI racks through up to 2031." Broadcom's filing also revealed that one user of Google's next-gen TPUs will be Anthropic, which, starting in 2027, "will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next generation TPU-based AI compute capacity committed by Anthropic."

AI

OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption 118

OpenAI is proposing (PDF) sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek. The company said the policy document offered a series of "initial ideas" to address the risk of "jobs and entire industries being disrupted" by the adoption of AI tools. Business Insider reports: Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens. Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer "benefits bonuses" tied to productivity gains from new AI tools.

The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor. OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.
Cellphones

Samsung's Messages App Is Shutting Down (androidcentral.com) 81

Samsung says it will discontinue its Samsung Messages app in July 2026 and is directing Galaxy users to switch to Google Messages instead. Android Central reports: [...] Samsung says users can switch to Google Messages as their default app to maintain a consistent Android messaging experience. The fine print also states that once the app is discontinued, "sending messages via Samsung Messages on your phone will no longer be possible, except for emergency service numbers or emergency contacts defined in your device."

Samsung also notes that users will no longer be able to download the Messages app from the Galaxy Store once it is discontinued. Newer devices, including the Galaxy S26 series, already do not support installing Samsung Messages. It is, however, worth noting that users on Android 11 or older are not affected by this change and will still be able to use the Samsung Messages app on their devices.

[...] Samsung also warns that on some devices released before 2022, switching apps may temporarily disrupt ongoing RCS conversations. However, chats should resume once both users move to Google Messages. The company also highlights some of the benefits of the switch, including improved security, RCS support, AI features, and better multi-device connectivity.
