Government

White House Unveils National AI Policy Framework To Limit State Power 78

An anonymous reader quotes a report from CNBC: The Trump administration on Friday issued (PDF) a legislative framework for a single national policy on artificial intelligence, aiming to create uniform safety and security guardrails around the nascent technology while preempting states from enacting their own AI rules. The six-pronged outline broadly proposes a slew of regulations on AI products and infrastructure, ranging from implementing new child-safety rules to standardizing the permitting and energy use of AI data centers. It also calls on Congress to address thorny issues surrounding intellectual-property rights and craft rules "preventing AI systems from being used to silence or censor lawful political expression or dissent."

The administration said in an official release that it wants to work with Congress "in the coming months" to convert its framework into a bill that President Donald Trump can sign. The White House wants to codify the framework into law "this year" and believes it can generate bipartisan support, Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an interview with Fox News on Thursday evening. That won't be easy in a deeply divided Congress where Republicans hold thin and often fractious majorities, and where Trump has already urged GOP lawmakers to prioritize his controversial voter-ID bill above all else ahead of the November midterms.
BCLP has an interactive map that tracks the proposed, failed and enacted AI regulatory bills from each state.
Power

Work From Home and Drive More Slowly To Save Energy, IEA Says (bbc.com) 152

As energy prices soar amid the Iran conflict, the International Energy Agency is urging governments to cut energy use through measures like remote work and reduced speed limits. The group warns the energy security crisis could persist for months, even if supply routes stabilize. "I believe the world has not yet well understood the depth of the energy security challenge we are facing," said the IEA's executive director, Fatih Birol. "It is much bigger than what we had in the 1970s... It is also bigger than the natural gas price shock we experienced after Russia's invasion of Ukraine." The BBC reports: Thirty-two countries are members of the IEA, including the US, the UK, Australia, Canada, Japan and 24 other European nations. Its role is to act as a global watchdog, providing analysis and recommendations on global energy problems, such as energy security and the transition to clean energy. The IEA's other suggestions for governments, businesses and individuals include:

- Promoting use of public transport
- Giving private cars access to city centres on alternate days
- Encouraging car sharing and efficient driving habits
- Avoiding air travel where possible, especially business flights
- Switching to electric cooking

It also said there should be a focused effort to preserve liquefied petroleum gas for cooking and other essential uses, by switching biofuel-converted vehicles onto gas and introducing other measures to reduce its use. Birol said these proposals were in addition to action taken by IEA member countries earlier this month, when they agreed to release 400 million barrels of oil, 20% of their emergency reserves.
Several countries in Asia, hit particularly hard by the conflict, have implemented emergency four-day workweeks and work-from-home mandates. Fortune notes: "Asia is particularly dependent on oil exports from the Middle East; Japan and South Korea respectively source 90% and 70% of their oil from the region."
The Internet

Online Bot Traffic Will Exceed Human Traffic By 2027, Cloudflare CEO Says 51

Cloudflare's CEO predicts AI-driven bot traffic will surpass human internet traffic by 2027, as AI agents generate vastly more web requests than people. "If a human were doing a task -- let's say you were shopping for a digital camera -- and you might go to five websites. Your agent or the bot that's doing that will often go to 1,000 times the number of sites that an actual human would visit," Cloudflare CEO Matthew Prince said in an interview at SXSW this week. "So it might go to 5,000 sites. And that's real traffic, and that's real load, which everyone is having to deal with and take into account." TechCrunch reports: Before the generative AI era, the internet was only about 20% bot traffic, with Google's web crawler being the largest, according to Prince, whose infrastructure and security company is used by one-fifth of all websites. But beyond some other reputable crawlers, the only other bots were those used by scammers and bad actors. "With the rise of generative AI, and its just insatiable need for data, we're seeing a rise where we suspect that, in 2027, the amount of bot traffic online will exceed the amount of human traffic that's online," Prince said.

The executive also noted that this change to the web would require the development of new technologies, like sandboxes for AI agents that can be spun up on the fly and then torn down when their task has finished. These could come into play when consumers ask AI agents to perform certain tasks on their behalf, like planning a vacation. "What we're trying to think about is, how do we actually build that underlying infrastructure where you can -- as easily as you open a new tab in your browser -- you can actually spin up new code, which can then run and service the agents that are out there," Prince said. He imagines there will soon be a time when millions of these "sandboxes" for agents would be created every second.
"I think the thing that people don't appreciate about AI is it's a platform shift," Prince said. "AI is another platform shift ... the way that you're going to consume information is completely different."
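The ephemeral-sandbox lifecycle Prince describes -- spin up an isolated runtime for one agent task, then tear it down when the task finishes -- can be sketched roughly. This is a hypothetical illustration, not Cloudflare's actual API; the `Sandbox` class and all of its methods are invented for the sketch.

```python
import contextlib


class Sandbox:
    """Hypothetical ephemeral execution environment for one agent task."""

    _active = 0  # count of currently-live sandboxes, for illustration

    def __init__(self, task: str):
        self.task = task
        self.alive = False

    def start(self):
        # In practice: allocate an isolated runtime on the fly.
        Sandbox._active += 1
        self.alive = True

    def run(self, code: str) -> str:
        assert self.alive, "sandbox not started"
        # Placeholder for real, isolated execution of agent-supplied code.
        return f"ran {code!r} for task {self.task!r}"

    def stop(self):
        # In practice: destroy the runtime entirely once the task is done.
        Sandbox._active -= 1
        self.alive = False


@contextlib.contextmanager
def ephemeral_sandbox(task: str):
    """Spin up a sandbox for one agent task; tear it down when done."""
    sb = Sandbox(task)
    sb.start()
    try:
        yield sb
    finally:
        sb.stop()


# One sandbox per agent request: created, used, destroyed.
with ephemeral_sandbox("plan a vacation") as sb:
    result = sb.run("search_flights()")
assert Sandbox._active == 0  # nothing left running after the task
```

At the scale Prince imagines -- millions of sandboxes created every second -- the interesting engineering is in making `start` and `stop` nearly free, which is why the lifecycle is modeled here as a context manager that guarantees teardown even if the task fails.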
Privacy

Rogue AI Triggers Serious Security Incident At Meta (theverge.com) 87

For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But after analyzing the question, the agent also replied to it publicly on its own, without getting approval first; the reply was meant to be shown only to the employee who requested it, not posted publicly. An employee then acted on the AI's advice, which "provided inaccurate information" and led to a "SEV1"-level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."

The Courts

Rapper Afroman Wins Defamation Lawsuit Over Use of Police Raid Footage In His Music Videos (billboard.com) 81

Longtime Slashdot reader UnknowingFool writes: Rapper Afroman, born Joseph Edgar Foreman and famous for his 2000 hit "Because I Got High", has won a defamation lawsuit that seven Ohio police officers filed against him. A jury found he did not defame the officers in music videos he made about a 2022 police raid of his home. In August 2022, the Adams County Sheriff's Department raided Afroman's home on suspicion of drug trafficking and kidnapping. Neither drugs nor kidnapping victims were found, and charges were never filed. However, local officials would not pay for damage incurred during the raid, including a broken front door and a video surveillance camera. Afroman used his home security footage of the raid to create rap music videos criticizing the police over the incident: "Will You Help Me Repair My Door?", "Why You Disconnecting My Video Camera?", and "Lemon Pound Cake". He posted the videos on YouTube.

In March 2023, seven officers filed a lawsuit against Afroman for invasion of privacy and the unauthorized use of their images from the security footage in addition to defamation claims. The officers requested an injunction for Afroman to stop speaking about them or using their photos. The officers also wanted all proceeds from the videos, song sales, performances, and merchandise claiming they had suffered "emotional distress" due to the videos. Afroman's defense included Freedom of Speech rights to criticize public officials. The ACLU filed an amicus brief supporting the rapper, arguing that the lawsuit was a SLAPP suit only meant to silence criticism. In October 2023, the court agreed and dismissed the invasion of privacy, "right of publicity", and "unauthorized use of individual's persona" claims but allowed the defamation case to proceed.

The officers' defamation claims included Afroman's assertion in the videos that he had repeatedly had sex with the wife of Randolph L. Walters, Jr. When Afroman's lawyer asked Walters, "But we all know that's not true, right?", the officer replied that he did not know. A defamation claim for emotional damages requires that the harm arise from a false statement; if a statement is so outrageous that no one would believe it to be true, however, it cannot cause reputational damage.

Cloud

Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway (propublica.org) 64

ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report: In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street."

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Businesses

Finance Bros To Tech Bros: Don't Mess With My Bloomberg Terminal (wsj.com) 61

An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What's got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy -- and way cheaper -- alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now "Bloomberg is cooked," some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. [...]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is "laughable," said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal). "It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution," he wrote. [...] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it's rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay "a really good foundation for a financial application. And that really has not been possible before."

Others aren't so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic's Claude. "It was laughable at best, horrific at worst," he said. Perplexity's Shevelenko acknowledged there are some aspects of the terminal that can't be replicated with vibe coding, including some of Bloomberg's proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as would the terminal's data security, reliability and robust support system. "I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy," said Lemire. His message to the techies? "There's nothing that you can vibe code in a weekend or even like over the course of a year that's going to come anywhere close."

Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com) 11

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding agent, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, which unites local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. Nvidia seems to be hoping that the additional security can make OpenClaw agents more popular and accessible, with less risk than they currently carry. The bigger picture here is how NemoClaw could give companies the added peace of mind to let AI agents complete actions for their employees, where they wouldn't have previously.
Nvidia did not specify when NemoClaw would be available.
Android

Android, Epic, and What's Really Behind Google's 'Existential' Threat to F-Droid (thenewstack.io) 53

Starting in September, even Android developers not in Google's Play Store will still be required to register with Google to distribute their apps in Brazil, Singapore, Indonesia, and Thailand, with Google continuing "to roll out these requirements globally" four months later. Even developers distributing Android apps on the web for sideloading will be required to register, pay Google a $25 fee, and provide a government ID.

But there's a new theory on what's secretly been motivating Google from an unnamed source in the "Keep Android Open" movement, writes long-time Slashdot reader destinyland: "You can't separate this really from their ongoing interactions with Epic and the settlement that they came to," they argue. Twelve days ago Epic Games and Google announced a new proposal for settling their long-running dispute over the legality of alternative app stores on Android phones. (Rather than agreeing to let third-party app stores into their Play Store, Google wants them to continue being sideloaded, promising in a blog post last week that they'll even offer a "more streamlined" and "simplified" sideloading alternative for rival app stores. "This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.")

So "developer verification" could be Google's fallback plan if U.S. courts fail to approve this. "If the Google Play Store has to allow any third-party repository app store, Google essentially has given up all control of the apps. But if they're able to claw back that control by requiring that all developers, no matter how they distribute their apps, have to register with Google — have to agree to their Terms & Conditions, pay them money, provide identification — then they have a large degree of indirect control over any app that can be developed for the entire platform."

But that plan threatens millions of people using the alternative F/OSS app distributor F-Droid, since Google also wants to have only one signature attached to Android apps. Marc Prud'hommeaux, a member of F-Droid's board of directors, says that "all of a sudden [this] breaks all those versions of the application distributed through F-Droid or any other app store!"

Prud'hommeaux says they've told Google's Android team "You know perfectly well that you're killing F-Droid!" creating an "existential" threat to an app distributor "that has existed happily for over 10 years." But good things started happening when he created the website Keep Android Open: There's now a "huge backlog" of signers for an Open Letter that already includes EFF, the Software Freedom Conservancy, and the Free Software Foundation. He believes Android's existing Play Protect security "is completely sufficient to handle the particular scenarios they claim that developer verification is meant to address"...

The Keep Android Open site urges developers not to sign up for Android's early access program when it launches next week. (Instead, they're asking developers to respond to invites with an email about their concerns — and to spread the word to other developers and organizations in forums and social media posts.) There's also a petition at Change.org currently signed by 64,000 developers — adding 20,000 new signatures in the last 10 days. And "If you have an Android device, try installing F-Droid!" he adds. Google tracks how many people install these alternative app repositories, and a larger user base means greater consequences from any Android policy changes.

Plus, installing F-Droid "might be refreshing!" Prud'hommeaux says. "You don't see all the advertisements and promotions and scam and crapware stuff that you see in the commercial app stores!"

Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klingner, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

Canada

Does Canada Need Nationalized, Public AI? (schneier.com) 108

While AI CEOs worry governments might nationalize AI, others are advocating for something similar. Security technologist Bruce Schneier and Harvard data scientist Nathan Sanders published this call to action in Canada's most widely read newspaper (with a readership of over 6 million): "Canada Needs Nationalized, Public AI." While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians...

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise... [Switzerland's funding of a public AI model, Apertus] represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity... Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine...

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada's $2-billion Sovereign AI Compute Strategy provides substantial funding. What's needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

Long-time Slashdot reader sinij has a different opinion. "To me, this sounds dystopian, because I can also imagine AI declining your permits, renewal of license, or medication due to misalignment or 'greater good' reasons."

But the Schneier/Sanders essay argues this creates "an alternative ownership structure for AI technology," one that allocates decision-making authority and value "to national public institutions rather than foreign corporations."
United States

America's First Large-Scale Offshore Wind Project Finally Finishes Construction (wbur.org) 71

It's America's first large-scale offshore wind project, reports WBUR — enough clean energy to power 400,000 homes in Massachusetts from 62 offshore wind turbines generating 800 megawatts.

But it took a while... The project's first construction delay happened back in 2019, they point out — and then "Just three months ago, when the project was 95% complete, the U.S. Interior Department issued a stop-work order." But after successfully challenging that order in court, and "with a stretch of good weather offshore, the developers behind the $4.5 billion project managed to get over the finish line."

The Associated Press notes it was "one of five major East Coast offshore wind projects the Trump administration halted construction on days before Christmas, citing national security concerns." Developers and states sued, and federal judges allowed all five to resume construction, essentially concluding that the government did not show that the national security risk was so imminent that construction must halt. Another one of the five, Revolution Wind, began sending power for the first time to New England's electric grid on Friday and will scale up in the weeks ahead until it is fully operational.
"That project is nearly complete as well," notes WBUR, "and will eventually be capable of powering up to 350,000 homes."
Social Networks

US Set To Receive $10 Billion Fee For Brokering TikTok Deal (msn.com) 44

The deal to take control of TikTok's U.S. business came with an unusual condition, according to people familiar with the matter. The investors — which include Oracle, Abu Dhabi investor MGX, and private-equity firm Silver Lake — "paid the Treasury Department about $2.5 billion when the deal closed in January," reports the Wall Street Journal, "and are set to make several additional payments until hitting the $10 billion total." The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said... Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal. Administration officials have said the fee is justified given Trump's role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers...
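The fee comparison in the report is easy to check with quick arithmetic; the figures below are taken directly from the summary above, and the percentages are simple derivations from them.

```python
# Figures from the report above.
bofa_fee = 130e6       # Bank of America's advisory fee (approx.)
deal_value = 71.5e9    # Norfolk Southern sale to Union Pacific
tiktok_fee = 10e9      # total fee owed to the Treasury for the TikTok deal
first_payment = 2.5e9  # paid when the deal closed in January

# A typical advisory fee is well under 1% of transaction value.
bofa_pct = bofa_fee / deal_value * 100
print(f"BofA fee: {bofa_pct:.2f}% of deal value")  # ~0.18%

# Remaining payments the investors still owe toward the $10 billion total.
print(f"Remaining TikTok payments: ${tiktok_fee - first_payment:,.0f}")  # $7,500,000,000
```

Even one of the largest single-bank fees on record works out to under a fifth of a percent of deal value, which is what makes a flat $10 billion fee to the government so unusual.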

The TikTok fee extracted from private-sector investors is the administration's latest transaction involving the nation's largest businesses. Trump took a nearly 10% stake in semiconductor company Intel and has agreed to take a chunk of chip sales to China from Nvidia in exchange for granting export licenses. The administration has also taken equity stakes in other companies and has a say in the operations of U.S. Steel following a "golden share" agreement with Japan's Nippon Steel in its takeover.

Reuters notes that earlier this month a lawsuit was filed by investors in two of TikTok's social media rivals, seeking to reverse the approval of the deal.

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Biotech

U.S. State Bans on Lab-Grown Meats Challenged in Court (austinchronicle.com) 49

Last June Texas Agriculture Commissioner Sid Miller said in a statement that Texans "have a God-given right to know what's on their plate, and for millions of Texans, it better come from a pasture, not a lab. It's plain cowboy logic that we must safeguard our real, authentic meat industry from synthetic alternatives."

But California company Wildtype sells lab-grown salmon -- and is suing Texas over its ban on cell-cultivated meat, the Austin Chronicle reported this week. The company's founder says lab-grown salmon eliminates the mercury, microplastic, and antibiotic contamination commonly found in seafood. And one chef in Austin, Texas says lab-grown salmon is "awesome" and "something new" -- at the only Texas restaurant that was serving it last summer: Just two months after the salmon hit the menu, Texas banned the sale of cell-cultivated meat... A lawsuit from Wildtype and one other FDA-approved cultivated meat company [argues] it's anti-capitalist and unconstitutional... This law "was not enacted to protect the health and safety of Texas consumers — indeed, it allows the continued distribution of cultivated meat to consumers so long as it is not sold. Instead, SB 261 was enacted to stifle the growth of the cultivated meat industry to protect Texas' conventional agricultural industry from innovative competition that is exclusively based outside of Texas...." [according to the lawsuit]. It was filed in September, immediately after the ban took effect, and cell-cultivated companies are awaiting judgment.
That Texas ban would last two years, notes U.S. News &amp; World Report, adding that Alabama, Florida, Indiana, Mississippi, Montana, and Nebraska have also passed bans, some temporary, "on the manufacturing, sale or distribution of cell-cultured meat." Meanwhile, a new five-year moratorium on lab-grown meat was signed this week by the governor of South Dakota "after rejecting a permanent ban last month," reports South Dakota Searchlight: The new law bars the sale, manufacture or distribution of "cell-cultured protein" products from July 1 this year through June 30, 2031. Violations are punishable by up to 30 days in jail, a fine of up to $500, or both.
"But supporters of lab-grown meat are not going down without a fight," adds U.S. News &amp; World Report, with another lawsuit also filed challenging a ban in Florida: When Florida Gov. Ron DeSantis signed the ban in Florida, he described it as "fighting back against the global elite's plan to force the world to eat meat grown in a petri dish or bugs to achieve their authoritarian goals." He added that his administration "will save our beef."
AI

ChatGPT, Other Chatbots Approved For Official Use In the Senate (nytimes.com) 34

An anonymous reader quotes a report from the New York Times: A top Senate administrator on Monday gave aides the green light to use three artificial intelligence chatbots for official work, a reflection of how widespread the use of the products has become in workplaces around the globe. The chief information officer for the Senate sergeant-at-arms, who oversees the chamber's computers as well as security, said in a one-page memo reviewed by The New York Times that aides could use Google's Gemini chat, OpenAI's ChatGPT or Microsoft Copilot, which is already integrated into Senate platforms.

Copilot "can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis," the memo said. The document later added that "data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data."
It's unclear how widely AI is used in the Senate or how widespread it might become, as individual offices and committees set their own rules. The chamber has also not publicly released comprehensive guidance on chatbots, the report notes.

In contrast, the House has clearer policies that allow general use of AI for limited internal tasks but restrict it from handling sensitive data and bar its use for deepfakes and certain decision-making activities.
Microsoft

Microsoft Backs Anthropic To Halt US DOD's 'Supply-Chain Risk' Designation (reuters.com) 35

joshuark shares a report from Reuters: Microsoft filed an amicus brief on Tuesday in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In the filing, submitted in a federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that the determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said that it was directly impacted by the DOD designation.

"Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. The judge overseeing the case must approve Microsoft's request to file the brief before it is officially entered, but courts often permit outside parties to weigh in on important cases.

Botnet

Researchers Discover 14,000 Routers Wrangled Into Never-Before-Seen Botnet (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices -- primarily made by Asus -- that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime. The malware -- dubbed KadNap -- takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen's Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it's unlikely that the attackers are using any zero-days in the operation.

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia (PDF), a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.
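Kademlia's central trick is worth a quick illustration: node IDs and lookup keys share one ID space, and "distance" between two IDs is simply their bitwise XOR, so any peer can find the nodes closest to a key without consulting a central server -- which is why there is no fixed command-and-control address to seize. A minimal sketch, using 8-bit IDs purely for readability (real Kademlia uses 160-bit IDs and k-bucket routing tables):

```python
# Minimal sketch of Kademlia's XOR distance metric, the basis of the
# distributed hash table described above. IDs here are 8-bit for
# illustration only.

def xor_distance(a: int, b: int) -> int:
    """Kademlia defines the distance between two IDs as their bitwise XOR."""
    return a ^ b

def closest_nodes(target: int, node_ids: list[int], k: int = 3) -> list[int]:
    """Return the k node IDs closest to `target` under the XOR metric --
    the lookup primitive that lets a peer-to-peer botnet locate its
    controllers without a hard-coded server address."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b00010101, 0b11110000, 0b00010001, 0b10101010]
print(closest_nodes(0b00010100, nodes))  # → [21, 17, 170]
```

Nodes whose IDs share a long common prefix with the target sort first, so lookups home in on a key in logarithmically few hops.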

[...] Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access. [...] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.

AI

Nvidia Is Planning to Launch Its Own Open-Source OpenClaw Competitor (wired.com) 21

Nvidia is preparing to launch an open-source AI agent platform called NemoClaw, designed to compete with the likes of OpenClaw. According to Wired, the platform will allow enterprise software companies to dispatch AI agents to perform tasks for their own workforces. "Companies will be able to access the platform regardless of whether their products run on Nvidia's chips," the report adds. From the report: The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It's unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it's likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform. [...]

For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial "moat" for the company.

China

China Moves To Curb OpenClaw AI Use At Banks, State Agencies (bloomberg.com) 18

An anonymous reader quotes a report from Bloomberg: Chinese authorities moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. Government agencies and state-owned enterprises, including the largest banks, have received notices in recent days warning them against installing OpenClaw software on office devices for security reasons [...]. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal, some of the people said.

Certain employees, including those at state-run banks and some government agencies, were banned from installing OpenClaw on office computers and also personal phones using the company's network, some of the people said. One person said the ban was also extended to the families of military personnel. Other notices stopped short of calling for an outright ban on OpenClaw software, saying only that prior approval is needed before use, the people said. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. [...]

Despite the potential security risks, companies from Tencent to JD.com Inc. have been rolling out OpenClaw apps to try and capitalize on the groundswell of enthusiasm, while several local government agencies have declared millions of yuan in subsidies for companies that develop atop the platform. [...] Tech giants like Tencent and Alibaba, along with AI upstarts ranging from Moonshot to MiniMax, have rolled out their own tweaks of the software touting simple, one-click adoption. A slew of government agencies, in cities from Shenzhen to Wuxi, have issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw to make advances. The frenzy has helped drive up shares of AI model developer MiniMax nearly 640% since its listing just two months ago. It's now worth about $49 billion, surpassing Baidu -- once viewed as the frontrunner in Chinese AI development -- in market value. The company launched MaxClaw, an agent built on OpenClaw, in late February.

Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.
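As a toy illustration of what "computing on encrypted data" means (not FHE itself -- a fully homomorphic scheme supports arbitrary computation on ciphertexts, which is exactly what makes it so expensive): unpadded textbook RSA happens to be multiplicatively homomorphic, so a server can multiply two ciphertexts it cannot read, and the keyholder decrypts the product of the plaintexts. The parameters below are insecurely tiny and purely illustrative:

```python
# Toy homomorphic-encryption demo: textbook RSA satisfies
# E(a) * E(b) mod n == E(a * b). This is a *partially* homomorphic
# scheme, not FHE, and unpadded RSA is insecure in practice -- it only
# illustrates the idea of computing on ciphertexts.

p, q = 61, 53            # tiny demo primes (never use sizes like this for real)
n = p * q                # 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 9
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
# The "server" multiplied ciphertexts without ever seeing 7 or 9...
print(decrypt(product_of_ciphertexts))  # → 63, i.e. a * b
```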

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips -- a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
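The reported timings are easy to sanity-check (figures as given in the article):

```python
# Back-of-the-envelope check of the article's scaling figures:
# 15 ms per query on a Xeon CPU vs 14 µs on Heracles, across
# 100 million ballot verifications.

queries = 100_000_000
cpu_seconds = queries * 15e-3        # 15 ms per query
fhe_seconds = queries * 14e-6        # 14 µs per query

print(cpu_seconds / 86_400)   # ~17.4 days of CPU time
print(fhe_seconds / 60)       # ~23.3 minutes on Heracles
print(15e-3 / 14e-6)          # per-query speedup here, ~1,070x
```

Note that this particular query is roughly a 1,000-fold speedup; the 5,000-fold figure is Intel's best case across FHE workloads.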
