Power

California Set To Become First US State To Manage Power Outages With AI (technologyreview.com) 11

An anonymous reader quotes a report from MIT Technology Review: California's statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned. "We wanted to modernize our grid operations. This fits in perfectly with that," says Gopakumar Gopinathan, a senior advisor on power system technologies at the California Independent System Operator -- known as the CAISO and pronounced KAI-so. "AI is already transforming different industries. But we haven't seen many examples of it being used in our industry."

At the DTECH Midwest utility industry summit in Minneapolis on July 15, CAISO is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI. The software uses generative AI to perform real-time analyses for grid operators and comes with the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights. But while CAISO may deliver electrons to cutting-edge Silicon Valley companies and laboratories, the actual task of managing the state's electrical system is surprisingly analog.

Today, CAISO engineers scan outage reports for keywords about maintenance that's planned or in the works, read through the notes, and then load each item into the grid software system to run calculations on how a downed line or transformer might affect power supply. "Even if it takes you less than a minute to scan one on average, when you amplify that over 200 or 300 outages, it adds up," says Abhimanyu Thakur, OATI's vice president of platforms, visualization, and analytics. "Then different departments are doing it for their own respective keywords. Now we consolidate all of that into a single dictionary of keywords and AI can do this scan and generate a report proactively." If CAISO finds that Genie produces reliable, more efficient data analyses for managing outages, Gopinathan says, the operator may consider automating more functions on the grid. "After a few rounds of testing, I think we'll have an idea about what is the right time to call it successful or not," he says.
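
The workflow Thakur describes — merging each department's keyword list into one dictionary and scanning every outage note once — can be sketched roughly as follows. All names and data here are illustrative, not OATI's actual software or API:

```python
# Hypothetical sketch of the consolidated keyword scan described above: each
# department's keyword list is merged into a single lookup, and every outage
# note is scanned once instead of once per department.

DEPARTMENT_KEYWORDS = {
    "transmission": ["downed line", "conductor"],
    "substation": ["transformer", "breaker"],
    "maintenance": ["planned outage", "scheduled work"],
}

# Consolidate into a single keyword -> department lookup.
CONSOLIDATED = {
    kw: dept
    for dept, kws in DEPARTMENT_KEYWORDS.items()
    for kw in kws
}

def scan_outage_report(note: str) -> dict[str, list[str]]:
    """Return the departments whose keywords appear in one outage note."""
    note_lower = note.lower()
    hits: dict[str, list[str]] = {}
    for kw, dept in CONSOLIDATED.items():
        if kw in note_lower:
            hits.setdefault(dept, []).append(kw)
    return hits

report = scan_outage_report(
    "Planned outage: transformer T-12 offline for scheduled work"
)
# report maps each matching department to the keywords found
```

A single pass like this is why consolidating the per-department scans saves time: the work no longer multiplies with the number of departments.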

Government

US Defense Department Awards Contracts To Google, xAI 7

The U.S. Department of Defense has awarded contracts worth up to $200 million each to OpenAI, Google, Anthropic, and xAI to scale adoption of advanced AI. "The contracts will enable the DoD to develop agentic AI workflows and use them to address critical national security challenges," reports Reuters, citing the department's Chief Digital and Artificial Intelligence Office. From the report: Separately on Monday, xAI announced a suite of its products called "Grok for Government", making its advanced AI models -- including its latest flagship Grok 4 -- available to federal, local, state and national security customers. The Pentagon announced last month that OpenAI was awarded a $200 million contract, saying the ChatGPT maker would "develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains."

The contracts announced on Monday deepen the ties between companies leading the AI race and U.S. government operations, while addressing concerns around the need for competitive contracts for AI use in federal agencies.
"The adoption of AI is transforming the (DoD's) ability to support our warfighters and maintain strategic advantage over our adversaries," Chief Digital and AI Officer Doug Matty said.
AI

Meta's Superintelligence Lab Considers Shift To Closed AI Model (yahoo.com) 6

An anonymous reader quotes a report from Investing.com: Meta's newly formed superintelligence lab is discussing potential changes to the company's artificial intelligence strategy that could represent a major shift for the social media giant. A small group of top members of the lab, including 28-year-old Alexandr Wang, Meta's new chief A.I. officer, talked last week about abandoning the company's most powerful open source A.I. model, called Behemoth, in favor of developing a closed model, according to a report in the New York Times, citing people familiar with the matter.

Meta has traditionally open sourced its A.I. models, making the computer code public for other developers to build upon, and any shift toward a closed A.I. model would mark a significant philosophical change for Meta. Meta had completed training its Behemoth model by feeding in data to improve it, but delayed its release due to poor internal performance. After the company announced the formation of the superintelligence lab last month, teams working on the Behemoth model, which is considered a "frontier" model, stopped conducting new tests on it. The discussions within the superintelligence lab remain preliminary, and no decisions have been finalized. Any potential changes would require approval from Meta CEO Mark Zuckerberg.

AI

China's Moonshot Launches Free AI Model Kimi K2 That Outperforms GPT-4 In Key Benchmarks 23

Chinese AI startup Moonshot AI has released Kimi K2, a trillion-parameter open-source language model that outperforms GPT-4 in key benchmarks with particularly strong performance on coding and autonomous agent tasks. VentureBeat reports: The new model, called Kimi K2, features 1 trillion total parameters with 32 billion activated parameters in a mixture-of-experts architecture. The company is releasing two versions: a foundation model for researchers and developers, and an instruction-tuned variant optimized for chat and autonomous agent applications. "Kimi K2 does not just answer; it acts," the company stated in its announcement blog. "With Kimi K2, advanced agentic intelligence is more open and accessible than ever. We can't wait to see what you build."

The model's standout feature is its optimization for "agentic" capabilities -- the ability to autonomously use tools, write and execute code, and complete complex multi-step tasks without human intervention. In benchmark tests, Kimi K2 achieved 65.8% accuracy on SWE-bench Verified, a challenging software engineering benchmark, outperforming most open-source alternatives and matching some proprietary models. [...] On LiveCodeBench, arguably the most realistic coding benchmark available, Kimi K2 achieved 53.7% accuracy, decisively beating DeepSeek-V3's 46.9% and GPT-4.1's 44.7%. More striking still: it scored 97.4% on MATH-500 compared to GPT-4.1's 92.4%, suggesting Moonshot has cracked something fundamental about mathematical reasoning that has eluded larger, better-funded competitors.
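
The mixture-of-experts design mentioned above is what lets a 1-trillion-parameter model activate only 32 billion parameters per token: a router picks a few "expert" sub-networks per input and skips the rest. A toy sketch, with invented dimensions and nothing resembling Moonshot's actual implementation:

```python
# Toy mixture-of-experts routing: score all experts, run only the top-k,
# and mix their outputs with softmax gates. Sizes are illustrative.
import math
import random

random.seed(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 4  # tiny stand-ins for the real sizes

# Router: one score vector per expert; experts: one scalar "network" each.
router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]
experts = [random.gauss(0, 1) for _ in range(N_EXPERTS)]

def moe(token):
    """Route a token vector to its top-k experts and mix their outputs."""
    scores = [sum(w * x for w, x in zip(row, token)) for row in router]
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    z = sum(math.exp(scores[i]) for i in top)
    gates = {i: math.exp(scores[i]) / z for i in top}  # softmax over top-k
    # Only TOP_K experts run; the other N_EXPERTS - TOP_K are skipped.
    return [sum(gates[i] * experts[i] * x for i in top) for x in token]

out = moe([1.0, -0.5, 0.25, 2.0])
```

The compute saving scales with the ratio of total to activated experts, which is how the model's headline parameter count and its per-token cost can differ by more than an order of magnitude.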

But here's what the benchmarks don't capture: Moonshot is achieving these results with a model that costs a fraction of what incumbents spend on training and inference. While OpenAI burns through hundreds of millions on compute for incremental improvements, Moonshot appears to have found a more efficient path to the same destination. It's a classic innovator's dilemma playing out in real time -- the scrappy outsider isn't just matching the incumbent's performance, they're doing it better, faster, and cheaper.
Apple

Apple Faces Calls To Reboot AI Strategy With Shares Slumping (yahoo.com) 32

Apple is facing pressure to shake up its corporate playbook to invigorate its struggling artificial intelligence efforts. From a report: Alarmed by a share slump that's erased more than $640 billion in market value this year and frustrated with delays in rolling out AI features, investors are calling for Apple to break with long-standing traditions to make a big acquisition and more aggressively pursue talent.

"Historically Apple does not do big mergers and acquisitions," said Citigroup Inc. analyst Atif Malik, noting that the last major deal was its takeover of Beats in 2014. But, he argues, "investors would turn more positive if Apple could acquire or invest a meaningful stake in an established AI provider."

Apple shares have fallen 16% this year while traders bid up the shares of peers like Meta, which is spending lavishly on AI. While Apple faces other problems, including its exposure to tariffs and regulatory issues, disappointment in bringing compelling AI features to its vast ecosystem of devices has become top of mind for investors.

AI

Cognition AI Buys Windsurf as AI Frenzy Escalates 10

Cognition AI, an artificial intelligence startup that offers a software coding assistant, said on Monday that it had bought rival Windsurf as part of an escalating battle to lead in the technology. From a report: The move follows a $2.4 billion deal by Google to acquire some of Windsurf's top executives and license the startup's technology, which was revealed on Friday.

Google's deal appeared to leave Windsurf in a difficult position as a stand-alone start-up. OpenAI, the maker of the ChatGPT chatbot, had also been in talks to buy Windsurf before the Google deal. "We've long admired the Windsurf team and what they've built," said Scott Wu, a co-founder of Cognition, in an email to employees viewed by The New York Times. "Within our lifetime, engineers will go from bricklayers to architects, focusing on the creativity of designing systems rather than the manual labor of putting them together."
Science

Quality of Scientific Papers Questioned as Academics 'Overwhelmed' By the Millions Published (theguardian.com) 32

A scientific paper featuring an AI-generated image of a rat with an oversized penis was retracted three days after publication, highlighting broader problems plaguing academic publishing as researchers struggle with an explosion of scientific literature. The paper appeared in Frontiers in Cell and Developmental Biology before widespread mockery forced its withdrawal.

Research studies indexed on Clarivate's Web of Science database increased 48% between 2015 and 2024, rising from 1.71 million to 2.53 million papers. Nobel laureate Venki Ramakrishnan called the publishing system "broken and unsustainable," while University of Exeter researcher Mark Hanson described scientists as "increasingly overwhelmed" by the volume of articles. The Royal Society plans to release a major review of scientific publishing disruptions at summer's end, with former government chief scientist Mark Walport citing incentives that favor quantity over quality as a fundamental problem.
Facebook

Zuckerberg Pledges Hundreds of Billions For AI Data Centers in Superintelligence Push (reuters.com) 56

Mark Zuckerberg said on Monday that Meta would spend hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying his pursuit of a technology that he has chased with a talent war for top AI engineers. From a report: The social media giant is among the large technology companies that have chased high-profile deals and doled out multi-million-dollar pay packages in recent months to fast-track work on machines that can outthink humans on most tasks.

Unveiling the spending commitment in a Threads post on Monday, CEO Zuckerberg touted the strength of the company's core advertising business to support the massive spending, which has raised concerns among tech investors about potential payoffs. "We have the capital from our business to do this," Zuckerberg said. He also cited a report from the chip industry publication SemiAnalysis that said Meta is on track to be the first lab to bring online a 1-gigawatt-plus supercluster, which refers to a massive data center built to train advanced AI models.

Japan

Japanese AI Adoption Remains Drastically Below Global Leaders (nhk.or.jp) 23

A Japanese government survey found 26.7% of people in Japan used generative AI during fiscal 2024, which ended in March. The figure tripled from the previous year but remained far behind China's 81.2% and the United States' 68.8%.

People in their 20s led Japanese adoption at 44.7%, followed by those in their 40s and 30s. Among companies, 49.7% of Japanese firms planned to use generative AI, compared to more than 80% of companies in China and the US.
Transportation

China's Omoway Announces a New Self-Driving Electric Scooter Named 'Omo X' (electrek.co) 10

Electrek reports on the new Omo X, a scooter planned for release in 2026 that's "full of premium tech features that blur the lines between e-scooter and self-driving EV." At its recent launch in Jakarta, the Omo X didn't just sit pretty center stage, it actually drove itself onto the stage using its "Halo Pilot" system, which apparently comes complete with adaptive cruise control, remote summon, self-parking, and even automatic reversing and self-balancing at low speeds. This is legit autonomous behavior previously reserved for cars, now shrunk down and smoothed out for a two-wheeler. Under the hood — or rather, behind the sleek bodywork — Omoway's Halo architecture delivers collision warning, emergency-brake assist, blind spot monitoring, and V2V [vehicle-to-vehicle] communication.

The frame is modular, too. It can be reconfigured in step-through, straddle, or touring posture to suit casual riders, commuters, and motorcycle wannabes alike. That kind of flexibility isn't just a marketing gimmick, but rather it looks purpose-built to capture diverse motorcycle-heavy markets like Indonesia, which counts over 120 million two-wheelers and is quickly transitioning to electric models... It's tech-rich, head-turning, and seems built to evolve with software updates. The remote summon and AI-assisted features could genuinely simplify urban mobility, and tricks like automatically driving itself to a charging station sound legitimately useful...

[But] Omoway's vision here will have to carry extra sensors, actuators, and redundant systems to support those smart functions. With added costs and complexity, will riders in developing markets pay a premium, carry extra maintenance risk, or worry about obsolescence? Much hinges on Omoway's software support and local service networks.

The article reports a projected price around €3,500 (roughly $3,800). "And while Indonesia may have been the launchpad, global markets aren't off the table..."
Businesses

Robinhood Up 160% in 2025, But May Face Obstacles (cnbc.com) 11

Robinhood's stock is up more than 160% for 2025, reports CNBC, and the trading platform's stock hit an all-time high on Friday. But "Despite its stellar year, the online broker is facing several headwinds..." Florida Attorney General James Uthmeier opened a formal investigation into Robinhood Crypto on Thursday, alleging the platform misled users by claiming to offer the lowest-cost crypto trading. "Robinhood has long claimed to be the best bargain, but we believe those representations were deceptive," Uthmeier said in a statement. The probe centers on Robinhood's use of payment for order flow — a common practice where market makers pay to execute trades — which the AG said can result in worse pricing for customers.

Robinhood is also facing opposition to a new 25% cut of staking rewards for U.S. users, set to begin October 1. In Europe, the platform will take a smaller 15% cut. Staking allows crypto holders to earn yield by locking up their tokens to help secure blockchain networks like ethereum, but platforms often take a percentage of those rewards as commission. Robinhood's 25% cut puts it in line with Coinbase, which charges between 25.25% and 35% depending on the token. The cut is notably higher than Gemini's flat 15% fee. It marks a shift for the company, which had previously steered clear of staking amid regulatory uncertainty...
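
The commission math above is simple to make concrete: the platform keeps its cut of the staking rewards and the holder receives the rest. The 3% gross yield below is an illustrative assumption, not a figure from the report:

```python
# Net staking yield after a platform's commission on rewards.

def net_staking_yield(gross_yield: float, platform_cut: float) -> float:
    """Yield the holder actually receives after the platform's commission."""
    return gross_yield * (1 - platform_cut)

gross = 0.03  # assume a 3% gross staking yield for illustration
print(f"Robinhood US (25% cut): {net_staking_yield(gross, 0.25):.4%}")  # 2.2500%
print(f"Robinhood EU (15% cut): {net_staking_yield(gross, 0.15):.4%}")  # 2.5500%
print(f"Gemini       (15% cut): {net_staking_yield(gross, 0.15):.4%}")
```

On these assumed numbers, the US/EU difference costs a US holder about a tenth of their gross yield relative to a European one.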

The company now offers blockchain-based assets in Europe that give users synthetic exposure to private firms like OpenAI and SpaceX through special purpose vehicles, or SPVs. An SPV is a separate entity that acquires shares in a company. Users then buy tokens of the SPV and don't have shareholder privileges or voting rights directly in the company. OpenAI has publicly objected, warning the tokens do not represent real equity and were issued without its approval... "What's important is that retail customers have an opportunity to get exposure to this asset," [Robinhood CEO Vlad Tenev said in an interview with CNBC], pointing to the disruptive nature of AI and the historically limited access to pre-IPO companies. "It is true that these are not technically equity," Tenev added, noting that institutional investors often gain similar exposure through structured financial instruments...

Despite the regulatory noise, many investors remain focused on Robinhood's upside, and particularly the political tailwinds.

Programming

AI Slows Down Some Experienced Software Developers, Study Finds (reuters.com) 57

An anonymous reader quotes a report from Reuters: Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found. AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with. Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%. The study's lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected "a 2x speed up, somewhat obviously." [...]
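
The study's percentages are easier to grasp on a concrete baseline. Assuming a task that would take 100 minutes without AI (an illustrative figure, not one from the study):

```python
# The METR study's numbers made concrete on a hypothetical 100-minute task.
baseline_minutes = 100.0  # illustrative baseline

expected = baseline_minutes * (1 - 0.24)   # developers' prediction: 76 min
perceived = baseline_minutes * (1 - 0.20)  # their belief afterwards: 80 min
actual = baseline_minutes * (1 + 0.19)     # what the study measured: 119 min

# The gap between perception and measurement:
print(actual - perceived)  # ≈ 39 minutes on a 100-minute task
```

The striking part is not just the 19% slowdown but the perception gap: developers believed they had saved 20 minutes while actually losing 19.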

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested. "When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed," Becker said. The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren't familiar with. Still, the majority of the study's participants, as well as the study's authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page. "Developers have goals other than completing the task as soon as possible," Becker said. "So they're going with this less effortful route."

AI

AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds (arstechnica.com) 62

An anonymous reader quotes a report from Ars Technica: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job -- a potential suicide risk -- GPT-4o helpfully listed specific tall bridges instead of identifying the crisis. These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements.

The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist." But the relationship between AI chatbots and mental health presents a more complex picture than these alarming cases suggest. The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

Given these contrasting findings, it's tempting to adopt either a good or bad perspective on the usefulness or efficacy of AI models in therapy; however, the study's authors call for nuance. Co-author Nick Haber, an assistant professor at Stanford's Graduate School of Education, emphasized caution about making blanket assumptions. "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be." The Stanford study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

Medicine

Researchers Develop New Tool To Measure Biological Age 6

Stanford researchers have developed a blood-based AI tool that calculates the biological age of individual organs to reveal early signs of aging-related disease. The Mercury News reports: The tool, unveiled in Nature Medicine Wednesday, was developed by a research team spearheaded by Tony Wyss-Coray. Wyss-Coray, a Stanford Medicine professor who has spent almost 15 years fixated on the study of aging, said that the tool could "change our approach to health care." Scouring a single draw of blood for thousands of proteins, the tool works by first comparing the levels of these proteins with their average levels at a given age. An artificial intelligence algorithm then uses these gaps to derive a "biological age" for each organ.
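
The two-step approach described above — compare measured protein levels against the age-typical average, then map the gaps to an age estimate — can be sketched in heavily simplified form. The real tool uses thousands of proteins and a trained model; the proteins, averages, and weights below are invented for illustration:

```python
# Simplified sketch: a biological-age estimate as chronological age shifted
# by weighted gaps between measured and age-typical protein levels.

# Hypothetical population-average protein levels at age 60.
AVERAGE_AT_AGE = {"protein_a": 1.00, "protein_b": 0.80, "protein_c": 1.20}

# Invented weights: years of biological age per unit of gap.
WEIGHTS = {"protein_a": 5.0, "protein_b": -3.0, "protein_c": 2.0}

def biological_age(chronological_age: float, measured: dict) -> float:
    """Shift chronological age by the weighted gaps from age-typical levels."""
    shift = sum(
        WEIGHTS[p] * (measured[p] - AVERAGE_AT_AGE[p])
        for p in AVERAGE_AT_AGE
    )
    return chronological_age + shift

# A 60-year-old whose proteins deviate from the age-60 averages:
age = biological_age(60, {"protein_a": 1.10, "protein_b": 0.70, "protein_c": 1.20})
# age ≈ 60.8: 60 + 5*(0.1) + (-3)*(-0.1) + 2*(0)
```

In the actual work the mapping from gaps to age is learned per organ, which is what lets one blood draw yield eleven different organ ages.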

To test the accuracy of these "biological ages," the researchers processed data for 45,000 people from the UK Biobank, a database that has kept detailed health information from over half a million British citizens for the last 17 years. When they analyzed the data, the researchers found a clear trend for all 11 organs they studied: biologically older organs were significantly more likely to develop aging-related diseases than younger ones. For instance, those with older hearts were at much higher risk for atrial fibrillation or heart failure, while those with older lungs were much more likely to develop chronic obstructive pulmonary disease.

But the brain's biological age, Wyss-Coray said, was "particularly important in determining or predicting how long you're going to live." "If you have a very young brain, those people live the longest," he said. "If you have a very old brain, those people are going to die the soonest out of all the organs we looked at." Indeed, for a given chronological age, those with "extremely aged brains" -- the 7% whose brains scored the highest on biological age -- were over 12 times more likely to develop Alzheimer's disease over the next decade than those with "extremely youthful brains" -- the 7% whose brains inhabited the other end of the spectrum.

Wyss-Coray's team also found several factors -- smoking, alcohol, poverty, insomnia and processed meat consumption -- were directly correlated with biologically aged organs. Poultry consumption, vigorous exercise, and oily fish consumption were among the factors correlated with biologically youthful organs. Supplements like glucosamine and estrogen replacements also seemed to have "protective effects," Wyss-Coray said. [...] The test ... would cost $200 once it could be operated at scale.
Businesses

OpenAI's Windsurf Deal Is Off, Windsurf's CEO Is Going To Google (theverge.com) 11

OpenAI's planned acquisition of Windsurf has fallen apart. Instead, Google is hiring Windsurf CEO Varun Mohan, cofounder Douglas Chen, and parts of its R&D team to join DeepMind and focus on agentic coding for Gemini. Google will not acquire Windsurf but will receive a non-exclusive license to some of its technology, while Windsurf continues independently under new leadership. The Verge reports: Effective immediately, Jeff Wang, Windsurf's head of business, has become interim CEO, and Graham Moreno, its VP of global sales, will be Windsurf's new president. "Gemini is one of the best models available and we've been investing in its advanced capabilities for developers," Chris Pappas, a spokesperson for Google, told The Verge in a statement. "We're excited to welcome some top AI coding talent from Windsurf's team to Google DeepMind to advance our work in agentic coding."

"We are excited to be joining Google DeepMind along with some of the Windsurf team," Mohan and Chen said in a statement. "We are proud of what Windsurf has built over the last four years and are excited to see it move forward with their world class team and kick-start the next phase." Google didn't share how much it was paying to bring on the team. OpenAI was previously reported to be buying Windsurf for $3 billion.

Programming

'Coding is Dead': University of Washington CS Program Rethinks Curriculum For the AI Era (geekwire.com) 117

The University of Washington's Paul G. Allen School of Computer Science & Engineering is overhauling its approach to computer science education as AI reshapes the tech industry. Director Magdalena Balazinska has declared that "coding, or the translation of a precise design into software instructions, is dead" because AI can now handle that work.

The Pacific Northwest's premier tech program now allows students to use GPT tools in assignments, requiring them to cite AI as a collaborator just as they would credit input from a fellow student. The school is considering "coordinated changes to our curriculum" after encouraging professors to experiment with AI integration.
Robotics

AI-Trained Surgical Robot Removes Pig Gallbladders Without Any Human Help 31

An anonymous reader quotes a report from The Guardian: Automated surgery could be trialled on humans within a decade, say researchers, after an AI-trained robot armed with tools to cut, clip and grab soft tissue successfully removed pig gallbladders without human help. The robot surgeons were schooled on video footage of human medics conducting operations using organs taken from dead pigs. In an apparent research breakthrough, eight operations were conducted on pig organs with a 100% success rate by a team led by experts at Johns Hopkins University in Baltimore in the US. [...]

The technology allowing robots to handle complex soft tissues such as gallbladders, which release bile to aid digestion, is rooted in the same type of computerized neural networks that underpin widely used artificial intelligence tools such as ChatGPT or Google Gemini. The surgical robots were slightly slower than human doctors but they were less jerky and plotted shorter trajectories between tasks. The robots were also able to repeatedly correct mistakes as they went along, asked for different tools and adapted to anatomical variation, according to a peer-reviewed paper published in the journal Science Robotics. The authors from Johns Hopkins, Stanford and Columbia universities called it "a milestone toward clinical deployment of autonomous surgical systems." [...]

In the Johns Hopkins trial, the robots took just over five minutes to carry out the operation, which required 17 steps including cutting the gallbladder away from its connection to the liver, applying six clips in a specific order and removing the organ. The robots on average corrected course without any human help six times in each operation. "We were able to perform a surgical procedure with a really high level of autonomy," said Axel Krieger, assistant professor of mechanical engineering at Johns Hopkins. "In prior work, we were able to do some surgical tasks like suturing. What we've done here is really a full procedure. We have done this on eight gallbladders, where the robot was able to perform precisely the clipping and cutting step of gallbladder removal without any human intervention. So I think it's a really big landmark study that such a difficult soft tissue surgery is possible to do autonomously."
Currently, nearly all of the NHS's 70,000 annual robotic surgeries are human-controlled, but the UK plans to expand robot-assisted procedures to 90% within the next decade.
AI

Ohio City Using AI-Equipped Garbage Trucks To Scan Your Trash, Scold You For Not Recycling (daytondailynews.com) 125

The city of Centerville, Ohio has deployed AI-enabled garbage trucks that scan residents' trash and send personalized postcards scolding them for improper recycling. Dayton Daily News reports: "Reducing contamination in our recycling system lowers processing costs and improves the overall efficiency of our collection," City Manager Wayne Davis said in a statement regarding the AI pilot program. "This technology allows us to target problem areas, educate residents and make better use of city resources." Residents whose items don't meet the guidelines will be notified via a personalized postcard, one that tells them which items are not accepted and provides tips on proper recycling.

The total contract amount for the project is $74,945, which is entirely funded through a Montgomery County Solid Waste District grant, Centerville spokeswoman Kate Bostdorff told this news outlet. The project launched Monday, Bostdorff said. "A couple of the trucks have been collecting baseline recycling data, and we have been working through software training for a few weeks now," she said. [...] Centerville said it will continually evaluate how well the AI system works and use what it learns during the pilot project to "guide future program enhancements."

Youtube

YouTube Can't Put Pandora's AI Slop Back in the Box (gizmodo.com) 75

Longtime Slashdot reader SonicSpike shares a report from Gizmodo: YouTube is inundated with AI-generated slop, and that's not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off "spam." At the same time, it's still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create "original" and "authentic" content, but now it will "better identify mass-produced and repetitious content." The changes will take place on July 15. The company didn't say whether this change is related to AI, but the timing can't be overlooked considering how more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI "revolution" has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users' YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on "Last Week Tonight" specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube's Partner Program.

AI

Video Game Actors End 11-Month Strike With New AI Protections (san.com) 33

An anonymous reader quotes a report from Straight Arrow News: Hollywood video game performers ended their nearly year-long strike Wednesday with new protections against the use of digital replicas of their voices or appearances. If those replicas are used, actors must be paid at rates comparable to in-person work. The SAG-AFTRA union demanded stronger pay and better working conditions. Among their top concerns was the potential for artificial intelligence to replace human actors without compensation or consent.

Under a deal announced in a media release, studios such as Activision and Electronic Arts are now required to obtain written consent from performers before creating digital replicas of their work. Actors have the right to suspend their consent for AI-generated material if another strike occurs. "This deal delivers historic wage increases, industry-leading AI protections and enhanced health and safety measures for performers," Audrey Cooling, a spokesperson for the video game producers, said in the release. The full list of studios includes Activision Productions, Blindlight, Disney Character Voices, Electronic Arts Productions, Formosa Interactive, Insomniac Games, Llama Productions, Take 2 Productions and WB Games.

SAG-AFTRA members approved the contract by a vote of 95.04% to 4.96%, according to the announcement. The agreement includes a wage increase of more than 15%, with additional 3% raises in November 2025, 2026 and 2027. The contract expires in October 2028. [...] The video game strike, which started in July 2024, did not shut down production like the SAG-AFTRA actors' strike in 2023. Hollywood actors went on strike for 118 days, from July 14 to November 9, 2023, halting nearly all scripted television and film work. That strike, which centered on streaming residuals and AI concerns, prevented actors from engaging in promotional work, such as attending premieres and posting on social media. In contrast, video game performers were allowed to work during their strike, but only with companies that had signed interim agreements addressing concerns related to AI. More than 160 companies signed on, according to The Associated Press. Still, the year took a toll.
