Movies

AMC Theatres Will Refuse To Screen AI Short Film After Online Uproar (hollywoodreporter.com) 12

An anonymous reader shares a report: When will AI movies start showing up in theaters nationwide? It was supposed to be next month. But when word leaked online that an AI short film contest winner was going to start screening before feature presentations in AMC Theatres, the cinema chain decided not to run the content.

The issue began earlier this week when the inaugural Frame Forward AI Animated Film Festival announced that Igor Alferov's short film Thanksgiving Day had won the contest. The prize package included a two-week nationwide theatrical run for Thanksgiving Day. When word of this began hitting social media, however, some were dismayed by the prospect of exhibitors embracing AI content, with many singling out AMC Theatres for criticism.

Except the short is not actually programmed by exhibitors, exactly, but by Screenvision Media -- a third-party company which manages the 20-minute, advertising-driven pre-show before a theater's lights go down. Screenvision -- which co-organized the festival along with Modern Uprising Studios -- provides content to multiple theatrical chains, not just AMC. After The Hollywood Reporter reached out to AMC about the brewing controversy, the company issued this statement to THR on Thursday: "This content is an initiative from Screenvision Media, which manages pre-show advertising for several movie theatre chains in the United States and runs in fewer than 30 percent of AMC's U.S. locations. AMC was not involved in the creation of the content or the initiative and has informed Screenvision that AMC locations will not participate."

Television

How Streaming Became Cable TV's Unlikely Life Raft (wsj.com) 10

Cable TV providers have spent the past decade losing tens of millions of households to streaming services, but companies like Charter Communications are now slowing that exodus by bundling the very apps that once threatened to replace them.

Charter added 44,000 net video subscribers in the fourth quarter of 2025, its first growth in that count since 2020, after integrating Disney+, Hulu, and ESPN+ directly into Spectrum cable packages -- a deal that grew out of a contentious 2023 contract dispute with Disney. Comcast and Optimum still lost subscribers in the quarter, though both saw those losses narrow.

Charter's Q4 numbers also got a lift from a 15-day Disney channel blackout on YouTube TV during football season, which drove more than 14,000 subscribers to Spectrum. Charter has been discounting aggressively -- video revenue fell 10% year over year despite the subscriber gains. Cox Communications launched its first streaming-inclusive cable bundles last month, and Dish Network has yet to integrate streaming apps into its packages at all.

Facebook

Mark Zuckerberg Grilled On Usage Goals and Underage Users At California Trial (wsj.com) 20

An anonymous reader quotes a report from the Wall Street Journal: Meta Chief Executive Mark Zuckerberg faced a barrage of questions about his social-media company's efforts to secure ever more of its users' time and attention at a landmark trial in Los Angeles on Wednesday. In sworn testimony, Zuckerberg said Meta's growth targets reflect an aim to give users something useful, not addict them, and that the company doesn't seek to attract children as users. [...] Mark Lanier, a lawyer for the plaintiff, repeatedly asked Zuckerberg about internal company communications discussing targets for how much time users spend with Meta's products. Lanier showed an email from 2015 in which the CEO stated his goal for 2016 was to increase users' time spent by 12%. "We used to give teams goals on time spent and we don't do that anymore because I don't think that's the best way to do it," Zuckerberg said on the witness stand in sworn testimony.

Lanier also asked Zuckerberg about documents showing Meta employees were aware of children under 13 using Meta's apps. Zuckerberg said the company's policy was that children under 13 aren't allowed on the platform and that they are removed when identified. Lanier showed an internal Meta email from 2015 that estimated 4 million children under 13 were using Instagram. He estimated that figure would represent approximately 30% of all kids aged 10 to 12 in the U.S. In response to a question about his ownership stake in Meta, which amounts to roughly $200 billion, Zuckerberg said he has pledged to donate most of his money to charity. "The better that Meta does, the more money I will be able to invest in science research," he said.

[...] On the stand, Zuckerberg was also asked about his decision to continue to allow beauty filters on the apps after 18 experts said they were harmful to teenage girls. The company temporarily banned the filters on Instagram in 2019 and commissioned a panel of experts to review the feature. All 18 said they were damaging. Meta later lifted the ban but said it didn't create any filters of its own or recommend the filters to users on Instagram after that. "We shouldn't create that content ourselves and we shouldn't recommend it to people," Zuckerberg said. But at the same time, he continued, "I think oftentimes telling people that they can't express themselves like that is overbearing." He also argued that other experts had thought such bans were a suppression of free speech. By focusing on the design of Meta's apps rather than the content posted in them, the case seeks to get around longstanding legal doctrine that largely shields social-media companies from litigation. At times, the case has veered into questions of content, prompting Meta's lawyers to object.

China

China's Hottest App of 2026 Just Asks If You're Still Alive (japantimes.co.jp) 20

A bare-bones Chinese app called "Are You Dead?" -- whose entire premise is that solo-living users tap daily to confirm they're still alive, triggering an alert to an emergency contact after two missed check-ins -- has rocketed to the top of China's app store charts and gone viral globally without spending a dime on advertising.

The app wasn't built for the elderly, as many assumed; its creators are Gen-Z developers who said they were inspired by the isolation of urban life in a country where one-person households are expected to hit 200 million by 2030. Its rise coincided with China's birth rate plunging to a record low. Beijing quietly removed the app from Chinese stores last month, and the developers are now crowdsourcing a new name on social media after their first rebrand attempt, "Demumu," failed to catch on.
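The check-in mechanism described above (one daily tap, with an alert sent to an emergency contact after two missed check-ins) is a classic dead-man's switch. A minimal sketch of that logic in Python, with all class and method names being illustrative inventions rather than anything from the actual app:

```python
from datetime import datetime, timedelta

class CheckInMonitor:
    """Dead-man's-switch sketch: alert a contact after two missed daily check-ins."""
    MISSED_LIMIT = 2  # per the article, the alert fires after two missed check-ins

    def __init__(self, contact, now=None):
        self.contact = contact
        self.last_check_in = now or datetime.now()
        self.alerted = False

    def check_in(self, now=None):
        # The user taps the app: reset the clock and re-arm the alert.
        self.last_check_in = now or datetime.now()
        self.alerted = False

    def missed_days(self, now=None):
        # Whole days elapsed since the last check-in.
        now = now or datetime.now()
        return (now - self.last_check_in) // timedelta(days=1)

    def poll(self, now=None):
        # Run periodically; fires the alert at most once per lapse.
        if not self.alerted and self.missed_days(now) >= self.MISSED_LIMIT:
            self.alerted = True
            return f"ALERT: notify {self.contact}"
        return None
```

A real implementation would run `poll` on a server-side schedule and deliver the alert by SMS or push notification rather than returning a string.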

The Courts

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction (nbcnews.com) 31

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It's the first of a consolidated group of cases -- from more than 1,600 plaintiffs, including over 350 families and over 250 school districts -- scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users' mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934 that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[...] Matt Bergman, founding attorney of Social Media Victims Law Center -- which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding -- called Wednesday's testimony "more than a legal milestone -- it is a moment that families across this country have been waiting for." "For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children," Bergman said in a statement Tuesday, adding that the moment "carries profound weight" for parents "who have spent years fighting to be heard." "They deserve the truth about what company executives knew," he said. "And they deserve accountability from the people who chose growth and engagement over the safety of their children."

Windows

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games (arstechnica.com) 8

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals -- the company's social media profile includes a self-description as "the Anti-Stick Drift Experts." Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims "provides complete support for Windows games to run on Android through high-precision compatibility design."

In practice, GameHub and GameFusion for Android haven't quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that "any Unity, Godot, or Game Maker game tends to just work" through the app, while another reports "terrible compatibility" across a wide range of games. With Sunday's announcement, GameSir promises that a similar opportunity to "unlock your entire Steam library" and "run Win games/Steam natively" on Mac will be "coming soon." GameSir is also promising "proprietary AI frame interpolation" for the Mac, following the recent rollout of a "native rendering mode" that improved frame rates on the Android version.

There are some "reasons to worry" though, based on the company's uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.

Privacy

Leaked Email Suggests Ring Plans To Expand 'Search Party' Surveillance Beyond Dogs (404media.co) 47

Ring's AI-powered "Search Party" feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced "first for finding dogs" and that the technology would eventually help "zero out crime in neighborhoods." The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out "Familiar Faces," a facial recognition tool that identifies friends and family on a user's camera, and "Fire Watch," an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.

AI

WordPress Gets AI Assistant That Can Edit Text, Generate Images and Tweak Your Site (techcrunch.com) 21

WordPress has started rolling out an AI assistant built into its site editor and media library that can edit and translate text, generate and edit images through Google's Nano Banana model, and make structural changes to sites like creating new pages or swapping fonts.

Users can also invoke the assistant by tagging "@ai" in block notes, a commenting feature added to the site editor in December's WordPress 6.9 update. The tool is opt-in -- users need to toggle on "AI tools" in their site settings -- though sites originally created using WordPress's AI website builder, launched last year, will have it enabled by default.

AI

India Tells University To Leave AI Summit After Presenting Chinese Robot as Its Own (reuters.com) 11

An anonymous reader shares a report: An Indian university has been asked to vacate its stall at the country's flagship AI summit after a staff member was caught presenting a commercially available robotic dog made in China as its own creation, two government sources said.

"You need to meet Orion. This has been developed by the Centre of Excellence at Galgotias University," Neha Singh, a professor of communications, told state-run broadcaster DD News this week in remarks that have since gone viral.

But social media users quickly identified the robot as the Unitree Go2, sold by China's Unitree Robotics for about $2,800 and widely used in research and education globally. The episode has drawn sharp criticism and has cast an uncomfortable spotlight on India's artificial intelligence ambitions.

Social Networks

Discord Rival Maxes Out Hosting Capacity As Players Flee Age-Verification Crackdown (pcgamer.com) 33

Following backlash over Discord's global rollout of strict age-verification checks, users are flocking to rival platform TeamSpeak and overwhelming its servers. According to PC Gamer, the Discord alternative said its hosting capacity has been maxed out in a number of regions including the U.S. From the report: [A]s I saw for myself while testing out free Discord alternatives, it's hard to deny the appeal of TeamSpeak. It's quick and easy to make an account, join or start a group chat, or join a massive, game-based community voice server, and at no point does TeamSpeak cheekily ask if it can scan your wizened visage.

During my testing, I was able to dive into 18+ group chats without tripping over an age gate. However, there's no guarantee TeamSpeak won't have to deploy its own age verification mechanism in the future. In the UK at least, the Online Safety Act makes those sorts of checks a legal obligation, with Prime Minister Keir Starmer recently stating "No social media platform should get a free pass when it comes to protecting our kids."

Besides all of that, if you'd rather not chat to randoms who also happen to have an unhealthy obsession with Arc Raiders, you'll likely need to pay an admittedly small subscription fee to rent your own ten-person community voice server. By that point, you're handing over card details and essentially fulfilling an age assurance check anyway. If you'd rather limit how much info your chat platform of choice has about you, there are arguably better options out there.

Movies

A YouTuber's $3M Movie Nearly Beat Disney's $40M Thriller at the Box Office (theatlantic.com) 45

Mark Fischbach, the YouTube creator known as Markiplier who has spent nearly 15 years building an audience of more than 38 million subscribers by playing indie-horror video games on camera, has pulled off something that most independent filmmakers never manage -- a self-financed, self-distributed debut feature that has grossed more than $30 million domestically against a $3 million budget.

Iron Lung, a 127-minute sci-fi adaptation of a video game Fischbach wrote, directed, starred in, and edited himself, opened to $18.3 million in its first weekend and has since doubled that figure worldwide in just two weeks, nearly matching the $19.1 million debut of Send Help, a $40 million thriller from Disney-owned 20th Century Studios. Fischbach declined deals from traditional distributors and instead spent months booking theaters privately, encouraging fans to reserve tickets online; when prospective viewers found the film wasn't screening in their city, they called local cinemas to request it, eventually landing Iron Lung on more than 3,000 screens across North America -- all without a single paid media campaign.

Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com) 62

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.

AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Businesses

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes (wsj.com) 128

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets.

One of the reservists and a civilian were indicted on a charge of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February.

The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.

Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com) 89

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.

Privacy

Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash (theverge.com) 41

Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety."

[...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.
