Encryption

Lawmakers Vote To Stop NYPD's Attempt To Encrypt Their Radios (nypost.com) 74

alternative_right shares a report: New York state lawmakers voted to stop the NYPD's attempt to block its radio communications from the public Thursday, with the bill expected to head to Gov. Kathy Hochul's desk. The "Keep Police Radio Public Act" passed both the state Senate and state Assembly, with a sponsor of the legislation arguing the proposal strikes the "proper balance" in the battle between transparency and sensitive information.

"Preserving access to police radio is critical for a free press and to preserve the freedoms and protections afforded by the public availability of this information," state Sen. Michael Gianaris (D-Queens) said in a statement. "As encrypted radio usage grows, my proposal strikes the proper balance between legitimate law enforcement needs and the rights and interests of New Yorkers."

The bill, which was sponsored in the Assembly by lawmaker Karines Reyes (D-Bronx), is meant to make real-time police radio communications accessible to emergency services organizations and reporters. "Sensitive information" would still be kept private, according to the legislation.
In late 2023, the NYPD began encrypting its radio communications to increase officer safety and "protect the privacy interests of victims and witnesses." However, it led to outcry from press advocates and local officials concerned about reduced transparency and limited access to real-time information.

A bill to address the issue has passed both chambers of New York's legislature, but Governor Hochul has not yet indicated whether she will sign it.
Nintendo

Nintendo Warns Switch 2 GameChat Users: 'Your Chat Is Recorded' (arstechnica.com) 68

Ars Technica's Kyle Orland reports: Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company "may also monitor and record your video and audio interactions with other users." Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is "recorded and stored temporarily" both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo's Community Guidelines, the company writes.

That reporting feature lets a user "review a recording of the last three minutes of the latest three GameChat sessions" to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that "these recordings are available only if the report is submitted within 24 hours," suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it "may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats." If you don't consent to the potential for such recording and sharing, you're prevented from using GameChat altogether.

Nintendo is extremely clear that the purpose of its recording and review system is "to protect GameChat users, especially minors" and "to support our ability to uphold our Community Guidelines." This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. [...] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user's privacy. Still, if you're paranoid about Nintendo potentially seeing and hearing what's going on in your living room, it's good to at least be aware of it.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
Cloud

AWS Forms EU-Based Cloud Unit As Customers Fret (theregister.com) 31

An anonymous reader quotes a report from The Register: In a nod to European customers' growing mistrust of American hyperscalers, Amazon Web Services says it is establishing a new organization in the region "backed by strong technical controls, sovereign assurances, and legal protections." Ever since the Trump 2.0 administration assumed office and implemented an erratic and unprecedented foreign policy stance, including aggressive tariffs and threats to the national sovereignty of Greenland and Canada, customers in Europe have voiced unease about placing their data in the hands of big U.S. tech companies. The Register understands that data sovereignty is now one of the primary questions that customers at European businesses ask sales reps at hyperscalers when they have conversations about new services.

[...] AWS is forming a new European organization with a locally controlled parent company and three subsidiaries incorporated in Germany, as part of its European Sovereign Cloud (ESC) rollout, set to launch by the end of 2025. Kathrin Renz, an AWS Industries VP based in Munich, will lead the operation as the first managing director of the AWS ESC. The other leaders, we're told, include a government security official and a privacy official – all EU citizens. The cloud giant stated: "AWS will establish an independent advisory board for the AWS European Sovereign Cloud, legally obligated to act in the best interest of the AWS European Sovereign Cloud. Reinforcing the sovereign control of the AWS European Sovereign Cloud, the advisory board will consist of four members, all EU citizens residing in the EU, including at least one independent board member who is not affiliated with Amazon. The advisory board will act as a source of expertise and provide accountability for AWS European Sovereign Cloud operations, including strong security and access controls and the ability to operate independently in the event of disruption."

The AWS ESC allows the business to continue operations indefinitely, "even in the event of a connectivity interruption between the AWS European Sovereign Cloud and the rest of the world." Authorized ESC staff who are EU residents will have independent access to a replica of the source code needed to maintain services under "extreme circumstances." The services will have "no critical dependencies on non-EU infrastructure," with staff, tech, and leadership all based on the continent, AWS said. "The AWS European Sovereign Cloud will have its own dedicated Amazon Route 53, providing customers with a highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services," the company said.
"The Route 53 name servers for the AWS European Sovereign Cloud will use only European Top Level Domains (TLDs) for their own names," added AWS. "AWS will also launch a dedicated 'root' European Certificate Authority, so that the key material, certificates, and identity verification needed for Secure Sockets Layer/Transport Layer Security certificates can all run autonomously within the AWS European Sovereign Cloud."

The Register also notes that the sovereign cloud will be "supported by a dedicated European Security Operations Center (SOC), led by an EU citizen residing in the EU." That said, the parent company "remains under American ownership and may be subject to the Cloud Act, which requires U.S. companies to turn over data to law enforcement authorities with the proper warrants, no matter where that data is stored."
Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io) 77

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of websites. These scripts load in users' mobile browsers and silently connect with native apps running on the same device through localhost sockets. Because native apps programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, de-anonymizing users visiting sites that embed their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.
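The core of the technique is nothing more exotic than loopback networking. As a rough conceptual sketch (in Python rather than Android Java and page JavaScript, with a made-up port and cookie value), a resident app only has to listen on 127.0.0.1, and any other code on the device -- including a script running in a web page -- can hand it an identifier over that unmediated channel:

```python
# Conceptual sketch only: loopback IPC of the kind described above.
# The port number and payload are made up for illustration.
import socket
import threading

PORT = 12387  # hypothetical fixed local port
ready = threading.Event()

def native_app_listener():
    """Stand-in for a native app waiting for identifiers on localhost."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))  # loopback only; on Android this needs just the INTERNET permission
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            web_cookie = conn.recv(1024).decode()
            # A real tracker would now join this web-side ID with the app's
            # long-lived identifiers (e.g. the advertising ID) and upload the pair.
            print("app received web identifier:", web_cookie)

def page_script():
    """Stand-in for page JavaScript sending its cookie to the local app."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(b"_fbp=fb.1.1700000000.123456789")  # example first-party cookie value

t = threading.Thread(target=native_app_listener)
t.start()
ready.wait()
page_script()
t.join()
```

The browser-side mitigations mentioned below essentially come down to restricting or gating exactly this kind of page-initiated localhost connection.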

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Open Source

'Ladybird' Browser's Nonprofit Becomes Public Charity, Now Officially Tax-Exempt (ladybird.org) 26

The Ladybird browser project is now officially tax-exempt as a U.S. 501(c)(3) nonprofit.

Started two years ago (by the original creator of SerenityOS), Ladybird will be "an independent, fast and secure browser that respects user privacy and fosters an open web." They're targeting Summer 2026 for the first Alpha version on Linux and macOS, and in May enjoyed "a pleasantly productive month" with 261 merged PRs from 53 contributors — and seven new sponsors (including coding livestreamer "ThePrimeagen").

And they're now recognized as a public charity: This is retroactive to March 2024, so donations made since then may be eligible for tax exemption (depending on country-specific rules). You can find all the relevant information on our new Organization page. ["Our mission is to create an independent, fast and secure browser that respects user privacy and fosters an open web. We are tax-exempt and rely on donations and sponsorships to fund our development efforts."]
Other announcements for May:
  • "We've been making solid progress on Web Platform Tests... This month, we added 15,961 new passing tests for a total of 1,815,223."
  • "We've also done a fair bit of performance work this month, targeting Speedometer and various websites that are slower than we'd like." [The optimizations led to a 10% speed-up on Speedometer 2.1.]

Government

Brazil Tests Letting Citizens Earn Money From Data in Their Digital Footprint (restofworld.org) 15

With over 200 million people, Brazil is one of the world's most populous countries. Now it's testing a program that will allow Brazilians "to manage, own, and profit from their digital footprint," according to RestOfWorld.org — "the first such nationwide initiative in the world."

The government says it's partnering with California-based data valuation/monetization firm DrumWave to create a "data savings account" to "transform data into economic assets, with potential for monetization and participation in the benefits generated by investing in technologies such as AI LLMs." But all based on "conscious and authorized use of personal information." RestOfWorld reports: Today, "people get nothing from the data they share," Brittany Kaiser, co-founder of the Own Your Data Foundation and board adviser for DrumWave, told Rest of World. "Brazil has decided its citizens should have ownership rights over their data...." After a user accepts a company's offer on their data, payment is cashed in the data wallet, and can be immediately moved to a bank account. The project will be "a correction in the historical imbalance of the digital economy," said Kaiser. Through data monetization, the personal data that companies aggregate, classify, and filter to inform many aspects of their operations will become an asset for those providing the data...

Brazil's project stands out because it brings the private sector and the government together, "so it has a better chance of catching on," said Kaiser. In 2023, Brazil's Congress drafted a bill that classifies data as personal property. The country's current data protection law classifies data as a personal, inalienable right. The new legislation gives people full rights over their personal data — especially data created "through use and access of online platforms, apps, marketplaces, sites and devices of any kind connected to the web." The bill seeks to ensure companies offer their clients benefits and financial rewards, including payment as "compensation for the collecting, processing or sharing of data." It has garnered bipartisan support, and is currently being evaluated in Congress...

If approved, the bill will allow companies to collect data more quickly and precisely, while giving users more clarity over how their data will be used, according to Antonielle Freitas, data protection officer at Viseu Advogados, a law firm that specializes in digital and consumer laws. As data collection becomes centralized through regulated data brokers, the government can benefit by paying the public to gather anonymized, large-scale data, Freitas told Rest of World. These databases are the basis for more personalized public services, especially in sectors such as health care, urban transportation, public security, and education, she said.

This first pilot program involves "a small group of Brazilians who will use data wallets for payroll loans," according to the article — although Pedro Bastos, a researcher at Data Privacy Brazil, sees downsides. "Once you treat data as an economic asset, you are subverting the logic behind the protection of personal data," he told RestOfWorld. The data ecosystem "will no longer be defined by who can create more trust and integrity in their relationships, but instead, it will be defined by who's the richest."

Thanks to Slashdot reader applique for sharing the news.
Privacy

Developer Builds Tool That Scrapes YouTube Comments, Uses AI To Predict Where Users Live (404media.co) 34

An anonymous reader quotes a report from 404 Media: If you've left a comment on a YouTube video, a new website claims it might be able to find every comment you've ever left on any video you've ever watched. Then an AI can build a profile of the commenter and guess where you live, what languages you speak, and what your politics might be. The service is called YouTube-Tools and is just the latest in a suite of web-based tools that started life as a site to investigate League of Legends usernames. Now it uses a modified large language model created by the company Mistral to generate a background report on YouTube commenters based on their conversations. Its developer claims it's meant to be used by the cops, but anyone can sign up. It costs about $20 a month to use and all you need to get started is a credit card and an email address.

The tool presents a significant privacy risk, and shows that people may not be as anonymous in the YouTube comments sections as they may think. The site's report is ready in seconds and provides enough data for an AI to flag identifying details about a commenter. The tool could be a boon for harassers attempting to build profiles of their targets, and 404 Media has seen evidence that harassment-focused communities have used the developers' other tools. YouTube-Tools also appears to be a violation of YouTube's privacy policies, and raises questions about what YouTube is doing to stop the scraping and repurposing of people's data like this. "Public search engines may scrape data only in accordance with YouTube's robots.txt file or with YouTube's prior written permission," it says.
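As a point of reference, the robots.txt rules that YouTube's terms point to are machine-readable, and checking whether a given URL is permitted for a crawler takes only a few lines of Python's standard library. A minimal sketch (the crawler name is a placeholder and the URLs are just examples):

```python
# Minimal sketch: test example URLs against YouTube's published robots.txt.
# "ExampleResearchBot" is a placeholder user-agent, not a real crawler.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.youtube.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://www.youtube.com/results?search_query=example",
]
for url in urls:
    verdict = "allowed" if rp.can_fetch("ExampleResearchBot", url) else "disallowed"
    print(f"{verdict}: {url}")
```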

Security

ASUS Router Backdoors Affect 9,000 Devices, Persist After Firmware Updates 23

An anonymous reader quotes a report from SC Media: Thousands of ASUS routers have been compromised with malware-free backdoors in an ongoing campaign to potentially build a future botnet, GreyNoise reported Wednesday. The threat actors abuse security vulnerabilities and legitimate router features to establish persistent access without the use of malware, and these backdoors survive both reboots and firmware updates, making them difficult to remove.

The attacks, which researchers suspect are conducted by highly sophisticated threat actors, were first detected by GreyNoise's AI-powered Sift tool in mid-March and disclosed Thursday after coordination with government officials and industry partners. Sekoia.io also reported the compromise of thousands of ASUS routers in their investigation of a broader campaign, dubbed ViciousTrap, in which edge devices from other brands were also compromised to create a honeypot network. Sekoia.io found that the ASUS routers were not used to create honeypots, and that the threat actors gained SSH access using the same port, TCP/53282, identified by GreyNoise in their report.
The backdoor campaign affects multiple ASUS router models, including the RT-AC3200, RT-AC3100, GT-AC2900, and Lyra Mini.

GreyNoise advises users to perform a full factory reset and manually reconfigure any potentially compromised device. To identify a breach, users should check for SSH access on TCP port 53282 and inspect the authorized_keys file for unauthorized entries.
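A quick way to run the port check GreyNoise recommends is a plain TCP connection attempt from a machine on the same LAN. A minimal sketch (substitute your router's actual address; an open port is a red flag to investigate, not proof of compromise by itself):

```python
# Minimal check for the backdoor port GreyNoise identified (TCP 53282).
import socket

ROUTER_IP = "192.168.1.1"   # placeholder: substitute your router's LAN address
BACKDOOR_PORT = 53282       # SSH port identified in the GreyNoise report

try:
    with socket.create_connection((ROUTER_IP, BACKDOOR_PORT), timeout=3):
        port_open = True
except OSError:  # connection refused, timeout, unreachable, etc.
    port_open = False

if port_open:
    print(f"TCP {BACKDOOR_PORT} accepts connections on {ROUTER_IP}: "
          "inspect authorized_keys and consider a full factory reset.")
else:
    print(f"TCP {BACKDOOR_PORT} appears closed or filtered on {ROUTER_IP}.")
```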
Security

Data Broker Giant LexisNexis Says Breach Exposed Personal Information of Over 364,000 People (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: LexisNexis Risk Solutions, a data broker that collects and uses consumers' personal data to help its paying corporate customers detect possible risk and fraud, has disclosed a data breach affecting more than 364,000 people. The company said in a filing with Maine's attorney general that the breach, dating back to December 25, 2024, allowed a hacker to obtain consumers' sensitive personal data from a third-party platform used by the company for software development.

Jennifer Richman, a spokesperson for LexisNexis, told TechCrunch that an unknown hacker accessed the company's GitHub account. The stolen data varies, but includes names, dates of birth, phone numbers, postal and email addresses, Social Security numbers, and driver's license numbers. It's not immediately clear what circumstances led to the breach. Richman said LexisNexis received a report on April 1, 2025 "from an unknown third party claiming to have accessed certain information." The company would not say if it had received a ransom demand from the hacker.

Security

Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials (wired.com) 15

A security researcher has discovered an exposed database containing 184 million login credentials for major services including Apple, Facebook, and Google accounts, along with credentials linked to government agencies across 29 countries. Jeremiah Fowler found the 47-gigabyte trove in early May, but the database contained no identifying information about its owner or origins.

The records included plaintext passwords and usernames for accounts spanning Netflix, PayPal, Discord, and other major platforms. A sample analysis revealed 220 email addresses with government domains from countries including the United States, China, and Israel. Fowler told Wired he suspects the data was compiled by cybercriminals using infostealer malware. World Host Group, which hosted the database, shut down access after Fowler's report and described it as content uploaded by a "fraudulent user." The company said it would cooperate with law enforcement authorities.
Privacy

Texas Adopts Online Child-Safety Bill Opposed by Apple's CEO (msn.com) 89

Texas Governor Greg Abbott signed an online child safety bill, bucking a lobbying push from big tech companies that included a personal phone call from Apple CEO Tim Cook. From a report: The measure requires app stores to verify users' ages and secure parental approval before minors can download most apps or make in-app purchases. The bill drew fire from app store operators such as Google and Apple, which has argued that the legislation threatens the privacy of all users.

The bill was a big enough priority for Apple that Cook called Abbott to emphasize the company's opposition to it, said a person familiar with their discussion, which was first reported by the Wall Street Journal.

Privacy

Adidas Warns of Data Breach After Customer Service Provider Hack (bleepingcomputer.com) 10

German sportswear giant Adidas disclosed a data breach after attackers hacked a customer service provider and stole some customers' data. From a report: "adidas recently became aware that an unauthorized external party obtained certain consumer data through a third-party customer service provider," the company said. "We immediately took steps to contain the incident and launched a comprehensive investigation, collaborating with leading information security experts."

Adidas added that the stolen information did not include the affected customers' payment-related information or passwords, as the threat actors behind the breach only gained access to contact information. The company has also notified the relevant authorities regarding this security incident and will alert those affected by the data breach.

Government

Does the World Need Publicly-Owned Social Networks? (elpais.com) 122

"Do we need publicly-owned social networks to escape Silicon Valley?" asks an opinion piece in Spain's El Pais newspaper.

It argues it's necessary because social media platforms "have consolidated themselves as quasi-monopolies, with a business model that consists of violating our privacy in search of data to sell ads..." Among the proposals and alternatives to these platforms, the idea of public social media networks has often been mentioned. Imagine, for example, a Twitter for the European Union, or a Facebook managed by media outlets like the BBC. In February, Spanish Prime Minister Pedro Sánchez called for "the development of our own browsers, European public and private social networks and messaging services that use transparent protocols." Former Spanish prime minister José Luis Rodríguez Zapatero — who governed from 2004 until 2011 — and the left-wing Sumar bloc in the Spanish Parliament have also proposed this. And, back in 2021, former British Labour Party leader Jeremy Corbyn made a similar suggestion.

At first glance, this may seem like a good idea: a public platform wouldn't require algorithms — which are designed to stimulate addiction and confrontation — nor would it have to collect private information to sell ads. Such a platform could even facilitate public conversations, as pointed out by James Muldoon, a professor at Essex Business School and author of Platform Socialism: How to Reclaim our Digital Future from Big Tech (2022)... This could be an alternative that would contribute to platform pluralism and ensure we're not dependent on a handful of billionaires. This is especially important at a time when we're increasingly aware that technology isn't neutral and that private platforms respond to both economic and political interests.

There are other possibilities. Further down they write that "it makes much more sense for the state to invest in, or collaborate with, decentralized social media networks based on free and interoperable software" that "allow for the portability of information and content." They even spoke to Cory Doctorow, who they say "proposes that the state cooperate with the software systems, developers, or servers for existing open-source platforms, such as the U.S. network Bluesky or the German firm Mastodon." (Doctorow adds that reclaiming digital independence "is incredibly important, it's incredibly difficult, and it's incredibly urgent.")

The article also acknowledges the option of "legislative initiatives — such as antitrust laws, or even stricter regulations than those imposed in Europe — that limit or prevent surveillance capitalism." (Though it also cites figures showing U.S. tech giants have one of the largest lobbying groups in the EU, with Meta being the top spender...)
Privacy

Ask Slashdot: Do We Need Opt-Out-By-Default Privacy Laws? 92

"In large, companies failed to self-regulate," writes long-time Slashdot reader BrendaEM: They have not been respected the individual's right to privacy. In software and web interfaces, companies have buried their privacy setting so deep that they cannot be found in a reasonable amount of time, or an unreasonable amount of steps are needed to attempt to retain data. These companies have taken away the individual's right to privacy --by default.

Are laws needed that protect a person's privacy by default -- unless specific steps are taken by that user/purchaser to relinquish it? Should the contract be required to be brief and plainly worded, explaining what privacy is being forfeited and where that data might be going? Should a company selling a product be required to state before purchase which rights must be given up to use it? And should a legal owner who purchased a product expect it to stop functioning only because a newer user contract is not agreed to?

Share your own thoughts and experiences in the comments. What's your ideal privacy policy?

And do we need opt-out-by-default privacy laws?
Privacy

Destructive Malware Available In NPM Repo Went Unnoticed For 2 Years (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Researchers have found malicious software that received more than 6,000 downloads from the NPM repository over a two-year span, in yet another discovery showing the hidden threats users of such open source archives face. Eight packages using names that closely mimicked those of widely used legitimate packages contained destructive payloads designed to corrupt or delete important data and crash systems, Kush Pandya, a researcher at security firm Socket, reported Thursday. The packages have been available for download for more than two years and accrued roughly 6,200 downloads over that time.

"What makes this campaign particularly concerning is the diversity of attack vectors -- from subtle data corruption to aggressive system shutdowns and file deletion," Pandya wrote. "The packages were designed to target different parts of the JavaScript ecosystem with varied tactics." [...] Some of the payloads were limited to detonate only on specific dates in 2023, but in some cases a phase that was scheduled to begin in July of that year was given no termination date. Pandya said that means the threat remains persistent, although in an email he also wrote: "Since all activation dates have passed (June 2023-August 2024), any developer following normal package usage today would immediately trigger destructive payloads including system shutdowns, file deletion, and JavaScript prototype corruption."
The list of malicious packages included js-bomb, js-hood, vite-plugin-bomb-extend, vite-plugin-bomb, vite-plugin-react-extend, vite-plugin-vue-extend, vue-plugin-bomb, and quill-image-downloader.
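Since the malicious names are known, auditing a project comes down to a lookup against its dependency manifests. A minimal sketch that checks a package.json and package-lock.json for the names listed above (the file paths assume a typical Node.js project layout):

```python
# Minimal sketch: flag any of the named malicious packages in a Node.js
# project's manifest and lockfile. Paths assume a typical project layout.
import json
from pathlib import Path

MALICIOUS = {
    "js-bomb", "js-hood", "vite-plugin-bomb-extend", "vite-plugin-bomb",
    "vite-plugin-react-extend", "vite-plugin-vue-extend",
    "vue-plugin-bomb", "quill-image-downloader",
}

def names_in(path: Path) -> set:
    """Collect every package name referenced in a package.json or lockfile."""
    if not path.exists():
        return set()
    data = json.loads(path.read_text())
    found = set()
    for key in ("dependencies", "devDependencies"):
        found.update(data.get(key, {}))
    # package-lock.json v2/v3 stores entries under "packages", keyed by path
    for pkg_path in data.get("packages", {}):
        found.add(pkg_path.split("node_modules/")[-1])
    return found

referenced = names_in(Path("package.json")) | names_in(Path("package-lock.json"))
hits = referenced & MALICIOUS
print("Malicious packages referenced:", sorted(hits) if hits else "none")
```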
Privacy

Russia To Enforce Location Tracking App On All Foreigners in Moscow (bleepingcomputer.com) 81

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. From a report: The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information: Residence location, fingerprint, face photograph, real-time geo-location monitoring.

Privacy

Signal Deploys DRM To Block Microsoft Recall's Invasive Screenshot Collection (betanews.com) 69

BrianFagioli writes: Signal has officially had enough, folks. You see, the privacy-first messaging app is going on the offensive, declaring war on Microsoft's invasive Recall feature by enabling a new "Screen security" setting by default on Windows 11. This move is designed to block Microsoft's AI-powered screenshot tool from capturing your private chats.

If you aren't aware, Recall was first unveiled a year ago as part of Microsoft's Copilot+ PC push. The feature quietly took screenshots of everything happening on your computer, every few seconds, storing them in a searchable timeline. Microsoft claimed it would help users "remember" what they've done. Critics called it creepy. Security experts called it dangerous. The backlash was so fierce that Microsoft pulled the feature before launch.

But now, in a move nobody asked for, Recall is sadly back. And thankfully, Signal isn't waiting around this time. The team has activated a Windows 11-specific DRM flag that completely blacks out Signal's chat window when a screenshot is attempted. If you've ever tried to screen grab a streaming movie, you'll know the result: nothing but black.
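The article doesn't name the underlying API, but the standard Windows primitive for excluding one of your own windows from screen capture is SetWindowDisplayAffinity with the WDA_EXCLUDEFROMCAPTURE flag (Windows 10 2004 and later). A minimal, Windows-only sketch that applies it to a throwaway Tk window, purely to illustrate the class of protection (this is not Signal's code):

```python
# Windows-only sketch: exclude a window we own from screen capture,
# the same class of protection described above (not Signal's implementation).
import ctypes
import tkinter as tk

WDA_EXCLUDEFROMCAPTURE = 0x00000011  # available on Windows 10 2004+ and Windows 11

root = tk.Tk()
root.title("Capture-excluded demo window")
tk.Label(root, text="Screenshots of this window should show black").pack(padx=40, pady=40)
root.update_idletasks()  # ensure the native window exists before grabbing its handle

# winfo_id() returns an inner Tk child window; its parent is the top-level HWND.
hwnd = ctypes.windll.user32.GetParent(root.winfo_id())
ok = ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)
print("SetWindowDisplayAffinity succeeded:", bool(ok))

root.mainloop()
```

Tools like Recall, or any ordinary screenshot, then see a black rectangle where the protected window is, which matches the behavior described above.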

Google

Denver Detectives Crack Deadly Arson Case Using Teens' Google Search Histories (wired.com) 92

Three teenagers nearly escaped prosecution for a 2020 house fire that killed five people until Denver police discovered a novel investigative technique: requesting Google search histories for specific terms. Kevin Bui, Gavin Seymour, and Dillon Siebert had burned down a house in Green Valley Ranch, mistakenly targeting innocent Senegalese immigrants after Bui used Apple's Find My feature to track his stolen phone to the wrong address.

The August 2020 arson killed a family of five, including a toddler and infant. For months, detectives Neil Baker and Ernest Sandoval had no viable leads despite security footage showing three masked figures. Traditional methods -- cell tower data, geofence warrants, and hundreds of tips -- yielded nothing concrete. The breakthrough came when another detective suggested Google might have records of anyone searching the address beforehand.

Police obtained a reverse keyword search warrant requesting all users who had searched variations of "5312 Truckee Street" in the 15 days before the fire. Google provided 61 matching devices. Cross-referencing with earlier cell tower data revealed the three suspects, who had collectively searched the address dozens of times, including floor plans on Zillow.
Security

Most AI Chatbots Easily Tricked Into Giving Dangerous Responses, Study Finds (theguardian.com) 46

An anonymous reader quotes a report from The Guardian: Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say. [...] In a report on the threat, the researchers conclude that it is easy to trick most AI-driven chatbots into generating harmful and illegal information, showing that the risk is "immediate, tangible and deeply concerning." "What was once restricted to state actors or organised crime groups may soon be in the hands of anyone with a laptop or even a mobile phone," the authors warn.

The research, led by Prof Lior Rokach and Dr Michael Fire at Ben Gurion University of the Negev in Israel, identified a growing threat from "dark LLMs", AI models that are either deliberately designed without safety controls or modified through jailbreaks. Some are openly advertised online as having "no ethical guardrails" and being willing to assist with illegal activities such as cybercrime and fraud. [...] To demonstrate the problem, the researchers developed a universal jailbreak that compromised multiple leading chatbots, enabling them to answer questions that should normally be refused. Once compromised, the LLMs consistently generated responses to almost any query, the report states.

"It was shocking to see what this system of knowledge consists of," Fire said. Examples included how to hack computer networks or make drugs, and step-by-step instructions for other criminal activities. "What sets this threat apart from previous technological risks is its unprecedented combination of accessibility, scalability and adaptability," Rokach added. The researchers contacted leading providers of LLMs to alert them to the universal jailbreak but said the response was "underwhelming." Several companies failed to respond, while others said jailbreak attacks fell outside the scope of bounty programs, which reward ethical hackers for flagging software vulnerabilities.
