Television

Smart TVs Are Employing Screen Monitoring Tech To Harvest User Data (vox.com) 44

Smart TV platforms are increasingly monitoring what appears on users' screens through Automatic Content Recognition (ACR) technology, building detailed viewer profiles for targeted advertising.

Roku, which transitioned from a hardware company to an advertising powerhouse, reported $3.5 billion in annual ad revenue for 2024 -- representing 85% of its total income. The company has aggressively acquired ACR-related firms, with Roku-owned technology winning an Emmy in 2023 for advancements in the field.

According to market research firm Antenna, 43% of all streaming subscriptions in the United States were ad-supported by late 2024, showing the industry's shift toward advertising-based models. Most users unknowingly consent to this monitoring when setting up their devices. Though consumers can technically disable ACR in their TV settings, doing so often restricts functionality.
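ACR generally works by fingerprinting the frames that appear on screen and matching those fingerprints against a reference catalog of known content. The sketch below is a minimal illustration of that general idea using a simple perceptual (average) hash; it is not Roku's or any vendor's actual pipeline, and the frame path, catalog entries, and match threshold are made up.

```python
# Illustrative sketch of frame fingerprinting, the general technique behind ACR.
# This is NOT any vendor's actual implementation; the 8x8 average hash below is
# just one well-known way to turn a video frame into a compact, matchable ID.
from PIL import Image

def average_hash(frame_path: str, size: int = 8) -> int:
    """Reduce a frame to a 64-bit perceptual hash."""
    img = Image.open(frame_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'same content'."""
    return bin(a ^ b).count("1")

# A TV platform would compare on-screen hashes against a reference catalog
# (hypothetical values below) and log a match for ad targeting.
reference_catalog = {"some_show_s01e01": 0x8F3A5C7E12B4D690}  # made-up value
screen_hash = average_hash("captured_frame.png")              # hypothetical frame
for title, ref_hash in reference_catalog.items():
    if hamming_distance(screen_hash, ref_hash) <= 5:
        print("ACR match:", title)
```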
Privacy

Again and Again, NSO Group's Customers Keep Getting Their Spyware Operations Caught (techcrunch.com) 8

An anonymous reader shares a report: Amnesty International published a new report this week detailing attempted hacks against two Serbian journalists, allegedly carried out with NSO Group's spyware Pegasus. The two journalists, who work for the Serbia-based Balkan Investigative Reporting Network (BIRN), received suspicious text messages including a link -- basically a phishing attack, according to the nonprofit. In one case, Amnesty said its researchers were able to click on the link in a safe environment and see that it led to a domain that they had previously identified as belonging to NSO Group's infrastructure.

"Amnesty International has spent years tracking NSO Group Pegasus spyware and how it has been used to target activists and journalists," Donncha O Cearbhaill, the head of Amnesty's Security Lab, told TechCrunch. "This technical research has allowed Amnesty to identify malicious websites used to deliver the Pegasus spyware, including the specific Pegasus domain used in this campaign."

To his point, security researchers like O Cearbhaill who have been keeping tabs on NSO's activities for years are now so good at spotting signs of the company's spyware that sometimes all researchers have to do is quickly look at a domain involved in an attack. In other words, NSO Group and its customers are losing their battle to stay in the shadows. "NSO has a basic problem: They are not as good at hiding as their customers think," John Scott-Railton, a senior researcher at The Citizen Lab, a human rights organization that has investigated spyware abuses since 2012, told TechCrunch.
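What makes that "quick look" possible is that researchers maintain indicator-of-compromise (IOC) lists of domains previously attributed to spyware infrastructure, so a suspicious link only needs to be matched against the list. A minimal sketch of that matching step follows, with placeholder domains and no claim about Amnesty's actual tooling.

```python
# Minimal sketch of checking a suspicious link against a list of previously
# identified spyware-infrastructure domains (IOCs). The domains here are
# placeholders, not real NSO Group infrastructure.
from urllib.parse import urlparse

KNOWN_SPYWARE_DOMAINS = {"example-bad-domain.com", "another-ioc.example.net"}

def flag_link(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the host itself or any parent domain on the IOC list.
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & KNOWN_SPYWARE_DOMAINS)

print(flag_link("https://sub.example-bad-domain.com/track?id=123"))  # True
```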

Privacy

Oracle Customers Confirm Data Stolen In Alleged Cloud Breach Is Valid (bleepingcomputer.com) 20

An anonymous reader quotes a report from BleepingComputer: Despite Oracle denying a breach of its Oracle Cloud federated SSO login servers and the theft of account data for 6 million people, BleepingComputer has confirmed with multiple companies that associated data samples shared by the threat actor are valid. Last week, a person named 'rose87168' claimed to have breached Oracle Cloud servers and began selling the alleged authentication data and encrypted passwords of 6 million users. The threat actor also said that stolen SSO and LDAP passwords could be decrypted using the info in the stolen files and offered to share some of the data with anyone who could help recover them.

The threat actor released multiple text files consisting of a database, LDAP data, and a list of 140,621 domains for companies and government agencies that were allegedly impacted by the breach. It should be noted that some of the company domains look like tests, and there are multiple domains per company. In addition to the data, rose87168 shared an Archive.org URL with BleepingComputer for a text file hosted on the "login.us2.oraclecloud.com" server that contained their email address. This file shows that the threat actor was able to create files on Oracle's server, indicating an actual breach. However, Oracle has denied that it suffered a breach of Oracle Cloud and has refused to respond to any further questions about the incident.

"There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data," the company told BleepingComputer last Friday. This denial, however, contradicts findings from BleepingComputer, which received additional samples of the leaked data from the threat actor and contacted the associated companies. Representatives from these companies, all who agreed to confirm the data under the promise of anonymity, confirmed the authenticity of the information. The companies stated that the associated LDAP display names, email addresses, given names, and other identifying information were all correct and belonged to them. The threat actor also shared emails with BleepingComputer, claiming to be part of an exchange between them and Oracle.

United Kingdom

UK's First Permanent Facial Recognition Cameras Installed (theregister.com) 55

The Metropolitan Police has confirmed its first permanent installation of live facial recognition (LFR) cameras is coming this summer, and the location will be the South London suburb of Croydon. From a report: The two cameras will be installed in the city center in an effort to combat crime and will be attached to buildings and lamp posts on North End and London Road. According to the police, they will only be turned on when officers are in the area and in a position to make an arrest if a criminal is spotted. The installation follows a two-year trial in the area, during which police vans fitted with LFR cameras patrolled the streets, matching passersby against a database of suspects and criminals and leading to hundreds of arrests. The Met claims the system can alert them in seconds if a wanted wrong'un is spotted, and if the person gets the all-clear, the image of their face will be deleted.
Encryption

Signal President Blasts WhatsApp's Privacy Claims (cybernews.com) 59

Signal president Meredith Whittaker challenged recent assertions by WhatsApp head Will Cathcart that minimal differences exist between the two messaging platforms' privacy protections. "We're amused to see WhatsApp stretching the limits of reality to claim that they are just like Signal," Whittaker said in a statement published Monday, responding to Cathcart's comments to Dutch journalists last week.

While WhatsApp licenses Signal's end-to-end encryption technology, Whittaker said that WhatsApp still collects substantial user metadata, including "location data, contact lists, when they send someone a message, when they stop, what users are in their group chats, their profile picture, and much more." Cathcart had previously stated that WhatsApp doesn't track users' communications or share contact information with other companies, claiming "we strongly believe in private communication."
Privacy

Signal Head Defends Messaging App's Security After US War Plan Leak (yahoo.com) 161

The president of Signal defended the messaging app's security on Wednesday after top Trump administration officials mistakenly included a journalist in an encrypted chatroom they used to discuss looming U.S. military action against Yemen's Houthis. From a report: Signal's Meredith Whittaker did not directly address the blunder, which Democratic lawmakers have said was a breach of U.S. national security. But she described the app as the "gold standard in private comms" in a post on X, which outlined Signal's security advantages over Meta's WhatsApp messaging app. "We're open source, nonprofit, and we develop and apply (end-to-end encryption) and privacy-preserving tech across our system to protect metadata and message contents," she said.
AI

Apple Says It'll Use Apple Maps Look Around Photos To Train AI (theverge.com) 11

An anonymous reader shares a report: Sometime earlier this month, Apple updated a section of its website that discloses how it collects and uses imagery for Apple Maps' Look Around feature, which is similar to Google Maps' Street View, as spotted by 9to5Mac. A newly added paragraph reveals that, beginning in March 2025, Apple will be using imagery and data collected during Look Around surveys to "train models powering Apple products and services, including models related to image recognition, creation, and enhancement."

Apple collects images and 3D data to enhance and improve Apple Maps using vehicles and backpacks (for pedestrian-only areas) equipped with cameras, sensors, and other equipment, including iPhones and iPads. The company says that as part of its commitment to privacy, any images it captures that are published in the Look Around feature have faces and license plates blurred, and that it will only use imagery with those details blurred out for training models. Apple also accepts requests from people who want their houses blurred as well, but houses are not blurred by default.
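Blurring before training typically means detecting a region (a face, a plate, or an opted-out house) and degrading just those pixels. A toy sketch of the blurring step with Pillow follows; the bounding box and file paths are hard-coded for illustration, and nothing here reflects Apple's actual detection pipeline.

```python
# Toy sketch of blurring a detected region (e.g., a face or license plate)
# before an image is used for anything else. The bounding box is hard-coded
# for illustration; a real pipeline would get it from a detector.
from PIL import Image, ImageFilter

def blur_region(image_path: str, box: tuple[int, int, int, int], out_path: str) -> None:
    img = Image.open(image_path)
    region = img.crop(box)                                        # cut out the detected area
    region = region.filter(ImageFilter.GaussianBlur(radius=12))   # heavily blur it
    img.paste(region, box[:2])                                    # paste it back in place
    img.save(out_path)

blur_region("street_scene.jpg", (420, 180, 520, 260), "street_scene_blurred.jpg")
```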

Biotech

DNA of 15 Million People For Sale In 23andMe Bankruptcy (404media.co) 51

An anonymous reader quotes a report from 404 Media: 23andMe filed for Chapter 11 bankruptcy Sunday, leaving the fate of millions of people's genetic information up in the air as the company deals with the legal and financial fallout of not properly protecting that genetic information in the first place. The filing shows how dangerous it is to provide your DNA directly to a large, for-profit commercial genetic database; 23andMe is now looking for a buyer to pull it out of bankruptcy. 23andMe said in court documents viewed by 404 Media that since hackers obtained personal data about seven million of its customers in October 2023, including, in some cases, "health-related information based upon the user's genetics," it has faced "over 50 class action and state court lawsuits," and that "approximately 35,000 claimants have initiated, filed, or threatened to commence arbitration claims against the company." It is seeking bankruptcy protection in part to simplify the fallout of these legal cases, and because it believes it may not have money to pay for the potential damages associated with these cases.

CEO and cofounder Anne Wojcicki announced she is leaving the company as part of this process. The company has the genetic data of more than 15 million customers. According to its Chapter 11 filing, 23andMe owes money to a host of pharmaceutical companies, pharmacies, artificial intelligence companies (including companies called Aganitha AI and Coreweave), as well as health insurance companies and marketing companies.
Shortly before the filing, California Attorney General Rob Bonta issued an "urgent" alert to 23andMe customers: "Given 23andMe's reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company."

In a letter to customers Sunday, 23andMe said: "Your data remains protected. The Chapter 11 filing does not change how we store, manage, or protect customer data. Our users' privacy and data are important considerations in any transaction, and we remain committed to our users' privacy and to being transparent with our customers about how their data is managed." It added that any buyer will have to "comply with applicable law with respect to the treatment of customer data."

404 Media's Jason Koebler notes that "there's no way of knowing who is going to buy it, why they will be interested, and what will become of its millions of customers' DNA sequences. 23andMe has claimed over the years that it strongly resists law enforcement requests for information and that it takes customer security seriously. But the company has in recent years changed its terms of service, partnered with big pharmaceutical companies, and, of course, was hacked."
Google

Google Says It Might Have Deleted Your Maps Timeline Data (arstechnica.com) 14

Google has confirmed that a technical issue has permanently deleted location history data for numerous users of its Maps application, with no recovery possible for most affected customers. The problem emerged after Google transitioned its Timeline feature from cloud to on-device storage in 2024 to enhance privacy protections. Users began reporting missing historical location data on support forums and social media platforms in recent weeks. "This is the result of a technical issue and not user error or an intentional change," said a Google spokesperson. Only users who manually enabled encrypted cloud backups before the incident can recover their data, according to Google. The company began shifting location storage policies in 2023, initially stopping collection of sensitive location data including visits to abortion clinics and domestic violence shelters.
China

China Bans Compulsory Facial Recognition and Its Use in Private Spaces Like Hotel Rooms (theregister.com) 28

China's Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent. From a report: The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a "personal information protection impact assessment" that considers whether using the tech is necessary, impacts on individuals' privacy, and risks of data leakage. Organizations that decide to use facial recognition must encrypt biometric data and audit the information security techniques and practices they use to protect facial scans. Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals' consent. The rules also ban the use of facial recognition equipment in spaces such as hotel rooms, public bathrooms, public dressing rooms, and public toilets. The measures don't apply to researchers or to what machine translation of the rules describes as "algorithm training activities" -- suggesting images of citizens' faces are fair game when used to train AI models.
EU

Is WhatsApp Being Ditched for Signal in Dutch Higher Education? (dub.uu.nl) 42

For weeks Signal has been one of the three most-downloaded apps in the Netherlands, according to a local news site. And now "Higher education institutions in the Netherlands have been looking for an alternative," according to DUB (an independent news site for the Utrecht University community): Employees of the Utrecht University of Applied Sciences (HU) were recently advised to switch to Signal. Avans University of Applied Sciences has also been discussing a switch... The National Student Union is concerned about privacy. The subject was raised at last week's general meeting, as reported by chair Abdelkader Karbache, who said: "Our local unions want to switch to Signal or other open-source software."
Besides being open source, Signal is a non-commercial nonprofit, the article points out — though its proponents suggest there's another big difference. "HU argues that Signal keeps users' data private, unlike WhatsApp." Cybernews.com explains the concern: In an interview with the Dutch newspaper De Telegraaf, Meredith Whittaker [president of the Signal Foundation] discussed the pitfalls of WhatsApp. "WhatsApp collects metadata: who you send messages to, when, and how often. That's incredibly sensitive information," she says.... The only information [Signal] collects is the date an account was registered, the time when an account was last active, and hashed phone numbers... Information like profile name and the people a user communicates with is all encrypted... Metadata might sound harmless, but it couldn't be further from the truth. According to Whittaker, metadata is deadly. "As a former CIA director once said: 'We kill people based on metadata'."
WhatsApp's metadata also includes IP addresses, TechRadar noted last May: Other identifiable data such as your network details, the browser you use, ISP, and other identifiers linked to other Meta products (like Instagram and Facebook) associated with the same device or account are also collected... [Y]our IP can be used to track down your location. As the company explained, even if you keep the location-related features off, IP addresses and other collected information like phone number area codes can be used to estimate your "general location."

WhatsApp is required by law to share this information with authorities during an investigation...

[U]nder scrutiny is how Meta itself uses these precious details for commercial purposes. Again, this is clearly stated in WhatsApp's privacy policy and terms of use. "We may use the information we receive from [other Meta companies], and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings," reads the policy. This means that yes, your messages are always private, but WhatsApp is actively collecting your metadata to build your digital persona across other Meta platforms...

The article suggests using a VPN with WhatsApp and turning on its "advanced privacy feature" (which hides your IP address during calls) and managing the app's permissions for data collection. "While these steps can help reduce the amount of metadata collected, it's crucial to bear in mind that it's impossible to completely avoid metadata collection on the Meta-owned app... For extra privacy and security, I suggest switching to the more secure messaging app Signal."

The article also includes a cautionary anecdote. "It was exactly a piece of metadata — a Proton Mail recovery email — that led to the arrest of a Catalan activist."

Thanks to long-time Slashdot reader united_notions for sharing the article.
Privacy

Doc Searls Proposes We Set Our Own Terms and Policies for Web Site Tracking (searls.com) 33

Today long-time open source advocate/journalist Doc Searls revealed that years of work by consumer privacy groups have culminated in a proposed standard "that can vastly expand our agency in the digital world" — especially in a future world where agents surf the web on our behalf: Meet IEEE P7012, which "identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines." It has been in the works since 2017, and should be ready later this year. (I say this as chair of the standard's working group.) The nickname for P7012 is MyTerms (much as the nickname for the IEEE's 802.11 standard is Wi-Fi).

The idea behind MyTerms is that the sites and services of the world should agree to your terms, rather than the other way around.

Basically your web browser proffers whatever agreement you've chosen (from a canonical list hosted at Customer Commons) to the web sites and other online services that you're visiting.
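Since P7012 hasn't been published yet, any wire format is speculative, but a rough sketch can show the flow being described: the user's browser or agent (the first party) proffers a term drawn from the canonical list at Customer Commons, and the site (the second party) either acknowledges it or doesn't. The header name, JSON shape, and endpoint below are invented for illustration, and "NoStalking" is the prototype term the article describes further below.

```python
# Hypothetical sketch of a browser/agent proffering a privacy term to a site.
# The header name, term record, and endpoint are invented for illustration;
# the actual IEEE P7012 ("MyTerms") encoding is defined by the standard itself.
import json
import urllib.request

my_term = {
    "term": "NoStalking",                          # prototype term hosted at Customer Commons
    "meaning": "Show me ads not based on tracking me.",
    "first_party": "the visitor",                  # under MyTerms the user is party one
    "second_party": "the site or service",
}

req = urllib.request.Request(
    "https://example.com/",                        # placeholder site
    headers={"X-MyTerms-Proffer": json.dumps(my_term)},
)
with urllib.request.urlopen(req) as resp:
    # A MyTerms-aware site might acknowledge agreement in a response header.
    print(resp.headers.get("X-MyTerms-Accepted", "no acknowledgement"))
```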

"Browser makers can build something into their product, or any developer can make a browser add-on or extension..." Searls writes. "On the site's side — the second-party side — CMS makers can build something in, or any developer can make a plug-in (WordPress) or a module (Drupal). Mobile app toolmakers can also come up with something (or many things)..." MyTerms creates a new regime for privacy: one based on contract. With each MyTerm you are the first party. Not the website, the service, or the app maker. They are the second party. And terms can be friendly. For example, a prototype term called NoStalking says "Just show me ads not based on tracking me." This is good for you, because you don't get tracked, and good for the site because it leaves open the advertising option. NoStalking lives at Customer Commons, much as personal copyrights live at Creative Commons. (Yes, the former is modeled on the latter.)
"[L]et's make this happen and show the world what agency really means," Searls concludes.

Another way to say it is they've created "a draft standard for machine-readable personal privacy terms." But Searls's article used a grander metaphor to explain its significance: When Archimedes said 'Give me a place to stand and I can move the world,' he was talking about agency. You have no agency on the Web if you are always the second party, agreeing to terms and policies set by websites.

You are Archimedes if you are the first party, setting your own terms and policies. The scale you get with those is One 2 World. The place you stand is on the Web itself — and the Internet below it.

Both were designed to make each of us an Archimedes.

Privacy

Hungary To Use Facial Recognition to Suppress Pride March (theguardian.com) 235

Hungary's Parliament not only voted to ban Pride events; it also voted to "allow authorities to use facial recognition software to identify attenders and potentially fine them," reports the Guardian. [The nationwide legislation] amends the country's law on assembly to make it an offence to hold or attend events that violate Hungary's contentious "child protection" legislation, which bars any "depiction or promotion" of homosexuality to minors under the age of 18. The legislation was condemned by Amnesty International, which described it as the latest in a series of discriminatory measures the Hungarian authorities have taken against LGBTQ+ people...

Organisers said they planned to go ahead with the march in Budapest, despite the law's stipulation that those who attend a prohibited event could face fines of up to 200,000 Hungarian forints [£425 or $549 U.S. dollars].

AI

Clearview Attempted To Buy Social Security Numbers and Mugshots for its Database (404media.co) 24

Controversial facial recognition company Clearview AI attempted to purchase hundreds of millions of arrest records including social security numbers, mugshots, and even email addresses to incorporate into its product, 404 Media reports. From the report: For years, Clearview AI has collected billions of photos from social media websites including Facebook, LinkedIn and others and sold access to its facial recognition tool to law enforcement. The collection and sale of user-generated photos by a private surveillance company to police without that person's knowledge or consent sparked international outcry when it was first revealed by the New York Times in 2020.

New documents obtained by 404 Media reveal that Clearview AI spent nearly a million dollars in a bid to purchase "690 million arrest records and 390 million arrest photos" from all 50 states from an intelligence firm. The contract further describes the records as including current and former home addresses, dates of birth, arrest photos, social security and cell phone numbers, and email addresses. Clearview attempted to purchase this data from Investigative Consultant, Inc. (ICI), which billed itself as an intelligence company with access to tens of thousands of databases and the ability to create unique data streams for its clients. The contract was signed in mid-2019, when Clearview AI was quietly collecting billions of photos off the internet and was still relatively unknown.

Security

Microsoft Isn't Fixing 8-Year-Old Shortcut Exploit Abused For Spying (theregister.com) 34

Trend Micro uncovered an eight-year-long spying campaign exploiting a Windows vulnerability involving malicious .LNK shortcut files, which attackers padded with whitespace to conceal commands. Despite the issue being reported to Microsoft in September 2024, the company considers it a UI issue rather than a security risk and has not prioritized a fix. The Register reports: The attack method is low-tech but effective, relying on malicious .LNK shortcut files rigged with commands to download malware. While appearing to point to legitimate files or executables, these shortcuts quietly include extra instructions to fetch or unpack and attempt to run malicious payloads. Ordinarily, the shortcut's target and command-line arguments would be clearly visible in Windows, making suspicious commands easy to spot. But Trend's Zero Day Initiative said it observed North Korea-backed crews padding out the command-line arguments with megabytes of whitespace, burying the actual commands deep out of sight in the user interface.
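Because the trick depends on burying the real arguments behind huge runs of whitespace, one crude triage heuristic is simply to look for abnormally long whitespace runs inside a shortcut file. The sketch below illustrates that idea only; it is not Trend Micro's detection logic, and the run-length threshold is arbitrary. Shortcut strings are stored as UTF-16LE, so an ASCII space appears as the byte pair 20 00.

```python
# Rough triage heuristic for whitespace-padded .LNK shortcuts: find the longest
# run of UTF-16LE space characters (bytes 0x20 0x00) in the file. This is an
# illustration of the padding trick described above, not a production detector.
import re
import sys

def suspicious_lnk(path: str, run_threshold: int = 1024) -> bool:
    data = open(path, "rb").read()
    # Longest run of consecutive UTF-16LE spaces, measured in characters.
    longest = max((len(m.group(0)) // 2 for m in re.finditer(rb"(?:\x20\x00)+", data)), default=0)
    return longest >= run_threshold

if __name__ == "__main__":
    for lnk in sys.argv[1:]:
        print(lnk, "suspicious" if suspicious_lnk(lnk) else "looks ordinary")
```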

Trend reported this to Microsoft in September last year and estimates that it has been used since 2017. It said it had found nearly 1,000 tampered .LNK files in circulation but estimates the actual number of attacks could have been higher. "This is one of many bugs that the attackers are using, but this is one that is not patched and that's why we reported it as a zero day," Dustin Childs, head of threat awareness at the Zero Day Initiative, told The Register. "We told Microsoft but they consider it a UI issue, not a security issue. So it doesn't meet their bar for servicing as a security update, but it might be fixed in a later OS version, or something along those lines."

After poring over malicious .LNK samples, the security shop said it found the vast majority of these files were from state-sponsored attackers (around 70 percent), used for espionage or information theft, with another 20 percent going after financial gain. Among the state-sponsored crews, 46 percent of attacks came from North Korea, while Russia, Iran, and China each accounted for around 18 percent of the activity.

United States

FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies (wired.com) 71

The Federal Trade Commission has removed over 300 business guidance blogs published during former President Biden's term, including consumer protection information on AI and privacy lawsuits against Amazon and Microsoft, WIRED reported Tuesday, citing current and former FTC employees.

Deleted posts included guidance about Amazon's alleged use of Ring camera data to train algorithms, Microsoft's $20 million settlement over Xbox children's data collection, and compliance standards for AI chatbots. New FTC Chair Andrew Ferguson has pledged to pursue tech companies but with focus on alleged conservative censorship rather than data collection practices.
The Courts

Climatologist Michael Mann Finally Won a $1M Defamation Suit - But Then a Judge Threw It Out (msn.com) 64

Slashdot has run nearly a dozen stories about Michael Mann, one of America's most prominent climate scientists and a co-creator of the famous "hockey stick" graph of spiking temperatures. In 2012 Mann sued two bloggers for defamation — and last year Mann finally won more than $1 million, reports the Washington Post. "A jury found that two conservative commentators had defamed him by alleging that he was like a child molester in the way he had 'molested and tortured' climate data."

But "Now, a year after that ruling, the case has taken a turn that leaves Mann in the position of the one who owes money." On Wednesday, a judge sanctioned Mann's legal team for "bad-faith trial misconduct" for overstating how much the scientist lost in potential grant funding as a result of reputational harm. The lawyers had shown jurors a chart that listed one grant amount Mann didn't get at $9.7 million, though in other testimony Mann said it was worth $112,000. And when comparing Mann's grant income before and after the negative commentary, the lawyers cited a disparity of $2.8 million, but an amended calculation pegged it at $2.37 million.


The climate scientist's legal team said it was preparing to fight the setbacks in court. Peter J. Fontaine, one of Mann's attorneys, wrote in an email that Mann "believes that the court committed errors of fact and law and will pursue these matters further." Fontaine emphasized that the original decision — that Mann was defamed by the commentary — still stands. "We have reviewed the recent rulings by the D.C. Superior Court and are pleased to note that the court has upheld the jury's verdict," he said.

Thanks to Slashdot reader UsuallyReasonable for sharing the news.
Privacy

Everything You Say To Your Echo Will Be Sent To Amazon Starting On March 28 (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon's email said: "As we continue to expand Alexa's capabilities with generative AI features that rely on the processing power of Amazon's secure cloud, we have decided to no longer support this feature." One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa or want to use Alexa+ but not its ability to recognize different voices.

[...] Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Previously, Amazon has said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.
Amazon's email continues: "Alexa voice requests are always encrypted in transit to Amazon's secure cloud, which was designed with layers of security protections to keep customer information safe. Customers can continue to choose from a robust set of controls by visiting the Alexa Privacy dashboard online or navigating to More - Alexa Privacy in the Alexa app."

Further reading: Google's Gemini AI Can Now See Your Search History
Security

Chinese Hackers Sat Undetected in Small Massachusetts Power Utility for Months (pcmag.com) 22

In late 2023, the FBI alerted the Littleton Electric Light and Water Departments (LELWD) that it had been breached by a Chinese state-sponsored hacking group for over 300 days. With the help of cybersecurity firm Dragos and Department of Energy-funded sensors, LELWD confirmed the intrusion, identified the hackers' movements, and ultimately restructured its network to remove them. PCMag reports: At the time, LELWD had been installing sensors from cybersecurity firm Dragos with the help of Department of Energy grants awarded by the American Public Power Association (APPA). "The sensors helped LELWD confirm the extent of the malicious activity on the system and pinpoint when and where the attackers were going on the utility's networks," the APPA said last year. Today, Dragos released a case study (PDF) about the hack, which it blamed on Voltzite, a "sophisticated threat group...that overlaps with Volt Typhoon."

The call from the FBI forced Dragos "to deploy quickly and bypass the planned onboarding timeline" for the LELWD, it says. It discovered that Volt Typhoon "had persistent access to LELWD's network." "Hackers were looking for specific data related to [operational technology] operating procedures and spatial layout data relating to energy grid operations," Dragos tells SecurityWeek. In the end, Dragos confirmed the compromised systems did not contain "customer-sensitive data," and LELWD changed its network architecture to kick Volt Typhoon out, the case study says.
Groups like Volt Typhoon, "don't always go for high-profile targets first," said Ensar Seker, Chief Security Officer at SOCRadar. "Small, underfunded utilities can serve as low-hanging fruit, allowing adversaries to test tactics, develop footholds, and pivot toward larger targets."
Firefox

Mozilla Warns Users To Update Firefox Before Certificate Expires (bleepingcomputer.com) 28

Mozilla is urging Firefox users to update their browsers to version 128 or later (or ESR 115.13 for extended support users) before March 14, 2025, to avoid security risks and add-on disruptions caused by the expiration of a key root certificate. "On 14 March a root certificate (the resource used to prove an add-on was approved by Mozilla) will expire, meaning Firefox users on versions older than 128 (or ESR 115) will not be able to use their add-ons," warns a Mozilla blog post. "We want developers to be aware of this in case some of your users are on older versions of Firefox that may be impacted." BleepingComputer reports: A Mozilla support document explains that failing to update Firefox could expose users to significant security risks and practical issues, which, according to Mozilla, include:

- Malicious add-ons can compromise user data or privacy by bypassing security protections.
- Untrusted certificates may allow users to visit fraudulent or insecure websites without warning.
- Compromised password alerts may stop working, leaving users unaware of potential account breaches.

It is noted that the problem impacts Firefox on all platforms, including Windows, Android, Linux, and macOS, except for iOS, where there's an independent root certificate management system. Mozilla says that users relying on older versions of Firefox may continue using their browsers after the expiration of the certificate if they accept the security risks, but the software's performance and functionality may be severely impacted.
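For anyone who wants to check when a given root certificate lapses, a standard X.509 library will read the notAfter field directly. The sketch below assumes the certificate has been exported to a PEM file (the path is hypothetical) and uses the third-party cryptography package; it is a generic expiry check, not something specific to Mozilla's add-on signing chain.

```python
# Simple check of an X.509 certificate's expiry date, e.g. a root certificate
# exported to PEM. The file path is hypothetical; install the third-party
# "cryptography" package to run this.
from datetime import datetime, timezone
from cryptography import x509

def is_expired(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print("certificate expires:", not_after.isoformat())
    return datetime.now(timezone.utc) > not_after

print("expired" if is_expired("exported_root_cert.pem") else "still valid")
```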
