Privacy

Leaked Disney Data Reveals Financial and Strategy Secrets (msn.com) 48

An anonymous reader shares a report: Passport numbers for a group of Disney cruise line workers. Disney+ streaming revenue. Sales of Genie+ theme park passes. The trove of data from Disney that was leaked online by hackers earlier this summer includes a range of financial and strategy information that sheds light on the entertainment giant's operations, according to files viewed by The Wall Street Journal. It also includes personally identifiable information of some staff and customers.

The leaked files include granular details about revenue generated by such products as Disney+ and ESPN+; park pricing offers the company has modeled; and what appear to be login credentials for some of Disney's cloud infrastructure. (The Journal didn't attempt to access any Disney systems.) "We decline to comment on unverified information The Wall Street Journal has purportedly obtained as a result of a bad actor's illegal activity," a Disney spokesman said. Disney told investors in an August regulatory filing that it is investigating the unauthorized release of "over a terabyte of data" from one of its communications systems. It said the incident hadn't had a material impact on its operations or financial performance and doesn't expect that it will.

Data that a hacking entity calling itself Nullbulge released online spans more than 44 million messages from Disney's Slack workplace communications tool, upward of 18,800 spreadsheets and at least 13,000 PDFs, the Journal found. The scope of the material taken appears to be limited to public and private channels within Disney's Slack that one employee had access to. No private messages between executives appear to be included. Slack is only one online forum in which Disney employees communicate at work.

Movies

The Search For the Face Behind Mavis Beacon Teaches Typing (wired.com) 56

An anonymous reader quotes a report from Wired: Jazmin Jones knowswhat she did. "If you're online, there's this idea of trolling," Jones, the director behindSeeking Mavis Beacon, said during a recent panel for her new documentary. "For this project, some things we're taking incredibly seriously ... and other things we're trolling. We're trolling this idea of a detective because we're also, like,ACAB." Her trolling, though, was for a good reason. Jones and fellow filmmaker Olivia Mckayla Ross did it in hopes of finding the woman behind Mavis Beacon Teaches Typing. The popular teaching tool was released in 1987 by The Software Toolworks, a video game and software company based in California that produced educational chess, reading, and math games. Mavis, essentially the "mascot" of the game, is a Black woman donned in professional clothes and a slicked-back bun. Though Mavis Beacon was not an actual person, Jones and Ross say that she is one of the first examples of Black representation they witnessed in tech. Seeking Mavis Beacon, which opened in New York City on August 30 and is rolling out to other cities in September, is their attempt to uncover the story behind the face, which appeared on the tool's packaging and later as part of its interface.

The film shows the duo setting up a detective room, conversing over FaceTime, running up to people on the street, and even tracking down a relative connected to the ever-elusive Mavis. But the journey of their search turned up a different question they didn't initially expect: What are the impacts of sexism, racism, privacy, and exploitation in a world where you can present yourself any way you want to? Using shots from computer screens, deep dives through archival footage, and sit-down interviews, the noir-style documentary reveals that Mavis Beacon is actually Renee L'Esperance, a Black model from Haiti who was paid $500 for her likeness with no royalties, despite the program selling millions of copies. [...]

In a world where anyone can create images of folks of any race, gender, or sexual orientation without having to fully compensate the real people who inspired them, Jones and Ross are working to preserve not only the data behind Mavis Beacon but also the humanity behind the software. On the panel, hosted by Black Girls in Media, Ross stated that the film's social media has a form where users of Mavis Beacon can share what the game has meant to them, for archival purposes. "On some level, Olivia and I are trolling ideas of worlds that we never felt safe in or protected by," Jones said during the panel. "And in other ways, we are honoring this legacy of cyber feminism, historians, and care workers that we are very seriously indebted to."
You can watch the trailer for "Seeking Mavis Beacon" on YouTube.
The Courts

Clearview AI Fined $33.7 Million Over 'Illegal Database' of Faces (apnews.com) 40

An anonymous reader quotes a report from the Associated Press: The Dutch data protection watchdog on Tuesday issued facial recognition startup Clearview AI with a fine of $33.7 million over its creation of what the agency called an "illegal database" of billion of photos of faces. The Netherlands' Data Protection Agency, or DPA, also warned Dutch companies that using Clearview's services is also banned. The data agency said that New York-based Clearview "has not objected to this decision and is therefore unable to appeal against the fine."

But in a statement emailed to The Associated Press, Clearview's chief legal officer, Jack Mulcaire, said that the decision is "unlawful, devoid of due process and is unenforceable." The Dutch agency said that building the database and insufficiently informing people whose images appear in the database amounted to serious breaches of the European Union's General Data Protection Regulation, or GDPR. "Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world," DPA chairman Aleid Wolfsen said in a statement. "If there is a photo of you on the Internet -- and doesn't that apply to all of us? -- then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China," he said. DPA said that if Clearview doesn't halt the breaches of the regulation, it faces noncompliance penalties of up to $5.6 million on top of the fine.
Mulcaire said Clearview doesn't fall under EU data protection regulations. "Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR," he said.
Android

OSOM, the Company Formed From Essential's Ashes, is Apparently in Shambles 15

A former executive of smartphone startup OSOM Products has filed a lawsuit alleging the company's founder misused funds for personal expenses, including two Lamborghinis and a lavish lifestyle. Mary Ross, OSOM's ex-Chief Privacy Officer, is seeking access to company records in a Delaware court filing.

OSOM, founded in 2020 by former Essential employees, launched two products: the Solana-backed Saga smartphone and a privacy cable. Android founder Andy Rubin founded Essential, which sought to compete with Apple and Android-makers on a smartphone, but later shutdown after not find many takers for its phone. The lawsuit claims OSOM founder Jason Keats used company money for racing hobbies, first-class travel, and mortgage payments.
Crime

Was the Arrest of Telegram's CEO Inevitable? (platformer.news) 174

Casey Newton, former senior editor at the Verge, weighs in on Platformer about the arrest of Telegram CEO Pavel Durov.

"Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to." Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users who share CSAM in private channels as much as they want to. "There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene in any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them...."

Telegram can look at the contents of private messages, making it vulnerable to law enforcement requests for that data. Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. [...] To this day, we have disclosed 0 bytes of user data to third parties, including governments.

As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM.... The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior. "Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. Its ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely.

The article asks whether France's action "will embolden countries around the world to prosecute platform CEOs criminally for failing to turn over user data." On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law.

Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and end-to-end encryption, need not be.

The Courts

City of Columbus Sues Man After He Discloses Severity of Ransomware Attack (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials. The order, issued by a judge in Ohio's Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city's data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group's dark web site, which is accessible to anyone with a TOR browser.

Columbus Mayor Andrew Ginther said on August 13 that a "breakthrough" in the city's forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them "unusable" to the thieves. Ginther went on to say the data's lack of integrity was likely the reason the ransomware group had been unable to auction off the data. Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross (PDF) for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him "interacting" with them and required special expertise and tools. The suit went on to challenge Ross alerting reporters to the information, which ii claimed would not be easily obtained by others. "Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so," city attorneys wrote. "The dark web-posted data is not readily available for public consumption. Defendant is making it so." The same day, a Franklin County judge granted the city's motion for a temporary restraining order (PDF) against Ross. It bars the researcher "from accessing, and/or downloading, and/or disseminating" any city files that were posted to the dark web. The motion was made and granted "ex parte," meaning in secret before Ross was informed of it or had an opportunity to present his case.

Security

Malware Infiltrates Pidgin Messenger's Official Plugin Repository (bleepingcomputer.com) 10

The Pidgin messaging app removed the ScreenShareOTR plugin from its third-party plugin list after it was found to be used to install keyloggers, information stealers, and malware targeting corporate networks. BleepingComputer reports: The plugin was promoted as a screen-sharing tool for secure Off-The-Record (OTR) protocol and was available for both Windows and Linux versions of Pidgin. According to ESET, the malicious plugin was configured to infect unsuspecting users with DarkGate malware, a powerful malware threat actors use to breach networks since QBot's dismantling by the authorities. [...] Those who installed it are recommended to remove it immediately and perform a full system scan with an antivirus tool, as DarkGate may be lurking on their system.

After publishing our story, Pidgin's maintainer and lead developer, Gary Kramlich, notified us on Mastodon to say that they do not keep track of how many times a plugin is installed. To prevent similar incidents from happening in the future, Pidgin announced that, from now on, it will only accept third-party plugins that have an OSI Approved Open Source License, allowing scrutiny into their code and internal functionality.

Encryption

Telegram Founder's Indictment Thrusts Encryption Into the Spotlight (nytimes.com) 124

An anonymous reader shares a report: When French prosecutors charged Pavel Durov, the chief executive of the messaging app Telegram, with a litany of criminal offenses on Wednesday, one accusation stood out to Silicon Valley companies. Telegram, French authorities said in a statement, had provided cryptology services aimed at ensuring confidentiality without a license. In other words, the topic of encryption was being thrust into the spotlight.

The cryptology charge raised eyebrows at U.S. tech companies including Signal, Apple and Meta's WhatsApp, according to three people with knowledge of the companies. These companies provide end-to-end encrypted messaging services and often stand together when governments challenge their use of the technology, which keeps online conversations between users private and secure from outsiders.

But while Telegram is also often described as an encrypted messaging app, it tackles encryption differently than WhatsApp, Signal and others. So if Mr. Durov's indictment turned Telegram into a public exemplar of the technology, some Silicon Valley companies believe that could damage the credibility of encrypted messaging apps writ large, according to the people, putting them in a tricky position of whether to rally around their rival.

Encryption has been a long-running point of friction between governments and tech companies around the world. For years, tech companies have argued that encrypted messaging is crucial to maintain people's digital privacy, while law enforcement and governments have said that the technology enables illicit behaviors by hiding illegal activity. The debate has grown more heated as encrypted messaging apps have become mainstream. Signal has grown by tens of millions of users since its founding in 2018. Apple's iMessage is installed on the hundreds of millions of iPhones that the company sells each year. WhatsApp is used by more than two billion people globally.

Encryption

Feds Bust Alaska Man With 10,000+ CSAM Images Despite His Many Encrypted Apps (arstechnica.com) 209

A recent indictment (PDF) of an Alaska man stands out due to the sophisticated use of multiple encrypted communication tools, privacy-focused apps, and dark web technology. "I've never seen anyone who, when arrested, had three Samsung Galaxy phones filled with 'tens of thousands of videos and images' depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called 'Calculator Photo Vault,'" writes Ars Technica's Nate Anderson. "Nor have I seen anyone arrested for CSAM having used all of the following: [Potato Chat, Enigma, nandbox, Telegram, TOR, Mega NZ, and web-based generative AI tools/chatbots]." An anonymous reader shares the report: According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own -- and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later "zoomed in on and enhanced those images using AI-powered technology." Secondly, he took this imagery he had created and then "turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see." In other words, he created fake AI CSAM -- but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have "created his own public Telegram group to store his CSAM." He also joined "multiple CSAM-related Enigma groups" and frequented dark websites with taglines like "The Only Child Porn Site you need!" Despite all the precautions, Herrera's home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera "was arrested this morning with another smartphone -- the same make and model as one of his previously seized devices."

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera "tried to access a link containing apparent CSAM." Presumably, this "apparent" CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it. In the end, given that fatal click, none of the "I'll hide it behind an encrypted app that looks like a calculator!" technical sophistication accomplished much. Forensic reviews of Herrera's three phones now form the primary basis for the charges against him, and Herrera himself allegedly "admitted to seeing CSAM online for the past year and a half" in an interview with the feds.

Government

California Passes Bill Requiring Easier Data Sharing Opt Outs (therecord.media) 22

Most of the attention today has been focused on California's controversial "kill switch" AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week which requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state's Senate passed the landmark legislation after the General Assembly approved it late Wednesday. The Senate then added amendments to the bill which now goes back to the Assembly for final sign off before it is sent to the governor's desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a "formality." California, long a bellwether for privacy regulation, now sets an example for other states which could offer the same protections and in doing so dramatically disrupt the online advertising ecosystem, according to Schwartz.

"If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers' data," Schwartz said in an interview with Recorded Future News. "You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that's going to have a big impact." As it stands, many Californians don't know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has "artificially suppressed" the existing regulation's intended effects. "It shouldn't be that hard to send the universal opt out signal," Schwartz added. "This will require [browsers and mobile operating systems] to make that setting easy to use and find."

Businesses

Telegram Says CEO Durov Has 'Nothing To Hide' (bbc.com) 79

Messaging app Telegram has said its CEO Pavel Durov, who was detained in France on Saturday, has "nothing to hide." From a report: Mr Durov was arrested at an airport north of Paris under a warrant for offences related to the app, according to officials. The investigation is reportedly about insufficient moderation, with Mr Durov accused of failing to take steps to curb criminal uses of Telegram. The app is accused of failure to co-operate with law enforcement over drug trafficking, child sexual content and fraud.

Telegram said in a statement that "its moderation is within industry standards and constantly improving." The app added: "It is absurd to claim that a platform or its owner are responsible for abuse of that platform." Telegram said Mr Durov travels in Europe frequently and added that it abides by European Union laws, including the Digital Services Act, which aims to ensure a safe and accountable online environment. "Almost a billion users globally use Telegram as means of communication and as a source of vital information," the app's statement read. "We're awaiting a prompt resolution of this situation. Telegram is with you all." Judicial sources quoted by AFP news agency say Mr Durov's detention was extended on Sunday and could last as long as 96 hours.

Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com) 8

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Chrome

Google Can't Defend Shady Chrome Data Hoarding As 'Browser Agnostic,' Court Says (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Chrome users who declined to sync their Google accounts with their browsing data secured a big privacy win this week after previously losing a proposed class action claiming that Google secretly collected personal data without consent from over 100 million Chrome users who opted out of syncing. On Tuesday, the 9th US Circuit Court of Appeals reversed (PDF) the prior court's finding that Google had properly gained consent for the contested data collection. The appeals court said that the US district court had erred in ruling that Google's general privacy policies secured consent for the data collection. The district court failed to consider conflicts with Google's Chrome Privacy Notice (CPN), which said that users' "choice not to sync Chrome with their Google accounts meant that certain personal information would not be collected and used by Google," the appeals court ruled.

Rather than analyzing the CPN, it appears that the US district court completely bought into Google's argument that the CPN didn't apply because the data collection at issue was "browser agnostic" and occurred whether a user was browsing with Chrome or not. But the appeals court -- by a 3-0 vote -- did not. In his opinion, Circuit Judge Milan Smith wrote that the "district court should have reviewed the terms of Google's various disclosures and decided whether a reasonable user reading them would think that he or she was consenting to the data collection." "By focusing on 'browser agnosticism' instead of conducting the reasonable person inquiry, the district court failed to apply the correct standard," Smith wrote. "Viewed in the light most favorable to Plaintiffs, browser agnosticism is irrelevant because nothing in Google's disclosures is tied to what other browsers do."

Smith seemed to suggest that the US district court wasted time holding a "7.5-hour evidentiary hearing which included expert testimony about 'whether the data collection at issue'" was "browser-agnostic." "Rather than trying to determine how a reasonable user would understand Google's various privacy policies," the district court improperly "made the case turn on a technical distinction unfamiliar to most 'reasonable'" users, Smith wrote. Now, the case has been remanded to the district court where Google will face a trial over the alleged failure to get consent for the data collection. If the class action is certified, Google risks owing currently unknown damages to any Chrome users who opted out of syncing between 2016 and 2024. According to Smith, the key focus of the trial will be weighing the CPN terms and determining "what a 'reasonable user' of a service would understand they were consenting to, not what a technical expert would."

Privacy

US Feds Are Tapping a Half-Billion Encrypted Messaging Goldmine (404media.co) 77

An anonymous reader shares a report: U.S. agencies are increasingly accessing parts of a half-billion encrypted chat message haul that has rocked the global organized crime underground, using the chats as part of multiple drug trafficking prosecutions, according to a 404 Media review of U.S. court records. In particular, U.S. authorities are using the chat messages to prosecute alleged maritime drug smugglers who traffic cocaine using speedboats and commercial ships.

The court records show the continued fallout of the massive hack of encrypted phone company Sky in 2021, in which European agencies obtained the intelligence goldmine of messages despite Sky being advertised as end-to-end encrypted. European authorities have used those messages as the basis for many prosecutions and drug seizures across the continent. Now, it's clear that the blast radius extends to the United States.

Privacy

Slack AI Can Be Tricked Into Leaking Data From Private Channels (theregister.com) 9

Slack AI, an add-on assistive service available to users of Salesforce's team messaging service, is vulnerable to prompt injection, according to security firm PromptArmor. From a report: The AI service provides generative tools within Slack for tasks like summarizing long conversations, finding answers to questions, and summarizing rarely visited channels.

"Slack AI uses the conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization," the messaging app provider explains in its documentation. Except it's not that secure, as PromptArmor tells it. A prompt injection vulnerability in Slack AI makes it possible to fetch data from private Slack channels.

Privacy

Toyota Confirms Breach After Stolen Data Leaks On Hacking Forum (bleepingcomputer.com) 7

Toyota confirmed a breach of its network after 240GB of data, including employee and customer information, was leaked on a hacking forum by a threat actor. The company has not provided details on how or when the breach occurred. BleepingComputer reports: ZeroSevenGroup (the threat actor who leaked the stolen data) says they breached a U.S. branch and were able to steal 240GB of files with information on Toyota employees and customers, as well as contracts and financial information. They also claim to have collected network infrastructure information, including credentials, using the open-source ADRecon tool that helps extract vast amounts of information from Active Directory environments.

"We have hacked a branch in United States to one of the biggest automotive manufacturer in the world (TOYOTA). We are really glad to share the files with you here for free. The data size: 240 GB," the threat actor claims. "Contents: Everything like Contacts, Finance, Customers, Schemes, Employees, Photos, DBs, Network infrastructure, Emails, and a lot of perfect data. We also offer you AD-Recon for all the target network with passwords." While Toyota hasn't shared the date of the breach, BleepingComputer found that the files had been stolen or at least created on December 25, 2022. This date could indicate that the threat actor gained access to a backup server where the data was stored.
"We are aware of the situation. The issue is limited in scope and is not a system wide issue," Toyota told BleepingComputer. The company added that it's "engaged with those who are impacted and will provide assistance if needed."
Television

Your TV Set Has Become a Digital Billboard. And It's Only Getting Worse. (arstechnica.com) 158

TV manufacturers are shifting their focus from hardware sales to viewer data and advertising revenue. This trend is driven by declining profit margins on TV sets and the growing potential of smart TV operating systems to generate recurring income. Companies like LG, Samsung, and Roku are increasingly prioritizing ad sales and user tracking capabilities in their TVs, ArsTechnica reports. Automatic content recognition (ACR) technology, which analyzes viewing habits, is becoming a key feature for advertisers. TV makers are partnering with data firms to enhance targeting capabilities, with LG recently sharing data with Nielsen and Samsung updating its ACR tech to track streaming ad exposure. This shift raises concerns about privacy and user experience, as TVs become more commercialized and data-driven. Industry experts predict a rise in "shoppable ads" and increased integration between TV viewing and e-commerce platforms. The report adds: With TV sales declining and many shoppers prioritizing pricing, smart TV players will continue developing ads that are harder to avoid and better at targeting. Interestingly, Patrick Horner, practice leader of consumer electronics at analyst Omdia, told Ars that smart TV advertising revenue exceeding smart TV hardware revenue (as well as ad sale margins surpassing those of hardware) is a US-only trend, albeit one that shows no signs of abating. OLED has become a mainstay in the TV marketplace, and until the next big display technology becomes readily available, OEMs are scrambling to make money in a saturated TV market filled with budget options. Selling ads is an obvious way to bridge the gap between today and The Next Big Thing in TVs.

Indeed, with companies like Samsung and LG making big deals with analytics firms and other brands building their businesses around ads, the industry's obsession with ads will only intensify. As we've seen before with TV commercials, which have gotten more frequent over time, once the ad genie is out of the bottle, it tends to grow, not go back inside. One side effect we're already seeing, Horner notes, is "a proliferation of more TV operating systems." While choice is often a good thing for consumers, it's important to consider if new options from companies like Amazon, Comcast, and TiVo actually do anything to notably improve the smart TV experience for owners.

And OS operators' financial success is tied to the number of hours users spend viewing something on the OS. Roku's senior director of ad innovation, Peter Hamilton, told Digiday in May that his team works closely with Roku's consumer team, "whose goal is to drive total viewing hours." Many smart TV OS operators are therefore focused on making it easier for users to navigate content via AI.

Privacy

National Public Data Published Its Own Passwords (krebsonsecurity.com) 35

Security researcher Brian Krebs writes: New details are emerging about a breach at National Public Data (NPD), a consumer data broker that recently spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online. KrebsOnSecurity has learned that another NPD data broker which shares access to the same consumer records inadvertently published the passwords to its back-end database in a file that was freely available from its homepage until today. In April, a cybercriminal named USDoD began selling data stolen from NPD. In July, someone leaked what was taken, including the names, addresses, phone numbers and in some cases email addresses for more than 272 million people (including many who are now deceased). NPD acknowledged the intrusion on Aug. 12, saying it dates back to a security incident in December 2023. In an interview last week, USDoD blamed the July data leak on another malicious hacker who also had access to the company's database, which they claimed has been floating around the underground since December 2023.

Following last week's story on the breadth of the NPD breach, a reader alerted KrebsOnSecurity that a sister NPD property -- the background search service recordscheck.net -- was hosting an archive that included the usernames and password for the site's administrator. A review of that archive, which was available from the Records Check website until just before publication this morning (August 19), shows it includes the source code and plain text usernames and passwords for different components of recordscheck.net, which is visually similar to nationalpublicdata.com and features identical login pages. The exposed archive, which was named "members.zip," indicates RecordsCheck users were all initially assigned the same six-character password and instructed to change it, but many did not. According to the breach tracking service Constella Intelligence, the passwords included in the source code archive are identical to credentials exposed in previous data breaches that involved email accounts belonging to NPD's founder, an actor and retired sheriff's deputy from Florida named Salvatore "Sal" Verini.

Reached via email, Mr. Verini said the exposed archive (a .zip file) containing recordscheck.net credentials has been removed from the company's website, and that the site is slated to cease operations "in the next week or so." "Regarding the zip, it has been removed but was an old version of the site with non-working code and passwords," Verini told KrebsOnSecurity. "Regarding your question, it is an active investigation, in which we cannot comment on at this point. But once we can, we will [be] with you, as we follow your blog. Very informative." The leaked recordscheck.net source code indicates the website was created by a web development firm based in Lahore, Pakistan called creationnext.com, which did not return messages seeking comment. CreationNext.com's homepage features a positive testimonial from Sal Verini.

Privacy

National Public Data Confirms Breach Exposing Social Security Numbers (bleepingcomputer.com) 56

BleepingComputer's Ionut Ilascu reports: Background check service National Public Data confirms that hackers breached its systems after threat actors leaked a stolen database with millions of social security numbers and other sensitive personal information. The company states that the breached data may include names, email addresses, phone numbers, social security numbers (SSNs), and postal addresses.

In the statement disclosing the security incident, National Public Data says that "the information that was suspected of being breached contained name, email address, phone number, social security number, and mailing address(es)." The company acknowledges the "leaks of certain data in April 2024 and summer 2024" and believes the breach is associated with a threat actor "that was trying to hack into data in late December 2023." NPD says they investigated the incident, cooperated with law enforcement, and reviewed the potentially affected records. If significant developments occur, the company "will try to notify" the impacted individuals.

Microsoft

Microsoft Tweaks Fine Print To Warn Everyone Not To Take Its AI Seriously (theregister.com) 54

Microsoft is notifying folks that its AI services should not be taken too seriously, echoing prior service-specific disclaimers. From a report: In an update to the IT giant's Service Agreement, which takes effect on September 30, 2024, Redmond has declared that its Assistive AI isn't suitable for matters of consequence. "AI services are not designed, intended, or to be used as substitutes for professional advice," Microsoft's revised legalese explains. The changes to Microsoft's rules of engagement cover a few specific services, such as noting that Xbox customers should not expect privacy from platform partners.

"In the Xbox section, we clarified that non-Xbox third-party platforms may require users to share their content and data in order to play Xbox Game Studio titles and these third-party platforms may track and share your data, subject to their terms," the latest Service Agreement says. There are also some clarifications regarding the handling of Microsoft Cashback and Microsoft Rewards. But the most substantive revision is the addition of an AI Services section, just below a passage that says Copilot AI Experiences are governed by Bing's Terms of Use. Those using Microsoft Copilot with commercial data protection get a separate set of terms. The tweaked consumer-oriented rules won't come as much of a surprise to anyone who has bothered to read the contractual conditions governing Microsoft's Bing and associated AI stuff. For example, there's now a Services Agreement prohibition on using AI Services for "Extracting Data."

Slashdot Top Deals