Google

Google Begins Requiring JavaScript For Google Search (techcrunch.com) 91

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search. From a report: In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly, and that the quality of search results tends to be degraded.
Google

Google Won't Add Fact Checks Despite New EU Law (axios.com) 185

According to Axios, Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law. From the report: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google's global affairs president Kent Walker said the fact-checking integration required by the Commission's new Disinformation Code of Practice "simply isn't appropriate or effective for our services" and said Google won't commit to it. The code would require Google to incorporate fact-check results alongside Google's search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google's current approach to content moderation works and pointed to successful content moderation during last year's "unprecedented cycle of global elections" as proof. He said a new feature added to YouTube last year that enables some users to add contextual notes to videos "has significant potential." (That program is similar to X's Community Notes feature, as well as a new program announced by Meta last week.)

The EU's Code of Practice on Disinformation, strengthened in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on. The Code, originally created in 2018, predates the EU's new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA. Walker said in his letter Thursday that Google had already told the Commission that it didn't plan to comply. Google will "pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct," he wrote. He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.

Google

Google Strikes World's Largest Biochar Carbon Removal Deal 33

Google has partnered with Indian startup Varaha to purchase 100,000 tons of carbon dioxide removal credits by 2030, marking its largest deal in India and the largest involving biochar, a carbon removal solution made from biomass. TechCrunch reports: The credits from the offtake agreement will be delivered to Google by 2030 from Varaha's industrial biochar project in the western Indian state of Gujarat, the two firms said on Thursday. [...] Biochar is produced in two ways: artisanal and industrial. The artisanal method is community-driven, where farmers burn crop residue in conical pits without using machines. In contrast, industrial biochar is made using large reactors that process 50-60 tons of biomass daily.

Varaha's project will generate industrial biochar from an invasive plant species, Prosopis juliflora, using its pyrolysis facility in Gujarat. The invasive species impacts plant biodiversity and has overtaken grasslands used for livestock. Varaha will harvest the plant and make efforts to restore native grasslands in the region, the company's co-founder and CEO Madhur Jain said in an interview. Once the biochar is produced, a third-party auditor will submit their report to Puro.Earth to generate credits. Although biochar is seen as a long-term carbon removal solution, its permanence can vary between 1,000 and 2,500 years depending on production and environmental factors.

Jain told TechCrunch that Varaha tried using different feedstocks and different parameters within its reactors to find the best combination to achieve permanence close to 1,600 years. The startup has also built a digital monitoring, reporting and verification system, integrating remote sensing to monitor biomass availability. It even has a mobile app that captures geo-tagged, time-stamped images to geographically document activities, including biomass excavation and biochar's field application. With its first project, Varaha said it processed at least 40,000 tons of biomass and produced 10,000 tons of biochar last year.
AI

AI Slashes Google's Code Migration Time By Half (theregister.com) 74

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
Security

Dead Google Apps Domains Can Be Compromised By New Owners (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Lots of startups use Google's productivity suite, known as Workspace, to handle email, documents, and other back-office matters. Relatedly, lots of business-minded webapps use Google's OAuth, i.e. "Sign in with Google." It's a low-friction feedback loop -- up until the startup fails, the domain goes up for sale, and somebody forgot to close down all the Google stuff. Dylan Ayrey, of Truffle Security Co., suggests in a report that this problem is more serious than anyone, especially Google, is acknowledging. Many startups make the critical mistake of not properly closing their accounts -- on both Google and other web-based apps -- before letting their domains expire.

Given the number of people working for tech startups (6 million), the failure rate of said startups (90 percent), their usage of Google Workspaces (50 percent, all by Ayrey's numbers), and the speed at which startups tend to fall apart, there are a lot of Google-auth-connected domains up for sale at any time. That would not be an inherent problem, except that, as Ayrey shows, buying a domain allows you to re-activate the Google accounts for former employees if the site's Google account still exists.

With admin access to those accounts, you can get into many of the services they used Google's OAuth to log into, like Slack, ChatGPT, Zoom, and HR systems. Ayrey writes that he bought a defunct startup domain and got access to each of those through Google account sign-ins. He ended up with tax documents, job interview details, and direct messages, among other sensitive materials.
A Google spokesperson said in a statement: "We appreciate Dylan Ayrey's help identifying the risks stemming from customers forgetting to delete third-party SaaS services as part of turning down their operation. As a best practice, we recommend customers properly close out domains following these instructions to make this type of issue impossible. Additionally, we encourage third-party apps to follow best-practices by using the unique account identifiers (sub) to mitigate this risk."
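Google's suggested mitigation hinges on keying accounts to the OAuth `sub` claim, which is unique per Google account and never reused, rather than to an email address, which a new owner of an expired domain can recreate. A minimal sketch of why that matters (the claims dicts and account stores below are hypothetical illustrations, not Google's or any vendor's actual code, and real ID tokens must be signature-verified before their claims are trusted):

```python
# Sketch: keying user accounts on the OAuth "sub" claim resists
# domain-resurrection attacks, while keying on email does not.
# The claims dicts are hypothetical decoded ID-token payloads.

accounts_by_email = {}  # vulnerable: email can be recreated by a new domain owner
accounts_by_sub = {}    # safer: "sub" is unique per Google account, never reused

def login(claims, by_email, by_sub):
    email, sub = claims["email"], claims["sub"]
    # Email-keyed lookup: whoever controls the address gets the account.
    email_account = by_email.setdefault(email, {"owner_sub": sub})
    # Sub-keyed lookup: a recreated address carries a fresh "sub",
    # so it maps to a brand-new internal account instead.
    sub_account = by_sub.setdefault(sub, {"email": email})
    return email_account, sub_account

# Original employee logs in before the startup shuts down.
old = {"email": "alice@startup.example", "sub": "100001"}
login(old, accounts_by_email, accounts_by_sub)

# Attacker buys the expired domain and recreates the same address;
# Google issues the recreated account a different "sub".
attacker = {"email": "alice@startup.example", "sub": "987654"}
email_acct, sub_acct = login(attacker, accounts_by_email, accounts_by_sub)

print(email_acct["owner_sub"])  # prints 100001 -- the old account is handed over
print(sub_acct["email"])        # a fresh, empty account for the new "sub"
```

As Ayrey's report notes, the catch is that many SaaS apps key on email or hosted domain in practice, which is exactly the gap the expired-domain purchase exploits.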
Piracy

Telegram Shuts Down Z-Library, Anna's Archive Channels Over Copyright Infringement (torrentfreak.com) 18

An anonymous reader quotes a report from TorrentFreak: In 'piracy' associated circles, Z-Library has one of the most followed Telegram channels of all. The shadow library's official channel amassed over 630,000 subscribers over the years, who were among the first to read site announcements and other key updates. Z-Library previously had some of its messages removed due to copyright infringement. While it didn't upload or directly link to infringing material on Telegram, rightsholders allegedly complained about the links that were posted to the Z-Library website. In response, Z-Library chose to no longer include links to its own homepage on Telegram. Instead, it referred users to Wikipedia and Reddit, where the links were still available. The same copyright awareness was visible at Anna's Archive, a popular shadow library search engine. This channel was also careful not to post direct links to infringing material. After all, sharing or uploading copyrighted books would undoubtedly lead to trouble.

Despite the reported caution, the channels of both Z-Library and Anna's Archive are no longer accessible today. Messages posted by these accounts were purged "due to copyright infringement", as shown below. Telegram didn't limit its action to removing posts; the channels are now entirely inaccessible. Those trying to access the channels in the Telegram app receive a pop-up message stating they are "unavailable due to copyright infringement." The simultaneous removal of both channels suggests they are linked to the same complaint or decision. The specific complaint and alleged copyright infringements remain unclear.

Businesses

Even Harvard MBAs Are Struggling To Land Jobs (msn.com) 120

Nearly a quarter of Harvard Business School's 2024 M.B.A. graduates remained jobless three months after graduation, highlighting deepening employment challenges at elite U.S. business schools. The unemployment rate for Harvard M.B.A.s rose to 23% from 20% a year earlier, more than double the 10% rate in 2022.

Major employers including McKinsey, Amazon, Google, and Microsoft have scaled back M.B.A. recruitment, with McKinsey cutting its hires at University of Chicago's Booth School to 33 from 71. "We're not immune to the difficulties of the job market," said Kristen Fitzpatrick, who oversees career development at Harvard Business School. "Going to Harvard is not going to be a differentiator. You have to have the skills." Columbia Business School was the only top program to improve its placement rate in 2024. Median starting salaries for employed M.B.A.s remain around $175,000.
Google

Google is Making AI in Gmail and Docs Free - But Raising the Price of Workspace (theverge.com) 21

Google is bundling its AI features into Workspace at no extra charge while raising the base subscription price by $2, to $14 per user per month, the company said Wednesday. The move eliminates the previous $20 monthly fee for the Gemini Business plan that was required to access AI tools in Gmail, Docs and other Workspace apps.
Social Networks

Pixelfed, Instagram's Decentralized Competitor, Is Now On iOS and Android (engadget.com) 15

Pixelfed has launched its mobile app for iOS and Android, solidifying its position as a viable alternative to Instagram. The move also comes at a pivotal moment, as a potential Supreme Court ban on TikTok could drive users to explore other social media platforms. Pixelfed is ad-free, open source, decentralized, defaults to chronological feeds and doesn't share user data with third parties. Engadget reports: The platform launched in 2018, but was only available on the web or through third-party app clients. The Android app debuted on January 9 and the iOS app released today. Creator Daniel Supernault posted on Mastodon Monday evening that the platform had 11,000 users join over the preceding 24 hours and that more than 78,000 posts have been shared to Pixelfed to date. The platform runs on ActivityPub, the same protocol that powers several other decentralized social networks in the fediverse, such as Mastodon and Flipboard.

Further reading: Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed
AI

OpenAI's AI Reasoning Model 'Thinks' In Chinese Sometimes, No One Really Knows Why 104

OpenAI's "reasoning" AI model, o1, has exhibited a puzzling behavior of "thinking" in Chinese, Persian, or some other language -- "even when asked a question in English," reports TechCrunch. While the exact cause remains unclear, as OpenAI has yet to provide an explanation, AI experts have proposed a few theories. From the report: Several on X, including Hugging Face CEO Clement Delangue, alluded to the fact that reasoning models like o1 are trained on datasets containing a lot of Chinese characters. Ted Xiao, a researcher at Google DeepMind, claimed that companies including OpenAI use third-party Chinese data labeling services, and that o1 switching to Chinese is an example of "Chinese linguistic influence on reasoning."

"[Labs like] OpenAI and Anthropic utilize [third-party] data labeling services for PhD-level reasoning data for science, math, and coding," Xiao wrote in a post on X. "[F]or expert labor availability and cost reasons, many of these data providers are based in China." [...] Other experts don't buy the o1 Chinese data labeling hypothesis, however. They point out that o1 is just as likely to switch to Hindi, Thai, or a language other than Chinese while teasing out a solution.

Other experts don't buy the o1 Chinese data labeling hypothesis, however. They point out that o1 is just as likely to switch to Hindi, Thai, or a language other than Chinese while teasing out a solution. Rather, these experts say, o1 and other reasoning models might simply be using languages they find most efficient to achieve an objective (or hallucinating). "The model doesn't know what language is, or that languages are different," Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. "It's all just text to it."

Tiezhen Wang, a software engineer at AI startup Hugging Face, agrees with Guzdial that reasoning models' language inconsistencies may be explained by associations the models made during training. "By embracing every linguistic nuance, we expand the model's worldview and allow it to learn from the full spectrum of human knowledge," Wang wrote in a post on X. "For example, I prefer doing math in Chinese because each digit is just one syllable, which makes calculations crisp and efficient. But when it comes to topics like unconscious bias, I automatically switch to English, mainly because that's where I first learned and absorbed those ideas."

[...] Luca Soldaini, a research scientist at the nonprofit Allen Institute for AI, cautioned that we can't know for certain. "This type of observation on a deployed AI system is impossible to back up due to how opaque these models are," they told TechCrunch. "It's one of the many cases for why transparency in how AI systems are built is fundamental."
The Internet

Double-keyed Browser Caching Is Hitting Web Performance 88

A Google engineer has warned that a major shift in web browser caching is upending long-standing performance optimization practices. Browsers have overhauled their caching systems in a way that forces websites to maintain separate copies of shared resources instead of reusing them across domains.

The new "double-keyed caching" system, implemented to enhance privacy, is ending the era of shared public content delivery networks, writes Google engineer Addy Osmani. According to Chrome's data, the change has led to a 3.6% increase in cache misses and 4% rise in network bandwidth usage.
Businesses

The New $30,000 Side Hustle: Making Job Referrals for Strangers (bnnbloomberg.ca) 15

Tech workers at major U.S. companies are earning thousands of dollars by referring job candidates they've never met, creating an underground marketplace for employment referrals at firms like Microsoft and Nvidia, according to Bloomberg.

One tech worker cited in the report earned $30,000 in referral bonuses after recommending over 1,000 strangers to his employer over 18 months, resulting in more than six successful hires. While platforms like ReferralHub charge up to $50 per referral, Goldman Sachs and Google said such practices violate their policies. Google requires referrals to be based on personal knowledge of candidates.
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
AI

Futurist Predicts AI-Powered 'Digital Superpowers' by 2030 (bigthink.com) 100

Unanimous AI's founder Louis Rosenberg predicts that a "wave" of new superhuman abilities is coming soon, abilities we will experience profoundly "as self-embodied skills that we carry around with us throughout our lives"...

"[B]y 2030, a majority of us will live our lives with context-aware AI agents bringing digital superpowers into our daily experiences." They will be unleashed by context-aware AI agents that are loaded into body-worn devices that see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world... The majority of these superpowers will be delivered through AI-powered glasses with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who just don't like eyewear... [For example, earbuds with built in cameras] We will whisper to these intelligent devices, and they will whisper back, giving us recommendations, guidance, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will coach us through our days like an omniscient alter ego... When you spot that store across the street, you simply whisper to yourself, "I wonder when it opens?" and a voice will instantly ring back into your ears, "10:30 a.m...."

By 2030, we will not need to whisper to the AI agents traveling with us through our lives. Instead, you will be able to simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that "mouthing" will be deployed because it's more private, more resilient to noisy spaces, and most importantly, it will feel more personal, internal, and self-embodied. By 2035, you may not even need to mouth the words. That's because the AI will learn to interpret the signals in our muscles with such subtlety and precision — we will simply need to think about mouthing the words to convey our intent... When you grab a box of cereal in a store and are curious about the carbs, or wonder whether it's cheaper at Walmart, the answers will just ring in your ears or appear visually. It will even give you superhuman abilities to assess the emotions on other people's faces, predict their moods, goals, or intentions, coaching you during real-time conversations to make you more compelling, appealing, or persuasive...

I don't make these claims lightly. I have been focused on technologies that augment our reality and expand human abilities for over 30 years and I can say without question that the mobile computing market is about to run in this direction in a very big way.

Instead of Augmented Reality, how about Augmented Mentality? The article notes Meta has already added context-aware AI to its Ray-Ban glasses and suggests that within five years Meta might try "selling us superpowers we can't resist". And Google's new AI-powered operating system Android XR hopes to augment our world with seamless context-aware content. But think about where this is going. "[E]ach of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper in our ears with targeted advice and guidance."

And yet " by 2030 the superpowers that these devices give us won't feel optional. After all, not having them could put us at a social and cognitive disadvantage."

Thanks to Slashdot reader ZipNada for sharing the news.
Social Networks

'What If They Ban TikTok and People Keep Using It Anyway?' (yahoo.com) 101

"What if they ban TikTok and people keep using it anyway?" asks the New York Times, saying a pending ban in America "is vague on how it would be enforced" Some experts say that even if TikTok is actually banned this month or soon, there may be so many legal and technical loopholes that millions of Americans could find ways to keep TikTok'ing. The law is "Swiss cheese with lots of holes in it," said Glenn Gerstell, a former top lawyer at the National Security Agency and a senior adviser at the Center for Strategic and International Studies, a policy research organization. "There are obviously ways around it...." When other countries ban apps, the government typically orders internet providers and mobile carriers to block web traffic to and from the blocked website or app. That's probably not how a ban on TikTok in the United States would work. Two lawyers who reviewed the law said the text as written doesn't appear to order internet and mobile carriers to stop people from using TikTok.

There may not be unanimity on this point. Some lawyers who spoke to Bloomberg News said internet providers would be in legal hot water if they let their customers continue to use a banned TikTok. Alan Rozenshtein, a University of Minnesota associate law professor, said he suspected internet providers aren't obligated to stop TikTok use "because Congress wanted to allow the most dedicated TikTok users to be able to access the app, so as to limit the First Amendment infringement." The law also doesn't order Americans to stop using TikTok if it's banned or to delete the app from our phones....

Odds are that if the Supreme Court declares the TikTok law constitutional and if a ban goes into effect, blacklisting the app from the Apple and Google app stores will be enough to stop most people from using TikTok... If a ban goes into effect and Apple and Google block TikTok from pushing updates to the app on your phone, it may become buggy or broken over time. But no one is quite sure how long it would take for the TikTok app to become unusable or compromised in this situation.

Users could just sideload the app after downloading it outside a phone's official app store, the article points out. (More than 10 million people sideloaded Fortnite within six weeks of its removal from Apple and Google's app stores.) And there's also the option of just using a VPN — or watching TikTok's web site.

(I've never understood why all apps haven't already been replaced with phone-optimized web sites...)
Youtube

YouTubers Are Selling Their Unused Video Footage To AI Companies (bloomberg.com) 17

An anonymous reader shares a report: YouTubers and other digital content creators are selling their unused video footage to AI companies seeking exclusive videos to better train their AI algorithms, oftentimes netting thousands of dollars per deal. OpenAI, Alphabet's Google, AI media company Moonvalley and several other AI companies are collectively paying hundreds of content creators for access to their unpublished videos, according to people familiar with the negotiations.

That content, which hasn't been posted elsewhere online, is considered valuable for training artificial intelligence systems since it's unique. AI companies are currently paying between $1 and $4 per minute of footage, the people said, with prices increasing depending on video quality or format. Videos that are shot in 4K, for example, go for a higher price, as does non-traditional footage like videos captured from drones or using 3D animations. Most footage, such as unused video created for networks like YouTube, Instagram and TikTok, is selling for somewhere between $1 and $2 per minute.

Programming

StackOverflow Usage Plummets as AI Chatbots Rise (devclass.com) 66

Developer Q&A platform StackOverflow appears to be facing an existential crisis as the volume of new questions on the site has plunged 75% from its 2017 peak and 60% year-on-year as of December 2024, according to StackExchange Data Explorer figures.

The decline accelerated after ChatGPT's launch in November 2022, with questions falling 76% since then. Despite banning AI-generated answers two years ago, StackOverflow has embraced AI partnerships, striking deals with Google, OpenAI and GitHub.
Privacy

See the Thousands of Apps Hijacked To Spy On Your Location (404media.co) 49

An anonymous reader quotes a report from 404 Media: Some of the world's most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement. The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush and dating apps like Tinder to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem -- not code developed by the app creators themselves -- this data collection is likely happening without users' or even app developers' knowledge.

"For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients appears to be acquiring their data from the online advertising 'bid stream,'" rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push and who has followed the location data industry closely, tells 404 Media after reviewing some of the data. The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of peoples' mobile phones.

"This is a nightmare scenario for privacy, because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way," Edwards says. Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps. The list includes dating sites Tinder and Grindr; massive games such asCandy Crush,Temple Run,Subway Surfers, andHarry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.
404 Media's full list of apps included in the data can be found here. There are also other lists available from other security researchers.
The Courts

Google Faces Trial For Collecting Data On Users Who Opted Out (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: A federal judge this week rejected Google's motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user's web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco. The lawsuit concerns Google's Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. "The WAA button is a Google account setting that purports to give users privacy control of Google's data logging of the user's web app and activity, such as a user's searches and activity from other Google services, information associated with the user's activity, and information about the user's location and device," wrote (PDF) US District Judge Richard Seeborg, the chief judge in the Northern District Of California.

Google says that Web & App Activity "saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services." Google also has a supplemental Web & App Activity setting that the judge's ruling refers to as "(s)WAA." "The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user's '[Google] Chrome history and activity from sites, apps, and devices that use Google services.' Disabling WAA also disables the (s)WAA button," Seeborg wrote. But data is still sent to third-party app developers through the Google Analytics for Firebase (GA4F), "a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement," the ruling said. GA4F "is integrated in 60 percent of the top apps" and "works by automatically sending to Google a user's ad interactions and certain identifiers regardless of a user's (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer."

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs "present evidence that their data has economic value," and "a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data," Seeborg wrote. The lawsuit was filed in July 2020. The judge notes that summary judgment can be granted when "there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law." Google hasn't met that standard, he ruled.
In a statement provided to Ars, Google said that "privacy controls have long been built into our service and the allegations here are a deliberate attempt to mischaracterize the way our products work. We will continue to make our case in court against these patently false claims."
Chromium

Tech Giants Form Chromium Browser Coalition (betanews.com) 67

BrianFagioli writes: The Linux Foundation has announced the launch of 'Supporters of Chromium-Based Browsers,' an initiative aimed at funding and supporting open development within the Chromium ecosystem. The purpose of this effort is to provide resources and foster collaboration among developers, academia, and tech companies to drive the sustainability and innovation of Chromium projects. Major industry players, including Google, Meta, Microsoft, and Opera, have pledged their support.
