Google

AOSP Isn't Dead, But Google Just Landed a Huge Blow To Custom ROM Developers (androidauthority.com) 46

Google has removed device trees and driver binaries for Pixel phones from the Android 16 source code release, significantly complicating custom ROM development for those devices. The Android-maker intentionally omitted these resources as it shifts its Android Open Source Project reference target from Pixel hardware to a virtual device called "Cuttlefish."

The change forces custom ROM developers to reverse-engineer configurations they previously received directly from Google. Nolen Johnson from LineageOS said the process will become "painful," requiring developers to "blindly guess and reverse engineer from the prebuilt binaries what changes are needed each month." Google also squashed the Pixel kernel source code's commit history, eliminating another reference point developers used for features and security patches.

Google VP Seang Chau dismissed speculation that AOSP itself is ending, stating the project "is NOT going away." However, the changes effectively bring Pixel devices down to the same difficult development level as other Android phones.
The Internet

Abandoned Subdomains from Major Institutions Hijacked for AI-Generated Spam (404media.co) 17

A coordinated spam operation has infiltrated abandoned subdomains belonging to major institutions including Nvidia, Stanford University, NPR, and the U.S. government's vaccines.gov site, flooding them with AI-generated content that subsequently appears in search results and Google's AI Overview feature.

The scheme, reports 404 Media, posted over 62,000 articles on Nvidia's events.nsv.nvidia.com subdomain before the company took it offline within two hours of being contacted by reporters. The spam articles, which included explicit gaming content and local business recommendations, used identical layouts and a fake byline called "Ashley" across all compromised sites. Each targeted domain operates under different names -- "AceNet Hub" on Stanford's site, "Form Generation Hub" on NPR, and "Seymore Insights" on vaccines.gov -- but all redirect traffic to a marketing spam page. The operation exploits search engines' trust in institutional domains, with Google's AI Overview already serving the fabricated content as factual information to users searching for local businesses.
The Internet

An Experimental New Dating Site Matches Singles Based on Their Browser Histories (wired.com) 72

A dating site launched last week by Belgian artist Dries Depoorter matches potential partners based on their internet browsing histories rather than curated profiles or photos. Browser Dating requires users to download a Chrome or Firefox extension that exports and uploads their recent search data, creating matches based on shared online behaviors and interests rather than traditional dating app metrics.

Fewer than 1,000 users have signed up since the platform's launch, paying a one-time fee of $10.3 for unlimited matches or using a free tier limited to five connections. Depoorter, known for digital art projects exploring surveillance and technology, says the concept emerged from a 2016 workshop where participants shared a year of search history data. The platform processes browsing data locally using Google's Firebase tools.
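Depoorter hasn't published his matching algorithm. As a rough illustration of the general idea, a set-overlap score such as Jaccard similarity could rank candidates by shared browsing domains; everything below (names, histories, the scoring choice itself) is invented for the sketch:

```python
# Hypothetical history-based matching, NOT Browser Dating's actual
# algorithm: score each pair by Jaccard similarity of the sets of
# domains the two users visited.

def jaccard(a: set, b: set) -> float:
    """Similarity of two sets: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def best_match(user: set, candidates: dict) -> str:
    """Return the candidate whose history overlaps most with `user`."""
    return max(candidates, key=lambda name: jaccard(user, candidates[name]))

alice = {"arxiv.org", "news.ycombinator.com", "allrecipes.com"}
candidates = {
    "bob": {"arxiv.org", "news.ycombinator.com", "github.com"},
    "carol": {"vogue.com", "allrecipes.com"},
}
print(best_match(alice, candidates))  # bob (2 of 4 domains shared)
```

A real system would need far more nuance (visit frequency, recency, topic modeling), but any such scheme ultimately reduces to comparing behavioral fingerprints like this.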
China

Hong Kong Bans Video Game Using National Security Laws (engadget.com) 40

Hong Kong authorities have invoked national security laws for the first time to ban the Taiwan-made video game Reversed Front: Bonfire, accusing it of promoting "secessionist agendas, such as 'Taiwan independence' and 'Hong Kong independence.'" Engadget reports: Reversed Front: Bonfire was developed by a group known as ESC Taiwan, who are outspoken critics of China's Communist Party. The game disappeared from the Apple App Store in Hong Kong less than 24 hours after authorities issued the warning. Google already removed the game from the Play Store back in May, because players were using hate speech as part of their usernames. ESC Taiwan told The New York Times that the game's removal shows that apps like theirs are subject to censorship in mainland China. The group also thanked authorities for the free publicity on Facebook, as the game experienced a surge in Google searches.

The game uses anime-style illustrations and allows players to fight against China's Communist Party by taking on the role of "propagandists, patrons, spies or guerrillas" from Hong Kong, Taiwan, Tibet, Mongolia and Xinjiang, which is home to ethnic minorities like the Uyghurs. That said, they can also choose to play as government soldiers. In its warning, Hong Kong Police said that anybody who shares or recommends the game on the internet may be committing several offenses, including "incitement to secession," "incitement to subversion" and "offenses in connection with seditious intention." Anybody who has downloaded the game will be considered in "possession of a publication that has a seditious intention," and anybody who provides financial assistance to it will be violating national security laws, as well. "Those who have downloaded the application should uninstall it immediately and must not attempt to defy the law," the authorities wrote.

Google

HP's First Google Beam 3D Video System Costs $24,999, Plus Unknown License Fees (arstechnica.com) 38

HP has unveiled the first commercial hardware for Google Beam, the Android-maker's 3D video conferencing technology formerly known as Project Starline, with a price tag of $24,999. The HP Dimension features a 65-inch light field display paired with six high-speed cameras positioned around the screen to capture speakers from multiple angles, creating what the companies describe as a lifelike 3D representation without requiring headsets or glasses.

The system processes visual data through Google's proprietary volumetric video model, which merges camera streams into 3D reconstructions with millimeter-scale precision at 60 frames per second. Beyond the hardware cost, users must purchase a separate Google Beam license for cloud processing, though pricing for that service remains undisclosed.
Google

News Sites Are Getting Crushed by Google's New AI Tools (wsj.com) 134

"It is true, Google AI is stomping on the entire internet," writes Slashdot reader TheWho79, sharing a report from the Wall Street Journal. "From HuffPost to the Atlantic, publishers prepare to pivot or shut the doors. ... Even highly regarded old school bullet-proof publications like Washington Post are getting hit hard." From the report: Traffic from organic search to HuffPost's desktop and mobile websites fell by just over half in the past three years, and by nearly that much at the Washington Post, according to digital market data firm Similarweb. Business Insider cut about 21% of its staff last month, a move CEO Barbara Peng said was aimed at helping the publication "endure extreme traffic drops outside of our control." Organic search traffic to its websites declined by 55% between April 2022 and April 2025, according to data from Similarweb.

At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [...] "Google is shifting from being a search engine to an answer engine," Thompson said in an interview with The Wall Street Journal. "We have to develop new strategies."

The rapid development of click-free answers in search "is a serious threat to journalism that should not be underestimated," said William Lewis, the Washington Post's publisher and chief executive. Lewis is former CEO of the Journal's publisher, Dow Jones. The Washington Post is "moving with urgency" to connect with previously overlooked audiences and pursue new revenue sources and prepare for a "post-search era," he said.

At the New York Times, the share of traffic coming from organic search to the paper's desktop and mobile websites slid to 36.5% in April 2025 from almost 44% three years earlier, according to Similarweb. The Wall Street Journal's traffic from organic search was up in April compared with three years prior, Similarweb data show, though as a share of overall traffic it declined to 24% from 29%.
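The Journal's numbers can look contradictory at first glance: absolute search traffic rose while its share fell. That happens whenever total traffic grows faster than search traffic does, as this toy calculation shows (the visit counts are invented; only the 29% and 24% shares come from the article):

```python
# Invented totals chosen so the shares match the article's figures.
old_total, new_total = 100.0, 150.0   # hypothetical units of traffic

old_search = 0.29 * old_total         # 29% share three years ago
new_search = 0.24 * new_total         # 24% share in April 2025

# Search traffic grew in absolute terms even as its share shrank.
assert new_search > old_search
print(round(old_search, 1), round(new_search, 1))
```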
Further reading: Google's AI Mode Is 'the Definition of Theft,' Publishers Say
Android

Android 16 Is Here (blog.google) 23

An anonymous reader shares a blog post from Google: Today, we're bringing you Android 16, rolling out first to supported Pixel devices with more phone brands to come later this year. This is the earliest Android has launched a major release in the last few years, which ensures you get the latest updates as soon as possible on your devices. Android 16 lays the foundation for our new Material 3 Expressive design, with features that make Android more accessible and easy to use.
AI

Apple's Upgraded AI Models Underwhelm On Performance (techcrunch.com) 24

Apple's latest AI models continue to lag behind competitors, according to the company's own benchmark testing it disclosed this week. The tech giant's newest "Apple On-Device" model, which runs locally on iPhones and other devices, performed only "comparably" to similarly-sized models from Google and Alibaba in human evaluations of text generation quality -- not better, despite being Apple's most recent release.

The performance gap widens with Apple's more powerful "Apple Server" model, designed for data center deployment. Human testers rated it behind OpenAI's year-old GPT-4o in text generation tasks. In image analysis tests, evaluators preferred Meta's Llama 4 Scout model over Apple Server, a particularly notable result given that Llama 4 Scout itself underperforms leading models from Google, Anthropic, and OpenAI on various benchmarks.
AI

OpenAI Taps Google in Unprecedented Cloud Deal Despite AI Rivalry (reuters.com) 6

OpenAI plans to add Alphabet's Google cloud service to meet its growing needs for computing capacity, Reuters reported Tuesday, marking a surprising collaboration between two prominent competitors in the AI race. From the report: The deal, which has been under discussion for a few months, was finalized in May, one of the sources added. It underscores how massive computing demands to train and deploy AI models are reshaping the competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major supporter Microsoft, including its high-profile Stargate data center project.

It is a win for Google's cloud unit, which will supply additional computing capacity to OpenAI's existing infrastructure for training and running its AI models, said the sources, who requested anonymity to discuss private matters. The move also comes as OpenAI's ChatGPT poses the biggest threat to Google's dominant search business in years, with Google executives recently saying that the AI race may not be winner-take-all.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 50

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
Facebook

Meta Is Creating a New AI Lab To Pursue 'Superintelligence' 77

Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, the New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the A.I. start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.

Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading A.I. companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's A.I. efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.
Censorship

YouTube Will 'Protect Free Expression' By Pulling Back On Content Moderation (arstechnica.com) 200

An anonymous reader quotes a report from Ars Technica: YouTube videos may be getting a bit more pernicious soon. Google's dominant video platform has spent years removing discriminatory and conspiracy content from its platform in accordance with its usage guidelines, but the site is now reportedly adopting a lighter-touch approach to moderation. A higher bar for content removal will allow more potentially inflammatory content to remain up in the "public interest." [...]

Beginning late last year, YouTube began informing moderators they should err on the side of caution when removing videos that are in the public interest. That includes user uploads that discuss issues like elections, race, gender, sexuality, abortion, immigration, and censorship. Previously, YouTube's policy told moderators to remove videos if one-quarter or more of the content violated policies. Now, the exception cutoff has been increased to half. In addition, staff are now told to bring issues to managers if they are uncertain rather than removing the content themselves.
"Recognizing that the definition of 'public interest' is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today," YouTube's Nicole Bell told the New York Times. "Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm."

Most of the videos hosted on YouTube won't be affected by this change, the company says. "These exceptions apply to a small fraction of the videos on YouTube, but are vital for ensuring important content remains available," a YouTube spokesperson tells Ars. "This practice allows us to prevent, for example, an hours-long news podcast from being removed for showing one short clip of violence."
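The threshold change reported above amounts to a one-line rule. A hypothetical sketch (this is not YouTube's actual code; only the 25% and 50% cutoffs come from the report):

```python
# Illustrative model of the reported moderation rule: remove a video in
# the "public interest" categories only if at least `cutoff` of its
# content violates policy. The report says the cutoff rose from 1/4 to 1/2.

def should_remove(violating_fraction: float, cutoff: float = 0.5) -> bool:
    """True if the video meets the removal threshold."""
    return violating_fraction >= cutoff

# A video that is one-third violating content: removed under the old
# quarter rule, kept under the new half rule.
print(should_remove(1/3, cutoff=0.25))  # True  (old rule: remove)
print(should_remove(1/3, cutoff=0.5))   # False (new rule: keep)
```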
Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com) 17

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.

Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write up, brutecat then barrages Google with guesses of the phone number until getting a hit.
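The country-to-country timing differences brutecat reported are what you'd expect from the size of the search space: each digit an attacker already knows (country code, area code, carrier prefix) cuts the candidates tenfold. A back-of-the-envelope sketch, where the guess rate is an assumed figure for illustration and not from the write-up:

```python
# Rough model of exhaustive phone-number guessing: time scales with
# 10^(unknown digits). The guess rate below is an assumption chosen
# for illustration, not a number from brutecat's write-up.

def hours_to_enumerate(digits_unknown: int, guesses_per_second: float) -> float:
    """Worst-case hours to try every combination of the unknown digits."""
    return 10 ** digits_unknown / guesses_per_second / 3600

RATE = 40_000  # assumed guesses per second

# A numbering plan that leaves 8 digits unknown takes 100x longer to
# exhaust than one that leaves only 6 unknown.
print(f"{hours_to_enumerate(8, RATE):.2f} h")
print(f"{hours_to_enumerate(6, RATE):.4f} h")
```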

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
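The article's "probability gadget" description can be made concrete with a toy bigram model, which picks the next word purely from counted frequencies in its training text. Real LLMs use neural networks over subword tokens rather than word counts, but the output is likewise a probability distribution over what comes next:

```python
# Toy bigram "language model": predict the next word by counting how
# often each word follows each other word in a tiny corpus.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally successors: follows["the"] ends up as {"cat": 2, "mat": 1, "fish": 1}.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # cat
```

Nothing here "understands" cats or mats; scaled up by many orders of magnitude, that is the authors' point about text generation by statistical guesswork.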
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute Eleuther AI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
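The automated first pass the researchers describe can be as simple as filtering documents against an allow-list of open licenses; the hard part, as Biderman notes, is that license metadata is often missing or wrong, which is why human checking was still needed. A minimal sketch with illustrative license tags (the tag names and documents are invented):

```python
# Hypothetical license filter: keep a document only if its declared
# license is on an explicit allow-list. In practice the declared
# license is frequently absent or inaccurate, so this is only a
# first pass before manual review.

OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "public-domain"}

docs = [
    {"text": "...", "license": "CC0-1.0"},
    {"text": "...", "license": "all-rights-reserved"},
    {"text": "...", "license": "CC-BY-4.0"},
]

usable = [d for d in docs if d["license"] in OPEN_LICENSES]
print(len(usable))  # 2
```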

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind back to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices.

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Intel

Top Researchers Leave Intel To Build Startup With 'The Biggest, Baddest CPU' (oregonlive.com) 104

An anonymous reader quotes a report from OregonLive: Together, the four founders of Beaverton startup AheadComputing spent nearly a century at Intel. They were among Intel's top chip architects, working years in advance to develop new generations of microprocessors to power the computers of the future. Now they're on their own, flying without a net, building a new class of microprocessor on an entirely different architecture from Intel's. Founded a year ago, AheadComputing is trying to prove there's a better way to design computer chips.

"AheadComputing is doing the biggest, baddest CPU in the world," said Debbie Marr, the company's CEO. [...] AheadComputing is betting on an open architecture called RISC-V -- RISC stands for "reduced instruction set computer." The idea is to craft a streamlined microprocessor that works more efficiently by doing fewer things, and doing them better than conventional processors. For AheadComputing's founders and 80 employees, many of them also Intel alumni, it's a major break from the kind of work they've been doing all their careers. They've left a company with more than 100,000 workers to start a business with fewer than 100.

"Every person in this room," Marr said, looking across a conference table at her colleagues, "we could have stayed at Intel. We could have continued to do very exciting things at Intel." They decided they had a better chance at leading a revolution in semiconductor technology at a startup than at a big, established company like Intel. And AheadComputing could be at the forefront of renewal in Oregon's semiconductor ecosystem. "We see this opportunity, this light," Marr said. "We took our chances."
It'll be years before AheadComputing's designs are on the market, but the company "envisions its chips will someday power PCs, laptops and data centers," reports OregonLive. "Possible clients could include Google, Amazon, Samsung or other large computing companies."
Botnet

FBI: BadBox 2.0 Android Malware Infects Millions of Consumer Devices (bleepingcomputer.com) 8

An anonymous reader quotes a report from BleepingComputer: The FBI is warning that the BADBOX 2.0 malware campaign has infected over 1 million home Internet-connected devices, converting consumer electronics into residential proxies that are used for malicious activity. The BADBOX botnet is commonly found on Chinese Android-based smart TVs, streaming boxes, projectors, tablets, and other Internet of Things (IoT) devices. "The BADBOX 2.0 botnet consists of millions of infected devices and maintains numerous backdoors to proxy services that cyber criminal actors exploit by either selling or providing free access to compromised home networks to be used for various criminal activity," warns the FBI.

These devices come preloaded with the BADBOX 2.0 malware botnet or become infected after installing firmware updates and through malicious Android applications that sneak onto Google Play and third-party app stores. "Cyber criminals gain unauthorized access to home networks by either configuring the product with malicious software prior to the users purchase or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process," explains the FBI. "Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity."

Once infected, the devices connect to the attacker's command and control (C2) servers, where they receive commands to execute on the compromised devices, such as [routing malicious traffic through residential IPs to obscure cybercriminal activity, performing background ad fraud to generate revenue, and launching credential-stuffing attacks using stolen login data]. Over the years, the malware botnet continued expanding until 2024, when Germany's cybersecurity agency disrupted the botnet in the country by sinkholing the communication between infected devices and the attacker's infrastructure, effectively rendering the malware useless. However, that did not stop the threat actors, with researchers saying they found the malware installed on 192,000 devices a week later. Even more concerning, the malware was found on more mainstream brands, like Yandex TVs and Hisense smartphones. Unfortunately, despite the previous disruption, the botnet continued to grow, with HUMAN's Satori Threat Intelligence stating that over 1 million consumer devices had become infected by March 2025. This new larger botnet is now being called BADBOX 2.0 to indicate a new tracking of the malware campaign.
"This scheme impacted more than 1 million consumer devices. Devices connected to the BADBOX 2.0 operation included lower-price-point, 'off brand,' uncertified tablets, connected TV (CTV) boxes, digital projectors, and more," explains HUMAN.

"The infected devices are Android Open Source Project devices, not Android TV OS devices or Play Protect certified Android devices. All of these devices are manufactured in mainland China and shipped globally; indeed, HUMAN observed BADBOX 2.0-associated traffic from 222 countries and territories worldwide."
China

China Will Drop the Great Firewall For Some Users To Boost Free-Trade Port Ambitions (scmp.com) 49

China's southernmost province of Hainan is piloting a programme to grant select corporate users broad access to the global internet, a rare move in a country known for having some of the world's most restrictive online censorship, as the island seeks to transform itself into a global free-trade port. From a report: Employees of companies registered and operating in Hainan can apply for the "Global Connect" mobile service through the Hainan International Data Comprehensive Service Centre (HIDCSC), according to the agency, which is overseen by the state-run Hainan Big Data Development Centre.

The programme allows eligible users to bypass the so-called Great Firewall, which blocks access to many of the world's most-visited websites, such as Google and Wikipedia. Applicants must be on a 5G plan with one of the country's three major state-backed carriers -- China Mobile, China Unicom or China Telecom -- and submit their employer's information, including the company's Unified Social Credit Code, for approval. The process can take up to five months, HIDCSC staff said.

Chrome

Google Chrome Smashes Speedometer 3 Record With Massive Performance Gains (betanews.com) 40

BrianFagioli writes: Google is flexing its engineering muscles today by announcing a record-breaking score on the Speedometer 3 benchmark with its Chrome browser. If you've felt like the web got snappier lately, this could be why.

According to the search giant, Chrome's latest performance improvements translate to real-world time savings. Believe it or not, that could potentially add up to 58 million hours saved annually for users. That's the equivalent of about 83 human lifetimes not wasted waiting for web pages to load!
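The submitter's arithmetic roughly checks out if you assume a human lifetime of about 80 years (the article doesn't state its own assumption):

```python
# Sanity check: does 58 million hours come out to roughly 83 lifetimes?
# The 80-year lifetime is our assumption for the check.

HOURS_PER_YEAR = 24 * 365.25            # 8766 hours per year
lifetime_hours = 80 * HOURS_PER_YEAR    # ~701,280 hours per lifetime

lifetimes = 58_000_000 / lifetime_hours
print(round(lifetimes))  # 83
```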
