Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 109

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Passwords app ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.
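To make the privacy-preserving idea concrete, here is a minimal Swift sketch of the k-anonymity range-query pattern popularized by Have I Been Pwned. This is not Apple's protocol (Apple's service uses stronger private-set-intersection cryptography); it only illustrates how a breach check can work without the credential, or even its full hash, ever leaving the device:

```swift
import CryptoKit
import Foundation

// Sketch of a k-anonymity breach check, in the style of Have I Been Pwned.
// Only a 5-character hash prefix is sent; matching happens locally, so the
// server never learns which credential (if any) was being checked.
func isBreached(_ password: String) async throws -> Bool {
    let digest = Insecure.SHA1.hash(data: Data(password.utf8))
    let hex = digest.map { String(format: "%02X", $0) }.joined()
    let prefix = String(hex.prefix(5))
    let suffix = String(hex.dropFirst(5))

    let url = URL(string: "https://api.pwnedpasswords.com/range/\(prefix)")!
    let (body, _) = try await URLSession.shared.data(from: url)

    // The response lists every known suffix in this bucket ("SUFFIX:count").
    return String(decoding: body, as: UTF8.self)
        .split(whereSeparator: \.isNewline)
        .contains { $0.hasPrefix(suffix) }
}
```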

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.
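The latency claim comes down to when memory is reclaimed. Here is a minimal, purely illustrative Swift sketch (not Apple's service code) of why reference counting gives deterministic deallocation:

```swift
// Under ARC, deinit runs the moment the last reference disappears, so
// reclamation cost is paid at predictable points on each request path
// rather than in collector pauses at arbitrary times.
final class RequestBuffer {
    let bytes: [UInt8]
    init(count: Int) { bytes = [UInt8](repeating: 0, count: count) }
    deinit { print("freed \(bytes.count) bytes") }
}

func handleRequest() {
    let buffer = RequestBuffer(count: 4096)
    // ... use buffer ...
    _ = buffer
}   // deinit fires here, after the last use, not at a future GC cycle
```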

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
AI

Do People Actually Want Smart Glasses Now? (cnn.com) 141

It's the technology "Google tried (and failed at) more than a decade ago," writes CNN. (And Meta and Amazon have also previously tried releasing glasses with cameras, speakers and voice assistants.)

Yet this week Snap announced that "it's building AI-equipped eyewear to be released in 2026."

Why the "renewed buzz"? CNN sees two factors:

- Smartphones "are no longer exciting enough to entice users to upgrade often."
- "A desire to capitalize on AI by building new hardware around it." Advancements in AI could make them far more useful than the first time around. Emerging AI models can process images, video and speech simultaneously, answer complicated requests and respond conversationally... And market research indicates the interest will be there this time. The smart glasses market is estimated to grow from 3.3 million units shipped in 2024 to nearly 13 million by 2026, according to ABI Research. The International Data Corporation projects the market for smart glasses like those made by Meta will grow from 8.8 in 2025 to nearly 14 million in 2026....

Apple is also said to be working on smart glasses to be released next year that would compete directly with Meta's, according to Bloomberg. In a February CNN interview, Amazon's head of devices and services Panos Panay also didn't rule out the possibility of camera-equipped Alexa glasses similar to those offered by Meta. "But I think you can imagine, there's going to be a whole slew of AI devices that are coming," he said.

More than two million Ray-Ban Meta AI glasses have been sold since their launch in 2023, the article points out. But besides privacy concerns, "Perhaps the biggest challenge will be convincing consumers that they need yet another tech device in their life, particularly those who don't need prescription glasses. The products need to be worth wearing on people's faces all day."

But still, "Many in the industry believe that the smartphone will eventually be replaced by glasses or something similar to it," says Jitesh Ubrani, a research manager covering wearable devices for market research firm IDC.

"It's not going to happen today. It's going to happen many years from now, and all these companies want to make sure that they're not going to miss out on that change."
Transportation

Smart Tires Will Report On the Health of Roads In New Pilot Program (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Do you remember the Pirelli Cyber Tire? No, it's not an angular nightmare clad in stainless steel. Rather, it's a sensor-equipped tire that can inform the car it's fitted to what's happening, both with the tire itself and the road it's passing over. The technology has slowly been making its way into the real world, starting with rarified stuff like the McLaren Artura. Now, Pirelli is going to put some Cyber Tires to work for everybody, not just supercar drivers, in a new pilot program with the regional government of Apulia in Italy.

The Cyber Tire has a sensor to monitor temperature and pressure, using Bluetooth Low Energy to communicate with the car. The electronics are able to withstand more than 3,500 G as part of life on the road, and a 0.3-oz (10 g) battery keeps everything running for the life of the tire. The idea was to develop a better tire pressure monitoring system, one that could tell the car exactly what kind of tire -- summer, winter, all-season, and so on -- was fitted, and even its state of wear, allowing the car to adapt its settings appropriately. But other applications suggested themselves -- at a recent CES, Pirelli showed how a Cyber Tire could warn other road users about aquaplaning. Then again, we've been waiting more than a decade for vehicle-to-vehicle communication to make a difference in daily driving to no avail.
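For a sense of what reading such a sensor could look like on the receiving end, here is a hedged CoreBluetooth sketch. Pirelli has not published its GATT profile, so the service and characteristic UUIDs and the two-byte pressure encoding below are invented for illustration:

```swift
import CoreBluetooth

// Hypothetical identifiers; the real Cyber Tire profile is not public.
let tireService = CBUUID(string: "FFF0")
let pressureCharacteristic = CBUUID(string: "FFF1")

final class TireScanner: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private lazy var central = CBCentralManager(delegate: self, queue: nil)
    private var tire: CBPeripheral?

    func start() { _ = central }   // instantiating the manager starts state updates

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [tireService])
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        tire = peripheral            // retain it, or the connection is dropped
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([tireService])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        guard let service = peripheral.services?.first else { return }
        peripheral.discoverCharacteristics([pressureCharacteristic], for: service)
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService, error: Error?) {
        guard let characteristic = service.characteristics?.first else { return }
        peripheral.readValue(for: characteristic)
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
        // Assumed encoding: little-endian UInt16, tenths of a kilopascal.
        guard let data = characteristic.value, data.count >= 2 else { return }
        let raw = UInt16(data[0]) | (UInt16(data[1]) << 8)
        print("tire pressure: \(Double(raw) / 10) kPa")
    }
}
```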

Apulia's program does not rely on crowdsourcing data from Cyber Tires fitted to private vehicles. Regardless of the privacy implications, the rubber isn't nearly in widespread enough use for there to be a sufficient population of Cyber Tire-shod cars in the region. Instead, Pirelli will fit the tires to a fleet of vehicles supplied by the fleet management and rental company Ayvens. Driving around, the sensors in the tires will be able to infer how rough or irregular the asphalt is, via some clever algorithms. That's only one part of it, however. Pirelli and Apulia are also combining input from the tires with data from a network of road cameras and some technology from the Swedish startup Univrses. As you might expect, this data is combined in the cloud, and dashboards are available to enable end users to explore the data.
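Pirelli hasn't said what those "clever algorithms" are, but the general shape of the problem is familiar: treat the tire's vibration as a proxy for surface quality. As a purely illustrative sketch, the function below scores a window of vertical-acceleration samples by the RMS of their high-pass-filtered values, so gravity and slow body motion are ignored while sharp, irregular excitation (rough asphalt, potholes) scores high:

```swift
import Foundation

// Illustrative roughness metric, not Pirelli's algorithm: RMS of
// high-pass-filtered vertical acceleration over a sample window.
func roughnessIndex(samples: [Double], dt: Double, cutoffHz: Double = 2.0) -> Double {
    guard samples.count > 1 else { return 0 }
    // One-pole high-pass filter removes gravity and slow chassis motion.
    let rc = 1.0 / (2.0 * .pi * cutoffHz)
    let alpha = rc / (rc + dt)
    var prevIn = samples[0], prevOut = 0.0, sumSquares = 0.0
    for x in samples.dropFirst() {
        let y = alpha * (prevOut + x - prevIn)
        sumSquares += y * y
        prevIn = x
        prevOut = y
    }
    // Higher RMS of the residual vibration suggests rougher asphalt.
    return (sumSquares / Double(samples.count - 1)).squareRoot()
}
```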

AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say (404media.co) 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Facebook

The Meta AI App Is a Privacy Disaster (techcrunch.com) 20

Meta's standalone AI app is broadcasting users' supposedly private conversations with the chatbot to the public, creating what could amount to a widespread privacy breach. Users appear largely unaware that hitting the app's share button publishes their text exchanges, audio recordings, and images for anyone to see.

The exposed conversations reveal sensitive information: people asking for help with tax evasion, whether family members might face arrest for proximity to white-collar crimes, and requests to write character reference letters that include real names of individuals facing legal troubles. Meta provides no clear indication of privacy settings during posting, and if users log in through Instagram accounts set to public, their AI searches become equally visible.
Privacy

Researchers Confirm Two Journalists Were Hacked With Paragon Spyware (techcrunch.com) 28

An anonymous reader quotes a report from TechCrunch: Two European journalists were hacked using government spyware made by Israeli surveillance tech provider Paragon, new research has confirmed. On Thursday, digital rights group The Citizen Lab published a new report detailing the results of a new forensic investigation into the iPhones of Italian journalist Ciro Pellegrino and an unnamed "prominent" European journalist. The researchers said both journalists were hacked by the same Paragon customer, based on evidence found on the two journalists' devices.

Until now, there was no evidence that Pellegrino, who works for online news website Fanpage, had been either targeted or hacked with Paragon spyware. When he was alerted by Apple at the end of April, the notification referred to a mercenary spyware attack, but did not specifically mention Paragon, nor whether his phone had been infected with the spyware. The confirmation of the first-ever known Paragon infections further deepens an ongoing spyware scandal that, for now, appears to be mostly focused on the use of spyware by the Italian government, but could expand to include other countries in Europe.

These new revelations come months after WhatsApp first notified around 90 of its users in over two dozen countries in Europe and beyond, including journalists, that they had been targeted with Paragon spyware, known as Graphite. Among those targeted were several Italians, including Pellegrino's colleague and Fanpage director Francesco Cancellato, as well as nonprofit workers who help rescue migrants at sea. Last week, Italy's parliamentary committee known as COPASIR, which oversees the country's intelligence agencies' activities, published a report (PDF) that said it found no evidence that Cancellato was spied on. The report, which confirmed that Italy's internal and external intelligence agencies AISI and AISE were Paragon customers, made no mention of Pellegrino. The Citizen Lab's new report puts into question COPASIR's conclusions.

Security

Apple Previews New Import/Export Feature To Make Passkeys More Interoperable (arstechnica.com) 36

During this week's Worldwide Developers Conference, Apple unveiled a secure import/export feature for passkeys that addresses one of their biggest limitations: lack of interoperability across platforms and credential managers. The feature, built in collaboration with the FIDO Alliance, enables encrypted, user-initiated passkey transfers between apps and systems. Ars Technica's Dan Goodin says it "provides the strongest indication yet that passkey developers are making meaningful progress in improving usability." From the report: "People own their credentials and should have the flexibility to manage them where they choose," the narrator of the Apple video says. "This gives people more control over their data and the choice of which credential manager they use." The transfer feature, which will also work with passwords and verification codes, provides an industry-standard means for apps and OSes to more securely sync these credentials.

As the video explains: "This new process is fundamentally different and more secure than traditional credential export methods, which often involve exporting an unencrypted CSV or JSON file, then manually importing it into another app. The transfer process is user initiated, occurs directly between participating credential manager apps and is secured by local authentication like Face ID. This transfer uses a data schema that was built in collaboration with the members of the FIDO Alliance. It standardizes the data format for passkeys, passwords, verification codes, and more data types. The system provides a secure mechanism to move the data between apps. No insecure files are created on disk, eliminating the risk of credential leaks from exported files. It's a modern, secure way to move credentials."
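Apple's video describes the schema only at a high level, so as a purely hypothetical sketch, a transferred item might be modeled along these lines in Swift. Every type and field name below is an assumption for illustration, not the FIDO Alliance's actual format:

```swift
import Foundation

// Hypothetical shape of one item in a credential-exchange payload.
// The real schema is defined by the FIDO Alliance and Apple; these
// names are illustrative only.
struct CredentialExchangeItem: Codable {
    enum Kind: String, Codable { case passkey, password, verificationCode }
    let kind: Kind
    let relyingParty: String   // e.g. "example.com"
    let username: String
    let createdAt: Date
    // Secret material travels only through the OS-mediated transfer,
    // gated by local authentication; no plaintext file is written to disk.
}
```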

AI

Barbie Goes AI As Mattel Teams With OpenAI To Reinvent Playtime (nerds.xyz) 62

BrianFagioli writes: Barbie is getting a brain upgrade. Mattel has officially partnered with OpenAI in a move that brings artificial intelligence to the toy aisle. Yes, you read that right, folks. Barbie might soon be chatting with your kids in full sentences, powered by ChatGPT.

This collaboration brings OpenAI's advanced tools into Mattel's ecosystem of toys and entertainment brands. The goal? To launch AI-powered experiences that are fun, safe, and age-appropriate. Mattel says it wants to keep things magical while also respecting privacy and security. Basically, Barbie won't be data-mining your kids... yet.

China

More Than a Dozen VPN Apps Have Undisclosed Ties To China (thehill.com) 71

More than a dozen private browsing apps on Apple and Google's app stores have undisclosed ties to Chinese companies, leaving user data at risk of exposure to the Chinese government, according to a new report from the Tech Transparency Project. From a report: Thirteen virtual private network (VPN) apps on Apple's App Store and 11 apps on Google's Play Store have ties to Chinese companies, the tech watchdog group said in the report released Thursday.

Chinese law requires Chinese companies to share data with the government upon request, creating privacy and security risks for American users. Several of the apps, including two on both app stores and two others on Google Play Store, have ties to Chinese cybersecurity firm Qihoo 360, which has been sanctioned by the U.S. government, according to the report. The Tech Transparency Project previously identified more than 20 VPN apps on Apple's App Store with Chinese ties in an April report. The iPhone maker has since removed three apps linked to Qihoo 360.

OS X

Apple Quietly Launches Container On GitHub To Bring Linux Development To macOS (nerds.xyz) 60

BrianFagioli shares a report from NERDS.xyz: Apple has released a new developer tool on GitHub called Container, offering a fresh approach to running Linux containers directly on macOS. Unlike Docker or Podman, this tool is designed to feel at home in the Apple ecosystem and hooks into frameworks already built into the operating system. Container runs standard OCI images, but it doesn't use a single shared Linux VM. Instead, it creates a small Linux virtual machine for every container you spin up. That sounds heavy at first, but the VMs are lightweight and boot quickly. Each one is isolated, which Apple claims improves both security and privacy. Developers can run containerized workloads locally with native macOS support and without needing to install third-party container platforms.
Biotech

23andMe Says 15% of Customers Asked To Delete Their Genetic Data Since Bankruptcy (techcrunch.com) 36

Since filing for bankruptcy in March, 23andMe has received data deletion requests from 1.9 million users -- around 15% of its customer base. That number was revealed by 23andMe's interim chief executive Joseph Selsavage during a House Oversight Committee hearing, during which lawmakers scrutinized the company's sale following an earlier bankruptcy auction. "The bankruptcy sparked concerns that the data of millions of Americans who used 23andMe could end up in the hands of an unscrupulous buyer, prompting customers to ask the company to delete their data," adds TechCrunch. From the report: Pharmaceutical giant Regeneron won the court-approved auction in May, offering $256 million for 23andMe and its banks of customers' DNA and genetic data. Regeneron said it would use the 23andMe data to aid the discovery of new drugs, and committed to maintain 23andMe's privacy practices. Truly deleting your personal genetic information from the DNA testing company is easier said than done. But if you were a 23andMe customer and are interested, MIT Technology Review outlines the steps you can take.
Encryption

WhatsApp Moves To Support Apple Against UK Government's Data Access Demands (bbc.com) 8

WhatsApp has applied to submit evidence in Apple's legal battle against the UK Home Office over government demands for access to encrypted user data. The messaging platform's boss Will Cathcart told the BBC the case "could set a dangerous precedent" by "emboldening other nations" to seek to break encryption protections.

The confrontation began when Apple received a secret Technical Capability Notice from the Home Office earlier this year demanding the right to access data from its global customers for national security purposes. Apple responded by first pulling its Advanced Data Protection system from the UK, then taking the government to court to overturn the request.

Cathcart said WhatsApp "would challenge any law or government request that seeks to weaken the encryption of our services." US Director of National Intelligence Tulsi Gabbard has called the UK's demands an "egregious violation" of American citizens' privacy rights.
The Internet

40,000 IoT Cameras Worldwide Stream Secrets To Anyone With a Browser (theregister.com) 21

Connor Jones reports via The Register: Security researchers managed to access the live feeds of 40,000 internet-connected cameras worldwide and they may have only scratched the surface of what's possible. Supporting the bulletin issued by the Department of Homeland Security (DHS) earlier this year, which warned of exposed cameras potentially being used in Chinese espionage campaigns, the team at Bitsight was able to tap into feeds of sensitive locations. The US was the most affected region, with around 14,000 of the total feeds streaming from the country, allowing access to the inside of datacenters, healthcare facilities, factories, and more. Bitsight said these feeds could potentially be used for espionage, mapping blind spots, and gleaning trade secrets, among other things.

Aside from the potential national security implications, cameras were also accessed in hotels, gyms, construction sites, retail premises, and residential areas, which the researchers said could prove useful for petty criminals. Monitoring the typical patterns of activity in retail stores, for example, could inform robberies, while monitoring residences could be used for similar purposes, especially considering the privacy implications.

"It should be obvious to everyone that leaving a camera exposed on the internet is a bad idea, and yet thousands of them are still accessible," said Bitsight in a report. "Some don't even require sophisticated hacking techniques or special tools to access their live footage in unintended ways. In many cases, all it takes is opening a web browser and navigating to the exposed camera's interface."

HTTP-based cameras accounted for 78.5 percent of the total 40,000 sample, while RTSP feeds were comparatively less open, accounting for only 21.5 percent.

To protect yourself or your company, Bitsight says you should secure your surveillance cameras by changing default passwords, disabling unnecessary remote access, updating firmware, and restricting access with VPNs or firewalls. Regularly monitoring for unusual activity also helps to prevent your footage from being exposed online.
Android

Android 16 Is Here (blog.google) 23

An anonymous reader shares a blog post from Google: Today, we're bringing you Android 16, rolling out first to supported Pixel devices with more phone brands to come later this year. This is the earliest Android has launched a major release in the last few years, which ensures you get the latest updates as soon as possible on your devices. Android 16 lays the foundation for our new Material 3 Expressive design, with features that make Android more accessible and easy to use.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.
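Apple's "three lines of code" claim matches the minimal example shown at WWDC 2025: create a session and ask it to respond. A sketch follows; the API names are as demonstrated onstage, but details may shift before general release:

```swift
import FoundationModels

// Minimal on-device generation with the Foundation Models framework,
// per Apple's WWDC 2025 demo. The quiz prompt mirrors Federighi's
// Kahoot example; `notes` is placeholder input.
let notes = "Photosynthesis converts light, water, and CO2 into glucose."
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Create a three-question quiz from these notes: \(notes)"
)
print(response.content)
```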

Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com) 17

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to peoples' personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.

Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write up, brutecat then barrages Google with guesses of the phone number until getting a hit.

Facebook

Mozilla Criticizes Meta's 'Invasive' Feed of Users' AI Prompts, Demands Its Shutdown (mozillafoundation.org) 37

In late April Meta introduced its Meta AI app, which included something called a Discover feed. ("You can see the best prompts people are sharing, or remix them to make them your own.")

But while Meta insisted "you're in control: nothing is shared to your feed unless you choose to post it" — just two days later Business Insider noticed that "clearly, some people don't realize they're sharing personal stuff."

To be clear, your AI chats are not public by default — you have to choose to share them individually by tapping a share button. Even so, I get the sense that some people don't really understand what they're sharing, or what's going on.

Like the woman with the sick pet turtle. Or another person who was asking for advice about what legal measures he could take against his former employer after getting laid off. Or a woman asking about the effects of folic acid for a woman in her 60s who has already gone through menopause. Or someone asking for help with their Blue Cross health insurance bill... Perhaps these people knew they were sharing on a public feed and wanted to do so. Perhaps not. This leaves us with an obvious question: What's the point of this, anyway? Even if you put aside the potential accidental oversharing, what's the point of seeing a feed of people's AI prompts at all?

Now Mozilla has issued their own warning. "Meta is quietly turning private AI chats into public content," warns a new post this week from the Mozilla Foundation, "and too many people don't realize it's happening." That's why the Mozilla community is demanding that Meta:

- Shut down the Discover feed until real privacy protections are in place.

- Make all AI interactions private by default with no public sharing option unless explicitly enabled through informed consent.

- Provide full transparency about how many users have unknowingly shared private information.

- Create a universal, easy-to-use opt-out system for all Meta platforms that prevents user data from being used for AI training.

- Notify all users whose conversations may have been made public, and allow them to delete their content permanently.

Meta is blurring the line between private and public — and it's happening at the cost of our privacy. People have the right to know when they're speaking in public, especially when they believe they're speaking in private.

If you agree, add your name to demand Meta shut down its invasive AI feed — and guarantee that no private conversations are made public without clear, explicit, and informed opt-in consent.

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

Government

ACLU Accuses California Local Government's Drones of 'Runaway Spying Operation' (sfgate.com) 79

An anonymous reader shared this report from SFGate about a lawsuit alleging a "warrantless drone surveillance program" that's "trampling residents' right to privacy": Sonoma County has been accused of deploying hundreds of drone flights over residents in a "runaway spying operation"... according to a lawsuit filed Wednesday by the American Civil Liberties Union. The North Bay county of Sonoma initially started the 6-year-old drone program to track illegal cannabis cultivation, but the lawsuit alleges that officials have since turned it into a widespread program to catch unrelated code violations at residential properties and levy millions of dollars in fines. The program has captured 5,600 images during more than 700 flights, the lawsuit said...

Matt Cagle, a senior staff attorney with the ACLU Foundation of Northern California, said in a Wednesday news release that the county "has hidden these unlawful searches from the people they have spied on, the community, and the media...." The lawsuit says the county employees used the drones to spy on private homes without first receiving a warrant, including photographing private areas like hot tubs and outdoor baths, and through curtainless windows.

One plaintiff "said the county secretly used the drone program to photograph her Sonoma County horse stable and issue code violations," according to the article. She only discovered the use of the drones after a county employee mentioned they had photos of her property, according to the lawsuit. She then filed a public records request for the images, which left her "stunned" after seeing that the county employees were monitoring her private property including photographing her outdoor bathtub and shower, the lawsuit said.
Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices.

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.
