Privacy

Chinese Authorities Are Using a New Tool To Hack Seized Phones and Extract Data (techcrunch.com) 35

An anonymous reader quotes a report from TechCrunch: Security researchers say Chinese authorities are using a new type of malware to extract data from seized phones, allowing them to obtain text messages -- including from chat apps such as Signal -- images, location histories, audio recordings, contacts, and more. In a report shared exclusively with TechCrunch, mobile cybersecurity company Lookout detailed the hacking tool called Massistant, which the company said was developed by Chinese tech giant Xiamen Meiya Pico.

Massistant, according to Lookout, is Android software used for the forensic extraction of data from mobile phones, meaning the authorities using it need to have physical access to those devices. While Lookout doesn't know for sure which Chinese police agencies are using the tool, its use is assumed widespread, which means Chinese residents, as well as travelers to China, should be aware of the tool's existence and the risks it poses. [...]

The good news ... is that Massistant leaves evidence of its compromise on the seized device, meaning users can potentially identify and delete the malware, either because the hacking tool appears as an app, or because it can be found and deleted using more sophisticated tools such as the Android Debug Bridge, a command line tool that lets a user connect to a device through their computer. The bad news is that by the time Massistant is installed, the damage is done, and authorities already have the person's data.
"It's a big concern. I think anybody who's traveling in the region needs to be aware that the device that they bring into the country could very well be confiscated and anything that's on it could be collected," said Kristina Balaam, a researcher at Lookout who analyzed the malware. "I think it's something everybody should be aware of if they're traveling in the region."
AI

WeTransfer Backtracks on Terms Suggesting User Files Could Train AI Models After Backlash (theguardian.com) 10

WeTransfer has reversed controversial terms of service changes after users protested language suggesting uploaded files could be used to "improve machine learning models."

The file-sharing service, popular among creative professionals and used by 80 million users across 190 countries, clarified that user content had never been used to train AI models and removed all references to machine learning from its updated terms. Creative users including voice actors, filmmakers, and journalists had threatened to cancel subscriptions over the changes.
United Kingdom

Thousands of Afghans Secretly Moved To Britain After Data Leak (reuters.com) 75

The UK secretly relocated thousands of Afghans to Britain after their personal details were disclosed in one of the country's worst ever data breaches, putting them at risk of Taliban retaliation. The operation cost around $2.7 billion and remained under a court-imposed superinjunction until it was recently lifted. Reuters reports: The leak by the Ministry of Defence in early 2022, which led to data being published on Facebook the following year, and the secret relocation program were subject to a so-called superinjunction preventing the media reporting what happened, which was lifted on Tuesday by a court. British defence minister John Healey apologised for the leak, which included details about members of parliament and senior military officers who supported applications to help Afghan soldiers who worked with the British military and their families relocate to the UK. "This serious data incident should never have happened," Healey told lawmakers in the House of Commons. "It may have occurred three years ago under the previous government, but to all whose data was compromised I offer a sincere apology."

The incident ranks among the worst security breaches in modern British history because of the cost and risk posed to the lives of thousands of Afghans, some of whom fought alongside British forces until their chaotic withdrawal in 2021. Healey said about 4,500 Afghans and their family members have been relocated or were on their way to Britain under the previously secret scheme. But he added that no-one else from Afghanistan would be offered asylum because of the data leak, citing a government review which found little evidence of intent from the Taliban to seek retribution against former officials.

AI

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People (404media.co) 31

An anonymous reader shares a report: Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform that 404 Media reporting has shown was used to create nonconsensual pornography, until Civitai banned them due to pressure from payment processors.

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded them "a short period of time" before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new "real people" model policy.

Security

Qantas Confirms Data Breach Impacts 5.7 Million Customers (bleepingcomputer.com) 4

Qantas has confirmed that 5.7 million customers have been impacted by a recent data breach through a third-party platform used by its contact center. The breach, attributed to the Scattered Spider threat group, exposed various personal details but did not include passwords, financial, or passport data. BleepingComputer reports: In a new update today, Qantas has confirmed that the threat actors stole data for approximately 5.7 million customers, with varying types of data exposed in the breach:

4 million customer records are limited to name, email address and Qantas Frequent Flyer details. Of this:
- 1.2 million customer records contained name and email address.
- 2.8 million customer records contained name, email address and Qantas Frequent Flyer number. The majority of these also had tier included. A smaller subset of these had points balance and status credits included.

Of the remaining 1.7 million customers, their records included a combination of some of the data fields above and one or more of the following:
- Address - 1.3 million. This is a combination of residential addresses and business addresses including hotels for misplaced baggage delivery.
- Date of birth - 1.1 million
- Phone number (mobile, landline and/or business) - 900,000
- Gender - 400,000. This is separate to other gender identifiers like name and salutation.
- Meal preferences - 10,000

The Courts

German Court Rules Meta Tracking Tech Violates EU Privacy Laws (therecord.media) 14

An anonymous reader quotes a report from The Record: A German court has ruled that Meta must pay $5,900 to a German Facebook user who sued the platform for embedding tracking technology in third-party websites -- a ruling that could open the door to large fines down the road over data privacy violations relating to pixels and similar tools. The Regional Court of Leipzig in Germany ruled Friday that Meta tracking pixels and software development kits embedded in countless websites and apps collect users' data without their consent and violate the continent's General Data Protection Regulation (GDPR).

The ruling in favor of the plaintiff sets a precedent which the court acknowledged will allow countless other users to sue without "explicitly demonstrating individual damages," according to a Leipzig Regional Court press release. "Every user is individually identifiable to Meta at all times as soon as they visit the third-party websites or use an app, even if they have not logged in via the Instagram and Facebook account," the press release said.
"This may very well be one of the most substantial rulings coming out of Europe this year," said Ronni K. Gothard Christiansen, the CEO of AesirX, a consultancy which helps businesses comply with data privacy laws. "$5,900 in damages for one visitor adds up quickly if you have tens of thousands of visitors, or even millions."
Privacy

Swedish Bodyguards Reveal Prime Minister's Location on Fitness App (politico.eu) 18

Swedish security service members who shared details of their running and cycling routes on fitness app Strava have been accused of revealing details of the prime minister's location, including his private address. Politico: According to Swedish daily Dagens Nyheter, on at least 35 occasions bodyguards uploaded their workouts to the training app and revealed information linked to Prime Minister Ulf Kristersson, including where he goes running, details of overnight trips abroad, and the location of his private home, which is supposed to be secret.
Security

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. By being decentralized, Bitchat has potential for being a secure app in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security.

Those claims that the app is secure, however, are already facing scrutiny from security researchers, given that the app and its code have not been reviewed or tested for security issues at all -- by Dorsey's own admission. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there at the time the app debuted.

As of Wednesday, Dorsey added: "Work in progress," next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon. The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.
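The flaw Radocea describes is a failure of identity binding: the app accepts a peer id and identity key pair without checking it against what was seen before. A common mitigation is trust-on-first-use (TOFU) key pinning, sketched minimally below in Python. This is a hypothetical illustration, not Bitchat's actual code; the class and method names are invented for the example.

```python
import hashlib

class PeerStore:
    """Toy trust-on-first-use (TOFU) store: pin the identity key seen
    first for each peer id, and flag any later mismatch as a possible
    impersonation attempt."""

    def __init__(self):
        self._pinned = {}  # peer_id -> fingerprint of pinned identity key

    @staticmethod
    def fingerprint(identity_key: bytes) -> str:
        # Short hash of the key; enough to detect a swapped key.
        return hashlib.sha256(identity_key).hexdigest()[:16]

    def check(self, peer_id: str, identity_key: bytes) -> str:
        fp = self.fingerprint(identity_key)
        if peer_id not in self._pinned:
            self._pinned[peer_id] = fp   # first contact: pin the key
            return "pinned"
        if self._pinned[peer_id] == fp:
            return "verified"            # same key as before
        return "MISMATCH"                # key changed: possible impersonation

store = PeerStore()
alice_key = b"alice-public-key"
print(store.check("alice", alice_key))        # pinned
print(store.check("alice", alice_key))        # verified
print(store.check("alice", b"attacker-key"))  # MISMATCH
```

The point of the sketch is that a "Favorite" star is only meaningful if a key change after first contact is surfaced to the user rather than silently accepted.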

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers (wired.com) 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was only accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."
The Courts

Fubo Pays $3.4 Million To Settle Claims It Illegally Shared User Data With Advertisers (arstechnica.com) 9

Fubo has agreed to pay $3.4 million to settle a class-action lawsuit (PDF) accusing it of illegally sharing users' personally identifiable information and video viewing history with advertisers without consent, allegedly violating the Video Privacy Protection Act (VPPA). Ars Technica reports: As reported by Cord Cutters News this week, instead of going to trial, Fubo reached a settlement agreement [PDF] that allows people who used Fubo before May 29, which is when Fubo last updated its privacy policy, to receive part of a $3.4 million settlement. The settlement agreement received preliminary approval on May 29, and users recently started receiving notice of their potential entitlement to some of the settlement. They have until September 12 to submit claims. Fubo said in a statement: "We deny the allegations in the putative class lawsuit and specifically deny that we have engaged in any wrongdoing whatsoever. Fubo has nonetheless chosen to pursue a settlement for this matter in order to avoid the uncertainty and expense of litigation. We look forward to putting this matter behind us."
Open Source

The Open-Source Software Saving the Internet From AI Bot Scrapers (404media.co) 33

An anonymous reader quotes a report from 404 Media: For someone who says she is fighting AI bot scrapers just in her free time, Xe Iaso seems to be putting up an impressive fight. Since she launched it in January, Anubis, a program "designed to help protect the small internet from the endless storm of requests that flood in from AI companies," has been downloaded nearly 200,000 times, and is being used by notable organizations including GNOME, the popular open-source desktop environment for Linux; FFmpeg, the open-source software project for handling video and other media; and UNESCO, the United Nations organization for education, science, and culture. [...]

"Anubis is an uncaptcha," Iaso explains on her site. "It uses features of your browser to automate a lot of the work that a CAPTCHA would, and right now the main implementation is by having it run a bunch of cryptographic math with JavaScript to prove that you can run JavaScript in a way that can be validated on the server." Essentially, Anubis verifies that any visitor to a site is a human using a browser as opposed to a bot. One of the ways it does this is by making the browser do a type of cryptographic math with JavaScript or other subtle checks that browsers do by default but bots have to be explicitly programmed to do. This check is invisible to the user, and most browsers since 2022 are able to complete this test. In theory, bot scrapers could pretend to be users with browsers as well, but the additional computational cost of doing so on the scale of scraping the entire internet would be huge. This way, Anubis creates a computational cost that is prohibitively expensive for AI scrapers that are hitting millions and millions of sites, but marginal for an individual user who is just using the internet like a human.

Anubis is free, open source, lightweight, can be self-hosted, and can be implemented almost anywhere. It also appears to be a pretty good solution for what we've repeatedly reported is a widespread problem across the internet, which helps explain its popularity. But Iaso is still putting a lot of work into improving it and adding features. She told me she's working on a non-cryptographic challenge so it taxes users' CPUs less, and is also thinking about a version that doesn't require JavaScript, which some privacy-minded users disable in their browsers. The biggest challenge in developing Anubis, Iaso said, is finding the balance. "The balance between figuring out how to block things without people being blocked, without affecting too many people with false positives," she said. "And also making sure that the people running the bots can't figure out what pattern they're hitting, while also letting people that are caught in the web be able to figure out what pattern they're hitting, so that they can contact the organization and get help. So that's like, you know, the standard, impossible scenario."

Wireless Networking

Jack Dorsey Launches a WhatsApp Messaging Rival Built On Bluetooth (cnbc.com) 66

Jack Dorsey has launched Bitchat, a decentralized, peer-to-peer messaging app that uses Bluetooth mesh networks for encrypted, ephemeral chats without requiring accounts, servers, or internet access. The beta version is live on TestFlight, with a full white paper available on GitHub. CNBC reports: In a post on X Sunday, Dorsey called it a personal experiment in "bluetooth mesh networks, relays and store and forward models, message encryption models, and a few other things."

Bitchat enables ephemeral, encrypted communication between nearby devices. As users move through physical space, their phones form local Bluetooth clusters and pass messages from device to device, allowing them to reach peers beyond standard range -- even without Wi-Fi or cell service. Certain "bridge" devices connect overlapping clusters, expanding the mesh across greater distances. Messages are stored only on device, disappear by default and never touch centralized infrastructure -- echoing Dorsey's long-running push for privacy-preserving, censorship-resistant communication.

Like the Bluetooth-based apps used during Hong Kong's 2019 protests, Bitchat is designed to keep working even when the internet is blocked, offering a censorship-resistant way to stay connected during outages, shutdowns or surveillance. The app also supports optional group chats, or "rooms," which can be named with hashtags and protected by passwords. It includes store and forward functionality to deliver messages to users who are temporarily offline. A future update will add WiFi Direct to increase speed and range, pushing Dorsey's vision for off-grid, user-owned communication even further.
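The relay behavior described above -- flooding messages through nearby devices with de-duplication, a hop limit, and store-and-forward for offline peers -- can be sketched in a few dozen lines. This is a toy model of the general mesh pattern, not Bitchat's actual protocol; all class and method names are invented for the example, and real implementations add encryption and radio handling.

```python
from collections import defaultdict

class Node:
    """Toy mesh node: flood messages to in-range neighbors with a hop
    limit (TTL), de-duplicate by message id, and hold messages for
    absent recipients until they reappear (store and forward)."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []            # nodes currently in radio range
        self.seen = set()              # message ids already handled
        self.inbox = []
        self.held = defaultdict(list)  # recipient -> queued messages

    def receive(self, msg_id, recipient, payload, ttl):
        if msg_id in self.seen or ttl <= 0:
            return                     # drop duplicates and expired hops
        self.seen.add(msg_id)
        if recipient == self.name:
            self.inbox.append(payload)
            return
        if recipient not in {n.name for n in self.neighbors}:
            self.held[recipient].append((msg_id, payload, ttl - 1))  # store
        for n in self.neighbors:
            n.receive(msg_id, recipient, payload, ttl - 1)           # forward

    def peer_returned(self, peer):
        """A previously offline peer came back into range: flush its queue."""
        for msg_id, payload, ttl in self.held.pop(peer.name, []):
            peer.receive(msg_id, peer.name, payload, ttl)

# A and B are in range of each other; C is initially offline, so B
# holds the message addressed to C and delivers it when C returns.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors = [b]
b.neighbors = [a]
a.receive("m1", "C", "hello", ttl=4)
print(c.inbox)        # [] -- C is offline
b.peer_returned(c)
print(c.inbox)        # ['hello']
```

The `seen` set is what keeps flooding from looping forever, and it also de-duplicates delivery if several nodes held the same message for the returning peer.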

Android

Data Breach Reveals Catwatchful 'Stalkerware' Is Spying On Thousands of Phones (techcrunch.com) 17

An anonymous reader quotes a report from TechCrunch: A security vulnerability in a stealthy Android spyware operation called Catwatchful has exposed thousands of its customers, including its administrator. The bug, which was discovered by security researcher Eric Daigle, spilled the spyware app's full database of email addresses and plaintext passwords that Catwatchful customers use to access the data stolen from the phones of their victims. [...] According to a copy of the database from early June, which TechCrunch has seen, Catwatchful had email addresses and passwords on more than 62,000 customers and the phone data from 26,000 victims' devices.

Most of the compromised devices were located in Mexico, Colombia, India, Peru, Argentina, Ecuador, and Bolivia (in order of the number of victims). Some of the records date back to 2018, the data shows. The Catwatchful database also revealed the identity of the spyware operation's administrator, Omar Soca Charcov, a developer based in Uruguay. Charcov opened our emails, but did not respond to our requests for comment sent in both English and Spanish. TechCrunch asked if he was aware of the Catwatchful data breach, and if he plans to disclose the incident to its customers. Without any clear indication that Charcov will disclose the incident, TechCrunch provided a copy of the Catwatchful database to data breach notification service Have I Been Pwned.
The stalkerware operation uses a custom API and Google's Firebase to collect and store victims' stolen data, including photos and audio recordings. According to Daigle, the API was left unauthenticated, exposing sensitive user data such as email addresses and passwords.

The hosting provider temporarily suspended the spyware after TechCrunch disclosed this vulnerability, but it later returned, hosted on HostGator. Despite being notified, Google has yet to take down the Firebase instance, but it updated Google Play Protect to detect Catwatchful.

While Catwatchful claims it "cannot be uninstalled," you can dial "543210" and press the call button on your Android phone to reveal the hidden app. As for its removal, TechCrunch has a general how-to guide for removing Android spyware that could be helpful.
Privacy

NYT To Start Searching Deleted ChatGPT Logs After Beating OpenAI In Court (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica: Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs "indefinitely," including deleted and temporary chats. But Sidney Stein, the US district judge reviewing OpenAI's request, immediately denied OpenAI's objections. He was seemingly unmoved by the company's claims that the order forced OpenAI to abandon "long-standing privacy norms" and weaken privacy protections that users expect based on ChatGPT's terms of service. Rather, Stein suggested that OpenAI's user agreement specified that their data could be retained as part of a legal process, which Stein said is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content. A spokesperson told Ars that OpenAI plans to "keep fighting" the order, but the ChatGPT maker seems to have few options left. They could possibly petition the Second Circuit Court of Appeals for a rarely granted emergency order that could intervene to block Wang's order, but the appeals court would have to consider Wang's order an extraordinary abuse of discretion for OpenAI to win that fight.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted. And that possibility puts OpenAI in the difficult position of having to choose between either caving to some data collection to stop retaining data as soon as possible or prolonging the fight over the order and potentially putting more users' private conversations at risk of exposure through litigation or, worse, a data breach. [...]

Both sides are negotiating the exact process for searching through the chat logs, with both parties seemingly hoping to minimize the amount of time the chat logs will be preserved. For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs. But for news plaintiffs, accessing the logs is not considered key to their case -- perhaps providing additional examples of copying -- but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against the fair use argument, as a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.

Android

Google Ordered To Pay $315 Million for Taking Data From Idle Android Phones (reuters.com) 23

A California jury has ordered Google to pay $314.6 million to Android smartphone users in the state after finding the company liable for collecting data from idle devices without permission.

The San Jose jury ruled Tuesday that Google sent and received information from phones while idle, creating "mandatory and unavoidable burdens shouldered by Android device users for Google's benefit." The 2019 class action represented an estimated 14 million Californians who argued Google consumed their cellular data for targeted advertising purposes.
China

China's Giant New Gamble With Digital IDs (economist.com) 74

China will launch digital IDs for internet use on July 15th, transferring online verification from private companies to government control. Users obtain digital IDs by submitting personal information including facial scans to police via an app. A pilot program launched one year ago enrolled 6 million people.

The system currently remains voluntary, though officials and state media are pushing citizens to register for "information security." Companies will see only anonymized character strings when users log in, while police retain exclusive access to personal details. The program replaces China's existing system requiring citizens to register with companies using real names before posting comments, gaming, or making purchases.
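One way such a split can be built -- an opaque token for the company, a reversible mapping only for the authority -- is a keyed hash that derives a different pseudonym per service. The sketch below is a hypothetical illustration of that general design, not a description of China's actual system; the key name and token format are invented for the example.

```python
import hmac
import hashlib

# Hypothetical: held only by the issuing authority, never by companies.
MASTER_KEY = b"issuing-authority-secret-key"

def pseudonym(citizen_id: str, service: str) -> str:
    """Keyed hash -> opaque per-service token. A company cannot recover
    the citizen id or link the same person across services; the key
    holder can regenerate any token and match it to a citizen."""
    msg = f"{citizen_id}|{service}".encode()
    return hmac.new(MASTER_KEY, msg, hashlib.sha256).hexdigest()[:24]

t_game = pseudonym("citizen-42", "game-platform")
t_shop = pseudonym("citizen-42", "shopping-site")
assert t_game != t_shop                                  # unlinkable across services
assert t_game == pseudonym("citizen-42", "game-platform")  # stable per service
```

The privacy properties cut both ways: companies see less, but the party holding the key sees everything, which is exactly the centralization the article describes.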

Police say they punished 47,000 people last year for spreading "rumours" online. The digital ID serves a broader government strategy to centralize data control. State planners classify data as a production factor alongside labor and capital, aiming to extract information from private companies for trading through government-operated data exchanges.
Privacy

Tinder To Require Facial Recognition Check For New Users In California (axios.com) 42

An anonymous reader quotes a report from Axios: Tinder is mandating new users in California verify their profiles using facial recognition technology starting Monday, executives exclusively tell Axios. The move aims to reduce impersonation and is part of Tinder parent Match Group's broader effort to improve trust and safety amid ongoing user frustration. The Face Check feature prompts users to take a short video selfie during onboarding. The biometric face scan, powered by FaceTec, then confirms the person is real and present and whether their face matches their profile photos. It also checks if the face is used across multiple accounts. If the criteria are met, the user receives a photo verified badge on their profile. The selfie video is then deleted. Tinder stores a non-reversible, encrypted face map to detect duplicate profiles in the future.
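Duplicate detection over stored face maps generally works by comparing embedding vectors for similarity. FaceTec's face map format and matching logic are proprietary, so the following is only a generic Python sketch of the idea: a new embedding is flagged if it is too close, by cosine similarity, to any embedding already on file. The threshold and vectors are illustrative.

```python
import math

THRESHOLD = 0.95  # toy value; real systems tune this against false matches

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def is_duplicate(new_embedding, stored_embeddings):
    """Flag the new face map if it is too close to any stored one."""
    return any(cosine(new_embedding, e) >= THRESHOLD for e in stored_embeddings)

stored = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.1]]
print(is_duplicate([0.9, 0.11, 0.39], stored))   # True: near-identical vector
print(is_duplicate([0.1, 0.1, 0.95], stored))    # False: clearly different
```

Storing only the embedding, rather than the selfie video, is what lets a service claim the representation is "non-reversible": the vector supports comparison but is not designed to reconstruct the face.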

Face Check is separate from Tinder's ID Check, which uses a government-issued ID to verify age and identity. "We see this as one part of a set of identity assurance options that are available to users," Match Group's head of trust and safety Yoel Roth says. "Face Check ... is really meant to be about confirming that this person is a real, live person and not a bot or a spoofed account." "Even if in the short term, it has the effect of potentially reducing some top-line user metrics, we think it's the right thing to do for the business," said Match Group CEO Spencer Rascoff.

Businesses

Proton Joins Antitrust Lawsuit Against Apple's App Store Practices (theregister.com) 26

Encrypted communications provider Proton has joined an antitrust lawsuit against Apple, filing a legal complaint that claims the company's App Store practices harm developers, consumers, and privacy. The Switzerland-based firm joined a group of Korean developers who sued Apple in May rather than filing a separate case.

Proton asked the US District Court for Northern California to require Apple to allow alternative app stores, expose those stores through its own App Store, permit developers to disable Apple's in-app payment system, and provide full access to Apple APIs. The company added a privacy-focused argument to typical antitrust complaints, contending that Apple's pricing model particularly penalizes companies that refuse to harvest user data. Developers of free apps typically sell user data to cover costs, while privacy-focused companies like Proton must charge subscriptions for revenue, making Apple's commission cuts more burdensome.
Security

US Government Takes Down Major North Korean 'Remote IT Workers' Operation (techcrunch.com) 59

An anonymous reader quotes a report from TechCrunch: The U.S. Department of Justice announced on Monday that it had taken several enforcement actions against North Korea's money-making operations, which rely on undercover remote IT workers inside American tech companies to raise funds for the regime's nuclear weapons program, as well as to steal data and cryptocurrency. As part of the DOJ's multi-state effort, the government announced the arrest and indictment of U.S. national Zhenxing "Danny" Wang, who allegedly ran a years-long fraud scheme from New Jersey to sneak remote North Korean IT workers inside U.S. tech companies. According to the indictment, the scheme generated more than $5 million in revenue for the North Korean regime. [...]

From 2021 until 2024, the co-conspirators allegedly impersonated more than 80 U.S. individuals to get remote jobs at more than 100 American companies, causing $3 million in damages due to legal fees, data breach remediation efforts, and more. The group is said to have run laptop farms inside the United States, which the North Korean IT workers could essentially use as proxies to hide their provenance, according to the DOJ. At times, they used hardware devices known as keyboard-video-mouse (KVM) switches, which allow one person to control multiple computers from a single keyboard and mouse. The group allegedly also ran shell companies inside the U.S. to make it seem like the North Korean IT workers were affiliated with legitimate local companies, and to receive money that would then be transferred abroad, the DOJ said.

The fraudulent scheme allegedly also involved the North Korean workers stealing sensitive data, such as source code, from the companies they were working for, such as from an unnamed California-based defense contractor "that develops artificial intelligence-powered equipment and technologies."

News

VP.net Promises "Cryptographically Verifiable Privacy" (torrentfreak.com) 36

TorrentFreak spotlights VP.net, a brand-new service from Private Internet Access founder Andrew Lee (the guy who gifted Linux Journal to Slashdot) that aims to eliminate the classic "just trust your VPN" problem by locking identity-mapping and traffic-handling inside Intel SGX enclaves -- special hardware "safes" that, the company promises, deliver "cryptographically verifiable privacy," so even the provider can't track what its users are up to.

The design goal is that no one, not even the VPN company, can link "User X" to "Website Y."

Lee frames it as enabling agency over one's privacy:

"Our zero trust solution does not require you to trust us - and that's how it should be. Your privacy should be up to your choice - not up to some random VPN provider in some random foreign country."

The team behind VP.net includes CEO Matt Kim as well as early Bitcoin veterans Roger Ver and Mark Karpeles.

Ask Slashdot: Now that there's a VPN where you don't have to "just trust the provider" -- arguably the first real zero-trust VPN -- are trust-based VPNs obsolete?

Slashdot Top Deals