Government

One-Third of DHS's Border Surveillance Cameras Are Broken, Memo Says (nbcnews.com) 154

According to an internal Border Patrol memo, nearly one-third of the surveillance cameras along the U.S.-Mexico border don't work. "The nationwide issue is having significant impacts on [Border Patrol] operations," reads the memo. NBC News reports: The large-scale outage affects roughly 150 of the 500 cameras perched on surveillance towers along the U.S.-Mexico border. It was due to "several technical problems," according to the memo. Customs and Border Protection officials, who spoke on the condition of anonymity to discuss a sensitive issue, blamed outdated equipment and outstanding repair issues.

The camera systems, known as Remote Video Surveillance Systems, have been used since 2011 to "survey large areas without having to commit hundreds of agents in vehicles to perform the same function." But according to the internal memo, 30% were inoperable. It is not clear when the cameras stopped working. Two Customs and Border Protection officials said that some repairs have been made this month but that there are still over 150 outstanding requests for camera repairs. The officials said there are some areas that are not visible to Border Patrol because of broken cameras.

A Customs and Border Protection spokesperson said the agency has installed roughly 300 new towers that use more advanced technology. "CBP continues to install newer, more advanced technology that embrace artificial intelligence and machine learning to replace outdated systems, reducing the need to have agents working non-interdiction functions," the spokesperson said.
The agency points the finger at the Federal Aviation Administration (FAA). "The FAA, which services the systems and repairs the cameras, has had internal problems meeting the needs of the Border Patrol, the memo says, without elaborating on what those problems are," reports NBC News. While the FAA is sending personnel to work on the cameras, Border Patrol leaders are considering replacing the FAA with a contractor that can provide "adequate technical support for the cameras."

Further reading: U.S. Border Surveillance Towers Have Always Been Broken (EFF)
Privacy

Privacy Advocates Urge 23andMe Customers to Delete Their Data. But Can They? (sfgate.com) 45

"Some prominent privacy advocates are encouraging customers to pull their data" from 23andMe, reports SFGate.

But can you actually do that? 23andMe makes it easy to feel like you've protected your genetic footprint. In their account settings, customers can download versions of their data to a computer and choose to delete the data attached to their 23andMe profile. An email then arrives with a big pink button: "Permanently Delete All Records." Doing so, it promises, will "terminate your relationship with 23andMe and irreversibly delete your account and Personal Information."

But there's another clause in the email that conflicts with that "terminate" promise. It says 23andMe and whichever contracted genotyping laboratory worked on a customer's samples will still hold on to the customer's sex, date of birth and genetic information, even after they're "deleted." The reason? The company cites "legal obligations," including federal laboratory regulations and California lab rules. The federal program, which sets quality standards for laboratories, requires that labs hold on to patient test records for at least two years; the California rule, part of the state's Business and Professions Code, requires three. When SFGATE asked 23andMe vice president of communications Katie Watson about the retention mandates, she said 23andMe does delete the genetic data after the three-year period, where applicable...

Before it's finally deleted, the data remains 23andMe property and is held under the same rules as the company's privacy policy, Watson added. If that policy changes, customers are supposed to be informed and asked for their consent. In the meantime, a hack is unfortunately always possible. Another 23andMe spokesperson, Andy Kill, told SFGATE that [CEO Anne] Wojcicki is "committed to customers' privacy and pledges to retain the current privacy policy in force for the foreseeable future, including after the acquisition she is currently pursuing."

An Electronic Frontier Foundation privacy lawyer tells SFGate there's no information more personal than your DNA. "It is like a Social Security number, it can't be changed. But it's not just a piece of paper, it's kind of you."

He urged 23andMe to leave customers' data out of any acquisition deals, and promise customers they'd avoid takeover attempts from companies with bad security — or with ties to law enforcement.
Electronic Frontier Foundation

EFF and ACLU Urge Court to Maintain Block on Mississippi's 'Age Verification' Law (eff.org) 108

An anonymous Slashdot reader shared the EFF's "Deeplinks" blog post: EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi's HB 1126 — a bill that imposes age verification mandates on social media services across the internet. Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is "an extraordinary censorship law that violates all internet users' First Amendment rights to speak and to access protected speech" online.

HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures them to monitor and censor content on broad, vaguely defined topics — many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and "force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression." A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

At the heart of our opposition to HB 1126 is its dangerous impact on young people's free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online. "No legal authority permits lawmakers to burden adults' access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see" [argues the brief]. "Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126..."

"The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID..." And it also asks another question. "Would you want everything you do online to be linked to your government-issued ID?"

And the blog post makes one more argument: "in an era where data breaches and identity theft are alarmingly common," the bill "puts every user's personal data at risk... No one — neither minors nor adults — should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online."
Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com) 109

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recordi00ng the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with Wired:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


Patents

Patents For Software and Genetic Code Could Be Revived By Two Bills In Congress (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: The Senate Judiciary Committee is scheduled to consider two bills Thursday that would effectively nullify the Supreme Court's rulings against patents on broad software processes and human genes. Open source and Internet freedom advocates are mobilizing and pushing back. The Patent Eligibility Restoration Act (or PERA, S. 2140), sponsored by Sens. Thom Tillis (R-NC) and Chris Coons (D-Del.), would amend US Code such that "all judicial exceptions to patent eligibility are eliminated." That would include the 2014 ruling in Alice v. CLS Bank, in which the Supreme Court held, with Justice Clarence Thomas writing, that simply performing an existing process on a computer does not make it a new, patentable invention. "The relevant question is whether the claims here do more than simply instruct the practitioner to implement the abstract idea of intermediated settlement on a generic computer," Thomas wrote. "They do not." That case also drew on Bilski v. Kappos, a case in which a patent was proposed based solely on the concept of hedging against price fluctuations in commodity markets. [...]

Another wrinkle in the PERA bill involves genetic patents. The Supreme Court ruled in June 2013 that pieces of DNA that occur naturally in the genomes of humans or other organisms cannot, themselves, be patented. Myriad Genetics had previously been granted patents on genes associated with breast and ovarian cancer, BRCA1 and BRCA2, which were targeted in a lawsuit led by the American Civil Liberties Union (ACLU). The resulting Supreme Court decision -- this one also written by Thomas -- found that information that naturally occurs in the human genome could not be subject to a patent, even if the patent covered the process of isolating that information from the rest of the genome. As with broad software patents, PERA would seemingly allow for the patenting of isolated human genes and connections between those genes and diseases like cancer. [...] The Judiciary Committee is set to debate and potentially amend or rewrite (i.e., mark up) PREVAIL and PERA on Thursday.

Electronic Frontier Foundation

EFF Decries 'Brazen Land-Grab' Attempt on 900 MHz 'Commons' Frequency Used By Amateur Radio (eff.org) 145

An EFF article calls out a "brazen attempt to privatize" a wireless frequency band (900 MHz) which America's FCC left "as a commons for all... for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment." The spectrum has also become "a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, garage door openers." But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav [which sells a geolocation service] wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum... This proposal would not only give NextNav their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else's expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags off the bat. Dropping the "global" from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS. NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band. One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail. This is the type of innovation that actually addresses crises raised by NextNav, and it's happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet, this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.
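Meshtastic is a concrete example of what unlicensed access to this band enables: inexpensive LoRa radios operating in the 902-928 MHz range relay short messages peer to peer. As a minimal sketch, assuming a Meshtastic-flashed radio attached over USB and the project's Python library installed (pip install meshtastic), sending and receiving a text looks roughly like this:

    # Minimal sketch of sending and receiving text over a Meshtastic mesh.
    # Assumes a Meshtastic-flashed radio on a USB serial port; API details
    # may differ between library releases.
    import time
    import meshtastic.serial_interface
    from pubsub import pub              # the library publishes received packets via pypubsub

    def on_receive(packet, interface):
        decoded = packet.get("decoded", {})
        if decoded.get("portnum") == "TEXT_MESSAGE_APP":   # ignore telemetry, position, etc.
            print("received:", decoded.get("text"))

    pub.subscribe(on_receive, "meshtastic.receive")

    iface = meshtastic.serial_interface.SerialInterface()  # auto-detects the attached radio
    iface.sendText("hello from the 900 MHz commons")       # broadcast to every node in range
    time.sleep(30)                                         # give the mesh a moment to reply
    iface.close()

No tower or carrier sits in the middle; each node relays its neighbors' packets, which is precisely the kind of unlicensed use that an exclusive NextNav allocation would squeeze out.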

This isn't just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses. Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

NextNav's proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons.

Electronic Frontier Foundation

FTC Urged To Stop Tech Makers Downgrading Devices After You've Bought Them (theregister.com) 80

Digital rights activists want device manufacturers to disclose a "guaranteed minimum support time" for devices — and federal regulations ensuring a product's core functionality will work even after its software updates stop.

Influential groups including Consumer Reports, EFF, the Software Freedom Conservancy, iFixit, and U.S. PIRG have now signed a letter to the head of America's Consumer Protection bureau (at the Federal Trade Commission), reports The Register: In an eight-page letter to the Commission (FTC), the activists mentioned the Google/Levis collaboration on a denim jacket that contained sensors enabling it to control an Android device through a special app. When the app was discontinued in 2023, the jacket lost that functionality. The letter also mentions the "Car Thing," an automotive infotainment device created by Spotify, which the company bricked fewer than two years after launch without offering a refund...

Environmental groups and computer repair shops also signed the letter... "Consumers need a clear standard for what to expect when purchasing a connected device," stated Justin Brookman, director of technology policy at Consumer Reports and a former policy director of the FTC's Office of Technology, Research, and Investigation. "Too often, consumers are left with devices that stop functioning because companies decide to end support with little to no warning. This leaves people stranded with devices they once relied on, unable to access features or updates...."

Brookman told The Register that he believes this is the first such policy request to the FTC that asks the agency to help consumers with this dilemma. "I'm not aware of a previous effort from public interest groups to get the FTC to take action on this issue — it's still a relatively new issue with no clear established norms," he wrote in an email. "But it has certainly become an issue" that comes up more and more with device makers as they change their rules about product updates and usage.

"Both switching features to a subscription and 'bricking' a connected device purchased by a consumer in many cases are unfair and deceptive practices," the groups write, arguing that the practices "infringe on a consumer's right to own the products they buy." They're requesting clear "guidance" for manufacturers from the U.S. government. The FTC has a number of tools at its disposal to help establish standards for IoT device support. While a formal rulemaking is one possibility, the FTC also has the ability to issue more informal guidance, such as its Endorsement Guides12 and Dot Com Disclosures.13 We believe the agency should set norms...
The groups are also urging the FTC to:
  • Encourage tools and methods that enable reuse if software support ends.
  • Conduct an educational program to encourage manufacturers to build longevity into the design of their products.
  • Protect "adversarial interoperability"... when a competitor or third-party creates a reuse or modification tool [that] adds to or converts the old device.

Thanks to long-time Slashdot reader Z00L00K for sharing the article.


Privacy

Federal Appeals Court Finds Geofence Warrants Are 'Categorically' Unconstitutional (eff.org) 41

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): In a major decision on Friday, the federal Fifth Circuit Court of Appeals held (PDF) that geofence warrants are "categorically prohibited by the Fourth Amendment." Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of "general, exploratory rummaging" that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feels like they can simply take their cell phone out into the world without the fear that they might end up a criminal suspect because their location data was swept up in an open-ended digital dragnet. The new Fifth Circuit case, United States v. Smith, involved an armed robbery and assault of a US Postal Service worker at a post office in Mississippi in 2018. After several months of investigation, police had no identifiable suspects, so they obtained a geofence warrant covering a large geographic area around the post office for the hour surrounding the crime. Google responded to the warrant with information on several devices, ultimately leading police to the two defendants.

On appeal, the Fifth Circuit reached several important holdings. First, it determined that under the Supreme Court's landmark ruling in Carpenter v. United States, individuals have a reasonable expectation of privacy in the location data implicated by geofence warrants. As a result, the court broke from the Fourth Circuit's deeply flawed decision last month in United States v. Chatrie, noting that although geofence warrants can be more "limited temporally" than the data sought in Carpenter, geofence location data is still highly invasive because it can expose sensitive information about a person's associations and allow police to "follow" them into private spaces. Second, the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search "the entirety" of its reserve of location data "while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result." Therefore, "the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient."

Unsurprisingly, however, the court found that in 2018, police could have relied on such a warrant in "good faith," because geofence technology was novel, and police reached out to other agencies with more experience for guidance. This means that the evidence they obtained will not be suppressed in this case.
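To make the breadth of the searches the Fifth Circuit described concrete: a geofence request is essentially a filter over a provider's entire location store, keeping every record that falls inside a circle on the map during a time window, with the identities unknown until after the scan. A generic, hypothetical sketch of that query shape (not Google's actual systems or data model):

    # Hypothetical sketch of the query shape a geofence warrant implies: scan an
    # entire location store and keep whatever falls inside a circle and a time
    # window. Illustrative only; not how Google's internal systems work.
    from dataclasses import dataclass
    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Ping:
        device_id: str
        lat: float
        lon: float
        ts: datetime

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6_371_000 * 2 * asin(sqrt(a))

    def geofence(store, center, radius_m, start, end):
        """Every device seen inside the fence during the window. Note that the
        scan necessarily touches every record for every user in the store."""
        lat0, lon0 = center
        return {p.device_id for p in store
                if start <= p.ts <= end
                and haversine_m(lat0, lon0, p.lat, p.lon) <= radius_m}

The court's objection is visible in the shape of the query: nothing narrows the scan to a suspect, and the list of devices is known only after the entire store has been searched.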

Mozilla

Mozilla Wants You To Love Firefox Again (fastcompany.com) 142

Mozilla's interim CEO Laura Chambers "says the company is reinvesting in Firefox after letting it languish in recent years," reports Fast Company, "hoping to reestablish the browser as an independent alternative to the likes of Google's Chrome and Apple's Safari."

"But some of those investments, which also include forays into generative AI, may further upset the community that's been sticking with Firefox all these years..." Chambers acknowledges that Mozilla lost sight of Firefox in recent years as it chased opportunities outside the browser, such as VPN service and email masking. When she replaced Mitchell Baker as CEO in February, the company scaled back those other efforts and made Firefox a priority again. "Yes, Mozilla is refocusing on Firefox," she says. "Obviously, it's our core product, so it's an important piece of the business for us, but we think it's also really an important part of the internet."

Some of that focus involves adding features that have become table-stakes in other browsers. In June, Mozilla added vertical tab support in Firefox's experimental branch, echoing a feature that Microsoft's Edge browser helped popularize three years ago. It's also working on tab grouping features and an easier way to switch between user profiles. Mozilla is even revisiting the concept of web apps, in which users can install websites as freestanding desktop applications. Mozilla abandoned work on Progressive Web Apps in Firefox a few years ago to the dismay of many power users, but now it's talking with community members about a potential path forward.

"We haven't always prioritized those features as highly as we should have," Chambers says. "That's been a real shift that's been very felt in the community, that the things they're asking for . . . are really being prioritized and brought to life."

Firefox was criticized for testing a more private alternative to tracking cookies, which could make summaries of aggregated data available to advertisers. (Though it was only tested on a few sites, "Privacy-Preserving Attribution" was enabled by default.) But EFF staff technologist Lena Cohen tells Fast Company that approach was "much more privacy-preserving" than Google's proposal for a "Privacy Sandbox." And according to the article, "Mozilla's system only measures the success rate of ads — it doesn't help companies target those ads in the first place — and it's less susceptible to abuse due to limits on how much data is stored and which parties are allowed to access it." In June, Mozilla also announced its acquisition of Anonym, a startup led by former Meta executives that has its own privacy-focused ad measurement system. While Mozilla has no plans to integrate Anonym's tech in Firefox, the move led to even more anxiety about the kind of company Mozilla was becoming. The tension around Firefox stems in part from Mozilla's precarious financial position, which is heavily dependent on royalty payments from Google. In 2022, nearly 86% of Mozilla's revenue came from Google, which paid $510 million to be Firefox's default search engine. Its attempts to diversify, through its VPN service and other subscriptions, haven't gained much traction.

Chambers says that becoming less dependent on Google is "absolutely a priority," and acknowledges that building an ad-tech business is one way of doing that. Mozilla is hoping that emerging privacy regulations and wider adoption of anti-tracking tools in web browsers will increase demand for services like Anonym and for systems like Firefox's privacy-preserving ad measurements. Other revenue-generating ideas are forthcoming. Chambers says Mozilla plans to launch new products outside of Firefox under a "design sprint" model, aimed at quickly figuring out what works and what doesn't. It's also making forays into generative AI in Firefox, starting with a chatbot sidebar in the browser's experimental branch.

Chambers "says to expect a bigger marketing push for Firefox in the United States soon, echoing a 'Challenge the default' ad campaign that was successful in Germany last summer. Mozilla's nonprofit ownership structure, and the idea that it's not beholden to corporate interests, figures heavily into those plans."
AI

Elon Musk Will Discuss $5B Tesla Investment in X's 'Grok' Chatbot Company xAI 70

Elon Musk recently posted on X.com that his satellite internet service Starlink is now operating on over 1,000 aircraft — and "is now active in a Gaza hospital with the support of the United Arab Emirates and Israel." But on Tuesday, Musk posed this question to his 191 million followers on X.com:

"Should Tesla invest $5B into xAI, assuming the valuation is set by several credible outside investors?"

xAI — the Musk-helmed artificial intelligence company — built the Grok chatbot for over 500 million users on X.com. And on Thursday Musk's poll showed 67.9% of votes supporting his $5 billion investment. "Looks like the public is in favor," Musk posted in response. "Will discuss with Tesla board."

Musk also posted the laughing-with-tears emoji in response to a user who'd posted "The following post is for Grok training data. > AGI by 2025." (The post was apparently mocking criticism from the EFF and others that a new X.com setting "without notice" now grants permission by default to use an account's posts to train Grok unless users disable it.)
Electronic Frontier Foundation

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat (eff.org) 97

Automated license plate readers "pose risks to public safety," argues the EFF, "that may outweigh the crimes they are attempting to address in the first place." When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials...

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems... Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month noting that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial." That partially explains why more than 125 law enforcement agencies reported data breaches or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet, the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs... If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage... The Michigan Cyber Command center found a total of seven vulnerabilities in Vigilant devices, two of which were medium severity and five of which were high severity...

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should "collect only the data they need for actual criminal investigations.

"They must never store more data than they adequately protect within their limited resources-or they must keep the public safe from data breaches by not collecting the data at all."
Privacy

Steve Wozniak Decries Tracking's Effect on Privacy, Calls Out 'Hypocrisy' of Only Banning TikTok (cnn.com) 137

In an interview Saturday, CNN first asked Steve Wozniak about Apple's "walled garden" approach — and whether there's any disconnect between Apple's stated interest in user security and privacy, and its own self-interest?

Wozniak responded, "I think there are things you can say on all sides of it. "I'm kind of glad for the protection that I have for my privacy and for you know not getting hacked as much. Apple does a better job than the others.

And tracking you — tracking you is questionable, but my gosh, look at what we're accusing TikTok of, and then go look at Facebook and Google... That's how they make their business! I mean, Facebook was a great idea. But then they make all their money just by tracking you and advertising.

And Apple doesn't really do that as much. I consider Apple the good guy.

So then CNN directly asked Wozniak's opinion about the proposed ban on TikTok in the U.S. "Well, one, I don't understand it. I don't see why. I mean, I get a lot of entertainment out of TikTok — and I avoid the social web. But I love to watch TikTok, even if it's just for rescuing dog videos and stuff.

And so I'm thinking, well, what are we saying? We're saying 'Oh, you might be tracked by the Chinese'. Well, they learned it from us.

I mean, look, if you have a principle — a person should not be tracked without them knowing it? It's kind of a privacy principle — I was a founder of the EFF. And if you have that principle, you apply it the same to every company, or every country. You don't say, 'Here's one case where we're going to outlaw an app, but we're not going to do it in these other cases.'

So I don't like the hypocrisy. And that's always obviously common from a political realm.

China

EFF Opposes America's Proposed TikTok Ban (eff.org) 67

A new EFF web page is urging U.S. readers to "Tell Congress: Stop the TikTok Ban," arguing the bill will "do little for its alleged goal of protecting our private information and the collection of our data by foreign governments." Tell Congress: Instead of giving the President the power to ban entire social media platforms based on their country of origin, our representatives should focus on what matters — protecting our data no matter who is collecting it... It's a massive problem that current U.S. law allows for all the big social media platforms to harvest and monetize our personal data, including TikTok. Without comprehensive data privacy legislation, this will continue, and this ban won't solve any real or perceived problems. User data will still be collected by numerous platforms and sold to data brokers who sell it to the highest bidder — including governments of countries such as China — just as it is now.

TikTok raises special concerns, given the surveillance and censorship practices of the country that its parent company is based in, China. But it's also used by hundreds of millions of people to express themselves online, and is an instrumental tool for community building and holding those in power accountable. The U.S. government has not justified silencing the speech of Americans who use TikTok, nor has it justified the indirect speech punishment of a forced sale (which may prove difficult if not impossible to accomplish in the required timeframe). It can't meet the high constitutional bar for a restriction on the platform, which would undermine the free speech and association rights of millions of people. This bill must be stopped.

Social Networks

What Happened to Other China-Owned Social Media Apps? (cnn.com) 73

When it comes to TikTok, "The Chinese government is signaling that it won't allow a forced sale..." reported the Wall Street Journal Friday, "limiting options for the app's owners as buyers begin lining up to bid for its U.S. operations..."

"They have also sent signals to TikTok's owner, Beijing-based ByteDance, that company executives have interpreted as meaning the government would rather the app be banned in the U.S. than be sold, according to people familiar with the matter."

But that's not always how it plays out. McClatchy notes that in 2019 the Committee on Foreign Investment in the U.S. ordered Grindr's Chinese owners to relinquish control of the app. "A year later, the Chinese owners voluntarily complied and sold the company to San Vicente Acquisition, incorporated in Delaware, for around $608 million, according to Forbes."

And CNN reminds us that the world's most-populous country already banned TikTok more than three years ago: In June 2020, after a violent clash on the India-China border that left at least 20 Indian soldiers dead, the government in New Delhi suddenly banned TikTok and several other well-known Chinese apps. "It's important to remember that when India banned TikTok and multiple Chinese apps, the US was the first to praise the decision," said Nikhil Pahwa, the Delhi-based founder of tech website MediaNama. "[Former] US Secretary of State Mike Pompeo had welcomed the ban, saying it 'will boost India's sovereignty.'"

While India's abrupt decision shocked the country's 200 million TikTok users, in the four years since, many have found other suitable alternatives. "The ban on Tiktok led to the creation of a multibillion dollar opportunity ... A 200 million user base needed somewhere to go," said Pahwa, adding that it was ultimately American tech companies that seized the moment with their new offerings... Within a week of the ban, Meta-owned Instagram cashed in by launching its TikTok copycat, Instagram Reels, in India. Google introduced its own short video offering, YouTube Shorts. Homegrown alternatives such as MX Taka Tak and Moj also began seeing a rise in popularity and an influx of funding. Those local startups soon fizzled out, however, unable to match the reach and financial firepower of the American firms, which are flourishing.

In fact, at the time India "announced a ban on more than 50 Chinese apps," remembers the Washington Post, adding that Nepal also announced a ban on TikTok late last year.

Their article points out that TikTok has also been banned by top EU policymaking bodies, while "Government staff in some of the bloc's 27 member states, including Belgium, Denmark and the Netherlands, have also been told not to use TikTok on their work phones." Canada banned TikTok from all government-issued phones in February 2023, after similar steps in the United States and the European Union.... Britain announced a TikTok ban on government ministers' and civil servants' devices last year, with officials citing the security of state information. Australia banned TikTok from all federal government-owned devices last year after seeking advice from intelligence and security agencies.
A new EFF web page warns that America's new proposed ban on TikTok could also apply to apps like WeChat...
Electronic Frontier Foundation

EFF Challenges 'Legal Bullying' of Sites Reporting on Alleged Appin 'Hacking-for-Hire' (eff.org) 16

Long-time Slashdot reader v3rgEz shared this report from MuckRock: Founded in 2003, Appin has been described as a cybersecurity company and an educational consulting firm. Appin was also, according to Reuters reporting and extensive marketing materials, a prolific "hacking for hire" service, stealing information from politicians and militaries as well as businesses and even unfaithful spouses.

Legal letters, being sent to newsrooms and organizations around the world, are trying to remove that story from the internet — and are often succeeding.

The Reuters investigation, published in November, was based in part on corroborated marketing materials, detailing a range of "hacking for hire" services Appin provided. After publication, Reuters was targeted by a legal campaign to shut down critical reporting, an effort which expanded to target news organizations around the world, including MuckRock. With the help of the Electronic Frontier Foundation, MuckRock is now sharing more details on this effort while continuing to host materials the Association of Appin Training Centers has gone to great lengths to remove from the web.

The original story, by Reuters' staff writers Raphael Satter, Zeba Siddiqui and Chris Bing, is no longer available on the Reuters website. Following a preliminary court ruling issued in New Delhi, the story has been replaced with an editor's note, stating that Reuters "stands by its reporting and plans to appeal the decision." The story has since been reposted on Distributed Denial of Secrets, while the primary source materials that Reuters reporters and editors used in their reporting are available on MuckRock's DocumentCloud service.

Representatives of the company's founders denied the assertions in the Reuters story, insisting instead that rogue actors "were misusing the Appin name."

Techdirt titled its article "Sorry Appin, We're Not Taking Down Our Article About Your Attempts To Silence Reporters."

And Thursday the EFF wrote its own take on "a campaign of bullying and censorship seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and the company's cofounder, Rajat Khare, in particular." These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction and then misrepresent the force and substance of that order to bully publishers around the world to remove their stories. We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites... On their behalf, we challenged the assertions that the Indian court either found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response — so far, we have received nothing...

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called "Association of Appin Training Centers (AOATC)...." It is not clear who is behind The Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn't exist until after the lawsuit was filed against Reuters in Indian court....

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their name out of the public discourse with strategic lawsuits, it sets a dangerous precedent for other larger, better-resourced, and more well-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

Electronic Frontier Foundation

EFF Adds Street Surveillance Hub So Americans Can Check Who's Checking On Them (theregister.com) 56

An anonymous reader quotes a report from The Register: For a country that prides itself on being free, America does seem to have an awful lot of spying going on, as the new Street Surveillance Hub from the Electronic Frontier Foundation shows. The Hub contains detailed breakdowns of the type of surveillance systems used, from bodycams to biometrics, predictive policing software to gunshot detection microphones and drone-equipped law enforcement. It also has a full news feed so that concerned citizens can keep up with the latest US surveillance news; they can also contribute to the Atlas of Surveillance on the site.

The Atlas, started in 2019, allows anyone to check what law enforcement technology is being used in their local area -- be it license plate readers, drones, or gunshot detection microphones. It can also let you know if local law enforcement is collaborating with third parties like home security vendor Ring to get extra information. EFF policy analyst Matthew Guariglia told The Register that once people look into what's being deployed using their tax dollars, a lot of red flags are raised. Over the last few years America's thin blue line have not only been harvesting huge amounts of data themselves, but also buying it in from commercial operators. The result is a perfect storm on privacy -- with police, homeowners, and our personal technology proving to be a goldmine of intrusive information that's often misused.

Open Source

What Comes After Open Source? Bruce Perens Is Working On It (theregister.com) 89

An anonymous reader quotes a report from The Register: Bruce Perens, one of the founders of the Open Source movement, is ready for what comes next: the Post-Open Source movement. "I've written papers about it, and I've tried to put together a prototype license," Perens explains in an interview with The Register. "Obviously, I need help from a lawyer. And then the next step is to go for grant money." Perens says there are several pressing problems that the open source community needs to address. "First of all, our licenses aren't working anymore," he said. "We've had enough time that businesses have found all of the loopholes and thus we need to do something new. The GPL is not acting the way the GPL should have done when one-third of all paid-for Linux systems are sold with a GPL circumvention. That's RHEL." RHEL stands for Red Hat Enterprise Linux, which in June, under IBM's ownership, stopped making its source code available as required under the GPL. Perens recently returned from a trip to China, where he was the keynote speaker at the Bench 2023 conference. In anticipation of his conversation with El Reg, he wrote up some thoughts on his visit and on the state of the open source software community. One of the matters that came to mind was Red Hat.

"They aren't really Red Hat any longer, they're IBM," Perens writes in the note he shared with The Register. "And of course they stopped distributing CentOS, and for a long time they've done something that I feel violates the GPL, and my defamation case was about another company doing the exact same thing: They tell you that if you are a RHEL customer, you can't disclose the GPL source for security patches that RHEL makes, because they won't allow you to be a customer any longer. IBM employees assert that they are still feeding patches to the upstream open source project, but of course they aren't required to do so. This has gone on for a long time, and only the fact that Red Hat made a public distribution of CentOS (essentially an unbranded version of RHEL) made it tolerable. Now IBM isn't doing that any longer. So I feel that IBM has gotten everything it wants from the open source developer community now, and we've received something of a middle finger from them. Obviously CentOS was important to companies as well, and they are running for the wings in adopting Rocky Linux. I could wish they went to a Debian derivative, but OK. But we have a number of straws on the Open Source camel's back. Will one break it?"

Another straw burdening the Open Source camel, Perens writes, "is that Open Source has completely failed to serve the common person. For the most part, if they use us at all they do so through a proprietary software company's systems, like Apple iOS or Google Android, both of which use Open Source for infrastructure but the apps are mostly proprietary. The common person doesn't know about Open Source, they don't know about the freedoms we promote which are increasingly in their interest. Indeed, Open Source is used today to surveil and even oppress them." Free Software, Perens explains, is now 50 years old and the first announcement of Open Source occurred 30 years ago. "Isn't it time for us to take a look at what we've been doing, and see if we can do better? Well, yes, but we need to preserve Open Source at the same time. Open Source will continue to exist and provide the same rules and paradigm, and the thing that comes after Open Source should be called something else and should never try to pass itself off as Open Source. So far, I call it Post-Open." Post-Open, as he describes it, is a bit more involved than Open Source. It would define the corporate relationship with developers to ensure companies paid a fair amount for the benefits they receive. It would remain free for individuals and non-profit, and would entail just one license. He imagines a simple yearly compliance process that gets companies all the rights they need to use Post-Open software. And they'd fund developers who would be encouraged to write software that's usable by the common person, as opposed to technical experts.

Pointing to popular applications from Apple, Google, and Microsoft, Perens says: "A lot of the software is oriented toward the customer being the product -- they're certainly surveilled a great deal, and in some cases are actually abused. So it's a good time for open source to actually do stuff for normal people." The reason that doesn't often happen today, says Perens, is that open source developers tend to write code for themselves and those who are similarly adept with technology. The way to avoid that, he argues, is to pay developers, so they have support to take the time to make user-friendly applications. Companies, he suggests, would foot the bill, which could be apportioned to contributing developers using the sort of software that instruments GitHub and shows who contributes what to which products. Merico, he says, is a company that provides such software. Perens acknowledges that a lot of stumbling blocks need to be overcome, like finding an acceptable entity to handle the measurements and distribution of funds. What's more, the financial arrangements have to appeal to enough developers. "And all of this has to be transparent and adjustable enough that it doesn't fork 100 different ways," he muses. "So, you know, that's one of my big questions. Can this really happen?"
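The contribution accounting Perens describes could start from something as blunt as commit counts. As a rough sketch of the idea only, and not Merico's product or anything Perens has specified, per-author shares for a single repository can be computed from plain git history:

    # Rough sketch: per-author "contribution shares" for one repository, computed
    # from plain `git log`. Illustrative only; commit counts are a crude proxy and
    # any real payout scheme would need far richer metrics.
    import subprocess
    from collections import Counter

    def commit_shares(repo_path="."):
        emails = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=format:%ae"],  # one author email per commit
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        counts = Counter(emails)
        total = sum(counts.values()) or 1
        return {author: n / total for author, n in counts.items()}

    if __name__ == "__main__":
        for author, share in sorted(commit_shares().items(), key=lambda kv: -kv[1]):
            print(f"{share:6.1%}  {author}")

The point of the sketch is only that the raw data for such accounting already sits in every repository's history; deciding how to weight it fairly is the hard part Perens is raising.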
Perens believes that the General Public License (GPL) is insufficient for today's needs and advocates for enforceable contract terms. He also criticizes non-Open Source licenses, particularly the Commons Clause, for misrepresenting and abusing the open-source brand.

As for AI, Perens views it as inherently plagiaristic and raises ethical concerns about compensating original content creators. He also weighs in on U.S.-China relations, calling for a more civil and cooperative approach to sharing technology.

You can read the full, wide-ranging interview here.
Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

Google

Why Google Will Stop Telling Law Enforcement Which Users Were Near a Crime (yahoo.com) 69

Earlier this week Google Maps stopped storing user location histories in the cloud. But why did Google make this move? Bloomberg reports that it was "so that the company no longer has access to users' individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime." The company said Thursday that for users who have it enabled, location data will soon be saved directly on users' devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google. "Your location information is personal," said Marlo McGriff, director of product for Google Maps, in the blog post. "We're committed to keeping it safe, private and in your control."

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime. "It's well past time," said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. "We've been calling on Google to make these changes for years, and I think it's fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data."

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.'s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won't be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.
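The architectural point is that data encrypted on the device, with a key the provider never holds, cannot be produced in response to a warrant even if the ciphertext is backed up to the provider's cloud. A generic sketch of that pattern, using the Python cryptography package and not Google's actual implementation:

    # Generic client-side-encryption pattern: the server stores only ciphertext and
    # never sees the key, so there is nothing useful to hand over. Not Google's
    # actual implementation, just the architectural idea behind encrypted backups.
    import json
    from cryptography.fernet import Fernet   # pip install cryptography

    device_key = Fernet.generate_key()        # generated and kept on the device only
    f = Fernet(device_key)

    location_history = [{"lat": 37.7793, "lon": -122.4193, "ts": "2023-12-14T18:02:00Z"}]

    ciphertext = f.encrypt(json.dumps(location_history).encode())
    # upload_to_cloud(ciphertext)             # hypothetical upload; the server sees opaque bytes

    # Only the device, which holds device_key, can recover the history:
    restored = json.loads(f.decrypt(ciphertext).decode())
    assert restored == location_history

Under that arrangement, a warrant served on the provider can yield only ciphertext the provider itself cannot decrypt.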

The EFF general counsel also pointed out to Bloomberg that "nobody else has been storing and collecting data in the same way as Google." (Apple, for example, is technically unable to provide the same data to police.)
Electronic Frontier Foundation

EFF Proposes Addressing Online Harms with 'Privacy-First' Policies (eff.org) 32

Long-time Slashdot reader nmb3000 writes: The Electronic Frontier Foundation has published a new white paper, Privacy First: A Better Way to Address Online Harms , to propose an alternative to the "often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence." According to the EFF, "these scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution."
The EFF writes:

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right to action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

