United States

FCC Nomination Stalled for One Year, Preventing Restoration of US Net Neutrality (siliconvalley.com) 85

Why hasn't America restored net neutrality protections? "President Biden's nomination to serve on the Federal Communications Commission has been stalled in the Senate for more than a year," complain the editorial boards of two Silicon Valley newspapers: Confirming Gigi Sohn would end the 2-2 deadlock on the FCC that is keeping Biden from fulfilling his campaign promise to restore net neutrality, ensuring that all internet traffic is treated equally. Polls show that 75% of Americans support net neutrality rules. They know that an open internet is essential for innovation and economic growth, for fostering the next generation of entrepreneurs....

[T]elecommunication giants such as AT&T, Verizon and Comcast don't want that to happen. They favor the status quo that allows the internet companies to pick winners and losers by charging content providers higher rates for speedier access to customers. They seek to expand the cable system model and allow kingmakers to rake in billions at the expense of smaller, new startups that struggle to gain a wider audience on their slow-speed offerings. So Republicans and a handful of Democrats are holding up Sohn's confirmation, claiming that her "radical" views disqualify her....

They also object to Sohn's current service as an Electronic Frontier Foundation board member, saying it proves she wouldn't be an unbiased and impartial FCC Commissioner. The San Francisco-based EFF is a leading nonprofit with a mission of defending digital privacy, free speech and innovation....

Enough is enough. Confirm Sohn and allow the FCC to fulfill its mission of promoting connectivity and ensuring a robust and competitive internet market.

The Courts

Supreme Court Allows Reddit Mods To Anonymously Defend Section 230 (arstechnica.com) 152

An anonymous reader quotes a report from Ars Technica: Over the past few days, dozens of tech companies have filed briefs in support of Google in a Supreme Court case that tests online platforms' liability for recommending content. Obvious stakeholders like Meta and Twitter, alongside popular platforms like Craigslist, Etsy, Wikipedia, Roblox, and Tripadvisor, urged the court to uphold Section 230 immunity in the case or risk muddying the paths users rely on to connect with each other and discover information online. Out of all these briefs, however, Reddit's was perhaps the most persuasive (PDF). The platform argued on behalf of everyday Internet users, who, it claims, could be buried in "frivolous" lawsuits for frequenting Reddit if Section 230 is weakened by the court. Unlike other companies that hire content moderators, Reddit says the content it displays is "primarily driven by humans -- not by centralized algorithms." Because of this, Reddit's brief paints a picture of trolls suing not major social media companies, but individuals who get no compensation for their work recommending content in communities. That legal threat extends to both volunteer content moderators, Reddit argued, as well as more casual users who collect Reddit "karma" by upvoting and downvoting posts to help surface the most engaging content in their communities.

"Section 230 of the Communications Decency Act famously protects Internet platforms from liability, yet what's missing from the discussion is that it crucially protects Internet users -- everyday people -- when they participate in moderation like removing unwanted content from their communities, or users upvoting and downvoting posts," a Reddit spokesperson told Ars. Reddit argues in the brief that such frivolous lawsuits have been lobbed against Reddit users and the company in the past, and Section 230 protections historically have consistently allowed Reddit users to "quickly and inexpensively" avoid litigation. [...]

The Supreme Court will have to weigh whether Reddit's arguments are valid. To help make its case defending Section 230 immunity protections for recommending content, Reddit received special permission from the Supreme Court to include anonymous comments from Reddit mods in its brief. This, Reddit's spokesperson notes, is "a significant departure from normal Supreme Court procedure." The Electronic Frontier Foundation, a nonprofit defending online privacy, championed the court's decision to allow moderators to contribute comments anonymously.

"We're happy the Supreme Court recognized the First Amendment rights of Reddit moderators to speak to the court about their concerns," EFF's senior staff attorney, Sophia Cope, told Ars. "It is quite understandable why those individuals may be hesitant to identify themselves should they be subject to liability in the future for moderating others' speech on Reddit."

"Reddit users that interact with third-party content -- including 'hosting' content on a sub-Reddit that they manage, or moderating that content -- could definitely be open to legal exposure if the Court carves out 'recommending' from Section 230's protections, or otherwise narrows Section 230's reach," Cope told Ars.

Crime

San Jose Police Announce Three Stolen Vehicles Recovered Using Automatic License Plate Reader (kron4.com) 114

Saturday night in the Silicon Valley city of San Jose, the assistant police chief tweeted out praise for their recently-upgraded Automatic License Plate Readers: Officers in Air3 [police helicopter], monitoring the ALPR system, got alerted to 3 stolen cars. They directed ground units to the cars. All 3 drivers in custody! No dangerous vehicle pursuits occurred, nor were they needed.

2 drivers tried to run away. But, you can't outrun a helicopter!

There are photos — one of the vehicles appears to be a U-Haul pickup truck — and the tweet drew exactly one response, from San Jose mayor Matt Mahan: "Nice job...! Appreciate the excellent police work and great to see ALPRs having an impact. Don't steal cars in San Jose!"

Some context: The San Jose Spotlight (a nonprofit local news site) noted that prior to last year license plate readers had been mounted exclusively on police patrol cars (and in use since 2006). But last year the San Jose Police Department launched a new "pilot program" with four cameras mounted at a busy intersection that "captured nearly 300,000 plate scans in just the last month, according to city data."

By August this had led to plans for 150 more stationary ALPR cameras, a local TV station reported. "Just this week, police said they solved an armed robbery and arrested a suspected shooter thanks to the cameras." During a forum to update the community, San Jose police also mentioned success stories in other cities like Vallejo where they've reported a 100% increase in identifying stolen vehicles. San Jose is now installing hundreds around the city and the first batch is coming in the next two to three months....

The biggest concern among those attending Wednesday's virtual forum was privacy. But the city made it clear the data is only shared with trained police officers and certain city staff, no out-of-state or federal agencies. "Anytime that someone from the San Jose Police Department accesses the ALPR system, they have to input a reason, the specific plates they are looking for and all of that information is logged so that we can keep track of how many times it's being used and what it's being used for," said Albert Gehami, Digital Privacy Officer for San Jose.
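The access-control scheme Gehami describes (every query must carry a stated reason, and the query itself is logged for later review) is a common audit pattern. Here is a generic sketch in Python; all names and the hot-list data are hypothetical illustrations, not San Jose's actual system:

```python
from datetime import datetime, timezone

# Hypothetical audit-logged plate lookup: every query must state a reason,
# and who asked, which plate, and why are all recorded for later review.
HOT_LIST = {"8ABC123"}  # stand-in for a database of stolen-vehicle plates
audit_log = []

def lookup_plate(officer: str, plate: str, reason: str) -> bool:
    """Return whether `plate` is on the hot list, logging the query."""
    if not reason.strip():
        raise ValueError("a reason is required for every ALPR query")
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "officer": officer,
        "plate": plate,
        "reason": reason,
    })
    return plate in HOT_LIST

# A query with a stated reason succeeds and leaves an audit trail;
# a query with a blank reason is rejected before any lookup happens.
lookup_plate("officer_1", "8ABC123", "stolen vehicle report #4521")
```

The point of the pattern is that the log is written unconditionally on every successful query, so "how many times it's being used and what it's being used for" can be answered from the log alone.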

More privacy concerns were raised in September, reports the San Jose Spotlight: The San Jose City Council unanimously approved a policy Tuesday that formally bans the police department from selling any license plate data, using that information for investigating a person's immigration status or for monitoring legally protected activities like protests or rallies.

Even with these new rules, some privacy advocates and community groups are still opposed to the technology. Victor Sin, chair of the Santa Clara Valley Chapter of the ACLU of Northern California, expressed doubt that the readers are improving public safety. He made the comments in a letter to the council from himself and leaders of four other community organizations. "Despite claims that (automated license plate reader) systems can reduce crime, researchers have expressed concerns about the rapid acquisition of this technology by law enforcement without evidence of its efficacy," the letter reads. Groups including the Asian Law Alliance and San Jose-Silicon Valley NAACP also said the city should reduce the amount of time it keeps license plate data on file down from one year....

Then-Mayor Sam Liccardo said he's already convinced the readers are useful, but added the council should try to find a way to measure their effect. "It's probably not a bad idea for us to decide what are the outcomes we're trying to achieve, and if there is some reasonable metric that captures that outcome in a meaningful way," Liccardo said. "Was this used to actually help us arrest anybody, or solve a crime or prevent an accident?"

An EFF position paper argues that "ALPR data is gathered indiscriminately, collecting information on millions of ordinary people. By plotting vehicle times and locations and tracing past movements, police can use stored data to paint a very specific portrait of drivers' lives, determining past patterns of behavior and possibly even predicting future ones — in spite of the fact that the vast majority of people whose license plate data is collected and stored have not even been accused of a crime.... [ALPR technology] allows officers to track everyone..."

Maybe the assistant police chief's tweet was meant to boost public support for the technology? It's already led to a short report from another local news station: San Jose police recovered three stolen cars using their automated license-plate recognition technology (ALPR) on Saturday, according to officials with the San Jose Police Department.

Officers inside of Air3, one of SJPD's helicopters, spotted three stolen cars using ALPR before directing ground units their way. Police say no pursuits occurred, though two of the drivers tried to run away.

Privacy

CES's 'Worst in Show' Criticized Over Privacy, Security, and Environmental Threats (youtube.com) 74

"We are seeing, across the gamut, products that impact our privacy, products that create cybersecurity risks, that have overarchingly long-term environmental impacts, disposable products, and flat-out just things that maybe should not exist."

That's the CEO of the how-to repair site iFixit, introducing their third annual "Worst in Show" ceremony for the products displayed at this year's CES. But the show's slogan promises it's also "calling out the most troubling trends in tech." For example, the EFF's executive director started with two warnings. First, "If it's communicating with your phone, it's generally communicating to the cloud too." But more importantly, if a product is gathering data about you and communicating with the cloud, "you have to ask yourself: is this company selling something to me, or are they selling me to other people? And this year, as in many past years at CES, it's almost impossible to tell from the products and the advertising copy around them! They're just not telling you what their actual business model is, and because of that — you don't know what's going on with your privacy."

After warning about the specific privacy implications of a urine-analyzing add-on for smart toilets, they noted there was a close runner-up for the worst privacy: the increasing number of scam products that "are basically based on the digital version of phrenology, like trying to predict your emotions based upon reading your face or other things like that. There's a whole other category of things that claim to do things that they cannot remotely do."

To judge the worst in show by environmental impact, Consumer Reports sent the Associate Director for their Product Sustainability, Research and Testing team, who chose the 55-inch portable "Displace TV" for being powered only by four lithium-ion batteries (rather than, say, a traditional power cord).

And the "worst in show" award for repairability went to the Ember Mug 2+ — a $200 travel mug "with electronics and a battery inside...designed to keep your coffee hot." Kyle Wiens, iFixit's CEO, first noted it was a product which "does not need to exist" in a world which already has equally effective double-insulated, vacuum-insulated mugs and Thermoses. But even worse: it's battery powered, and (at least in earlier versions) that battery can't be easily removed! If you email the company asking for support on replacing the battery, Wiens claims, "they will give you a coupon on a new, disposable coffee mug. So this is the kind of product that should not exist, doesn't need to exist, and is doing active harm to the world."

"The interesting thing is people care so much about their $200 coffee mug, the new feature is 'Find My iPhone' support. So not only is it harming the environment, it's also spying on where you're located!"

The founder of SecuRepairs.org first warned about "the vast ecosystem of smart, connected products that are running really low-quality, vulnerable software that make our persons and our homes and businesses easy targets for hackers." But for the worst in show for cybersecurity award, they then chose Roku's new Smart TV, partly because smart TVs in general "are a problematic category when it comes to cybersecurity, because they're basically surveillance devices, and they're not created with security in mind." And partly because, to this day, it's hard to tell whether Roku has fixed or even acknowledged its past vulnerabilities, and the company hasn't implemented a prominent bug bounty program. "They're not alone in this. This is a problem that affects electronics makers of all different shapes and sizes at CES, and it's something that as a society, we just need to start paying a lot more attention to."

And U.S. PIRG's "Right to Repair" campaign director gave the "Who Asked For This" award to Neutrogena's "SkinStacks" 3D printer for edible skin-nutrient gummies — which are personalized after phone-based face scans. ("Why just sell vitamins when you could also add in proprietary refills and biometric data harvesting.")

DRM

Unpaid Taxes Could Destroy Porn Studio Accused of Copyright Trolling (arstechnica.com) 22

Slashdot has covered the legal hijinx of Malibu Media over the years. Now Ars Technica reports that the studio could be destroyed by unpaid taxes: Over the past decade, Malibu Media has emerged as a prominent so-called "copyright troll," suing thousands of "John Does" for allegedly torrenting adult content hosted on the porn studio's website, "X-Art." Whether defendants were guilty or not didn't seem to matter to Malibu, critics claimed, as much as winning as many settlements as possible. As courts became more familiar with Malibu, however, some judges grew suspicious of the studio's litigiousness. As early as 2012, a California judge described these lawsuits as "essentially an extortion scheme," and by 2013, a Wisconsin judge ordered sanctions, agreeing with critics who said that Malibu's tactics were designed to "harass and intimidate" defendants into paying Malibu thousands in settlements.

By 2016, Malibu started losing footing in this arena — and even began fighting with its own lawyer. At that point, file-sharing lawsuits became less commonplace, with critics noting a significant reduction in Malibu's lawsuits over the next few years. Now, TorrentFreak reports that Malibu's litigation machine appears to finally be running out of steam — with its corporate status suspended in California sometime between mid-2020 and early 2021 after failing to pay taxes. Last month, a Texas court said that Malibu has until January 20 to pay what's owed in back taxes and get its corporate status reinstated. If that doesn't happen over the next few weeks, one of Malibu's last lawsuits on the books will be dismissed, potentially marking the end of Malibu's long run of alleged copyright trolling.

Government

iFixit Put Up a Right To Repair Billboard Along New York Governor's Drive To Work (pirg.org) 32

Right to Repair website iFixit put up a billboard in Albany, New York, calling for Gov. Kathy Hochul to sign the landmark Right to Repair bill, which was passed overwhelmingly nearly six months ago by the state legislature. PIRG reports: Supported by Repair.org, U.S. PIRG and NYPIRG, Consumer Reports, Environment New York, the Story of Stuff Project, Sierra Club Atlantic Chapter, NRDC, Environmental Action and EFF, calls for the governor to sign the bill have increased. The legislation must advance to the governor by the end of December and be signed by January 10, 2023.

The Albany Times Union editorialized twice for the governor to sign the bill, recently noting that the bill has come under intense opposition from manufacturers: "Meanwhile, lobbyists, big corporations and a few trade organizations are pressing for a veto ... Ms. Hochul must sign the bill, and then lawmakers should get to work passing an expanded version that includes all the products that were needlessly stripped from the original. Big corporations and the lobbyists they hire won't be happy, but that shouldn't matter a bit."

Electronic Frontier Foundation

Aaron Swartz Day Commemorated With International Hackathon (eff.org) 27

Long-time Slashdot reader destinyland shares this announcement from the EFF's DeepLinks blog:

This weekend, EFF is celebrating the life and work of programmer, activist, and entrepreneur Aaron Swartz by participating in the 2022 Aaron Swartz Day and Hackathon. This year, the event will be held in person at the Internet Archive in San Francisco on Nov. 12 and Nov. 13. It will also be livestreamed; links to the livestream will be posted each morning.

Those interested in attending in-person or remotely can register for the event here.

Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26....

Those interested in working on projects in Aaron's honor can also contribute to the annual hackathon, which this year includes several projects: SecureDrop, Bad Apple, the Disability Technology Project (Sat. only), and EFF's own Atlas of Surveillance. In addition to the hackathon in San Francisco, there will also be concurrent hackathons in Ecuador, Argentina, and Brazil. For more information on the hackathon and for a full list of speakers, check out the official page for the 2022 Aaron Swartz Day and Hackathon.

Speakers this year include Chelsea Manning and Cory Doctorow, as well as Internet Archive founder Brewster Kahle, EFF executive director Cindy Cohn, and Creative Commons co-founder Lisa Rein.

Electronic Frontier Foundation

Peter Eckersley, Co-Creator of Let's Encrypt, Dies at 43 (sophos.com) 35

Seven years ago, Slashdot reader #66,542 announced "Panopticlick 2.0," a site showing how your web browser handles trackers.

But it was just one of the many privacy-protecting projects Peter Eckersley worked on, as a staff technologist at the EFF for more than a decade. Eckersley also co-created Let's Encrypt, which today is used by hundreds of millions of people.

Friday the EFF's director of cybersecurity announced the sudden death of Eckersley at age 43. "If you have ever used Let's Encrypt or Certbot or you enjoy the fact that transport layer encryption on the web is so ubiquitous it's nearly invisible, you have him to thank for it," the announcement says. "Raise a glass."

Peter Eckersley's web site is still online, touting "impactful privacy and cybersecurity projects" that he co-created, including not just Let's Encrypt, Certbot, and Panopticlick, but also Privacy Badger and HTTPS Everywhere. In addition, "During the COVID-19 pandemic he convened the stop-covid.tech group, advising many groups working on privacy-preserving digital contact tracing and exposure notification, assisting with several strategy plans for COVID mitigation." You can also still find Peter Eckersley's GitHub repositories online.

But Peter "had apparently revealed recently that he had been diagnosed with cancer," according to a tribute posted online by security company Sophos, noting his impact is all around us: If you click on the padlock in your browser [2022-09-0T22:37:00Z], you'll see that this site, like our sister blog site Sophos News, uses a web certificate that's vouched for by Let's Encrypt, now a well-established Certificate Authority (CA). Let's Encrypt, as a CA, signs TLS cryptographic certificates for free on behalf of bloggers, website owners, mail providers, cloud servers, messaging services...anyone, in fact, who needs or wants a vouched-for encryption certificate, subject to some easy-to-follow terms and conditions....

Let's Encrypt wasn't the first effort to try to build a free-as-in-freedom and free-as-in-beer infrastructure for online encryption certificates, but the Let's Encrypt team was the first to build a free certificate signing system that was simple, scalable and solid. As a result, the Let's Encrypt project was soon able to gain the trust of the browser-making community, to the point of quickly getting accepted as an approved certificate signer (a trusted-by-default root CA, in the jargon) by most mainstream browsers....
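For the curious, that "simple, scalable" signing flow is automated by Certbot, the EFF client Eckersley also co-created. A minimal sketch of requesting a certificate follows; the domain and webroot path are placeholders, not a real deployment:

```shell
# Ask Let's Encrypt to sign a certificate for example.com.
# Certbot proves control of the domain by serving an HTTP-01 challenge
# file out of the given webroot directory.
# --dry-run talks to the staging environment, so nothing real is issued.
sudo certbot certonly --webroot \
  -w /var/www/example \
  -d example.com \
  --dry-run
```

In production you would drop --dry-run and let Certbot's scheduled renewal (a systemd timer or cron job, depending on how it was installed) keep the certificate current.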

In recent years, Peter founded the AI Objectives Institute, with the aim of ensuring that we pick the right social and economic problems to solve with AI:

"We often pay more attention to how those goals are to be achieved than to what those goals should be in the first place. At the AI Objectives Institute, our goal is better goals."

Google

Dad Photographs Son for Doctor. Google Flags Him as Criminal, Notifies Police (yahoo.com) 241

"The nurse said to send photos so the doctor could review them in advance," the New York Times reports, describing how an ordeal began in February of 2021 for a software engineer named Mark who had a sick son: Mark's wife grabbed her husband's phone and texted a few high-quality close-ups of their son's groin area to her iPhone so she could upload them to the health care provider's messaging system. In one, Mark's hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images. With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up....

Two days after taking the photos of his son, Mark's phone made a blooping notification noise: His account had been disabled because of "harmful content" that was "a severe violation of Google's policies and might be illegal." A "learn more" link led to a list of possible reasons, including "child sexual abuse & exploitation...." He filled out a form requesting a review of Google's decision, explaining his son's infection. At the same time, he discovered the domino effect of Google's rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son's first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn't get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life....

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation. Mark didn't know it, but Google's review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.... In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark's Google account: his internet searches, his location history, his messages and any document, photo and video he'd stored with the company. The search, related to "child exploitation videos," had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn't worked....

Mark appealed his case to Google again, providing the police report, but to no avail.... A Google spokeswoman said the company stands by its decisions...

"The day after Mark's troubles started, the same scenario was playing out in Texas," the Times notes, quoting a technologist at the EFF who speculates other people experiencing the same thing may not want to publicize it. "There could be tens, hundreds, thousands more of these."

Reached for a comment on the incident, Google told the newspaper that "Child sexual abuse material is abhorrent and we're committed to preventing the spread of it on our platforms."

The Courts

Federal Court Upholds First Amendment Protections For Student's Off-Campus Social Media Post (eff.org) 105

"Students should not have to fear expulsion for expressing themselves on social media after school and off-campus, but that is just what happened to the plaintiff in C1.G v. Siegfried," writes Mukund Rathi via the Electronic Frontier Foundation (EFF). "Last month, the Tenth Circuit Court of Appeals ruled the student's expulsion violated his First Amendment rights. The court's opinion affirms what we argued in an amicus brief last year." From the report: We strongly support the Tenth Circuit's holding that schools cannot regulate how students use social media off campus, even to spread "offensive, controversial speech," unless they target members of the school community with "vulgar or abusive language."

The case arose when the student and his friends visited a thrift shop on a Friday night. There, they posted a picture on Snapchat with an offensive joke about violence against Jews. He deleted the post and shared an apology just a few hours later, but the school suspended and eventually expelled him. [...] The Tenth Circuit held the First Amendment protected the student's speech because "it does not constitute a true threat, fighting words, or obscenity." The "post did not include weapons, specific threats, or speech directed toward the school or its students." While the post spread widely and the school principal received emails about it, the court correctly held that this did not amount to "a reasonable forecast of substantial disruption" that would allow regulation of protected speech.

IT

Indonesia Unblocks Steam and Yahoo, But Fortnite and FIFA Are Still Banned (theverge.com) 4

Indonesia has lifted its ban on Steam and Yahoo now that both companies complied with the country's restrictive laws that regulate online activity. From a report: The Indonesian Ministry of Communication and Information (Kominfo) announced the news in a translated update on Twitter, noting that Counter-Strike: Global Offensive and Dota 2 are back online as well. Last week, Indonesia blocked access to Steam, PayPal, Yahoo, Epic Games, and Origin after the companies failed to meet a deadline to register with the country's database. This requirement is bundled with a broader law, called MR5, that Indonesia first introduced in 2020. The law gives the Indonesian government the authority to order platforms to take down content considered illegal as well as request the data of specific users. In 2021, the digital rights group Electronic Frontier Foundation (EFF) called the policy "invasive of human rights." Although PayPal has yet to comply, Indonesia unblocked access to the service for five days starting July 31st to give users a chance to withdraw money and make payments. According to the Indonesian news outlet Antara News, PayPal reportedly plans on registering with the country's database soon.

United States

Amazon's Ring and Google Can Share Footage With Police Without Warrants (or Your Consent) (cnet.com) 70

U.S. law lets companies like Google and Amazon's Ring doorbell/security camera system "share user footage with police during emergencies without consent and without warrants," CNET reported this week. They add that Ring "came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users' consent thus far in 2022 in what it described as 'emergency situations'."

"That includes instances where the police didn't have a warrant." "So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times," Amazon vice president of public policy Brian Huseman wrote. "In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay...." Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won't provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

"We have nothing to share," the spokesperson responded.

CNET also supplies this historical context: It's been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)... That post made no mention of a workaround for the police during emergency circumstances.

When CNET asked why that workaround wasn't mentioned, Amazon's response was that law enforcement requests, "including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately."

CNET notes there's also no mention of warrantless emergency requests without independent oversight in Ring's own transparency reports about law enforcement requests from past years.

CNET adds that it's not just Amazon. "Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it's up to them to decide whether or not to do so when the police come calling...." (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that "Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all... Ring enabled end-to-end encryption as an option for users in 2021, but it isn't the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display."

The bottom line? [C]onsumers have a choice to make about what they're comfortable with... That said, you can't make informed choices when you aren't well-informed to begin with, and the brands in question don't always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn't make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.
Electronic Frontier Foundation

'Toward a Future We Want to Live In' - EFF Celebrates 32nd Birthday (eff.org) 25

"Today at the Electronic Frontier Foundation, we're celebrating 32 years of fighting for technology users around the world," reads a new announcement posted at EFF.org: If you were online back in the 90s, you might remember that it was pretty wild. We had bulletin boards, FTP, Gopher, and, a few years later, homespun websites. You could glimpse a future where anyone, anywhere in the world could access information, float new ideas, and reach each other across vast distances. It was exciting and the possibilities seemed endless.

But the founders of EFF also knew that a better future wasn't automatic. You don't organize a team of lawyers, technologists, and activists because you think technology will magically fix everything — you do it because you expect a fight.

Three decades later, thanks to those battles, the internet does much of what it promised: it connects and lifts up major grassroots movements for equity, civil liberties, and human rights and allows people to connect and organize to counteract the ugliness of the world.

But we haven't yet won that future we envisioned. Just as the web connects us, it also serves as a hunting ground for those who want to surveil and control our actions, those who wish to harass and spread hate, as well as others who seek to monetize our every move and thought. Information collected for one purpose is freely repurposed in ways that oppress us, rather than lift us up. The truth is that digital tools allow those with horrible ideas to connect with each other just as it does those with beautiful, healing ones.

EFF has always seen both the beauty and destructive potential of the internet, and we've always put our marker down on the side of justice, freedom, and innovation.

We work every day toward a future we want to live in, and we don't do it alone. Support from the public makes every one of EFF's activism campaigns, software projects, and court filings possible. Together, we anchor the movement for a better digital world, and ensure that technology supports freedom, justice, and innovation for all people of the world.

In fact, I invite every digital freedom supporter to join EFF during our summer membership drive. Right now, you can be a member for as little as $20, get some special new gear, and ensure that tech users always have a formidable defender in EFF.

So how does the EFF team celebrate this auspicious anniversary? EFF does what it does best: stand up for users and innovators in the courts, in the halls of power, in the public conversation. We build privacy-protecting tools, teach skills to community members, share knowledge with allies, and preserve the best aspects of the wild web.

In other words, we use every tool in our deep arsenal to fight for a better and brighter digital future for all. Thank you for standing with EFF when it counts.

Piracy

Broadest US Pirate Site Injunction Rewritten/Tamed By Cloudflare (torrentfreak.com) 10

An anonymous reader quotes a report from TorrentFreak: After causing outrage among online services including Cloudflare, the most aggressive pirate site injunction ever handed down in the US has undergone significant weight loss surgery. Now before the court is a heavily modified injunction that is most notable for everything that's been removed. It appears that Cloudflare drew a very clear line in the sand and refused to step over it. [...] The injunctions granted extreme powers, from residential ISP blocking to almost any other action the plaintiffs deemed fit to keep the sites offline. Almost immediately that led to friction with third-party service providers and the situation only worsened when a concerned Cloudflare found itself threatened with contempt of court for non-compliance. The CDN company fought back with support from Google and EFF and that led the parties back to the negotiating table. Filings in the case last week suggested an acceptance by the plaintiffs that the injunction cannot be enforced in its present form. The parties promised to work on a new injunction to address both sides' concerns and as a result, a new proposal now awaits the court's approval. [...]

With the contempt of court issue behind them, Cloudflare and the plaintiffs appear to have settled their differences. An entire section in the injunction dedicated to Cloudflare suggests that the CDN company is indeed prepared to help the video companies but they'll have to conform to certain standards. Before even contacting Cloudflare they'll first need to make "reasonable, good faith efforts to identify and obtain relief for the identified domains from hosting providers and domain name registries and registrars."

If the plaintiffs still need Cloudflare's assistance, Cloudflare will comply with requests against domain names listed in this injunction and future injunctions by preventing access to the following: "Pass-through security services, content delivery network (CDN) services, video streaming services, and authoritative DNS services, DNS, CDN, streaming services, and any related services." An additional note states that the plaintiffs acknowledge that Cloudflare's compliance "will not necessarily prevent the Defendants from providing users with access to Defendants' infringing services." Given the agreement on the terms, the amended injunction will likely be signed off by the court in the coming days. Service providers everywhere will breathe a sigh of relief while rightsholders will have a template for similar cases moving forward.
The proposed amended injunction documents can be found here (1, 2, 3, 4, 5 pdf).
Electronic Frontier Foundation

Court Rules DMCA Does Not Override First Amendment's Anonymous Speech Protections (eff.org) 45

An anonymous reader quotes a report from the Electronic Frontier Foundation: Copyright law cannot be used as a shortcut around the First Amendment's strong protections for anonymous internet users, a federal trial court ruled on Tuesday. The decision by a judge in the United States District Court for the Northern District of California confirms that copyright holders issuing subpoenas under the Digital Millennium Copyright Act must still meet the Constitution's test before identifying anonymous speakers.

The case is an effort to unmask an anonymous Twitter user (@CallMeMoneyBags) who posted photos and content that implied a private equity billionaire named Brian Sheth was romantically involved with the woman who appeared in the photographs. Bayside Advisory LLC holds the copyright on those images, and used the DMCA to demand that Twitter take down the photos, which it did. Bayside also sent Twitter a DMCA subpoena to identify the user. Twitter refused and asked a federal magistrate judge to quash Bayside's subpoena. The magistrate ruled late last year that Twitter must disclose the identity of the user because the user failed to show up in court to argue that they were engaged in fair use when they tweeted Bayside's photos. When Twitter asked a district court judge to overrule the magistrate's decision, EFF and the ACLU Foundation of Northern California filed an amicus brief in the case, arguing that the magistrate's ruling sidestepped the First Amendment when it focused solely on whether the user's tweets constituted fair use of the copyrighted works. [...]

EFF is pleased with the district court's decision, which ensures that DMCA subpoenas cannot be used as a loophole to the First Amendment's protections. The reality is that copyright law is often misused to silence lawful speech or retaliate against speakers. For example, in 2019 EFF successfully represented an anonymous Reddit user that the Watchtower Bible and Tract Society sought to unmask via a DMCA subpoena, claiming that they posted Watchtower's copyrighted material. We are also grateful that Twitter stood up for its user's First Amendment rights in court.

Transportation

San Francisco Police Are Using Driverless Cars As Mobile Surveillance Cameras (vice.com) 50

BeerFartMoron shares a report from Motherboard: For the last five years, driverless car companies have been testing their vehicles on public roads. These vehicles constantly roam neighborhoods while laden with a variety of sensors including video cameras capturing everything going on around them in order to operate safely and analyze instances where they don't. While the companies themselves, such as Alphabet's Waymo and General Motors' Cruise, tout the potential transportation benefits their services may one day offer, they don't publicize another use case, one that is far less hypothetical: Mobile surveillance cameras for police departments.

"Autonomous vehicles are recording their surroundings continuously and have the potential to help with investigative leads," says a San Francisco Police department training document obtained by Motherboard via a public records request. "Investigations has already done this several times."

Privacy advocates say the revelation that police are actively using AV footage is cause for alarm. "This is very concerning," Electronic Frontier Foundation (EFF) senior staff attorney Adam Schwartz told Motherboard. He said cars in general are troves of personal consumer data, but autonomous vehicles will have even more of that data from capturing the details of the world around them. "So when we see any police department identify AVs as a new source of evidence, that's very concerning."

"As companies continue to make public roadways their testing grounds for these vehicles, everyone should understand them for what they are -- rolling surveillance devices that expand existing widespread spying technologies," said Chris Gilliard, Visiting Research Fellow at Harvard Kennedy School Shorenstein Center. "Law enforcement agencies already have access to automated license plate readers, geofence warrants, Ring Doorbell footage, as well as the ability to purchase location data. This practice will extend the reach of an already pervasive web of surveillance."

The Media

70-Year-Old Cyberpunk: 'This Interview Is a Mistake' (spikeartmagazine.com) 37

Long-time Slashdot reader destinyland writes: He was the co-publisher of the first popular digital culture magazine, MONDO 2000, from 1989 to 1993. Now as R. U. Sirius approaches his 70th birthday, a San Francisco-based writer conducts a rollicking interview for the Berlin-based Spike Art Magazine. ("I wanted to speak with someone who had weathered the shakedown of history with art, humour, and a dose of healthy delusion. Or derision. Whatever arrived first...")

That interview itself was star-crossed. ("What came first, R.U.'s stroke or the Omicron surge? As I recovered from a bout of corona, R.U. fell ill with his own strain...") But eventually they did discuss the founding of that influential cyberculture magazine. (Editor Jude Milhon is credited with coining the word "cypherpunk" for an early cryptography-friendly group co-founded by EFF pioneer John Gilmore.) Asked about the magazine's original vision, Sirius says "I was pretty much diverted by Timothy Leary and Robert Anton Wilson and their playful, hopeful futurisms, their whole shebang about evolutionary brain circuits being opened up by drugs and technology."


I needed something to get me out of bed at the end of the 1970s. I mean, punk was great – rock and roll was great – but it wasn't inspiring any action. I remember my friends stole some giant lettering from a sign at a gas station and some of it hung behind the couch in our living room where we took whatever drugs were around and tossed glib nihilisms back and forth. The letters read "ROT".... I couldn't sink any deeper into that couch, so there was nowhere to go except up into outer space.

The surrealism and so forth were influences that travelled with me when I moved to California to create this new thing based on psychedelics, technology, and incorrigible irreverence that eventually became Mondo 2000.



It's a funny interview. ("The 'R.U. a Cyberpunk' page from an issue of Mondo is the only thing most people below a certain age have ever seen from the magazine and we were taking the piss out of ourselves....") They scrupulously avoid mentioning Mondo's undeniable influence on the early days of Wired. But inevitably the conversation comes back around to that seminal question: whither cyberpunk?


Q: The internet, which was a prime source of Mondo subject matter, is home to many eyes, rabbit holes, and agents of algorithmic manipulation. Where is cyberpunk culture alive and well in our contemporary moment? Are you still invested and engaged with cyberpunk as a means of exploring radical possibilities and ideas...?

RUS: [T]here's not really a cyberpunk movement... Surrealism was a movement for a number of years because an anguished control freak named André Breton maintained it in various formations. We didn't have that person, and if we had, he or she or they probably would have been laughed out of the sandbox for the attempt....

I'll remain influenced by playful spontaneity from ancient 20th-century moments not because of any dedication, but only because that's probably the only way I was ever going to be able to write or create. I lack rigor and once declared it a sign of death.



And Sirius jokes at the end that "usually my attitude is that the world today is bloated with people opinionizing so, this interview is a mistake!"

GNU is Not Unix

Richard Stallman Speaks on Cryptocurrency, Blockchain, GNU Taler, and Encryption (libreplanet.org) 96

During a 92-minute presentation Wednesday on the state of the free software movement, Richard Stallman spoke at length on a wide variety of topics, including the need for freedom-respecting package systems.

But Stallman also shared his deepest thoughts on a topic dear to the hearts of Slashdot readers: privacy and currency: I won't order from online stores, because I can't pay them. For one thing, the payment services require running non-free JavaScript... [And] to pay remotely you've got to do it by credit card, and that's tracking people, and I want to resist tracking too.... This is a really serious problem for society, that you can't order things remotely anonymously.

But GNU Taler is part of the path to fixing that. You'll be able to get a Taler token from your bank, or a whole bunch of Taler tokens, and then you'll be able to use those to pay anonymously.

Then if the store can send the thing you bought to a delivery box in your neighborhood, the store doesn't ever have to know who you are.
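The anonymity Stallman describes comes from blind signatures, the technique Taler builds on: the bank signs a withdrawal token without ever seeing its serial number, so a merchant can later verify the bank's signature while the bank cannot link the spent token back to the customer. Below is a minimal, illustrative Chaum-style RSA blind signature sketch in Python. The key sizes and raw (unpadded) RSA here are toy demo parameters only; GNU Taler's actual protocol uses vetted cryptography and is considerably more involved.

```python
# Toy sketch of a Chaum-style blind signature -- the core idea behind
# anonymous tokens like GNU Taler's. Demo parameters, NOT secure.
import math
import secrets

# Bank's toy RSA key pair (tiny primes, for illustration only).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

# 1. Customer creates a token serial and blinds it with random r.
token = secrets.randbelow(n)
while True:
    r = secrets.randbelow(n)
    if r > 1 and math.gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# 2. Bank signs the blinded value -- it never sees `token` itself.
blind_sig = pow(blinded, d, n)

# 3. Customer unblinds: blind_sig * r^-1 = token^d mod n,
#    a valid bank signature on the original token.
signature = (blind_sig * pow(r, -1, n)) % n

# 4. Any merchant can check the bank's signature, but the bank
#    cannot connect this token to the withdrawal that produced it.
assert pow(signature, e, n) == token
print("token verified anonymously")
```

The unblinding step works because blind_sig equals token^d * r^(e*d) = token^d * r (mod n), so multiplying by r's inverse leaves exactly the signature token^d.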

But there's another issue Stallman touched on earlier in his talk: There is a proposed U.S. law called KOSA which would require mandatory age-verification of users -- which means mandatory identification of users, which is likely to mean via face recognition. And it would be in every commercial software application or electronic service that connects to the internet.... [It's] supposedly for protecting children. That's one of the favorite excuses for surveillance and repression: to protect the children. Whether it would actually protect anyone is dubious, but they hope that won't actually be checked.... You can always propose a completely useless method that will repress everyone....
So instead, Stallman suggests that age verification could be handled by.... GNU Taler: Suppose there's some sort of service which charges money, or even a tiny amount of money, and is only for people over 16, or people over 18 or whatever it is. Well, you could get from your bank a Taler token that says the person using this token is over 16. This bank has verified that.... So then the site only needs to insist on a 16-or-over Taler token, and your age is verified, but the site has no idea who you are.

Unfortunately that won't help if user-identifying age-tracking systems are legislated now. The code of Taler works, but it's still being integrated with a bank so that people could actually start to use it with real businesses.

Read on for Slashdot's report on Stallman's remarks on cryptocurrencies and encryption, or jump ahead to...
DRM

Creative Commons Opposes Piracy-Combatting 'SMART' Copyright Act (creativecommons.org) 54

The non-profit Creative Commons (founded by Lawrence Lessig) opposes a new anti-piracy bill that "proposes to have the US Copyright Office mandate that all websites accepting user-uploaded material implement technologies to automatically filter that content." We've long believed that these kinds of mandates are overbroad, speech-limiting, and bad for both creators and reusers. (We're joined in this view by others such as Techdirt, Public Knowledge, and EFF, who have already stated their opposition.)

But one part of this attempt stands out to us: the list of "myths" Sen. Tillis released to accompany the bill. In particular, Tillis lists the concern that it is a "filtering mandate that will chill free speech and harm users" as a myth instead of a true danger to free expression, and he cites the existence of CC's metadata as support for his position.

Creative Commons is strongly opposed to mandatory content filtering measures. And we particularly object to having our work and our name used to imply support for a measure that undermines free expression which CC seeks to protect....

Limitations and exceptions are a crucial feature of a copyright system that truly serves the public, and filter mandates fail to respect them. Because of this, licensing metadata should not be used as a mandatory upload filter, and especially not CC license data. We do not support or endorse the measures in this bill, and we object to having our name used to imply otherwise.

Privacy

It's Back: Senators Want 'EARN IT' Bill To Scan All Online Messages (eff.org) 212

A group of lawmakers have re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that "would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe," writes Joe Mullin via the Electronic Frontier Foundation. "It's a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online -- backups, websites, cloud photos, and more -- is scanned." From the report: The bill empowers every U.S. state or territory to create sweeping new Internet regulations, by stripping away the critical legal protections for websites and apps that currently prevent such a free-for-all -- specifically, Section 230. The states will be allowed to pass whatever type of law they want to hold private companies liable, as long as they somehow relate their new rules to online child abuse. The goal is to get states to pass laws that will punish companies when they deploy end-to-end encryption, or offer other encrypted services. This includes messaging services like WhatsApp, Signal, and iMessage, as well as web hosts like Amazon Web Services. [...]

Separately, the bill creates a 19-person federal commission, dominated by law enforcement agencies, which will lay out voluntary "best practices" for attacking the problem of online child abuse. Regardless of whether state legislatures take their lead from that commission, or from the bill's sponsors themselves, we know where the road will end. Online service providers, even the smallest ones, will be compelled to scan user content, with government-approved software like PhotoDNA. If EARN IT supporters succeed in getting large platforms like Cloudflare and Amazon Web Services to scan, they might not even need to compel smaller websites -- the government will already have access to the user data, through the platform. [...] Senators supporting the EARN IT Act say they need new tools to prosecute cases over child sexual abuse material, or CSAM. But the methods proposed by EARN IT take aim at the security and privacy of everything hosted on the Internet.
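The scanning the EFF describes follows a simple general pattern: hash each uploaded item and compare it against a database of known-bad hashes. The sketch below illustrates that pattern with an exact cryptographic hash; note this is a simplified stand-in, as tools like PhotoDNA actually use perceptual hashes designed to survive resizing and re-encoding, and the blocklist contents here are hypothetical placeholders.

```python
# Simplified sketch of upload scanning via hash matching.
# Real systems (e.g. PhotoDNA) use perceptual hashes, not SHA-256.
import hashlib

# Hypothetical database of known-bad content hashes.
blocklist = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

def scan_upload(data: bytes) -> bool:
    """Return True if the upload matches a blocklisted hash."""
    return hashlib.sha256(data).hexdigest() in blocklist

print(scan_upload(b"known-bad-sample"))  # a match is reported
print(scan_upload(b"family photo"))      # no match
```

The privacy objection in the article follows directly from this design: every upload must be hashed and checked, so the scanner necessarily touches all user content, not just unlawful material.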

The Senators supporting the bill have said that their mass surveillance plans are somehow magically compatible with end-to-end encryption. That's completely false, no matter whether it's called "client side scanning" or another misleading new phrase. The EARN IT Act doesn't target Big Tech. It targets every individual internet user, treating us all as potential criminals who deserve to have every single message, photograph, and document scanned and checked against a government database. Since direct government surveillance would be blatantly unconstitutional and provoke public outrage, EARN IT uses tech companies -- from the largest ones to the very smallest ones -- as its tools. The strategy is to get private companies to do the dirty work of mass surveillance.

Slashdot Top Deals