Social Networks

The Tumblr Revival is Real - and Gen Z is Leading the Charge (fastcompany.com)

"Gen Z is rediscovering Tumblr — a chaotic, cozy corner of the internet untouched by algorithmic gloss and influencer overload..." writes Fast Company, "embracing the platform as a refuge from an internet saturated with influencers and algorithm fatigue." Thanks to Gen Z, the site has found new life. As of 2025, Gen Z makes up 50% of Tumblr's active monthly users and accounts for 60% of new sign-ups, according to data shared with Business Insider's Amanda Hoover, who recently reported on the platform's resurgence. User numbers spiked in January during the near-ban of TikTok and jumped again last year when Brazil temporarily banned X. In response, Tumblr users launched dedicated communities to archive and share their favorite TikToks...

To keep up with the momentum, Tumblr introduced Reddit-style Communities in December, letting users connect over shared interests like photography and video games. In January, it debuted Tumblr TV — a TikTok-like feature that serves as both a GIF search engine and a short-form video platform. But perhaps Tumblr's greatest strength is that it isn't TikTok or Facebook. Currently the 10th most popular social platform in the U.S., according to analytics firm Similarweb, Tumblr is dwarfed by giants like Instagram and X. For its users, though, that's part of the appeal.

First launched in 2007, Tumblr peaked at over 100 million users in 2014, according to the article. Trends like Occupy Wall Street had been born on Tumblr, notes Business Insider, calling the blogging platform "Gen Z's safe space... as the rest of the social internet has become increasingly commodified, polarized, and dominated by lifestyle influencers." Tumblr was also "one of the most hyped startups in the world before fading into obsolescence — bought by Yahoo for $1.1 billion in 2013... then acquired by Verizon, and later offloaded for fractions of pennies on the dollar in a distressed sale."

"That same Tumblr, a relic of many millennials' formative years, has been having a moment among Gen Z..." "Gen Z has this romanticism of the early-2000s internet," says Amanda Brennan, an internet librarian who worked at Tumblr for seven years, leaving her role as head of content in 2021... Part of the reason young people are hanging out on old social platforms is that there's nowhere new to go. The tech industry is evolving at a slower pace than it was in the 2000s, and there's less room for disruption. Big Tech has a stranglehold on how we socialize. That leaves Gen Z to pick up the scraps left by the early online millennials and attempt to craft them into something relevant. They love Pinterest (founded in 2010) and Snapchat (2011), and they're trying out digital point-and-shoot cameras and flip phones for an early-2000s aesthetic — and learning the valuable lesson that sometimes we look better when blurrier.

More Gen Zers and millennials are signing up for Yahoo. Napster, surprising many people with its continued existence, just sold for $207 million. The trend is fueled by nostalgia for Y2K aesthetics and a longing for a time when people could make mistakes on the internet and move past them. The pandemic also brought more Gen Z users to Tumblr...

And Tumblr still works much like an older internet, where people have more control over what they see and rely less on algorithms. "You curate your own stuff; it takes a little bit of work to put everything in place, but when it's working, you see the content you want to see," says Fjodor Everaerts, a 26-year-old in Belgium who has made some 250,000 posts since he joined Tumblr when he was 14... Under Automattic, Tumblr is finally in the home that serves it, [says Ari Levine, the head of brand partnerships at Tumblr]. "We've had ups and downs along the way, but we're in the most interesting position and place that we've been in 18 years," he says... And following media companies (including Business Insider) and social platforms like Reddit, Automattic in 2024 struck a deal with OpenAI and Midjourney to allow the systems to train on Tumblr posts.


"The social internet is fractured," the article argues. ("Millennials are running Reddit. Gen Xers and Baby Boomers have a home on Facebook. Bluesky, one of the new X alternatives, has a tangible elder-millennial/Gen X vibe. Gen Zers have created social apps like BeReal and the Myspace-inspired Noplace, but they've so far generated more hype than influence....")

But in a world where megaplatforms "flatten our online experiences and reward content that fits a mold," the article suggests, "smaller communities can enrich them."
Movies

'Minecraft Movie' Scores Biggest Videogame Movie Opening Ever, Faces Early Leaks Online (variety.com)

It was already the best-selling videogame of all time, notes the Hollywood Reporter. And A Minecraft Movie just had the biggest opening ever for a video game movie adaptation. With a production budget of $150 million, it earned $157 million in just its first weekend in the U.S., with a worldwide total of $301 million.

A Warner Bros. executive called the movie "lightning in a bottle," while the head of co-producer Legendary Pictures acknowledged the game is a global phenomenon, according to the article. (About the movie's performance, the executive "said the opening is both a reflection of the mandate to celebrate the world of Minecraft in a joyful way, and the singular experience that only theatrical can offer.")

But an unfinished version leaked online before the movie was even released, reports Variety. Screenshots and footage from the fantasy adventure were being shared widely on social media platforms this week, and were also available on file-sharing sites. The images and scenes have uncompleted visual effects. Most of the footage was quickly taken down by the rights holders. Although pirated footage is a common problem for major film releases, it's rare for a working print to leak online in this way, raising questions about how such an early version of the movie was accessed, stolen and then shared.
Books

Ian Fleming Published the James Bond Novel 'Moonraker' 70 Years Ago Today (cbr.com)

"The third James Bond novel was published on this day in 1955," writes long-time Slashdot reader sandbagger. Film buff Christian Petrozza shares some history: In 1979, the market was hot amid the studios to make the next big space opera. Star Wars blew up the box office in 1977 with Alien soon following and while audiences eagerly awaited the next installment of George Lucas' The Empire Strikes Back, Hollywood was buzzing with spacesuits, lasers, and ships that cruised the stars. Politically, the Cold War between the United States and Russia was still a hot topic, with the James Bond franchise fanning the flames in the media entertainment sector. Moon missions had just finished their run in the early 70s and the space race was still generationally fresh. With all this in mind, as well as the successful run of Roger Moore's fun and campy Bond, the time seemed ripe to boldly take the globe-trotting Bond where no spy has gone before.

Thus, 1979's Moonraker blasted off to theatres. Full of chrome spacesuits, laser guns, and jetpacks, the franchise went full-bore science fiction to keep up with the Joneses of Hollywood's hottest genre. The film was a commercial smash hit, grossing $210 million worldwide. Despite some mixed reviews from critics, audiences seemed jazzed about seeing James Bond in space.

When it comes to adaptations of the novel that Ian Fleming wrote of the same name, Moonraker couldn't be farther from its source material, and may as well be renamed completely to avoid any association... Ian Fleming's original Moonraker was more of a post-war commentary on the domestic fear of modern weapons being turned on Europe by new foes employing the scientists of old ones. With Nazi scientists being hired by both the U.S. and Russia to build weapons of mass destruction after World War II, this was less science fiction and much more a cautionary tale.

Petrozza argues that filming a new version of Moonraker could "find a happy medium between the glamor and the grit of the James Bond franchise..."
AI

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com)

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. But Ars Technica summarizes OpenAI's response. The New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said, the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports that ChatGPT could regurgitate portions of paywalled news articles, and that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."

Wikipedia

Wikimedia Drowning in AI Bot Traffic as Crawlers Consume 65% of Resources

Web crawlers collecting training data for AI models are overwhelming Wikipedia's infrastructure, with bot traffic growing exponentially since early 2024, according to the Wikimedia Foundation. According to data released April 1, bandwidth for multimedia content has surged 50% since January, primarily from automated programs scraping Wikimedia Commons' 144 million openly licensed media files.

This unprecedented traffic is causing operational challenges for the non-profit. When Jimmy Carter died in December 2024, his Wikipedia page received 2.8 million views in a day, while a 1.5-hour video of his 1980 presidential debate caused network traffic to double, resulting in slow page loads for some users.

Analysis shows 65% of the foundation's most resource-intensive traffic comes from bots, despite bots accounting for only 35% of total pageviews. The foundation's Site Reliability team now routinely blocks overwhelming crawler traffic to prevent service disruptions. "Our content is free, our infrastructure is not," the foundation said, announcing plans to establish sustainable boundaries for automated content consumption.
Security

Hackers Strike Australia's Largest Pension Funds in Coordinated Attacks (reuters.com)

Hackers targeting Australia's major pension funds in a series of coordinated attacks have stolen savings from some members at the biggest fund and compromised more than 20,000 accounts, Reuters reports, citing a source. From the report: National Cyber Security Coordinator Michelle McGuinness said in a statement she was aware of "cyber criminals" targeting accounts in the country's A$4.2 trillion ($2.63 trillion) retirement savings sector and was organising a response across the government, regulators and industry. The Association of Superannuation Funds of Australia, the industry body, said "a number" of funds were impacted over the weekend. While the full scale of the incident remains unclear, AustralianSuper, Australian Retirement Trust, Rest, Insignia and Hostplus on Friday all confirmed they suffered breaches.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
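The amplified-oversight idea described above can be sketched as a generate/critique loop. This is an invented illustration, not DeepMind's implementation; the function names and toy model stubs are assumptions made for the example:

```python
def amplified_oversight(generate, critique, prompt, max_rounds=3):
    """Ask `generate` for an answer, then have an independent `critique`
    model review it; revise until the critic approves or rounds run out."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        objection = critique(prompt, answer)   # None means "no objection"
        if objection is None:
            return answer, True                # approved by the second model
        # Feed the objection back so the generator can revise its answer
        answer = generate(f"{prompt}\nRevise to address: {objection}")
    return answer, False                       # unresolved: escalate to humans

# Toy stand-ins for two model copies (purely illustrative):
gen = lambda p: "rm -rf /tmp/cache" if "Revise" in p else "rm -rf /"
crit = lambda p, a: "dangerous path" if a.strip() == "rm -rf /" else None

answer, approved = amplified_oversight(gen, crit, "clean the cache")
print(answer, approved)  # rm -rf /tmp/cache True
```

The design point is that neither copy is trusted alone: an answer only ships when an independent critic finds nothing to object to, and anything unresolved falls through to human oversight.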

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Media

AV1 is Supposed To Make Streaming Better, So Why Isn't Everyone Using It? (theverge.com)

Despite promises of more efficient streaming, the AV1 video codec hasn't achieved widespread adoption seven years after its 2018 debut, even with backing from tech giants Netflix, Microsoft, Google, Amazon, and Meta. The Alliance for Open Media (AOMedia) claims AV1 is 30% more efficient than standards like HEVC, delivering higher-quality video at lower bandwidth while remaining royalty-free.

Major services including YouTube, Netflix, and Amazon Prime Video have embraced the technology, with Netflix encoding approximately 95% of its content using AV1. However, adoption faces significant hurdles. Many streaming platforms including Max, Peacock, and Paramount Plus haven't implemented AV1, partly due to hardware limitations. Devices require specific decoders to properly support AV1, though recent products from Apple, Nvidia, AMD, and Intel have begun including them. "In order to get its best features, you have to accept a much higher encoding complexity," Larry Pearlstein, associate professor at the College of New Jersey, told The Verge. "But there is also higher decoding complexity, and that is on the consumer end."
AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna" and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs's AI-generated voice to narrate the instructions so the user doesn't have to interact with a device with tomato sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and Elevenlabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the open AI Ruby library, it quickly fell back to a raw ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kinds of recipes are and are not allowed, and that the site kept those results and showed them to other users.
Crime

Vast Pedophile Network Shut Down In Europol's Largest CSAM Operation (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Europol has shut down one of the largest dark web pedophile networks in the world, prompting dozens of arrests worldwide and threatening that more are to follow. Launched in 2021, KidFlix allowed users to join for free to preview low-quality videos depicting child sex abuse materials (CSAM). To see higher-resolution videos, users had to earn credits by sending cryptocurrency payments, uploading CSAM, or "verifying video titles and descriptions and assigning categories to videos."

Europol seized the servers and found a total of 91,000 unique videos depicting child abuse, "many of which were previously unknown to law enforcement," the agency said in a press release. KidFlix going dark was the result of the biggest child sexual exploitation operation in Europol's history, the agency said. Operation Stream, as it was dubbed, was supported by law enforcement in more than 35 countries, including the United States. Nearly 1,400 suspected consumers of CSAM have been identified among 1.8 million global KidFlix users, and 79 have been arrested so far. According to Europol, 39 child victims were protected as a result of the sting, and more than 3,000 devices were seized.

Police identified suspects through payment data after seizing the server. Despite cryptocurrencies offering a veneer of anonymity, cops were apparently able to use sophisticated methods to trace transactions to bank details. And in some cases cops defeated user attempts to hide their identities -- such as a man in Spain who made payments using his mother's name, local news outlet Todo Alicante reported. It likely helped that most suspects were already known offenders, Europol noted. Arrests spanned the globe, including 16 in Spain, where one computer scientist was found with an "abundant" amount of CSAM and payment receipts, Todo Alicante reported. Police also arrested a "serial" child abuser in the US, CBS News reported.

Encryption

European Commission Takes Aim At End-to-End Encryption and Proposes Europol Become an EU FBI (therecord.media)

The European Commission has announced its intention to join the ongoing debate about lawful access to data and end-to-end encryption while unveiling a new internal security strategy aimed at addressing ongoing threats. From a report: ProtectEU, as the strategy has been named, describes the general areas that the bloc's executive would like to address in the coming years, although as a strategy it does not offer any detailed policy proposals. In what the Commission called "a changed security environment and an evolving geopolitical landscape," it said Europe needed to "review its approach to internal security."

Among its aims is establishing Europol as "a truly operational police agency to reinforce support to Member States," something potentially comparable to the U.S. FBI, with a role "in investigating cross-border, large-scale, and complex cases posing a serious threat to the internal security of the Union." Alongside the new Europol, the Commission said it would create roadmaps regarding both the "lawful and effective access to data for law enforcement" and on encryption.

Social Networks

Amazon Said To Make a Bid To Buy TikTok in the US (nytimes.com)

An anonymous reader shares a report: Amazon has put in a last-minute bid to acquire all of TikTok, the popular video app, as it approaches an April deadline to be separated from its Chinese owner or face a ban in the United States, according to three people familiar with the bid.

Various parties who have been involved in the talks do not appear to be taking Amazon's bid seriously, the people said. The bid came via an offer letter addressed to Vice President JD Vance and Howard Lutnick, the commerce secretary, according to a person briefed on the matter. Amazon's bid highlights the 11th-hour maneuvering in Washington over TikTok's ownership. Policymakers in both parties have expressed deep national security concerns over the app's Chinese ownership, and passed a law last year to force a sale of TikTok that was set to take effect in January.

AI

OpenAI Accused of Training GPT-4o on Unlicensed O'Reilly Books (techcrunch.com)

A new paper [PDF] from the AI Disclosures Project claims OpenAI likely trained its GPT-4o model on paywalled O'Reilly Media books without a licensing agreement. The nonprofit organization, co-founded by O'Reilly Media CEO Tim O'Reilly himself, used a method called DE-COP to detect copyrighted content in language model training data.

Researchers analyzed 13,962 paragraph excerpts from 34 O'Reilly books, finding that GPT-4o "recognized" significantly more paywalled content than older models like GPT-3.5 Turbo. The technique, also known as a "membership inference attack," tests whether a model can reliably distinguish human-authored texts from paraphrased versions.

"GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O'Reilly books published prior to its training cutoff date," wrote the co-authors, which include O'Reilly, economist Ilan Strauss, and AI researcher Sruly Rosenblat.
YouTube

YouTube Could Be Worth $550 Billion as Analyst Crowns Platform 'New King of All Media' (thewrap.com)

MoffettNathanson has crowned YouTube the "New King of All Media" as the Alphabet-owned video platform has become a major force in Hollywood, dominating time spent watching TV. From a report: The firm estimates that YouTube as a standalone business could be worth as much as $550 billion -- or nearly 30% of the tech giant's current valuation. The figure is based on the firm's analysis of enterprise value as a multiple of revenue in 2024 for Netflix (10.5x revenue), Meta (8.8x), Roku (2.4x), Warner Bros. Discovery (1.4x), Fox (1.3x) and Disney (1.3x).

In 2024, YouTube was the second-largest media company by revenue at $54.2 billion, trailing behind only Disney. However, the MoffettNathanson analysts predict YouTube will take the top spot in 2025, becoming a leader in both engagement and revenue. "YouTube has the potential to become the central aggregator for all things professional video, positioning itself to capture a share of the $85 billion consumer Pay TV market and the ~$30 billion streaming ex. Netflix market in the U.S.," they wrote in a Monday research note. "On monetization, when comparing YouTube's massive TV screen engagement to its estimated TV revenue, it remains significantly under-monetized relative to its scaled reach and differentiated offering. This signals a substantial runway for improving its monetization strategy."

Privacy

FTC Says 23andMe Purchaser Must Uphold Existing Privacy Policy For Data Handling (therecord.media)

The FTC has warned that any buyer of 23andMe must honor the company's current privacy policy, which ensures consumers retain control over their genetic data and can delete it at will. FTC Chair Andrew Ferguson emphasized that such promises must be upheld, given the uniquely sensitive and immutable nature of genetic information. The Record reports: The letter, sent to the DOJ's United States Trustee Program, highlights several assurances 23andMe makes in its privacy policy, including that users are in control of their data and can determine how and for what purposes it is used. The company also gives users the ability to delete their data at will, the letter says, arguing that 23andMe has made "direct representations" to consumers about how it uses, shares and safeguards their personal information, including in the case of bankruptcy.

Pointing to statements that the company's leadership has made asserting that user data should be considered an asset, Ferguson highlighted that 23andMe's privacy statement tells users it does not share their data with insurers, employers, public databases or law enforcement without a court order, search warrant or subpoena. It also promises consumers that it only shares their personal data in cases where it is needed to provide services, Ferguson added. The genetic testing and ancestry company is explicit that its data protection guidelines apply to new entities it may be sold or transferred to, Ferguson said.

Social Networks

Arkansas Social Media Age Verification Law Blocked By Federal Judge (engadget.com)

A federal judge struck down Arkansas' Social Media Safety Act, ruling it unconstitutional for broadly restricting both adult and minor speech and imposing vague requirements on platforms. Engadget reports: In a ruling (PDF), Judge Timothy Brooks said that the law, known as Act 689 (PDF), was overly broad. "Act 689 is a content-based restriction on speech, and it is not targeted to address the harms the State has identified," Brooks wrote in his decision. "Arkansas takes a hatchet to adults' and minors' protected speech alike though the Constitution demands it use a scalpel." Brooks also highlighted the "unconstitutionally vague" applicability of the law, which seemingly created obligations for some online services, but may have exempted services which had the "predominant or exclusive function [of]... direct messaging" like Snapchat.

"The court confirms what we have been arguing from the start: laws restricting access to protected speech violate the First Amendment," NetChoice's Chris Marchese said in a statement. "This ruling protects Americans from having to hand over their IDs or biometric data just to access constitutionally protected speech online." It's not clear if state officials in Arkansas will appeal the ruling. "I respect the court's decision, and we are evaluating our options," Arkansas Attorney general Tim Griffin said in a statement.

Transportation

Xiaomi EV Involved in First Fatal Autopilot Crash (yahoo.com)

An anonymous reader quotes a report from Reuters: China's Xiaomi said on Tuesday that it was actively cooperating with police after a fatal accident involving an SU7 electric vehicle on March 29 and that it had handed over driving and system data. The incident marks the first major accident involving the SU7 sedan, which Xiaomi launched in March last year and which since December has outsold Tesla's Model 3 on a monthly basis. Xiaomi's shares, which had risen by 34.8% year to date, closed down 5.5% on Wednesday, underperforming a 0.2% gain in the Hang Seng Tech index. Xiaomi did not disclose the number of casualties but said initial information showed the car was in the Navigate on Autopilot intelligent-assisted driving mode before the accident and was moving at 116 kph (72 mph).

A driver inside the car took over and tried to slow it down, but the car then collided with a cement pole at a speed of 97 kph, Xiaomi said. The accident in Tongling in the eastern Chinese province of Anhui killed the driver and two passengers, Chinese financial publication Caixin reported on Tuesday, citing friends of the victims. In a rundown of the data submitted to local police, posted on a company Weibo account, Xiaomi said NOA issued a risk warning of obstacles ahead, and the driver's takeover happened only seconds before the collision. Local media reported that the car caught fire after the collision. Xiaomi did not mention the fire in the statement.
The report notes that the car was a "so-called standard version of the SU7, which has the less-advanced smart driving technology without LiDAR."
Biotech

Open Source Genetic Database Shuts Down To Protect Users From 'Authoritarian Governments' (404media.co) 28

An anonymous reader quotes a report from 404 Media: The creator of an open source genetic database is shutting it down and deleting all of its data because he has come to believe that its existence is dangerous with "a rise in far-right and other authoritarian governments" in the United States and elsewhere. "The largest use case for DTC genetic data was not biomedical research or research in big pharma," Bastian Greshake Tzovaras, the founder of OpenSNP, wrote in a blog post. "Instead, the transformative impact of the data came to fruition among law enforcement agencies, who have put the genealogical properties of genetic data to use."

OpenSNP has collected roughly 7,500 genomes over the last 14 years, primarily by allowing people to voluntarily submit genetic information they had downloaded from 23andMe. With the bankruptcy of 23andMe, increased interest in genetic data by law enforcement, and the return of Donald Trump and the rise of authoritarian governments worldwide, Greshake Tzovaras told 404 Media he no longer believes it is ethical to run the database. "I've been thinking about it since 23andMe was on the verge of bankruptcy and been really considering it since the U.S. election. It definitely is really bad over there [in the United States]," Greshake Tzovaras told 404 Media. "I am quite relieved to have made the decision and come to a conclusion. It's been weighing on my mind for a long time."

Greshake Tzovaras said that he is proud of the OpenSNP project, but that, in a world where scientific data is being censored and deleted and where the Trump administration has focused on criminalizing immigrants and trans people, he now believes that the most responsible thing to do is to delete the data and shut down the project. "Most people in OpenSNP may not be at particular risk right now, but there are people from vulnerable populations in here as well," Greshake Tzovaras said. "Thinking about gender representation, minorities, sexual orientation -- 23andMe has been working on the whole 'gay gene' thing, it's conceivable that this would at some point in the future become an issue."
"Across the globe there is a rise in far-right and other authoritarian governments. While they are cracking down on free and open societies, they are also dedicated to replacing scientific thought and reasoning with pseudoscience across disciplines," Greshake Tzovaras wrote. "The risk/benefit calculus of providing free & open access to individual genetic data in 2025 is very different compared to 14 years ago. And so, sunsetting openSNP -- along with deleting the data stored within it -- feels like it is the most responsible act of stewardship for these data today."

"The interesting thing to me is there are data preservation efforts in the U.S. because the government is deleting scientific data that they don't like. This is approaching that same problem from a different direction," he added. "We need to protect the people in this database. I am supportive of preserving scientific data and knowledge, but the data comes second -- the people come first. We prefer deleting the data."
Movies

Netflix CEO Says Movie Theaters Are Dead (semafor.com) 192

An anonymous reader shares a report: The post-Covid rebound of live events is all the more evidence that movie theaters are never coming back, Netflix co-CEO Ted Sarandos told Semafor in an interview at the Paley Center for Media Friday.

"Nearly every live thing has come back screaming," Sarandos said. "Broadway's breaking records right now, sporting events, concerts, all those things that we couldn't do during COVID are all back and bigger than ever. The theatrical box office is down 40 to 50% from pre-COVID, and this year is down 8% already, so the trend is not reversing. You've gotta look at that and say, 'What is the consumer trying to tell you?'"

AI

Bloomberg's AI-Generated News Summaries Had At Least 36 Errors Since January (nytimes.com) 25

The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "it hasn't always gone smoothly."

While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, "The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday they published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed president Trump had imposed tariffs on Canada in 2024, while other errors have included incorrect figures and incorrect attribution.) Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added....

John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter."

A Bloomberg spokeswoman told the Times that the feedback they'd received to the summaries had generally been positive — "and we continue to refine the experience."
