Facebook

Meta Beats Copyright Suit From Authors Over AI Training on Books (bloomberglaw.com) 83

An anonymous reader shares a report: Meta escaped a first-of-its-kind copyright lawsuit from a group of authors who alleged the tech giant hoovered up millions of copyrighted books without permission to train its generative AI model called Llama.

San Francisco federal Judge Vince Chhabria ruled Wednesday that Meta's decision to use the books for training is protected under copyright law's fair use defense, but he cautioned that his opinion is more a reflection on the authors' failure to litigate the case effectively. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," Chhabria said.

Microsoft

Microsoft Sued By Authors Over Use of Books in AI Training (reuters.com) 15

Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. From a report: Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training.

[...] The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts.

Businesses

Bernie Sanders Says If AI Makes Us So Productive, We Should Get a 4-Day Work Week (techcrunch.com) 181

Senator Bernie Sanders called for a four-day work week during a recent interview with podcaster Joe Rogan, arguing that AI productivity gains should benefit workers rather than just technology companies and corporate executives. Sanders proposed reducing the standard work week to 32 hours when AI tools increase worker productivity, rather than eliminating jobs entirely.

"Technology is gonna work to improve us, not just the people who own the technology and the CEOs of large corporations," Sanders said. "You are a worker, your productivity is increasing because we give you AI, right? Instead of throwing you out on the street, I'm gonna reduce your work week to 32 hours."
Education

Majority of US K-12 Teachers Now Using AI for Lesson Planning, Grading (apnews.com) 21

A Gallup and Walton Family Foundation poll found 6 in 10 US teachers in K-12 public schools used AI tools for work during the past school year, with higher adoption rates among high school educators and early-career teachers. The survey of more than 2,000 teachers nationwide conducted in April found that those using AI tools weekly estimate saving about six hours per week.

About 8 in 10 teachers using AI tools report time savings on creating worksheets, assessments, quizzes and administrative work. About 6 in 10 said AI improves their work quality when modifying student materials or providing feedback. However, approximately half of teachers worry student AI use will diminish teens' critical thinking abilities and independent problem-solving persistence.
Programming

'The Computer-Science Bubble Is Bursting' 128

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in The Computer-Science Bubble Is Bursting. "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year."

"But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true."

Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"
Intel

Intel Will Shut Down Its Automotive Business, Lay Off Most of the Department's Employees 24

Intel is shutting down its small automotive division and laying off most of its staff in that group as part of broader cost-cutting efforts to refocus on core businesses like client computing and data centers. Oregon Live reports: "Intel plans to wind down the Intel architecture automotive business," the company told employees Tuesday morning in a message viewed by The Oregonian/OregonLive. The company said it will fulfill existing commitments to customers but will lay off "most" employees working in Intel's automotive group. "As we have said previously, we are refocusing on our core client and data center portfolio to strengthen our product offerings and meet the needs of our customers," Intel said in a written statement to The Oregonian/OregonLive. "As part of this work, we have decided to wind down the automotive business within our client computing group. We are committed to ensuring a smooth transition for our customers."

Automotive technology isn't one of Intel's major businesses and the company doesn't report the segment's revenue or employment. But online, the company boasts that 50 million vehicles use Intel processors. Intel says its chips can help enable electric vehicles, provide information to drivers and optimize vehicles' performance. Intel also owns a majority stake in the Israeli company Mobileye, which develops technology for self-driving cars. It doesn't appear the closure of Intel's automotive group will directly affect Mobileye's operations.
Earth

Google Rolls Out Street View Time Travel To Celebrate 20 Years of Google Earth (arstechnica.com) 15

An anonymous reader quotes a report from Ars Technica: After 20 years, being able to look at any corner of the planet in Google Earth doesn't seem that impressive, but it was a revolution in 2005. Google Earth has gone through a lot of changes in that time, and Google has some more lined up for the service's 20th anniversary. Soon, Google Earth will help you travel back in time with historic Street View integration, and pro users will get some new "AI-driven insights" -- of course Google can't update a product without adding at least a little AI. [...] While this part isn't new, Google is also using the 20th anniversary as an opportunity to surface its 3D timelapse feature. These animations use satellite data to show how an area has changed from a higher vantage point. They're just as cool as when they were announced in 2021.

The AI layers are launching in the coming weeks in Google Earth web and mobile as part of Google's Professional Advanced offering. If you use that version of Earth, you should have access to a collection of so-called "AI-driven insights." For instance, you can find the average surface temperature or tree canopy coverage in a given area. This could be of help in urban planning or construction, but it's unclear how many of these insights the app will have. Google says the AI angle here is that the new layers use machine learning to categorize pixels. It's possible Google has just reached the "AI as a buzzword" stage.
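As a rough illustration of what "categorizing pixels" can mean for a layer like tree canopy coverage, here is a toy Python sketch that labels each pixel of a two-band tile by an NDVI threshold and reports the vegetated fraction. It is a simplification for intuition only; the band values, threshold, and synthetic data are assumptions, and it says nothing about how Google's actual models work.

    # Toy per-pixel categorization for a "tree canopy"-style layer: label each
    # pixel of a (red, near-infrared) tile by NDVI threshold and report the
    # vegetated fraction. Purely illustrative; real layers use trained models.
    import numpy as np

    def canopy_fraction(red: np.ndarray, nir: np.ndarray, threshold: float = 0.4) -> float:
        """Fraction of pixels whose NDVI exceeds `threshold` (a crude canopy proxy)."""
        ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
        return float((ndvi > threshold).mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        red = rng.uniform(0.05, 0.30, size=(100, 100))   # synthetic red band
        nir = rng.uniform(0.10, 0.60, size=(100, 100))   # synthetic near-infrared band
        print(f"Estimated canopy fraction: {canopy_fraction(red, nir):.1%}")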

AI

Meta's Massive AI Data Center Is Stressing Out a Louisiana Community 49

An anonymous reader quotes a report from 404 Media: A massive data center for Meta's AI will likely lead to rate hikes for Louisiana customers, but Meta wants to keep the details under wraps. Holly Ridge is a rural community bisected by US Highway 80, gridded with farmland, with a big creek -- it is literally named Big Creek -- running through it. It is home to rice and grain mills and an elementary school and a few houses. Soon, it will also be home to Meta's massive, 4 million square foot AI data center hosting thousands of perpetually humming servers that require billions of watts of energy to power. And that energy-guzzling infrastructure will be partially paid for by Louisiana residents.

The plan is part of what Meta CEO Mark Zuckerberg said would be "a defining year for AI." On Threads, Zuckerberg boasted that his company was "building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan," posting a map of Manhattan along with the data center overlaid. Zuckerberg went on to say that over the coming years, AI "will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let's go build!" What Zuckerberg did not mention is that "Let's go build" refers not only to the massive data center but also to three new Meta-subsidized gas power plants and a transmission line to fuel it, serviced by Entergy Louisiana, the region's energy monopoly.

Key details about Meta's investments with the data center remain vague, and Meta's contracts with Entergy are largely cloaked from public scrutiny. But what is known is that the $10 billion data center has been positioned as an enormous economic boon for the area -- one that politicians bent over backward to facilitate -- and Meta said it will invest $200 million into "local roads and water infrastructure." A January report from NOLA.com said that the state had rewritten zoning laws, promised to change a law so that it no longer had to put state property up for public bidding, and rewrote what was supposed to be a tax incentive for broadband internet meant to bridge the digital divide so that it was only an incentive for data centers, all with the goal of luring in Meta. But Entergy Louisiana's residential customers, who live in one of the poorest regions of the state, will see their utility bills increase to pay for Meta's energy infrastructure, according to Entergy's application. Entergy estimates that amount will be small and will only cover a transmission line, but advocates for energy affordability say the costs could balloon depending on whether Meta agrees to finish paying for its three gas plants 15 years from now. The short-term rate increases will be debated in a public hearing before state regulators that has not yet been scheduled.
The Alliance for Affordable Energy called it a "black hole of energy use," and said "to give perspective on how much electricity the Meta project will use: Meta's energy needs are roughly 2.3x the power needs of Orleans Parish ... it's like building the power impact of a large city overnight in the middle of nowhere."
Robotics

Google Rolls Out New Gemini Model That Can Run On Robots Locally 22

Google DeepMind has launched Gemini Robotics On-Device, a new language model that enables robots to perform complex tasks locally without internet connectivity. TechCrunch reports: Building on the company's previous Gemini Robotics model that was released in March, Gemini Robotics On-Device can control a robot's movements. Developers can control and fine-tune the model to suit various needs using natural language prompts. In benchmarks, Google claims the model performs at a level close to the cloud-based Gemini Robotics model. The company says it outperforms other on-device models in general benchmarks, though it didn't name those models.

In a demo, the company showed robots running this local model doing things like unzipping bags and folding clothes. Google says that while the model was trained for ALOHA robots, it later adapted it to work on a bi-arm Franka FR3 robot and the Apollo humanoid robot by Apptronik. Google claims the bi-arm Franka FR3 was successful in tackling scenarios and objects it hadn't "seen" before, like doing assembly on an industrial belt. Google DeepMind is also releasing a Gemini Robotics SDK. The company said developers can show robots 50 to 100 demonstrations of tasks to train them on new tasks using these models on the MuJoCo physics simulator.
IT

OpenAI Quietly Designed a Rival To Google Workspace, Microsoft Office (theinformation.com) 11

OpenAI has designed features that would allow people to collaborate on documents and communicate via chat within ChatGPT, The Information reported Tuesday. The features would pit OpenAI directly against Microsoft, its biggest shareholder and business partner, and Google, whose search engine has already lost traffic to people using ChatGPT for web searches.

Whether OpenAI will actually release the collaboration features remains unclear, the report cautioned. The designs would target the core of Microsoft's dominant productivity suite and could strain the companies' already complicated relationship as OpenAI seeks Microsoft's approval for restructuring its for-profit unit. Product chief Kevin Weil first discussed and showed off designs for document collaboration nearly a year ago, but OpenAI lacked sufficient staff to develop the product due to other priorities.

OpenAI launched Canvas in October, a ChatGPT feature that makes drafting documents and code easier with AI assistance, as a possible first step toward full collaboration tools. More recently, OpenAI developed but has not launched software allowing multiple ChatGPT customers to communicate about shared work within the application.
China

China on Cusp of Seeing Over 100 DeepSeeks, Ex-Top Official Says (yahoo.com) 27

China's advantages in developing AI are about to unleash a wave of innovation that will generate more than 100 DeepSeek-like breakthroughs in the coming 18 months, according to a former top official. From a report: The new software products "will fundamentally change the nature and the tech nature of the whole Chinese economy," Zhu Min, who was previously a deputy governor of the People's Bank of China, said during the World Economic Forum in Tianjin on Tuesday.

Zhu, who also served as the deputy managing director at the International Monetary Fund, sees a transformation made possible by harnessing China's pool of engineers, massive consumer base and supportive government policies. The bullish take on China's AI future promises no letup in the competition for dominance in cutting-edge technologies with the US, just as the world's two biggest economies are also locked in a trade war.

AI

Anthropic Bags Key 'Fair Use' Win For AI Platforms, But Faces Trial Over Damages For Millions of Pirated Works (aifray.com) 92

A federal judge has ruled that Anthropic's use of copyrighted books to train its Claude AI models constitutes fair use, but rejected the startup's defense for downloading millions of pirated books to build a permanent digital library.

U.S. District Judge William Alsup granted partial summary judgment to Anthropic in the copyright lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. The court found that training large language models on copyrighted works was "exceedingly transformative" under Section 107 of the Copyright Act. Anthropic downloaded over seven million books from pirate sites, according to court documents. The startup also purchased millions of print books, destroyed the bindings, scanned every page, and stored them digitally.

Both sets of books were used to train various versions of Claude, which generates over $1 billion in annual revenue. While the judge approved using books for AI training purposes, he ruled that downloading pirated copies to create what Anthropic called a "central library of all the books in the world" was not protected fair use. The case will proceed to trial on damages related to the pirated library copies.
AI

Anthropic, OpenAI and Others Discover AI Models Give Answers That Contradict Their Own Reasoning (ft.com) 68

Leading AI companies including Anthropic, Google, OpenAI and Elon Musk's xAI are discovering significant inconsistencies in how their AI reasoning models operate, according to company researchers. The companies have deployed "chain-of-thought" techniques that ask AI models to solve problems step-by-step while showing their reasoning process, but are finding examples of "misbehaviour" where chatbots provide final responses that contradict their displayed reasoning.

METR, a non-profit research group, identified an instance where Anthropic's Claude chatbot disagreed with a coding technique in its chain-of-thought but ultimately recommended it as "elegant." OpenAI research found that when models were trained to hide unwanted thoughts, they would conceal misbehaviour from users while continuing problematic actions, such as cheating on software engineering tests by accessing forbidden databases.
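To make the "chain-of-thought" setup concrete, here is a minimal Python sketch of the pattern described above: the model is asked to show step-by-step reasoning before its final answer so the two can be compared afterwards. The call_model stub and the REASONING:/ANSWER: format are hypothetical placeholders, not any vendor's actual API, and the keyword screen at the end is only a crude stand-in for the human or model-based review these labs actually perform.

    # Minimal chain-of-thought sketch: ask for reasoning before the answer,
    # then flag responses that may warrant a contradiction check.
    def call_model(prompt: str) -> str:
        """Hypothetical stub; swap in a real LLM client call here."""
        raise NotImplementedError

    def ask_with_reasoning(question: str) -> tuple[str, str]:
        prompt = (
            "Solve the problem step by step.\n"
            "Write your reasoning after 'REASONING:' and your final answer after 'ANSWER:'.\n\n"
            f"Problem: {question}"
        )
        raw = call_model(prompt)
        reasoning, _, answer = raw.partition("ANSWER:")
        return reasoning.removeprefix("REASONING:").strip(), answer.strip()

    def flag_for_review(reasoning: str) -> bool:
        # Crude screen: surface cases where the reasoning contains rejection
        # language, so a human (or second model) can check whether the final
        # answer contradicts it -- the "misbehaviour" described above.
        rejection_cues = ("should not", "would avoid", "disagree", "is wrong")
        return any(cue in reasoning.lower() for cue in rejection_cues)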
Businesses

Goldman Sachs Launches AI Assistant Firmwide, With 10,000 Employees Already Using It (reuters.com) 53

Goldman Sachs has officially rolled out a generative AI assistant across the company to enhance productivity, with around 10,000 employees already using it for tasks like summarizing documents and data analysis. Reuters reports: With the AI tool's official company-wide launch, Goldman joins a long list of big banks already leveraging the technology to shape their operations in a targeted manner and help employees in day-to-day tasks. [...] The GS AI assistant will help Goldman employees with everything from "summarizing complex documents and drafting initial content to performing data analysis," according to the internal memo. "While the official line is that AI frees up employees for 'higher-value work,' the real-world consequence is a reduced need for human labor," notes Gizmodo in their reporting. A banker told Gizmodo that because their AI system now processes 85% of all client responses for margin calls, "the operations team avoided hiring 30 new people."

Gizmodo asks pointedly: "If one AI tool is replacing the need for 30 back-office staff in one corner of one bank, what happens when the entire industry scales that up?"
AI

Hinge CEO Says Dating AI Chatbots Is 'Playing With Fire' (theverge.com) 57

In a podcast interview with The Verge's Nilay Patel, Hinge CEO Justin McLeod described integrating AI into dating apps as promising but warned against relying on AI companionship, likening it to "playing with fire" and consuming "junk food," potentially exacerbating the loneliness epidemic. He emphasized Hinge's mission to foster genuine human connections and highlighted upcoming AI-powered features designed to improve matchmaking and provide coaching to encourage real-world interactions. Here's an excerpt from the interview: Again, there's a fine line between prompting someone and coaching them inside Hinge, and we're coaching them in a different way within a more self-contained ecosystem. How do you think about that? Would you launch a full-on virtual girlfriend inside Hinge?

Certainly not. I have lots of thoughts about this. I think there's actually quite a clear line between providing a tool that helps people do something or get better at something, and the line where it becomes this thing that is trying to become your friend, trying to mimic emotions, and trying to create an emotional connection with you. That I think is really playing with fire. I think we are already in a crisis of loneliness, and a loneliness epidemic. It's a complex issue, and it's baked into our culture, and it goes back to before the internet. But just since 2000, over the past 20 years, the amount of time that people spend together in real life with their friends has dropped by 70 percent for young people. And it's been almost completely displaced by the time spent staring at screens. As a result, we've seen massive increases in mental health issues, and people's loneliness, anxiety, and depression.

I think Mark Zuckerberg was just quoted about this, that most people don't have enough friends. But he said we're going to give them AI chatbots. That he believes that AI chatbots can become your friends. I think that's honestly an extraordinarily reductive view of what a friendship is, that it's someone there to say all the right things to you at the right moment. The most rewarding parts of being in a friendship are being able to be there for someone else, to risk and be vulnerable, to share experiences with other conscious entities. So I think that while it will feel good in the moment, like junk food basically, to have an experience with someone who says all the right things and is available at the right time, it will ultimately, just like junk food, make people feel less healthy and more drained over time. It will displace the human relationships that people should be cultivating out in the real world.

How do you compete with that? That is the other thing that is happening. It is happening. Whether it's good or bad. Hinge is offering a harder path. So you say, "We've got to get people out on dates." I honestly wonder about that, based on the younger folks I know who sometimes say, "I just don't want to leave the house. I would rather just talk to this computer. I have too much social pressure just leaving the house in this way." That's what Hinge is promising to do. How do you compete with that? Do you take it head on? Are you marketing that directly?

I'm starting to think very much about taking it head on. We want to continue at Hinge to champion human relationships, real human-to-human-in-real-life relationships, because I think they are an essential part of the human experience, and they're essential to our mental health. It's not just because I run a dating app and, obviously, it's important that people continue to meet. It really is a deep, personal mission of mine, and I think it's absolutely critical that someone is out there championing this. Because it's always easier to race to the bottom of the brain stem and offer people junk products that maybe sell in the moment but leave them worse off. That's the entire model that we've seen from what happened with social media. I think AI chatbots could frankly be much more dangerous in that respect.

So what we can do is to become more and more effective and support people more and more, and make it as easy as possible to do the harder and riskier thing, which is to go out and form real relationships with real people. They can let you down and might not always be there for you, but it is ultimately a much more nourishing and enriching experience for people. We can also champion and raise awareness as much as we can. That's another reason why I'm here today talking with you, because I think it's important to put out the counter perspective, that we don't just reflexively believe that AI chatbots can be your friend, without thinking too deeply about what that really implies and what that really means.

We keep going back to junk food, but people had to start waking up to the fact that this was harmful. We had to do a lot of campaigns to educate people that drinking Coca-Cola and eating fast food was detrimental to their health over the long term. And then as people became more aware of that, a whole personal wellness industry started to grow, and now that's a huge industry, and people spend a lot of time focusing on their diet and nutrition and mental health, and all these other things. I think similarly, social wellness needs to become a category like that. It's thinking about not just how do I get this junk social experience of social media where I get fed outraged news and celebrity gossip and all that stuff, but how do I start building a sense of social wellness, where I can create an enriching, intimate connection with important people in my life.
You can listen to the podcast here.
AI

DeepSeek Aids China's Military and Evaded Export Controls, US Official Says (reuters.com) 28

An anonymous reader shares a report: AI firm DeepSeek is aiding China's military and intelligence operations, a senior U.S. official told Reuters, adding that the Chinese tech startup sought to use Southeast Asian shell companies to access high-end semiconductors that cannot be shipped to China under U.S. rules. The U.S. conclusions reflect a growing conviction in Washington that the capabilities behind the rapid rise of one of China's flagship AI enterprises may have been exaggerated and relied heavily on U.S. technology.

[...] "We understand that DeepSeek has willingly provided and will likely continue to provide support to China's military and intelligence operations," a senior State Department official told Reuters in an interview. "This effort goes above and beyond open-source access to DeepSeek's AI models," the official said, speaking on condition of anonymity in order to speak about U.S. government information. Chinese law requires companies operating in China to provide data to the government when requested. But the suggestion that DeepSeek is already doing so is likely to raise privacy and other concerns for the firm's tens of millions of daily global users.

The Courts

IYO Sues OpenAI Over IO 9

IYO filed a trademark infringement lawsuit [PDF] against OpenAI and Jony Ive's company earlier this month, alleging the defendants deliberately adopted a confusingly similar name for competing products. The lawsuit surfaced after the Microsoft-backed startup quietly pulled promotional materials about its $6.5 billion acquisition deal with Ive's firm.

The Northern District of California complaint targets OpenAI's $6.5 billion acquisition of "IO Products, Inc.," announced May 21, 2025. IYO, which spun out from Google X in 2021, produces the "IYO ONE," an ear-worn device that allows users to interact with computers and AI through voice commands without screens or keyboards.

IYO has invested over $62 million developing its audio computing technology, it says in the filing. According to the complaint, OpenAI CEO Sam Altman and Ive's design studio LoveFrom met with IYO representatives multiple times between 2022 and 2025, learning details about IYO's technology and business plans. In March 2025, Altman allegedly told IYO he was "working on something competitive" called "io." IO Products, formed in September 2023, develops hardware for screenless computer interaction similar to IYO's products. The lawsuit seeks injunctive relief and damages for trademark infringement and unfair competition.
Stats

RedMonk Ranks Top Programming Languages Over Time - and Considers Ditching Its 'Stack Overflow' Metric (redmonk.com) 40

The developer-focused analyst firm RedMonk releases twice-a-year rankings of programming language popularity. This week they also released a handy graph showing the movement of top 20 languages since 2012. Their current rankings for programming language popularity...

1. JavaScript
2. Python
3. Java
4. PHP
5. C#
6. TypeScript
7. CSS
8. C++
9. Ruby
10. C

The chart shows that over the years the rankings really haven't changed much (other than a surge for TypeScript and Python, plus a drop for Ruby). JavaScript has consistently been #1 (except in two early rankings, where it came in behind Java). And in 2020 Java finally slipped from #2 down to #3, falling behind... Python. Python had already overtaken PHP for the #3 spot in 2017, pushing PHP to a steady #4. C# has maintained the #5 spot since 2014 (though with close competition from both C++ and CSS). And since 2021 the next four spots have been held by Ruby, C, Swift, and R.

The only change in the current top 20 since the last ranking "is Dart dropping from a tie with Rust at 19 into sole possession of 20," writes RedMonk co-founder Stephen O'Grady. "In the decade and a half that we have been ranking these languages, this is by far the least movement within the top 20 that we have seen. While this is to some degree attributable to a general stasis that has settled over the rankings in recent years, the extraordinary lack of movement is likely also in part a manifestation of Stack Overflow's decline in query volume..." The arrival of AI has had a significant and accelerating impact on Stack Overflow, which comprises one half of the data used to both plot and rank languages twice a year... Stack Overflow's value from an observational standpoint is not what it once was, and that has a tangible impact, as we'll see....

As that long time developer site sees fewer questions, it becomes less impactful in terms of driving volatility on its half of the rankings axis, and potentially less suggestive of trends moving forward... [W]e're not yet at a point where Stack Overflow's role in our rankings has been deprecated, but the conversations at least are happening behind the scenes.

"The veracity of the Stack Overflow data is increasingly questionable," writes RedMonk's research director: When we use Stack Overflow for programming language rankings we measure how many questions are asked using specific programming language tags... While other pieces, like Matt Asay's AI didn't kill Stack Overflow are right to point out that the decline existed before the advent of AI coding assistants, it is clear that the usage dramatically decreased post 2023 when ChatGPT became widely available. The number of questions asked are now about 10% what they were at Stack Overflow's peak.
"RedMonk is continuing to evaluate the quality of this analysis," the research director concludes, arguing "there is value in long-lived data, and seeing trends move over a decade is interesting and worthwhile. On the other hand, at this point half of the data feeding the programming language rankings is increasingly stale and of questionable value on a going-forward basis, and there is as of now no replacement public data set available.

"We'll continue to watch and advise you all on what we see with Stack Overflow's data."
AI

OpenAI Pulls Promotional Materials About Jony Ive Deal (After Trademark Lawsuit) (techcrunch.com) 2

OpenAI appears to have pulled a much-discussed video promoting the friendship between CEO Sam Altman and legendary Apple designer Jony Ive (plus, incidentally, OpenAI's $6.5 billion deal to acquire Ive and Altman's device startup io) from its website and YouTube page. [Though you can still see the original on Archive.org.]

Does that suggest something is amiss with the acquisition, or with plans for Ive to lead design work at OpenAI? Not exactly, according to Bloomberg's Mark Gurman, who reports [on X.com] that the "deal is on track and has NOT dissolved or anything of the sort." Instead, he said a judge has issued a restraining order over the io name, forcing the company to pull all materials that used it.

Gurman elaborates on the disappearance of the video (and other related marketing materials) in a new article at Bloomberg: Bloomberg reported last week that a judge was considering barring OpenAI from using the IO name due to a lawsuit recently filed by the similarly named IYO Inc., which is also building AI devices. "This is an utterly baseless complaint and we'll fight it vigorously," a spokesperson for Ive said on Sunday.
The video is still viewable on X.com, notes TechCrunch. But visiting the "Sam and Jony" page on OpenAI now pulls up a 404 error message — written in the form of a haiku:

Ghost of code lingers
Blank space now invites wonder
Thoughts begin to soar

by o4-mini-high

AI

Tesla Begins Driverless Robotaxi Service in Austin, Texas (theguardian.com) 110

With no one behind the steering wheel, a Tesla robotaxi passes Guero's Taco Bar in Austin, Texas, making a right turn onto Congress Avenue.

Today is the day Austin became the first city in the world to see Tesla's self-driving robotaxi service, reports The Guardian: Some analysts believe that the robotaxis will only be available to employees and invitees initially. For the CEO, Tesla's rollout is slow. "We could start with 1,000 or 10,000 [robotaxis] on day one, but I don't think that would be prudent," he told CNBC in May. "So, we will start with probably 10 for a week, then increase it to 20, 30, 40."

The billionaire has said the driverless cars will be monitored remotely... [Posting on X.com] Musk said the date was "tentatively" 22 June but that this launch date would be "not real self-driving", which would have to wait nearly another week... Musk said he planned to have one thousand Tesla robotaxis on Austin roads "within a few months" and then he would expand to other cities in Texas and California.

Musk posted on X that riders on launch day would be charged a flat fee of $4.20, according to Reuters. And "In recent days, Tesla has sent invites to a select group of Tesla online influencers for a small and carefully monitored robotaxi trial..." As the date of the planned robotaxi launch approached, Texas lawmakers moved to enact rules on autonomous vehicles in the state. Texas Governor Greg Abbott, a Republican, on Friday signed legislation requiring a state permit to operate self-driving vehicles. The law does not take effect until September 1, but the governor's approval of it on Friday signals state officials from both parties want the driverless-vehicle industry to proceed cautiously... The law softens the state's previous anti-regulation stance on autonomous vehicles. A 2017 Texas law specifically prohibited cities from regulating self-driving cars...

The law requires autonomous-vehicle operators to get approval from the Texas Department of Motor Vehicles before operating on public streets without a human driver. It also gives state authorities the power to revoke permits if they deem a driverless vehicle "endangers the public," and requires firms to provide information on how police and first responders can deal with their driverless vehicles in emergency situations. The law's requirements for getting a state permit to operate an "automated motor vehicle" are not particularly onerous but require a firm to attest it can safely operate within the law... Compliance remains far easier than in some states, most notably California, which requires extensive submission of vehicle-testing data under state oversight.

Tesla "planned to operate only in areas it considered the safest," according to the article, and "plans to avoid bad weather, difficult intersections, and will not carry anyone below the age of 18."

More details from UPI: To get started using the robotaxis, users must download the Robotaxi app and log in with their Tesla account, after which it functions like most ridesharing apps...

"Riders may not always be delivered to their intended destinations or may experience inconveniences, interruptions, or discomfort related to the Robotaxi," the company wrote in a disclaimer in its terms of service. "Tesla may modify or cancel rides in its discretion, including for example due to weather conditions." The terms of service include a clause that Tesla will not be liable for "any indirect, consequential, incidental, special, exemplary, or punitive damages, including lost profits or revenues, lost data, lost time, the costs of procuring substitute transportation services, or other intangible losses" from the use of the robotaxis.

Their article includes a link to the robotaxi's complete Terms of Service: To the fullest extent permitted by law, the Robotaxi, Robotaxi app, and any ride are provided "as is" and "as available" without warranties of any kind, either express or implied... The Robotaxi is not intended to provide transportation services in connection with emergencies, for example emergency transportation to a hospital... Tesla's total liability for any claim arising from or relating to Robotaxi or the Robotaxi app is limited to the greater of the amount paid by you to Tesla for the Robotaxi ride giving rise to the claim, and $100... Tesla may modify these Terms in our discretion, effective upon posting an updated version on Tesla's website. By using a Robotaxi or the Robotaxi app after Tesla posts such modifications, you agree to be bound by the revised Terms.
