AI

Scammers' New Way of Targeting Small Businesses: Impersonating Them (wsj.com) 17

Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud.

Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

Apple

Apple Brings ChatGPT To Its Apps, Including Siri (techcrunch.com) 49

Apple is bringing ChatGPT, OpenAI's AI-powered chatbot experience, to Siri and other first-party apps and capabilities across its operating systems. From a report: "We're excited to partner with Apple to bring ChatGPT to their users in a new way," OpenAI CEO Sam Altman said in a statement. "Apple shares our commitment to safety and innovation, and this partnership aligns with OpenAI's mission to make advanced AI accessible to everyone." Soon, Siri will be able to tap ChatGPT for "expertise" where it might be helpful, Apple says. For example, if you need menu ideas for a meal to make for friends using some ingredients from your garden, you can ask Siri, and Siri will automatically feed that info to ChatGPT for an answer after you give it permission to do so. You can include photos with the questions you ask ChatGPT via Siri, or ask questions related to your docs or PDFs. Apple has also integrated ChatGPT into its system-wide Writing Tools, which let you create content with ChatGPT -- including images -- or take an initial idea and send it to ChatGPT to get a revision or variation back. Apple said ChatGPT within Apple's apps is free and data isn't being shared with the Microsoft-backed firm. ChatGPT subscribers can connect their accounts and access paid features right from these experiences, the company said.

Apple Intelligence -- Apple's effort to combine the power of generative models with personal context -- is free to Apple device owners but requires iOS 18 on iPhone 15 Pro, macOS 15 Sequoia, and iPadOS 18.
AI

Apple Unveils Apple Intelligence 29

As rumored, Apple today unveiled Apple Intelligence, its long-awaited push into generative artificial intelligence (AI), promising highly personalized experiences built with safety and privacy at its core. The feature, referred to simply as "AI," will be integrated into Apple's various operating systems, including iOS, macOS, and the latest, visionOS. CEO Tim Cook said that Apple Intelligence goes beyond artificial intelligence, calling it "personal intelligence" and "the next big step for Apple."

Apple Intelligence is built on large language and intelligence models, with much of the processing done locally on the latest Apple silicon. Private Cloud Compute is being added to handle more intensive tasks while maintaining user privacy. The update also includes significant changes to Siri, Apple's virtual assistant, which will now support typed queries and deeper integration into various apps, including third-party applications. This integration will enable users to perform complex tasks without switching between multiple apps.

Apple Intelligence will roll out to the latest versions of Apple's operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2.
XBox (Games)

Microsoft Confirms Cheaper All-Digital Xbox Series X As It Marches Beyond Physical Games (kotaku.com) 72

Microsoft has announced a new lineup of Xbox consoles, including an all-digital white Xbox Series X with a 1TB SSD, priced at $450. The company is also retiring the Carbon Black Series S, replacing it with a white version featuring a 1TB SSD and a $350 price point. Additionally, a new Xbox Series X with a disc drive and 2TB of storage will launch for $600.

The move comes as Microsoft continues to focus on digital gaming and subscription services like Game Pass, with reports suggesting that the PS5 is outselling Xbox Series consoles 2:1. The shift has led to minimal physical Xbox game sections in stores and some first-party titles, like Hellblade 2, not receiving physical releases. Despite rumors of a multiplatform approach, Microsoft maintains its commitment to its own gaming machines, promising a new "next-gen" console in the future, potentially utilizing generative-AI technology.

Further reading: Upcoming Games Include More Xbox Sequels - and a Medieval 'Doom'.
AI

Teams of Coordinated GPT-4 Bots Can Exploit Zero-Day Vulnerabilities, Researchers Warn (newatlas.com) 27

New Atlas reports on a research team that successfully used GPT-4 to exploit 87% of newly discovered security flaws for which a fix hadn't yet been released. This week the same team got even better results from a team of autonomous, self-propagating Large Language Model agents using a Hierarchical Planning with Task-Specific Agents (HPTSA) method: Instead of assigning a single LLM agent to solve many complex tasks, HPTSA uses a "planning agent" that oversees the entire process and launches multiple task-specific "subagents"... When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA proved 550% more efficient than a single LLM at exploiting vulnerabilities and was able to hack 8 of 15 zero-day vulnerabilities. The solo LLM effort was able to hack only 3 of the 15 vulnerabilities.
"Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace," the researchers conclude. "Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question.

"Beyond the immediate impact of our work, we hope that our work inspires frontier LLM providers to think carefully about their deployments."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Robotics

Dutch Police Test AI-Powered Robot Dog to Raid Drug Labs (interestingengineering.com) 29

"Police and search and rescue forces worldwide are increasingly using robots to assist in carrying out their operations," writes Interesting Engineering. "Now, the Dutch police are looking at employing AI-powered autonomous robot dogs in drug lab raids to protect officers from criminal risks, hazardous chemicals, and explosions."

New Scientist's Matthew Sparkes (also a long-time Slashdot reader) shares this report: Dutch police are planning to use an autonomous robotic dog in drug lab raids to avoid placing officers at risk from criminals, dangerous chemicals and explosions. If tests in mocked-up scenarios go well, the artificial intelligence-powered robot will be deployed in real raids, say police. Simon Prins at Politie Nederland, the Dutch police force, has been testing and using robots in criminal investigations for more than two decades, but says they are only now growing capable enough to be practical for more...
Some context from Interesting Engineering: The police force in the Netherlands carries out such raids at least three to four times a week... Since 2021, the force has already been using a Spot quadruped, fitted with a robotic arm, from Boston Dynamics to carry out drug raids and surveillance. However, the Spot is remotely controlled by a handler... [Significant technological advancements] have prompted the Dutch force to explore fully autonomous operations with Spot.

Reportedly, such AI-enabled autonomous robots are expected to inspect drug labs, ensure no criminals are present, map the area, and identify dangerous chemicals... Initial tests by the force suggest that Spot could explore and map a mock drug lab measuring 15 meters by 20 meters. It was able to find hazardous chemicals and put them away into a designated storage container.

Their article notes that Spot "can do laser scans and visual, thermal, radiation, and acoustic inspections using add-on payloads and onboard cameras." (A video from Boston Dynamics — the company behind Spot — also seems to show the robot dog spraying something on a fire.)

The video seems aimed at police departments, touting the robot dog's advantages for "safety and incident response":
  • Enables safer investigation of suspicious packages
  • Detection of hazardous chemicals
  • De-escalation of tense or dangerous situations
  • Get eyes on dangerous situations

It also notes the robot "can be operated from a safe distance," suggesting customers "Use Spot® to place cameras, radios, and more for tactical reconnaissance."

United States

Is Nuclear Power in America Reviving - or Flailing? (msn.com) 209

Last week America's energy secretary cheered the startup of a fourth nuclear reactor at a Georgia power plant, calling the plant "the largest producer of clean energy, and the largest producer of electricity in the United States" after a third reactor was started up there in December.

From the U.S. Energy Department's transcript of the speech: Each year, Units 3 and 4 are going to produce enough clean power to power 1 million homes and businesses, enough energy to power roughly 1 in 4 homes in Georgia. Preventing 10 million metric tons of carbon dioxide pollution annually. That, by the way, is like planting more than 165 million trees every year!

And that's not to mention the historic investments that [electric utility] Southern has made on the safety front, to ensure this facility meets — and exceeds — the highest operating standards in the world....

To reach our goal of net zero by 2050, we have to at least triple our current nuclear capacity in this country. That means we've got to add 200 more gigawatts by 2050. Okay, two down, 198 to go! In building [Unit] 4, we've solved our greatest design challenges. We've stood up entire supply chains.... And so it's time to cash in on our investments by building more. More of these facilities. The Department of Energy's Loan Programs Office stands ready to help, with hundreds of billions of dollars in what we call Title 17 loans... Since the President signed the Inflation Reduction Act and the Bipartisan Infrastructure Law, companies across the nation have announced 29 new or expanded nuclear facilities — across 16 states — representing about 1,600 potential new jobs. And the majority of those projects will expand the domestic uranium production and fuel fabrication, strengthening these critical supply chains...

Bottom line is, in short, we are determined to build a world-class nuclear industry in the United States, and we're putting our money where our mouth is.

America's Energy Secretary told the Washington Post that "Whether it happens through small modular reactors, or AP1000s, or maybe another design out there worthy of consideration, we want to see nuclear built." The Post notes the Energy department gave a $1.5 billion loan to restart a Michigan power plant which was decommissioned in 2022. "It would mark the first time a shuttered U.S. nuclear plant has been reactivated."

"But in this country with 54 nuclear plants across 28 states, restarting existing reactors and delaying their closure is a lot less complicated than building new ones." When the final [Georgia] reactor went online at the end of April, the expansion was seven years behind schedule and nearly $20 billion over budget. It ultimately cost more than twice as much as promised, with ratepayers footing much of the bill through surcharges and rate hikes...

Administration officials say the country has no choice but to make nuclear power a workable option again. The country is fast running short on electricity, demand for power is surging amid a boom in construction of data centers and manufacturing plants, and a neglected power grid is struggling to accommodate enough new wind and solar power to meet the nation's needs...

As the administration frames the narrative of the plant as one of perseverance and innovation that clears a path for restoring U.S. nuclear energy dominance, even some longtime boosters of the industry question whether this country will ever again have a vibrant nuclear energy sector. "It is hard for me to envision state energy regulators signing off on another one of these, given how badly the last ones went," said Matt Bowen, a nuclear scholar at the Center on Global Energy Policy at Columbia University, who was an adviser on nuclear energy issues in the Obama administration.

The article notes there are 19 AP1000 reactors (the design used at the Georgia plant) in development around the world. "None of them are being built in the United States."
Programming

Rust Growing Fastest, But JavaScript Reigns Supreme (thenewstack.io) 55

"Rust is the fastest-growing programming language, with its developer community doubling in size over the past two years," writes The New Stack, "yet JavaScript remains the most popular language with 25.2 million active developers, according to the results of a recent survey." The 26th edition of SlashData's Developer Nation survey showed that the Rust community doubled its number of users over the past two years — from two million in the first quarter of 2022 to four million in the first quarter of 2024 — and by 33% in the last 12 months alone. The SlashData report covers the first quarter of 2024. "Rust has developed a passionate community that advocates for it as a memory-safe language which can provide great performance, but cybersecurity concerns may lead to an even greater increase," the report said. "The USA and its international partners have made the case in the last six months for adopting memory-safe languages...."

"JavaScript's dominant position is unlikely to change anytime soon, with its developer population increasing by 4M developers over the last 12 months, with a growth rate in line with the global developer population growth," the report said. The strength of the JavaScript community is fueled by the widespread use of the language across all types of development projects, with at least 25% of developers in every project type using it, the report said. "Even in development areas not commonly associated with the language, such as on-device coding for IoT projects, JavaScript still sees considerable adoption," SlashData said.

Also coming in strong, Python has overtaken Java as the second most popular language, driven by the interest in machine learning and AI. The battle between Python and Java shows Python with 18.2 million developers in Q1 2024 compared to Java's 17.7 million. This comes after Python added more than 2.1 million net new developers to its community over the last 12 months, compared to Java, which only increased by 1.2 million developers... Following Java there is a six-million-developer gap to the next largest community, which is C++ with 11.4 million developers, closely trailed by C# with 10.2 million and PHP with 9.8 million. Languages with the smallest communities include Objective-C with 2.7 million developers, Ruby with 2.5 million, and Lua with 1.8 million. Meanwhile, the Go language saw its developer population grow by 10% over the last year. It had previously outpaced global developer population growth, growing by roughly 57% over the past two years, from three million in Q1 2022 to 4.7 million in Q1 2024.

"TNS analyst Lawrence Hecht has a few different takeaways. He notes that with the exceptions of Rust, Go and JavaScript, the other major programming languages all grew slower than the total developer population, which SlashData says increased 39% over the last two years alone."
Graphics

Nvidia Takes 88% of the GPU Market Share (xda-developers.com) 83

As reported by Jon Peddie Research, Nvidia now holds 88% of the GPU market after its market share jumped 8% in its most recent quarter. "This jump shaves 7% off of AMD's share, putting it down to 12% total," reports XDA Developers. "And if you're wondering where that extra 1% went, it came from all of Intel's market share, squashing it down to 0%." From the report: Dr. Jon Peddie, president of Jon Peddie Research, mentions how the GPU market hasn't really looked "normal" since the 2007 recession. Ever since then, everything from the crypto boom to COVID has messed with the usual patterns. Usually, the first quarter of a year shows a bit of a dip in GPU sales, but because of AI's influence, it may seem like that previous norm may be forever gone: "Therefore, one would expect Q2'24, a traditional quarter, to also be down. But, all the vendors are predicting a growth quarter, mostly driven by AI training systems in hyperscalers. Whereas AI trainers use a GPU, the demand for them can steal parts from the gaming segment. So, for Q2, we expect to see a flat to low gaming AIB result and another increase in AI trainer GPU shipments. The new normality is no normality."
Businesses

Samsung Electronics Workers Strike For the First Time Ever (theverge.com) 3

Victoria Song reports via The Verge: Samsung Electronics workers went on strike on Friday for the very first time in the company's history. The move comes at a time when the Korean corporation faces increased competition from other chipmakers, particularly as demand for AI chips grows. The National Samsung Electronics Union (NSEU), the largest of the company's several unions, called for the one-day strike at Samsung's Seoul office building as negotiations over pay bonuses and time off hit a standstill. The New York Times reports that the majority of striking workers come from Samsung's chip division. (Samsung Electronics is technically only a subsidiary comprising its consumer tech, appliances, and semiconductor divisions; Samsung itself is a conglomerate that controls real estate, retail, insurance, food production, hotels, and a whole lot more.) It's unclear how many of the NSEU's roughly 28,400 members participated in the walkout. Even so, multiple outlets are reporting that the walkout is unlikely to affect chip production or trigger shortages. Union leaders told Bloomberg that further actions are planned if management refuses to engage.

That said, the fact that it's happening at all comes at an awkward time for Samsung, particularly due to tensions with the chipmaking portion of its business. Last year, the division reported a 15 trillion won ($11 billion) loss, leading to a 15-year low in operating profits. The current AI boom played a big role in the massive loss. Samsung has historically been the world leader in making high-bandwidth memory chips -- the kind that are in demand right now to power next-gen generative AI features. However, last year's decline was partly because Samsung wasn't prepared for increased demand, allowing local rival SK Hynix to take the top spot.

AI

Ashton Kutcher: Entire Movies Can Be Made on OpenAI's Sora Someday (businessinsider.com) 46

Hollywood actor and venture capitalist Ashton Kutcher believes that one day, entire movies will be made on AI tools like OpenAI's Sora. From a report: The actor was speaking at an event last week organized by the Los Angeles-based think tank Berggruen Institute, where he revealed that he'd been playing around with the ChatGPT maker's new video generation tool. "I have a beta version of it and it's pretty amazing," said Kutcher, whose VC firm Sound Venture's portfolio includes an investment in OpenAI. "You can generate any footage that you want. You can create good 10, 15-second videos that look very real."

"It still makes mistakes. It still doesn't quite understand physics. But if you look at the generation of this that existed one year ago, as compared to Sora, it's leaps and bounds. In fact, there's footage in it that I would say you could easily use in a major motion picture or a television show," he continued. Kutcher said this would help lower the costs of making a film or television show. "Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100?" Kutcher said. "To go out and shoot it would cost you thousands of dollars,"

Kutcher was so bullish about AI advancements that he said he believed people would eventually make entire movies using tools like Sora. "You'll be able to render a whole movie. You'll just come up with an idea for a movie, then it will write the script, then you'll input the script into the video generator, and it will generate the movie," Kutcher said. Kutcher, of course, is no stranger to AI.

Microsoft

Windows Won't Take Screenshots of Everything You Do After All (theverge.com) 81

Microsoft says it's making its new Recall feature in Windows 11 that screenshots everything you do on your PC an opt-in feature and addressing various security concerns. From a report: The software giant first unveiled the Recall feature as part of its upcoming Copilot Plus PCs last month, but since then, privacy advocates and security experts have been warning that Recall could be a "disaster" for cybersecurity without changes. Thankfully, Microsoft has listened to the complaints and is making a number of changes before Copilot Plus PCs launch on June 18th. Microsoft had originally planned to turn Recall on by default, but the company now says it will offer the ability to disable the controversial AI-powered feature during the setup process of new Copilot Plus PCs. "If you don't proactively choose to turn it on, it will be off by default," says Windows chief Pavan Davuluri.
AI

It's Not AI, It's 'Apple Intelligence' (gizmodo.com) 28

An anonymous reader shares a report: Apple is expected to announce major artificial intelligence updates to the iPhone, iPad, and Mac next week during its Worldwide Developers Conference. Except Apple won't call its system artificial intelligence, like everyone else, according to Bloomberg's Mark Gurman on Friday. The system will reportedly be called "Apple Intelligence," and allegedly will be made available to new versions of the iPhone, iPad, and Mac operating systems. Apple Intelligence, which is shortened to just AI, is reportedly separate from the ChatGPT-like chatbot Apple is expected to release in partnership with OpenAI. Apple's in-house AI tools are reported to include assistance in message writing, photo editing, and summarizing texts. Bloomberg reports that some of these AI features will run on the device while others will be processed through cloud-based computing, depending on the complexity of the task. The name feels a little too obvious. While this is the first we're hearing of an actual name for Apple's AI, it's entirely unsurprising that Apple is choosing a unique brand to call its artificial intelligence systems.
AI

California AI Bill Sparks Backlash from Silicon Valley Giants (arstechnica.com) 59

California's proposed legislation to regulate AI has sparked a backlash from Silicon Valley heavyweights, who claim the bill will stifle innovation and force AI start-ups to leave the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, passed by the state Senate last month, requires AI developers to adhere to strict safety frameworks, including creating a "kill switch" for their models. Critics argue that the bill places a costly compliance burden on smaller AI companies and focuses on hypothetical risks. Amendments are being considered to clarify the bill's scope and address concerns about its impact on open-source AI models.
AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and only allowing AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
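For a concrete picture of what such a tag looks like and how a cooperating scraper could honor it, here is a minimal Python sketch. The article doesn't spell out the exact tag names Cara emits, so the "noai"/"noimageai" robots-meta values below are assumptions, and the snippet describes a hypothetical well-behaved scraper rather than Cara's implementation; as noted above, nothing forces a scraper to run a check like this.

```python
import urllib.request
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the directives found in <meta name="robots" ...> tags."""

    def __init__(self):
        super().__init__()
        self.directives: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = {k: (v or "") for k, v in attrs}
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                token.strip().lower() for token in attrs.get("content", "").split(",")
            )


def page_opts_out_of_ai(url: str, opt_out_tokens=("noai", "noimageai")) -> bool:
    """Return True if the page's robots meta tag asks AI scrapers to stay away."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(html)
    return any(token in parser.directives for token in opt_out_tokens)


if __name__ == "__main__":
    # Hypothetical usage; compliance is entirely voluntary on the scraper's side.
    url = "https://example.com/some-artwork-page"
    print("opts out of AI scraping" if page_opts_out_of_ai(url) else "no opt-out tag found")
```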

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago tool that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
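As a very rough illustration of the shared starting point of these tools -- a change to pixel values small enough to be hard to see -- the Python sketch below adds a perturbation capped at a few intensity levels per pixel. The real Glaze and Nightshade perturbations are carefully optimized against image models' feature representations rather than drawn at random, so treat this purely as a toy for intuition about "minimal, bounded changes," not as how either tool actually works.

```python
import numpy as np


def add_bounded_perturbation(image: np.ndarray, epsilon: int = 3,
                             seed: int = 0) -> np.ndarray:
    """Add a per-pixel change of at most +/- epsilon intensity levels.

    `image` is an HxWx3 uint8 array. A change of 2-3 levels out of 255 is
    essentially invisible to the eye; Glaze and Nightshade compute their (much
    smarter) perturbations so that they also mislead image models.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape, dtype=np.int16)
    perturbed = image.astype(np.int16) + noise
    return np.clip(perturbed, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    # Stand-in for a real artwork: a random 64x64 RGB image.
    artwork = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3),
                                                dtype=np.uint8)
    protected = add_bounded_perturbation(artwork)
    max_change = np.abs(protected.astype(int) - artwork.astype(int)).max()
    print(f"Largest per-pixel change: {max_change} intensity levels")
```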

AI

Adobe Responds To Vocal Uproar Over New Terms of Service Language (venturebeat.com) 34

Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, even confidential content for clients protected under non-disclosure agreements or confidentiality clauses/contracts between said Adobe users and clients.

A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

Chrome

Google Is Working On a Recall-Like Feature For Chromebooks, Too (pcworld.com) 47

In an interview with PCWorld's Mark Hachman, Google's ChromeOS chief said the company is cautiously exploring a Recall-like feature for Chromebooks, dubbed "memory." Microsoft's AI-powered Recall feature for Windows 11 was unveiled at the company's Build 2024 conference last month. The feature aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides. Given the obvious privacy and security concerns, many users have denounced the feature, describing it as "literal spyware or malware." PCWorld reports: I sat down with John Solomon, the vice president at Google responsible for ChromeOS, for a lengthy interview about what it means for Google's low-cost platform as the PC industry moves to AI PCs. Microsoft, of course, is launching Copilot+ PCs alongside Qualcomm's Snapdragon X Elite -- an Arm chip. And Chromebooks, of course, have a long history with Arm. But it's Recall that we eventually landed upon -- or, more precisely, how Google sidles into the same space. (Recall is great in theory, but in practice may be more problematic.) Recall the Project Astra demo that Google showed off at its Google I/O conference. One of the key, though understated, aspects of it was how Astra "remembered" where the user's glasses were.

Astra didn't appear to be an experience that could be replicated on the Chromebook. Most users aren't going to carry a Chromebook around (a device which typically lacks a rear camera) visually identifying things. Solomon respectfully disagreed. "I think there's a piece of it which is very relevant, which is this notion of having some kind of context and memory of what's been happening on the device," Solomon said. "So think of something that's like, maybe viewing your screen and then you walk away, you get distracted, you chat to someone at the watercooler and you come back. You could have some kind of rewind function, you could have some kind of recorder function that would kind of bring you back to that. So I think that there is a crossover there."

"We're actually talking to that team about where the use case could be," Solomon added of the "memory" concept. "But I think there's something there in terms of screen capture in a way that obviously doesn't feel creepy and feels like the user's in control." That sounds a lot like Recall! But Solomon was quick to point out that one of the things that has turned off users to Recall was the lack of user control: deciding when, where, and if to turn it on. "I'm not going to talk about Recall, but I think the reason that some people feel it's creepy is when it doesn't feel useful, and it doesn't feel like something they initiated or that they get a clear benefit from it," Solomon said. "If the user says like -- let's say we're having a meeting, and discussing complex topics. There's a benefit of running a recorded function if at the end of it it can be useful for creating notes and the action items. But you as a user need to put that on and decide where you want to have that."

AI

DuckDuckGo Offers 'Anonymous' Access To AI Chatbots Through New Service 7

An anonymous reader quotes a report from Ars Technica: On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account. DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use. "We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days," says DuckDuckGo, "and that none of the chats made on our platform can be used to train or improve the models." However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet. Given certain inputs (i.e., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose.
In regard to hallucination concerns, DuckDuckGo states in its privacy policy: "By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice)."
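The privacy model described above is essentially a stripping proxy: DuckDuckGo's servers forward only the prompt, under their own credentials, so the provider never sees the user's IP address or account. The Python sketch below is a hypothetical illustration of that pattern; the endpoint URL, header list, and payload shape are invented for the example and are not DuckDuckGo's or any provider's actual API.

```python
import json
import urllib.request

# Request headers that could identify the user and must not be forwarded.
# (Illustrative list only; a production proxy would be far more thorough.)
IDENTIFYING_HEADERS = {"x-forwarded-for", "x-real-ip", "cookie", "user-agent",
                       "authorization"}


def scrub_headers(raw_headers: dict[str, str]) -> dict[str, str]:
    """Drop anything that ties the request to a specific person or device."""
    return {k: v for k, v in raw_headers.items()
            if k.lower() not in IDENTIFYING_HEADERS}


def forward_anonymously(prompt: str, upstream_url: str, proxy_api_key: str) -> str:
    """Send only the prompt text upstream, authenticated as the proxy itself.

    The upstream provider sees the proxy's IP and API key, not the user's,
    and receives none of the headers from the original request.
    """
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    request = urllib.request.Request(
        upstream_url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {proxy_api_key}",  # proxy's key, not the user's
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")


def handle_chat_request(raw_headers: dict[str, str], prompt: str) -> str:
    """What a chat endpoint would do: scrub locally, then forward the bare prompt."""
    _local_view = scrub_headers(raw_headers)  # nothing identifying is kept or sent on
    return forward_anonymously(
        prompt,
        upstream_url="https://llm-provider.example/v1/chat",  # invented endpoint
        proxy_api_key="PROXY_SIDE_KEY",                        # placeholder credential
    )
```

As the quoted report points out, a proxy like this can only scrub metadata: whatever the user types into the prompt itself still reaches the provider.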
Businesses

Humane Said To Be Seeking a $1 Billion Buyout After Only 10,000 Orders of Its AI Pin (engadget.com) 40

An anonymous reader writes: It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.
One of the companies that Humane has engaged with for the sale is HP, the Times reported.
Microsoft

'Microsoft Has Lost Trust With Its Users and Windows Recall is the Straw That Broke the Camel's Back' (windowscentral.com) 170

In a column at Windows Central, a blog that focuses on Microsoft news, senior editor Zac Bowden discusses the backlash against Windows Recall, a new AI feature in Microsoft's Copilot+ PCs. While the feature is impressive, allowing users to search their entire Windows history, many are concerned about privacy and security. Bowden argues that Microsoft's history of questionable practices, such as ads and bloatware, has eroded user trust, making people skeptical of Recall's intentions. Additionally, the reported lack of encryption for Recall's data raises concerns about third-party access. Bowden argues that Microsoft could have averted the situation by testing the feature openly to address these issues early on and build trust with users. He adds: Users are describing the feature as literal spyware or malware, and droves of people are proclaiming they will proudly switch to Linux or Mac in the wake of it. Microsoft simply doesn't enjoy the same benefit of the doubt that other tech giants like Apple may have.

Had Apple announced a feature like Recall, there would have been much less backlash, as Apple has done a great job building loyalty and trust with its users, prioritizing polished software experiences, and positioning privacy as a high-level concern for the company.
