Bitcoin

Donald and Melania Trump Launch a Pair of Meme Coins (cnn.com) 214

Donald and Melania Trump have launched a pair of meme coins just before President Trump was sworn into office. The coins are already worth billions of dollars, raising "serious ethical questions and conflicts of interest," said Richard Painter, a law professor at the University of Minnesota. CNN reports: Melania Trump launched her cryptocurrency $MELANIA in a social media post Sunday, sending her husband's cryptocurrency $TRUMP, announced two days earlier, plummeting. "The Official Melania Meme is live! You can buy $MELANIA now. https://melaniameme.com," the future first lady wrote on X Sunday. Meme coins are a type of highly volatile cryptocurrency inspired by popular internet or cultural trends. They carry no intrinsic value but can soar, or plummet, in price. "My NEW Official Trump Meme is HERE!" Trump wrote on X Friday. "It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW. Go to http://gettrumpmemes.com -- Have Fun!" Both coins are trading on the Solana blockchain. [...]

$TRUMP is the first cryptocurrency endorsed by the incoming president, who once trashed bitcoin as "based on thin air." [...] While executive branch employees must follow conflict of interest criminal statutes that prevent them from participating in matters that impact their own financial interests, the law does not apply to the president or the vice president. [...] The Trump coin's market capitalization, which is based on the 200 million coins in circulation, stood at about $13 billion, according to CoinMarketCap. The meme coin's website said there will be 1 billion Trump coins in circulation over the next three years. Both $MELANIA and $TRUMP's websites contain disclaimers saying the coins are "intended to function as a support for, and engagement with" the values of their respective brands and "are not intended to be, or to be the subject of, an investment opportunity, investment contract, or security of any type."

The website says the meme coin is not politically affiliated. But 80% of the coin's supply is held by Trump Organization affiliate CIC Digital and Fight Fight Fight LLC, which are both subject to a three-year unlocking schedule -- so they cannot sell all of their holdings at once. Trump coin's fully diluted value (which reflects the eventual total supply of Trump coins) stood at around $54 billion as of Monday morning, according to CoinMarketCap. At that value, the 80% linked to Trump is worth a staggering $43 billion, at least on paper. The $TRUMP coin's website calls it "the only official Trump meme." "Now, you can get your piece of history. This Trump Meme celebrates a leader who doesn't back down, no matter the odds," the website reads.
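For readers unfamiliar with the distinction, circulating market capitalization prices only the coins currently trading, while fully diluted value prices the eventual total supply. A back-of-the-envelope sketch using the figures reported above (the per-coin price is implied from the reported FDV, not quoted directly):

```python
# Back-of-the-envelope sketch of the two valuation figures cited above.
# Supply numbers and the 80% insider allocation come from the article;
# the per-coin price is implied from the reported FDV, not quoted directly.
CIRCULATING_SUPPLY = 200_000_000        # coins trading today
TOTAL_SUPPLY = 1_000_000_000            # coins planned over three years
INSIDER_SHARE = 0.80                    # held by CIC Digital / Fight Fight Fight LLC

fully_diluted_value = 54e9              # ~$54B FDV reported Monday morning
implied_price = fully_diluted_value / TOTAL_SUPPLY

circulating_cap = implied_price * CIRCULATING_SUPPLY   # at Monday's implied price
insider_paper_value = INSIDER_SHARE * fully_diluted_value

print(f"implied price per coin:  ${implied_price:,.2f}")
print(f"circulating market cap:  ${circulating_cap / 1e9:.1f}B")
# The ~$13B circulating-cap figure cited earlier was a snapshot taken at a
# different time and price, which is why it differs from this number.
print(f"insider stake on paper:  ${insider_paper_value / 1e9:.1f}B")  # ~$43B, as reported
```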
"Trump owning 80% and timing launch hours before inauguration is predatory and many will likely get hurt by it," Nick Tomaino, a former Coinbase executive, said in a post on X. "Trump should be airdropping to the people rather than enriching himself or his team on this."
Encryption

Europol Chief Says Big Tech Has 'Responsibility' To Unlock Encrypted Messages (ft.com) 80

Technology giants must do more to co-operate with law enforcement on encryption or they risk threatening European democracy, according to the head of Europol, as the agency gears up to renew pressure on companies at the World Economic Forum in Davos this week. From a report: Catherine De Bolle told the Financial Times she will meet Big Tech groups in the Swiss mountain resort to discuss the matter, claiming that companies had a "social responsibility" to give the police access to encrypted messages that are used by criminals to remain anonymous. "Anonymity is not a fundamental right," said the EU law enforcement agency's executive director.

"When we have a search warrant and we are in front of a house and the door is locked, and you know that the criminal is inside of the house, the population will not accept that you cannot enter." In a digital environment, the police needed to be able to decode these messages to fight crime, she added. "You will not be able to enforce democracy [without it]."

United States

The Pentagon Says AI is Speeding Up Its 'Kill Chain' 34

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
Movies

A Videogame Meets Shakespeare in 'Grand Theft Hamlet' Film (yahoo.com) 9

The Los Angeles Times calls it "a guns-blazingly funny documentary about two out-of-work British actors who spent a chunk of their COVID-19 lockdown staging Shakespeare's masterpiece on the mean streets of Grand Theft Auto V."

Grand Theft Hamlet won SXSW's Jury Award for best documentary, and has now opened in U.S. theatres this weekend (and begun streaming on Mubi), after opening in the U.K. and Ireland. But nearly the entire film is set in Grand Theft Auto's crime-infested version of Los Angeles, the Times reports, "where even the good guys have weapons and a nihilistic streak — the vengeful Prince of Denmark fits right in." Yet when Sam Crane, a.k.a. @Hamlet_thedane, launches into one of the Bard's monologues, he's often murdered by a fellow player within minutes. Everyone's a critic.

Crane co-directed the movie with his wife, Pinny Grylls, a first-time gamer who functions as the film's camera of sorts. What her character sees, where she chooses to stand and look, makes up much of the film, although the editing team does phenomenal work splicing in other characters' points of view. (We're never outside of the game until the last 30 seconds; only then do we see anyone's real face....) The Bard's story is only half the point. Really, this is a classic let's-put-on-a-pixilated-show tale about the need to create beauty in the world — even this violent world — especially when stage productions in England have shuttered, forcing Crane, a husband and father, and Mark Oosterveen, single and lonely, to kill time speeding around the digital desert...

To our surprise (and theirs), the play's tussles with depression and anguish and inertia become increasingly resonant as the production and the pandemic limp toward their conclusions. When Crane and Oosterveen's "Grand Theft Auto" avatars hop into a van with an anonymous gamer and ask this online stranger for his thoughts on Hamlet's suicidal soliloquy, the man, a real-life delivery driver stuck at home with a broken leg, admits, "I don't think I'm in the right place to be replying to this right now...."

In 2014 Hamlet was also staged in Guild Wars 2, the article points out. "This is, however, the first attempt I'm aware of that attempts to do the whole thing live in one go, no matter if one of the virtual actors falls to their doom from a blimp.

"As Grylls says, 'You can't stop production just because somebody dies.'"
China

RedNote Scrambles to Hire English-Speaking Content Moderators (wired.com) 73

ABC News reported that the official newspaper of China's Communist Party is claiming TikTok refugees on RedNote found a "new home," and "openness, communication, and mutual learning are... the heartfelt desires of people from all countries."

But in fact, Wired reports, "China's Cyberspace Administration, the country's top internet watchdog, has reportedly already grown concerned about content being shared by foreigners on Xiaohongshu," and "warned the platform earlier this week to 'ensure China-based users can't see posts from U.S. users,' according to The Information."

And that's just the beginning. Wired reports that RedNote is now also "scrambling to hire English-speaking moderators." Social media platforms in China are legally required to remove a wide range of content, including nudity and graphic violence, but especially information that the government deems politically sensitive... "RedNote — like all platforms owned by Chinese companies — is subject to the Chinese Communist Party's repressive laws," wrote Allie Funk, research director for technology and democracy at the nonprofit human rights organization Freedom House, in an email to WIRED. "Independent researchers have documented how keywords deemed sensitive to those in power, such as discussion of labor strikes or criticism of Xi Jinping, can be scrubbed from the platform."

But the influx of American TikTok users — as many as 700,000 in merely two days, according to Reuters — could be stretching Xiaohongshu's content moderation abilities thin, says Eric Liu, an editor at China Digital Times, a California-based publication documenting censorship in China, who also used to work as a content moderator himself for the Chinese social media platform Weibo... Liu reposted a screenshot on Bluesky showing that some people who recently joined Xiaohongshu have received notifications that their posts can only be shown to other users after 48 hours, seemingly giving the company time to determine whether they may be violating any of the platform's rules. This is a sign that Xiaohongshu's moderation teams are unable to react swiftly, Liu says...

While the majority of the new TikTok refugees still appear to be enjoying their time on Xiaohongshu, some have already had their posts censored. Christine Lu, a Taiwanese-American tech entrepreneur who created a Xiaohongshu account on Wednesday, says she was suspended after uploading three provocative posts about Tiananmen, Tibet, and Taiwan. "I support more [Chinese and American] people engaging directly. But also, knowing China, I knew it wouldn't last for long," Lu tells WIRED.

Despite the 700,000 signups in two days, "It's also worth noting that the migration to RedNote is still very small, and only a fraction of the 170 million people in the US who use TikTok," notes The Conversation. (And they add that "The US government also has the authority to pressure Apple to remove RedNote from the US App Store if it thinks the migration poses a national security threat.")

One nurse told the Los Angeles Times Americans signed up for the app because they "just don't want to give in" to "bullying" by the U.S. government. (The Times notes she later recorded a video acknowledging that on the Chinese-language app, "I don't know what I'm doing, I don't know what I'm reading, I'm just pressing buttons.") On Tuesday, the Wall Street Journal reported that Chinese officials had discussed the possibility of selling TikTok to a trusted non-Chinese party such as Elon Musk, who already owns social media platform X. However, analysts said that ByteDance is unlikely to agree to a sale of the underlying algorithm that powers the app, meaning the platform could look drastically different under a new owner.
Google

Google Won't Add Fact Checks Despite New EU Law (axios.com) 185

According to Axios, Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law. From the report: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google's global affairs president Kent Walker said the fact-checking integration required by the Commission's new Disinformation Code of Practice "simply isn't appropriate or effective for our services" and said Google won't commit to it. The code would require Google to incorporate fact-check results alongside Google's search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google's current approach to content moderation works and pointed to successful content moderation during last year's "unprecedented cycle of global elections" as proof. He said a new feature added to YouTube last year that enables some users to add contextual notes to videos "has significant potential." (That program is similar to X's Community Notes feature, as well as a new program announced by Meta last week.)

The EU's Code of Practice on Disinformation, introduced in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on. The Code, originally created in 2018, predates the EU's new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA. Walker said in his letter Thursday that Google had already told the Commission that it didn't plan to comply. Google will "pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct," he wrote. He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.

Google

Google Strikes World's Largest Biochar Carbon Removal Deal 33

Google has partnered with Indian startup Varaha to purchase 100,000 tons of carbon dioxide removal credits by 2030, marking its largest deal in India and the largest involving biochar, a carbon removal solution made from biomass. TechCrunch reports: The credits from the offtake agreement will be delivered to Google by 2030 from Varaha's industrial biochar project in the western Indian state of Gujarat, the two firms said on Thursday. [...] Biochar is produced in two ways: artisanal and industrial. The artisanal method is community-driven, where farmers burn crop residue in conical flasks without using machines. In contrast, industrial biochar is made using large reactors that process 50-60 tons of biomass daily.

Varaha's project will generate industrial biochar from an invasive plant species, Prosopis juliflora, using its pyrolysis facility in Gujarat. The invasive species impacts plant biodiversity and has overtaken grasslands used for livestock. Varaha will harvest the plant and make efforts to restore native grasslands in the region, the company's co-founder and CEO Madhur Jain said in an interview. Once the biochar is produced, a third-party auditor will submit their report to Puro.Earth to generate credits. Although biochar is seen as a long-term carbon removal solution, its permanence can vary between 1,000 and 2,500 years depending on production and environmental factors.

Jain told TechCrunch that Varaha tried using different feedstocks and different parameters within its reactors to find the best combination to achieve permanence close to 1,600 years. The startup has also built a digital monitoring, reporting and verification system, integrating remote sensing to monitor biomass availability. It even has a mobile app that captures geo-tagged, time-stamped images to geographically document activities, including biomass excavation and biochar's field application. With its first project, Varaha said it processed at least 40,000 tons of biomass and produced 10,000 tons of biochar last year.
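The article's description of Varaha's digital MRV system (remote sensing plus geo-tagged, time-stamped photos) maps naturally onto a simple evidence-record format. Below is a minimal sketch of what one such record might look like; all field names are hypothetical illustrations, not Varaha's actual schema.

```python
# Minimal sketch of a field-evidence record for a digital MRV
# (monitoring, reporting and verification) pipeline like the one the
# article describes. Field names are hypothetical, not Varaha's schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib, json

@dataclass
class FieldEvidence:
    activity: str          # e.g. "biomass_excavation" or "biochar_application"
    latitude: float
    longitude: float
    captured_at: str       # ISO-8601 timestamp for the photo
    photo_sha256: str      # hash of the geo-tagged image, for tamper evidence

def make_record(activity: str, lat: float, lon: float, photo_bytes: bytes) -> dict:
    record = FieldEvidence(
        activity=activity,
        latitude=lat,
        longitude=lon,
        captured_at=datetime.now(timezone.utc).isoformat(),
        photo_sha256=hashlib.sha256(photo_bytes).hexdigest(),
    )
    return asdict(record)

if __name__ == "__main__":
    rec = make_record("biochar_application", 23.02, 72.57, b"raw jpeg bytes here")
    print(json.dumps(rec, indent=2))
```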
Microsoft

Microsoft Patches Windows To Eliminate Secure Boot Bypass Threat (arstechnica.com) 39

Microsoft has patched a Windows vulnerability that allowed attackers to bypass Secure Boot, a critical defense against firmware infections, the company said. The flaw, tracked as CVE-2024-7344, affected Windows devices for at least seven months. Security researcher Martin Smolar discovered the vulnerability in a signed UEFI application within system recovery software from seven vendors, including Howyar.

The application, reloader.efi, circumvented standard security checks through a custom PE loader. Attackers with administrative privileges could exploit the vulnerability to install malicious firmware that persists even after disk reformatting. Microsoft revoked the application's digital signature, though the vulnerability's impact on Linux systems remains unclear.
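Microsoft's mitigation works by revoking the vulnerable binary's signature so Secure Boot refuses to load it. The sketch below illustrates the blocklist idea in heavily simplified form: real revocation entries are Authenticode (PE image) hashes checked by the firmware's dbx database, not plain SHA-256 file hashes, and the hash shown is a placeholder.

```python
# Heavily simplified illustration of hash-based revocation as used by
# Secure Boot's dbx: compare boot binaries against a blocklist of known-bad
# hashes. Real dbx entries are Authenticode (PE image) hashes checked by
# firmware, not plain file hashes; the entry below is a placeholder.
import hashlib
from pathlib import Path

REVOKED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def scan_esp(esp_root: str = "/boot/efi") -> None:
    for efi_binary in Path(esp_root).rglob("*.efi"):
        digest = hashlib.sha256(efi_binary.read_bytes()).hexdigest()
        status = "REVOKED" if digest in REVOKED_SHA256 else "ok"
        print(f"{status:8} {efi_binary}")

if __name__ == "__main__":
    scan_esp()
```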
United States

A New Jam-Packed Biden Executive Order Tackles Cybersecurity, AI, and More (wired.com) 127

U.S. President Joe Biden has issued a comprehensive cybersecurity executive order, four days before leaving office, mandating improvements to government network monitoring, software procurement, AI usage, and foreign hacker penalties.

The 40-page directive aims to leverage AI's security benefits, implement digital identities for citizens, and address vulnerabilities that have allowed Chinese and Russian intrusions into U.S. government systems. It requires software vendors to prove secure development practices and gives the Commerce Department eight months to establish mandatory cybersecurity standards for government contractors.
Nintendo

Nintendo To Unveil Next-Generation Switch 2 in April 35

Nintendo announced on Thursday it will unveil its next-generation Switch 2 gaming console at a digital event on April 2, marking the end of its nearly eight-year-old flagship model. The Japanese gaming giant revealed in a two-minute video that the new device maintains a similar hybrid design to the original Switch but is larger, with redesigned controllers that attach magnetically.
EU

GOG Joins European Federation of Game Archives, Museums and Preservation Projects (prowly.com) 42

GOG.com, a European digital distribution platform known for offering DRM-free video games, announced they've joined the European Federation of Game Archives, Museums and Preservation Projects (EFGAMP). From the release: "GOG was created with video game preservation in mind," said Maciej Golebiewski, Managing Director at GOG. "Classic games and the mission to safeguard them for future generations have always been at the core of our work. Over the past decade, we've honed our expertise in this area. The GOG Preservation Program, which ensures compatibility for over 100 games and delivers hundreds of enhancements, is just one example of this commitment. We were thrilled to see the Program warmly received not only by our players but also by our partners and the gaming industry as a whole."

Golebiewski further explained that GOG's role in preservation extends beyond its platform. He highlighted, "As a European company, we feel a responsibility to lead in preserving gaming heritage. Joining EFGAMP reinforces this commitment. Our next step is to expand institutional collaboration with museums and governmental and non-governmental organizations worldwide. We hope our experience will contribute meaningfully to their efforts. We are also discussing exciting new game preservation projects, which we look forward to sharing soon."

Transportation

DJI Removes US Drone Flight Restrictions Over Airports, Wildfires (theverge.com) 93

Chinese drone maker DJI has removed software restrictions that previously prevented its drones from flying over sensitive areas in the United States, including airports, wildfires, and government buildings like the White House, replacing them with dismissible warnings.

The policy shift comes amid rising U.S. distrust of Chinese drones and follows a recent incident where a DJI drone disrupted firefighting efforts in Los Angeles. The company defended the change, saying drone regulations have matured with the FAA's new Remote ID tracking requirement, which functions like a digital license plate.
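For context on the "digital license plate" comparison, Remote ID requires a drone to broadcast its identity and position during flight. The sketch below shows the kind of information involved; the field names and JSON output are illustrative, loosely modeled on the ASTM F3411 message set that the FAA rule builds on, not the actual binary wire format.

```python
# Rough sketch of the information a Remote ID broadcast carries, loosely
# modeled on the ASTM F3411 message set. Field names and JSON encoding are
# illustrative; real broadcasts are compact binary frames over Bluetooth
# or Wi-Fi.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class RemoteIdBroadcast:
    uas_id: str               # serial number or session ID ("digital license plate")
    drone_lat: float
    drone_lon: float
    drone_alt_m: float
    operator_lat: float       # control-station location
    operator_lon: float
    timestamp: float          # seconds since the Unix epoch

msg = RemoteIdBroadcast(
    uas_id="1581F4XXXXXXXXXXXXXX",   # placeholder serial number
    drone_lat=34.0736, drone_lon=-118.4004, drone_alt_m=85.0,
    operator_lat=34.0722, operator_lon=-118.3990,
    timestamp=time.time(),
)
print(json.dumps(asdict(msg), indent=2))
```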
Facebook

Meta Says It Isn't Ending Fact-Checks Outside US 'At This Time' (cointelegraph.com) 153

An anonymous reader quotes a report from CoinTelegraph: Social media platform Meta has confirmed that its fact-checking feature on Facebook, Instagram and Threads will only be removed in the US for now, according to a Jan. 13 letter sent to Brazil's government. "Meta has already clarified that, at this time, it is terminating its independent Fact-Checking Program only in the United States, where we will test and refine the community notes [feature] before expanding to other countries," Meta told Brazil's Attorney General of the Union (AGU) in a Portuguese-translated letter.

Meta's letter followed a 72-hour deadline Brazil's AGU set for Meta to clarify to whom the removal of the third-party fact verification feature would apply. [...] Brazil has expressed dissatisfaction with Meta's removal of its fact check feature, Brazil Attorney-General Jorge Messias said on Jan. 10. "Brazil has rigorous legislation to protect children and adolescents, vulnerable populations, and the business environment, and we will not allow these networks to transform the environment into digital carnage or barbarity."
Last Tuesday, Meta CEO Mark Zuckerberg announced an end to fact-checking on Facebook and Instagram -- a move he described as an attempt to restore free expression on its platforms. He likened his company's fact-checking process to a George Orwell novel, calling it "something out of 1984" and saying it led to a broad belief that Meta fact-checkers "were too biased."
Facebook

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed (404media.co) 53

Meta is deleting links to Pixelfed, a decentralized, open-source Instagram competitor, labeling them as "spam" on Facebook and removing them immediately. 404 Media reports: Pixelfed is an open-source, community-funded and decentralized image sharing platform that runs on ActivityPub, which is the same technology that supports Mastodon and other federated services. Pixelfed.social is the largest Pixelfed server, which was launched in 2018 but has gained renewed attention over the last week. Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. Pixelfed has seen a surge in user signups in recent days, after Meta announced it is ending fact-checking and removing restrictions on speech across its platforms.
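Because Pixelfed federates over ActivityPub, any server or script can fetch a public actor profile as JSON, which is what makes it interoperable with Mastodon. Here is a minimal sketch using a hypothetical account name, assuming the server allows unsigned public fetches (some instances require signed requests):

```python
# Minimal sketch of how ActivityPub federation exposes a public profile:
# resolve an account via WebFinger, then fetch the actor document with the
# ActivityPub media type. The account name is a hypothetical example.
import json
import urllib.parse
import urllib.request

def fetch_actor(handle: str = "someuser@pixelfed.social") -> dict:
    user, host = handle.split("@")
    webfinger_url = (
        f"https://{host}/.well-known/webfinger?"
        + urllib.parse.urlencode({"resource": f"acct:{handle}"})
    )
    with urllib.request.urlopen(webfinger_url) as resp:
        links = json.load(resp)["links"]
    actor_url = next(l["href"] for l in links if l.get("rel") == "self")

    req = urllib.request.Request(
        actor_url, headers={"Accept": "application/activity+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    actor = fetch_actor()
    print(actor.get("preferredUsername"), actor.get("inbox"))
```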

Daniel Supernault, the creator of Pixelfed, published a "declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces." The open source charter contains sections titled "right to privacy," "freedom from surveillance," "safeguards against hate speech," "strong protections for vulnerable communities," and "data portability and user agency."

"Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project," Supernault wrote on Mastodon. "Pixelfed is for the people, period."
AI

CEO of AI Music Company Says People Don't Like Making Music 82

An anonymous reader quotes a report from 404 Media: Mikey Shulman, the CEO and founder of the AI music generator company Suno AI, thinks people don't enjoy making music. "We didn't just want to build a company that makes the current crop of creators 10 percent faster or makes it 10 percent easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people," Shulman said on the 20VC podcast. "And so that is first and foremost giving everybody the joys of creating music and this is a huge departure from how it is now. It's not really enjoyable to make music now [...] It takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don't enjoy the majority of the time they spend making music."

Suno AI works like other popular generative AI tools, allowing users to generate music by writing text prompts describing the kind of music they want to hear. Also like many other generative AI tools, Suno was trained on heaps of copyrighted music it fed into its training dataset without consent, a practice Suno is currently being sued for by the recording industry. In the interview, Shulman says he's disappointed that the recording industry is suing his company because he believes Suno and other similar AI music generators will ultimately allow more people to make and enjoy music, which will only grow the audience and industry, benefiting everyone. That may end up being true, and could be compared to the history of electronic music, digital production tools, or any other technology that allowed more people to make more music.
Apple

EU Probes Apple's New App Store Fees (yahoo.com) 43

European Union regulators are investigating Apple's revised app store fees amid concerns they may increase costs for developers, according to Bloomberg News.

The European Commission sent questionnaires to developers in December focusing on Apple's new "core technology fee" of $0.51 per app installation, part of its compliance with the EU's Digital Markets Act. Under Apple's revised structure, developers can maintain existing terms with commissions up to 30% on app sales, or choose a new model with lower commission rates but additional charges.
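To see why a per-install charge worries developers, here is a rough comparison of the two structures described above for a hypothetical app. The 30% commission and the $0.51 core technology fee come from the report; the 17% alternative commission and the one-million-install annual exemption are assumptions drawn from Apple's published EU terms and may not match any particular developer's actual situation.

```python
# Rough comparison of the two fee structures described above for a
# hypothetical developer. The 30% commission and $0.51 core technology fee
# come from the article; the 17% alternative commission and the
# 1,000,000-install annual exemption are assumptions and may not match a
# given developer's real terms (small-business rates etc. differ).

def legacy_terms(revenue: float) -> float:
    return revenue * 0.30                      # commission only, no per-install fee

def new_terms(revenue: float, annual_installs: int,
              commission: float = 0.17,
              ctf_per_install: float = 0.51,
              exempt_installs: int = 1_000_000) -> float:
    billable = max(annual_installs - exempt_installs, 0)
    return revenue * commission + billable * ctf_per_install

if __name__ == "__main__":
    revenue, installs = 2_000_000.0, 5_000_000   # hypothetical free-to-play-style app
    print(f"legacy terms: ${legacy_terms(revenue):,.0f}")
    print(f"new terms:    ${new_terms(revenue, installs):,.0f}")
    # A large install base with modest revenue can cost far more under the new model.
```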
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."
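As a toy illustration of why fingerprinting is harder to opt out of than cookies, the sketch below hashes a handful of passively observable signals into a stable identifier; nothing is stored on the device, so clearing cookies changes nothing. The chosen signals are generic examples and do not describe Google's actual methodology.

```python
# Toy illustration of device fingerprinting: hash passively observable
# signals into a stable identifier. No cookie is set, so the identifier
# survives cookie clearing. The signals below are generic examples only.
import hashlib
import json

def fingerprint(signals: dict) -> str:
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

observed = {
    "ip": "203.0.113.42",                       # example address (TEST-NET-3 range)
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/134.0",
    "screen": "2560x1440x24",
    "timezone": "Europe/Warsaw",
    "languages": ["en-US", "pl"],
    "fonts_hash": "a1b2c3",                     # stand-in for an installed-fonts probe
}

print("fingerprint:", fingerprint(observed))    # same signals -> same ID across visits
```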

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
AI

Futurist Predicts AI-Powered 'Digital Superpowers' by 2030 (bigthink.com) 100

Unanimous AI's founder Louis Rosenberg predicts a "wave" of new superhuman abilities is coming soon that we will experience profoundly "as self-embodied skills that we carry around with us throughout our lives"...

"[B]y 2030, a majority of us will live our lives with context-aware AI agents bringing digital superpowers into our daily experiences." They will be unleashed by context-aware AI agents that are loaded into body-worn devices that see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world... The majority of these superpowers will be delivered through AI-powered glasses with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who just don't like eyewear... [For example, earbuds with built in cameras] We will whisper to these intelligent devices, and they will whisper back, giving us recommendations, guidance, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will coach us through our days like an omniscient alter ego... When you spot that store across the street, you simply whisper to yourself, "I wonder when it opens?" and a voice will instantly ring back into your ears, "10:30 a.m...."

By 2030, we will not need to whisper to the AI agents traveling with us through our lives. Instead, you will be able to simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that "mouthing" will be deployed because it's more private, more resilient to noisy spaces, and most importantly, it will feel more personal, internal, and self-embodied. By 2035, you may not even need to mouth the words. That's because the AI will learn to interpret the signals in our muscles with such subtlety and precision — we will simply need to think about mouthing the words to convey our intent... When you grab a box of cereal in a store and are curious about the carbs, or wonder whether it's cheaper at Walmart, the answers will just ring in your ears or appear visually. It will even give you superhuman abilities to assess the emotions on other people's faces, predict their moods, goals, or intentions, coaching you during real-time conversations to make you more compelling, appealing, or persuasive...

I don't make these claims lightly. I have been focused on technologies that augment our reality and expand human abilities for over 30 years and I can say without question that the mobile computing market is about to run in this direction in a very big way.

Instead of Augmented Reality, how about Augmented Mentality? The article notes Meta has already added context-aware AI to its Ray-Ban glasses and suggests that within five years Meta might try "selling us superpowers we can't resist". And Google's new AI-powered operating system Android XR hopes to augment our world with seamless context-aware content. But think about where this is going. "[E]ach of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper in our ears with targeted advice and guidance."

And yet "by 2030 the superpowers that these devices give us won't feel optional. After all, not having them could put us at a social and cognitive disadvantage."

Thanks to Slashdot reader ZipNada for sharing the news.
United States

Should In-Game Currency Receive Federal Government Banking Protections? (yahoo.com) 91

Friday America's consumer watchdog agency "proposed a rule to give virtual video game currencies protections similar to those of real-world bank accounts..." reports the Washington Post, "so players can receive refunds or compensation for unauthorized transactions, similar to how banks are required to respond to claims of fraudulent activity." The Consumer Financial Protection Bureau is seeking public input on a rule interpretation to clarify which rights are protected and available to video game consumers under the Electronic Fund Transfer Act. It would hold video game companies subject to violations of federal consumer financial law if they fail to address financial issues reported by customers. The public comment period lasts from Friday through March 31. In particular, the independent federal agency wants to hear from gamers about the types of transactions they make, any issues with in-game currencies, and stories about how companies helped or denied help.

The effort is in response to complaints to the bureau and the Federal Trade Commission about unauthorized transactions, scams, hacking attempts and account theft, outlined in an April bureau report that covered banking in video games and virtual worlds. The complaints said consumers "received limited recourse from gaming companies." Companies may ban or lock accounts or shut down a service, according to the report, but they don't generally guarantee refunds to people who lost property... The April report says the bureau and FTC received numerous complaints from players who contacted their banks regarding unauthorized charges on Roblox. "These complaints note that while they received refunds through their financial institutions, Roblox then terminated or locked their account," the report says.

AI

Foreign Cybercriminals Bypassed Microsoft's AI Guardrails, Lawsuit Alleges (arstechnica.com) 3

"Microsoft's Digital Crimes Unit is taking legal action to ensure the safety and integrity of our AI services," according to a Friday blog post by the unit's assistant general counsel. Microsoft blames "a foreign-based threat-actor group" for "tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content.

Microsoft "is accusing three individuals of running a 'hacking-as-a-service' scheme," reports Ars Technica, "that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content" after bypassing Microsoft's AI guardrails: They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use. Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identity.... The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site... The service, which ran from last July to September when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft didn't say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored...
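The leaked-API-key angle is worth a concrete illustration: both attackers and defenders routinely sweep public repositories for credential-shaped strings. A minimal sketch of such a scan follows; the regex patterns are generic examples, not Microsoft's key formats or any official tooling.

```python
# Minimal sketch of scanning a source tree for strings that look like
# leaked credentials, the kind of sweep run over public repositories by
# attackers and defenders alike. The regexes are illustrative examples,
# not an exhaustive or vendor-official list of key formats.
import re
from pathlib import Path

SUSPECT_PATTERNS = {
    "api key assignment": re.compile(
        r"""(?i)(api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{20,}['"]"""),
    "32-hex-char key": re.compile(r"\b[a-f0-9]{32}\b"),
}

def scan_tree(root: str = ".") -> None:
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".js", ".ts", ".json", ".yaml", ".yml", ".env"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SUSPECT_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan_tree()
```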

The lawsuit alleges the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference.
