Bitcoin

Robinhood and Kraken Launch New Global Stablecoin Network With Paxos' USDG 14

Leading fintech and digital asset firms, including Robinhood, Kraken and Galaxy Digital, have introduced a joint stablecoin pegged to the U.S. dollar. Called the Global Dollar Network, it seeks to enhance the stablecoin market by lowering transaction costs, boosting consumer protections, and facilitating cross-border transactions with rewards for institutional participants. Crypto Briefing reports: The network will utilize Paxos's new stablecoin, the Global Dollar (USDG), which complies with the Monetary Authority of Singapore's upcoming stablecoin framework. USDG is designed to return yield on reserve assets to participants who contribute to its adoption, encouraging the development of crypto and financial solutions using the token. The Global Dollar Network aims to address shortcomings in the stablecoin market, such as high transaction costs and limited consumer protections.

The network has opened an invite-only phase for select custodians, exchanges, payment processors, merchants, and banks to develop new solutions using USDG. Initial distribution is available on Anchorage Digital, Galaxy Digital, Kraken, and Paxos platforms, with plans to expand access through additional partners in the coming months.
Programming

Python Overtakes JavaScript on GitHub, Annual Survey Finds (github.blog) 97

GitHub released its annual "State of the Octoverse" report this week. And while "Systems programming languages, like Rust, are also on the rise... Python, JavaScript, TypeScript, and Java remain the most widely used languages on GitHub."

In fact, "In 2024, Python overtook JavaScript as the most popular language on GitHub." They also report usage of Jupyter Notebooks "skyrocketed" with a 92% jump in usage, which along with Python's rise seems to underscore "the surge in data science and machine learning on GitHub..." We're also seeing increased interest in AI agents and smaller models that require less computational power, reflecting a shift across the industry as more people focus on new use cases for AI... While the United States leads in contributions to generative AI projects on GitHub, we see more absolute activity outside the United States. In 2024, there was a 59% surge in the number of contributions to generative AI projects on GitHub and a 98% increase in the number of projects overall — and many of those contributions came from places like India, Germany, Japan, and Singapore...

Notable growth is occurring in India, which is expected to have the world's largest developer population on GitHub by 2028, as well as across Africa and Latin America... [W]e have seen greater growth outside the United States every year since 2013 — and that trend has sped up over the past few years.
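Rankings like the Octoverse's can be tallied from per-repository language statistics. Below is a minimal sketch of that aggregation, using the shape of data returned by GitHub's repository-languages endpoint; the sample payloads are made up for illustration and are not real Octoverse figures.

```python
from collections import Counter

def rank_languages(repos):
    """Aggregate per-repository language byte counts (the shape returned by
    GitHub's /repos/{owner}/{repo}/languages endpoint) into one ranking."""
    totals = Counter()
    for langs in repos:
        totals.update(langs)  # sums byte counts per language
    return [lang for lang, _ in totals.most_common()]

# Hypothetical sample payloads, not real data.
sample = [
    {"Python": 52_000, "Jupyter Notebook": 18_000},
    {"JavaScript": 40_000, "TypeScript": 9_000},
    {"Python": 31_000, "JavaScript": 12_000},
]
ranking = rank_languages(sample)
print(ranking)
```

GitHub's actual methodology weighs activity (pushes, contributors) rather than raw bytes, so treat this as a toy model of the tallying step only.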

Last year they'd projected India would have the most developers on GitHub by 2027, but now believe it will happen a year later. This year's top 10?

1. United States
2. India
3. China
4. Brazil
5. United Kingdom
6. Russia
7. Germany
8. Indonesia
9. Japan
10. Canada

(Interestingly, the UK's population ranks #21 among countries of the world, while Germany ranks #19, and Canada ranks #36.)

GitHub's announcement argues the rise of non-English, high-population regions "is notable given that it is happening at the same time as the proliferation of generative AI tools, which are increasingly enabling developers to engage with code in their natural language." And they offer one more data point: GitHub's For Good First Issue is a curated list of Digital Public Goods that need contributors, connecting those projects with people who want to address a societal challenge and promote sustainable development...

Significantly, 34% of contributors to the top 10 For Good Issue projects... made their first contribution after signing up for GitHub Copilot.

There are now 518 million projects on GitHub — with a year-over-year growth of 25%...
AI

Leaked Training Shows Doctors In New York's Biggest Hospital System Using AI (404media.co) 34

Slashdot reader samleecole shared this report from 404 Media: Northwell Health, New York State's largest healthcare provider, recently launched a large language model tool that it is encouraging doctors and clinicians to use for translation and for handling sensitive patient data, and has suggested it can be used for diagnostic purposes, 404 Media has learned. Northwell Health has more than 85,000 employees.

An internal presentation and employee chats obtained by 404 Media show how healthcare professionals are using LLMs and chatbots to edit writing, make hiring decisions, do administrative tasks, and handle patient data. In the presentation given in August, Rebecca Kaul, senior vice president and chief of digital innovation and transformation at Northwell, along with a senior engineer, discussed the launch of the tool, called AI Hub, and gave a demonstration of how clinicians and researchers—or anyone with a Northwell email address—can use it... AI Hub can be used for "clinical or clinical adjacent" tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients' personally identifying and protected health information.

The demonstration also showed potential capabilities that included "detect pancreas cancer," and "parse HL7," a health data standard used to share electronic health records.
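For context on what "parse HL7" means: HL7 v2 messages are plain text, one segment per line (MSH, PID, and so on), with fields separated by pipes. Here is a minimal illustrative parser; the message values are fabricated, and production code should use a dedicated library (such as python-hl7) that also handles component separators and escaping.

```python
def parse_hl7(message: str) -> dict:
    """Minimal HL7 v2 parsing sketch: split a message into segments
    (one per line) and each segment into pipe-delimited fields."""
    segments = {}
    for line in message.strip().splitlines():
        fields = line.split("|")
        # Group segments by their three-letter type (MSH, PID, ...)
        segments.setdefault(fields[0], []).append(fields)
    return segments

# A toy two-segment message with fabricated values, not real patient data.
msg = ("MSH|^~\\&|LAB|HOSP|||202411010830||ORU^R01|123|P|2.5\n"
       "PID|1||555-44-3333||DOE^JANE")
parsed = parse_hl7(msg)
print(parsed["PID"][0][5])  # the patient-name field: DOE^JANE
```

This sketch ignores HL7's special handling of the MSH segment (where the field separator itself counts as a field), which a real parser must account for.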

The leaked presentation shows that hospitals are increasingly using AI and LLMs to streamline administrative tasks, and that some are experimenting with, or at least considering, how LLMs could be used in clinical settings or in interactions with patients.

Open Source

New 'Open Source AI Definition' Criticized for Not Opening Training Data (slashdot.org) 38

Long-time Slashdot reader samj — also a long-time Debian developer — tells us there's some opposition to the newly-released Open Source AI definition. He calls it a "fork" that undermines the original Open Source definition (which was originally derived from Debian's Free Software Guidelines, written primarily by Bruce Perens), and points us to a new domain with a petition declaring that instead Open Source shall be defined "solely by the Open Source Definition version 1.9. Any amendments or new definitions shall only be recognized with clear community consensus via an open and transparent process."

This move follows some discussion on the Debian mailing list: Allowing "Open Source AI" to hide their training data is nothing but setting up a "data barrier" protecting the monopoly, disabling anybody other than the first party to reproduce or replicate an AI. Once passed, OSI is making a historical mistake towards the FOSS ecosystem.
They're not the only ones worried about data. This week TechCrunch noted an August study which "found that many 'open source' models are basically open source in name only. The data required to train the models is kept secret, the compute power needed to run them is beyond the reach of many developers, and the techniques to fine-tune them are intimidatingly complex. Instead of democratizing AI, these 'open source' projects tend to entrench and expand centralized power, the study's authors concluded."

samj shares the concern about training data, arguing that training data is the source code and that this new definition has real-world consequences. (On a personal note, he says it "poses an existential threat to our pAI-OS project at the non-profit Kwaai Open Source Lab I volunteer at, so we've been very active in pushing back past few weeks.")

And he also came up with a detailed response by asking ChatGPT what the implications of Debian disavowing the OSI's Open Source AI definition would be. ChatGPT composed a 7-point, 14-paragraph response, concluding that this level of opposition would "create challenges for AI developers regarding licensing. It might also lead to a fragmentation of the open-source community into factions with differing views on how AI should be governed under open-source rules." But "Ultimately, it could spur the creation of alternative definitions or movements aimed at maintaining stricter adherence to the traditional tenets of software freedom in the AI age."

However the official FAQ for the new Open Source AI definition argues that training data "does not equate to a software source code." Training data is important to study modern machine learning systems. But it is not what AI researchers and practitioners necessarily use as part of the preferred form for making modifications to a trained model.... [F]orks could include removing non-public or non-open data from the training dataset, in order to train a new Open Source AI system on fully public or open data...

[W]e want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information — like decisions about their health. Similarly, much of the world's Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

Read on for the rest of their response...
Privacy

PimEyes 'Made a Public Rolodex of Our Faces'. Should You Opt Out? (msn.com) 32

The free face-image search engine PimEyes "scans through billions of images from the internet and finds matches of your photo that could have appeared in a church bulletin or a wedding photographer's website," writes a Washington Post columnist.

So to find and delete themselves from "the PimEyes searchable Rolodex of faces," they "recently handed over a selfie and a digital copy of my driver's license to a company I don't trust." PimEyes says it empowers people to find their online images and try to get unwanted ones taken down. But PimEyes face searches are largely open to anyone with either good or malicious intent. People have used PimEyes to identify participants in the Jan. 6, 2021, attack on the Capitol, and creeps have used it to publicize strangers' personal information from just their image.

The company offers an opt-out form to remove your face from PimEyes searches. I did it and resented spending time and providing even more personal information to remove myself from the PimEyes repository, which we didn't consent to be part of in the first place. The increasing ease of potentially identifying your name, work history, children's school, home address and other sensitive information from one photo shows the absurdity of America's largely unrestrained data-harvesting economy.

While PimEyes' CEO said they don't keep the information you provide to opt-out, "you give PimEyes at least one photo of yourself plus a digital copy of a passport or ID with personal details obscured..." according to the article. (PimEyes' confirmation email "said I might need to repeat the opt-out with more photos...") Some digital privacy experts said it's worth opting out of PimEyes, even if it's imperfect, and that PimEyes probably legitimately needs a personal photo and proof of identity for the process. Others found it "absurd" to provide more information to PimEyes... or they weren't sure opting out was the best choice... Experts said the fundamental problem is how much information is harvested and accessible without your knowledge or consent from your phone, home speakers, your car and information-organizing middlemen like PimEyes and data brokers.

Nathan Freed Wessler, an American Civil Liberties Union attorney focused on privacy litigation, said laws need to change the assumption that companies can collect almost anything about you or your face unless you go through endless opt-outs. "These systems are scary and abusive," he said. "If they're going to exist, they should be based on an opt-in system."

AI

More Than 60% of CEOs Are 'Digitally Illiterate', According To Their Own Employees 73

Corporate resistance to AI tools is costing employees six hours per week in manual tasks that could be automated, according to research by recruitment firm SThree. Sixty-three percent of workers blame management's "digital illiteracy" for slow AI adoption, despite major companies rushing to tout AI initiatives since ChatGPT's launch. A 2023 tech.io study found two-thirds of business leaders barely use AI tools due to limited understanding.
Businesses

Siemens To Buy Altair For $10.6 Billion In Digital Portfolio Push (yahoo.com) 10

An anonymous reader quotes a report from Reuters: Siemens will buy Altair Engineering for $10.6 billion, the American engineering software firm said on Wednesday, as the German company seeks to strengthen its presence in the fast-growing industrial software market. The offer price of $113 per share represents a premium of about 18.7% to Altair's closing price on Oct. 21, a day before Reuters first reported that the company was exploring a sale. The deal for Michigan-based Altair is Siemens's biggest acquisition since Siemens Healthineers bought medical device maker Varian Medical Systems for $16.4 billion in 2020. [...]
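The stated premium lets you back out the reference price: if the $113 offer is 18.7% above Altair's Oct. 21 close, that close was roughly $113 / 1.187. A quick sketch of the arithmetic:

```python
def implied_base_price(offer: float, premium_pct: float) -> float:
    """Back out the reference closing price from an offer price and
    its stated premium: offer = base * (1 + premium)."""
    return offer / (1 + premium_pct / 100)

base = implied_base_price(113.0, 18.7)
print(round(base, 2))  # roughly $95.20 per share
```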

The transaction is anticipated to add to Siemens' earnings per share in about two years from the deal's closing, which is expected in the second half of 2025. It will also increase Siemens' digital business revenue by about 8%, or approximately 600 million euros ($651.36 million), based on fiscal 2023 figures. The transaction would have a revenue impact of about $500 million per year in the mid-term and more than $1 billion per year in the long term, Siemens said.

AI

Robert Downey Jr. Threatens To Sue Over AI Recreations of His Likeness (variety.com) 62

Oscar winner Robert Downey Jr. has threatened legal action against future studio executives who attempt to recreate his likeness using AI. "I intend to sue all future executives just on spec," Downey said when asked about potential AI recreations of his performances. He dismissed concerns about Marvel Studios using his likeness without permission, citing trust in their leadership. During the interview, he criticized tech executives who position themselves as AI gatekeepers, calling it "a massive fucking error."
The Almighty Buck

JPMorgan Begins Suing Customers In 'Infinite Money Glitch' (cnbc.com) 222

JPMorgan Chase is suing customers who exploited an ATM glitch that allowed them to withdraw funds before a check bounced. CNBC reports: The bank on Monday filed lawsuits in at least three federal courts, taking aim at some of the people who withdrew the highest amounts in the so-called infinite money glitch that went viral on TikTok and other social media platforms in late August. [...] JPMorgan, the biggest U.S. bank by assets, is investigating thousands of possible cases related to the "infinite money glitch," though it hasn't disclosed the scope of associated losses. Despite the waning use of paper checks as digital forms of payment gain popularity, they're still a major avenue for fraud, resulting in $26.6 billion in losses globally last year, according to Nasdaq's Global Financial Crime Report.

The infinite money glitch episode highlights the risk that social media can amplify vulnerabilities discovered at a financial institution. Videos began circulating in late August showing people celebrating the withdrawal of wads of cash from Chase ATMs shortly after bad checks were deposited. Normally, banks only make available a fraction of the value of a check until it clears, which takes several days. JPMorgan says it closed the loophole a few days after it was discovered.
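The hold rule described above (only a fraction of a deposited check is available until it clears) can be sketched in a few lines. The 10% fraction here is a made-up illustrative figure, not Chase's actual availability schedule:

```python
def available_now(check_amount: float, cleared: bool,
                  immediate_fraction: float = 0.10) -> float:
    """Illustrative funds-availability rule: until a deposited check
    clears, only a small fraction of its value can be withdrawn.
    The fraction is an assumption for illustration only."""
    return check_amount if cleared else check_amount * immediate_fraction

print(available_now(10_000.0, cleared=False))  # 1000.0 before clearing
print(available_now(10_000.0, cleared=True))   # 10000.0 once cleared
```

The glitch amounted to the `cleared=False` branch behaving like the `cleared=True` one, releasing the full face value of checks that later bounced.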

The lawsuits are likely to be just the start of a wave of litigation meant to force customers to repay their debts and signal broadly that the bank won't tolerate fraud, according to people familiar with the matter. JPMorgan prioritized cases with large dollar amounts and indications of possible ties to criminal groups, they said. The civil cases are separate from potential criminal investigations; JPMorgan says it has also referred cases to law enforcement officials across the country.
"Fraud is a crime that impacts everyone and undermines trust in the banking system," JPMorgan spokesman Drew Pusateri said in a statement to CNBC. "We're pursuing these cases and actively cooperating with law enforcement to make sure if someone is committing fraud against Chase and its customers, they're held accountable."
Bitcoin

Russia Publishes New Crypto Law Expanding State Control Over Digital Assets 21

Russia has enacted a new law expanding control over cryptocurrency mining, granting multiple federal agencies access to digital currency identifier addresses, among other things. The country is also advancing its regulatory framework and experimenting with crypto in international trade. From a report: Taking effect on Nov. 1, the legislation includes several amendments designed to strengthen oversight and impose limitations on crypto mining activities based on regional needs. The law enables the Russian government to implement mining restrictions by location and define specific procedures and circumstances for banning mining operations. A notable provision in the law gives the government the power to stop digital currency mining pools from functioning in certain areas. Additionally, the government now has the authority to regulate infrastructure providers supporting mining operations.

This legislation also grants multiple federal agencies, beyond the Federal Financial Monitoring Service (Rosfinmonitoring), access to digital currency identifier addresses. This expansion includes federal executive agencies and law enforcement, bolstering their capability to track transactions that may be linked to money laundering or terrorist financing activities. Moreover, the amendments transfer responsibility for the national mining register from the Ministry of Digital Development to the Federal Tax Service, which will now oversee mining registrations for businesses and remove those with repeated infractions. While individual miners can continue without registering if they adhere to specific electricity consumption limits, companies and individual entrepreneurs must comply with new registration requirements.
IOS

Apple Intelligence Is Out Today (theverge.com) 36

An anonymous reader quotes a report from The Verge: Apple's AI features are finally starting to appear. Apple Intelligence is launching today on the iPhone, iPad, and Mac, offering features like generative AI-powered writing tools, notification summaries, and a cleanup tool to take distractions out of photos. It's Apple's first official step into the AI era, but it'll be far from its last. Apple Intelligence has been available in developer and public beta builds of Apple's operating systems for the past few months, but today marks the first time it'll be available in the full public OS releases. Even so, the features will still be marked as "beta," and Apple Intelligence will very much remain a work in progress. (You'll have to get on a waitlist to try Apple Intelligence, too.) Siri gets a new look, but its most consequential new features -- like the ability to take action in apps -- probably won't arrive until well into 2025.

In the meantime, Apple has released a very "AI starter kit" set of features. "Writing Tools" will help you summarize notes, change the tone of your messages to make them friendlier or more professional, and turn a wall of text into a list or table. You'll see AI summaries in notifications and emails, along with a new focus mode that aims to filter out unimportant alerts. The updated Siri is signified by a glowing border around the screen, and it now allows for text input by double-tapping the bottom of the screen. It's helpful stuff, but we've seen a lot of this before, and it'll hardly represent a seismic shift in how you use your iPhone. Apple says that more Apple Intelligence features will arrive in December. [...] Availability will expand in December to Australia, Canada, Ireland, New Zealand, South Africa, and the UK, with additional languages coming in April.
Despite Apple's previous claim that Apple Intelligence wouldn't be available in the European Union due to the Digital Markets Act, the features will, in fact, be coming to Europe in April of next year.

Further reading: Apple Updates the iMac With M4 Chip
Software

Can the EU Hold Software Makers Liable For Negligence? (lawfaremedia.org) 132

When it comes to introducing liability for software products, "the EU and U.S. are taking very different approaches," according to Lawfare's cybersecurity newsletter. "While the U.S. kicks the can down the road, the EU is rolling a hand grenade down it to see what happens." Under the status quo, the software industry is extensively protected from liability for defects or issues, and this results in systemic underinvestment in product security. Authorities believe that by making software companies liable for damages when they peddle crapware, those companies will be motivated to improve product security... [T]he EU has chosen to set very stringent standards for product liability, apply them to people rather than companies, and let lawyers sort it all out.

Earlier this month, the EU Council issued a directive updating the EU's product liability law to treat software in the same way as any other product. Under this law, consumers can claim compensation for damages caused by defective products without having to prove the vendor was negligent or irresponsible. In addition to personal injury or property damages, for software products, damages may be awarded for the loss or destruction of data. Rather than define a minimum software development standard, the directive sets what we regard as the highest possible bar. Software makers can avoid liability if they prove a defect was not discoverable given the "objective state of scientific and technical knowledge" at the time the product was put on the market.

Although the directive is severe on software makers, its scope is narrow. It applies only to people (not companies), and damages for professional use are explicitly excluded. There is still scope for collective claims such as class actions, however. The directive isn't law itself but sets the legislative direction for EU member states, and they have two years to implement its provisions. The directive commits the European Commission to publicly collating court judgements based on the directive, so it will be easy to see how cases are proceeding.

Major software vendors used by the world's most important enterprises and governments are publishing comically vulnerable code without fear of any blowback whatsoever. So yes, the status quo needs change. Whether it needs a hand grenade lobbed at it is an open question. We'll have our answer soon.

Google

'We Took on Google and They Were Forced to Pay Billions' (bbc.com) 58

"Google essentially disappeared us from the internet," says the couple who created price-comparison site Foundem in 2006. Google's search results for "price comparison" and "comparison shopping" buried their site — for more than three years.

Today the BBC looks at their 15-year legal battle, which culminated with a then record €2.4 billion fine (£2 billion or $2.6 billion) for Google, which was deemed to have abused its market dominance. The case has been hailed as a landmark moment in the global regulation of Big Tech. Google spent seven years fighting that verdict, issued in June 2017, but in September this year Europe's top court — the European Court of Justice — rejected its appeals.

Speaking to Radio 4's The Bottom Line in their first interview since that final verdict, Shivaun and Adam explained that at first, they thought their website's faltering start had simply been a mistake. "We initially thought this was collateral damage, that we had been false positive detected as spam," says Shivaun, 55. "We just assumed we had to escalate to the right place and it would be overturned...." The couple sent Google numerous requests to have the restriction lifted but, more than two years later, nothing had changed and they said they received no response. Meanwhile, their website was "ranking completely normally" on other search engines, but that didn't really matter, according to Shivaun, as "everyone's using Google".

The couple would later discover that their site was not the only one to have been put at a disadvantage by Google — by the time the tech giant was found guilty and fined in 2017 there were around 20 claimants, including Kelkoo, Trivago and Yelp... In its 2017 judgement, the European Commission found that Google had illegally promoted its own comparison shopping service in search results, whilst demoting those of competitors... "I guess it was unfortunate for Google that they did it to us," Shivaun says. "We've both been brought up maybe under the delusion that we can make a difference, and we really don't like bullies."

Even Google's final defeat in the case last month did not spell the end for the couple. They believe Google's conduct remains anti-competitive and the EC is looking into it. In March this year, under its new Digital Markets Act, the commission opened an investigation into Google's parent company, Alphabet, over whether it continues to preference its own goods and services in search results... The Raffs are also pursuing a civil damages claim against Google, which is due to begin in the first half of 2026. But when, or if, a final victory comes for the couple it will likely be a Pyrrhic one — they were forced to close Foundem in 2016.

A spokesperson for Google told the BBC the 2024 judgment from the European Court of Justice only relates to "how we showed product results from 2008-2017. The changes we made in 2017 to comply with the European Commission's Shopping decision have worked successfully for more than seven years, generating billions of clicks for more than 800 comparison shopping services.

"For this reason, we continue to strongly contest the claims made by Foundem and will do so when the case is considered by the courts."
AI

Did Capturing Carbon from the Air Just Get Easier? (berkeley.edu) 121

"We passed Berkeley air — just outdoor air — into the material to see how it would perform," says U.C. Berkeley chemistry professor Omar Yaghi, "and it was beautiful.

"It cleaned the air entirely of CO2," Yaghi says in an announcement from the university. "Everything."

SFGate calls it "a discovery that could help potentially mitigate the effects of climate change..." Yaghi's lab has worked on carbon capture since the 1990s and began work on these crystalline structures in 2005. The innovative substance has lots of tiny holes, making it "great for storing gases or liquids, much like a sponge holds water," Yaghi said... While it could take one to two years for the powder to be usable in large-scale applications, Yaghi co-founded Atoco, an Irvine company, to commercialize his research and expand it beyond just carbon capture and storage.
"Capturing carbon from the air just got easier," says the headline on the announcement from the university, which explains why this technology is crucial: [T]oday's carbon capture technologies work well only for concentrated sources of carbon, such as power plant exhaust. The same methods cannot efficiently capture carbon dioxide from ambient air, where concentrations are hundreds of times lower than in flue gases. Yet direct air capture, or DAC, is being counted on to reverse the rise of CO2 levels, which have reached 426 parts per million, 50% higher than levels before the Industrial Revolution. Without it, according to the Intergovernmental Panel on Climate Change, we won't reach humanity's goal of limiting warming to 1.5°C (2.7°F) above preexisting global averages.

A new type of absorbing material developed by chemists at the University of California, Berkeley, could help get the world to negative emissions... According to Yaghi, the new material could be substituted easily into carbon capture systems already deployed or being piloted to remove CO2 from refinery emissions and capture atmospheric CO2 for storage underground. UC Berkeley graduate student Zihui Zhou, the paper's first author, said that a mere 200 grams of the material, a bit less than half a pound, can take up as much CO2 in a year — 20 kilograms (44 pounds) — as a tree.
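The figures quoted above imply the material captures roughly 100 times its own mass in CO2 per year, which makes scaling estimates straightforward:

```python
# Figures cited in the article: 200 g of material takes up 20 kg CO2/year.
material_kg = 0.2
uptake_kg_per_year = 20.0

ratio = uptake_kg_per_year / material_kg
print(ratio)  # 100.0 -> ~100x its own mass in CO2 per year

# Material needed to capture one metric ton of CO2 per year at that rate:
needed_kg = 1000.0 / ratio
print(needed_kg)  # 10.0 kg
```

This assumes the lab-measured uptake rate holds at scale, which real deployments (airflow, regeneration cycles, degradation) would complicate.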

Their research was published this week in the journal Nature.

And it's also interesting that they're using AI, according to the university's announcement: Yaghi is optimistic that artificial intelligence can help speed up the design of even better COFs and MOFs for carbon capture or other purposes, specifically by identifying the chemical conditions required to synthesize their crystalline structures. He is scientific director of a research center at UC Berkeley, the Bakar Institute of Digital Materials for the Planet (BIDMaP), which employs AI to develop cost-efficient, easily deployable versions of MOFs and COFs to help limit and address the impacts of climate change. "We're very, very excited about blending AI with the chemistry that we've been doing," he said.
Another potential use could be for harvesting water from desert air for drinking water, Yaghi told SFGate. But he seems very focused specifically on carbon capture.

"Another thing is that we need a strong determination among officials and industries to make carbon capture a high priority. Things have to change, but I believe that direct carbon capture from air is very doable."
DRM

US Copyright Office Grants DMCA Exemption For Ice Cream Machines (extremetech.com) 82

The Librarian of Congress has granted a DMCA exemption allowing independent repair of soft-serve machines, addressing the persistent issue of restricted repairs on McDonald's frequently malfunctioning machines. ExtremeTech reports: Section 1201 of the DMCA makes it illegal to bypass a digital lock protecting copyrighted work. That can be the DRM on a video file you download from iTunes, the carrier locks that prevent you from using a phone on other networks, or even the software running a McDonald's soft serve machine that refuses to accept third-party repairs. By locking down a product with DRM, companies can dictate when and how items are repaired under threat of legal consequences. This is an ongoing issue for people who want to fix all those busted ice cream machines.

Earlier this year, iFixit and Public Knowledge submitted their request for an exemption that would have covered a wide swath of industrial equipment. The request included everything from building management software to the aforementioned ice cream machines. Unfortunately, the Copyright Office was unconvinced on some of these points. However, the Librarian of Congress must be just as sick as the rest of us of hearing that the ice cream machine is broken. The office granted an exception for "retail-level food preparation equipment."

That means restaurant owners and independent repair professionals will be able to bypass the software locks that keep kitchen machinery offline until the "right" repair services get involved. This should lower prices and speed up repairs in such situations. Public Knowledge and iFixit express disappointment that the wider expansion was not granted, but they're still celebrating with some delicious puns (and probably ice cream).
"There's nothing vanilla about this victory; an exemption for retail-level commercial food preparation equipment will spark a flurry of third-party repair activity and enable businesses to better serve their customers," said Meredith Rose, Senior Policy Counsel at Public Knowledge.
Privacy

UnitedHealth Says Change Healthcare Hack Affects Over 100 Million (techcrunch.com) 35

UnitedHealth Group said a ransomware attack in February resulted in more than 100 million individuals having their private health information stolen. The U.S. Department of Health and Human Services first reported the figure on Thursday. TechCrunch reports: The ransomware attack and data breach at Change Healthcare stands as the largest known digital theft of U.S. medical records, and one of the biggest data breaches in living memory. The ramifications for the millions of Americans whose private medical information was irretrievably stolen are likely to be lifelong. UHG began notifying affected individuals in late July, a process that continued through October. The stolen data varies by individual, but Change previously confirmed that it includes personal information, such as names and addresses, dates of birth, phone numbers and email addresses, and government identity documents, including Social Security numbers, driver's license numbers, and passport numbers. The stolen health data includes diagnoses, medications, test results, imaging, and care and treatment plans, as well as health insurance information and the financial and banking information found in claims and payment data taken by the criminals.

The cyberattack became public on February 21, when Change Healthcare pulled much of its network offline to contain the intruders, causing immediate outages across the U.S. healthcare sector, which relied on Change for handling patient insurance and billing. UHG attributed the cyberattack to ALPHV/BlackCat, a Russian-speaking ransomware and extortion gang, which later took credit for it. The gang's leaders then vanished after absconding with a $22 million ransom paid by the health insurance giant, stiffing the contractors who had carried out the hack of Change Healthcare out of their share of the windfall. Those contractors took the data they had stolen and formed a new group, which extorted a second ransom from UHG while publishing a portion of the stolen files online to prove their threat.

There is no evidence that the cybercriminals subsequently deleted the data. Other extortion gangs, including LockBit, have been shown to hoard stolen data even after the victim pays and the criminals claim to have deleted it. In paying the ransom, Change obtained a copy of the stolen dataset, allowing the company to identify and notify the affected individuals whose information was found in the data. Efforts by the U.S. government to catch the hackers behind ALPHV/BlackCat, one of the most prolific ransomware gangs operating today, have so far failed. The gang bounced back from a 2023 takedown operation that seized its dark web leak site. Months after the Change Healthcare breach, the U.S. State Department upped its reward for information on the whereabouts of the ALPHV/BlackCat cybercriminals to $10 million.

Education

Code.org Taps No-Code Tableau To Make the Case For K-12 Programming Courses 62

theodp writes: "Computer science education is a necessity for all students," argues tech-backed nonprofit Code.org in its newly-published 2024 State of Computer Science Education (Understanding Our National Imperative) report. "Students of all identities and chosen career paths need quality computer science education to become informed citizens and confident creators of content and digital tools."

In the 200-page report, Code.org pays special attention to participation in "foundational computer science courses" in high school. "Across the country, 60% of public high schools offer at least one foundational computer science course," laments Code.org (curiously promoting a metric that ignores school size which nonetheless was embraced by Education Week and others).

"A course that teaches foundational computer science includes a minimum amount of time applying learned concepts through programming (at least 20 hours of programming/coding for grades 9-12)," Code.org explains in a separate 13-page Defining Foundational Computer Science document. Interestingly, Code.org argues that Data and Informatics courses -- in which "students may use Oracle WebDB, SQL, PL/SQL, SPSS, and SAS" to learn "the K-12 CS Framework concepts about data and analytics" -- do not count, because "the course content focuses on querying using a scripting language rather than creating programs" [the IEEE's Top Programming Languages 2024 begs to differ]. Code.org similarly dismissed the Wolfram Language for broad educational use back in 2016.
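The querying-versus-programming distinction Code.org is drawing can be seen in a small sketch (the dataset and course names here are hypothetical, and SQLite stands in for the database tools named above): the same aggregate is computed once declaratively, as a SQL query that states what result is wanted, and once procedurally, as a loop that spells out how to compute it.

```python
import sqlite3

# Toy enrollment data, used only to illustrate the distinction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollments (course TEXT, students INTEGER)")
conn.executemany(
    "INSERT INTO enrollments VALUES (?, ?)",
    [("CS Principles", 30), ("Data Science", 25), ("CS Principles", 20)],
)

# Declarative: describe the desired result; the engine decides how to get it.
row = conn.execute(
    "SELECT SUM(students) FROM enrollments WHERE course = 'CS Principles'"
).fetchone()
print(row[0])  # 50

# Procedural: explicitly iterate, test, and accumulate to reach the same total.
total = 0
for course, students in conn.execute("SELECT course, students FROM enrollments"):
    if course == "CS Principles":
        total += students
print(total)  # 50
```

Both snippets produce the same number; the argument is over whether writing only the first kind of statement counts as "creating programs."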

Given Code.org's insistence that kids take Code.org-defined 'programming' courses in K-12 to promote computational thinking, it's probably no surprise to see that the data behind the 2024 State of Computer Science Education report was prepared using Python (the IEEE's top programming language) and presented to the public in a Jupyter notebook. Just kidding. Ironically, the data behind the report is prepared and presented by Code.org in a no-code Tableau workbook.
Businesses

Kroger and Walmart Deny 'Surge Pricing' After Adopting Digital Price Tags (nytimes.com) 149

An anonymous reader shares a report: Members of Congress are raising the alarm about new technology at supermarkets: They say Kroger and other major grocery stores are implementing digital price tags that could allow for dynamic pricing, meaning the sticker price on items like eggs and milk could change regularly. They also claim data from facial recognition technology at Kroger could be considered in pricing decisions.

Kroger denied the claims, saying it has no plans to implement dynamic pricing or use facial recognition software. Walmart also said it had no plans for dynamic pricing, and that facial recognition was not being used to affect pricing, but the company did not specify whether the tool was being used for other purposes. Both Walmart, which has 4,606 U.S. stores, and Kroger, which has nearly 2,800 U.S. stores, also suggested that the effects of digital price tags are being exaggerated, and economic experts say that most grocery bills won't be higher as a result of the tags. Still, data privacy experts have concerns about new technology being implemented at grocery stores broadly.

Graphics

Adobe Made Its Painting App Completely Free To Take On Procreate 27

Adobe's Fresco painting app is now free for everyone, in an attempt to lure illustrators to join its creative software suite. The Verge reports: Fresco is essentially Adobe's answer to apps like Procreate and Clip Studio Paint, which all provide a variety of tools for both digital art and simulating real-world materials like sketching pencils and watercolor paints. Adobe Fresco is designed for touch and stylus-supported devices, and is available on iPad, iPhone, and Windows PCs. The app already had a free-to-use tier, but premium features like access to the full Adobe Fonts library, a much wider brush selection, and the ability to import custom brushes previously required a $9.99 annual subscription. That's pretty affordable for an Adobe subscription, but still couldn't compete with Procreate's $12.99 one-time purchase model.

Starting today, all of Fresco's premium features are no longer locked behind a paywall. The app first launched in 2019 and isn't particularly well-known compared to more established Adobe apps like Photoshop and Illustrator that feature more complex, professional design tools. Fresco still has some interesting features of its own, like reflective and rotation symmetry (which mirror artwork as you draw) and the ability to quickly animate drawings with motion presets like "bounce" and "breathe."
Bitcoin

Peter Todd In Hiding After Being 'Unmasked' As Bitcoin Creator Satoshi Nakamoto (wired.com) 77

An anonymous reader quotes a report from Wired: When Canadian developer Peter Todd found out that a new HBO documentary, Money Electric: The Bitcoin Mystery, was set to identify him as Satoshi Nakamoto, the creator of Bitcoin, he was mostly just pissed. "This was clearly going to be a circus," Todd told WIRED in an email. The identity of the person -- or people -- who created Bitcoin has been the subject of speculation since December 2010, when they disappeared from public view. The mystery has proved all the more irresistible for the trove of bitcoin Satoshi is widely believed to have controlled, suspected to be worth many billions of dollars today. When the documentary was released on October 8, Todd joined a long line of alleged Satoshis.

Documentary maker Cullen Hoback, who in a previous film claimed to have identified the individual behind QAnon, laid out his theory to Todd on camera. The confrontation became the climactic scene of the documentary. But Todd claims he didn't see it coming; he says he was left with the impression that the film was about the history of Bitcoin, not the identity of its creator. Since the documentary aired, Todd has repeatedly and categorically denied that he created Bitcoin: "For the record, I am not Satoshi," he says. "I think Cullen made the Satoshi accusation for marketing. He needed a way to get attention for his film."

For his part, Hoback remains confident in his conclusions. The various denials and deflections from Todd, he claims, are part of a grand and layered misdirection. "While of course we can't outright say he is Satoshi, I think that we make a very strong case," says Hoback. Whatever the truth, Todd will now bear the burden of having been unmasked as Satoshi. He has gone into hiding. [...] Todd expects that "continued harassment by crazy people" will become the indefinite status quo. But he says the potential personal safety implications are his chief concern -- and the reason he has gone into hiding. "Obviously, falsely claiming that ordinary people of ordinary wealth are extraordinarily rich exposes them to threats like robbery and kidnapping," says Todd. "Not only is the question dumb, it's dangerous. Satoshi obviously didn't want to be found, for good reasons, and no one should help people trying to find Satoshi."
"I think the idea that it puts their life [at risk] is a little overblown," says Hoback. "This person is potentially on track to become the wealthiest on Earth."

"If countries are considering adopting this in their treasuries or making it legal tender, the idea that there's potentially this anonymous figure out there who controls one twentieth of the total supply of digital gold is pretty important."
