Digital

What Can We Learn from the Computers of 1966? (harvardmagazine.com) 61

Harry R. Lewis has been a Harvard CS professor — teaching both Bill Gates and Mark Zuckerberg — and the dean of Harvard College. Born in 1947, Lewis remembers flipping the 18 toggle switches on Harvard's PDP-4 back in 1966 — up ("click!") or down ("CLACK"). And he thinks there's a lesson for today from a time when "Computers were experienced as physical things."

[T]he machine had a personality because it had a body you could feel and listen to. You could tell whether it was running smoothly by the way it sounded...

Unlike the unreliable mechanical contraptions of yore, today's computers — uninteresting though they may be to look at if you can find them at all — mostly don't break down, so we have fewer reasons to remember their physicality. Does it matter that the line between humans and the machines we have created has so blurred? Of course it does. We have known for a long time that we would eventually lose the calculation game to our creations; it has happened. We are likely to lose Turing's "Imitation Game" too, in which a computer program, communicating with a human via typed text, tries to fool the user into confusing it with a human at another keyboard. (ChatGPT and its ilk are disturbingly convincing conversationalists already.)

Our challenge, in the presence of ubiquitous, invisible, superior intelligent agents, will be to make sure that we, and our heirs and successors, remember what makes us human... All computers can do is pretend to be human. They can be, in the language of the late philosopher Daniel Dennett '63, counterfeit humans... The first error is suggesting that computers can be digitally trained to be superior versions of human intellects. And the second is inferring that human judgment will not be needed once computers get smart enough...

[N]o AI system can be divorced from the judgments of the humans who created it... Only hubristic humans could think that their counterfeits might completely substitute for human companionship, wisdom, curiosity, and judgment.

Even back in 1966, Lewis says he learned two lessons that "have stood the test of time. Be careful what you ask them for. And it can be hard to tell what they are doing."

One example? "In those pre-miniaturization days, the ordinary operation of the central processor generated so much radiation that you would put a transistor radio on the console and tune it in between AM stations. From the other side of the room, the tone of the static indicated whether the machine had crashed or not."
Windows

Who Wrote the Code for Windows' 'Blue Screen of Death'? (sfgate.com) 40

Who wrote the code for Windows' notorious "Blue Screen of Death"? It's "been a source of some contention," writes SFGate: A Microsoft developer blog post from Raymond Chen in 2014 said that former Microsoft CEO Steve Ballmer wrote the text for the Ctrl+Alt+Del dialog in Windows 3.1. That very benign post led to countless stories from tech media claiming Ballmer was the inventor of the "Blue Screen of Death." That, in turn, prompted a follow-up developer blog post from Chen titled "Steve Ballmer did not write the text for the blue screen of death...."

Chen later claimed he was responsible for the "Blue Screen of Death," saying he coded it into Windows 95. Problem is, it already existed in previous iterations of Windows, and 95 simply removed it. Chen added it back in, which he sort of cops to, saying: "And I'm the one who wrote it. Or at least modified it last." No one challenged Chen's 2014 self-attribution until 2021, when former Microsoft developer Dave Plummer stepped in. According to Plummer, the "Blue Screen of Death" was actually the work of Microsoft developer John Vert, whom logs revealed to be the father of the modern Windows blue screen way back in version 3.1.

Plummer spoke directly with Vert, who remembered that he got the idea because there was already a blue screen with white text in both his machine at the time (a MIPS RISC box) and his text editor (SlickEdit)...
Firefox

Firefox 128 Criticized for Including Small Test of 'Privacy-Preserving' Ad Tech by Default (itsfoss.com) 57

"Many people over the past few days have been lashing out at Mozilla," writes the blog Its FOSS, "for enabling Privacy-Preserving Attribution by default on Firefox 128, and the lack of publicity surrounding its introduction."

Mozilla responded that the feature will only run "on a few sites in the U.S. under strict supervision" — adding that users can disable it at any time ("because this is a test"), and that it's only even enabled if telemetry is also enabled.

And they also emphasize that it's "not tracking." The way it works is there's an "aggregation service" that can periodically send advertisers a summary of ad-related actions — again, aggregated data, from a mass of many other users. (And Mozilla says that aggregated summary even includes "noise that provides differential privacy.") This Privacy-Preserving Attribution concept "does not involve sending information about your browsing activities to anyone... Advertisers only receive aggregate information that answers basic questions about the effectiveness of their advertising."
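The aggregate-plus-noise idea is easier to see in miniature. Below is a minimal sketch — not Mozilla's actual implementation; the Laplace mechanism, the epsilon value, and the per-user conversion flags are all illustrative assumptions — of how an aggregation service could release an advertiser-facing total with differential-privacy noise:

```python
import math
import random

def noisy_count(true_count: int, epsilon: float = 1.0, rng=None) -> float:
    """Return the count plus Laplace noise calibrated to sensitivity 1
    (adding or removing one user changes the count by at most 1),
    giving epsilon-differential privacy for the released total."""
    rng = rng or random.Random()
    scale = 1.0 / epsilon
    u = rng.random() - 0.5  # uniform in (-0.5, 0.5)
    # Inverse-CDF sample from the Laplace(0, scale) distribution.
    return true_count - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# A hypothetical aggregation service: sum ad-conversion flags from many
# users, then release only the noised total to the advertiser.
reports = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
total = noisy_count(sum(reports), epsilon=1.0)
```

Any single user flipping their flag changes the true count by at most 1, and the noise masks that difference; the advertiser sees only the noised total, never individual reports.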

More from It's FOSS: Even though Mozilla mentioned that PPA would be enabled by default on Firefox 128 in a few of its past blog posts, they failed to communicate this decision clearly to a wider audience... In response to the public outcry, Firefox CTO Bobby Holley had to step in to clarify what was going on.

He started with how the internet has become a massive cesspool of surveillance, and how doing something about it is the primary reason many people are part of Mozilla. He then expanded on their approach with Firefox, which, historically speaking, has been to ship a browser with anti-tracking features baked in to tackle the most common surveillance techniques. But there were two limitations to this approach. One was that advertisers would try to bypass these countermeasures; the other was that most users simply accept the default options they are shown...

Bas Schouten, Principal Software Engineer at Mozilla, made it clear at the end of a heated Mastodon thread that "[opt-in features are] making privacy a privilege for the people that work to inform and educate themselves on the topic. People shouldn't need to do that, everyone deserves a more private browser. Privacy features, in Firefox, are not meant to be opt-in. They need to be the default.

"If you are 'completely anti-ads' (i.e. even if their implementation is private), you probably use an ad blocker. So are unaffected by this."

This has already provoked a discussion among Slashdot readers. "It doesn't seem that evil to me," argues Slashdot reader geekprime. "Seems like the elimination of cross site cookies is a privacy enhancing idea." (They cite Mozilla's statement that their goal is "to inform an emerging Web standard designed to help sites understand how their ads perform without collecting data about individual people. By offering sites a non-invasive alternative to cross-site tracking, we hope to achieve a significant reduction in this harmful practice across the web.")

But Slashdot reader TheNameOfNick disagrees. "How realistic is the part where advertisers stop tracking you because they get less information from the browser maker...?"

Mozilla has provided simple instructions for disabling the feature:
  • Click the menu button and select Settings.
  • In the Privacy & Security panel, find the Website Advertising Preferences section.
  • Uncheck the box labeled Allow websites to perform privacy-preserving ad measurement.
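For users who prefer scripting their configuration, the same toggle is also reachable as an about:config preference. A `user.js` line like the following should work — the pref name below matches what has been reported for Firefox 128, but confirm it exists in your own version's about:config before relying on it:

```javascript
// Disable Privacy-Preserving Attribution (pref name as reported for Firefox 128;
// verify in about:config, as pref names can change between releases).
user_pref("dom.private-attribution.submission.enabled", false);
```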

Television

Remembering Bob Newhart, Legendary Comedian - and Commodore PET Owner (latimes.com) 24

Long-time Slashdot reader theodp writes: Bob Newhart, whose stammering, deadpan unflappability carried him to stardom as a standup comedian and later in television and movies, has died at age 94. He remains best known for the television shows "The Bob Newhart Show" (1972-78) and "Newhart" (1982-90), both of which were built around his persona as a reasonable man put-upon by crazies. A younger crowd may remember Newhart from his roles in the movie "Elf" (2003) and TV's "The Big Bang Theory" (2013-18).

Less known about Newhart is that he was an early Commodore PET owner, recalling for the LA Times in 2001: "I remember leafing through a copy of Popular Science magazine and seeing an ad for a Commodore computer that had 8- or 16 kilobytes [in 1977]. It had an awful-looking screen, and it was $795. I thought I'd better get one because I had sons who were going to be in high school and might want to know about computers. Later, I moved up to the 64 KB model and thought that was silly because it was more memory than I would ever possibly need.

"I got them for the kids and then found I was fascinated by them. The first ones had tape drives. You would get a program like a word processor, put the tape in and then walk away for about a half an hour while the computer loaded it. But the first time I used a spell checker and it corrected a word, I thought, 'We are getting close to God here."

ISS

Virgin Galactic Flies 3D Printer Into Space. Its Next Mission: Bioprinting on the ISS (berkeley.edu) 13

"In a significant advancement for space technology, a team of UC Berkeley researchers, led by doctoral student Taylor Waddell, successfully launched a 3D printer into space," reports the university's student newspaper: As part of the Virgin Galactic 07 mission, the team sent a 3D printer named SpaceCAL to space to explore the potential of Computed Axial Lithography, or CAL, and additive manufacturing in space... During its 140-second flight in suborbital space, the SpaceCAL printer autonomously detected microgravity and printed four test parts: two space shuttles and two Benchies, or 3D-printed boats created to check the printer's accuracy, according to Sean Chu, a member of the team who worked on designing structures and mechanisms. Within the 140 seconds, the process involved multiple steps such as printing, post-washing, flushing with water and post-curing with light to fully solidify the parts.
But that's just the beginning, says the university's engineering department: To date, CAL has shown that it can successfully print with more than 60 different materials on Earth, such as silicones, glass composites and biomaterials. According to Waddell, this versatility could come in handy for both the cabin and the crew... "CAL is also capable of repairing the crew. We can print dental replacements, skin grafts or lenses, or things personalized in emergency medicine for astronauts, which is very important in these missions, too."

Someday, CAL may be used to print even more sophisticated parts, such as human organs. Lawrence Livermore National Lab has received a grant from NASA to test this technology on the International Space Station. "They're going to basically do bioprinting on the Space Station," said Waddell. "And the long, long-term goal is to print organs up in space with CAL, then bring them back down to Earth." Next, Waddell and his colleagues hope to begin work with NASA on developing and validating a single object that could support crew health and wellness, like a dental crown for an astronaut or a surgical wound closure tool...

This project was made possible through a $1.4 million grant and engineering support provided by NASA. In addition, Virgin Galactic played a pivotal role in taking this project to the next level.

NASA

Fastest Object Ever Made By Humans Continues Circling the Sun, 500x Faster Than Sound (sciencealert.com) 61

An anonymous reader shared this report from ScienceAlert: NASA's Parker Solar Probe, tasked with taking a close-up look at the Sun's outer corona, has just equalled the record for the fastest-moving human-made object ever. The previous record holder? The Parker Solar Probe, again. The probe was recorded traveling at 635,266 kilometers (394,736 miles) per hour on June 29, the second time it's reached that speed since it launched in 2018. We're talking around 500 times faster than the speed of sound here. It's on course to get even faster too, with a top speed of around 692,000 kph (430,000 mph) expected when it makes its closest approach to the Sun in 2025.
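The "around 500 times" figure is easy to sanity-check with back-of-the-envelope arithmetic (assuming a sea-level speed of sound of roughly 1,235 km/h, about 343 m/s; sound speed varies with temperature, so this is approximate):

```python
# Sanity check: reported probe speed vs. an approximate speed of sound.
probe_kph = 635_266   # reported speed on June 29, km/h
sound_kph = 1_235     # speed of sound at sea level, km/h (~343 m/s; approximate)
mach = probe_kph / sound_kph
print(round(mach))    # 514 -- consistent with "around 500 times"
```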
It's the probe's 20th approach to the sun, according to the article, with the probe using Venus "to create a sort of gravity-powered slingshot." (NASA has created a nice interactive 3D model of the probe...)

Besides collecting particle samples in 2021, "The probe is eventually going to get nice and close to the swirling mass of ultra-hot plasma surrounding the Sun, and take a wealth of different measurements to help improve our scientific understanding of it."
Programming

Rust Leaps Forward on Language Popularity Index (infoworld.com) 59

An anonymous reader shared this report from InfoWorld: Rust has leaped to its highest position ever in the monthly Tiobe index of language popularity, scaling to the 13th spot this month, with placement in the top 10 anticipated in an upcoming edition. Previously, Rust had never gone higher than 17th place in the Tiobe Programming Index. Tiobe CEO Paul Jansen attributed Rust's ascent in the just-released July index to a February 2024 U.S. White House report recommending Rust over C/C++ for safety reasons. He also credited the growing community and ecosystem support for the language. "Rust is finally moving up."
The article adds that these rankings are based on "the number of skilled engineers worldwide, courses, and third-party vendors pertaining to languages, examining websites such as Google, Amazon, Wikipedia, and more than 20 others to determine the monthly numbers."
  1. Python
  2. C++
  3. C
  4. Java
  5. C#
  6. JavaScript
  7. Go
  8. Visual Basic
  9. Fortran
  10. SQL

Interestingly, Rust has also just moved into the top ten of the rival Pypl Popularity of Programming Language index (which, according to the article, "assesses how often languages are searched on in Google"):

  1. Python
  2. Java
  3. JavaScript
  4. C#
  5. C/C++
  6. R
  7. PHP
  8. TypeScript
  9. Swift
  10. Rust

Power

Three Mile Island Considers Nuclear Restart (reuters.com) 94

An anonymous reader quotes a report from Reuters: Constellation Energy is in talks with the Pennsylvania governor's office and state lawmakers to help fund a possible restart of part of its Three Mile Island power facility, the site of a nuclear meltdown in the 1970s, three sources familiar with the discussions said on Tuesday. The conversations, which two sources described as "beyond preliminary," signal that Constellation is advancing plans to revive part of the southern Pennsylvania nuclear generation site, which operated from 1974 to 2019. The nuclear unit Constellation is considering restarting is separate from the one that melted down.

The sources said that a shut Michigan nuclear plant, which was recently awarded a $1.5 billion conditional loan to restart from the administration of U.S. President Joe Biden, could serve as a private-public sector blueprint for Three Mile Island. The sources asked not to be named due to the sensitivity of the discussions. "Though we have determined it would be technically feasible to restart the unit, we have not made any decision on a restart as there are many economic, commercial, operational and regulatory considerations remaining," Constellation spokesperson Dave Snyder said in an email. Snyder did not comment on the specifics of discussions about reopening the Pennsylvania site.

Last month, Constellation told Reuters that it had cleared an engineering study of Three Mile Island, though it was unknown if the Baltimore, Maryland-based energy company would move forward with plans to reopen the site. Constellation also said that given the current premium placed on nuclear energy, acquiring other sites was generally off the table and the company would instead look to expand its existing fleet. The Three Mile Island unit that could be restarted is different from the site's unit 2, which experienced a partial meltdown in 1979 in the most famous commercial nuclear accident in U.S. history.
The report notes that "no U.S. nuclear power plant has been reopened after shutting." A restart will not only be costly, but it will be challenged over safety and environmental concerns.
Open Source

Developer Successfully Boots Up Linux on Google Drive (ersei.net) 42

It's FOSS writes: When it comes to Linux, we get to see some really cool, and sometimes quirky projects (read Hannah Montana Linux) that try to show off what's possible, and that's not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.
Their ultimate idea involved FUSE (which allows running filesystem code in userspace). The developer's blog post explains that when Linux boots, "the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem... it's very helpful! We can mount a FUSE filesystem in that step and boot normally... Thankfully, Dracut makes it easy enough to build a custom initramfs... I decide to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works."
Doing testing in an Amazon S3 container, they built an EFI image — then spent days trying to enable networking... And the adventure continues. ("Would it be possible to manually switch the root without a specialized system call? What if I just chroot?") After they'd made a few more tweaks, "I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought't've stopped me, right? Nobody stopped me, so I kept going..." I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server... This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.

Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don't know what to do next with this. Maybe I should install Nix?

Open Source

FreeBSD Contributor Mocks Gloomy Predictions for the Open Source Movement (acm.org) 94

In Communications of the ACM, long-time FreeBSD contributor Poul-Henning Kamp mocks the idea that the free and open-source software movement has "come apart" and "will end in tears and regret." Economists and others focused on money — like my bank — have had a lot of trouble figuring out the free and open source software (FOSS) phenomenon, and eventually they seem to have reached the conclusion that it just makes no sense. So, they go with the flow. Recently, very serious people in the FOSS movement have started to write long and thoughtful opinion pieces about how it has all come apart and will end in tears and regret. Allow me to disagree...
What follows is a humorous history of how the Open Source movement bested a series of ill-conceived marketing failures starting after the "utterly bad" 1980s when IBM had an "unimaginably huge monopoly" — and an era of vendor lock-in from companies trying to be the next IBM: Out of that utter market failure came Minix, (Net/Free/Open)BSD, and Linux, at a median year of approximately 1991. I can absolutely guarantee that if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs — no strings attached — nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby. Bill Gates would also have had a lot less of our money...
The essay moves on to when "that dot-com thing happened, fueled by the availability of FOSS operating systems, which did a much better job than any operating system you could buy — not just for the price, but in absolute terms of performance on any given piece of hardware. Thus, out of utter market failure, the FOSS movement was born."

And ultimately, the essay ends with our present day, and the phenomenon of companies that "make a business out of FOSS or derivatives thereof..." The "F" in FOSS was never silent. In retrospect, it seems clear that open source was not so much the goal itself as a means to an end, which is freedom: freedom to fix broken things, freedom from people who thought they could clutch the source code tightly and wield our ignorance of it as a weapon to force us all to pay for and run Windows Vista. But the FOSS movement has won what it wanted, and no matter how much oldsters dream about their glorious days as young revolutionaries, it is not coming back; the frustrations and anger of IT in 2024 are entirely different from those of 1991.

One very big difference is that more people have realized that source code is a liability rather than an asset. For some, that realization came creeping along the path from young teenage FOSS activists in the late 1990s to CIOs of BigCorp today. For most of us, I expect, it was the increasingly crushing workload of maintaining legacy code bases...

Microsoft

Christie's Likens Microsoft's Work On MS-DOS To Einstein's Work In Physics 110

Longtime Slashdot reader theodp writes: "If Einstein paved the way for a new era in physics," explains auction house Christie's in a promotion piece for its upcoming offering of 150+ "objects of scientific and historical importance" from the Paul G. Allen Collection (including items from the shuttered Living Computers Museum), "Mr. Allen and his collaborators ushered in a new era of computing. Starting with MS-DOS in 1981, Microsoft then went on to revolutionize personal computing with the launch of Windows in 1985."

Christie's auction and characterization of MS-DOS as an Allen and Microsoft innovation comes 30 years after the death of Gary Kildall, whose unpublished memoir, the Seattle Times reported in Kildall's July 1994 obituary, called DOS "plain and simple theft" of Kildall's CP/M OS. PC Magazine's The Rise of DOS: How Microsoft Got the IBM PC OS Contract notes that Paul Allen himself traced the genesis of MS-DOS back to a phone call Allen made to Seattle Computer Products owner Rod Brock in which Microsoft licensed Tim Paterson's CP/M-inspired QDOS (Quick and Dirty Operating System) for $10,000 plus a royalty of $15,000 for every company that licensed the software. A shrewd buy-low-sell-high business deal, yes, but hardly an Einstein-caliber breakthrough idea.
Education

High School AP CS A Exam Takers Struggled Again With Java Array Question 159

theodp writes: "As with last year," tweeted College Board's AP Program Chief Trevor Packer, "the most challenging free-response question on this year's AP Computer Science A exam was Q4 on 2D Array." While it takes six pages of the AP CS A exam document [PDF] to ask question 4 (of 4), the ask of students essentially boils down to using Java to move from the current location in a 2-D grid to either immediately below or to the right of that location, based on which neighbor contains the lesser value, and adding the value at that location to a total (suggested Java solution, alternative Excel VBA solution). Much like the rules of the children's game Pop-O-Matic Trouble, moves are subject to the constraint that you cannot move right or ahead if doing so takes you to an invalid position (beyond the grid dimensions).
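For readers curious what the exercise boils down to, here is a sketch in Python (the exam itself uses Java) of the walk as paraphrased above. This is not the FRQ's actual code: the function name is invented, ties are broken toward moving right, and counting the starting cell's value in the total is an assumption.

```python
def lesser_neighbor_sum(grid, row=0, col=0):
    """Greedy walk: from (row, col), repeatedly step to whichever of the
    cell below or the cell to the right holds the lesser value (staying
    in bounds), accumulating each visited cell's value."""
    rows, cols = len(grid), len(grid[0])
    total = grid[row][col]          # assumption: starting cell counts
    while row < rows - 1 or col < cols - 1:
        if row == rows - 1:         # bottom row: can only move right
            col += 1
        elif col == cols - 1:       # last column: can only move down
            row += 1
        elif grid[row + 1][col] < grid[row][col + 1]:
            row += 1                # the cell below holds the lesser value
        else:
            col += 1                # the cell to the right is lesser (or tied)
        total += grid[row][col]
    return total

grid = [[1, 3, 5],
        [2, 8, 4],
        [6, 7, 9]]
print(lesser_neighbor_sum(grid))   # 25 (the walk visits 1, 2, 6, 7, 9)
```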

Ironically, many of the AP CS A students who struggled with the grid coding problem were likely exposed by their schools from kindergarten on to more than a decade's worth of annual Hour of Code tutorials that focused on the concepts of using code to move about in 2-D grids. The move-up-down-left-right tutorials promoted by schools came from tech-backed nonprofit Code.org and its tech giant partners and have been taught over the years by the likes of Bill Gates, Mark Zuckerberg, and President Obama, as well as characters from Star Wars, Disney Princess movies, and Microsoft Minecraft.

The news of American high school students struggling again with fairly straightforward coding problems after a year-long course of instruction comes not only as tech companies and tech-tied nonprofits lobby state lawmakers to pass bills making CS a high school graduation requirement in the US, but also as a new report from King's College urges lawmakers and educators to address a stark decline in the number of UK students studying computing at secondary school, which is blamed on the replacement of more approachable ICT (Information and Communications Technology) courses with more rigorous computer science courses in 2013 (a switch pushed by Google and Microsoft), which it notes students have perceived as too difficult and avoided taking.
Sci-Fi

William Gibson's 'Neuromancer' to Become a Series on Apple TV+ 149

It's been adapted into a graphic novel, a videogame, a radio play, and an opera, according to Wikipedia — which also describes years of trying to adapt Neuromancer into a movie. "The landmark 1984 cyberpunk novel has been on Hollywood's wishlist for decades," writes Gizmodo, "with multiple filmmakers attempting to bring it to the big screen." (Back in 2010, Slashdot's CmdrTaco even posted an update with the headline "Neuromancer Movie In Your Future?" with a 2011 story promising the movie deal was "moving forward....")

But now Deadline reports it's becoming a 10-episode series on Apple TV+ (co-produced by Apple Studios) starring Callum Turner and Brianna Middleton: Created for television by Graham Roland and JD Dillard, Neuromancer follows a damaged, top-rung super-hacker named Case (Turner) who is thrust into a web of digital espionage and high stakes crime with his partner Molly (Middleton), a razor-girl assassin with mirrored eyes, aiming to pull a heist on a corporate dynasty with untold secrets.
More from Gizmodo: "We're incredibly excited to be bringing this iconic property to Apple TV+," Roland and Dillard said in a statement. "Since we became friends nearly 10 years ago, we've looked for something to team up on, so this collaboration marks a dream come true. Neuromancer has inspired so much of the science fiction that's come after it and we're looking forward to bringing television audiences into Gibson's definitive 'cyberpunk' world."
The novel launched Gibson's "Sprawl" trilogy of novels (building on the dystopia in his 1982 short story "Burning Chrome"), also resurrecting the "Molly Millions" character from Johnny Mnemonic — an even earlier short story from 1981...
Science

Get Ready For Nuclear Clocks (arxiv.org) 50

Long-time Slashdot reader jrronimo says JILA physicist Jun Ye's group "has made a breakthrough towards the next stage of precision timekeeping."

From their paper recently published to arXiv: Optical atomic clocks use electronic energy levels to precisely keep track of time. A clock based on nuclear energy levels promises a next-generation platform for precision metrology and fundamental physics studies.... These results mark the start of nuclear-based solid-state optical clocks and demonstrate the first comparison of nuclear and atomic clocks for fundamental physics studies. This work represents a confluence of precision metrology, ultrafast strong field physics, nuclear physics, and fundamental physics.
The Media

Citing 'Crisis' in Local Reporting, Associated Press Creates Sister Organization to Seek Grants (apnews.com) 25

Founded in 1846, the not-for-profit Associated Press distributes its news stories to other news outlets. But are free online sites putting those outlets at risk?

This week the Associated Press wrote that a "crisis" in local and state news reporting "shows little signs of abating" — and that it's now setting up "a sister organization that will seek to raise money" for those outlets. The organization, which will have a board of directors independent of the AP, will solicit philanthropic spending to boost this news coverage, both within the AP and through outside organizations, the news outlet said Tuesday. "We feel we have to lean in at this point, not pull back," said Daisy Veerasingham, the AP's president and CEO. "But the supporting mechanism — the local newspaper market that used to support this — can't afford to do that anymore." Veerasingham said she's been encouraged by preliminary talks with some funders who have expressed concern about the state of local journalism...

The local news industry has collapsed over the past two decades, with the number of journalists working in newspapers dropping from 75,000 to 31,000 in 2022, according to Northwestern University. More than half of the nation's counties have no local news outlets or only one.

The AP's CEO offered this succinct summary of their goal. "We want to add new products and services to help the industry."
Bitcoin

Linux Foundation Announces Intent to Form LF Decentralized Trust (linuxfoundation.org) 9

This week the Linux Foundation announced a new organization for decentralized systems and technologies, with an aim of "fostering innovation and collaboration" in both their development and deployment.

It will build on existing Linux Foundation blockchain and digital identity projects, according to the announcement, while supporting "a rapidly growing decentralized technology landscape." To foster this broader ecosystem, LF Decentralized Trust will encompass the growing portfolio of Hyperledger projects and host new open source software, communities, standards, and specifications that are critical to the macro shift toward decentralized systems of distributed trust....

LF Decentralized Trust's expanded project and member ecosystem will be both essential to emerging tokenized assets classes and networks, as well as to modernizing the core infrastructure for finance, trade, government, healthcare, and more. LF Decentralized Trust will serve as a neutral home for the open development of a broad range of ledger, identity, security, interoperability, scale, implementation, and related technologies... LF Decentralized Trust will also include new directed funding models that will drive strategic investments by members into individual projects and project resources.

"With LF Decentralized Trust, we're expanding our commitment to open source innovation by embracing a wider array of decentralized technologies," said Jim Zemlin, Executive Director of the Linux Foundation. "This new, elevated foundation will enable the community to build a more robust ecosystem that drives forward transparency, security, and efficiency in global infrastructure."

"After eight years of advancing the development of blockchain, decentralized identity and related technologies via the Hyperledger community, the time has come to broaden our effort and impact," said Daniela Barbosa, General Manager, Blockchain and Identity, the Linux Foundation. "Ledgers and ledger technologies are but one component of the decentralized systems that will underpin a digital-first global economy. LF Decentralized Trust is where we will gather and grow an expanded community and portfolio of technologies to deliver the transparency, reliability, security and efficiency needed to successfully upgrade critical systems around the world."

The announcement includes quotes of support from numerous companies including Oracle, Siemens, Visa, Accenture, Citi, and Hitachi. Some highlights:
  • "The formation of the LF Decentralized Trust reflects the growing demand for open source resources that are critical to the management and functionality of decentralized systems." — CEO of Digital Asset
  • "The adoption of decentralized infrastructure is at an inflection point, reflecting the increasing demand from both enterprises and consumers for more secure and transparent digital transactions. As the industry leader for onchain data, blockchain abstraction, and interoperability, we're excited to see the formation of the LF Decentralized Trust and to expand our collaboration with leading financial institutions on advancing tokenized assets and the onchain economy at large." — CMO at Chainlink Labs.
  • "As a founding member of the Hyperledger Foundation, and given our unique position in the financial markets, we recognize the vast potential for open-source innovation and decentralized technologies when it comes to reducing risk, increasing resiliency and improving security. The expansion of Hyperledger Foundation into LF Decentralized Trust represents an exciting opportunity to continue expanding these groundbreaking technologies." — a managing director at DTCC

NASA

Mars Rover's SHERLOC Instrument Back Online (nasa.gov) 14

Longtime Slashdot reader thephydes writes: NASA Jet Propulsion Laboratory (JPL) has announced that the SHERLOC (Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals) instrument on the Perseverance rover has been brought back online. "Six months of running diagnostics, testing, imagery and data analysis, troubleshooting, and retesting couldn't come with a better conclusion," said SHERLOC principal investigator Kevin Hand of JPL.

JPL writes in a press release: "Mounted on the rover's robotic arm, SHERLOC uses cameras, spectrometers, and a laser to search for organics and minerals that have been altered by watery environments and may be signs of past microbial life." In addition to its black-and-white context camera, SHERLOC is assisted by WATSON, a color camera for taking close-up images of rock grains and surface textures.
The instrument stopped working this past January when it encountered an issue where the "movable lens cover designed to protect the instrument's spectrometer and one of its cameras from dust became frozen in a position that prevented SHERLOC from collecting data," says JPL.

"Analysis by the SHERLOC team pointed to the malfunction of a small motor responsible for moving the protective lens cover as well as adjusting focus for the spectrometer and the Autofocus and Context Imager (ACI) camera. By testing potential solutions on a duplicate SHERLOC instrument at JPL, the team began a long, meticulous evaluation process to see if, and how, the lens cover could be moved into the open position."
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So if an LLM could theoretically run entirely on 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPs, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
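The core trick that makes "MatMul-free" possible: when weights are quantized to the ternary set {-1, 0, +1} (BitNet-style quantization, which the paper builds on), a matrix-vector product collapses into additions and subtractions, which are far cheaper in hardware than multiplies. A minimal sketch of that reduction, with illustrative names rather than the authors' actual code:

```python
import numpy as np

def ternary_matvec(W_ternary, x):
    """Apply a ternary weight matrix (entries in {-1, 0, +1}) to a vector
    using only additions and subtractions -- no multiplications.
    Each output element is (sum of x where W == +1) - (sum of x where W == -1)."""
    out = np.empty(W_ternary.shape[0])
    for i, row in enumerate(W_ternary):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

# Sanity check against an ordinary matrix-vector product.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # random ternary weights
x = rng.standard_normal(8)
assert np.allclose(ternary_matvec(W, x), W @ x)
```

Because every accumulation is an add or subtract, an FPGA implementation can be built from adders alone, which helps explain how the reported 1.3B-parameter model fits in a 13-watt power budget.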
Piracy

South Korean ISP 'Infected' 600,000 Torrenting Subscribers With Malware (torrentfreak.com) 21

An anonymous reader quotes a report from TorrentFreak: Last week, an in-depth investigative report from JBTC revealed that Korean Internet provider KT, formerly known as Korea Telecom, distributed malware onto subscribers' computers to interfere with and block torrent traffic. File-sharing continues to be very popular in South Korea, but operates differently than in most other countries. "Webhard" services, short for Web Hard Drive, are particularly popular. These are paid BitTorrent-assisted services, which also offer dedicated web seeds, to ensure that files remain available.

Webhard services rely on the BitTorrent-enabled 'Grid System', which became so popular in Korea that ISPs started to notice it. Since these torrent transfers use a lot of bandwidth, which is very costly in the country, providers would rather not have this file-sharing activity on their networks. KT, one of South Korea's largest ISPs with over 16 million subscribers, was previously caught meddling with the Grid System. In 2020, their throttling activities resulted in a court case, where the ISP cited 'network management' costs as the prime reason to interfere. The Court eventually sided with KT, ending the case in its favor, but that wasn't the end of the matter. An investigation launched by the police at the time remains ongoing. New reports now show that the raid on KT's datacenter found that dozens of devices were used in the 'throttling process' and they were doing more than just limiting bandwidth.

When Webhard users started reporting problems four years ago, they didn't simply complain about slow downloads. In fact, the main concern was that several Grid-based Webhard services went offline or reported seemingly unexplainable errors. Since all complaining users were KT subscribers, fingers were pointed in that direction. According to an investigation by Korean news outlet JBTC, the Internet provider actively installed malware on the computers of Webhard users. This activity was widespread and affected an estimated 600,000 KT subscribers. The Gyeonggi Southern Police Agency, which carried out the raid and investigation, believes this was an organized hacking attempt. A dedicated KT team allegedly planted malware to eavesdrop on subscribers and interfere with their private file transfers. [...] Why KT allegedly distributed the malware and what it precisely intended to do is unclear. The police believe there were internal KT discussions about network-related costs, suggesting that financial reasons played a role.

Netscape

Slashdot Asks: What Do You Remember About the Web in 1994? (fastcompany.com) 171

"The Short Happy Reign of the CD-ROM" was just one article in a Fast Company series called 1994 Week. As the week rolled along they also re-visited Yahoo, Netscape, and how the U.S. Congress "forced the videogame industry to grow up."

But another article argues that it's in web pages from 1994 that "you can start to see in those weird, formative years some surprising signs of what the web would be, and what it could be." It's hard to say precisely when the tipping point was. Many point to September '93, when AOL users first flooded Usenet. But the web entered a new phase the following year. According to an MIT study, at the start of 1994, there were just 623 web servers. By year's end, it was estimated there were at least 10,000, hosting new sites including Yahoo!, the White House, the Library of Congress, Snopes, the BBC, sex.com, and something called The Amazing FishCam. The number of servers globally was doubling every two months. No one had seen growth quite like that before. According to a press release announcing the start of the World Wide Web Consortium that October, this network of pages "was widely considered to be the fastest-growing network phenomenon of all time."

As the year began, Web pages were by and large personal and intimate, made by research institutions, communities, or individuals, not companies or brands. Many pages embodied the spirit, or extended the presence, of newsgroups on Usenet, or "User's Net." (Snopes and the Internet Movie Database, which landed on the Web in 1993, began as crowd-sourced projects on Usenet.) But a number of big companies, including Microsoft, Sun, Apple, IBM, and Wells Fargo, established their first modest Web outposts in 1994, a hint of the shopping malls and content farms and slop factories and strip mines to come. 1994 also marked the start of banner ads and online transactions (a CD, pizzas), and the birth of spam and phishing...

[B]ack in '94, the salesmen and oilmen and land-grabbers and developers had barely arrived. In the calm before the storm, the Web was still weird, unruly, unpredictable, and fascinating to look at and get lost in. People around the world weren't just writing and illustrating these pages, they were coding and designing them. For the most part, the design was non-design. With a few eye-popping exceptions, formatting and layout choices were simple, haphazard, personal, and — in contrast to most of today's web — irrepressibly charming. There were no table layouts yet; cascading style sheets, though first proposed in October 1994 by Norwegian programmer Håkon Wium Lie, wouldn't arrive until December 1996... The highways and megalopolises would come later, courtesy of some of the world's biggest corporations and increasingly peopled by bots, but in 1994 the internet was still intimate, made by and for individuals... Soon, many people would add "under construction" signs to their Web pages, like a friendly request to pardon our dust. It was a reminder that someone was working on it — another indication of the craft and care that was going into this never-ending quilt of knowledge.

The article includes screenshots of Netscape in action from browser-emulating site OldWeb.Today (albeit without a 14.4 kbps modem). "Look in and think about how and why this web grew the way it did, and what could have been. Or try to imagine what life was like when the web wasn't worldwide yet, and no one knew what it really was."

Slashdot reader tedlistens calls it "a trip down memory lane," offering "some telling glimpses of the future, and some lessons for it too." The article revisits 1994 sites like Global Network Navigator, Time-Warner's Pathfinder, and Wired's online site HotWired as well as 30-year-old versions of the home pages for Wells Fargo and Microsoft.

What did they miss? Share your own memories in the comments.

What do you remember about the web in 1994?
