Open Source

T2 Linux 24.5 Released (t2sde.org) 22

ReneR writes: A major T2 Linux milestone has been released, shipping with full support for 25 CPU architectures and several C libraries, as well as restored support for Intel IA-64 Itanium. Additionally, many vintage X.org DDX drivers were fixed and tested to work again, and complete support was added for the latest KDE 6 and GNOME 46.

T2 is known for its sophisticated cross compile support and support for nearly all existing CPU architectures: Alpha, Arc, ARM(64), Avr32, HPPA(64), IA64, M68k, MIPS(64), Nios2, PowerPC(64)(le), RISCV(64), s390x, SPARC(64), SuperH, and x86(64). T2 is an increasingly popular choice for embedded systems and virtualization. It also still supports the Sony PS3, SGI, Sun, and HP workstations, as well as the latest ARM64 and RISCV64 architectures.
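
T2's own build scripting drives its cross-compile workflow, but the underlying idea is easy to illustrate with a plain GCC cross toolchain. The sketch below is generic (it assumes an installed aarch64-linux-gnu toolchain) and is not T2's own tooling:

    /* hello.c -- a minimal program to illustrate cross compilation.
     * Native build:          gcc -o hello hello.c
     * Cross build for ARM64 (assumes the aarch64-linux-gnu toolchain
     * is installed; T2 generates equivalent toolchains for each of
     * its supported target architectures):
     *                        aarch64-linux-gnu-gcc -static -o hello hello.c
     * The resulting binary runs on an ARM64 target or under qemu-aarch64.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from a cross-compiled binary!\n");
        return 0;
    }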

The release contains a total of 5,140 changesets, including approximately 5,314 package updates, 564 issues fixed, 317 packages or features added and 163 removed, and around 53 improvements. As usual, most packages are up-to-date, including Linux 6.8, GCC 13, LLVM/Clang 18, as well as the latest versions of X.org, Mesa, Firefox, Rust, KDE 6 and GNOME 46!

More information, sources, and binary distributions are freely available at T2 SDE.

Unix

OSnews Decries 'The Mass Extinction of Unix Workstations' (osnews.com) 284

Anyone remember the high-end commercial UNIX workstations from a few decades ago — like from companies like IBM, DEC, SGI, and Sun Microsystems?

Today OSnews looked back — but also explored what happens when you try to buy one today: As x86 became ever more powerful and versatile, and with the rise of Linux as a capable UNIX replacement and the adoption of the NT-based versions of Windows, the days of the UNIX workstations were numbered. A few years into the new millennium, virtually all traditional UNIX vendors had ended production of their workstations and in some cases even their associated architectures, with a lacklustre collective effort to move over to Intel's Itanium — which didn't exactly go anywhere and is now nothing more than a sour footnote in computing history.

By roughly 2010, all the UNIX workstations had disappeared.... and by now, they're all pretty much dead (save for Solaris). Users and industries moved on to x86 on the hardware side, and Linux, Windows, and in some cases, Mac OS X on the software side.... Over the past few years, I have come to learn that if you want to get into buying, using, and learning from UNIX workstations today, you'll run into various problems which can roughly be filed into three main categories: hardware availability, operating system availability, and third party software availability.

Their article details their own attempts to buy one over the years, ultimately concluding the experience "left me bitter and frustrated that so much knowledge — in the form of documentation, software, tutorials, drivers, and so on — is disappearing before our very eyes." Shortsightedness and disinterest in their own heritage by corporations, big and small, are destroying entire swaths of software, and as more years pass by, it will get ever harder to get any of these things back up and running.... As for all the third-party software — well, I'm afraid it's too late for that already. Chasing down the rightsholders is already an incredibly difficult task, and even if you do find them, they are probably not interested in helping you, and even if by some miracle they are, they most likely no longer even have the ability to generate the required licenses or release versions with the licensing ripped out. Stuff like Pro/ENGINEER and SoftWindows for UNIX are most likely gone forever....

Software is dying off at an alarming rate, and I fear there's no turning the tide of this mass extinction.

The article also wonders why companies like HPE don't just "dump some ISO files" onto an FTP server, along with patch depots and documentation. "This stuff has no commercial value, they're not losing any sales, and it will barely affect their bottom line."
Linux

T2 SDE Linux 22.6 Released - and an AI Bot Contributed More Revisions Than Humans (t2sde.org) 18

"T2 SDE is not just a regular Linux distribution," reads the announcement. "It is a flexible Open Source System Development Environment or Distribution Build Kit (others might even name it Meta Distribution). T2 allows the creation of custom distributions with state of the art technology, up-to-date packages and integrated support for cross compilation."

Slashdot reader ReneR writes: The T2 project released a major milestone update, shipping full support for 25 CPU architectures, variants, and C libraries. Support for cross compiling was further improved to also cover Rust, Ada, ObjC, Fortran, and Go!

This is also the first major release where an AI powered package update bot named 'data' contributed more changes than human contributors combined! [Data: 164, humans: 141]

T2 is known for its sophisticated cross compile support as well as supporting nearly all existing CPU architectures: alpha, arc, arm, arm64, avr32, hppa, ia64, m68k, mipsel, mips64, nios2, ppc, ppc64-32, ppc64le, riscv, riscv64, s390x, sparc, sparc64, superh, x86, x86-64 and x32 for wide use in embedded systems. The project also still supports the Sony PS3, SGI Octane and Sun workstations, as well as state-of-the-art ARM64, RISCV64, and AMD64 for regular cloud, server, or simply enthusiast workstation use.

The Gimp

Gimp Turns 25 (theregister.com) 121

New submitter thegreatbob shares a report: The General Image Manipulation Program, GIMP, has turned 25. A brief celebration post detailed how the package started life as a July 1995 Usenet thought bubble by then-student Peter Mattis, who posted the following to several newsgroups: "Suppose someone decided to write a graphical image manipulation program (akin to photoshop). Out of curiosity (and maybe something else), I have a few (2) questions: What kind of features should it have? (tools, selections, filters, etc.) What file formats should it support? (jpeg, gif, tiff, etc.)" Four months later, Mattis and fellow University of California, Berkeley student Spencer Kimball delivered what they described as software "designed to provide an intuitive graphical interface to a variety of image editing operations."

The software ran on Linux 1.2.13, Solaris 2.4, HPUX 9.05, and SGI IRIX. The answer to the file format support question turned out to be GIF, JPEG, PNG, TIFF, and XPM. The rest is history. Richard Stallman gave Mattis and Kimball permission to change the "General" in its name to "GNU", reflecting its open-source status. Today the program is released under the GNU General Public License. As the program added features such as layers, it grew more popular and eventually became a byword for offering a FOSS alternative to Photoshop even though the project pushes back against that description. The project's celebration page says volunteers did their "best to provide a sensible workflow to users by using common user interface patterns. That gave us a few questionable monikers like 'Photoshop for Linux', 'free Photoshop', and 'that ugly piece of software'. We still can wholeheartedly agree with the latter one only!"

Netscape

Silicon Valley Legends Launch 'Beyond Identity' To Eliminate All Passwords (securityweek.com) 143

SecurityWeek editor wiredmikey shares news that Jim Clark and Tom Jermoluk (past founders of Netscape, Silicon Graphics and @Home Network) "have launched a phone-resident personal certificate-based authentication and authorization solution that eliminates all passwords."

Security Week reports: The technology used is not new, being based on X.509 certificates and SSL (invented by Netscape some 25 years ago and still the bedrock of secure internet communications). What is new is the opportunity provided by the modern smartphone: biometric user access, enough memory and power, and a secure enclave to store the private keys of a certificate that never leaves the device. The biometric access ties the phone to its user, and the Beyond Identity certificate authenticates the device/user to the service provider, whether that's a bank or a corporate network...

"When this technology was created at Netscape during the beginning of the World Wide Web, it was conceived as a mechanism for websites to securely communicate, but the tools didn't yet exist to extend the chain all the way to the end user," commented Jermoluk. "Beyond Identity includes the user in the same chain of certificates bound together with the secure encrypted transport (TLS) used by millions of websites in secure communications today...."

With no passwords, the primary cause of data breaches (either to steal passwords or by using stolen passwords) is gone. It removes all friction from the access process, takes the password reset load off the help desk, and can form the basis of a zero-trust model where identity is the perimeter.

Though they're first focusing on the corporate market, their solution should be available to consumers by the end of 2020, the article reports, which speculates that the possibility of also pre-installing the solution on devices "is not out of the question."
Open Source

Tech Press Rushes To Cover New Linus Torvalds Mailing List Outburst (zdnet.com) 381

"Linux frontman Linus Torvalds thinks he's 'more self-aware' these days and is 'trying to be less forceful' after his brief absence from directing Linux kernel developers because of his abusive language on the Linux kernel mailing list," reports ZDNet.

"But true to his word, he's still not necessarily diplomatic in his communications with maintainers..." Torvalds' post-hiatus outburst was directed at Dave Chinner, an Australian programmer who maintains the Silicon Graphics (SGI)-created XFS file system supported by many Linux distros. "Bullshit, Dave," Torvalds told Chinner on a mailing list. The comment from Chinner that triggered Torvalds' rebuke was that "the page cache is still far, far slower than direct IO" -- a problem Chinner thinks will become more apparent with the arrival of the newish storage-motherboard interface specification known as Peripheral Express Interconnect Express (PCIe) version 4.0. Chinner believes page cache might be necessary to support disk-based storage, but that it has a performance cost....

"You've made that claim before, and it's been complete bullshit before too, and I've called you out on it then too," wrote Torvalds. "Why do you continue to make this obviously garbage argument?" According to Torvalds, the page cache serves its correct purpose as a cache. "The key word in the 'page cache' name is 'cache'," wrote Torvalds.... "Caches work, Dave. Anybody who thinks caches don't work is incompetent. 99 percent of all filesystem accesses are cached, and they never do any IO at all, and the page cache handles them beautifully," Torvalds wrote.

"When you say the page cache is slower than direct IO, it's because you don't even see or care about the *fast* case. You only get involved once there is actual IO to be done."

"The thing is," reports the Register, "crucially, Chinner was talking in the context of specific IO requests that just don't cache well, and noted that these inefficiencies could become more obvious as the deployment of PCIe 4.0-connected non-volatile storage memory spreads."

Here's how Chinner responded to Torvalds on the mailing list. "You've taken one single statement I made from a huge email about complexities in dealing with IO concurrency, the page cache and architectural flaws in the existing code, quoted it out of context, fabricated a completely new context and started ranting about how I know nothing about how caches or the page cache work."

The Register notes their conversation also illustrates a crucial difference from closed-source software development. "[D]ue to the open nature of the Linux kernel, Linus's rows and spats play out in public for everyone to see, and vultures like us to write up about."
ISS

SpaceX Will Deliver The First Supercomputer To The ISS (hpe.com) 98

Slashdot reader #16,185, Esther Schindler writes: "By NASA's rules, not just any computer can go into space. Their components must be radiation hardened, especially the CPUs," reports HPE Insights. "Otherwise, they tend to fail due to the effects of ionizing radiation. The customized processors undergo years of design work and then more years of testing before they are certified for spaceflight." As a result, the ISS runs the station using two sets of three Command and Control Multiplexer DeMultiplexer computers whose processors are 20MHz Intel 80386SX CPUs, right out of 1988. "The traditional way to radiation-harden a spacecraft computer is to add redundancy to its circuits or by using insulating substrates instead of the usual semiconductor wafers on chips. That's expensive and time consuming. HPE scientists believe that simply slowing down a system in adverse conditions can avoid glitches and keep the computer running."

So, assuming the August 15 SpaceX Falcon 9 rocket launch goes well, there will be a supercomputer headed into space -- using off-the-shelf hardware. Let's see if the idea pans out. "We may discover a set of parameters with which a supercomputer can successfully run for at least a year without errors," says Dr. Mark R. Fernandez, the mission's co-principal investigator for software and SGI's HPC technology officer. "Alternately, one or more components of the system will fail, in which case we will then do the typical failure analysis on Earth. That will let us learn what to change to make the systems more reliable in the future."
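
HPE has not published its exact mechanism, but the general "slow down instead of harden" idea can be sketched against standard Linux interfaces. In the sketch below the EDAC error counter and cpufreq cap are ordinary Linux sysfs files, while the threshold, polling interval, and fallback frequency are invented for illustration; this is not HPE's flight software:

    /* throttle_on_errors.c -- loose sketch: poll a correctable-error
     * counter and lower the CPU frequency cap when errors spike. The
     * sysfs paths are standard Linux EDAC/cpufreq files; the threshold,
     * interval, and 1.2 GHz cap are arbitrary illustrative choices.
     */
    #include <stdio.h>
    #include <unistd.h>

    static long read_long(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f) { fscanf(f, "%ld", &v); fclose(f); }
        return v;
    }

    int main(void)
    {
        long last = 0;

        for (;;) {
            long ce = read_long("/sys/devices/system/edac/mc/mc0/ce_count");

            if (ce - last > 10) {   /* arbitrary error-rate threshold */
                FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq", "w");
                if (f) { fprintf(f, "1200000\n"); fclose(f); }  /* value in kHz */
            }
            last = ce;
            sleep(5);               /* arbitrary polling interval */
        }
        return 0;
    }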

The article points out that the New Horizons spacecraft that just flew past Pluto has a 12MHz Mongoose-V CPU, based on the MIPS R3000 CPU. "You may remember its much faster ancestor: the chip that took you on adventures in the original Sony PlayStation, circa 1994."
Silicon Graphics

SGI Desktop Clone Gets A New Version On Fedora (maxxinteractive.com) 103

Silicon Graphics workstations used the IRIX Interactive Desktop (formerly called the Indigo Magic Desktop) for the IRIX operating system (based on UNIX System V with BSD extensions). "Anyone who remembers working on a SGI machine probably has fond memories of the Magic Desktop for IRIX," remembered one Slashdot reader in 2002. At the time a project called 5Dwm was working on a clone, and its work is still being continued by MaXX Interactive. Today Slashdot reader Daniel Mark shared the news that after "several years and many long nights," the company is announcing a new release for Fedora 25, adding that "more Linux Distributions support will be added over the coming days/weeks." They're calling it "something new and fresh in the Linux Desktop space." The MaXX Desktop is available in two versions, the free Community Edition (CE), which provides a basic SGI Desktop experience, and the commercially available Professional Edition (PE), which comes with support, CPU- and GPU-specific optimizations, and the full SGI Desktop experience... So there is no surprise here, the MaXX Desktop is a highly tuned Workstation Environment for the Linux x86_64 and ia64 platforms. Multi-core processing and NVidia GPU-specific optimizations are among the things that make the MaXX Desktop so fast, light-weight and stable.
Businesses

HPE Acquires SGI For $275 Million (venturebeat.com) 100

An anonymous reader writes: Hewlett Packard Enterprise has announced today that it has acquired SGI for $275 million in cash and debt. VentureBeat provides some backstory on the company that makes servers, storage, and software for high-end computing: "SGI (originally known as Silicon Graphics) was cofounded in 1981 by Jim Clark, who later cofounded Netscape with Marc Andreessen. It filed for Chapter 11 bankruptcy in 2009 after being de-listed from the New York Stock Exchange. In 2009 it was acquired by Rackable Systems, which later adopted the SGI branding. SGI's former campus in Mountain View, California, is now the site of the Googleplex. SGI, which is now based in Milpitas, California, brought in $533 million in revenue in its 2016 fiscal year and has 1,100 employees, according to the statement. HPE thinks buying SGI will be neutral in terms of its financial impact in the year after the deal is closed, which should happen in the first quarter of HPE's 2017 fiscal year, and later a catalyst for growth." HP split into two separate companies last year, betting that the smaller parts will be nimbler and more able to reverse four years of declining sales.
Programming

Interviews: Ask Alexander Stepanov and Daniel E. Rose a Question 80

An anonymous reader writes "Alexander Stepanov studied mathematics at Moscow State University and has been programming since 1972. His work on foundations of programming has been supported by GE, Brooklyn Polytechnic, AT&T, HP, SGI, and, since 2002, Adobe. In 1995 he received the Dr. Dobb's Journal Excellence in Programming Award for the design of the C++ Standard Template Library. Currently, he is the Senior Principal Engineer at A9.com. Daniel E. Rose is a programmer and research scientist who has held management positions at Apple, AltaVista, Xigo, Yahoo, and is the Chief Scientist for Search at A9.com. His research focuses on all aspects of search technology, ranging from low-level algorithms for index compression to human-computer interaction issues in web search. Rose led the team at Apple that created desktop search for the Macintosh. In addition to working together, the pair have recently written a book, From Mathematics to Generic Programming. Alexander and Daniel have agreed to answer any questions you may have about their book, their work, or programming in general. As usual, ask as many as you'd like, but please, one per post."
Intel

Intel and SGI Test Full-Immersion Cooling For Servers 102

itwbennett (1594911) writes "Intel and SGI have built a proof-of-concept supercomputer that's kept cool using a fluid developed by 3M called Novec that is already used in fire suppression systems. The technology, which could replace fans and eliminate the need to use tons of municipal water to cool data centers, has the potential to slash data-center energy bills by more than 90 percent, said Michael Patterson, senior power and thermal architect at Intel. But there are several challenges, including the need to design new motherboards and servers."
Supercomputing

Scientists Using Supercomputers To Puzzle Out Dinosaur Movement 39

Nerval's Lobster writes "Scientists at the University of Manchester in England figured out how the largest animal ever to walk the Earth, the 80-ton Argentinosaurus, actually walked. Researchers led by Bill Sellers, Rudolfo Coria and Lee Margetts at the N8 High Performance Computing facility in northern England used a 320 gigaflop/second SGI High Performance Computing Cluster supercomputer called Polaris to model the skeleton and movements of Argentinosaurus. The animal was able to reach a top speed of about 5 mph, with 'a slow, steady gait,' according to the team (PDF). Extrapolating from a few feet of bone, paleontologists were able to estimate the beast weighed between 80 and 100 tons and grew up to 115 feet in length. Polaris not only allowed the team to model the missing parts of the dinosaur and make them move, it did so quickly enough to beat the deadline for PLOS ONE Special Collection on Sauropods, a special edition of the site focusing on new research on sauropods that 'is likely to be the "de facto" international reference for Sauropods for decades to come,' according to a statement from the N8 HPC center. The really exciting thing, according to Coria, was how well Polaris was able to fill in the gaps left by the fossil records. 'It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult,' he said, despite previous research that established some rules of weight distribution, movement and the limits of dinosaurs' biological strength."
Games

Valve's SteamBox Gets a Name and an Early Demo at CES 328

xynopsis writes "Looks like the final version of the Linux based Steam Gaming Console has been made public at CES. The result of combined efforts of small-form-factor maker Xi3 and Valve, the gaming box named 'Piston' is a potential game changer in transforming the Linux desktop and gaming market. The pretty device looks like a shrunken Tezro from Silicon Graphics, from when SGI used to be cool." Looks like Gabe Newell wasn't kidding.
Hardware

Managing Servers In the Frigid Cold 122

1sockchuck writes "Some data centers are kept as chilly as meat lockers. But IT operations in colder regions face challenges in managing conditions — hence Facebook's decision to use environmentally controlled trucks to make deliveries to its new data center in Sweden, which is located on the edge of the Arctic Circle. The problem is the temperature change in transporting gear. 'A rapid rate of change (in temperature) can create condensation on the electronics, and that's no good,' said Facebook's Frank Frankovsky."
Space

Hawking Is First User of "Big Brain" Supercomputer 93

miller60 writes "Calling your product the 'Big Brain Computer' is a heady claim. It helps if you have Dr. Stephen Hawking say that the product can help unlock the secrets of the universe. SGI says its UV2 can scale to 4,096 cores and 64 terabytes of memory with a peak I/O rate of four terabytes per second, and runs off-the-shelf Linux software. Hawking says the UV2 'will ensure that UK researchers remain at the forefront of fundamental and observational cosmology.'"
Australia

New Supercomputer Boosts Aussie SKA Telescope Bid 32

angry tapir writes "Australian academic supercomputing consortium iVEC has acquired another major supercomputer, Fornax, to be based at the University of Western Australia, to further the country's ability to conduct data-intensive research. The SGI GPU-based system, also known as iVEC@UWA, is made up of 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU, 48 GB RAM and 7TB of storage. All up, the system has 1152 cores, 96 GPUs and an additional dedicated 500TB fabric attached storage-based global filesystem. The system is a boost to the Australian-NZ bid to host the Square Kilometer Array radio telescope."
Graphics

Adobe Goes To Flash 10.1, Forgoes Security Fix For 10 320

An anonymous reader writes "The recent critical zero-day security flaw in Flash 10 may have fast-tracked the release of Flash 10.1 today. Adobe 10.1 boasts the much anticipated H.264 hardware acceleration. Except for Linux and Mac OS (PDF): 'Flash Player 10.1, H.264 hardware acceleration is not supported under Linux and Mac OS. Linux currently lacks a developed standard API that supports H.264 hardware video decoding, and Mac OS X does not expose access to the required APIs.' Your humble anonymous reporter, who is using Fedora Linux with a ATI IGP 340M, is very pleased that the developers of the OSS drivers have provided hardware acceleration for my GPU ('glxinfo : direct rendering: Yes,' 'OpenGL renderer string: Mesa DRI R100 (RS200 4337) 20090101 NO-TCL DRI2'), but even if Adobe did provide hardware acceleration for H.264 on Linux, they wouldn't provide it for me because they disable it for GPUs with SGI in the Client vendor string. Adobe 10.1, with all its goodness, now gives me around 95% CPU usage as opposed to about 75% with the previous release. Good times. I anticipate my Windows friends will have a much better experience."
Silicon Graphics

SGI Rolls Out "Personal Supercomputers" 303

CWmike writes "They aren't selling personal supercomputers at Best Buy just yet. But that day probably isn't too far off, as the costs continue to fall and supercomputers become easier to use. Silicon Graphics International on Monday released its first so-called personal supercomputer. The new Octane III system is priced from $7,995 with one Xeon 5500 processor. The system can be expanded to an 80-core system with a capacity of up to 960GB of memory. This new supercomputer's peak performance of about 726 GFLOPS won't put it on the Top 500 supercomputer list, but that's not the point of the machine, SGI says. A key feature instead is the system's ease of use."
Classic Games (Games)

The Ethics of Selling GPLed Software For the iPhone 782

SeanCier writes "We're a small (two-person) iPhone app developer whose first game has recently been released in the App store. In the process, we've inadvertently stepped in it, bringing up a question of the GPL and free software ethics that I'm hoping the Slashdot community can help us clear up, one way or the other. XPilot, a unique and groundbreaking UNIX-based game from the early/mid nineties, was a classic in its day, but was forgotten and has been dead for years, both in terms of use and development. My college roommate and I were addicted to it at the time, even running game servers and publishing custom maps. As it's fully open source (GPLv2), and the iPhone has well over twice the graphics power of the SGI workstations we'd used in college, we decided it was a moral imperative to port it to our cellphones. In the process, we hoped, we could breathe life back into this forgotten classic (not to mention turning a years-old joke into reality). We did so, and the result was more playable than we'd hoped, despite the physical limitations of the phone. We priced it at $2.99 on the App store (we don't expect it to become the Next Big Thing, but hoped to recoup our costs — such as server charges and Apple's annual $99 developer fee), released the source on our web page, then enthusiastically tracked down every member of the original community we could find to let them know of the hoped-for renaissance. Which is where things got muddy. After it hit the App store, one of the original developers of XPilot told us he feels adamantly that we're betraying the spirit of the GPL by charging for it." Read on for the rest of Sean's question.
Space

Aussie Scientists Build a Cluster To Map the Sky 58

Tri writes "Scientists at the Siding Spring Observatory have built a new system to map and record over 1 billion objects in the southern hemisphere sky. They collect 700 GB of data every night, which they then crunch down using some perl scripts and make available to other scientists through a web interface backed on Postgresql. 'Unsurprisingly, the Southern Sky Survey will result in a large volume of raw data — about 470 terabytes ... when complete. ... the bulk of the analysis of the SkyMapper data will be done on a brand new, next generation Sun supercomputer kitted out with 12,000 cores. Due to be fully online by December, the supercomputer will offer a tenfold increase in performance over the facility's current set up of two SGI machines, each with just under 3500 cores in total.'"
