IT

How Not To Hire a North Korean IT Spy (csoonline.com) 17

CSO Online reports that North Korea "is actively infiltrating Western companies using skilled IT workers who use fake identities to pose as remote workers with foreign companies, typically but not exclusively in the U.S."

Slashdot reader snydeq shares their report, which urges information security officers "to carry out tighter vetting of new hires to ward off potential 'moles' — who are increasingly finding their way onto company payrolls and into their IT systems." The schemes are part of illicit revenue generation efforts by the North Korean regime, which faces financial sanctions over its nuclear weapons program, as well as a component of the country's cyberespionage activities.

The U.S. Treasury Department first warned about the tactic in 2022. Thousands of highly skilled IT workers are taking advantage of the demand for software developers to obtain freelance contracts from clients around the world, including in North America, Europe, and East Asia. "Although DPRK [North Korean] IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK's malicious cyber intrusions," the Treasury Department warned... North Korean IT workers present themselves as South Korean, Chinese, Japanese, or Eastern European, and as U.S.-based teleworkers. In some cases, DPRK IT workers further obfuscate their identities by creating arrangements with third-party subcontractors.

Christina Chapman, a resident of Arizona, faces fraud charges over an elaborate scheme that allegedly allowed North Korean IT workers to pose as U.S. citizens and residents, using stolen identities and abusing U.S. payment platforms and online job site accounts to secure jobs at more than 300 U.S. companies, including a major TV network, a car manufacturer, a Silicon Valley technology firm, and an aerospace company... According to a U.S. Department of Justice indictment, unsealed in May 2024, Chapman ran a "laptop farm," hosting the overseas IT workers' computers inside her home so it appeared that the computers were located in the U.S. The 49-year-old received and forged payroll checks, and she laundered direct debit payments for salaries through bank accounts under her control. Many of the overseas workers in her cell were from North Korea, according to prosecutors. An estimated $6.8 million was paid for the work, much of which was falsely reported to tax authorities under the names of 60 real U.S. citizens whose identities were either stolen or borrowed...

Ukrainian national Oleksandr Didenko, 27, of Kyiv, was separately charged over a years-long scheme to create fake accounts at U.S. IT job search platforms and with U.S.-based money service transmitters. "Didenko sold the accounts to overseas IT workers, some of whom he believed were North Korean, and the overseas IT workers used the false identities to apply for jobs with unsuspecting companies," according to the U.S. Department of Justice. Didenko, who was arrested in Poland in May, faces U.S. extradition proceedings...

How this type of malfeasance plays out from the perspective of a targeted firm was revealed by security awareness vendor KnowBe4's candid admission in July that it unknowingly hired a North Korean IT spy... A growing and substantial body of evidence suggests KnowBe4 is but one of many organizations targeted by illicit North Korean IT workers. Last November security vendor Palo Alto Networks reported that North Korean threat actors are actively seeking employment with organizations based in the U.S. and other parts of the world...

Mandiant, the Google-owned threat intel firm, reported last year that "thousands of highly skilled IT workers from North Korea" are hunting for work. More recently, CrowdStrike reported that a North Korean group it dubbed "Famous Chollima" infiltrated more than 100 companies with imposter IT pros.

The article notes the infiltrators use chatbots to tailor the perfect resume "and further leverage AI-created deepfakes to pose as real people." And the article includes this quote from a former intelligence analyst for the U.S. Air Force turned cybersecurity strategist at Sysdig. "In some cases, they may try to get jobs at tech companies in order to steal their intellectual property before using it to create their own knock-off technologies."

The article closes with its suggested "countermeasures," including live video-chats with prospective remote-work applicants — and confirming an applicant's home address.
Data Storage

Asia's Richest Man Says He Will Give Everyone 100 GB of Free Cloud Storage (techcrunch.com) 43

Mukesh Ambani, Asia's richest man and the chairman of Reliance Industries, said this week that his telecom firm will offer users 100 GB of free cloud storage. Oil-to-retail giant Reliance, which is India's most valuable firm by market cap, has upended the telecom market in India by offering free voice calls and dirt-cheap internet access.

Jio, Reliance's telecom subsidiary, serves 490 million subscribers, more than any rival in India. Jio offers subscribers at least 2GB of data per day for 14 days for a total of about $2.30. TechCrunch adds: Reliance plans to offer Jio users up to 100 GB of free cloud storage through its Jio AI Cloud service, set to launch around Diwali in October, Ambani said.
Microsoft

Microsoft Partners Beware: Action Pack To Be Retired in 2025 (theregister.com) 24

Microsoft is to discontinue the Microsoft Action Pack and Microsoft Learning Pack on January 21, 2025, sending partners off to potentially pricier and cloudier options. From a report: The Action Pack and Learning Pack, alongside Silver or Gold Membership, gave Microsoft partners access to many on-premises licenses for the company's software. The company's recommended replacements, Partner Success Core Benefits and Partner Success Expanded, abandon those benefits in favor of cloud services. According to Microsoft, it is "evolving the partner benefits offerings to provide partners with the tools and support they need to continue to lead the way in the shifting tech landscape."

Or cutting back on some things in favor of others. After all, it would never do to have all that software running on-premises when Microsoft has a perfectly good cloud ready to take on partner workloads. A Register reader affected by the change told us: "The first impact for us will be cost. We'll need to go from Action Pack ($515 + VAT) to Partner Success Core ($970 + VAT). Secondly, the benefits appear to have moved all online. That's not a problem for day-to-day operations but it will make it harder when trying to recreate a customer environment with legacy software."

Businesses

Internal AWS Sales Guidelines Spread Doubt About OpenAI's Capabilities (businessinsider.com) 14

An anonymous reader shares a report: OpenAI lacks advanced security and customer support. It's just a research company, not an established cloud provider. The ChatGPT-maker is not focused enough on corporate customers. These are just some of the talking points Amazon Web Services' salespeople are told to follow when dealing with customers using, or close to buying, OpenAI's products, according to internal sales guidelines obtained by Business Insider. Other talking points from the documents include OpenAI's lack of access to third-party AI models and weak enterprise-level contracts. AWS salespeople should dispel the hype around AI chatbots like ChatGPT, and steer the conversation toward AWS's strength of running the cloud infrastructure behind popular AI services, the guidelines added.

[...] The effort to criticize OpenAI is also unusual for Amazon, which often says it's so customer-obsessed that it pays little attention to competitors. This is the latest sign that suggests Amazon knows it has work to do to catch up in the AI race. OpenAI, Microsoft, and Google have taken an early lead and could become the main platforms where developers build new AI products and tools. Though Amazon created a new AGI team last year, the company's existing AI models are considered less powerful than those made by its biggest competitors. Instead, Amazon has prioritized selling AI tools like Bedrock, which gives customers access to third-party AI models. AWS also offers cloud access to in-house AI chips that compete with Nvidia GPUs, with mixed results so far.

Crime

ARRL Pays $1 Million Ransom To Decrypt Their Systems After Attack (bleepingcomputer.com) 95

The nonprofit American Radio Relay League — founded in 1914 — has approximately 161,000 members, according to Wikipedia (with over 7,000 members outside the U.S.).

But sometime in early May its systems network was compromised, "by threat actors using information they had purchased on the dark web," the nonprofit announced this week. The attackers accessed the ARRL's on-site systems — as well as most of its cloud-based systems — using "a wide variety of payloads affecting everything from desktops and laptops to Windows-based and Linux-based servers." Despite the wide variety of target configurations, the threat actors seemed to have a payload that would host and execute encryption or deletion of network-based IT assets, as well as launch demands for a ransom payment, for every system... The FBI categorized the attack as "unique" as they had not seen this level of sophistication among the many other attacks they have experience with.

Within 3 hours a crisis management team had been constructed of ARRL management, an outside vendor with extensive resources and experience in the ransomware recovery space, attorneys experienced with managing the legal aspects of the attack including interfacing with the authorities, and our insurance carrier. The authorities were contacted immediately as was the ARRL President... [R]ansom demands were dramatically weakened by the fact that they did not have access to any compromising data. It was also clear that they believed ARRL had extensive insurance coverage that would cover a multi-million-dollar ransom payment. After days of tense negotiation and brinkmanship, ARRL agreed to pay a $1 million ransom. That payment, along with the cost of restoration, has been largely covered by our insurance policy...

Today, most systems have been restored or are waiting for interfaces to come back online to interconnect them. While we have been in restoration mode, we have also been working to simplify the infrastructure to the extent possible. We anticipate that it may take another month or two to complete restoration under the new infrastructure guidelines and new standards.

ARRL called the attack "extensive", "sophisticated", "highly coordinated" and "an act of organized crime". And tlhIngan (Slashdot reader #30335) shared this detail from BleepingComputer.

"While the organization has not yet linked the attack to a specific ransomware operation, sources told BleepingComputer that the Embargo ransomware gang was behind the breach."
Programming

Amazon CEO: AI-Assisted Code Transformation Saved Us 4,500 Years of Developer Work (x.com) 130

Long-time Slashdot reader theodp shared this anecdote about Amazon's GenAI assistant for software development, Amazon Q: On Thursday, Amazon CEO Andy Jassy took to Twitter to boast that using Amazon Q to do Java upgrades has already saved Amazon from having to pay for 4,500 developer-years of work. ("Yes, that number is crazy but, real," writes Jassy). And Jassy says it also provided Amazon with an additional $260M in annualized efficiency gains from enhanced security and reduced infrastructure costs.

"Our developers shipped 79% of the auto-generated code reviews without any additional changes," Jassy explained. "This is a great example of how large-scale enterprises can gain significant efficiencies in foundational software hygiene work by leveraging Amazon Q."

Jassy — who FORTUNE reported had no formal training in computer science — also touted Amazon Q's Java upgrade prowess in his Letter to Shareholders earlier this year, as has Amazon in its recent SEC filings ("today, developers can save months using Q to move from older versions of Java to newer, more secure and capable ones; in the near future, Q will help developers transform their .net code as well"). Earlier this week, Business Insider reported on a leaked recording of a fireside chat in which AWS CEO Matt Garman predicted that the prevalence of AI will bring a paradigm shift for coding as a career in the foreseeable future. According to Garman, "If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding."

The Military

Workers at Google DeepMind Push Company to Drop Military Contracts (time.com) 143

Nearly 200 Google DeepMind workers signed a letter urging Google to cease its military contracts, expressing concerns that the AI technology they develop is being used in warfare, which they believe violates Google's own AI ethics principles. "The letter is a sign of a growing dispute within Google between at least some workers in its AI division -- which has pledged to never work on military technology -- and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States," reports TIME Magazine. "The signatures represent some 5% of DeepMind's overall headcount -- a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand." From the report: The DeepMind letter, dated May 16 of this year, begins by stating that workers are "concerned by recent reports of Google's contracts with military organizations." It does not refer to any specific militaries by name -- saying "we emphasize that this letter is not about the geopolitics of any particular conflict." But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

"Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles," the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause "overall harm," contribute to weapons or other technologies whose "principal purpose or implementation" is to cause injury, or build technologies "whose purpose contravenes widely accepted principles of international law and human rights.") The letter says its signatories are concerned with "ensuring that Google's AI Principles are upheld," and adds: "We believe [DeepMind's] leadership shares our concerns." [...]

The letter calls on DeepMind's leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter's circulation, Google has done none of those things, according to four people with knowledge of the matter. "We have received no meaningful response from leadership," one said, "and we are growing increasingly frustrated."

Space

The Wow! Signal Deciphered. It Was Hydrogen All Along. (universetoday.com) 32

The Wow! signal, detected on August 15, 1977, was an intense radio transmission that appeared artificial and raised the possibility of extraterrestrial contact. However, recent research suggests it may have been caused by a natural astrophysical event involving a magnetar flare striking a hydrogen cloud. Universe Today reports: New research shows that the Wow! Signal has an entirely natural explanation. The research is "Arecibo Wow! I: An Astrophysical Explanation for the Wow! Signal." The lead author is Abel Mendez from the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. It's available at the pre-print server arxiv.org. Arecibo Wow! is a new effort based on an archival study of data from the now-defunct Arecibo Radio Telescope from 2017 to 2020. The observations from Arecibo are similar to those from Big Ear but "are more sensitive, have better temporal resolution, and include polarization measurements," according to the authors. "Our latest observations, made between February and May 2020, have revealed similar narrowband signals near the hydrogen line, though less intense than the original Wow! Signal," said Mendez.

Arecibo detected signals similar to the Wow! signal but with some differences. They're far less intense and come from multiple locations. The authors say these signals are easily explained by an astrophysical phenomenon and that the original Wow! signal is, too. "We hypothesize that the Wow! Signal was caused by sudden brightening from stimulated emission of the hydrogen line due to a strong transient radiation source, such as a magnetar flare or a soft gamma repeater (SGR)," the researchers write. Those events are rare and rely on precise conditions and alignments. They can cause clouds of hydrogen to brighten considerably for seconds or even minutes.

The researchers say that what Big Ear saw in 1977 was the transient brightening of one of several HI (neutral hydrogen) clouds in the telescope's line of sight. The 1977 signal was similar in many respects to what Arecibo saw. "The only difference between the signals observed in Arecibo and the Wow! Signal is their brightness. It is precisely the similarity between these spectra that suggests a mechanism for the origin of the mysterious signal," the authors write. These signals are rarely detected because the required spatial alignment between source, cloud, and observer is itself rare. The researchers were able to identify the clouds responsible for the signal but not the source. Their results suggest that the source is much more distant than the clouds that produce the hydrogen signal. "Given the detectability of the clouds as demonstrated in our data, this insight could enable precise location of the signal's origin and permit continuous monitoring for subsequent events," the researchers explain.

Microsoft

Microsoft Engineers' Pay Data Leaked, Reveals Compensation Details (businessinsider.com) 73

Software engineers at Microsoft earn an average total compensation ranging from $148,436 to $1,230,000 annually, depending on their level, according to a leaked spreadsheet viewed by Business Insider. The data, voluntarily shared by hundreds of U.S.-based Microsoft employees, includes information on salaries, performance-based raises, promotions, and bonuses. The highest-paid engineers work in Microsoft's newly formed AI organization, with average total compensation of $377,611. Engineers in Cloud and AI, Azure, and Experiences and Devices units earn between $242,723 and $255,126 on average.
IT

110K Domains Targeted in 'Sophisticated' AWS Cloud Extortion Campaign (theregister.com) 33

A sophisticated extortion campaign has targeted 110,000 domains by exploiting misconfigured AWS environment files, security firm Cyble reports. The attackers scanned for exposed .env files containing cloud access keys and other sensitive data. Organizations that failed to secure their AWS environments found their S3-stored data replaced with ransom notes.

The attackers used a series of API calls to verify data, enumerate IAM users, and locate S3 buckets. Though initial access lacked admin privileges, they created new IAM roles to escalate permissions. Cyble researchers noted the attackers' use of AWS Lambda functions for automated scanning operations.
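The call sequence Cyble describes maps onto standard AWS APIs. As a rough illustration, here is a minimal Python/boto3 sketch of how a defender might audit what an exposed key pair can reach — essentially the same reconnaissance the attackers ran. The key values are placeholders, and the exact calls used in the campaign are assumptions, not details from the report:

```python
# Minimal sketch (not Cyble's tooling): audit what a credential pair
# recovered from an exposed .env file can reach. Assumes boto3.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",    # placeholder: value from the .env file
    aws_secret_access_key="...",    # placeholder, not a real secret
)

# 1. Verify the key works and see which identity it maps to.
print(session.client("sts").get_caller_identity())

# 2. Enumerate IAM users (fails without iam:ListUsers permission).
try:
    for user in session.client("iam").list_users()["Users"]:
        print("IAM user:", user["UserName"])
except Exception as exc:
    print("IAM enumeration denied:", exc)

# 3. Locate S3 buckets reachable with this key.
for bucket in session.client("s3").list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])
```

If a key you control answers these calls from outside your network, rotate it; keeping .env files out of web roots and attaching least-privilege policies to access keys blunts both the reconnaissance and the privilege escalation described above.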
Music

Sonos CEO Says the Old App Can't Be Rereleased (theverge.com) 106

The old Sonos app won't be making a return to replace the buggy new version. According to Sonos CEO Patrick Spence, rereleasing the old app would make things worse now that updated software has already been sent out to the company's speakers and cloud infrastructure. The Verge reports: In a Reddit AMA response posted Tuesday, Sonos CEO Spence says that he was hopeful "until very recently" that the company could rerelease the app, confirming a report from The Verge that the company was considering doing so. [...] Since the new app was released on May 7th, Spence has issued a formal apology and announced in August that the company would be delaying the launch of two products "until our app experience meets the level of quality that we, our customers, and our partners expect from Sonos." "The trick of course is that Sonos is not just the mobile app, but software that runs on your speakers and in the cloud too," writes Spence in the Reddit AMA. "In the months since the new mobile app launched we've been updating the software that runs on our speakers and in the cloud to the point where today S2 is less reliable & less stable than what you remember. After doing extensive testing we've reluctantly concluded that re-releasing S2 would make the problems worse, not better. I'm sure this is disappointing. It was disappointing to me."
Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com) 8

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Intel

Intel Discontinues High-Speed, Open-Source H.265/HEVC Encoder Project (phoronix.com) 37

Phoronix's Michael Larabel reports: As part of Intel's Scalable Video Technology (SVT) initiative, the company had been developing SVT-HEVC as a BSD-licensed, high-performance H.265/HEVC video encoder optimized for Xeon Scalable and Xeon D processors. But recently Intel changed course and the project has been officially discontinued. [...] The SVT-AV1 project was handed off to the Alliance for Open Media (AOMedia) a while ago, with one of its lead maintainers joining Meta from Intel two years ago. SVT-AV1 continues to thrive outside Intel, but SVT-HEVC and SVT-VP9 remained Intel open-source projects, and now, at least officially, SVT-HEVC has ended.

SVT-HEVC hadn't seen a new release since 2021 and there are already several great open-source H.265 encoders out there like x265 and Kvazaar. But as of a few weeks ago, SVT-HEVC upstream is now discontinued. The GitHub repository was put into a read-only state [with a discontinuation notice]. Meanwhile SVT-VP9 doesn't have any discontinuation notice at this time. The SVT-VP9 GitHub repository remains under Intel's Open Visual Cloud account although it hasn't seen any new commits in four months and the last tagged release was back in 2020.

Businesses

North America Added a Whole Silicon Valley's Worth of Data Center Inventory This Year (sherwood.news) 34

North America's eight primary data center markets added 515 megawatts (MW) of new supply in the first half of 2024 -- the equivalent of Silicon Valley's entire existing inventory -- according to a new report from real-estate services firm CBRE. From a report: All of Silicon Valley has 459 MW of data center supply, while those main markets have a total of 5,689 MW. That's up 10% from a year ago and about double what it was five years ago. Data center space under construction is up nearly 70% from a year ago and is currently at a record high. But the vast majority of that is already leased, and vacancy rates have shrunk to a record low of 2.8%. In other words, developers are building an insane amount of data center capacity, but it's still not enough to meet the growing demands of cloud computing and artificial intelligence providers.
Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

Security Week brings news about CI/CD workflows using GitHub Actions in build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise, Palo Alto Networks warns." [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.

The blog post also notes protection and mitigation features from Palo Alto Networks....
Data Storage

Ask Slashdot: What Network-Attached Storage Setup Do You Use? 135

"I've been somewhat okay about backing up our home data," writes long-time Slashdot reader 93 Escort Wagon.

But they could use some good advice: We've got a couple separate disks available as local backup storage, and my own data also gets occasionally copied to encrypted storage at BackBlaze. My daughter has her own "cloud" backups, which seem to be a manual push every once in a while of random files/folders she thinks are important. Including our media library, between my stuff, my daughter's, and my wife's... we're probably talking in the neighborhood of 10 TB for everything at present. The whole setup is obviously cobbled together, and the process is very manual. Plus it's annoying since I'm handling Mac, Linux, and Windows backups completely differently (and sub-optimally). Also, unsurprisingly, the amount of data we possess does seem to be increasing with time.

I've been considering biting the bullet and buying an NAS [network-attached storage device], and redesigning the entire process — both local and remote. I'm familiar with Synology and DSM from work, and the DS1522+ looks appealing. I've also come across a lot of recommendations for QNAP's devices, though. I'm comfortable tackling this on my own, but I'd like to throw this out to the Slashdot community.

What NAS do you like for home use? And what disks did you put in it? What have your experiences been?

Long-time Slashdot reader AmiMoJo asks "Have you considered just building one?" while suggesting the cheapest option is low-powered Chinese motherboards with soldered-in CPUs. And in the comments on the original submission, other Slashdot readers shared their examples:
  • destined2fail1990 used an AMD Threadripper to build their own NAS with 10Gbps network connectivity.
  • DesertNomad is using "an ancient D-Link" to connect two Synology DS220 DiskStations.
  • Darth Technoid attached six Seagate drives to two Macbooks. "Basically, I found a way to make my older Mac useful by simply leaving it on all the time, with the external drives attached."

But what's your suggestion? Share your own thoughts and experiences. What NAS do you like for home use? What disks would you put in it?

And what have your experiences been?

AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...
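To make those vulnerability classes concrete, here is a hand-written sketch (our illustration, not actual Autofix output) of the canonical SQL injection remediation such a tool proposes in a pull request: replacing string-built SQL with a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL text,
    # so input like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Fixed: the driver binds `name` as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```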

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

AI

AI PCs Made Up 14% of Quarterly PC Shipments (reuters.com) 73

AI PCs accounted for 14% of all PCs shipped in the second quarter with Apple leading the way, research firm Canalys said on Tuesday, as added AI capabilities help reinvigorate demand. From a report: PC providers and chipmakers have pinned high hopes on devices that can perform AI tasks directly on the system, bypassing the cloud, as the industry slowly emerges from its worst slump in years. These devices typically feature neural processing units dedicated to performing AI tasks.

Apple commands about 60% of the AI PC market, the research firm said in the report, pointing to its Mac portfolio incorporating M-series chips with a neural engine. Within Microsoft's Windows, AI PC shipments grew 127% sequentially in the quarter. The tech giant debuted its "Copilot+" AI PCs in May, with Qualcomm's Snapdragon PC chips based on Arm Holdings' architecture.

Space

Milky Way May Escape Fated Collision With Andromeda Galaxy (science.org) 33

sciencehabit shares a report from Science.org: For years, astronomers thought it was the Milky Way's destiny to collide with its near neighbor the Andromeda galaxy a few billion years from now. But a new simulation finds a 50% chance the impending crunch will end up a near-miss, at least for the next 10 billion years. Astronomers have known since 1912 that Andromeda is heading pretty much straight at the Milky Way, at a speed of 110 kilometers per second. Such galaxy mergers, which can be seen in progress elsewhere in the universe, are spectacularly messy affairs. Although most stars survive unscathed, the galaxies' spiral structures are obliterated, sending streams of stars spinning off into space. After billions of years, the merged galaxies typically settle into a single elliptical galaxy: a giant featureless blob of stars. A study from 2008 suggested a Milky Way-Andromeda merger was inevitable within the next 5 billion years, and that in the process the Sun and Earth would get gravitationally grabbed by Andromeda for a time before ending up in the distant outer suburbs of the resulting elliptical, which the researchers dub "Milkomeda."

In the new simulation, researchers made use of the most recent and best estimates of the motion and mass of the four largest galaxies in the Local Group. They then plugged those into simulations developed by the Institute for Computational Cosmology at Durham University. First, they ran the simulation including just the Milky Way and Andromeda and found that they merged in slightly less than half of the cases -- lower odds than other recent estimates. When they included the effect of the Triangulum galaxy, the Local Group's third largest, the merger probability increased to about two-thirds. But with the inclusion of the Large Magellanic Cloud, a satellite galaxy of the Milky Way that is the fourth largest in the Local Group, those chances dropped back down to a coin flip. And if the cosmic smashup does happen, it won't be for about 8 billion years. "As it stands, proclamations of the impending demise of our Galaxy appear greatly exaggerated," the researchers write. Meanwhile, if the accelerating expansion of the universe continues unabated, all other galaxies will disappear beyond our cosmic event horizon, leaving Milkomeda as the sole occupant of the visible universe.
The study is available as a preprint on arXiv.
Earth

Excess Memes and 'Reply All' Emails Are Bad For Climate, Researcher Warns (theguardian.com) 120

An anonymous reader quotes a report from The Guardian: When "I can has cheezburger?" became one of the first internet memes to blow our minds, it's unlikely that anyone worried about how much energy it would use up. But research has now found that the vast majority of data stored in the cloud is "dark data", meaning it is used once then never visited again. That means that all the memes and jokes and films that we love to share with friends and family -- from "All your base are belong to us", through Ryan Gosling saying "Hey Girl", to Tim Walz with a piglet -- are out there somewhere, sitting in a datacenter, using up energy. By 2030, the National Grid anticipates that datacenters will account for just under 6% of the UK's total electricity consumption, so tackling junk data is an important part of tackling the climate crisis.

Ian Hodgkinson, a professor of strategy at Loughborough University has been studying the climate impact of dark data and how it can be reduced. "I really started a couple of years ago, it was about trying to understand the negative environmental impact that digital data might have," he said. "And at the top of it might be quite an easy question to answer, but it turns out actually, it's a whole lot more complex. But absolutely, data does have a negative environmental impact." He discovered that 68% of data used by companies is never used again, and estimates that personal data tells the same story. [...] One funny meme isn't going to destroy the planet, of course, but the millions stored, unused, in people's camera rolls does have an impact, he explained: "The one picture isn't going to make a drastic impact. But of course, if you maybe go into your own phone and you look at all the legacy pictures that you have, cumulatively, that creates quite a big impression in terms of energy consumption."
Since we're paying to store data in the cloud, cloud operators and tech companies have a financial incentive to keep people from deleting junk data, says Hodgkinson. He recommends people send fewer pointless emails and avoid the "dreaded 'reply all' button."

"One [figure] that often does the rounds is that for every standard email, that equates to about 4g of carbon. If we then think about the amount of what we mainly call 'legacy data' that we hold, so if we think about all the digital photos that we have, for instance, there will be a cumulative impact."
