Education

Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns? (cnn.com) 102

A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.")

And the same professor then requires their class of 15 students to run their draft through ChatGPT to see where they can make improvements, according to the article: Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in Fall 2023. Fewer faculty members used AI, but the percentage grew to 22% of faculty members in the fall of 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products.

But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.")

But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..."

Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."
AI

In America, A Complex Patchwork of State AI Regulations Has Already Arrived (cio.com) 13

While the European Parliament passed a wide-ranging "AI Act" in March, "Leaders from Microsoft, Google, and OpenAI have all called for AI regulations in the U.S.," writes CIO magazine. Even the Chamber of Commerce, "often opposed to business regulation, has called on Congress to protect human rights and national security as AI use expands," according to the article, while the White House has released a blueprint for an AI bill of rights.

But while the U.S. Congress hasn't passed AI legislation, 16 different U.S. states have, "and state legislatures have already introduced more than 400 AI bills across the U.S. this year, six times the number introduced in 2023." Many of the bills are targeted both at the developers of AI technologies and the organizations putting AI tools to use, says Goli Mahdavi, a lawyer with global law firm BCLP, which has established an AI working group. And with populous states such as California, New York, Texas, and Florida either passing or considering AI legislation, companies doing business across the US won't be able to avoid the regulations. Enterprises developing and using AI should be ready to answer questions about how their AI tools work, even when deploying automated tools as simple as spam filtering, Mahdavi says. "Those questions will come from consumers, and they will come from regulators," she adds. "There's obviously going to be heightened scrutiny here across the board."
There are sector-specific bills, and bills that demand transparency (of both development and output), according to the article. "The third category of AI bills covers broad AI bills, often focused on transparency, preventing bias, requiring impact assessment, providing for consumer opt-outs, and other issues."

One example the article notes is Senate Bill 1047, introduced in the California State Legislature in February, which "would require safety testing of AI products before they're released, and would require AI developers to prevent others from creating derivative models of their products that are used to cause critical harms."

Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills, tells CIO that many of the bills promote best practices in privacy and data security, but says the fragmented regulatory environment "underscores the call for national standards or laws to provide a coherent framework for AI usage."

Thanks to Slashdot reader snydeq for sharing the article.
Microsoft

Is Microsoft Working on 'Performant Sound Recognition' AI Technologies? (windowsreport.com) 28

Windows Report speculates on what Microsoft may be working on next based on a recently published patent for "performant sound recognition AI technologies" (dated April 2, 2024): Microsoft's new technology can recognize many different types of sounds, from doorbells to babies crying to dogs barking, and it is not limited to those. It can also recognize coughing or breathing difficulties, or unusual noises such as glass breaking. Most intriguingly, it can recognize and monitor environmental sounds, which can be further processed to let users know if a natural disaster is about to happen...

The neural network generates scores and probabilities for each type of sound event in each segment. This is like guessing what type of sound each segment is and how sure it is about the guess. After that, the system does some post-processing to smooth out the scores and probabilities and generate confidence values for each type of sound for different window sizes.
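The patent describes that post-processing only at a high level, but the pattern it sketches (per-segment class scores from a neural network, then smoothing and per-window confidence values) is a common one in sound-event detection. Below is a minimal, hypothetical Python sketch of the idea; the class names, window sizes, and moving-average smoother are illustrative assumptions, not details from Microsoft's patent.

```python
# Hypothetical sketch of the post-processing described above: a model emits
# a probability per sound class for each audio segment, and those scores are
# smoothed over time and aggregated into per-window confidence values.
# Class names, window sizes, and the smoothing method are assumptions.
import numpy as np

SOUND_CLASSES = ["doorbell", "baby_crying", "dog_barking", "glass_breaking", "coughing"]

def smooth_scores(segment_probs: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Moving-average smoothing over time, per class (segments x classes)."""
    weights = np.ones(kernel) / kernel
    return np.stack(
        [np.convolve(segment_probs[:, c], weights, mode="same")
         for c in range(segment_probs.shape[1])],
        axis=1,
    )

def window_confidences(smoothed: np.ndarray, window: int = 10) -> list:
    """Aggregate smoothed scores into one confidence per class for each window."""
    results = []
    for start in range(0, len(smoothed), window):
        chunk = smoothed[start:start + window]
        results.append({cls: float(chunk[:, i].mean())
                        for i, cls in enumerate(SOUND_CLASSES)})
    return results

# Example with fake model output for 30 one-second segments.
rng = np.random.default_rng(0)
fake_probs = rng.random((30, len(SOUND_CLASSES)))
print(window_confidences(smooth_scores(fake_probs))[0])
```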

Ultimately, this technology can be used in various applications. In a smart home device, it can detect when someone breaks into the house by recognizing the sound of glass shattering, or when a newborn is hungry or distressed by recognizing the sound of a baby crying. It can also be used in healthcare to accurately detect lung or heart disease by recognizing heartbeat sounds, coughing, or breathing difficulties. But one of its most important applications would be to warn casual users of upcoming natural disasters by recognizing and detecting the sounds associated with them.

Thanks to Slashdot reader John Nautu for sharing the article.
China

China Will Use AI To Disrupt Elections in the US, South Korea and India, Microsoft Warns (theguardian.com) 157

China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned. From a report: The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday. "As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.

Microsoft said that "at a minimum" China will create and distribute through social media AI-generated content that "benefits their positions in these high-profile elections." The company added that the impact of AI-made content was minor but warned that could change. "While the impact of such content in swaying audiences remains low, China's increasing experimentation in augmenting memes, videos and audio will continue -- and may prove effective down the line," said Microsoft. Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

UPDATE: Last fall, America's State Department "accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation," reports the Wall Street Journal: In an interview, Tom Burt, Microsoft's head of customer security and trust, said China's disinformation operations have become much more active in the past six months, mirroring rising activity of cyberattacks linked to Beijing. "We're seeing them experiment," Burt said. "I'm worried about where it might go next."
Linux

German State Moving Tens of Thousands of PCs To Linux and LibreOffice (documentfoundation.org) 143

The Document Foundation: Following a successful pilot project, the northern German federal state of Schleswig-Holstein has decided to move from Microsoft Windows and Microsoft Office to Linux and LibreOffice (and other free and open source software) on the 30,000 PCs used in the local government. As reported on the homepage of the Minister-President: "Independent, sustainable, secure: Schleswig-Holstein will be a digital pioneer region and the first German state to introduce a digitally sovereign IT workplace in its state administration. With a cabinet decision to introduce the open-source software LibreOffice as the standard office solution across the board, the government has given the go-ahead for the first step towards complete digital sovereignty in the state, with further steps to follow."
Microsoft

Microsoft Edge Will Let You Control How Much RAM It Uses Soon (theverge.com) 62

Microsoft is working on a new feature for its Edge browser that will let you limit the amount of RAM it uses. From a report: Leopeva64, who is one of the best at finding new Edge features, has spotted a new settings section in test builds of the browser that includes a slider so you can limit how much RAM Edge gets access to. The RAM slider appears to be targeted toward PC gamers, as there is a setting in Canary versions of Edge that lets you limit the amount of RAM when you're playing a PC game or all of the time. While the slider lets you pick between just 1GB and 16GB on a system with 16GB of RAM, Microsoft warns that "setting a low limit may impact browser speed."
Microsoft

Microsoft Reveals Subscription Pricing for Using Windows 10 Beyond 2025 (windowscentral.com) 121

Microsoft announced an extended support program for Windows 10 last year that would allow users to pay for continued security updates beyond the October 2025 end-of-support date. Today, the company has unveiled the pricing structure for that program, which starts at $61 per device and doubles every year for three years. Windows Central: Security updates on Windows are important, as they keep you protected from any vulnerabilities that are discovered in the OS. Microsoft releases a security update for Windows 10 once a month, but that will stop when October 2025 rolls around. Users still on Windows 10 after that date will officially be out of support unless they pay.

The extended support program for Windows 10 will let users pay for three years of additional security updates. This is handy for businesses and enterprise customers who aren't yet ready to upgrade their fleet of employee laptops and computers to Windows 11. For the first time, Microsoft is also allowing individual users at home to join the extended support program, which will let anyone running Windows 10 pay for extended updates beyond October 2025 for three years. The price is $61 per device, but that price doubles every year for three years. That means the second year will cost you $122 per device, and the third year will cost $244 per device.
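For anyone tallying the total, the doubling schedule comes to $61 + $122 + $244 = $427 per device over the full three years. A quick sketch of that arithmetic, using only the per-device prices quoted above:

```python
# Per-device cost of Windows 10 extended security updates, based on the
# published pricing above: $61 in year one, doubling each subsequent year.
base_price = 61
yearly = [base_price * 2 ** year for year in range(3)]  # [61, 122, 244]
print("Per-year cost:", yearly)
print("Three-year total:", sum(yearly))  # 427
```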

Microsoft

Microsoft and Quantinuum Say They've Ushered in the Next Era of Quantum Computing (techcrunch.com) 24

Microsoft and Quantinuum today announced a major breakthrough in quantum error correction. Using Quantinuum's ion-trap hardware and Microsoft's new qubit-virtualization system, the team was able to run more than 14,000 experiments without a single error. From a report: This new system also allowed the team to check the logical qubits and correct any errors it encountered without destroying the logical qubits. This, the two companies say, has now moved the state-of-the-art of quantum computing out of what has typically been dubbed the era of Noisy Intermediate Scale Quantum (NISQ) computers.

"Noisy" because even the smallest changes in the environment can lead a quantum system to essentially become random (or "decohere"), and "intermediate scale" because the current generation of quantum computers is still limited to just over a thousand qubits at best. A qubit is the fundamental unit of computing in quantum systems, analogous to a bit in a classic computer, but each qubit can be in multiple states at the same time and doesn't fall into a specific position until measured, which underlies the potential of quantum to deliver a huge leap in computing power.

It doesn't matter how many qubits you have, though, if you barely have time to run a basic algorithm before the system becomes too noisy to get a useful result -- or any result at all. Combining several different techniques, the team was able to run thousands of experiments with virtually no errors. That involved quite a bit of preparation and pre-selecting systems that already looked to be in good shape for a successful run, but still, that's a massive improvement from where the industry was just a short while ago.
Further reading: Microsoft blog.
United States

Scathing Federal Report Rips Microsoft For Shoddy Security (apnews.com) 81

quonset shares a report: In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying "a cascade of errors" by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company's knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China. It concluded that "Microsoft's security culture was inadequate and requires an overhaul" given the company's ubiquity and critical role in the global technology ecosystem. Microsoft products "underpin essential services that support national security, the foundations of our economy, and public health and safety."

The panel said the intrusion, discovered in June by the State Department and dating to May, "was preventable and should never have occurred," blaming its success on "a cascade of avoidable errors." What's more, the board said, Microsoft still doesn't know how the hackers got in. [...] It said Microsoft's CEO and board should institute "rapid cultural change," including publicly sharing "a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products."

Security

New XZ Backdoor Scanner Detects Implants In Any Linux Binary (bleepingcomputer.com) 33

Bill Toulas reports via BleepingComputer: Firmware security firm Binarly has released a free online scanner to detect Linux executables impacted by the XZ Utils supply chain attack, tracked as CVE-2024-3094. CVE-2024-3094 is a supply chain compromise in XZ Utils, a set of data compression tools and libraries used in many major Linux distributions. Late last month, Microsoft engineer Andres Freund discovered the backdoor in the latest version of the XZ Utils package while investigating unusually slow SSH logins on Debian Sid, a rolling release of the Linux distribution.

The backdoor was introduced by a pseudonymous contributor in XZ version 5.6.0 and remained present in 5.6.1. However, only a few Linux distributions and versions following a "bleeding edge" upgrading approach were impacted, with most using an earlier, safe library version. Following the discovery of the backdoor, a detection and remediation effort was started, with CISA proposing downgrading to XZ Utils 5.4.6 Stable and hunting for and reporting any malicious activity.

Binarly says the approach taken so far in the threat mitigation efforts relies on simple checks such as byte string matching, file hash blocklisting, and YARA rules, which could lead to false positives. This approach can trigger significant alert fatigue and doesn't help detect similar backdoors on other projects. To address this problem, Binarly developed a dedicated scanner that would work for the particular library and any file carrying the same backdoor. [...] Binarly's scanner increases detection as it scans for various supply chain points beyond just the XZ Utils project, and the results are of much higher confidence.
Binarly has made a free API available to accommodate bulk scans, too.
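To illustrate the kind of "simple checks" Binarly says early mitigation efforts relied on, here is a minimal, hypothetical Python sketch of hash blocklisting plus byte-string matching against a liblzma file. The hash and byte pattern below are placeholders rather than real indicators of compromise, and the default library path is an assumption; this is precisely the brittle approach the behavioral scanner described above is meant to improve on.

```python
# Naive detection in the style described above: compare a library's SHA-256
# against a blocklist and search it for known byte strings. The hash and the
# byte pattern are PLACEHOLDERS, not real CVE-2024-3094 indicators.
import hashlib
import sys
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder digest of a known-bad liblzma build
}
KNOWN_BAD_BYTES = [b"placeholder_backdoor_marker"]  # placeholder byte string

def naive_scan(path: str) -> bool:
    """Return True if the file matches a blocklisted hash or byte pattern."""
    data = Path(path).read_bytes()
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        return True
    return any(pattern in data for pattern in KNOWN_BAD_BYTES)

if __name__ == "__main__":
    # Default path is an assumption; pass the library location as an argument.
    target = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/x86_64-linux-gnu/liblzma.so.5"
    print("suspicious" if naive_scan(target) else "no known indicators found")
```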
AI

Microsoft is Working on an Xbox AI Chatbot (theverge.com) 11

Microsoft is currently testing a new AI-powered Xbox chatbot that can be used to automate support tasks. From a report: Sources familiar with Microsoft's plans tell The Verge that the software giant has been testing an "embodied AI character" that animates when responding to Xbox support queries. I understand this Xbox AI chatbot is part of a larger effort inside Microsoft to apply AI to its Xbox platform and services.

The Xbox AI chatbot is connected to Microsoft's support documents for the Xbox network and ecosystem, and can respond to questions and even process game refunds from Microsoft's support website. "This agent can help you with your Xbox support questions," reads a description of the Xbox chatbot internally at Microsoft. Microsoft expanded the testing pool for its Xbox chatbot more broadly in recent days, suggesting that this prototype "Xbox Support Virtual Agent" may one day handle support queries for all Xbox customers. Microsoft confirmed the existence of its chatbot to The Verge.

AI

OpenAI Removes Sam Altman's Ownership of Its Startup Fund (reuters.com) 6

According to a filing with the SEC, OpenAI has removed CEO Sam Altman's ownership and control of the company's venture capital fund that backs AI startups. Reuters reports: The change, documented in the March 29 filing, came after Altman's ownership of the OpenAI Startup Fund raised eyebrows for its unusual structure -- while being marketed similarly to a corporate venture arm, the fund was raised by Altman from outside limited partners and he made the investment decisions. OpenAI has said Altman does not have a financial interest in the fund despite the ownership.

Axios first reported on the ownership change on Monday. In a statement, a spokesperson for OpenAI said the fund's initial general partner (GP) structure was a temporary arrangement, and "this change provides further clarity." The OpenAI Startup Fund is investing $175 million raised from OpenAI partners such as Microsoft, although OpenAI itself is not an investor. Control of the fund has been moved over to Ian Hathaway, a partner at the fund since 2021, according to the filing. Altman will no longer be a general partner at the fund. OpenAI said Hathaway has overseen the fund's accelerator program and led investments in such companies as Harvey, Cursor and Ambience Healthcare.

AI

Huge AI Funding Leads To Hype and 'Grifting,' Warns DeepMind's Demis Hassabis (ft.com) 30

The surge of money flooding into AI has resulted in some crypto-like hype that is obscuring the incredible scientific progress in the field, according to Sir Demis Hassabis, co-founder of DeepMind. From a report: The chief executive of Google's AI research division told the Financial Times that the billions of dollars being poured into generative AI start-ups and products "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever."

"Some of that has now spilled over into AI, which I think is a bit unfortunate. And it clouds the science and the research, which is phenomenal," he added. "In a way, AI's not hyped enough but in some senses it's too hyped. We're talking about all sorts of things that are just not real." The launch of OpenAI's ChatGPT chatbot in November 2022 sparked an investor frenzy as start-ups raced to develop and deploy generative AI and attract venture capital funding. VC groups invested $42.5bn in 2,500 AI start-up equity rounds last year, according to market analysts CB Insights. Public market investors have also rushed into the so-called Magnificent Seven technology companies, including Microsoft, Alphabet and Nvidia, that are spearheading the AI revolution. Their rise has helped to propel global stock markets to their strongest first-quarter performance in five years.

Microsoft

Microsoft To Unbundle Office and Teams Following Years-long Criticism (techcrunch.com) 58

Microsoft will introduce new versions of its Microsoft 365 and Office 365 subscription services that exclude Teams, unbundling the suite following scrutiny from the European Union regulator and complaints from rival Slack. From a report: The move follows Microsoft agreeing to sell the Office 365 suite sans the Microsoft Teams offering in the EU and Switzerland last year. The company introduced Teams as a complimentary offering to the Office 365 suite in 2017. Microsoft has enjoyed an unfair advantage by coupling the two offerings, many businesses have argued. Slack, owned by Salesforce, termed the move "illegal," alleging that Microsoft forced the installation of Teams on customers through its market-dominant productivity suite and hid the true cost of the chat and video service.
Microsoft

Microsoft Engineer Sends Rust Linux Kernel Patches For In-Place Module Initialization (phoronix.com) 49

"What a time we live in," writes Phoronix, "where Microsoft not only continues contributing significantly to the Linux kernel but doing so to further flesh out the design of the Linux kernel's Rust programming language support..." Microsoft engineer Wedson Almeida Filho has sent out the latest patches working on Allocation APIs for the Rust Linux kernel code and also in leveraging those proposed APIs [as] a means of allowing in-place module initialization for Rust kernel modules. Wedson Almeida Filho has been a longtime Rust for Linux contributor going back to his Google engineering days and at Microsoft the past two years has shown no signs of slowing down on the Rust for Linux activities...

The Rust for Linux kernel effort remains a very vibrant effort with a wide variety of organizations contributing, even Microsoft engineers.

Government

Congress Bans Staff Use of Microsoft's AI Copilot (axios.com) 32

The U.S. House has set a strict ban on congressional staffers' use of Microsoft Copilot, the company's AI-based chatbot, Axios reported Friday. From the report: The House last June restricted staffers' use of ChatGPT, allowing limited use of the paid subscription version while banning the free version. The House's Chief Administrative Officer Catherine Szpindor, in guidance to congressional offices obtained by Axios, said Microsoft Copilot is "unauthorized for House use."

"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," it said. The guidance added that Copilot "will be removed from and blocked on all House Windows devices."

AI

NYC's Government Chatbot Is Lying About City Laws and Regulations (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: NYC's "MyCity" ChatBot was rolled out as a "pilot" program last October. The announcement touted the ChatBot as a way for business owners to "save ... time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business web pages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines." But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing. Further testing from BlueSky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.

MyCity's Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed. That can cause problems when a single factual answer to a question might not be reflected precisely in the training data. In fact, The Markup said that at least one of its tests resulted in the correct answer on the same query about accepting Section 8 housing vouchers (even as "ten separate Markup staffers" got the incorrect answer when repeating the same question). The MyCity Chatbot -- which is prominently labeled as a "Beta" product -- does tell users who bother to read the warnings that it "may occasionally produce incorrect, harmful or biased content" and that users should "not rely on its responses as a substitute for professional advice." But the page also states front and center that it is "trained to provide you official NYC Business information" and is being sold as a way "to help business owners navigate government."
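As a rough illustration of the "guess the most likely next word" behavior described above, here is a toy Python sketch of next-token selection over a made-up probability distribution. It is not MyCity's actual Azure pipeline; the candidate words and scores are invented purely to show that the model ranks continuations by probability and has no built-in check that the top-ranked answer is factually correct.

```python
# Toy next-token prediction: score candidate continuations, normalize with a
# softmax, and emit the most probable one. Nothing in this loop verifies that
# the chosen word matches the law -- the vocabulary and logits are made up.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical continuations for "Landlords are ___ to accept Section 8 vouchers"
candidates = ["required", "not required", "encouraged"]
logits = [2.1, 2.3, 0.4]  # invented model scores; the wrong answer happens to score highest

probs = softmax(logits)
print(dict(zip(candidates, (round(p, 3) for p in probs))))
print("model emits:", max(zip(candidates, probs), key=lambda pair: pair[1])[0])
```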
NYC Office of Technology and Innovation Spokesperson Leslie Brown told The Markup that the bot "has already provided thousands of people with timely, accurate answers" and that "we will continue to focus on upgrading this tool so that we can better support small businesses across the city."
Businesses

Microsoft, OpenAI Plan $100 Billion 'Stargate' AI Supercomputer (reuters.com) 41

According to The Information (paywalled), Microsoft and OpenAI are planning a $100 billion datacenter project that will include an artificial intelligence supercomputer called "Stargate." Reuters reports: The Information reported that Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of the biggest current data centers, citing people involved in private conversations about the proposal. OpenAI's next major AI upgrade is expected to land by early next year, the report said, adding that Microsoft executives are looking to launch Stargate as soon as 2028. The proposed U.S.-based supercomputer would be the biggest in a series of installations the companies are looking to build over the next six years, the report added.

The Information attributed the tentative cost of $100 billion to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates. It did not identify those sources. Altman and Microsoft employees have spread supercomputers across five phases, with Stargate as the fifth phase. Microsoft is working on a smaller, fourth-phase supercomputer for OpenAI that it aims to launch around 2026, according to the report. Microsoft and OpenAI are in the middle of the third phase of the five-phase plan, with much of the cost of the next two phases involving procuring the AI chips that are needed, the report said. The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.

Cloud

Cloud Server Host Vultr Rips User Data Ownership Clause From ToS After Web Outage (theregister.com) 28

Tobias Mann reports via The Register: Cloud server provider Vultr has rapidly revised its terms-of-service after netizens raised the alarm over broad clauses that demanded the "perpetual, irrevocable, royalty-free" rights to customer "content." The red tape was updated in January, as captured by the Internet Archive, and this month users were asked to agree to the changes by a pop-up that appeared when using their web-based Vultr control panel. That prompted folks to look through the terms, and there they found clauses granting the US outfit a "worldwide license ... to use, reproduce, process, adapt ... modify, prepare derivative works, publish, transmit, and distribute" user content.

It turned out these demands have been in place since before the January update; customers have only just noticed them now. Given Vultr hosts servers and storage in the cloud for its subscribers, some feared the biz was giving itself way too much ownership over their stuff, all in this age of AI training data being put up for sale by platforms. In response to online outcry, largely stemming from Reddit, Vultr in the past few hours rewrote its ToS to delete those asserted content rights. CEO J.J. Kardwell told The Register earlier today it's a case of standard legal boilerplate being taken out of context. The clauses were supposed to apply to customer forum posts, rather than private server content, and while, yes, the terms make more sense with that in mind, one might argue the legalese was overly broad in any case.

"We do not use user data," Kardwell stressed to us. "We never have, and we never will. We take privacy and security very seriously. It's at the core of what we do globally." [...] According to Kardwell, the content clauses are entirely separate to user data deployed in its cloud, and are more aimed at one's use of the Vultr website, emphasizing the last line of the relevant fine print: "... for purposes of providing the services to you." He also pointed out that the wording has been that way for some time, and added the prompt asking users to agree to an updated ToS was actually spurred by unrelated Microsoft licensing changes. In light of the controversy, Vultr vowed to remove the above section to "simplify and further clarify" its ToS, and has indeed done so. In a separate statement, the biz told The Register the removal will be followed by a full review and update to its terms of service.
"It's clearly causing confusion for some portion of users. We recognize that the average user doesn't have a law degree," Kardwell added. "We're very focused on being responsive to the community and the concerns people have and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it."
Cloud

Amazon Bets $150 Billion on Data Centers Required for AI Boom (yahoo.com) 26

Amazon plans to spend almost $150 billion in the coming 15 years on data centers, giving the cloud-computing giant the firepower to handle an expected explosion in demand for artificial intelligence applications and other digital services. From a report: The spending spree is a show of force as the company looks to maintain its grip on the cloud services market, where it holds about twice the share of No. 2 player Microsoft. Sales growth at Amazon Web Services slowed to a record low last year as business customers cut costs and delayed modernization projects. Now spending is starting to pick up again, and Amazon is keen to secure land and electricity for its power-hungry facilities.

"We're expanding capacity quite significantly," said Kevin Miller, an AWS vice president who oversees the company's data centers. "I think that just gives us the ability to get closer to customers." Over the past two years, according to a Bloomberg tally, Amazon has committed to spending $148 billion to build and operate data centers around the world. The company plans to expand existing server farm hubs in northern Virginia and Oregon as well as push into new precincts, including Mississippi, Saudi Arabia and Malaysia.
