Microsoft

Microsoft Asks Hundreds of China-Based AI Staff To Consider Relocating Amid US-China Tensions (wsj.com) 36

Microsoft is asking hundreds of employees in its China-based cloud-computing and AI operations to consider transferring outside the country, as tensions between Washington and Beijing mount around the critical technology. WSJ: Such staff, mostly engineers with Chinese nationality, were recently offered the opportunity to transfer to countries including the U.S., Ireland, Australia and New Zealand, people familiar with the matter said. The company is asking about 700 to 800 people [non-paywalled link], who are involved in machine learning and other work related to cloud computing, one of the people said. The move by one of America's biggest cloud-computing and AI companies comes as the Biden administration seeks to put tighter curbs around China's capability to develop state-of-the-art AI. The White House is considering new rules that would require Microsoft and other U.S. cloud-computing companies to get licenses before giving Chinese customers access to AI chips.
Microsoft

Microsoft's AI Push Imperils Climate Goal As Carbon Emissions Jump 30% (bnnbloomberg.ca) 68

Microsoft's ambitious goal to be carbon negative by 2030 is threatened by its expanding AI operations, which have increased its carbon footprint by 30% since 2020. To meet its targets, Microsoft must quickly adopt green technologies and improve efficiency in its data centers, which are critical for AI but heavily reliant on carbon-intensive resources. Bloomberg reports: Now to meet its goals, the software giant will have to make serious progress very quickly in gaining access to green steel and concrete and less carbon-intensive chips, said Brad Smith, president of Microsoft, in an exclusive interview with Bloomberg Green. "In 2020, we unveiled what we called our carbon moonshot. That was before the explosion in artificial intelligence," he said. "So in many ways the moon is five times as far away as it was in 2020, if you just think of our own forecast for the expansion of AI and its electrical needs." [...]

Despite AI's ravenous energy consumption, this actually contributes little to Microsoft's hike in emissions -- at least on paper. That's because the company says in its sustainability report that it's 100% powered by renewables. Companies use a range of mechanisms to make such claims, which vary widely in terms of credibility. Some firms enter into long-term power purchase agreements (PPAs) with renewable developers, where they shoulder some of a new energy plant's risk and help get new solar and wind farms online. In other cases, companies buy renewable energy credits (RECs) to claim they're using green power, but these inexpensive credits do little to spur new demand for green energy, researchers have consistently found. Microsoft uses a mix of both approaches. On one hand, it's one of the biggest corporate participants in power purchase agreements, according to BloombergNEF, which tracks these deals. But it's also a huge purchaser of RECs, using these instruments to claim about half of its energy use is clean, according to its environmental filings in 2022. By using a large quantity of RECs, Microsoft is essentially masking an even larger growth in emissions. "It is Microsoft's plan to phase out the use of unbundled RECs in future years," a spokesperson for the company said. "We are focused on PPAs as a primary strategy."
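A rough way to see how unbundled RECs can mask emissions growth is to compare location-based accounting (what the local grid actually emitted for the electricity consumed) with market-based accounting (what remains after subtracting megawatt-hours covered by PPAs and RECs). The sketch below is a minimal illustration of that bookkeeping; the grid intensity and consumption figures are made up and are not Microsoft's actual numbers.

```python
# Illustrative comparison of location-based vs. market-based carbon accounting.
# All numbers are hypothetical; they are not drawn from Microsoft's filings.

GRID_INTENSITY_T_PER_MWH = 0.4  # assumed average grid carbon intensity (tCO2e/MWh)

def location_based_emissions(consumption_mwh: float) -> float:
    """Emissions implied by the physical grid mix, ignoring certificates."""
    return consumption_mwh * GRID_INTENSITY_T_PER_MWH

def market_based_emissions(consumption_mwh: float,
                           ppa_mwh: float,
                           rec_mwh: float) -> float:
    """Emissions after subtracting MWh claimed via PPAs and unbundled RECs."""
    covered = min(ppa_mwh + rec_mwh, consumption_mwh)
    return (consumption_mwh - covered) * GRID_INTENSITY_T_PER_MWH

consumption = 20_000_000  # MWh consumed in a year (hypothetical)
ppas = 8_000_000          # MWh covered by power purchase agreements
recs = 10_000_000         # MWh claimed via unbundled RECs

print(f"location-based: {location_based_emissions(consumption):,.0f} tCO2e")
print(f"market-based:   {market_based_emissions(consumption, ppas, recs):,.0f} tCO2e")
# The gap between the two lines is the growth that REC-heavy reporting can hide.
```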

So what else can be done? Smith, along with Microsoft's Chief Sustainability Officer Melanie Nakagawa, has laid out clear steps in the sustainability report. High among them is increasing efficiency, that is, using the same amount of energy or computing to do more work. That could help reduce the need for data centers, which will reduce emissions and electricity use. On most things, "our climate goals require that we spend money," said Smith. "But efficiency gains will actually enable us to save money." Microsoft has also been at the forefront of buying sustainable aviation fuels, which has helped reduce some of its emissions from business travel. The company also wants to partner with those who will "accelerate breakthroughs" to make greener steel, concrete and fuels. Those technologies are starting to work at a small scale, but they remain far from being available in commercial quantities and are still expensive. Cheap renewable power has helped make Microsoft's climate journey easier. But the tech giant's electricity consumption last year rivaled that of a small European country -- beating Slovenia easily. Smith said that one of the biggest bottlenecks to keeping access to green power is the lack of transmission lines from where the power is generated to the data centers. That's why Microsoft says it's going to increase lobbying efforts to get governments to speed up building the grid.
If Microsoft's emissions remain high going into 2030, Smith said the company may consider bulk purchases of carbon removal credits, even though it's not "the desired course."

"You've got to be willing to invest and pay for it," said Smith. Climate change is "a problem that humanity created and that humanity can solve."
Apple

Apple Brings Eye-Tracking To Recent iPhones and iPads (engadget.com) 36

This week, in celebration of Global Accessibility Awareness Day, Apple is introducing several new accessibility features. Noteworthy additions include eye-tracking support for recent iPhone and iPad models, customizable vocal shortcuts, music haptics, and vehicle motion cues. Engadget reports: The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware. [...]
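Apple hasn't published how Dwell Control is implemented, but the core idea, selecting whatever element the gaze has rested on for a fixed interval, can be sketched as a small state machine. The sketch below is illustrative only: the one-second threshold, the element names, and the `DwellSelector` class are all assumptions, not Apple's API.

```python
# Minimal sketch of a dwell-to-select state machine (illustrative only;
# not Apple's implementation or API).
import time

DWELL_SECONDS = 1.0  # assumed dwell threshold before an element is activated

class DwellSelector:
    def __init__(self, dwell_seconds: float = DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.gaze_started_at = None

    def update(self, gazed_element: str | None, now: float | None = None):
        """Feed the element currently under the user's gaze; returns the
        element name when the dwell threshold is reached, else None."""
        now = time.monotonic() if now is None else now
        if gazed_element != self.current_target:
            # Gaze moved to a new element (or off-screen): restart the timer.
            self.current_target = gazed_element
            self.gaze_started_at = now
            return None
        if gazed_element is not None and now - self.gaze_started_at >= self.dwell_seconds:
            # Re-arm so continued gazing fires again only after another full dwell.
            self.gaze_started_at = now
            return gazed_element
        return None

selector = DwellSelector()
for t, element in [(0.0, "Settings"), (0.5, "Settings"), (1.1, "Settings")]:
    if (selected := selector.update(element, now=t)):
        print(f"activate {selected}")  # -> activate Settings
```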

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, many of today's tools will likely be officially released with the next version of iOS.
Apple detailed all the new features in a press release.
Android

Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication."

First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen.

Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now.
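Google describes Theft Detection Lock only at a high level, but the signal it relies on, a sharp acceleration spike (the snatch) followed by sustained fast motion (running, biking, or driving away), lends itself to a simple heuristic sketch. The thresholds and window sizes below are invented for illustration; the real feature reportedly uses an on-device model rather than fixed cutoffs like these.

```python
# Toy snatch-detection heuristic over accelerometer magnitudes (m/s^2).
# Thresholds are invented for illustration; Android 15's feature uses
# "Google AI" on-device, not a fixed rule like this.

SNATCH_SPIKE = 25.0     # assumed: momentary jerk well above normal handling
ESCAPE_LEVEL = 12.0     # assumed: sustained motion consistent with fleeing
ESCAPE_SAMPLES = 20     # assumed: how many post-spike samples must stay high

def should_auto_lock(accel_magnitudes: list[float]) -> bool:
    """Return True if a spike is followed by sustained high motion."""
    for i, a in enumerate(accel_magnitudes):
        if a >= SNATCH_SPIKE:
            window = accel_magnitudes[i + 1 : i + 1 + ESCAPE_SAMPLES]
            if len(window) == ESCAPE_SAMPLES and \
               sum(v >= ESCAPE_LEVEL for v in window) > ESCAPE_SAMPLES * 0.8:
                return True
    return False

# A quiet phone in hand, then a snatch followed by running:
trace = [9.8] * 50 + [30.0] + [14.0] * 25
print(should_auto_lock(trace))  # True
```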

An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.

AI

Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review (apnews.com) 110

A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop AI and place safeguards around it, writing in a report released Wednesday that the U.S. needs to "harness the opportunities and address the risks" of the quickly developing technology. AP: The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report. While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.
Google

Google Will Use Gemini To Detect Scams During Calls (techcrunch.com) 57

At Google I/O on Tuesday, Google previewed a feature that will alert users to potential scams during a phone call. TechCrunch reports: The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can be run entirely on-device. The system effectively listens for "conversation patterns commonly associated with scams" in real time. Google gives the example of someone pretending to be a "bank representative." Common scammer tactics like password requests and gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters.
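TechCrunch's description implies a straightforward pipeline: transcribe the call on-device, score each chunk of the transcript against known scam patterns, and surface an alert once suspicion crosses a threshold. The sketch below fakes the scoring step with a keyword list purely to show the flow; Gemini Nano's actual interface, cues, and thresholds are not public, so every name here is an assumption.

```python
# Sketch of an on-device "scam pattern" alerting loop. The keyword scorer
# stands in for the real model (Gemini Nano); its interface here is invented.

SCAM_CUES = {
    "gift card": 0.6,
    "verification code": 0.5,
    "password": 0.5,
    "wire transfer": 0.4,
    "your account is compromised": 0.7,
}
ALERT_THRESHOLD = 1.0

def score_chunk(transcript_chunk: str) -> float:
    """Stand-in for an on-device model call; real scoring would be learned."""
    text = transcript_chunk.lower()
    return sum(weight for cue, weight in SCAM_CUES.items() if cue in text)

def monitor_call(transcript_chunks):
    """Yield an alert as soon as cumulative suspicion crosses the threshold."""
    running = 0.0
    for chunk in transcript_chunks:
        running += score_chunk(chunk)
        if running >= ALERT_THRESHOLD:
            yield "Possible scam: never share codes or buy gift cards on request."
            running = 0.0  # reset after alerting

call = ["Hello, this is your bank representative.",
        "Your account is compromised, read me the verification code."]
for alert in monitor_call(call):
    print(alert)
```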

No specific release date has been set for the feature. Like many of these things, Google is previewing how much Gemini Nano will be able to do down the road sometime. We do know, however, that the feature will be opt-in.

AI

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT (wired.com) 9

At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them.

Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."

Movies

Google Targets Filmmakers With Veo, Its New Generative AI Video Model (theverge.com) 12

At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future."
Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.
Businesses

OpenAI's Chief Scientist and Co-Founder Is Leaving the Company (nytimes.com) 19

OpenAI's co-founder and Chief Scientist, Ilya Sutskever, is leaving the company to work on "something personally meaningful," wrote CEO Sam Altman in a post on X. "This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. [...] I am forever grateful for what he did here and committed to finishing the mission we started together." He will be replaced by OpenAI researcher Jakub Pachocki. Here's Altman's full X post announcing the departure: Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important.

OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity.

Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.
The New York Times notes that Ilya joined three other board members to force out Altman in a chaotic weekend last November. Ultimately, Altman returned as CEO five days later. Ilya said he regretted the move.
AI

AI in Gmail Will Sift Through Emails, Provide Search Summaries, Send Emails (arstechnica.com) 43

An anonymous reader shares a report: Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar. That's simple to describe but solves a huge problem with email: even searching brings up a list of email subjects, and you have to click through to each one just to read it.

Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface. Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.
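Google hasn't detailed the mechanics, but the demo maps onto a familiar retrieve-then-summarize pattern: filter the mailbox, hand the matching messages to a model, and return a bullet summary plus links back to the source emails. In the sketch below, `summarize()` is a placeholder for whatever model call actually backs the feature, and the mailbox data is invented.

```python
# Sketch of a retrieve-then-summarize flow over a mailbox. `summarize` is a
# placeholder for a model call; the data and interfaces are illustrative.
from dataclasses import dataclass

@dataclass
class Email:
    msg_id: str
    sender: str
    subject: str
    body: str

def summarize(texts: list[str]) -> list[str]:
    # Placeholder: a real implementation would call a language model here.
    return [f"- {t[:60]}..." for t in texts]

def summarize_contact(mailbox: list[Email], contact: str):
    matches = [m for m in mailbox if m.sender == contact]
    bullets = summarize([f"{m.subject}: {m.body}" for m in matches])
    sources = [m.msg_id for m in matches]   # lets the UI jump to each email
    return bullets, sources

mailbox = [Email("42", "school@example.org", "Parents' group",
                 "We still need volunteers for the spring event.")]
print(summarize_contact(mailbox, "school@example.org"))
```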

AI

Google's Invisible AI Watermark Will Help Identify Generative Text and Video 17

Among Google's swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums. The Verge: Google's DeepMind CEO, Demis Hassabis, took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team's new AI tools, like the Veo video generator, but also about the new upgraded SynthID watermark imprinting system. It can now mark video that was digitally generated, as well as AI-generated text.

[...] Google had also enabled SynthID to inject inaudible watermarks into AI-generated music that was made using DeepMind's Lyria model. SynthID is just one of several AI safeguards in development to combat misuse by the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
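Google hasn't said how SynthID marks text, but published statistical text-watermarking schemes share a core idea: bias token sampling toward a pseudo-randomly chosen "green" subset of the vocabulary, then detect the mark by counting how often generated tokens land in that subset. The sketch below illustrates that general greenlist idea with a toy vocabulary; it is not SynthID's algorithm.

```python
# Generic statistical text watermark (greenlist-style), shown only to
# illustrate the idea behind watermarking generated text; this is not
# SynthID's actual scheme.
import hashlib

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "sofa"]
GREEN_FRACTION = 0.5

def green_set(prev_token: str) -> set[str]:
    """Pseudo-randomly split the vocabulary, seeded by the previous token."""
    scored = sorted(VOCAB, key=lambda t: hashlib.sha256(
        (prev_token + t).encode()).hexdigest())
    return set(scored[: int(len(VOCAB) * GREEN_FRACTION)])

def detect(tokens: list[str]) -> float:
    """Fraction of tokens that land in their context's green set;
    watermarked text should score well above the ~0.5 baseline."""
    hits = sum(tokens[i] in green_set(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)

# A generator that always prefers green tokens would produce text whose
# detect() score approaches 1.0, while ordinary text hovers near 0.5.
print(detect(["the", "cat", "sat", "on", "the", "mat"]))
```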
Google

Google Search Will Now Show AI-Generated Answers To Millions By Default (engadget.com) 59

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world's dominant search engine at I/O, Google's annual conference for developers. From a report: With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas. "[With] generative AI, Search can do more than you ever imagined," wrote Liz Reid, vice president and head of Google Search, in a blog post. "So you can ask whatever's on your mind or whatever you need to get done -- from researching to planning to brainstorming -- and Google will take care of the legwork."

Google's changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI ever since OpenAI's ChatGPT was released at the end of 2022. [...] Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far, anyone who wanted to use the feature had to sign up for it as part of the company's Search Labs platform that lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to reach more than a billion people in additional countries by the end of the year.

Facebook

Meta Will Shut Down Workplace, Its Business Chat Tool (axios.com) 21

Meta is shutting down Workplace, the tool it sold to businesses that combined social and productivity features, according to messages to customers obtained by Axios and confirmed by Meta. From the report: Meta has been cutting jobs and winnowing its product line for the last few years while investing billions first in the metaverse and now in AI. Micah Collins, Meta's senior director of product management, sent a message to customers alerting them of the shutdown.

Collins said customers can use Workplace through September 2025, when it will become available only to download or read existing data. The service will shut down completely in 2026. Workplace was formerly Facebook at Work, and launched in its current form in 2016. In 2021 the company reported it had 7 million paid subscribers.

AI

Slashdot Asks: How Do You Protest AI Development? (wired.com) 170

An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of Pause AI, a group of activists petitioning for companies to pause development of large AI models, which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message.

"The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protests gathered in front of OpenAI'sSan Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protests staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. "Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...]

Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says.
According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy."

Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal."

Slashdotters, what is the most effective way to protest AI development? Is the AI genie out of the bottle? Curious to hear your thoughts.
Supercomputing

Intel Aurora Supercomputer Breaks Exascale Barrier 28

Josh Norem reports via ExtremeTech: At the recent ISC 2024 international supercomputing conference, Intel's newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD's Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world's best performance for AI, at 10.61 "AI exaflops." Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), which built and house the system in its current state, one Intel says was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial "partial run" in late 2023, when it hit just 585.34 petaflops. The company said then that it expected to cross the exascale barrier with Aurora eventually, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in LINPACK, meaning it's still unable to dethrone AMD's Frontier system, which is also an HPE supercomputer. AMD's Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the Linpack-mixed benchmark, reportedly highlighting its unparalleled AI performance. Intel's Aurora supercomputer uses the company's latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it's fully operational later this year, Intel believes the system will eventually be capable of crossing the 2-exaflop barrier.
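The reported figures invite a quick back-of-the-envelope check of per-node throughput during the partial run, assuming (as a simplification) that all 9,234 nodes contributed equally.

```python
# Back-of-the-envelope arithmetic from the reported Aurora figures.
hpl_flops = 1.012e18    # reported HPL result: 1.012 exaflops
nodes_in_run = 9_234    # nodes used for the ISC 2024 submission

per_node = hpl_flops / nodes_in_run
print(f"{per_node / 1e12:.0f} TFLOPS per node (HPL, partial system)")  # ~110

# The full machine is listed with 63,744 Ponte Vecchio GPUs across the whole
# system; the 87%-functional run above used only part of it, so scaling
# linearly to the full node count gives only an optimistic upper bound.
```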
AI

ChatGPT Is Getting a Mac App 9

OpenAI has launched an official macOS app for ChatGPT, with a Windows version coming "later this year." "Both free and paid users will be able to access the new app, but it will only be available to ChatGPT Plus users starting today before a broader rollout in 'the coming weeks,'" reports The Verge. From the report: In the demo shown by OpenAI, users could open the ChatGPT desktop app in a small window, alongside another program. They asked ChatGPT questions about what's on their screen -- whether by typing or saying it. ChatGPT could then respond based on what it "sees." OpenAI says users can ask ChatGPT a question by using the Option + Space keyboard shortcut, as well as take and discuss screenshots within the app. Further reading: OpenAI Launches New Free Model GPT-4o
IBM

IBM Open-Sources Its Granite AI Models (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: IBM managed the open sourcing of Granite code by using pretraining data from publicly available datasets, such as GitHub Code Clean, StarCoder data, public code repositories, and GitHub issues. In short, IBM has gone to great lengths to avoid copyright or legal issues. The Granite Code Base models are trained on 3 trillion to 4 trillion tokens of code data and natural-language code-related datasets. All these models are licensed under the Apache 2.0 license for research and commercial use. It's that last word -- commercial -- that stopped the other major LLMs from being open-sourced. No one else wanted to share their LLM goodies.

But, as IBM Research chief scientist Ruchir Puri said, "We are transforming the generative AI landscape for software by releasing the highest performing, cost-efficient code LLMs, empowering the open community to innovate without restrictions." Without restrictions, perhaps, but not without specific applications in mind. The Granite models, as IBM ecosystem general manager Kate Woolley said last year, are not "about trying to be everything to everybody. This is not about writing poems about your dog. This is about curated models that can be tuned and are very targeted for the business use cases we want the enterprise to use. Specifically, they're for programming."

These decoder-only models, trained on code from 116 programming languages, range from 3 to 34 billion parameters. They support many developer uses, from complex application modernization to on-device memory-constrained tasks. IBM has already used these LLMs internally in IBM Watsonx Code Assistant (WCA) products, such as WCA for Ansible Lightspeed for IT Automation and WCA for IBM Z for modernizing COBOL applications. Not everyone can afford Watsonx, but now, anyone can work with the Granite LLMs using IBM and Red Hat's InstructLab.
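Because the Granite Code models are Apache 2.0-licensed and published openly, they can be loaded with standard Hugging Face tooling rather than Watsonx. The model ID below is assumed from IBM's ibm-granite organization on Hugging Face; treat it as an assumption and verify the exact checkpoint name before running.

```python
# Minimal sketch: loading a Granite Code model with Hugging Face transformers.
# The model ID is assumed from IBM's ibm-granite org; verify the exact name,
# and expect the 3B checkpoint to need several GB of RAM/VRAM.
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-3b-code-base"  # assumption: smallest base variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```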

AI

AI Hitting Labour Forces Like a 'Tsunami', IMF Chief Says (yahoo.com) 90

AI is hitting the global labour market "like a tsunami," International Monetary Fund Managing Director Kristalina Georgieva said on Monday. AI is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years, Georgieva told an event in Zurich. From a report: "We have very little time to get people ready for it, businesses ready for it," she told the event organised by the Swiss Institute of International Studies, associated with the University of Zurich. "It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society."
Microsoft

Microsoft Places Uses AI To Find the Best Time For Your Next Office Day 55

An anonymous reader shares a report: Microsoft is attempting to solve the hassle of coordinating with colleagues on when everyone will be in the office. It's a problem that emerged with the increase in hybrid and flexible work after the recent COVID-19 pandemic, with workers spending less time in the office. Microsoft Places is an AI-powered app that goes into preview today and should help businesses that rely on Outlook and Microsoft Teams to better coordinate in-office time together.

"When employees get to the office, they don't want to be greeted by a sea of empty desks -- they want face-time with their manager and the coworkers they collaborate with most frequently," says Microsoft's corporate vice president of AI at work, Jared Spataro, in a blog post. "With Places, you can more easily coordinate across coworkers and spaces in the office."
Facebook

Meta Explores AI-Assisted Earphones With Cameras (theinformation.com) 23

An anonymous reader shares a report: Meta Platforms is exploring developing AI-powered earphones with cameras, which the company hopes could be used to identify objects and translate foreign languages, according to three current employees. Meta's work on a new AI device comes as several tech companies look to develop AI wearables, and after Meta added an AI assistant to its Ray-Ban smart glasses.

Meta CEO Mark Zuckerberg has seen several possible designs for the device but has not been satisfied with them, one of the employees said. It's unclear if the final design will be in-ear earbuds or over-the-ear headphones. Internally, the project goes by the name Camerabuds. The timeline is also unclear. Company leaders had expected a design to be approved in the first quarter, one of the people said. But employees have identified multiple potential problems with the project, including that long hair may cover the cameras on the earbuds. Also, putting a camera and batteries into tiny devices could make the earbuds bulky and risk making them uncomfortably hot. Attaching discreet cameras to a wearable device may also raise privacy concerns, as Google learned with Google Glass.
