AI

When AI Botches Your Medical Diagnosis, Who's To Blame? (qz.com) 198

Robert Hart poses an interesting question in his report on Quartz: When artificial intelligence botches your medical diagnosis, who's to blame? Do you blame the AI, the designer, or the organization? It's one of many questions experts have begun to ponder seriously as artificial intelligence and automation become more entwined in our daily lives. From the report: The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis? Medical error is currently the third leading cause of death in the U.S., and as many as one in six patients in the British NHS receive incorrect diagnoses. With statistics like these, it's unsurprising that researchers at Johns Hopkins University believe diagnostic errors to be "the next frontier for patient safety." Of course, there are downsides. AI raises profound questions regarding medical responsibility. Usually when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?
Republicans

President Trump's Budget Includes a $2 Trillion Math Error (time.com) 354

An anonymous reader quotes a report from TIME: President Trump's budget includes a simple accounting error that adds up to a $2 trillion oversight. Under the proposed budget released Tuesday, the Trump Administration's proposed tax cuts would boost economic growth enough to pay for $1.3 trillion in spending by 2027. But the tax cuts are also supposed to be revenue-neutral, meaning that same growth-driven revenue is already spoken for: it is supposed to make up the money lost from the tax cuts. Former Treasury Secretary Lawrence Summers called the oversight an "elementary double count" and "a logical error of the kind that would justify failing a student in an introductory economics course" in a Washington Post op-ed.
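Summers' objection is pure arithmetic. The sketch below uses illustrative stand-in numbers, not the budget's actual line items, to show the shape of a double count:

```python
# Summers' "elementary double count" in miniature. The figures here are
# illustrative stand-ins (mine), not the budget's actual line items.
growth_revenue = 2.0        # $T of extra revenue assumed from faster growth

tax_cut_cost = 2.0          # claim 1: growth revenue offsets the tax cuts
deficit_reduction = 2.0     # claim 2: the SAME revenue also pays for spending

claimed = tax_cut_cost + deficit_reduction   # the money is spent twice
available = growth_revenue
print(claimed - available)   # 2.0 -- the size of the hole
```

Counting one revenue stream against two obligations is the whole error; the gap is exactly the amount that was claimed twice.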
United States

The Reign of the $100 Graphing Calculator Required By Every US Math Class Is Finally Ending (engadget.com) 281

If you took a math class at some point in the US, there is likely a bulky $100 calculator gathering dust somewhere in your closet. Fast forward to today, and the Texas Instruments TI-84 -- or the TI-84 Plus, or the TI-89, or any of the other even more expensive hardware variants -- is quickly losing relevance. Engadget adds: Thanks to a new deal, students will soon get a free option. Starting this spring, pupils in 14 US states will be able to use the TI-like Desmos online calculator during standardized testing run by the Smarter Balanced consortium. "We think students shouldn't have to buy this old, underpowered device anymore," Desmos CEO Eli Luberoff said. The Desmos calculator will be embedded directly into the assessments, meaning students will have access during tests with no need for an external device. It'll also be available to students in grades 6 through 8 and high school throughout the year. The calculator is free for individuals to use, and the company makes money by charging organizations that license it, according to Bloomberg.
Education

'U Can't Talk to Ur Professor Like This' (nytimes.com) 486

Millennial college students have become far too casual when they talk with their professors, argues an opinion piece in The New York Times. Addressing professors by their first names and sending misspelled, informal emails riddled with text abbreviations have become more common than educators would like, adds Molly Worthen, an assistant professor of history at the University of North Carolina, Chapel Hill. (Editor's note: the link could be paywalled; here's a syndicated source.) From the article: Over the past decade or two, college students have become far more casual in their interactions with faculty members. My colleagues around the country grumble about students' sloppy emails and blithe informality. "When students started calling me by my first name, I felt that was too far, and I've got to say something," said Mark Tomforde, a math professor at the University of Houston. Sociologists who surveyed undergraduate syllabuses from 2004 and 2010 found that in 2004, 14 percent addressed issues related to classroom etiquette; six years later, that number had more than doubled, to 33 percent. This phenomenon crosses socio-economic lines. My colleagues at Stanford gripe as much as the ones who teach at state schools, and students from more privileged backgrounds are often the worst offenders. [...] Insisting on traditional etiquette is also simply good pedagogy. It's a teacher's job to correct sloppy prose, whether in an essay or an email. And I suspect that most of the time, students who call faculty members by their first names and send slangy messages are not seeking a more casual rapport. They just don't know they should do otherwise -- no one has bothered to explain it to them. Explaining the rules of professional interaction is not an act of condescension; it's the first step in treating students like adults.
Programming

Power of Modern Programming Languages is That They Are Expressive, Readable, Concise, Precise, and Executable (scientificamerican.com) 268

An anonymous reader shares a Scientific American article: Programming has changed. In first generation languages like FORTRAN and C, the burden was on programmers to translate high-level concepts into code. With modern programming languages -- I'll use Python as an example -- we use functions, objects, modules, and libraries to extend the language, and that doesn't just make programs better, it changes what programming is. Programming used to be about translation: expressing ideas in natural language, working with them in math notation, then writing flowcharts and pseudocode, and finally writing a program. Translation was necessary because each language offers different capabilities. Natural language is expressive and readable, pseudocode is more precise, math notation is concise, and code is executable. But the price of translation is that we are limited to the subset of ideas we can express effectively in each language. Some ideas that are easy to express computationally are awkward to write in math notation, and the symbolic manipulations we do in math are impossible in most programming languages. The power of modern programming languages is that they are expressive, readable, concise, precise, and executable. That means we can eliminate middleman languages and use one language to explore, learn, teach, and think.
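The article's claim can be made concrete with a small example of my own (not the author's): a definition that reads almost like the math notation it came from, yet also runs.

```python
from fractions import Fraction

# One language serving as both notation and program.
# Math notation:  H(n) = sum_{k=1}^{n} 1/k  -- the n-th harmonic number.
def harmonic(n):
    """Exact n-th harmonic number, written almost as tersely as the math."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

print(harmonic(4))   # 25/12 -- exact, where floating point would only approximate
```

The definition is readable as prose, precise enough to compute with, and executable in the same breath -- the property the article attributes to modern languages.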
Crime

Debian Developer Imprisoned In Russia Over Alleged Role In Riots (itwire.com) 93

An anonymous reader writes: "Dmitry Bogatov, Debian developer and Tor node admin, is still being held in a Moscow jail," tweeted the EFF Saturday. IT Wire reports that the 25-year-old math teacher was arrested earlier this month "on suspicion of organizing riots," and is expected to be held in custody until June 8. "The panel investigating the protests claims Bogatov posted several inciting messages on the sysadmin.ru forum; for example, one claim said he was asking people to bring 'bottles, fabric, gasoline, turpentine, foam plastic' to Red Square, according to a post at Hacker News. The messages were sent in the name of one Airat Bashirov and happened to be transmitted through the Tor node that Bogatov was running. The Hacker News post said Bogatov's lawyer had produced surveillance video footage to show that he was elsewhere at the time when the messages were posted."
"After Dmitry's arrest," reports the Free Bogatov site, "Airat Bashirov continue to post messages. News outlets 'Open Russia' and 'Mediazona' even got a chance to speak with him."

Earlier this month the Debian GNU/Linux project also posted a message of support, noting Dmitry maintains several packages for command line and system tools, and saying their group "honours his good work and strong dedication to Debian and Free Software... we hope he is back as soon as possible to his endeavours... In the meantime, the Debian Project has taken measures to secure its systems by removing Dmitry's keys in the case that they are compromised."
Math

Oregon Fines Man For Writing a Complaint Email Stating 'I Am An Engineer' (vice.com) 734

pogopop77 quotes a report from Motherboard: In September 2014, Mats Jarlstrom, an electronics engineer living in Beaverton, Oregon, sent an email to the state's engineering board. The email claimed that yellow traffic lights don't last long enough, which "puts the public at risk." "I would like to present these facts for your review and comments," he wrote. This email resulted not in a meeting, but in a threat from the Oregon State Board of Examiners for Engineering and Land Surveying [stating]: "ORS 672.020(1) prohibits the practice of engineering in Oregon without registration -- at a minimum, your use of the title 'electronics engineer' and the statement 'I'm an engineer' create violations." In January of this year, Jarlstrom was officially fined $500 by the state for the crime of "practicing engineering without being registered." Since the engineering board in Oregon said Jarlstrom should not be free to publish or present his ideas about the quickly changing yellow traffic lights, due to his "practice of engineering in Oregon without registration," he and the Institute for Justice sued the board in federal court for violating his First Amendment rights. "I'm not practicing engineering, I'm just using basic mathematics and physics, Newtonian laws of motion, to make calculations and talk about what I found," he said. Sam Gedge, an attorney for the Institute for Justice, told Motherboard: "Mats has a clear First Amendment right to talk about anything from taxes to traffic lights. It's an instance of a licensing board trying to suppress speech."
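For context, the math at issue is ordinary kinematics. The sketch below uses a standard ITE-style yellow-change formula as an illustration; it is not necessarily the exact calculation Jarlstrom submitted to the board:

```python
# Kinematic yellow-change interval, ITE-style (an illustration, not
# necessarily Jarlstrom's exact math): driver reaction time plus the
# time needed to brake comfortably to a stop.
def yellow_interval(speed_mph, reaction_s=1.0, decel_ftps2=10.0, grade=0.0):
    """Seconds of yellow needed. 64.4 = 2g in ft/s^2; grade is a decimal
    (positive uphill), which shortens the required interval."""
    v = speed_mph * 5280 / 3600                  # mph -> ft/s
    return reaction_s + v / (2 * decel_ftps2 + 64.4 * grade)

print(round(yellow_interval(35), 2))   # ~3.57 s for a 35 mph approach on the flat
```

Plug in a higher approach speed or a downhill grade and the required yellow grows -- which is the kind of "basic mathematics and physics" argument at the heart of the dispute.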
AI

Google's AlphaGo Will Face Its Biggest Challenge Yet Next Month -- But Why Is It Still Playing? (theguardian.com) 115

From a report on The Guardian: A year on from its victory over Go star Lee Sedol, Google DeepMind is preparing a "festival" of exhibition matches for its board game-playing AI, AlphaGo, to see how far it has evolved in the last 12 months. Headlining the event will be a one-on-one match against the current number one player of the ancient Asian game, 19-year-old Chinese professional Ke Jie. DeepMind has had its eye on this match since even before AlphaGo beat Lee. On the eve of his trip to Seoul in March 2016, the company's co-founder, Demis Hassabis, told the Guardian: "There's a young kid in China who's very, very strong, who might want to play us." As well as the one-on-one match with Jie, which will be played over the course of three games, AlphaGo will take part in two other games with slightly odder formats. But why is Google's AI still playing Go, you ask? An article on The Outline adds: Its [Google's] experiments with Go -- a game thought to be years away from being conquered by AI before last year -- are designed to bring us closer to designing a computer with human-like understanding that can solve problems like a human mind can. Historically, there have been tasks that humans do well -- communicating, improvising, emoting -- and tasks that computers do well, which tend to be those that require lots of computations -- like math of any kind, including statistical analysis and modeling of, say, journeying to the moon. Slowly, artificial intelligence scientists have been pushing that barrier. [...] Go is played on a board with a 19-by-19 grid. Each player takes turns placing stones (one player with white, the other with black) on empty intersections of the grid. The goal is to surround the other player's stones completely, removing them from the board.
The number of possible positions -- vastly larger than in chess, thanks in part to the size of the board and the ability to play on any unoccupied point -- is part of what makes the game so complex. As DeepMind co-founder Demis Hassabis put it last year, there are roughly 10^170 possible positions.
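The scale is easy to sanity-check yourself (my arithmetic, not DeepMind's): ignoring the rules entirely, each of the 361 points is either empty, black, or white, which already gives a number 173 digits long.

```python
# A naive upper bound on Go board configurations: every one of the
# 19 x 19 = 361 points is empty, black, or white (rules ignored).
positions_upper_bound = 3 ** (19 * 19)
print(len(str(positions_upper_bound)))   # 173 -- a 173-digit number
```

The count of *legal* positions is smaller, but still astronomically beyond chess -- and beyond anything a computer could enumerate by brute force, which is why AlphaGo's approach matters.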
Java

Ask Slashdot: Should I Move From Java To Scala? 245

"Scala is one of the JVM languages that manages to maintain a hip and professional vibe at the same time," writes long-time Slashdot reader Qbertino -- building up to a big question: One reason for this probably being that Scala was built by people who knew what they were doing. It has been around for a few years now in a mature form and I got curious about it a few years back. My question to the Slashdot community: Is getting into Scala worthwhile from a practical/industry standpoint or is it better to just stick with Java? Have you done larger, continuous multi-year, multi-man and mission-critical applications in Scala and what are your experiences?
The original submission asks two related questions. First, "Do you have to be a CS/math genius to make sense of Scala and use it correctly?" But more importantly, "Is Scala there to stay wherever it is deployed and used in real-world scenarios, or are there pitfalls and cracks showing up that would deter you from using Scala once again?" So share your experiences and answers in the comments. Would you recommend moving from Java to Scala?
Education

More Compulsory Math Lessons Do Not Encourage Women To Pursue STEM Careers, Study Finds (phys.org) 239

An anonymous reader shares a report: The demand for employees in STEM careers (science, technology, engineering and math) is particularly high, as corporations compete to attract skilled professionals in the international market. What is known as "curriculum intensification" is often used around the world to attract more university entrants -- and particularly more women -- to these subjects; that is to say, students take on average more mandatory math courses at a higher level. Scientists from the LEAD Graduate School and Research Network at the University of Tübingen have now studied whether more advanced math lessons at high schools actually encourage women to pursue STEM careers. Their work shows that an increase in advanced math courses during the two years before the final school-leaving exams does not automatically create the desired effects. On the contrary: one upper secondary school reform in Germany, in which all high school students have to take higher-level math courses, only increased gender differences in interest in STEM-related activities. The young female students' belief in their own math abilities was lower after the reform than before. The results have now been published in the Journal of Educational Psychology.
Science

No, We Probably Don't Live in a Computer Simulation, Says Physicist (gizmodo.com) 418

Science doesn't have all the answers. There are plenty of things it may never prove, like whether there's a God. Or whether we're living in a computer simulation, something proposed by Swedish philosopher Nick Bostrom. From an article on Gizmodo: This kind of thinking made at least one person angry: theoretical physicist and science writer Sabine Hossenfelder from the Frankfurt Institute for Advanced Studies in Germany. Last week, she took to her blog Backreaction to vent. It's not the statement "we're living in a simulation" that upsets Hossenfelder. It's the fact that philosophers are making assertions that, if true, should most certainly manifest themselves in our laws of physics. "I'm not saying it's impossible," Hossenfelder told Gizmodo. "But I want to see some backup for this claim." Backup for such a claim would require a lot of work and a lot of math, enough to solve some of the most complex problems in theoretical physics.
Programming

Math Teacher Solves Adobe Semaphore Puzzle (mercurynews.com) 52

linuxwrangler writes: For over four years, lights atop Adobe's office building in San Jose have flashed out a secret message. This week, the puzzle was solved by Tennessee math teacher Jimmy Waters. As part of the winnings, Adobe is donating software and 3D printers to Waters' school in his name. "The semaphore had been transmitting the audio broadcast of Neil Armstrong's historic moon landing in 1969," reports The Mercury News. "That's right, not the text but the actual audio." The report provides some backstory: "Waters discovered the project, San Jose Semaphore, last summer while he was looking up something about Thomas Pynchon's 1966 novel, 'The Crying of Lot 49.' The text of that work was the code originally programmed by New York-based artist Ben Rubin in 2006. Seeing there was a new message, Waters began trying to decipher it, watching and writing down the sequences online from Tennessee. He discovered a pattern that led him to believe it could represent a space -- or a silence -- in an audio file, and when he graphed the results it looked like an audio wave. He dismissed that as being too difficult but came back to it and eventually ran his results through a program that would convert his numbers to audio. The first results came back sounding like chipmunks squeaking. So he tweaked things and found himself listening to the historic broadcast, which ends with Armstrong's famous line, 'That's one small step for man, one giant leap for mankind.'" You can listen to the semaphore message here.
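The final step Waters describes -- turning decoded numbers into sound -- can be sketched with Python's standard wave module. This is my illustration of the idea, not his actual pipeline; note how picking the wrong sample rate produces exactly the "chipmunks" effect he mentions:

```python
import math
import struct
import wave

# Sketch (not Waters' actual pipeline): write a sequence of decoded
# numbers out as a playable mono 16-bit WAV file.
def numbers_to_wav(samples, path="decoded.wav", rate=8000):
    """samples: floats in [-1, 1]. A too-high rate plays the same data
    faster and higher-pitched -- the 'chipmunks squeaking' effect."""
    with wave.open(path, "w") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(rate)
        for s in samples:
            w.writeframes(struct.pack("<h", int(s * 32767)))

# Demo: one second of a 440 Hz tone standing in for the decoded data.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
numbers_to_wav(tone)
```

Graphing such samples first, as Waters did, is a quick way to confirm the data looks like a waveform before bothering to play it.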
Math

Cooling To Absolute Zero Mathematically Outlawed After a Century (newscientist.com) 210

After more than 100 years of debate -- which at one point even elicited interest from Albert Einstein and Max Planck -- physicists have finally offered up mathematical proof of the third law of thermodynamics, which states that a temperature of absolute zero cannot be physically achieved because it's impossible for the entropy (or disorder) of a system to hit zero. While scientists have long suspected that there's an intrinsic 'speed limit' on the act of cooling in our Universe that prevents us from ever achieving absolute zero (0 kelvin, -273.15 C, or -459.67 F), this is the strongest evidence yet that our current laws of physics hold true when it comes to the lowest possible temperature. From a report on NewScientist: Now Jonathan Oppenheim and Lluis Masanes at University College London have mathematically derived the unattainability principle and placed limits on how fast a system can cool, creating a general proof of the third law. "In computer science, people ask this question all the time: how long does it take to perform a computation?" says Oppenheim. "Just as a computing machine performs a computation, a cooling machine cools a system." So, he and Masanes asked how long it takes to get cold. Cooling can be thought of as a series of steps: heat is removed from the system and dumped into the surrounding environment again and again, and each time the system gets colder. How cold depends on how much work can be done to remove the heat and the size of the reservoir for dumping it. By applying mathematical techniques from quantum information theory, they proved that no real system will ever reach 0 kelvin: it would take an infinite number of steps. Getting close to absolute zero is possible, though, and Masanes and Oppenheim quantified the steps of cooling, setting speed limits for how cold a given system can get in finite time.
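The step-by-step picture has a simple classical caricature. In the toy model below (my sketch, not the paper's quantum-information argument), each step removes a fixed fraction of the remaining heat, so the temperature decays geometrically: it gets arbitrarily close to zero but never reaches it in any finite number of steps.

```python
# Toy model of step-by-step cooling (an illustration only, not the
# paper's argument): each step dumps a fixed fraction of the remaining
# heat into the environment.
def cool(t_kelvin, fraction=0.5, steps=50):
    history = [t_kelvin]
    for _ in range(steps):
        t_kelvin *= 1 - fraction   # geometric decay toward 0 K
        history.append(t_kelvin)
    return history

temps = cool(300.0)
print(temps[-1])   # astronomically small, but still strictly positive
```

Halving the temperature fifty times from room temperature leaves a number around 10^-13 K -- tiny, yet provably nonzero, which is the intuition the actual proof makes rigorous for real physical systems.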
Math

This Is How the Number 3.14 Got the Name 'Pi' (time.com) 133

An anonymous reader shares a Time article: Ancient research on real numbers likely "didn't get improved upon until the age of Newton," says John Conway, mathematics professor emeritus at Princeton University who once won the school's Pi Day pie-eating contest. Sir Isaac Newton recorded 16 digits of pi in 1665, later admitting that he was "ashamed" of how long he had worked on the computations, as it meant that he had "no other business at the time," per the MAA. It was not until the 18th century -- about two millennia after the significance of the number 3.14 was first calculated by Archimedes -- that the name "pi" was first used to denote the number. In other words, the Greek letter used to represent the idea was not actually picked by the Ancient Greeks who discovered it. British mathematician William Jones came up with the Greek letter and symbol for the figure in 1706, and it was popularized by Swiss mathematician Leonhard Euler, Catherine the Great's mathematician, a few decades later. "Euler was a much better mathematician than the people who used [pi] before, and he wrote very good textbooks," says Conway. "He used it because the Greek letter Pi corresponds with the letter 'P'... and pi is about the perimeter of the circle."
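Archimedes' calculation translates directly into code: inscribe a hexagon in a unit circle, keep doubling the number of sides, and the polygon's half-perimeter closes in on pi. The function below is a standard textbook rendering of the method, not Archimedes' own notation:

```python
import math

# Archimedes' method, in executable form: double the sides of an
# inscribed regular polygon and watch its half-perimeter approach pi.
def archimedes_pi(doublings=10):
    n, side = 6, 1.0   # a regular hexagon inscribed in a unit circle
    for _ in range(doublings):
        # new side length after doubling the number of sides
        side = math.sqrt(2 - math.sqrt(4 - side * side))
        n *= 2
    return n * side / 2   # half the perimeter approximates pi

print(archimedes_pi())   # 3.14159... (6144-sided polygon)
```

Ten doublings already pin pi down to about six decimal places, which helps explain Newton's lament: by hand, each extra digit costs enormously more arithmetic.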
IT

Slashdot Asks: Are Password Rules Bullshit? (codinghorror.com) 498

Here's what Jeff Atwood, a co-founder of Stack Overflow, thinks: Password rules are bullshit. They don't work.
They heavily penalize your ideal audience, people that use real random password generators. Hey, guess what, that password randomly didn't have a number or symbol in it. I just double checked my math textbook, and yep, it's possible. I'm pretty sure.
They frustrate average users, who then become uncooperative and use "creative" workarounds that make their passwords less secure.
They are often wrong, in the sense that they are grossly incomplete and/or insane.
Seriously, for the love of God, stop with this arbitrary password rule nonsense already. If you won't take my word for it, read this 2016 NIST password rules recommendation. It's right there: "no composition rules." However, I do see one error -- it should have said "no bullshit composition rules."
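Atwood's first complaint is easy to quantify. A quick sketch (my numbers, not his): for a truly random 16-character alphanumeric password, the chance of drawing no digit at all -- and so being rejected by a "must contain a number" rule -- is noticeable.

```python
import string

# How often a "must contain a digit" rule rejects a perfectly strong,
# truly random 16-character alphanumeric password (illustrative numbers).
letters = len(string.ascii_letters)        # 52 upper- and lowercase letters
alphabet = letters + len(string.digits)    # 62 characters total

p_no_digit = (letters / alphabet) ** 16    # all 16 draws avoid the 10 digits
print(f"{p_no_digit:.1%}")                 # about 6.0%
```

Roughly one random password in seventeen fails that rule by pure chance -- exactly the "ideal audience" penalty Atwood describes.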
What do you think?
Programming

Douglas Crockford Envisions A Post-JavaScript World (infoworld.com) 300

JavaScript developer (and JSON proponent) Douglas Crockford recently described "a theoretical post-JavaScript World," according to InfoWorld. Crockford "believes the web development staple needs a successor that can fix multiple programming nuances." An anonymous reader summarizes their report: Despite its status as the world's most popular language, Crockford told an audience at the Oracle Code conference, "It would be sad if JavaScript turns out to be the last language." He complained that JavaScript has two different ways of declaring variables -- let and var -- as well as two different "bottom variables" with no value -- both null and undefined. "There's an argument among language designers, should we have bottom values at all? But there's nobody who thinks you should have two of them."

According to InfoWorld, Crockford "also presented a scenario with JavaScript being turned into a purely functional programming language by getting rid of 'impurities' like Date, the delete operation, Math.random and Object.assign. Afterward, he stressed replacing JavaScript rather than adding functional capabilities to it... The next language also should be better able to deal with multiple cores. Most languages have followed the sequential model of Fortran, executing one operation after another, he said. 'That's not how the world works anymore. We now have lots of cores available to us, which all want to be running at the same time.'"

In other news, Crockford also proposed ending the "spaces vs. tabs" debate by simply eliminating tabs altogether.
Math

How Algorithms May Affect You (phys.org) 85

New submitter Muckluck shares an excerpt from a report via Phys.Org that provides "an interesting look at how algorithms may be shaping your life": When you browse online for a new pair of shoes, pick a movie to stream on Netflix, or apply for a car loan, an algorithm likely has a say in the outcome. Complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a "no fly" list. Algorithms are being used -- experimentally -- to write news articles from raw data, while Donald Trump's presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of "persuadable voters." But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or "accountability." Data scientist Cathy O'Neil cautions against "blindly trusting" formulas to determine a fair outcome. "Algorithms are not inherently fair, because the person who builds the model defines success," she said. Phys.Org cites O'Neil's 2016 book, "Weapons of Math Destruction," which provides some "troubling examples in the United States" of "nefarious" algorithms. "Her findings were echoed in a White House report last year warning that algorithmic systems 'are not infallible -- they rely on the imperfect inputs, logic, probability, and people who design them,'" reports Phys.Org. "The report noted that data systems can ideally help weed out human bias but warned against algorithms 'systematically disadvantaging certain groups.'"
Education

Pioneering Data Genius Hans Rosling Passes Away At Age 68 (bbc.com) 53

An anonymous reader writes: On Tuesday, Sweden's prime minister tweeted that Hans Rosling "made human progress across our world come alive for millions," and the public educator will probably best be remembered as the man who could condense 200 years of global history into four minutes. He was a geek's geek, a former professor of global health who "dropped out" because he wanted to help start a nonprofit about data. Specifically, it urged data-based decisions for global development policy, and the Gapminder foundation created the massive Trendalyzer tool, which let users build their own data visualizations. Eventually they handed off the tool to Google, which used it with open-source scientific datasets. The BBC describes Rosling as a "public educator" with a belief that facts "could correct 'global ignorance' about the reality of the world, which 'has never been less bad.'" Rosling's TED talks include "The Best Stats You've Ever Seen" and "How Not To Be Ignorant About The World," and in 2015 he also gave a talk titled "How to Beat Ebola." Hans Rosling died Tuesday at age 68.
Math

You Can Make Any Number Out of Four 4s Because Math Is Amazing (youtube.com) 309

Andrew Moseman, writing for Popular Mechanics: Here's a fun math puzzle to brighten your day. Say you've got four 4s -- 4, 4, 4, 4 -- and you're allowed to place any normal math symbols around them. How many different numbers can you make? According to the fantastic YouTube channel Numberphile, you can make all of them. Really. You just have to have some fun and get creative. When you first start out, the problem seems pretty simple. So, for example, 4 - 4 + 4 - 4 = 0. To make 1, you can do 4 / 4 + 4 - 4. In fact, you can make most of the numbers up to about 20 using only the basic arithmetic operations of addition, subtraction, multiplication, and division. But soon that's not enough. To start reaching bigger numbers, the video explains, you must pull in more sophisticated operations like square roots, exponents, factorials (4!, or 4 x 3 x 2 x 1), and concatenation (basically, turning 4 and 4 into 44).
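The first stage of the puzzle is small enough to brute-force. The search below (my sketch, not Numberphile's) enumerates every value four 4s can reach using only +, -, *, and /, with exact rational arithmetic so no results are lost to rounding:

```python
from fractions import Fraction
from itertools import product

# Exhaustive search over four 4s combined with only +, -, *, and /.
def four_fours():
    vals = {1: {Fraction(4)}}
    for n in (2, 3, 4):
        vals[n] = set()
        for i in range(1, n):                      # split n fours into i + (n - i)
            for a, b in product(vals[i], vals[n - i]):
                vals[n] |= {a + b, a - b, a * b}   # b - a arises from the mirror split
                if b != 0:
                    vals[n].add(a / b)
    # keep the whole numbers from 0 to 20
    return {int(v) for v in vals[4] if v.denominator == 1 and 0 <= v <= 20}

print(sorted(four_fours()))   # 0 through 9 all appear; 10 and 11 are the first gaps
```

Those gaps are exactly where the video's fancier tools come in: concatenation alone rescues 10, since (44 - 4) / 4 = 10, and square roots, factorials, and friends push the streak much further.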
