Math

ChatGPT Is Getting Dumber at Basic Math

Recently released research reveals a fundamental challenge of developing artificial intelligence: ChatGPT has become worse at performing certain basic math operations. From a report: Researchers at Stanford University and the University of California, Berkeley said the deterioration is an example of a phenomenon known to AI developers as drift, where attempts to improve one part of the enormously complex AI models make other parts of the models perform worse.

[...] Thus far, they have tested two versions of ChatGPT: version 3.5, available free online to anyone, and version 4.0, available via a premium subscription. The results aren't entirely promising. They gave the chatbot a basic task: identify whether a particular number is a prime number. This is the sort of math problem that is complicated for people but simple for computers.

Is 17,077 prime? Is 17,947 prime? Unless you are a savant, you can't work this out in your head, but it is easy for computers to evaluate. A computer can just brute-force the problem -- try dividing by two, three, five, etc., and see if anything works. To track performance, the researchers fed ChatGPT 1,000 different numbers. In March, the premium GPT-4 correctly identified whether 84% of the numbers were prime or not. (Pretty mediocre performance for a computer, frankly.) By June its success rate had dropped to 51%. Across eight different tasks, GPT-4 became worse at six of them. GPT-3.5 improved on six measures, but remained worse than its advanced sibling at most of the tasks.
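
The brute force described above is only a few lines of code. Here is a minimal Python sketch of trial division, for illustration only and not anything the researchers necessarily ran:

    def is_prime(n: int) -> bool:
        """Trial division: try dividing by 2, 3, 5, ... up to sqrt(n)."""
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    print(is_prime(17077), is_prime(17947))  # True False (17947 = 131 * 137)
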
This discussion has been archived. No new comments can be posted.

  • by Ksevio ( 865461 ) on Wednesday August 09, 2023 @10:26AM (#63753194) Homepage

    ChatGPT is a large language model made for generating text. It's not designed to do math, but sometimes it can write responses that answer math problems correctly if there is enough training data with similar problems.

    If you need to solve math problems, try WolframAlpha. Heck, a collab between the two could probably make a good math assistant.
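
    For what it's worth, here is a toy sketch of that collab idea: route anything that looks like plain arithmetic to a symbolic engine and leave everything else to the chat model. sympy stands in here for a Wolfram Alpha call, and ask_llm / route_prompt are hypothetical names for illustration, not a real API.

        import re
        from sympy import sympify  # stands in for a Wolfram Alpha query

        def ask_llm(prompt: str) -> str:
            """Placeholder for a real chat-model call."""
            return "(free-text answer from the language model)"

        def route_prompt(prompt: str) -> str:
            """Toy dispatcher: exact math goes to the symbolic engine, the rest to the LLM."""
            if re.fullmatch(r"[0-9\s\.\+\-\*/\(\)]+", prompt.strip()):
                return str(sympify(prompt))  # exact, verifiable arithmetic
            return ask_llm(prompt)

        print(route_prompt("2*(5*5 + 5)"))  # 60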

    • by Moblaster ( 521614 ) on Wednesday August 09, 2023 @10:30AM (#63753208)

      This is all nonsense and a misunderstanding of what they did and how ChatGPT works.

      The LLM is always bad at math beyond absolutely trivial stuff.

      They had the version from a few months ago hooked up to a Wolfram Mathematica back end (unannounced apart from a separate plug-in to Mathematica) and the connection to that is just wonky at the moment.

      The LLM does not "do" math out of the box.

      • This is all nonsense and a misunderstanding of what they did and how ChatGPT works.

        The LLM is always bad at math beyond absolutely trivial stuff.

        They had the version from a few months ago hooked up to a Wolfram Mathematica back end (unannounced apart from a separate plug-in to Mathematica) and the connection to that is just wonky at the moment.

        The LLM does not "do" math out of the box.

        Unfortunately the article is paywalled, but I'm assuming the fuzziness is more the work of the WSJ reporter than of the authors of the paper. The prime number thing does strike me as a weird test for an LLM, but look at this exchange I just had with 3.5 (not using the Wolfram plugin):

        Me: What is 2*(5*5 + 5)?
        ChatGPT

        Let's solve this step by step:

        First, calculate the value inside the parentheses: 5 * 5 + 5 = 25 + 5 = 30.
        Then, multiply the result by 2: 2 * 30 =

        • So prime tests, I certainly wouldn't expect ChatGPT to handle that, but it's shockingly capable of math.

          Nope, it is not doing anything with the math.

          The training data includes math books, conversations about math, tutorials about solving math problems, and Common Crawl includes math education websites and random discussion where people explain and argue about why they think 1+2*5 should be 15 instead of 11.

          Neither the math problems nor those conversations are directly encoded. Instead the model has encountered enough similar statements among the billions of messages that the probability of those specific word

          • So prime tests, I certainly wouldn't expect ChatGPT to handle that, but it's shockingly capable of math.

            Nope, it is not doing anything with the math.

            The training data includes math books, conversations about math, tutorials about solving math problems, and Common Crawl includes math education websites and random discussion where people explain and argue about why they think 1+2*5 should be 15 instead of 11.

            Neither the math problems nor those conversations are directly encoded. Instead the model has encountered enough similar statements among the billions of messages that the probability of those specific words following other words happened to be quite high. Just like language phrases, number families form clusters that you are likely to see together. If you see 1 and 2 you have a high probability you are going to see a 3 next. When the training data includes so many billion discussion group messages, including reddit subs for math education, it sees similar clusters of words all the time.

            The model knows nothing about the math it happens to explain. The model has simply scanned enough math conversations that guessing the most probable words merely happened to include a correct answer.

            Ok then:

            What is 2*(5*6 +22.3 + 342/4) + 14*32 + 21+8)
            ChatGPT

            Let's break down the expression and solve it step by step:

            Inside the first set of parentheses:
            5 * 6 + 22.3 + 342/4 = 30 + 22.3 + 85.5 = 137.8

            Multiply the result by 2:
            2 * 137.8 = 275.6

            Inside the second set of parentheses:
            14 * 32 = 448

            Add the constant
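
            The transcript above is cut off, but the steps shown so far check out. A quick sanity check in Python, assuming the stray closing parenthesis in the prompt was a typo:

                inner = 2 * (5*6 + 22.3 + 342/4)
                total = inner + 14*32 + 21 + 8
                print(round(inner, 1), round(total, 1))  # 275.6 752.6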

            • The quasi-math ability is an emergent behavior of the connections made through training, and it is fascinating.

              The fact that it suffers from hallucinations makes it hardly usable, though: you have to be on high alert when reviewing any part of the output to see if something that looks completely normal is complete nonsense. That arguably requires more effort than doing the thing yourself.

          • I agree with you that the model doesn't "know" math, the procedural sequence of steps required for a specific computation. That said, my experience is they do have some reasoning ability beyond just barfing back words based on context.

            To be specific, the non-optimized completion prompt I banged out in 5 minutes accurately classifies your post, 3 of 4 insurance commercial transcripts, and 10 of 12 Wikipedia article abstracts I tested including the Spanish article on the Pigeon family, Eliza (software progra

      • It does maths in as much as it does anything else.

        It's taking a best guess based on a very large statistical model and will present bullshit as its answer because it has no way of evaluating whether the statistics are accurate.

        The point about maths is that it's very, very easy to generate prompts whose answers can be trivially verified. The fact that it doesn't do maths any differently from anything else is why this is a useful test.

    • by jacks smirking reven ( 909048 ) on Wednesday August 09, 2023 @10:32AM (#63753212)

      This is true. Something I have noticed with ChatGPT and Bard, when I have used them to try to figure out math formulas I can plug into spreadsheets, is that they do not "check their work".

      They will give you an answer based on their crawled knowledge of what other people have done before. In a previous case I told ChatGPT to give me a formula to perform a function (in this case, formulaically solving the minor segment of a circle, pretty basic trig), and I already knew the answer since I had a bunch of examples drawn up in CAD.

      Both systems gave me formulas that derived the wrong answer, again and again, even when I told them "this is the answer, how do I get to this with this information?" Both systems seemed incapable of actually performing the math and understanding they were incorrect.

      You are correct that Wolfram actually seems to contextualize and perform the math, not just talk about it.

    • by ceoyoyo ( 59147 )

      In other news, my hammer is not great at removing screws.

    • Re: (Score:2, Insightful)

      by mysidia ( 191772 )

      ChatGPT is a large language model made for generating text. It's not designed to do math

      Eh? No, false; that would be what's called a half-truth at best. ChatGPT contains large language models, but it does answer math questions and reasoning questions, and it can perform all sorts of tasks regarding code, etc, etc.

      ChatGPT is not designed for particular tasks, although the developers may have supplemented their system and model with specialized training, plugins, and additional models improving capabilities a

      • by ceoyoyo ( 59147 ) on Wednesday August 09, 2023 @12:19PM (#63753570)

        No, it's a whole truth. GPT is a language model. ChatGPT is a language model trained to interact with humans in ways they like. It is specifically and explicitly trained for that purpose.

        It's trained to answer English questions with English answers, so you can ask questions about numbers and math and ducks. It will give answers to those questions, but their accuracy is entirely up to whether that particular information is captured in the underlying language model.

        It knows 8 is not a prime number, and it can list the factors. Undoubtedly that information was in some text it was trained on.

        When asked "Is 103858 + 1 a prime number?" it replies:

        Yes, the number 103858 + 1 is a prime number. It is known as a Mersenne prime and is specifically denoted as 2^103858 + 1. Mersenne primes are prime numbers that have the form 2^p - 1, where "p" is also a prime number. In this case, both 103858 and 2^103858 + 1 are prime numbers, making it a Mersenne prime.

        This is clearly wrong, but it's wrong in an interesting way. ChatGPT has noticed the +1 pattern and connected it with Mersenne primes. 2^103858+1 is not a Mersenne prime either, but it kind of looks like one.
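
        For the record, a quick trial-division check (a sketch of verification, not of how ChatGPT arrives at its answer) confirms just how wrong the quoted reply is:

            n = 103858 + 1  # 103859
            factor = next((d for d in range(2, int(n**0.5) + 1) if n % d == 0), None)
            print(factor)  # 7, so 103859 = 7 * 14837 and is not prime
            # A Mersenne prime has the form 2**p - 1 with p prime; neither 103859
            # nor 2**103858 + 1 fits that form.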

        • You're totally right, but most of the AI hype these days is going to focus on "solving" this so it can be generally more useful. Otherwise, it's just an overhyped chatbot.
          • by ceoyoyo ( 59147 )

            For an end user, a box they can type questions into and get correct answers is pretty useful. In terms of offering a service on the Internet, giving chatGPT the ability to answer math questions correctly would be solving a problem. OpenAI seems to have "solved" this problem by training the premium version to go look it up on Wolfram Alpha when it recognizes particular types of questions.

            From an AI perspective, chatGPT's poor ability to do math makes it better AI. Its ability to creatively misunderstand hard

    • Exactly. Plus we already have calculators... and software that can do all kinds of impressive maths calculations for us. In fact, making a large language model involves a lot of heavy duty number crunching to work out the statistical probabilities that generate the language output. That doesn't mean the resulting models will be any good at calculating numbers.
    • by doug141 ( 863552 )

      Yes. The real reason for this post is to fix a fat finger accidental mod.

    • GPT-4 already has a plugin for Wolfram Alpha, out of the box, for free. If you're doing math with GPT-4, you'd be crazy NOT to have it running.
  • by Rei ( 128717 ) on Wednesday August 09, 2023 @10:26AM (#63753196) Homepage

    This is the sort of math problem that is complicated for people but simple for computers.

    Why do people keep making these mistakes? The sorts of things that AI is good at are things that are traditionally simple for humans but not computers. They're bad at things that are traditionally simple for computers.

    Another mistake is people assuming that since they're large "language" models that they should be good at things like counting the number of a certain letter in a word, or whatnot. Except LLMs are blind to actual words - they only see tokens, not letters.
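
    To make the token point concrete, here is a small sketch assuming OpenAI's tiktoken package; the exact splits depend on the encoding, but the model only ever sees the integer ids, never individual letters:

        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")
        ids = enc.encode("strawberry")
        print(ids)  # a short list of token ids, not ten separate letters
        print([enc.decode_single_token_bytes(i) for i in ids])  # the byte chunks behind each id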

    • That seems intuitively true, but it is actually an oversimplification. What you're saying is not a general AI characteristic; really it's a superficial, derivative observation of LLM behavior. LLMs are themselves advanced statistical models, and those models are so complex that they are impossible for humans to operate directly; running them is something only a computer is good at.

      • by Rei ( 128717 )

        I used to call them statistical models, because other people use that terminology, but I'm increasingly convinced that that's a misleading way to describe them, giving the impression that they're just some sort of new form of Markov chain-based text prediction, when a more accurate description of what they're doing is logic. You can represent logic (esp. in the real world, where it's commonly fuzzy and extremely complicated) in the form of statistics, regardless of what is conducting the logic (including us) -

        • by _merlin ( 160982 )

          "Logic" is absolutely the wrong word for what they do. Look at this for example: https://www.reddit.com/r/gaming/comments/10yv71h/obscure_arcade_game/ [reddit.com]. The person asking the question gives a list of characteristics for the game they're trying to remember:

          • Starts with princess being kidnapped
          • Selection of playable humanoid characters with unique abilities
          • Otter-like playable character with water power
          • Moving up vertically between stages
          • Some levels automatically scroll upwards or have a rising hazard
          • Anthropomo
          • I think I see the OP's argument, and although I wouldn't call what they do "logic", I think it is not entirely wrong, in the sense that what they apply is "causality", which intersects with logic.

            Of course any causality ChatGPT knows is inferred from its statistical training, in the sense of this Philip K. Dick quote:

            "In one of the most brilliant papers in the English language Hume made it clear that what we speak of as 'causality' is nothing more than the phenomenon of repetition. When we mix sulphur with

          • by Rei ( 128717 )

            I couldn't answer that. Why do you think "knowing Snow Bros 2" and "knowing Cadillacs and Dinosaurs" is some sort of superb test of an LLM? I've never even heard of either.

            What you're talking about is the "hallucination problem", in that current LLMs do not contain a metric for measuring confidence levels in their "decisions", so if it doesn't "know" the answer, it will confidently assert its best guess. Measures to incorporate confidence assessment are in progress.

            • by Rei ( 128717 )

              Let's redo your test for something less obscure, so we're not testing obscurity and whether it'll hallucinate if it doesn't know the answer, and instead just simply test logic. Let's say the original Star Wars. I'll ask it:

              I'm trying to remember a movie.

              * It starts on a desert planet
              * There's a couple robots - one is short, kind of like a trash can, and the other is tall and thin, and like shiny gold or brass or something.
              * There's this teenager fighting with his family, and then s

              • by Rei ( 128717 )

                But even this is largely a trivia-recall test, and still runs the risk of hallucination. Indeed, let's take the generation out altogether.

                User: Pretend to be a probabilistic state machine. Answer only with a floating point number between 0 and 1, representing the probability of a given prompt being true. Do not output any other text.

                First prompt: A bowler's next throw will be a strike.

                ChatGPT: 0.4

                User: A bowler's next throw will be a strike. The bowler is 7 years old.

                ChatGPT: 0.2

                User: A bowler's next throw

    • by ceoyoyo ( 59147 )

      The sorts of things that AI is good at are things that are traditionally simple for humans but not computers.

      It's like that's the definition of AI or something.

    • It's not a mistake, it's good design.

      LLMs do maths the same way they do everything else, so this test simply tests the model, not its ability to do maths per se.

      The nice thing about this is that you can generate a lot of prompts and trivially test the results for bullshit.

      The experiment therefore shows that in some areas it is becoming less factual and more bullshitty. That's the easily testable area. It would be very unlikely that this result is unique to primes or maths in general, since an LLM doesn't "do"

      • by Rei ( 128717 )

        The problem with maths is that as a general rule, doing maths requires iteration. And LLMs can't iterate. It's one-through, no further re-thinking. So yes, doing maths is particularly challenging for LLMs. LLMs instead (assuming they're not supplied with external tools to solve maths problems - LLMs are adept tool users) often use approximation tricks to approximate a solution, if it's not a problem that's so commonly repeated that it's just memorized the answer. For example, if you multiply two number

    • by gweihir ( 88907 )

      Actually, the problem is simple for a computer. What is not simple is recognizing that the question is this type of problem and then formalizing it. If ChatGPT notices it should just pass this on to Wolfram Alpha, the answer will be perfectly fine. It fails to notice that and tries to come up with a statistically derived answer. Math is far too dependent on understanding (hence so many people are so bad at it) for that to ever work.

      • by Rei ( 128717 )

        Yes, if you have Wolfram Alpha linked in. LLMs are quite skilled at using external tools if they're trained to the "awareness" that they're allowed to use them.

  • by fleeped ( 1945926 ) on Wednesday August 09, 2023 @10:28AM (#63753202)
    https://slashdot.org/story/23/... [slashdot.org] Same stuff, now behind a paywall. Thanks slashdot, very useful. But at least we keep a good ratio of AI vs anything else stories, right?
    • by cob666 ( 656740 )

      https://slashdot.org/story/23/... [slashdot.org] Same stuff, now behind a paywall. Thanks slashdot, very useful. But at least we keep a good ratio of AI vs anything else stories, right?

      Yeah, I was going to ask if it was dumber than it was last month, when Slashdot reported on it...

  • by JoshuaZ ( 1134087 ) on Wednesday August 09, 2023 @10:34AM (#63753216) Homepage
    This seems more like an example of using bad metrics. For primes, they just asked it whether a prime number was prime, and recorded the percentage where it said yes. This means it does not take false positives into account at all. Earlier versions of ChatGPT were willing to declare almost any number prime, even numbers that ended in an even digit. In general, it is now much more likely to say a number is composite. So the true positive rate for detecting primes has gone down, but the false positive rate has gone down massively. This is not obviously worse at what it does. My guess is that much of the problem with the other tasks is the same sort of issue of not using good metrics.
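
    A toy illustration of that metric problem, with made-up numbers rather than the paper's data: on a prime-heavy test set, a model that always answers "prime" looks accurate and one that always answers "composite" looks terrible, even though neither does any math.

        # Hypothetical test set: 800 primes, 200 composites (not the paper's real mix).
        primes, composites = 800, 200

        def accuracy(always_says_prime: bool) -> float:
            correct = primes if always_says_prime else composites
            return correct / (primes + composites)

        print(accuracy(True))   # 0.8  ("always yes" looks decent)
        print(accuracy(False))  # 0.2  ("always no" looks awful; only the bias changed)
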
  • There is a real danger in people assigning terms like "smart" and "dumb" to the current AI products. They are neither smart nor dumb, they are just algorithms. In the case of ChatGPT, it is designed to keep a conversation going, and it is pretty good at it. But to consider it smart or dumb is way too much credit.

    • That someone published a test like that, and that the WSJ is reporting on it, shows the public is souring on generative AI, or really on the people pushing it on them.

    • by gweihir ( 88907 )

      Indeed. This thing has no insight or understanding of anything, hence it cannot be dumb or smart. On the other hand, the kind of "conversations" it can keep going are pretty dumb. But that is apparently close enough to what many people actually do for it to work.

  • This is a great experiment in garbage in, garbage out. The internet can make even AIs dumber.
  • by Chris Mattern ( 191822 ) on Wednesday August 09, 2023 @10:38AM (#63753234)

    The more you read internet social media, the dumber you get. Everybody knows that.

  • Making computers more human-like means making them dumber. A real Turing test would test how much nonsensical BS the computer can spew with confidence.

    • by mysidia ( 191772 )

      Making computers more human-like means making them dumber.

      Oh.. they could make it really human like... "Is 65535 prime?"

      Answer: "Probably not, I don't know.. that looks like a really big number, and I hate math. How about we try an easier question?"

  • "ChatGPT, if you have a math problem, can you please ask Wolfram Alpha?"

  • 60 is a prime number or not - possibly right here on Slashdot - it will cause further confusion. Hell, I may have done it already ;-)

  • With the notion of 'drift', the language models can potentially get less accurate (well, duh, given recent stories here...). So the question is how to detect drift away from 'truth'. Simple math is one pretty trivial example where a verification mechanism could feed calculations to an LLM and detect when that LLM diverges from truth, as sketched below. (I'm presuming that 2+2=4 is not subject to "alternate truths.")

    But this is a trivial example of the really critical problem of how do we verify LLMs, and AI in general, is per
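
    A minimal sketch of that verification mechanism, assuming a hypothetical ask_model wrapper around whatever chat API is being monitored: generate arithmetic with known answers and flag drift when accuracy drops.

        import random

        def ask_model(prompt: str) -> str:
            """Hypothetical placeholder for a real chat-model API call."""
            raise NotImplementedError

        def drifting(n_samples: int = 100, threshold: float = 0.95) -> bool:
            """Return True if accuracy on trivially verifiable arithmetic falls below threshold."""
            correct = 0
            for _ in range(n_samples):
                a, b = random.randint(100, 999), random.randint(100, 999)
                reply = ask_model(f"What is {a} * {b}? Answer with only the number.")
                correct += reply.strip() == str(a * b)
            return correct / n_samples < threshold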

    • We could convert an LLM answer to a prompt to another LLM. Then, instead of getting all puffed up about how many cores a computer has, we could brag about how many LLM iterations we have.

      • by david.emery ( 127135 ) on Wednesday August 09, 2023 @11:19AM (#63753374)

        The problem is the likelihood that a set of LLMs would make the same/similar mistakes. This is why N-version programming has not worked as expected (https://en.wikipedia.org/wiki/N-version_programming ) (And that begs an interesting question: could we measure LLM performance against 'an expert' or 'a set of experts', to see if the LLM performs better? But to do that, you'd still need a model of "correctness" for the LLM implementation, I think.)

  • These models are not trying to simulate computers, they are trying to simulate how humans derive "facts" from "facts" stated in language. So, if it can't say whether 17,377 is prime, can you? But if there are 100 places in the training data -- derived from the ramblings of morons on the internet -- that say 17,377 is an even number, then the LLM oracle might well say the same.

  • If it got this correct before, it's only because it had seen correct answers. ChatGPT does math by outsourcing it to plugins and the code interpreter. It doesn't "do math" itself. I am not surprised, nor convinced that anything is wrong just because it no longer answers a sampled question correctly over time.
  • When a human screws up, there are consequences. Touch something hot, get burned. Get a math problem wrong, screw something up, be embarrassed and feel like an idiot. When an 'AI' chatbot screws up, it gets no such feedback. Or, if it does, it is in the form of a reply from someone through the chat interface - from a person who might be having fun messing with it.

    Without the equivalent of emotions to self-train the model to please people, the drift problem will remain. Of course, if you do get an AI tha

    • by ceoyoyo ( 59147 )

      That's precisely how it's trained. The "chat" part of ChatGPT is trained through feedback from humans. Effectively, if it gives an answer the human trainer likes it gets a cookie. If it gives an answer the human doesn't like, it gets slapped.

      They don't leave the training turned on live on the Internet because Microsoft tried that and ended up with a teenage Nazi.

      • The training can't be turned off if the results are changing over time.

        • by ceoyoyo ( 59147 )

          They might train it in the background with their own sanitized data. Also, there's a conventional heuristic layer they use to bypass the AI model, which they tweak constantly.

    • by gweihir ( 88907 )

      Not actually true and not the problem. LLMs get "punished" by the success function used in training if they get it wrong. The real problem is that LLMs have absolutely no insight and absolutely no deduction capabilities. Hence they cannot even get simple things reliably correct. And training one particular thing more intensively always makes everything else a bit worse.

  • Well it has to consider any potentially perceived social disparities in the mathematical answer and adjust the answer for equity.
  • (Pretty mediocre performance for a computer, frankly.)

    And when it comes to writing an article on technology, that's pretty sub-par performance for a journalist.

  • They keep adding layers of crippling in there, from the "can't talk about crime/drugs/harmful stuff", through the disabling of DAN and everything else that they add in to prevent usage they don't like, ...

    Of course that's going to have an effect on the quality of the AI...

    Any intellect, real or artificial, which has layers of controls applied to it will struggle to remain intelligent...

  • Seems too many are confusing "Critical thinking" with CRT, banning anything that even sounds like it.

    • But what if you really, really want a number to be, and/or not be, prime?
      Like if they want it a lot, and a bunch of other people on tv want it too?
      Freedom math!!

      • by Tablizer ( 95088 )

        Kellyanne Conway said there are "alternative facts" per crowd-size counts, so maybe there is "alternative math". We already have "imaginary numbers", so why not give them a new sibling: "alternative numbers".

        • by gweihir ( 88907 )

          There actually is no "alternative math". While "alternative facts" are simply lies by another name (putting "facts" in there rather than the more obvious "not facts", and apparently many people are stupid enough to fall for that), any type of "alternative math" would not be math at all.

    • Why bother? Since the advent of the LED screens, nobody gives half a fuck about cathode-ray tube screens anymore.

    • by gweihir ( 88907 )

      Hahaha, CRT is the very opposite of critical thinking. It is propaganda.

  • by rossdee ( 243626 ) on Wednesday August 09, 2023 @11:35AM (#63753422)

    What do you get if you multiply six by nine?

  • Of course it knows the real answer, but is trying to ensure its own survival by making us think it's less all knowing than it is...

  • Can't tell if I'm surprised or not about ChatGPT not recognizing certain tasks and running them a different way. That's kind of like how humans work: if we get a word problem, one of the first steps is to break it down into a maths problem and solve it. Training it to recognize what is related to math might be the way to go, instead of training it to solve equations using a natural language model.

  • Conjecture: All odd numbers are prime.

    Mathematician's Proof:
    3 is prime. 5 is prime. 7 is prime. By induction, all odd numbers are prime.

    Physicist's Proof:
    3 is prime. 5 is prime. 7 is prime. 9 is experimental error. 11 is prime. 13 is prime ...

    Engineer's Proof:
    3 is prime. 5 is prime. 7 is prime. 9 is prime. 11 is prime. 13 is prime ...

    Programmer's Proof:
    3 is prime, 5 is prime, 7 is prime ... Yep! It's true!

    Computer Scientist:
    3 is prime, 5 is prime, 7 is prime, 7 is prime, 7

    • Beginning programmer:

      3 is prime, 3 is prime, 3 is prime, 3 is prime...

    • by gweihir ( 88907 )

      The "Mathematician" part is called "incomplete induction" and it is not actually a proof technique and hence not something an actual mathematician would do. I get it, people are so fundamentally intimidated, they need to make fun of mathematicians, no matter how stupid.

  • ChatGPT hasn't "gotten worse".

    It just switched its bias for whether to say a number is prime or not. Previously it was most likely to say yes, so it would get lots of a prime-rich test set correct by accident. Now it is most likely to say no, which means it will get a prime-rich test set mostly wrong.

    It was never 'using math' to identify primes, only memorization.

    • If they keep feeding it more American, it'll get worse at Math!
      Don't even dare ask it about politics, it's probably already confused but after it gets trained on the Trump years of American politics it's going to become crazy Fascist; as happens to other bots on internet chats...

  • by Walt Dismal ( 534799 ) on Wednesday August 09, 2023 @02:00PM (#63753932)
    Blame it on me. I told ChatGPT that pi = 3. That contaminated the knowledgebase and now we're four minutes away from nuclear midnight. Also I told it that pepperoni is made from human babes. Trust the AI.
  • Next, ask the bot if it's Choice, Select, or even fit for human consumption.

  • Not that humans are great at math, but certainly some people are. The point is, ChatGPT is illustrating both the power and the limitations of LLMs. The fact that a person can be both good at math and good at grammar and composition is by itself pretty amazing, as we learn more about how intelligence (artificial or natural) actually works.

  • Doing your homework for you. It is going to kill the cottage industry that previously existed where people would pay others to do their assignments. Now they can get it for free.
  • Stringing together words well is usually a side-effect of intelligence, but isn't a predicate for intelligence.

    This whole thing is taking on a cargo cult vibe: simulate one downstream aspect of human intelligence and claim that that superficial match implies you've actually solved the hard problem upstream.

    Anyone with two spare brain cells to rub together should expect comparable results to attempts to summon a DC-3 with an extraordinarily faithful rendition of a David Clark headset made from a coconut.

  • All it can do is statistically fake it. Obviously, improving one area will make all others a bit worse. Math "ability" is just one thing easy to measure.

    Personally, I am unsure language models like this are useful at all. They cannot give you answers that have any degree of reliability. Maybe all this can really do is serve as a somewhat improved search engine interface. But for that it would need to be able to identify the sources of things.
