Meshing Developmental Evolution and Technology

Jerry23 writes "IT Conversations has free audio of a very provocative talk by futurist and developmental systems theorist John Smart. He weaves a big-picture narrative featuring developmental evolution, technological acceleration, computational autonomy, the emergence and behavior of human social systems, why prediction has such a poor track record, the unique growth properties of Information and Communication Technology and the limitations of biotech, finally culminating in his case for the inevitability of digital personality capture and a ubiquitous Linguistic User Interface. Among many other things, he asks 'What will Windows (and the Google Browser) of 2015 look like?'"

  • Too Limited (Score:3, Interesting)

    by Gogogoch ( 663730 ) on Saturday March 26, 2005 @09:30PM (#12057726)
    Forget what MS Windows and Google will look like in 2015. What will they look like in 2215?

    For an answer read anything by Ian M Banks
    • Re:Too Limited (Score:5, Insightful)

      by kaiser423 ( 828989 ) on Saturday March 26, 2005 @09:39PM (#12057780)
      What will they look like? I really don't care.

      Technology is going to progress, and by 2015 we'll have neat stuff, and by 2215 we'll have even neater stuff. End of story. It's not tied to Google or MS or anything else. It's tech.

      Anyone theorizing about stuff now might as well go make their "predictions list" along with all the other Nostradamuses and people talking about flying cars.
      • Technology is going to progress, and by 2015 we'll have neat stuff, and by 2215 we'll have even neater stuff.
        By 2215 "we" will all be dead.

        As for Windows ten years down the road, that sounds like an easy one. Windows XP would be very familiar to Windows 95 users. In fact there were no significant leaps in between the two: Windows 98 was Windows 95+, and Windows XP is Windows 2000+.

    • Re:Too Limited (Score:3, Interesting)

      by Gogogoch ( 663730 )

      Oops, typo. I should have said Iain M Banks [iainbanks.net] .

      He is just brilliant - totally reinvented, or is that reinvigorated, SF. You don't believe me? Try "Consider Phlebas", "Excession", or "Look to Windward".

      • Re:Too Limited (Score:2, Interesting)

        by tomsuchy ( 813628 )
        I can't believe you put "Consider Phlebas" first! "Player of Games" is much more accessible for the novice Banks reader; "Consider Phlebas" might turn people off, because it is rather dry, and really only provides a biased "outsider's" view of the Culture. Admittedly, "Player of Games" makes the Culture look like a bunch of deceitful pricks (the ending, with Imsaho, which appropriately enough means "blow it up" in Arabic). Excession is probably the best of the three you identified (and is the first one I
        • Anyone got any other good recommendations?

          Well, if you like Iain M. Banks, I strongly recommend Alastair Reynolds: Revelation Space, Redemption Ark and Absolution Gap. Great trilogy, a lot of really hard SF (written by an astrophysicist working at ESA), combined with a very interesting and insightful view of possible cultural evolution driven by technology. One of the most credible pictures of how a society with sub-light space travel and the resulting relativistic distortions might look. That, and yes,

    • Re:Too Limited (Score:2, Interesting)

      by andy753421 ( 850820 )
      I wouldn't be surprised if neither Google nor Microsoft still exists in 2215. If they are around, they probably won't be that important anymore. If we take an example from the past, what were the most important companies 200 years ago? Probably some farm equipment company, or a shipbuilder. As much as we think that computers are the thing of the future, by 2215 there will probably be some totally different technology that's even more important. There's always some new technology, such as 'steam trains' or 'automobiles

      • But those (steam power, automobiles, computers) are all tangible things. The next revolution will be social, and will happen when average Mare-Cans start to explore outside of their own borders and mindset.

        Even if 10 million Americans are complete morons, that back-of-the-envelope estimate also implies that there are at least twice that many who are un-morons.
    • by bstadil ( 7110 ) on Saturday March 26, 2005 @09:45PM (#12057818) Homepage
      Google "Technological Singularity" [wikipedia.org] or use the Wikipedia link. Speculating on anything after machine intelligence starts improving itself is futile. ETA 2060.
    • It will look like XP SP9, because Longhorn will be delayed once more. Duh.
    • Re:Too Limited (Score:3, Insightful)

      by andreyw ( 798182 )
      By 2015 we'll have really neat tech. By 2215 I don't care - I'll be dead anyways.
    • The only company I can think of that is in existence today and was in existence 200 years ago is the Hudson's Bay Company, and right now I believe it's struggling and may not last even another decade.

      My money would be that neither MS nor Google will exist by 2215, but then I'd be long dead by then so it wouldn't be my money anymore. So what's the point in trying to imagine something that far away unless you're trying to write some sci fi story?

  • by ravenspear ( 756059 ) on Saturday March 26, 2005 @09:36PM (#12057766)
    By 2015 the codebase will be so unstable that all M$ developers will have to gasp in horror, hold their breath for 15 minutes, and pray for the gods of software development to have mercy every time they commit a patch to the repository, lest Bill's children take away their pension for life.
  • From the summary: (Score:3, Interesting)

    by Mad Merlin ( 837387 ) on Saturday March 26, 2005 @09:39PM (#12057785) Homepage
    'What will Windows (and the Google Browser) of 2015 look like?'

    Assuming that Windows is still around in 2015. To be honest, I don't think it will be at the rate it's going. Then again, that may just be wishful thinking on my part.

  • by Stalyn ( 662 ) on Saturday March 26, 2005 @09:40PM (#12057787) Homepage Journal
    All you have to do is spew meaningless bullshit? Man, I've been doing that for years. When can I get one of these futurist jobs?
  • Singularity (Score:5, Interesting)

    by PxM ( 855264 ) on Saturday March 26, 2005 @09:42PM (#12057794)
    This is as good a time as any to mention Vinge's Singularity [everything2.com]. The main topic is AI, but he also talks about IA, or Intelligence Amplification. The DM in the article is a type of IA for communications systems between people. It would merge the useful parts of online communications, such as active logging, without the impersonal problems that are sometimes caused. This gets extended further when people are connected 24/7 and they have the ability to treat the real world and the wired world much more similarly

  • Robot wife (Score:2, Funny)

    by pgsimpso ( 857585 )
    I'm still waiting for my robot wife!
  • Windows in 2015 (Score:3, Interesting)

    by zymano ( 581466 ) on Saturday March 26, 2005 @09:48PM (#12057825)
    Hopefully by 2015 there will be 'other' alternatives to Windows.

    Maybe the billion Linux OSes can get together and make everything seamless by then.

    If not, there will be Haiku OS. [haiku-os.org]
    • by Stevyn ( 691306 ) on Saturday March 26, 2005 @10:01PM (#12057885)
      I'm sure it will be Google's Macwinix and we'll all be bitching about it.
    • In 2015 we'll all ask ourselves, "What did we do before the Hurd was released?"
      • And since 2010, the answer will be "play Duke Nukem Forever".
      • at the rate the Hurd is going, that'll be 2015 A.P. (Anno nostri Penguini, the Year of Our Penguin), which is 1991 + 2015 = 4006 a.d. People and AI constructs alike will laud its cool features and ideas, but only students and hobbyists will use it because it will only address up to 1/32 of the storage capacity of a standard positronic brain, and device drivers for neural interconnect to earth's sentient species will be lacking. Later that year, Debian will release Sarge-stable.
    • My Speculation (Score:3, Interesting)

      by linguae ( 763922 )

      In 2015, Linux and BSD + KDE/GNOME will probably be commonplace on most desktops, and alternative operating systems such as Plan 9 and the Hurd will finally see the spotlight, in usages such as servers, research, or learning the innards of those systems. Mac OS X will probably be OS XI or OS XII, and it will probably be an operating system for those who want something better than KDE/GNOME, as well as those who love the seamless integration between Mac hardware and the Mac OS. Windows will still exist, for

    • I tend to think that it is quite possible Microsoft may eventually go a route similar to the one Apple has taken with OSX and build a new operating system on top of some *nix flavour. By 2015, Windows will most likely still be what you would expect Windows to look and feel like, but it may be an evolved .NET and Avalon/Aero on top of a *nix core with a POSIX API and a legacy Windows API for backwards compatibility.

      Microsoft may step in and purchase SCO's software assets when SCO goes into liquidation and then us
  • by G4from128k ( 686170 ) on Saturday March 26, 2005 @09:48PM (#12057826)
    One key breakthrough will be to give computers the ability to take an intentional stance (short definition [magma.ca] or longer essay [wustl.edu]) with regard to users. If Google could infer why I am searching instead of just what I am searching for, it would be able to do a much better job. This would move from search-as-data-retrieval to search-as-intelligent-dialog.

    I'm not sure if this can happen by 2015, but it seems like a key goal that is much more important than adding "Genuine People Personalities" [psychcentral.com] to computers.
    • by quokkapox ( 847798 ) <quokkapox@gmail.com> on Saturday March 26, 2005 @10:20PM (#12057979)
      I agree, integrating feedback from real humans is badly needed to improve search results. For instance, we know Google can't tell if you clicked a link in the search result page directly, but it can learn from how people behave as they hit the cache for different resulting links. If you hit the cache for result #1, and then quickly back out of that page and hit result #3, and then your session ends, or you revise your query, Google (and the other search engines) need to be able to learn from your behavior, and similar behavior exhibited by other humans.

      They're probably working on that. And they, unlike Microsoft, have the software to run the massive computations required to implement this type of machine learning. That would be my 20% project, anyway.
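
      A minimal sketch of that feedback loop, in Python. The function name, the dwell threshold, and the scoring weights are invented for illustration; this is the idea from the comment above, not anything Google is known to do.

          def update_scores(scores, session_clicks, min_dwell=30.0):
              """scores: dict mapping result URL -> relevance score.
              session_clicks: list of (url, dwell_seconds) in click order."""
              for i, (url, dwell) in enumerate(session_clicks):
                  is_last = (i == len(session_clicks) - 1)
                  if is_last and dwell >= min_dwell:
                      # The user settled here: count it as a good result.
                      scores[url] = scores.get(url, 0.0) + 1.0
                  elif dwell < min_dwell:
                      # Clicked, then quickly backed out: penalize.
                      scores[url] = scores.get(url, 0.0) - 0.5
              return scores

          # The scenario described above: back out of #1 fast, settle on #3.
          print(update_scores({}, [("result1", 4.0), ("result3", 120.0)]))
          # {'result1': -0.5, 'result3': 1.0}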

    • Not sure I want to see this carried through to its entirety. While it sounds good on the surface, it's important to consider the potential for utter failure. Consider the following hypothetical example:

      I'm searching for something on Google, say a fix for a PC I'm working on. The reason I'm working on it is because I'm interested in a career in IT, and building my skills both in repair and customer relations. Therefore, logically and based on previous searches, Google knows that I am interested not only
    • If Google could infer why I am searching instead of just what I am searching for, it would be able to do a much better job.

      And, as with people, when it gets it wrong it's worse than if it was just a dumb but obedient tool. That's the problem I have with anything that presents itself as a mind-reader: when it doesn't read my mind, I have to read its mind to predict what it will do in response to my input.

      In the end it makes things more complex. I'd rather have a tool whose response doesn't depend on what it t
    • This is smart.
      I have been contemplating something along these lines for a while.
      If I am doing a repetitive task on my computer, what would it take for the OS to see the pattern and take the work out of my hands?

      As an example, let's say I have a folder of MP3 files and I want to rename all of them by putting the artist's name in front of the song name. Say the old name is 'Walk the Dog.mp3', so I would change it to 'Aerosmith - Walk the Dog.mp3'.
      The way I would currently do it would be to do a cut and paste o
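
      A minimal sketch of what automating that one rename pattern could look like in Python (the folder path and artist name are placeholders; a real tool would read the artist from each file's ID3 tag):

          import os

          ARTIST = "Aerosmith"          # placeholder; ideally read from ID3 tags
          FOLDER = "/home/me/mp3s"      # placeholder path

          for name in os.listdir(FOLDER):
              # Skip non-MP3s and files that already have the artist prefix.
              if name.lower().endswith(".mp3") and not name.startswith(ARTIST + " - "):
                  os.rename(os.path.join(FOLDER, name),
                            os.path.join(FOLDER, ARTIST + " - " + name))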
  • LUIs and the K-Prize (Score:5, Interesting)

    by Baldrson ( 78598 ) * on Saturday March 26, 2005 @09:48PM (#12057827) Homepage Journal
    Primitive LUIs exist today in interfaces like Google, but will become dramatically more powerful over the next few decades.

    I am quite excited by the confluence of advances in prize awards for technology advancement, and advances in the theory of compression. I'm convinced that if a substantial prize award can be created for dramatic advances in natural language text compression, it will lead directly to a solution to the most critical aspect of the "AI problem" -- that being the problem of the explosion of words without concomitant understanding. I had high hopes for the Internet being the new Gutenberg press leading to a new enlightenment, but I'm concerned that without dramatic advances in AI to correlate the huge corpus being generated, the benefits of the new enlightenment may be too long in coming to save us from ourselves.

    My work on a legislative proposal for fusion technology prizes [geocities.com] was picked up by one of the founders of the Tokamak program. The more recent X-Prize award has renewed the popularity of such prizes.

    As a consequence I've been suggesting the creation of a new prize based on Kolmogorov complexity. As argued by Mahoney in "Text Compression as a Test for Artificial Intelligence [psu.edu]":

    "The Turing test for artificial intelligence is widely accepted, but is subjective, qualitative, non-repeatable, and difficult to implement. An alternative test without these drawbacks is to insert a machine's language model into a predictive encoder and compress a corpus of natural language text. A ratio of 1.3 bits per character or less indicates that the machine has AI."

    A simple prize criterion would be for the first program producing a major natural language text corpus, with the size of the program being less than 1.3 bits per character of the produced corpus. Smaller intermediate prizes would help spur broader interest.
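
    Scoring an entry against that criterion is mechanical. A toy Python check using an off-the-shelf compressor as a stand-in (the real criterion counts the size of a self-contained generating program; also, zlib only manages roughly 2-3 bits per character on English text, nowhere near Mahoney's 1.3 -- that gap is exactly what the prize would target):

        import zlib

        def bits_per_character(text):
            """Compressed size in bits divided by original length in
            characters; an upper bound on the corpus's true complexity."""
            raw = text.encode("utf-8")
            packed = zlib.compress(raw, 9)
            return 8.0 * len(packed) / len(text)

        corpus = open("corpus.txt").read()   # placeholder corpus file
        print(bits_per_character(corpus))    # Mahoney's AI threshold: <= 1.3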

    • A simple prize criterion would be for the first program producing a major natural language text corpus, with the size of the program being less than 1.3 bits per character of the produced corpus. Smaller intermediate prizes would help spur broader interest.

      It may be better to pattern such a prize after the Methuselah mouse prize [methuselahmouse.org], where beating the old record would net you a portion of the prize proportional to how much the old record was beaten by. The size of the prize pot grows as donors add more money.
      • The M-Prize is a pretty good way of dealing with prizes whose criterion can be reduced to a single metric.

        Abstracting their prize criterion:

        Previous record: X
        New record: X+Y
        Fund contains: $Z at noon GMT on day of new record
        Winner receives: $Z x (Y/(X+Y))

        Applying this to Kolmogorov complexity (ignoring several technical details for the moment):

        S = size of uncompressed corpus
        P = size of program producing uncompressed corpus
        M = S/P

        Anytime someone demonstrates a larger M, they are awarded money a
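
        Numerically, that abstracted rule works out as below (a toy Python sketch; the record values are made up):

            def payout(old_record, new_record, fund):
                """M-Prize style rule: winner gets Z * (Y / (X + Y)), where
                X = old record, Y = improvement, so X + Y = new record."""
                y = new_record - old_record
                return fund * y / new_record

            # Doubling the record claims half the fund:
            print(payout(10.0, 20.0, 100000.0))   # 50000.0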

        • Another point: Would you want to add an additional condition that all awardees would have to release the source code to their work, to allow future contestants to build on it? I believe the RoboCup robotic soccer competition does something like this.

          The only problem with it is that it doesn't include Mahoney's threshold of 1.3 bits per character for artificial intelligence.

          Indeed, but I don't see this as much of a problem. It favors steady incremental progress, which should eventually surpass the thresh
    • Irrespective of computational power and programming cleverness, I'm not sure people want to talk to machines. I know I feel weird and would rather press buttons than "talk to myself". Voice-recognition software has gotten very good but remains underutilised (except for TLAs, the disabled, and medical transcription).

    • I think this is the first plausible criterion for AI I've seen outside the Turing test (not that I've been looking too hard).

      My question is, now that we have the test, what do we use it for? What difference would it make whether a computational system "has AI"? Does it have legal implications? How differently should it be treated, and why? Is it just a PR buzzword?

      Passing the Turing test has (almost too obvious) consequences, not because it's a test for intelligence, but because it's a test for human likeness, extrapol
      • The most important practical application is information quality. Information quality is a profound advance over simple data quality [google.com]. Basically, information quality is about the quality of ideas and abstractions used in descriptions of phenomena or data. Think Ockham's Razor writ large.

        This penetrates all aspects of knowledge and society.

        Quality information helps us simplify our world view without making it less accurate -- or conversely -- make our world view more accurate without making it more complex.

  • 2015 (Score:5, Funny)

    by SPIM ( 250250 ) on Saturday March 26, 2005 @09:48PM (#12057830)
    'What will Windows of 2015 look like?'

    If history is any indication, it'll be a poorly implemented version of something Mac users have been using since 2012. (I kid...only a little).
    • Which will subsequently (ca. 2018) be copied by the KDE/GNOME/(insert your favourite desktop environment here) GUI designers...

      seriously.. whatever it looks like... we'll bitch about it, now won't we?

  • This Developmental "Evolution" stuff is very much in doubt. I propose that people consider the concept of Developmental Creation Science.

    Yes. It's the theory that much technological development was supernaturally begat by a Creator. The idea that man develops technology in an evolutionary manner is just absurd! Each of the major kinds of technology was created functionally complete from the beginning and did not "evolve" from some other kind of technology.

    Oh, and I think kids should be taught this in school.

  • by quokkapox ( 847798 ) <quokkapox@gmail.com> on Saturday March 26, 2005 @10:06PM (#12057911)
    Predicting future scenarios is extremely inaccurate because people tend to focus exclusively on one area and extrapolate it too far into the future without considering the inevitable interactions with other co-evolving technologies, cultural trends, and economic factors.

    In a way, we do have our flying cars. It just turns out that most of us don't want one, don't need one, or can't afford one parked in our driveway. A helicopter is essentially a flying car, but it's noisy, difficult to operate safely, and expensive to operate and maintain. Likewise, a jetliner is just a flying passenger train.

    Nobody, including John Smart or Vernor Vinge, can make meaningful predictions any significant distance into the future. I think things are changing so fast now, that even 10-15 years into the future is almost inscrutable unless you're making very broad generalizations. You can say for sure: We'll have computers. They'll be real fast. But who knows what all we'll be doing with them.

    Now with some good Intelligence Amplification, giving you the ability to consider the myriad variables and chart out many possibilities in future space, like a decision tree or a chess-playing program, and prune the unlikely ones, you can maybe construct a fuzzy map of the different courses the future will take. But you'll have to wait and see which one actually happens, just like everybody else.

    Alright, I have to get back to brainwashing Jabberwacky [jabberwacky.com] ...

    • by timeOday ( 582209 ) on Sunday March 27, 2005 @01:59AM (#12058774)
      I think things are changing so fast now, that even 10-15 years into the future is almost inscrutable unless you're making very broad generalizations.
      I'm not so sure about that. 10 years ago the cool thing was... the Internet. Has anything really changed in the last 10 years? Jurassic Park, as a special-effects feature, is now 12 years old, and it doesn't look particularly out of date.

      My own theory of scientific progress is that while facts and theories proliferate exponentially, they tend to diminish in significance at the same rate. When a field is new, it's easy to make breakthroughs. As it matures, most of the big ideas have been thought but there are armies of people churning out lots of paper.

      Look at medical research; the most significant breakthroughs were the easiest to make - like penicillin. Now we're pouring billions into cancer research year after year and making a little progress. Longevity and quality of life are not increasing exponentially.

      Look at transportation: rockets and jets were invented 60 years ago, and since then nothing has supplanted them. Passenger jets don't look much different from 40 years ago. Cars haven't changed fundamentally in approximately one lifetime.

      I'm not saying change has ceased, only that I'm not sure things are really changing any faster now than they were a couple hundred years ago. Some people think we're accelerating ever faster towards an incomprehensible future with no continuity; I don't think so.

      • Past 10 years... (Score:3, Interesting)

        by AlXtreme ( 223728 )
        Okay, I'll bite. You're right that change isn't faster than it was 100 years ago, but it's not true that nothing has changed in the last 10 years.

        How about mobile phones? Wireless networking? PDAs? P2P? And this is only in our small field of ICT. 10 years ago nearly nobody had a mobile phone; now everyone and his dog has one. 10 years ago we were more or less bound to our boxes; now they are bound to us.

        If we (buzzword warning!) extrapolate this 10 years into the future, I'd expect things like implanted c

      • I think you miss the myriad of small changes that happen. Just some of the examples you gave:

        Cars - performance/fuel consumption, the use of colour-keyed plastic bumpers instead of chrome, windows glued in place instead of fastened with rubber and a chrome strip

        Aircraft - much bigger, with the ability to fly further, automatic navigation (actually at a price that individuals can afford)

        When you're living amidst these small changes, you don't notice (think of the frog not jumping out of heating water before it boils
  • I bet it will look pretty much the same as it does now.
  • by argoff ( 142580 ) on Saturday March 26, 2005 @10:21PM (#12057984)
    It's easy. The least proprietary technology always wins out. Not the prettiest, not the best designed, nor the most eloquent. Always the least proprietary.

    That's how Intel made it, that's how Windows (ironically) made it, that's how the TCP/IP Internet made it, and that's how Linux is going to make it today, and why it will simply kick butt.
    • The least proprietary technology always wins out .... that's how Windows (ironically) made it

      Could you explain that one to me, please? Now, if you do a s/Windows/DOS there, it would make sense, because DOS was distributable to just about any PC-clone manufacturer, at a time when most personal computers were tied to a specific operating system. I always thought that Windows made it because Windows was a logical extension of MS-DOS, and people started to "upgrade" to Windows 3.0 and 3.1 when Windows became palatable enough for users to use.

      • I always thought that Windows made it because Windows was a logical extension of MS-DOS, and people started to "upgrade" to Windows 3.0 and 3.1 when Windows became palatable enough for users to use..... Perhaps I missed something or overlooked something.

        You're right about MS-DOS. Being on the IBM PC platform, which litigation had opened up for exploitation by third parties, DOS was on the most open computing platform.

        Windows was a rather different story. Some of the inertia driving Windows w
      • In the late eighties and early nineties, MS gave a compiler to anyone who asked nicely. As a result there was a large amount of software that ran on Windows. This is one of the ways MS sank OS/2: IBM tried to make money on OS/2 compilers while MS was all but giving theirs away for free.
  • 2015, perfect (Score:2, Insightful)

    by Nxok ( 683394 )
    2015 lands right around peak oil, with all its fun geopolitical and subsequent economic ramifications. I think the more pressing question then will be how I am going to eat today, rather than what Google will look like.
  • The whole idea of "layers" (h/w, OS, apps) is so backward.

    I hope by 2015 it'll all become seamless (brain-Google, brain-Gmail, etc. interfaces) and that Windows and Linux will become OSes of the old generation.

    It's hard for me to understand why anyone would want to see any of the following (examples):
    a) BIOS setup screen(s)
    b) boot dialogue
    c) login screen
    d) Start menu (or its equivalents)
    e) dial-up (xDSL or other) dialog box
    f) traditional interfaces and menus (File :: Save As...)
    h) Internet/Networking setup
    i
    • The only problem that I see with your idea is that the computer still needs an operating system in order to manage memory, handle applications, and do what an operating system does.

      However, I do agree with your point about the layers. There should be some more integration between the computer, the operating system, and the desktop environment. No, I don't mean that the OS and the desktop environment have to be in one giant monolithic interface; they should be modular. However, I do agree that the layers shou

  • by Maljin Jolt ( 746064 ) on Saturday March 26, 2005 @11:09PM (#12058167) Journal
    'What will Windows (and the Google Browser) of 2015 look like?'

    The Google browser, written in Python, will run on your latest Apple iBorg brain augmentation, hacked to run an up-to-date Linux 2.10.x kernel instead of MacBrain OS. No retina projector will be required for receiving Google ads; they will be pushed directly to the /dev/eye/right neural uplink interface, no matter whether you are awake or sleeping.

    For Windows, things will be different. Google will buy Microsoft in 2013, releasing full Windows XXL source code under one of the following Google Directory entries:

    /Top/Recreation/Humor/ [google.com]
    /Top/Shopping/Antiques_and_Collectibles/Classifieds/ [google.com]
    /Top/Society/Issues/Business/Allegedly_Unethical_Firms/Microsoft [google.com]
    /Top/Society/Lifestyle_Choices [google.com]

    Final selection of the topic will be performed by Slashdot poll, the result of which is unpredictable at the moment.

  • Buzzword Bingo! (Score:4, Insightful)

    by dr.newton ( 648217 ) on Saturday March 26, 2005 @11:46PM (#12058296) Homepage
    Welcome to Buzzword Bingo! I AM your host, John Smart. Can you spot the buzzwords?

    Let's play!

    accelerating change
    I am just constantly surprised by new technological emergences
    how do we socially interface with those
    accelerating change
    get in the zone
    keep our eye on the ball
    accelerating change
    you can say this in the mirror every day
    the future is now
    it's already out there
    a can-do, change-aware attitude
    accelerating change
    accelerating intelligence
    intimacy of the human machine
    evolutionary development - you're gonna hear this phrase a lot - anybody who uses this phrase thinks deeply about change
    accelerating change

    But seriously folks, that's about 5 minutes into an hour-long talk. Does this guy take himself seriously? Is he joking?

    Smells like a leftover marketing plan from the Dot-com boom.
  • Futures wiki (Score:3, Interesting)

    by LionKimbro ( 200000 ) on Sunday March 27, 2005 @12:03AM (#12058358) Homepage
    You may all be interested in the TaoRiver Futures wiki. [taoriver.net]

    There's also another one developing, the WikiCities Futures wiki. [wikicities.com]

    The idea is that by combining our understandings from our respective fields, we can attempt to better understand the possibilities open to us, and the timing and dependencies behind them.

    Many other related wikis are listed on the Futures wiki WikiNode. [taoriver.net]
  • by John Smart ( 871104 ) on Sunday March 27, 2005 @01:36AM (#12058656)
    Hi /.,

    For articles on the Linguistic User Interface see:
    http://singularitywatch.com/lui.html [singularitywatch.com]
    http://singularitywatch.com/promontorypoint.html [singularitywatch.com]

    For more on Evolutionary Development:
    http://singularitywatch.com/convergentevolution.html [singularitywatch.com]

    If you find the topic of accelerating technological change fascinating and important, you might enjoy our e-newsletter, Accelerating Times:
    http://accelerating.org/news/signup.php3 [accelerating.org]

    You might also wish to attend our annual fall conference at Stanford, Accelerating Change.
    Past conference public archives are at the website of our nonprofit, the Acceleration Studies Foundation:
    http://accelerating.org/ [accelerating.org]

    We are still early in understanding our universal, cultural, and technological records of accelerating change, and this topic may be the most important and valuable one we could consider, as change and its opportunities may come faster every year for the rest of our lives.

    We'd love any of you with interest in these fascinating topics to join our community.
    Hope to meet some of you at Stanford in September.

    Best,
    John Smart
    President, ASF
  • OK, let's think. Recently on /. we've seen stories about 3D displays. I bet that by 2015 they'll become a reality. And the media/internet will adapt.

    You'll be searching for webpages in 3D, with a graph of relevant websites; by pointing at them with your magic-wand/3D-mouse/whatever, you'll see a miniature snapshot of the website/mediasite. Click (or even use your brain-machine interface to *think* click) and the website will appear.

    After you finish browsing, you simply turn off your flat-panel 3D projector.
  • by theDunedan ( 462687 ) on Sunday March 27, 2005 @02:09AM (#12058817)
    I was expecting Smart to make a connection between LUIs and Personality Capture, but if he did I missed it. It has to do with the notion that Natural Language Processing (which he pointed out is such a challenge) is naturally done by personalities. Okay, perhaps it is not a link with the "capture" part of Personality Capture, but we capture things now with computers. The link is with the "personality" part.

    Language processing is based on life experience. In order for a neural net to learn language, it must have inputs such that it can understand a concept such as "select", "walk", or "win." For a computer to understand "select" might be pretty easy. The trainer could say "I select a file" while he uses a mouse to select a file. The neural net could interface with that. But more sophisticated interfaces will be required to provide the NN with "context" for less computer-like concepts. We could put the NN into a robot that walks (like the one recently discussed on /.), along with visual processing, so that it can experience walking and see other things walking. Then when it hears references to "walk" it can make the connection.

    (Yes, people who are unable to walk from infancy can speak intelligently about walking. Blind people can speak of and understand seeing. But those objections miss the point of a lack of "context input." As I understand it, a totally blind person does not know what "red" really is. (If I am misspeaking on this point, I apologize, especially to blind people or their close friends.))

    Now consider this sentence, which is spelled phonetically:

    "wonwonwonandwonwontoo"

    Pretend that you heard it spoken instead of saw it written. The proficient English speaker would realize several things. First, she would parse that into individual words:

    "won won won and won won too"

    Then she would do a lot of fast computation work to try different parts of speech for each word such that the sentence fits semantically and grammatically into the rules of English. She might then write the sentence on paper as:

    "One won one and one won two."

    And that is enough for her to understand that the sentence is a fragment of a larger text, a newspaper report on the dog show or something. This is because the sentence has a lot of ellipses in it, with anaphoric references being elided. Since references must be present, there must be more text associated with the sentence. With the references put back into the sentence it would read

    "One person won one prize and one person won two prizes."

    or "one dog won one bone . . ." or something.

    The proficient English speaker would not even be thinking about "anaphoric elliptical references." She would "just know."

    All of these levels of computation go on in our brains constantly when we participate in all forms of communication. And in order for a LUI to work properly, the machine will also have to be able to do the same thing. Yet without other faculties (such as visual processing, mobility, etc.) these things cannot be learned either. Hence, Linguistic User Interfaces and personality emulation are intrinsically linked (and pretty darn tough).

    the Dunedan
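
    The first step described in that comment, recovering word boundaries from an unsegmented sound stream, is mechanical enough to sketch in Python. The tiny lexicon here is hand-picked for this one example; a real listener also weighs grammar, semantics, and competing splits, which is the hard part:

        VOCAB = {"won", "one", "and", "too", "to"}   # toy lexicon of homophones

        def segment(s, vocab=VOCAB):
            """Return one split of s into vocabulary words, or None."""
            if s == "":
                return []
            for i in range(1, len(s) + 1):
                head = s[:i]
                if head in vocab:
                    rest = segment(s[i:], vocab)
                    if rest is not None:
                        return [head] + rest
            return None

        print(segment("wonwonwonandwonwontoo"))
        # ['won', 'won', 'won', 'and', 'won', 'won', 'too']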
  • 3-d virtual reality stereoscopic ultra-high resolution blue screens of death!
  • I'll probably get modded to troll for this one, but this is the scoop here:

    The AI singularity? Ain't. Gonna. Happen.

    Why?

    Because it doesn't have to.

    People will simply move the line of "intelligence" down to some esoteric nonsense about "data compression" and "natural language" or some other load of obscure drivel, and the result will be the astounding discovery that the MacOS has been AWARE since God knows when, because it figured out how to spit out a blank floppy at start-up, or a DVD player boots out a D

  • by Gravlox ( 871136 ) on Sunday March 27, 2005 @04:31AM (#12059162)
    I always love it when smart people "think" they know biology. They always assume that mammals are the pinnacle of evolution, and that man is the ultimate mammal.

    Two eyes better than many? Obviously this guy has never actually looked at arthropods. Walking upright is the ultimate? Using that as a rule, his Troodon should have been the master; it did walk upright. Running ain't everything. Plus, Troodon and its kin lasted longer than the raptors. Not a fair comparison anyway - the really bad day when a huge chunk of flaming rock wipes out 70% of all life on the planet makes all the "who is better" arguments literally go up in smoke.

    Why do we have two eyes? Bilateral symmetry does seem to be popular among terrestrial vertebrates. Not surprising, since all terrestrial vertebrates are descendants of fish, and I have not seen any fish with 3 fillets on 'em.

    I hate it when people try to attach meaning and direction to evolution. There is no anti-entropy going on, other than the usual eating/sleeping/procreating. Ask Flipper if walking upright is a requirement for sentience. I cannot comment on his views on computing, but he comes across as a poseur to this biologist.
  • Out of touch (Score:3, Insightful)

    by adoarns ( 718596 ) on Sunday March 27, 2005 @09:57AM (#12059826) Homepage Journal
    I'm odd man out on this conversation, because I don't think human intelligence is computable, and I don't think we're likely to get things like universal constructors or technological singularities. Not for lack of enthusiasm, unfortunately.

    But what strikes me in all the comments heretofore has been this idea of improving usability and efficiency by having the computer anticipate your actions, get to know you, listen in real language.

    Well, I don't think this would be at all useful. I switched over to GNU/Linux two years ago on a lark, but I've stayed because of the absolute richness of the toolset, especially textutils and text editors like vim--as I'm a writer, good text-manipulation is important.

    These tools are generic, but precise. I have made my own toolchain to cope with the tasks I have to do each day. On Windows, I had Word. That's it--just Word. On Linux, I use awk, bash-scripting, perl, textutils, darcs, vim, (La)TeX and a host of others.

    Having tools is where it's at. Better and more tools. Evolve tools and evolve toolchains.

    Natural language is wonderful for human expression, but it's imprecise for detailed specification. Witness the development of mathematical notation, BNF notation, architectural schematics, UML. Programming languages aren't weird simply because that makes them easier to parse, but because their stilted format gives them predictable behavior. Real human language dips into and out of metaphor freely, invents neologisms, is imbued with dialect, invokes slang, and is more often than not full of social and emotional content. Which makes writing stories really fun and easy, but is shit for writing programs--which, let's face it, are just automated tasks.

    I don't want Windows Search to tweak my search based on the last fifty items I looked for; I *do* want to be able to tweak the Search myself so that it can bring up relevant text within the file, as well as strip out some meta-information I added to it myself and display that. That's my idea of efficient.
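
    That kind of tweakable, do-what-I-say search is already a few lines of Python; everything here (the directory, the glob, the "meta:" line convention) is a placeholder for whatever convention you actually use:

        import pathlib, re

        def search(root, term, glob="*.txt"):
            """Print file, line number, and matching line, plus any
            user-added 'meta:' lines -- no guessing about intent."""
            pat = re.compile(term, re.IGNORECASE)
            for path in pathlib.Path(root).rglob(glob):
                lines = path.read_text(errors="ignore").splitlines()
                meta = [l for l in lines if l.startswith("meta:")]
                for n, line in enumerate(lines, 1):
                    if pat.search(line):
                        print(f"{path}:{n}: {line.strip()} {meta}")

        search("notes", "relevant")   # placeholder directory and query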
