Science

Bell Labs moves bandwidth to 1.6 terabits

javac writes "Scientists and engineers from Bell Labs have demonstrated a prototype long-distance optical-transmission system that quadruples the capacity of today's commercial systems to 1.6 terabits. Read the full story"

Bell Labs moves bandwidth to 1.6 terebits

  • Posted by FascDot Killed My Previous Use:

    "We are only two orders of magnitude from exabit speeds. tera=10**12, peta=10**15, exa=10**18."

    kilo = 10^3
    mega = 10^6
    giga = 10^9
    tera = 10^12
    peta = 10^15
    exa = 10^18

    An "order of magnitude" is "power of ten". Sounds like tera to exa is 6 (18 minus 12) orders of magnitude. Even using Moore's law (double ever 18 months), that's quite a ways off.

    Does anyone know the theoretical maximum bitrate of fiber?
    --
    "Please remember that how you say something is often more important than what you say." - Rob Malda
  • Posted by mathman100@geocities.com:

    i'm not sure about fiber, but if you use the entire EM spectrum in something like a vacuum:

    infinite frequencies * infinite amplitudes = a really really big number

    although i wouldn't feel very safe if gamma rays and x-rays were just flying around in this data transfer medium.
  • I went to a talk about 3 months ago where the guy was talking about increasing bandwidth. Basically, to sum up a two hour talk in a few words, it boils down to "Getting fiber optics to move information faster isn't a challenge, it's getting the information into and out of the optic that's the problem." He went on about details of nanofabrication, and how new synthetic structures can bump up the transfer rate and frequency regulation of light (*NOT the frequency of the light, but the rate at which they can change the frequency*).

    All his work was lab-based synthesis of nanostructures, not commercial applications. So his work was beyond anything production-ready by about a year (at LEAST). Anyone know of a solid reference for the basics of what communication hardware/structures/circuit design is currently being used, and what directions they are moving in? If what I saw was true (and the science behind it was solid), he could easily boost the best theoretical hardware by an order of magnitude over existing stuff. I understand the science, chemistry, and fabrication he talked about, and I can see how it would be better, but I don't completely understand the protocols and the designs that sit between the computers and the electrical-to-optical materials he was designing.

  • Jeff and Rob 'definately' have a hard time 'speling' correctly after spending so much time poring through articles... :op
  • Simply put, a consumer will not directly use one of these - that's stupid. Consumers will benefit because the backbones will become way less congested - which means all these "high-speed" connectivity items such as Cable Modems and xDSL will be better able to utilize their maximum throughput. As it stands now, I can see the 'net getting more choked with packet traffic as people jump onto the broadband services in droves.
  • No joking about this being accessible to common people, or even something at least a few orders of magnitude slower. It would be nice to download the latest distribution CD in a minute, or enjoy instant gratification on freshmeat. Point, click, make install! I tried 128K ISDN for a year and did not notice much improvement over 56K when surfing. The world would be a nice place with a big pipe to each computer.
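    For a rough sense of scale (a quick sketch, assuming a 650 MB (decimal) CD image and ignoring protocol overhead):

        cd_bits = 650 * 8e6                       # 650 MB CD image, in bits
        links = [("56k modem", 56e3), ("128k ISDN", 128e3), ("1.6 Tb/s fiber", 1.6e12)]
        for name, bps in links:
            print(name, cd_bits / bps, "seconds")
        # 56k modem: ~93,000 s (about 26 hours)
        # 128k ISDN: ~41,000 s (about 11 hours)
        # 1.6 Tb/s:  ~0.003 s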
  • The wavelength of the light spectrum used and the thickness of the fiber are restrictions. Shorter wavelengths help, but distance suffers. Infrared is a longer, but efficient, wavelength. Ultraviolet does not have a chance. Skinnier fibers help due to the wavelength being refracted to the center.

    To get an idea where the limit would be challenging for fiber, divide the speed of light by the wavelength to get the frequency. That is the point where working with single pulses of light gets interesting. From there, you could share the same fiber with many different wavelengths and even use analog components of the waveforms like a modem. It just depends on how much money you want to throw at the hardware.
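    A quick sketch of that conversion, using a few wavelengths commonly quoted for fiber (example numbers, not from the article):

        c = 3.0e8                         # speed of light, m/s
        for nm in (1550, 1310, 850):      # typical fiber transmission windows
            print(nm, "nm ->", round(c / (nm * 1e-9) / 1e12), "THz")
        # 1550 nm -> 194 THz, 1310 nm -> 229 THz, 850 nm -> 353 THz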
  • And finally, someone has added a regular comment.
  • Bit is the general term for a binary digit. Tera is the SI standard prefix for 10^12. So, it follows that a terabit is 10^12 bits.

    In other words, a lot of information. In this context, terabit can be pretty much assumed to mean terabit per second, as a measure of bandwidth. To put it in perspective, the fastest and busiest servers around feel good about claiming terabits per day transferred.
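    To make that comparison concrete (a rough sketch; "a terabit per day" is just the figure of speech above, not a measured number):

        per_day_bps = 1e12 / 86400                  # one terabit spread over a day
        print(per_day_bps / 1e6, "Mbps sustained")  # ~11.6 Mbps
        print(1.6e12 / per_day_bps)                 # the demo runs ~138,000x that rate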

  • 3 years ago NEC already had a product called Narrow Band, for which I was managing the software driver development. They renamed it Spectral Wave; it supported 8 OC-48s along a single fibre. A year later they were running 32 frequencies along the single fibre. It's no surprise to me that someone has upped the number of frequencies to 40, though at speeds much greater than OC-48. All I'm saying is DWDM is not such a hot new technology, and Lucent shouldn't be touting itself as the instigator of the advance. When you think about it, the whole concept of frequency-multiplexing makes a lot of sense anyway. However, extending the unrepeated signal distance would be a real advance, since it lessens the need for line repeaters.

    But the sad end of the stick is we still have Cable and local Telephone monopolies jacking up prices for lower and lower quality service. Alas, 'tis but a pipe dream...
  • Yeah, but coming or going?


    ...sorry
  • You don't only depend on the segment of wire that comes into your house. If the backbone on the net is terabit instead of gigabit, you're more likely to be able to actually use the lower bandwidth you're paying for, even if you're hitting sites everyone else is hitting.

    Your parents are wrong: long distance was wildly expensive pre-1984, and while local rates went up somewhat, it's not that much once you take inflation into account.

    "I say fuck antitrust laws," you say. Fine. Let's go to the world of 1984. You can't buy a modem; you can only rent one from the phone company at a steep rate, so much so that most people use acoustic couplers that can't possibly get more than 1200 bits/sec. You get cheap local calls, but calling out of town costs 10x as much as what you pay today.

    If Ma Bell hadn't been broken up, the net as you know it would not exist. Yes, there would be an Internet, but since all the long-distance lines would have to be leased from a monopolist at monopoly rates, it would cost at least ten times as much to have access, and since Ma Bell was extremely, extremely conservative about what types of equipment could be attached to the phone network, there would be far less innovation. No internal modems for PCs, that's for sure. You'd have to use external boxes leased from the phone company.

  • SOMEPLACE -- Some company invented something today that increases the capacity of something or another by ten trillion times. "This thing we came up with is so incredible" said the company's CEO. Consumers are never expected to use or see this coveted thing, but we thought we'd tell you about it in detail anyway.


    Actually, the article is describing practical application of known experimental technologies - exactly what is needed to bring it into real use in the backbone and (later) downstream. We've heard about multi-terabit transmission with DWDM before, but this is looking closer to an actual product.

  • I went to a talk about 3 months ago where the guy was talking about increasing bandwidth. Basically, to sum up a two hour talk in a few words, it boils down to "Getting fiber optics to move information faster isn't a challenge, it's getting the information into and out of the optic that's the problem." He went on about details of nanofabrication, and how new synthetic structures can bump up the transfer rate and frequency regulation of light (*NOT the frequency of the light, but the rate at which they can change the frequency*).


    This used a different approach to packing more information into the fiber - the use of many carriers, each modulated at a relatively low frequency, and spaced far enough apart that there wasn't cross-talk between them. The professor you refer to seems to be looking at ways of more quickly modulating a single carrier. This is certainly very useful, but there are other ways of increasing bandwidth. Frequency-domain schemes of the kind that I described are the ones that have been generating the recent news (though there are limits to how far it is practical to push this).

  • Well, the frequency of visible light is about 6x10^14 Hz. Unless we can make ultraviolet fibres, that is an upper bound.


    Not quite. Your sampling rate is limited to 6.0e14 Hz, but you can have many signal levels that can be detected. The problem is that for n levels, you need n*n photons to compensate for measurement uncertainty, which means that for b bits per sample you need 2^(2b) photons per sample. The energy requirements for this get nasty very fast, but for a small number of levels it's practical.


    You run into analogous problems with frequency-domain schemes (this was a time-domain scheme). You can increase the data rate by increasing power, but the power cost is ugly.
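    Plugging numbers into that rule of thumb (a rough sketch, taking the 2^(2b) photons-per-sample figure above at face value and using E = h*f for the photon energy):

        h = 6.626e-34   # Planck's constant, J*s
        f = 6.0e14      # visible-light carrier in Hz, also used as the sample rate
        for b in (1, 2, 4, 8):            # bits per sample
            photons = 2 ** (2 * b)        # photons needed per sample
            watts = photons * h * f * f   # (energy per sample) * (samples per second)
            print(b, "bits/sample:", photons, "photons,", watts, "W")
        # 1 bit: ~0.001 W   4 bits: ~0.06 W   8 bits: ~16 W -- it gets nasty fast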

  • by Christopher Thomas ( 11717 ) on Wednesday June 09, 1999 @08:28AM (#1858652)
    Does anyone know the theoretical maximum bitrate of fiber?


    This was covered in a thread a couple of months back. A few limits apply.


    Firstly, you are limited by the bandwidth of your optical carrier. For visible light and assuming reasonable power constraints, you won't be able to get more than about 1.0e15 bps. You could push this by pumping in a _lot_ more power or by using a higher frequency, but these have very serious problems. IMO, a better solution would just be to use bundles of fibers (or multiple lasers or what-have-you).


    DWDM and other frequency domain techniques won't help here - as you modulate data onto a carrier, its frequency spreads out, limiting how densely you can do WDM, or how much data you can transmit per channel using a given density of carriers. 1.0e15 bps or thereabouts is as far as you can go without producing/using frequencies higher than the visible light range.


    The other, more immediate limit is in the physical medium that is carrying your optical signal. In this case, fiber. Fiber is mainly limited by the operating frequency range of the amplifiers used to boost the signal (usually erbium-doped fiber amplifiers). I'm told that this is in the 1.0e11 to 1.0e14 Hz range (I honestly don't remember where within those bounds it fell). Other readers can probably give you more accurate information.


    One of the points mentioned in the article was that they hoped to develop better amplifiers - that could process a wider range of frequencies and let them use another frequency range for transmitting data. This was a topic for future research, not an announcement of something they'd accomplished, though.
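    As a rough illustration of how a figure like 1.6 Tb/s gets put together with DWDM (illustrative numbers only - the article's exact channel count and spacing aren't given here):

        channels = 40            # wavelengths on one fiber (assumed)
        per_channel_bps = 40e9   # 40 Gb/s per wavelength (assumed)
        spacing_hz = 100e9       # 100 GHz channel spacing (a common grid)
        print(channels * per_channel_bps / 1e12, "Tb/s total")          # 1.6 Tb/s
        print(channels * spacing_hz / 1e12, "THz of amplifier window")  # 4.0 THz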

  • France Telecom did claim to have the record a few days ago, with a 1 terabit demonstration in March. It seems that this kind of record is not easy to hold for long. Note however that FT demonstrated 1 Tb over 1000 km, while Bell demonstrated 1.6 Tb over only 400 km.
  • It's expensive to terminate fiber and to lay cable, but for long runs of many-fiber cable the bulk of the cost is the cable itself - about $500/fiber/km. A cable halfway around the world (20,000 km) would cost $10 million/fiber. At 1.6 terabits each fiber can carry 25 million 64 kbps phone connections. Assuming a cable lifetime of 10 years, that's $0.04 per connection per year, right?
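    Checking that arithmetic (a quick sketch using the parent's own numbers):

        cost_per_fiber = 500 * 20000    # $500/fiber/km * 20,000 km = $10,000,000
        circuits = 1.6e12 / 64e3        # 64 kbps voice circuits per fiber = 25,000,000
        print(cost_per_fiber / circuits / 10)   # over a 10-year life -> 0.04 dollars/circuit/year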
  • Oh, come on. Goodlatte's a freekin joke. Him and all the other "get the rest of gob'ment off your back so I can git on it" crackers from va. are a joke. Do you really think GTE is going to build an internet backbone straight to your house just because goodlatte is going to force the FCC to stop some regulation of phone companies? Don't you think that maybe the bad ol' phone company doesn't run cable and adsl to your holler because maybe there aren't enough sons of the soil back up in there to cover their costs? Oh yeah, and bring back ma bell, that's a good idea. Maybe we can make it (mandate it, congressman?) that AT&T supplies ALL the bandwidth for all internet traffic, that way we could have a chance to enforce goodlatte's anti-spam portion of his "internet freedom act."
  • Bummer, Error 505 trying to look at the link...
  • I believe I should take the blame for that, not Hemos; we should put credit where credit is due.
    Jonathan
  • To all these people who think fiber to the PC isn't happening: remember, two days ago we heard that Lucent (which now owns Bell Labs) is selling multimedia fiber straight to houses in Atlanta, $60 a pop. If this works out well, it will spread. Once people get used to the internet at 1000k/s instead of 4k/s on a good day, there will be no going back.
  • hmmm. I don't know if you realize this, but Bell Labs is now owned by Lucent Technologies. They have no direct ties to AT&T.
  • There are quite a few DWDM-based technologies within striking distance, many of which are further along than Bell Labs' work. Many of them are startups, and don't have the luxury of immediate disclosure the way Bell Labs does.

    http://www.lightchip.com
  • "Tera" is 10 to the twelfth power, or a 1 with with 12 zeroes after it. In other words, a terabit is thousand times larger than a gigabit. The next one after "tera" are "exo" and "peta", although I confess I can't remember what order they're in.

    A terabyte might be slightly different, though, since bytes tend to be grouped in powers of 2: kilobyte = 2^10, megabyte = 2^20, so a terabyte could be 2^40.
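    The two conventions differ by about 10% at this scale (quick check):

        print(2 ** 40)                           # 1099511627776
        print(100 * (2 ** 40 / 10 ** 12 - 1))    # ~9.95% larger than 10^12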

    It's quite a lot, either way.
    --

  • by Dunx ( 23729 )
    Sorry, forget that - didn't read the post properly in the first place.
    --
  • by debrain ( 29228 ) on Wednesday June 09, 1999 @08:29AM (#1858663) Journal
    Nortel has several terabit fiber-optic lines as well. The speeds just keep getting bigger and bigger. And I've got to ask, what will we do with it all?

    I have a several-megabit ADSL connection to the internet at home, and a T1 at work, and I still spend most of my bandwidth at home reading news and sending email, and using local doodads at work. The entire system is computer-centric. But the underlying philosophy is person-centric, and while we look at what we have for each computer, we often forget what we have for each person.

    And the successful in the future will be looking at what we will have then, which will be computer-independent, yet entirely person-centric.

    Someday, not so far off as many would hope, fear, wish, or believe, what we value in a computer will not be what we value now, but rather the speed at which it lets us communicate with others.

    That is a step in the right direction. Next we must change how and what we communicate. Therein you will find the true leaders of this technological era.

  • think we can encode a Real time voice at 64k/sec into a streaming mp3? :) that would be pretty sweet.... :)

    I can encode 128 kbps mp3's off my CDs at about 8.5x. So if we assume the algorithm scales linearly for encoding (I have no idea if this is right or not), with a PIII-450 we could encode 128 * 8.5 = 1088 kbps in realtime...

    With my k6-2 250 I was getting about 2.2x so about 280 kbps in realtime.

    So I figure you'd need about a P133 to comfortably handle encoding and decoding in realtime for voice conferencing (this is assuming you use the Xing codec). But then why would you use mp3 for this? Speak Freely and a bunch of other good software packages are already out there for this specifically.
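    Following the same linear-scaling assumption as above (a rough sketch; real encoders won't scale perfectly linearly):

        ref_clock = 450.0    # MHz, the PIII-450 from the post above
        ref_speed = 8.5      # encodes 128 kbps material at 8.5x realtime
        encode_only = ref_clock / ref_speed
        print(round(encode_only), "MHz to encode in realtime,",
              round(2 * encode_only), "MHz to encode and decode")
        # ~53 MHz one way, ~106 MHz both ways -- roughly the P133 guess above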
  • Ah well, I knew vaguely there was a limit somewhere, but I wasn't sure where. I am aware that realtime is different from non-realtime, but I was just trying to get a feel for what speed processor you would need. It would be more than a bit silly to 'compress' a 44kHz / 16-bit / stereo stream to 1088 kbps; that's about 136 kBps, only about 40 kBps less than the original ~176 kBps stream.

    With today's processors and codecs, there's not too much challenge with encoding voice down to 64 kbps or 32 or 16. The problem lies more with TCP/IP. Packets arriving out of order, dying, being delayed, etc. IPv6 (I believe) does have QOS features, so IP telephony should get quite a bit better when this happens.
  • now that we have superfast connections, and superfast pcs, I wonder if telecommunications will finally get better using IP...... I'd hope that video conferencing would be very economical (assuming phone co's or the US Post Office don't try to get a cut by taxing us some more), because it's nice to teleconference, but it's not a good 'telephone' conversation. With any luck, we'll be able to do that with massive bandwidth sooner.... think we can encode a Real time voice at 64k/sec into a streaming mp3? :) that would be pretty sweet.... :)
  • > think we can encode a Real time voice at 64k/sec into a streaming mp3? that would be pretty sweet.... :)

    Well, I just finished an app that sends live mp3 through the realplayer/realserver. Using mp3 encoding hardware from Telos, there is a delay of about 2 seconds in the hardware alone (between the straight audio input to the encoder and the player on a networked machine). The realsystem adds some more lag. So, if you don't mind waiting at _least_ 4 seconds between what you say and the response you get from the person on the other end... It may get better in the future with faster hardware, but there are other, better, audio encoding formats out there to use for conferencing. Though, yeah, it would be cool.
  • > I can encode 128 kbps mp3's off my cd's at about 8.5x. So if we assume the algorithm scales linearly for encoding (I have no idea if this is right or not). With a PIII-450 we could encode 128 * 8.5 = 1088 kbps in realtime...

    Nope. MPEG-1 audio layer 3 (aka MP3) only goes up to 320 kbps. Plus, as usual, doing realtime is a slightly different problem from doing non-realtime.
  • There was a discussion on /. last week about a company claiming exabits/sec across power lines. They were roundly denounced as charlatans, and I'm pretty sure that press release was a hoax or a marketing dweeb was doing too much cocaine.

    But this is the real thing, a nice incremental jump in leading-edge telecoms. Granted, you won't be getting one of these things in your living room in the next few years, but maybe in a decade you will see this level of technology pumping bits around fibered cities like Palo Alto.

    We are only two orders of magnitude from exabit speeds. tera=10**12, peta=10**15, exa=10**18. Get to work! :-)

    the AntiCypher
  • (Warning: a bit off topic)
    Actually, my cable phone service (From the local provider - Jones Communications) is BETTER than the RBOC's (Bell Atlantic). There was a major fiber cut next door to my apartment complex. The BA phones throughout the complex were out for three days. But those of us on the Jones phones had full service with no interruption because it was on a separate fiber and exit point from the BA circuits.

    That, and my two phone lines with Jones are cheaper than my one line used to be with BA.

    Let's hear it for (albeit limited) local loop competition!
  • Well...you may not want to be so vitriolic against us Virginian Crackers, as you say. MAE-East (through which a plurality of Internet traffic routes) is in a garage in Tysons Corner. UUNET is based in Virginia, and most major backbone providers have major offices somewhere on the Dulles Toll Road. If Silicon Valley is leading the computer revolution, Northern Virginia is leading the telecom one...

    So you basically want more bandwidth, but don't want to pay for it. You could get pretty much all the bandwidth you want by purchasing a DirectPC dish. That would've been illegal before the breakup of AT&T. Oh, wait it COSTS MONEY to get that bandwidth.

    I guess it's just easier to complain.

    Ah well...back to keeping the 'Net humming. It's what we Virginia Crackers do, you know.
  • Yeah, I agree. Every month the processing power, amount of storage and amount of memory we have at our fingertips seemingly increases exponentially, and yet we're (most of us) surfing the web on modems that are only 6 to 10 times faster than they were 7 years ago or so.

    Pretty soon here I'm planning on seeing how well USWest ADSL works to my place. A little finger-crossing, a little cash, and (I expect) a lot of swearing should do the trick.

    Sam Jooky
  • You run into analogous problems with frequency-domain schemes (this was a time-domain scheme). You can increase the data rate by increasing power, but the power cost is ugly.

    Not if we develop cold fusion :)
  • >Does anyone know the theoretical maximum bitrate of fiber?

    Hemp or Bran?

    Later
    Erik Z

  • (Whiney geek voice)
    But I waaaannnt one!

    Ok, seriously, the backbone MAY benefit. Who really benefits is Ma Bell. Don't think she's going to give this tech to other companies. It can push FOUR times the data through existing lines. This means fewer stations need to be built and maintained. More profit gained by selling the same amount of bandwidth for the same price. Now that they have all this extra bandwidth, they can undercut their competitors by quite a bit.

    Geez, all that bandwidth, if I get one for the home, I may have enough room to try teleportation.
    All my new friends would be short and skinny, less transmission time.

    Later
    Erik Z
  • I can't even get ADSL or cable where I live, and ISDN is $1 per hour from GTE!!! Why the hell would I care about Bell Atlantic having a 1.6 terabit network when the lamers @ the phone companies won't extend ADSL and cable service to small and medium sized towns like mine? I hope Bob Goodlatte's Internet Freedom Act gets through; we Virginians need it, since GTE is too stingy and greedy to give us decent ADSL service. Better yet, why not let AT&T buy GTE, Bell Atlantic and all the other phone companies and make AT&T roll out cheap cable service like it is already doing in big cities!! My parents tell me that they paid lower rates on phone calls before that AT&T antitrust case than they do now. I say fuck antitrust laws
  • Nortel announced a commercial 1.6 terabit technology in May. They've even signed a contract with COLT Telecom in Europe to supply a network. See:

    http://www.quicken.com/investments/news/story/pr/?story=/news/stories/pr/19990510/to006.htm&symbol=NT

    and

    http://www.quicken.com/investments/news/story/pr/?story=/news/stories/pr/19990504/to001.htm&symbol=NT
  • If they want to test TCP/IP networking using this system, I will kindly allow them to run one of these bad boys to my house ;-)
    Seriously though, it will be nice to watch phone prices come into the penny-a-minute range, soon maybe...
    At 6-10 bucks an hour, phone calls to my girlfriend add up fast...
    In another random comment, when high bandwidth comes to the house, do you think we will have a digital subchannel for voice (like ISDN? I think that's how it works...) or TCP/IP phoning?
  • I'm presently sitting in front of a pair of Fujitsu OC-192 systems. The computer I'm writing this on is connected to the 'net by a 56k dialup. Y'know why? Because you can't just throw packets into one of these things! They do NOT know or care what your computer has to say. The difficult part of networking, and the thing that NAPs consist of, is getting computer data into and out of telecom pipelines. One of these days really soon I'm going to type up a page about how telecom really works, so computer users can understand it.

    For the moment, let's just say that there isn't a PC in the world that can talk to anything faster than a T3 natively. So you mux a few dozen or a few hundred T3s onto a fiber. So what? The computer doesn't care. But now you have the ability to add more computers at either end of the pipe. What we need is for more backbone providers to do detailed packet analysis and purchase more circuits between the heavy-traffic areas of their networks. We need looser peering agreements that let packets follow the more logical geographical route. We need fatter NAPs that can handle more packets, because the circuits to connect them are relatively cheap and plentiful.
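    (For scale, a quick sketch of how many T3s one of these 1.6 Tb/s fibers could carry, ignoring SONET framing overhead:

        t3_bps = 44.736e6                              # one T3 / DS3 line rate
        print(int(1.6e12 / t3_bps), "T3s per fiber")   # ~35,765

    which is rather more than "a few hundred".)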

    Then in order to make it all work, we need to provide faster connectivity from the ISP's POPs to the customers. As an AC pointed out, most of the copper buried in the ground dates from the 60s or earlier. Lots of it is cotton-and-wax insulated with a lead outer sheath. The telco will never figure this out. Think wireless.
  • he asked what a terEbit was..not what a terAbit was.
