Bell Labs moves bandwidth to 1.6 terabits
javac writes "Scientists and engineers from Bell Labs have demonstrated a prototype long-distance optical-transmission system that quadruples the capacity of today's commercial systems to 1.6 terabits. Read the full story."
OK, what now? (Score:1)
"We are only two orders of magnitude from exabit speeds. tera=10**12, peta=10**15, exa=10**18."
kilo = 10^3
mega = 10^6
giga = 10^9
tera = 10^12
peta = 10^15
exa = 10^18
An "order of magnitude" is "power of ten". Sounds like tera to exa is 6 (18 minus 12) orders of magnitude. Even using Moore's law (double ever 18 months), that's quite a ways off.
Does anyone know the theoretical maximum bitrate of fiber?
--
"Please remember that how you say something is often more important than what you say." - Rob Malda
Re:Maximum bitrate of fiber (Score:1)
I'm not sure about fiber, but if you use the entire EM spectrum in something like a vacuum:
infinite frequencies * infinite amplitudes = a really, really big number
Although I wouldn't feel very safe if gamma rays and X-rays were just flying around in this data transfer medium.
Anyone Know The Science? (Score:2)
All his work was lab-based synthesis of nanostructures, not commercial applications, so his work was significantly beyond what is production-ready by about a year (at LEAST). Does anyone know of a solid reference for the basics of what communication hardware/structures/circuit design is currently being used, and what directions they are moving in? If what I saw was true (and the science behind it was solid), he could easily boost the best theoretical hardware by an order of magnitude over existing stuff. I understand the science, chemistry, and fabrication side of what he talked about, and I can see how it would be better, but I don't completely understand the protocols, or the designs that sit between the computers and the electric-to-lightwave materials he was designing.
Re:what's a "terebit" anyway? (Score:1)
Uh, you miss the point totally... (Score:1)
Re:Whatever ... (Score:2)
Re:OK, what now? (Score:2)
To get an idea of where the limit becomes challenging for fiber, divide the speed of light by the wavelength to get the carrier frequency. That is the point where working with single pulses of light gets interesting. From there, you could share the same fiber among many different wavelengths and even use analog components of the waveforms, like a modem does. It just depends on how much money you want to throw at the hardware.
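A rough sketch of that conversion in Python; the 1550 nm wavelength is just an assumed example (the common long-haul window), not a figure from the article:

    # Carrier frequency for an assumed 1550 nm telecom wavelength.
    c = 3.0e8             # speed of light, m/s
    wavelength = 1550e-9  # meters (assumption, typical long-haul window)
    frequency = c / wavelength
    print(f"{frequency:.2e} Hz")   # ~1.9e14 Hz, i.e. roughly 190 THz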
Re:Hemp or Bran (Score:1)
Re:terabits...gigahertz what else? (Score:1)
In other words, a lot of information. In this context, terabit can be pretty much assumed to mean terabit per second, as a measure of bandwidth. To put it in perspective, the fastest and busiest servers around feel good about claiming terabits per day transferred.
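For the arithmetic-minded, here's the comparison in Python (no assumptions beyond a plain 24-hour day):

    # A 1.6 terabit/second link versus a server moving "terabits per day".
    link_tbps = 1.6
    seconds_per_day = 24 * 60 * 60
    print(f"{link_tbps * seconds_per_day:,.0f} terabits per day")   # ~138,240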
Big Deal (Score:1)
But the sad end of the stick is we still have Cable and local Telephone monopolies jacking up prices for lower and lower quality service. Alas, 'tis but a pipe dream...
Re:OK, what now? (Score:1)
...sorry
Re:And I should care because? (Score:1)
You don't only depend on the segment of wire that comes into your house. If the backbone on the net is terabit instead of gigabit, you're more likely to be able to actually use the lower bandwidth you're paying for, even if you're hitting sites everyone else is hitting.
Your parents are wrong: long distance was wildly expensive pre-1984, and while local rates went up somewhat, it's not that much once you take inflation into account.
"I say fuck antitrust laws," you say. Fine. Let's go to the world of 1984. You can't buy a modem; you can only rent one from the phone company at a steep rate - so much so that most people use acoustic couplers that can't possibly get more than 1200 bits/sec. You get cheap local calls, but calling out of town costs 10x what you pay today.
If Ma Bell hadn't been broken up, the net as you know it would not exist. Yes, there would be an Internet, but since all the long-distance lines would have to be leased from a monopolist at monopoly rates, it would cost at least ten times as much to have access, and since Ma Bell was extremely, extremely conservative about what types of equipment could be attached to the phone network, there would be far less innovation. No internal modems for PCs, that's for sure. You'd have to use external boxes leased from the phone company.
This is an engineering announcement. (Score:2)
Actually, the article is describing practical application of known experimental technologies - exactly what is needed to bring it into real use in the backbone and (later) downstream. We've heard about multi-terabit transmission with DWDM before, but this is looking closer to an actual product.
How they did this (Score:2)
This used a different approach to packing more information into the fiber - the use of many carriers, each modulated at a relatively low frequency, and spaced far enough apart that there wasn't cross-talk between them. The professor you refer to seems to be looking at ways of more quickly modulating a single carrier. This is certainly very useful, but there are other ways of increasing bandwidth. Frequency-domain schemes of the kind that I described are the ones that have been generating the recent news (though there are limits to how far it is practical to push this).
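To make the frequency-domain idea concrete, here's an illustrative Python sketch. The channel count and per-channel rate are assumptions for the sake of the arithmetic; the thread doesn't say how Bell Labs actually split the 1.6 Tb/s:

    # Illustrative WDM arithmetic only - channel count and per-channel
    # rate are assumed, not taken from the actual experiment.
    channels = 40            # number of optical carriers (assumption)
    per_channel_gbps = 40    # modulation rate per carrier (assumption)
    total_tbps = channels * per_channel_gbps / 1000
    print(f"{total_tbps:.1f} Tb/s aggregate")   # 1.6 Tb/s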
Re:OK, what now? (Score:2)
6x10^14 Hz. Unless we can make ultraviolet fibres, that is an upper bound.
Not quite. Your sampling rate is limited to 6.0e14 Hz, but you can have many signal levels that can be detected. The problem is that for n levels, you need n*n photons to compensate for measurement uncertainty, which means that for b bits per sample you need 2^(2b) photons per sample. The energy requirements for this get nasty very fast, but for a small number of levels it's practical.
You run into analogous problems with frequency-domain schemes (this was a time-domain scheme). You can increase the data rate by increasing power, but the power cost is ugly.
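To see how quickly that power cost gets ugly, here's a rough Python sketch using the parent's 2^(2b) photons-per-sample rule and the 6.0e14 Hz figure above for both the carrier and the sample rate:

    # Rough power needed per the parent's 2^(2b) photons-per-sample rule.
    h = 6.626e-34          # Planck's constant, J*s
    f = 6.0e14             # carrier frequency from the parent post, Hz
    sample_rate = 6.0e14   # samples/sec, also from the parent post

    for b in range(1, 9):
        photons = 2 ** (2 * b)            # photons needed per sample
        power = photons * h * f * sample_rate
        print(f"{b} bits/sample: {photons:>6} photons, ~{power:.1e} W")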
Maximum bitrate of fiber (Score:3)
This was covered in a thread a couple of months back. A few limits apply.
Firstly, you are limited by the bandwidth of your optical carrier. For visible light and assuming reasonable power constraints, you won't be able to get more than about 1.0e15 bps. You could push this by pumping in a _lot_ more power or by using a higher frequency, but these have very serious problems. IMO, a better solution would just be to use bundles of fibers (or multiple lasers or what-have-you).
DWDM and other frequency-domain techniques won't help here - as you modulate data onto a carrier, its spectrum spreads out, limiting how densely you can do WDM, or how much data you can transmit per channel for a given density of carriers. 1.0e15 bps or thereabouts is as far as you can go without producing/using frequencies higher than the visible light range.
The other, more immediate limit is in the physical medium carrying your optical signal - in this case, fiber. Fiber is mainly limited by the operating frequency range of the amplifiers used to boost the signal (usually erbium-doped fiber amplifiers). I'm told that this is in the 1.0e11 to 1.0e14 Hz range (I honestly don't remember where within those bounds it fell). Other readers can probably give you more accurate information.
One of the points mentioned in the article was that they hoped to develop better amplifiers - that could process a wider range of frequencies and let them use another frequency range for transmitting data. This was a topic for future research, not an announcement of something they'd accomplished, though.
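For a feel for the numbers, here's a small Python sketch of how wide the erbium amplifier window is in frequency terms. The 1530-1565 nm C-band range is an assumption drawn from general EDFA specs, not from the article:

    # Width of an assumed 1530-1565 nm erbium amplifier window.
    c = 3.0e8
    low, high = 1530e-9, 1565e-9   # meters (assumed C-band edges)
    bandwidth_hz = c / low - c / high
    print(f"~{bandwidth_hz / 1e12:.1f} THz of usable carrier bandwidth")   # ~4.4 THz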
Record broken (Score:2)
a few days ago with a 1 Terabit demonstration in March. It seems that this kind of record is not easy to hold for long. Note however that FT demonstrated 1 Tb over 1000 km, while Bell demonstrated 1.6 Tb over only 400 km.
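One way to compare the two demos is the capacity-times-distance product, a common figure of merit for these records (just arithmetic on the numbers above):

    # Capacity x distance for the two demonstrations mentioned above.
    ft_product = 1.0 * 1000     # 1 Tb/s over 1000 km
    bell_product = 1.6 * 400    # 1.6 Tb/s over 400 km
    print(f"FT:   {ft_product:.0f} Tb*km/s")     # 1000
    print(f"Bell: {bell_product:.0f} Tb*km/s")   # 640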
So it should cost $0.04/YEAR to phone Australia? (Score:1)
but for long runs of many-fiber cable the bulk of the cost is the cable itself - about $500/fiber/km. A cable halfway around the world (20,000 km) would cost $10 million/fiber. At 1.6 terabits each fiber can carry 25 million 64 kbps phone connections. Assuming a cable lifetime of 10 years, that's $0.04 per year, right?
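Checking that arithmetic in Python, using only the numbers given in the post:

    # Cable cost per 64 kbps voice connection per year, per the post's figures.
    fiber_cost = 500 * 20_000          # $500/fiber/km over 20,000 km = $10M
    connections = 1.6e12 / 64e3        # 25 million 64 kbps calls per fiber
    lifetime_years = 10
    cost = fiber_cost / connections / lifetime_years
    print(f"${cost:.2f} per connection per year")   # $0.04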
Re:And I should care because? (Score:1)
error 505 (Score:1)
Re:what's a "terebit" anyway? (Score:1)
Jonathan
because broadband will be as necessary as cable. (Score:1)
Re:Uh, you miss the point totally... (Score:1)
Re:This is an engineering announcement. (Score:1)
http://www.lightchip.com
Re:what's a "terebit" anyway? (Score:1)
A terabyte might be slightly different, though, since bytes tend to be grouped in powers of 2: kilobyte = 2^10, megabyte = 2^20, so a terabyte could be 2^40.
It's quite a lot, either way.
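The difference between the two readings is easy to check (no assumptions here, just the two definitions above):

    # Decimal vs. binary "tera".
    tera_decimal = 10 ** 12
    tera_binary = 2 ** 40
    print(f"10^12 = {tera_decimal:,}")
    print(f"2^40  = {tera_binary:,}")
    print(f"difference: {(tera_binary / tera_decimal - 1) * 100:.1f}%")   # ~10%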
--
Duh... (Score:1)
--
Fine Young Cannibals (Score:3)
I have a several-megabit ADSL connection to the Internet at home and a T1 at work, and I still spend most of my bandwidth at home reading news and sending email, and using local doodads at work. The entire system is computer-centric. But the underlying philosophy is person-centric, and while we look at what we have for each computer, we often forget what we have for each person.
And the successful in the future will be looking at what we will have in the future, which will be computer-independent, yet entirely person-centric.
Someday, not so far off as many would hope, fear, wish, or believe, what we look for in a computer will not be what we value now, but rather the speed at which we communicate with others.
That is a step in the right direction. Next we must change how and what we communicate. Therein you will find the true leaders of this technological era.
Re:terabits...gigahertz what else? (Score:1)
I can encode 128 kbps MP3s off my CDs at about 8.5x. So if we assume the algorithm scales linearly for encoding (I have no idea if this is right or not), with a PIII-450 we could encode 128 * 8.5 = 1088 kbps in realtime...
With my K6-2 250 I was getting about 2.2x, so about 280 kbps in realtime.
So I figure you'd need about a P133 to comfortably handle encoding and decoding in realtime for voice conferencing (this is assuming you use the Xing codec). But then why would you use MP3 for this? Speak Freely and a bunch of other good programs are already out there for this specifically.
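The scaling estimate, written out in Python. It assumes, as the post does, that encoding throughput scales linearly with the real-time speedup, which may or may not hold:

    # Real-time MP3 encoding headroom, assuming linear scaling with speedup.
    bitrate_kbps = 128
    for cpu, speedup in [("PIII-450", 8.5), ("K6-2 250", 2.2)]:
        print(f"{cpu}: ~{bitrate_kbps * speedup:.0f} kbps encodable in realtime")
    # PIII-450: ~1088 kbps, K6-2 250: ~282 kbps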
Re:terabits...gigahertz what else? (Score:1)
With today's processors and codecs, there's not too much challenge with encoding voice down to 64 kbps or 32 or 16. The problem lies more with TCP/IP. Packets arriving out of order, dying, being delayed, etc. IPv6 (I believe) does have QOS features, so IP telephony should get quite a bit better when this happens.
terabits...gigahertz what else? (Score:1)
Re:terabits...gigahertz what else? (Score:1)
Well, I just finished an app that sends live mp3 through the realplayer/realserver. Using mp3 encoding hardware from Telos, there is a delay of about 2 seconds in the hardware alone (between the straight audio input to the encoder and the player on a networked machine). The realsystem adds some more lag. So, if you don't mind waiting at _least_ 4 seconds between what you say and the response you get from the person on the other end... It may get better in the future with faster hardware, but there are other, better, audio encoding formats out there to use for conferencing. Though, yeah, it would be cool.
Re:terabits...gigahertz what else? (Score:1)
Nope. MPEG-1 audio layer 3 (aka MP3) only goes up to 320 kbps. Plus, as usual, doing realtime is a slightly different problem from doing non-realtime.
Terabits now, Petabits soon, Exabits when? (Score:2)
But this is the real thing, a nice incremental jump in leading-edge telecoms. Granted, you won't be getting one of these things in your living room in the next few years, but maybe in a decade you will see this level of technology pumping bits around fibered cities like Palo Alto.
We are only two orders of magnitude from exabit speeds. tera=10**12, peta=10**15, exa=10**18. Get to work!
the AntiCypher
My Cable Phone is BETTER (Score:1)
Actually, my cable phone service (From the local provider - Jones Communications) is BETTER than the RBOC's (Bell Atlantic). There was a major fiber cut next door to my apartment complex. The BA phones throughout the complex were out for three days. But those of us on the Jones phones had full service with no interruption because it was on a separate fiber and exit point from the BA circuits.
That, and my two phone lines with Jones are cheaper than my one line used to be with BA.
Let's hear it for (albeit limited) local loop competition!
Re:And I should care because? (Score:1)
So you basically want more bandwidth, but don't want to pay for it. You could get pretty much all the bandwidth you want by purchasing a DirectPC dish. That would've been illegal before the breakup of AT&T. Oh, wait, it COSTS MONEY to get that bandwidth.
I guess it's just easier to complain.
Ah well...back to keeping the 'Net humming. It's what we Virginia Crackers do, you know.
Re:Whatever ... (Score:1)
Pretty soon here I'm planning on seeing how well USWest ADSL works to my place. A little finger-crossing, a little cash, and (I expect) a lot of swearing should do the trick.
Sam Jooky
Re:OK, what now? (Score:1)
Not if we develop cold fusion.
Re:OK, what now? (Score:1)
Hemp or Bran?
Later
Erik Z
Re:Uh, you miss the point totally... (Score:1)
(Whiney geek voice)
But I waaaannnt one!
OK, seriously, the backbone MAY benefit. Who really benefits is Ma Bell. Don't think she's going to give this tech to other companies. It can push FOUR times the data through existing lines. This means fewer stations need to be built and maintained - more profit gained by selling the same amount of bandwidth at the same price. And with all this extra bandwidth, they can undercut their competitors by quite a bit.
Geez, all that bandwidth, if I get one for the home, I may have enough room to try teleportation.
All my new friends would be short and skinny, less transmission time.
Later
Erik Z
And I should care because? (Score:1)
Nortel was first off the mark (Score:1)
http://www.quicken.com/investments/news/story/p
and
http://www.quicken.com/investments/news/story/p
Willing to help out... (Score:2)
Seriously though, it will be nice to watch phone prices come into the penny-a-minute range soon, maybe...
At 6-10 bucks an hour, phone calls to my girlfriend add up fast...
In another random comment: when high bandwidth comes to the house, do you think we will have a digital subchannel for voice (like ISDN? I think that's how it works...) or TCP/IP phoning?
Schmerabits. Let's talk "to the curb"! (Score:1)
For the moment, let's just say that there isn't a PC in the world that can talk to anything faster than a T3 natively. So you mux a few dozen or a few hundred T3s onto a fiber. So what? The computer doesn't care. But now you have the ability to add more computers at either end of the pipe. What we need is for more backbone providers to do detailed packet analysis and purchase more circuits between the heavy-traffic areas of their networks. We need looser peering agreements that let packets follow the more logical geographical route. We need fatter NAPs that can handle more packets, because the circuits to connect them are relatively cheap and plentiful.
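For scale, a quick Python sketch of how many T3s fit on a wavelength. The OC-48 rate is an assumed reference point for today's backbone wavelengths, not a figure from the post:

    # T3 circuits per wavelength: a typical backbone rate vs. the 1.6 Tb/s demo.
    t3_bps = 44.736e6      # standard T3 rate
    oc48_bps = 2.488e9     # assumed current backbone wavelength rate
    demo_bps = 1.6e12      # the Bell Labs demonstration
    print(f"OC-48:    ~{oc48_bps / t3_bps:.0f} T3s")      # ~56
    print(f"1.6 Tb/s: ~{demo_bps / t3_bps:,.0f} T3s")     # ~35,800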
Then in order to make it all work, we need to provide faster connectivity from the ISP's POPs to the customers. As an AC pointed out, most of the copper buried in the ground dates from the 60s or earlier. Lots of it is cotton-and-wax insulated with a lead outer sheath. The telco will never figure this out. Think wireless.
Re:terabits...gigahertz what else? (Score:1)