End-user experience and prominent use cases of robust interplanetary internet

Having recently learned about the Interplanetary Internet in development by NASA's Jet Propulsion Laboratory, and inspired by Kim Stanley Robinson's novel 2312, I began to wonder what a dense information network spanning the whole Solar System would actually look like in practice.

Question

  • What would a typical user use such a network for?
  • How reliably would it work?
  • What would the expected bandwidths be?
  • Would using it be exorbitantly expensive?
  • And how would it actually feel to use such a system?

Considerations

First of all, the network infrastructure would be there: relay clusters in various orbits around the planets and the Sun, some even around dwarf planets or asteroids.

Second, the methods of transmission would be ones currently known to us: laser and radio (and data mules, where necessary).

Third, there would be a significant diaspora of the human race all over the Solar System. Though travel is relatively cheap, it is not necessarily convenient.

EDIT (14.8.2016), bandwidth hints: NASA tested a Moon–Earth laser broadband link in 2013, achieving a nice 622 Mbps with a puny, low-powered device. They are testing a more advanced setup in 2017: "The LCRD will be capable of shifting 1.25Gbps of encoded traffic, or 2.88Gbps of uncoded data using laser equipment that is just four inches long and which uses considerably less power than a radio communications system." Link to article. Exciting times!

This post was sourced from https://worldbuilding.stackexchange.com/q/51549. It is licensed under CC BY-SA 3.0.

1 answer

Actually, there is a network with properties similar to those that would likely be seen on an interplanetary version of the Internet. We can use it for comparison.

It's called FidoNet.

FidoNet uses a store-and-forward architecture, with batch processing of messages and requests, to cope with the high cost of long-distance transfers. It has a highly hierarchical address structure in which node addresses encode information about each node's location. Communication between nodes has historically been over dial-up modem links, but Internet links have also been used.
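
To make the addressing concrete: a FidoNet address has the form zone:net/node, optionally followed by .point, so the address itself tells you where a node sits in the hierarchy. A minimal parsing sketch (the helper name and comments below are mine, not part of any FidoNet standard):

```python
# Minimal sketch: split a FidoNet-style address (zone:net/node.point)
# into its hierarchical components.
import re

ADDRESS_RE = re.compile(r"^(\d+):(\d+)/(\d+)(?:\.(\d+))?$")

def parse_fidonet_address(address: str) -> dict:
    """Split an address such as '2:201/329.5' into its components."""
    match = ADDRESS_RE.match(address)
    if match is None:
        raise ValueError(f"not a FidoNet address: {address!r}")
    zone, net, node, point = match.groups()
    return {
        "zone": int(zone),   # continent-scale region (1 = North America, 2 = Europe, ...)
        "net": int(net),     # regional network within the zone
        "node": int(node),   # individual system within the net
        "point": int(point) if point else 0,  # optional sub-node; 0 = the node itself
    }

print(parse_fidonet_address("2:201/329.5"))
# {'zone': 2, 'net': 201, 'node': 329, 'point': 5}
```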

The three main services provided by FidoNet are netmail, echomail and file requests (freqs).

Netmail is mainly one-to-one communication, similar in principle to Internet e-mail. It is what all other services provided over FidoNet are built on top of, similar in a way to how everything on the Internet is built on top of IP.

Echomail is one-to-many communication, similar in principle to Usenet or, in later days, web forums and question-and-answer sites.

A freq is usually person-to-system communication, for the purpose of obtaining files made available by a remote system. Freqs were often, but not exclusively, used for software distribution, so that each system didn't have to keep everything locally available. At a time when significant storage capacity came at a hefty cost, this reduced the up-front cost of setting up a node, but converted it into an ongoing cost of transferring the files users requested. Because of those ongoing costs, excessive freq'ing was commonly seen as disrespectful.

Because FidoNet used store-and-forward techniques and often multiple hops, delivery times of hours or days were not uncommon. Because dial-up links were the norm during FidoNet's days of glory, it was common to read and write messages offline, then connect briefly to transfer anything new in a single batch, rather than tie up the phone line (and prevent others from reaching the node you were on). Several specialized software packages provided nice, relatively user-friendly interfaces for managing netmail, echomail and freqs. Systems often exchanged messages during the night, when expected usage was lower and, often, so was the cost.

FidoNet also allowed for "crashmail", which was generally reserved for high-priority traffic. Crashmail was identical to netmail, except that it requested that every system it passed through forward it as quickly as possible. Unwarranted use of crashmail was seen as exceedingly rude, because it imposed an additional cost on every system administrator along the delivery path, but it did have legitimate uses. Some systems disallowed crashmail and treated it as regular netmail.

See the similarities to how an interplanetary Internet might realistically function?

  • Links are intermittent. Deploying enough nodes to always guarantee a direct path from one endpoint to another would likely be prohibitively expensive; nodes are bound to go offline every now and then for any of a plethora of reasons; and nodes will sometimes be busy handling (possibly higher-priority) traffic to or from a different node. Designing around a store-and-forward architecture reduces the impact of such intermittency on the end user: a message simply takes slightly longer to be delivered, and trace data (similar to e-mail's Received: headers) may show that it took an unexpected path toward its destination, or was even re-routed while in transit.
  • Speed-of-light propagation delay is considerable. On Earth, even a one-second propagation delay is a long time; on an interplanetary Internet, light takes more than a second just to get from Earth to the Moon, let alone carry an acknowledgement back. Any form of interactive use will be prohibitively slow, so batch, likely message-based, processing makes sense.

Combine these two, and we get a network based around the idea of taking some kind of "message" or "package", accepting responsibility for its delivery, and arranging for its eventual delivery to a base station near the recipient (where "near" might mean "on the same planet"), from where it is routed in a manner more suitable for planet-local traffic. On the interplanetary legs, the traffic could be routed over a variety of links, depending on its priority and which links are currently online and available. Correspondingly, users may be charged different rates for different-priority traffic, and some ultra-high-priority classes may be reserved for certain users of the network, or for the network itself. At the other extreme, the lowest-priority traffic might simply piggyback on a transport spacecraft, with everything that implies for delivery times.
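
As a rough illustration, a relay node in such a network might look something like the hypothetical sketch below. RelayNode, the numeric priorities and the link model are all invented for this example, not taken from any real protocol:

```python
# Hypothetical store-and-forward relay: messages are accepted into custody,
# queued by priority, and forwarded only when a link toward the next hop is up.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    priority: int            # lower number = more urgent ("crashmail"-like)
    seq: int                 # tie-breaker so equal priorities stay FIFO
    destination: str = field(compare=False)
    payload: bytes = field(compare=False)

class RelayNode:
    def __init__(self, name: str):
        self.name = name
        self.queue: list[Message] = []   # heap ordered by (priority, seq)
        self.counter = itertools.count()

    def accept(self, destination: str, payload: bytes, priority: int = 9) -> None:
        """Take custody of a message; it stays queued until a link is available."""
        heapq.heappush(self.queue, Message(priority, next(self.counter), destination, payload))

    def flush(self, links_up: set[str]) -> list[Message]:
        """Forward every queued message whose next hop is currently reachable."""
        sent, kept = [], []
        while self.queue:
            msg = heapq.heappop(self.queue)
            (sent if msg.destination in links_up else kept).append(msg)
        for msg in kept:
            heapq.heappush(self.queue, msg)
        return sent

relay = RelayNode("mars-orbit-1")
relay.accept("earth", b"routine telemetry")
relay.accept("earth", b"medical emergency", priority=0)
print([m.payload for m in relay.flush({"earth"})])
# [b'medical emergency', b'routine telemetry'] -- higher priority leaves first
```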

Under the hood, strong cryptography and advanced compression and error-correcting algorithms will very likely be used to detect and correct data corruption, reduce the amount of data that needs to be transmitted, ensure data privacy against eavesdroppers, and ensure that the correct user is billed for their own traffic and not anybody else's, among other possible uses. Remember that at anything resembling interplanetary distances, bandwidth is at a large premium (the terrestrial connection to my home could pretty much saturate the roughly one gigabit per second you mention NASA toying with for next year's LCRD, if I simply spoke to my ISP and paid for more upstream bandwidth), and retransmissions are very expensive for the network, so there are strong incentives to reduce both as much as possible. All of this will be transparent to the user of the network, who will simply see their messages delivered and billed for.
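
A toy version of that pipeline, using only Python's standard library: zlib supplies the compression, and a SHA-256 digest makes corruption detectable. A real network would add an actual cipher and forward error correction, both of which are omitted here:

```python
# Toy "compress, protect, verify" pipeline: detect (not correct) corruption,
# and shrink the payload before it crosses an expensive link.
import hashlib
import zlib

def pack(payload: bytes) -> bytes:
    """Compress the payload and prepend a digest so corruption is detectable."""
    compressed = zlib.compress(payload, 9)
    return hashlib.sha256(compressed).digest() + compressed

def unpack(frame: bytes) -> bytes:
    """Verify the digest, then decompress; raises if the frame was corrupted."""
    digest, compressed = frame[:32], frame[32:]
    if hashlib.sha256(compressed).digest() != digest:
        raise ValueError("frame corrupted in transit; request retransmission")
    return zlib.decompress(compressed)

frame = pack(b"science data " * 1000)
print(len(frame))          # far smaller than the 13000-byte input
assert unpack(frame) == b"science data " * 1000
```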

Thus we can answer your questions.

What would a typical user use such a network for?

Batch- or message-oriented communications. It takes too long to deliver anything for real-time use, so once you leave your own planet's planetary network (which might well include that planet's moons and orbiting spacecraft), you give up getting an immediate response.

Thus, for interplanetary traffic, the user experience will be more like sending paper mail, posting on a web forum, or sending an e-mail and waiting for a reply, than like the back-and-forth of instant messaging or video chat. If adequate bandwidth is available, it's certainly possible to send images, audio or video back and forth, but directly interacting with the person or system at the other end will generally not be practical, simply because of the latency inherent in the physical distances involved, let alone the further delays when no complete, direct path exists between the two endpoints at the time.
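
To put numbers on that latency, here is a quick back-of-the-envelope script; the distances are rough, and Mars in particular swings widely with orbital position:

```python
# One-way light delays over approximate interplanetary distances.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

ONE_WAY_KM = {
    "Earth -> Moon": 384_400,
    "Earth -> Mars (closest)": 54_600_000,
    "Earth -> Mars (farthest)": 401_000_000,
    "Earth -> Saturn (average)": 1_430_000_000,
}

for route, km in ONE_WAY_KM.items():
    seconds = km / C_KM_PER_S
    print(f"{route}: {seconds / 60:7.1f} min one way, {2 * seconds / 60:7.1f} min round trip")
```

Mars alone ranges from about 3 to 22 minutes one way, and a single round trip to Saturn eats well over two hours, before queueing and relaying add anything on top.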

How reliably would it work?

A store-and-forward network can be very reliable (almost arbitrarily reliable), especially if the individual nodes and links are sufficiently dependable and the hops are short enough that immediate confirmation from the next node is reasonable. Because every node retains a message at least until the next node along the way confirms that it has been received and has passed all relevant checks, a message can always be retransmitted, possibly through a different node or path, should the need arise. Borrowing from one approach to mitigating the Two Generals' Problem, high-priority traffic can be sent along multiple paths simultaneously, to improve the chances of one copy making it through quickly even if a node becomes unavailable while the message is en route. The nodes would likely be made sufficiently autonomous to determine for themselves the most appropriate "next" (closer to the ultimate destination) node to send the data to, which would let the network gracefully handle nodes becoming unavailable while data is in transit.
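
The custody rule itself is almost trivial to express. In the sketch below, send_over_link() is an invented stand-in for a real link that returns once the next hop acknowledges receipt:

```python
# Hop-by-hop custody: a node keeps its copy of a message until some next hop
# acknowledges it, so a lost transmission is simply retried (here by rotating
# through alternate relays).
import random

def send_over_link(link: str, message: bytes) -> bool:
    """Pretend link: returns True when the next hop acknowledges the message."""
    return random.random() > 0.3   # 30% simulated loss per attempt

def forward_with_custody(message: bytes, candidate_hops: list[str]) -> str:
    """Keep the message until some next hop acknowledges it; rotate hops on failure."""
    attempt = 0
    while True:
        hop = candidate_hops[attempt % len(candidate_hops)]
        if send_over_link(hop, message):
            return hop             # custody transferred; local copy may be dropped
        attempt += 1               # message is still ours; try again / try elsewhere

print("custody passed to:", forward_with_custody(b"hello mars", ["relay-a", "relay-b"]))
```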

What would the expected bandwidths be?

Impossible to say. Ultimately, the limiting factor will probably be the Shannon-Hartley theorem, which gives the maximum theoretical information rate of a communications channel of a given bandwidth and a given signal-to-noise ratio. We can improve the S/N ratio by increasing power to the transmitter, but that costs energy. This is one of the places where different classes of traffic may be employed; a high-priority message may warrant using some reserve battery capacity to increase the transmitter output power, to help ensure its successful delivery, at the cost of reduced ability to do that again in the immediate future (until the batteries have been recharged from whatever primary electricity source, such as solar panels or RTGs, that the node uses).
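
For reference, the theorem bounds the capacity $C$ (bits per second) of a channel with bandwidth $B$ (hertz) and signal-to-noise ratio $S/N$:

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

As a worked example: a 1 MHz channel at 30 dB ($S/N = 1000$) tops out at about $10^6 \cdot \log_2 1001 \approx 9.97$ Mbit/s, while doubling the transmitter power to $S/N = 2000$ only raises that to about 10.97 Mbit/s. Throwing energy at the problem gives diminishing returns, which is exactly why a node would reserve its power budget for traffic that warrants it.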

Would using it be exorbitantly expensive?

Not necessarily, but that depends very much on your definition of "exorbitantly". As I have already said, different traffic priorities could be charged at different rates, and the user would select the priority level appropriate for the message they are sending or the request they are making. The vast majority of traffic would likely use a "bulk" classification of some kind, meaning best-effort delivery whenever the network would otherwise be idle, with no real delivery-time guarantees. Higher-priority classes would be used for any traffic that requires some form of expedited delivery, much like choosing between priority mail and economy mail when sending a postal package.

The major cost for something like this will be up-front, in deploying the large number of nodes that will be required to provide reasonable latencies. That cost will need to be recouped somehow, and it's likely that user fees and data transfer fees will be a major part of how the cost of establishing the network is recovered. Because in your world "travel is relatively cheap" but "not necessarily convenient", the cost of establishing the network might be lower than it would be in our world, and the ultimate cost to the end user for using the network should, in an ideal world, reflect that lower cost to the network operator. There will be ongoing costs for replacing nodes that become unusable for various reasons, but with some planning ahead, those costs can be spread out over time.

How would it actually feel to use such a system?

You would consider everything you send rather carefully. Not only because even in the best of cases delivery can easily take hours (and there will likely be no guaranteed way to recall or change a message once you have sent it), but also because you lose much of the back-and-forth of today's Earth-bound Internet, where traffic round-trip times are measured in fractions of a second.

Using an interplanetary network will probably feel more like e-mail, or FidoNet, or even mail order, than like casually browsing the web, following whatever looks interesting. Planet-local storage and "package" preparation will be an absolute requirement for a reasonable end-user experience.
