
Starbuilding: What is lacking in the logic behind the Cosmos 2 star system generation algorithm?


Preamble

Cosmos II is a supplement to the Alternity dice-based role-playing game. It uses heuristics grounded in hard science to 'build' plausible star systems for the game: http://www.alternityrpg.net/resources/1375/original/cosmos-2.pdf

The algorithm* itself is the center of this question: I would like to know what its shortcomings are, beyond the quantization of results that comes from representing probabilities with discrete dice. *: 'Algorithm' here means the steps, and the assumptions those steps make, that turn randomness into a plausible star system.

For the purposes of this question, the dice mechanics themselves are not useful, and are in fact counterproductive when analyzing the correctness of the ideas. The following website can ease the pain the dice notation causes while reading the document: Anydice.com.

The other main shortcoming of the algorithm is that it uses the GRAPH method as a magnitude scale in many cases. The equivalences to real-world units are here: http://www.warrensburgweb.net/alternity/system/GRAPH.html

Question

What does the logic of the Cosmos 2 'algorithm' lack in terms of plausible science and considerations taken into account?

What would be the right way to predict each quantity from the information known up to that point in the algorithm?

I'd like to know what it gets blatantly wrong, what it ignores, and what it gets partially wrong, but only in cases where the error pretty much destroys the whole model. Things like: the model does not take into account the solar wind, and at this distance, planet N will be stripped of its atmosphere. (Pretty big deal! That precise example is already covered well enough, I think.)

An answer pointing something out should also explain how the conclusion is reached: by an existing formula, by a real-life example OUTSIDE EARTH (a biosphere chemically changes everything), or by a simplification that is good enough given the known parameters. The formula can be either actual science or a plausible assumption that fits our current understanding of the universe.

Off-limits

There are some details that are not covered in the algorithm, and thus are arguably shortcomings of the model, that I'm not at all interested in including.

I'm not interested in:

  • Strange orbits other than the ones depicted in the original text.
  • Oort cloud details or Kuiper belt like asteroid belts.
  • Trojan belts or Trojan planetoids (other than those already detailed in the document).
  • Topological details of the surface of any orbital body.
  • Biosphere details, assumptions or considerations of allegedly habitable exo-planets.
  • Basically anything that does not concern the formation of an orbital body or its stable distribution around a star.

Other details, which I already know are not covered well enough in it, but I'd like to get answers about:

  • Better atmosphere content prediction that includes chemical composition (when an atmosphere exists), not the shallow GRAPH system.
  • Better hydrosphere content prediction (when a hydrosphere exists).
  • Better surface composition (when not talking about gas or ice giants).
  • Core composition on planetoid and lesser bodies (< 900km).

What is the use of such knowledge?

I'll implement the algorithm programmatically, replacing the 'dice throws' with real probability distributions and including all the corrections that end up being added as answers to this question. Woohoo! Free plausible star systems for everyone! A net win for all the world's daydreamers.

Answers

Feel free to post a partial answer, i.e. one addressing only a single error that you know of. I will accept the answer that provides either the most corrections to the given model or the most complete treatment of a deep and impactful wrong assumption made by Mark Peoples, the author of Cosmos 2.

The hard-science tag is not gratuitous, and I will not accept implausible made-up things or dodgy assumptions, regardless of how awesome, amusing, or hilarious they are.

Note: I know this is BROAD! But the answers are very specific.

This post was sourced from https://worldbuilding.stackexchange.com/q/41216. It is licensed under CC BY-SA 3.0.


1 answer


This is essentially a partial answer, insofar as it's a series of loosely chained together critiques. Sometime in the future, I'll revise it so it's more organized, but for now, it's a work in progress. I apologize for any and all problems regarding readability.


On page 14, Peoples gives some dice parameters to determine the numbers of three types of systems (which he denotes Giant, Major, and Minor systems). I'll admit that I don't fully understand the parameters he gives, but I do know that it would be much simpler to use a version of the initial mass function (IMF), $$\xi(M)=\xi_0M^{-\alpha}$$ which, integrated between masses $M_1$ and $M_2$ (with $M_2>M_1$), gives the fraction of stars in that mass range. The value of $\alpha$ varies in different mass ranges (the originator of the IMF, Edwin Salpeter, originally used $\alpha=2.35$ for all ranges). Kroupa (2001) gave one detailed study, with the following values of $\alpha$: $$\alpha=\begin{cases} 0.3\pm0.7,&\quad0.01\leq M/M_\odot<0.08\\ 1.3\pm0.5,&\quad0.08\leq M/M_\odot<0.50\\ 2.3\pm0.3,&\quad0.50\leq M/M_\odot<1.00\\ 2.3\pm0.7,&\quad1.00\leq M/M_\odot\\ \end{cases}$$ where $M_\odot$ is the mass of the Sun. All you have to do is integrate $\xi$ between $M_1$ and $M_2$ to find the fraction of stars (and thus systems) in that range of masses. Here are your (Toxyd's) results, assuming $\xi_0=1$:

$$\text{IMF Results}$$ $$\begin{array}{|c|c|c|c|c|} \hline \text{Class}&M_1(M_\odot)&M_2(M_\odot)&\text{Result}&\text{Percentage}\\ \hline \text{B (Dwarf)}&0.01&0.08&0.1869&3.7942\%\\ \hline \text{M}&0.08&0.50&3.0075&61.1350\%\\ \hline \text{K}&0.50&0.80&0.8660&17.3606\%\\ \hline \text{G}&0.80&1.04&0.2971&6.0393\%\\ \hline \text{F}&1.04&1.40&0.2343&4.7608\%\\ \hline \text{A}&1.40&2.10&0.2035&4.1366\%\\ \hline \text{B}&2.10&16.00&0.2723&5.5352\%\\ \hline \text{O}&16.00&150.00&0.0198&0.4029\%\\ \hline \text{Total}&0.01&150.00&4.9144&100.0000\%\\ \hline \end{array}$$

Note that these numbers are not perfectly representative of the decrease in stellar fractions with increasing mass, because the mass bins are different sizes.
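As a cross-check, here is a short Python sketch that reproduces the table above. It assumes, as the table does, that $\xi_0=1$ in every segment (a properly continuous Kroupa IMF would match the normalizations at the segment boundaries):

```python
def kroupa_alpha(m):
    """Kroupa (2001) exponent for the segment containing mass m (in M_sun)."""
    if m < 0.08:
        return 0.3
    if m < 0.50:
        return 1.3
    return 2.3

def class_integral(m1, m2):
    """Integral of M**(-alpha) from m1 to m2 with xi_0 = 1, as in the table.
    Assumes [m1, m2] does not straddle a segment boundary."""
    a = 1.0 - kroupa_alpha(m1)
    return (m2**a - m1**a) / a

classes = [("B (Dwarf)", 0.01, 0.08), ("M", 0.08, 0.50), ("K", 0.50, 0.80),
           ("G", 0.80, 1.04), ("F", 1.04, 1.40), ("A", 1.40, 2.10),
           ("B", 2.10, 16.00), ("O", 16.00, 150.00)]
total = sum(class_integral(m1, m2) for _, m1, m2 in classes)
for name, m1, m2 in classes:
    r = class_integral(m1, m2)
    print(f"{name:10s} {r:7.4f} {100 * r / total:9.4f}%")
```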

For those visual learners, here's a modified version of Kroupa's Figure 2 (from here):


Here, $\Gamma=\alpha-1$, and is often used in place of $\alpha$.

I wrote a short Python script that generates stellar populations according to the Salpeter IMF. Here's an output of it. I plotted ten stars on a graph of their masses and which bin they were in (out of ten bins). I used an upper limit of $100M_{\odot}$.
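The script itself isn't reproduced here, but a minimal sketch of the same idea looks like this. It inverts the power-law CDF directly (inverse transform sampling); the ten log-spaced bins and the $0.1M_{\odot}$ lower limit are my assumptions:

```python
import numpy as np

rng = np.random.default_rng()

def sample_salpeter(n, m_min=0.1, m_max=100.0, alpha=2.35):
    """Draw n stellar masses (in M_sun) from a Salpeter IMF by inverting
    the power-law CDF (inverse transform sampling)."""
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (m_min**a + u * (m_max**a - m_min**a))**(1.0 / a)

masses = sample_salpeter(10)
bins = np.logspace(np.log10(0.1), np.log10(100.0), 11)  # ten log-spaced bins
for m, b in zip(masses, np.digitize(masses, bins)):
    print(f"{m:8.3f} M_sun -> bin {b}")
```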


Notice how most of the stars are less than $1M_{\odot}$. This is partly an artifact of randomization and partly because the Salpeter IMF isn't fantastic for low-mass stars. Still, it shows how dominant low-mass stars are.


On page 21, Peoples writes

An important thing to bear in mind is that most Newborn systems will be within open star clusters or even embedded in emission nebulae.

This might be a bit of an understatement. Most stars with protoplanetary disks are less than $\sim10^6$ years old, as shown by Figure 1 of Mamajek (2009):

That said, being in a binary system may influence the dissipation of the disk (see Daemgen et al. (2015)).

Molecular clouds, where stars are born, have varying lifetimes; Murray (2010) found a mean lifetime of $(17\pm4)\times10^6$ years. By comparison, open clusters may continue to be gravitationally bound for much longer. The upshot is that, assuming a constant star formation rate, the vast majority of stars with disks that formed in open clusters will still be in open clusters when the disks finally dissipate, while a smaller fraction will still be in molecular clouds (which may contain young open clusters). That said, the number of stars in a cluster may influence the lifetime of a disk (see this presentation): larger clusters mean that a star will undergo close encounters more frequently, which can rob the disk of material.


On page 27, Peoples discusses different orbital ranges, dividing the system up into four concentric rings. I don't think this is a great idea. To be frank, I don't like how he categorizes things in general, because there are seldom discrete boundaries; a protoplanetary disk can initially be approximated by a fairly smooth curve.

It might be nice to go through a statistical treatment of planetary formation, as you're planning on creating probabilistic algorithms. An interesting attempt is Hasegawa & Pudritz (2013). They calculate something they call the planet formation frequencies (PFFs), the number of planets in a given zone: $$\text{PFFs}(\text{Zone i})=\sum_{\eta_{\text{acc}}}\sum_{\eta_{\text{dep}}}w_{\text{mass}}(\eta_{\text{acc}})w_{\text{lifetime}}(\eta_{\text{dep}})\text{SPFFs}(\text{Zone i}, \eta_{\text{acc}},\eta_{\text{dep}})$$ where $w_{\text{mass}}$ and $w_{\text{lifetime}}$ are Gaussian weight functions over the disk parameters $\eta_{\text{acc}}$ and $\eta_{\text{dep}}$, and $\text{SPFFs}$ are the specific planet formation frequencies, a function of the same parameters. It is important to note that the five zones do separate planets into different groups (e.g. Zone 1 consists of hot Jupiters), but they can be adjusted as needed. Interestingly enough, the idea of zones is actually quite similar to Peoples' analysis.
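The double sum itself is trivial to implement once you have SPFF values. Here's a sketch in which the parameter grids, Gaussian widths, and SPFF table are all placeholders; the real values come from the disk models in Hasegawa & Pudritz (2013):

```python
import numpy as np

def gaussian_weights(grid, mu, sigma):
    """Discrete Gaussian weights over a parameter grid, normalized to sum to 1."""
    w = np.exp(-(grid - mu)**2 / (2 * sigma**2))
    return w / w.sum()

# Placeholder grids and SPFF table, for illustration only
eta_acc = np.linspace(-1.0, 1.0, 5)
eta_dep = np.linspace(-1.0, 1.0, 5)
spff_zone1 = np.ones((5, 5))  # stand-in SPFFs for Zone 1

pff_zone1 = np.einsum("i,j,ij->",
                      gaussian_weights(eta_acc, 0.0, 0.5),
                      gaussian_weights(eta_dep, 0.0, 0.5),
                      spff_zone1)
print(pff_zone1)  # with uniform SPFFs this collapses to 1.0
```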

Other good statistical analyses are Benz et al. (2014) and Hernández-Mena & Benet (2010). Williams & Cieza (2011) and D'Angelo & Podolak (2015) are also excellent works on protoplanetary disks and early planet formation.


Starting on page 33, Peoples discusses exoplanet composition. When describing the different divisions of exoplanets he created, he gives a mass range in which each type of exoplanet can exist. For example, his "Nerean" planets with global oceans have masses between $0.3$ and $1.0M_{\oplus}$, where $M_{\oplus}$ is the mass of the Earth. In my educated-but-totally-not-expert opinion, this is a bad idea. While some have given firm limits to the size of a terrestrial planet, as I discussed in this answer (Lopez & Fortney (2013) suggest $1.5M_{\oplus}$ as the upper bound for super-Earths and $2.0M_{\oplus}$ as the lower bound for mini-Neptunes, with a transition region in between), the precise composition of planets follows a continuous distribution. In fact, as Wikipedia notes, some advocate for a completely continuous distribution, with no boundaries whatsoever (see Schlaufman (2014)).

Empirical evidence hasn't helped much. Scientists have discovered thousands of exoplanets, but most have been massive gas planets like Jupiter and Saturn. That's likely just observational bias, because these planets are easier to detect using the transit or radial velocity methods. This means that we have comparatively little data to show that there is a continuum of composition, because the composition of a terrestrial planet is generally difficult to determine.

That said, the evidence we have so far seems to support the continuum hypothesis. For fun, I went to exoplanets.org and used their data to plot the exoplanets we have sufficient data for on a graph of mass vs. density (we don't yet have both mass and radius for every exoplanet, so this is obviously not all the exoplanets we know of). I set the limits of mass between $0.01M_J$ and $10M_J$, where $M_J$ is the mass of Jupiter, and the limits of density between $0.001\text{ g/cm}^3$ and $100\text{ g/cm}^3$, using a log-log scale. I then added in the planets of the Solar System, the limits given by Lopez & Fortney, and the central density values used in the theoretical polytropic models of Seager et al. (2007), which is a fantastic reference for modeling terrestrial planets of different compositions. This is the result:

The continuous downward trend of density with increasing mass until somewhere in the range of $7$-$8M_J$ should show that there isn't really a density cutoff; if there were, we should see some discontinuities. Now, the continuum may have some "jumps", because there may well be some hard limits, but on the whole, this doesn't seem likely.


These are my main critiques of the whole document. That said, on the whole, Peoples' algorithm seems to have solid scientific footing. My main concern is that it's ten years old, and a lot of progress has been made since then when it comes to exoplanets and planetary modeling (especially with later modifications of the Nice Model). I recommend finding a couple of more recent papers on the subject of planet formation, to better understand the topic.

The other thing that bothered me consistently - and that made the algorithm hard to analyze - was Peoples' naming system. The vast majority of the terms he uses - see, for example, how he divided up the planet types - are not adopted at all by the scientific community. They're arbitrary labels with virtually no significance, and they make it extremely difficult to do further reading on a given subject, because the names are not used elsewhere. So if you do come up with your own algorithm, I would advise trashing the Cosmos II names and just using the ones scientists use.


Algorithm

Inspired by Jim2B's awesome answer, I decided to create my own probabilistic algorithm. It turns out that there's a lot more research done in the area of statistical modeling of planetary systems than I originally thought, which is quite helpful. Along with some of the standard tools in the astrophysical toolkit, I was able to come up with a relatively simple algorithm that you should be able to implement, if you so desire.

Here's the algorithm:

  1. Using a stellar IMF, determine the mass of your star.
  2. Compute various other properties of the star that depend on its mass: radius (when on the main sequence), lifetime, temperature, rotation rate, etc.
  3. Determine a reasonable value for the mass of the protoplanetary disk, based on the mass of the star and other factors (e.g. a strong stellar wind). From this, calculate the disk's properties, including its density profile and radial temperature.
  4. Calculate the orbital and basic physical properties of the planets, based on stellar and disk mass and metallicity.
  5. Refine the properties of the planets, given mass-radius relations and other models.
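To make the flow concrete, here's a deliberately crude end-to-end sketch in Python. Every model in it (the IMF exponent, the mass-luminosity and mass-radius exponents, the 1% disk-to-star mass ratio, and the planet count and mass ranges) is a stand-in for the more careful treatments in the Details sections below:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_stellar_mass(m_min=0.08, m_max=100.0, alpha=2.35):
    """Step 1: invert the Salpeter IMF CDF to sample one stellar mass (M_sun)."""
    a = 1.0 - alpha
    u = rng.uniform()
    return (m_min**a + u * (m_max**a - m_min**a))**(1.0 / a)

def stellar_properties(m):
    """Step 2: crude main-sequence luminosity and radius, in solar units."""
    lum = m**4 if m < 2.0 else 1.4 * m**3.5
    return {"M": m, "L": lum, "R": m**0.9}

def disk_mass(m):
    """Step 3: assume the disk carries ~1% of the stellar mass."""
    return 0.01 * m

def draw_planets(n_max=8):
    """Step 4: toy model with log-uniform semi-major axes and masses."""
    n = int(rng.integers(1, n_max + 1))
    a = np.sort(10**rng.uniform(-1.5, 1.5, n))   # ~0.03 to ~30 AU
    m = 10**rng.uniform(-1.0, 3.0, n)            # 0.1 to 1000 M_earth
    return list(zip(a, m))

star = stellar_properties(draw_stellar_mass())
print(star, disk_mass(star["M"]), draw_planets())  # step 5 would refine these
```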

Details

1. Stellar initial mass function (IMF)

I discussed the stellar IMF at the start of my analysis of the Cosmos II algorithm, and you did some basic calculations regarding the percentage of stars in a given mass range. Therefore, you should already be familiar with the canonical power law Salpeter IMF $$\xi(M)=\xi_0M^{-\alpha}\tag{1a}$$ and the equation for the number of stars with masses between $M_1$ and $M_2$: $$N=\int_{M_1}^{M_2}\xi_0M^{-\alpha}dM\tag{1b}$$ The values of $\alpha$ are given as before. The IMF is actually much more than these two equations. For a fantastic discussion, see Kroupa (2012) (which I have yet to read most of).

Make sure that if you're creating a population of stars - say, with one hundred stars in a given region of space - you normalize the IMF, such that $N=100$ when integrated over the entire mass range, the upper limit of which is up for debate. As Kroupa (2012) notes, this limit has been recalculated many times over based on different restrictions, including the Eddington limit (hydrostatic equilibrium), destructive pulsations, and protostar accretion. Recent observations suggest that there may be a limit of $\sim150M_{\odot}$ at all metallicities, so this may be your best choice.
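As a sketch of that normalization, assuming a single Salpeter exponent for simplicity (a Kroupa IMF would need piecewise integrals with normalizations matched at the segment boundaries):

```python
def xi0_for_population(n_target, m_min=0.076, m_max=150.0, alpha=2.35):
    """Choose xi_0 so the IMF integrates to n_target stars over [m_min, m_max]."""
    a = 1.0 - alpha
    integral = (m_max**a - m_min**a) / a  # integral of M**(-alpha) dM
    return n_target / integral

xi0 = xi0_for_population(100)  # normalize for a population of 100 stars
```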

The lower limit, of course, is the boundary between brown dwarfs and dim M-class red dwarfs, which is around 80 Jupiter masses, or roughly $0.076M_{\odot}$. Just like the boundary between planetary mass objects and brown dwarfs, though, this is fuzzy, and it's extremely difficult to come up with a cutoff point between the two. $0.076M_{\odot}$ is good enough.


2. Stellar properties

While there are many quantities you need to calculate here - the exact number varies depending on how specific you want to be - most of the equations are extremely simple. I assume you're not trying to compute an exact model of the interior of each star, which would require dealing with the equations of stellar structure and/or the Lane-Emden equation, neither of which is particularly easy (see e.g. here and here). If you want to study large quantities of stars (e.g. an open cluster), that is incredibly inefficient and unnecessary. I'll restrict this section to quantities that are much easier to estimate.

The first and easiest to determine is the stellar luminosity. For stars on the main sequence, we can use the mass-luminosity relation, which has the general form $$\left(\frac{L}{L_{\odot}}\right)=b\left(\frac{M}{M_{\odot}}\right)^a\tag{2a}$$ where, using the commonly quoted broken-power-law fit, $$a=\begin{cases} 2.3,&\quad0.01\leq M/M_\odot<0.43\\ 4.0,&\quad0.43\leq M/M_\odot<2.00\\ 3.5,&\quad2.00\leq M/M_\odot<55.0\\ 1.0,&\quad55.0\leq M/M_\odot \end{cases}$$ with $b\approx0.23$, $1.0$, $1.4$, and $32000$ in the respective ranges.

Next is the stellar radius. This is harder; as this answer explains, you really need a detailed stellar model to come up with an exact answer. The typical approximation used for main sequence stars is the following (see these notes): $$\left(\frac{R}{R_{\odot}}\right)\propto\left(\frac{M}{M_{\odot}}\right)^c\tag{2b}$$ where $$c=\begin{cases} \sim0.90,&\quad\text{low-mass stars}\\ \sim0.78,&\quad\text{intermediate-mass stars}\\ \sim0.50,&\quad\text{high-mass stars} \end{cases}$$ The values of this exponent vary based on the exact fit to the data set, as do the approximate cutoffs between mass ranges. The cutoff between high-mass and intermediate-/low-mass stars seems to fall around $1.7M_{\odot}$, if you use $c$ between $0.90$ and $1.00$ below it, as shown in Figure 13.10 here. It marks the shift in importance from the proton-proton chain reaction in low-mass stars to the CNO cycle in high-mass stars.

Last is surface temperature. We can approximate this using the Stefan-Boltzmann law: $$L=4\pi\sigma R^2T^4\tag{2c}$$ where $\sigma$ is the Stefan-Boltzmann constant. Rearranging, we get $$T=\left(\frac{L}{4\pi\sigma R^2}\right)^{\frac{1}{4}}$$ Note that we've already calculated all of the other quantities from the star's mass. The Stefan-Boltzmann law is an approximation, as it assumes the star is a perfect blackbody, but it is quite accurate; Wikipedia indicates that Stefan's estimates were only off by about 1%.
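Putting the three relations together, here's a sketch; the mass-luminosity coefficients are the textbook values quoted above, and the single radius exponent $c=0.9$ is a simplification:

```python
import numpy as np

SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN, R_SUN = 3.828e26, 6.957e8  # solar luminosity (W) and radius (m)

def luminosity(m):
    """Broken power-law mass-luminosity relation (eq. 2a); m in M_sun."""
    if m < 0.43:
        return 0.23 * m**2.3
    if m < 2.0:
        return m**4
    if m < 55.0:
        return 1.4 * m**3.5
    return 32000.0 * m

def radius(m, c=0.9):
    """R/R_sun ~ (M/M_sun)^c (eq. 2b); c = 0.9 suits low-mass stars."""
    return m**c

def temperature(m):
    """Effective temperature from the Stefan-Boltzmann law (eq. 2c)."""
    lum = luminosity(m) * L_SUN
    r = radius(m) * R_SUN
    return (lum / (4 * np.pi * SIGMA * r**2))**0.25

print(f"1 M_sun star: T ~ {temperature(1.0):.0f} K")  # ~5800 K, as expected
```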

These are the three easiest stellar parameters to estimate based solely on knowledge of the star's mass. Other factors, like metallicity, age, and exact chemical composition, also play roles, but the differences are not important for our approximations. There are, however, other quantities that you might want to compute, such as the star's rotation rate.

It's worth pointing out that, given its mass, you can easily look up the typical main-sequence values of the properties we just calculated, as well as things like rotation rate. If you're in a pinch, you can always try to find a well-studied real-life analogue of your star and base your numbers on it.


3. The protoplanetary disk

Protoplanetary disks are much more complex than you might imagine. Rather than having homogeneous temperature, density, and composition, their properties vary greatly both radially and vertically. Unfortunately, we have to make some guesses when modeling the early evolution of planet formation in a protoplanetary disk, because we cannot determine all of the disk's properties from the information we have about its parent star, but we can still make some reasonable assumptions.

I'll run through several interesting (and important) values you'll need to figure out. The first is the total mass of the disk. Williams & Cieza (2011) determined a range in disk mass for a given parent star mass. While the range itself is broad, covering a couple orders of magnitude, there appears to be a trend of increasing disk mass with increasing stellar mass, except for very massive stars, whose disks may dissipate thanks to strong stellar winds.

The density of the disk is another important factor. Historically, a simple truncated power law was favored, with surface density decreasing with radius (i.e. $\Sigma\propto R^{-p}$, with $\Sigma$ the surface density and $p>0$) out to a certain radius, and being zero after that. However, an exponentially tapered relation derived from physical assumptions has been shown to be more effective: $$\Sigma(R)=(2-\gamma)\frac{M_d}{2\pi R_c^2}\left(\frac{R}{R_c}\right)^{-\gamma}\exp\left[-\left(\frac{R}{R_c}\right)^{2-\gamma}\right]\tag{3a}$$ Here, $M_d$ is the disk mass, $R_c$ is a characteristic radius, and $\gamma$ describes the relationship between radius and viscosity, $\mu$: $$\mu\propto R^{\gamma}$$ $R_c$ is generally taken to be the radius within which about $2/3$ of $M_d$ is contained. Williams & Cieza note that Andrews et al. (2009, 2010b) found a correlation $$M_d\propto R_c^{1.6\pm0.3}$$ The density of the disk, $\rho$, is then simple to determine: $$\rho(R,Z)=\frac{\Sigma(R)}{\sqrt{2\pi}H}\exp\left[-\frac{Z^2}{2H^2}\right]\tag{3b}$$ where $H(R)$ is the scale height. $H(R)$ depends on the amount of radiation reaching the disk at radius $R$, and so a power law can be used: $$H\propto R^h$$ with $h$ roughly between $1.3$ and $1.5$.
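Equations (3a) and (3b) translate directly into code. In this sketch, the scale-height normalization $H_0$ at $1\text{ AU}$ and the flaring index are assumptions you'd tune to your disk:

```python
import numpy as np

def surface_density(R, M_d, R_c, gamma=1.0):
    """Tapered power-law surface density (eq. 3a).
    R and R_c in AU, M_d in M_sun; returns M_sun per AU^2."""
    x = R / R_c
    return ((2 - gamma) * M_d / (2 * np.pi * R_c**2)
            * x**(-gamma) * np.exp(-x**(2 - gamma)))

def density(R, Z, M_d, R_c, gamma=1.0, H0=0.05, h=1.3):
    """Vertical Gaussian density (eq. 3b), with flaring scale height
    H = H0 * R**h; H0 (scale height at 1 AU, in AU) is an assumption."""
    H = H0 * R**h
    return (surface_density(R, M_d, R_c, gamma)
            / (np.sqrt(2 * np.pi) * H) * np.exp(-Z**2 / (2 * H**2)))

print(density(R=5.0, Z=0.1, M_d=0.01, R_c=30.0))  # near-midplane density at 5 AU
```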

As lecture notes by Philip Armitage explain, the ratio $H/R$ determines how the disk "flares" at larger radii. He also calculates the radial temperature profile, derived from the stellar flux received, to be $$T\propto R^{-3/4}\tag{3c}$$ However, profiles with $T\propto R^{-1/2}$ fit some disks well. The choice is yours, but it really won't affect the formation of planets too much. The point is that temperature decreases with increasing $R$, as expected from the inverse-square law of flux.
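The temperature profile is a one-liner. The $280\text{ K}$ normalization (roughly the blackbody equilibrium temperature at $1\text{ AU}$ from a Sun-like star) is my assumption:

```python
def disk_temperature(R, T0=280.0, q=0.75):
    """Radial temperature profile T = T0 * R**(-q) (eq. 3c); R in AU.
    Use q = 0.5 for disks better fit by the shallower profile."""
    return T0 * R**(-q)
```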

It should be noted that protoplanetary disks will continue to evolve. Mass will be accreted by the star, and eventually collisions between small bodies will lead to the formation of planets and important changes in disk structure. Finally, the remains of the disk will dissipate entirely. The equations above will change over time; don't expect to be able to model the evolution of a disk without additional information.


4. Planets

There are a number of things we can try to determine about the planets in the system:

  • Number
  • Mass
  • Orbital parameters
  • Composition

Let's attack the orbital parameters first. Tremaine (2015) is an excellent treatment of statistical modeling of planetary orbits. He begins by coming up with a stability criterion for neighboring planets depending on their mutual Hill radius, an approximation of the region of space in which a body is gravitationally dominant. Next, he determines the planetary $n$-body distribution function, which distributes the semi-major axes $\mathbf{a}=(a_1, . . . , a_n)$ and eccentricities $\mathbf{e}=(e_1, . . . , e_n)$ in phase space: $$dp(\mathbf{a},\mathbf{e})=C(\pi^2\Omega_c^2\bar{a}^3)^nH(a_1-\bar{a}e_1-a_0-h_0)\prod_{i=1}^nda_i\,de_i^2\,H(a_{i+1}-\bar{a}e_{i+1}-a_i-\bar{a}e_i-h_i)\tag{4a}$$ where $C$ normalizes the function, $\Omega_c$ is the orbital angular speed, $\bar{a}$ is the radius of part of the system, $H(x)$ is the Heaviside step function, and $h_i$ depends on masses $m_i$ and $m_{i+1}$, and is given in Tremaine's Equation 4.
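Sampling Tremaine's full distribution takes some work, but the Hill-stability idea at its core is easy to encode. The sketch below rejection-samples log-uniform semi-major axes until every adjacent pair is separated by at least $\Delta$ mutual Hill radii; the spacing threshold $\Delta=10$ and the equal Earth-mass planets are illustrative assumptions, not Tremaine's actual model:

```python
import numpy as np

def mutual_hill_radius(a1, a2, m1, m2, m_star=1.0):
    """Mutual Hill radius of two neighbors; masses in M_sun, a in AU."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star))**(1.0 / 3.0)

def draw_stable_system(n, delta=10.0, m_planet=3e-6,
                       a_min=0.1, a_max=30.0, tries=10000,
                       rng=np.random.default_rng()):
    """Rejection-sample n semi-major axes until all adjacent pairs are
    separated by at least delta mutual Hill radii."""
    for _ in range(tries):
        a = np.sort(10**rng.uniform(np.log10(a_min), np.log10(a_max), n))
        r_hill = mutual_hill_radius(a[:-1], a[1:], m_planet, m_planet)
        if np.all(np.diff(a) > delta * r_hill):
            return a
    raise RuntimeError("no stable configuration found; relax n or delta")

print(draw_stable_system(5))
```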

Tremaine then makes a substitution for $H(x)$ to rewrite $dp(\mathbf{a},\mathbf{e})$. He then determines the characteristic function of the disturbing function: $$P_n(\mathbf{k},\mathbf{p})=\int dp(\mathbf{a},\mathbf{e})\exp\left[i\sum_{i=1}^n(k_ia_i+p_ie_i)\right]\tag{4b}$$ I'm not going to write out the result when this is expanded out, but I will note that the $n$-planet distribution function is the inverse Fourier transform of the characteristic function. For example, when $n=1$, $$p_1(a,e)=\frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}dkdp\exp\left[-i(ka+pe)\right]P_1(k,p)\tag{4c}$$ I'd like to note that Tremaine's results are derived for the time after the giant impact period in a planetary system, if there is one. Prior to and during the impact period, orbits are not yet stable, thanks to collisions and planetary migration (see also Scholarpedia).

We can also determine the masses of the planets. First, though, let's look at a mass-distance diagram, which shows the distribution of planet masses $m$ against semi-major axes $a$. A good example showing evolution is Figure 8 of Mordasini et al. (2009):

Figure 13 shows the final positions of 50,000 simulated planets:

When determining the masses of the planets, we can look at the planetary initial mass function (PIMF), the planetary analogue of the stellar IMF. Mordasini et al. (2012) give some interesting graphs in Figure 2:


Notice the large number of low-mass planets before the giant impact phase (as expected), as well as the increase around $\sim300$ Earth masses, corresponding to roughly one Jupiter mass.

There is not necessarily a global power-law form for the PIMF (the notes by Armitage quote Marcy et al. (2008), who did make a best-fit power-law model for the number of planets at a given mass and semi-major axis), so you'll have to do some guessing, but this should give you a rough estimate of how common planets of different masses are.
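In practice you can reuse the inverse-transform sampler from step 1 with a planetary exponent. The index and mass range below are purely illustrative placeholders, not a fitted PIMF:

```python
import numpy as np

def sample_power_law(n, x_min, x_max, alpha, rng=np.random.default_rng()):
    """Inverse-transform sample from p(x) ~ x**(-alpha) on [x_min, x_max]."""
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (x_min**a + u * (x_max**a - x_min**a))**(1.0 / a)

# Assumed index and range, for illustration only: 0.1 to 4000 Earth masses
planet_masses = sample_power_law(20, 0.1, 4000.0, alpha=1.1)
```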

Using the information above - mass and basic orbital parameters - it might be possible to determine the composition of a planet. However, as I've discussed before, there is no one-to-one correlation between mass and density, and therefore no one-to-one correlation between mass and composition. You can make some educated guesses, using, for example, the plot of mass-vs.-density I made from exoplanets.org, and these may be relatively accurate. However, any result comes down at some point to your choice.
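As a very rough prior, you could bin by the Lopez & Fortney bounds quoted earlier; this is a guess, not a classification, since composition is a continuum:

```python
def rough_composition(mass_earth):
    """Assumption-heavy guess using the Lopez & Fortney bounds quoted above."""
    if mass_earth < 1.5:
        return "likely rocky (super-Earth or smaller)"
    if mass_earth < 2.0:
        return "transition region: rocky or volatile-rich"
    return "likely carries a substantial gas envelope"
```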

What about the number of planets? This, too, may come down in part to your choice. There are limits to the number of planets you can have in a system - and Fang & Margot (2013) found that a substantial fraction of systems are indeed "packed" - but these limits may not always be reached. Bear in mind that only a small percentage of all the mass in the protoplanetary disk will become planets.

If you're into modeling the growth of planets, you can always use the coagulation equation (Equation 195 of Armitage's notes): $$\frac{dn_k}{dt}=\frac{1}{2}\sum_{i+j=k}A_{ij}n_in_j-n_k\sum_{i=1}^{\infty}A_{ki}n_i\tag{4d}$$ which describes the number of bodies of mass $m_k=km_1$, which are formed when bodies of masses $m_i$ and $m_j$ merge at rate $A_{ij}$. However, only a few analytical solutions exist for various $A_{ij}$. Note that using this requires setting $n_1$ at $t=0$, which requires a lot of guesswork. Migration and orbital issues will have to be dealt with later. However, it's still an interesting tool.
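For a numerical flavor of equation (4d), here's a minimal explicit-Euler integration with a constant kernel $A_{ij}=A$; the initial monomer count, kernel value, and step sizes are all illustrative:

```python
import numpy as np

def coagulate(n0, A, steps, dt):
    """Explicit Euler integration of the discrete coagulation equation (4d)
    with constant kernel A_ij = A. n0[k] holds the number of bodies of
    mass (k+1) * m_1; the infinite loss sum is truncated at len(n0)."""
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        dn = np.empty_like(n)
        for k in range(len(n)):
            # gain: mergers with m_i + m_j = m_k (0-based: i + j = k - 1)
            gain = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
            loss = n[k] * n.sum()
            dn[k] = A * (gain - loss)
        n += dt * dn
    return n

# Start with 100 monomers and watch mass move into larger bodies
print(coagulate([100.0] + [0.0] * 19, A=1e-3, steps=2000, dt=0.05))
```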


5. Other properties

Once you choose the mass and composition of each planet, you can learn quite a lot about it. From here on out, actually, much of what happens is up to you. You can design rivers and oceans, jungles and deserts, mountains and plains, and so much more. Of course, things won't always be straightforward, and problems will come up along the way. Fortunately, I seem to recall a certain question-and-answer website that might be able to help you out. . .


Conclusion

That's my version of the algorithm. In many ways, I think it's better than the Cosmos II algorithm insofar as it gives you a good quantitative description of the planetary system. You can determine a lot of things, even though at times you need some guesswork. However, I still feel that Peoples' version is a good overview for many first-time worldbuilders, if you're willing to ignore some of his terminology. I couldn't find very many things wrong with it; it's a fantastic resource.

