Buzz Aldrin's Space Program Manager

Advanced techniques and tips
By mreed2
Describes a wide range of techniques and tricks that are useful on all difficulty modes, although they are most useful on Buzz-Hard difficulty.
Prologue
This was going to be my detailed walkthrough for playing SPM at the "Buzz-Hard" difficulty -- until I discovered that I had enough general tips and tricks that it made more sense to split it out into two guides, one covering general strategies and techniques that are useful for all difficulties, and one that actually chronicles a winning game at the "Buzz-Hard" difficulty.

So... This guide discusses how to tease information out of the UI and put it to use to create strategies that will enable you to win (more or less consistently) on all difficulty levels.
Save Scumming
Savescumming is the practice of saving the game before performing some action that depends (in whole or part) on the generation of a random number, then reloading the save game if the results are unfavorable.

The vast majority of games use a "Pseudo-Random Number Generator" (Wikipedia[en.wikipedia.org]) to produce their random numbers. To dramatically simplify matters, a pRNG could be (and has been, though no game would use such a mechanism) implemented as a long list of numbers, with the starting point on the list being set by a "Random Number Seed." So, as long as the seed number is the same, the next random number generated will be the same, as will the next, and so forth.
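
To illustrate that idea with Python's built-in generator (just a stand-in -- SPM obviously uses its own pRNG):

```python
import random

# Seeding the generator fixes the entire sequence that follows.
random.seed(12345)
first_run = [random.random() for _ in range(5)]

# Re-seeding with the same value reproduces the exact same "list" of numbers.
random.seed(12345)
second_run = [random.random() for _ in range(5)]

assert first_run == second_run  # same seed -> same sequence, every time
```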

Game developers whose games are highly dependent on random numbers (most cRPGs, many turn-based strategy games, and many others) save the seed in the save file as a mechanism to discourage save scumming. SPM is such a game, and rightly so.

Such measures are defeated by generating one (or more) random numbers after loading the save but before engaging in the critical action. A subset of games that save seeds use several different seeds (feeding distinct pRNGs), with one seed reserved for "critical" stuff (the stuff that a player is likely to want to interfere with) and another for unimportant stuff (like whether or not the flame on a torch emits 3 pixels of sparks or 2 on a given cycle). Other games simply avoid (whether intentionally or not) using random numbers outside of critical contexts, and I suspect that the developers thought SPM fell into this category. Either way, the goal is to make it much harder to "bump" your position in the random number list the pRNG is using.

However...

Whenever you enter a building in SPM (say, the mission control center), a random number is generated to determine which background should be used (there are 3, I believe). The random number comes from the same source that is used to determine whether or not a mission step is successful, so by reloading the autosave at the end of a season (which, happily, is made when you hit the "End Turn" button, so all of your actions for that quarter are retained) and rapidly entering and exiting a building, you can "bump" the pRNG past an unfavorable sequence of values and (hopefully) towards a more favorable sequence.

This is almost certainly unintended and would be considered a bug by the developers -- after all, I didn't even realize that there were multiple backgrounds that could be used in the various buildings. When I first tried this, my thinking was that the transition from day to night was random, and that it would be the source of random numbers to bump the pRNG (it isn't -- the day / night transition is triggered after visiting a building 4 times). However, given the age of this game, it seems very unlikely that this will ever be fixed.

Note that visiting a building generates one random number. If your mission failed on step 15, then you need to generate at least 15 random numbers to advance the pRNG far enough to prevent the offending random number from causing a failure on an earlier step. It may be more than 15 if you had one or more previous steps "glitch", because when a glitch occurs a second random number is generated (for the chance to recover from the glitch). Thus, expect to spend lots of time opening and closing buildings as fast as possible if you plan on save scumming. :)
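
Putting those pieces together, the bump trick amounts to burning numbers off the front of the saved sequence before the mission consumes them. A toy sketch (the reliabilities and seed are made up; this is not SPM's actual generator):

```python
import random

def mission_succeeds(rng, step_reliabilities):
    """Roll one number per step; the mission fails on the first failed check."""
    return all(rng.random() * 100 < r for r in step_reliabilities)

steps = [85.0] * 15  # 15 steps at 85% reliability (made-up numbers)

# "Reloading the autosave" = recreating the generator with the saved seed.
rng = random.Random(2015)
print(mission_succeeds(rng, steps))   # the outcome baked into this save

# Same save, but visit a few buildings first: each visit burns one number,
# shifting which values the mission steps actually consume.
rng = random.Random(2015)
for _ in range(5):
    rng.random()                      # one random number per building visit
print(mission_succeeds(rng, steps))   # potentially a different outcome
```
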
Budgeting
At levels of difficulty below Buzz-Hard, it is not necessary (although it is useful) to go through this elaborate budgeting process. As long as you play the game reasonably, you'll probably be OK. This budgeting process is intended for players who wish to hyper-optimize their path through the game, and / or are playing on Buzz-Hard.

When you start a new game, you can easily calculate the amount of money you have to spend before the next budget review. Just take the "Cash on hand" figure and add 15 times the "Quarterly budget" number, and that's all you're going to get. At the Buzz-Hard difficulty, you'll always start with exactly $12,500 cash on hand and a quarterly budget of $2500 -- but, as the payroll expenses for scientists are random, the net quarterly budget will be something less. In four games (at the Buzz-Hard level of difficulty) the starting quarterly budgets were $1894, $1920, $1975, and $1956, for example. Running the above calculation on these values produces total budgets of $40,910, $41,300, $42,125, and $41,840.
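
In other words (a trivial sketch, plugging in the first of those sample games):

```python
def total_budget(cash_on_hand, net_quarterly_budget, quarters_until_review=15):
    """Everything you will have to spend before the next budget review."""
    return cash_on_hand + quarters_until_review * net_quarterly_budget

print(total_budget(12_500, 1_894))  # 40910
```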

With this starting number, you can look at each program and determine how much it would cost to open, and subtract that from the starting number. For example, opening the "BioSat" program costs $3072.3 while the Atlas program costs $5250. If the costs of opening the programs for your selected strategy exceed the total budget then... Well, that strategy isn't going to work.

Next, you can add in the launch costs. For rockets (human rated or not), this cost is easily accessible on the screen where you open the program (its "Unit cost"). For Atlas, the cost is $3575. For programs (such as BioSat) I don't believe that this number is available without opening the program. Once the program is open, the value can be found in the tool tip in the mission as shown in this screenshot:

Obviously, you should save the game, then advance time until you can afford to open the program, open it, note the cost, and reload.

Adding together the cost of the rocket ($3575) and the cost of the payload ($1878), you can determine the cost per launch ($5453), then multiply by the number of launches you need. If this number, plus the previous number, exceeds the total budget, then the strategy won't work. To do the math for you -- opening Atlas + BioSat = $8322.3, and the cost for a single launch is $5453, so the total is $13,775.3. That leaves $28,064.70, so all looks well -- for now.

Next, you need to consider the maintenance costs associated with programs. These are easily available for unopened programs (both rockets and payload programs) and are $575 / Quarter for Atlas and $334 / Quarter for BioSat, for a total of $909 / Quarter. The maximum possible length these programs can be open is 15 quarters for BioSat ($5010 total) and 13 quarters for Atlas (since you need to build the VAB before you can open any rocket program), or $7475, for a maximum total cost of $12,485. Still within budget, but we are down to $15,579.70.

Next, add in the cost of building construction (MCC = $300, VAB = $800, SET Level 2 = $1000), accounting for another $2100. Then the maintenance costs for these buildings need to be added in (MCC = $150 / Q * 14 Q = $2100, VAB = $250 / Q * 14 Q = $3500, SET Level 2 = $400 / Q [extra -- the $200 cost for SET Level 1 is incorporated into the quarterly budget automatically] * 14 Q = $5600). Adding that all up, new construction will cost $2100 and the maximum maintenance cost is $11,200, for a grand total of $13,300. That leaves us with... $2,279.70 left to spend.

Next, you need to add in payroll expenses. The cost for the initial 5 (rarely 4) scientists is included in the quarterly budget, but new hires have to be added in. New hires cost between $25 - $50 / Q, so I use a cost of $30 / Q for planning purposes. You need a total of 8 scientists to allocate 4 to Atlas and 4 to BioSat (so 3 extra), plus you need 5 mission controllers. So, in total you need to hire an extra 8 staff @ $30 / Q = $240 / Q. Assuming you delay hiring scientists until the SET Level 2 upgrade completes, the maximum length that you'll be paying these staff is 13 quarters, so a total of $3,120. That leaves us with... -$840.3. :(

The final expense that needs to be considered is training -- up to this point we've been assuming that you open programs / hire staff as soon as possible, but new hires are pretty useless. They need to spend quite a bit of time in "Advanced Training" before they are actually useful, at a cost of $100 / advanced training session. Each session lasts 3 quarters, so that needs to be added to the budget -- but, on the plus side, that means that we can delay opening programs, which means that we can defer building facilities (such as the VAB), all of which frees up money.

At this point you should break out a spreadsheet -- this allows you to easily answer questions like "What effect does it have on the bottom line if I open this program 3 quarters later, but add 5 extra advanced training sessions?" Eventually, you'll actually have to play the game to find out, but lots of potential strategies can be quickly discarded by simply looking at program costs (opening cost + maintenance) and launch costs. It's quickly obvious that Atlas + Explorer + BioSat, with both missions launching on Atlas rockets, puts you way over budget on Buzz-Hard (although this strategy does work on "Hard").
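
If you'd rather script it than spreadsheet it, the entire worked example above reduces to a running subtraction. A sketch using the figures quoted above (only the ~$30 / Q planning figure for new hires is an estimate):

```python
budget = 41_840                    # cash on hand + 15 quarters of net budget

budget -= 5_250 + 3_072.3          # open Atlas and BioSat
budget -= 3_575 + 1_878            # one launch: Atlas rocket + BioSat payload
budget -= 13 * 575 + 15 * 334      # program maintenance (Atlas 13 Q, BioSat 15 Q)
budget -= 300 + 800 + 1_000        # build MCC, VAB, SET Level 2
budget -= 14 * (150 + 250 + 400)   # building maintenance for 14 quarters
budget -= 13 * 8 * 30              # 8 new hires at ~$30 / Q for 13 quarters

print(round(budget, 1))            # -840.3 -- this plan doesn't fit
```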

After much trial and error (& spreadsheet work), I determined that the best plan is:
  1. Build the MCC immediately (55Q1)
  2. Hire Mission Controllers (x5) in 55Q2,
  3. Delay the upgrade to the SET Level 2 to 55Q3,
  4. Also build VAB in 55Q3,
  5. Hire extra (x3) scientists in 56Q1,
  6. Open Atlas in 56Q1 [note that the VAB construction finishes this turn, and that scientists have just exited advanced training, so it's easy to switch them to R&D Atlas],
  7. Open BioSat in 57Q2 [again, note that scientists exit advanced training this turn].
Assuming that you train all staff 100% of the time that you don't have work for them to do, you end up with $6942 in 59Q4 -- which is enough for one Atlas/BioSat launch (83.3 reliability on Atlas, 74.9 on BioSat).

This turns out to not be a very good strategy -- at lower difficulties, you'll have the funds for two launches, and Explorer I + BioSat works better. At Buzz-Hard, this strategy doesn't earn enough prestige to get the largest possible budget for the 2nd review cycle even under ideal cases -- and, if the Soviets launch Sputnik (which they did in 5 out of 6 attempts) you fall into a prestige hole that you can't reasonably get out of.

If you are curious, the right strategy for Buzz-Hard is to push for a Mercury / Redstone launch in 59Q4. By the time the launch occurs, it's surprisingly safe, the Soviets will never attempt such a launch, the prestige awarded is sufficient to get the largest possible budget, and the technology carryover is high enough that you can get Atlas up and running in a reasonable time frame. For more, see my walkthrough on Buzz-Hard.
Prestige Accounting (or Budgeting Part 2)
You will have to go through this process to be successful at the Hard level of difficulty -- at least for the first 2 budget review cycles. Getting the largest possible budget increase at the first two budget review cycles is just too critical to be left up to chance.

The main source of information on potential prestige awards is the "Mission Preview" screen, which you can access as follows:
  1. Go to the Headquarters building, then navigate to the program that you are interested in. You'll see something like this:

  2. Select a specific program, producing this:

  3. Click on the cursive i in a circle next to the specific mission you are interested in (in this case, "Orbital Flight") to bring up this screen:

  4. And there you go -- this flight is worth 6,025 prestige.

But... This number assumes that you launch the mission immediately, with no other missions from yourself or your opponent before that. This isn't a reasonable assumption when you are attempting to plan out a whole budget review cycle. For example, the mission "Orbital Flight" shown includes the "Mercury Uncrewed Orbital Test Flight" milestone, so if we plan to run that mission (before running "Orbital Flight") then we need to back out the value of this milestone to avoid double counting prestige bonuses.

Examination of the goals achieved with "Orbital Flight" reveals that it includes the "Explorer 1" goals -- "Earth Orbiting Satellite" and "Radio Signals Emissions from LEO". So (assuming these goals haven't been accomplished yet, and we plan to accomplish them in another mission), we also need to view the prestige awarded by the "Explorer 1 / Regular Mission" and subtract that prestige (1225) from the "Orbital Flight" total.

Once this subtraction is done, we are left with 4,800 -- which is actually correct, assuming that this mission accomplishes the world-first "Man in Space" milestone. The actual reward was 5,450, which is 4,800 + 650; the extra points come from the small amount of prestige awarded for accomplishing individual steps (which I don't attempt to account for).
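
The bookkeeping, as a sketch (the per-step bonuses are exactly the part this deliberately ignores):

```python
def planned_prestige(preview_total, goals_claimed_earlier):
    """Back out goal values you plan to score on earlier missions, to avoid double counting."""
    return preview_total - sum(goals_claimed_earlier)

# Mercury "Orbital Flight" previews at 6,025, but we plan to fly Explorer 1 first,
# which claims 1,225 of those goals.
print(planned_prestige(6_025, [1_225]))  # 4800 (actual award was 5,450 thanks to per-step bonuses)
```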

Note that if I planned to launch the "Uncrewed Orbital Flight Test" mission, I'd need to subtract an additional 300 from the value of "Orbital Flight", because this mission includes those milestones as well.

To be clear, missions that have already been accomplished (at the time that you view the "Mission Preview" screen) will be accounted for in the prestige total shown. So, if you've flown Explorer 1, you don't need to subtract the Explorer 1 prestige from the Mercury "Orbital Flight" mission.
Detailed Prestige Accounting
You aren't satisfied with a "reasonable approximation"? Well, fine, here's the rest of the story.

To determine how much prestige is awarded for a milestone, you must open a program, then select a mission that accomplishes that milestone. When you view a mission, you'll see something like this:

The right hand side (under "Goals to achieve") tells you, goal by goal, how much prestige each will award and whether or not you've already achieved it. Note that the sum of the prestige shown should exactly match the number given in the "Mission Preview" screen.

Note that, as far as I can tell, there is no way to determine how much prestige a milestone will award if your opponent accomplishes it first, or if you accomplish it twice (you'll get some prestige for accomplishing it for a second time, but not much) without actually achieving these conditions in the game. If you are really desperate, you could look at the XML files (you want "SPM-Windows_Data\Scripts\Data Scripts\General Container\Goals.xml"), which does contain the information -- but not the residual prestige earned when you repeat a milestone that you've already accomplished.

Next, individual steps award a small amount of prestige, even if they don't accomplish one of the goals, as shown:

1st / 2nd / later are literal here -- if you launch a mission twice before the opponent launches it once, you'll get both the 1st and 2nd bonus. However, as far as I can tell, within a single program steps with the same name are shared among various missions. For example, the "Launch Preparations" step shown in the screenshot is shared with the mission "Suborbital Flight". However, for programs that have both crewed and uncrewed options, a crewed "Launch Preparation" step isn't the same as an uncrewed "Launch Preparation" step. These numbers aren't included in the value shown on the main "Mission Preview" screen -- they add to that value.
If the mission doesn't succeed, then you don't get credit for the individual steps -- even the ones that completed successfully!
Tiger Teams
Before we discuss if and when tiger teams should be used, we need to discuss how they work, which means discussing how mission success is determined.

When you launch a mission, for each step a check is made to see if the step is successful. The reliability percentage that is used for this check is based on a weighted average of the hardware components in the mission plus the skills of the astronauts (if present). This number can be calculated manually by bringing up the "Mission Preview" screen for the mission and selecting an individual step.

If the program isn't opened yet, or hardware hasn't been assigned, the preview still includes the "Involvement %" line.

If you want to see the contribution of the astronauts, you must not only open the program but actually start the process of scheduling the mission you are interested in. After you assign mission controllers to the mission and click on "Next", it will ask you to assign astronauts. On this screen, if you hover over a slot where an astronaut belongs, it will show you how much and in what ways the astronaut contributes to each step.
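
Here is my reading of how the step reliability comes together, assuming the weights are the "Involvement %" values shown on the step preview and that astronaut contributions act as a simple additive modifier -- the game doesn't document the exact formula, so treat this as an approximation:

```python
def step_reliability(components, astronaut_bonus=0.0):
    """components: list of (reliability %, involvement %) pairs for the hardware in this step."""
    weighted = sum(rel * inv for rel, inv in components)
    total_involvement = sum(inv for _, inv in components)
    return weighted / total_involvement + astronaut_bonus

# Hypothetical step: a rocket at 80% with 70% involvement, a payload at 60% with 30%.
print(step_reliability([(80, 70), (60, 30)]))  # 74.0
```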

If this check passes then all is well -- the mission continues to the next step.

If the first check fails, on the other hand, you'll be presented with this screen:

For the first line (the one that is free), the probability shown is generated by taking the hardware reliability (as modified by astronauts), as calculated above, multiplied by the weighted average of the skills of the mission controllers involved in the step. This will always be lower than the first check, but with highly skilled controllers it can be very close.

If you wish, you can determine which mission controllers are involved by observing which lights are lit (including any that are blinking) in the "Flight Controllers" and "Flight Crew" areas. If you want the exact percentages each mission controller contributes to the weighted average, you will need to look at the Mission Preview screen (before or after the mission) and click on the "Mission Controllers" button.

Note that the skill used is the one shown in the mission controller select step -- that is, a Flight Director (or Assistant Flight Director) has a skill rating of "the average of all of the individual skills", rather than some more complex calculation. As the flight director is involved in most "saving throw" calculations, getting this number as high as possible is critical in getting high values here.
The cost of a tiger team is set by the cost of the mission: one team costs the exact same amount it cost to launch the mission, two teams cost 1.75 times the launch cost, and three teams cost 2.5 times the launch cost.
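
Or, as a quick helper (multipliers as stated above; the launch cost is the Explorer I / Saturn V example discussed below):

```python
TIGER_TEAM_MULTIPLIERS = {1: 1.0, 2: 1.75, 3: 2.5}

def tiger_team_cost(launch_cost, teams):
    return launch_cost * TIGER_TEAM_MULTIPLIERS[teams]

# Two teams on a $9,581.5 mission -> the -$16,768 figures quoted below.
print(tiger_team_cost(9_581.5, 2))  # 16767.625
```
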
Note that the benefit of tiger teams goes down the more reliable the hardware is -- the screenshot above was taken when the Mariner 3 hardware reliability was 25%, while this one was taken with the Atlas/A reliability at 86.4%.

You can see that I'm getting a much smaller benefit (in terms of absolute percentage) in the second screenshot versus the first.
If this second check (which I refer to as a "Saving Throw") succeeds, then the step succeeds and the mission continues. If it fails, the step fails, and the mission fails catastrophically. With the exception of a crewed mission failing on the "Launch Preparations" step, if astronauts are involved in the mission they always die on a failure.
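
Putting the pieces together, my understanding of the per-step resolution looks roughly like this (a sketch only -- in particular, treating the tiger team bonus as a flat addition to the saving throw is my interpretation, not something the game spells out):

```python
import random

def run_step(step_reliability, controller_skill_avg, tiger_bonus=0.0, rng=random):
    """Resolve a single mission step: 'success', 'recovered', or 'failure'."""
    if rng.random() * 100 < step_reliability:
        return "success"                    # first check: hardware (plus astronauts)
    # Free saving throw: hardware reliability scaled by the controllers' weighted skill,
    # plus whatever boost purchased tiger teams provide (assumed flat here).
    saving_throw = step_reliability * controller_skill_avg / 100 + tiger_bonus
    if rng.random() * 100 < saving_throw:
        return "recovered"                  # glitch, but mission control saves it
    return "failure"                        # catastrophic failure, mission over

print(run_step(83.3, 75.0))  # hypothetical 83.3% step with 75-skill controllers
```
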
Hiring tiger teams for one step won't help on future steps, so if your hardware is very unreliable you could spend many, many times the cost of the mission trying to ensure success. For the mission shown, I bought 3 teams (raising the chance to recover to 62.5 %) twice, and a single team once (because I ran out of money) -- and the third check failed, so the mission as a whole failed.
Based on my experience, you shouldn't hire tiger teams when the chance to resolve is > (say) 75%, especially on expensive missions. The small increase in the chance of success combined with the extreme cost makes it a waste of money versus the alternative of re-running the mission next quarter if it fails (and it might not, after all).

However, tiger teams shine when you are trying to launch a mission to accelerate the pace of your R&D for a program (almost always a rocket) and you have surplus funds. For example, launching an Explorer I / Saturn V (!) mission only costs $9,581.5, with 4 steps that are dominated by the booster reliability. With an 83.4% reliable Explorer I and a 50.1% reliable Saturn V I got the following (admittedly, after 6 reloads!):
  1. Launch preparations (2 teams hired, 53.1% -> 71.8%, -$16,768)

  2. Countdown (2 teams hired, 56.2% -> 73.7%, -$16,768)

  3. Launch (successful)
  4. Ascent (successful)
  5. Satellite Deployment (successful)
  6. Earth Orbit (successful)
For all my work, I got a 10.4% reliability increase in the Saturn V -- about 3 seasons worth of R&D. This isn't worth it, not even slightly.

But...

R&Ding the Saturn V up to 65% before running the mission (4 seasons starting from 51%, including the launch season -- the reliability at launch was 64.2%, while Explorer 1 was at 83.4%) is much more productive:
  1. Launch Preparations (64.6%, no teams)
  2. Countdown (success)
  3. Launch (67.5 -> 80.5%, 2 teams, -$16,768)
  4. Ascent (success)
  5. Satellite Deployment (success)
  6. Earth Orbit (success)
That yields a +5.6% increase in the reliability of the Saturn V (to 69.8%), which is again about 3 seasons worth of R&D (3 seasons with no launch would produce a reliability of 70.7%). If we launch at that point, we get:
  1. Launch Preparations (success)
  2. Countdown (success)
  3. Ascent (73.3%, no teams)
  4. Satellite Deployment (80.2%, no teams)
  5. Earth Orbit (success)
This increased the reliability of the Saturn V from 70.7% to 77.6% (+6.9%), roughly 4 seasons worth of R&D (which would result in a 77.0% reliability).

The conclusion from all of this?
  1. There's a "sweet spot" between 55% - 65% where tiger teams are worthwhile. Lower than this, and the cost is too high for too little benefit -- above it, the benefits from tiger teams are too small for the cost.
  2. Tiger teams are most valuable for rockets, as rockets can be attached to inexpensive payloads to create relatively cheap missions.
  3. Tiger teams are also useful for payloads when the payload is only used on a few steps (for example, EVA suits and, to a lesser degree, Lunar Modules), as this limits the number of times tiger teams might be required.
Scientists
Training
Ultimately, how many and what training scientists should receive depends on how you are going to get to the moon. The various strategies have the following requirements:
Mode   Capsules   EVA
EOR    x12        x4
LOR    x8         x4
DA     x8         x4
In addition to what's shown in the table, you'll obviously need x4 Human Rated Rockets, and you'll most likely want x4 Probes and x4 non-human-rated Rockets as well, although you only need them for the early game.

The easiest strategy is to hire 16 scientists which you train as follows (switching when skill exceeds 90):
  1. x4 Probes -> x4 Capsules
  2. x4 Human Rated Rockets -> x4 EVA
  3. x4 Rockets -> x4 Capsules
  4. x4 Capsules
The problem with this strategy is that you can't execute it on Buzz-Hard -- you need x4 Rockets, x4 Human Rated Rockets, x4 EVA, x4 Capsules for Gemini and Pioneer 4 / Ranger 3, so some compromises need to be made. The fact that you can't afford the maintenance cost for SET Center Level 3 (and are therefore limited to 10 scientists) adds another level of complexity.

The strategy I (more or less) follow in this walkthrough is described below, but a few general rules apply throughout.
If a scientist's skill is greater than 90, you should switch them to training something on the next list. Never send a scientist to training in a skill > 90 unless you have nothing better to do with them (because, for example, you have 20 scientists hired and trained -- quite possible at lower difficulty levels).

Once you have successfully launched a mission, you should treat the relevant components as being "Fully researched" and switch scientists to training towards the next list. The exception is when the launch is intended to improve the R&D rate (whether via use of tiger teams or via save scumming) rather than accomplishing milestones. The logic here is that if it is safe enough to launch a mission, then future missions with that hardware component will also be "safe enough," especially since each successful mission will further increase the reliability of the component.

In all cases, you should select which scientists to send to which training by sorting the list in descending order of skill and assigning the scientists with the highest skill (that isn't greater than 90) to the training, until the required number of scientists have been assigned.
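
That selection rule, sketched out (the names and skill numbers are made up; the 90-skill cutoff is the rule described above):

```python
def pick_trainees(scientists, needed, skill):
    """scientists: list of (name, {skill: value}); take the best first, but never anyone over 90."""
    eligible = [s for s in scientists if s[1].get(skill, 0) <= 90]
    eligible.sort(key=lambda s: s[1].get(skill, 0), reverse=True)
    return [name for name, _ in eligible[:needed]]

roster = [("A", {"Capsules": 88}), ("B", {"Capsules": 93}), ("C", {"Capsules": 71})]
print(pick_trainees(roster, 2, "Capsules"))  # ['A', 'C'] -- B is already past 90
```
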
First, train to:
  1. x4 Capsules
  2. x4 Human Rated Rockets
After Mercury & Redstone is researched, then train to:
  1. x4 Probes
  2. x4 Capsules
Once BioSat is researched, then hire 2 more scientists and train to:
  1. x2 Capsules
  2. x2 Human Rated Rockets
  3. x2 Rockets
  4. x2 Probes
  5. x2 EVA
Once Gemini & Titan II is researched, build the level 3 SET, hire a total of 16 scientists, and train to:
  1. x8 Capsules
  2. x4 Human Rated Rockets
  3. x4 EVA
This doesn't work very well in practice, because you don't have any control over which scientists are "dual skilled" -- scientists that end up dual skilled in (say) Capsules and EVA aren't as useful as scientists who are dual skilled in Rockets and EVA.

A potentially superior strategy is to follow the above list, but dual-skill scientists according to the following list (rather than simply sending scientists to training in their best skill that is less than 90):
  1. Probes -> EVA
  2. Rockets -> Capsules
  3. Human Rated Rockets -> Capsules
  4. EVA -> Capsules
  5. Capsules -> Human Rated Rockets
This is much more complex to implement, however. And it isn't even clear how much better it would be in practice, because you need scientists in all the specialties at the same time when you open Gemini (as the Agena vehicle is considered a "Probe"), so... No strategy is really going to work all that well during this era. It would help you during the lunar landing phase, however -- you don't need Probes or Rockets during this time, but you need lots of capsules and some EVA.

Research
I almost omitted this section altogether, but... There are a few things to say beyond the obvious: You should assign scientists to R&D the programs that you are planning to launch. :)

First, the game contains an internal list of items that are "related" to one another. While the list of related technologies isn't accessible via the game's UI, it is set up so that earlier technologies within a research area (say, "Probes") are all related to one another. So, Explorer 1 is related to Pioneer 4 (Lunar Flyby), which is related to Ranger 3 (Lunar Impactor), which is related to Mariner 5, and so forth. This doesn't necessarily mean that Explorer 1 is related directly to Mariner 5 (although I think it is), but the network is fairly dense -- in the vast majority of cases, technologies that you think are related are in fact directly related to one another.

If you really insist on looking at how technologies are related, this information can be found in the "SPM-Windows_Data\Scripts\Data Scripts\General Container\ReliabilityTransfers-<Agency>.xml", where <agency> should be replaced by either GSA, NASA, or Soviet depending on what you are interested in. You can find the SPM directory by right clicking on the game in Steam, selecting "Properties", then "Local Files", followed by "Browse Local Files." The XML files are simple text files.

When two technologies are related, some fraction of the reliability of the existing item will be added to the starting reliability of the new item. How much depends on how the developers specifically set up the link, and a new item can only benefit from one such reliability boost (the game selects whichever link produces the best starting reliability). Programs that would logically be very closely related get the highest reliability boost -- for example, a highly reliable Saturn 1B can lead to the Saturn V starting with a reliability as high as 51%.
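
My reading of how the boost is applied, as a sketch -- the transfer fractions live in the ReliabilityTransfers XML mentioned above, and the base reliability and fractions below are invented for illustration:

```python
def starting_reliability(base, transfer_links):
    """transfer_links: (source reliability %, transfer fraction) pairs into the new item.
    Only the single best link counts, per the rule described above."""
    best_boost = max((rel * frac for rel, frac in transfer_links), default=0.0)
    return base + best_boost

# e.g. a new Saturn V with a 20% base, drawing on a 90% reliable Saturn 1B
# through a hypothetical 0.34 transfer fraction.
print(starting_reliability(20.0, [(90.0, 0.34), (75.0, 0.20)]))  # 50.6 -- close to the ~51% quoted above
```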

The primary gameplay purpose of this is to make players that are performing poorly (who have smaller budgets, and are therefore researching things serially) more competitive with players who are performing well (who have large budgets, and are therefore researching things in parallel) in a way that makes intuitive / historical sense.

This means that it may be worthwhile to delay opening a program for one or two seasons if you expect an item's reliability to improve rapidly (due to lots of launches, for example). Delaying a quarter or two in such a situation may produce faster research than opening the program immediately.

Second, don't underestimate the advantage of launching high-risk missions (reliability between 70% and 80%) as a way to accelerate your research progress. One successful mission can replace 3 seasons of research, and simple missions (simple sub-orbitals and orbitals) don't require that many checks to complete successfully. Yes, such flights are risky, but it's the only way to achieve a landing on the moon by 1970. Tiger teams can be useful for such missions as well, if funds allow -- but they have a dedicated section.

Finally, not all items are created equal. EVA suits, in particular, are only used on a very few missions, and even then only on a few steps. Lunar Modules are also only used on a few missions, but on somewhat more steps. As they are checked less frequently, both items are good candidates to skimp on research when you simply don't have enough scientists or time to go around. EVA suits also have the advantage of being quick to research, which means their reliability will (more or less) keep pace with the capsules even if only two scientists are assigned.
Milestone Penalties
When you skip a mission within a progression (for example, launching the crewed "Orbital Flight" without first launching the "Uncrewed Orbital Flight Test") you will be warned that a milestone penalty will be assessed on the flight, and what goal(s) you are missing:

However, this message doesn't tell you how the milestone penalty is assessed.

Because I was familiar with BARIS, I assumed that milestone penalties worked the same in this game as it did in that -- a 7.5% penalty, subtracted directly from the reliability numbers of all components in all steps, per milestone skipped. This is very, very incorrect.

It turns out that if you click on the "Assessment" button:

You will get a screen like this:

This means that the reliability of the Mercury capsule will be reduced by 15% for all steps. I believe, but can't prove, that this is a relative (multiplicative) 15% penalty -- that is, if your Mercury reliability is 80%, then 15% of 80 is 12, so the reliability of the Mercury capsule will be 80 - 12 = 68%. However, I see no way to verify this, because there is no way to see the raw reliability numbers that the game uses during a mission.

If there are multiple milestone penalties that apply, the "Assessment" screen will look like this:

I believe this works as follows (assuming the Mercury capsule reliability is 80%):
  1. 80*0.2 = 16, 80 - 16 = 64% reliability after assessing the "Man in Space" milestone penalty.
  2. 64*0.1 = 6.4, 64 - 6.4 = 57.6% reliability after assessing the "Mercury Uncrewed Orbital Test Flight" milestone penalty.
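
That stacking logic, assuming my multiplicative interpretation is correct:

```python
def apply_milestone_penalties(reliability, penalties):
    """Apply each skipped-milestone penalty (as a fraction) multiplicatively, in order."""
    for p in penalties:
        reliability -= reliability * p
    return reliability

# Mercury capsule at 80%, skipping "Man in Space" (20%) and the
# "Mercury Uncrewed Orbital Test Flight" (10%) milestones:
print(apply_milestone_penalties(80.0, [0.20, 0.10]))  # 57.6
```
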
It's worth pointing out that the penalties on the second screenshot ("Orbital Flight") don't align with the penalties from the first screenshot ("Suborbital Flight"). Each specific mission has its own list of milestones that must be accomplished before the mission can be launched without penalties, and each mission has its own set of penalties associated with skipping those milestones.

One final example:

(Note that this is from attempting a Gemini EOR+LOR mission without performing any of the test flights, in either earth orbit or lunar orbit, prior to scheduling the landing.)
Assuming that all components are researched to 80%, the penalties stack up as follows:
  1. Gemini capsule: 80 * 0.03 = 2.4 -> 80 - 2.4 = 77.6
  2. Light LM: 80 * 0.05 = 4 -> 80 - 4 = 76%; 76 * 0.03 = 2.28 -> 76 - 2.28 = 73.72%
That's... Not a very big penalty, frankly. Especially since you get to cut out 2 earth orbital flights and 3 lunar flights. Had I known this when I was making the walkthrough, I might well have skipped all or most of the Gemini test flights.

In short: Milestone penalties might be absurdly large, making it very dangerous to skip even a single milestone, or mild enough to make skipping numerous steps a reasonable option. The only way to find out is by clicking on the "Assessment" button for each possible mission and seeing what the results are -- there is no "general rule" that can be safely applied to guesstimate milestone penalties in advance.