Signal Simulator

Coordinate Detection Control: why and how?
By Kaydonidre
Another take on the Coordinate Detection Control.
   
Foreword
So, I know the game is outdated, surpassed by VotV and so on, but I recently got into it out of sheer curiosity. There is something about this game and VotV that really gets me hooked, particularly when paired with some ambient music (I would recommend Retrovex Frozen Horizon, for instance).

The gameplay loop is fine, but the real point of interest for me in this game is the coordinate approximation you have to do to align your antennas correctly. The game itself is very cryptic and poorly explained on this point, which is exactly the kind of problem-solving itch I like to scratch.

I read some threads about it, and the advice was either completely oversimplified (just take the mean and go with it) or people had approached the problem in their own way. And while some explanations might be crystal clear to some, they definitely were not to me. So, I gave this system some serious thought and I'd like to share it here. Maybe it will help some of the few new players who stumble upon this oddity.

So let's talk about the Coordinate Detection Control (CDC).
First: why do we have to align the antennas?
Somewhere in space, something emits a signal and we try to listen to it. Problem: a lot of things in space emit signals. That's the background noise you hear when you keep the audio feedback on in the audio signal control.

When you acquire a signal, an external device gives you a series of readings with coordinates to set for your antennas (Azimuth and Elevation). The idea is to orient the antennas so that the receiver, the small electronic unit at the top of the rod, faces the signal in such a way that the parabolic dish effectively reflects the incoming waves toward it, thus amplifying the signal. If the dish is misaligned, you lose some of the signal and instead receive more background noise.

And here is the hard part: since the signal source is far away in space, a mere millimetre of misalignment could translate into millions of km at the location of the source. Thankfully, our antennas have some tolerance.

So, we have to find the closest coordinates. And we don't have all the time we want. Earth rotates on its axis (and around the Sun), so any alignment you set up will end up being wrong after a few hours. Additionally, any signal could be a momentary emission for all we know. I don't know if this is why we lose some signals over time in the game, but I like to think it is.
How does this work in the game?
When you set the right polarization and frequency, you start to receive coordinate readings on the CDC screen. The computer stores a fixed number of values depending on the size of the buffer you have. Each reading is affected by an error, so you don't know exactly what the correct coordinates are.

It is important to understand that the maximum, minimum and average values are always calculated from this buffer only. Once an old reading drops out of the buffer, the computer forgets it, even if it was lower than the currently displayed minimum or higher than the currently displayed maximum.

So, you have to watch carefully and manually record the absolute min and max values you see before they are erased by newer readings. This is why, if you rely only on the average calculated by the CDC, you have a high probability of not getting the proper alignment.
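
To make that buffer behaviour concrete, here is a minimal Python sketch. The buffer size and the reading values are made up for illustration; the point is simply that the displayed min/max only reflect what is still in the buffer, while a record you keep yourself never forgets an extreme value:

from collections import deque

BUFFER_SIZE = 8                                  # hypothetical buffer size, not the game's actual value
buffer = deque(maxlen=BUFFER_SIZE)               # what the CDC "remembers"
abs_min, abs_max = float("inf"), float("-inf")   # what you write down yourself

def on_reading(value):
    """Feed one CDC reading into both the buffer and the manual record."""
    global abs_min, abs_max
    buffer.append(value)                  # the oldest reading silently falls out
    abs_min = min(abs_min, value)         # the manual record never forgets
    abs_max = max(abs_max, value)
    # What the CDC screen would show: statistics over the buffer only.
    return min(buffer), max(buffer), sum(buffer) / len(buffer)

# An early extreme reading (70.1) eventually falls out of the buffer,
# so the displayed minimum drifts upward while abs_min keeps it.
for v in [70.1, 88.4, 92.0, 85.3, 90.7, 87.2, 89.9, 91.5, 86.8, 88.1]:
    on_reading(v)

print(min(buffer), abs_min)   # displayed (buffer-only) min vs. absolute min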

Now, there is one set of coordinates that is exact. The signal source is somewhere, and for a short amount of time its position relative to us will remain the same.

Our CDC has an error based on the "coordinate range" (for example 20). The choice of wording is confusing to me here. As I understand from trial and error, the so-called "coordinate range" corresponds to the plus-or-minus error of the detector. So, if the true Azimuth is 150° and the coordinate range is 20, the CDC will give readings anywhere between 130° and 170°; those are our limits, it will not go further.

It also means the real spread of possible readings is 2 × 20 = 40.

So, as I understand it, the "coordinate range" is actually the "coordinate error", and the device range (dRange) is twice that.

If you get the two limits (min and max), it becomes very easy, because the exact coordinate is the arithmetic mean of those two values. But it is not that simple, because we don't know if we have seen the absolute min/max or just local min/max. Hence the importance of watching the CDC readings and updating the min and max values manually on a spreadsheet or a piece of paper.
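
As an illustration, here is a small Python simulation of that idea. It assumes, and this is only my reading of the system, that each reading lands uniformly within plus or minus the coordinate error around the true value; the true Azimuth of 150° is of course unknown in the game and is fixed here only to show that the midpoint of the observed min and max homes in on it as readings accumulate:

import random

TRUE_AZIMUTH = 150.0    # unknown in the game; fixed here only for the simulation
ERROR = 20.0            # the CDC "coordinate range", i.e. the +/- error
random.seed(1)

seen_min, seen_max = float("inf"), float("-inf")

for i in range(1, 201):
    # Assumed model: each reading is uniform within +/- ERROR of the true value.
    reading = TRUE_AZIMUTH + random.uniform(-ERROR, ERROR)
    seen_min = min(seen_min, reading)
    seen_max = max(seen_max, reading)
    if i in (5, 20, 200):
        midpoint = (seen_min + seen_max) / 2
        print(f"after {i:3d} readings: midpoint of min/max = {midpoint:.2f}")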

There are several ways to go from there, including relative mean corrections, probability calculations and so on.
mRange / dRange method
One method I found quite reliable and very simple is to monitor how much of dRange you have covered so far. Basically, I keep updating the min and max values on a spreadsheet and calculate the difference between them: this is my measured range (mRange). I then compare mRange to dRange to see what margin of error is left.

Example: let's say my error (CDC coordinate range) is 20, so dRange = 40. I let the CDC run for 1 minute and manually update the following values:

Azimuth: min 73.23, max 104.69, calculated mean: 88.96
mRange: 104.69 - 73.23 = 31.46
dRange - mRange: 40 - 31.46 = 8.54
mRange / dRange: 31.46 / 40 = 0.79

I infer that I could still see some higher or lower values, but I have already covered 79% of dRange. I could wait to get a better ratio or try the calculated mean. So far, after several tries, I have found that waiting until around 80% of dRange is covered gives a reliable estimate.

At 80% coverage, the calculated mean you obtain from your min/max values will be, at the very worst, 20% of the error away from the real value: the part of dRange you have not seen yet (dRange - mRange) could sit entirely on one side of the true value, which shifts your midpoint by at most half of that amount. In this case, with 79% covered, that gives:

Max possible offset: (40 - 31.46) / 2 = 4.27
You can be sure the true Azimuth is located in the interval [88.96 - 4.27, 88.96 + 4.27] = [84.69, 93.23].
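
For reference, here is the whole bookkeeping as a short Python helper. The function name and layout are my own; the numbers are the ones from the example above:

def cdc_estimate(seen_min, seen_max, error):
    """mRange/dRange bookkeeping for one coordinate (Azimuth or Elevation)."""
    d_range = 2 * error                    # full spread the CDC can produce
    m_range = seen_max - seen_min          # spread observed so far (mRange)
    coverage = m_range / d_range           # fraction of dRange already seen
    mean = (seen_min + seen_max) / 2       # midpoint estimate
    max_offset = (d_range - m_range) / 2   # worst-case distance from the true value
    return mean, coverage, max_offset, (mean - max_offset, mean + max_offset)

mean, coverage, max_offset, interval = cdc_estimate(73.23, 104.69, error=20)
print(f"mean {mean:.2f}, coverage {coverage:.0%}, max offset {max_offset:.2f}, "
      f"interval [{interval[0]:.2f}, {interval[1]:.2f}]")
# mean 88.96, coverage 79%, max offset 4.27, interval [84.69, 93.23]

If the resulting interval is still too wide for your antennas' tolerance, keep collecting readings until the coverage ratio climbs closer to 80%.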

So that's it. That's how I understand this step and how I approached it. Feel free to come up with your own method; I just felt this step could use more clarification.

Happy signal processing!