Wallpaper Engine processes audio by reading the audio input as two channels (one for the left ear, one for the right ear). Each channel is split into 64 pieces, where each piece represents a frequency range. For example, assume the left/right channels are bringing in sound ranging from 0 Hz to 20 kHz. If you divided that entire range into 64 even pieces, you would end up with 64 frequency ranges that are each 312.5 Hz wide. Selecting piece #1 would then give you frequencies between 0 Hz – 312.5 Hz (low end), and selecting piece #64 would give you frequencies between 19.688 kHz – 20 kHz (very high end). Note that in reality, the 64 pieces are not evenly divided into equal ranges of Hz; the first piece actually covers roughly 0 Hz – 35 Hz.
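To make the even-split math concrete, here is a quick sketch (remember the real pieces are not actually uniform, as just noted):

```js
// Nominal band edges if the 0 Hz – 20 kHz range were split into 64 even pieces.
var binWidth = 20000 / 64; // 312.5 Hz per piece

function nominalBand(i) {  // i = 0..63
    return [i * binWidth, (i + 1) * binWidth];
}

nominalBand(0);  // [0, 312.5]       -> piece #1, the low end
nominalBand(63); // [19687.5, 20000] -> piece #64, the very high end
```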
Wallpaper Engine exposes these audio pieces in an array called “audioArray”, which has a fixed length of 128. The audio data for the left ear is in indices 0–63, while the audio data for the right ear is in indices 64–127. Each element of this array is a floating-point value ranging from 0.00 to 1.00 and represents the volume of a specific frequency range. For instance, if index 0 had a value of 0.3, that would mean frequencies in the 0 Hz to ~35 Hz range were detected at 30% of max volume in the left ear channel. The Wallpaper Engine documentation has a fantastic free (and copyable) demonstration project [docs.wallpaperengine.io] for this.
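For web wallpapers, Wallpaper Engine delivers this array through its documented audio listener API. Here is a minimal sketch of reading it; the listener registration is the real API, but everything else is just illustration:

```js
// audioArray is 128 floats: indices 0-63 = left channel, 64-127 = right channel.
window.wallpaperRegisterAudioListener(function (audioArray) {
    var lowBassLeft = audioArray[0];   // ~0 Hz – ~35 Hz, left ear
    var lowBassRight = audioArray[64]; // the same band, right ear

    // e.g. a value of 0.3 means this band was detected at 30% of max volume
    console.log('Left bass: ' + (lowBassLeft * 100).toFixed(0) + '% of max');
});
```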
Now that the underlying audio processing has been explained, it’s time to talk about what this project actually does to visualize the audio. In the project.json file for this project, you will find the customizations for the user options/settings (the sliders, buttons, etc.). Whatever you set there will override a default configuration located in the project’s script.js JavaScript file (the defaults are a “frequency range” of 8, a “frequency range start” of 0, and a “sound sensitivity” of 0.25). The JavaScript file is what actually controls the logic of the visualization.
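The override pattern looks roughly like the sketch below. The applyUserProperties listener is the real web wallpaper API, but the property key names here are guesses; check the actual project.json for the real ones:

```js
// Defaults from script.js, per the guide.
var config = {
    frequencyRange: 8,
    frequencyRangeStart: 0,
    soundSensitivity: 0.25
};

window.wallpaperPropertyListener = {
    applyUserProperties: function (properties) {
        // Hypothetical property keys -- the project's real keys may differ.
        if (properties.frequencyrange) {
            config.frequencyRange = properties.frequencyrange.value;
        }
        if (properties.frequencyrangestart) {
            config.frequencyRangeStart = properties.frequencyrangestart.value;
        }
        if (properties.soundsensitivity) {
            config.soundSensitivity = properties.soundsensitivity.value;
        }
    }
};
```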
Looking at the audio processing only, the script.js file starts by loading the user’s settings and replacing the default configs with them. It then checks whether the user configured the frequency range and frequency range start in a way that doesn’t make sense for how they are used in the code: your frequency range and frequency range start must not add up to more than 63 (there’s a small sketch of this check after the next paragraph).
Why? Because the frequency range start marks where your cut begins, i.e. which index or piece (of the 64 possible pieces in one channel) you start from, while the frequency range is the number of indices/pieces you step up from that starting point. The way this works, the script simply assumes the audio in the left and right channels is close enough to equal (or the difference not important enough) that it can let you select frequencies from what is technically the left ear only.
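A sanity check along these lines might look like this (a sketch, not the project’s exact code):

```js
// start + range must not run past index 63, the last left-channel index.
function clampRange(start, range) {
    if (start + range > 63) {
        range = 63 - start; // pull the range back inside the left channel
    }
    return range;
}

clampRange(60, 8); // -> 3: a range of 8 starting at index 60 would overrun
```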
As an example, say my frequency range start is 0 and my frequency range is 5. I’m saying that I want indices 0–5 from the audioArray, which represents roughly 0 Hz – 125 Hz. If I set a frequency range start of 2 and a frequency range of 5, I’m saying I want indices 2–7 from the audioArray, which represents roughly 60 Hz – 150 Hz. Figuring out which indices map to which exact frequencies is tricky, but you can play a single-frequency tone with the demo project mentioned earlier to help.
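Here is an illustrative helper that matches the start/range semantics above, averaging the volume of the selected slice of the left channel (again a sketch, not the project’s actual code):

```js
// Average volume (0.0 - 1.0) across indices start..start+range, inclusive.
// e.g. sliceVolume(audioArray, 0, 5) averages indices 0-5 (~0 Hz – ~125 Hz).
function sliceVolume(audioArray, start, range) {
    var sum = 0;
    for (var i = start; i <= start + range; i++) {
        sum += audioArray[i];
    }
    return sum / (range + 1);
}
```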
Lastly, sound sensitivity is how you fine-tune your config after you have the right frequency range. Turn the sensitivity up to make more splats for the same amount of sound, and vice versa. As mentioned earlier, each element of the audioArray is a floating-point value indicating how loud the sound for that frequency range is. If you have a seriously bass-heavy song playing, expect the values at indices 0–5 of the audioArray to sit near the max (1.0). Higher values in the audioArray also increase the number of splats/visuals, assuming you are filtering for those frequencies.
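One plausible way sensitivity could scale volume into splats is sketched below; the actual script.js logic may well differ:

```js
// Hypothetical mapping: higher sensitivity -> more splats for the same volume.
function splatCount(volume, sensitivity) {
    return Math.floor(volume * sensitivity * 100);
}

splatCount(1.0, 0.25); // 25 splats for max-volume bass at default sensitivity
splatCount(1.0, 0.5);  // 50 splats when sensitivity is doubled
```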