Good source of web audio features
http://labs.music.baidu.com/demo/audiolab/?p=0
http://labs.music.baidu.com/demo/interactions/visualization/
Getting Started with Web Audio API
Web Audio API - High-level JavaScript API for processing and synthesizing audio
http://www.sitepoint.com/5-libraries-html5-audio-api/
http://codegeekz.com/10-javascript-audio-libraries-for-developers/
http://www.smartjava.org/content/exploring-html5-web-audio-visualizing-sound
http://ianreah.com/2013/02/28/Real-time-analysis-of-streaming-audio-data-with-Web-Audio-API.html
https://blog.groupbuddies.com/posts/39-tutorial-html-audio-capture-streaming-to-node-js-no-browser-extensions
https://nusofthq.com/blog/recording-mp3-using-only-html5-and-javascript-recordmp3-js/
https://github.com/icatcher-at/MP3RecorderJS
https://github.com/akrennmair/libmp3lame-js
http://www.patrick-wied.at/blog/how-to-create-audio-visualizations-with-javascript-html
http://www.schillmania.com/projects/soundmanager2/demo/360-player/canvas-visualization.html
http://www.sitepoint.com/3-breakthrough-ways-to-visualize-html5-audio/
Audio: visualization
http://readwrite.com/2008/03/13/the_best_tools_for_visualization
Programming with sound - Introduction
http://blogs.msdn.com/b/dawate/archive/2009/06/22/intro-to-audio-programming-part-1-how-audio-data-is-represented.aspx
Digital audio:
- stereo: 2 channels, for left/right ear
- mono: 1 channel only.
- sampling rate: number of samples per second. CD quality is 44,100 samples/sec (44.1 kHz). Other common rates: 8/48/96/192 kHz.
- fundamental frequency: number of oscillations of the wave per second. 440 Hz - Concert A. 261.63 Hz - middle C. Higher frequency == higher pitch.
- decibel: logarithmic measure of the wave's amplitude (loudness).
- bit depth/bits per sample: number of bits used to store each sample, e.g. 8-bit or 16-bit. The larger, the better the quality (less quantization noise).
- multiple tones: sound in the real world is an overlap of multiple sine waves/tones.
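The terms above (sampling rate, frequency, amplitude, bit depth) can be sketched in a few lines of JavaScript. `synthesizeSine` is a hypothetical helper, not from any of the linked articles: it generates a 440 Hz tone at 44.1 kHz and quantizes it to 16-bit samples.

```javascript
// Generate a 440 Hz sine tone as 16-bit samples, illustrating
// sampling rate, fundamental frequency, amplitude, and bit depth.
const sampleRate = 44100;   // samples per second (44.1 kHz)
const frequency = 440;      // Concert A, in Hz
const amplitude = 0.8;      // 0..1, scaled to the 16-bit range below

function synthesizeSine(seconds) {
  const samples = new Int16Array(Math.floor(sampleRate * seconds));
  for (let i = 0; i < samples.length; i++) {
    const t = i / sampleRate;                         // time in seconds
    const value = amplitude * Math.sin(2 * Math.PI * frequency * t);
    samples[i] = Math.round(value * 32767);           // quantize to 16 bits
  }
  return samples;
}

const tone = synthesizeSine(1);   // one second of audio: 44,100 samples
```

Doubling `frequency` raises the pitch by an octave; lowering the bit depth (e.g. rounding to an 8-bit range instead) adds audible quantization noise.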
http://blogs.msdn.com/b/dawate/archive/2009/06/23/intro-to-audio-programming-part-2-demystifying-the-wav-format.aspx
- wave format: perhaps the most basic sound format. Developed by Microsoft and IBM.
- uncompressed, but cannot go over 4 GB, since the file-size header field is a 32-bit unsigned int.
- chunks
- header
- format chunk
- data chunk
- making a wave file.
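The chunk layout above can be illustrated in Node.js. `buildWavFile` is a hypothetical helper assuming mono 16-bit PCM; the offsets follow the standard RIFF header / "fmt " chunk / "data" chunk structure described in the linked article.

```javascript
// Sketch of a minimal PCM WAV file: RIFF header, "fmt " chunk, "data" chunk.
function buildWavFile(samples, sampleRate) {
  const numChannels = 1;                 // mono
  const bitsPerSample = 16;
  const blockAlign = numChannels * bitsPerSample / 8;
  const byteRate = sampleRate * blockAlign;
  const dataSize = samples.length * 2;   // 2 bytes per 16-bit sample
  const buf = Buffer.alloc(44 + dataSize);

  buf.write('RIFF', 0);                  // RIFF chunk ID
  buf.writeUInt32LE(36 + dataSize, 4);   // file size minus 8 (32-bit: the 4 GB cap)
  buf.write('WAVE', 8);
  buf.write('fmt ', 12);                 // format chunk
  buf.writeUInt32LE(16, 16);             // fmt chunk size for PCM
  buf.writeUInt16LE(1, 20);              // audio format 1 = uncompressed PCM
  buf.writeUInt16LE(numChannels, 22);
  buf.writeUInt32LE(sampleRate, 24);
  buf.writeUInt32LE(byteRate, 28);
  buf.writeUInt16LE(blockAlign, 32);
  buf.writeUInt16LE(bitsPerSample, 34);
  buf.write('data', 36);                 // data chunk
  buf.writeUInt32LE(dataSize, 40);
  for (let i = 0; i < samples.length; i++) {
    buf.writeInt16LE(samples[i], 44 + i * 2);
  }
  return buf;
}

const wav = buildWavFile(new Int16Array([0, 1000, -1000]), 44100);
```

Writing `wav` to disk with `fs.writeFileSync('tone.wav', wav)` produces a file any player can open.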
http://blogs.msdn.com/b/dawate/archive/2009/06/24/intro-to-audio-programming-part-3-synthesizing-simple-wave-audio-using-c.aspx
http://blogs.msdn.com/b/dawate/archive/2009/06/25/intro-to-audio-programming-part-4-algorithms-for-different-sound-waves-in-c.aspx
Pure Data
http://puredata.info/
Pure Data (aka Pd) is an open source visual programming language. Pd enables musicians, visual artists, performers, researchers, and developers to create software graphically, without writing lines of code. Pd is used to process and generate sound, video, 2D/3D graphics, and interface sensors, input devices, and MIDI. Pd can easily work over local and remote networks to integrate wearable technology, motor systems, lighting rigs, and other equipment. Pd is suitable for learning basic multimedia processing and visual programming methods as well as for realizing complex systems for large-scale projects.
Pd is free software and can be downloaded in different versions.
http://en.wikipedia.org/wiki/List_of_audio_programming_languages
Programming with sound - Overview
http://forum.devmaster.net/t/what-is-sound-programming/20773
Unlike most other things, it's real-time stuff (we're talking 0.02ms stuff here, as opposed to video's 17ms; and nobody cares if you miss a frame here or there, but audio must be flawless), and you have to deal with latencies and all sorts of fun stuff like that.
Again, it looks simple and the concepts are pretty easy, but for some reason it always ends up being tricky. Thus, most people end up using libraries like FMOD, and *still* end up having issues here and there.
Also, if you want to synthesize audio, there is indeed a whole lot of interesting mathematics and theory you won't see much in other places.
I like to lump "sound programming" into 3 broad (sometimes overlapping) categories.
1) Sound Implementation programmer. This sounds like what you probably think of. That's the programmer (often an intern ;)) whose job is to sprinkle the code with calls like "PlaySound(SNDTag, Parameters);" through the game (it can be quite a bit more than that, of course). Sound Implementation is usually, but not always, pretty straightforward.
2) Sound Engine/Tool programmer. These programmers write the high-level sound engines that perform things like real-time streaming, data file parsing, etc. That requires pretty good knowledge of real-time and system programming. They may need to write entire (friendly and robust) User interfaces that a sound designer or composer will use when creating content. They are the essential link between content and code in content-driven audio development systems.
3) Low-level audio signal processing programmer. These programmers typically need to know assembly language like the back of their hands, as well as the complex mathematics behind the processing of audio: FFTs, DFTs, MLTs, IIR and FIR filters, data compression/decompression algorithms. That's pretty heavy stuff. And there's a ton of stuff going on out there. People do entire PhD dissertations on things like physical modelling of sound, real-time analysis of music, or environmental modeling.
So there's a lot of "meaty" stuff in game audio programming, way more than it may seem on the surface. Video game music and sound design still need a lot of specialized code to make them work.
Brian Schmidt
Executive Director
www.GameSoundCon.com
http://stackoverflow.com/questions/4801690/i-want-to-learn-audio-programming
- analogue sound: magnetic tape, etc.; complex numbers, Fourier transforms.
- sampled sound: CD, MP3, Ogg Vorbis; DSP.
- synthesized sound: sound described as notes + instrument type, rather than as samples.
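The "complex numbers, Fourier transforms" part can be illustrated with a naive O(N^2) discrete Fourier transform, a sketch of the math that FFT libraries (and the Web Audio AnalyserNode demos linked above) compute efficiently. `dftMagnitudes` is a hypothetical helper name.

```javascript
// Naive DFT: magnitude of sum_n x[n] * e^{-i 2π k n / N} for each bin k.
function dftMagnitudes(signal) {
  const N = signal.length;
  const mags = new Array(N);
  for (let k = 0; k < N; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;   // complex exponential, split
      re += signal[n] * Math.cos(angle);          // into real and
      im += signal[n] * Math.sin(angle);          // imaginary parts
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}

// A pure tone with 8 cycles over 64 samples concentrates its energy in bin 8
// (and its mirror, bin 56), with magnitude N/2 in each.
const N = 64;
const signal = Array.from({ length: N }, (_, n) => Math.sin(2 * Math.PI * 8 * n / N));
const mags = dftMagnitudes(signal);
```

Real-time analyzers run this idea via the FFT (O(N log N)) on successive windows of the audio stream.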
A programmer's guide to sound (1997)