Acoustic Velocity and Loudspeaker Delay with Temperature and Python
NOTE: visit https://www.github.com/kreivalabs for the current code versions.
Use the following scripts to calculate acoustic velocity in air from a measured temperature in degrees Fahrenheit. You can also input a measured distance from one loudspeaker to another to calculate the approximate delay in milliseconds, based on the acoustic velocity calculation. As a sanity check: at 68 °F (20 °C), velocity is 331.3 + 0.606 × 20 ≈ 343.4 m/s, or about 1.127 ft/ms, so a 30-foot path difference needs roughly 26.6 ms of delay.
The third example script requires Future additions from http://www.python-future.org.
Save each script below as a .py file and run it in the terminal of your choice.
Python 2:
#!/usr/bin/env python
#version 1.3 / 02-November-2017
# Calculate acoustic velocity in air based on temperature. Calculate resulting delay time
# in milliseconds for a measured distance.
# NOTE: there is no method for factoring in gas density in air. Returned values are suitable
# for indoor use where temperature and humidity swings are not extreme.
title="Acoustic Velocity and Loudspeaker Delay Calculator"
print title
print"=" * 80
# Prompt for user input
temp_fahrenheit = float(input("Enter temperture in degrees Fahrenheit: "))
measured_distance = float(input("Enter measured distance between speakers in feet: "))
# Convert Fahrenheit to degrees Celsius.
temp_celsius = (temp_fahrenheit - 32) * 5/9
# Calculate acoustic velocity in meters/second.
meters_seconds = (temp_celsius * 0.606) + 331.3
# Convert meters/second to feet/millisecond.
feet_milliseconds = meters_seconds * 0.00328084
# Calculate time differential based on acoustic velocity and measured distance.
delay_time = measured_distance / feet_milliseconds
# Round results to four decimal places - suitable for most loudspeaker processing hardware and software.
meters_seconds_round = str(round(meters_seconds, 4))
feet_milliseconds_round = str(round(feet_milliseconds, 4))
delay_time_round = str(round(delay_time, 4))
# Return results
print"=" * 80
print"Approximate acoustic velocity is:"
print meters_seconds_round, "m/s, or", feet_milliseconds_round, "ft/ms."
print""
print"Approximate delay time is" , delay_time_round, "ms."
print""
print"Press to exit."
raw_input() |
Python 3:
#!/usr/bin/env python
# version 1.3 / 02-November-2017
# Calculate acoustic velocity in air based on temperature. Calculate resulting delay time
# in milliseconds for a measured distance.
# NOTE: there is no method for factoring in gas density in air. Returned values are suitable
# for indoor use where temperature and humidity swings are not extreme.
title = 'Acoustic Velocity and Loudspeaker Delay Calculator'
print(title)
print('=' * 80)
# Prompt for user input
temp_fahrenheit = float(input("Enter temperature in degrees Fahrenheit: "))
measured_distance = float(input("Enter measured distance between speakers in feet: "))
# Convert Fahrenheit to degrees Celsius
temp_celsius = (temp_fahrenheit - 32) * 5/9
# Calculate acoustic velocity in meters/second
meters_seconds = (temp_celsius * 0.606) + 331.3
# Convert meters/second to feet/millisecond
feet_milliseconds = meters_seconds * 0.00328084
# Calculate time differential based on acoustic velocity and measured distance
delay_time = measured_distance / feet_milliseconds
# Round results to four decimal places - suitable for most loudspeaker processing hardware and software
meters_seconds_round = str(round(meters_seconds, 4))
feet_milliseconds_round = str(round(feet_milliseconds, 4))
delay_time_round = str(round(delay_time, 4))
# Return results
print('=' * 80)
print('Approximate acoustic velocity is:')
print(meters_seconds_round, 'm/s, or', feet_milliseconds_round, 'ft/ms.')
print()
print('Approximate delay time is', delay_time_round, 'ms.')
print()
print('Press Enter to exit.')
input()
Python 3 with Future:
#!/usr/bin/env python
#version 1.3 / 02-November-2017
# Calculate acoustic velocity in air based on temperature. Calculate resulting delay time
# in milliseconds for a measured distance.
# NOTE: there is no method for factoring in gas density in air. Returned values are suitable
# for indoor use where temperature and humidity swings are not extreme.
# This script uses Future additions from http://www.python-future.org
# Import Future additions
from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import input
title = 'Acoustic Velocity and Loudspeaker Delay Calculator'
print(title)
print('=' * 80)
# Prompt for user input
temp_fahrenheit = float(input("Enter temperature in degrees Fahrenheit:\n "))
measured_distance = float(input("Enter measured distance between speakers in feet:\n "))
# Convert Fahrenheit to degrees Celsius
temp_celsius = (temp_fahrenheit - 32) * 5/9
# Calculate acoustic velocity in meters/second
meters_seconds = (temp_celsius * 0.606) + 331.3
# Convert meters/second to feet/millisecond
feet_milliseconds = meters_seconds * 0.00328084
# Calculate time differential based on acoustic velocity and measured distance
delay_time = measured_distance / feet_milliseconds
# Round returned values to four decimal places - suitable for most loudspeaker processing hardware or software
meters_seconds_round = str(round(meters_seconds, 4))
feet_milliseconds_round = str(round(feet_milliseconds, 4))
delay_time_round = str(round(delay_time, 4))
# Display results
print('=' * 80)
print('Approximate acoustic velocity is: ')
print(meters_seconds_round, 'm/s, or', feet_milliseconds_round, 'ft/ms.')
print()
print('Approximate delay time is', delay_time_round, 'ms.')
print()
Samples and Time Converter with Python
NOTE: visit https://www.github.com/kreivalabs for up-to-date code versions.
The following will calculate elapsed time in milliseconds and seconds for a given sample rate and number of samples. It will also calculate the number of individual samples elapsed for a given sample rate and time (entered in seconds). For example, at 48 kHz, 4,800 samples correspond to 0.1 seconds (100 ms). This Python script uses the Future additions available for free at http://www.python-future.org.
#!/usr/bin/env python
# version 1.1 02-November-2017
# Convert samples to time, based on sampling rate, number of samples and seconds
# Requires Future additions from http://www.python-future.org
# Import Future functions
from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import input
title="Samples and Time Converter"
print(title)
print("=" *80)
# Prompt for user input
sample_rate = float(input("Enter sample rate in kHz, for example - 44.1, 48, 88.2, 96: "))
num_samples = int(input("Enter number of samples as integer: "))
num_seconds = float(input("Enter elapsed time in seconds: "))
# Elapsed time = samples/sample rate
# Samples over time = sample rate * time (seconds)
samples_seconds = num_samples / (sample_rate * 1000)
samples_milliseconds = (num_samples / (sample_rate * 1000)) * 1000
samples_time = (sample_rate * 1000) * num_seconds
# Return results, prompt for user input to exit
print("=" * 80)
print("Elapsed time in milliseconds: ", samples_milliseconds)
print()
print("Elapsed time in seconds: ", samples_seconds)
print()
print("Samples elapsed in", num_seconds, "seconds:" , samples_time)
print()
print("Press the key to exit.")
input() |
Speaker Coverage Measurements with Python
NOTE: visit https://www.github.com/kreivalabs for up-to-date code versions.
Calculate the coverage pattern width of a point-source loudspeaker based on its dispersion angle and the measured distance from source to listener (or some other point). The coverage width is 2 × distance × tan(angle / 2); for example, a 90° speaker at a 20-foot throw covers 2 × 20 × tan(45°) = 40 feet.
This method uses Future functionality from http://www.python-future.org:
#!/usr/bin/env python
# version 1.1 02-November-2017
# Requires Future additions (http://www.python-future.org)
# Calculate speaker coverage based on driver dispersion angle and measured distance from source.
# Import Future functions
from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import input
import math
title="Loudspeaker Coverage Calculator"
print(title)
print("=" * 80)
# Prompt for user input
cone_degrees = int(input("Enter speaker dispersion angle in degrees: "))
measured_distance = float(input("Enter measured distance from speaker in feet: "))
# Convert angles to radians
angle_radians = math.radians(cone_degrees)
radians_div = angle_radians / 2
# Calculate coverage pattern
coverage_pattern = ((math.tan(radians_div)) * measured_distance)* 2
# Round results to two decimal places
coverage_pattern_round = str(round(coverage_pattern, 2))
# Return results, prompt for user input to quit
print("=" * 80)
print("Approximate speaker coverage pattern is:", coverage_pattern_round, "ft.")
print()
print("Press the key to exit.")
input() |
Fun With Animatronics – A few years back, I worked with Washington Ensemble Theatre on a play called Sprawl, which featured characters infected with a zombie-like virus that turns them into giant insect creatures. My task was to design and build an animatronic control system for the actors, allowing them to move antennae on their masks via control input from a glove. Using flex sensors, Arduino UNOs, servo motors and red-green-blue-white LEDs, we achieved a fun, low-tech horror effect.
Early development consisted of getting control over the servos via the flex sensors, using a wig form as a stand-in for the actor’s head. Two flex sensors were fitted to each glove, so that the actor had independent control over each antenna.



Arduino sketch:
/* This program controls two HS225-MG servos with input values from a pair of 2.2" flex sensors
   to motorize antennae costume pieces. Each sensor controls a motor. This program also powers
   two RGB LEDs that fade through the color wheel and assumes common cathode LEDs. See comment
   below to change to common anode. Built for Washington Ensemble Theatre's production of
   "Sprawl" at 12th Avenue Arts, Seattle WA, January 2015. This sketch controls helmet #2
   (Chris Hill) based on data values from Spectra Symbol flex sensors on that unit. */

#include <Servo.h>

Servo servoLeft, servoRight;

int flexpin = A0;
int flexpin1 = A1;
int redPin = 7;
int greenPin = 6;
int bluePin = 5;
int redLevel = 0;
int greenLevel = 0;
int blueLevel = 0;
float counter = 0;
float pi = 3.14159;

// uncomment the following line if using a Common Anode LED
//#define COMMON_ANODE

void setup()
{
  servoLeft.attach(9);
  servoRight.attach(10);
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  Serial.begin(9600);
}

void loop()
{
  counter = counter + 1;
  redLevel = sin(counter/100)*1000;
  greenLevel = sin(counter/100 + pi*2/3)*1000;
  blueLevel = sin(counter/100 + pi*4/3)*1000;
  redLevel = map(redLevel,-1000,1000,0,100);
  greenLevel = map(greenLevel,-1000,1000,0,100);
  blueLevel = map(blueLevel,-1000,1000,0,100);
  analogWrite(redPin,redLevel);
  analogWrite(greenPin,greenLevel);
  analogWrite(bluePin,blueLevel);

  int flexposition;
  int servoposition;
  int flexposition1;
  int servoposition1;

  flexposition = analogRead(flexpin);
  flexposition1 = analogRead(flexpin1);
  servoposition = map(flexposition, 760, 860, 0, 180);
  servoposition = constrain(servoposition, 0, 180);
  servoposition1 = map(flexposition1, 760, 860, 0, 180);
  servoposition1 = constrain(servoposition1, 0, 180);
  servoLeft.write(servoposition);
  servoRight.write(servoposition1);

  Serial.print("sensor1: ");
  Serial.print(flexposition);
  Serial.print(" servo1: ");
  Serial.println(servoposition);
  Serial.print("sensor2: ");
  Serial.print(flexposition1);
  Serial.print(" servo2: ");
  Serial.println(servoposition1);

  delay(10);
}

void setColor(int red, int green, int blue)
{
#ifdef COMMON_ANODE
  // if using common anode, adjusts color values accordingly.
  red = 255 - red;
  green = 255 - green;
  blue = 255 - blue;
#endif
  analogWrite(redPin, red);
  analogWrite(greenPin, green);
  analogWrite(bluePin, blue);
}
Next we crammed an UNO, solderless breadboard, servos, tubing, LEDs and a heap of wire onto bicycle helmets fitted with the mask, jaws and hair. The solderless components were only intended for testing, but ended up staying in the final version due to time constraints. I ultimately made a modified version of the rig, with the sketch burned to an ATtiny85 and soldered onto a wafer in a small project box.



Controlling Sonic Pi from QLab – Use Sonic Pi to generate music. Sonic Pi, created by Sam Aaron at Cambridge University, is a fantastic live-coding music synth that uses Ruby scripts and the SuperCollider synth engine. Originally developed for the Raspberry Pi platform, it is also available, free, on Windows, Mac OS X and desktop Linux. Using script commands from QLab (as AppleScript cues) and sonic-pi-cli, a command line interface for Sonic Pi developed by Nick Johnstone, you can randomly call up pre-built instruments/pads/drones on a Pi connected to the audio system (you can also do this internally on the QLab host). With the script cues placed within a Fire All group triggered by a control script, the Pi will create a reasonably random layering of sounds. The control script selects from the child cues within the group and will only start cues that are not already running (“whose running is false”).
The Pi is connected to the QLab host via Ethernet and controlled over SSH. On a Mac, you can install sonic-pi-cli via gem (gem install sonic-pi-cli).
As a simplified example, the following scripts call Ruby files from a directory located on the machine running both QLab and Sonic Pi. They require that Sonic Pi is running before the call is made. All communication is local to the host machine. To run the same commands on a connected Pi, SSH into the Pi and update the file path as necessary (see the sketch after the stop example below). The initial call will activate Terminal and keep the window hidden. All subsequent calls containing “in window 1” will keep the commands within the same Terminal window.
tell application "Terminal"
do script "`sonic_pi \"run_file '~/Dropbox/Code/Ruby/SonicPi/Dark Swirl Ambient Generator.rb'\"`"
end tell
To run additional buffers/scripts without opening new terminal windows, you can use:
tell application "Terminal"
do script "`sonic_pi \"run_file '~/Dropbox/Code/Ruby/SonicPi/randomBells02.rb'\"`" in window 1
end tell
To stop playback of all buffers/scripts, use:
tell application "Terminal"
do script "sonic_pi stop" in window 1
end tell
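If Sonic Pi is running on a networked Pi rather than on the QLab host, the same pattern works over SSH. Here is a minimal sketch, assuming key-based SSH authentication (so no password prompt interrupts the cue) and a hypothetical user, address and file path; adjust all three to your own system:
tell application "Terminal"
do script "ssh pi@10.0.1.10 \"sonic_pi \\\"run_file '~/SonicPi/randomBells02.rb'\\\"\"" in window 1
end tell
The nested escaping exists so that the remote shell hands sonic_pi its quoted run_file string intact, exactly as in the local examples above.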
To select a random Ruby file to play back, place all of the trigger scripts (those that call a specific .rb file) into a Fire All group. Create a script cue with the following:
tell application id "com.figure53.qlab.3" to tell front workspace
set triggerCues to cues of cue "2000" whose armed is true -- this is your group cue, change cue number as necessary
try -- In case there aren't any possible cues
set someCue to some item of triggerCues
if armed of someCue is true then
start someCue
end if
delay 0.1
set armed of someCue to false
end try
Set a remote trigger (a randomized wait time with a restart, etc.) to run the script cue that points to the group of .rb scripts. This ensures that files which have already been called are disarmed and cannot be selected again. You can comment out that safety if you want files to be called in multiple thread runs. A sketch for re-arming the group between performances follows.
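This minimal sketch re-arms every child cue, built from the same QLab scripting terms used above; change the cue number to match your group:
tell application id "com.figure53.qlab.3" to tell front workspace
set childCues to cues of cue "2000" -- this is your group cue, change cue number as necessary
repeat with someCue in childCues
set armed of someCue to true
end repeat
end tell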
Make a texting applet for QLab and QDisplay –
UPDATE – See the project repository at https://www.github.com/kreivalabs/frontOfHouseMessenger for up-to-date code.
ORIGINAL – Created to solve the problem of verbal communication between a mix engineer and wireless engineer during performances. Using AppleScript, QLab and QDisplay, create a non-verbal communication method to allow the A2 to warn the A1 of wireless mic dropouts, and the A1 to acknowledge receipt without getting on the com.
This method utilizes QDisplay from Figure53 (a companion application to QLab), AppleScript, and Remote Apple Events (System Preferences>Sharing). Ensure QDisplay is installed on both machines, then enable Remote Apple Events (again, on both machines). On the sending side, paste the following into AppleScript Editor and save it as a script. Placing it in ~/Library/Scripts will put it in the menu bar if “Show Script menu in menu bar” is selected in Script Editor>Preferences.
Note: you will need the administrative user name and password for the receiving machine, as well as its IP address. Use static addressing for show networks.
(*
This is the AppleScript version for the SENDING end (the remote engineer who sends messages TO front of house). Note: eppc protocol is unstable on OS X 10.8
--
Configuration:
This applet requires "QDisplay" (https://github.com/Figure53/QDisplay) to be installed on the remote machine. eppc protocol addressing takes the form "eppc://username:password@IPaddress"
Requires that the local (sending) user have administrative privileges over the remote (receiving) machine
*)
set remoteMachine to "eppc://username:password@IPaddress"
-- update the above to username, password and IPaddress of remote (receiving) machine
-- do not use special characters '@' or ':' in password
display dialog "Message to FOH:" default answer "" buttons {"Cancel", "Send"} default button 2 with title "FOH Messaging"
set theMessage to the text returned of the result
try
using terms from application "QDisplay"
tell application "QDisplay" of machine remoteMachine
set message to "-- INCOMING --"
set messageSize to 80
set messageColor to "red"
delay 0.5
repeat 4 times
-- clear the message window
set message to ""
delay 0.5
-- set the warning message
set message to "-- INCOMING --"
delay 0.5
end repeat
-- clear the window
set message to ""
delay 0.5
-- display text entered in dialog box
set message to theMessage
end tell
end using terms from
end try
Create another Script cue on the target machine that clears the message window:
tell application "QDisplay"
set message to "" -- no string entered clears the message window
end tell
If using a MIDI-enabled console, set this “clear” script to fire via a MIDI message and map that message to a user-defined key on the desk. Otherwise, use a hotkey trigger. Next, create a third script cue on the target machine which continues from the “clear” script cue, and contains the following:
(*
This is the AppleScript file run from the front of house machine that receives messages FROM the remote machine/engineer. Note: eppc protocol is unstable on OS X 10.8
--
Configuration:
eppc protocol addressing takes the form "eppc://user:password@IPaddress"
requires that the remote user have administrative privileges over the destination device
*)
set remoteMachine to "eppc://user:password@xxx.xxx.xxx.xxx"
-- change the above to match the user name, password and IP address of the destination/remote machine
try
using terms from application "QDisplay"
tell application "QDisplay" of machine remoteMachine
-- signal to remote station that message was received
set message to "Received"
set messageSize to 40
set messageColor to "red"
end tell
end using terms from
end try
Lastly, create a duplicate “clear window” script on the A2 side, to purge the QDisplay window after the “received” message is printed.
Source available at https://github.com/kreivalabs/frontOfHouseMessenger
Remote voices with AppleScript – I once did a show in which a laptop on stage needed to beep, ding and generate the Apple voice assistant sounds, all while in motion across the stage. Since there wasn’t a place to hide a speaker with the machine in constant motion, I used the eppc protocol supported by Remote Apple Events (System Preferences>Sharing) and the following script. The laptop could be tethered via Ethernet, or on the same Wi-Fi show network as the QLab machine (always use closed networks for show systems). Note – the eppc protocol is unstable under OS X 10.8.
set remoteFinder to application "Finder" of machine "eppc://user:pass@IP_address" -- set the user name, password and IP address of the target machine
using terms from application "Finder"
tell remoteFinder
tell application file id "com.apple.SystemEvents"
say "Hello Dave. It’s nice to see you today"
end tell
end tell
end using terms from
Make a secure ad hoc wireless network – At some point along the way (I can’t remember when…), Apple removed authentication security from the ad hoc network function “Create a network…” in the Wi-Fi setup. I use ad hoc networks to link up iDevices to Macs running Max, QLab, Isadora, etc., and not being able to lock down the network is a real drag and a security hole. Here’s a better method.
Setup
First, go to System Preferences>Sharing. Uncheck “Internet Sharing” if it is checked. Then, enter the following in Terminal:
sudo networksetup -createnetworkservice Loopback lo0
sudo networksetup -setmanual Loopback 172.20.42.42 255.255.255.255
You can alter the IP address to your liking. You will use it in a few more steps.
Configuration
- Go to System Preferences>Network and configure the newly created ‘Loopback’ network device to match the IP and subnet above, setting the ‘Configure IPv4’ menu to ‘Manually’.
- After setting up ‘Loopback,’ go to System Preferences>Sharing. Click ‘Internet Sharing’ on the left. Change ‘Share your connection from’ to ‘Loopback’ on the right.
- Under ‘To computers using’ check ‘Wi-Fi’
- Click ‘Wi-Fi options…’ and set a name, password and channel for the new wireless network. Click OK.
- Check the ‘Internet Sharing’ box to enable the new secure network.
Remote application control with AppleScript – Building on the techniques above, you can use this script to remotely open an application on a target machine and launch a file. Again, this may not work as expected under OS X 10.8. The example below launches iTunes and plays a sound file.
set remoteFinder to application "Finder" of machine "eppc://user:pass@IP_Address" -- set the user name, password and IP address of the target machine
using terms from application "Finder"
tell remoteFinder
open application file id "com.apple.iTunes"
delay 1
tell remoteFinder
set theFile to POSIX file "/path/to/your/file" -- adjust this to the path of your file, including file extension
open theFile using application file id "com.apple.iTunes"
end tell
end tell
end using terms from
Print a document with QLab – A further example of AppleScript integration with QLab, this method allows for the sending of print jobs as a QLab cue.
set theFile to (POSIX file "path/to/your/file") -- change this to the appropriate file location
tell application "yourPrinterSoftware" -- the control application of your printer
activate
print theFile without «class pdlg» -- inhibits the printer dialog window
quit
end tell
The method above assumes a USB printer, but it could be altered to use a wireless or AirPrint device, so long as the QLab host and the printer are on the same network; one possible approach is sketched below.
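As one possibility (a sketch, not a tested production method): skip the printer’s own control application and hand the job to CUPS from the script cue using lpr. The queue name here (“Lobby_Printer”) is hypothetical; list your actual queues with lpstat -p:
set thePath to "/path/to/your/file" -- change this to the appropriate file location
-- "Lobby_Printer" is a hypothetical CUPS queue name; check yours with lpstat -p
do shell script "lpr -P Lobby_Printer " & quoted form of thePath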
Installation video wall control with QLab – In the summer of 2015, Left Coast Mac was contracted by Martin Christofel of Scenografique to design and program a dual display installation video wall for Axon’s corporate office lobby in Seattle. The media design took the form of a space station bridge looking out onto a star field, with multiple sprites moving in and out of frame and a heads-up display that prompted a visual shift. All of it had to run during business hours, without the need for staff to turn it on and off daily. For this, we used QLab, as it gave us the most flexibility in setting event timings and transitioning between “scenes,” and because of its AppleScript and OSC integration.
Since I knew this would be a big programming job, I broke the media and events out into discrete cue lists within the larger workspace, allowing me to focus on certain elements one at a time, cross-referenced from a master list of GO commands.
To automate the process, I created simple AppleScript applets triggered by the Mac Pro’s internal clock (via Calendar events) to halt playback at the end of the work day (18:00 hours, Monday-Thursday) and reboot the machine the following morning at 06:45 hours to clear the RAM cache. At 07:05 hours, another applet launches the workspace, which then auto-loads and begins playback. At 20:00 hours on Fridays, the machine shuts down for the weekend, then, via another automated action, starts up Monday morning. A simplified sketch of the two applets follows.
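The production applets live in the installation itself; this is a minimal reconstruction of the idea, with a hypothetical workspace path:
-- evening applet, run by a Calendar event at 18:00: stop all playback
tell application id "com.figure53.qlab.3"
tell front workspace to stop
end tell

-- morning applet, run by a Calendar event at 07:05: open the workspace,
-- which is configured to auto-load and begin playback
-- the path below is hypothetical; point it at your own workspace file
tell application id "com.figure53.qlab.3"
open (POSIX file "/Users/show/Documents/lobbyWall.qlab3")
end tell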
The media has been playing uninterrupted since July of 2015.
You can read articles online about the office and its themed environment (yes, it won “Geekiest Office of the Year”).




Program LEDs to flicker like a candle – Sketched and beta tested using an Arduino Uno. Grab some LEDs and a dev board to try it out. After you tweak it to your liking, burn it to an ATtiny chip so you can hide it in an actual candle.
/* LED candle flicker effect for Don Quixote,
   Cornish College of the Arts Fall 2015-2016
   Brendan Patrick Hogan
   Sound Design Area Head | Performance Production */

int ledPin1 = 5; // yellow LED 1
int ledPin2 = 6; // yellow LED 2
int ledPin3 = 7; // red LED 1

void setup()
{
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
  pinMode(ledPin3, OUTPUT);
}

void loop()
{
  analogWrite(ledPin1, random(90)+90); // yellow 1
  analogWrite(ledPin2, random(80)+50); // yellow 2
  analogWrite(ledPin3, random(90)+90); // red 1
  delay(random(100));
}