Wouldn’t it be nice if I could take a picture of a bird as it passed through an invisible beam of light? The idea’s not original – these things exist – but they are quite dear, so I am experimenting with making my own.
The most obvious way is a light source and a photocell, and indeed many years ago at secondary school I developed an analogue circuit[ref]people normally consider monostables as digital but mine was built using discrete transistors and resistors, and the time delay was infinitely variable, as it would be with a CMOS 4538, so I consider it analogue[/ref] using OC71 transistors scavenged off postwar computer boards. These made up monostable multivibrators for the delay elements, plus one transistor with the black paint scraped away from its housing to act as a phototransistor.
This gonzo technology of 40 years ago triggered the flash for the source negatives used in the animation – you set a very slow drip, and as the drop passes the photocell it triggers the delay. By increasing the delay between the drop passing and the flash going off you get the progressive animation, assuming each drop makes a similar pattern.
This was done with a manual camera, a new Canon AE1 ISTR that one of the other kids had. The trick is to do all this in the dark: set the camera on Bulb and use the trigger to trip the electronic flash, which responds within milliseconds and has a short duration of about a millisecond if you reflect some of the flash back into its own photocell (to turn it off as early as possible).
So there’s nothing incredibly hard about doing this in controlled conditions, in a darkened room. If I were doing it again I’d do it the same basic way, using a phototransistor and a CD4538 CMOS dual monostable rather than a discrete monostable – one half to give a delay controllable with a pot, the other to make a pulse off the falling edge to drive an NPN transistor that trips the flash. There’s no need to muck about with PIC microcontrollers or Arduinos, though you could do it that way if you really have to, for a higher cost plus the aggravation of writing code, plus the jitter of the Arduino sampling the sensor/responding to the interrupt. In high-speed photography sub-milliseconds matter.
Everything gets harder outdoors
Outdoors you have massive and variable amounts of light from the sun, distances are longer, there’s just a whole lot more hurt all round.
Let’s first scope the problem of response time. A typical bird like a house sparrow flies at 46km/h – but I’m going to assume it’s a lot lower as it approaches something of interest like a feeder, let’s say 10km/h, which is 2.8m/s. If the field of view is about 1m wide, the sparrow will cross that in about a third of a second, so if I position the beam just out of the right-hand edge the sparrow will reach the middle of the frame in 180 ms.
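The arithmetic above is easy to sanity-check. A rough sketch of the timing budget, using my assumed numbers (10km/h approach speed, 1m field of view, beam at the right-hand edge):

```python
# Back-of-envelope timing for a sparrow crossing the frame.
# Assumptions (mine): 10 km/h approach speed, 1 m field of view,
# beam positioned just outside the right-hand edge of the frame.

speed_kmh = 10
speed_ms = speed_kmh * 1000 / 3600           # ~2.8 m/s
field_width = 1.0                            # metres

t_cross = field_width / speed_ms             # time to cross the whole frame
t_to_centre = (field_width / 2) / speed_ms   # beam at the edge -> middle of frame

print(f"{speed_ms:.2f} m/s, crosses frame in {t_cross*1000:.0f} ms, "
      f"reaches centre in {t_to_centre*1000:.0f} ms")
# -> 2.78 m/s, crosses frame in 360 ms, reaches centre in 180 ms
```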
Herein lies the first problem. I set up a pendulum experiment with a reel of solder hanging from the light fitting – the distance from the sensor in the stand to where the solder got recorded is about 40cm, and the solder drops about 40cm in the first swing.
The potential energy at rest is mgh, where m is the mass of the reel and g is the Earth’s gravity at sea level, about 9.8ms⁻². This gets entirely converted to kinetic energy at the low-water mark in front of the sensor, which is ½mv², therefore 2gh = v², so
v = √(2gh) ≅ 2.8ms⁻¹ – roughly the speed of the flying sparrow approaching the feeder – and I’m getting a latency of about 140ms from that 40cm displacement (0.4÷2.8 ≅ 0.14s).
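The pendulum numbers work out like this (g and the 40cm figures are as above):

```python
import math

# Pendulum check: the reel drops 40 cm in the first swing, and the
# recorded position is about 40 cm past the sensor.
g = 9.8            # m/s^2, gravity at sea level
h = 0.4            # drop in metres

v = math.sqrt(2 * g * h)     # speed at the bottom, from mgh = 0.5*m*v^2
d = 0.4                      # sensor-to-recorded-position distance, metres
latency = d / v              # time the reel takes to cover that distance

print(f"v = {v:.2f} m/s, implied latency = {latency*1000:.0f} ms")
# -> v = 2.80 m/s, implied latency = 143 ms
```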
Try as I might I could never reduce this; it was only when I substituted an electronic flash for the camera that I saw I was limited by the latency of the shutter – the flash picks the reel up in about a fifth of the time from the centreline (~30ms). Now the camera is an EOS400D, and although I simulated the first button press I didn’t use mirror lockup – this page from this site, which offers a smart trigger design, indicates 110ms, to which I’d add the trigger latency of 30ms determined with the flash to give 140ms, so I am in the right ballpark. There’s not a stupendous amount of mileage in trying to reduce the trigger latency if I am going to use this outside. People do photograph wildlife in the night, like bats, using open shutter -> wait for trigger -> trigger flash, but this isn’t going to work in the day, so the shutter lag is an inherent problem.
This might be easier using a mobile-phone-type camera module like the Raspberry Pi’s, but against that the digicam software may be slow. To be determined.
Normal light tripwires, including the rather nice SmaTrig2, seem to use an unmodulated light beam. I confess to being tempted by the idea of using a laser pointer, in which case I would use the CMOS monostable variant of the old school project. The signal is probably strong enough to be used outdoors, provided the receiving sensor is shielded by a tube to get rid of most of the ambient light. There are two downsides to that. One is you red-dot your subject, as shown in the description of the lovely leaping rat picture. This is solvable in principle by turning off the laser on triggering – there is at least 50ms before the picture gets taken, which is enough time to turn the beam off. The other downside is that you need to align the laser fairly well on the sensor because of the narrow beam. In the field that will involve wrangling more tripods and make the beam more sensitive to wind moving the source – no gaffer-taping it to a tree branch 😉 It is not going to be fun trying to hit a 5mm sensor with a 2mm laser dot over 5m or more.
Modulated IR light
There’s a huge industry built around IR remote controls, which send short pulses of a carrier typically at about 38kHz, on-off keyed by the modulation. Integrated receivers are common, and these deliver the modulation (not the 38kHz carrier). Initial experiments are promising – it’s easy to sustain a beam across a few metres in sunlight if I recess the receiver in a tube about 2cm wide by about 5cm long. There’s no particular need to aim or shield the emitter; the typical emitter beamwidth of 15° to 45° means I can be quite casual about alignment in the field, unlike a laser. Although many cameras are slightly sensitive to IR, the beam is not of the same intensity as a laser so it doesn’t red-dot the target in practice. The components are cheap. Borrowing a stroke of genius from Wildlife Gadgetman, I swiped the idea of using a trunking bend to make a low-cost housing, mounting and snoot all in one.
The modulation rate sets a floor on the detection speed. I initially used a 400Hz modulation rate, but increased this to reduce latency. The spec sheet for the IR receiver sets the minimum on-time at 200µs and the maximum modulation duty cycle at 40%, so the minimum cycle time is 500µs, giving a frequency of 2kHz. In practice I need 8 cycles of 38kHz on and 12 missing cycles, giving me 1.9kHz, so the detection latency is at least 400µs. The cycle time is just over 500µs, but by looking for both edges this can be reduced, though not to half because the duty cycle is asymmetric.
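The burst timing works out as follows, using the 38kHz carrier and the spec-sheet limits quoted above:

```python
# IR burst timing, from the receiver's spec-sheet figures quoted in the text:
# 38 kHz carrier, minimum on-time 200 µs, maximum modulation duty cycle 40%.
carrier = 38_000          # Hz
min_on = 200e-6           # seconds
max_duty = 0.40

min_cycle = min_on / max_duty    # 500 µs -> 2 kHz ceiling from the spec sheet

# What actually gets sent: 8 carrier cycles on, 12 off = 20 cycles per burst.
on_time = 8 / carrier            # ~211 µs, just over the 200 µs minimum
burst_period = 20 / carrier      # ~526 µs
mod_freq = 1 / burst_period      # ~1.9 kHz

print(f"on {on_time*1e6:.0f} µs, period {burst_period*1e6:.0f} µs, "
      f"modulation {mod_freq/1000:.1f} kHz")
# -> on 211 µs, period 526 µs, modulation 1.9 kHz
```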
Using the pendulum test and flash I get this
Tripping the flash eliminates parallax errors (the LED wasn’t exactly on the centreline of the camera). If I say the total error between the two ghost images at the centre of the black receiver is about 1cm, that’s about half of 1/280th of a second either way, or approximately 2ms. I am okay with that.
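Converting that smear into time, at the pendulum’s bottom-of-swing speed from the earlier calculation:

```python
# Convert the ~1 cm total smear between the two ghost images into time,
# at the pendulum's ~2.8 m/s speed at the bottom of the swing.
v = 2.8           # m/s
smear = 0.01      # metres, total error between the two ghost images

t_total = smear / v          # ~1/280th of a second in total
t_each = t_total / 2         # half of that either way

print(f"total {t_total*1000:.2f} ms, each way ~{t_each*1000:.1f} ms")
# -> total 3.57 ms, each way ~1.8 ms
```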
The receiver uses an Arduino set to look for a change, running in a delay loop of 400µs and dropping out if the interrupt has set a flag. I use a Ciseco/Wireless Things RFu 328, which is basically an Arduino Uno with a radio on the side – being able to send radio messages is great for debugging, and perhaps one day I will use the radio for remote triggering too, or taking a census of the sparrows on my feeder.
The Arduino loops and tests whether an edge has been received every 400µs; as a result there’s a lot of jitter on the trigger.
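A toy simulation shows where that jitter comes from: the beam-break can land anywhere in the 400µs polling interval, but only gets acted on at the next loop boundary, so the added delay is spread uniformly between 0 and 400µs.

```python
import random

# Toy model of the 400 µs polling loop: an edge can arrive at any point in
# the interval, but is only noticed at the next poll boundary.
random.seed(1)
poll = 400e-6     # polling interval, seconds

events = [random.uniform(0, 1) for _ in range(10_000)]   # random edge times
jitter = [poll - (t % poll) for t in events]             # wait to next poll

print(f"max jitter {max(jitter)*1e6:.0f} µs, "
      f"mean {sum(jitter)/len(jitter)*1e6:.0f} µs")
```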
However, this is a job that can be done with a CD4538. Set the first monostable as rising edge retriggerable with a period greater than 530µs. When the beam is present it will always be set, but will reset on the first missing pulse (or the second or third if you set the interval longer). Set the second monostable to give you about 50ms triggered from the falling edge and take the output into a transistor to pull down the camera trigger. Job done for 80p rather than £15….
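As a sanity check on the timing, the datasheet approximation for the CD4538 pulse width is T ≈ R×C; the component values below are illustrative picks of mine, not a tested design.

```python
# Rough component values for the CD4538 version, using the datasheet
# approximation T ~ R * C for the output pulse width.
# The R and C values here are illustrative picks, not a tested design.

def pulse_width(r_ohms, c_farads):
    """CD4538 output pulse width, datasheet approximation T = R * C."""
    return r_ohms * c_farads

# First monostable: retriggerable, needs > 530 µs to ride over one burst gap.
t1 = pulse_width(68_000, 10e-9)        # 68k and 10nF -> 680 µs
# Second monostable: ~50 ms pulse to pull down the camera trigger.
t2 = pulse_width(1_000_000, 47e-9)     # 1M and 47nF -> 47 ms

print(f"t1 = {t1*1e6:.0f} µs, t2 = {t2*1000:.0f} ms")
# -> t1 = 680 µs, t2 = 47 ms
```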
I used the RFu because it was interesting to see how long the beam was blocked for, and for ease of fiddling with the circuit parameters. It also allows me to set lockout periods and suchlike, which are going to be useful in the field. Although the gratuitous jitter grates, it’s very small compared to the latency of the shutter, and indeed the jitter of the shutter release time.