Why I Shoot Film in 2018 📷

Photography is one of my (numerous and disparate) hobbies, and you may be perplexed to learn that I shoot quite a lot of 35mm film. Can you even do that any more? Isn’t that really inconvenient?  As it turns out, yes on both counts, but I quite enjoy it!

I have always been interested in documenting the world around me, and I was given a Nikon F-301 (35mm film SLR) camera as a teenager to learn with.  Almost from the start, I was what might be termed an “aspiring” photographer – more serious than your average “holiday snapper”, and not a professional, but always wanting to improve my skills and images.  I was rarely happy with taking the odd snapshot here and there and was always looking out for opportunities to record how I saw the world.

The Learning Process

I am not a naturally talented photographer.  I often found myself with underexposed, badly composed and/or cluttered images.  As I’d term them now, I was creating “Where’s Wally” images. On a trip to Cornwall one year I took a fair few images like this.

A scene of a beach with many people gathered on it around tents and with wind breaks, many with surf boards.
St Ives in surfing season – can you spot the man in the red and white stripey top?

Like, what is that even of? Clearly, there’s stuff going on, but there is no obvious subject.

During the same holiday, however, I took a few images that were less flawed.

An A5 diary notebook resting at an angle on a small window sill
A diary resting on a window frame

At least, here, my composition is significantly better.  I’m not sure if I was particularly aware of the “rule of thirds” at the time, but this doesn’t quite follow it (and the horizon is wonky, and … and … and….).

All of this learning was done on film.  Often, I’d bump into the limitations of the medium.  I often wanted to photograph “my” band at gigs – and that’s how I learnt about film speeds, something that is still relevant today (although not for the same reasons).  Trying to use ISO 200 film in low-light conditions is not so great! ISO 800, on the other hand, worked a treat, but the grain was a lot more noticeable.

A guitarist and lead singer on stage in red lighting, screaming into a microphone
Lead singer of “my” band, on Fujicolor Pro 800
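(A quick aside on the arithmetic of film speeds, since it tripped me up at the time – this is the general relationship, not anything specific to these films: each doubling of the ISO number is one “stop” more sensitivity, so

\[ \frac{800}{200} = 4 = 2^2 \;\Rightarrow\; \text{ISO 800 is two stops faster than ISO 200,} \]

which means you can use a quarter of the shutter time at the same aperture – at the cost of more grain.)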

In the late 2000s I gained access to digital cameras which weren’t terrible. Digital is a lot less forgiving than film (particularly in terms of exposure latitude), but it’s also less punishing.  You could delete a bad shot almost immediately – whereas with film I’d have to wait days and spend money to find out that a shot was god awful, or that the flash fired at the wrong time and all of the photos from your friend’s “prom” party were underexposed garbage (true story).

Digital cameras continued to improve, and I was able to afford my own equipment as I progressed in life.  I could continue to learn about composition, saturation, lighting and other things without churning out hundreds of expensive failures.  Things were good.

A photograph of a police officer in a high visibility jacket looking down at something out of the frame
Police officer keeping an eye on people celebrating Chinese New Year in London, shot on a Canon 1200D.

Rediscovery

Sometime later, during a house move, I unearthed my Nikon F-301 and lenses.  I had a look through the viewfinder of my F-301 with my (rather lovely) 105mm Nikkor lens.  I had forgotten how awesome that lens is, having mostly been restricted to the kit 18-55mm lens that came with my DSLR. I also had a nice 22mm wide-angle and a (classic) 35mm prime.  I really wanted to use them again.  I could have got an adapter for my DSLR, but I wouldn’t have got the full-frame experience that I was used to.  It wouldn’t be the same.

To my delight, after doing some research it turned out you can still get 35mm film. And there are even people shooting it!

The F-301 body was broken.  At some point in the last decade, it had been dropped – the mirror flap mechanism had been dislodged.  I took it to Sendean cameras in Clerkenwell to get a quote to repair it.  £100, at least.

SLR bodies are not generally particularly expensive – at least not compared to the lenses.  The F-301 was a pretty common “enthusiast grade” body in the late 1990s and, as it turns out, there are plenty of them floating around in good working order.  I quickly found one on eBay and won the auction for £35.  To save on shipping costs, I met up with the seller at Marylebone station and performed what would have looked like some kind of Soviet-era clandestine exchange of photographic equipment and cash, beneath the departure boards.

I now had a working camera and a selection of lenses. I also discovered a very expired roll of Fuji Superia (my previous day-to-day film of choice) nestling in my kit bag. Unfortunately, over the years, dirt had become lodged in the light seal, and when the film was developed it had left an enormous scratch down the entire length of the negative.

A photograph of a lift mechanism with a bright blue scratch down the middle
Scratchy…

Becoming the Modern Film Shooter

Having exhausted my stock of scratchy Fuji Superia, I managed to source some new film from Silverprint. I attempted to get some Agfa Vista Plus from Poundland, but whilst their website said my local store had some in stock, trying to explain to the people working there what I was looking for turned out to be challenging.  (“What do you mean, film? We’ve got DVDs over here… No? Do you mean Kitchen film? Why would you want 35mm of cling film? What do you want, you weirdo!”).

There are a few labs available that can process film, mostly mail order. Most can do black and white and the C-41 process. After testing a few labs, my favourite turned out to be UK Film Lab (now Canadian Film Lab), who not only offer push/pull processing but also scan to your specifications and give a commentary on every film they develop.  Top class service.  Since then, other services have popped up, and I’ve used filmdev a fair bit, but Canadian Film Lab remain my firm favourite so far.

The process of getting film developed has changed significantly since I shot film for the first time.  Getting a roll developed used to be a matter of taking your film to a high-street store, waiting for a period of time, then going to pick up some prints and maybe (if you paid extra!) a CD containing scans of your photographs. Now we have the Internet – and high-quality digital printing at home (which is usually what the low-cost labs did anyway).  It’s now just a matter of sending your film to the lab, waiting for them to process it, paying your invoice and moments later you’re given a link to download your scans – so satisfying! A little later, your negatives are returned to you for archiving.

From above: at the bottom of the image, a cylindrical black tank around 15cm in diameter on its side; above it, the lid of the tank, which has a hole for pouring chemicals in and a light trap to prevent light entering the tank; above that, a red plastic cap to prevent chemicals from leaving the tank. To either side are two reels, split in half, which are used to hold the film.
A daylight processing tank, with two reels for 35mm or 120 film.

That’s not the only way to get your film processed, though.  Developing your own film is actually surprisingly easy, particularly for black and white – you don’t need a darkroom, just a light-proof changing bag, the ability to make up solutions of three chemicals (below) and a daylight film processing tank (above).

Three bottles of chemicals, from left to right, one large labelled Ilfotec DD-X, two smaller ones labelled Ilfostop and Rapid Fixer.
Just three chemicals – from left to right, developer (brings the image out on the film), stop bath (stops development process), and fixer (prevents film from reacting to light any further)

Scanning is a bit trickier (and more expensive), but if you have a bit of cash you can get yourself a pretty good negative scanner – or if you don’t, you could get a fairly rubbish film scanner (sorry, Lomography!) … or you could go to a photography workshop like Photofusion in South London and hire a scanning suite, if one is available near you.

Three photographs, two of a cat and one of members of the public from above on a wooden desk
Two pictures of Max, and another from my series “commuting” printed in The Bright Rooms, in Peckham, South London

You’ll notice I’ve yet to say anything about prints.  That’s an interesting part – often the outlet for your images will be online, so prints are not a requirement.  It’s entirely possible to go from exposed film to Instagram post in an hour or two, without ever printing the image to paper.  This ‘digilogue’ workflow is pretty common for modern film shooters.  That said, there is something incredibly satisfying about making a really good darkroom print – which is something I’m only getting into now.

But why?

Having figured out that it’s possible, why did I bother to continue?

Isn’t it really expensive?

Well, yes and no.  The raw cost per image is much higher than with digital.  However, extremely good quality film kit is much cheaper than the modern (digital) equivalent.  Good optics are good optics, and even old lenses still produce excellent images (provided they’ve been looked after).

Also, your hit rate tends to be much higher than it would be with digital – partly because every shot has a monetary cost, so you’re incentivised to make every frame count.

So whilst yes, buying film, processing it and scanning it all have an ongoing cost, the images you produce tend to be of higher quality – partly because of the kit you’re using, partly because you’re thinking a lot more about the shots you take.

But isn’t digital better quality?

Controversial!  Ken Rockwell goes into a lot more depth about this, but the main takeaway (for me, at least) is that you can’t really directly compare the two mediums. It’d be like saying that an inkjet printer is better quality than an oil painting.

Digital sensors are a grid of pixels, each of which records only one colour.  Film is a continuum of light-sensitive emulsion, limited by chemistry and optics rather than a fixed grid.  In terms of resolution, Ken puts 35mm film at the equivalent of an 87-megapixel sensor. I don’t know about you, but I can’t afford one of those …
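As a very rough sanity check of the order of magnitude (the resolving power figure here is purely an assumption of mine – it varies wildly with film stock, lens and technique): take a 36 × 24 mm frame, assume the film and lens together resolve around 125 line pairs per mm, and allow two pixels per line pair:

\[ (36 \times 125 \times 2) \times (24 \times 125 \times 2) = 9000 \times 6000 = 54\ \text{megapixels}. \]

Nudge the assumed resolving power up or down and you land anywhere from tens to well over a hundred megapixels – which is the ballpark Ken is talking about.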

Many extra large projection installations use film for their source material – it enlarges much better than digital, and you don’t get the screen door effect.

Fine, but what about grain? I quite like the look of film grain, actually.  It’s certainly better than multicoloured digital noise.  Putting my software engineering hat on, maybe film grain is more of a feature than a bug? I mean, it is literally a feature in Photoshop.  People pay for film-like effects with software like Silver Efex Pro.

Isn’t it inconvenient?

Yes – in the same way that building your own furniture is inconvenient. You really do have to enjoy the process for it to be worth it, but achieving an excellent result is very rewarding!

If you just want pictures, film photography is probably not for you, but if you want to be physically connected to the images you produce – then maybe it is.  With digital, the process ends when you press the shutter.  With film, that’s just the beginning.

Many people are scared of not being able to instantly see their pictures.  To begin with, so was I.  What if I was wasting film? What if they don’t turn out right? Actually, shooting film is an excellent way to un-learn your excessive chimping habit – and focus on what you’re actually trying to do.  Rather than fiddling with settings randomly until you get the right exposure, you can stop and think about what you’re trying to do – then compose the shot – then shoot.  It’s a lot slower, but sometimes it’s good to slow down a bit and really focus.  The first few times I shot a film camera I found myself staring at the (blank) back of my camera after taking the picture.

OMG film is amazing, let’s shoot everything with film!

Err, well, actually…maybe not.

I’m not exclusively shooting film.  There are some things that don’t benefit from it.  In particular, I occasionally like to do product photography (like the pictures of the daylight tank and chemicals above).  Unless you want your product photos to look like a ’70s magazine advert, using the medium of film doesn’t really do much here.  And using flash is a lot trickier with film (getting it right for anything more than a basic single-flash setup involves doing maths).  At least with digital, you can kind of wing it and chimp until you get the lighting right.
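For the curious, the maths in question is mostly the classic guide-number relationship for a single manual flash (the numbers below are just an illustration, not a specific flashgun):

\[ N = \frac{GN}{d} \;\Rightarrow\; \text{with } GN = 36\ (\text{metres, ISO 100}) \text{ and a subject at } d = 4.5\ \text{m},\quad N = \frac{36}{4.5} = f/8. \]

Add a second light, a bounce or a different ISO and you’re scaling and recalculating from there – exactly the sort of thing digital lets you skip by chimping at the back of the camera.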

Also, my phone camera is surprisingly good (providing you wipe the lens before taking photos – protip), and is more than sufficient for most note taking needs.

It’s pretty cool though …

The number of people shooting film is surprising.  There are entire YouTube channels dedicated to people shooting film.  Film cameras that used to cost £10k can now be purchased for a fraction of the price.  Medium format in particular, which used to be out of the reach of most enthusiasts, is now a feasible investment (although not one I’ve made as yet).

There is a range of film stocks available, including re-reeled Kodak Vision cinema stock – which is the most advanced film technology out there (and still being used in cinema!).  There are even people shooting in large format.

 

So if you have a 35mm film camera languishing somewhere, why not pop in a roll of Kodak ColorPlus, shoot it, and send it off to filmdev.co.uk (or your local lab) to see what happens?

My Headphones

As someone who works in audio, I appreciate a good quality set of headphones.  I am, by no means, an audiophile – I’m an engineer. I want my products to operate within tolerances that are suitable for their purpose – I’m not up for cryogenically freezing my cables, painting my CDs with green pen, or any other magical tweaks that give marginal gains at best.

Sennheiser HD-25, my old “daily driver”

For a long time I used to rock a pair of Sennheiser HD-25s: headphones designed to attenuate the sound of the sonic boom on Concorde, which were only released to the public after people kept stealing them and giving them to house DJs who, as it turned out, were quite into their high levels of isolation.  They were really handy for the kind of stuff I was doing back then – monitoring broadcast mixes in loud environments, sound checking bands, that kind of thing.

These days, I don’t have quite such stringent requirements – and as great as the HD-25s are, on-ear cans tend to get a bit uncomfortable for me over long periods.  However, I have retained a penchant for full-bodied bass.

Thus, my audiophonic journey led me to Beyerdynamic. Their DT100s were almost ubiquitous in the UK’s radio industry at one point (and still have a pretty big presence today), and are notable for the fact that every component can be replaced. Earlier in my career as a broadcast engineer, I spent a fair amount of time doing just that.  From the drivers to the cable to the headband to the internal wiring, every piece has a part number and you can replace it.  The problem, for me, is that they were uncomfortable and sounded … well, pretty rubbish, if I’m honest! And I couldn’t help feeling that if I went out wearing them, I’d look like Alan Partridge.

Steve Coogan as ‘Alan Partridge’ wearing DT100’s

On the positive side, the company has a slightly sleeker sibling, the Beyerdynamic DT770 PRO: used by hundreds of musicians in recording studios, favoured by pop radio stations – and frankly, if they’re good enough for Trent Reznor, they’re good enough for me.  In all seriousness, they have a fairly flat frequency response from 20Hz to 20kHz with a bit of a bump at the top and bottom, and excellent isolation properties.  And, importantly, I know how they sound, so I can evaluate audio with them pretty well (distinguishing the sound of the cans from the source material).

DT770’s, approved for use with beards.

So, a good starting point. They also come in 80 ohm and 32 ohm versions, which are more suitable for use with mobile devices and consumer kit (which would struggle to drive the pro-standard 250 ohm version on their own). Admittedly they’re not the same drivers any more, but they’re fairly close.  So I went for a Beyerdynamic DT770 Pro 80 Ohm (yes, this is an affiliate link).
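As a back-of-the-envelope illustration of why the impedance matters (assuming a phone headphone output of around 1 V RMS – typical, but it varies by device):

\[ P = \frac{V^2}{Z}: \qquad \frac{1^2}{250\,\Omega} = 4\ \text{mW}, \qquad \frac{1^2}{80\,\Omega} \approx 12.5\ \text{mW}. \]

The 80 ohm version can draw roughly three times the power from the same source, which is most of why it plays louder on portable kit (driver sensitivity differs a little between versions too).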

The stock cable is fine, but sometimes the minijack connector doesn’t fit into phones, the cable is too long, or it’s in some other way inconvenient. So I decided to modify the headphones to make the cable detachable.

This is a pretty simple modification, once you get the hang of dismantling and reassembling these (and, as I previously mentioned, I’ve dismantled quite a lot of headphones).

Jason from customcans.co.uk shows you how to fit a mini-XLR connector in this video

In my case, I instead went for a smaller 3.5mm minijack socket which required less intrusive modifications.

Clearly, I now need a cable for my headphones.  I could have bought one, but I decided what the heck, I’ll make that myself as well.

One of my favourite brands for flexible and rugged cable is Van Damme, and for headphones, where you just need three conductors (ground, left and right), their Pro Grade Classic XKE 1 Pair Install Cable is thin and unobtrusive, but high quality. Although you shouldn’t really put left and right on a twisted pair, the distances I’m likely to be running (a metre or two, at most) mean this is mostly a non-problem.

Historically, I’ve not been a fan of minijack connectors.  Give me an XLR or even a 1/4″ Jack to solder up, and I’m happy, but minijacks are just a bit fiddly for my liking.  Or that’s what I thought – until I discovered the Amphenol KS3PB-AU which is, frankly, a joyous thing to solder and assemble.  And, unlike crappy minijacks from Maplin (RIP), all of the terminals are actually connected when it comes out of the bag!

They look pretty swank as well.

To finish the cable off – partly because it looks nice, but mostly because my cat would chew through it otherwise – I added some 6mm Flexo PET Expandable Braided Sleeving in Neon Blue and some 4:1 7mm blue heat shrink to hold down the ends.  A benefit of applying techflex is that it makes the cable a lot more difficult to tangle.  A downside is that it causes some minor microphonics when it rubs against things, but that’s no worse than many other headphone cables.

My effort at a good looking headphone cable

This is what I use to connect my cans to a PC or laptop.  I’ve also got a similar but slightly longer cable with a 6.35mm stereo jack at one end, sleeved in white techflex, just in case I find myself needing to hook into some slightly more pro kit and need a bit of space to move.

Then there’s the problem that my Pixel 2 does not have a headphone port, and I’ve not had much luck with the Google USB-C to 3.5mm stereo adapters. So I decided to go Bluetooth and got myself a SZMDLX Bluetooth receiver (yep, that’s certainly a jumble of letters right there!), which seems to do the job and can be clipped to the headphones quite conveniently.

I also made a shorter cable with a right angle Neutrik minijack (eugh) to hook this up, as the one which came with the item became faulty in a matter of days.

 

Finally, after sweating into the stock earpads for a fairly significant amount of time, I decided to get some Brainwavz Round Pads in a black PU leather finish.  For me, this is a significant upgrade: they provide a better seal around my head (and glasses!) and help stop the bass from escaping.

Wireless DT770’s anyone?

So there you have it, a guide to my headphones that nobody asked for but I wrote anyway!

🗺️ Recording GPS Tracks with GPSlogger on Android for Realtime Trains 🚆

(featured image is a Class 800 test train set spotted at Paddington shortly before the launch of the Intercity 125 replacements in October 2017, operated by GWR and Virgin East Coast)

As a long-time user of Realtimetrains.co.uk, and someone with an appreciation of Open Rail Data (having used it myself), I jumped at the chance of helping their development.  They’ve put out an appeal for GPS tracks of train services that you travel on, submitted along with a link to the service in the application.

A lot of the work behind the scenes performed by a small team involves maintaining data relating to train positioning and comparing this with the signalling system outputs we use. This is an entirely manual task involving one of us going out with a radio controlled watch and monitoring the passage of trains through stations and junctions. An area of recent interest for us has been attempting to compute this automatically using other known bits of information.

In order to validate this effort, we either have to do the manual task with a watch or collect a large dataset of GPS traces to compare against our dataset. The more data we have, the more we can improve the end product.

Which is awesome.  I wanted to contribute. I found a suitable app for Android called GPSlogger which seemed to fit the bill.  It has quite a few settings, so I thought I’d write a quick guide on what I’ve used to successfully generate a working track for a service.

Wanting to make sure the data I produced was as useful and error-free as possible, I asked @Realtimetrains for some guidance.

I have put together some settings that should help achieve these goals using GPSlogger.

One of the main things is to make sure that we are recording the data frequently – which isn’t necessarily great for battery life, so be wary of this on long journeys! 

 

First of all, hit the hamburger menu (stack?), and go into …

Logging Settings

Tick Log to GPX

Set New file creation to Custom file

Set the Custom file name to “%YEAR%MONTH%DAY-” – this will mean when you create your GPX files, you can append the service ID and uniquely identify the service with the ID and the date.

Tick Ask for a file name on every start, so you have the chance to put the WTT schedule UID into the filename when you start a track.

Untick Allow custom file name to change dynamically to prevent multiple files from being created (you can merge them later, but it’s an extra step that’s not really needed!)

Next, go back and navigate to …

Performance (Timings, filters and listeners)

In Location Providers, have:

  • ✔ GPS enabled
  • ❌ Network disabled
  • ❌ Passive disabled

(this prevents your location from being sourced from the cell network or wifi/bluetooth beacons, which can throw off the accuracy)

Set the Logging interval to 0 (zero) (as frequently as possible).  You’ll need this to ensure your GPX file contains enough points!

Set Distance Filter to 0 (in order to log new points even when you are stationary), and ensure that Don’t log if I’m not moving is turned OFF (the fact you are not moving at a particular time is useful information!)

Set Accuracy Filter to 60 (so that points that are wildly inaccurate are not recorded)

Optional: You might want to tick Keep GPS on between fixes, which will increase the accuracy and frequency of the measurements (as it won’t have to wait to get a lock between each measurement).  However, it will drain your battery a lot faster when combined with the Logging interval set to 0 as above!

You’re pretty much set now…

Creating a Track

First, find your service using the Realtime trains website.  Click the More detail link, and find the WTT schedule UID (which might be something like W07729).

Go into GPSlogger – once the train is at the platform, press Start Logging. If you’ve set it up as I describe above, it’ll prompt you for a filename.  Paste the WTT schedule UID at the end of the filename – “%YEAR%MONTH%DAY-W07729” in our example.  This will result in a filename like 20180314-W07729.gpx, which uniquely identifies the service.

The logger will begin recording data points.  Keep your phone out with visibility of the sky to give it the best chance of getting accurate fixes.  When you reach your destination, press stop.

Now hit the share button (bottom left), pick the GPX file you just created and email it, along with a link to the service, to Realtime Trains for them to include in their dataset! Or do whatever else you like with the file (there are integrations for various cloud storage platforms, SFTP, FTP, etc.).
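If you want to sanity-check a track before sending it off, something like the little script below will tell you how many points you logged and how big the gaps between them are. (This is just a rough sketch of my own – the filename is the example from above, and it assumes the timestamps are plain ISO 8601, as the GPX format requires.)

```python
# gpx_check.py - rough sanity check for a GPSlogger GPX track
# usage: python3 gpx_check.py 20180314-W07729.gpx   (Python 3.8+)
import sys
import xml.etree.ElementTree as ET
from datetime import datetime

def main(path):
    root = ET.parse(path).getroot()
    # "{*}" matches any namespace, so this works whether the file is GPX 1.0 or 1.1
    points = root.findall(".//{*}trkpt")
    times = []
    for pt in points:
        t = pt.find("{*}time")
        if t is not None and t.text:
            # keep just "YYYY-MM-DDTHH:MM:SS", dropping any fractional seconds / "Z"
            times.append(datetime.strptime(t.text[:19], "%Y-%m-%dT%H:%M:%S"))
    print(f"{len(points)} track points")
    if len(times) > 1:
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        print(f"average gap between points: {sum(gaps) / len(gaps):.1f} s")
        print(f"largest gap: {max(gaps):.0f} s")

if __name__ == "__main__":
    main(sys.argv[1])
```

If the average gap is much more than a second or two, or there are big holes in the middle, it’s worth checking the logging interval and accuracy filter settings above before submitting.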

Why I don’t want a Yahoo ID any more

I was, at one time, a BT customer.  They had the best deal for FTTC broadband in my area, so it made sense to go with them.  One of the things they offered at the time was a Yahoo account – which, for some inexplicable reason, you were forced to link to any previous account with the same email address.

I created one – with my usual online moniker of naxxfish.  What a mistake that turned out to be….

Continue reading “Why I don’t want a Yahoo ID any more”

CanterburyMedia Site to Site Link

One of the unique challenges of CSR FM is its structure: it is part-funded by two separate universities, each with a presence in Canterbury (University of Kent and Christ Church University), and their respective student unions (Kent Union and CCCU).  Student members can be enrolled at either institution, and as such each institution has its own radio studio on campus – each of which has an equal chance of being put on air. The station also invites members of the community to participate.

This has presented a challenge, in that live audio needs to make its way from either studio to the transmitter, hosted at the University of Kent in Eliot College.  From the outset of the CSR project, the route between the two has always been over IP – there was no other reasonable option.  Until recently, that was realised using an IP codec, with its packets being routed over the institutions’ networks.

This is perfectly fine, and for the most part worked very well.  However, CSR has recently had a fundamental infrastructure change to its audio distribution (as part of the Student Media Center project), which has made every single audio source and destination available using Axia’s Livewire audio-over-IP protocol on the internal network.  This allows fantastic flexibility, letting studios route any source to their mixing consoles, as well as increased interoperability with our automation systems and customisable GPIO control.

However, the second studio (Studio Blue) was still connected via an IP codec link, which was not integrated into the system at all and offered limited capacity for routing over the link (only a stereo pair to and from the router).  Unfortunately the link between the two sites, over the academic networks that connect the universities, would not be suitable for transporting Livewire (for a number of very good reasons – the lack of multicast and custom QoS among them).  It was therefore necessary for us to provide a Layer 2 link between the two locations, which we had complete control over, to carry this traffic.

Continue reading “CanterburyMedia Site to Site Link”

Keynestock 2014

Once again, I got roped into helping out CSR FM – and this time KTV too – in getting their OBs from Keynestock 2014 on air.

There were some not insignificant challenges. Our normal network access at Keynes had effectively been cut off due to some configuration changes.  This was quite troublesome, as we had always previously relied on this access to get our signals back to HQ.  The outlook seemed bleak.  What we ended up doing instead, though, actually worked out rather better.

Continue reading “Keynestock 2014”

PathfinderPC HTTP JSON API

I’ve been helping with the installation of a brand new Axia Livewire network at CSR FM. The network is a bit different to the usual installation – and that deserves its own post.  We’ve been using PathfinderPC to do all of the routing control.

It’s all pretty clever stuff – but we wanted to be able to extract information from Pathfinder so that we could do handy things like find out which studio is on air, and use that information to show the right webcam on the website, or use it to add further information to our Now Playing data.  Pathfinder has a way of doing this – using Protocol Translators.  Basically, it’s a TCP listener (or client, or Serial Port) which accepts and sends commands to a remote device.  The protocol is very well documented in the manual, and is very flexible in what it lets you do.

But it’s a bit of a pain to connect to from, let’s say, PHP – which isn’t really well suited to socket operations.  Also, you’d want to cache the results somehow, lest poor Pathfinder get inundated by people looking at the webcam!

No.

So I decided to make a little thingy that sits in between Pathfinder and the potentially unruly web apps on the other end.

Thus, PFInterfaceWeb.  It exposes various data sources over HTTP – like a list of all sources, destinations and current routes.  Also – and particularly usefully – memory slots!  This lets us query our Stack Event logic and work out who’s on air, which mics are live… etc.
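To give a flavour of how a web app might use it (nothing below is the real API – the port, endpoint path and JSON shape are made up for illustration, so check the PFInterfaceWeb docs for the actual routes):

```python
# Hypothetical example only: the URL, endpoint and response format are
# assumptions for illustration, not PFInterfaceWeb's actual API.
import json
import time
import urllib.request

_cache = {}  # slot name -> (fetched_at, value)

def get_memory_slot(slot, base_url="http://pathfinder-interface:8080", max_age=5):
    """Fetch a Pathfinder memory slot via the HTTP interface, with a tiny cache
    so a page full of webcam viewers doesn't hammer Pathfinder itself."""
    now = time.time()
    if slot in _cache and now - _cache[slot][0] < max_age:
        return _cache[slot][1]
    with urllib.request.urlopen(f"{base_url}/memoryslots/{slot}") as resp:
        value = json.load(resp)
    _cache[slot] = (now, value)
    return value

if __name__ == "__main__":
    print("On air:", get_memory_slot("StudioOnAir"))
```

The point being: the web side only ever talks simple HTTP and JSON, and PFInterfaceWeb deals with the Protocol Translator socket on its behalf.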

Oh, and it can optionally send messages to a STOMP-compatible message queue server whenever a memory slot or route changes, or when custom Protocol Translator commands are sent.

At some point, I’ll make it monitor GPIO, and also Silence Detect events.

Hopefully, this will prove handy!

Betheremin 0.1

My partner, Beth, asked me if I could make her a Theremin.  So I have.  It’s called the Betheremin.

A Theremin is a musical instrument which changes pitch and/or volume as you bring your hand close to its antenna(e).  It works by your hand influencing the capacitance of a resonator circuit, changing the frequency at which it oscillates.  This difference in frequency creates a “beat” frequency against a reference oscillator, which can then be used to create an audible tone or control a voltage-controlled amplifier.
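In rough terms, for the classic heterodyne arrangement (this is the general principle rather than the exact Betheremin circuit):

\[ f_{\text{osc}} = \frac{1}{2\pi\sqrt{LC}}, \qquad f_{\text{beat}} = \lvert f_{\text{ref}} - f_{\text{osc}} \rvert \]

Your hand adds a few picofarads to C, nudging f_osc down slightly; because both oscillators sit well above the audible range, even a tiny shift produces a difference frequency that sweeps right through it.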

Continue reading “Betheremin 0.1”