About me

For my professional website, with information about my research, publications and teaching, see www.sites.google.com/site/rmlevans.

Monday, 26 November 2018

Evolutionary games and altruism in the face of disaster

My mathematical modelling has taken a detour away from soft-matter research and into the field of Evolutionary Game Theory. So, instead of studying material properties that emerge from vast collections of interacting molecules, I have been studying the strategies that emerge from vast collections of interacting agents (simulated beings that live in my computer).

It turns out that Evolutionary Game Theory (EGT) is as much fun as it sounds, even though the games in question won't win any 5-star reviews on gaming platforms, as they typically take a fraction of a second to play. In EGT, the complex competitive processes of life are modelled by simple agents playing a simple game. They reproduce and die according to the game’s outcome, and their offspring inherit imperfect copies of the parent’s strategy. 

By applying techniques similar to those for analysing order and disorder of molecules, I have discovered some general features of Darwinian evolution that can create highly altruistic behaviours in the presence of rare events such as gluts or disasters. The discovery was published last month in the paper "Pay-off scarcity causes evolution of risk-aversion and extreme altruism", R M L Evans, Scientific Reports (2018) 8:16074, at www.nature.com/articles/s41598-018-34384-w

There are a few well-established elementary games used by game-theorists. The agents in my simulations play one of those standard games, called the Ultimatum Game. Previously published studies of the game have involved humans, machines, and even capuchin monkeys. So, whatever you are, you too can play. Here's how.

You'll need two friends: one to play the game with you, and the other in a sort of "Mother Nature" role, dictating the order of play and providing a bounty; say, £1. This £1 (or $1 or ¥1 - choose your favourite currency) represents some natural resource that the players may receive if they cooperate. 

Mother Nature nominates one of the players to be the "proposer" and the other to be the "responder". The proposer offers some portion of the bounty (say, 10 pence) to the responder. If the responder accepts the offer, then the proposer keeps the remainder of the bounty (90 pence in this case). On the other hand, if the responder rejects the offer, then NEITHER player receives anything. The offer is made once only; no back-and-forth haggling is allowed.
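
For the computationally inclined, a single round is almost trivially simple to code. Here is a minimal sketch in Python; the function name, the pence-based numbers and the defaults are mine, for illustration only:

```python
# One round of the Ultimatum Game, with the bounty in pence.
# Names and defaults are illustrative, not taken from any published code.
def play_round(offer, threshold, bounty=100):
    """Return (proposer's pay-off, responder's pay-off) for one game."""
    if offer >= threshold:           # the responder accepts the offer
        return bounty - offer, offer
    return 0, 0                      # rejected: NEITHER player receives anything

# A 10p offer made to a responder who accepts anything of 5p or more:
print(play_round(offer=10, threshold=5))   # -> (90, 10)
```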

How much would you offer? How much would you accept? Since there is nothing to gain by rejecting an offer, you might expect the responder to accept any offer, however low. And if the proposer is greedy, and is confident that the responder will accept any offer, then they will offer as little as possible (1 penny). Those particular strategies are known as the "Nash equilibrium" of the game, i.e. the set of strategies adopted by rational, self-interested players, playing against other rational, self-interested players. It is named after John Nash, the Nobel Laureate in Economics whose biography, "A Beautiful Mind", was made into an Oscar-winning film.

Perhaps you feel uncomfortable with the idea of trying to fleece your fellow player by offering them the bare minimum, or with the idea of accepting a miserly offer. If so, you share that feeling with other humans and monkeys, who have been observed to offer typically between 10% and 50% of the bounty in numerous studies of the ultimatum game. This is one trivial example of our in-built altruism (a tendency to help others even at cost to oneself) that is evident in countless more serious situations. For instance, many garden birds give alarm calls when they see a predator such as a cat. This is an altruistic behaviour because, while it may save other birds, it draws the cat's attention to the calling bird.

How do you suppose such altruistic behaviour evolved? The question poses a fascinating conundrum. If a bird carries a gene that gives it a tendency to sound the alarm when it sees a cat, this bird is less likely to have offspring (owing to a fatal feline attack) than another bird without the gene. So, it seems that genes for altruism should quickly die out due to natural selection, out-competed by the descendants of individuals without the gene.

This conundrum has taxed the minds of scientists for several decades, during which a number of subtle processes have been discovered to produce altruism, within the framework of Darwinian evolution. "Kin selection" often plays a role. For example, whether by choice or by geographical necessity, a certain type of alarm-calling animal might tend to live in close proximity to its close relatives. Then many of the beneficiaries of its altruism also carry the altruistic gene. Thus the colony of altruists can out-compete neighbouring colonies whose members more often fall prey to unseen predators.

The newly published process for the evolution of altruism is different from kin selection, and from other processes previously known to promote altruistic behaviour. To explain it, let me tell you more about my simulations. The agents in my model are far simpler than birds or any other living organism, even bacteria. Each agent occupies a site on a square grid, so it has four neighbours to play with. The agents never move. Each agent has its own, unchanging strategy for playing the ultimatum game, should the need ever arise. That strategy consists of just two numbers: the offer that it will always make whenever it is chosen to play the role of proposer, and the threshold value for accepting offers whenever it acts as responder - it will always reject any offer below its personal threshold. Those two numbers comprise an agent's strategy.
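
For anyone who wants to tinker, the whole population can be held in a few arrays. Here is a sketch of the set-up, assuming numpy; the grid size and variable names are my choices, with 512 x 512 giving roughly the quarter of a million agents mentioned below:

```python
import numpy as np

# One agent per grid site; a strategy is an (offer, threshold) pair in [0, 1].
L = 512                            # 512 x 512 is about a quarter of a million agents
rng = np.random.default_rng()
offers = rng.random((L, L))        # what each agent offers when proposer
thresholds = rng.random((L, L))    # the lowest offer each agent will accept
wealth = np.zeros((L, L))          # everyone's bank account starts empty
```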

In EGT, a strategy is like a genetic code, because it gets passed on to an agent's offspring whenever it reproduces. At the start of a simulation, an agent with a randomly assigned strategy (pair of numbers) is put onto each grid site. The grid is big, so there are typically around a quarter of a million agents. Then the simulation runs as follows. The computer picks an agent at random to propose its offer, and randomly picks one of its four neighbours to respond. They are each allocated some pretend money, or no money, according to the rules of the ultimatum game, and the computer updates each of their bank accounts. Then the process is repeated, with a new randomly chosen proposer and responder, over and over again.
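
In code, that gaming step might look something like this (a sketch re-using the arrays above; the wrap-around boundary is my assumption, made purely to keep it short):

```python
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def game_step(offers, thresholds, wealth, rng, bounty=1.0):
    """One game: a random proposer plays a random one of its four neighbours."""
    L = offers.shape[0]
    i, j = rng.integers(L, size=2)              # pick a proposer at random
    di, dj = NEIGHBOURS[rng.integers(4)]        # pick a neighbour as responder
    ni, nj = (i + di) % L, (j + dj) % L         # wrap-around boundary (my assumption)
    offer = offers[i, j]
    if offer >= thresholds[ni, nj]:             # offer accepted
        wealth[i, j] += bounty - offer          # update both bank accounts
        wealth[ni, nj] += offer
    # if rejected, neither bank account changes
```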

Sometimes, instead of being chosen to make a proposal, an agent is chosen to reproduce asexually. Its offspring inherits the parent's strategy, but with small random numbers added to or subtracted from each. Since the grid is already full, where will this offspring live? One of the parent's neighbours must be put to death to make room (mwahahaha). The algorithm now chooses which of the four neighbours gets the chop, and the poorest neighbour is chosen with highest probability.
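
A sketch of that step too, re-using numpy and the NEIGHBOURS list from above. Two details are my guesses: here the poorest neighbour always dies, whereas the model only makes it the most likely victim, and the mutation size is invented:

```python
def reproduction_step(offers, thresholds, wealth, rng, mutation=0.01):
    """A random parent copies its strategy, slightly mutated, onto the site
    of a killed neighbour."""
    L = offers.shape[0]
    i, j = rng.integers(L, size=2)                    # pick a parent at random
    sites = [((i + di) % L, (j + dj) % L) for di, dj in NEIGHBOURS]
    victim = min(sites, key=lambda s: wealth[s])      # poorest neighbour gets the chop
    offers[victim] = np.clip(offers[i, j] + mutation * rng.normal(), 0.0, 1.0)
    thresholds[victim] = np.clip(thresholds[i, j] + mutation * rng.normal(), 0.0, 1.0)
    wealth[victim] = 0.0                              # the offspring starts penniless
```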

This whole process of births, deaths and games is repeated billions of times in each simulation. The process mimics Darwinian natural selection, because those who are most successful at the game (of life in the natural world, or the ultimatum game in this case) have the best chance of living long enough to reproduce, and their children inherit imperfect copies of their genome (or strategy).

"What strategies evolve?" I hear you ask. On the face of it, it seems obvious that the best strategies for getting rich should proliferate, so that the population will evolve towards the Nash equilibrium, giving away as little as possible and accepting every offer. Sure enough, the Nash equilibrium has previously been observed in a similar model with regular updates, where every agent makes its proposal once to each neighbour and then the richest ones reproduce. With randomness in the order of play (as in my simulations), it is already known that higher offers evolve: more generous than the Nash equilibrium, but below 50%. In those previous studies, the rate of births and deaths has always been similar to the rate of game play, so that most agents play the game once or twice in their lifetime.

One novelty of my work is that I set the rate of births and deaths to be extremely high, which, surprisingly, no-one has tried before. With reproduction occurring a hundred times as frequently as gaming, most agents in the simulation never get an opportunity to put their strategy into practice by actually playing a game. Typically, the family strategy is passed down from parent to child for a hundred generations between uses. Under these circumstances, I find that evolution switches from favouring greed to favouring generosity. In a population with such scarce opportunities for winning resources, the average offer rises to 75%, meaning that most proposers keep less for themselves than they give away. And their generosity is indiscriminate, because agents cannot recognise close kin or choose a recipient. While 75% is the average offer, many agents give away close to 100% of their budget. It is particularly surprising that families with such a strategy are not out-competed when they meet more selfish exploiters that accept their generosity but give little back when acting as proposer.
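
In terms of the sketches above, the whole simulation is just the two moves alternated at random, with reproduction overwhelmingly the more frequent:

```python
def run(n_steps, offers, thresholds, wealth, rng, repro_per_game=100):
    """Reproduction happens a hundred times as often as game play:
    the scarce-pay-off regime described above."""
    p_game = 1.0 / (1.0 + repro_per_game)
    for _ in range(n_steps):
        if rng.random() < p_game:
            game_step(offers, thresholds, wealth, rng)
        else:
            reproduction_step(offers, thresholds, wealth, rng)
```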


The underlying reasons for this strange and inspiring evolutionary phenomenon are subtle, and connected to the statistics of random interactions that govern Brownian motion of molecules. A theorem, derived in my paper, shows that the phenomenon is not a quirk of the ultimatum game, but stems from a more general principle: when pay-offs are very rare compared with life-expectancy, evolution no longer favours strategies that maximize an agent's average income, but instead selects strategies that minimize the risk of zero pay-off. You might think those are the same thing. But remember, a lot of random influences are at work - in the order of play, the arrangement of agents on the grid, the timing of births and deaths. So a given strategy does not reliably give a unique pay-off, but instead gives rise to a whole set of possible pay-offs with different probabilities. For instance, imagine that strategy A yields possible pay-offs with probabilities shown by the histogram in figure 1a, while strategy B leads to pay-offs with probabilities in figure 1b. On average, strategy B tends to give higher pay-offs. So evolution will favour organisms with that strategy if it is used reasonably often. But, if many generations pass between opportunities to play, then strategy A will evolve, because it has a lower risk of missing out entirely on the rare resource, as shown by the lower bar at zero pay-off.
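
If you like the numbers spelled out, here is a toy version of that comparison. The two histograms are invented for illustration (the real figure 1 in the paper differs), but they make the distinction concrete:

```python
import numpy as np

payoffs = np.array([0, 1, 2, 3, 4])
prob_A = np.array([0.10, 0.50, 0.30, 0.08, 0.02])  # rarely zero, modest wins
prob_B = np.array([0.40, 0.05, 0.10, 0.15, 0.30])  # often zero, but big wins

print("mean pay-off A =", round(payoffs @ prob_A, 2))  # 1.42
print("mean pay-off B =", round(payoffs @ prob_B, 2))  # 1.90 <- favoured by frequent play
print("P(zero) for A  =", prob_A[0])                   # 0.10 <- favoured by rare play
print("P(zero) for B  =", prob_B[0])                   # 0.40
```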

While several subtle effects are at play here, all discussed in the paper, one of the reasons for this extraordinary feature of evolution is quite straightforward. If pay-offs are very rare, then almost all individuals have exactly zero wealth. In that case, an individual with a strategy that wins even a tiny amount of wealth becomes the richest in its neighbourhood, and so out-competes its neighbours. There is thus no benefit in adopting a greedier strategy that wins more wealth.

This newly discovered evolution of risk-aversion and insensitivity to average pay-off has a levelling effect, removing the benefits of greed and promoting cooperation and altruism. So how is this relevant to the real world? Of course, we didn't play the ultimatum game in our distant evolutionary past. But, if the game represents rare life-changing events like disasters or gluts, requiring decisive action to avoid losing out, then a similar process might be responsible for the behavioural traits that evolved as a consequence. In the model, those traits, or strategies, are never actually used by most individuals for the purpose of that all-important but extraordinarily rare game. Nevertheless, those individuals carry a predisposition for altruism and pass it down the generations. Perhaps, then, the everyday altruism that we observe in many real organisms arises from a predisposition that evolved to survive much rarer and more significant events than we have seen in our lifetimes.

Wednesday, 24 December 2014

The value of idealized models

(First published on iopblog)


Every physicist has to know the joke about the dairy farmer. Seriously, if you don't know it, you can't call yourself a physicist. It really should be added to the IoP's requirements for the accreditation of physics degrees. If you have such a degree, and none of your lecturers ever told you the joke, please write a letter of complaint to your alma mater immediately. In case you find yourself in that unhappy situation, here it is:

A dairy farmer, struggling to make a profit, asked the academic staff of his local university to help improve the milk-yield of his herd. Realising it was an interdisciplinary problem, he approached a theoretical physicist, an engineer and a biologist. After due consideration, the engineer contacted him. "Good news!" she said. "My new design of milking machine will reduce wastage, increasing your yield by 5%." The farmer thanked her, but explained that nothing short of a 100% increase could save the farm from financial ruin. The biologist came up with a better plan: genetically-modified cows would produce 50% more milk. But it was still not enough.

At last the theoretical physicist called, sounding very excited. "I can help!" he said. "I've worked out how to increase your milk yield by six hundred percent."

"Fantastic!" said the farmer. "What do I have to do?"

"It's quite straightforward," explained the physicist. "You just have to consider a spherical cow in a vacuum..."

I shouldn't break the cardinal rule never to explain a joke, but... the gag works because you recognise the theoretical physicist's habit of simplifying and idealizing real-world problems. At least, I hope you recognise it, although I wonder if it's a dying art. With the availability of vast computer-processing power and fantastically detailed experimental data in many fields, there is an increasing trend to construct hugely complex and comprehensive theoretical models, and number-crunch them into submission. Peta-scale computers can accurately simulate the trajectories of vast numbers of atoms interacting in complex biological fluids, and can even model the non-equilibrium thermodynamics of the atmosphere realistically enough to fluke an accurate weather forecast occasionally.

Superficially, it might seem like a good thing if our theoretical models can match real-world data. But is it? If I succeed in making a computer spit out accurate numbers from a model that is too complex for my meagre mortal mind to disentangle, can I claim to have learnt anything about the world?

In terms of improving our understanding and ability to develop new ideas and innovations, making a computer produce the same data as an experiment has little value. Imagine I construct a computer model of an amoeba that includes the dynamics of every molecule and every electron in it. I can be confident that the output of this model will perfectly match the behaviour of the amoeba. So there is no point in wasting computer-time simulating that model; I already know what the results will be, and it will teach me precisely nothing about the amoeba.

If I want to learn how an amoeba (or anything) works, by theoretical modelling, I need to leave things out of the model. Only then will I discover whether those features were important and, if so, for what.

When I was a physics undergraduate, I remember once explaining Galileo's famous experiment to a classicist friend; the (possibly apocryphal) one where he dropped large and small stones from the leaning tower of Pisa, to demonstrate that gravity applies the same acceleration to all bodies. "But a stone falls faster than a feather," she protested. I said that was just because of the air resistance, so the demonstration would work perfectly if you could take the air away. "But you can't," she pointed out. "The theory's pointless if it doesn't apply to the real world. So Galileo was wrong." I have a strong suspicion that she was just trying to wind me up - and succeeding. The point, which she probably appreciated really, is that the idealized scenario teaches us about gravity, and we can't hope to understand the effects of gravity-plus-air before we understand gravity alone.

Similarly, if Newton had acknowledged that no object has ever found itself perfectly free of any unbalanced force, he would never have formulated his first law of motion. If Schroedinger had fretted that an electron and proton cannot be fully isolated from all external influences, he would have failed to solve the structure of the hydrogen atom and establish the fundamentals of quantum mechanics. The simplicity of the laws of nature can only be investigated by idealized models (like the one-dimensional "fluid" below), before adding the bells and whistles of more realistic scenarios.


With increasing research emphasis on throwing massive experimental and computational power at chemically complex biophysical and nanotechnological systems, and in the face of financial pressure to follow applications-led research, it would be easy to forget the importance of developing idealized models, elegant enough to deduce general principles that transcend any one specific application. So let's adiabatically raise a semi-infinite glass of (let's assume) milk, and drink to the health of the spherical cow.


Tuesday, 23 December 2014

What are exams for? On measuring ability and disability

(First published on iopblog)


Equal rights
I'm going to go out on a limb here and assert that equality of opportunity is a good thing. There, I've said it. Gone are the bad old days when jobs and privileges were determined at birth. No longer do you have to be an aristocrat or wealthy land-owner to study science; Michael Faraday broke that mould. Neither is being born with a Y-chromosome still a prerequisite for academic success. While that playing field may not be as level as it should be, at least officially-sanctioned sexism has been abolished since Rosalind Franklin's day. Encouragingly, I currently teach a cohort of undergraduate mathematicians at Leeds University with a near-equal female:male gender ratio of 52:48.

Belatedly, we have seen improvements in equality of opportunity for people with disabilities. An inspiring leap forward was made by the London 2012 Paralympics in dispelling some of our social prejudices. Meanwhile, with the introduction of the Equality Act 2010, educational establishments have set up new procedures to ensure that disability does not result in inequality.

For a simple and obvious example, consider a physics undergraduate student who uses a wheelchair. Their inability to walk has no bearing on their potential quality as a physicist. So their university has a responsibility to make sure that they are not disadvantaged during their learning and assessment. It would be unfair to arrange their exams to take place at the top of a steep flight of stairs. Their institution needs to be aware of their condition and make sure that they can access the exam.

Similarly, universities must make exams accessible to blind or partially-sighted students by printing their exam papers in Braille or large print. It is obvious that poor eyesight should not prevent a person being a good physicist. So we lecturers and examiners must make sure that our formal assessments of a physicist's abilities reflect only those abilities relevant to being a physicist, while taking appropriate account of a candidate's medical conditions.

A student with a disability can visit a university's Equality and Diversity Unit to have their needs assessed by a qualified professional, who will write a formal Assessment of Needs: a document that is circulated to their teachers, explaining what special provisions are required to prevent the student being disadvantaged by their condition. So a student with hearing difficulty might have an Assessment of Needs containing a statement such as, "Lectures should be arranged in a room with a hearing loop." It makes sense, and can be very helpful.

Learning equality
Things become a lot more complicated where a specific learning difficulty (SpLD) is involved because, whereas hearing or walking are not crucial abilities for STEM subjects (Science, Technology, Engineering, Mathematics), learning is a university's core business. The Equality Service at the University of Leeds has useful information about SpLDs. It says,

"Each SpLD is characterised by an unusual skills profile. This often leads to difficulties with academic tasks, despite having average or above average intelligence or general ability."

This makes a thought-provoking distinction between ability with academic tasks and intelligence. It presupposes particular definitions of "intelligence" and "academic". I don't know how to define either of those things, but it seems safe to say that the particular type of intelligence that is relevant to university work could be called "academic intelligence".

When learning is itself the subject of an Assessment of Needs, as it is for people with Asperger syndrome or dyscalculia, for two examples, then the assessor's own academic background becomes relevant. Assessors and staff of Equality and Diversity Units often have medical or humanities training. (I confess this is an anecdotal observation, not based on good data.) So their views of STEM-subject exams are not based on experience. Yet they and other medical professionals are required to write Assessments of Needs that carry the weight of law, and dictate some parameters of the teaching and assessment delivered by the subject-specialists.

For instance, while good writing style is deemed relevant to an English degree, physics examiners are often instructed not to mark a particular student's work on the basis of their grammar. The assumption is that ability to write good English is not part of the discipline of physics, and can be separated from it as easily as the ability to walk or to hear. The instruction assumes that the exam should only test the student's ability to calculate or recall facts, rather than a holistic ability to understand an English description, translate it into a calculation, solve the problem, interpret the solution and communicate it well. Of course, no examiner would mark a physicist's work exclusively on their writing style, so we are only talking about a handful of marks at stake.

Of necessity, when the 2010 Equality Act became law, new systems were hastily put in place, without much time for consultation. As a consequence, examiners were never asked, for instance, whether we should expect a physicist to demonstrate good communication ability. As we iron out the system's early teething troubles, we need to address these kinds of question. What exactly is an exam supposed to test? To what extent can we separate our assessment of a physics student's linguistic ability from their other skills? This is not a rhetorical question. I don't know the answer, but I do know that it is complicated and not obvious, and should be debated before the rules are set in stone.

Here is a cartoon that brings the issue into sharp focus.
Cartoon courtesy of QuickMeme www.quickmeme.com/p/3vpax2
It all hinges on what the selection is for. If this is the exam for a swimming qualification, it is entirely unsuitable. If it is a job interview for a steeplejack, then it's a fair test that discriminates appropriately between the best and the worst. It is easy to define the appropriate skills for a steeplejack. How should we define a scientist?


A matter of time
To avoid any misunderstanding, I want to make a clear distinction between teaching and examination. Any good teacher needs to have empathy for their students, and pitch their teaching at a suitable level and tempo for each individual. Being armed with the maximum possible information about the student's particular needs and abilities is always helpful, and the good teacher will make appropriate provisions whenever possible. The extent to which special provisions should be made during exams is an entirely separate question.

The most common provision in an Assessment of Needs is the stipulation that a particular student should be given extra time (typically 25% extra) to complete their exam. This raises a fundamental question. If a person can solve a particular puzzle more quickly than another person, although both might get there in the end, should their university award them a higher grade? A person's intellectual ability cannot be quantified on a single, one-dimensional scale. It is many-faceted, and speed of problem-solving is one aspect of it.

One might suggest that exams do not exist to test a person's intellectual ability, but only to test how much they have learnt during a particular course of study. That sounds like a reasonable idea but, in fact, we do not measure a student's scientific ability at the beginning and end of a science degree course, and award the qualification for self-improvement, irrespective of whether they are any good at the subject. On the contrary, the letters "BSc" are purported to be a standardized benchmark, indicating a particular absolute level of ability. That ability might have been innate when the student arrived at university, or might be the result of more-than-average hard work.

The most common method for determining ability in any academic subject is by timed exams. An exam tests what a student can achieve within a finite time interval. At the end of that interval, anyone who has not finished misses out on the marks that they were too slow to accrue. An exception is made if a medical professional has predicted, rightly or wrongly, that the student would need extra time for their exams, and has written their prediction in an Assessment of Needs. This inconsistency presents a problem. A more accurate and individually-tailored assessment of each student's needs could be made in the exam hall. We could unambiguously identify the students who need extra time, as a result of their own unique abilities and disabilities. They are the ones who run out of time!

So there would be advantages (as well as massive logistical difficulties) in having un-timed exams. They would allow each student to demonstrate their abilities, whilst removing the element of speed from the assessment of their expertise. With the best intentions, we have stumbled into the new age of equality with a flawed mixture of two systems. Candidates whose needs have not been assessed have their abilities measured by a fixed-duration exam, while others have the duration of their exam determined by their abilities.


The question that must urgently be addressed is this. Do we want exams to test what a candidate is able to achieve within a fixed time, or do we only want to know what they can achieve when given as much time as they require? Creating fair and meaningful methods of assessment requires an open debate on what we want from an exam, what we want from a degree classification, and what we want from a physicist.

Wednesday, 1 January 2014

When I sawed through my telescope

(You read it right; not "saw" but "sawed".)

Four years ago, after decades of prevarication, I finally bought the telescope that I had promised myself since childhood. Despite its modest price tag, the Chinese-built "Skywatcher" reflecting telescope is an impressively precise piece of equipment. Its mirror's surface is formed into a parabolic curve to an accuracy better than a quarter of a wavelength of light across its entire 4½ inch diameter. This optical perfection means that all of the light collected from a star is brought to an impeccable focus with no loss of brightness due to cancellation of light waves arriving out of kilter.

This incredible achievement of human civilization has shown me the wonders of the universe, channelling ancient light from distant galaxies into my pupil. For four years I have mollycoddled its pristine optical surfaces, which would be ruined if scratched by any abrasive such as metal dust.

So, it was with some trepidation that I took a hacksaw to my pride and joy.



I had been toying with this drastic measure for a couple of years, ever since realising that the 'scope could not be coaxed into focussing starlight directly onto the sensor of my camera. The idea was always swiftly dismissed, knowing that one slip of the saw would write the thing off, and that even a successful cut would still leave me with a pile of telescope pieces and no experience of assembling and perfectly aligning them.

When, at last, I had a week's holiday to take at home, while the children were still at school, I decided to take the plunge and indulge two solitary obsessions: metalwork and astronomy. This blog entry takes the form of a photo diary and how-to manual for the foolhardy amateur astronomer.


How it works

You see, a telescope's job is to concentrate each star's light into a bright pinpoint. It actually creates a miniature reconstruction of the astronomical original, blazing away just inside the eyepiece holder. The astronomer then views this bijou stellar facsimile using the eyepiece, which is nothing more than a high-quality magnifying glass, making it appear at a comfortable viewing distance. The overall effect is both to magnify (spreading out the angular separations of the stars) and to brighten the stars. It gives a brighter view than the naked eye alone by collecting all of the light that falls through the big hole at the front of the telescope, and squeezing it through the much smaller area of the human pupil that limits the naked eye's light-collecting power.

This is all summarized by the picture below, which shows some of the wavefronts of starlight being concentrated and funnelled into an observer's eye. This picture demonstrates a refracting, rather than a reflecting, telescope, because it allows the wavefronts to be shown more clearly. The objective mirror of a reflecting telescope does exactly the same job as the big objective lens at the front of the refractor, except that it also sends the light back the way it came, to be viewed through an eyepiece near the top of the telescope tube. This would make the diagram more cluttered, with incoming and outgoing light overlaid in the tube's interior.
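
A back-of-envelope calculation shows how dramatic that squeezing is. The 114 mm aperture matches my 4½ inch mirror; the 7 mm dark-adapted pupil and the two focal lengths are just typical values I have assumed:

```python
aperture_mm = 114      # the telescope's light-collecting diameter
pupil_mm = 7           # assumed dark-adapted human pupil

light_gain = (aperture_mm / pupil_mm) ** 2   # ratio of collecting areas
print(f"light-gathering gain ~ {light_gain:.0f}x")   # ~265x brighter than the naked eye

f_objective_mm = 500   # assumed objective focal length
f_eyepiece_mm = 10     # assumed eyepiece focal length
print(f"magnification = {f_objective_mm / f_eyepiece_mm:.0f}x")   # 50x
```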



Astrophotography

So much for using a telescope with your eye. After doing so for a while, I could no longer contain my flabbergastedness (is that a word?) and wanted to share it. So, how do you photograph what the telescope sees? There are basically two options:

(1) Point a camera at the eyepiece, in place of your eye. This technique goes by the grandiose name of afocal astrophotography, (nicely explained here) and it supports an industry of various brackets and adapters to hold the camera steady and align it with the eyepiece. I used this method for a while, with some success. It allows you to choose the magnification of the final picture by selecting an appropriate eyepiece. But it has some drawbacks. A small fraction of the meagre starlight is lost due to imperfect transmission at all of the glass surfaces in the eyepiece and in the camera's lens. Significantly more can be lost due to various apertures and lens-stops in the light-path.
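
The scale of those losses is easy to estimate if we assume roughly 4% of the light reflected at each uncoated air-glass surface (coated optics do considerably better; the surface count is also my guess):

```python
per_surface = 0.96   # assumed transmission of one uncoated air-glass surface
n_surfaces = 10      # e.g. a 3-element eyepiece (6 surfaces) + 2-element camera lens (4)

print(f"transmitted fraction ~ {per_surface ** n_surfaces:.2f}")   # ~0.66
```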

A more crucial drawback is that anchoring a heavy camera to the end of a long eyepiece and then swivelling it around to follow the sky can put excessive bending forces on the poor old telescope, which was not bred for heavy lifting. This was a particular problem for my light-weight entry-level Skywatcher Skyhawk which, despite its top-quality optics, holds the eyepiece in a nasty plastic focusser.

The focusser had to go. I bought a well-engineered replacement for the plastic tat. Unfortunately, it was designed to fit a bigger scope, so had to be shortened to avoid it crashing into the secondary mirror when wound fully in. This meant re-making some of its parts.

Also, the new focusser barrel was wider, requiring a bigger hole in the side of the scope. So, after disassembling the optics and protecting most of the paintwork from scratches using masking tape and duct tape, I filed away at the thin aluminium.

The focusser's base-plate was designed to fit the cylindrical surface of a larger-diameter 'scope. Re-shaping and fitting it involved making some bespoke aluminium brackets...

...and resin to fill the gaps, moulded to the right shape by pressing it against the telescope itself. Greaseproof paper prevented it sticking to the paintwork.


A better way

(2) Rather than settling for the afocal technique, most astrophotographers prefer to remove the unwieldy eyepiece altogether, with a technique known as principal focus astrophotography.

It's very simple. You remove the camera's own lens and use the telescope as its lens. This means placing the camera's light-sensitive surface (the film in an old-fashioned camera, or the CMOS or CCD sensor nowadays) at the focal plane of the telescope, where all the starlight is concentrated. The focal plane is normally located inside the eyepiece-holder. It needs to be shifted if it's going to fall on the camera's sensor, which can be done by moving the telescope's objective mirror (or lens).

Unfortunately, in the Skywatcher Skyhawk, the mirror's adjusting bolts won't move it far enough. So there was only one thing for it: to shorten the telescope by just the right amount. Too little, and the effort would be wasted as the focussed light would still not reach the sensor. Too much, and the focusser would never extend far enough to focus an eyepiece for normal operation.

Working out how much to cut off is easier said than done, since it's tricky to measure the current position of the focal plane, or the distance to the camera's sensor. These things can be calculated by focusing on a nearby object and measuring its distance from the mirror. The measurement needs to be accurate and the calculations error-free. How confident would you feel about sawing off the right amount, based on these scribblings?
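
The calculation itself is just the mirror equation, 1/u + 1/v = 1/f: focus the telescope on an object at a measured distance u from the mirror, and the image forms at a distance v, which tells you where the focal plane currently sits. A sketch, with invented numbers:

```python
def image_distance(u_mm, f_mm):
    """Mirror-to-image distance v, from the thin-mirror equation 1/u + 1/v = 1/f."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

f = 500.0                    # assumed focal length (mm)
u = 10_000.0                 # a test object 10 m from the mirror
v = image_distance(u, f)     # ~526 mm
print(f"image sits {v - f:.0f} mm beyond where starlight would focus")   # ~26 mm
```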

A little saucepan came to the rescue.

By lucky chance, this saucepan was a perfect press-fit inside the telescope tube. So, by turning it into a temporary mount for the mirror...

...I could slide the mirror into a new position, to try it out before doing irrevocable violence to my treasured toy. The resulting affront to nature, henceforth known as the Saucescope, confirmed my calculation that a little over an inch needed to be removed.

So, with everything carefully marked out...
...and heart in mouth, I took saw in hand and gingerly cut into the blue-anodized aluminium. There was no margin for error, because the cut end would not be hidden, but would simply butt up against the mirror housing, with any scratches or wobbles in full view.

With the nerve-wracking part of the job done, all that remained was to clean off every speck of glass-damaging metal dust, rebuild and collimate the scope.


The camera now fits snugly in the new focal plane.


On the subsequent clear nights, I was relieved to find that the work had all been worthwhile and I had not ruined my beautiful telescope. What could have been a time-consuming disaster, given a moment's lapse in concentration, turned out to be the most satisfying project I've undertaken recently. Here are some of the results, photographed with the new set-up:

The Dumbbell Nebula

The Pleiades

The Orion Nebula

Monday, 18 November 2013

Good Old-Fashioned Technology

(First published on iopblog)

Remember the good old days when micro-electronics were traditional and wholesome? OK, perhaps my sense of nostalgia is a mite over-developed, but I have noticed an unfortunate trend in recent technological developments. While it is nice to have affordable tech with almost magical power, the slickness of the designers' art has a tendency to conceal the real world from us. I care about that because, as a physicist and an educator, my raison d'être is to reveal the real world.

I don't want to over-state the case because, actually, I love new gizmos that allow me to browse my music collection through my TV and to photograph the Orion Nebula in mere moments using the unbelievably sensitive ISO25600 setting on my camera. I wouldn't want to halt the inevitable march of progress even if I could. But I would like to take this opportunity to mark the passing of some dearly loved and enlightening technology that has gone the way of all flesh.


Magnetic entertainment
Take, for instance, the cathode ray tube (CRT). Until about ten years ago, all televisions lit up our living rooms by smashing high-energy electrons into phosphorescent pixels inside a glass vacuum tube. The elementary particles were launched from an electron gun at the back of the TV, in which they leapt from a hot, negatively charged electrode and raced towards a positive electrode, narrowly missed it, and hit the screen instead. A negative electrode is called a cathode, hence the electrons were dubbed "cathode rays" before J. J. Thomson discovered their true identity. The name stuck.

The CRT was a real-life particle accelerator residing in every home and, in retrospect, it was an absolute gift to all physics teachers. It's harder to teach about electrons if students have to take their existence on trust, or observe them only inside some arcane laboratory glassware.

Credit: Marcin Białek
Anyone with a magnet and a sense of mischief could discover how their traditional telly steered its beam of charged particles magnetically. Before consigning my own idiot-lantern to the tip last month, I took these pictures that show a magnet exerting a Lorentz force on the electrons, making them swerve and hit the wrong pixel. (It's slightly risky to do this if you want to keep your TV, as it could become permanently magnetized!)


Analogue ghosts
The switch-over from analogue to digital TV signals has robbed us of another neat physics demo. "Ghosting" was an annoying artefact that appeared on the screen if you used the wrong type of aerial cable. The people in TV-land each seemed to be stalked by a spectral doppelganger standing a few inches to their right.
Credit: Cablefax.com
Back in the analogue age, it was a familiar sight in any students' TV lounge, and I used to discuss it in my "Vibrations and Waves" lectures as a nice example of impedance-mismatching. You see, the electromagnetic oscillations, picked up by the TV aerial, travel as waves down a co-ax cable to the television. Like any wave-carrying medium, this cable is characterised by a wave-impedance that indicates how much power is needed to push a given size of wave along it. If the wave meets a joint between two cables with different impedances, only part of its power continues through the second cable to the telly. Some fraction of the signal is reflected back along the first cable, where it bounces off the aerial and sets out again towards the TV, slightly delayed. So the same signal arrives twice at the TV, resulting in a double image.
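
The arithmetic behind the ghost is pleasingly simple. At a joint between cables of impedance Z1 and Z2, the amplitude reflection coefficient is (Z2 - Z1)/(Z2 + Z1). The 50-ohm-into-75-ohm splice below is my invented example of a plausible domestic blunder:

```python
def reflected_power_fraction(z1, z2):
    """Fraction of incident power reflected at an impedance mismatch."""
    gamma = (z2 - z1) / (z2 + z1)   # amplitude reflection coefficient
    return gamma ** 2

print(f"{reflected_power_fraction(75.0, 50.0):.2f}")   # 0.04: a 4% echo, a faint ghost
```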

These days, the electromagnetic waves still perform the same physics as ever, echoing off mismatched cables. But the digital encoding of audio-visual information lets clever circuitry reconstruct a pristine picture from a degraded signal. So we can enjoy our high-brow entertainment without the distraction of aberrant natural phenomena.

Of course, physics lecturers could preface their discussion of wave-impedance by explaining what TV looked like in the olden days, but the relevance of the example is lost. Still, I'm in no position to complain about this development since, like any consumer of electronics, given the choice, I'll opt for the TV with the clearest picture. 
 
Discotheque versus MP3otheque
My record player is another old friend that accompanied the CRT to the dump during this month's domestic clear-out (so I told my wife; it's really hidden in my workshop. Shhh!). Having at last finished converting all my old vinyl into the vastly more convenient MP3 format, before "throwing it out", I used the turntable to teach my young children about sound. It was great fun and, with the music safely backed-up, I could relax about the youngsters scratching the discs.

The wonderful thing about a record is that it's very obviously a frozen sound wave. Look closely at its surface, and the wiggling shape of the sound is there right before your eyes. Peer at the stylus as it follows the groove, and you can see how it shakes in time with the air.

To demonstrate even more directly that sound is nothing more than shaking air, we did away with the intervention of the amplifier by creating a primitive gramophone. It was easily done by rolling a sheet of paper into a cone, and sticking a sharp pin through it near the apex. Gently resting the pin's point on the record as it turns on the turntable makes the paper cone sing with a scratchy human voice. If you still own a turntable, I recommend trying out this magic, but only on discs that you don't mind scratching. With a bit of practice, the demo can be simplified even more, using only a flat sheet of paper and resting one corner of it in the record's groove.

True beauty is flawed
Many new devices distance us from physical phenomena, for the valid reason that they are just much more complicated, and often much smaller, than their forebears. I have little chance of showing my children how MP3 files create sound because, unlike a gramophone stylus, all of the processing is complex and rather abstract.

Other devices shield us from reality only because it is fashionable to do so. For example, when you switch on a fairly old radio - even one with automatic tuning - you hear a few seconds of white or coloured noise as the tuner seeks the right frequency. It's a nice sound, evocative of the electromagnetic physics of the carrier wave. Newer models refuse to engage the speaker until their furtive tuning is completed, and the sterile perfection of the user's experience can be guaranteed. This trend is not confined to radios; it's the reason why a lot of new gadgets are slow to switch on. They are designed not to betray the imperfect physical nature of their workings. That is a shame, because imperfections are important in helping us to understand the world.

Biologists learn how complicated organisms work by observing them going wrong in various ways. One standard technique that geneticists use, to discover the purpose of a gene, is to deliberately break it. They breed organisms in which a particular gene is switched off or made to malfunction. This sheds light on the workings of the genome. In humans, where ethics prevents tampering for the purposes of research, doctors glean the most knowledge by observing imperfections and accidents that arise randomly. Oliver Sacks's book, "The Man Who Mistook His Wife for a Hat", gives many fascinating examples of brain function that could not have been understood without observing the results of some unfortunate mishaps.

By hiding imperfections from us, the designers of new gadgets are doing us a disservice. Back in the days when cars were basic and unreliable, every motorist knew how an engine worked. Now that they are flawlessly controlled by microprocessors, we have lost those skills and knowledge. As the technologists get better at polishing their performance, our opportunities for insight diminish.

I am glad that some new devices buck the trend and flaunt their mechanisms for all to see. Among the products not afraid to bare all are the TAG Heuer belt-driven wrist-watch, Dyson's celebrated vacuum cleaners, and most motorbikes. Let's encourage manufacturers to do more of this sort of thing.

Meanwhile, I am making the most of the old gadgets while they are still with us. It's a race against time to show the children how to build a crystal set before the analogue radio signal is switched off. And I have lost count of the number of steam engines and beam engines that we have visited together. Perhaps you can share some other examples of illuminating venerable technology that I should introduce them to before it's too late.

Although I feel misty-eyed at the demise of old machines with all their educational potential, I feel no kinship for the Luddites or for King Canute. As simple contraptions disappear, we educators will just have to raise our game. Sic transit gloria mundi.