Question: How do you begin to start to make sense of the observational data - what are the key skills for observing that are vital in your field of work?
I’m actually a theoretical physicist. I don’t carry out observations/experiments or analyze raw data myself; what I do is work out the implications of observations/experiments for theoretical models, and vice versa. So I can answer your first question, but not really the second one.
For what I do, the key things are: having a reasonable understanding of what the observations are actually measuring, and how; knowing what simplifying assumptions (which may or may not be valid) have been made in analyzing the data; and understanding the potential sources of error, in particular systematic errors.
Comments
Mod - Shane commented on:
Hi Anne.
Thanks for your answer. Unfortunately for me it just raises more questions…
What observations do you work with? Have you some examples?
Harrison commented on:
I’m an experimental physicist, so my observations are the results of experiments. But I think the key skills of observers such as astronomers and experimental particle physicists are essentially the ability to apply computational methods to reduce the data to quantities that have physical meaning. That requires computing, mathematical, and statistical skills; a good understanding of the scientific instrument, in order to correct the observations for instrumental artifacts; and of course a sufficient understanding of the applicable theory, if such exists, to serve as a guide to which quantities are likely to be most relevant. Today, in practice, we work in collaborations, so the required skills can be distributed over many scientists.
Susan commented on:
“Observing” in the modern sense (for both astronomers and particle physicists) usually means “using a detector that is designed to provide the data that I need”. The lifetime of detectors is generally long, so not everyone who works in the field actually designs or constructs detectors – in fact, that’s a bit of a minority pursuit. The key skills you need as an experimental physicist/observational astronomer are not so much in the actual observing itself, which may well be automatic (T2K is going to take data whether I analyse it or not), but in the decisions that you make about how and what you observe and what you do with the data. For an astronomer, the process might go like this:
1. A theoretical astrophysicist has made a prediction about some property of, say, very massive stars. As an observer, I wish to test this prediction.
2. What observations would best test the prediction? Which objects should I observe? At what wavelength(s) should I observe them? How much data will I need? (This separates into “how faint are the expected features, so how long an exposure time will I need?” and “how many objects of this type will I need to observe to really test the prediction?”)
3. Are suitable observations already available? (Astronomers are very good about sharing data. Someone may have observed a suitable object for an entirely different project, and I may be able to use their archive data to at least take a first look at whether the prediction is plausible.)
4. Assuming that I need more data, what instrument(s) are best suited to the task? (Suitability includes wavelength range, sensitivity, precision, and – for ground-based instruments – position: I am not going to be able to use a southern hemisphere telescope if the object I want to observe is Polaris.)
5. OK. I now apply for telescope time on my instrument(s) of choice. As telescope time is generally oversubscribed, I will need to make a good case for why my observation is important and why this instrument is the right one for the job.
6. Great! I got my telescope time and now have data. Because I had to specify how much data I needed in my application for telescope time, and because I had to decide what I was going to do before deciding which telescope to ask for time on, I already know how I am going to analyse these data – for example, I may be planning to look at the detailed structure of a particular spectral line or lines. So this stage is reasonably well specified in advance.
7. I have analysed my data and so far the prediction seems to be supported/rejected (delete as applicable). However, I now need to check more objects/investigate in more detail/try to find out why the data didn’t match the prediction. Return to step 2. Rinse and repeat.
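The exposure-time question in step 2 can be made concrete. In the photon-noise-limited regime the signal-to-noise ratio grows as the square root of (flux × time), so the required time scales with the square of the target SNR and inversely with the source brightness. A minimal sketch, with entirely hypothetical numbers:

```python
def exposure_time(t_ref_s, snr_ref, snr_target, flux_ratio):
    """Scale a reference exposure to reach a target signal-to-noise ratio.

    In the photon-noise-limited regime, SNR grows as sqrt(flux * time),
    so the required time scales as (snr_target / snr_ref)**2 and
    inversely with the target's flux relative to the reference star.
    """
    return t_ref_s * (snr_target / snr_ref) ** 2 / flux_ratio

# Hypothetical numbers: a 600 s exposure gave SNR = 10 on a reference
# star; we want SNR = 30 on a star one quarter as bright.
print(exposure_time(600.0, 10.0, 30.0, 0.25))  # 21600.0 s, i.e. 6 hours
```

Estimates like this (with realistic throughput and background terms added) are exactly what goes into the telescope-time application in step 5.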
Particle physicists have a somewhat different approach, in that a typical particle physics detector takes data all the time (well, whenever the beam is on, if it’s accelerator based), and the same data are used for many different purposes. So, for a particle physicist, the procedure is more along the lines of:
1. I am a member of collaboration X, which sets boundaries on the sort of analysis I can do. (For example, Harrison couldn’t do a neutrino oscillation analysis on his experiment, and I can’t study the properties of the top quark on mine.) What theoretical question, accessible to my experiment, do I want to address?
2. What is the experimental signature of the type of interaction I want to study? (In other words, what observational properties distinguish, say, events in which top quarks were produced from events in which they weren’t?)
3. Using simulations, I will develop a set of selection criteria which will isolate my desired signal sample from among all the other stuff that my experiment has recorded. (I must do this with simulations, because I must not allow the real data to bias my selection, or my results will be unreliable.) Normally, I will not have written the simulations: that’s the job of a theorist!
4. OK, I have a selection, which according to simulations will produce a sample that is 65% pure and has an efficiency of 45% (in other words, 65% of my sample are the type of event I really want, and 35% are similar-looking events that aren’t really from my process; of all the events I’m interested in, my selection will pick up 45% of them – the rest are too similar to big backgrounds to be salvageable). I run my selection on the data.
5. Great! My sample contains more events than could plausibly be attributed to background alone, so I actually do have a signal. I now analyse the data to obtain the properties of the signal (e.g. mass of the produced particle). The most difficult part of my analysis is likely to be assessing the various sources of experimental error, so that I can quote my final result with appropriate experimental errors. Particle physicists have to have a good understanding of statistical theory!
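The purity and efficiency figures in step 4 can be turned into expected event counts, and the "more events than background alone" claim in step 5 checked with a rough significance estimate. A back-of-envelope sketch; the 1000 true signal events are a hypothetical number, and real analyses use full likelihood methods:

```python
import math

def expected_counts(n_true_signal, efficiency, purity):
    """Translate a selection's efficiency and purity into expected counts.

    efficiency: fraction of true signal events the selection keeps.
    purity:     fraction of the selected sample that is real signal.
    """
    selected_signal = n_true_signal * efficiency
    total_selected = selected_signal / purity       # implied sample size
    background = total_selected - selected_signal   # the impure remainder
    return total_selected, selected_signal, background

# Figures from the text: 45% efficiency, 65% purity, and (hypothetically)
# 1000 true signal events produced in the data set.
total, sig, bkg = expected_counts(1000, 0.45, 0.65)

# Crude significance estimate: signal over sqrt(background). This is the
# back-of-envelope version of "more events than background alone".
print(total, sig, bkg, sig / math.sqrt(bkg))
```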
I hope this gives you some idea of what’s involved. Like much of modern science, it’s complicated!
Anne commented on:
Hi Shane,
I use the results from lots of different observations/experiments, for different problems.
I can give you some examples from the field of dark matter searches. We have various good ideas for what the dark matter could be, for instance (to pick two of the ones I’ve worked on): Primordial Black Holes (PBHs) and Weakly Interacting Massive Particles (WIMPs). Primordial Black Holes are black holes formed in the early Universe (and not from the collapse of stars), with masses ranging from that of a mountain to hundreds of times that of the Sun. WIMPs are exactly what their name says: particles which are massive by particle standards (a few to thousands of times as heavy as a proton) and which interact only weakly with each other and with normal matter (protons, neutrons, …). They could have been produced in the early Universe, in the right amount to be the dark matter.
The different dark matter candidates have very different properties. So we need to use different methods to try and detect them. These methods are a mixture of astronomical observations and particle physics experiments.
Planetary and Solar mass PBHs can be detected by gravitational microlensing. When they pass between us and a star, they bend space by a small amount, and as a result the star is temporarily brightened. There are collaborations that have monitored lots of stars in the Large and Small Magellanic Clouds (and more recently also in our nearest large neighbour galaxy, Andromeda) looking for these microlensing events. They use telescopes to collect light from lots of stars repeatedly over timescales of hours, days or years and look for stars which have suddenly become very bright and then gone back to how they were before. I then use their results to put limits on how many PBHs there can be in our Milky Way galaxy and how heavy they can be. Specifically, I use the number of events they’ve observed and how long each one lasted (the heavier the PBH, the longer the brightening lasts).
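The limit-setting step can be sketched with simple Poisson statistics: if no events are seen where a PBH-dominated halo would predict many, the PBH fraction of the halo is bounded. The survey sensitivity below (40 expected events) is a hypothetical number, and real analyses also fold in the observed event durations:

```python
def halo_fraction_limit(n_expected_full_halo, poisson_upper=3.0):
    """Upper limit on the PBH halo fraction f, zero-observed-events case.

    If a survey would expect n_expected_full_halo events for a halo made
    entirely of PBHs of some mass, and observes none, the 95% CL Poisson
    upper limit on the expected count is about 3.0 (= -ln 0.05), so
    f < 3.0 / n_expected_full_halo (capped at 1: f is a fraction).
    """
    return min(1.0, poisson_upper / n_expected_full_halo)

# Hypothetical: a survey sensitive enough to expect 40 events if PBHs
# of this mass made up all the dark matter, and seeing none.
print(halo_fraction_limit(40.0))  # 0.075, i.e. PBHs are < 7.5% of the halo
```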
One way of looking for WIMPs is direct detection. Because they’re weakly interacting, they mostly pass straight through things. But a tiny fraction of them will bump into the nuclei at the centre of atoms and make them recoil, like a pool ball (if the dark matter is WIMPs, this happens roughly a dozen times in your body every year, but don’t worry, it won’t do any harm!). Direct detection experiments are built specifically to look for the energy deposited in the detector by the recoiling nuclei. Depending on what the experiment is made of (common examples are germanium, xenon and argon), the energy takes different forms: ionisation (ejecting an electron from an atom), scintillation (exciting an electron to a higher energy level, with a photon then being emitted when it goes back down again) and phonons (a rise in the temperature of a detector cooled to just above absolute zero). The experimentalists detect these events and carefully measure their energy. [there’s a lot more I could write about this step, but I’ve already written a lot, and I need to dash off to the gym now…] I then compare the number of events and their energies with what we expect from theoretical models of the particle physics properties of the WIMP (how much they weigh and how they interact with atoms) and of the astrophysical distribution of WIMPs within the Milky Way galaxy.
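The "pool ball" picture sets the energy scale these detectors must reach: for elastic scattering, the maximum recoil energy follows from two-body collision kinematics. A sketch with a hypothetical benchmark of a 100 GeV WIMP hitting a xenon nucleus at a typical halo speed:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def max_recoil_keV(m_chi_GeV, m_nucleus_GeV, v_km_s):
    """Maximum nuclear recoil energy for elastic WIMP-nucleus scattering.

    Head-on elastic ("pool ball") kinematics gives
        E_R_max = 2 * mu**2 * v**2 / m_N,
    with mu the WIMP-nucleus reduced mass and v in units of c.
    Masses in GeV; result converted to keV.
    """
    beta = v_km_s / C_KM_S
    mu = m_chi_GeV * m_nucleus_GeV / (m_chi_GeV + m_nucleus_GeV)
    return 2.0 * mu**2 * beta**2 / m_nucleus_GeV * 1e6  # GeV -> keV

# Hypothetical benchmark: 100 GeV WIMP, xenon nucleus (~122 GeV),
# typical halo speed of 220 km/s. Gives a few tens of keV, which is
# why these detectors need such low energy thresholds.
print(max_recoil_keV(100.0, 122.0, 220.0))
```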
Mod - Shane commented on:
Thank you for the detailed explanation. It really is fascinating. But to follow up:
Gravitational Microlensing. “A star is temporarily brightened.” How long? How much brighter?
Anne commented on:
@Shane. The duration and brightness of microlensing events both vary depending on multiple things.
As I mentioned in my previous answer, the heavier the black hole (or other compact object), the longer the event. For Solar mass compact objects in the Milky Way’s halo microlensing stars in the Magellanic Clouds, typical durations are hundreds of days. For planetary and asteroid mass compact objects microlensing stars in Andromeda, typical durations are minutes or hours.
The closer the compact object gets to the line of sight between us and the star, the brighter the event is.
Slide 6 of this talk I gave at a workshop in Brussels on primordial black holes goes into more detail: http://www.nottingham.ac.uk/~ppzag/Green-microlensing-mod.pdf
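Both scalings can be sketched quantitatively: the event duration is roughly the time for the lens to cross its own Einstein radius (which grows as the square root of the lens mass), and the brightening follows the standard point-source magnification formula. The numbers below are hypothetical round values, with the lens-source geometry lumped into a single effective distance:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

def einstein_crossing_time_days(m_lens_msun, d_eff_kpc, v_km_s):
    """Rough event duration: time for the lens to cross its Einstein radius.

    R_E ~ sqrt(4 G M D_eff / c^2), so the duration scales as sqrt(M):
    heavier lenses give longer events, as described above. This is a
    sketch with the lens/source geometry lumped into d_eff_kpc.
    """
    r_e = math.sqrt(4.0 * G * m_lens_msun * M_SUN * d_eff_kpc * KPC) / C
    return r_e / (v_km_s * 1e3) / 86400.0

def magnification(u):
    """Point-source magnification vs impact parameter u (in Einstein radii).

    Closer alignment (smaller u) means a brighter event; u = 1 gives ~1.34x,
    and the magnification grows without bound as u -> 0.
    """
    return (u**2 + 2.0) / (u * math.sqrt(u**2 + 4.0))

# Hypothetical: a Solar mass lens, effective distance ~10 kpc, 200 km/s.
# Gives a duration of order 100 days, consistent with the text above.
print(einstein_crossing_time_days(1.0, 10.0, 200.0))
print(magnification(0.1))  # close alignment: roughly a 10x brightening
```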