I've decided to make clear, kind of once and for all, what science is, how it works, why it works so well, and most of all, why people who don't understand science attack it for things it doesn't say or do. I consider this a public service. I also think, once and for all time, I can point people HERE and say, "Look, I answered your question!"
When I was in seminary, I took a seminar on science and theology. The professor, the late great Roy Morrison, and I had a conversation after one class session in which he made the point that many scientists repeat information gained from radioactive dating without considering that it is rooted in assumptions that are not falsifiable. He was also quick to say that this doesn't mean using radiometric dating is wrong-headed.
He was right.
I thought I'd begin this series of posts by talking about the way science operates, including the way it incorporates non-falsifiable assumptions as its starting points. Science is a wonderful tool that, over the past several centuries, has evolved into a remarkable way of answering questions about the world in which we live. While less developed, the social sciences, such as psychology, sociology, political science, and anthropology, offer students the opportunity for a keener understanding of human life.
Science works so well because, at its best, it begins with the assumption that everything we know, including all our scientific understanding, is wrong. It may serve us well so far, or as far as it goes; if some new piece of information arises, however, that isn't accounted for by our prevailing understandings, those understandings themselves have to change to accommodate the newly discovered facts. In the process, how we understand all sorts of other things also changes.
In science, the ways we understand things are referred to as "theories". Scientific theories are provisional statements about how disparate yet related phenomena play out. Among the many ways human beings have used to understand their world, scientific theories make two related claims. First, like many other ways of understanding, whether alchemy or magic or even theology, science claims to make predictions about future events based upon the way its theories describe how particular phenomena occur. Second, and unlike those others, science (at its best, anyway) says that if its predictions prove inaccurate, it will change the theory. In practice, theories are rarely altered or tossed away at the first sign of contrary evidence; it usually takes the accumulated weight of such evidence, over a longish period of time, to convince most scientists to stop using theories that have been consistently falsified. Still, the number of scientific theories that have been discarded over the centuries precisely because they were falsified is quite large. The twentieth century alone saw several basic theoretical shifts in physics, shifts with implications for chemistry, biology, and other branches of science as well.
Theories are no less rooted in unfalsifiable assumptions than anything else we humans do. To return to the example at the beginning of this post, consider the discovery of radioactive decay. After radioactivity was discovered at the end of the 19th century, many prominent physicists worked hard to identify radioactive materials. In the process of their studies, recognizing that "radiation" means a given element is releasing particles, they soon discovered that, for a given mass of a radioactive element, the release of the particles that create the various types of radiation occurs at a statistically regular rate, summed up in the element's half-life. From this, it became a matter of somewhat complicated mathematics to take the measured radioactivity of a particular sample and determine not only when it formed, but when it would, for all practical purposes, reach the end of its radioactive decay, transmuted into another, stable element.
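To make that arithmetic concrete, here is a minimal sketch in Python. The function name and the numbers are my own illustration rather than anything from the original work; it simply solves the standard exponential-decay law for elapsed time, and it quietly relies on the constant decay rate discussed below.

```python
import math

def age_from_fraction(fraction_remaining: float, half_life_years: float) -> float:
    """Elapsed time implied by the fraction of a radioactive isotope still present.

    Solves N(t) = N0 * (1/2)**(t / half_life) for t, which assumes the decay
    rate has been constant the whole time -- the very assumption this post
    goes on to examine.
    """
    return half_life_years * math.log2(1.0 / fraction_remaining)

# Illustrative only: a sample retaining one quarter of its original isotope
# has been decaying for two half-lives, whatever the element.
print(age_from_fraction(0.25, half_life_years=5_730))  # -> 11460.0
```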
We now use this understanding in all sorts of ways. Because there are traces of radioactive carbon in all carbon-based life, we can use the half-life of carbon-14 to date organic remains - bone, charcoal, wood - going back tens of thousands of years, while longer-lived isotopes, such as those of uranium and potassium, do the same work for rocks and fossils millions or even billions of years old. Radioisotopic labeling is now a pretty common method for studying all sorts of things, in both living and dead organic matter. The results we receive from all these uses of our understanding of radioactive decay are, and have been, consistent across the board, and are just one reason the estimate of the age of the earth - around four and a half billion years, give or take roughly fifty million - is not just a guess, but a pretty confident assertion.
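As a toy example of how such a date is computed (the measurement here is invented for illustration, not a real one): carbon-14 has a half-life of about 5,730 years, so a sample retaining one eighth of its original carbon-14 has been dead for roughly three half-lives.

```python
import math

C14_HALF_LIFE_YEARS = 5_730      # accepted half-life of carbon-14
fraction_remaining = 0.125       # hypothetical measurement: one eighth remains

# Three half-lives have elapsed: 3 * 5,730 = 17,190 years.
age = C14_HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)
print(f"Estimated age: {age:,.0f} years")
```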
There's only one problem with this whole theory. We have no way of knowing whether the rate of radioactive decay, which we humans observe as a statistically regular event, has always been this constant and regular. There is no way to show, definitively, that it even occurred prior to its discovery. There may well have been a change in the rate of radioactive decay over the multiple billions of years of the history of the universe. The problem with these perfectly reasonable alternatives is simple: there is no way to investigate them. We cannot go back in time, say, to the formation of planet Earth to check the decay rates of the various radioactive elements and see whether they differ from current rates. It may well be the case that this happened. We cannot find out, however, whether it is true.
So, scientists make the assumption that the decay rates of radioactive elements have been constant since the beginning of the Universe. Setting such protests aside and working under that assumption has proved remarkably fruitful, yielding all sorts of results, many of which couldn't have been imagined when radioactive decay was first discovered and codified.
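A tiny, purely hypothetical calculation shows how much work that assumption does. Suppose, as an invented scenario and not a claim about physics, that the decay constant of carbon-14 had been ten percent higher for part of a sample's history; an analyst who assumed the present-day rate would quietly arrive at the wrong age.

```python
import math

HALF_LIFE = 5_730.0                       # carbon-14 half-life, years
LAMBDA_NOW = math.log(2) / HALF_LIFE      # present-day decay constant

# Invented scenario: the decay constant was 10% higher for the first
# 10,000 years of a sample that is really 20,000 years old.
fraction = math.exp(-1.10 * LAMBDA_NOW * 10_000) * math.exp(-LAMBDA_NOW * 10_000)

# Assuming a constant, present-day rate, the inferred age comes out high.
inferred_age = -math.log(fraction) / LAMBDA_NOW
print(f"true age: 20,000 yr; inferred age: {inferred_age:,.0f} yr")  # ~21,000 yr
```

The point is not that the rates did change, only that their constancy is assumed rather than measured.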
This somewhat mundane, and I hope easily understood, example of the way untestable assumptions work in science makes clear that science, for all the things it does remarkably well, works within human limits. Whether it's the study of radioactive decay, or weather phenomena, or the activity of human societies in different times and places, science offers us the remarkable ability to understand all sorts of things; yet it always does so with certain things - call them givens, perhaps, or axioms - that are taken to be true not because they have been proved, but because they cannot be tested at all.
As we move forward through this series, I hope to demonstrate what a remarkable tool the scientific method has shown itself to be, despite its many limitations.