Looking at the results of modern paranormal investigations, one tends to hang one's head. I have spent a decade or more in the field and have become somewhat jaded with the methods used to interpret data, as well as the manner in which we approach its collection. I am not saying that investigations are pointless; what I am saying is that there is a better way. Many people are working on that better way, with varying results. I may have a novel way of looking at the data we recover.
First, let me say that I am not a mathematician; in fact, the reason I went into the social sciences in the first place is that complex equations are difficult for me. However, concepts sometimes stick with me when I hear them, and this one did. The idea of using mathematical expressions to correlate the data we collect is starting to sound interesting. I am not talking about statistical analysis, which I am familiar with. For instance, I have found that in 34% of the investigations I have personally conducted, there is a statistical relation between transient EMF readings of 2 or higher and possible EVPs. In fact, 47.2% of the time in these locations, we find the two occurring in the same location (i.e., a single room) within the same voice session. These statistics are drawn from locations that are presumably haunted and that show a high level of activity based on reports by residents or other witnesses. However, the application I am proposing is a different approach.
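To make the kind of co-occurrence statistic described above concrete, here is a minimal sketch of how such percentages might be tallied from session logs. The record structure and all data values are invented for illustration; real figures would come from actual investigation records.

```python
# Hypothetical session records: each notes whether a transient EMF reading
# of 2 or higher was logged, whether a possible EVP was captured, and
# whether the two occurred in the same room. Data is invented.
sessions = [
    {"emf_spike": True,  "evp": True,  "same_room": True},
    {"emf_spike": True,  "evp": False, "same_room": False},
    {"emf_spike": False, "evp": True,  "same_room": False},
    {"emf_spike": True,  "evp": True,  "same_room": False},
]

# Sessions where both phenomena were logged at all.
both = [s for s in sessions if s["emf_spike"] and s["evp"]]
pct_both = 100 * len(both) / len(sessions)

# Of those, how often they shared a single room.
pct_same_room = 100 * sum(s["same_room"] for s in both) / len(both)

print(f"EMF spike and possible EVP together: {pct_both:.1f}% of sessions")
print(f"Of those, within the same room: {pct_same_room:.1f}%")
```

With real logs, the same two ratios would reproduce the kind of 34% / 47.2% figures quoted above.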
Looking at our data scientifically, one sees randomness in the collected material. One data set may show one set of results, which leads to certain presumptions, and another set will show something very different. Any outside party that wishes to tear down the validity of the research can point to the randomness in the data sets in an effort to invalidate the whole. The question is: is it really random, or is something missing from the data that makes the results harder to quantify, leaving any conclusions open to attack by picking apart the data and finding the missing pieces? Once we know the missing pieces, or rather where they may lie, we can start to tune our data collection to address them.
I suggest this only from a neophyte's understanding of the math. However, the concept of applying mathematical equations to the data gathered during investigations should be further explored. What follows is just one example of where these methods could be employed.
Applying the concept of information entropy to the data could provide insight into the missing pieces, or, more to the point, show which variables are more certain to have an impact on the outcome. Information entropy is a measure of the uncertainty surrounding a given variable: from the outside, the variable appears random. With the number of possibly random variables in any given experiment, it becomes apparent that you need to find how much uncertainty there is in each variable. The lower the uncertainty, the higher the probability that the variable has a deeper impact than the others.
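The idea above can be sketched in a few lines. This is a minimal illustration of Shannon entropy computed from the observed frequencies of a variable's values; the two example variables and their values are invented for illustration, not real investigation data.

```python
import math
from collections import Counter

def entropy(observations):
    """Shannon entropy, in bits, of a list of observed values."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A variable that almost always takes the same value has low entropy
# (little uncertainty); one spread evenly across values has high entropy.
temperature_drop = ["no", "no", "no", "yes", "no", "no", "no", "no"]
emf_spike        = ["yes", "no", "yes", "no", "yes", "no", "yes", "no"]

print(entropy(temperature_drop))  # low: the value is mostly "no"
print(entropy(emf_spike))         # high: a 50/50 split is 1.0 bit
```

Under the reasoning above, the low-entropy variable (here, the temperature drop) would be the better candidate for having a deeper impact on the outcome, since its behavior is the more certain of the two.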
If my understanding of the concept is right, the implications for the future of the field may be great. I think that further exploration of cross-discipline applications is warranted, and I hope this concept may further that search.