Model development procedure

On this page, you’ll find an abstract description of a step-by-step procedure for developing a model that represents the truth. These steps help you give your model a purpose, gather relevant information, construct a robust model, get rid of your bias, and thoroughly test your model.

It’s a puzzle!
Consider your model a puzzle. Facts are pieces of that puzzle. Interpretations are connected facts/pieces of the puzzle. Sometimes pieces of a puzzle fit together nicely even though the fit is not correct.

Step 0: tentatively define the purpose of your model
At this point, you can make clear to yourself why you are building a model. If you don’t do that, you may wander through the information, not knowing what to do with it. However, at this point your goal should not be too firm, as that could increase bias. Your definitive purpose can be established later.

Step 1: gather information
Now gather information about the subject you want to build a model for. This information should be as pure as possible. It is best to find pure observations, such as images, measurements and raw data.

Step 2: remove the interpretations
The standard explanations and interpretations should be ignored in this starting phase to reduce the chance of bias. In most information sources, facts and interpretations are mixed together, so for each piece of information you should ask whether it could really be witnessed or not. An example: deep time can never be witnessed, so it should always be ignored in this phase.

Step 3: make a list of all possible interpretations
In most cases, observations can be interpreted in multiple ways. For example: if the observation is that there is a rock lying on the ground, ask questions like: How did it get there? Was it thrown, transported by water or air, or did it fall from the sky? Just make a simple list of all possibilities you can think of. It does not matter if there are absurd interpretations in it. It does matter if you omit interpretations that you consider absurd: that is bias. Remember: the truth can be incredible. Also consider options that do not fit in your world view. Consider this an out-of-the-box brainstorm session.

Step 4: define expected properties of the subject for each interpretation
Each possibility may have given the object different properties. For example: if a rock fell from the sky, it probably has damage that matches the structure of the ground. This way you can make predictions for each alternative. In an ideal situation, you can define clear, unique properties for each possible interpretation. In reality, that will often not be the case.

Step 5: verify the predictions/expectations
Once you have formulated the expected properties of the object for each interpretation, you can search for more detailed information to see whether those properties are actually present. This way you can eliminate some of the interpretations.

Step 6: sort by likeliness
When you have done this, you can sort the interpretations by likeliness. It is important to keep all interpretations that have not been eliminated intact, even if they are very unlikely. The sorting is only meant for prioritizing.
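As a minimal sketch of how steps 3 to 6 can be kept track of, the Python snippet below lists interpretations for the rock example, eliminates only those whose expected properties have been ruled out by evidence, and sorts the survivors by likeliness without discarding any of them. The interpretations, expected properties and likeliness values are made up for illustration.

```python
# A minimal bookkeeping sketch for steps 3-6, using the rock example.
# The interpretations, expected properties and likeliness values are
# made up for illustration; they are not claims about real rocks.

# Step 3: list every interpretation you can think of, absurd ones included.
interpretations = {
    "thrown":          {"expected": {"impact mark on one side"},         "likeliness": 0.4},
    "water transport": {"expected": {"rounded edges", "sediment layer"}, "likeliness": 0.3},
    "air transport":   {"expected": {"small size", "abrasion"},          "likeliness": 0.1},
    "fell from sky":   {"expected": {"fusion crust", "ground damage"},   "likeliness": 0.05},
}

# Step 5: properties actually observed, and properties shown to be absent.
observed = {"rounded edges", "sediment layer"}
ruled_out = {"fusion crust"}

# Eliminate only interpretations whose expectations are contradicted by evidence.
surviving = {
    name: data
    for name, data in interpretations.items()
    if not (data["expected"] & ruled_out)
}

# Step 6: sort by likeliness, but keep every non-eliminated interpretation intact.
ranked = sorted(surviving.items(), key=lambda item: item[1]["likeliness"], reverse=True)

for name, data in ranked:
    print(f"{name}: likeliness {data['likeliness']}, expects {sorted(data['expected'])}")
```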

Step 7: gather interpreted information
At this point, you can start reading what others have written about the subject you are investigating. In a lot of cases, you will find out that there are even more possible interpretations than you thought. Just add them to your list. In a lot of other cases, you will find out that the writer(s) draw(s) the same conclusions as you. In rare cases, you have found a more likely interpretation than the writer. In all cases you will learn from it, so it is worth doing. It may seem time consuming, but realize that your research is quite shallow at this point. It often does not take (much) more time than just reading everything about the subject. It often even helps you to understand the subject better, so you can read much faster.

Step 8: figure out the perspective of the other investigators (model builders)
When you have done this, you can try to figure out why the writer has chosen a specific interpretation. A lot of scientists don’t keep multiple interpretations intact, so they choose the one they consider most likely. There is a chance that they have not interpreted it in a valid way. Also, most scientists are trained in a specific world view. They can easily base their conclusion on a framework that does not represent the truth. Never just assume experts are right. They can be biased by complicated circular reasoning and hidden assumptions. Note: that is not a conspiracy theory, it’s just psychology.

Step 9: take all perspectives seriously
In a lot of cases, contradicting conclusions have been drawn about a subject. Challenge yourself. Why would you be right? Why would the other be right? Step into every perspective in order to gather all views of the subject. You will learn from it. If you omit one perspective, you are biased. Don’t fight it, just think it through.

Step 10: definitively determine the purpose of your model
You must make very clear why your model exists. Putting this in step 10 may seem late. Actually you should be doing this throughout steps 1 to 9. However, if you establish the purpose too early, it may cause bias: you already have a goal in mind. If that goal is to show that your hypothesis represents the truth, while it actually doesn’t, you are going to experience cognitive dissonance. It may cause your model to be false before you have even started.

In step 0, you tentatively established the purpose, and that goal may have changed due to progressive insight as you were going. That’s why I suggest definitively determining the purpose at step 10.

Step 11: define the limits of what your model can be used for
You try to make a useful model, but there is often a limit to what it can validly be applied to. Make those limits very clear.

For example, if your model is to calculate gravity, you may have a formula involving “gravity is an acceleration of 9.81 m/s^2”. This is effective for a lot of gravity calculations on earth. However, for other planets the value is different, and for more precise calculations on earth you should keep in mind that this acceleration decreases as you move farther away from the earth. So the limit of this model is that you can only use it for gravity calculations near the earth, with limited precision. Also prescribe the precision your model needs to have in order to be useful, and prescribe how much data you need in order to reach that precision.
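As a small illustration of such limits, here is a sketch in Python that compares the constant 9.81 m/s^2 model with Newton’s law of gravitation at a few altitudes. The physical constants are standard values; the altitudes are arbitrary examples.

```python
# A small sketch of the "g = 9.81 m/s^2" model and its limits (step 11).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the earth, kg
R_EARTH = 6.371e6      # mean radius of the earth, m

def g_constant(altitude_m: float) -> float:
    """The simple model: gravity is a constant 9.81 m/s^2, regardless of altitude."""
    return 9.81

def g_newton(altitude_m: float) -> float:
    """A more general model: acceleration decreases with distance from the earth's centre."""
    r = R_EARTH + altitude_m
    return G * M_EARTH / (r * r)

# Sea level, airliner cruising altitude, ISS orbit, geostationary orbit.
for altitude in (0, 10_000, 400_000, 36_000_000):
    simple, newton = g_constant(altitude), g_newton(altitude)
    error = abs(simple - newton) / newton * 100
    print(f"altitude {altitude:>10} m: constant {simple:.2f}, Newton {newton:.2f}, error {error:.1f}%")
```

Near the surface the error stays below a few percent, so the constant model is fine there; at satellite altitudes it is useless. That is exactly the kind of limit this step asks you to write down.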

Step 12: test if your model generates contradictions
Get out of your comfort zone. You probably don’t like the contradictions your model generates. If you want to make yourself believe that you are smarter than others, you can avoid contradictions and wrap them in assumptions, or just deny them. Cognitive dissonance reduction is the easiest escape. However, if you are interested in the truth, then you must accept the contradictions that occur. The contradictions are very valuable pieces of information that can make you aware of your own perspective and cognitive dissonance. Remember, there is only one truth. If there are lots of contradicting models, at most one can represent the truth. Therefore the chance that your model is the correct one is very small.

Step 13: remove the assumptions from each contradicting model
Do not choose which assumption is right and which is wrong; just remove all assumptions that are involved in a contradiction, so you can reconsider them thoroughly later.

Step 14: build a larger model by combining submodels
Consider submodels to be large parts of a puzzle. Then try to connect multiple submodels together as elegantly as possible. After that, you can retry some of the assumptions you detached in step 13, and see which ones still fit and which don’t. Remember: likeliness is irrelevant. All levels of “likely” are plausible.

Step 15: check for compatibility
Check if the combined submodels are compatible with each other. If new contradictions occur, the models are incompatible. You should dig deeper to find more assumptions and interpretations, and detach them. Also detach interpretations you do not want to detach.

It could be that the models are equivalent in what they can do, but are just not the same internally. In that case they could be interchangeable. A very basic example: “it is 6 because it is 2 + 2 + 2” or “it is 6 because it is 3 * 2”. Those are equivalent. This could be much more complex in some cases.

Step 16: check for circular reasoning
If you replaced an assumption with a conclusion, it becomes circular reasoning if that conclusion somehow supports itself. Don’t consider it true in that case. It remains plausible: uncertain.
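One way to check this mechanically is to record, for every conclusion, which statements it rests on, and then test whether a conclusion can be reached from itself by following those links. A minimal sketch, with made-up claim names:

```python
# A small circular-reasoning check (step 16). Each statement maps to the
# statements it is supported by; the claim names are made up. A cycle in
# this support graph means a conclusion (indirectly) supports itself.

supports = {
    "conclusion A":  ["assumption B"],
    "assumption B":  ["observation C", "conclusion A"],   # closes the loop back to A
    "observation C": [],
    "conclusion D":  ["observation C"],
}

def reaches(graph, start, target, seen=None):
    """True if `target` can be reached from `start` by following support links."""
    seen = set() if seen is None else seen
    for basis in graph.get(start, []):
        if basis == target:
            return True
        if basis not in seen:
            seen.add(basis)
            if reaches(graph, basis, target, seen):
                return True
    return False

for claim in supports:
    if reaches(supports, claim, claim):
        print(f"circular: {claim!r} ultimately supports itself")
```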

Step 17: verify your model
Check if your model does what you want it to. Does it match the purpose you have determined?

Step 18: validate your model
Is what your model does the right thing to do? Predict how your model would behave on a new set of data, then give your model that new set of data and compare the prediction with your model’s output.
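One way to carry out this step is to let the model predict the outcomes for a new set of data and compare those predictions with the measurements actually obtained. A minimal sketch, with made-up numbers and a made-up tolerance:

```python
import statistics

# A minimal validation sketch (step 18): fit a model on old data, then
# compare its predictions with freshly gathered data. All values are
# made up for illustration.

old_x = [1.0, 2.0, 3.0, 4.0, 5.0]
old_y = [2.1, 3.9, 6.2, 8.0, 9.9]            # roughly y = 2x

slope, intercept = statistics.linear_regression(old_x, old_y)

new_x = [6.0, 7.0, 8.0]                      # new inputs
new_y = [12.3, 13.8, 16.1]                   # freshly gathered observations

tolerance = 0.5                              # how far off a prediction may be and still count
for x, y in zip(new_x, new_y):
    predicted = slope * x + intercept
    ok = abs(predicted - y) <= tolerance
    print(f"x={x}: predicted {predicted:.2f}, observed {y}, {'ok' if ok else 'MISS'}")
```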

Step 19: check for falsifiability
If your model is not falsifiable, you have a bad model. Your model is not falsifiable when it is not able to make predictions. When your model says “whatever we find, this model can explain it”, it actually does not explain anything. Change it so that it can be tested. Evolution is an example of an unfalsifiable model. Every object fits in it, whether evolution is true or not. That looks like good modeling, but it is not.

Step 20: set up an uncertainty bandwidth
Most people only investigate the most likely scenario. That does not help much. Instead, set up a boundary. For every assumption, put in some extreme values and iterate until your model fails. Try factor 100: your model works. Try 1000: your model fails. Try 500: your model works. Try 750: your model fails… etc. Then try 10: your model works. Try 1: your model fails, etc… You will end up with a rough boundary, like: “if we find that the factor is between 5 and 623, my model works, otherwise it fails.”
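The iteration described above is essentially a bisection. A minimal sketch, where model_works is a made-up stand-in for whatever check you run against your model (here it is chosen so that the search recovers a boundary of roughly 5 to 623):

```python
# A small sketch of the bandwidth search in step 20. `model_works` stands in
# for whatever test you run; this made-up rule says the model holds for
# factors between 5 and 623, so the search should recover that bandwidth.

def model_works(factor: float) -> bool:
    return 5.0 <= factor <= 623.0

def find_edge(inside: float, outside: float, checks: int = 20) -> float:
    """Bisect between a factor where the model works and one where it fails."""
    for _ in range(checks):
        midpoint = (inside + outside) / 2
        if model_works(midpoint):
            inside = midpoint
        else:
            outside = midpoint
    return (inside + outside) / 2

upper = find_edge(inside=100.0, outside=1000.0)   # 100 works, 1000 fails
lower = find_edge(inside=100.0, outside=1.0)      # 100 works, 1 fails
print(f"model works roughly for factors between {lower:.1f} and {upper:.1f}")
```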

Step 21: build a possibility tree
Don’t be tempted to eliminate unlikely parts of your model. Keep each uncertainty in your model as an uncertainty. It may look inefficient, but eliminating the truth would make all your effort pointless. Therefore, having a lot of uncertainties in your model is infinitely more efficient than accidentally eliminating the truth. You will get a tree of possibilities, with each branch having an uncertainty bandwidth (a small sketch of such a tree follows step 22 below).

Step 22: prune the possibility tree
If you really feel a need to eliminate some branches, start investigating some of them. Instead of investigating likely branches, it is best to start with the least likely. Those should be easy to eliminate. If they aren’t, then they are not as unlikely as you thought. Remember, absence of evidence is not evidence of absence. In case of absent evidence, generate evidence (by experiment or calculation). And think it through thoroughly. Common sense alone is not enough! If your first calculation shows it is indeed unlikely, you should calculate more until it shows “false” instead of “unlikely”. You NEED facts, experiments or calculations in order to eliminate a branch of your possibility tree.
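A minimal sketch of such a tree (steps 21 and 22): every branch carries an uncertainty bandwidth, and pruning only removes branches that have been explicitly marked as disproven by facts, experiments or calculations. The branch names, bandwidths and flags are made up for illustration.

```python
from dataclasses import dataclass, field

# A minimal possibility tree for steps 21 and 22. A branch is only pruned
# when evidence has marked it as disproven; "unlikely" is never a reason.

@dataclass
class Branch:
    name: str
    bandwidth: tuple[float, float]          # uncertainty bandwidth for this branch
    disproven: bool = False                 # set only when evidence rules it out
    children: list["Branch"] = field(default_factory=list)

def prune(branch: Branch) -> "Branch | None":
    """Return the branch with disproven sub-branches removed, or None if it is disproven itself."""
    if branch.disproven:
        return None
    branch.children = [kept for kept in (prune(child) for child in branch.children) if kept is not None]
    return branch

tree = Branch("rock on the ground", (0.0, 1.0), children=[
    Branch("water transport", (5.0, 623.0)),
    Branch("air transport", (0.1, 2.0)),
    Branch("fell from the sky", (0.0, 0.5), disproven=True),   # ruled out by a measured fact
])

prune(tree)
print([child.name for child in tree.children])   # the disproven branch is gone, unlikely ones remain
```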

Step 23: fight your model!
You don’t want your model to fail, but you must put it to the severest test you can think of. Challenge it with the facts you don’t want to see. Gathering facts that fit in is easy, but pointless. It will only increase your bias and tunnel vision. Remember, it is not your pride that matters, but the truth. If your model fails, then you have work to do. Gather facts that you cannot explain and start with step 1 for these facts.

Step 24: ask for comments
Let others take a look at your model. They will see things that you didn’t. This helps you to improve your model.

This way, your model gets stronger and stronger every time. You can eventually answer questions that nobody thought of, and you can easily uncover hidden assumptions in other models. If you are afraid of facts, then find out why you are afraid of them. Get rid of the fear and accept every fact. But be skeptical about every suggested fact. They may be based on assumptions, even very small ones. Challenge every statement. If one fact does not fit in a model, the model is incorrect. Remember: today there is no correct model available in the world. They are all false. They may be false in very basic assumptions that nobody dares to question. Still, false models can be valuable, as long as you dare to consider and question them.

