Many test processes and test tools state that you must establish the expected result for every test. It is, at best, risk-ignorant not to take the notion of expected results with a kilo of salt. #YMMV
If you could establish the result of your solution in a deterministic, algorithmic and consistent way for a non-trivial problem, you could solve the halting problem. But I doubt your requirements are trivial… or even always right. Even the best business analyst or subject matter expert may be wrong. Your best oracle may fail too. Or you may end up getting only what you measure.
When working with validation in seemingly very controlled environments, changes and rework happen a lot, as every new finding needs a document trail back to square one… Stuff happens. Validation is not testing; it is looking to show that the program did work as requested at some point in time. It is a race towards the lowest common denominator. IT suppliers can do better than just looking for “as designed”.
Still, the Cynefin framework illustrates that there are repeatable and known projects, and in those contexts you should probably focus on checking as many of the simple binary questions as possible in a tool-supported way, and keep the open questions for your testing.
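To make that split concrete, here is a minimal sketch (all names hypothetical, not from any real project) of what the two halves look like: simple binary questions are cheap to check in a tool-supported way, while the open questions stay on a list for a human tester to explore.

```python
def total_price(quantity, unit_price):
    # Hypothetical function under test.
    return quantity * unit_price

# Simple, repeatable, binary questions: known input, one expected answer.
# These are the checks worth handing to a tool.
assert total_price(0, 10.0) == 0.0
assert total_price(3, 2.5) == 7.5

# Open questions stay with the tester, not the tool:
open_questions = [
    "Does a negative quantity make sense for this business?",
    "What happens at the currency's rounding limits?",
]
for question in open_questions:
    print("Explore:", question)
```

The point is not the arithmetic; it is that the binary checks can run unattended on every build, while the open questions need someone to wonder, wander and judge.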
Speaking of open ends: every time I see an explicit expected result, I tend to add the following disclaimer, inspired by Michael Bolton (sung to the tune of Nothing Else Matters):
And nothing odd happens … that I know of … on my machine, in this situation 
And odd is something that may harm my user, business or program result
But I’d rather skip that test step and instead work on the test description and guidelines to mention:
- Think! Use your head; use what you know.
- You are the expert.
- If it smells fishy, there is something fishy going on.
- Wonder, Wander, Galumph if you may.
- Analyse, consider… if you get an idea, try it!
- Learn, Explore, and by all means test.
And now to something completely different:
3: I’ve heard that somewhere…