Observational astronomy is plagued with selection effects that must be taken into account when interpreting data from astronomical surveys. Because of the physical limitations of observing time and instrument sensitivity, datasets are rarely complete. However, determining specifically what is missing from any sample is not always straightforward. For example, there are always more faint objects (such as galaxies) than bright ones in any brightness-limited sample, but faint objects may not be of the same kind as bright ones. Assuming they are can lead to mischaracterizing the population of objects near the boundary of what can be detected. Similarly, starting with nearby objects that can be well observed and assuming that objects much farther away (and sampled from a younger universe) are of the same kind can lead us astray. Demographic models of galaxy populations can be used as inputs to observing system simulations to create ``mock'' catalogues that can be used to characterize and account for multiple, interacting selection effects. The use of simulations for this purpose is common practice in astronomy, and it blurs the line between observations and simulations: the observational data cannot be interpreted independently of the simulations. We describe this methodology and argue that astrophysicists have developed effective ways to establish the reliability of simulation-dependent observational programs. That reliability depends on how well the physical and demographic properties of the simulated population can be constrained through independent observations. We also identify a new challenge raised by the use of simulations, which we call the ``problem of uncomputed alternatives.'' Sometimes the simulations themselves create unintended selection effects, when the limits of what can be simulated lead astronomers to consider only a restricted space of alternative proposals.
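The mock-catalogue methodology sketched above can be illustrated with a toy example. The following sketch is not drawn from any particular survey: the power-law luminosity function, the distance range, and the flux limit are all illustrative assumptions. It draws a simulated galaxy population from a demographic model, applies a simple observing-system model (inverse-square dimming plus a hard detection threshold), and compares the detected sample with the full population to quantify the resulting selection effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demographic model: luminosities drawn from a power law, so faint
# galaxies vastly outnumber bright ones (all units are arbitrary).
n = 100_000
luminosity = rng.pareto(1.5, n) + 1.0
distance = rng.uniform(10.0, 100.0, n)

# Toy observing-system model: received flux falls off as 1/d^2, and the
# instrument only registers sources above a fixed flux limit.
flux = luminosity / distance**2
flux_limit = 2e-4
detected = flux > flux_limit

# The mock catalogue makes the selection effect explicit: the mean
# luminosity of detected sources exceeds that of the full population,
# because distant faint sources fall below the threshold (a
# Malmquist-like bias).
print(f"detected fraction: {detected.mean():.2f}")
print(f"mean L (all):      {luminosity.mean():.2f}")
print(f"mean L (detected): {luminosity[detected].mean():.2f}")
```

In a realistic analysis the same logic runs in reverse: given the detected sample, astronomers ask which input demographic models, once passed through the simulated observing system, reproduce the observations.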