Experience shows that the best technology is not always the technology that gets adopted. In the security arena, no technology faces a harder challenge, or carries higher consequences for society when it fails, than voting technology. The best voting technology is defined by accuracy, security, and integrity, but in practice it is trust that prescribes which technology we use: voting technology choices are driven by what people are politically comfortable with, or by the initiatives of administrators trying out technology that someone has built for them. This paper analyses how this kind of "trust" plays out: its influencers and its consequences, such as a negative trust-legitimacy-participation-incentive loop. The paper then formalises the problems that developers of improved systems face. The analysis is illustrated with examples, drawing in particular on issues faced by a recent experiment that ran multiple voting systems in parallel.