The increasing development of new technologies related to the health and food industries involves both policy makers and society as a whole in decisions on the safety and marketing regulations of new biotechnology products. However, the regulation of biotechnology is characterized by incomplete expert knowledge, and consequently limited information is communicated to the public, which ultimately judges a technology. Given that regulation is politically driven, and assuming that decision makers are accountable to the public, a reluctant and uninformed public might end up driving risk policy decisions. In such an environment, policy makers might search for ad hoc rules to cope with the lack of sufficient information. Among these rules, a controversial and widespread rule adopted by risk-averse decision makers is the so-called "precautionary principle."
Although multiple interpretations of the precautionary principle exist (Treich 2001), the mainstream interpretation would suggest that when there is a "lack of scientific certainty" regulation should be restrictive until science provides definitive proof of the risk (or lack of risk) to society, often referred to as "erring on the side of caution." In the extreme, societies that avoid all risks, the so-called zero-risk scenario, will hinder innovation because no technology is absolutely safe. To resolve the trade-off between innovation and risk, risk analysts have largely advocated the development of cost-effectiveness analysis to examine whether the social benefits outweigh the risks of specific technologies. However, even when formal risk assessment is undertaken, regulatory controversies remain as decision makers tackle the normative questions of "how safe is safe enough?" and "what is the acceptable level of risk?"
Arguably, the concept of "acceptable safety" standards in democratic societies calls for several forms of public participation such as using public opinion in the process of risk decision making (Audetat 2000; Costa-Font and Mossialos 2005a). Indeed, if one accepts the hypothesis that some, if not a large part, of the risks incurred with new technologies are broadly speaking "social risks," which denote a broader conception of risks than purely expert-quantifiable risks, then the public might have an active role in guiding risk regulation. Furthermore, if decision making with regard to new technologies is intended to modify the social environment in which individuals live, then the consequences of those risks might be essentially unpredictable through objective risk assessments. Although one might encounter Luddite attitudes (a rejection of all new technologies) when examining public perceptions, there is evidence that these are rather infrequent, affecting less than 2 percent of the European Union public (Costa-Font and Mossialos 2003b).
Scientific disagreements and conflicts of interest in science communication often influence regulation in such a way that risk policy is regarded as endogenous. That is, not only do public attitudes toward certain technologies influence risk regulation, but risk regulation influences public attitudes as well (Margolis 1996). Moreover, the public's perceptions partially determine how regulators formulate and apply certain policies in democratic societies. However, we could plausibly speculate that the extent to which regulation is endogenous will depend on the extent to which governments are responsive to public demands. Therefore, the regulation of new technologies and public opinion are likely to be interdependent. Furthermore, the endogenous regulation model is limited by the public's understanding of scientific issues. That is, if the public is ill informed about the possible consequences of developing certain technologies, public opinion might express a behavioral response that is not necessarily grounded in reasoned choice.
The rejection of some technologies can be understood as a demand for more and understandable information (Costa-Font and Mossialos 2005c). Thus, an important distinction should be made between known versus unknown risks. Not only is more information available for known risks, but the individual consequences of using the technology are more predictable for known risks. However, the social consequences of known risks might still remain uncertain even when no individual risks have been identified. When risks are unknown, risk regulation becomes complex, and risk analysts often search for simple rules to guide decision making. A typical rule is to "err on the side of caution" as a method of guiding the regulation of new technologies. To date this rule has been the cornerstone for health and environmental policy developments in the European Union.
Although diverse specifications of the precautionary principle are in place (Bodansky 1991; Cameron and Abouchar 1991), they differ in the degree to which uncertainty should be reduced and in whether uncertainty should lead to a definitive or a temporary ban of a technology.
The European Commission partially clarified its understanding of the precautionary principle in a communication on February 2, 2000, although some ambiguity remains. In particular, the Commission established that the precautionary principle "can under no circumstances be used to justify the adoption of arbitrary decisions ... is no...