Regulators should also ensure that AI products and services compete on a level playing field with non-AI products and services, including human-provided services. Sectoral regulations on liability, professional licensing, and professional ethics should apply equally, as appropriate, to both AI and non-AI solutions. For instance, hiring decisions and credit decisions must be subject to the same rules against discrimination and bias, no matter whether they are made by humans or AI. Likewise, financial advice should be subject to similar kinds of regulation regardless of whether it is provided by humans or AI. This may reduce AI use in some sectors, while simultaneously avoiding the degradation of service standards through the use of AI. . . .
There are cases where human workers are penalized for discounting the analysis of an AI solution in their workplace, creating a lopsided liability burden. For instance, nurses in some US hospitals may disregard an algorithmic assessment of a patient's diagnosis with doctor approval, but doing so carries high risk, because they are penalized for overriding algorithms that turn out to be right. This may lead nurses to err on the side of caution and follow AI solutions even when they know those solutions are wrong in a given instance. . . . While these are private penalties upheld by hospital administrations, there has been at least one case where a nurse was held responsible by an arbitrator for a patient's death because she did not override an algorithm. The arbitrator held, however, that she had been pressured by hospital policy to follow the algorithm, and thus her employer was directed to pay damages to the patient's family.
To avoid situations where humans defer to AI against their better judgment, liability frameworks should be neutral to ensure that technology follows sectoral regulation and not the other way round. AI technology should not be applied in circumstances in which it does not meet regulatory standards.
Vipra, J., and A. Korinek (2023). "Market Concentration Implications of Foundation Models: The Invisible Hand of ChatGPT." Center on Regulation and Markets Working Paper #9. Brookings Institution, Washington, DC. Accessed online at https://www.brookings.edu/tags/center-on-regulation-and-markets-working-papers/
I agree with the above passage, at least as far as the authors go. The problem is that they do not go far enough. Namely, the regulation of AI foundation models must also go beyond the regulatory standards of the regulator of record. This most certainly holds when AI models are integrated into the sectors and infrastructures the authors focus on.
The key to understanding where the authors stop short is their own example of the nurse. The nurse is in fact a reliability professional, one of whose functions is to correct for regulatory error or lapses in standards. Irrespective of problems with any specific AI algorithm, no one can or should expect regulatory standards to be all-covering or comprehensive at any point in time in fields as dynamic as healthcare.
In practical terms, this means there is not just the risk of regulatory non-compliance by real-time professionals, such as nurses; there is also the risk of compliance with defective regulations. Either way, the time it takes to move from the discovery of an error to its correction underscores that regulatory functions are dispersed well beyond those of the regulator of record.