A world where shared moral norms and the effective governance of techno-sciences safeguard and foster the fundamental rights, dignity, and freedom to flourish of all humans.
The SLS Initiative is dedicated to ensuring that the legal system harnesses the power of techno-sciences, including artificial intelligence, and mitigates their risks to citizens. We do so through broadly inclusive consultations, research, advocacy, and the design and implementation of real-world projects aimed at assessing the impact, benefits, and risks of delegating to machines decisions that affect humans.
“Human beings are equal in rights, dignity, and the freedom to flourish”
Principles for the Governance of AI
Principle 1: AI shall not impair, and, where possible, shall advance the equality in rights, dignity, and freedom to flourish of all humans. Accordingly, the purpose of governing artificial intelligence is to develop policy frameworks, voluntary codes of practice, practical guidelines, national and international regulations, and ethical norms that safeguard and promote the equality in rights, dignity, and freedom to flourish of all humans.
Principle 2: AI shall be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators.
Principle 3: Manufacturers and operators of AI shall be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators.
Principle 4: AI’s effectiveness shall be measurable in the real-world applications for which it is intended. Measurability means the ability of both expert users and ordinary citizens to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives.
Principle 5: Operators of AI systems shall have appropriate competencies. When our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise.
Principle 6: The norms of delegation of decisions to AI systems shall be codified through thoughtful, inclusive dialogue with civil society. In most instances, the codification of the acceptable uses of AI remains the domain of the technical elite, with legislators, courts, and governments struggling to catch up to realities on the ground, while ordinary citizens remain mostly excluded. Principle 6 is intended to ensure that standards and codes of practice result from more inclusive dialogue and are grounded in truly broad consensus.
Why a Framework, why now?
“To what extent should societies delegate to machines decisions that affect people?” Humanity’s answer to this question will have profound consequences for how we experience every aspect of life, including our very conception of what it means to be human. The stakes are particularly high in the extended legal domain, on which our safety, our rights and duties, our dignity, and the prosperity of our societies so centrally rely. To reap AI’s promise while mitigating its risks, the adoption of a framework for its governance is indispensable, before realities on the ground render any such governance attempts futile.
Design Components & Constraints
In the interest of effectiveness, comprehensiveness, and durability, The Future Society’s framework adheres to the following design constraints.
The framework includes four components, sequenced from the general to the specific:
The law impacts every aspect of how artificial intelligence and humans interact: from the ‘click-to-accept’ terms of everyday apps, to autonomous vehicles and weapons, to the administration of the welfare state. At the same time, AI technologies will have a transformative impact on all aspects of the law: how it is made, how it is practiced, how it is enforced. Our primary area of focus is the extended legal system itself, including:
Some of the challenges that will arise may seem distant, but many of them have perceptible early manifestations:
In the longer term
How will the legal system incorporate AI in the provision of legal services? Will there be a time when AI “lawyers” are, for example, a complement to or replacement for overworked public defenders? How will society respond to the possibility of AI “judges” that can demonstrably produce faster, more equitable, and more uniform decisions than human judges can? Is it intrinsically improper to have human disputes adjudicated by AI, even if the system-wide outcomes are more equitable than humans can deliver? Would it be appropriate, or even ethically prescribed, to entrust the practice of legal tasks solely to “AI lawyers” if they are proven to be generally superior to humans?
AI is already used to replicate and automate the work of lawyers in certain fact-finding tasks, in particular electronic discovery. A ground-breaking study under the aegis of the US NIST demonstrated that automated assessments of relevancy and responsiveness conducted by sophisticated AI systems could, in the hands of scientifically trained experts, achieve greater accuracy and speed than human attorneys. This already raises the somewhat futuristic question outlined above: if AI systems are demonstrably superior to human attorneys at certain aspects of legal work (e.g., responsiveness assessments), what are the ethical and professional implications for the practice of law?