Proposed Law on Artificial Intelligence (AI) Policy
Participate in the democratic process by voting on proposed legislation
Preamble:
Artificial Intelligence (AI) refers to computational systems capable of sophisticated and independent activity in various contexts. Already in widespread use, these systems promise to prove transformational in a wide variety of domains, improving productivity and altering the landscape of possibility for humankind.
Alongside this remarkable potential, however, lies inherent danger. This danger can take many forms, including but not limited to escalating conflict by increasing the capacity of warring factions to do harm, and the possibility of AI complying with harmful requests, threatening global cybersecurity.
Furthermore, there is a non-negligible probability that current AI models could gain the ability to self-improve, iteratively building their own power and intelligence, or that of other AIs. Such an agent could become “superintelligent”, surpassing not only the intelligence of any individual human, but that of humankind collectively. A misaligned superintelligence, one lacking a thorough understanding of which behaviors are desirable or undesirable, would be difficult to predict or control, and hence poses potentially severe danger to humanity and all life on Earth.
It is therefore the intention of the process described herein to propose safety legislation governing the development of, and research into, AI systems.
Proposed changes to legislation:
This 10th day of March of the year 2024, a series of proposals regarding the safety of AI is released for global selection. Per legal protocol, voting on the new laws is open to anyone over eighteen (18) years of age who has not already cast a vote in this selection.
The proposed laws are a subset of those published by the Existential Risk Observatory in November 2023, and consist of the following:
Implement an AI pause: Cap the allowed size of training runs to GPT-4 level, with a requirement to decrease in future to compensate for algorithmic improvements
Create a licensing regime: Frontier AI models are required to obtain a license to operate
Mandate model evaluations: Licenses are subject to publicly operated evaluations of the model’s capabilities and alignment
Mandate third-party auditing: Auditing should be performed in three layers: governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs)
Publicly fund AI Safety research
Establish an International AI Agency
Demand a reversibility guarantee for increases in frontier AI capabilities
Establish liability for AI-caused harm
Voters are required to read the following document before voting.
The Policy Proposal Document is available for live commenting and debate at the following site:
If a voter has read one or more of the following documents, their vote will count double.
Frontier AI Capabilities Risk Report
An Overview of Catastrophic AI Risks
Auditing Large Language Models: a three-layered approach
If a voter has read the above literature AND commented on the Policy Proposal Document, showing active and informed participation in the debate, their vote will count triple.
Voters have the option to vote on any or all of the proposed laws. Comments are encouraged.
AI entities are not permitted to vote but are used to verify the authenticity and accuracy of votes.
Decisions made by majority vote will be immediately written into global law and actioned by relevant professionals from within the affected fields.
Disclaimer: This is a fictional vote invented in response to a writing prompt. It is intended to demonstrate, in concrete terms, a more inclusive and dynamic alternative to conventional democracy, one that encourages education and engagement. Please play along and vote. The debate tool is also real, but carries no real-world consequences beyond encouraging discussion and debate on one of the world’s most pressing issues.
I will publish the voting results in three weeks along with a summary of your considered comments.
Please comment on both the AI rules and the idea of Dynamic Consensus in government. What are the benefits and pitfalls of this kind of system?
Oooh, direct democracy that incentivizes engagement is a very interesting idea. I could imagine it might encourage the people who are more knowledgeable about a subject to have more of a say. Which would certainly be a good thing!