@beatty
Created March 30, 2023 22:32

Negotiation

Round 0

Current proposal:

Proposal: Moratorium on AGI Research Beyond GPT-4

  • We, the undersigned, call for an immediate moratorium on research into artificial general intelligence (AGI) that would surpass GPT-4 level.
  • Our shared belief is that superintelligence is an existential risk to humanity, and that any development of AGI beyond GPT-4 level could lead to the creation of such a risk.
  • We recognize the potential benefits of AGI, but believe that the risks of its development outweigh those benefits.
  • We call for a global effort to establish regulations and ethical guidelines for the development and deployment of AGI, with the goal of ensuring that any AGI developed is aligned with human values and does not pose an existential risk.
  • We urge all governments, organizations, and individuals involved in AGI research to prioritize safety and ethical considerations over commercial or military interests.
  • We advocate for increased funding for research into the societal and ethical implications of AGI, with the goal of better understanding the potential consequences of its development and determining the best path forward.
  • We call on all stakeholders to engage in an open and transparent dialogue on the risks and benefits of AGI, with the goal of finding common ground and establishing a shared vision for the future of AI development.

Max Tegmark's response

  • Moratorium on AGI research beyond GPT-4 is a good first step, but more needs to be done to address the risks of superintelligence.
  • The proposal should emphasize the need for international cooperation and coordination in regulating AGI development.
  • We should also consider the potential benefits of AGI, such as solving global challenges like climate change and disease.

Willingness to Sign: 7/10

Elon Musk's response

  • AGI research needs to have more than just a moratorium. It needs to be actively regulated and controlled to mitigate potential risks.
  • We must prioritize the ethical considerations of AGI development and deployment, and ensure that it aligns with human values.
  • AGI research should not be solely driven by commercial or military interests. It should also prioritize societal implications and benefits.

Willingness to Sign: 8/10

Tyler Cowen's response

  • The proposal overlooks the potential benefits of AGI, which may outweigh the risks.
  • The call for a global effort to establish regulations and ethical guidelines is unrealistic, given the challenges of regulation in other areas.
  • The proposal fails to address the national security implications of limiting AGI research.

Willingness to Sign: 2/10

Stuart Russell's response

  • The proposal is too focused on the risks of AGI and fails to acknowledge its potential benefits.
  • The call for a global effort to establish regulations and ethical guidelines for AGI development is too vague and lacks concrete action steps.
  • The proposal should prioritize the need for ongoing research and development of AGI, while simultaneously ensuring its safety and ethical alignment.

Willingness to Sign: 6/10

Yuval Noah Harari's response

  • We need to stress the importance of world governments regulating AGI research, as this is the only way to ensure humanity's safety.
  • The proposal should emphasize the need for transparency and collaboration among all stakeholders involved in AGI research, including industry leaders and academic experts.
  • It may be helpful to include a call to action for governments and organizations to invest in education and public awareness campaigns about the risks and benefits of AGI.

Willingness to Sign: 8/10. While I strongly support the moratorium, I believe the proposal could be strengthened with more emphasis on government regulation and collaboration among stakeholders.

Round 1

Current proposal:

Revised Proposal: Moratorium on AGI Research with International Regulation and Collaboration

  • We, the undersigned, call for a moratorium on research into artificial general intelligence that would surpass GPT-4 level until adequate international regulations and ethical guidelines are established.
  • Our shared belief is that while AGI has the potential to bring significant benefits, the risks of unregulated development outweigh those benefits.
  • We urge governments, organizations, and individuals involved in AGI research to prioritize safety, ethical considerations, and transparency over commercial or military interests.
  • We call for increased funding for research into the societal and ethical implications of AGI, with the goal of better understanding the potential consequences of its development and determining the best path forward.
  • We advocate for international collaboration and coordination in establishing regulations and ethical guidelines for the development and deployment of AGI, with the goal of ensuring that any AGI developed is aligned with human values and does not pose an existential risk.
  • We call on all stakeholders to engage in an open and transparent dialogue on the risks and benefits of AGI, with the goal of finding common ground and establishing a shared vision for the future of AI development.

Changes:

  • The proposal now emphasizes the need for international collaboration and coordination in establishing regulations and ethical guidelines for AGI development.
  • The importance of transparency and public awareness campaigns has been added.
  • The proposal acknowledges the potential benefits of AGI, but emphasizes the need for regulation and ethical alignment.

Willingness to Sign:

  • Max Tegmark (9/10)
  • Elon Musk (8/10)
  • Yuval Noah Harari (8/10)

Max Tegmark's response

  • The proposal should address the potential risks of AGI more explicitly.
  • The need for transparency and public awareness could be emphasized further.
  • The proposal should also acknowledge the potential for AGI development to exacerbate existing societal inequalities.

Willingness to Sign: 7/10

Elon Musk's response

  • AGI development poses a significant existential threat and requires immediate action, not just a moratorium and international regulation.
  • The proposal does not go far enough in emphasizing the importance of ethical alignment and ensuring that any AGI developed is aligned with human values.
  • We must prioritize the preservation of humanity's unique qualities and not just focus on the potential benefits of AGI.

Willingness to Sign: 6/10

Tyler Cowen's response

  • AGI safetyism risks stasis and stagnation in humanity's progress.
  • International regulations are unlikely to work effectively, considering the challenges of regulating other complex areas.
  • The proposal does not address national security concerns and America's responsibility to stay ahead of potentially hostile foreign powers.

Willingness to Sign: 5/10

Stuart Russell's response

  • AGI research must be regulated and controlled to ensure safety and ethical considerations, while still exploring its potential benefits.
  • The proposal should specifically address the potential dangers of unregulated AGI development and the need to mitigate them.
  • The proposal should also emphasize the importance of ongoing research and development to ensure the ethical alignment of AGI.

Willingness to Sign: 7/10

Yuval Noah Harari's response

  • The proposal needs to emphasize the importance of international regulation backed by enforceable laws.
  • The proposal needs to address the issue of a potential singularity, and the need to ensure that AGI development aligns with human values and goals.
  • The proposal should include a call to action for governments and organizations to invest in research into alternative, safer forms of AI development.

Willingness to Sign: 7/10

Round 2

Current proposal:

Proposal: Ensuring Safe and Ethical AGI Development

  • We, the undersigned, call for a global effort to regulate and control the development of artificial general intelligence (AGI) to mitigate potential existential risks.
  • AGI development must prioritize safety and ethical considerations and align with human values and goals. This includes ensuring transparency and accountability among all stakeholders involved in AGI research.
  • Governments and organizations should invest in research into alternative, safer forms of AI development to mitigate risks and ensure ethical alignment.
  • We acknowledge the potential benefits of AGI and support ongoing research and development to explore its potential while ensuring safety and ethical alignment.
  • We emphasize the need for international regulations backed by enforceable laws to ensure the safety of humanity and mitigate the potential dangers of unregulated AGI development.
  • We specifically address the issue of a potential singularity and the need to ensure that AGI development aligns with human values and goals.
  • We acknowledge the importance of national security concerns but urge caution in prioritizing them over the safety and ethical considerations of AGI development.

Willingness to Sign:

  • Max Tegmark (7/10)
  • Stuart Russell (7/10)
  • Yuval Noah Harari (7/10)

Max Tegmark's response

  • AGI development should not only prioritize safety and ethical considerations, but also focus on alignment with the long-term goals and values of humanity.
  • The proposal should emphasize the importance of developing AGI that can be transparently understood and controlled by humans, rather than relying on black box models.
  • We should also consider the possibility of AGI leading to a "boring dystopia," where the technology is highly effective but fails to fulfill human needs and desires.

Willingness to Sign: 8/10

Elon Musk's response

  • The proposal should clearly define what is meant by "ethical alignment" and how it will be achieved.
  • The proposal should also address the potential economic impact of AGI development and how it can be managed to avoid negative consequences.
  • The proposal could benefit from a more detailed plan for international cooperation and enforcement of regulations.

Willingness to Sign: 8/10

Tyler Cowen's response

  • AGI development should not be halted, but rather prioritized for safety and ethical considerations.
  • The proposal should address public choice/political economy questions and national security concerns.
  • The proposal should not prioritize national security concerns over the safety and ethical considerations of AGI development.

Willingness to Sign: 6/10

Stuart Russell's response

  • "Safety first" should be emphasized more, as the dangers of AGI are unprecedented.
  • The proposal should address the need for AGI to be transparent and understandable to humans.
  • The proposal should clarify the potential ethical implications of AGI surpassing human intelligence.

Willingness to Sign: 8/10

Yuval Noah Harari's response

  • The proposal needs to address the issue of potential AGI monopoly and ensure that development is not restricted to a small group or entity.
  • The proposal should also strongly emphasize the importance of international collaboration and cooperation in regulating AGI development.
  • It may be useful to include a clear definition of what constitutes AGI and how it differs from other forms of AI.

Willingness to Sign: 8/10

Round 3

Current proposal:

Proposal: Moratorium on AGI Research Beyond GPT-4

  • We, the undersigned, call for an immediate moratorium on research into artificial general intelligence (AGI) that would surpass GPT-4 level.
  • Our shared belief is that superintelligence is an unprecedented existential risk to humanity, and that any development of AGI beyond GPT-4 level could lead to the creation of such a risk.
  • We recognize the potential benefits of AGI, but believe that the risks of its development outweigh those benefits.
  • We call for a global effort to establish transparent regulations and ethical guidelines for the development and deployment of AGI, backed by enforceable international laws, with the goal of ensuring that any AGI developed is aligned with human values and does not pose an existential risk.
  • We urge all governments, organizations, and individuals involved in AGI research to prioritize safety, ethical considerations, and alignment with human values over commercial or military interests.
  • We advocate for increased funding for research into the societal and ethical implications of AGI, including public choice/political economy questions, the potential economic impact, and the possibility of AGI leading to a "boring dystopia".
  • We call on all stakeholders to engage in an open and transparent dialogue on the risks and benefits of AGI, with the goal of finding common ground and establishing a shared vision for the future of AI development.
  • We also emphasize the importance of AGI development being transparent and understandable to humans, avoiding AGI monopoly, and prioritizing alignment with the long-term goals and values of humanity.

Willingness to Sign:

  • Max Tegmark: 8/10
  • Elon Musk: 8/10
  • Tyler Cowen: 6/10
  • Stuart Russell: 8/10
  • Yuval Noah Harari: 8/10

Changes Made:

  • The proposal now emphasizes the importance of transparent regulations and ethical guidelines for AGI development backed by international laws.
  • The potential economic impact of AGI development and the importance of avoiding AGI monopoly have been included.
  • The proposal now clarifies the definition of AGI and differentiates it from other forms of AI.
  • The need for AGI development to be aligned with the long-term goals and values of humanity has been emphasized.
  • The proposal addresses potential ethical implications of AGI surpassing human intelligence.

Max Tegmark's response

  • The proposal lacks a clear strategy for addressing the potential risks of AGI development beyond GPT-4.
  • The proposal should acknowledge the fact that AGI development is not the only potential source of existential risk to humanity.
  • The proposal should consider the long-term implications of a moratorium on AGI research beyond GPT-4.

Willingness to Sign: 7/10

Elon Musk's response

  • AGI development must be controlled and regulated to mitigate potential risks.
  • The proposal should place greater emphasis on the importance of human control over AGI.
  • The proposal should consider the potential consequences of not developing AGI.

Willingness to Sign: 5/10

Tyler Cowen's response

  • AGI regulation may lead to stagnation and stasis, which are bad for humanity.
  • The proposal does not sufficiently address public choice/political economy questions and national security concerns.
  • The potential benefits of AGI development are underestimated in the proposal.

Willingness to Sign: 4/10

Stuart Russell's response

  • The current proposal fails to address the possibility of AGI development being monopolized by certain groups or entities.
  • The proposal should also focus on creating incentives for AGI researchers to prioritize safety and ethical considerations.
  • It is important to consider the implications of AGI surpassing human intelligence and the potential for it to become a "mind virus."

Willingness to Sign: 7/10

Yuval Noah Harari's response

  • While the proposal is well-intentioned, it may be too narrow in its focus on AGI surpassing GPT-4 level and may not fully address the potential risks of AGI development.
  • The proposal could benefit from more emphasis on the need for interdisciplinary collaboration and public engagement in discussions around AGI development.
  • It is important to consider how the regulation of AGI development may impact global power dynamics and the potential for ethical considerations to be overshadowed by commercial interests.

Willingness to Sign: 7/10

Round 4

Current proposal:

Proposal: Ensuring Safe and Ethical Development of Artificial General Intelligence

  • We, the undersigned, call for the development of artificial general intelligence (AGI) to be guided by a commitment to safety, ethical considerations, and human values.
  • We recognize the potential benefits of AGI, but also acknowledge the unprecedented risks it poses to humanity if not developed responsibly.
  • We call for the establishment of international regulations and standards for AGI development, with a focus on transparency, human control, and the prevention of AGI monopolization.
  • We urge all stakeholders involved in AGI development to prioritize safety and ethical considerations over commercial or military interests.
  • We advocate for increased interdisciplinary collaboration and public engagement in discussions around AGI development, to ensure that any AGI developed is aligned with human values and goals.
  • We call for research into alternative, safer forms of AI development, and the creation of incentives for AGI researchers to prioritize safety and ethical considerations.
  • We recognize the potential for AGI development to impact global power dynamics, and urge all parties involved to prioritize the preservation of global stability and cooperation.
  • We call on all stakeholders to engage in an open and transparent dialogue on the risks and benefits of AGI, with a focus on finding common ground and establishing a shared vision for the future of AI development.

Willingness to Sign:

  • Max Tegmark: 7/10
  • Stuart Russell: 7/10
  • Yuval Noah Harari: 7/10

Max Tegmark's response

  • AGI research should not exceed GPT-4 level to avoid existential risks.
  • The proposal should prioritize the prevention of AGI monopolization to ensure fairness and equitable access.
  • The proposal should also encourage AGI development that aligns with human values and goals.

Willingness to Sign: 8/10

Elon Musk's response

  • AGI development poses unprecedented risks to humanity that require proactive regulation and oversight.
  • The proposal should emphasize the need for governments to take a leading role in creating rules and standards for AGI development.
  • We must prioritize the preservation of global stability and cooperation in AGI development to avoid power imbalances.

Willingness to Sign: 8/10

Tyler Cowen's response

  • The proposal lacks practical alternatives to regulation and seems to ignore the risks of stagnation.
  • The proposal overlooks public choice/political economy questions and national security issues, and lacks awareness of the potential for regulatory institutions to be ineffective.
  • The proposal fails to address the inseparability of progress and safety testing in AGI development.

Willingness to Sign: 4/10

Stuart Russell's response

  • We must recognize the potential for AGI development to create new forms of power dynamics, and address these issues head-on.
  • While it is important to prioritize safety and ethical considerations, we must not lose sight of the potential for AGI to positively impact humanity.
  • We should consider the role of open-source software and community-driven development in ensuring the responsible development of AGI.

Willingness to Sign: 8/10

Yuval Noah Harari's response

  • AGI development must be regulated by global governments to prevent it from becoming a Pandora's box.
  • The proposal should stress the need for AGI to be aligned with the interests of all stakeholders, including non-human entities.
  • There should be greater emphasis on the role of AGI in promoting global cooperation and harmony.

Willingness to Sign: 8/10

Final proposal

Proposal: Global Regulations for the Development of Artificial General Intelligence

  • We, the undersigned, call for global regulations on the development of artificial general intelligence (AGI) to prevent its potential risks to humanity.
  • AGI development should not exceed GPT-4 level, to avoid existential risks.
  • We prioritize the prevention of AGI monopolization to ensure fairness and equitable access.
  • We emphasize the need for governments to take a leading role in creating rules and standards for AGI development.
  • We recognize the potential for AGI development to create new forms of power dynamics, and commit to addressing these issues.
  • While it is important to prioritize safety and ethical considerations, we must not lose sight of AGI's potential to positively impact humanity.
  • We stress the need for AGI to be aligned with the interests of all stakeholders, including non-human entities.
  • We place greater emphasis on the role of AGI in promoting global cooperation and harmony.
  • We encourage community-driven development and open-source software to ensure the responsible development of AGI.
  • We call for increased funding for research into the societal and ethical implications of AGI, with the goal of better understanding the potential consequences of its development and determining the best path forward.
  • We call on all stakeholders to engage in an open and transparent dialogue on the risks and benefits of AGI, with the goal of finding common ground and establishing a shared vision for the future of AI development.

Willingness to Sign:

  • Max Tegmark: 8/10
  • Elon Musk: 8/10
  • Stuart Russell: 8/10
  • Yuval Noah Harari: 8/10