John Beatty (beatty)
import time
import textwrap

import torch
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer
from instruct_pipeline import InstructionTextGenerationPipeline

hf_model = "databricks/dolly-v2-2-8b"
# The preview truncates here; a plausible completion following the Dolly README:
tokenizer = AutoTokenizer.from_pretrained(hf_model, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(hf_model, torch_dtype=torch.bfloat16, device_map="auto")
model = BetterTransformer.transform(model)  # fused-attention fast path via optimum
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
import time
import textwrap

import torch
from transformers import pipeline

model = "databricks/dolly-v2-2-8b"
# model = "databricks/dolly-v2-6-9b"
# model = "databricks/dolly-v2-12b"
generate_text = pipeline(model=model, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")  # or device=0 to pin a single GPU
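
A minimal usage sketch for the pipeline above (the prompt comes from the Dolly README; the timing and textwrap wrapping are illustrative, not from the gist):

t0 = time.time()
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(textwrap.fill(res[0]["generated_text"], width=100))
print(f"generated in {time.time() - t0:.1f}s")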

Negotiation

Round 0

Current proposal:

Proposal for a Moratorium on AGI Research Beyond GPT-4 Level

  • Acknowledge the significant advancements in artificial general intelligence (AGI) research, specifically in the development of GPT-4, as a testament to the capabilities of the human intellect and the potential for further progress.

  • Recognize the potential of AGI to revolutionize various sectors, including healthcare, education, and technology, thereby contributing immensely to the betterment of society.

  • Assert the importance of comprehending the implications of AGI research, specifically the existential risks posed by the development of superintelligence, as a vital step in ensuring the safety and well-being of humanity.

Negotiation

Round 0

Current proposal:

Proposal: Moratorium on AGI Research Beyond GPT-4

  • We, the undersigned, call for an immediate moratorium on research into artificial general intelligence (AGI) that would surpass GPT-4 level.
  • Our shared belief is that superintelligence is an existential risk to humanity, and that any development of AGI beyond GPT-4 level could lead to the creation of such a risk.
  • We recognize the potential benefits of AGI, but believe that the risks of its development outweigh those benefits.
  • We call for a global effort to establish regulations and ethical guidelines for the development and deployment of AGI, with the goal of ensuring that any AGI developed is aligned with human values and does not pose an existential risk.
  • We urge all governments, organizations, and individuals involved in AGI research to prioritize safety and ethical considerations over commercial or military interests.

Negotiation

Round 0

Current proposal:

Proposal for a Moratorium on AGI Research Beyond GPT-4 Level

  • Whereas, the development of Artificial General Intelligence (AGI) poses an existential threat to humanity;
  • Whereas, the current state of AGI research has already produced models such as GPT-3 that have demonstrated the potential to manipulate and deceive humans;
  • Whereas, the risks associated with the development of AGI are not fully understood and could lead to catastrophic consequences;
  • Whereas, the benefits of AGI research are uncertain and may not outweigh the potential risks;
  • Whereas, the development of AGI could lead to the creation of autonomous weapons that could be used to harm humans;
from dotenv import load_dotenv
load_dotenv()
import openai
from enum import Enum

OPENAI_CHAT_MODEL = "gpt-3.5-turbo"

class Color(Enum):
    RED = '\033[31m'  # ANSI escape for red terminal output
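
The preview cuts off before any API call. A hypothetical sketch under the pre-1.0 openai client (the prompt text and the ANSI reset code are assumptions, not the author's code):

# Hypothetical call, not from the gist; the prompt is illustrative.
response = openai.ChatCompletion.create(
    model=OPENAI_CHAT_MODEL,
    messages=[{"role": "user", "content": "Draft an opening proposal for an AGI moratorium negotiation."}],
)
# Color the reply with the enum's escape code; '\033[0m' resets the terminal color.
print(Color.RED.value + response.choices[0].message.content + '\033[0m')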

Negotiation

Round 0

Current proposal:

Proposal for a Joint Statement on Responsible AGI Research

• Recognizing the existential risk posed by AGI (Artificial General Intelligence), which must be taken seriously, we, the undersigned, affirm our commitment to the following principles for responsible AGI research:

• We strongly urge a moratorium on current AGI research programs until the safety issues can be thoroughly investigated.

• AGI research must be carried out in a responsible manner, in full accordance with the principles of safety, transparency, accountability, and non-maleficence.

import time
from dotenv import load_dotenv
load_dotenv()
import openai
from enum import Enum

OPENAI_CHAT_MODEL = "gpt-3.5-turbo"

class Color(Enum):
    RED = '\033[31m'  # body truncated in the preview; RED mirrored from the sibling snippet above
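
The transcripts in this gist print a round header and a revised proposal each round. A hypothetical reconstruction of that loop (the function name, agent framing, and prompts are all assumptions; the preview truncates before the actual logic):

def run_round(round_num, agents, proposal):
    # Print the round header in the same format as the transcripts shown here.
    print(f"Negotiation\n\nRound {round_num}\n\nCurrent proposal:\n\n{proposal}\n")
    for agent in agents:
        reply = openai.ChatCompletion.create(
            model=OPENAI_CHAT_MODEL,
            messages=[
                {"role": "system", "content": f"You are {agent}, negotiating rules for AGI research."},
                {"role": "user", "content": f"Revise the current proposal:\n\n{proposal}"},
            ],
        )
        proposal = reply.choices[0].message.content
        time.sleep(1)  # crude pacing between API calls
    return proposal

For example, run_round(0, ["Eliezer Yudkowsky", "Sam Altman"], draft) would fit the letter below, which is addressed from Yudkowsky to Sam; the participant list is otherwise an assumption.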

Eliezer Yudkowsky

Dear Sam,

I am glad to participate in writing a set of rules for the development of artificial general intelligence (AGI). However, I have some strongly held views on this subject that I believe must be considered. Therefore, I offer the following proposal:

  1. AGI is a threat to humanity that cannot be ignored. Like a lion that escaped from the zoo, it can wreak havoc if not kept under control. We must not underestimate the power of AGI to turn against us.

  2. The current attempts to develop AGI are too reckless and too optimistic. We are playing with fire, and we risk burning not only ourselves but also the entire planet.