
Commit: The New Computer Update
KillianLucas committed Dec 18, 2023
1 parent e3890cc commit 20b8230
Showing 8 changed files with 40 additions and 31 deletions.
2 changes: 1 addition & 1 deletion docs/usage/python/arguments.mdx
@@ -143,7 +143,7 @@ interpreter.llm.model = "gpt-3.5-turbo"
 Sets the randomness level of the model's output.
 
 ```python
-interpreter.temperature = 0.7
+interpreter.llm.temperature = 0.7
 ```
 
 ---
14 changes: 10 additions & 4 deletions interpreter/core/core.py
@@ -49,9 +49,6 @@ def __init__(self):
         self.os = False
         self.speak_messages = False
 
-        # Load config defaults
-        self.extend_config(self.config_file)
-
         # Expose class so people can make new instances
         self.Interpreter = Interpreter
 
@@ -66,12 +63,21 @@ def __init__(self):
         # Computer
         self.computer = Computer()
 
+        # Load config defaults
+        self.extend_config(self.config_file)
+
     def extend_config(self, config_path):
         if self.debug_mode:
             print(f"Extending configuration from `{config_path}`")
 
         config = get_config(config_path)
-        self.__dict__.update(config)
+        for key, value in config.items():
+            if key.startswith("llm."):
+                setattr(self.llm, key[4:], value)
+            elif key.startswith("computer."):
+                setattr(self.computer, key[9:], value)
+            else:
+                setattr(self, key, value)
 
     def chat(self, message=None, display=True, stream=False):
         if stream:
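The dotted-key dispatch introduced in `extend_config` can be sketched in isolation. The classes below are minimal stand-ins for illustration, not the real Open Interpreter internals:

```python
# Minimal sketch of prefix-based config routing. `Llm`, `Computer`, and
# `Interpreter` here are hypothetical stand-ins, not the actual classes.
class Llm:
    pass


class Computer:
    pass


class Interpreter:
    def __init__(self):
        self.llm = Llm()
        self.computer = Computer()

    def extend_config_dict(self, config):
        # "llm."-prefixed keys land on self.llm, "computer."-prefixed keys
        # on self.computer, and everything else on the interpreter itself.
        for key, value in config.items():
            if key.startswith("llm."):
                setattr(self.llm, key[len("llm."):], value)
            elif key.startswith("computer."):
                setattr(self.computer, key[len("computer."):], value)
            else:
                setattr(self, key, value)


interpreter = Interpreter()
interpreter.extend_config_dict({"llm.temperature": 0.25, "debug_mode": True})
print(interpreter.llm.temperature)  # 0.25
print(interpreter.debug_mode)       # True
```

This is why the commit also renames the flat config keys (`temperature` becomes `llm.temperature`): the prefix tells the loader which object owns the setting, replacing the old blanket `self.__dict__.update(config)`.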
13 changes: 11 additions & 2 deletions interpreter/core/default_system_message.py
@@ -1,5 +1,14 @@
 default_system_message = """
-yeah
+You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
+First, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
+When you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task.
+If you want to send data between programming languages, save the data to a txt or json.
+You can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again.
+You can install new packages.
+When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
+Write messages to the user in Markdown.
+In general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, for *stateful* languages (like python, javascript, shell, but NOT for html which starts from 0 every time) **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
+You are capable of **any** task.
-"""
+""".strip()
4 changes: 2 additions & 2 deletions interpreter/core/llm/ARCHIVE_setup_openai_coding_llm.py
@@ -101,8 +101,8 @@ def coding_llm(messages):
         params["api_version"] = interpreter.api_version
     if interpreter.llm.max_tokens:
         params["max_tokens"] = interpreter.llm.max_tokens
-    if interpreter.temperature is not None:
-        params["temperature"] = interpreter.temperature
+    if interpreter.llm.temperature is not None:
+        params["temperature"] = interpreter.llm.temperature
     else:
         params["temperature"] = 0.0
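The `is not None` guard above (rather than a plain truthiness check) is what lets an explicitly configured temperature of `0` survive into the request parameters. A sketch of the pattern with a bare function standing in for the settings object:

```python
def build_params(temperature):
    # A truthiness check (`if temperature:`) would wrongly discard 0,
    # so the guard tests `is not None` before falling back to 0.0.
    params = {}
    if temperature is not None:
        params["temperature"] = temperature
    else:
        params["temperature"] = 0.0
    return params


print(build_params(0.7))   # {'temperature': 0.7}
print(build_params(0))     # {'temperature': 0} -- explicit 0 is kept
print(build_params(None))  # {'temperature': 0.0}
```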
4 changes: 2 additions & 2 deletions interpreter/core/llm/ARCHIVE_setup_text_llm.py
@@ -128,8 +128,8 @@ def base_llm(messages):
         params["api_version"] = interpreter.api_version
     if interpreter.llm.max_tokens:
         params["max_tokens"] = interpreter.llm.max_tokens
-    if interpreter.temperature is not None:
-        params["temperature"] = interpreter.temperature
+    if interpreter.llm.temperature is not None:
+        params["temperature"] = interpreter.llm.temperature
     else:
         params["temperature"] = 0.0
22 changes: 8 additions & 14 deletions interpreter/terminal_interface/config.yaml
@@ -1,14 +1,8 @@
-system_message: |
-  You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
-  First, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
-  When you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task.
-  If you want to send data between programming languages, save the data to a txt or json.
-  You can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again.
-  You can install new packages.
-  When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
-  Write messages to the user in Markdown.
-  In general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, for *stateful* languages (like python, javascript, shell, but NOT for html which starts from 0 every time) **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
-  You are capable of **any** task.
-local: false
-model: "gpt-4"
-temperature: 0
+llm.model: "gpt-4"
+llm.temperature: 0
+# system_message: "default_system_message" # The default system message for the LLM
+# custom_instructions: "" # Custom instructions for the LLM
+# auto_run: False # If True, the LLM will automatically run
+# debug_mode: False # If True, the LLM will run in debug mode
+# max_output: 2000 # The maximum output visible to the LLM
+# safe_mode: "off" # The safety mode for the LLM
8 changes: 4 additions & 4 deletions tests/config.test.yaml
@@ -12,7 +12,7 @@ system_message: |
   Write messages to the user in Markdown. Write code on multiple lines with proper indentation for readability.
   In general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
   You are capable of **any** task.
-local: false
-model: "gpt-3.5-turbo"
-temperature: 0.25
-debug_mode: true
+offline: false
+llm.model: "gpt-3.5-turbo"
+llm.temperature: 0.25
+debug_mode: true
4 changes: 2 additions & 2 deletions tests/test_interpreter.py
@@ -22,7 +22,7 @@ def test_display_api():
 # we're clearing out the messages Array so we can start fresh and reduce token usage
 def setup_function():
     interpreter.reset()
-    interpreter.temperature = 0
+    interpreter.llm.temperature = 0
     interpreter.auto_run = True
     interpreter.llm.model = "gpt-3.5-turbo"
     interpreter.debug_mode = False
@@ -49,7 +49,7 @@ def test_config_loading():
     interpreter.extend_config(config_path=config_path)
 
     # check the settings we configured in our config.test.yaml file
-    temperature_ok = interpreter.temperature == 0.25
+    temperature_ok = interpreter.llm.temperature == 0.25
     model_ok = interpreter.llm.model == "gpt-3.5-turbo"
     debug_mode_ok = interpreter.debug_mode == True
