Releases: OpenInterpreter/open-interpreter
v0.1.4
What's Changed
- Add support for R language by @freestatman in #249
- Feature: Implement and Document New Interactive Mode Commands by @moming2k in #302
- Remove previous message and its responses from chat history with Undo-command. by @oliverpalonkorp in #273
- Enable resume download from HF by @jerzydziewierz in #345
- ui: Optimize welcome message by @codeacme17 in #257
- feat: Add hints to Azure model by @codeacme17 in #237
- docs: Upgrade issue templates by @jordanbtucker in #262
- docs: Separate system versions into own fields by @jordanbtucker in #264
- Docs: use x64 in WINDOWS.md and GPU.md by @jordanbtucker in #287
- Fix using litellm.api_base, litellm.api_key, litellm.api_version by @ishaan-jaff in #284
- Fix typo. by @Michael-Lfx in #292
- fix(ui): Fix the display problem of welcome message by @codeacme17 in #270
- Docs: Add security policy by @jordanbtucker in #266
- Check disk space before downloading models by @michaelzdrav in #323
- Update GPU.md by @metantonio in #335
- remove duplicate import of inquirer library in get_hf_llm.py by @lalebot in #327
- fix: merge os.environ with llama install env_vars by @jordanbtucker in #338
- docs: move CONTRIBUTING to common path by @jordanbtucker in #350
- Fix minor typo by @osanseviero in #248
New Contributors
- @freestatman made their first contribution in #249
- @osanseviero made their first contribution in #248
- @okisdev made their first contribution in #253
- @codeacme17 made their first contribution in #257
- @gijigae made their first contribution in #282
- @Michael-Lfx made their first contribution in #292
- @jjolly made their first contribution in #278
- @michaelzdrav made their first contribution in #323
- @metantonio made their first contribution in #335
- @lalebot made their first contribution in #327
- @jerzydziewierz made their first contribution in #345
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's Changed
- Quick fix for `--model tiiuae/falcon-180B` (redirect to GGUF version).
- Quick fix for #247

Update pushed to `pip` with just the fixes above. After that, I merged this commit, which will be in the next `pip` version:
- Add support for R language, update instructions for package installation by @freestatman in #249
New Contributors
- @freestatman made their first contribution in #249
Full Changelog: v0.1.2...v0.1.3
v0.1.2
What's Changed
- docs: explain GPU support by @jordanbtucker in #102
- feat: add AZURE_API_KEY that falls back to OPENAI_API_KEY by @jordanbtucker in #135
- docs: explain Windows Code-Llama build requirements by @jordanbtucker in #138
- Created contribution guidelines by @TanmayDoesAI in #101
- docs: create issue templates by @jordanbtucker in #176
- moved all markdown files to a folder, updated the readme for the same… by @TanmayDoesAI in #182
- Fix download URL for CodeLlama 7B high quality model by @merlinfrombelgium in #181
- docs: add interpreter version to template by @jordanbtucker in #190
- docs: fix example version number for interpreter by @jordanbtucker in #191
- docs: add enhancement label to feature requests by @jordanbtucker in #192
- docs: prevent blank issues by @jordanbtucker in #195
- docs: provide issue template link by @jordanbtucker in #196
- Update README.md by @macterra in #197
- Create MACOS Documentation by @ihgalis in #177
- Add option to override Azure API type by @Taik in #189
- Feature: add cli environment variable by @moming2k in #157
- Update MACOS.md by @ihgalis in #215
- Falcon // Any 🤗 model via `--model meta/llama` by @KillianLucas in #213
- Update contributing.md with instructions on how to get local fork running by @oliverpalonkorp in #235
- remove redundant checks for apple silicon by @shubhe25p in #230
- Fix GPT 3.5 from failing to run commands by @Maclean-D in #96
New Contributors
- @jordanbtucker made their first contribution in #102
- @merlinfrombelgium made their first contribution in #181
- @macterra made their first contribution in #197
- @ihgalis made their first contribution in #177
- @Taik made their first contribution in #189
- @moming2k made their first contribution in #157
- @oliverpalonkorp made their first contribution in #235
- @shubhe25p made their first contribution in #230
- @Maclean-D made their first contribution in #96
Full Changelog: v0.1.1...v0.1.2
v0.1.1
What's Changed
- Added Azure support by @ifsheldon in #62
- CodeLlama improvements by @KillianLucas in #87
- Rate limit error fix
New Contributors
- @ifsheldon made their first contribution in #62
Full Changelog: v0.1.0...v0.1.1
v0.1.0
Open Interpreter v0.1.0
Open Interpreter lets LLMs run code locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
- CodeLlama supported with `--local`, more models coming soon
- Interpreters loaded for Python, JavaScript, Shell, and AppleScript
- Streaming chat in your terminal (thanks to Textualize/Rich!)
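Getting started, per the description above, is an install followed by a single command. A minimal sketch, assuming a Python environment with pip available (the `open-interpreter` package name and the `--local` flag are taken from these release notes):

```shell
# Install Open Interpreter from PyPI
pip install open-interpreter

# Launch the ChatGPT-like terminal interface
interpreter

# Or run CodeLlama locally instead of a hosted model
interpreter --local
```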
New Contributors
- @TanmayDoesAI made their first contribution in #25
Full Changelog: v0.0.297...v0.1.0
v0.0.297
What's Changed
- Windows CURL error fix by @KillianLucas in #23
- Fixed error where long conversations would hang forever (#5) by updating TokenTrim
New Contributors
- @eltociear made their first contribution in #28
Full Changelog: v0.0.296...v0.0.297
v0.0.296
Added Code Llama support.
Full Changelog: v0.0.295...v0.0.296
v0.0.295
What's Changed
- Better CLI messages
- (Experimental) Llama-2 support
v0.0.294
What's Changed
- Improved Rich active line code rendering
- Improved Open Procedures recall by @KillianLucas in #11
Full Changelog: v0.0.293...v0.0.294
v0.0.293
Now supports Applescript and HTML.
Better Windows CLI support.
The Python interpreter now handles try/except and nested blocks properly, and error messages are cleaner. Resolved #7.
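The try/except fix is easiest to see with the kind of snippet an LLM commonly emits. A hypothetical illustration (names invented here, not from the release) of code with nested blocks and exception handling that the interpreter must execute as one unit:

```python
# Nested blocks plus exception handling, executed as a single unit
def safe_divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        # Fall back to infinity rather than crashing
        result = float("inf")
    return result

for pair in [(10, 2), (1, 0)]:
    print(safe_divide(*pair))
```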
What's Changed
- Fix Python interpreter edge cases by @KillianLucas in #9
- Update interpreter.py - windows file system adjustment by @nirvor in #8
New Contributors
- @KillianLucas made their first contribution in #9
^ lmao why does it say this??
Full Changelog: v0.0.290...v0.0.293