https://github.com/efugier/smartcat
I hope this brings the power into my new helix
setup.
The key to making this work seamlessly is a good default prompt that tells the model to behave like a CLI tool and not write any unwanted text like markdown formatting or explanations.
@Srid just for my understanding.
The config has:
[openai] # each supported api has their own config section with api and url
api_key = "<your_api_key>"
default_model = "gpt-4"
url = "https://api.openai.com/v1/chat/completions"
[mistral]
api_key_command = "pass mistral/api_key" # you can use a command to grab the key
default_model = "mistral-medium"
url = "https://api.mistral.ai/v1/chat/completions"
Does this mean I could run ollama locally and use them as backend?
ollama provides an API, but I don't know if it is sufficient to replace openai/mistral API etc.
https://github.com/ollama/ollama/blob/main/docs/api.md
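For what it's worth, ollama also exposes an OpenAI-compatible endpoint under `/v1/chat/completions`, so a config section along these lines might work with smartcat. This is an untested sketch; the section name and model are placeholders:

```toml
# Hypothetical smartcat config section pointing at a local ollama instance.
# ollama ignores the API key, but the field may still need to be present.
[ollama]
api_key = "unused"
default_model = "llama3"  # any model pulled via `ollama pull <name>`
url = "http://localhost:11434/v1/chat/completions"
```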
A new helix topic in #nix or #offtopic maybe?
https://nixos.zulipchat.com/#narrow/stream/420166-offtopic/topic/helix/near/422470027
@David Arnold I DMed you this as well, but since you opened this up I might as well drop this here:
https://github.com/morph-labs/rift
Tim DeHerrera schrieb:
David Arnold I DMed you this as well, but since you opened this up I might as well drop this here:
https://github.com/morph-labs/rift
This sort of seems like the other extreme, making the editor's LSP interface the upstream: https://github.com/morph-labs/rift/tree/main/rift-engine
Maybe a good approach is to start out with something like smartcat
for now and then transition to the "Rift Code Engine" as it matures and gains adoption.
Once you've got that working from helix, you just have to select some text you want ChatGPT to interact with, press the pipe | key, and write something like
sc -r -c "write tests for that function"
and the output of the model will be appended right after your selection in the current buffer. The OpenAI API is a bit slow, but apart from that I'm super satisfied with it!
For now, it hasn't been a major inconvenience to just ask an LLM in browser to generate some code. I haven't even tried a more involved code assistant like co-pilot yet though, so maybe I just don't know what I'm missing :sweat_smile:
I like to use co-pilot for assistance, but it’s annoying when it completes things on its own, that’s a bit distracting for me. I want something like this twitter post I shared earlier: https://x.com/victortaelin/status/1753593250340856179?s=46
i.e. auto-complete code when I ask the model to; it avoids my round-trip to the browser, and if it can also learn from my code base as I am working on it, that will be awesome
I think the Rift Code Engine on one hand (a proper LSP "extension" for AI support) and the linux-y cat
style are probably the two ends of the spectrum.
I have seen some tutorials where you just write a code comment and then the AI does what you ask it for in the context of the current buffer.
That's sort of the immediacy that I think is worth a lot when living your day in a code editor.
But short of using VSCode and having access to the entire plugin system, it increasingly looks like smartcat
is a good choice (combined with any OpenAI-compatible backend of one's liking via ollama et al.).
I just wonder if I'm building the right mental model towards the "how to use AI in the editor" question?
Shivaraj B H schrieb:
I like to use co-pilot for assistance, but it’s annoying when it completes things on its own, that’s a bit distracting for me.
Thanks for this data point!! :handshake: Exactly what we oldies need to know that haven't gotten their hands dirty yet :smile:
I think LSP integration would be an improvement, mostly because LSPs tend to be aware of the entire code base; it was my understanding that current code assistants are only aware of open files
Maybe it should be a proper LSP api though, instead of a standalone server
because LSPs tend to be aware of the entire code base, it was my understanding that current code assistants are only aware of open files
Yeah, that's a good point. smartcat
is literally only aware of the selection, so it probably isn't all too powerful.
I wonder what @Shivaraj B H would say of smartcat
, would that fit your desired use case better or worse (because it lacks broader context on the code base/buffer)? I'll just listen to your experience and leapfrog onto it :smile:
Wait!
-c, --context <CONTEXT>
glob pattern to give the matched files' content as context
That seems promising (and maybe just famously "good enough")?
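If that flag works the way the help text suggests, an invocation from helix's pipe prompt might look like this. This is an untested sketch; the glob and prompt are made up, and `-r` is simply reused from the earlier example:

```
sc -r --context "src/**/*.rs" "write tests for that function"
```

The idea being that the matched files' contents get sent along as context for the model, rather than only the current selection.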
I will give smartcat a try and let you know my experience
I want too, but struggling still with my key bindings in helix :smile:
Here's another helix-specific one: https://github.com/leona/helix-gpt — it seems that it could also be combined with any OpenAI-compatible backend.
Oh nice, I haven't seen that one before, might try to wire it up
I was caught with other things, but I got around to trying out sc
yesterday. Here’s what I think:
I have already started on the last point that I mentioned, here’s some progress: https://nixos.zulipchat.com/#narrow/stream/426237-nixify-llm/topic/ollama/near/429110544
Also, I have written a derivation to build smartcat, dumping it here:
{ lib, fetchFromGitHub, rustPlatform }:

rustPlatform.buildRustPackage rec {
  pname = "smartcat";
  version = "0.6.1";

  src = fetchFromGitHub {
    owner = "efugier";
    repo = pname;
    rev = version;
    hash = "sha256-na/Yt5B3nJ0OIeJKVHeoZc+V1OUyimp7PqY7SGARc5s=";
  };

  cargoPatches = [
    ../patches/smartcat/add-Cargo.lock.patch
  ];

  cargoHash = "sha256-ifUHWPBidLXX5f2JfIw9TdyV+pVcRVWT1LmHyLHTVds=";

  meta = with lib; {
    description = ''
      Putting a brain behind `cat`.
      Integrating language models in the Unix commands ecosystem through text streams.
    '';
    homepage = "https://github.com/efugier/smartcat";
    license = licenses.asl20;
    maintainers = [ ];
  };
}
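In case it helps anyone wiring this up: the derivation above should be consumable via `callPackage`. A minimal sketch, assuming the file is saved as `pkgs/smartcat.nix` (the paths are hypothetical; adjust to your repo layout):

```nix
# Hypothetical NixOS module fragment pulling in the smartcat derivation.
{ pkgs, ... }:
{
  environment.systemPackages = [
    (pkgs.callPackage ./pkgs/smartcat.nix { })
  ];
}
```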
You will also need this patch file: add-Cargo.lock.patch
Last updated: Nov 15 2024 at 12:33 UTC