
Neovim Plugin and nvim-lspconfig integration #2

Open
Nold360 opened this issue Jun 8, 2024 · 37 comments · May be fixed by #17
Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request)

Comments

@Nold360 commented Jun 8, 2024

Hi,

I would really love to test this with Neovim, but I have no idea how to set up a custom LSP. Maybe using nvim-lspconfig?

@SilasMarvin SilasMarvin added the documentation Improvements or additions to documentation label Jun 8, 2024
@SilasMarvin (Owner)

Hey @Nold360. We were just having this discussion on Reddit yesterday and will hopefully have some example configurations in the repository and make a PR into nvim-lspconfig soon.

Here is an answer provided by Microbzz on Reddit:

local lsp_ai_config = {
  -- Uncomment if using nvim-cmp
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),
  cmd = { 'lsp-ai' },
  root_dir = vim.loop.cwd(),
  init_options = {
    memory = {
      file_store = {}
    },
    models = {
      model1 = {
        type = "llama_cpp",
        repository = "mmnga/codegemma-1.1-2b-gguf",
        name = "codegemma-1.1-2b-Q8_0.gguf",
        n_ctx = 2048,
        n_gpu_layers = 999
      }
    },
    completion = {
      model = "model1",
      parameters = {
        fim = {
          -- CodeGemma's FIM format is prefix-suffix-middle (PSM), so the
          -- "end" token here is intentionally <|fim_middle|>:
          start = "<|fim_prefix|>",
          middle = "<|fim_suffix|>",
          ["end"] = "<|fim_middle|>"
        },
        max_context = 2000,
        max_new_tokens = 32
      }
    }
  },
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

You can swap out the value of init_options with whatever configuration you prefer. See the configuration section of the wiki for more info.

There is still an open discussion around getting ghost text working and potentially shipping our own neovim plugin for automatic inline completion.

@Myzel394 commented Jun 9, 2024

For those of you who want a very rough PoC, you can use this snippet. It uses OpenAI's chat completions and can be called using <leader>co. This is pretty much just a work in progress, but maybe it will help someone :)

local lsp_ai_config = {
  -- Uncomment if using nvim-cmp
  -- capabilities = require('cmp_nvim_lsp').default_capabilities(),
  cmd = { 'lsp-ai' },
  root_dir = vim.loop.cwd(),
  init_options = {
    memory = {
      file_store = {}
    },
    models = {
      model1 = {
        type = "open_ai",
        chat_endpoint = "https://api.openai.com/v1/chat/completions",
        model = "gpt-4-1106-preview",
        auth_token_env_var_name = "OPENAI_API_KEY",
      }
    },
    completion = {
      model = "model1",
      parameters = {
        max_context = 2048,
        max_new_tokens = 128,
        messages = {
          {
            role = "system",
            content = "You are a chat completion system like GitHub Copilot. You will be given a context and a code snippet. You should generate a response that is a continuation of the context and code snippet."
          },
          {
            role = "user",
            content = "Context: {CONTEXT} - Code: {CODE}"
          }
        }
      }
    }
  },
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

-- Register key shortcut
vim.keymap.set(
    "n", 
    "<leader>co", 
    function()
        print("Loading completion...")

        local x = vim.lsp.util.make_position_params(0)
        local y = vim.lsp.util.make_text_document_params(0)

        local combined = vim.tbl_extend("force", x, y)

        local result = vim.lsp.buf_request_sync(
            0,
            "textDocument/completion",
            combined,
            10000
        )

        print(vim.inspect(result))
    end,
    {
        noremap = true,
    }
)

I'd definitely wish for ghost text, just like copilot.vim does it. I'm not too familiar with LSPs, but #5 could be related to this.

@SilasMarvin (Owner)

> For those of you who want a very rough PoC, you can use this snippet. [...]
>
> I'd definitely wish for ghost text, just like copilot.vim does it. I'm not too familiar with LSPs, but #5 could be related to this.

Thank you for sharing this! To integrate fully with Neovim and provide good inline completion with ghost text, I think we will need to write our own plugin. Right now it will pretty much mimic the functionality of copilot.vim, but with more support for different completion backends. This will change as we add new supported features to LSP-AI that we want Neovim to take advantage of, like chatting with your code and semantic search over your code base.

If anyone sees this and is interested in writing a Neovim plugin, feel free to do it! I'm happy to help however I can. Our VS Code plugin is a really good place to start for the kind of functionality it should provide: https://github.com/SilasMarvin/lsp-ai/blob/main/editors/vscode/src/index.ts

@SilasMarvin SilasMarvin added the enhancement New feature or request label Jun 9, 2024
@Robzz commented Jun 10, 2024

nvim-cmp just merged support for multi-line ghost text, so a cmp-based setup should now be quite viable. I'll play with it some more and see if I can get a decent example config going.

Update: yeah, that works. The main issue right now is that the window containing the completion is drawn below the cursor, which hides the ghost text on the following lines, but there's a PR (#1955) addressing it. I'll play with that branch a bit.
Update 2: it's not perfect, the window does not always go above the cursor, but it kinda works. The first character of the prediction is also not displayed in ghost text; not sure if that's a config problem or a cmp bug. Anyway, it looks like this:
[Screenshot: Screenshot_20240610_145120]
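For anyone wanting to try the same, ghost text in nvim-cmp is controlled through its experimental options; a minimal sketch (assuming nvim-cmp and cmp-nvim-lsp are installed) might look like:

```lua
-- Sketch: enable nvim-cmp's (experimental) ghost text for LSP completions.
local cmp = require('cmp')

cmp.setup({
  sources = {
    -- cmp-nvim-lsp feeds completions from attached LSP servers (e.g. lsp-ai)
    { name = 'nvim_lsp' },
  },
  experimental = {
    -- Render the currently selected entry as virtual text after the cursor
    ghost_text = true,
  },
})
```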

@Myzel394

@Robzz we'd love to see how you did that! :)

@Robzz commented Jun 10, 2024

I'll be opening a draft PR in a bit. I would not recommend merging it until the default config is integrated in nvim-lspconfig (managing the LSP lifecycle by hand is annoying and exactly what nvim-lspconfig is here for) but at least it should give a place to point the more adventurous people that want to try it right now.

Edit: PR up, see #17

@Robzz Robzz linked a pull request Jun 10, 2024 that will close this issue
@SilasMarvin (Owner) commented Jun 10, 2024

I'll be opening a draft PR in a bit. I would not recommend merging it until the default config is integrated in nvim-lspconfig (managing the LSP lifecycle by hand is annoying and exactly what nvim-lspconfig is here for) but at least it should give a place to point the more adventurous people that want to try it right now.

This is awesome!

One thing I'm still unsure of is how to handle default configs. Right now if a user passes empty initializationOptions to LSP-AI we error. We require they provide a memory object and models array.

We could absolutely provide a default on the server for memory, but it's tough choosing the default models array, as there are many options for model backends, and all are hardware- or API-key-dependent.

For the VS Code plugin, I thought that making OpenAI with gpt-4o the default in the plugin settings would be a reasonable choice, but it honestly wasn't my favorite as it still requires users to set an OPENAI_API_KEY for the plugin to work.

We want to make it as easy as possible for everyone to get started using LSP-AI, but I think it requires them to make some initial decision on at least which backend they want to use, which brings me back to being unsure how to implement any default config.

@Myzel394

We want to make it as easy as possible for everyone to get started using LSP-AI but I think it requires they make some initial decision on at least what backend they want to use which brings me back to being unsure on how to implement any default config.

... and ...

For the VS Code plugin, I thought that making OpenAI with gpt-4o the default in the plugin settings would be a reasonable choice, but it honestly wasn't my favorite as it still requires users to set an OPENAI_API_KEY for the plugin to work.

I think this is the best case for a default config. It's much more likely that a user will expose an OPENAI_API_KEY than have set up a local LLM already, especially since we wouldn't know which model they're running. So I'm in favor of using OpenAI as the default config.

@SilasMarvin (Owner)

> I think this is the best case for a default config. [...] So I'm in favor of using OpenAI as the default config.

I think you are probably right here. I do think they shouldn't be defaults on the server; they should be defaults for the config / plugin to send to the server. I don't want server defaults that might expose users' codebases to third parties.
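To make the client-side-defaults idea concrete, here is a hypothetical sketch of how a plugin could merge user options over its own defaults before starting the server; the default_config table and setup() function are illustrative, not part of any existing plugin:

```lua
-- Hypothetical sketch: plugin-side defaults merged with user options.
local default_config = {
  memory = { file_store = {} },
  -- No default models entry: the user must choose a backend explicitly.
}

local function setup(user_opts)
  -- tbl_deep_extend("force", ...) lets user values override the defaults.
  local init_options = vim.tbl_deep_extend("force", default_config, user_opts or {})
  vim.lsp.start({
    cmd = { "lsp-ai" },
    root_dir = vim.loop.cwd(),
    init_options = init_options,
  })
end

return { setup = setup }
```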

@Robzz commented Jun 10, 2024

I think you are probably right here. I do think they shouldn't be defaults on the server, they should be defaults for the config / plugin to send to the server. I don't want to have defaults for the server that might expose users codebases to third parties.

Agreed. I think ideally the LSP part and LSP/IDE glue should have sensible defaults, but the backend/model config should probably be left for the user to pick. So as far as I can tell, that leaves only the memory key, which accepts only an empty file store for now? In that case, aside from registering the textDocument/generation action with nvim, there wouldn't be that much to do.

@SilasMarvin (Owner) commented Jun 10, 2024

> Agreed. I think ideally the LSP part and LSP/IDE glue should have sensible defaults, but the backend/model config should probably be left for the user to pick. [...]

Yes we can make the memory key have a default but I agree the model config should be controlled by the user. I just checked out your fork of nvim-lspconfig. Thank you for making that. I think for the init_options if we want to provide OpenAI gpt-4o as the default model we can use the example OpenAI chat config I have in the configuration section of the wiki: https://github.com/SilasMarvin/lsp-ai/wiki/Configuration#chat-3

Or maybe I am misunderstanding what is standard for nvim-lspconfig and it's OK to provide a default that doesn't fully work without the user providing more parameters?

@Robzz commented Jun 10, 2024

> Or maybe I am misunderstanding what is standard for nvim-lspconfig and it's OK to provide a default that doesn't fully work without the user providing more parameters?

This I really don't know; maybe there are similar cases among the supported LSP servers in nvim-lspconfig. I haven't seen any, but I only glanced over it and it's a long list. I haven't seen any AI LSP servers in there either, since Hugging Face went the way of writing their own plugin for llm-ls instead, so there's really no one to copy from. Asking on the nvim Matrix server is probably the easiest way to know for sure.

@SilasMarvin (Owner)

> This I really don't know [...] Asking on the nvim Matrix server is probably the easiest way to know for sure.

Got it. I'll ask in the matrix server.

We could do the same thing llm-ls does and just fork the llm-ls neovim plugin. This would provide a better user experience for completions. I know eventually we do want our own plugin.

We could also have both?

@AlejandroSuero

For "ghost text", I am working on features for supermaven-nvim and we use something like this:

I didn't implement this feature, so I'm not going to claim I completely get it, but when creating the autocmd, add a namespace (e.g. lsp-ai), and when you get the result, add it as an extended mark; see :h api-extended-marks for more information.

local augroup = vim.api.nvim_create_augroup("lsp-ai", { clear = true })
local ns_id = vim.api.nvim_create_namespace("lsp-ai")
local opts = {
  id = 1,
  hl_mode = "combine",
  -- NOTE: to actually display ghost text, `opts` also needs a `virt_text`
  -- entry, e.g. virt_text = { { completion_text, "Comment" } }, where
  -- completion_text would come from the LSP result below.
}

vim.api.nvim_create_autocmd({"BufEnter", "BufWinEnter"}, {
  group = augroup,
  callback = function() vim.lsp.start(lsp_ai_config) end,
})

-- Register key shortcut
vim.keymap.set(
    "n", 
    "<leader>co", 
    function()
        print("Loading completion...")

        local x = vim.lsp.util.make_position_params(0)
        local y = vim.lsp.util.make_text_document_params(0)

        local combined = vim.tbl_extend("force", x, y)

        local result = vim.lsp.buf_request_sync(
            0,
            "textDocument/completion",
            combined,
            10000
        )

        print(vim.inspect(result))
        vim.api.nvim_buf_set_extmark(0, ns_id, vim.fn.line(".") - 1, vim.fn.col(".") - 1, opts)
    end,
    {
        noremap = true,
    }
)

Warning

Not sure this works as expected for ghost text, or how the result comes in, so you may have to modify it and play around a bit.
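Building on that idea, here is a hedged sketch of rendering a multi-line completion string as ghost text via extmarks (assumes Neovim 0.10+ for inline virtual text; the show_ghost_text helper is illustrative, not from any existing plugin):

```lua
-- Sketch: render a completion string as ghost text using extmarks.
local ns_id = vim.api.nvim_create_namespace("lsp-ai-ghost")

local function show_ghost_text(text)
  local row = vim.fn.line(".") - 1
  local col = vim.fn.col(".") - 1
  -- Clear any previous ghost text in our namespace first.
  vim.api.nvim_buf_clear_namespace(0, ns_id, 0, -1)
  local lines = vim.split(text, "\n", { plain = true })
  vim.api.nvim_buf_set_extmark(0, ns_id, row, col, {
    -- The first line is drawn inline after the cursor...
    virt_text = { { lines[1], "Comment" } },
    virt_text_pos = "inline",
    -- ...and any remaining lines are drawn as virtual lines below.
    virt_lines = #lines > 1 and vim.tbl_map(function(l)
      return { { l, "Comment" } }
    end, vim.list_slice(lines, 2)) or nil,
    hl_mode = "combine",
  })
end
```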

@Robzz commented Jun 10, 2024

Got it. I'll ask in the matrix server.

Ok great!

We could also have both?

Yes, in my understanding it's not unusual in the nvim ecosystem to have an additional plugin for language-server-specific features while keeping the minimal config in the lspconfig plugin; it's even the official recommendation.
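For context, nvim-lspconfig server definitions follow a default_config shape; a hypothetical minimal entry for lsp-ai (the actual PR may differ; filetypes and the root_dir strategy here are illustrative) could look like:

```lua
-- Hypothetical nvim-lspconfig-style definition for lsp-ai; field values
-- are illustrative, not taken from the actual PR.
local util = require('lspconfig.util')

return {
  default_config = {
    cmd = { 'lsp-ai' },
    filetypes = { 'python', 'rust' }, -- illustrative; would likely be broader
    root_dir = util.find_git_ancestor,
    single_file_support = true,
    init_options = {
      memory = { file_store = {} },
      -- models is intentionally absent: the user must pick a backend
    },
  },
}
```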

@SilasMarvin (Owner)

> For "ghost text" I am working on features for supermaven-nvim and we use something like this: [...]

Thanks for sharing!

@SilasMarvin (Owner)

Got it. I'll ask in the matrix server.

Ok great!

We could also have both?

Yes, in my understanding it's not unusual in the nvim ecosystem to have an additional plugin for language server specific features while keeping the minimal config in the lspconfig plugin, it's even the official recommendation.

Got it, that makes sense. I'll ask around about defaults in the Matrix, probably tomorrow.

Would you want to head up our neovim plugin? I'm thinking for now we just fork llm-ls, edit the configuration options to match the options I have for our VS Code plugin, and then just have it perform inline completion with ghost text.

@SilasMarvin SilasMarvin changed the title How to use with neovim Neovim Plugin and nvim-lspconfig integration Jun 10, 2024
@SilasMarvin (Owner)

> Got it. I'll ask in the matrix server. [...]

I asked on the Matrix, they recommended discussing it on their GitHub. I suggest we create a PR that requires the user to provide defaults and have a discussion about it in that PR.

@Robzz commented Jun 13, 2024

Would you want to head up our neovim plugin? I'm thinking for now we just fork llm-ls, edit the configuration options to match the options I have for our VS Code plugin, and then just have it perform inline completion with ghost text.

I'm not sure how long term I can commit to it, but sure, I'm happy to at least help take it off the ground.

I asked on the Matrix, they recommended discussing it on their GitHub. I suggest we create a PR that requires the user to provide defaults and have a discussion about it in that PR.

Alright I'll send them the PR hopefully tomorrow to get the discussion going.

@SilasMarvin (Owner)

> I'm not sure how long term I can commit to it, but sure, I'm happy to at least help take it off the ground. [...] Alright I'll send them the PR hopefully tomorrow to get the discussion going.

This is awesome, thank you!

@SuperBo commented Jun 14, 2024

Hi @SilasMarvin, @Robzz, what is your final decision? If you decided to start a dedicated plugin, I'm happy to help (FYI, I'm developing another Neovim plugin: https://github.com/SuperBo/fugit2.nvim).

@SilasMarvin (Owner)

> Hi @SilasMarvin, @Robzz, what is your final decision? [...]

We definitely want to have a dedicated plugin, so if you want to get started on it, that would be awesome! We have one for VS Code that should be a good reference: https://github.com/SilasMarvin/lsp-ai/tree/main/editors/vscode, and here is an overview of it on the wiki: https://github.com/SilasMarvin/lsp-ai/wiki/Plugins

You could also fork: https://github.com/huggingface/llm.nvim and use it as a base. I'm happy to give more input if you want, just let me know!

@SuperBo commented Jun 14, 2024

@SilasMarvin, ok. I will start working on it tomorrow. Are you ok with the name lsp-ai.nvim? Do you suggest any other name :D?

@SilasMarvin (Owner) commented Jun 14, 2024

> Are you ok with the name lsp-ai.nvim? Do you suggest any other name :D?

That is a great name, I love it! Let me know how it goes; excited to see it!

@AlejandroSuero

@SuperBo @SilasMarvin I created this template repo for Neovim plugins; it has .editorconfig, selene, and stylua ready to go, with easy-to-use make targets and CI.

It also has plenary.nvim and vusted tests set up and ready to use with make targets and CI.

It also has more utilities, like codespell, though that is more of a personal preference; you can always ignore or delete them.

@SuperBo commented Jun 15, 2024

@AlejandroSuero, thank you for the good template. Can I cherry-pick your selene and stylua configs?

For testing, I prefer a native busted + nlua setup. I also need to add a neorocks formula. So I will start from an empty repo without any template. Hope that doesn't bother you!

@AlejandroSuero

@SuperBo for the neorocks part, I have something done in https://github.com/AlejandroSuero/freeze-code.nvim

Cherry-pick what you want; it's free to use.

I haven't gotten around to trying busted + nlua yet; I usually stick to testing inside Neovim, since that's how my plugins interact most of the time. Any good places to look at to get started with busted + nlua?

@SuperBo commented Jun 15, 2024

@AlejandroSuero, you can see the sample setup here: https://github.com/SuperBo/fugit2.nvim.

I decided to fork from https://github.com/huggingface/llm.nvim. I saw some of your pull requests there (huggingface/llm.nvim#98, huggingface/llm.nvim#97). Could I merge them into my fork :D?

@AlejandroSuero

@SuperBo yeah, go for it.

I will be checking out your git plugin tomorrow and also take a look at the testing setup.

@SuperBo commented Jun 16, 2024

[Screen recording: Screen.Recording.2024-06-16.at.22.51.00.mov]

First update, guys: we can now ask the AI for whole-file code completion.

@SilasMarvin (Owner)

That is awesome!!! I love it. This is really exciting stuff!

@SuperBo commented Jun 17, 2024

Can anyone help me test this: https://github.com/SuperBo/lsp-ai.nvim?

An example lazy.nvim config can look like this:

  {
    'SuperBo/lsp-ai.nvim',
    opts = {
      -- autostart = false,
      server = {
        memory = {
          file_store = {},
        },
        models = {
          model1 =  {
            type="llama_cpp",
            file_path="/opt/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
            n_ctx=512,
            -- ctx_size= 512,
            n_gpu_layers= 500,
          }
        }
      },
      generation = {
        model = "model1",
        parameters = {
          max_tokens=256,
          max_context=1024,
          messages = {
            {
              role="system",
              content="You are a programming completion tool. Replace <CURSOR> with the correct code."
            },
            {
              role = "user",
              content = "{CODE}"
            }
          }
        }
      }
    },
    dependencies = { 'neovim/nvim-lspconfig' },
  }

The command to ask LSP-AI is :LSPAIGenerate.
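If the command registers, it can also be bound to a key; the <leader>cg mapping below is just an illustration:

```lua
-- Bind the plugin's generate command to a key (the mapping choice is arbitrary).
vim.keymap.set('n', '<leader>cg', '<cmd>LSPAIGenerate<cr>', {
  desc = 'Ask LSP-AI to generate code at the cursor',
})
```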

@fredrikaverpil

I had to do these modifications:

  {
-    dir ='SuperBo/lsp-ai.nvim',
+    "SuperBo/lsp-ai.nvim",
    opts = {
      -- autostart = false,
      server = {
        memory = {
          file_store = {},
        },
        models = {
          model1 =  {
            type="llama_cpp",
            file_path="/opt/model/codeqwen-1_5-7b-chat-q4_k_m.gguf",
            n_ctx=512,
            -- ctx_size= 512,
            n_gpu_layers= 500,
          }
        }
      },
      generation = {
        model = "model1",
        parameters = {
          max_tokens=256,
          max_context=1024,
          messages = {
            {
              role="system",
              content="You are a programming completion tool. Replace <CURSOR> with the correct code."
            },
            {
              role = "user",
              content = "{CODE}"
            }
          }
        }
      }
    },
    dependencies = { 'neovim/nvim-lspconfig' },
+     config = function(_, opts)
+       require("lsp_ai").setup(opts)
+     end,
  }

But I can't run :LSPAIGenerate:

   Error  08:52:51 msg_show.emsg   LSPAIGenerate E492: Not an editor command: LSPAIGenerate

@SuperBo commented Jun 17, 2024

@fredrikaverpil, what language are you testing (Python, Go, ...)? I've hard-coded these supported file types for now: { "go", "java", "python", "rust" }. I will make them configurable later.

Do you have lsp-ai compiled with llama_cpp support? You can test with open_ai if you have an OpenAI key.

You don't need to add config to dependencies.


@SilasMarvin (Owner)

> @fredrikaverpil, what language are you testing (python, go, ...)? [...]

Following up on this thread: I've been talking in the Discord a bit about our Neovim integration and would love to get you in there, @SuperBo. The link is in the README.

@SuperBo commented Jun 28, 2024

@SilasMarvin, sorry, I've been quite busy since last week. I will have more free time this weekend. See you on Discord.
