Using Local LLMs with Smart CodeInsight in RAD Studio - Code Partners

After hearing several times from Embarcadero about an AI facility in the IDE, not retaining the details, and not following up on it for some time … I finally picked up the details from one of @ianbarker's recent webinars and managed to connect my IDE to Anthropic/Claude.
We talked about it at our last Perth meeting: SmartCodeInsight in the IDE

So this is very timely.
I would have no hope of figuring out an Ollama local AI setup myself.
But @Malcolm Groves is gonna tell us all about it.

6th May 2025 (the webpage has a different date on it)


Just a reminder that this webinar kicks off in a little over an hour. Hope to see some of you there.

Cheers
Malcolm


I guess it will also be on YouTube at some point, @Malcolm?

Did not get this until it was too late.

Malcolm,

Thanks for the presentation. I would be interested to see more, e.g. giving a locally running model access to the web and filesystem, and training it locally and saving the result for reuse, so it’s persistent across restarts. Is that doable?

BTW, it’s a shame no model is trained specifically for Delphi. Is Emb considering doing one? For example, that qwen2.5-coder model is really insistent on reusing the loop variable for every other thing inside the loop; I could not convince it to stop doing it ;-(

Alex

Yep, maybe even today if I can get through my todo list.


Hi Alexander,

Yeah, that’s the kind of material I mentioned at the end. Enough people have said they are interested, so I’ll plan a session on fine-tuning a model.

re: Embarcadero producing one, obviously I can’t speak for them, but I have had conversations with them about this, so it is in the pot of possible ideas.

Hi John,

I’ll post a link once the replay is up.

I’ve just realised that the link you were sent for the webinar is also the unedited replay. I will still work on getting a public replay up (with some of the unrelated Q&A removed from the end), but for those who registered, just use the same link and you can already watch it.

Windsurf (previously named Codeium) has a proprietary model for code completion that specifically supports Delphi (it works quite well) but is not trained purely on Delphi code. You don’t want a model trained only on Delphi code: the volume and variety of available code would not be sufficient to solve many generic problems.

Most premium models can generate reasonable Delphi code. All of them are trained on a wide range of code from many languages, as well as papers, books and discussions of algorithms, patterns and the like. LLMs “think” in text using the strength of associations, and they do more than repeat the exact sequences of text they saw during training.

If you ask a model to generate Delphi code for something it hasn’t seen Delphi-specific code for, it can use its training from other languages and from papers/discussions/etc. to generate an answer in Delphi. Sometimes that code, while valid, is not elegantly designed Delphi code. As a colleague recently put it, “you can see it thinking in another language”.

What would help, though, is if the models were trained on the RTL, VCL, FMX etc. source that is not available publicly. Currently, good AI tools like Windsurf or Cursor are mostly limited (at least in non-enterprise deployments, which cannot be trained on multiple repositories) to the code in a specific folder and its sub-folders.

I have started experimenting with providing context (code) from outside the project folder in Windsurf using symbolic links, so that I can include code from a common repository shared across multiple apps instead of copy/pasting it into the project folder or using git submodules. The same trick might work for including the Delphi source code, which might improve context awareness, though not to the extent of being trained on that code.
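For anyone wanting to try the symbolic link trick on Windows, it is a one-liner from an elevated command prompt (both paths here are made up for illustration):

    mklink /D C:\Projects\MyApp\Shared C:\Repos\CommonCode

mklink /D creates a directory symbolic link, so the tool indexes the shared repository as if it lived inside the project folder.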

In Windsurf and Cursor you can provide a rules file to help direct the model, for example to follow specific coding conventions or to avoid some annoying habits. Perhaps qwen has something similar.
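With Ollama you can get a similar effect by baking rules into a custom local model via a Modelfile with a SYSTEM prompt. A minimal sketch (the model tag, rules and temperature are illustrative, not anything from the webinar):

    # Modelfile
    FROM qwen2.5-coder
    SYSTEM """You are a Delphi coding assistant.
    Follow standard Delphi naming conventions.
    Never reuse a loop control variable for anything else inside the loop body."""
    PARAMETER temperature 0.2

Then build and run the customised model:

    ollama create qwen2.5-coder-delphi -f Modelfile
    ollama run qwen2.5-coder-delphi

Whether the model actually obeys the rules is another matter, but at least they get applied to every prompt without re-typing.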

I will be interested to ask different LLMs the same question … but for a recent example:

I have hooked up Claude.ai to my IDE, and I wanted a pause button for stepping through a GUI demo of an algorithm.

I asked it to propose a solution for implementing a pause button, and it returned 4 or 5 different suggestions that were interesting as categories, but only so-so in the details.

One option it delivered was a TTask, used incorrectly, so I asked it to propose a solution using a TFuture.
I got back a response that mentioned promises and futures, and was no doubt influenced by JavaScript and/or C#.
:confused:
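For what it’s worth, the idiomatic Delphi shape for this kind of pause button is a background TTask that blocks on a TEvent, rather than anything promise-flavoured. A minimal sketch under my own naming, assuming a VCL form with FPauseEvent: TEvent and FWorker: ITask fields (System.Threading and System.SyncObjs in the uses clause):

    procedure TDemoForm.StartButtonClick(Sender: TObject);
    begin
      // Manual-reset event, initially signalled so the loop runs.
      FPauseEvent := TEvent.Create(nil, True, True, '');
      FWorker := TTask.Run(
        procedure
        var
          Step: Integer;
        begin
          for Step := 1 to 100 do
          begin
            FPauseEvent.WaitFor(INFINITE); // blocks here while paused
            // ... advance the algorithm one step ...
            TThread.Synchronize(nil,
              procedure
              begin
                ProgressBar1.Position := Step; // GUI updates on the main thread
              end);
            Sleep(50);
          end;
        end);
    end;

    procedure TDemoForm.PauseButtonClick(Sender: TObject);
    begin
      if FPauseEvent.WaitFor(0) = wrSignaled then
        FPauseEvent.ResetEvent   // pause: the task stops at WaitFor
      else
        FPauseEvent.SetEvent;    // resume
    end;

The nice part of this design is that the pause point is explicit: the task only ever stops at the WaitFor call, so the demo is always in a consistent state when paused.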

Jarrod,

Thanks, that was helpful; I’ll look at Windsurf. I have had a lot of success with GPT, despite its many shortcomings. Once it all goes just a single notch higher, it will really revolutionise development.

Alex

With Claude, I find that when it fails, it fails miserably, but when it works, it works really well. I was easily saving 4-5 hours a month in typing time, so I have subscribed to it, which gives you access to the Projects feature.

I had a VCL-based REST server using TMS WEB Core. I uploaded 5 of the key files and instructed it:

  1. to follow my programming style,
  2. to assume that non-existent files would work fine,
  3. which files it was allowed to change, with compiler directives, and
  4. to change it to produce a Windows service, so that it could be compiled in dual mode (a sketch of this pattern follows below).

It did really well.

  1. I had a few command-line options (also controllable via checkbox, dropdown and buttons), such as the URL, test/production database, logging on/off, and Swagger on/off. Somehow, it recognised them and passed them to the service via both the registry and an INI file (to give me a choice).
  2. It wrote all the code to install/uninstall and stop/start the service, by adding buttons to the VCL version.
    I just needed to change a few things to make it compile:
    • a unit reference
    • two property names
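For anyone curious what the dual-mode part looks like, the project file ends up on the classic Delphi pattern of switching between Vcl.Forms and Vcl.SvcMgr with a compiler directive. A rough sketch of that pattern, not Claude’s actual output (the SERVICE_MODE define and the unit names are placeholders):

    program RestServer;

    {$IFDEF SERVICE_MODE} // placeholder define, set via project options
    uses
      Vcl.SvcMgr,  // service application framework
      ServerService in 'ServerService.pas' {RestService: TService};
    {$ELSE}
    uses
      Vcl.Forms,   // normal VCL GUI application
      ServerMainForm in 'ServerMainForm.pas' {MainForm};
    {$ENDIF}

    begin
      // Both Vcl.Forms and Vcl.SvcMgr expose an Application object
      // with the same Initialize/CreateForm/Run lifecycle.
      Application.Initialize;
    {$IFDEF SERVICE_MODE}
      Application.CreateForm(TRestService, RestService);
    {$ELSE}
      Application.CreateForm(TMainForm, MainForm);
    {$ENDIF}
      Application.Run;
    end.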

The main downside for me is that when I give it my programming style rules in 8 sentences, it modifies each file 8 times (once per rule). This makes it chew through my daily allocated resources a tad too fast.

Replay is up: Using Local LLMs with Smart CodeInsight in RAD Studio - Code Partners


Halfway through … this is really good.

Great information, and great presentation. :tada:
