RAD AI Companion - usage feedback

I mentioned during our last WADUG meeting that I had briefly tried RAICom at https://www.embarcadero.com/RADAICompanion.

First impression on my screen is … why such a small window?

I had asked it:

  • for a ‘Hello World’ example: it did well, starting from “Open the IDE application” … all the way up to compiling and running. Great for someone quite new. :tada:
  • to create code to download the text of a website, given the URL.
    It did a good, concise job with that too, using TNetHttpClient etc. (see the sketch after this list).
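
For reference, it produced something along these lines. This is my own minimal reconstruction of that kind of code, not the Companion’s exact output, and the URL is just a placeholder:

```pascal
program FetchUrl;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.Net.HttpClient,            // IHTTPResponse
  System.Net.HttpClientComponent;   // TNetHTTPClient

// Return the body of a URL as text.
function DownloadText(const AUrl: string): string;
var
  Client: TNetHTTPClient;
  Response: IHTTPResponse;
begin
  Client := TNetHTTPClient.Create(nil);
  try
    Response := Client.Get(AUrl);        // synchronous GET
    Result := Response.ContentAsString;  // decoded using the response charset
  finally
    Client.Free;
  end;
end;

begin
  try
    Writeln(DownloadText('https://www.embarcadero.com'));  // placeholder URL
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
end.
```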

Yesterday, trying to remind myself (and to correct a Claude suggestion), I asked:

  • “identify generic as tkinteger, and then print” - Cannot answer
  • “generic T = tkinteger, so then print T” - Cannot answer
  • (after my brain power returned a bit)
    “using tvalue, print generic value” - Success (see the sketch after this list).
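
The pattern it landed on was roughly the following. This is a minimal sketch from me rather than its exact answer, and TPrinter / Print<T> are just illustrative names:

```pascal
program PrintGeneric;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.Rtti,      // TValue
  System.TypInfo;   // tkInteger and friends

type
  TPrinter = class  // illustrative name
    // Print any generic value by boxing it in a TValue first.
    class procedure Print<T>(const AValue: T); static;
  end;

class procedure TPrinter.Print<T>(const AValue: T);
var
  V: TValue;
begin
  V := TValue.From<T>(AValue);

  // The tkInteger case from the original question can be handled explicitly...
  if V.Kind = tkInteger then
    Writeln('Integer: ', V.AsInteger)
  else
    // ...while ToString copes with most other type kinds generically.
    Writeln(V.ToString);
end;

begin
  TPrinter.Print<Integer>(42);
  TPrinter.Print<string>('hello');
  TPrinter.Print<Double>(3.14);
end.
```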

I think it’s a good concept, just quite underdone.

I like that it admits when it doesn’t know something rather than just hallucinating an answer, but I’ve found there’s quite a lot it doesn’t know.

I’m told it’s been trained on the DocWiki and some eBook manuscripts. So it can tell you how to use the documented features of the VCL/FMX/RTL and the Delphi language, which I imagine would be good for beginners, but I was hoping for something more.

I want an AI agent that’s been trained on the actual VCL/FMX/RTL source code: one that will explain to me the undocumented parts of the frameworks and libraries, help me gain a deeper understanding of the internals, and advise me on how I can push, bend and extend those frameworks to meet my needs.

Basically I want an AI amalgamation of Danny Thorpe, Allen Bauer and Barry Kelly (perhaps that would be an AI abomination, rather than an amalgamation :grin:) to sit on my shoulder and explain what those undocumented private methods do when I next go deep diving through the frameworks.


100%.

Including their blog posts in the training would be a feature. + Primoz. + Stefan. + Eric van Bilsen. + Arnaud. + + + + + …

It’s very much a case of “let’s get the basics right and then expand it, based on feedback”.

The plan is very definitely to expand the sources, but we have to temper that with avoiding the copyright claims that plague things like ChatGPT.

Just because something is on the internet doesn’t make it OK for us to suck it in unless it’s fair use. :pleading_face:

But I definitely think some cooperation with potential sources could really help lift it, as would having it understand the RTL, VCL, and FMX libraries at the source level. The only sticking point there is working out how to avoid accidentally invalidating our own IP.

Given how pervasive AI is and how everyone wants in, it would make sense for Emb to get a well-trained LLM into its customers’ hands before it’s too late… I mean, the RTL/VCL source code is already there in the product for anyone to train their own local LLM on… but why should every customer have to do that themselves?


It’s a fair point. :+1: