Details of the next AGM are at
https://www.adug.org.au/agm/agm2025/
The normal Melbourne meeting will follow the conclusion of the AGM.
Is the presentation online or in person?
Hi Sue,
The presentation will be online. It’s a 2-hour commute for me to the city these days.
Cheers, Jarrod
Where can I locate the link to tonight’s Melbourne meeting? Thanks
The link will be posted here shortly before the meeting starts; it will also likely appear in the AGM details linked in the original post above.
https://us02web.zoom.us/j/89264784704?pwd=Kufxkti6pMuDaey5FiocgzS7YjE69i.1
Meeting ID: 892 6478 4704
Passcode: 417689
A follow-up question that I completely forgot …
Maybe @Jarrod would also talk at some stage about his time in the C++ world, from 2000 onwards?
Thanks for the opportunity to present last night. It has been a lot of fun developing the AI assistant features.
I only very briefly touched on MCP (Model Context Protocol), which is essentially a standard for using a (normally) remote system for the function calls. It was designed by Anthropic (Claude) but has been pretty much universally adopted by all model providers and tools. Interestingly, there is a newer UTCP, but it doesn’t seem to be getting much use. I didn’t talk about agents (it’s an area that I want to look into further), but they can also potentially be used to add AI capabilities to your app using various agent-to-agent protocols.
MCP has three main actors:
- the MCP host: the application that coordinates the overall AI interaction;
- the MCP client: the component inside the host that maintains the connection to an MCP server;
- the MCP server: the process that exposes the tools (and, optionally, resources and prompts) to be called.
All of the tool use I showed last night is directly applicable to MCP. In my app, the MCP host is the API server app, the MCP client is the built-in code (also in the API server app), and the MCP server is split between the Delphi client app (most functions) and the API server app (a couple of functions). The main difference is that in my app there is currently no need for the API server app to ask the Delphi client (the MCP server) for the list of functions, because the server app already knows it. Going forward, though, the client will likely need to announce its list of functions (or the server will need to ask for it) so that the server can talk to different versions of the client, which may end up supporting different functions.
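For anyone who wants to poke at MCP directly, the wire format is JSON-RPC 2.0. Here is a minimal sketch of the two tool-related requests; the method names come from the MCP spec, but the tool name and arguments are invented for illustration:

```python
import json

# MCP is JSON-RPC 2.0 under the hood. The host's MCP client first asks
# the MCP server which tools it offers...
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...and then invokes one of them by name. "get_market_price" and its
# arguments are hypothetical, just to show the shape of the message.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_market_price",
        "arguments": {"region": "VIC", "date": "2025-10-20"},
    },
}

print(json.dumps(call_tool_request, indent=2))
```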
Cheers,
Jarrod
AGM and Melbourne meeting last night. Monday 20 October 2025.
Sadly, online attendance was fairly poor last night, so we had to wait a little while until we had a quorum and the AGM could commence. Once that was achieved, we got through the requirements. Last year’s minutes were accepted. The financials were also accepted (we are a bit over $3K ahead of last year). Unfortunately, we did not gain any new committee members. As we still have a couple of spare committee positions available, YOU could offer yourself for one and are unlikely to be knocked back.

It was noted that we were a bit disappointed with how the symposium worked out last year, with many presentations being pre-recorded and, in one case, the presenter not being available for questions afterwards. (Time zone issues were something we didn’t realise were a major problem until quite late in the piece.) We intend to have at least a couple of presenters travelling to the symposium next year. We need to both promote it better and make it better.
After the AGM
Jarrod provided an excellent presentation on the AI helper being implemented in their energy market product. In hindsight this presentation perhaps should have been stand-alone, as it was quite long and ran fairly late. (I was getting a bit tired and terse by the end of it, sorry for that.) But it was excellent; hopefully it will be available as a video in the not too distant future.
The system is: Desktop client ↔ Energy Cloud Server ↔ AI.
This allows the AI keys and tokens to be kept secure on the Energy Cloud Server, with the desktop client using its existing authentication flows for access to the Energy Cloud Server.
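For readers who want a concrete picture of that pattern, here is a minimal Python sketch of a key-hiding proxy endpoint (not Jarrod’s actual code); FastAPI, the openai package, and `verify_session_token` are all assumptions standing in for the real server and its existing auth:

```python
import os
from fastapi import FastAPI, Header, HTTPException
from openai import OpenAI

app = FastAPI()
# The provider key lives only on the cloud server, never on the desktop client.
ai = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def verify_session_token(token: str) -> bool:
    # Hypothetical: validate against the product's existing auth system.
    return token == os.environ.get("DEMO_SESSION_TOKEN")

@app.post("/assistant/chat")
def chat(body: dict, authorization: str = Header(...)):
    if not verify_session_token(authorization.removeprefix("Bearer ")):
        raise HTTPException(status_code=401, detail="Invalid session")
    # Forward the user's prompt to the model provider on the client's behalf.
    resp = ai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": body["prompt"]}],
    )
    return {"answer": resp.choices[0].message.content}
```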
Amongst (quite a few) other things he showed voice-to-text, followed by processing of the text, and then text back to voice. This was quite impressive.
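That round trip maps onto something like this sketch with the OpenAI Python SDK (the model names and the simple single-shot flow are my assumptions, not necessarily what the app does):

```python
from openai import OpenAI

ai = OpenAI()  # assumes OPENAI_API_KEY in the environment

# 1. Voice to text: transcribe the user's recorded question.
with open("question.wav", "rb") as f:
    text = ai.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. Process the text, e.g. send it through the chat model.
answer = ai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": text}],
).choices[0].message.content

# 3. Text back to voice: synthesize the answer as speech.
speech = ai.audio.speech.create(model="tts-1", voice="alloy", input=answer)
with open("answer.mp3", "wb") as f:
    f.write(speech.content)
```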
He has also made a streaming markdown renderer.
You could ask it (by talking) for help about an (energy market related) topic and it could find and display the appropriate help topic.
He gave a short introduction to how the question asked was converted to an embedding, which could then be used with the PostgreSQL vector database (containing embeddings of the help topics) to find the top few matching help topics.
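For the curious, that retrieval step might look roughly like this in Python, assuming pgvector and the psycopg driver; the table name, column names, and embedding model are placeholders, not the app’s actual schema:

```python
from openai import OpenAI
import psycopg

ai = OpenAI()

def top_help_topics(question: str, k: int = 5):
    # Convert the user's question into an embedding vector.
    emb = ai.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # Nearest-neighbour search against pre-embedded help topics.
    # "<=>" is pgvector's cosine-distance operator.
    with psycopg.connect("dbname=helpdb") as conn:
        rows = conn.execute(
            """SELECT title FROM help_topics
               ORDER BY embedding <=> %s::vector LIMIT %s""",
            (str(emb), k),
        ).fetchall()
    return [r[0] for r in rows]
```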
He showed the (hidden) system prompts that, amongst other things, stop it from attempting to answer questions not related to the energy market. Apparently, the AI vendors also include their own very large system prompts to try to ensure the AI behaves the way they want it to.
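A scope-limiting system prompt is simply the first (hidden) message in the conversation sent to the model. A trivial illustration with invented wording, not the app’s actual prompt:

```python
messages = [
    {
        "role": "system",
        # Hidden from the end user; constrains what the assistant will answer.
        "content": (
            "You are an assistant for an energy market product. "
            "Only answer questions about the energy market or this product. "
            "For anything else, politely decline."
        ),
    },
    {"role": "user", "content": "What's a good recipe for lasagne?"},
]
# The model, seeing the system message first, should decline this request.
```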
Cool,
Thanks Jarrod
(and all the attendees)
Hi @Paul_McGee, I’m not sure that it would be that interesting, and to be honest I left the C++ scene quite a while ago. I am Delphi-first these days! Back in the day I did an ADUG presentation on “C++ for Delphi Developers” showing the similarities and differences in the language, and I think I touched on some C++ libraries, but for most of my C++ time I was using C++Builder, so the IDE/form designer/VCL/RTL was all the same. I don’t think I have much to add these days!
Thanks @Roger_Plant, nice summary. I knew that there was a lot to cover, but I didn’t think that the session would run for 2.5 hours! There is too much going on in the AI world to keep up with, but it is very interesting to try to follow along and work on something concrete. It’s my 4th presentation on AI in the last two weeks or so (one on generative AI in general, three on this app), but I went into a lot more detail on the tech this time, so I’m all presented out now and need a rest.
Hello Jarrod & all others who are interested in AI,
I attended the Pascal Conference 2025 in Sundern, Germany, this year. At the conference, Benjamin Rosseaux presented his efforts to build a Pascal implementation of an LLM neural network. In your presentation the server used Python to do the job, @Jarrod. In Benjamin’s implementation the weight calculations are done in Pascal. This results in much faster computation and reduced hardware requirements.
Benjamin suffers from the same motor neuron disease as Stephen Hawking did, so his presentation consisted of slides he had prepared, with text-to-speech narration he put into three videos on top of the slides.
You can watch these videos here:
PasLLM - Pascal Powerhouse - Zero-Dependency LLM Inference Engine
https://www.youtube.com/watch?v=TnKjYhJ8C1g
MCP/Tool-Usage with PALM - Pascal-native LLM inference engine
https://www.youtube.com/watch?v=lml0V0zooLM
PALM Unlikely AI Powerhouse (NotebookLM presentation about PALM)
https://www.youtube.com/watch?v=LYG33LAhGxE
Michael Van Canneyt, who works on the Free Pascal compiler, showed some examples of how to use Benjamin’s work. One example was a connector for SQLite. Michael had an SQLite database from a school in the Netherlands. He told the AI that the table names are in Dutch and that it should inspect the FK constraints and table columns. Then he asked questions like “How many students passed a biology exam last semester?” and Benjamin’s engine returned the correct number.
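I haven’t seen Michael’s code, but the idea translates to a sketch like this in Python: pull the schema and FK constraints out of SQLite and hand them to the model along with the question, so it can write correct SQL despite the Dutch table names:

```python
import sqlite3

def describe_schema(db_path: str) -> str:
    """Collect CREATE TABLE statements and FK constraints for the prompt."""
    conn = sqlite3.connect(db_path)
    parts = []
    for (name, sql) in conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
    ):
        parts.append(sql)
        for fk in conn.execute(f"PRAGMA foreign_key_list('{name}')"):
            # fk columns: id, seq, table, from, to, on_update, on_delete, match
            parts.append(f"-- {name}.{fk[3]} references {fk[2]}.{fk[4]}")
    return "\n".join(parts)

schema = describe_schema("school.db")
prompt = (
    "The table names in this schema are in Dutch. "
    "Use the FK constraints and columns below to write SQL answering: "
    "How many students passed a biology exam last semester?\n\n" + schema
)
# `prompt` would then be sent to the LLM, which returns SQL to execute.
```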
There is no GPU support yet, but the AI engine is already pretty fast. I wonder what Copilot and ChatGPT will say when Benjamin’s version becomes available.
Salut,
Mathias
@Mathias wow, PasLLM is a very ambitious project! That is a very technical and specialised domain and I can’t imagine how much time he has put into implementing the current features. I hope that it does well. It would be interesting to look into it in more detail to see how some of the features are implemented. A possible ADUG presentation for you or someone else who is keen.
If it wasn’t clear in the presentation: the Python server API that I developed is just the plumbing between the client and the AI models and vector DB, with some business rules (it does some processing, like audio format conversion, if required), but it doesn’t do any of the LLM inference (or vector retrieval) calculations. We send the prompt to the model provider (OpenAI), who runs the LLM inference in their environment. Some googling says that their APIs are primarily built using Python, but the performance-critical LLM inference uses C++ and significant GPU compute via CUDA/ROCm.
I would not expect this to be the case:
To summarize: it would be running the same size models, on the same GPU, with the same VRAM, using fast executables, in either case.
So, while I would certainly prefer the whole thing to be in Delphi/Pascal, switching is not likely to make it any faster, IMHO.
Alex