OpenAI - GPT-4o

Raymond Cheung on Twitter:

"It’s only been 2 days since OpenAI revealed GPT-4o.

Users are uncovering incredible capabilities that completely change how we use and interact with AI.

The 12 most impressive use cases so far:"

  • video game in 1 min
  • AI tutor
  • assist the blind
  • real time translation
  • interview prep
  • 3d model from text
  • lead a meeting
  • recreate sn application
  • personal assistant
  • transcribe handwriting
  • generste better text in images
  • analyse faces

Did AI generate the list? Needed a spell checker :rofl:

If it’s in my post … then it’s me on my phone on the side of the raod. (that was deliberate :slight_smile: )

The capabilities that LLMs/MLLMs already have, and the pace at which we are seeing AI improve, are astounding.

Item 9 on that list is part of a rapid-fire interview with Sam Altman. One of the things he mentions is that he expects OpenAI to greatly improve the ability of AI to code. LLMs like GPT-4/4o and Claude 3 Opus already do very well on a variety of coding tasks, and there are many coding-specific models that are also impressive (and can find and understand your private code). It makes me wonder what they have in store.

Except the problem we have never solved is the ability to define what we want (in terms of development). Almost 35 years into a software development career, and we still have people thinking that the main problem is coding. AI may reduce simple mistakes, which is a good thing, and I’m all for assistance (and “syntactic sugar” in languages), but that will only improve things a little.

I don’t really have high hopes for AI because it doesn’t “think” and therefore doesn’t solve the problems I address in real life.

What problems are you wanting to solve? What do you mean by “define what we want”?

The current AI coding assistants combine low-level coding assistance with higher-level, broader coding tasks. Through simple prompting, AI can currently create and extend classes, write boilerplate and plumbing code, design a basic UI, generate and extend databases, write unit tests, write complete (albeit simple) applications, and plan out and implement a larger sequence of steps to solve a bigger problem. These are all very useful tasks that go far beyond syntactic sugar or reducing simple mistakes.

LLMs are currently limited by token windows (for example, an input limit of 128k for GPT-4o and 1M for Gemini, and an output limit of 4k for GPT-4o and 8k for Gemini, though a longer result can be continued over several responses), but those limits are increasing. That means they will be able to do more in each step as efficiency, investment and compute resources grow, to the point where we won’t need the step-wise plan/implement/continue loop and can ask for larger solutions in a single take.
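
The “continue a longer result over several responses” workaround can be sketched as a simple loop. This is a generic illustration, not any vendor’s API: the `complete` callable and its `(text, truncated)` return shape are placeholders for whatever SDK you use (some APIs signal truncation via a finish reason such as "length").

```python
def generate_long(complete, prompt, max_rounds=10):
    """Stitch together a response longer than the per-call output limit.

    `complete(messages)` is a placeholder for an LLM call; it must return
    (text, truncated), where `truncated` is True when the model hit its
    output-token cap and the answer is incomplete.
    """
    messages = [{"role": "user", "content": prompt}]
    parts = []
    for _ in range(max_rounds):
        text, truncated = complete(messages)
        parts.append(text)
        if not truncated:
            break
        # Feed the partial answer back and ask the model to carry on.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user",
                         "content": "Continue exactly where you left off."})
    return "".join(parts)
```

As output limits grow, the loop body runs fewer times until, eventually, one call is enough.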

Developing an app is hard, as we all know. Will AI be able to develop a complete, reasonably complicated app with little prompting? Sure, but not yet. A lot of detail would be missed in a simple prompt/request, so you will need to coach it: take on the role of the product manager/customer/architect, copy/paste some of the output, or press buttons on behalf of the solution the AI came up with.

It is inevitable that we will have AI assistants (personal and business) that listen in to our meetings and other conversations throughout the working day, read the docs that we do, watch our screens, and can then perform much more complicated and tailored tasks using a lot of contextual information while requiring much less coaching. This is part of the vision of OpenAI and other AI companies, and we are already moving along that path. As more APIs provide AI tool capabilities, the scope of what AI can do will greatly increase. You only need to look at what Microsoft announced today with Copilot+ PCs and some of the apps that will be powered in part by AI, and at the advancements in GPT-4o and Gemini announced in the last week, to see where we are heading.

Well said, Jarrod. Many others seem oblivious to the now inevitable and accelerating disruption that is happening. We live in amazing, if scary, times.



Maybe I just do a different job to everyone else. Recently I did a quick estimation of my output over the past couple of decades – around 200K debugged lines of code in production. That’s roughly 200 lines of code per week, or less than 5 lines per hour. Let’s be generous and assume 5 lines of code take a couple of minutes to write (given today’s IDEs, that’s ridiculously long). So I spend roughly 3% of my time coding, and 97% of my time not coding (thinking mainly, but also reading, design, etc.).
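
The back-of-envelope figures above check out; here they are as a quick calculation (all inputs are the poster’s own assumptions, not measurements):

```python
# Rough sanity check of the productivity estimate.
total_lines = 200_000          # debugged lines in production
years = 20                     # "a couple of decades"
weeks_per_year = 50
hours_per_week = 40
minutes_to_write_5_lines = 2   # the "generous" estimate

lines_per_week = total_lines / (years * weeks_per_year)   # 200.0
lines_per_hour = lines_per_week / hours_per_week          # 5.0
coding_fraction = minutes_to_write_5_lines / 60           # ~0.033

print(f"{lines_per_week:.0f} lines/week, {lines_per_hour:.0f} lines/hour")
print(f"~{coding_fraction:.0%} of time coding, {1 - coding_fraction:.0%} not")
```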

Then, when I think about the code I do write, most of it is original. I use as many libraries (third-party code) as I can, and write the least amount of code I can. And that is for new code. Working for clients on an existing code base requires even less coding and more thought, with limitations unique to that code base.

Not sure how AI is really going to help there, no matter how good it gets. What it can do is reduce mistakes, ensure that once I decide to write something it is more likely to be correct, make suggestions, etc. So it might help me slightly. But it cannot do the thinking for me. And having worked with a lot of developers over the years, the biggest impediment to progress is not getting the code right – it is working out what needs to be done.

What am I missing?


I completely agree with Misha.

Also, the thoughtlessness of the current AI incarnations is just dangerous, I’d rather not have any such flawed help at all.


Well, probably every developer has someone they admire and would like to learn from.

And maybe all of us have benefited from talking about something we are doing, or plan to do, with a local or online group of other devs.

So imagine if you could have a 1-1 chat, basically pair programming with an AI, that is of excellent quality. Not today, sure, but we’re only a year or so into this phase.

I wonder if you’ll need to pay extra to be able to get one that you can angrily disagree with, swear black and blue at, and then go get pizza with :thinking: ???

AI of “excellent quality”? By what measure? It will score highly on knowledge and poorly on personal skills, and that will never change. A mentor will KNOW whether you understand what they are saying; AI never will. The value of a mentor is that they suggest the lines of enquiry; AI merely responds. It can’t ever really know what you need to know, because it takes a human to tease that out. AI will never be capable of “guessing” human thought because it isn’t human, nor can it ever think like a human. I’m baffled as to why anyone would even try, doomed as they are to failure. AI has lots of strengths, but some weaknesses it can never overcome (AI can never KNOW if it is right or not).

The strengths of humans are the weaknesses of AI that will never be overcome (original thought is one). The strengths of AI are the weaknesses of humans (never getting tired, consistency, the ability to retain information forever). AI complements human thought but can’t replace it. IMHO the world has been sold a “dud”, in that AI will never reach the goals that its proponents – who stand to make large sums of money from people using AI – seem to think are inevitable. To be sceptical seems to be heretical in this area. AI could possibly be the most over-hyped technology in history.

I’m with Misha on this - I also spend way more time (probably too much) thinking about what code to write, or how to overcome design issues etc.

I’m even more disturbed by the human cost of all this investment in AI - when I hear people like Sam Altman say he doesn’t care if he burns through $500M or $50B because we will have some fantastic tech - so :face_with_symbols_over_mouth: what? Imagine how many people could be housed, fed, educated with that sort of money.

To me the AI industry is just like the military industrial complex - sucking in vast amounts of money from governments and investors all to make a buck for their shareholders.


That is great insight Vincent. It points to how AI is the next military problem and that is why there is lots of investment. Investment in protecting integral information and making integral information recognizable will be worthwhile.

This is the Sam Altman interview that got me so riled up

I have used AI fairly extensively, and I get annoyed by its limitations. A lot of that has to do with the limited inputs and outputs. With larger inputs possible and user memory, I think more of what people consider human will be able to be copied by a machine. GPT-4o introduced reading emotions from people’s voices and faces and conveying emotion in its voice output. Advertising has been using computers for years to market to people by modeling people’s behavior. I think the worry is that AI will learn to manipulate people through their emotions.

I think original thought is somewhat overrated (not totally though). Many problems are repeated over and over. With its massive knowledge as well, it can have answers that a normal person would not normally know. Longer term, either original thought will have to be an emergent property of the AI, or original thought will need to come from people, with the help of AI.

Sounds like a developer to me. :smiley:


Will score highly on knowledge and poorly on personal skills. And that will never change. A mentor will KNOW whether you understand what they are saying. AI never will. The value of a mentor is that they suggest the lines of enquiry. AI responds.

I think that you underestimate the current capabilities of AI. As a simple example it can already operate semi-autonomously (given some original context and then through interactive dialogue) such as acting as a tutor, assessor or reviewer (but perhaps not yet personalised enough to be called a mentor), posing interesting questions without further prompting, adapting based on your responses to probe deeper or provide hints.

Khan Academy introduced their AI assistant Khanmigo more than a year ago. It was already very useful then. Here’s an overview given in a TED talk by Sal Khan:

How AI Could Save (Not Destroy) Education

It can’t ever really know what you need to know because it takes a human to tease that out. AI will never be capable of “guessing” human thought because it isn’t human, nor can ever think like a human.

It isn’t yet at the level of expert personal assistant because the context and memory that it has are currently limited, so for a while yet we will need to provide it more context in each interaction and receive more limited personalised responses.

What does being “human” mean? There are many facets, and AI is already excelling in several of them. It’s not just general knowledge. As demonstrated in one of the items in the original post, it can determine your likely state of mind from a single photo. It can also detect the emotion in your voice. As the amount of context it has increases, it will become more useful. As its memory increases, it will be able to determine what “normal” means for you and understand your limits (of knowledge, positivity, concentration, …). It has already been shown to greatly outperform humans in emotional awareness. In one study it outperformed all 180 Bachelor’s and PhD psychology students on social intelligence. Is it perfect? Absolutely not, but it is already advanced enough to be very useful, and the capabilities are improving.

AI could possibly be the most over-hyped technology in history

I couldn’t disagree more. Time will tell I suppose but be prepared for sweeping changes whether you like them or not :slight_smile:

This is the Sam Altman interview that got me so riled up

In his defence he did say “where eventually we create way more value for society than that (was spent)”.

Imagine how many people could be housed, fed, educated with that sort of money.

But that money simply doesn’t exist for those uses. Whether we like it or not Microsoft isn’t spending $10B on housing any time soon, so what we get are investments in technology like AI.

Are we going to be better off with advanced AI? Who knows. There are some obvious positives and negatives. There are some disastrous possibilities that we haven’t yet figured out. But we are in for a ride that we cannot yet stop.

“when” sounds like a done deal, where it really should be “if”

The money exists, but companies exist to make money for their shareholders.

Yeah I get that - we will get AI whether we need it or not.

The next few years will be interesting, because companies will be racing to use it to downsize their workforce and “create value for their shareholders” - think Amazon - not known for looking after anyone but Jeff.

And then there are all the nefarious uses we are already seeing pop up - elections are going to be fun :thinking:

It’s a race to the bottom folks - I fear for the human race - not because of “skynet” or AI as such - but because of humans.

I’m not a big fan of Sam Altman. I smirked a little because it looks like he’s shot himself in the foot with the “Her” tweet and the bungled attempt to get permission to use Scarlett Johansson’s voice (and apparently doing it anyway). That said, love him or hate him, with OpenAI – and Microsoft’s big pile of cash – he and his colleagues/backers have, I think, advanced the field of AI immensely into something credible: a moment of advancement like the adoption of the internet or the release of the iPhone.

We can argue about the efficacy and usefulness of AI but in such a firestorm of cutting-edge developments we’re all right, and wrong.

Alan Turing (my personal forever hero) may never have anticipated or envisioned it this way - but the true Turing Test will be when we are all completely fooled over a protracted period of time by some near-future version of chatty AI which all of us completely believe is a real human but subsequently gets revealed to be yet another iteration of an LLM or AI chatbot type thing.

Turing was incredibly en pointe when he framed the Turing Test as “if you can’t tell if it’s a human or not, then it’s intelligent”. We’re only a few steps away; maybe it has already happened or is happening right now. When it does, life is going to get interesting. The manipulation of election results and getting us to unwittingly fall in love with some kind of high profit margin breakfast cereal is only the very edge of how things will change.

Eli Pariser’s book, The Filter Bubble, was incredibly prescient when it examined how Orwellian thought control was a blunt tool and how Google et al have refined it into a surgical instrument, with Facebook and the like weaponizing it. If you haven’t read the book, I urge you to read it.

OpenAI spewing out code examples is cool and all, but it’s the underlying potential for it to be employed as a tool for outwitting us that is the badness - in both senses of the word - of the rise of AI.