OpenAI - GPT-4o

I won’t address any of the individual points raised here, but I couldn’t disagree more with some of the sentiments. Instead, I’ll make a bold prediction:

I predict that not only will AI fail to deliver as promised, but there will be a massive “push back” from a majority of the population as they realise that having machines trying to be “human-like” is definitely NOT what they want.

I think the “uncanny valley effect” will be front and centre here :wink: Personally the idea of a machine “faking” emotion is about as nauseating as it gets. I will be very happy to opt out. I have never used an AI assistant, and outside of development, possibly never will. I value “human interaction” and my ability to choose, unprompted, what I want to do too highly.

Cheers, Misha


The point I’m making is that you’re describing the current and near-term situation - what happens when the Uncanny Valley effect is either resolved or is not apparent? I am not referring to a generative-AI-based TikTok influencer or some smoothed-over AI Instagram model - I mean the more widespread, insidious, non-obvious AI.

Your insurance quotes are already likely calculated by AI of sorts. Trading in futures and so on definitely uses AI. Social media posts and ‘algorithms’ are widely manipulated by AI (two of the main guys in that arena are ex-Borland employees).

Human interaction is definitely a problem where we rely on cues like body language or facial micro-expressions - but that’s the area where AI is almost a novelty; the real problem is everywhere we don’t get those cues. There’s a reason why we need emojis - so we can explicitly express the emotional intent of text when its nuance is non-obvious or perilously open to misinterpretation. :grin::+1:

AI is being forced on us whether we want it or not. These new PCs will only be interesting to me when I can install a Linux distro on them.


Recall - they gloss over the privacy aspects of this.

You can switch it off – that option will always be there

Calculation of insurance quotes, trading in futures (I work in a related field), etc., is not AI – just data analysis and machine learning. And they are probabilistic – never accurate. Likewise the social media algorithms – if you look at targeted ads for one person, they are awful. This stuff is all designed to work on a “population” in the data-set sense of the word. No single result is ever guaranteed to be correct, and that will remain true no matter how much effort is put into “training” the AI (machine learning). This is completely lost on the proponents of AI. It will never be accurate for a single result, and never can be, yet the world and its decisions turn on single results.
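A quick sketch of the point above, using a made-up "90% accurate" model (the numbers and the coin-flip stand-in are entirely hypothetical - real systems are more complex, but the population-vs-individual distinction is the same):

```python
import random

random.seed(0)

# Stand-in for a hypothetical classifier that is right ~90% of the time.
# Each entry: was the prediction for this one person correct?
N = 10_000
predictions = [random.random() < 0.9 for _ in range(N)]

# At the population level the model looks great...
accuracy = sum(predictions) / N

# ...but hundreds of individuals still got a wrong answer, and before the
# fact there is no way to know which individuals those will be.
wrong = predictions.count(False)
print(f"population accuracy: {accuracy:.1%}, individuals wrong: {wrong}")
```

The point isn’t the exact numbers; it’s that “90% accurate” is a statement about a population, never a guarantee about the one person whose quote, trade, or loan is being decided.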

It’s in this grey area that it will fail. Humans understand the grey area, the uncertainties, the mistakes, and for the most part accept them. Just wait - this is where the disillusionment will start. It’s why we will never get fully autonomous cars (they might get better, but will we ever accept the algorithm deciding who will be injured or killed? - see I, Robot). It’s why AI in the military is so dangerous. It’s why tech-heads with less than a basic understanding of “human nature” will never get it. With 3+ decades in the industry I have been surrounded by people best suited to solving difficult problems, but ill-suited to understanding the meaningful needs of people. I even have a 19-year-old son with ASD (Asperger’s, as we used to call it), which just means he will be with like-minded people if he becomes a software engineer.

My overall view, hardening as I get older, is that some things we just won’t achieve, because they might be (or are) impossible. Not every problem can be solved, even if you throw near-infinite resources at it. For me that list contains cold fusion, 100% autonomous cars, and a “useful” quantum computer. I think it’s great that people try to solve what I might think is impossible (where would we be if everyone said things were too hard to solve), but that doesn’t make them right - only right a small fraction of the time.

Time will tell who got this right :wink:

I’m not that trusting - is turning it off actually turning it off completely, or just hiding UI elements? Big AI isn’t really doing all this work for our benefit; if there is one thing that “AI” needs to survive, it’s data - our data.

I think when we have busy jobs it can be easy to be so consumed by work that we no longer have time to learn about new technologies and changes in the world, even if we consider ourselves tech savvy. I found myself in that place around 2014, when Delphi work was drying up and I needed to re-invent my skillset for the web.
When I first heard of AI I felt similarly to some of the comments above. Having spent the last year working in that space, I can see that this time it’s different and the pessimist viewpoints are off target in this case.

The biggest change we have to get our heads around is that, for the first time, machines can think on their own and can effectively begin coding their gen-2 products themselves. What you see in public is just the tip of the iceberg.

It’s taken a long time to get these technologies right, but we are hurtling toward (or are at) a point where computers have:

  • Perfect vision, with which they can not only see but understand what they see
  • The ability to listen and understand the words they hear, in any language
  • The ability to speak in real time
  • The ability to think and respond
  • The ability to learn - and once they learn, they can send that knowledge to every other AI engine in the world in seconds
  • Access to all knowledge ever put into a computer
  • The ability to use most APIs and web sites
  • The ability to be implanted into robots that can already move faster and are stronger than humans, and never get tired
  • The ability to drive a car at a level no worse than the average driver in many situations

Let’s compare that to the human:

  • 24 years to teach a human a basic bachelor-level degree in one topic
  • At a population level, close to zero pass-on of that knowledge to other humans in the following 60 years
  • Limited understanding of the world, limited language abilities, many with poor spelling and grammar skills

We have to remember this is like the DOS days of AI right now. If you thought DOS was the limit, you’d be shocked at where we are today - and all of that happened without the exponential capability of AI.

My prediction is as follows: over the next 5 years, the following will be without a job:

  • 80%+ of bookkeepers
  • 80%+ of marketing people
  • 30% of kitchen and cafe related jobs
  • 50%+ of accountants and financial advisors
  • 40%+ of developers, hitting the third world the hardest
  • 75% of call centre jobs, hitting the third world the hardest

They can’t harvest data without our consent. And that is only going to get more restrictive. Over the next decade these tech companies are going to be “reined in” (it has already started in Europe). Outside the tech companies, nobody is really “for” the current way things are done. I think deep down these companies are nervous that access to data will be “switched off” – and they should be! That might also be why there is a hurry to demonstrate benefits (most of which aren’t really there yet). That will be their bargaining chip.

I work as a techie in the tech industry, and I am appalled by the behaviour of big tech (data harvesting and riding roughshod over intellectual property rights). The smartphone has been the worst invention ever for the under-25s (I have 3), given the complete lack of limitations put on the behaviour of the social media companies. I rate life as worse, not better, because of new tech. I am no luddite, and I work at the pointy end of tech, but the things I find worthwhile are all outside the use of technology. Watch this space, because the backlash has only just started.

I am constantly learning. And since I work for myself, it’s even more important. I crossed into the .NET space some 15 years ago. Recently I “retooled” with the latest .NET (from .NET 5 upwards, a complete re-architecture). I’ve gone into big data analysis, machine learning, and a host of other things. I’ve worked with a significant number of developers since 1989. And guess what: the one thing that has never changed, not even now, is that the good developers know how to understand the problem domain and think through a solution. And that ability has only got worse as tech has improved. It’s almost as if the more advanced the tech, the less brain power is harnessed to use it.

Not only that, but I have three teenagers (16, 19, and 19), which has given me great insight into the new generation born with this tech. And it’s not a pretty sight. This generation is the least capable of using this tech. They accept things blindly, they wouldn’t know how to tell a decent source of information from a conspiracy theory, and they couldn’t think their way out of a paper bag with their current skills. Talk to high-school maths teachers (I have two as friends) and you will be truly frightened. This new tech has been used to remove the need for problem-solving skills, and it is now apparent that this is the wrong way to do it. So now they will start taking the tech OUT of the classrooms.

If we rely on AI to actually MAKE the decisions, rather than as an assistant, we are doomed. AI has no morals, no ethics, no ability to really “understand” consequences, and currently the liability has not been sorted out. Make the tech (or the companies behind it) liable for AI decision-making and they will all say no. I feel like a lone voice in a sea of techies hurtling towards the apocalypse. However, I do feel hopeful that once we get some bad unintended consequences, governments and the community at large will see to it that the naked ambitions of the techies are curtailed.

PS: My prediction, made 10 years ago, that the development of fully autonomous cars would stall seems prescient now. I am no soothsayer, but the days of tech being divorced from its negative consequences have come to an end. I love my tech, but I won’t ever outsource anything other than low-level decisions to it, because I know about its fallibility. That’s something the public has yet to understand.

We have been here before with the predictions:

  • Technology will give us more leisure time (it doesn’t, we just adjust work to what we want to achieve)
  • Technology will put a lot of people out of jobs (it doesn’t, and those that go are replaced by new ones required because of the new technology)
  • Technology will make our life simpler (that’s a joke, right?)
  • Technology will free us from a lot of decision making (that’s another joke, right?)

Re: Recall and privacy. I’m as skeptical as the next person about promises from megacorps saying they won’t use your private data, but the tech behind Recall is capable of keeping everything local, by using a local vector database of embeddings for the index. The NPU runs a model to create “similarity” indexes of your data (documents, photos, etc.). When you’re searching for something, embeddings are created for the input phrase, source document, or source image that you’re trying to find matches for. That’s all done by the NPU. Then some simple math is used to find matches between the source and the index.
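For anyone curious what that “simple math” step looks like, here’s a toy sketch of local embedding search. The file names and the tiny 3-d vectors are invented for illustration (a real NPU model produces high-dimensional embeddings); the matching step is plain cosine similarity, which is one common choice for this kind of index:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical local "index": item -> embedding, all stored on-device.
index = {
    "tax_return.pdf":    [0.9, 0.1, 0.0],
    "holiday_photo.jpg": [0.0, 0.2, 0.9],
    "invoice_march.xlsx": [0.8, 0.3, 0.1],
}

def search(query_embedding, index, top_k=2):
    # Rank stored items by similarity to the query embedding.
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Embedding a model might produce for a query like "my tax documents"
# (again, a made-up vector for the sketch).
query = [0.85, 0.2, 0.05]
print(search(query, index))
```

Nothing in that loop needs a network connection - the model inference and the ranking can both happen on the device, which is the privacy argument being made for Recall.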

Microsoft says that you have full control over that local index (e.g. clear it) and no data is sent to the cloud or used in training their models. I am tending to believe them this time :slight_smile:

The holy grail of Artificial Intelligence will be found at the end of a long road strewn with Artificial Stupidity.


Here are a few AI YouTube channels that I watch to keep up with what is going on in AI.

These guys talk about LLMs from the point of view of Control Theory.
