@Xagraniatko

People who depend on investors believing in the unlimited potential of AI refuse to discuss the limitations of AI. Shocked I am, shocked!

@edzejandehaan9265

When I was young I had a subscription to a Dutch popular science magazine, "Kijk". In the eighties it featured an article in which a computer scientist speculated on the future possibilities of AI. I always remembered a quote from this article (not literally, hey, this was decades ago): maybe we are on our way to developing a sentient, conscious computer, but it could also be that we are like a monkey climbing a tree, thinking it's on its way to the stars....

@1999fxdx

What’s exponential is the money required to scale AI - follow the money.

@KevinSolway

They believe in the unlimited scaling of their bank accounts.

@tekperson

I have a computer science degree and have worked in the field for 40 years. When I hear computer science folks make predictions like AI will take over everything, I have to face-palm. We've been through AI hype multiple times in my career, and it's always been way overhyped. In this case, LLMs have solved a hard computer science problem (NLP), and that's very useful, but pretty much everything else they say is hype intended to get more funding for their startup.

@DominiqEffect

The problem with AI is that we don't understand in detail how the human brain works and what makes intelligence possible. It's a bit like trying to build a car without knowing anything about mechanics, but seeing that from the outside cars look similar: they have 4 wheels and 4 seats, and each has a radio, so the radio is probably crucial for driving. Let's focus on the radio and the vivid, shiny color of the car body.

@adamshinbrot

"To me, training large language models with more data is like going to the gym. At some points adding more weights doesn't make you stronger, it just makes you more likely to drop something heavy on your foot".

Love you Sabine.

@Isabelle.g6

The concept in the book "Mastering the AI Money Game" completely explains the market. The trend is changing with AI.

@ascaniosobrero

One thing I did not hear pointed out about this year's Nobel Prize in Chemistry: as a chemist involved for decades in modeling not only small molecules but also proteins (receptors and enzymes), I was amazed by the results that have been obtained in predicting how proteins fold. A paramount result. However, AI is not explaining WHY proteins fold that way. No rule or insight, exactly as it has been so far. AI produced useful results from data but was unable to explain anything: it did not find any law.

@celebrim1

In computer science we always say that the last 5% of the task takes 95% of the time. The problem is that people outside the industry always assume that because 95% of the problem is solved, we are just about done. But the last 5% always turns out to be the hard part. The modern large language models took about 30 years to reach this point. It could be that even though we're 95% of the way to hyperintelligent AI, the remaining time needed to solve the problem is still 600 years.

@ChadKanotz

Yeah. LLM != General AI. The problem isn't scale, it's architecture. It just won't get there.

@jonathanbeeson8614

Cheers Sabine! Hope you are thriving. We need your point of view and your voice.

@CharlesFVincent

I asked an LLM about local bands. It knew some facts, but it was just words taken from fewer than ten web pages and pasted together into sentences. After three or four questions the facts would run out, and the LLM would confuse them with more famous bands and make things up. The paucity of source material made it easy to see what it's doing, and I can't imagine that method solving the remaining mysteries of physics. Other kinds of neural nets will be useful in data analysis, but with LLMs investors are paying billions just to compile information that we already know and present it in a different way.

@SKD-e8o

As a physicist, it has always been obvious to me that you can't just "learn" physics from everyday data (i.e., from data on "emergent" phenomena, as Sabine put it). I never understood why my computer scientist friends always insisted that with "more data" and "more sophisticated models" their models can learn something about the underlying physics.

If years of experimental research in physics have taught me anything, it is that it is very nontrivial (and in many cases, impossible) to back out the underlying physics from noisy experimental data. And that any claims of figuring out the whole underlying physics from limited and noisy data -- no matter how sophisticated a model one has -- must be treated with utmost skepticism.

But then again, what do I know? 🤷‍♂️

@gram40

'Fake it until you make it' is rearing its head again for all these AI startups.

@RSLT

We can't build ladders to space, and that kind of scaling is not even up for debate. For someone who can barely do basic math, claiming to be able to solve "all" of physics is amusing to watch. These people often don't understand enough math to figure out whether the AI has actually solved the problem or not, and that is the most generous scenario. The more realistic one is that they have no clue whatsoever about advanced physics. The fair assessment is that they're lying to make money and secure more funding for something we know works at a very limited level. When they say the scaling won't stop, that's a lie. In fact, we know that we can't scale up as we please, as I mentioned. You can't just keep adding bigger and bigger ladders.

@banban28232

If you trained an LLM only on texts written up to 2,000 years ago, would it be able to deduce relativity or the theory of evolution? Likely not, because the necessary scientific concepts, methods, and data simply weren't available in those texts. So it is unlikely that this super AI will be able to "complete" physics or biology.

@demetronix

The one reason I am not expecting AI to come up with new science is that it is trained on all the garbage science that is published every day. Good luck extrapolating something from that.

@ShawnHCorey

Funny how the CEOs all say there's no limit. What do the AI scientists say?

@joerieke300

As an AI dev, I first saw this when it came out and disagreed with some of her talking points. I decided not to comment because I didn't want to be attacked by her drones. A DeepSeek and an Alibaba later, what's your opinion now? 🤔