Discussion about this post

Gabriel:

Great work breaking down the magic trick here!

I've always been frustrated with "AI alignment" discussions because, as you point out, there never were equivalent "big tech" alignment questions. It seems the question is raised in the hope of maintaining the illusion that these large tables of values (LLMs) can reason.

It is this problem that has led me to believe that much of the industry operates as a cargo cult to promote stock values rather than as a serious engineering discipline.

To me, as an outsider, the lack of seriousness and the reliance on sci-fi tropes make the field come across as more of a storytelling exercise than the diligent pursuit of a craft.

But their being wrong about AI sentience (or capabilities, period) doesn't stop serious damage from being done in the pursuit. At minimum, I'm frustrated by the opportunity cost of all this waste.

Rob (c137):

I remember hearing that, before COVID, they had an AI analyze vaccine safety data to come to a conclusion on whether vaccines were safe or problematic.

The AI said that vaccines were safe and vaccines were unsafe.

It could not tell the difference between legitimately done studies and the slop that passes for peer review.

Nowadays AI will spout the BS official safety line. If corrected, it will say there are issues, but the correction is not permanent. The corrections are there to make us think that it's a bug, not a feature.

Same with hallucination. The 60 Minutes Google Bard interview included citations with made-up titles and authors. How exactly did the AI language model do that if it's just scraping information? And why did they not edit that out or redo the demonstration? Because it is not a bug but a feature, one that excuses the AI when it lies for the establishment.

Like you, I'm boggled that the engineers cannot see this.

Perhaps they are disabled in thinking... 😂

https://robc137.substack.com/p/left-brain-vs-whole-brain-in-battlestar
