I've been thinking lately about what bothers me in how AI is being discussed, and why the performative nature of people in the space grates on me so much (among other things).
Overhype has always soured the value of the tech being talked about for me. It didn't matter much when it was web3/crypto, since I simply didn't care about that tech itself.
For AI, though, I feel differently. It has substantial real-world applications and already delivers a lot of value, but the overhype is causing the problems and deficiencies of the tech to be overlooked.
We would all benefit from being able to talk honestly and openly about AI, and I think the first step is identifying what is ruining the discourse.
There are several pressures that are causing this perversion, but I mainly see it from two sides.
A recent example of this disingenuousness is Anthropic's claim of writing a C compiler in two weeks.
Most people will probably only have seen the YouTube video, which makes impressive claims: a "fully functional compiler" built in two weeks while they "walked away", with "zero manual coding", and "Tested it on the Linux Kernel - It works".
Now, their blog post is a bit more honest, but it still makes claims such as "This was a clean-room implementation", alongside more candid admissions like "Claude simply cheats here and calls out to GCC for this phase".
The first issue opened in the compiler repo? "Hello world does not compile".
I don't think I'll do a better job than ThePrimeagen, on YouTube, at explaining why the details in this announcement, while certainly impressive to some extent, are not reflective of actual software development at all. I highly recommend the 8:17 minutes of the video.
So now we are left in a situation where we cannot talk openly about the problems AI has. The effect is that the people actually out there building things are not getting those problems solved, and I personally think that massively stifles AI innovation.
Fixing these problems almost requires going against the grain.
For me personally, there are some big deficiencies when it comes to software engineering that leave AI quite far from being able to replace anyone competent at engineering yet:
I think most of these are fixable in the long term, but not if we aren't honest about them existing.
I use AI daily, and it boosts my productivity, just like LSPs do, like linters do, like choosing a good programming language or framework does. It didn't magically solve my problems (the way it's being sold); it's a tool I use to be more productive.
Right now I find that I land somewhere in:
I live happily somewhere between steps 2 and 3, occasionally dabbling in full hand-offs when my brain is tired.
Otherwise I find that I am generally much more precise, fast, and correct in reaching a good solution than the AI is. Who would have thought that 15+ years of engineering experience still pays off 🤷‍♂️