Vibe Coding vs Vibe Analysis, and the Illusion of Understanding
The term AI is used to describe simulated human intelligence built on machine learning and deep learning methods. In the end, does it make us more intelligent, or worse off?
Adapt or… Don’t - The evolution of AI
The release of ChatGPT by OpenAI in 2022 brought the capabilities of large language models into the spotlight. Now OpenAI, Anthropic, Meta, Google, and xAI are all pushing a massive scale-up of infrastructure to support development. Variations of AI tools are now ubiquitous and thoroughly part of our culture. If you use AI, you’ve probably found yourself in a conversation trying to pin down the exact benefit, or doom, AI will bring us. The fall of 2025 was marked by forecasts of an AI bubble, which so far hasn’t popped. AI is certainly here to stay, but how will it be embraced, and by whom?
In academics, AI has started an evolution in how basic research is done. Some investigators fully endorse its use, while others can’t be bothered to take the time to learn a new tool. Overall, it appears that the true early adopters are undergraduate students, who are significantly outpacing researchers in AI usage. And for good reason: it is the greatest essay-cheating tool ever invented!
But really, the explosion of AI tools targeted at students is amazing. I’m happy to have experienced both teaching and being a student before and after AI’s introduction. The major change has been in accessibility. In a recent interview, Nobel Laureate Geoffrey Hinton explained that the theory and mathematics necessary for machine learning were already available by the 1980s; we simply lacked the computational power to make them a reality. Hinton’s 1986 Nature paper1 is credited with reviving the field and inspiring both deep learning and modern computer vision. The real advances, though, came with the major increases in computing power of the 2010s, which allowed for the explosion of AI as we know it now.
Side note: Dartmouth is now recognized as the birthplace of AI, having hosted an AI-focused summer workshop in 1956. This is where diverse research on computer simulation of intelligence coalesced into the single field of AI.2
Functional AI had long been hypothesized and predicted, but it took nearly 60 years to become what it is today. Here the system of science has worked: generations have built upon previous understanding with new tools to create better solutions. What this means is that ChatGPT is, at its core, nothing new. We’ve long had the math to make it real, and finally the infrastructure can power it. The downside is that the rush to embrace AI en masse has led to poor science and marketing hype. Slapping the “AI Inside” sticker on your product has a certain feeling reminiscent of the old “Intel Inside” sticker.
What does your AI startup do?
Now every new company must incorporate AI in some way to stay relevant. Many recent pitches I’ve seen include some statement of “we’re using AI models for xyz purpose”. This leads me to the two classes of AI business that are likely to exist: vibe coding and vibe analysis.
First is the vibe coding company, which uses AI tools to enhance production. Just like a student writing an essay, things get done faster with a relative expert (the AI) guiding the way. I don’t really have an issue with this because it is unavoidable; efficiency improvements that get more work done will always be adopted. In the near term, though, it’s not great for the job prospects of skilled and educated workers who thought they were insulated from technological redundancy.
This does make me wonder what will happen when product developers can clock out at 10am after sending out their AI prompts for the day. We must be close to seeing “Prompt Engineer” job postings.
Second, we have the AI science implementation company. This is where the core product itself is “AI” doing ~things~, but the company can’t tell you what that really means. In this usage of AI, the data are sent off to a server where they’re processed and returned with some added valuable insight. Is it useful? Yes, but it’s not transparent. This is the dangerous version of AI startup development, one with the potential to erode scientific rigor. Forget vibe coding; this is pure vibe analysis. When a human is taken out of the loop, the drive to have a positive impact on another human is gone. Who cares how efficient you can make the cancer diagnosis billing cycle if there’s not a real person considering what it means to be diagnosed with cancer in the first place?
As a scientist, I find a black-box analysis system unacceptable due to its susceptibility to bias, data artifacts, confounding variables, and noisy data. This is the area where real researchers work to improve and explain how AI models function. But these drawbacks are hidden from the everyday user who wakes up and decides to build an AI-powered health data app for the first time. I have no doubt these tools will enable many more developers and creators by giving them opportunities that did not previously exist. But we will all have to improve our ability to understand the reality of what we are looking at in an AI world. Anyone can recognize sloppy AI content, but will you know good AI content when you see it?